AI turns the world of competition law on its head

Bas Braeken & Lara Elzas
16 Dec 2024

Introduction

Artificial Intelligence (“AI”) is booming. In San Francisco, robot taxis from Waymo, a subsidiary of Google’s parent company Alphabet, are already driving around with great success. Siri and Alexa have become part and parcel of our daily lives. The vast majority of companies have equipped their products with AI to create smart technologies and services, such as robots assembling packages for Zalando, OpenAI’s ChatGPT, customer-service chatbots and AI in cancer detection. Companies are investing heavily in AI: according to the Financial Times, the biggest tech companies alone (Microsoft, Alphabet, Amazon and Meta) invested over USD 100 billion in AI in the first half of 2024.

AI is turning the world of competition on its head. With the rise of AI, traditional companies are suddenly being rivalled by big tech companies. Waymo and Uber, for example, have announced they are teaming up. With this, Waymo’s parent company Alphabet is suddenly entering the taxi market, competing with local taxi companies.

Competition authorities have been closely monitoring the rise of AI for some time. The German Bundeskartellamt, the French Autorité de la concurrence and the Dutch Authority for Consumers and Markets (“ACM”), among others, have already published several position papers on monitoring the use of algorithms (see here, here and here). On 30 October 2023, the US government issued an executive order on the safe, secure and trustworthy development and use of artificial intelligence, stressing the importance of healthy competition in the AI market. (Former) European Commissioner Margrethe Vestager also warns about the competition risks of AI, stressing that swift and strong enforcement is needed to prevent monopolisation of AI by the big tech companies. The European Commission, the UK Competition and Markets Authority (“CMA”) and the US Federal Trade Commission (“FTC”) and Department of Justice (“DoJ”) likewise recognise that competition problems in AI do not stop at national borders. In their Joint Statement on Competition in Generative AI Foundation Models and AI Products, they discuss the competition risks of AI and indicate that they will cooperate and share knowledge. The competition law aspects of the AI market are thus receiving broad attention in the world of competition.

In this blog, we discuss four relevant competition law aspects of AI: the market power of tech companies, the cartel prohibition, merger control, and the Digital Markets Act.

 

AI and market power of tech companies

Big tech companies like Apple, Microsoft and Google have the resources to develop and implement AI technologies faster. This can lead to a situation where these companies dominate the market. This poses competition risks.

The first is the fear among several competition authorities that dominant companies may abuse their market power by tuning algorithmic functions so that their own products and services receive preferential treatment. These fears must be seen in light of the Google Shopping case, in which Google was fined EUR 2.42 billion for giving preferential treatment to its own comparison shopping service, Google Shopping, in Google’s search results.

In addition, several competition authorities fear that large tech companies are using their existing market power in adjacent markets to keep new entrants out of the AI market. AI systems often need access to large data sets to function effectively. Companies with access to large amounts of data can therefore gain an unfair competitive advantage. This creates high barriers to entry for startups. They are often forced to partner with one of the big tech companies to gain access to that data. This collaboration can lead to a series of anti-competitive practices, including tying, where the sale of one product is made conditional on the purchase of another. This dynamic allows the AI market to consolidate rapidly, as each of these companies incorporates an AI model and bets on its success. As a result, the barriers to entry in the AI market may become higher and higher, reducing competition.

As part of this concern, the Commission is investigating, for example, the cooperation agreement between Microsoft and OpenAI (the developer of ChatGPT). The Commission has requested additional information on the exclusive cloud agreement that forms part of the cooperation: Microsoft Azure, Microsoft’s cloud computing service, is OpenAI’s exclusive cloud provider. The CMA, DoJ and FTC are also investigating the partnership between Microsoft and OpenAI. In the US, the partnership is also being challenged in civil proceedings: on 29 November 2024, Tesla boss Elon Musk filed an action alleging that OpenAI engaged in anti-competitive conduct in violation of US antitrust laws.

A deal between Google and Samsung has also led to further investigation by the Commission. Samsung has agreed to build Google’s Gemini Nano AI model into the Samsung Galaxy S24. The Commission is investigating, among other things, whether this deal means that no other AI systems can be installed on the Samsung device, whether the collaboration limits interoperability between other chatbots and apps on the device, and how the collaboration came about.

At the same time, AI also offers great advantages for the investigative work of competition authorities. AI tools can, for instance, quickly and effectively analyse large data sets in an investigation into possible abuse of a dominant position. As competition authorities gain access to more data, AI allows them to detect, at an early stage, market developments that indicate reduced competition. This allows them to deploy their investigative capacity efficiently and to anticipate rapidly changing (digital) markets more smoothly.

 

AI and the cartel ban

AI can also lead to cartel violations. For example, AI can be used to facilitate collusion. Collusion, in short, is the explicit or tacit coordination of competition-relevant behaviour between market participants.

The use of AI can facilitate existing forms of collusion. For instance, algorithmic functions can be used to better monitor an existing price cartel and more easily sanction deviant behaviour. With the increasing availability of large amounts of data on specific markets, these markets are also becoming more transparent. This may result in companies behaving less independently. The more transparent a market is, the less uncertain companies are about competitors’ market behaviour.

The use of AI may also foster new forms of collusion. In particular, regulators point to the use of algorithmic functions to automate competitively sensitive aspects of business operations, such as price, output and production. Algorithms, for example, allow constant monitoring of prices and quick reaction to price changes. For instance, in its investigation into the use of algorithmic trading in the energy market, the ACM points to the possibility that price algorithms can arrive at a higher average price level than if these algorithms were not used. In that case, there is tacit price alignment through the use of algorithms.

Currently, many companies still use so-called rule-based algorithms: simple algorithms whose variables can easily be set and adjusted in advance. However, learning-based algorithms are increasingly used, which can create the curious situation of algorithms themselves tuning prices towards an equilibrium, possibly without the knowledge or intent of the companies involved. AI systems can, for example, use other companies’ pricing as input to set their own prices. If several companies in the same market use a particular AI system, the fear is that prices will at some point settle at an equilibrium at which profitability is optimised. Firms in such a situation no longer have an incentive to undercut: a price cut would be matched immediately, sacrificing margin without winning additional sales, as long as the other firms do not do the same. How this kind of behaviour should be qualified under competition law is unclear, but what is clear is that in such a situation firms no longer compete on price, ultimately to the detriment of consumer welfare.
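The dynamic described above can be illustrated with a deliberately simplified, hypothetical rule-based repricing rule (not taken from any real pricing tool): each firm immediately matches any rival price cut and otherwise drifts back towards a high target price. All names and parameters below are illustrative assumptions.

```python
# Hypothetical illustration: two firms each run the same simple rule-based
# repricing rule. A firm matches any rival price cut immediately (never
# pricing below its cost floor) and otherwise creeps back up to a high target.

def reprice(my_price: float, rival_price: float,
            floor: float = 6.0, target: float = 10.0, step: float = 0.5) -> float:
    if rival_price < my_price:
        return max(rival_price, floor)   # retaliate: match the cut at once
    return min(my_price + step, target)  # otherwise drift back to the target

# Firm 1 has undercut to 8.0; the firms then take turns repricing.
prices = [10.0, 8.0]
for t in range(12):
    i = t % 2
    prices[i] = reprice(prices[i], prices[1 - i])

print(prices)  # → [10.0, 10.0]: prices settle back at the high target
```

Because every cut is matched before it can win any sales, neither firm gains from undercutting, and prices gravitate to the supra-competitive target without any explicit agreement between the firms.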

Detecting this kind of collusion is challenging: given the large amounts of data that algorithms need to function adequately, collusion can be difficult to trace. The recently adopted AI Regulation may facilitate the supervisory function of authorities through the obligation in Article 53(1) for providers of certain AI models to prepare and disclose sufficiently detailed summaries of the content used to train the model. Moreover, under Article 74(2) of the AI Regulation, market surveillance authorities must report to national competition authorities and the Commission any information obtained in the course of their surveillance activities that may be relevant to the application of competition rules. AI developers thus have a far-reaching transparency obligation, and national and European authorities cooperate intensively to prevent and detect anti-competitive behaviour.

On the other hand, competition authorities are increasingly using AI to detect cartels. The CMA, for example, introduced a screening tool that allowed it to detect cartels in tenders more easily. The algorithms in the tool flagged tenders in which bid-rigging agreements were more likely. The tool is no longer in use, but there is a good chance that other competition authorities are also developing and using such tools without making this public. One reason for secrecy is to avoid revealing to cartel participants the factors on which tenders are flagged, which would allow them to circumvent detection.
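The factors such tools actually use are, as noted, kept secret; published bid-rigging screens, however, often start from simple statistical markers, such as bids that cluster unusually tightly (a pattern associated with cover bidding). The sketch below is purely illustrative, with a hypothetical metric and threshold, and bears no relation to the CMA's actual tool.

```python
import statistics

def screen_tender(bids: list[float], cv_threshold: float = 0.05) -> bool:
    """Flag a tender whose bids cluster unusually tightly.

    The screen uses the coefficient of variation (standard deviation divided
    by the mean); a very low value can indicate cover bidding. Both the
    metric and the 0.05 threshold are illustrative assumptions.
    """
    cv = statistics.pstdev(bids) / statistics.mean(bids)
    return cv < cv_threshold

print(screen_tender([100.0, 101.0, 100.5]))  # → True  (suspiciously close bids)
print(screen_tender([100.0, 120.0, 90.0]))   # → False (normal dispersion)
```

A real screen would combine many such markers (bid rotation, incumbency patterns, bid-to-estimate ratios) and would flag tenders for human review rather than label them collusive outright.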

 

Merger control and AI

The dynamics described above between big powerful companies and AI developers also affect merger control. Due to the need for big data and pre-existing ecosystems, almost all serious AI developers enter into partnership agreements with large companies. The Commission sees the value of these partnerships, as they are essential for the development of AI models, but also warns of their potentially anti-competitive consequences. Collaborations can lead to entrenched market positions, for example through exclusivity rights: a tech company may stipulate that, in exchange for providing access to data and capital, the AI developer will use only the tech company’s services and align its AI model with them. This does not enhance competition in the AI market or in adjacent markets.

In this context, the cooperation between Microsoft and OpenAI described above was also assessed under the European merger control regime, in addition to the potential competition infringement. However, the Commission concluded that the cooperation did not qualify as a concentration, as it did not bring about a lasting change of control within the meaning of Article 3(1) of the EU Merger Regulation.

The CMA is actively investigating collaborations between tech companies and AI startups under the merger control regime; it has already examined Microsoft’s investment in Inflection AI and Amazon’s partnership with Anthropic. It also recently announced a formal investigation into the collaboration between Alphabet, Google’s parent company, and AI startup Anthropic, although that investigation was soon abandoned when it emerged that the revenue thresholds were not met.

 

AI and the DMA

Investigations into cartel or abuse-of-dominance infringements by large tech companies can take a very long time due to their complexity. The Google Shopping case mentioned above, for example, took more than 14 years. That is very long in a dynamic and fast-changing market, and it increases the risk of significant and irreparable competitive harm.

The Digital Markets Act (“DMA”) is intended to streamline market surveillance in the digital sector. Since 7 March 2024, all gatekeepers designated by the Commission must comply with the obligations of Articles 5, 6 and 7 of the DMA (see also our earlier blogs of 5 December 2023 on the content of the DMA and 7 March 2024 on gatekeepers’ compliance with these obligations). These rules cover, among other things, the collection, processing and combination of (personal) data, interoperability obligations and the prohibition of parity clauses. Because the DMA provides for so-called ex ante supervision, it should in principle prevent designated gatekeepers from engaging in anti-competitive behaviour for years before the Commission puts a stop to it. The DMA also contains far-reaching transparency obligations: designated gatekeepers must detail in compliance reports how they comply with all obligations. This provides the Commission with a wealth of information on the behaviour of these gatekeepers and the interoperability of their various services.

The DMA also plays a role in the regulation of AI. Admittedly, AI is not itself one of the core platform services covered by the DMA. Nevertheless, the European Parliament has called for certain AI models to be included in the DMA as a core platform service. In the meantime, AI is already partially regulated by the DMA. In a statement dated 22 May 2024, the DMA high-level group outlined how the DMA affects designated gatekeepers’ use of AI. Once an AI model is integrated into a core platform service, such as Google’s search engine, Apple’s operating system or Facebook’s social networking service, the DMA applies to the AI model to the extent it is used in the context of that core platform service. A gatekeeper’s compliance with its DMA obligations must therefore take into account how the AI models it uses form part of the core platform service in question.

Moreover, the DMA regulates whether and how gatekeepers may process personal and business data generated on the core platform service. This curbs the data dominance of large tech companies described earlier. For example, under the DMA, gatekeepers may not collect end users’ personal data from third parties without prior consent. In addition, gatekeepers may not use personal data derived from the core platform service in other services they offer. This limits the amount of data that gatekeepers can use to train their AI models.

Finally, the DMA contains a merger information requirement. Gatekeepers must inform the Commission of any proposed concentration in the digital sector, regardless of whether the proposed concentration must be notified to the Commission under the EU Merger Regulation or to a national competition authority. This information requirement was used by the Commission, among other things, to set up so-called Article 22 referrals. Through such a referral, the Commission could still, at the request of one or more member states, examine and possibly prohibit, or approve only subject to conditions, a non-notifiable concentration. This could prevent large powerful companies from acquiring a smaller, innovative AI start-up with the aim or effect of weakening innovation and/or eliminating potential competition (so-called killer acquisitions). In its Illumina/Grail ruling of 3 September 2024, however, the Court of Justice severely limited the scope of Article 22 referrals. Incidentally, in Germany and Austria it is already possible to assess killer acquisitions, because the value of the transaction is also taken into account there. In addition, national competition authorities in Denmark, Hungary, Ireland, Italy, Lithuania, Slovenia and Sweden have introduced call-in powers, allowing them to investigate mergers below the notification thresholds. These countries can also still refer transactions to the Commission under Article 22 of the EU Merger Regulation.

 

Conclusion

The effective enforcement of competition law in relation to AI faces significant challenges. The widespread use of AI can lead both to coordinated behaviour between firms and to abuse of market power by dominant firms. Communications from various competition authorities on AI and competition law show that these authorities have learnt from the emergence of digital markets at the beginning of this century, and are making efforts to avoid repeating, in their supervision of AI, the mistakes made in supervising Big Tech. Moreover, consideration is being given to sharpening competition tools to meet the new reality. The DMA plays an important role in this respect, but, as Vestager also noted in her speech of 28 June 2024, the basic principles of competition enforcement remain the same. Monopolies are monopolies and price fixing is price fixing, whether we are dealing with car manufacturing, cement production or machine learning.

 

Bas Braeken and Lara Elzas
