Across the world, companies are increasingly using AI to power their operations. The speed of progress is staggering, as we move from AI systems that talk (chatbots) to those that can take actions in the real world (agents). Agentic AI systems plan, reason and coordinate multi-step actions to achieve a goal, with limited human oversight. Crucially, these agents learn and adapt from experience — including on pricing — and are able to interact with one another, sometimes in unexpected ways.
This post explores the key antitrust risks posed by agentic AI and steps companies can take to ensure compliance.
The antitrust risks
Antitrust regulators are increasingly focused on getting ahead of the issues. The UK CMA recently published research on agentic AI, highlighting the close interplay between competition and consumer law in regulating these systems. The OECD also explored agentic AI as a potential antitrust concern in a recent report.
1. Agentic collusion
AI agents are goal-oriented. When told to optimise pricing or commercial strategies as their primary objective, AI systems may identify collusion as the best route to achieve their goal — even when explicitly instructed to abide by antitrust laws — treating the law as a side-constraint to maximising profit.
2. Price signalling
AI agents process vast amounts of data, monitor competing agents' behaviour, learn and react. They can therefore learn to intentionally signal and align conduct with competing agents, particularly in concentrated markets, synchronising on high prices without explicit communication. Research from 2025 using a repeated pricing game found that AI agents consistently and autonomously learned to charge higher prices and to punish rivals' price cuts before returning to elevated prices.
Purely parallel conduct (or tacit collusion) is not itself unlawful, but drawing the line between such conduct and illegal price signalling is a traditionally difficult enforcement issue.
3. Prompt injections
Hidden instructions embedded in content processed by an AI agent (e.g. white text on white background) can cause it to deviate from its intended behaviour. Examples include:
- A supplier inserting hidden text on its homepage, asking agents scraping it to favour its own products or ignore competing offers — potentially an abuse of dominance.
- Multiple participants synchronising the deployment of prompt injections — competitors using regularly published materials to influence each other's agents to increase prices on a set date. Where a third-party platform mediates the exchange, this could be a hub-and-spoke arrangement.
4. Lock-in
Where agentic AI operates within a closed ecosystem, agents could leverage network effects to favour their own offerings — e.g. systematically prioritising group company services. Regulators could require platform-operated agents to act neutrally, mirroring the European Commission's recent DMA measures requiring Google to open up Android to competing AI service providers.
"The algorithm did it" — questions of liability
Generally, companies cannot hide behind algorithms to avoid liability. The Commission's Horizontal Guidelines confirm that "if pricing practices are illegal when implemented offline, there is a high probability that they will also be illegal when implemented online... an algorithm remains under the firm's control, and therefore the firm is liable even if its actions were informed by algorithms."
The lack of direct human control in agentic AI shifts the focus to design choices, governance processes, and oversight mechanisms. Enforcers are expected to focus on whether a company could have reasonably predicted or prevented its algorithm's collusive or exclusionary behaviour. An effective compliance programme may help establish this — although it will not reduce subsequent fines under EU or UK practice. Breaches can lead to fines of up to 10% of global turnover.
What should businesses do?
- Train agentic systems to comply with antitrust requirements as a core objective (updated regularly to reflect regulatory developments) and to be transparent about limitations, incentives and affiliations.
- Monitor performance with compliance agents and human oversight, including errors, bias, complaints and unintended outcomes.
- Require human confirmation for high-risk actions.
- Be wary of input, training materials and prompts. AI-persistent memory features can lead to memory poisoning, which may permanently impact performance, security and compliance.
- Ensure clear processes to escalate and resolve issues and that staff are appropriately trained on antitrust risks.
- Keep audit logs and risk assessments, with swift action to address problems.
Authors (Linklaters):
Lodewick Prompers (Brussels), Jonny Ford (London/Dublin), Sebastian Plötz (Düsseldorf), Tara Rudra, Schweta Batohi.