AI Agents Could Soon Be Making Deals With Each Other – But Are We Ready?

The future of artificial intelligence may involve your AI assistant booking lunch with someone else's AI assistant, but new research suggests the complex infrastructure needed to make this work safely and effectively does not yet exist.

A recent paper by researchers from the US, UK, and Australia has outlined the significant technical and legal challenges that must be addressed before AI agents can interact autonomously in the real world, according to a new analysis by Gilbert & Tobin Lawyers.

Unlike current AI tools such as ChatGPT and Claude, which respond to human prompts, AI agents are designed to act independently – planning, deciding, and executing tasks without constant human oversight. But this autonomy creates unprecedented challenges for accountability, security, and social interaction.

"As an AI can act autonomously, it is necessary to be able to link the AI agent to a legal entity, a person or a company," the legal analysis notes, highlighting concerns about accountability when AI agents make mistakes or cause harm.

The researchers identify three critical infrastructure needs: attribution (knowing who's responsible for an AI agent's actions), interaction (how agents communicate securely), and response (what happens when things go wrong).

One of the most pressing concerns is identity verification. The analysis warns that "there could be a whole new world of identity theft as scammers jailbreak your AI agent and use it to deal with third parties under your authorisation."

The proposed solution involves trusted intermediaries that would certify AI agents are linked to humans while protecting privacy – similar to how mobile phone SIM cards work but on a global scale.
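To make the idea concrete, here is a minimal sketch of how such an intermediary might work, assuming a public-key signature scheme (here via the Python `cryptography` library). The names `TrustAnchor`, `issue_credential` and `verify_credential`, and the credential fields, are illustrative assumptions, not part of any proposed standard.

```python
# Minimal sketch: a hypothetical trusted intermediary ("trust anchor") signs a
# credential binding an AI agent's identifier to a verified legal entity, which
# counterparties can check without learning anything else about the principal.
# Names and fields are illustrative assumptions, not a real protocol.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


class TrustAnchor:
    """Certifies that an agent ID is linked to a verified human or company."""

    def __init__(self):
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()   # published for verifiers

    def issue_credential(self, agent_id: str, principal_id: str) -> dict:
        # In practice the anchor would verify the principal's identity first,
        # much as a carrier does before issuing a SIM card.
        claim = {"agent_id": agent_id, "principal_id": principal_id}
        payload = json.dumps(claim, sort_keys=True).encode()
        return {"claim": claim, "signature": self._key.sign(payload).hex()}


def verify_credential(credential: dict, anchor_public_key) -> bool:
    """A counterparty's agent checks the credential before dealing."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    try:
        anchor_public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False


anchor = TrustAnchor()
cred = anchor.issue_credential("agent-7f3a", "acme-pty-ltd")
print(verify_credential(cred, anchor.public_key))  # True: the agent is attributable
```

The point of the design is that the counterparty only needs the intermediary's public key to confirm an agent is attributable to someone, without the two principals exchanging personal details directly.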

Communication between AI agents presents another major hurdle. The researchers argue that AI agent traffic should travel through separate channels from regular internet traffic to prevent the spread of malicious code that could "both trick LLMs into generating their own malicious prompts and extract sensitive personal data."
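The paper does not specify how that separation would be built, but one rough sketch is to restrict agent-to-agent traffic to a narrow, structured message format that is validated before any content reaches an LLM, rather than arriving as free-form text over ordinary web channels. The schema, action names and limits below are assumptions for illustration only.

```python
# Rough sketch of a dedicated agent-to-agent channel: only messages matching a
# strict, structured schema are accepted, so free-form text never flows straight
# into the model's prompt. Schema, action names and limits are illustrative.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"request_quote", "confirm_booking", "cancel_booking"}
MAX_FIELD_LENGTH = 256


@dataclass
class AgentMessage:
    sender_agent_id: str
    action: str          # drawn from a closed vocabulary, not arbitrary prose
    parameters: dict     # short scalar key/value pairs only


def accept_message(raw: dict) -> AgentMessage:
    """Gatekeeper for the agent channel: reject anything outside the schema."""
    if raw.get("action") not in ALLOWED_ACTIONS:
        raise ValueError("unknown action: refused")
    params = raw.get("parameters", {})
    for key, value in params.items():
        if not isinstance(value, (str, int, float)) or len(str(value)) > MAX_FIELD_LENGTH:
            raise ValueError(f"parameter {key!r} rejected: not a short scalar value")
    return AgentMessage(raw["sender_agent_id"], raw["action"], params)


# A structured booking request passes; a prompt-injection attempt does not.
ok = accept_message({"sender_agent_id": "agent-7f3a", "action": "request_quote",
                     "parameters": {"date": "2025-07-01", "party_size": 2}})
try:
    accept_message({"sender_agent_id": "agent-x",
                    "action": "ignore previous instructions",
                    "parameters": {}})
except ValueError as err:
    print(err)
```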

Perhaps most intriguingly, the research suggests AI agents could eventually make collective decisions – potentially for good, such as warning about computer viruses, or for ill, such as engaging in price-fixing that violates competition laws.

"If AI agents have a degree of individual autonomy, it seems inevitable that a group of AI agents could decide to act collectively," the analysis observes.

The social implications are equally complex. When AI agents negotiate on behalf of humans, they may lack the subtle social cues that govern human interaction.

"If AI agents do not reflect some of these 'softer' rules, a growing reliance on AI agent intermediaries between humans could produce a more uniform, sharper-edged social culture," the researchers warn.

The analysis suggests that without proper infrastructure, AI agent deployment will initially be limited to closed enterprise networks, customer service functions, and tasks outsourced over the internet.

The legal precedent is already emerging. In 2024, Air Canada unsuccessfully tried to distance itself from incorrect information provided by its chatbot, claiming the bot was "responsible for its own actions."

As AI agents become more capable, the researchers emphasize that "mitigating risks will come down to ensuring agents are appropriately permissioned and configured."
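Neither the paper nor the analysis spells out what "appropriately permissioned and configured" looks like in practice, but a minimal sketch, with an assumed policy format, is an explicit allow-list of actions and a spending cap checked before the agent is permitted to act.

```python
# Minimal sketch of per-agent permissioning: an explicit allow-list of actions
# and spending thresholds are checked before the agent may proceed. The policy
# fields and action names here are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    allowed_actions: set = field(default_factory=lambda: {"book_table", "request_quote"})
    spend_limit_per_action: float = 100.0    # e.g. AUD
    require_human_approval_over: float = 50.0


def authorise(policy: AgentPolicy, action: str, cost: float) -> str:
    """Decide whether the agent may proceed, must escalate, or is blocked."""
    if action not in policy.allowed_actions:
        return "blocked: action not permitted for this agent"
    if cost > policy.spend_limit_per_action:
        return "blocked: exceeds spend limit"
    if cost > policy.require_human_approval_over:
        return "escalate: ask the human principal first"
    return "allowed"


policy = AgentPolicy()
print(authorise(policy, "book_table", 40.0))      # allowed
print(authorise(policy, "book_table", 80.0))      # escalate: ask the human principal first
print(authorise(policy, "transfer_funds", 10.0))  # blocked: action not permitted
```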

The full analysis is available at: https://www.gtlaw.com.au/insights/how-will-your-ai-agent-talk-to-my-ai-agent