Legal Experts Flag AI Blind Spots in FOI Bill

Australia's proposed Freedom of Information Amendment Bill 2025 fails to address how artificial intelligence should be used in making or processing government transparency requests, according to a new analysis by law firm King & Wood Mallesons that identifies critical gaps in the legislation.

The Bill, introduced into the House of Representatives last week, aims to "modernise the requirements for Freedom of Information requests" but remains silent on AI's role in FOI processes, write senior associate Kendall Mutton and partner Rebekha Pattison in their legal analysis.

"Curiously, the Bill is silent on the use of AI systems to make or process freedom of information requests," the authors note, despite AI already being implemented across multiple Australian Government agencies.

The lawyers identify three key areas where the legislation creates uncertainty: whether AI can autonomously make FOI requests, how requests involving AI-generated government decisions should be processed, and whether AI can be used to handle FOI applications.

"It is conceivable that an AI system could be trained to itself make an FOI request," Mutton and Pattison write, but conclude that "the better view for the current compilation of the FOI Act is that AI cannot itself make a valid FOI request as AI is not a 'person'."

A significant compliance question emerges around whether AI-generated government information constitutes a "document" subject to FOI disclosure. Under the FOI Act, documents are broadly defined to include "any article on which information has been stored or recorded, either mechanically or electronically, as well as any other record of information."

Crucially, the lawyers note there is "no requirement in the FOI Act for a document to have been created by a 'person'" and this gap isn't addressed in the Bill. They conclude that "AI-generated information seems likely to be considered an article on which information has been stored or recorded, either mechanically or electronically, for the purpose of the FOI Act."

This interpretation gains support from case law in other jurisdictions. In DPP v Khan, the Supreme Court of the Australian Capital Territory referred to AI-generated character references as "documents," though "there was no detailed commentary on whether and why AI-generated material constitutes a document."

However, "whether the underlying algorithm used to generate the information will be similarly caught is a trickier question," the analysis states. This depends on the content of the algorithm and the scope of the request – for example, if an FOI request seeks documents containing specific terms, it "could conceivably include an AI algorithm which uses that term, if the algorithm is considered to be a document."

The analysis highlights practical enforcement challenges, noting: "As a practical matter, unless the FOI Act is amended or an agency requests that the use of AI is disclosed when making a request, it may be difficult to identify when a request has been generated using AI."

For government agencies implementing AI systems, the legal experts identify "one exciting opportunity" in using AI to generate "significant efficiencies" in processing FOI requests. United States government agencies have been testing AI since mid-2023 to perform keyword searches, summarise document characteristics, and identify potential exemptions.

However, the Bill doesn't clarify whether automated systems can make FOI decisions. "It is not clear whether this was a deliberate decision to keep this type of administrative action in the hands of human decision makers given the levels of judgment that are often required, or whether this was a missed opportunity to open the door for more efficient processing," the authors observe.