Law Firm Releases AI Procurement Guide
Australian law firm MinterEllison has released comprehensive guidance for organisations procuring artificial intelligence systems, emphasising risk-based classification and robust contractual protections as businesses increasingly integrate AI into operations.
The guide addresses critical procurement considerations including data ownership, liability management, and compliance with Australia's strengthened privacy laws, which now require organisations to disclose automated decision-making processes in privacy policies.
"AI procurement must be risk-classified to determine due diligence, governance, and contract protections, especially for high-risk use cases like legal or personal data processing," the guidance states.
The framework categorises AI systems by organisational risk level (high, low, or exempt) based on intended use, data sensitivity, and potential impact on individuals or compliance obligations. High-risk systems require independent audits, clear training data documentation, and ongoing monitoring mechanisms.
The guidance outlines essential contractual clauses organisations should negotiate, including:
- Clear data ownership definitions preventing AI systems from training on organisational information
- Indemnities for legal breaches, intellectual property infringement, and AI-generated harm
- Transparency requirements including model cards and decision logic summaries
- Human oversight provisions allowing review and override of automated decisions
- Security standards and incident response protocols
Australian Privacy Law Implications
The guide highlights increased risks under Australia's recent privacy law reforms. These changes require organisations to update privacy policies when using automated decision-making processes, reflecting growing regulatory concern over AI and personal data use.
Organisations face potential fines or litigation if privacy laws are violated, particularly when AI vendors seek to use client data for system improvements that could expose information elsewhere.
The guidance warns that AI's use of large datasets has significantly increased intellectual property infringement risks. This occurs when copyrighted data enters AI systems or when outputs reproduce copyrighted training material.
While organisations can typically shift infringement risk to vendors through indemnities, where vendors resist such clauses, organisations may need to conduct more extensive due diligence into the vendor's training datasets.
The procurement framework emphasises thorough vendor assessment, including verification of internal governance frameworks, audit trails, and accountability mechanisms. Organisations should seek independent certifications such as ISO/IEC 42001, signalling commitment to responsible AI development.
The guide recommends confirming vendors can provide clear documentation about AI system functionality, limitations, training frequency, and data sources, whether live internet data or contained organisational datasets.
Implementation Checklist
MinterEllison also provides a practical checklist covering eight key contractual areas: data use and ownership, liability and indemnity, performance standards, transparency requirements, regulatory compliance, human oversight, security protocols, and termination rights.
The guidance acknowledges organisations' negotiating position with AI vendors will vary but emphasises incorporating these considerations early in procurement processes to safeguard data and ensure compliance.
This guidance addresses growing demand for structured AI procurement approaches as Australian organisations balance digital transformation opportunities with complex legal and ethical obligations.