66% of Australian Decision-Makers Feel Pressured to Approve AI Despite Security and Compliance Risks. Here Is What That Means for Your Business.
TrendAI research released today found two-thirds of Australian business decision-makers have felt pressured to approve AI initiatives with known security or compliance risks. 44% lack confidence in their understanding of legal and governance frameworks. With the December 2026 Privacy Act deadline eight months away, the governance gap is now a board-level liability.

AI PM at SOLIDWORKS. Founder, Akira Data.
*Published 26 March 2026.*
TrendAI released research today that cuts to the centre of what is actually happening in Australian boardrooms right now. The study — which surveyed 3,700 business and IT decision-makers across 23 countries, including Australia — found that 66% of Australian business decision-makers said they had felt pressure to approve AI initiatives that posed potential security or compliance risks.
One in five described those concerns as extreme, and said the concerns were overridden anyway to keep up with competitors and internal demand.
The same study found that 80% of Australian business decision-makers feel prepared for AI adoption. Yet 44% lack confidence in their own understanding of legal and governance frameworks. Only 26% have completed formal, mandatory AI training. And 68% say AI is advancing more quickly in their organisation than they can secure it.
These numbers describe a specific and dangerous pattern: organisations where confidence is high, actual governance capability is low, and the person responsible for risk is being overruled before they can close the gap.
With the Australian Privacy Act's automated decision-making transparency obligations taking effect 10 December 2026 — eight months away — this governance gap is no longer just an operational risk. It is a board-level legal liability.
The Governance Gap Is Not a Knowledge Problem
The most important finding in the TrendAI research is not that Australian businesses lack awareness of AI risk — 64% of Australian organisations reported having comprehensive AI policies in place. The problem is that having a policy and being able to execute on it are different things.
A comprehensive AI policy in a PDF document does not create audit trails. It does not build explainability infrastructure. It does not ensure that the third-party AI tools your teams are using have been assessed for Privacy Act compliance. It does not train your privacy team to handle automated decision-making explanation requests.
The gap the TrendAI data reveals is an execution gap: organisations know they should have AI governance, and many have written it down, but more than 40% still cite unclear regulation or compliance standards — alongside weak internal policy and governance — as barriers to safe adoption. The AI is in production. The governance is in draft.
For APRA-regulated entities, this creates a specific CPS 230 exposure: AI systems embedded in critical operations that were not in the original critical operations register. For all APP entities, it creates a December 2026 Privacy Act exposure: automated decision-making systems operating without the transparency infrastructure the law requires.
What "Pressured to Approve" Actually Means
The 66% figure — two-thirds of Australian business decision-makers pressured to approve AI with known risks — deserves unpacking.
The pressure is rarely explicit. It does not arrive as a directive from the CEO saying "approve this AI project despite the compliance risk." It arrives as:
- A competitor announcement that your board wants to respond to
- A vendor demo that generates board enthusiasm before a proper assessment is complete
- A departmental initiative that has already been half-built before IT and compliance are involved
- A productivity target that assumes AI adoption is happening, regardless of whether the governance is ready
- A budget cycle where AI is the narrative that gets funding approved
The TrendAI research found that 68% of respondents said AI is advancing faster than they can secure it. This is not a statement about technical complexity — it is a statement about the sequencing. The AI is deployed. Then compliance tries to catch up. For organisations deploying AI before their governance frameworks are ready, the December 2026 Privacy Act deadline is not a future risk. It is a current state of non-compliance.
The Four Governance Gaps Australian Businesses Need to Close Before December
Based on the TrendAI findings and the Privacy Act obligations taking effect in December, the four gaps most commonly driving exposure are:
Gap 1: No AI Agent Register
Most Australian organisations do not have a comprehensive inventory of the AI systems — including third-party tools — that are processing personal information and making or substantially assisting in decisions affecting individuals. Marketing automation, HR screening tools, customer service chatbots, credit decisioning models, document processing agents — these are often owned by different departments, procured independently, and not tracked centrally.
The Privacy Act's automated decision-making obligations apply to each of these systems individually. You cannot build the transparency infrastructure you do not know you need.
The fix: An AI agent register — a structured inventory of every AI system in production or build, the decisions it makes, the personal data it processes, the business owner, and the current audit trail status. This is the foundation. Everything else builds on it.
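What that register looks like in practice will vary, and it does not need to be complicated. Below is a minimal sketch in Python, assuming one structured record per system; every field name, enum value, and example entry is illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class AuditTrailStatus(Enum):
    NONE = "none"            # no decision logging in place
    PARTIAL = "partial"      # some logging, not queryable per individual
    COMPLIANT = "compliant"  # structured, queryable decision log

@dataclass
class AIAgentRecord:
    """One row in the AI agent register. Field names are illustrative."""
    system_name: str                  # e.g. "HR screening tool"
    business_owner: str               # accountable person or team
    vendor: str | None                # third-party supplier, if any
    decisions_made: list[str]         # decisions it makes or assists with
    personal_data_processed: list[str]
    significantly_affects_individuals: bool  # drives Tier 1 classification
    audit_trail_status: AuditTrailStatus
    privacy_assessment_done: bool = False

# The register is just the collection of these records, reviewed regularly.
register: list[AIAgentRecord] = [
    AIAgentRecord(
        system_name="Customer service chatbot",
        business_owner="Head of Customer Operations",
        vendor="ExampleVendor",  # hypothetical vendor name
        decisions_made=["refund eligibility triage"],
        personal_data_processed=["name", "order history"],
        significantly_affects_individuals=True,
        audit_trail_status=AuditTrailStatus.PARTIAL,
    ),
]

# Gap query: Tier 1 systems that cannot yet produce an audit trail.
gaps = [r for r in register
        if r.significantly_affects_individuals
        and r.audit_trail_status is not AuditTrailStatus.COMPLIANT]
```

The last lines show why structure matters: once the register is data rather than a document, "which Tier 1 systems cannot yet produce an audit trail" becomes a one-line query.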
Gap 2: Third-Party AI Tools Without Privacy Assessment
The TrendAI study found that shadow AI — employees using AI tools without formal IT or security approval — is endemic. But even formally approved tools are frequently deployed without a formal Privacy Act assessment. The question — does this third-party AI tool create Privacy Act obligations, and if so, are we meeting them? — is not being asked systematically.
The Clearview AI ruling, published by the OAIC two days ago, makes the point explicitly: Australian businesses that engage offshore AI vendors to process Australian personal data are responsible for the Privacy Act compliance of that processing. The vendor's terms of service are not a compliance framework.
The fix: A third-party AI vendor review process — a lightweight assessment checklist applied to every AI tool that touches personal information. What personal data does it process? For what purpose? Where is the data stored and processed? Is it covered by a privacy policy disclosure? Can you produce an explanation if requested?
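One lightweight way to keep that checklist honest is to hold it as structured data, so an unanswered question blocks approval rather than disappearing into email. The sketch below mirrors the questions above; the function and variable names are hypothetical.

```python
# Checklist questions mirror the vendor review process described above.
VENDOR_CHECKLIST = [
    "What personal data does the tool process?",
    "For what purpose is that data processed?",
    "Where is the data stored and processed (jurisdiction)?",
    "Is the tool covered by our privacy policy disclosure?",
    "Can the vendor produce a decision explanation on request?",
]

def assess_vendor(answers: dict[str, str | None]) -> list[str]:
    """Return the unanswered checklist questions for a given AI tool."""
    return [q for q in VENDOR_CHECKLIST if not answers.get(q)]

# Usage: any open item blocks approval until it is resolved.
open_items = assess_vendor({VENDOR_CHECKLIST[0]: "name, email, usage logs"})
```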
Gap 3: Audit Trail Infrastructure Not Built
64% of Australian organisations report having comprehensive AI policies. But a policy is not an audit trail. From 10 December 2026, if your AI system makes a decision significantly affecting an individual and they request an explanation, you need to retrieve: the decision timestamp, the individual's identity, the input data the system processed, the reasoning steps, and the decision taken. If that information is not logged in a structured, queryable format, your policy is not your compliance.
Most AI systems deployed without compliance-by-design do not generate this audit log. The decision happens. A result is returned. There is no record of the reasoning, and often no reliable record of the input state.
The fix: Structured decision logging built into every Tier 1 AI system — those making decisions significantly affecting individuals. This is an architecture decision that is easy to build in at the start and expensive to retrofit. For systems already in production, the assessment starts with: can this system currently produce an explanation if the OAIC asks for one?
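As a rough sketch of what "structured decision logging" means in practice, the fragment below appends one queryable record per decision to an append-only JSON-lines file. The fields mirror the retrieval requirements listed above; the storage backend, function names, and schema are assumptions for illustration, not a prescribed design.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(log_path: str, individual_id: str, inputs: dict,
                 reasoning_steps: list[str], decision: str) -> str:
    """Append one queryable decision record and return its ID."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "individual_id": individual_id,      # whom the decision affects
        "inputs": inputs,                    # input data the system processed
        "reasoning_steps": reasoning_steps,  # how the decision was reached
        "decision": decision,                # the outcome returned
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

def explain(log_path: str, individual_id: str) -> list[dict]:
    """Retrieve every logged decision affecting one individual."""
    with open(log_path, encoding="utf-8") as f:
        return [r for r in (json.loads(line) for line in f)
                if r["individual_id"] == individual_id]
```

The `explain` helper is the test that matters: given an individual's identifier, can the system return every decision that affected them, with inputs and reasoning attached?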
Gap 4: Privacy Policy Not Updated to Reflect AI Use
The Privacy Act requires disclosure in your privacy policy of any automated decision-making involving personal information. The OAIC's January 2026 proactive compliance sweep — targeting 60 organisations across six sectors — is actively checking this. Most Australian business privacy policies were written before the current generation of AI tools was deployed. They describe data collection and storage. They do not describe AI decision-making.
The fix: A privacy policy update that accurately describes: (1) which decisions involve automated processing, (2) what personal data is used, (3) whether decisions are made solely by automated means or with human review, and (4) how individuals can request explanations. This is a legal drafting task, not a technical one — but it depends on having completed the AI agent register so you know what to disclose.
The Board Conversation to Have Now
The TrendAI research describes a situation where business decision-makers are approving AI faster than governance can keep up. The correction is not slowing AI adoption; it is ensuring that governance infrastructure is scoped and resourced as part of every AI project, not as a follow-on task.
For boards and executive teams, the question to ask at the next AI proposal review is: Does this deployment include the audit trail and explainability infrastructure required for December 2026 Privacy Act compliance? If the answer is "we will address that separately," the governance gap is being created in real time.
The businesses that are managing AI adoption well in 2026 are not the ones that have slowed down — they are the ones where governance is built into the project scope rather than bolted on afterward. The AI Readiness Sprint (AUD $7,500, 2 weeks) is designed to give boards and executive teams the inventory, gap assessment, and prioritised remediation plan they need to close these gaps before the December deadline.
*Akira Data helps Australian mid-market businesses close the AI governance gap — agent registers, audit trail infrastructure, Privacy Act compliance builds, and board-level AI governance frameworks. The AI Readiness Sprint (AUD $7,500) is the right starting point. The Privacy-Safe AI Implementation (from AUD $20,000) delivers the full compliance build.*
*This article references TrendAI AI Adoption and Governance Research 2026 (released 26 March 2026), the Privacy and Other Legislation Amendment Act 2024, and the OAIC's January 2026 proactive compliance sweep findings. It is general information and does not constitute legal advice.*