
Shadow AI in Australian Organisations: How to Find the AI Your Business Doesn't Know It's Running

The OAIC's December 2026 automated decision-making transparency obligations apply to every AI tool in your organisation — including the ones your IT team doesn't know about. The shadow AI footprint in Australian businesses is typically three to five times the size of the formal AI asset register. Here is how to find it, classify it, and close the compliance gap before December 10.

Kishore Reddy Pagidi

AI PM at SOLIDWORKS. Founder, Akira Data.

*Published 3 April 2026.*

In January 2026, the OAIC launched its first proactive compliance sweep targeting 60 Australian organisations across six sectors. The sweep is explicitly assessing automated decision-making disclosures, third-party AI tool processing, and privacy policy accuracy.

Here is the problem almost every organisation the OAIC reviews is about to discover: their AI inventory is incomplete. Not because they are hiding systems — because their employees have been quietly deploying AI tools for months without IT approval, and nobody in the organisation has a complete picture of what is running.

Shadow AI is the defining governance failure of 2026. This article is about how to find it, classify it, and close the compliance gap before December 10.

What Shadow AI Actually Looks Like

Shadow AI is not employees using ChatGPT in their spare time. It is AI embedded into the tools your business already uses — and it is employees deliberately adopting AI tools to do their jobs faster without waiting for IT procurement.

The most common shadow AI in Australian businesses right now:

Microsoft 365 Copilot — partially deployed. A CIO approved Copilot for one business unit. Other units saw the productivity lift and started using it through their individual Microsoft accounts. IT has licensing visibility on 40 seats. The actual use is 140 seats, processing customer data in email threads and documents that were never scoped for AI processing.

Embedded AI in SaaS tools. Salesforce Einstein. HubSpot AI. Zendesk AI. Xero AI features. These are enabled by default or activated by a product admin who did not consult IT or legal. The AI features are processing customer personal data — making inferences, scoring leads, drafting responses. None of it is in the AI agent register. None of it is disclosed in the privacy policy.

Department-level AI tool purchases. Finance bought an AI accounts payable tool. HR is trialling an AI screening tool for job applications. Marketing is using an AI personalisation platform. Each was approved at the department head level and expensed through operational budgets. None went through the IT procurement process. None was assessed for Privacy Act compliance.

Browser extensions and productivity AI. Grammarly, Notion AI, Otter.ai for meeting transcription, Fireflies.ai. These tools process whatever content employees feed them — customer communications, internal documents, recorded meetings with clients. Many of them store data on US infrastructure.

Individual AI API use. Developers using OpenAI or Anthropic API keys under their personal accounts to build workflow automation, scripts, or data processing tools. The API calls process production data. The keys are not managed. The data residency is the API provider's default (US East).

In every organisation that has done a thorough shadow AI inventory, the discovered count is three to five times the formal AI asset register. The compliance implication is direct: every one of these systems that touches personal data about individuals falls under the Privacy Act's December 2026 automated decision-making transparency requirements, whether IT knew about it or not.

Why Shadow AI Explodes in 2026

The conditions that create shadow AI are structural, not behavioural.

AI tools have become commodified. The subscription cost for an AI tool that previously would have required a capital budget and IT project is now AUD $20–$200 per month per user — within discretionary expense limits. The procurement gate that would have triggered an IT review does not exist at that price point.

The productivity premium is visible. An employee using an AI tool to complete a task in one hour that previously took four hours is not going to wait six months for IT procurement when they can expense it this afternoon. The incentive structure rewards shadow AI adoption.

AI is embedded in tools that are already approved. Every major SaaS vendor activated AI features in 2025 and 2026. The procurement approval for Salesforce was granted in 2019. The AI features launched last year. The approval process for the AI capabilities built into an already-approved tool is often never triggered.

The result: by April 2026, the average Australian mid-market organisation has AI touching personal data in more systems than it has formally reviewed. The OAIC's compliance sweep is designed to surface exactly this.

The December 10 Compliance Exposure

The Privacy and Other Legislation Amendment Act 2024 creates specific obligations from 10 December 2026 for any APP entity using AI to make or substantially assist in making decisions that could significantly affect individuals.

The obligations are:

  • Disclose in your privacy policy the types of automated decisions made, the personal information used, and how individuals can request explanations
  • Be able to notify individuals when significant decisions affecting them were made using automated means
  • Provide meaningful explanations of AI decisions on request
  • Maintain audit trails sufficient to retrieve the basis of each decision

The compliance failure mode for shadow AI is not wilful non-disclosure. It is structural impossibility: you cannot disclose automated decision-making processes you do not know about. You cannot produce audit trails for systems you have not instrumented. You cannot notify individuals about AI decisions made by tools that were never in your governance framework.

When the OAIC asks "describe your automated decision-making systems" — the question the compliance sweep is built around — the correct answer requires a complete inventory. The organisations that have done the shadow AI discovery work can answer. The organisations that have not will discover their exposure in the response process.

The Shadow AI Discovery Process

A thorough shadow AI inventory takes two to four weeks for a mid-market organisation. It has five components:

Component 1: IT Asset Register Gap Analysis

Start with what IT knows. Pull every approved SaaS application from your IT asset register, your SSO directory, and your expense management system. For each approved application, check whether it has AI features enabled — either by default or by administrator configuration. The gap analysis is which of those enabled AI features are processing personal data about customers, employees, or members of the public.

This component typically surfaces the most volume: AI embedded in approved tools that was never separately reviewed.
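
Stripped to its core, the gap analysis is a set difference between the applications your SSO directory sees in use and the applications your AI register has actually reviewed. A minimal sketch, assuming hypothetical CSV exports with an `application` column (the paths and column name are illustrative, not any specific product's export format):

```python
import csv

def load_column(path: str, column: str) -> set[str]:
    """Load one column from a CSV export into a normalised set."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

def register_gap(sso_export: str, ai_register: str) -> set[str]:
    """Applications visible in SSO but absent from the AI asset register."""
    in_use = load_column(sso_export, "application")
    reviewed = load_column(ai_register, "application")
    return in_use - reviewed
```

The same comparison works against the expense system's vendor list; anything in the difference is a candidate for the gap analysis.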

Component 2: Expense Data Mining

Pull 12 months of expense data and filter for AI-adjacent vendors. The keyword list to search: AI, GPT, Claude, Gemini, Copilot, assistant, transcription, automation, screening, scoring, analytics. Include vendor names: OpenAI, Anthropic, Google AI, Microsoft AI, Grammarly, Otter, Fireflies, Notion, Jasper, Copy.ai, HubSpot AI, Salesforce Einstein, any productivity AI subscription.

This surfaces the department-level AI tool purchases that bypassed IT procurement.
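
A minimal first-pass filter over an expense export might look like the following. The field names (`vendor`, `description`) are assumptions for illustration, and even word-boundary matching produces false positives (for example, "automation" flags plenty of non-AI vendors), so treat hits as leads to review, not findings:

```python
import re

# Keyword list drawn from the search terms above; illustrative, not exhaustive.
AI_PATTERN = re.compile(
    r"\b(ai|gpt|claude|gemini|copilot|assistant|transcription|automation|"
    r"screening|scoring|analytics|openai|anthropic|grammarly|otter|"
    r"fireflies|notion|jasper|einstein)\b",
    re.IGNORECASE,
)

def flag_ai_expenses(rows: list[dict]) -> list[dict]:
    """Return expense rows whose vendor or description matches an AI keyword."""
    return [
        row for row in rows
        if AI_PATTERN.search(f"{row.get('vendor', '')} {row.get('description', '')}")
    ]
```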

Component 3: Developer Tooling Audit

For organisations with engineering or data teams: audit API key management. What AI API keys exist in your secrets management system, your CI/CD pipelines, your infrastructure configuration? What about developer workstation environment variables — personal keys embedded in local development environments that are used to process production data?

This surfaces the API-level shadow AI that creates both compliance and security exposure.
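
One hedged way to start the audit is a recursive scan for key-like strings, relying on publicly documented prefixes such as OpenAI's `sk-`. The pattern list below is illustrative and will miss rotated or vaulted keys; any hit is a lead to investigate, not proof of shadow use:

```python
import os
import re

# Illustrative patterns: a generic "sk-" secret-key shape plus common
# environment variable names. Not an exhaustive secrets scanner.
KEY_PATTERN = re.compile(
    r"sk-[A-Za-z0-9_-]{20,}|OPENAI_API_KEY|ANTHROPIC_API_KEY|GOOGLE_API_KEY"
)

def scan_tree(root: str) -> list[tuple[str, int]]:
    """Walk a directory and report (path, line_number) for each key-like match."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip directories that generate noise without signal.
        dirnames[:] = [d for d in dirnames if d not in {".git", "node_modules"}]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if KEY_PATTERN.search(line):
                            hits.append((path, lineno))
            except OSError:
                continue
    return hits
```

Run it across repository checkouts, CI/CD configuration, and dotfile backups; a dedicated secrets scanner will do a more thorough job, but this is enough to start the inventory.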

Component 4: Employee Survey

A targeted survey to department heads and team leads asking: what AI tools are you or your team using to do your work? What does the tool do? Does it process customer data? This is not a punitive exercise — the goal is information, not disciplinary action. Communicate that clearly.

Many employees are unaware that the AI tools they are using create compliance obligations. They adopted the tool because it made them more productive. The survey creates the inventory.

Component 5: Network Traffic Analysis

For organisations with the technical capability: analyse outbound API traffic to identify calls to AI provider endpoints (api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, Azure AI endpoints). This is the most reliable method for capturing individual API use and will often surface systems the employee survey misses.
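
A sketch of the log-filtering step, assuming you can export proxy or DNS logs as text lines. The endpoint list mirrors the examples above; Azure OpenAI is matched by suffix because it uses per-resource subdomains:

```python
from collections import Counter

AI_ENDPOINTS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    ".openai.azure.com",  # Azure OpenAI: per-resource subdomains
)

def ai_destinations(log_lines) -> Counter:
    """Tally outbound requests to known AI provider endpoints in a text log."""
    counts = Counter()
    for line in log_lines:
        for endpoint in AI_ENDPOINTS:
            if endpoint in line:
                counts[endpoint] += 1
    return counts
```

Grouping the tally by source IP or user (where your logs carry that field) turns the counts into a list of people to follow up with during the employee survey.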

Classification: What Needs Attention First

Once you have the inventory, classify each system against two axes: the volume of personal data processed, and the type of decisions made or influenced.

Tier 1 — High priority: Systems making or substantially assisting in decisions that significantly affect individuals. Credit assessments, insurance triage, employment screening, access to services, clinical decision support, financial advice. These systems require the full Privacy Act compliance build: privacy policy disclosure, audit trail infrastructure, explanation process, individual notification capability. All Tier 1 systems need to be compliant by December 10.

Tier 2 — Medium priority: Systems processing personal data but making low-stakes automated decisions. Content recommendations, meeting transcription, document drafting assistance, customer communication AI. These require privacy policy disclosure and appropriate data processing agreements with vendors but do not typically require the full audit infrastructure for individual decision explanations.

Tier 3 — Low priority: AI tools not processing personal data about identifiable individuals. Internal productivity tools, code generation AI operating on internal systems without customer data, document summarisation of non-personal documents. These require registration in the AI agent register but minimal additional compliance action.

Most shadow AI lives in Tier 2 and Tier 3. The Tier 1 discoveries — AI tools making consequential decisions that nobody in the business formally authorised — are the ones that create immediate compliance exposure.
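
The two-axis classification above reduces to a small mapping. The boolean inputs are our paraphrase of the axes, not a formal test from the legislation:

```python
def classify(processes_personal_data: bool, significant_decisions: bool) -> int:
    """Map the two classification axes to a tier (1 = highest priority)."""
    if not processes_personal_data:
        return 3  # register the tool, minimal further action
    return 1 if significant_decisions else 2
```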

The Most Common Tier 1 Shadow AI Findings

Based on the pattern of shadow AI discovery work in Australian mid-market organisations, the most common Tier 1 findings are:

HR screening AI. Recruitment teams using AI tools to screen CVs, score candidates, or prioritise applications. Employment decisions significantly affect individuals. Any AI tool used in the screening process is in scope for the December 2026 obligations. Several popular HR platforms have AI screening features that are enabled by default and were not assessed when the platform was originally procured.

Credit and collections AI. Finance teams using AI to prioritise follow-up on overdue accounts, assess payment risk, or make recommendations about credit terms. These decisions affect individuals and companies. If the AI system's reasoning is not documented, the obligation to explain on request cannot be met.

Customer service routing and triaging. Call centre and customer service AI that automatically categorises, prioritises, or routes service requests based on customer data. When the categorisation affects which customers get priority service or which complaints are escalated, it is making decisions that significantly affect individuals.

Marketing personalisation and exclusion. AI systems that determine which customers receive which offers, or that flag customers for exclusion from promotional campaigns. If the AI is using inferred attributes — purchase propensity scores, risk scores, behavioural classifications — those inferences are personal information and the decisions they drive require disclosure.

The Governance Fix

Once the inventory is complete and classified, the governance framework has five components:

1. Acceptable Use Policy. A clear policy that specifies which AI tool categories are permitted, which require prior approval, and which are prohibited. It must be communicated, not just published. Every employee in your organisation using AI tools for work purposes needs to have read and acknowledged it.

2. AI Procurement Gate. A lightweight but mandatory process that every new AI tool purchase must go through — even if the cost is within discretionary limits. The assessment should take hours, not weeks. Three questions: Does this tool process personal information? Does it make or influence decisions about individuals? Does it store or transfer data outside Australian jurisdiction? If all three answers are no, the tool is approved. If any answer is yes, the tool goes to a second stage that includes a review of the vendor's privacy policy and cross-border transfer compliance.
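
The gate's screening logic is simple enough to encode directly. The parameter names below are our paraphrase of the three questions (each argument is True when that screening question raises a flag), and the return values are illustrative:

```python
def procurement_gate(processes_personal_info: bool,
                     influences_individual_decisions: bool,
                     unresolved_cross_border_risk: bool) -> str:
    """Approve outright only when every screening answer is 'no'."""
    if any((processes_personal_info,
            influences_individual_decisions,
            unresolved_cross_border_risk)):
        return "second-stage review"
    return "approved"
```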

3. Vendor Data Processing Agreements. For every AI tool processing personal data, you need a Data Processing Agreement (DPA) in place. Most major AI vendors have a DPA available — but most Australian businesses have never executed one. The DPA needs to cover data residency, retention, training data use, breach notification obligations, and deletion rights.

4. Privacy Policy Update. The privacy policy needs to reflect what you now know your AI systems do. This is not a one-time exercise — every time a new Tier 1 or Tier 2 AI tool is approved, the privacy policy disclosure needs to be updated.

5. Ongoing Register. An AI agent register that is updated as new tools are adopted and reviewed on a quarterly basis. The OAIC is not looking for perfection — it is looking for evidence of a functioning governance programme. A maintained register, an acceptable use policy, and evidence of vendor due diligence together demonstrate the preparation posture that puts you in a materially better compliance position.

How Long Do You Have?

The December 10 deadline is 251 days from the date of this article.

For Tier 1 systems with no audit trail infrastructure, the engineering build takes 6–8 weeks. For shadow AI tools that are not compliant and cannot be made compliant — tools storing data on US infrastructure with no DPA, tools retaining personal data for model training without consent — the decision to replace them should happen in April, not September. Replacement procurement, implementation, and data migration takes time.

The businesses that start the shadow AI discovery process in April will have comfortable runway to December 10. The businesses that start in August will be in retrofit mode under deadline pressure, paying emergency rates for compliance remediation work.

The OAIC compliance sweep is active now. The question is not whether your organisation will eventually be required to demonstrate a compliant AI inventory. It is whether you discover and document what you are running before the OAIC asks — or after.

The Next Step

The AI Readiness Sprint (AUD $7,500, 2 weeks) includes a complete shadow AI discovery process as a core deliverable: IT asset gap analysis, expense data mining, developer tooling audit, and classification against the December 2026 compliance framework. The output is an AI agent register, Tier 1/2/3 classification, a vendor DPA gap list, and a privacy policy update briefing document for your legal team.

If you have received an OAIC compliance sweep notice, the full gap analysis can be completed in 5 business days.

*Akira Data is an Australian AI consulting firm. We help mid-market businesses implement practical AI systems that are Privacy Act compliant from day one. All data processed on Australian jurisdiction infrastructure. [Contact us](/contact) to discuss your shadow AI inventory.*
