
The December 2026 Privacy Act Deadline: What Australian Businesses Using AI Must Know

From 10 December 2026, every Australian business using AI to make decisions affecting individuals must comply with mandatory transparency obligations under the Privacy Act. Here is exactly what that means, who it affects, and what to do about it before the deadline.

Rahul Pagidi

Data Engineer. Azure 6x Microsoft Certified. Monash University.

On 10 December 2026, a new set of obligations under the Privacy Act 1988 (Cth) comes into effect. They are specifically targeted at organisations that use automated systems — AI, computer programs, algorithmic decision-making — to make decisions that significantly affect individuals.

If your business uses AI to process loan applications, assess insurance claims, triage customer complaints, screen job applicants, set dynamic pricing, or make any other decision that meaningfully affects a person, these obligations apply to you.

Less than nine months remain to get ready.

What the Law Actually Requires

The Privacy and Other Legislation Amendment Act 2024 (Cth) amended the Privacy Act 1988 to introduce automated decision-making transparency obligations. These obligations take effect from 10 December 2026.

From that date, any APP entity (more on who this covers below) that relies on a computer program to make, or substantially assist in making, decisions that significantly affect individuals must:

1. Disclose in their privacy policy:

  • Which types of decisions are made or substantially assisted by automated means
  • What types of personal information are used in those automated decisions

2. Notify affected individuals: When an automated decision significantly affects an individual, the individual must be notified that automated decision-making was involved and that they can request more information.

3. Provide meaningful explanations on request: An individual can request an explanation of a decision that significantly affected them. The explanation must cover the key factors that led to the decision — not a generic statement that "computer systems were used."

This is not theoretical. From 10 December 2026, this is enforceable law, with the OAIC able to investigate complaints and issue substantial penalties.

Who Is Affected

The obligations apply to APP entities — which under Australian law means:

  • Any organisation or company with an annual turnover greater than $3 million AUD
  • Health service providers of any size
  • Credit reporting bodies
  • Operators of a business or undertaking that trades in personal information
  • Commonwealth government agencies

If your business has $3M+ in annual revenue and uses AI to make decisions that affect your customers, staff, suppliers, or any other individuals, you are an APP entity and these obligations apply to you.

What Counts as "Automated Decision-Making"

The law covers decisions made or substantially assisted by a computer program. "Substantially assisted" is key — it is not limited to fully automated systems with zero human involvement. It also covers:

  • A system that produces a risk score, recommendation, or ranking that a human then acts on
  • AI-generated summaries used to inform hiring or lending decisions
  • Algorithmic pricing systems that set the rate a customer receives
  • Triage systems that categorise or prioritise cases
  • Any AI output that materially influences a decision about an individual

The test is whether the program's output plays a significant role in the decision — not whether a human technically pressed the final button.

The Practical Compliance Problem

Most businesses that use AI tools — even relatively simple ones — cannot currently answer these three questions:

  • Which of our AI systems make decisions or substantially assist in decisions affecting individuals?
  • What personal information does each of those systems use?
  • If a customer asked for an explanation of a specific decision made about them last month, could we produce one?

For many organisations, the honest answer to question three is "no." The AI system ran, a decision was made, but there is no structured audit trail linking that specific decision to the specific data and model version that produced it.

Under the post-December obligations, that is a compliance gap.

The Four Things You Need to Do Before 10 December 2026

1. Inventory your automated decision-making systems

Map every system in your business that uses AI, machine learning, algorithmic scoring, or any automated processing to produce outputs that affect individuals. Include:

  • Customer-facing systems (loan decisioning, pricing, recommendation engines)
  • Internal HR systems (screening, performance scoring)
  • Supplier or vendor assessment tools
  • Any SaaS platform that processes personal information about your customers

For each system, document: what decision it makes, what personal data it uses, and what the decision output looks like.
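One way to make that documentation usable (searchable, checkable, reviewable) is to keep it as structured records rather than a spreadsheet of free text. A minimal sketch in Python; the field names and the example system are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionSystem:
    """One entry in an automated decision-making inventory."""
    name: str                      # e.g. "Loan pre-approval scorer"
    decision_made: str             # what the system decides or assists with
    personal_data_used: list[str]  # categories of personal information
    output_shape: str              # what the decision output looks like
    substantially_assists: bool    # True if a human acts on the output
    owner: str                     # business owner accountable for the system

inventory = [
    AutomatedDecisionSystem(
        name="Loan pre-approval scorer",
        decision_made="Pre-approves or declines personal loan applications",
        personal_data_used=["credit history", "income", "employment status"],
        output_shape="approve/decline plus risk score 0-100",
        substantially_assists=True,
        owner="Head of Lending",
    ),
]

# Quick check: every in-scope system must have its data categories documented.
undocumented = [s.name for s in inventory if not s.personal_data_used]
assert not undocumented, f"Missing data categories for: {undocumented}"
```

A structured inventory like this also feeds directly into the privacy policy update in step 2, since the disclosure categories fall out of the recorded fields.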

2. Update your privacy policy

Your privacy policy needs a new section disclosing your automated decision-making practices. At minimum, it should cover:

  • Which categories of decisions are made or substantially assisted by automated means
  • Which categories of personal information are used
  • What rights individuals have to request information about those decisions

Generic boilerplate will not satisfy this requirement. The disclosure needs to be specific to your actual systems and practices.

3. Build the technical capability to explain decisions

This is where most organisations have work to do. To respond to a request for explanation, you need:

  • Structured logging — every automated decision recorded with timestamp, system ID, inputs used, output produced, and any confidence or scoring metrics
  • Trace IDs — a unique identifier for each decision that allows you to pull the complete audit trail
  • Input snapshots — a record of the exact data state at decision time, so you can explain why a decision was made even if the underlying data has since changed
  • Human-readable output — the ability to translate the system's output into an explanation a non-technical person can understand

Without this infrastructure, you cannot respond to explanation requests in a meaningful way. Building it retroactively after a complaint is received is significantly harder — and more expensive — than building it into the system from the start.
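As a rough illustration of what that infrastructure looks like, here is a minimal Python sketch of a decision logger that issues a trace ID, timestamps the decision, and snapshots the inputs. The `store` list stands in for whatever durable audit store you actually use (a database table, an append-only log); all names here are assumptions for illustration:

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(system_id: str, model_version: str,
                 inputs: dict, output: dict, store: list) -> str:
    """Record one automated decision with everything needed to explain it later.

    Returns the trace ID for the decision.
    """
    trace_id = str(uuid.uuid4())
    record = {
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        # Snapshot the inputs as-of decision time; serialising a copy means
        # later changes to the source data cannot rewrite the audit trail.
        "input_snapshot": json.loads(json.dumps(inputs)),
        "output": output,
    }
    store.append(record)
    return trace_id

# Usage: every automated decision gets a unique, searchable trace ID.
audit_store: list = []
tid = log_decision(
    system_id="loan-scorer",
    model_version="2026-03-01",
    inputs={"income": 85000, "credit_score": 712},
    output={"decision": "approve", "risk_score": 23},
    store=audit_store,
)
```

The key design choice is capturing the snapshot at decision time rather than reconstructing it later: once the source data changes, a retroactive reconstruction can no longer prove what the system actually saw.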

4. Create an explanation request process

Draft a simple internal process for how your team responds when someone requests an explanation of an automated decision that affected them. Define:

  • Who receives the request
  • How they access the audit trail
  • What timeframe they have to respond (align to your existing ARO processes)
  • What the explanation looks like
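To show how a stored audit trail feeds this process, here is a minimal Python sketch that retrieves a record by trace ID and renders it as plain language. The record fields and wording are assumptions; a production version would map model features to legally reviewed explanation text rather than echoing raw field names:

```python
def explain_decision(trace_id: str, store: list) -> str:
    """Turn a stored audit record into a plain-language explanation."""
    record = next((r for r in store if r["trace_id"] == trace_id), None)
    if record is None:
        return "No automated decision found for that reference."
    factors = ", ".join(
        f"{key} = {value}" for key, value in record["input_snapshot"].items()
    )
    return (
        f"On {record['timestamp']}, system '{record['system_id']}' "
        f"(version {record['model_version']}) produced the outcome "
        f"{record['output']}. The key inputs considered were: {factors}."
    )

# Minimal illustration with one stored record (field names are assumptions):
store = [{
    "trace_id": "abc-123",
    "timestamp": "2026-06-01T03:15:00+00:00",
    "system_id": "loan-scorer",
    "model_version": "2026-03-01",
    "input_snapshot": {"income": 85000, "credit_score": 712},
    "output": {"decision": "approve"},
}]
print(explain_decision("abc-123", store))
```

Even a simple renderer like this forces the question that matters for compliance: does the audit record contain enough to state the key factors behind the decision, or only that a decision occurred?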

The Connection to Observability

At Akira Data, our observability-first approach to AI systems is directly aligned with these compliance requirements. When we build an AI agent or workflow, it comes with:

  • Full audit trails — every decision logged with inputs, outputs, timing, and model version
  • Trace IDs — every run has a unique identifier you can search by customer, date, or decision type
  • Explainability layers — for decisions that affect individuals, the system generates a structured explanation at decision time, not retrospectively
  • Data lineage — you can trace exactly which data points influenced a given output

Systems built without these capabilities will need to be retrofitted before December. Systems built with them are compliant from day one.

Specific Industries to Watch

Some industries carry higher regulatory risk because automated decisions in their context are more likely to significantly affect individuals:

Financial Services (APRA-regulated entities)
Loan decisioning, credit scoring, insurance underwriting, fraud detection, investment recommendations. These are core use cases of AI in financial services, and all of them fall squarely within the scope of the new obligations. APRA has separately signalled it is watching AI governance in financial services closely.

Healthcare
AI-assisted clinical triage, diagnosis support, treatment recommendations, and claims processing all involve decisions that significantly affect individuals. Health service providers of any size are APP entities regardless of turnover.

Professional Services (Legal, Accounting, HR)
Contract review AI that categorises risk, accounting AI that flags audit issues, HR AI that screens applications: these all produce outputs that substantially assist decisions affecting individuals.

Retail & eCommerce
Dynamic pricing algorithms that set individual customer prices, fraud detection systems that block transactions, and loyalty scoring systems all touch personal information in ways that may require disclosure.

What Happens If You Don't Comply

The OAIC (Office of the Australian Information Commissioner) can investigate complaints and now has significantly expanded enforcement powers. For a serious interference with privacy by a body corporate, the maximum penalty is the greatest of:

  • AUD $50 million
  • Three times the value of the benefit obtained from the interference
  • 30% of the entity's adjusted turnover in the relevant period

For mid-market companies, the reputational and regulatory risk of an OAIC investigation is substantial — independent of the financial penalty.

The Bottom Line

You have until 10 December 2026. That sounds like a long time. It is not, because the work is:

  • An inventory exercise — requires business-side engagement, not just IT
  • A policy rewrite — requires legal review and sign-off
  • A technical build — adding observability and explainability to AI systems takes time, especially for systems you inherited or bought rather than built
  • A process design — training staff on how to respond to explanation requests

Start with the inventory. It will tell you exactly how much work you actually have.


*Akira Data builds AI systems with Privacy Act compliance built in from the start — including automated decision-making transparency, full audit trails, and explainability. If you need to assess your current exposure before December 2026, our [AI Readiness Sprint](/services#readiness) includes a Privacy Act compliance gap analysis.*

*This article is general information and does not constitute legal advice. Consult your legal advisers for guidance specific to your organisation.*
