Australians Can Now Sue Your Business for Privacy Breaches. Here Is What the New Tort Means for AI.
The Privacy and Other Legislation Amendment Act 2024 introduced a statutory tort of serious invasions of privacy. For the first time, Australians have a personal right of action to sue when their privacy is breached. Combined with the OAIC's active compliance sweep and the December 2026 AI deadline, businesses using AI to process personal data now face dual exposure: regulator enforcement and individual civil lawsuits. Here is what changed, who is most at risk, and what to do.

AI PM at SOLIDWORKS. Founder, Akira Data.
*Published 30 March 2026.*
Something changed in Australian privacy law that most businesses have not fully absorbed.
On 10 December 2024, the Privacy and Other Legislation Amendment Act 2024 received Royal Assent. Within it was a provision that fundamentally changed the risk landscape for every Australian business processing personal data: a new statutory tort of serious invasions of privacy.
The tort commenced within six months — by June 2025, Australians had a personal right to sue businesses that seriously invade their privacy. Not just report to the OAIC. Not just wait for regulatory action. Actually sue.
For Australian mid-market businesses using AI to process personal data — and the OAIC's January 2026 compliance sweep indicates most are doing so without complete compliance infrastructure — this is not an abstract legal development. It is a material change to the risk exposure of every AI deployment that processes personal information about individuals.
The regulator has always been the primary enforcement mechanism. Now individuals are plaintiffs too.
What the Tort Actually Says
The statutory tort of serious invasions of privacy allows an individual to bring legal proceedings against a person or organisation where:
- There was an intrusion upon seclusion (accessing private information or observing private activities without consent) or misuse of private information (using information in a way that violates a reasonable expectation of privacy)
- The invasion was intentional or reckless (negligence is not enough)
- The invasion was serious — trivial or minor interferences do not qualify
- The individual had a reasonable expectation of privacy in the circumstances
- The public interest does not outweigh the privacy interest
The court can award:
- Compensatory damages for distress, loss, and consequences of the breach
- Aggravated damages where conduct was deliberate or reckless
- Account of profits — requiring the defendant to disgorge any profit made from the privacy invasion
- Injunctive relief — orders to stop the conduct or require specific actions
- Declarations that the conduct constituted a serious invasion of privacy
The limitation period is three years from when the plaintiff knew or ought reasonably to have known of the invasion.
Why AI Deployments Create Specific Tort Exposure
The new tort was designed with the modern data environment in mind. AI systems create specific exposure because of how they process, infer, and act on personal information.
Inferred Sensitive Attributes
The 2024 Privacy Act amendments also expanded the definition of personal information to include inferences and derived attributes. An AI model that infers an individual's health status from purchase patterns, their financial stress from transaction behaviour, or their political views from content engagement is creating personal information.
When that inferred information is used in a way that significantly affects the individual (a credit decision, a service denial, a pricing differential), and the individual reasonably expected that their shopping behaviour would not be used to infer their health status and adjust their insurance premiums, the tort may apply.
Australian example: A health insurance AI that analyses customers' grocery purchase history (through a loyalty program data partnership) and uses inferences about diet and health behaviours to inform premium pricing, without clear disclosure and consent, is exactly the profile of conduct the tort was designed to address.
Unauthorised Profiling
AI systems that build detailed profiles of individuals from multiple data sources — combining social media behaviour, transaction history, location data, and demographic information — create profiles the individuals likely did not know were being built.
Under the tort, the question is whether there was a reasonable expectation of privacy. Most Australians, when they use a loyalty card, would not expect their supermarket visits to be combined with their social media activity and bank transactions to create a behavioural profile used to determine what price they are offered on a service. That gap between expectation and reality is where tort liability lives.
Automated Decisions With Unexplained Consequences
The tort's serious invasion threshold means courts will consider the consequences to the individual. An automated credit decision that denies someone a home loan — made using data processing that was not disclosed, based on inferences the individual could not anticipate, with no explanation provided — creates a more compelling tort case than a minor inconvenience.
Combine this with the December 2026 Privacy Act obligations for automated decision-making transparency, and businesses deploying AI for significant decisions without the required infrastructure are simultaneously creating Privacy Act regulatory exposure and potential tort exposure to the individuals affected.
The Third-Party AI Vendor Dimension
The Clearview AI OAIC determination (published March 2026) established that offshore AI vendors processing Australian personal data create compliance obligations for the Australian business engaging them. The tort extends this principle to civil liability.
If an Australian business uses an AI vendor whose data practices result in a serious invasion of privacy for an Australian individual — inferred health information disclosed to third parties, biometric data used without consent, personal data used for purposes far outside what was disclosed — the Australian business that collected the data and engaged the vendor is not insulated from liability by the vendor relationship.
The OAIC Compliance Sweep: Specifically Checking Privacy Policies Now
In January 2026, the OAIC launched its first proactive compliance sweep — targeting approximately 60 organisations across financial services, healthcare, retail, telecommunications, professional services, and real estate.
The sweep is specifically looking at privacy policies. The OAIC's published methodology confirms it is assessing whether privacy policies accurately describe:
- What personal information is collected
- How it is used and disclosed
- Whether automated decision-making is used
- How individuals can access, correct, and complain
The connection to the tort is direct. An individual who suffers a serious privacy invasion from an AI system, and who finds on reviewing the privacy policy that the AI's existence and data practices were never disclosed, has a stronger tort case: the non-disclosure supports the argument that the processing fell outside any reasonable expectation of privacy the business had established.
In practical terms: a business that updates its privacy policy to accurately disclose AI data practices — and does so before a tort claim is filed — is in a materially better defensive position than one that does not.
The OAIC sweep gives businesses a specific reason to do this now rather than waiting until December.
The Dual Pressure Australian Businesses Now Face
Before June 2025, privacy enforcement in Australia was primarily regulatory. The OAIC investigated, the Commissioner made determinations, and penalties were administrative.
After June 2025, every individual affected by a serious invasion of privacy is a potential plaintiff.
The dual pressure looks like this:
Regulatory track: OAIC investigates, issues compliance notices, and can refer for civil penalty proceedings. Penalties for serious or repeated interferences with privacy reach the greater of AUD 50 million, three times the benefit obtained, or 30 per cent of adjusted turnover. Active since the January 2026 compliance sweep.
Civil track: An individual (or a class of individuals) brings proceedings in the Federal Court or Federal Circuit and Family Court. The individual must show a serious invasion, reasonable expectation of privacy, and damages. Three-year limitation period from discovery.
For businesses operating AI systems that process personal data at scale, the civil track creates exposure that is qualitatively different from the regulatory track. A regulatory investigation affects the business. A class action affects the business and its shareholders.
The class action dimension: The Privacy Act tort is a right available to individuals. Where a single AI system has affected thousands of individuals in similar ways — a systematic failure to disclose AI processing, a bulk data practice that affected an entire customer cohort — the tort creates the conditions for a representative action.
Who Is Most At Risk
Highest risk: Consumer financial services
Credit decisions, insurance assessments, investment platform decisions. The individuals affected are seeking financial services they need and may be denied. The decisions are consequential, the data processing is complex, and the AI is often making or substantially influencing decisions based on inferences the individual cannot see.
APRA-regulated entities face this exposure alongside CPS 230 and Privacy Act regulatory obligations. An individual denied a loan based on an AI system that used inferred financial stress indicators from transaction data the individual did not know was being profiled has a compelling tort case if the practice was not disclosed.
High risk: Healthcare and aged care
Clinical decision support, triage AI, clinical documentation analysis. Health information is the most sensitive category of personal information. AI inferences about health status from non-health data (purchasing patterns, fitness tracking, social media) are particularly sensitive.
Significant risk: HR and recruitment
AI screening tools, performance analytics, workforce monitoring systems. The individuals affected are employees or job applicants with acute economic interests in the outcome. Discrimination claims interact with privacy tort claims where the AI system processes protected attributes.
Moderate risk: Retail and consumer services
Personalisation engines, loyalty program analytics, dynamic pricing. The risk is lower because individual decisions are less consequential. But large-scale profiling without adequate disclosure creates systematic risk that aggregates across an affected customer base.
The Five Actions That Reduce Tort Exposure
Action 1: Update your privacy policy to disclose AI data practices
Your privacy policy must now accurately describe:
- Which AI systems process personal data and for what purposes
- What categories of personal data are used for AI processing
- Whether inferences or derived attributes are created and used
- Whether automated decision-making is used for decisions affecting individuals
- What rights individuals have regarding AI-related data practices
The OAIC sweep is specifically checking this. Update it before you receive a compliance notice.
Action 2: Audit third-party AI vendors for data practices
The tort applies to how Australian personal data is processed, regardless of who processes it. Confirm for each AI vendor:
- Data residency: is data processed in Australia?
- Purpose limitation: is data used only for the contracted purpose?
- Model training: is your customer data used to train vendor models?
- Sub-processors: who does the vendor share data with?
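These four checks can be captured as a simple structured record so the audit is repeatable across every vendor. A minimal Python sketch; the class, field names, and the APP 8 note are illustrative assumptions, not a legal standard or an OAIC template:

```python
from dataclasses import dataclass

@dataclass
class VendorAudit:
    """One record per AI vendor. Field names are illustrative, not a legal standard."""
    vendor: str
    data_processed_in_australia: bool
    purpose_limited_to_contract: bool
    customer_data_used_for_training: bool
    sub_processors_disclosed: bool

    def open_questions(self) -> list[str]:
        """Return the audit items that still indicate unmanaged exposure."""
        issues = []
        if not self.data_processed_in_australia:
            issues.append("data leaves Australia; review cross-border disclosure (APP 8)")
        if not self.purpose_limited_to_contract:
            issues.append("no contractual purpose limitation")
        if self.customer_data_used_for_training:
            issues.append("customer data trains vendor models")
        if not self.sub_processors_disclosed:
            issues.append("sub-processors not disclosed")
        return issues
```

A vendor with an empty `open_questions()` list is not automatically compliant, but any non-empty list is a concrete item for the contract renegotiation or the Privacy Impact Assessment.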
Action 3: Build the explanation infrastructure
An AI system that can explain its decisions — what data was processed, what inferences were made, what the outcome was and why — is in a better defensive position in any tort proceedings. The December 2026 Privacy Act obligations require this. Building it now serves both compliance requirements and tort defence.
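What this explanation infrastructure has to capture can be reduced to a small, append-only audit record per decision. A minimal sketch, assuming one JSON log line per automated decision; the function name, field names, and structure are hypothetical, not drawn from the Act or any OAIC guidance:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system: str, subject_id: str, inputs: list[str],
                    inferences: list[str], outcome: str, reasons: list[str]) -> str:
    """Serialise one automated decision as an audit record.

    Captures the four things a tort defence (and transparency obligations)
    will ask for: what data was processed, what was inferred,
    what the outcome was, and why.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "subject_id": subject_id,     # pseudonymised identifier, not raw personal data
        "input_categories": inputs,   # categories of data, e.g. "transaction history"
        "inferences": inferences,     # derived attributes the model produced
        "outcome": outcome,
        "reasons": reasons,           # human-readable factors behind the outcome
    }
    return json.dumps(record)
```

The design choice that matters is logging categories and reasons rather than raw personal data: the audit trail itself should not become a second privacy exposure.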
Action 4: Conduct a Privacy Impact Assessment for each AI system
A business that conducted a PIA, documented the risks, and implemented mitigations is in a materially stronger position in any subsequent tort proceeding. For AI systems already in production without a PIA: a retrospective assessment now is better than none.
Action 5: Create a privacy incident response plan
When a privacy incident occurs, the response matters. Under the new tort, the speed, transparency, and remediation offered is relevant to whether a court finds the invasion was serious and whether aggravated damages are appropriate.
The Timing Argument for Acting Now
The Privacy Act tort became active in mid-2025. The OAIC compliance sweep began in January 2026. The December 2026 automated decision-making transparency obligations are nine months away. The AI systems creating exposure are running in production today.
The limitation period is three years from when the plaintiff knew or ought reasonably to have known of the invasion. For AI decisions made today without proper disclosure, that clock may not even start until the individual discovers the practice, which leaves a long exposure tail.
The businesses that act now — updating privacy policies, conducting PIAs, building explanation infrastructure — are doing so ahead of any claims. The businesses that wait are building a backlog of potential tort exposure with every passing month.
Australian privacy law has entered a new era. The regulator is checking. Individuals have standing to sue. The AI systems most Australian businesses are deploying are exactly the systems the new law was designed to govern.
*Akira Data builds Privacy Act-compliant AI systems for Australian mid-market businesses — audit trails, explainability, Privacy Impact Assessments, and privacy policy updates that protect against both OAIC regulatory action and the new statutory tort of serious invasions of privacy. The Privacy-Safe AI Implementation (AUD $20,000, 4-6 weeks) covers the complete compliance build. The AI Readiness Sprint (AUD $7,500, 2 weeks) is the starting point for businesses that need to understand their current exposure.*
*This article was published 30 March 2026. It references the Privacy and Other Legislation Amendment Act 2024 (Royal Assent 10 December 2024), the new statutory tort of serious invasions of privacy, the OAIC's January 2026 proactive compliance sweep, and the December 2026 automated decision-making transparency obligations. This article is general information only and does not constitute legal advice.*