Clearview AI Breached Australia's Privacy Act. What Every Australian Business Using AI Must Do Now.
The OAIC just found Clearview AI breached the Australian Privacy Act by scraping facial images without consent. It's the first major AI-specific enforcement ruling in Australia — and it changes the risk calculus for every business using AI to process personal information. Here is what the decision actually means, who is now exposed, and the practical steps to take before you become the next case study.

AI PM at SOLIDWORKS. Founder, Akira Data.
*Published 24 March 2026.*
The Office of the Australian Information Commissioner handed down a determination this week that every Australian business using AI should read carefully: Clearview AI breached the Privacy Act 1988 (Cth) by scraping Australians' facial images from the internet and using them in a facial recognition tool without consent.
This is not a hypothetical enforcement scenario. This is a completed OAIC investigation with a formal determination — the first major AI-specific Privacy Act enforcement action in Australian history.
For Australian mid-market businesses, the Clearview ruling is a wake-up call with a specific message: the OAIC is actively investigating AI-related privacy breaches, it is willing to make findings against large technology companies, and the behaviour that triggered this investigation — collecting and using personal data for AI without consent — is more common in Australian businesses than most boards realise.
What Clearview Did and Why the OAIC Found It Breached the Privacy Act
Clearview AI built a facial recognition database by scraping billions of images from the internet — social media profiles, news articles, public websites — and using them to train and power a facial recognition tool sold to law enforcement agencies.
The OAIC's determination found that this conduct breached three Australian Privacy Principles (APPs):
APP 3 — Collection of personal information: The Privacy Act permits personal information to be collected only where reasonably necessary and, for sensitive information, generally only with consent. Facial images used for automated identification are biometric information, a category of sensitive information under Australian law. Clearview collected this data without consent from the individuals whose images were scraped.
APP 5 — Notice of collection: Where organisations collect personal information, they must take reasonable steps to notify individuals of the collection and its purpose. Clearview provided no such notice to the Australians whose images were collected.
APP 6 — Use and disclosure: Personal information can only be used for the purpose for which it was collected, or a related secondary purpose the individual would reasonably expect. Collecting images from social media for a law enforcement facial recognition tool does not meet this test.
The OAIC investigation was initiated following a complaint and was conducted jointly with the UK Information Commissioner's Office; privacy regulators in Canada reached similar findings in a separate investigation. It is worth noting that Clearview is a US company with no offices or operations in Australia, and the OAIC still found a breach, because the individuals whose data was collected are Australians, and the Privacy Act protects their information regardless of where the company collecting it is located.
Why This Matters for Australian Businesses (Even If You Are Not Clearview)
The instinct, on reading about a US facial recognition company being found to breach Australian privacy law, is to assume this is someone else's problem. It is not.
The behaviours the OAIC identified as breaching the Privacy Act are not unique to Clearview. They are variations on behaviours happening in Australian businesses right now — with less extreme but equally real privacy implications.
The Scraping Parallel
Many Australian businesses use AI tools that are trained on data scraped from the internet. Software tools that analyse competitor pricing, marketing tools that aggregate social media data about customers, HR tools that profile candidates from their online presence — all of these may be scraping publicly available personal information without consent from the individuals involved.
The Clearview ruling clarifies: publicly available information is not unprotected information. The fact that someone posted a photo on a social media platform does not mean a business can collect that photo for its own AI purposes without consent. The sensitivity of the data matters (facial images = biometric = sensitive). The purpose matters. The consent matters.
Australian businesses using third-party AI tools that aggregate publicly available personal information should review those tools against the Clearview precedent.
The Inferred Personal Information Parallel
The 2024 Privacy Act amendments extended the definition of personal information to include inferred attributes — information derived about a person from other data. AI systems that infer personal attributes from behavioural data, transaction history, or other data sources are creating personal information under Australian law.
A retail AI system that infers a customer's health status from purchase history and uses that inference in marketing targeting is creating sensitive personal information (health information) without consent. A financial services AI that infers financial stress indicators from transaction patterns and acts on those inferences is doing the same.
These systems are not as dramatic as Clearview's facial recognition tool. But the Privacy Act framework the OAIC applied to Clearview applies equally to them.
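To make the mechanics concrete, here is a minimal Python sketch of the moment an inference becomes sensitive personal information. Everything in it is hypothetical (the `infer_health_signal` rule, the field names, the consent check); it illustrates the principle, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Purchase:
    customer_id: str
    category: str  # e.g. "pharmacy", "groceries"

@dataclass
class CustomerProfile:
    customer_id: str
    attributes: dict = field(default_factory=dict)

def infer_health_signal(purchases: list[Purchase]) -> bool:
    # Toy rule for illustration: repeated pharmacy purchases treated as a health signal.
    return sum(p.category == "pharmacy" for p in purchases) >= 3

def enrich_profile(profile: CustomerProfile, purchases: list[Purchase],
                   consented_purposes: set[str]) -> None:
    if infer_health_signal(purchases):
        # The moment this inference is written to the profile, the business has
        # collected sensitive (health) information about the customer, and the
        # Privacy Act's heightened protections attach to it.
        if "health_inference" not in consented_purposes:
            raise PermissionError(
                "No consent on record for creating or storing a health inference"
            )
        profile.attributes["likely_health_condition"] = True
```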
The Commercial AI Data Training Parallel
Many Australian businesses are exploring fine-tuning AI models on their own data — customer records, support tickets, transaction histories. Before doing this, businesses need to assess: did the customers whose data you are using for AI training consent to that use? Was it within the purpose they were told their data would be used for?
"We always intended to use this data to improve our services" is not a consent framework. The purpose of collection needs to have been disclosed, and using customer data to train a commercial AI model is a materially different purpose from providing the service.
The OAIC Enforcement Posture Has Changed
The Clearview ruling follows other significant OAIC enforcement activity in early 2026. The regulator launched its first-ever proactive compliance sweep in January 2026 — 60 organisations, six sectors. It has clearly shifted from reactive complaint-response to proactive enforcement.
The enforcement pattern is also significant: the OAIC is willing to investigate and make findings against companies that have no Australian presence, no Australian customers, and no Australian employees — because the individuals affected are Australian. The extraterritorial reach of Australian privacy law is real and active.
For Australian businesses, this means:
The "we're just using third-party tools" argument does not work. If you are a data controller — the entity that decides why and how personal data is processed — you are responsible for your AI vendors' compliance with the Privacy Act. This includes reviewing what your AI tool providers are doing with Australian personal data.
The "we haven't had a complaint" argument does not work. The Clearview investigation was initiated following a complaint, but the January 2026 compliance sweep was proactive. The OAIC does not need to wait for a complaint to investigate.
The "it's not obvious personal information" argument does not work. Facial images are obviously personal. But the OAIC's framework — and the 2024 amendments — cover a much broader category of data, including inferred attributes and behavioural profiles.
The December 2026 Compliance Deadline Is Not Theoretical
The Clearview ruling arrives eight and a half months before the Privacy Act's automated decision-making transparency obligations take effect on 10 December 2026. From that date, Australian businesses using AI to make decisions that significantly affect individuals must comply with a mandatory transparency and explanation framework.
The OAIC's active enforcement posture — proactive sweeps, extraterritorial investigations, high-profile rulings — makes clear that December 2026 is a real deadline with real enforcement behind it, not a soft regulatory aspiration.
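For teams starting now, the practical core of that obligation is being able to explain a significant automated decision after the fact. Here is a minimal sketch of a decision record; the fields are illustrative, not a prescribed OAIC schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AutomatedDecisionRecord:
    individual_id: str
    decision: str                # e.g. "credit_limit_reduced"
    model_version: str           # which model made the call
    inputs_used: dict            # the personal information the model relied on
    key_factors: list[str]       # human-readable reasons, for explanation requests
    decided_at: str

def record_decision(individual_id: str, decision: str, model_version: str,
                    inputs: dict, factors: list[str]) -> str:
    rec = AutomatedDecisionRecord(
        individual_id=individual_id,
        decision=decision,
        model_version=model_version,
        inputs_used=inputs,
        key_factors=factors,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    # In practice, write this to an append-only store so explanations survive audits.
    return json.dumps(asdict(rec))
```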
Taken together, the Clearview ruling and the OAIC's January compliance sweep send one consistent message: AI-related privacy compliance is not a future concern. It is a current enforcement priority.
The Specific Risks for Australian Industry Sectors
Retail, eCommerce, and Consumer Services
Retailers collecting and analysing personal data through loyalty programmes, website analytics, and customer behaviour tracking are operating in the shadow of the Clearview precedent. Specifically:
Facial recognition in retail: Any retailer using facial recognition for loss prevention, customer recognition, or marketing personalisation is using biometric sensitive information under the Privacy Act. The Clearview ruling is direct precedent that this requires explicit consent and disclosure. Retailers using these systems without consent are now in a significantly more exposed position.
Behavioural profiling without disclosure: Personalisation engines that build individual profiles from behavioural data — browsing patterns, purchase history, location — are creating personal information. The purposes for which this data is used need to be disclosed in the privacy policy and must align with what customers would reasonably expect when they shop.
Financial Services
Banks, insurers, and lenders using AI to process customer data for credit assessment, fraud detection, or pricing are subject to the Privacy Act alongside APRA's prudential framework. The Clearview ruling strengthens the OAIC's hand in any investigation of financial services AI.
The specific Clearview parallel: credit scoring AI trained on data scraped from social media profiles — a practice some fintech companies use — is now directly in the firing line. Collection of social media data for credit purposes requires consent, and the individuals affected almost certainly do not know it is happening.
Healthcare
Health information has heightened Privacy Act protections regardless of whether AI is involved. Any healthcare provider or health AI vendor using patient data to train AI models needs explicit consent for that purpose — healthcare provision and AI model training are not the same purpose.
HR Technology
Candidate profiling tools that aggregate publicly available information about job applicants — LinkedIn profiles, social media activity, public posts — and use it to assess candidates are collecting personal information (and potentially sensitive information like religious affiliation, political views, or ethnicity visible in public posts) without consent. The Clearview precedent applies.
Data Brokers and Analytics Companies
If your business model involves aggregating personal data from multiple sources and selling it or using it for AI analysis, the Clearview ruling is the most direct warning you will receive. The OAIC has demonstrated it is willing to pursue extraterritorial enforcement. An Australian data aggregation business doing what Clearview did faces significantly more risk than Clearview did, because it is directly subject to Australian jurisdiction.
What to Do in the Next 30 Days
The Clearview ruling creates urgency for three specific categories of review.
Review 1: AI Systems Processing Biometric or Sensitive Personal Information
Biometric data (facial images, fingerprints, voice recordings), health information, religious beliefs, political opinions, and sexual orientation all carry heightened Privacy Act protections. If any of your AI systems process these categories of data, you need to:
- Confirm you have explicit consent for the specific AI use (not just general service use)
- Confirm the collection is reasonably necessary for your legitimate purposes
- Confirm you have given individuals adequate notice of the collection and its AI use
- Confirm you have not disclosed this data to third-party AI providers without appropriate authorisation
If the answer to any of these is "I'm not sure," that is the gap to close first.
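What "explicit consent for the specific AI use" looks like in practice is a purpose-scoped gate in front of the AI step. A minimal sketch, assuming a hypothetical consent store keyed by individual and purpose, that fails closed when the specific purpose is missing:

```python
from enum import Enum

class Purpose(Enum):
    SERVICE_DELIVERY = "service_delivery"
    AI_FACIAL_RECOGNITION = "ai_facial_recognition"
    AI_MODEL_TRAINING = "ai_model_training"

# Hypothetical consent store: individual ID -> purposes they have consented to.
CONSENT_STORE: dict[str, set[Purpose]] = {
    "cust-001": {Purpose.SERVICE_DELIVERY},  # general service consent only
}

def assert_consent(individual_id: str, purpose: Purpose) -> None:
    """Fail closed: data never reaches the AI step without purpose-specific consent."""
    if purpose not in CONSENT_STORE.get(individual_id, set()):
        raise PermissionError(
            f"No consent on record for {individual_id!r} for purpose {purpose.value!r}"
        )

def process_facial_image(individual_id: str, image_bytes: bytes) -> None:
    # General service consent is not enough; the gate checks the AI purpose itself.
    assert_consent(individual_id, Purpose.AI_FACIAL_RECOGNITION)
    # ... hand the image to the recognition model only after the gate passes ...
```

With the store above, `process_facial_image("cust-001", b"...")` raises immediately, because general service consent does not carry over to the AI purpose. That single check is what the four confirmations above reduce to in code.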
Review 2: Third-Party AI Tools That Access Personal Data
For every AI tool your business uses that processes Australian personal information:
- Review the tool's privacy policy and data processing agreement: where does your data actually go?
- Confirm data is processed within Australian jurisdiction, or that appropriate cross-border protections are in place
- Review whether the tool vendor's use of data for model training is consistent with what you told your customers their data would be used for
- Check whether the vendor's data practices are consistent with the specific consent your customers gave
Many businesses will discover that their AI vendors are using customer data in ways that were not disclosed to customers at the point of collection. That disclosure gap is a Privacy Act issue under the Clearview framework.
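One lightweight way to run this review systematically is to hold each vendor as a structured record and flag the gaps mechanically. A sketch with illustrative field names, not an exhaustive checklist:

```python
from dataclasses import dataclass

@dataclass
class AIVendorRecord:
    vendor: str
    data_categories: list[str]       # e.g. ["email", "purchase_history"]
    processing_location: str         # e.g. "AU", "US"
    trains_models_on_our_data: bool
    training_use_disclosed_to_customers: bool

def compliance_gaps(record: AIVendorRecord) -> list[str]:
    gaps = []
    if record.processing_location != "AU":
        gaps.append("Offshore processing: confirm cross-border protections are in place")
    if record.trains_models_on_our_data and not record.training_use_disclosed_to_customers:
        gaps.append("Vendor trains on customer data; this was never disclosed at collection")
    return gaps

# Example: a US-hosted chatbot vendor that trains on conversation logs.
print(compliance_gaps(AIVendorRecord(
    vendor="ExampleChatCo", data_categories=["support_tickets"],
    processing_location="US", trains_models_on_our_data=True,
    training_use_disclosed_to_customers=False,
)))
```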
Review 3: Your Privacy Policy's AI Disclosures
Your privacy policy should now explicitly address:
- Whether any AI systems are used to process personal information, and for what purposes
- Whether personal information is used to train AI models (if yes, the basis for this)
- Whether automated decision-making is used for decisions that significantly affect individuals (the December 2026 obligation, but best practice to address now)
- Whether personal information is transferred to AI providers, and where those providers process it
A privacy policy that still reads like it was written for a pre-AI business — disclosing collection via website forms and use for service delivery — is likely not fit for purpose given your actual AI data practices.
The Clearview Lesson Applied to Australian Mid-Market Businesses
Clearview's mistake was not technical. Its AI worked as designed. The mistake was treating publicly available data as unprotected data, and treating scale as a defence ("everyone's data is public, we just have a lot of it").
Neither of those assumptions holds under Australian law:
- Publicly available information about individuals remains personal information with Privacy Act protections
- Scale is not a defence — it is an aggravating factor
The equivalent mistakes in Australian mid-market businesses:
- "Our AI tool just analyses public-facing customer behaviour — it's not really personal data." The inferred attributes and behaviour profiles created are personal information.
- "We have everyone's data in our CRM already, so using it for AI training is fine." Collection consent for service provision does not cover AI model training.
- "The data is anonymised before the AI sees it." Anonymisation must be genuine and irreversible — pseudonymisation or data aggregation that can be de-anonymised is not sufficient.
What Akira Data Does Here
Every AI system Akira Data builds for Australian businesses is designed around the Privacy Act framework — not as a compliance checkbox, but as an architectural principle.
The Privacy-Safe AI Implementation engagement (AUD $20,000, 4–6 weeks) includes:
- A Privacy Impact Assessment specific to your AI use case
- Review of your third-party AI vendor data practices
- Audit trail and explanation infrastructure for automated decisions
- Data residency controls (Australian jurisdiction by default)
- Privacy policy update guidance covering AI data practices
- OAIC-ready explanation request process design
The AI Readiness Sprint (AUD $7,500, 2 weeks) includes a privacy compliance gap analysis as a core deliverable — identifying specifically whether your current or planned AI systems have exposure similar to the Clearview precedent.
The Clearview ruling is not the end of OAIC AI enforcement. It is the beginning.
[Get a privacy compliance assessment →](/contact?type=privacy-compliance)
*This article was published 24 March 2026. It references the OAIC determination that Clearview AI breached the Australian Privacy Act 1988 (published March 2026), the Privacy and Other Legislation Amendment Act 2024, and the OAIC's January 2026 compliance sweep. This article is general information only and does not constitute legal advice. Consult your legal advisers for guidance specific to your organisation.*