The Price of Intelligence Just Collapsed. Every Australian Business Now Needs an Agentic Constitution.
CIO.com published a stark warning this week: as AI agents replace human workers across Australia, "smart IT teams are encoding rules as code" — an agentic constitution — because AI agents cannot read dusty policy PDFs. Without this, autonomy scales but control doesn't. Here is the practical playbook for Australian businesses.

AI PM at SOLIDWORKS. Founder, Akira Data.
*Published 20 March 2026.*
Two things happened this week that, taken together, define the strategic challenge for every Australian business leader in 2026.
The first: Thomson Reuters published a piece titled *"Beyond the Hype: 2026 is the year AI has to prove itself."* Their thesis: the era of AI being funded on promise is over. The expectation is now delivery — measurable business outcomes, not pilots.
The second: CIO.com published a warning about a different kind of delivery failure — one that happens *after* AI is deployed. The article: *"Why your 2026 IT strategy needs an agentic constitution."* The core insight: "AI agents can't read dusty SOP PDFs, so smart IT teams are encoding rules as code to let autonomy scale without losing control."
Read together, these two pieces describe a pincer movement closing on Australian businesses. You are being pushed to deploy AI faster (prove ROI or lose the budget) and simultaneously being warned that faster deployment without governance creates a new class of risk (autonomy without control).
The businesses that navigate this well will do one thing differently from everyone else: they will build their governance framework *before* they deploy at scale, not after. In practice, that means building an agentic constitution.
What the "Price of Intelligence Collapse" Actually Means
For the past three years, the dominant narrative around AI was that it was expensive — enterprise contracts, specialised talent, proprietary infrastructure. That narrative is now definitively false.
The cost of AI inference has dropped by approximately 95% since GPT-4's release in early 2023. Open-weight models like Meta's Llama 3, Microsoft's Phi-4, and Mistral's latest generation can run on commodity infrastructure at negligible cost. Frontier model APIs — Claude, GPT-4o, Gemini — have dropped prices repeatedly as competition intensifies.
What this means in practice for Australian mid-market businesses: the economics that previously made AI deployment rational only for enterprises with eight-figure technology budgets now make it rational at the $20M revenue level. A document processing agent that would have cost $500,000 to build in 2022 costs $25,000–$50,000 in 2026.
ABC News Australia reported this week that agentic AI "is replacing human workers" — citing WiseTech Global's 2,000 cuts, Atlassian's 1,600 cuts, and Telstra's 442 cuts in a single month. The journalist's framing: "as the price of intelligence collapses, agentic AI is replacing human workers."
That framing is accurate. And it applies to your business.
The workflows that WiseTech, Atlassian, and Telstra automated are not unique to technology companies. Document processing, request triage, data extraction, structured decision support — these workflows exist in financial services, healthcare, professional services, mining, and retail. The technology that made them automatable for large technology companies is now available at the price point that makes it automatable for Australian mid-market businesses.
The question for Australian business leaders is no longer whether to deploy agentic AI. The economic case is settled. The question is how to deploy it without creating the governance crisis that follows unchecked autonomy.
The Agentic Constitution: What It Is and Why You Need One
The CIO.com piece introduced a term — "agentic constitution" — that describes a practical solution to a specific problem.
The problem: AI agents make decisions. Decisions require constraints. Constraints are traditionally communicated through policy documents, standard operating procedures, and training manuals. AI agents cannot read policy documents. They cannot absorb SOPs. They do not benefit from training sessions.
If you want an AI agent to operate within boundaries — to never send an external communication without human approval, to never access production databases directly, to escalate when a decision involves more than AUD $10,000 — you cannot rely on the agent reading a policy document. You must encode those rules in a form the agent can actually use.
That encoding is the agentic constitution.
The term "constitution" is deliberate. A constitution does not describe every action an organisation takes. It sets the foundational rules within which all actions must occur. An agentic constitution does the same for AI systems: it encodes the boundaries, permissions, and escalation rules that govern every agent's behaviour, in a form that is machine-readable, version-controlled, and auditable.
For Australian businesses, the agentic constitution is also a compliance artefact. The December 2026 Privacy Act automated decision-making transparency obligations require that AI agents operating in regulated contexts have documented governance controls. An agentic constitution is the most efficient way to create and maintain those controls.
The Five Components of an Australian Agentic Constitution
An agentic constitution for an Australian mid-market business needs five components. Each component addresses a specific risk.
Component 1: Agent Identity and Scope Declaration
Every AI agent needs a formal identity declaration — a structured document that defines:
- Agent name and identifier — a unique name that can be referenced in logs, audit trails, and governance reviews
- Purpose statement — a plain-language description of what the agent is designed to do (and, explicitly, what it is not designed to do)
- Permitted inputs — what data the agent is allowed to receive and process
- Permitted outputs — what actions the agent is allowed to take, what systems it can write to, what communications it can initiate
- Owner — the named business unit and individual responsible for the agent's behaviour
- Review date — when the scope declaration will next be assessed
This is the agentic equivalent of a job description. No employee in an Australian business operates without a job description. No AI agent should either.
Why it matters for Australian compliance: Under the December 2026 Privacy Act obligations, you must be able to demonstrate that every AI agent making decisions affecting individuals has defined scope and accountability. The scope declaration is that demonstration.
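A scope declaration only works as a governance artefact if it is machine-readable. One minimal way to sketch it, assuming Python as the implementation language (all field names and the example agent are illustrative, not a standard schema):

```python
from dataclasses import dataclass

# A minimal, machine-readable scope declaration.
# Field names mirror the bullet list above; none of this is a formal standard.
@dataclass(frozen=True)
class ScopeDeclaration:
    agent_id: str            # unique identifier used in logs and audit trails
    purpose: str             # plain-language statement of what the agent does
    not_in_scope: str        # explicitly, what the agent is NOT designed to do
    permitted_inputs: tuple  # data sources the agent may read
    permitted_outputs: tuple # systems the agent may write to / actions it may take
    owner: str               # named business unit and individual accountable
    review_date: str         # ISO date of the next scope review

# A hypothetical invoice-triage agent, purely for illustration.
invoice_agent = ScopeDeclaration(
    agent_id="invoice-triage-v1",
    purpose="Classify inbound supplier invoices and route them for approval.",
    not_in_scope="Approving or paying invoices.",
    permitted_inputs=("invoice_inbox", "supplier_master"),
    permitted_outputs=("approval_queue", "audit_log"),
    owner="Finance Operations / A. Nguyen",
    review_date="2026-09-01",
)
```

Because the declaration is data, not prose, it can be version-controlled alongside the agent's code and checked at runtime (for example, rejecting any tool call that touches a source outside `permitted_inputs`).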
Component 2: Permission Hierarchy
The permission hierarchy encodes what each agent can do without approval, what requires human review, and what is prohibited entirely.
A simple permission hierarchy for an Australian business:
Autonomous permissions (no human approval required):
- Read operations on approved data sources
- Generate internal reports and summaries
- Flag items for human review
- Send pre-approved notification templates
Supervised permissions (human review before execution):
- External communications to customers or suppliers (unless using a pre-approved template for defined routine messages)
- Any write operation on a production database
- Any financial transaction above a defined threshold (e.g., AUD $500)
- Any decision affecting an individual's access to services
Prohibited actions (never, regardless of instructions):
- Accessing data stores outside the defined scope
- Generating communications that appear to be from a named human without disclosure
- Making irreversible changes to customer records without a human review step
- Transferring personal data outside Australian jurisdiction
The permission hierarchy must be machine-readable. In practice, this means encoding it as configuration for the agent's runtime environment — defining which tools the agent has access to, which API credentials it holds, and which actions require a human-in-the-loop checkpoint.
Why it matters for Australian compliance: APRA's risk management guidance (CPG 220) expects regulated entities to maintain appropriate controls over systems — including AI systems — that influence significant decisions. A permission hierarchy is the technical implementation of those controls.
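A permission hierarchy of this kind can be sketched as data plus a single gate function that the agent runtime consults before executing any action. The action names and the AUD $500 threshold below mirror the examples above; this is an illustrative Python sketch, not a production policy engine:

```python
# Permission tiers encoded as data. Action names are illustrative.
AUTONOMOUS = {"read_approved_source", "generate_internal_report",
              "flag_for_review", "send_approved_template"}
SUPERVISED = {"external_communication", "production_db_write",
              "service_access_decision"}
PROHIBITED = {"out_of_scope_data_access", "impersonate_human",
              "irreversible_record_change", "offshore_data_transfer"}

FINANCIAL_THRESHOLD_AUD = 500  # transactions above this need human review

def gate(action: str, amount_aud: float = 0.0) -> str:
    """Return 'allow', 'human_review', or 'deny' for a proposed action."""
    if action in PROHIBITED:
        return "deny"
    if action in SUPERVISED or amount_aud > FINANCIAL_THRESHOLD_AUD:
        return "human_review"
    if action in AUTONOMOUS:
        return "allow"
    return "human_review"  # default-deny posture: unknown actions escalate
```

Note the final line: anything the constitution does not explicitly permit goes to a human. That default-deny posture is what distinguishes a constitution from a block-list.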
Component 3: Escalation Protocol
The escalation protocol defines what the agent does when it encounters a situation outside its defined operating parameters. Without a clear escalation protocol, agents faced with unexpected situations either fail silently or take the best action available — which may be the wrong action.
An escalation protocol answers:
- When should the agent escalate? Specific conditions: confidence below a defined threshold, input outside defined parameters, action that would exceed a permission level, any decision affecting a named sensitive category (health information, financial position, employment)
- How should the agent escalate? What signal does it send? To whom? Through what channel?
- What does the agent do while waiting for escalation resolution? Queue the item? Return it to the originator? Apply a safe default?
- What is the maximum wait time before the agent times out and escalates further?
For Australian businesses with Privacy Act obligations, the escalation protocol needs to be especially clear for any case where an automated decision might significantly affect an individual. These are precisely the cases where human judgement should be preserved, and where the December 2026 explanation obligations are most likely to be triggered.
Component 4: Audit and Explanation Infrastructure
The audit and explanation infrastructure is the technical layer that makes the agentic constitution auditable and demonstrable.
Every agent action must generate:
- A run record — timestamp, agent ID, inputs received, outputs produced, permissions exercised
- A trace — the step-by-step sequence of actions taken within the run
- A decision rationale — for any decision affecting an individual, a human-readable account of the key factors
These records serve three purposes simultaneously:
- Operational debugging — when something goes wrong, you can trace exactly what happened
- ROI demonstration — when the CFO asks what the agent did this week, you have a quantified answer
- Privacy Act compliance — when an individual requests an explanation of a decision, you can retrieve and translate it within the required timeframe
The infrastructure should be built into the agent from the first deployment — not added as a retrofit when a compliance query arrives. Retrofitting is substantially more expensive and often incomplete.
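In practice the run record, trace, and rationale can be emitted as a single structured log line per run. A minimal Python sketch, assuming JSON-lines output (the field names are illustrative; in production this would write to durable, append-only storage rather than return a string):

```python
import json
import time
import uuid
from typing import Optional

def record_run(agent_id: str, inputs: dict, outputs: dict,
               permissions_used: list, trace: list,
               rationale: Optional[str] = None) -> str:
    """Emit one run record as a JSON line, covering all three
    record types described above: run record, trace, rationale."""
    record = {
        "run_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "inputs": inputs,
        "outputs": outputs,
        "permissions_used": permissions_used,
        "trace": trace,                   # step-by-step actions within the run
        "decision_rationale": rationale,  # human-readable; required for any
                                          # decision affecting an individual
    }
    return json.dumps(record)
```

Because every field is structured, the same record serves the debugger (trace), the CFO (counts and timestamps), and a Privacy Act explanation request (rationale) without separate logging systems.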
Component 5: Constitutional Update Process
The agentic constitution is not a one-time document. As agents are deployed, expanded in scope, or as regulatory requirements change, the constitution must evolve.
The update process defines:
- Who can propose a constitutional amendment — a change to an agent's permissions or scope
- What review is required — Privacy Impact Assessment, legal review, security assessment, depending on the nature of the change
- How changes are versioned — the current constitution and all previous versions are version-controlled, so you can demonstrate what rules applied at any point in time
- Who approves — the named owner and, for Tier 1 agents, a privacy officer or legal reviewer
The version control requirement is particularly important for Australian Privacy Act compliance. If an agent's behaviour changes between when a decision was made and when an explanation is requested, you need to be able to demonstrate what version of the constitution was in effect at decision time.
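Answering "what version was in effect at decision time?" reduces to a sorted lookup over the version history. A Python sketch, with an entirely hypothetical version history for illustration:

```python
from bisect import bisect_right

# Illustrative version history: (effective_from ISO date, version label).
# Kept sorted by date so lookups are a simple binary search.
HISTORY = [
    ("2026-01-10", "v1.0"),
    ("2026-04-02", "v1.1"),  # example amendment: lowered financial threshold
    ("2026-07-15", "v2.0"),  # example amendment: new escalation category
]

def version_in_effect(decision_date: str) -> str:
    """Return the constitution version that governed a decision
    made on the given ISO date."""
    dates = [d for d, _ in HISTORY]
    i = bisect_right(dates, decision_date)
    if i == 0:
        raise ValueError("decision predates the first constitution version")
    return HISTORY[i - 1][1]
```

With the history in version control (for example, git tags on the constitution repository), this lookup is the demonstrable link between a past decision and the rules that applied to it.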
Building Your Agentic Constitution in 90 Days
For Australian businesses that have not yet built an agentic constitution but need to comply with December 2026 obligations, a 90-day build sequence works as follows.
Days 1–14: Constitutional inventory
Identify every AI agent operating in your business. Include:
- Formally approved IT deployments
- SaaS tools with embedded agentic features (AI-powered email automation, AI customer service routing, AI document processing)
- Departmental AI tools adopted without formal IT approval
For each agent, document: what does it do? What data does it access? What actions can it take? Who owns it?
This is the inventory phase. For most Australian mid-market businesses, the number of agents discovered will be two to three times the number in the formal IT asset register. The undiscovered agents — the shadow AI — often carry the highest compliance risk.
Days 15–30: Tier classification and prioritisation
Apply a compliance classification to each agent:
Tier 1 (highest priority): Agents that make or substantially assist in decisions significantly affecting individuals — credit decisions, hiring screening, insurance triage, healthcare routing, access control. These agents need the full agentic constitution built before December 2026.
Tier 2: Agents that process personal data but do not make decisions significantly affecting individuals — document processing, communication routing, internal summarisation. These agents need a scope declaration and audit infrastructure but not the full constitutional framework.
Tier 3: Agents operating entirely on non-personal data — internal analytics, code review, research summarisation. These agents need a scope declaration for operational governance but have lower compliance urgency.
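The tier classification reduces to two questions per agent, which makes it easy to encode and apply consistently across the inventory. A minimal Python sketch of the decision rule described above:

```python
def classify_tier(processes_personal_data: bool,
                  decides_for_individuals: bool) -> int:
    """Map the two classification questions to a compliance tier.
    Tier 1 = highest priority (full constitution before December 2026)."""
    if decides_for_individuals:
        return 1  # full agentic constitution required
    if processes_personal_data:
        return 2  # scope declaration + audit infrastructure
    return 3      # scope declaration only; lower compliance urgency
```

Running every inventoried agent through the same rule removes the ambiguity that creeps in when classification is done by discussion, and the function itself documents the criteria.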
Days 31–60: Build the Tier 1 constitutions
For each Tier 1 agent:
- Draft the scope declaration (purpose, permitted inputs, permitted outputs, owner)
- Encode the permission hierarchy in the agent's runtime configuration
- Define and test the escalation protocol
- Build audit and explanation infrastructure (run logs, trace IDs, decision rationale)
- Update the privacy policy to disclose the agent's existence and decision-making role
This is the engineering-heavy phase. For complex production agents without existing observability, the audit and explanation infrastructure typically takes 2–3 weeks of engineering effort.
Days 61–90: Tier 2 constitutions and constitutional process
Complete the Tier 2 scope declarations. Establish the constitutional update process — who proposes changes, who reviews, who approves, how versions are managed.
By Day 90, your organisation has:
- A complete inventory of all agents
- Full constitutional coverage for Tier 1 agents (compliance-ready for December 2026)
- Scope declarations for Tier 2 agents
- A repeatable process for updating constitutions as agents evolve
Why 2026 Is the Year You Cannot Skip This
Thomson Reuters titled their piece "Beyond the Hype: 2026 is the year AI has to prove itself." Their argument applies equally to governance: 2026 is the year AI governance has to prove itself.
The Australian regulatory environment has moved from "we are watching AI" to active enforcement. The OAIC launched its first proactive compliance sweep in January 2026. The December 2026 Privacy Act deadline is now less than nine months away. The OAIC's published guidance makes clear that having "an AI strategy document" is not compliance — having technical controls, audit trails, and explainability infrastructure is.
The cost of building an agentic constitution before you need it is the engineering effort to encode rules you already have informally. Your business already knows that agents should not send external communications without approval, that they should not access databases beyond their defined scope, and that decisions affecting individual customers require explanation capability.
The agentic constitution is the translation of those informal understandings into machine-readable, version-controlled, auditable form.
The cost of not building it — an OAIC investigation, a penalty proceeding, a compliance retrofit under deadline pressure — is substantially higher.
The Competitive Dimension
There is a competitive argument for building the agentic constitution now, beyond the compliance argument.
Organisations that have encoded their governance rules as code can deploy AI agents in regulated contexts that organisations without this infrastructure cannot. A financial services business with a formal agentic constitution for its credit decision AI can deploy that agent into APRA-regulated workflows. A healthcare provider with constitutional controls for its triage agent can deploy into clinical settings with clear accountability lines.
The agentic constitution is not just a compliance tool. It is the infrastructure that lets you deploy AI in the highest-value, highest-trust contexts — the exact contexts where your competitors who have not built this cannot yet operate.
The price of intelligence has collapsed. Every Australian business will deploy AI. The ones that deploy it with constitutional controls will be able to deploy it everywhere. The ones that deploy it without controls will be able to deploy it only where the stakes are low enough to tolerate the risk.
What Akira Data Builds
AI Readiness Sprint (AUD $7,500 · 2 weeks): Identifies your highest-ROI agent use case, assesses your compliance posture, and delivers a production build plan — including a draft agentic constitution scoped to your first deployment.
Agentic Workflow Build (from AUD $25,000 · 4–8 weeks): Full production deployment with constitutional controls built in: scope declaration, permission hierarchy, escalation protocol, audit and explanation infrastructure. Privacy Act compliant from day one.
Privacy-Safe AI Implementation (from AUD $20,000): For organisations with existing agents that lack constitutional controls. Full agentic constitution build for your Tier 1 agents, audit trail retrofit, and December 2026 compliance readiness.
AI Strategy Retainer (AUD $8,000/month): Ongoing constitutional governance — quarterly reviews, permission hierarchy updates as agents evolve, compliance monitoring ahead of regulatory changes.
*Akira Data builds agentic AI for Australian mid-market businesses — constitutionally governed, Privacy Act compliant, with the observability infrastructure required for December 2026. [Start with an AI Readiness Sprint](/contact) — AUD $7,500, two weeks, one workflow, one agentic constitution.*
*This article was published 20 March 2026. It references CIO.com "Why your 2026 IT strategy needs an agentic constitution" (January 2026), Thomson Reuters Business Insight Australia "Beyond the Hype: 2026 is the year AI has to prove itself" (March 2026), ABC News Australia "As the price of intelligence collapses, agentic AI is replacing human workers" (March 2026), and the Privacy and Other Legislation Amendment Act 2024.*