TL;DR. AI hasn't changed your obligations to ASIC, APRA or AUSTRAC, or under the Corporations Act, the Credit Act or the Privacy Act. It changed the artefacts compliance has to defend. The answer is not a new "AI policy." It is a Compliance Translation Layer: a three-column table mapping every AI output (artefact) to the existing rule it sits inside (obligation) and what a reviewer must be able to show (evidence). Build it once. Apply it to every artefact. AI deployment accelerates and audits get faster at the same time.
AI did not change your compliance. It changed your artefacts.
AI did not change your compliance obligations. It changed the artefacts you have to defend. The work in front of every Australian financial services firm is translation, not rewriting.
Compliance teams across advice, insurance, mortgage, super and banking are looking at AI-generated meeting notes, AI-drafted client letters and AI-assembled onboarding packs, and reaching for the wrong tool. They are reaching for a new AI policy. Most of what they need is already in the existing compliance manual.
The translation has not been done.
That is the pattern.
What we see when compliance reaches for the wrong tool
Across the firms we work with, the same scene plays out. An AFSL holder, an ACL holder, a super trustee or an APRA-regulated entity has bought or built an AI tool. Compliance is asked, "Is this allowed?" Eight weeks later there is a draft AI policy that uses words like "appropriate use" and "human oversight" without naming a section of the Corporations Act, the Credit Act, the AML/CTF Act, the Privacy Act or an APRA prudential standard those words satisfy.
Then the firm wonders why nobody is using the tool.
Norton Rose Fulbright's February 2026 AI compliance primer made the position plain. ASIC, APRA, OAIC and AUSTRAC are not waiting for an AI-specific Act. They are applying the obligations that already exist. EY's 2025 Wealth and Asset Management survey found 95% of firms had scaled GenAI to multiple use cases, and only a quarter reported substantial business impact. The gap is rarely a model problem. It is usually a translation problem.
Treat this as "AI compliance" and you write a generic policy nobody can apply to a real client situation. Treat it as "AI risk" and you buy a tool that scores AI outputs, only to find scores do not satisfy ASIC, APRA or AUSTRAC.
The obligations have not moved. The artefacts have. The line between artefact and obligation has not been drawn.
Both look like AI is failing. AI is not failing. The translation is missing.
The Compliance Translation Layer
A better mental model: every AI artefact in a financial services firm sits inside a Compliance Translation Layer with three columns:
- Artefact. What the AI produced. A meeting note, a draft client letter, an onboarding pack, a claims summary, a risk profile draft.
- Obligation. The existing rule that already applied. A Corporations Act section, a Credit Act provision, an AUSTRAC requirement, an Australian Privacy Principle, an APRA prudential standard.
- Evidence. What a reviewer can show on demand: source data, source-to-final chain, human approval, retention.
Build the layer once. Apply it to every artefact. The work compounds across business lines.
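The three columns are concrete enough to hold as data rather than prose. A minimal sketch of one row and the lookup it enables (the field names, the example row and the `lookup` helper are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class TranslationRow:
    """One row of the Compliance Translation Layer."""
    artefact: str        # what the AI produced
    obligation: str      # the existing rule it sits inside
    evidence: list[str]  # what a reviewer must be able to show

# Illustrative first row, mirroring the meeting-note example below.
rows = [
    TranslationRow(
        artefact="AI meeting note",
        obligation="Corporations Act s912A record-keeping (AFSL)",
        evidence=["source transcript", "AI draft", "user edits",
                  "approved final", "approval timestamp", "7-year retention"],
    ),
]

def lookup(rows: list[TranslationRow], name: str) -> list[TranslationRow]:
    """Answer the only policy question left: which row does this artefact sit in?"""
    return [r for r in rows if name.lower() in r.artefact.lower()]

print(lookup(rows, "meeting note")[0].obligation)
# prints: Corporations Act s912A record-keeping (AFSL)
```

A spreadsheet does the same job; the point is that the layer is a register you can query, not a policy you re-read.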
This is the free consulting bit. Three worked examples follow.
Worked example: the AI meeting note
Applies to anyone running client meetings. Advice, private banking, mortgage brokers, insurance brokers, business banking, complex super queries.
Artefact: a structured summary of a client meeting, usually pulled from an AI note-taker.
Obligation: record-keeping duties under s912A of the Corporations Act for AFSL holders, the equivalent under the Credit Act for ACL holders, and Privacy Act / APP obligations triggered by the recording. AUSTRAC obligations apply where the meeting touches identification, due diligence or a transaction.
Evidence: original audio or transcript source, AI draft, user edits, final approved version, timestamp showing approval before any client action, retention aligned to the standard record rule (typically seven years for advice and credit, longer for AML).
No new obligation. New evidence trail. Sample five meeting notes a quarter; confirm the source-to-final chain is intact.
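The quarterly sample check can be mechanical. A minimal sketch, assuming each note is stored as a record linking its source, AI draft, edits and approved final (the field names are illustrative):

```python
from datetime import datetime

# The source-to-final chain every sampled note must carry.
REQUIRED_LINKS = ["source", "ai_draft", "user_edits", "final_approved"]

def chain_intact(record: dict) -> bool:
    """True if the source-to-final chain is complete and approval
    was recorded before any client action."""
    if any(record.get(link) is None for link in REQUIRED_LINKS):
        return False
    approved = record.get("approved_at")
    acted = record.get("client_action_at")
    if approved is None:
        return False
    # No client action yet is fine; action before approval is not.
    return acted is None or approved <= acted

note = {
    "source": "meeting-2026-03-04.m4a",
    "ai_draft": "draft-v1.md",
    "user_edits": "draft-v2.md",
    "final_approved": "final.md",
    "approved_at": datetime(2026, 3, 4, 15, 30),
    "client_action_at": datetime(2026, 3, 5, 9, 0),
}
print(chain_intact(note))  # prints: True
```

Five records a quarter through a check like this is the whole sampling regime.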
Worked example: the AI-drafted client document
Applies wherever AI drafts something a client receives or relies on. Statements of Advice, Records of Advice, insurance underwriting letters, claims communications, mortgage credit guides, super member communications, product disclosure summaries.
Artefact: a draft of a client-facing document or section.
Obligation: whatever already governed that document. For personal advice, s961B (best interests), s961G (appropriate advice), s946A (SOA content). For credit, Chapter 3 of the Credit Act. For product issuers and distributors, Design and Distribution Obligations under Part 7.8A of the Corporations Act (s994B, s994C). For complaints-adjacent communications, RG 271. The Privacy Act over all of it.
Evidence: the source data the AI used, a qualified human approving each material claim, that human confirming the document meets the relevant obligation for the specific client or member, internal attribution showing which sections were AI-drafted (compliance file only, not the client copy).
The content rules did not change because a draft was machine-assisted. The audit trail did. A reviewer should get an answer to "show me how this section was produced and approved" in two clicks. If not, the firm has shipped an AI tool ahead of its translation layer.
Worked example: the AI-assembled onboarding pack
Everywhere. KYC packs in banking, member onboarding in super, insurance application bundles, mortgage application packs, advice fact-finds, business banking credit packs.
Artefact: a summary or multi-page pack an agent assembled from the CRM, the core platform, document storage and external data sources.
Obligation: the AML/CTF Act and AUSTRAC Rules (Part 1A customer due diligence, s41 suspicious matter reporting), the Privacy Act and APPs, APRA's CPS 230 once the agent sits inside a critical operation for an APRA-regulated entity (in force since 1 July 2025), and the licensee or credit licensee's reasonable inquiry obligations.
Evidence: the data sources the agent read from, the retention period for the pack, a record showing the human used the pack as input to a decision (not as the decision), a boundary in the prompt or workflow preventing the agent from pulling fields outside scope.
Multiple obligations. All already present. All answerable. None require an "AI policy."
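The boundary in the last evidence item is stronger enforced in the workflow than trusted to the prompt. A minimal sketch of an allowlist gate between the agent and the data sources (`ALLOWED_FIELDS`, the pack name and the field names are all illustrative assumptions):

```python
# Per-pack scope agreed with compliance; anything outside it is refused.
ALLOWED_FIELDS = {
    "kyc_onboarding": {
        "full_name", "date_of_birth", "residential_address",
        "id_document_type", "id_document_number",
    },
}

def fetch_fields(pack_type: str, requested: set[str]) -> set[str]:
    """Enforce the workflow boundary: the agent only receives fields
    inside the agreed scope for this pack type."""
    allowed = ALLOWED_FIELDS.get(pack_type, set())
    out_of_scope = requested - allowed
    if out_of_scope:
        # Refusal is itself evidence: log it, don't silently drop it.
        raise PermissionError(
            f"out of scope for {pack_type}: {sorted(out_of_scope)}")
    return requested

print(sorted(fetch_fields("kyc_onboarding", {"full_name", "date_of_birth"})))
# prints: ['date_of_birth', 'full_name']
```

The refusal log doubles as the Evidence column entry: it shows the boundary existed and was exercised.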
The pattern repeats. Build it once.
The same three columns work for AI-generated client emails, compliance triage outputs, risk profile summaries, broker file reviews, claim assessment notes. The artefact changes. The obligation does not. After the first ten rows, almost every new AI feature your vendor ships will fit a row you already wrote.
A firm can produce a defensible first version in two days. A compliance manager and an operations or business-line lead in a room.
- Hour 1-2. List every AI artefact already running. Vendors will tell you what their tool generates.
- Hour 3-4. Pair each artefact with the obligation it sits inside. Use the existing compliance manual.
- Hour 5-8. Define the evidence trail. Source, human approval point, retention, sample size.
- Day 2. Walk it with each business-line owner. Adjust. Sign off. Distribute.
What good translation buys you
Firms that run this translation early get two compounding advantages.
AI deployment speeds up. The policy question stops being "Is this allowed?" and becomes "Which row does this artefact sit in?" New tools get assessed in days, not quarters.
Audits get faster because the reviewer gets the same answer the firm gives itself. ASIC's RG 255 anchors the digital advice guidance; the same logic carries through to APRA's CPS 230 and AUSTRAC's enforcement posture. The translation layer is exactly the artefact regulators are asking for, even if not by name.
We have seen this before. When firms first put advice on a CRM, or claims on a workflow engine, the obligations did not change. The artefacts did. The firms that mapped them early ran clean audits for a decade. The ones that wrote a new "CRM policy" are still cleaning up records 12 years later.
The AI cycle is the same shape. Faster, broader, less forgiving.
The audit you have not been asked for yet
If your AI policy does not name a single section of the Corporations Act, the Credit Act, the AML/CTF Act, the Privacy Act or an APRA prudential standard, that policy is decoration. Replace it: make the translation layer the working version.
The work is unglamorous. It is the single highest-impact piece of compliance work an Australian financial services firm will do this year. Two days produces the first version. A monthly review keeps it current.
AI did not change your compliance obligations. It changed the artefacts you have to defend. The firms doing the translation in 2026 will run AI through their business the way good firms have always run technology, regardless of which licence they hold. The ones still writing standalone AI policies in 2027 will be running remediation programs instead.
The obligations have not moved. Move the translation.
Frequently asked questions
Do we still need an AI policy at all?
Yes, but a short one. Two pages on principles (lawful, accountable, human-approved, fit for purpose) sitting over a translation layer that does the actual decisioning. The policy sets culture. The layer settles cases.
Who owns the translation layer in our firm?
The compliance manager owns the document. The head of advice, head of underwriting, head of credit, head of operations or head of member services owns the rows for their function. The Chief Risk Officer signs off the overall document where one exists.
How does this fit with APRA's CPS 230?
CPS 230 obliges APRA-regulated entities to identify critical operations and the controls that govern them. Any AI artefact inside a critical operation is in scope. The translation layer becomes a CPS 230 register of automated outputs and the controls wrapped around them, which is exactly the artefact APRA inspections look for.
Does this cover agentic AI as well as drafting AI?
Yes. An agent that takes action is a sequence of artefacts and decisions. Each step gets its own row. The Evidence column carries more weight because it must include the boundary the agent operated under and the audit trail of every action taken, not just the document produced.
How often should the translation layer be reviewed?
Monthly while AI tooling is changing fast. Quarterly once the firm is in steady state. Trigger an immediate review whenever a vendor ships a new feature, a new business line goes live with AI, or a regulator publishes updated guidance.