Australian privacy law’s head(AI)che

In 2025, privacy isn’t just a compliance topic; it’s a daily dilemma. Because artificial intelligence (AI) is no longer something on the horizon. It’s here, embedded in the way we live, work and make decisions, and it’s making privacy regulators, and the law itself, sweat.

Doctors are using AI to draft patient notes. Teachers are using it to personalise lesson plans. HR teams are using it to screen CVs. Start-ups are building new businesses off the back of it, while corporates are racing to catch up. Even lawyers are using it to (in)famously invent case citations, sorry learned colleagues, but that’s poor form.    

And if you think your organisation is not using AI? Think again. It’s probably already been rolled into your cloud tools by vendors, buried in your productivity platforms, or adopted unofficially by teams looking for a shortcut. This is the age of shadow AI, and ignoring it won’t make it go away.

Which is why this year’s Privacy Awareness Week theme, “Privacy: it’s everyone’s business”, couldn’t be timelier. Because those who fail to get across the privacy implications of AI won’t just face compliance risk, they’ll be left behind. The best organisations are using privacy governance as a competitive edge. The rest? They’re sleepwalking into problems they don’t yet see…    

1. AI is pushing privacy law to its limits    

AI is not a single technology, it’s a complex ecosystem. From data collection and preparation, to model training, deployment, inference, and continuous refinement, every stage of the AI lifecycle introduces new privacy risks. And increasingly, those risks are testing the boundaries of Australia’s privacy laws.    

The Privacy Act 1988 (Cth) was built for a world where data was largely static, processing was manual, and decisions were made by people, not algorithms. While its principles are deliberately technology-agnostic and flexible, they were not designed with large-scale scraping, opaque model training, or automated inference in mind. Key concepts are stretched when applied to modern AI systems.    

Even the Privacy Act Review Report, released in early 2023 just months after ChatGPT’s launch, was prepared before we fully understood the scale and speed of generative AI’s adoption. It contains important proposals, like transparency around significant automated decisions, but largely stops short of grappling with the full complexity of AI development and deployment.    

Today’s AI systems often operate in ways that make compliance deeply challenging. Data is frequently repurposed, sometimes scraped from public sources, sometimes drawn from customer records collected for unrelated purposes. Individuals often aren’t aware that their information is being used at all. Automated decision-making (ADM) can bypass the human judgement that the Privacy Act implicitly assumes will act as a safeguard.    

Responsibility, too, is fractured. A single AI use case might involve a model developer, cloud host, software integrator, and local deployer—none of whom have full oversight of how personal information is collected, used or protected. Models are trained on vast and varied datasets, often sourced from third parties, legacy systems, or data brokers. By the time an AI system is in production, the origins and legal basis of the data it uses may be entirely opaque.    

And yet, these systems are already shaping high-stakes decisions: in hiring, healthcare, education, policing, insurance and more. This isn’t just stretching the law, it’s stress-testing its very foundations: that data flows are traceable, that purposes are defined, and that individuals remain in control.    

So where does this leave the Australian Privacy Principles (APPs)? They still apply, but applying them to AI is anything but simple.    

2. Our privacy principles still apply—but not neatly    

There’s no “AI exemption” in the Privacy Act. The APPs still apply across the AI lifecycle, but trying to map them neatly to how AI works is like applying road rules to a jet engine.

To make sense of it, let’s break it down across the three critical stages in AI development and use: training, inputs, and outputs.    

Training: the hidden risks in building or fine-tuning the model

APP 3 – Collection for AI training or fine-tuning    

Many AI models are trained, or fine-tuned, on data that an organisation actively collects for the purpose of building or improving a system. This might include:    

  • Purchasing datasets from data brokers or third parties
  • Scraping publicly available content (e.g. websites, social media, forums)
  • Collecting information directly from individuals via forms, surveys, apps or customer interactions

Under APP 3, an organisation can only collect personal information if:    

  • The collection is reasonably necessary for one or more of its functions or activities, and
  • It is conducted by lawful and fair means

This is not a mere technicality. Organisations must actively assess:    

  • Is training or fine-tuning an AI model genuinely reasonably necessary for a function or activity of the organisation?
  • Is the method of collection lawful (e.g. not in breach of contract, scraping restrictions or consumer protections)?
  • Is it fair, or could it be perceived as misleading, covert or disproportionate?

Even where data is publicly accessible, large-scale scraping or the use of opaque data brokers may still breach the “fair means” requirement.    

APP 3.3 – Sensitive information requires consent    

Where the information being collected includes sensitive information, such as health information, racial or ethnic origin, political opinions, religious beliefs, sexual orientation or biometric data, express consent is required under APP 3.3, unless a limited exception applies.    

This presents a major challenge in AI training and fine-tuning contexts. Many datasets include sensitive information either:    

  • Explicitly (e.g. medical notes, facial images, voice recordings), or
  • Inferentially (e.g. content from which sensitive characteristics can be predicted)

APP 3.6 – Collecting directly from individuals

Wherever practicable, organisations must collect personal information directly from the individual rather than from a third party (APP 3.6). Collecting indirectly, such as through scraping, purchasing datasets, or using third-party enrichment tools, is only permitted if direct collection would be unreasonable or impracticable.    

This raises important questions for AI pipelines:    

  • Was it actually impracticable to get the data directly from the individual?
  • Could consent or notice have reasonably been sought?
  • Is there a valid justification for relying on intermediaries or passive data collection?

The less visibility or agency individuals have over the process, the harder it is to meet the expectations of APP 3.6.    

APP 5 – Notification at the point of collection    

Whether collected directly or indirectly, organisations must comply with APP 5 by taking reasonable steps to notify individuals. This includes:    

  • Explaining that their information will be used to train or fine-tune AI systems
  • Identifying any third parties who may receive the data (e.g. developers, cloud providers)
  • Disclosing whether personal information will be transferred overseas
  • Outlining rights to access, correct and complain about data handling

If indirect collection occurs, notification must still happen as soon as practicable, unless a specific exception applies.    

Privacy policy obligations (APP 1)    

AI training or fine-tuning using personal information should also be clearly and transparently disclosed in the organisation’s privacy policy. Vague language like “improving services” or “analytics” likely won’t cut it where large-scale data use, model development, or personalisation is occurring. Transparent communication builds trust, reduces regulatory risk, and demonstrates genuine privacy governance, all critical in the age of AI.    

APP 6 – Using or disclosing personal information to train AI

Another common way organisations obtain data for training or fine-tuning AI models is by turning to their existing internal datasets, such as customer service records, support tickets, emails, transaction histories or other operational data. These datasets may seem like a goldmine for AI development, but they come with risks under the Privacy Act.    

Unless the information was originally collected for the primary purpose of training AI and explicitly framed that way at the time of collection, any such use is a secondary use under APP 6, and must satisfy strict legal conditions.    

For personal information, APP 6 permits secondary use if the new purpose is related to the original purpose and within the individual’s reasonable expectations. In practice, this requires a careful assessment of:    

  • Whether the individual would reasonably expect their data to be used in this way; and
  • Whether the new purpose (AI training or fine-tuning) is sufficiently related to the context in which the data was originally collected.

For sensitive information, such as health data, biometric data, or information about race, religion or sexual orientation, the test is even stricter. The secondary use must be directly related to the original purpose, and in many cases will require express consent from the individual.

This distinction is crucial. A vague privacy policy or blanket catch-all clause won’t cut it. You must be able to demonstrate a clear and lawful basis for repurposing the data, especially where that repurposing involves building or enhancing AI capabilities.    

Treating your existing datasets as free training material may feel efficient, but unless the reuse complies with APP 6, it may open the door to serious privacy risk and regulatory exposure.

APP 11 – Using de-identified information

When training AI systems, many organisations rely on de-identified datasets in an effort to minimise privacy risk and sidestep direct APP obligations. But this approach also comes with a material legal risk.    

Under the Privacy Act, information is only truly “de-identified” if the risk of re-identifying an individual is so low that it is no longer reasonably identifiable in the circumstances. This is a high bar and it must be assessed at the time of handling, in context, and with consideration of foreseeable advances in technology.    

That’s where AI changes the game. The capability of large-scale AI systems to reconstruct, infer, or relink data points makes traditional de-identification techniques increasingly fragile. Techniques like masking, pseudonymisation, or suppression may no longer offer sufficient protection when AI models can reverse-engineer or triangulate identity from sparse data fragments.    

This risk isn’t just theoretical. The OAIC has explicitly warned that re-identification risk must account for what is “reasonably likely”, not just what is technically improbable. And the risk landscape has shifted: AI can now be used by threat actors to process breached or scraped datasets and generate highly detailed synthetic profiles, even where the original data was “de-identified”.    

The OAIC and CSIRO’s Data61 De-Identification Decision-Making Framework provides best practice guidance for managing these risks. It recommends:    

  • Viewing de-identification as a risk-based process, not a binary outcome;
  • Regularly reviewing the effectiveness of techniques used;
  • Considering motivated adversaries and available auxiliary datasets;
  • Ensuring that residual re-identification risk is low enough that the information is no longer “personal information” under the Act.

When training AI models, the key takeaway is this: de-identified data may still be personal information if re-identification is reasonably likely, and if so, all of the APPs still apply. Organisations cannot assume that de-identification is a one-off process or a blanket defence. In the AI era, it must be treated as an ongoing obligation, especially as models become more capable and threat actors more sophisticated.
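The fragility of naive de-identification is easy to demonstrate with a classic linkage sketch. The datasets below are entirely made up for illustration; the point is that quasi-identifiers (postcode, birth year, sex) left in a “de-identified” dataset can be re-linked to a named individual using an auxiliary public source:

```python
# Hypothetical, made-up data: a "de-identified" health dataset that still
# carries quasi-identifiers (postcode, birth year, sex).
deidentified = [
    {"postcode": "2000", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2010", "birth_year": 1992, "sex": "M", "diagnosis": "diabetes"},
]

# Auxiliary public dataset (think electoral roll, social media, breach dumps).
auxiliary = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1985, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def link(records, aux):
    """Re-link 'de-identified' records that match exactly one auxiliary entry."""
    reidentified = []
    for record in records:
        matches = [p for p in aux
                   if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:  # a unique match re-identifies the record
            reidentified.append((matches[0]["name"], record["diagnosis"]))
    return reidentified

print(link(deidentified, auxiliary))  # → [('Jane Citizen', 'asthma')]
```

Real-world linkage attacks are more sophisticated, but the mechanism is the same, which is why the framework above treats re-identification risk as contextual and ongoing rather than solved once at release.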

Inputs: the data you feed into the system    

APP 6 – Use and disclosure

The everyday use of AI tools, from chatbots and virtual assistants to productivity enhancers and automated analytics platforms, presents privacy challenges under APP 6. That’s because inputting personal information into these tools may constitute a use or a disclosure, depending on how the system handles the data.    

  • If the AI tool retains, logs, or repurposes the personal information (e.g. for quality improvement, analytics, or commercial development), this is likely to be a disclosure to the provider of the tool.
  • If the tool processes the information only on behalf of the organisation, with no independent access or reuse by the vendor, it may be considered a use by the organisation — but the purpose must still align with the original reason the information was collected.

Examples where use may cross the line:

  • Feeding customer complaints into an AI summarisation tool hosted by a third party, without knowing how the data is handled
  • Using an AI-powered assistant to analyse employee performance notes, where the tool stores or learns from the data
  • Deploying a generative AI platform to create marketing content that incorporates or is informed by identifiable personal data

Outputs: what the AI generates    

APP 3 – Collection (again)

It’s easy to assume that once an AI model has been trained, the hard privacy work is done. But the outputs of AI systems can also trigger collection obligations under APP 3, especially when those outputs include inferences about identifiable individuals.    

If an AI system generates a prediction, classification, or insight about a person, such as a risk score, behavioural profile, or sentiment analysis, that information may be personal information, even though it was algorithmically derived. Importantly, this applies whether or not the inference is correct. If the information is about an individual who is identified or reasonably identifiable, it is personal information, and the organisation using it is collecting it under the Privacy Act.

This means organisations must:    

  • Ensure that the collection is reasonably necessary for their functions or activities,
  • Collect the information by lawful and fair means, and
  • Consider notification and transparency obligations, particularly where individuals are unaware that such profiling is occurring.

These risks mirror many of the issues raised earlier in the training stage. Just like when sourcing data to build or fine-tune models, organisations using AI outputs must:    

  • Assess whether the inference relates to and identifies a person,
  • Understand whether the AI’s collection method was lawful and transparent,
  • Consider how the model’s underlying training data or logic affects the reliability and risk profile of the output, and
  • Ensure appropriate governance, communication, and control mechanisms are in place.

Other considerations    

While most of the focus falls on data collection, use, and security, other privacy obligations come into play when organisations adopt AI tools, especially commercial or cloud-based platforms.    

APP 8 – Cross-border disclosures    

Many popular generative AI tools route user inputs through servers located outside Australia. If personal information is transmitted offshore — for example, to an AI provider’s processing infrastructure — APP 8 obligations apply. This means organisations must:    

  • Ensure the recipient is subject to a law or binding scheme substantially similar to the Privacy Act, or
  • Take reasonable steps to ensure the overseas provider handles the data in accordance with the APPs, or
  • Obtain informed consent from individuals for the overseas disclosure.

Treating all AI as internal-only is risky, especially where providers reserve the right to log, analyse, or reuse input data to improve services.    

APP 10 – Accuracy    

As organisations increasingly integrate AI tools into their operations, ensuring the accuracy of personal information becomes paramount. Under APP 10, entities are obligated to take reasonable steps to ensure that the personal information they collect, use, or disclose is accurate, up-to-date, complete, and relevant.    

AI systems, particularly generative models, are known to produce outputs that may be inaccurate or misleading. These inaccuracies can have significant privacy implications, especially when decisions affecting individuals are based on such outputs.    

Entities must ensure data accuracy at two critical points:    

  • At collection: when collecting personal information, especially from third-party sources or through automated means, organisations should verify the data’s accuracy and relevance.
  • At use or disclosure: before using or disclosing personal information, particularly in AI-driven processes, entities should confirm that the data remains accurate and pertinent to the purpose.

Failure to maintain data accuracy can lead to adverse outcomes for individuals and potential non-compliance with privacy obligations.    

APP 11 – Security and destruction    

APP 11 requires organisations to take reasonable steps to protect personal information from misuse, interference, and loss, and from unauthorised access, modification or disclosure. But in the AI era, meeting this standard isn’t as simple as locking down a database or encrypting a file.    

AI systems challenge traditional notions of data security in profound ways. Once personal information is used to train or fine-tune a model, or even inputted through everyday GenAI tools, control over that data becomes far more complex and, in some cases, effectively irreversible.    

Security is no longer just about storage

Modern AI systems introduce new attack surfaces and risks, such as data poisoning, where attackers manipulate training inputs to skew a model’s behaviour, or inference attacks, where models inadvertently memorise and leak personal information in their outputs.    

If personal information is used during training, organisations must not only safeguard that data at rest, they must understand how it’s embedded, represented, and potentially surfaced later. Ensuring security under APP 11 now means stress-testing model outputs, not just system firewalls.    
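One small piece of what stress-testing outputs might look like in practice is a screening pass over model output before it leaves the organisation, flagging obvious PII patterns. This is a minimal sketch: the regex patterns, function names and sample text are illustrative assumptions, and real deployments would pair this with dedicated PII-detection and red-teaming tooling:

```python
import re

# Illustrative PII patterns (assumed for this sketch): emails and Australian
# mobile numbers. Production systems would use far broader detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "au_phone": re.compile(r"(?:\+61|0)4\d{2}\s?\d{3}\s?\d{3}"),
}

def screen_output(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in model output."""
    hits = []
    for name, pattern in PII_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits

sample = "Contact Jan at jan.citizen@example.com or 0412 345 678."
print(screen_output(sample))
# → [('email', 'jan.citizen@example.com'), ('au_phone', '0412 345 678')]
```

A check like this catches verbatim leakage only; it does nothing for paraphrased or inferred personal information, which is why output screening supplements, rather than replaces, controls over what goes into training in the first place.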

The deletion dilemma    

APP 11 of the Privacy Act doesn’t just require organisations to keep personal information safe—it also requires them to delete or de-identify it when it is no longer needed. This is a thorny issue. Once personal data is incorporated into an AI model, especially in foundation models or through fine-tuning, it may become impossible to retrieve or remove with certainty. Unlike structured databases, models don't offer “delete” buttons.    

This raises serious compliance questions, especially in light of proposed reforms that would introduce a right to request erasure of personal information and a right to object to certain types of processing. These rights mirror the GDPR but are fundamentally difficult to reconcile with AI architectures that don't track data lineage or retain input–output mappings.    

What happens if an individual objects to their data being used to train a model or wants it erased entirely? If the data has been pooled, transformed, or compressed into statistical weights, there may be no feasible way to unwind it.    

Responsibility doesn’t end at procurement    

Many organisations now use third-party AI tools embedded into platforms, cloud software or enterprise solutions. But under APP 11, they remain responsible for ensuring appropriate security and governance.    

This means going beyond contractual assurances and asking:    

  • How is data processed, stored and retained by the tool?
  • Is it used to further train the provider’s model?
  • What safeguards are in place to prevent memorisation or leakage?

Without clear answers, even well-intentioned deployments can expose organisations to material risk.    

In short, APP 11 is not just about protecting databases anymore, it’s about safeguarding how AI systems are trained, tested, and behave over time. As reform looms and rights like deletion and objection gain traction, the legal expectation of control over personal information is colliding with the technical realities of AI. Organisations must begin bridging that gap now, before they’re forced to defend the indefensible.    

APP 12 – Access and correction    

If an AI system generates personal information about an individual, whether it's a score, a prediction, or a classification, the individual may have a right under APP 12 to access that information or request a correction. This creates practical challenges:    

  • What happens when the information was generated by a model the organisation didn’t build?
  • What if the output changes depending on when or how it’s queried?
  • And how can individuals correct something they were never made aware of?

As AI systems become more embedded in decision-making, ensuring that individuals can exercise their rights, even in technically complex environments, will be critical to maintaining privacy compliance and public trust.    

Automated decision making    

AI isn’t just helping people work faster, it’s making decisions that affect people’s lives. Whether it’s approving loans, shortlisting job candidates, adjusting insurance premiums, or flagging health risks, ADM raises critical questions under the Privacy Act.    

Unlike the European GDPR, the current law gives individuals no general right to know when they’ve been subject to an automated decision, let alone to challenge it. While APPs 1 and 5 require transparency about information handling, these obligations weren’t designed for algorithmic decision-making, where logic is opaque, models evolve, and the decision path is difficult to explain.

The Privacy Act Review Report recognised this gap. Among the proposed reforms:    

  • Organisations would be required to update their privacy policies to explain when personal information is used in substantially automated decisions with legal or similarly significant effects.
  • Individuals would be entitled to request meaningful information about the logic of those decisions, and the impacts they may have.
  • The OAIC is expected to issue guidance on how to interpret and implement these obligations, particularly for high-impact sectors like financial services, health, and government.

Under the first tranche of privacy reforms implemented by the Privacy and Other Legislation Amendment Act 2024 (Cth), the first proposed reform listed above regarding privacy policy transparency will come into effect in December 2026. In short, where an organisation uses a computer program to either:    

  • make decisions in a fully automated manner (without human involvement), or
  • substantially and directly assist human decision-makers,

and personal information is used in a way that could reasonably be expected to significantly affect an individual’s rights or interests, the organisation’s privacy policy must include specific details about that use.    

These reforms partially align with international standards, such as the GDPR, by promoting transparency in ADM processes. While they do not grant individuals the right to opt out of ADM, they ensure individuals are informed about how their personal information is used in automated decisions.    

In the meantime, ADM still engages existing obligations:    

  • Under APP 1, organisations must have practices, procedures and systems in place to ensure compliance, including governance of privacy-related ADM risks.
  • Under APP 3, if personal information is collected solely to enable ADM, collection requirements apply.
  • Under APP 6, if personal information collected for a different purpose is now being used to make automated decisions, that could be a secondary use requiring further justification or consent.

3. The tech is evolving—what guidance do we have?    

While Australian privacy law wasn’t designed with AI in mind, the OAIC has issued increasingly targeted guidance and decisions that reveal how emerging technologies are being interpreted through the lens of the APPs. These cases and materials highlight the real-world boundaries of AI use and the compliance expectations that come with it.    

Facial recognition technology (FRT): Bunnings and 7-Eleven    

In separate investigations, the OAIC found that both Bunnings and 7-Eleven had breached the APPs by using facial recognition in their stores without clear, voluntary consent. Biometric data is sensitive information under the Privacy Act, and its collection requires express consent and a lawful, proportionate purpose.    

7-Eleven used facial recognition on its in-store customer feedback tablets, including to detect non-genuine survey responses. Bunnings used it to manage customer aggression, protect its staff and support loss prevention. In both cases, the Commissioner found that individuals were not adequately notified or able to opt out, and the collection could not be justified.

Bunnings attempted to rely on exceptions to the consent requirement under APP 3.4, which permits the collection of sensitive information without consent if it is necessary to lessen or prevent a serious threat to life, health or safety, or to prevent or investigate unlawful activity. However, the Commissioner found that the use of facial recognition for general security or deterrence purposes did not meet this threshold.    

In applying the necessity requirement, the Commissioner adopted a three-step proportionality test:    

  • Suitability – Is the collection of biometric information capable of achieving the stated purpose (e.g. preventing harm or crime)?
  • Necessity – Is the collection reasonably necessary, or could the same objective be achieved through less privacy-intrusive means?
  • Balancing – Does the benefit to the organisation or the public outweigh the significant privacy intrusion?

The Commissioner concluded that Bunnings’ FRT deployment failed the necessity and balancing limbs of this test. While reducing aggression in stores was a legitimate goal, the indiscriminate collection of facial images from all customers was not proportionate to that aim, particularly where less invasive alternatives (e.g. signage, staff training, physical design) were available.    

These cases make it clear: the threshold for compliant FRT use is high. Routine surveillance, operational efficiency, or vague deterrence goals are unlikely to meet the Privacy Act’s requirements. As the Privacy Commissioner noted this week, using facial recognition isn’t inherently unlawful; you just need to comply with the law. That’s a much higher bar than many organisations currently realise.

Scraping and profiling: Clearview AI    

In one of the most high-profile decisions to date, the OAIC found that Clearview AI had breached multiple APPs by scraping billions of publicly available images from the internet to train a facial recognition model. The company then offered its service, including for law enforcement use, without individuals’ knowledge or consent.    

The OAIC held that:    

  • Just because data is publicly available does not mean it can be collected without restriction (APP 3)
  • The method of collection was not fair, lawful or reasonable
  • Australian privacy law applies extraterritorially when Australian individuals are affected

The ruling is a direct warning to any business assuming the internet is fair game for AI training.    

De-identification and re-identification risk: Harrison.ai and I-MED

As AI systems become more powerful, the assumption that "de-identified" data can be freely used or shared is increasingly fragile. The OAIC, in conjunction with CSIRO, has emphasised that de-identification exists on a spectrum and that re-identification risk must be actively managed.    

In late 2024, the OAIC commenced preliminary inquiries into I-MED Radiology’s disclosure of patient chest X-rays to Harrison.ai for model training. While I-MED claimed the data had been de-identified, the OAIC’s involvement highlights growing concern about how robust those de-identification processes truly are.    

The case drew public scrutiny because:    

  • The data involved highly sensitive health information
  • Patients had not been clearly informed or asked for consent
  • Harrison.ai acknowledged receipt of the data but asserted that compliance questions fell to I-MED

This situation underscores that even when de-identification is claimed, organisations must ensure the process meets legal and technical thresholds. Combining datasets or applying advanced AI analytics may reintroduce identifiability.    

Generative AI – OAIC guidance    

In 2024, the OAIC released two pieces of detailed guidance on the privacy implications of generative AI (GenAI) — one focused on the training and fine-tuning of models, and the other on the use of commercially available tools. The message was clear: the Privacy Act 1988 (Cth) applies across the AI lifecycle, regardless of whether your organisation built the model or is simply using it. The guidance echoes much of what I have said above, but in a nutshell:    

For developers: Training and fine-tuning GenAI models    

  • Lawful and fair collection (APP 3)

     Publicly available data isn’t a free pass. If you’re scraping or sourcing personal information, you still need to collect it lawfully, fairly, and for a clear, necessary purpose. Collecting sensitive information? You’ll need consent unless an exception applies.    

  • Secondary use and expectations (APP 6)

     Reusing existing data for training? You must show it’s related to the original purpose and within reasonable expectations or get consent. For sensitive info, the bar is higher: the use must be directly related.    

  • De-identification risk (APP 11)

     De-identified data can be re-identified, especially when combined with other sources. The OAIC expects technical and governance safeguards to reduce that risk.

  • Data quality and accuracy (APP 10)

     Garbage in, garbage out. You must take reasonable steps to ensure training data is accurate, complete and current. Disclaimers don’t override your obligations.    

For deployers: Using commercial GenAI tools    

  • Input and output risks (APP 3 & 6)

     Inputting personal data into a GenAI tool may be a use or a disclosure, depending on whether it’s stored or used to train the provider’s model. Outputs, like summaries or inferences, can also be collections if they relate to an identifiable individual.    

  • Transparency and notices (APP 1 & 5)

     You must explain how GenAI tools are used, what data goes in and comes out, and whether third parties are involved. Be clear with users and in your privacy policy and collection statements.    

  • Vendor due diligence (APP 11)

     Know your tool. Review contracts, check whether data is logged or reused, and ensure appropriate security controls are in place.    

  • Governance and oversight

     Use PIAs, update risk registers, and regularly review your use of GenAI. Just because it’s off-the-shelf doesn’t mean you’re off the hook. Even if you didn’t build the model, you’re still responsible for how it’s used. And in the OAIC’s eyes, ignorance is not a defence.    

4. “We don’t use AI”—and other myths that need debunking    

As AI tools become embedded in everyday operations, misunderstandings about privacy risks are becoming just as widespread as the technologies themselves. Let’s set the record straight on a few common myths:    

  • “We don’t use AI.” You probably do. Whether it’s Microsoft Copilot auto-drafting emails, ChatGPT assisting with reports, or an HR system recommending job candidates, AI is already part of your tech stack. Even if your organisation hasn’t formally adopted AI, staff may be using it unofficially. This so-called shadow AI creates serious blind spots. If your policies and risk controls haven’t kept up, your compliance obligations almost certainly haven’t been met either.
  • “We only use off-the-shelf tools, so it’s not our responsibility.” Not quite. Buying a commercial AI product doesn’t outsource your legal obligations. You are still responsible for how personal information is used, stored, or disclosed by that tool. Whether it’s a chatbot integrated into your website or an AI tool embedded in your CRM, you need to understand and manage how it handles data, especially if you’re inputting customer, employee, or client information.
  • “There’s no personal information involved, so we’re in the clear.” This is a common trap. Even when data is ‘de-identified’ or seems anonymous, it can still carry risk. Inferences drawn by AI, like a health condition, political belief or behavioural trait, can be personal information if they relate to an identifiable individual. And as AI systems become more sophisticated, they may memorise or re-identify information that was meant to be stripped of identifiers. The Privacy Act is triggered by more than just names and emails.
  • “Our AI tool is behind a login, so it’s secure.” A login screen is not a substitute for privacy governance. It might protect access, but it doesn’t address whether the tool was trained appropriately, what data it stores or shares, or whether individuals have been informed about how their data is used. Good privacy practice requires end-to-end oversight, from procurement to deployment and beyond.

The bottom line?    

The best organisations aren’t waiting for regulation to catch up. They’re building strong privacy and data governance foundations now, embedding AI into existing frameworks, conducting Privacy Impact Assessments, and asking the hard questions about data flows, vendor practices, and individual rights.    

5. The enforcement gap: David vs Goliath?    

When it comes to regulating AI, Australia’s privacy watchdog is caught in a familiar bind: global technology moves fast, and public sector budgets move slowly.    

The OAIC has made it clear: it won’t be going toe-to-toe with the global AI giants alone. Instead, it plans to rely on strategic coordination with international regulators, such as the UK’s ICO or Europe’s data protection authorities. The idea is sound: leverage shared investigations and pooled resources to avoid duplicating effort. But the reality is more complex.    

Despite a modest funding boost in recent years, the OAIC continues to operate with constrained enforcement resources. This makes prioritisation essential, and that means choosing targets carefully. As a result, enforcement is likely to focus on where the OAIC can make the most impact: domestic deployment of AI systems, rather than upstream model developers or foreign vendors.    

For Australian businesses, that creates a lopsided risk profile. If you’re deploying a GenAI tool sourced from overseas or using plug-and-play AI features bundled into SaaS platforms, you may bear the compliance burden for decisions made far outside your control. In the eyes of the regulator, if you’re the one using the tool, you’re the one accountable.    

It’s a clear warning to local organisations: deployment governance matters. If you don’t understand how a model was trained, what data was used, or how outputs are generated, you could still find yourself in the regulator’s crosshairs. This is especially true where personal information is involved, and even more so in high-impact use cases like employment, health, finance, or policing.    

The OAIC may not have the resources to chase every global developer. But that doesn’t mean privacy enforcement is going away. If anything, it’s becoming more targeted, and Australian businesses need to be ready.    

6. What organisations should be doing now    

Privacy isn't about saying “no” to AI, it’s about knowing how to say “yes” safely.    

As AI tools become more embedded across business functions, the regulatory and reputational stakes are only going to rise. And while Australia’s privacy regime is evolving, there’s already a clear path for action.    

So, what should organisations be doing right now?    

  • Know your data

You can’t govern what you don’t understand. Mapping your data flows, especially how data is used to train, fine-tune, or prompt AI tools, is foundational. Assess whether data was collected lawfully for this purpose, and whether re-use is allowed under APP 6. Sensitive information? You’ll likely need express consent if it’s a secondary purpose. Understanding the provenance, quality, and sensitivity of your data is also key to managing risk and getting return on your AI investments.    


  • Update your policies and notices

With reforms underway, privacy policies will soon need to explain any use of AI for significant automated decision-making. But even before those changes land, your existing obligations under APP 1 and APP 5 may require updates. Are individuals aware their data may be used in AI systems? Is that disclosed in your collection notices? Transparency isn’t just a compliance issue, it’s a trust issue.    


  • Govern the use of GenAI tools

Whether you’re using Copilot, ChatGPT, or other embedded AI tools, assume privacy obligations apply. If you’re inputting personal information, that might be a disclosure. If your outputs affect individuals, that could count as a new collection. Shadow AI, tools used without formal approval, also poses risks. Without policies, training, and oversight, you may be exposed to privacy breaches and compliance failures.    
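One practical control worth illustrating is screening prompts for personal information before they leave the organisation. The sketch below is illustrative only: the patterns, the `redact` helper, and the sample text are all hypothetical, and real PII detection needs far broader coverage (names, addresses, Medicare numbers, context-dependent identifiers) than a few regexes can provide.

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style numbers
    "tfn": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number-like
}

def redact(prompt: str) -> str:
    """Replace likely personal identifiers before a prompt is sent to a GenAI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise the complaint from jane.doe@example.com, TFN 123 456 789."
    print(redact(raw))
    # → Summarise the complaint from [EMAIL REDACTED], TFN [TFN REDACTED].
```

A gate like this does not make inputting personal information lawful on its own, but it reduces the chance of an inadvertent disclosure and gives you a log point for oversight.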


  • Be ready for the ADM reforms

By December 2026, privacy policies will need to address any use of significant automated decision-making. That means reviewing your processes now, particularly where AI is involved in employment, finance, health, education or other impactful domains. Too little detail in your public statements might breach your transparency obligations. Too much might create legal or IP risks. A measured, practical approach will be key.    


  • Conduct PIAs (and not just once)

Privacy Impact Assessments aren’t just helpful, they’re becoming essential. Whether you’re building your own model or using off-the-shelf tools, PIAs help assess privacy risks at every stage of the AI lifecycle. They’re also a powerful way to demonstrate due diligence to regulators, boards, and customers.    


  • Don’t underestimate security and re-identification risks

AI tools often rely on large datasets, but just because data is ‘de-identified’ doesn’t mean it’s safe. As models grow more sophisticated, they may enable re-identification or infer highly sensitive insights. Organisations should take active steps to prevent this. That includes technical controls, internal governance, and reassessment as models evolve.    
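One simple way to test whether ‘de-identified’ data is actually safe is a k-anonymity check: if a combination of quasi-identifiers (say, postcode plus year of birth) is shared by fewer than k records, those individuals may be re-identifiable. The sketch below is a minimal, hypothetical example; the function name, sample data, and choice of quasi-identifiers are assumptions for illustration, not a complete de-identification assessment.

```python
from collections import Counter

def k_anonymity_violations(records, quasi_identifiers, k=5):
    """Return quasi-identifier combinations shared by fewer than k records.

    Any combination listed here could single out a small group of people,
    so the dataset is not k-anonymous with respect to those attributes.
    """
    combos = Counter(
        tuple(record[qi] for qi in quasi_identifiers) for record in records
    )
    return {combo: count for combo, count in combos.items() if count < k}

# Hypothetical sample: postcode + birth year is a classic quasi-identifier pair.
people = [
    {"postcode": "2000", "birth_year": 1980},
    {"postcode": "2000", "birth_year": 1980},
    {"postcode": "3000", "birth_year": 1975},  # unique -> re-identification risk
]
print(k_anonymity_violations(people, ["postcode", "birth_year"], k=2))
# → {('3000', 1975): 1}
```

Checks like this are a starting point only: they should be repeated as datasets grow and as models evolve, since inference attacks can succeed even against datasets that pass a static k-anonymity test.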


  • Scrutinise your vendors

Using a third-party AI tool does not shield you from compliance. You’re responsible for what happens to the data you input, the outputs you generate, and the vendor practices you rely on. Review your contracts. Ask the hard questions. Ensure your procurement teams are trained to spot red flags and demand transparency.    


  • Build AI into your broader governance frameworks

AI governance is not an IT problem. It’s a cross-functional issue that spans legal, privacy, cyber, risk, data and ethics. Organisations should embed AI into existing governance structures, not bolt it on. This means assigning ownership, updating risk registers, and ensuring board-level visibility of AI use cases and exposures.    

In her opening presentation for Privacy Awareness Week 2025, Australian Privacy Commissioner Carly Kind delivered a clear message: AI is transforming privacy risks, and regulatory expectations are evolving quickly.    

She emphasised that AI is already reshaping how personal information is collected, used, and acted upon, and that our legal and policy frameworks must keep up. The Commissioner outlined a more assertive regulatory stance, built on clearer guidance, sharper enforcement tools, and a growing focus on AI governance and automated decision-making. Her message to business was direct: privacy can't be bolted on at the end. It must be embedded from the start, into the way technology is built, procured and deployed.    

But this also raises deeper questions. As AI becomes ubiquitous in daily life, will community expectations begin to shift? Once AI-powered tools are embedded in education, healthcare, communications and everyday work, will we still view the reuse of personal information for model training as unexpected, or simply part of the deal?    

The Privacy Act is built on principles like reasonable expectations and fairness, concepts that are shaped not just by law, but by public norms. And those norms are changing. It will be fascinating to watch whether, over time, broad AI training becomes so common that it’s no longer considered a secondary use at all because society accepts it as standard practice. Or, conversely, whether backlash and fatigue will push expectations the other way, demanding stronger boundaries, clearer consent, and tighter guardrails on repurposing personal information.    

Whatever direction the social tide turns, one thing is clear: organisations can’t afford to sit still. AI is moving fast and so is the regulatory response. Privacy is everyone’s business, because it’s good business. And in 2025, staying ahead means engaging now.    

James Patto
Founder & Principal