When Australia overhauled its Privacy Act in 2012, the intention was to strike a balance: protect individuals’ privacy, but not at the expense of innovation or operational efficiency. The resulting framework was deliberately balanced, framed by lawmakers as an “innovative” approach that would support business agility in a digital economy and offer a level of privacy protection at the same time.
Fast forward to 2025. Claims of an innovation-supporting regime appear greatly overstated, and Australia is still waiting for major reform to bring its laws into the modern era:
- The GDPR has now been in full effect for more than seven years.
- Australia’s Privacy Act review began in 2019 but, as of 2025, significant compliance reforms have yet to be legislated.
- Even with optimistic timelines, it may be 2027 or later before Australian organisations operate under privacy laws that reflect global norms.
The Privacy and Other Legislation Amendment Act 2024 did introduce meaningful changes around enforcement and regulatory powers. But from a compliance uplift perspective, particularly in terms of privacy governance, data rights, and accountability, its impact was limited.
In effect, we remain in a holding pattern. And while we wait, the gap between global expectations and Australian laws continues to widen.
And now it’s becoming clearer that this regulatory minimalism has come at a cost. As artificial intelligence becomes a core part of how organisations operate, make decisions, and deliver services, Australian organisations are discovering a hard truth: AI doesn’t just need computing power, it needs governed, high-quality, legally usable data.
The uncomfortable reality is this:
The "innovative" privacy regime that Australian lawmakers once celebrated, and have since failed to reform in line with global standards, may have condemned Australia to second-tier status in the AI-driven global economy.
Why data governance is critical to getting the best value from AI
There’s a growing tendency to talk about artificial intelligence in terms of algorithms, compute power, and digital transformation. But beneath it all, AI runs on data.
Alongside technology and cybersecurity, data is one of the three foundational pillars of any AI system and, arguably, the most important. Whether training a model, feeding it inputs, or applying AI tools to decision-making, the quality, governance, and legal status of data are fundamental to whether AI will deliver value or just a whole lot of risk.
From a risk perspective, poor data governance exposes organisations to:
- Privacy breaches and legal non-compliance,
- Inadvertent exposure of confidential or sensitive information,
- Misuse of data due to unclear provenance or repurposing,
- Ethical harms like bias, inaccuracy, and loss of explainability.
But the impact of poor data governance isn't limited to risk.
It also erodes the business value of AI initiatives. If you're feeding AI systems poor-quality, inconsistent, or ungoverned data, you're going to get:
- Weaker insights,
- Inaccurate predictions,
- Frustrated users, and
- AI outputs that are unreliable, misleading, or irrelevant.
The result? Lower benefit, slower uptake, and a far weaker return on AI investment.
By contrast, organisations with strong data governance, meaning clear ownership, good documentation, accurate and up-to-date records, and well-defined legal rights to use data, are markedly better positioned to make AI work for them. Not just in developing AI systems, but in deploying and using AI solutions confidently across the business.
If your datasets are:
- Accurate, current, and well-labelled,
- Properly cleaned and structured,
- Legally and ethically sound to use,
then you can:
- Layer AI products (like analytics, decision support, and GenAI tools) over your data with greater confidence,
- Extract more meaningful, useful insights,
- Fine-tune models with greater accuracy, resulting in smarter, faster, more context-aware performance,
- Reduce risks of hallucination, bias, or model drift by ensuring inputs are consistent and properly validated.
This unlocks major advantages:
- Faster time to market for AI-enabled solutions,
- Lower project costs by avoiding delays, duplication, or rework,
- Better fine-tuning and customisation of models for specific business contexts,
- Stronger outcomes by ensuring the AI is built on solid foundations.
In short: better data governance results in better AI performance, lower risk, and greater ROI.
This is not theoretical; it is playing out now. Organisations that invested in data maturity years ago are already extracting more value from AI today, while others scramble to clean, verify, and justify the data they hoped to leverage.
The lesson is clear: AI success depends on data readiness. And data readiness depends on governance.
How strong privacy laws help drive good data governance
Good data governance doesn’t happen by accident. It takes sustained investment in systems, processes, and culture. And in practice, one of the most powerful catalysts for that investment has been strong, enforceable privacy regulation.
When organisations are legally required to know what data they hold, document its use, and justify its retention, they begin to build the governance scaffolding that can later be extended to support emerging technologies like AI.
One of the most persistent challenges in uplifting data governance is securing executive buy-in. While boards and leaders generally recognise that privacy and data governance are important, they often struggle to justify sustained investment in the absence of clear legal obligations.
This leads to a common pattern:
- Privacy teams know what needs to be done,
- But without external pressure, internal momentum is hard to build,
- Funding remains piecemeal, and change initiatives stall.
Strong regulation changes that dynamic. When compliance becomes non-negotiable and enforcement credible, executive interest increases. Boards ask more questions. Budgets are unlocked. Uplift becomes an enterprise-wide priority. Just take a look at budgets and uplift in the cybersecurity space in the wake of ASIC enforcement activity and new cyber laws contained in the Security of Critical Infrastructure Act and the Cyber Security Act.
Regulation, when well-designed, doesn’t just create risk. It creates permission. It legitimises and prioritises the work that privacy, legal, and data teams have long advocated. It requires organisations to implement measures that will ultimately benefit them in the long term.
That’s exactly what happened in Europe with the General Data Protection Regulation (GDPR).
The GDPR is driving better data governance and AI outcomes
In Europe, data protection is grounded in human rights. The GDPR makes this explicit:
“This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.” (Article 1(2))
Privacy is treated not just as a regulatory obligation, but as an essential aspect of human dignity, one that warrants strong legal protection and structural safeguards. The compliance obligations in the GDPR reflect this approach.
When the GDPR took effect, European organisations were required to fundamentally re-engineer their data handling practices. Compliance was not a box-ticking exercise; it meant rethinking how data was collected, managed, and used across the enterprise.
Organisations were compelled to:
- Build comprehensive data inventories and document processing activities
- Establish lawful bases for data use and repurposing
- Embed privacy-by-design into product development
- Create cross-functional oversight structures
- Undertake regular risk assessments, including DPIAs
- Appoint privacy officers and formalise governance roles
While challenging at the time, these investments have paid off. Many of the capabilities developed through GDPR compliance are directly relevant to AI governance:
- Accurate, well-documented datasets now support better AI model training and deployment.
- Governance structures built for privacy can be adapted to oversee AI use and risk.
- Organisational familiarity with privacy and risk frameworks has created internal fluency in managing complex compliance issues.
This means they can:
- Shorten the time to establish AI governance,
- Streamline business change efforts,
- Embed AI risk management into existing policy, risk and compliance frameworks,
- Deliver more consistent oversight across privacy, cyber, ethics, and AI use cases.
The result? They not only govern AI better, but they also get more out of it.
Their AI programs are:
- Faster to stand up,
- Less operationally disruptive,
- More aligned to organisational values,
- And better supported by business stakeholders.
In short: the investment in GDPR compliance has become a strategic enabler, turning what was once seen as regulatory cost into long-term innovation capability. The GDPR has given European organisations a head start on both data and governance: a foundation from which to take AI by the horns and leverage it with lower risk and greater return.
The Privacy Act has left Australian organisations unprepared for AI
In contrast, Australian lawmakers took a different approach.
Australia’s Privacy Act, even after its 2012 reforms, approaches privacy as something to be balanced. Section 2A of the Act sets out its objects, which include:
“...to recognise that the protection of the privacy of individuals is balanced with the interests of entities in carrying out their functions or activities.”
This balancing language matters. It reflects an underlying assumption that privacy protections should give way when they are seen to interfere with operational efficiency or commercial objectives. And that assumption has shaped how many organisations approach compliance and the nature of the compliance obligations included in the Privacy Act.
The central premise of Australia’s approach was that less regulation would unlock more innovation.
The theory went something like this:
- Lighter compliance burdens would allow faster product development and deployment.
- Fewer formal requirements, such as privacy impact assessments or data use limitations, would attract startups and investors.
- Organisations would have greater freedom to repurpose data, experiment with new models, and move quickly in response to market needs.
But this assumed innovation dividend has not materialised.
There is little evidence, then or now, that Australia’s privacy settings have helped the country outperform in data-driven innovation.
In practice, many of the most successful and responsible data innovators have emerged from jurisdictions with stronger privacy regulation. The EU remains a leader in health tech, fintech, and privacy-enhancing technologies. Canada and the UK continue to shape best practice on algorithmic accountability. And US states like California have shown that robust privacy rights and commercial success are not mutually exclusive.
The idea that regulatory flexibility would lead to competitive advantage has proven optimistic and increasingly untenable in a global environment that rewards trust, governance, and transparency.
Instead, the result is clear: rather than prompting strong privacy foundations and good data governance, Australia's less stringent privacy regulations have encouraged businesses to manage obligations at the margins, minimising friction rather than embedding real and meaningful data governance. With fewer regulatory drivers and minimal enforcement, data governance has often remained underdeveloped.
The Governance Institute of Australia’s Data Governance in Australia 2023 Report highlights the lack of data governance maturity in Australian organisations. Of the 345 CEOs, C-suite executives, and non-executive directors surveyed, more than half rated their organisation’s data management as ‘average’, and almost 5% called it ‘poor’.[1] In contrast, the European Commission’s 2023 Open Data Maturity Report shows European organisations achieving high levels of data governance maturity, with EU member states attaining an impressive average maturity score of 83%.[2]
The consequences of weaker privacy regulation and underdeveloped data governance are no longer abstract; they are playing out in real time as Australian organisations attempt to adopt and scale AI solutions. Organisations without mature data governance are discovering that AI isn’t just harder to implement, it’s also delivering less value and more risk.
The impact is multifaceted:
- Lower quality outputs: Poorly labelled, inconsistent, or incomplete datasets result in unreliable predictions, hallucinations, or models that don’t generalise well.
- Training challenges: Without clean, structured, and properly permissioned data, training AI models becomes time-consuming, expensive, and technically compromised. In some cases, organisations may not be able to train models at all due to legal uncertainty around data use.
- Higher compliance risk: Lack of documentation, provenance, or lawful basis for data use increases exposure to legal and regulatory consequences, particularly as AI-specific regulation emerges.
- Ethical blind spots: Bias, discrimination, and explainability issues are harder to identify and mitigate without robust governance processes and oversight.
- Deployment delays: Teams spend time retrofitting privacy controls, remediating data, or seeking executive sign-off for AI use cases that trigger unexpected legal or reputational concerns.
- Fragmented governance: Without pre-existing structures, AI risk is often managed in isolation (by tech or innovation teams) rather than integrated across legal, privacy & data, cybersecurity, and risk functions.
- Reduced stakeholder trust: Customers, regulators, and the public are increasingly wary of AI use. Organisations that can’t demonstrate strong governance risk losing social licence and facing reputational damage.
In practical terms, this means:
- Slower time-to-value from AI investments
- Higher costs due to duplication, remediation, and risk management
- Missed opportunities to fine-tune models or build competitive offerings
- A reactive, tactical approach to AI adoption instead of strategic, scalable deployment
In short: organisations that failed to build strong data governance under the privacy regime of the past decade are now paying the price.
Additionally, when Australian privacy laws eventually catch up, Australian organisations will face the daunting task of uplifting data governance and building responsible AI governance at the same time, stretching leadership attention, change capacity, and investment capital at a critical juncture.
This is the operational legacy of a regulatory philosophy that deprioritised governance in the name of flexibility. And it is putting Australian organisations at a growing disadvantage in the global AI economy.
Instead of springboarding into the AI era, the legacy of lighter-touch privacy regulation is creating real operational and strategic drag on Australian organisations.
[1] https://www.governanceinstitute.com.au/advocacy/data-governance-in-australia/
[2] https://data.europa.eu/en/news-events/news/2023-open-data-maturity-report-has-been-released
The story we have been told for decades, that lighter privacy laws would unleash innovation, has not come to pass, and the narrative no longer fits the world we live in. Instead, it has left many Australian organisations unprepared to fully embrace potentially one of the most transformative technologies we have ever seen.
Today:
- Strong privacy governance enables responsible, scalable AI innovation,
- Robust data foundations reduce operational and reputational risk,
- Mature governance creates organisational agility in a world of rapid regulatory change.
If Australian organisations want to compete globally in the AI economy, they must urgently rethink their approach, not because regulators demand it, but because the market demands it.
Waiting for law reform is no longer a strategy. Proactive investment in data & AI governance is now a competitive imperative.
Because in the AI era, good governance isn’t a brake on progress. It’s one of its most powerful fuels.