Productivity Commission Interim Report: Counting Costs, Missing the Privacy Dividend

Australia’s productivity problem is no longer a slow burn; it’s now a headline risk. The Reserve Bank has warned that our long-term productivity growth has fallen well below historical averages, undermining living standards and making it harder to manage inflation without squeezing households. Productivity is the quiet force behind wage growth, competitiveness, and resilience; without it, the national economy drifts into low-growth mediocrity.

This is where the Productivity Commission’s Harnessing Data and Digital Technology inquiry matters. It is one of the few forums where we step back, take stock, and ask: what policy settings will unlock growth in the modern digital world? And critically, how do we ensure that the digital economy works for both business and the community?

The Commission’s work is valuable because it forces this conversation into the open. But to truly meet the moment, the analysis must be holistic, measuring both the costs of regulation and the economic gains that arise when people and businesses trust the digital systems they depend on. That’s where this debate over privacy reform, and its intersection with productivity, gets interesting.    

As Privacy Commissioner Carly Kind notes in her op-ed on this topic, productivity-enhancing technologies like AI scribes in GP clinics can improve both efficiency and patient care, but only when supported by strong privacy safeguards. This aligns with my view: trust is not an optional extra in a digital economy; it’s the scaffolding that holds it up.

Scope note: This post covers only the privacy component of the PC report. While privacy and AI regulation are inextricably linked, and my argument that good digital laws lift productivity applies to both, I’ll cover the AI recommendations separately in another post.

The Productivity Lens: Why Privacy Is More Than a Cost    

When we talk about productivity and compliance, the gravitational pull is almost always towards the cost side of the ledger. That is, how much time, money, and effort will be consumed by compliance. That focus is fair as far as it goes, but it’s only half the equation.    

The PC’s interim report is no different. Its own economic modelling estimates that adopting an outcomes-based compliance model for privacy, along with other reforms, could lift labour productivity by around 0.5% within a decade (~$13 billion per year). But those gains are modelled almost entirely on the assumed efficiencies of flexibility and reduced prescription.    

What’s missing is any serious attempt to quantify the economic upside of strong, well-enforced privacy measures themselves, the “trust dividend” and risk-reduction effects that come from rights, governance, and accountability.    

Robust privacy protections are not just a safeguard; they are a productivity asset in their own right, delivering trust, data-quality, risk-avoidance and market-access benefits.

By leaving these benefits out of its quantified analysis, the PC’s modelling appears structurally incomplete. It values the efficiencies of its proposed compliance pathway, but it doesn’t put a dollar figure on the gains that come from trustworthy access, the kind underpinned by enforceable rights, clear guardrails, and credible oversight.    

If those benefits were modelled alongside compliance savings, the economic case for retaining core reforms may look very different.    

What the PC is proposing on privacy (in brief)    

In short, the interim report advances a dual-track (“alternative”) compliance pathway for the Privacy Act: one outcomes-based track (flexibility) and one more prescriptive track (certainty/safe harbour). This differs from both the current Act and the Attorney-General’s “fair and reasonable” proposal: the PC’s idea is about alternative requirements, not piling new duties on top of the APPs. The report also flags industry concerns about introducing a right to erasure, noting claims that APP 11 already achieves “substantively similar outcomes” and pointing to implementation burdens experienced under the GDPR.

On the productivity lens, the PC’s own modelling suggests that better data use (including outcomes-based privacy and digital financial reporting) could lift whole-of-economy labour productivity by ~0.5% (about $13b p.a.) within a decade, while it estimates about $2b in ongoing privacy spend by larger firms today.    

The Commissioner’s Response: Initial Pushback, Then a Sharpened Op-Ed    

Privacy Commissioner Carly Kind responded quickly and diplomatically to the PC’s interim report at a public event reported by InnovationAus. She accepted that some less integral items might fall away to keep reform momentum, but was clear that the two proposals the PC recommended removing should stay on the table:

  • Right to erasure — an individual’s ability to request deletion of their personal information; and
  • A “fair and reasonable” test — an objective standard for when collection, use and disclosure are appropriate.

Her message was that these are not fringe extras; they go to core agency for Australians and alignment with international norms.    

Kind then elaborated on her position in an AFR op-ed, “Upend privacy laws for AI at the GP? This will not benefit patients.” Framing the issue through the rapid uptake of AI “scribes” in general practice, she argued that:

  • Privacy law already enables trustworthy AI: consent, transparency and security obligations are the scaffolding that makes adoption safe; OAIC oversight provides recourse when things go wrong.
  • A “best interests” defence is unworkable: replacing prescribed rights with an outcomes-only pathway would make it hard for organisations to prove they acted in individuals’ best interests while denying those individuals basic controls (consent, notice, correction).
  • Upending rights risks backfiring: it could drive under-investment in privacy, more breaches and a deeper crisis of confidence, precisely the opposite of what a productivity agenda needs.
  • Evidence in market: recent OAIC work (e.g., the I-MED inquiry) shows organisations can develop and train AI consistently with existing privacy law, “trustworthy AI adoption consistent with privacy law is possible.”

My view    

The PC’s privacy proposals contain some valuable ideas, but its omissions and missteps risk weakening both the consumer protections and the productivity gains that can come from robust privacy regulation. Set out below are the good and not-so-good aspects of the PC report.

1. Dual-track compliance...at least for small business    

The PC’s proposal for an “alternative compliance pathway” could be both a productivity win and a privacy win, if it’s done properly, which is not the way it has been proposed. Many small and medium-sized businesses lack the resources to translate principles-based legislation into meaningful operational compliance. A prescriptive, safe-harbour track could:

  • Provide certainty: Businesses know they’re compliant if they meet the defined steps.
  • Reduce cost: Compliance can be commoditised and scaled through templates and tools.
  • Lift the baseline: Clear, easy-to-implement standards improve privacy outcomes across the board.

This isn’t about lowering the bar; it’s about ensuring compliance is achievable without bespoke legal advice for every small operator. It provides for a known base level of compliance across the economy - a floor, not a ceiling.

2. Outcomes-based privacy, only if it’s enforceable and high-trust    

The PC’s endorsement of outcomes-based privacy regulation is, in theory, an interesting idea. It could, on paper, allow flexibility, encourage innovation, and focus compliance resources on what matters most.    

But as Privacy Commissioner Carly Kind warned in her recent op-ed, replacing prescribed rights with an outcomes-only pathway risks creating an unworkable system. Rather than solving problems, it could create serious uncertainty for both organisations and the regulator. Without the anchor of clear, enforceable rules, it becomes difficult to prove compliance, leaving too much room for subjective judgement and, at worst, providing a low-accountability route to avoid privacy obligations altogether. That risk is not just theoretical; it is exactly the kind of environment in which under-investment in privacy and an erosion of trust can take hold.

On balance, I share the Commissioner’s concern. For me, the burden of proof lies with anyone proposing such a model to show it can be measurable, robust, and auditable in practice. That means:    

  • Outcomes that are specific, quantifiable and benchmarked;
  • Independent verification to demonstrate they’ve been met; and
  • Restricting access to the pathway to organisations with proven privacy maturity and strong compliance records.

Absent those guardrails, outcomes-based privacy isn’t a modernisation; it’s a gamble with trust and, by extension, with the productivity dividend that trust delivers in a digital economy. The PC hasn’t yet met that burden.

3. Aligning compliance with risk and capability    

The PC’s recognition that compliance expectations should scale with both an organisation’s capability and the risk profile of its activities is a sound and pragmatic principle. This is consistent with approaches already adopted in other regulatory contexts, including AI, where higher-risk uses attract greater scrutiny and requirements.    

A proportional model can help ensure privacy regulation remains both effective and sustainable, avoiding the trap of applying the same level of complexity and cost to all entities regardless of scale or impact. Done well, it encourages better compliance outcomes by focusing regulatory effort where it matters most, while reducing unnecessary burdens on low-risk operators. An example is the proposed reform to mandate privacy impact assessments (PIAs) for high-risk privacy projects. The challenge, as always, lies in striking the right balance between flexibility and accountability.

4. No real assessment of privacy’s ROI    

The PC’s modelling focuses on the efficiency gains of its proposed outcomes-based model: greater flexibility, reduced prescription, and faster innovation. These are valid benefits. But the analysis is incomplete because, as noted above, it doesn’t appear to attempt to quantify the economic benefits of strong privacy measures themselves.

As set out above, global and domestic research clearly shows that robust privacy protections deliver measurable business benefits:    

  • The trust dividend – Higher adoption rates for new services, increased customer retention, and stronger brand reputation when organisations are seen as safe custodians of personal information.
  • Improved data quality – Strong governance produces cleaner, more accurate datasets, improving analytics, AI performance, and decision-making, while reducing costly rework.
  • Risk cost avoidance – Better minimisation and retention controls shrink breach blast radii and reduce incident downtime, saving millions in direct and indirect costs.
  • Market access – Interoperability with global frameworks (GDPR adequacy, Global CBPR) opens markets faster and reduces legal and procurement friction.

These benefits have been quantified by bodies like Cisco, IBM/Ponemon, and the OECD. Leaving them out of the model means the PC risks undervaluing protections, not because they lack economic value, but because their contribution hasn’t been measured.    

The Privacy Commissioner’s op-ed rightly reinforces this point: privacy is the “foundation of a thriving data and digital economy”, and every major breach erodes the trust that drives uptake. In other words, the PC’s modelling has left half the productivity equation out: the half that the Commissioner explicitly calls “key to supporting trustworthy data and AI adoption.”

5. Right to erasure: ignoring both the evidence and the reform context    

The PC’s recommendation to abandon a legislated right to erasure overlooks both the policy intent behind this reform and the ability to adjust it so that it works in practice.    

Qualifying the right makes more sense than removing it    

The Privacy Act Review’s recommended right to erasure is a strong one: it would allow individuals to require deletion of their personal information, with limited exceptions. Privacy Commissioner Carly Kind has argued forcefully against dropping it, framing the right as central to individual agency and a key step in aligning Australia with international privacy norms. On this, I’m aligned in principle.    

This right is one of the clearest levers for mandating good data governance. If an organisation must be able to delete on demand, it has to know what it holds, where it’s stored, and when it should go - disciplines many entities are still developing. It also naturally encourages moves towards more centralised data holdings, such as consolidated data warehouses. That’s not just good for governance; it reduces cyber attack surfaces, cuts down on data duplication and, in turn, lowers both operational risk and cost.

But it’s not the only lever. Requirements to set deletion periods in privacy policies and follow them can drive similar behaviour, embedding governance discipline without necessarily granting an absolute, unconditional right.    

As currently proposed, the erasure right has no threshold for practicality or proportionality. On paper, it could apply even where compliance is technically infeasible or wildly disproportionate in cost. That’s a real operational concern, particularly in sectors with complex retention rules, complex technology such as AI, or extensive backup and archiving practices.

Rather than deleting the reform altogether, a better path would be to introduce a clear, robust threshold: one that preserves the right’s intent in most cases while avoiding impractical or disproportionate scenarios. That way, the right to erasure remains a driver of strong governance and individual agency, but in a form that organisations can realistically implement without undermining the broader trust-productivity link that privacy enables.

Why APP 11 is insufficient - The existing “reasonable steps to destroy or de-identify when no longer needed” duty under APP 11 has not shifted entrenched data-hoarding practices. Breaches continue to expose years-old personal data kept “just in case.” A user-driven, enforceable erasure right, even one with a practicality threshold, would be a stronger catalyst for change.    

Retention law complexity needs parallel reform - Australia’s data retention obligations are scattered across hundreds of statutes and multiple levels of government. This complexity is already under review, and reforming it in parallel with an erasure right would remove much of the uncertainty businesses face.    

Costs are real, but so are the returns - There will be an upfront compliance cost. The GDPR experience shows that system changes, process redesign, and training are necessary. But those investments tend to pay back: organisations report leaner data estates, reduced storage costs, more efficient governance, and smaller breach impacts.    

The PC’s stance of scrapping the right entirely ignores a middle ground. By qualifying the obligation and aligning it with streamlined retention rules, Australia could keep the benefits of greater trust, better data hygiene, and international interoperability while ensuring the right is proportionate and workable.

6. The fair and reasonable test: a safeguard with challenges    

Consent is, and should remain, a core pillar of Australia’s privacy framework. It reflects a broader legal principle: that individuals should be able to make informed choices about how their information is used. Organisations already face significant obligations to make that consent valid - clear disclosures, genuine choice, the ability to withdraw. And that is before proposed reforms raise the bar even higher (or at least confirm its height).

Overlaying an objective “fair and reasonable” standard on top of this framework creates a genuine tension. If an organisation secures fully informed, high-quality consent, but that decision can later be overridden on the basis of an overly broad objective assessment, it risks making the initial process feel redundant. That’s not just a compliance inefficiency; it can introduce uncertainty for organisations and regulators alike, particularly where new data uses are concerned.

While the fair and reasonable test has a clear role in protecting people where consent may be weak or illusory, its broad application could have unintended consequences for innovation and investment. The challenge is finding the balance: preserving the safeguard without undermining the primacy of consent in our legal system.    

The Balanced Privacy Package    

The Privacy Act Review proposed a package of reforms intended to work together, strengthening individual rights, lifting organisational accountability, and modernising the law for a digital economy. The PC’s interim report suggests removing some of the more ambitious elements.    

There’s merit in looking at the cumulative cost of reforms, but we can’t measure only the cost of implementing them and the benefit of removing them. The missing piece is any attempt to quantify the return on investment (ROI) from stronger privacy rights. As the evidence shows, good privacy governance drives uptake, reduces incident costs, accelerates sales cycles, and opens market access. That’s not just compliance; it’s competitive advantage.

A recalibration might be warranted for specific proposals, but wholesale removal risks weakening the trust foundations that underpin Australia’s digital economy. Done right, the “package” should deliver net productivity gains, not just compliance cost savings.    

The PC is right to highlight productivity as a key lens for digital policy. But productivity is not just about deregulation; it’s about creating the conditions for efficient, high-quality, and trusted data use. That requires recognising that privacy is not an obstacle to digital innovation; it is a precondition for it.

A truly forward-facing privacy regime would:    

  • Embed trust as a measurable economic driver, not just a feel-good concept.
  • Recognise good data governance as an enabler of safe AI, cross-sector data sharing, and digital trade.
  • Quantify the benefit of strong privacy laws and downside risk avoidance: the savings from fewer and smaller data breaches, less downtime, and reduced reputational damage.
  • Ensure market access benefits are factored in, from GDPR adequacy to cross-border frameworks that depend on robust safeguards.

The PC’s interim report models the benefits of “good” data use in an outcomes-based regulatory space but does not appear to model the economic upside that flows directly from robust privacy measures. That’s a gap that matters, because without that evidence, the case for retaining stronger privacy rights is easier to dismiss.    

If we’re serious about harnessing data and digital technology for productivity, we need to stop treating privacy as a compliance drag and start treating it as the economic infrastructure it is.    

James Patto
Founder & Principal