State of Privacy 2026: The uncomfortable truth about “doing more with less”

If you work in privacy right now, you’ve probably felt it:

More laws.
More AI.
More data.
More expectations.

But fewer people. And tighter budgets.

ISACA’s State of Privacy 2026 report (survey run in Sept 2025 with 1,800+ respondents) basically confirms what many of us have been seeing on the ground: privacy risk is rising while privacy capacity is shrinking.

Below is my “blog version” breakdown: what stood out, why it matters, and what I think privacy leaders should do next.

1) Privacy teams are shrinking… and it’s not subtle

The headline that hit me first:

  • The median privacy staff size dropped from 8 last year to 5.
  • 11% of organizations have only one full-time person handling privacy.

That’s not a “minor adjustment.” That’s a structural shift.

And here’s the kicker: despite smaller teams, the work is not shrinking—especially with AI and the explosion of data subject requests.

My take

Most companies are still treating privacy like an “added layer” on top of product, security, legal, and data teams.

That approach collapses when headcount goes down.

If privacy isn’t built into how the business runs, you end up with:

  • constant fire-fighting
  • checkbox compliance
  • exhausted teams
  • and predictable failures

2) Understaffing is worse on the technical side (and it matters)

ISACA separates privacy staffing into two categories:

  • Legal/compliance privacy roles
  • Technical privacy roles (privacy engineering, implementing controls, tooling, automation)

Both are under pressure. But technical privacy roles are consistently harder to cover—because you need people who can translate privacy requirements into actual system behavior.

And the survey also shows technical privacy roles are slightly more likely to see increased demand in the next year.

My take

If you’re serious about privacy outcomes, you can’t build a privacy program that’s 90% policy and 10% implementation.

The next era is privacy engineering + operations:

  • data mapping that actually stays updated
  • automated evidence collection
  • scalable DSAR handling
  • privacy-by-design embedded in SDLC
  • AI governance that’s real, not a PDF

3) Hiring is still hard — but the “top signal” changed

ISACA asked what makes a privacy candidate “qualified.” The top answers were:

  1. Organizational fit (culture)
  2. Hands-on privacy experience
  3. Adaptability

Notably, “organizational fit” and “adaptability” were new options this year—and they jumped straight to the top.

Also:

  • Expert-level privacy professionals are the hardest to hire (by far)

My take

Privacy is no longer a “static” field.

Tools change. Laws expand. AI shifts the ground under your feet every month.

So companies aren’t just hiring for knowledge. They’re hiring for someone who can keep up and influence cross-functional teams without breaking relationships.

4) The #1 privacy failure is still training (and it’s getting worse)

This was one of the bluntest findings in the report:

  • Lack of training / poor training is the most common privacy failure (51%).
  • The next most common failure: not practicing privacy by design (and it jumped noticeably vs last year).
  • Data breaches / leakage are also high on the list.

And while 79% of organizations provide privacy awareness training, providing training clearly isn't the same as providing effective training.

My take

Most privacy training fails for one reason:

It’s built to “prove training happened,” not to change behavior.

If your program is just:

  • annual slides
  • a quiz
  • a completion report

…don’t be surprised when your top failure is “poor training.”

The real question is: What behavior changed this quarter because of training?

5) Boards influence everything — and many still treat privacy as compliance-only

  • 56% said their board adequately prioritizes privacy.
  • Boards most commonly view privacy programs as compliance-driven (the largest share).

ISACA also draws a strong connection between board support and:

  • practicing privacy by design
  • confidence in compliance outcomes
  • adequate budget funding

My take (strong opinion)

If the board frames privacy as “just compliance,” privacy becomes a cost center.

And cost centers get cut.

If the board frames privacy as trust + resilience + product risk, privacy gets funded.

So privacy leaders need to stop selling “compliance tasks” and start selling business consequences:

  • revenue risk
  • market access
  • customer churn
  • product delays
  • incident costs
  • and AI-related reputational risk

6) AI adoption in privacy is real… but it’s not mainstream yet

The survey shows:

  • A relatively small slice currently uses AI for privacy tasks
  • Some plan to adopt within 12 months
  • A large share still has no plans (and there’s likely “shadow AI” happening anyway)

ISACA’s important point: AI adoption correlates with maturity.
Organizations with stronger privacy-by-design habits and board prioritization are more likely to use AI responsibly.

My take

AI won’t save a broken privacy program.

If your fundamentals are weak—no inventory, no ownership, no workflows, no metrics—AI just helps you fail faster.

But if your fundamentals are strong, AI can absolutely help:

  • classify personal data
  • draft DPIAs and risk narratives (with human review)
  • accelerate policy mapping
  • support DSAR triage
  • detect privacy issues in requirements or tickets
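To show what "with human review" means in practice, here's a toy sketch of a pattern-based first pass that surfaces candidate personal data for a reviewer. The patterns, and the name `flag_personal_data`, are purely illustrative assumptions on my part; a real pipeline would use proper detection tooling, and a human would still make the call.

```python
import re

# Illustrative patterns only (assumption): a real program would use
# dedicated detection tooling and always route hits to a human reviewer.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_personal_data(text: str) -> dict:
    """Return candidate personal-data hits by category, for human review."""
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.search(text)}

hits = flag_personal_data("Contact jane.doe@example.com or 555-867-5309.")
```

The point isn't the regexes; it's the shape of the workflow: AI (or any automation) proposes, a human disposes.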

7) Budget optimism is fading — even if cuts haven’t fully landed yet

ISACA notes a clear trend:

  • fewer than a quarter expect budgets to increase
  • about half anticipate decreases

Even when actual cuts lag behind expectations, the fear of cuts changes behavior:

  • hiring freezes
  • delayed tooling
  • no training refresh
  • “make do with what we have”

My take

This is exactly why privacy programs must become operationally efficient:

  • standard workflows
  • strong templates
  • metrics
  • automation where it’s safe
  • clear ownership across teams

When budget pressure hits, the programs that survive are the ones embedded into “how work gets done.”

What I’d do in 2026 (a practical playbook)

If you’re leading privacy (or advising clients), here’s a grounded plan:

1) Build a privacy operating system (not a binder)

  • Define intake (projects, vendors, DSARs, incidents)
  • Set SLAs
  • Assign ownership per workflow
  • Track throughput
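To make "intake + SLAs + ownership" concrete, here's a minimal sketch of what an intake record with an SLA clock can look like. The class name, workflow kinds, and SLA day counts are my illustrative assumptions (the 30-day DSAR window echoes GDPR-style deadlines), not anything prescribed by the report.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed SLA policy in days; tune per workflow and jurisdiction.
SLA_DAYS = {"dsar": 30, "vendor_review": 10, "incident_triage": 2}

@dataclass
class IntakeItem:
    kind: str    # "dsar", "vendor_review", "incident_triage"
    owner: str   # a named owner per workflow, not a shared queue
    opened: date

    def due(self) -> date:
        """SLA deadline, computed from the intake date."""
        return self.opened + timedelta(days=SLA_DAYS[self.kind])

    def overdue(self, today: date) -> bool:
        return today > self.due()

item = IntakeItem(kind="dsar", owner="privacy-ops", opened=date(2026, 1, 5))
```

Even something this small gives you throughput and breach counts for free, which is exactly the data a shrinking team needs to argue for resources.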

2) Put privacy-by-design into the SDLC

  • Add privacy checkpoints into product rituals (PRD, design review, sprint grooming)
  • Require DPIA triggers for high-risk changes (especially AI)

3) Fix training to target behavior

  • short, role-based modules (marketing ≠ engineering ≠ HR)
  • measure impact using more than “completion”
  • connect it to real incidents and real scenarios

4) Give leadership a privacy dashboard

If you can’t show:

  • incident trends
  • DSAR volume and SLA
  • DPIA volume
  • vendor risk status
  • top risks and mitigation progress

…then privacy will always look like “noise” to executives.

5) Use AI carefully, where it actually helps

Start small:

  • classification assistance + human review
  • drafting support + approvals
  • ticket triage
  • evidence summarization

And document governance: what you use, what data touches it, and what you prohibit.

Final thought

The big message I took from State of Privacy 2026 is simple:

Technology is speeding up. Privacy teams are shrinking. And “compliance-only” programs will break.

The winners will be the organizations that treat privacy like a core operating capability—embedded, measured, and engineered.

If you had to pick one upgrade to your privacy program this year—training, privacy-by-design, AI governance, metrics, or staffing—what would you prioritize, and what’s stopping you today?

Masoud Salmani