
Over the past year, my perspective on “AI risk” has changed.
Before: I thought the hard part for companies would be picking the right model (vendor, accuracy, features).
After: I’m convinced the real risk is what you build around the model — infrastructure, governance, cybersecurity controls, and cross-border compliance.
That shift is exactly what Moody’s highlights in its 2026 Global AI Outlook: AI capabilities are leaping forward, but the business risk surface is expanding even faster.
Below is my “compliance + cyber” interpretation of the report — and what I think every leadership team should do next.
Moody’s describes a market where models are still making major performance breakthroughs (better reasoning, multimodal inputs, more tool use and “agentic” behavior). The key point isn’t which model is “best.”
The point is: competition is forcing rapid improvement, and open-source models, especially from China and other ecosystems, are closing the gap. That means “AI advantage” will be harder to defend through model choice alone.
My opinion: Model selection is becoming a commodity decision. Process design and governance are not.
This was one of the most important sections in the report.
Moody’s says adoption is expanding, but firm-level productivity gains are still patchy. AI helps most where work is routine, document-centric, customer-facing, or analytics-heavy — but complex workflows still create friction. Even if the model performs well in a demo, real value requires redesigning full processes, often on top of messy legacy systems.
A lot of companies are doing “AI theater”:
Moody’s basically implies: AI success depends on the maturity of the operating stack, not just the AI layer.
My opinion: If your data is fragmented and your processes are unclear, AI will amplify chaos before it delivers productivity.
Moody’s expects AI to widen gaps between leaders and laggards. They point out that large firms are already embedding AI across R&D, supply chain optimization, forecasting, fraud detection, and compliance — while smaller players struggle with cost, talent, and dependency on third parties.
They also list characteristics that increase disruption risk:
If you’re a mid-sized firm, doing nothing is not “playing it safe.”
It’s choosing to compete against companies that will steadily get faster, cheaper, and more automated.
This is where the report gets very concrete — and honestly, a bit alarming.
Moody’s describes:
They also warn that demand will likely exceed supply through 2027/2028, giving pricing power to infrastructure owners — while long build cycles create a risk of overcapacity later if monetization doesn’t catch up.
Infrastructure constraints create two second-order risks:
My opinion: In 2026, “AI strategy” without a compute cost model is just a pitch deck.
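To make “compute cost model” concrete, here is a minimal sketch of what one could look like. Every number in it (request volumes, token counts, per-token prices, the stress multipliers) is an illustrative assumption, not a figure from the report:

```python
# Minimal sketch of a compute cost model for an LLM-backed workflow.
# All volumes and prices below are illustrative assumptions, not Moody's data.

def monthly_inference_cost(
    requests_per_day: float,
    input_tokens: float,        # avg input tokens per request (assumed)
    output_tokens: float,       # avg output tokens per request (assumed)
    price_in_per_m: float,      # $ per 1M input tokens (assumed)
    price_out_per_m: float,     # $ per 1M output tokens (assumed)
    days: int = 30,
) -> float:
    tokens_in = requests_per_day * input_tokens * days
    tokens_out = requests_per_day * output_tokens * days
    return tokens_in / 1e6 * price_in_per_m + tokens_out / 1e6 * price_out_per_m

# Base case: a hypothetical document-review workflow.
base = monthly_inference_cost(
    requests_per_day=5_000, input_tokens=2_000, output_tokens=500,
    price_in_per_m=3.00, price_out_per_m=15.00,
)
# Stress case: usage triples and the vendor raises prices 20%.
stressed = monthly_inference_cost(
    requests_per_day=15_000, input_tokens=2_000, output_tokens=500,
    price_in_per_m=3.60, price_out_per_m=18.00,
)
print(f"base: ${base:,.0f}/month, stressed: ${stressed:,.0f}/month")
```

The point is not the specific numbers; it is that a modest usage and pricing shift moves the monthly bill by more than 3x, which is exactly the pricing-power risk the report describes.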
Moody’s argues geopolitical fragmentation is reshaping access to chips, compute, and data infrastructure. The practical enterprise impact is huge:
Multinationals may be forced to run separate AI stacks across regions to comply with export controls, data transfer laws, and local technical standards.
They highlight regulatory divergence:
The compliance cost here is not just legal review: it’s duplicated systems, duplicated monitoring, duplicated vendor management, and duplicated incident response.
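What “separate AI stacks per region” can mean in practice is easier to see in configuration form. The sketch below is hypothetical: the endpoints, retention periods, and export flags are placeholders, not legal guidance:

```python
# Sketch: per-region AI stack configuration forced by divergent rules.
# Region keys are generic; endpoints and policy values are hypothetical.
REGION_STACKS = {
    "eu": {
        "model_endpoint": "https://eu.example-inference.internal",  # assumed
        "data_residency": "eu-only",
        "logging_retention_days": 180,   # illustrative, not legal advice
        "cross_border_export": False,
    },
    "us": {
        "model_endpoint": "https://us.example-inference.internal",  # assumed
        "data_residency": "us-only",
        "logging_retention_days": 365,
        "cross_border_export": True,
    },
}

def route_request(user_region: str) -> dict:
    """Pick the stack for the caller's region; fail closed if unknown."""
    if user_region not in REGION_STACKS:
        raise ValueError(f"no approved AI stack for region {user_region!r}")
    return REGION_STACKS[user_region]
```

Every entry in that mapping is a system someone has to build, monitor, patch, and audit separately, which is where the duplicated cost comes from.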
My opinion: The next competitive advantage is not just “using AI.” It’s deploying AI globally without breaking laws or security controls.
When AI becomes agentic (able to call tools and take multi-step actions on its own), the threat model changes.
Moody’s explicitly calls out:
They also note that cyber insurers are trying to limit their exposure to systemic generative-AI incidents, including through policy exclusions.
My opinion: If your AI can take actions in your systems, it must be treated like a privileged identity — with the same security discipline you’d apply to an admin account.
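Treating an agent like a privileged identity can start with something as simple as a per-agent allowlist with audit logging, the same pattern used for admin accounts. A minimal sketch, with hypothetical agent IDs and tool names:

```python
# Sketch: gate every agent tool call the way you would gate a privileged
# account. Agent IDs, tool names, and policies here are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

# Explicit allowlist per agent identity: least privilege by default.
POLICY = {
    "claims-triage-agent": {"read_claim", "draft_summary"},  # read-mostly
    "ops-agent": {"read_claim", "update_ticket"},
}

def execute_tool(agent_id: str, tool: str, args: dict):
    """Deny by default, log every decision, then dispatch the tool."""
    allowed = POLICY.get(agent_id, set())
    if tool not in allowed:
        audit.warning("DENY %s -> %s %s", agent_id, tool, args)
        raise PermissionError(f"{agent_id} may not call {tool}")
    audit.info("ALLOW %s -> %s %s", agent_id, tool, args)
    return TOOLS[tool](**args)

# Hypothetical tool implementations for the sketch.
TOOLS = {
    "read_claim": lambda claim_id: {"claim_id": claim_id, "status": "open"},
    "draft_summary": lambda text: text[:100],
}
```

The design choice that matters is deny-by-default: an agent with no policy entry can do nothing, and every allow or deny decision leaves an audit trail for incident response.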
If you want AI value without AI chaos, here’s the sequence I recommend:
Start with:
Be honest about what you can support:
Minimum controls I’d require:
If you operate across EU/US/Turkey/MENA/Asia:
Moody’s notes firms are improving measurement by tracking specific workflow improvements (accuracy, processing time, claims cycle time, etc.), but there’s no universal framework yet.
So build your own:
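Whatever framework you land on, the core of it is a per-workflow before/after comparison on the kinds of metrics the report mentions (accuracy, processing time, claims cycle time). A minimal sketch, with made-up baseline and pilot numbers:

```python
# Sketch: per-workflow before/after measurement. Metric names follow the
# report's examples; all baseline and pilot values are illustrative.
from dataclasses import dataclass

@dataclass
class WorkflowMetric:
    name: str
    baseline: float            # pre-AI measurement
    with_ai: float             # measured during the AI pilot
    higher_is_better: bool = True

    def improvement_pct(self) -> float:
        delta = self.with_ai - self.baseline
        if not self.higher_is_better:
            delta = -delta     # e.g. shorter cycle time is an improvement
        return 100.0 * delta / self.baseline

metrics = [
    WorkflowMetric("extraction accuracy", baseline=0.86, with_ai=0.93),
    WorkflowMetric("claims cycle time (days)", 12.0, 8.5,
                   higher_is_better=False),
]
for m in metrics:
    print(f"{m.name}: {m.improvement_pct():+.1f}%")
```

The discipline this forces is the valuable part: you cannot fill in `baseline` without measuring the workflow before the AI goes in, which is exactly the step most pilots skip.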
Moody’s isn’t saying “AI is a bubble.” They’re saying risk is rising because capital spending, infrastructure bottlenecks, uneven value capture, cyber exposure, and regulatory divergence are all colliding at the same time.
My strong opinion:
2026 will reward companies that treat AI as a governed system — not a tool.
Source: Moody’s report, Artificial Intelligence Global 2026 Outlook — Risks are rising