Vol. 7 — The Decision Window
It's open now.
Last week Stanford published the most comprehensive independent report on AI’s trajectory ever assembled. Four hundred and twenty-three pages. Nine chapters. Contributions from over a hundred researchers. Data sourced from every continent.
The title: *AI Index Report 2026.*
The thesis, stated plainly in the co-chairs’ letter: the gap between what AI can do and how prepared we are to manage it runs through every chapter of this year’s report.
That sentence should sound familiar. It is the same thesis this series has been building toward since Vol. 1.
The difference is that Stanford didn’t get it from watching one industry. They got it from measuring all of them. And the numbers they found are sharper than anything I’ve put in front of you so far.
---
## The Numbers
Generative AI reached 53% population-level adoption within three years. Faster than the personal computer. Faster than the internet. No technology in modern history has crossed into the mainstream this quickly.
Eighty-eight percent of organizations now report using AI in at least one business function. That’s not experimentation. That’s saturation.
And here is the number that belongs on the wall of every IT leadership office in the country: AI agent deployment — the actual integration of AI into operational workflows — remains in the single digits across nearly all business functions.
Eighty-eight percent adoption. Single-digit deployment. That is the readiness gap measured at global scale.
Now look at the front edge of the workforce. Employment for U.S. software developers aged 22 to 25 fell nearly 20% from its 2024 level. Not contractors. Not offshore. Full-time, domestic developers at the start of their careers. The same study found that headcount for older developers continues to grow. The displacement is not uniform. It is generational. The pipeline is being cut, not the senior workforce.
One-third of organizations now expect AI to reduce their headcount over the coming year. The functions they expect to cut first: service operations, supply chain, and software engineering.
And the productivity data? It depends on where you look. Customer support agents using AI resolve 14 to 15% more issues per hour. Developers using GitHub Copilot complete 26% more pull requests. But open-source developers using AI assistance became 19% slower. Engineers who relied heavily on AI for learning showed no speed improvement and faced what researchers call learning penalties — the AI helped them finish the task but blocked them from actually understanding it.
The pattern is clear. AI accelerates structured, measurable, supervisable work. It degrades judgment-heavy, context-dependent work. The organizations that treat AI as a universal accelerant are making a category error.
---
## The Sovereignty Question
Stanford devoted an entire analytical framework to a concept they call AI sovereignty — a country’s capacity to make independent decisions over the development, deployment, and governance of AI systems within its jurisdiction. They broke it into five layers: infrastructure, data, models, applications, and talent.
That framework isn’t just for governments.
Replace “country” with “organization” and read it again. Your capacity to make independent decisions over the AI systems you depend on. That is what sovereignty means at the enterprise level. And the data says most organizations don’t have it.
The Foundation Model Transparency Index — which measures how much frontier AI companies disclose about their models — dropped from 58 to 40 this year. The most capable models are now the least transparent. Training data, parameter counts, compute costs — none of it disclosed for several of the most widely used systems. You are building your operations on foundations you cannot inspect.
Meanwhile, nearly every leading AI chip is fabricated by a single company — TSMC — at a single facility in Taiwan. The United States hosts over 5,400 data centers, ten times any other country, and its entire AI hardware supply chain depends on one foundry in one geopolitically contested geography.
Sovereignty isn’t an abstraction. It is a question about what you control and what controls you.
---
## What This Means for IBM i Organizations
If you run an IBM i shop, you already know what an aging workforce looks like. You’ve been living it for a decade. What the Stanford report tells you is that the problem just acquired a second front.
The junior developer cliff is not coming for your RPG programmers — that pipeline dried up years ago. It is coming for the modern-stack developers you were planning to hire alongside them. The 22-to-25-year-old Python developer, the early-career cloud engineer, the junior data analyst — those roles are being compressed by the same AI tools your organization is adopting. CS enrollment at U.S. four-year universities fell 11% last year. Students are reading the market.
So the traditional modernization playbook — keep the RPG running while you hire a modern team around it — is running into a labor market that isn’t producing that team at the rate it used to.
At the same time, the AI agents that could eventually bridge this gap are stuck in pilot. Single-digit deployment across nearly all business functions. That means the technology exists, the investment is flowing, and almost nobody has figured out how to actually operate with it.
This is the decision window.
Not whether AI will change your organization. That question was answered in Vol. 1. Not whether you need a posture. That was Vol. 6.
The decision is whether you build the governance, the architecture, and the operating model to absorb AI before the workforce pipeline narrows further and the window to act at your own pace closes.
---
## The Trust Problem
One more number from Stanford, and it’s the one that should concern leaders the most.
The United States reported the lowest trust in its own government to regulate AI of any country surveyed. Thirty-one percent. The global average was 54%. Singapore was 81%.
When 73% of AI experts say the technology will positively impact jobs, and only 23% of the public agrees, you have a 50-point perception gap. That gap lives inside your organization too. Your executive team reads one story about AI. Your workforce reads another. And when the mandate comes down without the trust being built first — that’s what we called the Mandate Trap in Signal #154.
Mandates without trust produce compliance theater, not governance.
The organizations that close the readiness gap won’t be the ones that adopted fastest. They’ll be the ones that built trust — with their workforce, their customers, and their own operating model — while they adopted.
---
## The Window
Andrew Yang said one to three years. That was Signal #1 in Vol. 1 of this series.
Stanford’s data doesn’t contradict that timeline. It sharpens it.
Eighty-eight percent adoption, single-digit deployment. A junior workforce being cut before the senior workforce retires. A transparency index falling while incidents rise 55% year-over-year. Benchmark evaluations saturating in months. Productivity gains that depend entirely on whether the work was designed for AI or had AI dropped on top of it.
The technology is not waiting. The labor market is not waiting. The governance frameworks that should be guiding all of this — from Washington to your own C-suite — are not keeping pace.
The window is the gap between where you are and where the environment demands you be. And the data says that gap is widening, not closing.
What IBM i organizations do in the next 90 days will determine whether they cross that gap on their own terms or are dragged across it on someone else’s.
---
## Next
Vol. 8 arrives next week. It's called *The Room*, and it will be the last installment of this series before PowerUp 2026.
The question was always: *Is your organization ready?*
The data is in. The answer, for most, is not yet.
The better question is: *What are you going to do about it?*
---
Reggie Britt is a 30-year IBM i practitioner and technologist. Signal4i tracks the AI signals that matter to IBM i organizations. This is Vol. 7 of an 8-part series.
*Signal Stack v8.3 · 178 Signals · 14 Categories*
*Stanford HAI AI Index 2026 cited throughout*
*Signal4i — signal4i.ai*