“Change is the process by which the future invades our lives.”
—Alvin Toffler, Future Shock (1970)
On 28 January 1986, the Space Shuttle Challenger exploded 73 seconds after launch, killing all seven astronauts aboard.
The culprit was a rubber O-ring, one of the cheapest components in the entire vehicle.
In the cold Florida morning, the ring lost its flexibility and failed to seal a joint in the solid rocket booster. Hot gases escaped, the external fuel tank ruptured, and NASA’s worst nightmare unfolded on live television.
The lesson was brutal but simple: in a complex system where every component matters, one weak link can bring down the whole operation.
Economist Michael Kremer later formalised this insight into what he called the ‘O-Ring Theory of Economic Development’.
And it turns out this framework might be the best lens we have for understanding what AI will actually do to your job.
The Checklist Fallacy of AI Jobs
AI leaders were on stage at Davos this week, offering two distinct visions for the future of AI.
One from Anthropic’s Dario Amodei. The other from DeepMind’s Sir Demis Hassabis.
Simplistically, you could split these ideas between human obsolescence and enhancement.
Amodei’s vision is one of worker substitution. Hassabis’s, in contrast, is of AI as a tool for superhuman capability.
Bringing the question back to our O-ring analogy, there’s a problem with the predictions you’ve seen over the past few years. Most predictions about AI and jobs treat work like a checklist.
Researchers identify the tasks in a job, figure out which ones AI can do, and calculate an ‘exposure score’.
If AI can handle 80% of your tasks, you’re supposedly 80% at risk of being replaced.
This is the logic behind widely cited studies that predict millions of jobs will vanish.
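To make that checklist logic concrete, here’s a minimal sketch in Python (my own illustration with made-up tasks, not the methodology of any particular study): tally the tasks AI is judged able to do and divide by the total.

```python
# Illustrative only: the 'checklist' view scores a job by the share
# of its tasks an AI is judged able to perform (tasks are made up).
tasks = {
    "draft client report": True,         # True = assumed automatable
    "summarise research": True,
    "schedule meetings": True,
    "negotiate with the client": False,  # False = assumed human-only
    "sign off on the advice": False,
}

exposure = sum(tasks.values()) / len(tasks)
print(f"Exposure score: {exposure:.0%}")  # 60% -- so '60% at risk' under this logic
```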
And the reason behind these breathless headlines…

Source: 9News.com.au
But a new paper from economists argues this view is fundamentally flawed. Jobs aren’t checklists. They’re chains.
Just like the Challenger’s rocket boosters, many jobs involve tasks where the value of one part depends critically on the quality of another.
One weak link drags down the whole output.
When tasks are interconnected like this, automating some of them doesn’t just subtract work from your plate.
It fundamentally changes the value of everything that remains.
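Kremer’s model captures this with a simple shift in arithmetic: output depends on the product of task qualities, not their sum. Here’s a toy sketch (my own numbers, purely illustrative) of how the two views diverge when one link in the chain is weak.

```python
# Toy comparison of the two production logics (illustrative numbers).
# Each task has a quality between 0 and 1.
qualities = [0.95, 0.95, 0.95, 0.95, 0.40]  # one weak link in the chain

# Checklist (additive) view: value is roughly the average task quality.
additive_value = sum(qualities) / len(qualities)

# O-ring (multiplicative) view: value is the product of task qualities,
# so the weakest link caps the whole chain.
oring_value = 1.0
for q in qualities:
    oring_value *= q

print(f"Additive view:       {additive_value:.2f}")  # ~0.84 -- looks fine
print(f"O-ring (chain) view: {oring_value:.2f}")     # ~0.33 -- the weak link dominates
```

In the additive view, the weak task barely moves the average. In the multiplicative view, it caps the value of everything else.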
The Focus Effect
Here’s where it gets interesting if you’re batting for humans.
When AI takes over some of your tasks, you don’t just sit idle. You reallocate your time and energy to the remaining tasks.
You get better at them. You become more focused.
In the paper, they call this the ‘focus mechanism’. And it creates a counterintuitive result: partial automation can actually increase your wages rather than decrease them.
Why? Because you become the ‘high-value bottleneck’.
The tasks you’re left doing are the ones machines can’t handle. And with AI amplifying everything else in the chain, those human bottleneck tasks become much more valuable.
Think of it this way: if AI does nine out of ten tasks brilliantly, and you do the tenth task, all that AI brilliance flows through your work.
You’re not 10% of the value. You’re the gateway for 100% of it.
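Here’s a rough numeric sketch of that gateway effect, using the same multiplicative chain logic (assumed numbers, not drawn from the paper): the better the AI gets at its nine tasks, the more an improvement in the one human task is worth.

```python
# Illustrative: a 10-task multiplicative chain where AI handles 9 tasks
# and a human handles the 10th. How much does improving the human task matter?
def chain_value(ai_quality: float, human_quality: float, ai_tasks: int = 9) -> float:
    """Output of the whole chain: product of all task qualities."""
    return (ai_quality ** ai_tasks) * human_quality

for ai_q in (0.7, 0.9, 0.99):
    # Value gained by lifting the human task from 0.8 to 0.9.
    gain = chain_value(ai_q, 0.9) - chain_value(ai_q, 0.8)
    print(f"AI quality {ai_q:.2f}: value of the human improvement = {gain:.3f}")
    # Gain rises from ~0.004 to ~0.091 as the AI improves.
```

That rising gain is the ‘high-value bottleneck’ argument in miniature: AI quality multiplies through the human’s contribution, so the human task’s marginal value climbs as the rest of the chain improves.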
The Bank Teller Paradox
We’ve seen this play out before.
When ATMs first arrived, the obvious prediction was mass unemployment for bank tellers.
Machines could handle cash, deposits, and withdrawals far more efficiently.
The checklist view suggested that tellers were toast.
Instead, something else happened. Teller employment didn’t collapse. It shifted.
With ATMs handling the mechanical tasks, tellers could focus on other duties: relationship banking, complex problem-solving, and high-value customer interactions.
They became better at the things machines couldn’t do. And many became more valuable to their employers, not less.
Meanwhile, branch operating costs fell, allowing banks to open more branches. Employment held steady while real wages increased.

Source: International Monetary Fund
Jagged Intelligence
There’s another wrinkle that makes the doom-and-gloom predictions even less likely in my eyes. As you’ve probably experienced, AI is ‘jagged’.
By that, I mean current AI systems don’t perform evenly across all tasks. They’re brilliant at some tasks and bafflingly incompetent at others.
ChatGPT can write poetry in 60-plus languages but might fumble basic arithmetic. AI can crush grandmasters at chess, yet it still struggles to fold a towel.
This jaggedness means that in most jobs, there will remain tasks where humans have a decisive advantage.
And under O-ring logic, those human-dominant tasks become the bottlenecks that determine the overall quality of the output.
The more capable AI becomes at the tasks it’s good at, the more valuable those remaining human bottlenecks become.
Everyone Becomes a Manager
This points toward a future I’ve written about before: one where everyone becomes a ‘manager’ of AI.
Not a manager in the traditional sense, but someone who directs, orchestrates, and quality-checks AI output. The human role shifts from doing every task to ensuring the right tasks get done right.
In O-ring terms, you become the critical quality control point in an otherwise automated chain.
Your judgment, your creativity, and your ability to handle the unexpected become the irreplaceable bottlenecks.
This doesn’t mean there won’t be disruption. Some jobs really are pure checklists and are already facing pressure at the entry level.
And pushing the timeline further out, full automation remains possible when AI eventually masters every task in a chain.
As I’ve covered before, the problem with exponential change is that it can appear to be doing little. Right up until the moment it’s changed the world.
But for now, the O-ring view suggests we’ve been asking the wrong questions about work.
It’s not ‘which tasks can AI do?’
It’s ‘which remaining human tasks will become more valuable when AI handles the rest?’
The answer to that question might determine whether the AI revolution makes you redundant, or irreplaceable.
Regards,

Charlie Ormond,
Small-Cap Systems and Altucher’s Investment Network Australia