The Beatings Will Continue Until Morale Improves

Myra Travin

Educational Futurist and Learning Innovator

AI Acceleration, Performance Myths,
and the New Work Compression Curve

“Workers are being told to use AI to become more productive — but most organizations haven’t defined what good looks like, how performance should be measured, or how to fairly evaluate AI-augmented work.” That line — echoed in recent reporting across business and tech media — captures the strange moment we are in.

Expectations are skyrocketing, metrics feel vaporous, and the clock is ticking.
Organizations are mandating AI adoption. Leaders are forecasting productivity gains. Boards are asking about efficiency ratios. Inside teams, however, something quieter is happening.
People are experimenting. Reworking outputs. Validating probabilistic reasoning. Rewriting prompts. Rebuilding workflows that were stable two months ago. Integrating uncertainty into systems designed for determinism. Learning in public while being evaluated in private.

The narrative is acceleration.
The lived experience is volatility.
And volatility without updated architecture produces fragility.


The Metric Vacuum

We are trying to run AI-era work on pre-AI performance systems — measuring turnaround, throughput, output volume, and utilization as though the processes underneath them were still stable.

They aren’t.
AI destabilizes process.

Stanford economist Erik Brynjolfsson has argued that the real productivity gains from AI do not come from the technology alone, but from complementary organizational redesign. The tool does not create transformation; the surrounding system determines whether gains materialize — and whether they endure. Right now, many organizations are implementing the tool without redesigning the system.

So what fills the vacuum?
Pressure.

When new measurement constructs don’t exist, leaders default to visible output. If performance fluctuates during ramp time, the assumption is underperformance rather than adaptation. If productivity spikes briefly, it’s interpreted as stabilization rather than volatility.

But we are not measuring the right curves.

If AI is compressing time, then time is what we should be studying.

Two curves matter more than most dashboards reflect.

The first is Time-to-Tool Competency — how long it actually takes for someone to move from access to reliable operational fluency with a specific AI system. Not attendance at training. Not license activation. Real discernment, verification discipline, and workflow integration.

The second is Time-to-Task Capability — how long it takes for a real business task to be performed effectively under live conditions using AI augmentation. Not in a sandbox. Not in a demo. In production, where quality and accountability matter.

Both curves are unstable because the tools themselves are evolving. The target is moving while people are aiming at it.
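The two curves above can be made concrete. A minimal sketch, assuming a hypothetical event log per worker with timestamps for tool access, verified fluency, task assignment, and effective performance in production (every field name here is illustrative, not a real schema):

```python
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class WorkerRecord:
    # All fields are hypothetical, for illustration only.
    access_granted: date   # license/tool access date
    fluent_in_tool: date   # first sustained, verified operational fluency
    task_assigned: date    # live business task assigned with AI augmentation
    task_capable: date     # task performed effectively under live conditions

def time_to_tool_competency(records):
    """Days from tool access to reliable operational fluency, per worker."""
    return [(r.fluent_in_tool - r.access_granted).days for r in records]

def time_to_task_capability(records):
    """Days from task assignment to effective performance in production."""
    return [(r.task_capable - r.task_assigned).days for r in records]

# Illustrative data: two workers ramping on the same tool.
records = [
    WorkerRecord(date(2025, 1, 6), date(2025, 2, 17),
                 date(2025, 1, 20), date(2025, 3, 10)),
    WorkerRecord(date(2025, 1, 6), date(2025, 3, 3),
                 date(2025, 1, 20), date(2025, 4, 1)),
]

print("median days to tool competency:", median(time_to_tool_competency(records)))
print("median days to task capability:", median(time_to_task_capability(records)))
```

The point of the sketch is what it measures: elapsed time to verified capability, not attendance, license activation, or output volume. Because the tools keep changing, these distributions would need to be re-baselined with each major tool revision.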

Yet most organizations measure output velocity as if stabilization has already occurred.
That is the asymmetry. We measure acceleration but not absorption. We evaluate individuals but rarely ask whether the system itself has been recalibrated for volatility.

Shoshana Zuboff once wrote that “surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data.” Her point was about data extraction, but its deeper warning is about asymmetry — systems designed without reciprocal accountability. That risk is emerging in AI workplaces.

The system measures the worker. The dashboard measures the output. The metrics evaluate the human.
But who evaluates the metric?

Who measures whether ramp time is realistic? Who accounts for cognitive compression? Who asks whether the architecture is mature enough to absorb the speed being demanded? If measurement flows in only one direction, pressure becomes the default correction mechanism. And pressure is not architecture.

What’s needed instead is reciprocal measurement — the willingness to measure not just performance, but the stability of the system that defines performance. Without that reciprocity, asymmetry widens. Leadership sees lift while workers experience compression. Dashboards show motion even as durability weakens underneath.

What If the System Can’t Keep Up?

Here is the question we are not asking loudly enough: What if people, projects, and systems cannot keep up with the compression curve?

Earlier this week, Microsoft’s AI chief stated that “most, if not all” white-collar tasks could be automated within the next 12 to 18 months. That is not speculative futurism. That is someone operating at the center of enterprise AI deployment describing the trajectory he sees. If leaders at that level are projecting compression on that scale, then the pace of capability evolution is not theoretical. It is operational.
If that projection is directionally correct, the volatility curve steepens dramatically.

Time-to-Tool Competency begins to compress even as the tools redefine themselves. Time-to-Task Capability becomes unstable as workflows shift underneath live work. Role boundaries blur. Performance frameworks strain to keep up with categories that are dissolving in real time.

And this is just AI.

AGI represents exponential compression. We are Loki — clever, strategic, convinced we’re in control — and AGI is the Hulk, the force that reminds us we may be far less powerful than we think.

Right now, we are learning to integrate probabilistic reasoning systems into structured environments. If those systems begin operating autonomously across task domains, the volatility multiplies.
If the architecture is already strained under AI-level acceleration, what happens when the force intensifies?

What happens to measurement when the category of “task” dissolves?
What happens to performance evaluation when human contribution shifts from execution to orchestration?
What happens when stabilization windows collapse entirely?

Even this act — writing, analyzing, reflecting — exists inside that tension.

The question is not whether work disappears.

The question is whether our institutions can redesign themselves as fast as the tools evolve.

The Neo-Luddite Warning

Neo-Luddites have real concerns. They are responding to compression without recalibration. If leaders refuse to take that seriously, the backlash won't be symbolic. When people feel structurally destabilized at scale, they don't just disengage; they burn the house down. Now imagine that sentiment multiplied across an entire workforce.

That is not hyperbole.

It is what happens when asymmetry hardens — when people are measured by systems that refuse to measure themselves, when adaptation is demanded but stabilization is ignored, when acceleration is rewarded but absorption is invisible.

“The beatings will continue until morale improves.”

Pressure is not redesign.
Mandates are not measurement systems.
Acceleration is not transformation.

AI is not simply a tool shift. It is a structural shift in how knowledge work stabilizes. Until we redefine performance in a volatile environment — until we make reciprocal measurement the norm rather than the exception — morale will not improve.

Because morale is not the root issue.
The system is.

I explore these tensions — acceleration, asymmetry, and the fragility hidden inside productivity narratives — more deeply in my upcoming book, Revenge of the Neo-Luddites, out March 2026.
Because the future of work will not be defined by how fast we move.
It will be defined by whether our systems — and our people — can absorb the speed without breaking.