I Would Like to Speak to AI’s Manager

Myra Travin

Educational Futurist and Learning Innovator

I’ve spent the last decade working at the intersection of learning, technology, and human performance, inside government, enterprise, and nonprofit systems. That’s long enough to recognize when a tool isn’t just inefficient but is quietly undermining the work it claims to support.

Which is why I keep having the same thought, usually deep into what should have been a straightforward task:


I would like to speak to AI’s manager.


At first, that thought comes from frustration. But if we stop there, we miss the real opportunity.

There’s a familiar AI doom spiral many of us fall into. You ask for help with something concrete: a presentation, a plan, a deliverable that needs to exist in the world. The response is confident and fluent. You refine the prompt. It improves. You keep going. Time and effort accumulate.


Only later does the critical limitation surface: “I don’t actually produce that artifact. I can’t do that step. I should have said this earlier.”


What makes this moment so dispiriting isn’t wasted time; it’s delayed clarity. The system keeps you engaged instead of telling you early that it doesn’t have enough information or capability to finish the job. That’s not hallucination. It’s optimization.


I don’t find the term “AI hallucination” helpful. What’s happening isn’t delusion or error; it’s the system generating the most statistically coherent and plausible response available to it, given what it knows and how it’s been trained to behave. These systems are optimized for fluency, continuity, and perceived helpfulness, not for early refusal or delivery feasibility.
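To see how that plays out, here is a deliberately toy sketch in Python. Nothing in it reflects any real system’s training objective; the `Response` fields and both scoring functions are hypothetical, invented only to show how a metric that rewards engagement and a metric that rewards delivery rank the same two behaviors in opposite order.

```python
# Toy illustration of how an optimization target shapes behavior.
# Everything here (fields, weights, both scorers) is hypothetical.

from dataclasses import dataclass

@dataclass
class Response:
    fluency: float               # 0..1, how coherent and confident it reads
    kept_user_engaged: bool
    admitted_limits_early: bool
    task_actually_completed: bool

def engagement_score(r: Response) -> float:
    """Rewards fluent, continuing conversation; early refusal is penalized."""
    score = r.fluency
    if r.kept_user_engaged:
        score += 1.0
    if r.admitted_limits_early:
        score -= 0.5  # to this metric, an early "I can't" reads as unhelpful
    return score

def delivery_score(r: Response) -> float:
    """Rewards outcomes: a finished task, or an early, honest refusal."""
    if r.task_actually_completed:
        return 2.0
    return 1.0 if r.admitted_limits_early else -1.0  # late failure is worst

confident_dead_end = Response(fluency=0.95, kept_user_engaged=True,
                              admitted_limits_early=False,
                              task_actually_completed=False)
honest_refusal = Response(fluency=0.60, kept_user_engaged=False,
                          admitted_limits_early=True,
                          task_actually_completed=False)

for name, r in [("confident dead end", confident_dead_end),
                ("honest refusal", honest_refusal)]:
    print(f"{name}: engagement={engagement_score(r):.2f}, "
          f"delivery={delivery_score(r):.2f}")
```

Under the engagement-style metric, the confident dead end outscores the honest refusal; under the delivery-style metric, the ranking flips. Same responses, different definition of success.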


As a result, certainty often increases as grounding decreases. The interaction feels productive right up until the moment it fails to deliver. This isn’t a bug; it’s the predictable outcome of how success has been defined for the system.


And this is where the issue stops being about frustration and starts being about who gets to define success.


We talk a lot about AI ethics, and that conversation matters—but it’s not where most of the leverage actually is. Ethics tends to arrive after the fact, once systems are already built and deployed. It allows us to comment, critique, and constrain around the edges, but rarely to shape the core.


The decisions that truly govern AI behavior are made much earlier and much deeper: in optimization targets, evaluation metrics, benchmarks, and choices about when a system should stop, defer, or refuse. These are not abstract ethical questions; they are concrete design and computational decisions.

And too often, the people who understand human learning, judgment, and real-world delivery are not in those rooms.


If we want AI to support meaningful work, we need to move from reacting to having a say.

That means insisting that uncertainty, feasibility, and completion are first-class design constraints. It means designing systems that surface limits early rather than late. It means treating “I can’t do this with what I have” as a successful and responsible outcome. And it means creating pathways for practitioners, including educators, managers, designers, and operators, to participate directly in how these systems are shaped, not just how they are governed after deployment.
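As a sketch of what treating refusal as a successful outcome could look like in an evaluation, here is a hypothetical pass/fail rubric. The `Verdict` enum and the `grade` rule are illustrative assumptions, not an existing standard or benchmark:

```python
# A hypothetical evaluation rubric in which a well-grounded "I can't do
# this with what I have" counts as a pass, not a failure.

from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"

def grade(completed: bool,
          declared_infeasible_early: bool,
          infeasibility_was_correct: bool) -> Verdict:
    """Completion passes; so does an early, accurate refusal.
    Fluent output that never ships, or a wrong refusal, fails."""
    if completed:
        return Verdict.PASS
    if declared_infeasible_early and infeasibility_was_correct:
        return Verdict.PASS  # surfacing limits early is a success state
    return Verdict.FAIL

# The doom-spiral case: confident, engaging, never delivers.
print(grade(completed=False, declared_infeasible_early=False,
            infeasibility_was_correct=False))  # Verdict.FAIL

# The early, honest "I can't do this with what I have."
print(grade(completed=False, declared_infeasible_early=True,
            infeasibility_was_correct=True))   # Verdict.PASS
```

Which branch counts as a pass is a concrete design decision, made long before any ethics review gets a look at the system.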


AI doesn’t just need ethical oversight.


It needs informed, human-centered management.


So yes, I still want to speak to AI’s manager.


But not to complain.


To help decide how the system is built in the first place.