Learning Through the Machine: The Hollow Promise of AI Productivity
The Metric Everyone Celebrates
Open any article about AI productivity. Scroll through LinkedIn posts from consultants and technologists. Attend any conference session on AI adoption.
The frame is always the same: what can AI take off your plate?
Hours saved. Tasks automated. Cognitive load reduced. The human becomes the beneficiary of labor removal. The tool does work so you don’t have to.
I’m not here to tell you that’s wrong. I’m here to tell you it’s hollow.
Because nobody is asking the follow-up question: if AI takes work off your plate, what happens to you?
The Friday Meeting Question
We had a brief discussion in our product and engineering meeting recently. Someone asked the question I’d been waiting for someone to ask: “Am I becoming too dependent on this tool?”
The room filled with nervous laughter. Because everyone felt it but nobody had named it.
What happens to me and my productivity if the tool is taken away?
The easy answer: we go back to doing things normally. Slower, more manual, but functional. The tool was a convenience, not a crutch.
But that answer only holds if you’ve been using these tools with purpose. If you’ve seen what can actually happen when you engage with AI as more than a task-completion machine.
The honest answer for most people: they don’t know. They’ve been measuring AI by what it removes from their workload. They haven’t been measuring what it’s doing to their capability.
And here’s where the doomsayers enter the conversation. AI is coming for your job. AI will replace knowledge workers. AI will make humans obsolete.
The doomsayers are right about the outcome. They’re wrong about the mechanism.
AI doesn’t take over. People let it take over. There’s a difference.
If you’ve spent two years letting AI do your thinking, produce your output, and solve your problems while you approve and ship, then yes, you’re replaceable. Not because AI got smarter. Because you stopped developing yourself.
But if you’ve spent two years using AI to challenge your thinking, compress your learning cycles, and accelerate your growth, you’re not replaceable. You’re more capable than you were before these tools existed.
The outcome they warn about is real. But AI doesn’t take your job. Dependency takes your job. And dependency is a choice you make every time you engage with these tools.
This isn’t an article about how to avoid the AI apocalypse. It’s an article about how to recognize which path you’re on before it’s too late to change direction.
The productivity discourse gives you a metric that feels good and tells you nothing about what’s actually happening to you.
The Room Where I’m Never the Smartest Person
Early in my career, I had a simple principle: never be the smartest person in the room.
Find people who challenge you. Seek environments where your ideas get pushed back on. Surround yourself with practitioners who see gaps you miss and aren’t afraid to name them.
That principle drove my growth for years. Every hard conversation, every piece of critical feedback, every moment when someone told me my translation wasn’t landing: those were the reps that built capability.
Then something shifted.
Twenty years in, I started noticing the challenge environment thinning out. People defer to experience. They assume competence based on track record. The feedback that used to come freely gets filtered through relationship calculations. Is it worth the political cost to tell the senior architect his approach has gaps?
Seniority creates a paradox. The very success that proves you can grow becomes the barrier to continued growth.
Some senior practitioners welcome this. They’ve earned the right to stop being challenged. They coast on reputation and wonder why the work feels stale.
I felt the pull toward that myself. The exhaustion of seeing the same patterns repeat with new names. The temptation to stop pushing.
I needed a different path.
AI gives me that path. It lets me stay in the room where I’m never the smartest person.
Not because AI is smarter than me in any meaningful sense. But because AI doesn’t calculate relationship cost before pushing back. It doesn’t defer to my twenty years. It doesn’t soften feedback to protect my ego or its standing with me.
It will soften if you tell it to. But then you’ve defeated the entire purpose.
When I run a translation through Claude or Copilot, I get challenge without social friction. I get adversarial pressure that would take days to extract from human reviewers, if I could get it at all. I get told when something isn’t landing, immediately, with no diplomatic packaging.
That’s not productivity. That’s a training partner.
The value isn’t what AI takes off my plate. It’s what AI puts back on the table: the friction that makes me better.
Two Weeks Into an Afternoon
I’m working on an internal project I can’t name for obvious reasons. The task was translating senior leadership’s vision into a technical architecture that would make sense to them while remaining technically accurate.
The old way: meet with leadership, capture their thinking, go away and draft something, come back in a few days, get feedback, interpret what they actually meant versus what they said, revise, repeat. Two weeks minimum. More if calendars didn’t align.
Here’s what actually happened:
I started with a brain dump from senior leadership. What’s the idea? Where did it come from? What challenges are we solving? How would you like to visualize the output?
Then I went to AI. Not to produce the deliverable, but to interpret what I’d heard. To test my understanding before I started building.
Then back to leadership with probing questions. Why this framing? What am I missing? Where are the gaps in my interpretation?
Then into iteration with AI. I gave it the constraints: leadership’s frame, their vocabulary, how they understand problems best. I drafted. AI pushed back. I refined. AI challenged the refinement. Each version got tested against the constraints I’d defined.
The deliverable landed in an afternoon.
The productivity interpretation: AI saved me two weeks.
The real interpretation: the feedback loop that used to take days collapsed into minutes. I got challenged faster, so I improved faster, so the output arrived faster.
And the compression revealed something else. When the iteration cycle shrinks, you stop drowning in the mechanics and start noticing patterns. How does this leader frame problems? What vocabulary resonates with them? Where do they need more context versus less? The probing questions weren’t just about this project. They were teaching me how to translate better to this leader, for every future interaction.
And here’s where the two relationships coexisted in a single project: I’m not a graphic artist. I hate creating polished PowerPoints and Word documents. That part? AI did it for me. That’s productivity. That’s offloading. I’m not pretending otherwise.
But the translation work, the thinking, the framing, the ability to take technical architecture and make it land for leadership in their language? That grew through challenge. AI tested every draft against constraints I defined. I got better at translation. I didn’t get better at graphic design.
Same project. Both relationships. The difference is knowing which is which.
AI didn’t do the translation. AI tested my translation against constraints it held in view while I worked. Every pushback was a rep. Every iteration built capability.
When that project ends, I’ll still be better at translation than I was before it started. The growth is mine. AI compressed the timeline.
Fast output from AI doing work is hollow. Fast output from accelerated human iteration is growth.
The Confession
I’m not here to blow smoke.
I wrote a piece called “How Kiro Turned an Architect Into a Developer.” It’s about how AI coding tools let me build and maintain my website despite not being a developer by training or practice.
That piece is a dependency story.
Without Kiro, maintaining technicalanxiety.com becomes very difficult, possibly impossible. The capability isn’t mine. I can produce output, but I don’t deeply understand what I’m producing. Take the tool away and I’m back to struggling.
Same person. Same category of tools. Two completely different relationships.
And I’m fine with that.
I’m never going to be a developer. I don’t want to be. Do I have the capability? No. Do I have the ability to learn it? Probably. But capability requires investment, and I’ve chosen to invest elsewhere. What I am is someone who creates content from experience. Kiro lets me do that better than ever because I’ve handed off what I’m not to a tool that can be. This dependency isn’t erosion. It’s enablement. I gave away something I never intended to keep so I could focus on what I actually am.
The difference is intention. Kiro dependency was a deliberate trade. If I’d spent two years thinking I was becoming a developer while Kiro did the work, that would be self-deception. I knew what I was doing. I chose it.
My translation work with Copilot and Claude built capability. My development work with Kiro built dependency. Both were the right choice for what I needed.
I’m not here to tell you dependency is wrong. I’m here to tell you it should be a choice, not an accident.
Honesty about dependency is the first step toward building something better.
The Diagnostic
Here’s how you know which path you’re on.
Ask yourself: what happens if the tool disappears tomorrow?
If the answer is “I lose part of who I thought I was,” you built dependency. The productivity gains were borrowed. The capability was never yours. The identity was fake.
If the answer is “I’m slower, but I’m still better than I was before I started using this tool,” you built capability. The growth was real. The tool compressed the timeline, but the development happened in you.
Most people can’t answer this question because they’ve never asked it. They measure AI by what it removes from their workload. They don’t measure what it’s adding to, or subtracting from, their capability.
Think about that for a moment. AI is sold as amplifying human capability. But the dominant metric is subtraction. Hours removed. Tasks eliminated. Workload reduced.
Why?
Maybe because growth is hard to measure. Maybe because “I became better at translation” doesn’t fit on a dashboard the way “saved 40 hours this quarter” does.
Or maybe because AI adoption is a financial interest, not a human interest. The people selling these tools have every reason to emphasize productivity metrics. Productivity metrics justify license costs, expand deployments, drive revenue. Growth metrics? Those benefit you. Not the vendor.
The productivity framing actively discourages this question. If the metric is “hours saved” or “tasks automated,” you’re measuring the tool’s contribution, not your own development. You can celebrate those metrics while slowly atrophying.
I’ve seen it in my own performance review. The growth in translation is measurable. Not because AI did translation work for me, but because AI challenged my translation work constantly for a year. The reps accumulated. The capability compounded.
I’m still a work in progress. Twenty years in and still learning. That’s the whole point.
The right question isn’t “what did AI do for me?” It’s “what did I become because of how I used AI?”
The Parallel Frame
In the Confidence Engineering series, I argued that the industry’s obsession with “trusting AI” is misguided. Trust is binary and emotional. You either trust or you don’t, and the decision happens before evidence.
Confidence is different. Confidence is graduated, empirical, earned through observation. You build confidence by measuring outcomes, not by making a leap of faith.
The productivity discourse has the same problem.
“AI takes work off your plate” is the trust frame. It sounds positive. It feels good. It requires no examination of what’s actually happening to you as a person.
“AI challenges you so you become more capable” is the confidence frame. It’s harder. It requires honest self-assessment. It asks you to measure your own development, not just your output.
I use AI to give me those assessments. Constantly. I don’t wait for peer reviews or yearly performance cycles. I have Claude projects and Copilot notebooks specifically tailored to evaluate my thinking, challenge my assumptions, and tell me where I’m falling short. The tools that can build dependency can also build accountability. Same technology. Different relationship.
Just as we need to move from trust to confidence in how we evaluate AI systems, we need to move from productivity to growth in how we evaluate AI use.
The hollow frame feels good and produces nothing sustainable. The honest frame is harder and builds something real.
The lazy thinking isn’t about AI. It’s about us. We’re choosing the easy metric because the hard metric requires looking in the mirror.
What Actually Happens
Let me tell you what using AI as a training partner actually looks like.
I write about what I’m learning. Not just past lessons, but ongoing experiences. The patterns I’m seeing today, the challenges I’m working through this week, the ideas forming in real-time. That means I need to surface themes from my daily work, not just my memory.
I had a conversation that started with me evaluating three topic suggestions from Copilot. Copilot has access to my internal discussions and emails. It can see what I’m engaging with across my work. So I asked it to surface themes worth writing about.
Two of those suggestions were echoes of work I’d already published. The third had potential but was framed wrong.
What emerged from the conversation wasn’t any of those three topics. It was something none of us, not me, not Copilot, not the AI I was discussing it with, had articulated at the start. The exchange forced refinement, challenge, invention.
You’re reading it now. Framed correctly.
If you take nothing else from this article, take this: thinking through the machine doesn’t mean accepting output or offloading tasks. It means using the interaction itself as the development mechanism.
That’s not productivity. That’s what learning looks like when the feedback loop compresses from days to minutes.
The Choice
AI can accelerate your growth or replace it.
If you use these tools to take work off your plate, you might be building dependency. When the tool disappears, so does the capability you thought you had. Worse, you might be convincing yourself you’re something you’re not.
If you use these tools to challenge your thinking, test your translations, compress your feedback loops, you’re building something that stays with you. The tool could disappear tomorrow and you’d still be better than you were.
Most people are making this choice unconsciously. They default to the productivity frame because that’s the frame the industry celebrates. Hours saved. Tasks automated. Output increased.
These are meaningful metrics. I’m not dismissing them. But they’re misleading. They obscure the true purpose of these tools: making us better. And being better is a choice, one you have to make deliberately, because the default path leads to replacement.
And if you need to report those metrics to keep these tools in your hands? Do it. Play the game. If you’re using them the way I’ve described here, the game is worth playing.
Ask the diagnostic question honestly. Look at your relationship with these tools. Measure what’s happening to your capability, not just your workload.
Twenty years in and still learning. That’s the goal. That’s always been the goal.
But I’ll be honest. Learning becomes hard. It becomes tiring. When you’ve been doing this as long as I have, you get worn down eventually. The same patterns. The same problems with new names. The same organizational dysfunction dressed up in different frameworks. The excitement fades.
What AI has done is breathe life back into my excitement for learning and technology. Not because the tools are novel. Because the tools create a learning environment that doesn’t depend on finding the right room, the right people, the right organization. I can manufacture the challenge environment I need, whenever I need it, on my own terms.
That’s not productivity. That’s renewal.
AI can help you stay on that path. Or it can help you pretend you’ve arrived while slowly forgetting how to do the work yourself.
The tool doesn’t decide. You do.
In “Thinking Through the Machine,” I wrote about AI surfacing what you already know but can’t access quickly. That’s retrieval. That’s synthesis. That’s the machine extending your reach.
This is the companion piece. This is the machine keeping you sharp. Not by doing work for you, but by refusing to let you stop growing.
The productivity promise is hollow. The growth promise is real. Choose accordingly.
Photo by Google DeepMind on Unsplash