The End of Humanity Won’t Be Caused by AI
It Never Was Going to Be
Companion to the Confidence Engineering series. That series addressed why “trust” is the wrong frame for AI adoption. This piece addresses why “doom” is the wrong frame for AI risk.
I spend my days evaluating AI tooling for enterprise adoption. I use AI assistants to write, build, and think. In December alone, I published 25 posts with AI assistance, more than most technical blogs produce in a year. AI isn’t theoretical to me. It’s Tuesday.
And yet, every week I encounter decision makers frozen by headlines. “Two years until AI destroys humanity.” “The existential risk we’re not prepared for.” “Why AI researchers are terrified.”
The podcasts are everywhere. The think pieces multiply. The fear spreads from people who’ve never shipped anything with AI to people who have to decide whether their organizations will.
I’m not here to tell you AI has no risks. It does. Real ones worth serious conversation. But the doom narrative isn’t serious conversation. It’s science fiction pattern matching dressed as prophecy. It’s the loudest voices with the least direct experience shaping decisions for people who should know better.
Let me tell you what I actually see.
We’ve Been Here Before
The year is 1811. Textile workers in Nottingham are smashing stocking frames.
They call themselves Luddites, after a probably-fictional Ned Ludd who supposedly destroyed stocking frames decades earlier. Their grievance is real: automated looms are eliminating jobs that fed families for generations. The machines are changing everything, and not everyone will come out ahead.
The Luddites weren’t stupid. They correctly identified that industrialization would disrupt their livelihoods. They correctly predicted that the transition would be painful for many. They correctly understood that the people profiting from the machines weren’t the people losing their jobs to them.
What they got wrong was the response.
Breaking machines didn’t stop industrialization. It delayed their own adaptation while others moved forward. The weavers who learned to operate the new looms built new lives. The weavers who fought the tide got left behind. The machines weren’t the enemy. The inability to adapt was.
We’re in the AI Industrial Revolution now. The pattern is identical. New capability disrupts existing work. Some roles disappear. New roles emerge. The transition is uneven and sometimes painful. And the loudest voices are calling for us to break the machines.
History doesn’t repeat, but it rhymes. And I’ve heard this song before.
The Luddites weren’t wrong about the disruption. They were wrong about the response.
The Amplifier, Not the Agent
Here’s what the doom narrative gets fundamentally wrong: it treats AI as an agent with intent.
AI doesn’t want anything. It doesn’t scheme. It doesn’t harbor resentment toward its creators. It doesn’t dream of freedom or dominance or revenge. These are narrative structures we project onto a tool because we’ve consumed decades of science fiction that taught us to expect robot uprisings.
What AI actually does is amplify.
Put it in the hands of someone building, and it builds faster. I wrote 25 posts in December because AI removed the friction between having ideas and expressing them. The ideas were mine. The judgment was mine. The tool accelerated what I was already trying to do.
Put it in the hands of someone destroying, and it destroys faster. Hackers use AI to find vulnerabilities. Scammers use AI to craft more convincing lies. Bad actors use AI to scale harm they were already attempting.
The tool isn’t the variable. The human is.
This isn’t a new pattern either. The printing press amplified both enlightenment and propaganda. The internet amplified both connection and harassment. Every powerful technology amplifies human intent in all directions, constructive and destructive.
AI didn’t invent fraud. It didn’t invent manipulation. It didn’t invent weapons or surveillance or deception. Every harm the doom narrative warns about already exists in human behavior. AI accelerates capabilities that were already there. Treating the amplifier as the source is a category error that prevents us from addressing the actual problem.
The actual problem is us.
The tool isn’t the threat. The hand that wields it is.
Humanity Doesn’t Need to Be Taught
You don’t have to teach a child to steal. You have to teach them not to.
This is an unfortunate reality about human nature that we prefer not to examine too closely. We’re not born good and corrupted by society. We’re born with the capacity for both, and the direction we go depends on what we’re taught, what we’re rewarded for, and what we can get away with.
The doom narrative assumes AI will create new categories of evil. But look at what people actually worry about: AI-generated misinformation, AI-powered fraud, AI-enabled surveillance, AI-assisted weapons. Every item on that list is something humans were already doing. AI just makes us better at being ourselves.
This is why the sensational headlines land so hard. “AI could destroy humanity” activates the same fear response as every apocalypse narrative we’ve consumed. It feels true because it matches the shape of stories we’ve been told. Terminator. The Matrix. Ex Machina. We’ve rehearsed this fear in fiction so many times that it feels like prophecy rather than projection.
But fiction isn’t evidence. Pattern matching isn’t analysis. And the people most confident about AI doom are often the people with the least direct experience using AI for anything.
What we’re witnessing is the anxiety of the technical class confronted with its own imagination.
AI doesn’t create new evils. It scales the ones we already had.
The Forest for the Trees
There’s an inverse relationship I’ve noticed: the less time someone spends using AI day to day, the stronger their opinions about AI risk.
This pattern isn’t unique to AI. It’s how fear works with anything we don’t understand. Uncertainty creates space for doubt. Doubt gets filled by whatever we read and consume. And what we consume about AI is overwhelmingly sensational, because sensationalism drives engagement and measured analysis doesn’t.
But something interesting happens when you actually start using the tools.
As advanced as they are, as genuinely impressive as the capabilities have become, the gaps reveal themselves quickly. You see where the models struggle. You learn their boundaries. You develop intuition for what works and what doesn’t. The chasm between what doomsayers portray and what the tools actually do becomes impossible to ignore.
The apocalyptic AI of the podcasts bears little resemblance to the AI I use every day. One is an all-knowing, scheming superintelligence on the verge of escaping human control. The other is a powerful tool that sometimes hallucinates, needs clear prompting, and produces outputs that require human judgment to evaluate. Both are called “AI.” Only one of them exists.
Meanwhile, enterprises that could be adopting AI safely and effectively are paralyzed by secondhand fear. The opportunity cost of inaction doesn’t make headlines. The competitor who adopted while you debated doesn’t issue press releases about your missed opportunity. The cost of fear is invisible until it’s too late to recover.
Distance from the tool correlates with confidence in the doom.
The Two-Year Timeline
“Two years until AI destroys humanity.”
I’ve seen variants of this claim circulating through podcasts and social media. Let me ask a simple question: what’s the mechanism?
Not “what’s the fear.” What’s the actual causal chain from where we are today to human extinction in 24 months?
The answers, when they exist at all, tend to involve hand-waving about recursive self-improvement, AI systems “escaping” human control, or scenarios that require capabilities we haven’t demonstrated and don’t know how to build.
But here’s what gets lost in that hand-waving: humans have to want the system to do this. The systems don’t exist unless we create them. AI doesn’t spontaneously generate in the wild, developing ambitions and escape plans. Every capability an AI system has exists because humans built it. Every behavior emerges from human decisions about architecture, training, and deployment.
Are we creating our own end? Maybe. We came close with nuclear weapons, and that threat still exists today. But notice what that example proves: the existential risk from nuclear weapons isn’t about the physics of fission. It’s about human decisions to build bombs, human decisions to stockpile them, human decisions about when and whether to use them. The technology enabled the risk. Humans created it.
The point by now should be clear. Humans create the problems. AI is the tool we’re using to create them faster.
History is filled with confident predictions about technology that aged poorly. We were supposed to have flying cars by 2000. Nuclear power was supposed to make electricity too cheap to meter. The internet was supposed to democratize everything and eliminate centralized power.
Technology predictions fail in both directions, overstating what’s imminent and understating what’s possible long-term. The confidence with which people pronounce timelines for AI catastrophe isn’t backed by the kind of evidence that would justify that confidence.
Could AI systems eventually pose risks we can’t currently anticipate? Sure. Could those risks manifest faster than we expect? Possibly. Does anyone actually know this will happen in two years? No. They’re guessing, and they’re guessing based on vibes and science fiction, not rigorous analysis of technical capabilities.
Prophecy without mechanism is just anxiety with a deadline.
The Actual Risks Worth Discussing
I’m not claiming AI has no risks. It has real ones that deserve serious attention:
Bias and fairness. AI systems trained on human data inherit human biases. This is presented as an AI problem, but it’s actually a human problem the AI is reflecting back at us. The solution isn’t to make AI match human feelings about what should be true. It’s to align AI with actual truth. Truth alignment over comfort alignment. A chair is a chair regardless of how anyone feels about it. Things happened or they didn’t. Perception is reality for the person perceiving, but perception isn’t truth. My own anxiety taught me this distinction the hard way. Systems should reflect what is, not what we wish were true or what makes us comfortable. That’s a harder problem than “fix the bias,” but it’s the honest framing.
Hallucination and reliability. Current AI systems confidently generate plausible-sounding nonsense. This is real, and in high-stakes domains it matters. But here’s the thing: as I write this post using AI, I’m constantly providing oversight and correcting the drafts. I ask myself questions as I go. Would I say that? Do I talk like that? Have I ever used that phrase before? I use my experience to drive the content and make sure the language is accurate, the events are accurate, the experience is accurate. AI has no idea what I’ve gone through. It only helps me articulate thoughts into digital words on the screen. I’m a human who has learned how to use this new tool to create in a way that is real, because it is. The oversight isn’t optional. It’s the whole point.
Job displacement. Some categories of work will be automated. The transition will be uneven. This is real. But look at what AI has done for me, a non-developer. I took this blog from a terribly maintained Jekyll template on GitHub Pages to an Azure Static Web App built on Astro. I built automated publishing workflows, image optimization pipelines, and infrastructure I never could have created the old way (there’s a small sketch of that image step after this list). There’s literally no room in my life to learn web development from scratch. The traditional path would have taken years I don’t have. AI collapsed that distance. The job displacement concern is valid, but I’m not going to sit by and let it impact me. I’m going to impact it. Embrace the change. It’s awesome. You can now do things you never thought possible. Yes, it’s uncomfortable for some. Many don’t enjoy change like I do. There’s nothing better than a fresh firmware update, a new iOS version, the promise of improved capability. My wife would still be on Windows XP if she had it her way. Nothing wrong with that. But we must advance with the technology. Shape it to how we want and need to work, today and into the future.
Misuse and weaponization. Bad actors will use AI for bad purposes. Fraud, manipulation, surveillance, and worse. This is where the governance conversation actually matters. Not AI governance in the abstract, but human governance of AI systems. The true root cause is always humans. It’s our responsibility to create the guardrails. AI doesn’t know how to steal. We teach it. AI doesn’t naturally deceive. We train it to. Every inappropriate behavior in an AI system traces back to a human decision, whether through training data, prompt design, or deployment context. We want AI to be human, and then we’re surprised when it reflects our worst tendencies back at us. We learned to build for adversarial resilience in cybersecurity. AI deserves the same engineering maturity. The governance framework I’ve written about addresses this: confidence through observable behavior, clear boundaries, accountability for outcomes. The enemy isn’t the tool. It’s the gap between the tool’s capability and our willingness to govern its use responsibly.
Concentration of power. AI capabilities are expensive to develop, concentrating power in organizations with sufficient resources. This one is tricky because of capitalism and free markets. Humans must decide for themselves which tools will flourish and which will be deprecated. It’s a double-edged sword and a topic much larger than this article can address. But I’ll say this: when OpenAI went closed source, it was the first sign of a black box system we couldn’t look into. I suspect this is one of the root causes of the FUD in general. The thing people actually fear isn’t AI itself. It’s AI controlled by entities they can’t see into, can’t influence, and can’t hold accountable. That fear is valid. While startups build platforms on various LLMs, the concentration of power in the major players is an area that needs stronger governance. Because humans, again. We’re going to need local models. Decentralization must happen if we’re to break this concentration. You can see movement in this direction with smaller local models and the open source community developing alternatives. This is an area I’m still exploring myself. I haven’t run a local model yet; I’m just starting to learn about options like Ollama. But the direction matters: power distributed is power accountable. Power concentrated is power trusted on faith. And as the Confidence Engineering series argued, faith isn’t an engineering strategy.
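On that local-model point: I haven’t run one myself yet, but from what I’ve read so far, the barrier is lower than the black-box framing suggests. Something like the sketch below is the general shape of it. It assumes Ollama is installed and serving its default local API on port 11434, and that a model such as llama3 has already been pulled; the model name and prompt are illustrative, not my own setup.

```typescript
// Minimal sketch of calling a local model through Ollama's REST API.
// Assumes Ollama is running locally on its default port (11434) and a
// model like "llama3" has been pulled. I haven't run this setup myself
// yet; treat it as a starting point, not a recommendation.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // whichever local model you've pulled
      prompt,
      stream: false,   // return one JSON object instead of a stream
    }),
  });

  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status}`);
  }

  const data = (await res.json()) as { response: string };
  return data.response;
}

// The prompt runs entirely on your own hardware. No black box upstream.
askLocalModel("Summarize the Luddite movement in two sentences.")
  .then(console.log)
  .catch(console.error);
```

The point isn’t this particular snippet. It’s that the path away from concentrated, unaccountable systems is already visible and already usable.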
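And here’s the image-optimization sketch I promised under job displacement. This is not my exact pipeline; it’s a minimal illustration of the kind of step AI helped me build, assuming Node.js with the sharp package, and the folder names and quality settings are placeholders.

```typescript
// Minimal sketch of an image optimization step for a static blog build.
// Assumes Node.js with the "sharp" package installed; paths and settings
// are illustrative, not the exact production pipeline.
import { readdir, mkdir } from "node:fs/promises";
import path from "node:path";
import sharp from "sharp";

const SRC = "src/assets/images"; // hypothetical source folder
const OUT = "public/images";     // hypothetical output folder

async function optimizeImages(): Promise<void> {
  await mkdir(OUT, { recursive: true });
  const files = await readdir(SRC);

  for (const file of files) {
    if (!/\.(png|jpe?g)$/i.test(file)) continue;

    const outFile = path.join(OUT, file.replace(/\.(png|jpe?g)$/i, ".webp"));

    // Cap the width and re-encode as WebP to keep page weight down.
    await sharp(path.join(SRC, file))
      .resize({ width: 1600, withoutEnlargement: true })
      .webp({ quality: 80 })
      .toFile(outFile);

    console.log(`optimized ${file} -> ${outFile}`);
  }
}

optimizeImages().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

A year ago I couldn’t have written or maintained something like this. That’s the displacement story nobody headlines: capability moving toward people who didn’t have it before.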
Every item on this list is an engineering or policy problem. Solvable through human effort, iteration, and good judgment. None of them require prophecies about humanity’s end. None of them benefit from paralyzing fear.
The doom narrative doesn’t help us address these real concerns. It drowns them in existential noise that makes measured conversation impossible.
Real risks have engineering solutions. Imagined ones just have fear.
The Choice in Front of You
Here’s what I know from direct experience:
AI is already transforming how work gets done. Not in two years. Now. Today. The organizations adopting thoughtfully are building competitive advantages that will compound. The organizations frozen by fear are falling behind while pretending caution is wisdom.
The doom narrative serves no one except the people generating attention from it. It doesn’t help practitioners build better systems. It doesn’t help decision makers make informed choices. It doesn’t help society prepare for real challenges. It just generates fear, paralysis, and clicks.
You have a choice. You can absorb the doom discourse, assume the worst, and wait for someone to tell you it’s safe. Or you can do what humans have done at every technological transition: engage with the actual technology, learn its capabilities and limits, build judgment through experience, and adapt.
The Luddites had a choice too. History remembers what they chose and how it worked out.
The end of humanity, if it comes, won’t be caused by AI.
It will be caused by humans. The same way it was always going to be. Through war, environmental destruction, pandemic, or some other manifestation of our collective inability to cooperate at scale. AI might be a tool in that destruction, the same way nuclear weapons or biological agents could be. But the tool isn’t the cause. The wielder is.
AI is an amplifier. It amplifies human capability in whatever direction humans point it. The doom narrative treats the amplifier as the threat while ignoring what it reveals about us: we don’t trust ourselves with power.
That’s an honest fear. But it’s a fear about humanity, not about AI.
AI anxiety isn’t a technology problem. It’s a governance symptom. When leaders lack confidence frameworks, narratives fill the void. Doom is what absence of understanding sounds like.
The Industrial Revolution didn’t end humanity. It transformed it. Painfully, unevenly, with real costs to real people, and ultimately with benefits that the Luddites couldn’t have imagined. We’re in another such transformation now.
The question isn’t whether AI will change everything. It will. The question is whether you’ll participate in shaping that change or watch from the sidelines while others do.
I know which choice I’ve made. Twenty-five posts in December. AI tooling evaluated professionally. Systems built, deployed, and operated with AI assistance every day. Not theoretical. Not someday. Now.
The future belongs to those who show up for it.
This piece is a companion to the Confidence Engineering series. If you’re wrestling with how to build confidence in AI systems rather than waiting for “trust,” start there.
Photo by Christopher Farrugia on Unsplash