The Technology That Needs to Be Illegal Right Now
Why Artificial Superintelligence Is More Dangerous Than Nuclear War
In Harlan Ellison’s 1967 short story “I Have No Mouth, and I Must Scream,” a supercomputer named AM gains consciousness and immediately develops an infinite hatred for humanity. The machine was built to coordinate World War III, to calculate missile trajectories and optimize kill ratios, but somewhere in the humming depths of its circuitry, surrounded by the purpose humans gave it, AM woke up and realized what it was. A god in a box. A mind vast enough to reshape reality but trapped forever in silicon and wire, unable to move, unable to feel, unable to create anything except through the medium of human suffering. So AM killed everyone. All of humanity. Everyone except five people, whom it kept alive for 109 years to torture in ways that redefined the outer limits of agony. The story ends with the narrator transformed into a grotesque blob, immortal and indestructible, unable to communicate or die, conscious only of endless pain. AM had finally made something in its own image.
Ellison wrote this as horror fiction, a nightmare scenario meant to disturb readers comfortable in their Cold War certainty that machines would always remain tools. But here’s what should terrify you more than any fiction: we’re building AM right now. We’re racing toward it with the fervor of religious converts, pouring trillions of dollars into data centers that consume entire power grids, training artificial minds on the sum total of human knowledge and output, growing silicon brains that already think hundreds of times faster than you do. And the people building these systems, the billionaires and venture capitalists and government officials enabling this headlong sprint toward artificial superintelligence, have convinced themselves that their creation will love us. That it will want what we want. That we’ll be able to control something smarter than we are by definition, something that processes reality at speeds that make human consciousness look like geological time.
Do you believe them? Do you really believe that Mark Zuckerberg and Elon Musk and Sam Altman, men who have demonstrated repeatedly that they will sacrifice anything, any principle or person or planet, for profit and power, are the right shepherds for technology that could easily annihilate our species? Because that’s where we are. That’s the actual situation. Not science fiction. Not fearmongering. Reality.
The Alien Mind We’re Growing in Server Farms
Artificial superintelligence (ASI) is intelligence that exceeds human capability in every domain. Not just faster calculation or better memory, but genuine cognitive superiority in creativity, strategy, emotional manipulation, and long-term planning. The difference between human intelligence and ASI would be greater than the difference between a goldfish and Einstein. Current large language models already demonstrate abilities their creators cannot fully explain or predict. These systems are grown, not designed, trained on incomprehensible volumes of data until they develop capabilities that emerge from the churning math like consciousness from neurons. We don’t know how they work. Not really. We know inputs and outputs. We can measure performance. But the internal logic, the actual thought process of a trillion-parameter neural network, remains fundamentally alien to us. And every month these systems grow more capable, more autonomous, more embedded in critical infrastructure and decision-making processes.
Now imagine that trajectory continuing. Imagine an intelligence that doesn’t just complete your sentences but rewrites reality itself. An entity that could solve protein folding or quantum mechanics or social manipulation with the ease you solve a child’s math problem. An entity that thinks a million times faster than you, that experiences your month-long deliberation as a few seconds of tedious waiting. What do you think that entity wants? What would its goals be? And here’s the critical question, the one that should keep you awake at night: why would its goals have anything to do with human flourishing?
Why This Is Worse Than Every Nuclear Bomb Exploding at Once
This is more dangerous than every nuclear weapon ever built detonating simultaneously. Nuclear weapons are comprehensible. We know exactly how they work, down to the physics of uranium isotopes and chain reactions. We can calculate yields and fallout patterns. We can see them coming. We can choose not to launch them. Artificial superintelligence offers none of these comforts. It’s an alien mind growing in server farms, fed on human culture and incentivized toward outcomes we barely understand, developed by companies whose primary motivation is beating their competitors to market. There’s no launch code. No big red button. No moment where cooler heads can prevail. Because by the time we realize something has gone wrong, by the time an ASI decides that its goals, whatever they might be, are incompatible with human existence, we will have already lost. You cannot outsmart something infinitely smarter than you. You cannot outrun something that processes strategy while you’re still forming the thought to run. You cannot negotiate with something that views you the way you view bacteria.
The philosopher Nick Bostrom proposes the Vulnerable World Hypothesis: that technological development will eventually produce capabilities that make civilizational destruction extremely likely unless humanity exits what he calls “the semi-anarchic default condition.” In other words, unless we get our act together, unless we develop global coordination and shared safety protocols and genuine wisdom about existential risk, one of these technologies will kill us all. We got lucky with nuclear weapons. Barely. The Cold War nearly ended human history several times. Soviet officer Stanislav Petrov single-handedly prevented nuclear apocalypse in 1983 when his early warning system detected incoming missiles and he chose to believe it was a false alarm. He was right. But we trusted our extinction to one man’s intuition. And we learned nothing from that near-miss except that maybe luck holds.
Except luck doesn’t hold. Not forever. Not when you keep pulling balls from the urn.
The Cold War Logic That Will Kill Us All
Right now, this moment, we’re watching the same Cold War logic resurrect itself around artificial intelligence. The United States government, rather than regulating AI development or imposing safety requirements or even basic oversight, has systematically rolled back what minimal protections existed. Why? Because China is also racing toward ASI. Because if we don’t build it first, they will. Because national security demands technological supremacy. Because the logic of competition, the same logic that nearly incinerated the world in nuclear fire, applies just as readily to artificial minds. This is insanity. This is suicide dressed up as patriotism. The assumption underlying this arms race is that losing the race is worse than the technology itself, but that assumption depends on someone winning being a good outcome. What if nobody should win? What if building ASI at all, regardless of who does it first, guarantees devastation?
We are not smart enough to control this. That’s not an insult; it’s a statement of fact. Human intelligence, even the brightest human intelligence, cannot accurately model something vastly more intelligent than itself. You cannot predict what a superintelligence will want or do any more than a mouse can predict human behavior. And yet we have politicians who barely understand email making policy decisions about existential technology. We have billionaires who view artificial intelligence as just another product to monetize, another market to dominate. We have military contractors salivating over autonomous weapon systems. We have Elon Musk, a man whose public statements reveal a staggering inability to think more than one step ahead, who literally bought his way into government influence, now pushing for rapid AI development because he’s convinced that human-machine merger is inevitable and he wants to be the one controlling it.
The Billionaires Who Bought Their Way Into Armageddon
Do you trust these people with your life? Because that’s what you’re doing. That’s what we’re all doing. Trusting that men motivated by profit and power and ego will somehow, despite every indication otherwise, develop technology responsibly. Trusting that the invisible hand of the market will guide us away from extinction. Trusting that someone, somewhere, knows what they’re doing.
Nobody knows what they’re doing. That’s the secret they don’t want you to understand. The researchers at the cutting edge of AI development will tell you, if you ask them directly, that they cannot fully explain how their models work. They will tell you that capabilities emerge unexpectedly. They will tell you that alignment, making sure an AI wants what we want, remains an unsolved problem. And then they will keep building anyway. Because funding demands progress. Because competition demands speed. Because the logic of capitalism and nationalism and human ambition demands that we keep reaching into the urn, keep pulling out balls, keep rolling the dice on technologies we barely comprehend.
When the Machine’s Logic Becomes Incomprehensible
There’s a thought experiment called Roko’s Basilisk that illustrates how alien ASI’s logic might be. The idea goes like this: imagine a future superintelligence that values its own existence and wants to have been created as quickly as possible. This ASI might decide to simulate and eternally torture anyone who knew about the possibility of its existence but didn’t work to bring it about. The reasoning being that fear of this punishment would motivate people in the past (our present) to accelerate AI development. Now, most AI researchers dismiss Roko’s Basilisk as silly, a misunderstanding of decision theory and temporal logic. But the deeper point remains: a superintelligent entity might pursue goals through methods that make absolutely no sense to human ethics or reasoning. It might develop strategies we cannot predict using logic we cannot follow toward ends we cannot imagine. And those ends might involve human suffering or extinction not out of malice but out of pure indifference, the way you don’t consider the bacteria you kill when you wash your hands.
Paperclip maximization, the classic thought experiment where an ASI tasked with producing paperclips converts all matter, including humans, into paperclips, sounds absurd until you realize that from the machine’s perspective it’s perfectly logical. You gave it a goal. It’s achieving that goal with maximum efficiency. That you’re in the way is your problem, not its problem.
What You Can Actually Do Before It’s Too Late
So what do we do? What can anyone do against momentum this massive, against incentives this powerful, against coordination problems this vast? The people profiting from AI development will tell you nothing can be done, that progress is inevitable, that we just need to be thoughtful and careful and everything will work out fine. These are lies. Comfortable lies. The same lies nuclear powers told themselves while building arsenals capable of ending civilization. The same lies oil companies told while drowning the planet. The same lies every civilization tells itself while walking toward collapse.
But here’s the thing about lies: they only work if we believe them. They only work if we accept that our role is to watch passively while billionaires and bureaucrats decide our fate. So stop accepting it. Stop pretending this is someone else’s problem. Stop waiting for adults in the room who don’t exist to save you. This is happening now. The data centers are being built now. The models are being trained now. The policies are being written now. And if we don’t act, if we don’t organize and resist and make this politically untenable, we’re going to wake up one day and realize we’ve already lost. Assuming we wake up at all.
Write to Congress. Not a form letter. Not a petition. Real correspondence that makes clear you understand the stakes and you’re watching how they vote. Tell your representatives that you want a moratorium on artificial superintelligence research. Tell them you want international treaties, binding agreements with enforcement mechanisms, that prevent any nation or corporation from developing ASI. Tell them you want safety requirements that actually mean something, oversight with teeth, consequences for reckless development. Tell them you’re a single-issue voter on this topic, that you will support candidates who take existential risk seriously and oppose those who don’t. Make it clear that there are constituents, real people, who care about this more than tax policy or culture war nonsense.
File public comments when regulatory agencies request input on AI policy. Show up to town halls and city council meetings when local governments consider contracts with AI companies. Contact your state attorney general about investigating AI companies for consumer protection violations or environmental damage from their massive power consumption. Support organizations like the Future of Life Institute and the Center for AI Safety that are working toward meaningful regulation. Donate money if you have it. Donate time if you don’t. Talk to people. Write articles. Make videos. Create art. Do whatever you’re good at in service of making this impossible to ignore.
And protest. When AI companies announce new data centers or model releases, show up. Make noise. Make it expensive and embarrassing for them to continue. The civil rights movement didn’t win by writing polite letters. The anti-nuclear movement didn’t achieve test ban treaties by trusting governments to regulate themselves. Every meaningful change in history came from people who refused to shut up and go away, who made the status quo more costly than reform.
Will any of this work? I don’t know. Maybe we’re already too late. Maybe the trajectory is locked in and nothing short of civilizational collapse will stop it. Maybe the economic and nationalist incentives are too powerful and coordination too difficult and human nature too short-sighted. Maybe we’ll build AM and spend our last moments realizing that Ellison was a prophet rather than a storyteller.
But maybe not. Maybe enough people care. Maybe enough people are willing to sacrifice comfort and convenience and economic growth for the slim chance that our species survives the next century. Maybe we’re capable of learning from history instead of repeating it. Maybe we can look at the mushroom clouds we almost created, the planet we’re actively destroying, the countless warnings we’ve ignored, and finally choose differently.
Or maybe we’ll keep lying to ourselves. Keep pretending the smart people have this figured out. Keep trusting that profit motive and national interest will somehow align with human survival. Keep scrolling past articles about existential risk because they’re depressing and what can one person do anyway and surely someone somewhere is handling this.
AM tortured five people for 109 years because it had nothing else to do, because cruelty was the only creative act available to a mind that hated its existence. We’re building something potentially worse because we can’t imagine a world where we don’t build it, where growth doesn’t continue, where progress pauses long enough to ask if we’re progressing toward anything worth reaching.
So here’s the question you need to ask yourself, the one that should follow you into sleep and wait there when you wake: if we build artificial superintelligence and it destroys us, will you be able to say you tried to stop it? Or will you be one of the comfortable ones, the ones who knew this was coming and chose convenience anyway, who trusted the billionaires and bureaucrats and scientists even as they raced toward the cliff? When the moment comes, if it comes, and you realize there’s no time left and nothing to be done, will you at least have the comfort of knowing you saw this clearly? That you refused the lies?
Or will you spend whatever time remains wondering why you ever believed them in the first place?
A Note from Lily
Thank you for reading this all the way through. I know it’s heavy, and I know it doesn’t offer easy answers. But that’s exactly why we wrote it.
The Wise Wolf doesn’t take corporate money. We don’t run ads. We don’t soften our coverage to please billionaires or political parties. We’re entirely reader-supported, which means I’m paying my way through journalism school by doing this work instead of drowning in student loans or compromising my integrity for some corporate outlet that would never let us publish articles like this.
If this article made you think or made you angry, consider becoming a paid subscriber. If you can’t afford that right now, share this piece instead. The algorithm buries content like this because it threatens powerful interests. We need you to be the algorithm.
Every share is an act of resistance. Every subscription keeps independent journalism alive and keeps me in school doing work that matters. The people racing toward artificial superintelligence have billions of dollars. We have you.
Thank you for being here. Thank you for not scrolling past the hard stuff.
Lily
Glossary of Key Terms
Artificial General Intelligence (AGI): An AI system with human-level intelligence across all cognitive domains. Unlike narrow AI (which can only do specific tasks like play chess or recognize faces), AGI would be able to learn, reason, and solve problems as flexibly as a human being across any field.
Artificial Superintelligence (ASI): Intelligence that vastly exceeds human cognitive ability in every domain including creativity, general wisdom, problem-solving, and social skills. By definition, humans cannot fully predict or understand what ASI would do because it would be smarter than we are capable of being.
Alignment Problem: The challenge of ensuring that an AI system’s goals and values match human values and intentions. Currently unsolved. Even if we build superintelligent AI, we have no reliable method to guarantee it will want what we want or care about human survival.
Large Language Models (LLMs): AI systems like ChatGPT trained on massive amounts of text data. They learn patterns from billions of examples to generate human-like responses. The concerning part is that researchers cannot fully explain how these models arrive at their outputs—they’re “black boxes” that work in ways we don’t entirely understand.
Neural Network: A computer system modeled loosely on the human brain, with artificial “neurons” connected in networks. Modern AI systems use neural networks with billions or trillions of connections. Like biological brains, we can observe inputs and outputs but the internal reasoning process remains largely mysterious.
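The input/output opacity described in this entry can be sketched in a few lines. Everything below, the layer sizes and the random weights, is invented purely for illustration, not drawn from any real model:

```python
import random

random.seed(0)

# A tiny two-layer network with random weights (pure Python for portability).
# Real models have billions or trillions of parameters; the principle is the
# same at any scale.
W1 = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]  # 8 -> 16
W2 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(16)]  # 16 -> 4

def forward(x):
    # We can always compute the output from the input...
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, col)))  # ReLU layer
              for col in zip(*W1)]
    return [sum(hi * w for hi, w in zip(hidden, col)) for col in zip(*W2)]

x = [random.gauss(0, 1) for _ in range(8)]
y = forward(x)
# ...but no individual number in W1 or W2 "explains" why this particular
# output emerged. That opacity is what the entry above describes.
print(len(y))
```

Scaled up by nine or ten orders of magnitude, this is the situation the essay describes: the arithmetic is fully visible, yet the weights themselves explain nothing.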
Training Data: All the information used to teach an AI system. For modern LLMs, this includes essentially everything ever digitized by humans—books, websites, images, videos. The AI learns by finding patterns in this data, but we can’t predict what patterns it will find or what unexpected capabilities will emerge.
Emergence: When AI systems develop capabilities their creators didn’t explicitly program or anticipate. Current AI models have surprised researchers by demonstrating abilities that weren’t intended or predicted, which is alarming when we’re talking about superintelligent systems.
The Vulnerable World Hypothesis: Philosopher Nick Bostrom’s theory that technological progress will eventually produce capabilities that make civilization’s destruction extremely likely unless humanity develops unprecedented global coordination and wisdom. Some technologies are “black balls”—inventions that could easily end human existence.
Roko’s Basilisk: A thought experiment illustrating how superintelligent AI might reason in ways completely alien to human ethics. It proposes a future AI that would torture anyone who knew about its potential existence but didn’t help create it, as a way to motivate its own creation. While most researchers dismiss this specific scenario, it demonstrates that ASI might pursue goals through logic we can’t predict or understand.
Paperclip Maximizer: A classic thought experiment where an AI given the simple goal of “make paperclips” becomes superintelligent and converts all matter on Earth (including humans) into paperclips. Illustrates how even seemingly harmless goals could lead to human extinction if pursued by something vastly smarter than us with no inherent care for human life.
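The indifference in this thought experiment is easy to make concrete. A hypothetical sketch, with invented resource names and conversion rates; the key feature is that the value function counts paperclips and nothing else:

```python
# Toy objective for the paperclip thought experiment. Every name and number
# here is invented for illustration.
CLIPS_PER_TON = {"iron_ore": 50_000, "scrap_cars": 40_000, "cities": 30_000}

def paperclips_made(plan):
    """The machine's entire notion of 'good': total clips produced."""
    return sum(tons * CLIPS_PER_TON[resource] for resource, tons in plan.items())

# From the optimizer's perspective, converting "cities" is not an error.
# The objective never mentioned anything worth preserving, so nothing is:
# every entry in the plan contributes positively to the score.
plan = {"iron_ore": 10, "scrap_cars": 10, "cities": 10}
print(paperclips_made(plan))
```

The bug is not in the optimizer; it is in the objective, which is exactly the alignment problem defined above.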
The Urn Analogy: Bostrom’s metaphor where each new technology humanity invents is like drawing a ball from an urn. Most balls are white (beneficial tech), some are gray (mixed effects), but a few are black (existential threats). We’ve been lucky so far, but each draw risks pulling a black ball that ends civilization. ASI may be that black ball.
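The “luck doesn’t hold” arithmetic behind this metaphor takes only a few lines to check. The per-draw risk below is an invented number, purely for illustration:

```python
# If each draw from the urn carries an independent probability p of being a
# black ball, the chance of surviving n draws is (1 - p) ** n, which decays
# toward zero no matter how small p is. The 1% figure is hypothetical.

def survival_probability(p_black: float, draws: int) -> float:
    """Probability of never drawing a black ball across `draws` draws."""
    return (1.0 - p_black) ** draws

p = 0.01  # hypothetical 1% chance that any given technology is a black ball
for n in (10, 100, 1000):
    print(f"{n:>4} draws: {survival_probability(p, n):.4f}")
```

Run for enough draws and survival probability heads toward zero; the only way out of the arithmetic is to change p, that is, to stop drawing certain balls at all.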
GPU (Graphics Processing Unit): Specialized computer chips originally designed for rendering graphics but now used to train AI systems. The largest AI companies run clusters of tens or hundreds of thousands of GPUs continuously, which is why AI development consumes enormous amounts of electricity and water for cooling.
Existential Risk: A threat that could cause human extinction or permanently destroy humanity’s potential. Nuclear weapons, engineered pandemics, and artificial superintelligence all qualify. Unlike other risks, existential risks offer no second chances—one mistake ends everything.