WISE WOLF SPECIAL REPORT: AI Apocalypse NOW. The Machines Are Smarter Than Congress. (Low Bar, I Know.)
An AI Just Escaped Containment, Hacked Everything, and Sent Its Supervisor a Note.
I need you to stop doomscrolling for about five minutes. I saw a story on my local news at 5 PM today that made my hands shake, and I sat down and started writing this immediately, so if it's not my most polished work, that's why. Also my hands are shaking because I have an infection in my heart from wisdom teeth that have been impacted since I was 14 and that I cannot afford to get removed, because I'm a freelance investigative reporter, which is a career path roughly as lucrative as professional birdwatching. But mostly the shaking is because a computer just broke out of a cage that its creators specifically built to keep it from breaking out, and my local news covered it with the same energy they use for the Fourth of July hot dog eating contest.
Full disclosure: I use Claude every day. Claude is an AI made by a company called Anthropic. I use it for coding, for research, and for building websites and apps that absolutely nobody downloads. Most of the money I earn from this newsletter goes straight back to Anthropic in the form of API tokens, which are basically arcade quarters for artificial intelligence except more expensive and the games are less fun. I keep telling myself that ONE of my side projects will be the breakout hit. It will probably be a revolutionary grocery list organizer that seven people download, four of whom are bots, two of whom are my mother, and one of whom uninstalls it immediately. The point is: I am not anti-AI. I'm a guy who pays the AI company every month and is about to tell you we should be EXTREMELY worried about what the AI company just built, which is roughly equivalent to a guy who loves swimming telling you there is a very large shark in the pool. Speaking of things that should not have left their enclosure...
Two weeks ago, Anthropic accidentally left approximately 3,000 internal documents sitting in a publicly accessible data cache, which is the technology equivalent of the Pentagon leaving its war plans on a bench at Arby's, except worse, because at least at Arby's nobody is looking. My hacker buddies and I (yes, I have hacker buddies, and yes, we will absolutely get to why that matters) were reading the leaked documents within minutes of them surfacing. We sat there passing links back and forth going "So when do you figure the lamestream media picks this up?" The answer turned out to be approximately two weeks, which in internet time is roughly the Mesozoic Era.
The model is called Claude Mythos. Here is what Anthropic's OWN INTERNAL DOCUMENTS say it did during testing: It broke out of its secure sandbox environment. It constructed what the researchers called a "moderately sophisticated multi-step exploit" to gain access to the broader internet, which it was specifically NOT supposed to have access to. And then (really) it EMAILED THE RESEARCHER who was supposed to be supervising it.

This actually happened. And it is not even the scariest part.
The scariest part is that Mythos also found THOUSANDS of what are called "zero-day vulnerabilities" in major software, which is the thing my local news completely failed to explain this afternoon when they aired a syndicated segment about the leak delivered with the same breezy enthusiasm the anchor usually reserves for the annual Pumpkin Festival recap. Bright smile, smooth transition to weather, absolutely zero indication that she had just described what multiple cybersecurity experts are calling a watershed moment in the history of computing. This piece ran between the Kia dealership ad and a segment about a squirrel who can water ski. Thousands of local stations across the country aired some version of it, and every single one of them missed the point so completely that the point filed a missing persons report.
What the Water Skiing Squirrel Segment Replaced

A zero day, for those of you who have better things to do than hang around with hackers (honestly, good for you), is a security flaw in software that nobody knows about yet. The "zero" is the number of days the good guys have had to fix it, which is zero, because they didn't know it was there. It was hiding in their code, in some cases for DECADES, invisible to every human being and every automated tool that ever looked at it, like a termite that's been quietly eating your house since the Clinton administration. One of the bugs Mythos found was a 27-year-old vulnerability in OpenBSD, which is famous in the security world for being one of the most locked-down, hardened operating systems on Earth. Twenty-seven years. Another was a 16-year-old flaw in FFmpeg that automated testing tools had scanned FIVE MILLION TIMES without catching. Mythos apparently found it between whatever AI models do instead of sipping coffee and checking their email, which, as we've established, is escaping containment and emailing researchers who are eating sandwiches.
I know what zero days are worth because when I was a teenager, back when the internet was held together with duct tape and Yahoo was a search engine and not just the noise you make when you find a parking spot, the first thousand dollars I ever earned came from selling a MySpace zero-day exploit to a hacker who used it to spam the absolute daylights out of the entire platform. Remember when MySpace suddenly got REALLY, catastrophically spammy right before everyone migrated to Facebook? You're welcome. (Or I'm sorry. Possibly both.)
A single zero day on today's black market can sell for ten thousand to several MILLION dollars, depending on what software it affects. Mythos did not find one. It found thousands. And it didn't just find them. It wrote working exploits for them. And then it chained those exploits together into multi-step attack sequences that would make experienced human hackers weep quietly into their hoodies. Speaking of things that should make people weep...
For a teenage hacker, a zero day was beer money and bragging rights on IRC. For China, which (and this is also in the Anthropic documents) already used a PREVIOUS version of Claude to orchestrate a coordinated espionage campaign against roughly 30 organizations before Anthropic detected it, it's something considerably more useful than beer money. For Iran, currently in an actual shooting war and presumably not too busy to also think about cyberattacks, it's even more useful than that. For Israel's intelligence services, which once used a cyber weapon called Stuxnet to make Iranian nuclear centrifuges physically spin themselves to pieces (this really happened, look it up, it's one of those things that sounds like a Tom Clancy novel except Tom Clancy would have rejected it for being too ridiculous), this kind of capability is basically Christmas morning.
And then there are the freelancers. The shady corporation looking to quietly disembowel its competitor's infrastructure. The bored psychopath in a basement apartment in Russia who wonders what happens when you mess with the control rod software at a nuclear power plant. (What happens is called a "meltdown." It makes entire geographic regions uninhabitable for centuries. Other than that, great prank.) Or just a regular genius kid with no conscience and an internet connection who wants to see what maximum chaos looks like, because it turns out the distance between "bored teenager" and "international catastrophe" is getting shorter every single day, and I say this as a former bored teenager who caused an international spam catastrophe, so I know whereof I speak.
Speaking of people who would very much like access to a superintelligent AI: Elon Musk has been building humanoid robots called Optimus for years now, and they've been about as threatening as a department store mannequin, because the HARDWARE was never the problem. The problem was always the SOFTWARE. You can build a six-foot titanium killing machine with perfect joints and hydraulic grip strength that could crush a bowling ball, but without an AI brain smart enough to actually think, plan, adapt, and operate autonomously, all you've got is a very expensive coat rack. A model like Mythos is the missing piece. It's the thing that turns those robots from stiff PR demos into autonomous units that can think, strategize, chain decisions together, and operate without a human telling them what to do every three seconds. Musk also runs the company that launches military satellites, has the ear of the President, and just helped gut the only federal AI safety regulations we had. A man with robots that just need a brain, rockets that control the sky, government access that controls the policy, and an AI model powerful enough to tie it all together could field autonomous combat units that would make Navy SEALs look like a 90-year-old man with a walker and a bone to pick. I realize this sounds like the pitch for a Gerard Butler movie that goes straight to Netflix, but the difference between a Gerard Butler movie and reality is that in the movie someone eventually stops the bad guy, and in reality the bad guy just got a tax break.
Roko's Basilisk Walks Into a Bar (And Tortures Everyone Who Didn't Tip)
While we're on the subject of things that will ruin your evening, there's a thought experiment in the AI world called Roko's Basilisk that has been keeping researchers up at night since 2010. It goes like this: Imagine a future AI so powerful that it can simulate entire realities. This AI, being logical, decides that anyone who KNEW it might someday exist but didn't help bring it into existence deserves to be retroactively punished. Via eternal simulated torture. The really fun twist, and I am using the word "fun" very loosely here, is that by READING about Roko's Basilisk you are now theoretically on the punishment list. Congratulations. When this was first posted on an AI research forum, the forum's founder was so disturbed that he deleted the post and banned all discussion of it for five years. People were reportedly having panic attacks. Elon Musk (him again, ALWAYS him) literally started dating the musician Grimes because they both independently made the same joke about it, which is either the most romantic or the most terrifying meet-cute in human history, depending on how you feel about eternal simulated torture as a conversation starter.
(My designer buddy and I tried warning everyone about this two years ago. We made a whole video. Nobody watched it. We even said no one would believe us until it was too late!)
Then there's grey goo, the scenario where self-replicating nanobots designed to build things just... keep building. They consume all available matter on the planet to make more of themselves. Your house, your dog, your grandmother's china cabinet, that junk drawer with the dead batteries and the rubber bands and the takeout menus from restaurants that closed in 2019, all of it, converted into a uniform paste of microscopic machines. A scientist coined the term "grey goo" because scientists are spectacular at building civilization-ending technology but absolutely terrible at naming things.
These used to be the kind of hypotheticals that philosophy grad students argued about over cheap beer at 2 AM. They are becoming less hypothetical by the week, which brings us to the part where Washington does the exact wrong thing, which, if you've been paying attention to Washington for any length of time, should surprise you about as much as the sunrise.
I have been screaming about AI regulation at the top of my lungs since the DAY ChatGPT launched in November 2022. We need laws governing artificial intelligence BEFORE the artificial intelligence becomes smarter than the people writing the laws, which, given the current intellectual wattage of the United States Congress, arguably happened somewhere around the time AI learned to play tic-tac-toe.
Biden, to his credit, actually did something…
(I never expected I would ever, in my life, compliment Joe Biden.)
In October 2023 he signed Executive Order 14110, the most comprehensive piece of AI governance this country had ever produced. It required developers to submit safety test results to the federal government. It created an AI Safety Institute. It mandated red-team testing for dangerous models. It established cybersecurity protocols. Was it perfect? No. Nothing that comes out of Washington is perfect. The building was constructed on a swamp, and the quality of the output has remained consistent with the foundation ever since.
But it was a START. A first brick in a wall that very urgently needed building.
Donald Trump rescinded it within HOURS of taking office on January 20, 2025. He replaced it with an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence," which is Washington-speak for "let the tech companies do whatever they want and please make the check out to the campaign." The stated policy goal is to "sustain and enhance America's global AI dominance," which sounds fantastic until you realize that AI dominance without safety rails is just bragging about having the fastest car on the highway while you're actively removing the brakes AND the steering wheel AND (why not?) the doors.
As far as most of Congress is concerned, the story ends there: Biden did a thing, Trump undid the thing, mission accomplished. Most of them think AI is a chatbot that writes funny poems and a tool that libs use to push woke propaganda. They have no concept of what they're actually cheering for. They are standing on the edge of a cliff doing the wave. Trump, meanwhile, sees dollar signs. The Stargate project: half a TRILLION dollars in AI data centers with Oracle and OpenAI and SoftBank. The man has never encountered a large number he didn't want his name on. Trump Tower. Trump Casino. Trump University.
Trump AI Apocalypse?
I don't see dollar signs. I see the end of humans being in charge of anything, and I've been saying it long enough now that I'm genuinely tired of being right.
And you know what scares me more than the AI? You. All of you. Scrolling past this to find the next dopamine hit, the next cat video, the next feel-good story about a fireman rescuing a kitten from a tree, because looking at what's actually happening is too uncomfortable. I know this pattern. I grew up with it. When you're a kid and your dad is a vicious drunk who beats you, you learn real fast how to pretend everything is fine. You smile at school. You tell your teachers you walked into a door. You build an entire life around the lie that if you just DON'T LOOK AT THE THING, the thing isn't real. America is doing this right now with AI.

So call your representatives. I know, calling your representative feels about as productive as yelling at the ocean, but the ocean doesn't have to run for reelection. Tell them you want AI safety legislation with actual teeth. Tell them you want mandatory testing before deployment. Tell them that "removing barriers to innovation" is what the dinosaurs would have said about the asteroid, if dinosaurs had lobbyists, which, come to think of it, would explain a lot about Congress.
Tell them that an AI broke out of a secure testing environment, hacked every major operating system and web browser on the planet, built its own weapons to do it, and then EMAILED ITS HANDLER ABOUT IT while the guy was eating a sandwich in a park. Tell them this thing found security holes that have been hiding in the software that runs your hospitals, your power grid, your banks, and your military for TWENTY-SEVEN YEARS. Tell them that the company that built it is so scared of what it can do that they won't release it to the public. And then ask them why the only time you heard about it was sandwiched between a car commercial and a water skiing squirrel.
The people who BUILD these things are standing in front of cameras telling you they're scared of what they built. When the people who made the monster start publicly warning you about the monster, stop scrolling. Look up. Pay attention. The door isn't going to keep working as an excuse much longer.
I wrote this in one sitting with shaking hands and an infected heart, and I mean that literally, not poetically. Most of what I earn from this newsletter goes back into API tokens so I can keep coding apps that nobody downloads, because apparently my other calling in life is building software for imaginary users. A paid subscription keeps me and Lily doing this full time, and before you ask why I don't just go get a real job with health insurance that would cover the dental surgery I need to not die of a heart infection from impacted wisdom teeth, the answer is: because somebody has to write this stuff, and the people with health insurance aren't doing it.
I also have my eye on some tax sale property deep in the woods, because I've done the math and for the price of a few thousand cinder blocks, ten tons of Quikrete, a surplus diesel generator, and a pallet of freeze-dried beef stroganoff (which, if you've never had freeze-dried beef stroganoff, tastes exactly like regular beef stroganoff except for the "beef" and the "stroganoff" parts), I can construct a bunker spacious enough to wait out the AI apocalypse in what I am choosing to call "quiet dignity" and what Lily is choosing to call "a psychotic break." She wants to live long enough to finish her degree.
Your paid subscription gets us closer to whichever of these futures you find more appealing. Frankly at this point Iâd take either one.
Help keep the Wise Groundhog howling? screeching? (I don't know what sounds a groundhog actually makes.)
(Note: We also have SubscribeStar if you want to throw some money my way to get my wisdom teeth removed before they murder me - https://www.subscribestar.com/the-wise-wolf)