Declare War on AI? The Doomsday Narrative Behind the Flames at Altman's Residence
Original Title: The "Rational" Conclusion
Original Author: ALEXANDER CAMPBELL
Translation: Peggy, BlockBeats
Editor's Note: At 3:45 am on April 10th, a 20-year-old man threw a Molotov cocktail at Sam Altman's residence, then walked to the OpenAI headquarters and threatened arson.
The attack quickly sent shockwaves through the tech and investment communities. It concerned not only personal safety; it also pulled an extreme narrative that had long lingered in essays and online communities into reality.
Starting from the highly deterministic assertion that "AI will cause human extinction," and reasoning through "we must reduce the risk at all costs," this logic gradually slid toward justifying real-world action. When a worldview keeps reinforcing its narrative of "existential threat" and uses it to reorder moral priorities, the boundaries of acceptable action get redrawn: speech that was once low-cost begins to carry the possibility of being carried out.
This article traces the evolutionary path inside the AI doomsday community: from the "purity spiral" that drives ever-higher risk estimates, to moral judgment passed on the people building the technology, to the compression of a complex reality into a "trolley problem." These seemingly rational deductions converge on a consistent yet perilous mindset: as long as the outcome is defined as "saving humanity," the permissible means keep expanding.
In this sense, the event is not an isolated one. It is more like an early stress test, testing not the technology itself but what happens when the narrative, beliefs, and actions surrounding the technology begin to lose restraint.
The following is the original text:
Who is the Arsonist?
On Friday at 3:45 am, a 20-year-old man threw a Molotov cocktail at Sam Altman's residence. He then walked about three miles to the OpenAI headquarters and threatened to burn it down. He has since been arrested on suspicion of attempted murder.

The post states: a person named Daniel Moreno-Gama (online alias dmgama / "Butlerian Jihadist") has been arrested and booked by police on suspicion of "attempted murder." It notes that he is an active member of PauseAI, and cites an urgent view he had voiced repeatedly: "It is close to midnight, time to truly act."
He is not a "lone wolf." He is an active member of PauseAI, holding six roles in the community. His Discord username is "Butlerian Jihadist."
His Instagram is almost entirely doomsday content: a power-law curve captioned "If we don't act soon, we're all dead," and a Venn diagram placing reality at the intersection of The Matrix, Terminator, and Idiocracy.
Four months before the attack, he also recommended to his followers the book "If Anyone Builds It, Everyone Dies" by Yudkowsky and Soares.

Butlerian Jihadist's Instagram

"If Anyone Builds It, Everyone Dies" is a seminal work of the AI Risk/AI Safety camp, and a strongly pessimistic one. Its core argument: once humans create a "superhuman intelligence" (AI far surpassing human intellect) that is not fully aligned, it is overwhelmingly likely to escape control and pose an existential threat to humanity. The authors' stance is radical, close to maximally pessimistic: AI is not merely "risky" but almost inevitably catastrophic, and development should proceed with extreme caution, or halt outright, until the alignment problem is solved.
His name is Daniel Moreno-Gama.
He also has his own Substack. As early as January this year, he published an article titled "AI Existential Risk," putting the probability of "human extinction caused by AI" at "almost certain." He calls the technology "an active threat to anyone using it, especially to those building it." His conclusion: "We must address this threat first before asking other questions."
He has also written a poem imagining the children of AI developers dying and asking their parents why they did nothing. Of the creators of these technologies, he writes: "May hell have some pity on such vile creatures."
PauseAI has already removed his related messages from their Discord.

I know this is not what most readers expect to see in an investment newsletter. I write this to explain where my worldview comes from, making it easier to understand the longer-term judgments that follow. As for the "New New Deal" proposal I put forward, it is a direct response to this development.

What I did was simply extrapolate their model one step further and connect the dots.
Doomsday Narrative of AI Doomers
Let's start with determinism. Yudkowsky's position, in the book mentioned above, is that once someone builds a sufficiently powerful artificial intelligence, every single person on Earth dies. Not "maybe," not "probably": everyone, including your child and Nina, the daughter he mentions repeatedly.
He has made this case in Time magazine and in the book "If Anyone Builds It, Everyone Dies." He has argued for bombing data centers, and held that the risk of nuclear conflict is more acceptable than allowing a full training run.
Next, the "purity spiral": continuously escalating radicalism. Within this community, members prove their "resolve" by raising the intensity of their positions: estimates of P(doom), the probability of human extinction, have climbed from 50% to 90% and all the way to 99.99999%.
A national spokesperson for the Center for AI Safety once said on camera that the right response would be to "walk into labs across the country and burn them down." PauseAI initiated a so-called "Warning Shot Protocol," designating a certain AI model an "extinction-level weapon." A PauseAI leader said that an Anthropic researcher "deserved everything about to happen to her."
When someone questioned such statements in PauseAI's Discord, the administrators simply deleted the message.

The day before the attack, Nate Soares, co-author of Yudkowsky's book, tweeted that Altman was "doing some really bad things."

Next, "cheap talk" met a reality check.
In game theory, "cheap talk" refers to statements that cost the speaker little or nothing to make. At first, everyone was merely issuing low-cost extreme statements; but once the issue was framed as a "human survival crisis," those statements could be taken seriously, and extreme action was legitimized.
These are not isolated incidents but a series of escalating, mutually reinforcing claims built around an apocalyptic ideology. Taken to its extreme, the logic can even entertain "sacrificing 99% of the population to save the last 1%."
As things progressed, it was only a matter of time before someone took these ideas literally and acted on them. That young man read the book, joined the community, and wrote his manifesto. In a self-reflection essay for a community college English class, he defined himself as a consequentialist: "If the outcome doesn't match up, I'll hardly believe in the motive." He took the name "Butlerian Jihadist." On December 3, he wrote on PauseAI's Discord: "We are nearing midnight, it's time for real action."
And then, he took action.
They presented him with a "trolley problem": one life versus all of humanity. He pulled the lever.

The tweet above (Air Katakana) says: Yudkowsky ("yud") planted a trolley problem in that person's mind: on one track is Sam Altman; on the other track is the entire human race, including Sam Altman. The guy probably thought he was going to win the Nobel Peace Prize.
The quoted tweet below (Randolph Carter) says: a person firebombed an AI CEO's home, then immediately headed to OpenAI to threaten the people there, and the "doomers" said, "It could be anyone, we can't be sure..."

There is one final irony worth noting. If these "doomers" truly believed their judgments to the degree they claim, they ought to be more forthright about what those beliefs imply.
Just weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why don't you attack data centers? The answer, relayed via Soares, was: "If you saw a news report that I did that, would you think, 'Wow, AI has been stopped, we are safe'? If not, then you already know it wouldn't work."

Notice what this answer does not say. It is not "because violence is wrong" but "because it wouldn't work right now." The restraint is strategic, not moral. And the community knows it. Beneath the surface lies an unspoken consensus: the young man's biggest "mistake" was simply his timing.
This is exactly what I mean when I say intelligence does not equate to power, and it is the deepest flaw in the entire doomer worldview.
Yudkowsky's framework is built on a paradox: as long as AI is smart enough, it will inevitably gain the ability to destroy humanity because "intelligence automatically converts to capability." But many of his followers lack a technical background. They don't build AI systems or engage in alignment engineering. What they possess is a particular kind of "linguistic intelligence" that can construct elaborate risk arguments and thus convince themselves of possessing a kind of "priestly authority" over technology. They can build arguments but cannot build systems.
This is no accident; it is written into the movement's foundational texts. Yudkowsky's "Harry Potter and the Methods of Rationality" fundamentally depicts a world in which the best reasoner deserves to stand above every system. "The Sequences" then supply a full set of doctrines: a small group of "correct thinkers," superior in both cognition and morals, whose rationality entitles them to decide what others may build. This is less a safety movement than a clerical order with a creation myth.
Yudkowsky may distance himself from the young man throwing the Molotov cocktail, but he cannot distance himself from the syllogism. If the builders will kill everyone, then stopping the builders is self-defense. That is his core proposition, plain and clear. The only question has ever been: when would someone take it seriously?
So when their own logic shows up at 3:45 a.m. with a bottle of gasoline, they shouldn't act so surprised anymore.

Before using Musk's "Western WeChat" X Chat, you need to understand these three questions
X Chat will be available for download on the App Store this Friday. The media have already covered the feature list: self-destructing messages, screenshot prevention, 481-person group chats, Grok integration, and registration without a phone number, all positioning it as the "Western WeChat." But there are three questions that almost no report has addressed.
There is a sentence still posted on X's official help page: "If malicious insiders or X itself cause encrypted conversations to be exposed through legal processes, both the sender and receiver will be completely unaware."
No. The difference lies in where the keys are stored.
With Signal's end-to-end encryption, the keys never leave your device. No platform, court, or other outside party holds them. Signal's servers have nothing with which to decrypt your messages; even under subpoena they can produce only registration timestamps and last-connection times, as past subpoena records show.
X Chat uses the Juicebox protocol, which splits the key into three shards stored on three servers operated by X. When the key is recovered with a PIN, the system fetches the three shards from X's servers and recombines them. However complex the PIN, the actual custodian of the key is X, not the user.
This is the technical background to that help-page sentence: because the keys sit on X's servers, X has the ability to respond to legal process without the user's knowledge. Signal does not, not as a matter of policy but because it simply does not hold the keys.
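The custody point can be illustrated with a minimal 3-of-3 XOR secret-splitting sketch. This is not Juicebox's actual construction (Juicebox additionally hardens a low-entropy PIN and uses threshold recovery); it only demonstrates the principle at issue: whoever can gather all the shards holds the key.

```python
import secrets

def split_key(key: bytes, n: int = 3) -> list[bytes]:
    """Split `key` into n shards; all n are required to reconstruct it."""
    # n-1 shards are pure randomness...
    shards = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    # ...and the last shard is the key XORed with all of them.
    last = key
    for s in shards:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shards + [last]

def recombine(shards: list[bytes]) -> bytes:
    """XOR all shards together to recover the original key."""
    out = bytes(len(shards[0]))  # all-zero buffer
    for s in shards:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
shards = split_key(key)

assert recombine(shards) == key       # whoever holds all shards holds the key
assert recombine(shards[:2]) != key   # a proper subset reveals nothing (w.h.p.)
```

When all three shards live on servers run by the same operator, as the article describes, the split provides redundancy against a single server breach but no protection against the operator itself.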
The illustration below compares the security mechanisms of Signal, WhatsApp, Telegram, and X Chat along six dimensions. X Chat is the only one of the four where the platform holds the key, and the only one without forward secrecy.
Forward secrecy means that even if a key is compromised at some point in time, historical messages cannot be decrypted, because each message was encrypted with its own unique key. Signal's Double Ratchet protocol automatically updates the key after every message; X Chat has no such mechanism.
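The per-message-key idea can be sketched as a toy symmetric hash ratchet. This is only one half of Signal's Double Ratchet (the real protocol uses HKDF for key derivation and adds a Diffie-Hellman ratchet on top), and the seed below is a placeholder, not a real shared secret.

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain one step."""
    # Distinct domain-separation bytes keep the two outputs independent.
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

chain = hashlib.sha256(b"placeholder shared secret").digest()
message_keys = []
for _ in range(3):
    mk, chain = ratchet(chain)
    message_keys.append(mk)

# Every message gets its own key, and SHA-256 cannot be run backwards,
# so leaking the current `chain` value does not expose keys for
# messages that were already sent.
assert len(set(message_keys)) == 3
```

A design without this ratcheting, such as the one the article attributes to X Chat, means that a single compromised key can unlock the entire message history.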
After analyzing the X Chat architecture in June 2025, Johns Hopkins cryptography professor Matthew Green commented: "If we judge XChat as an end-to-end encryption scheme, this seems like a pretty game-over type of vulnerability." He later added: "I would not trust this any more than I trust current unencrypted DMs."
Between a September 2025 TechCrunch report and the April 2026 launch, this architecture did not change.
In a February 9, 2026 tweet, Musk pledged that X Chat would undergo rigorous security testing before launch and that all of its code would be open-sourced.
As of the April 17 launch, no independent third-party audit has been completed and there is no official code repository on GitHub. Meanwhile, the App Store privacy label shows that X Chat collects five or more categories of data, including location, contact info, and search history, directly contradicting the marketing claim of "No Ads, No Trackers."
Not continuous monitoring, but a clear access point.
Users can long-press any message in X Chat and select "Ask Grok." When that button is tapped, the message is delivered to Grok in plaintext; this is the point at which it leaves encryption.
This design is not a vulnerability; it is a feature. But X Chat's privacy policy does not say whether that plaintext is used to train Grok, or whether Grok stores the conversation content. By tapping "Ask Grok," the user voluntarily strips the message of its encryption protection.
There is also a structural question: how quickly does this button shift from "optional feature" to "default habit"? The better Grok's replies, the more often users will rely on it, and the larger the share of messages flowing out from under encryption. In the long run, X Chat's effective encryption strength depends not only on the Juicebox design but on how often users tap "Ask Grok."
X Chat's initial release supports only iOS; the Android version is listed simply as "coming soon," with no timeline.
In the global smartphone market, Android holds about 73%, while iOS holds about 27% (IDC/Statista, 2025). Of WhatsApp's 3.14 billion monthly active users, 73% are on Android (according to Demand Sage). In India, WhatsApp covers 854 million users, with over 95% Android penetration. In Brazil, there are 148 million users, with 81% on Android, and in Indonesia, there are 112 million users, with 87% on Android.
WhatsApp's dominance in the global communication market is built on Android. Signal, with a monthly active user base of around 85 million, also relies mainly on privacy-conscious users in Android-dominant countries.
X Chat sidestepped this battlefield, and there are two possible readings. One is technical debt: X Chat is built in Rust, cross-platform support is not trivial, and prioritizing iOS may be an engineering constraint. The other is strategic choice: iOS holds nearly 55% of the U.S. market and X's core user base is American, so prioritizing iOS means serving that base rather than fighting WhatsApp head-on in Android-dominated emerging markets.
The two readings are not mutually exclusive, and they lead to the same result: at launch, X Chat walked away from 73% of the global smartphone user base.
Some have described it this way: X Chat, X Money, and Grok form a trifecta, a closed-loop data system parallel to existing infrastructure and conceptually similar to the WeChat ecosystem. The assessment is not new, but with X Chat now live, the schematic is worth revisiting.
X Chat generates communication metadata: who talks to whom, for how long, and how often. That data flows into X's identity system. Some message content passes through Ask Grok into Grok's processing chain. Financial transactions run through X Money, which completed external public testing in March, opened to the public in April, and supports fiat peer-to-peer transfers via Visa Direct; a senior Fireblocks executive has confirmed plans to launch cryptocurrency payments by year-end, and X Money currently holds money transmitter licenses in more than 40 U.S. states.
Every WeChat feature operates within China's regulatory framework. Musk's system operates within Western regulatory frameworks, but he also serves as the head of the Department of Government Efficiency (DOGE). This is not a WeChat replica; it is a reenactment of the same logic under different political conditions.
The difference is that WeChat has never explicitly claimed to be "end-to-end encrypted" on its main interface, whereas X Chat does. "End-to-end encryption" in user perception means that no one, not even the platform, can see your messages. X Chat's architectural design does not meet this user expectation, but it uses this term.
X Chat consolidates the three data lines of "who this person is, who they are talking to, and where their money comes from and goes to" in one company's hands.
That sentence on the help page was never just a technical note.
