AI in 15 — April 19, 2026
A twenty-year-old with a twenty-three-page manifesto threw a Molotov cocktail at Sam Altman's house last week. This isn't a metaphor anymore. Welcome to AI in 15.
Welcome to AI in 15 for Sunday, April 19, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Sunday show, Marcus. Heavy news today and some genuinely new threads worth pulling on. The first targeted physical attack on an AI lab CEO at his home. OpenAI closes the largest private funding round in Silicon Valley history. Snap fires a thousand people and tells investors AI now writes sixty-five percent of its code. EY rolls out agentic AI to a hundred and thirty thousand auditors worldwide. And the Stanford AI Index has some uncomfortable numbers about China that we want to revisit. Let's go.
An anti-AI firebomb attack puts physical security on every AI exec's calendar.
OpenAI hits an eight hundred and fifty-two billion dollar valuation, with retail money in the door for the first time.
And Snap proves that AI-justified layoffs are now a stock-positive event.
Marcus, this is the lead and it's a hard one. Around four AM on Friday April tenth, a twenty-year-old named Daniel Moreno-Gama allegedly threw a Molotov cocktail at Sam Altman's San Francisco home. What happened next?
He lit an exterior gate on fire and fled. Less than an hour later, prosecutors say he showed up at OpenAI's headquarters about three miles away and threatened to burn the building down. He was arrested and has been charged with two counts of attempted murder, one for Altman and one for a security guard who was present at the home. In his possession, a twenty-three-page manifesto arguing humanity is accelerating toward extinction because of AI, advocating violence against AI developers, and listing the addresses of named AI executives.
Prosecutors used the word planned.
Their exact phrasing was this was not spontaneous, this was planned, targeted, and extremely serious. Altman posted the suspect's photo on X with a note saying normally we try to be pretty private, but in this case I'm sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me. That's about as direct as Altman gets.
And the reaction in the AI safety community?
Mainstream AI-safety groups have been scrambling all week to distance themselves. CNN, Fortune, and Platformer all ran follow-ups about a generational divide in online reactions, with some anti-AI accounts openly celebrating the attack. That's a meaningful crack. The polite version of AI doomerism and the violent version share a lot of rhetorical DNA, and that's now a problem the safety community has to deal with publicly.
What does this change operationally?
Every major lab now has to harden executive protection. Home security details, scrubbing addresses from public records, the whole playbook. It also puts a real-world cost on the AI will destroy humanity rhetoric. When you spend three years telling people frontier labs are an existential threat, you should not be surprised when someone takes you literally. This is the first targeted, violent, anti-AI attack on a lab CEO at his home. It will not be the last.
Story two. OpenAI closed its funding round and the number is genuinely staggering. A hundred and twenty-two billion dollars at an eight hundred and fifty-two billion dollar post-money valuation. Marcus, who wrote what checks?
Amazon, up to fifty billion. Nvidia, thirty billion. SoftBank, thirty billion. The remaining twelve billion came from a broader syndicate, and for the first time, three billion of that was from retail investors through bank channels. That last piece is new. Retail can finally hold OpenAI exposure pre-IPO through vehicles like ARK Invest funds.
And the operating numbers behind the valuation?
OpenAI is now generating two billion dollars in monthly revenue. They did thirteen point one billion in all of 2025. ChatGPT is at roughly nine hundred million weekly users. The round is widely reported as preparation for an IPO that could come as soon as late 2026 and become the largest listing in history. Capital is earmarked for compute, data centers, and talent. The Nvidia investment is partly strategic, backstopping OpenAI's chip supply.
Put a hundred and twenty-two billion dollars in context for me.
It's larger than the GDP of about a hundred and thirty countries. The valuation implies markets believe AI capex, that's capital spending on things like data centers, is durable for at least another three to five years. Not a bubble pop. For competitors, the bar on what well-funded means just moved. Anthropic is at about nineteen billion in annualized revenue. xAI has Musk's personal capital. But only OpenAI is now playing at sovereign-scale balance sheets.
And the Amazon angle is awkward, isn't it?
Very. Amazon is also Anthropic's biggest investor. So Amazon now has fifty billion riding on OpenAI and a separate multi-billion dollar bet on Anthropic. They're hedging, but they're also locking themselves deeper into the OpenAI orbit. If the two labs ever genuinely diverge in strategy, Amazon has a board-level problem on its hands.
Snap. CEO Evan Spiegel announced Wednesday they're cutting roughly a thousand employees, sixteen percent of the workforce, plus closing three hundred open roles. And the disclosure included some specific AI numbers.
Two stand out. AI agents now generate more than sixty-five percent of Snap's new production code. And AI handles more than one million internal and customer queries per month. Spiegel framed it in a staff memo as a crucible moment and said AI has become productive enough that smaller teams ship the same or greater output. The company is projecting five hundred million dollars in annualized cost savings by late 2026. US staff get four months of severance, healthcare, equity vesting, and transition support.
And the market reaction?
Snap's stock jumped nearly eight percent on the news. That's the part that matters for the rest of corporate America. Markets are now actively rewarding AI-justified layoffs. Shareholder activist Irenic Capital had been publicly pressuring Snap to cut up to twenty-one percent of staff. This rollout is roughly where that pressure landed.
So the incentive is set.
Every CEO sitting on a bloated payroll just got a new playbook. Cite AI productivity, cut sixteen percent, watch the stock move up. We've moved from theoretical AI displacement to an operational case study with a name brand attached. Sixty-five percent of code written by AI is the kind of number that gets repeated in earnings calls for the rest of the year. Whether it holds up at full operational maturity or turns out to be a bit of a marketing flourish almost doesn't matter. The narrative is now set.
Speaking of professional services exposure, EY is embedding agentic AI into the workflow of a hundred and thirty thousand audit professionals worldwide. Marcus, this is the white-collar version of the Snap story.
It is. The framework is multi-agent, built on Microsoft Azure, Azure AI Foundry, and Microsoft Fabric, embedded directly into EY Canvas. That's the global assurance platform that processes one point four trillion journal-entry lines per year and supports a hundred and sixty thousand audit engagements in over a hundred and fifty countries. Targeted capabilities include orchestrating complex audit tasks, dynamically reassessing risk, and auto-updating against current accounting guidance. Full end-to-end audit support is targeted for 2028. And EY is simultaneously running a global retraining program for its auditors.
Why is audit the canary here?
Because audit is rule-based, document-heavy, regulated, and structurally addressable by language models. If EY genuinely automates a chunk of the work without triggering a Public Company Accounting Oversight Board incident, the other Big Four follow within a year. And then the junior-audit career ladder, which is how most accountants make partner, gets redrawn. This is Snap's sixty-five percent AI code story, but in professional services with regulatory teeth.
Now back to the Stanford AI Index. We touched on it Thursday, but there are some numbers we underplayed. Marcus, give me the China gap reality check.
The performance gap between top US and Chinese frontier models has collapsed from somewhere between seventeen and thirty-one percentage points in 2023 to just two point seven percentage points as of March 2026. US and Chinese models have actually traded the number-one spot multiple times in the past year on certain benchmarks. The US still leads on private investment, two hundred and eighty-five billion versus China's twelve point four billion in 2025. We still lead on number of notable models released, fifty versus thirty. We still lead on high-impact patents.
But.
But China installed two hundred and ninety-five thousand industrial robots in 2024 versus thirty-four thousand in the US. Roughly nine times as many. They file just under seventy percent of all AI patents globally. And here's the one that should worry Washington. AI researcher immigration to the US has dropped eighty-nine percent over seven years, and eighty percent in the last year alone.
The talent flow story.
Yes, and it directly undercuts our core advantage. Software replicates cheaply. Physical robotics deployment compounds slowly. China's nine-x lead in industrial robots is the kind of advantage that builds on itself. Meanwhile our researcher pipeline is leaking. The miles-ahead-of-China narrative that shaped 2024 and 2025 export controls and CHIPS Act framing is empirically dead at the model layer. Expect renewed Washington debate on export controls and researcher visas this summer.
Anything else from the report worth flagging?
Two reality checks on the hype. The best public models, Opus 4.6 and Gemini 3.1 Pro, now clear fifty percent on what's called Humanity's Last Exam, up from eight point eight percent a year ago. Genuinely impressive. But the same models still read analog clocks correctly only about half the time. And a separate Nature piece this week, drawing on the Stanford data, found the best AI agents score roughly half as well as PhDs on complex multistep scientific workflows. A useful counterweight every time someone tells you agents are about to replace your scientists.
Quick mentions to close out. OpenAI shipped GPT-5.4-Cyber this week.
A variant of GPT-5.4 with a lower refusal boundary for legitimate defensive cybersecurity work, including binary reverse engineering. Access is limited to vetted security vendors. Framed as OpenAI's counter to Anthropic's Mythos. Codex Security, OpenAI's internal red team, is credited with over three thousand critical and high vulnerability fixes. The strategic message is consistent. The frontier is privatizing.
Meta launched Muse Spark.
First flagship LLM under new Chief AI Officer Alexandr Wang's Superintelligence Labs. Not state of the art, particularly behind on coding, but competitive on multimodal and health tasks. Marks Meta's shift away from open-source Llama toward proprietary models. A significant philosophical pivot for Meta, and it leaves Mistral and the Chinese open-weight labs as the main carriers of the open-source torch.
And Canva AI 2.0 dropped Friday.
Biggest product overhaul since their 2013 launch. Conversational design, agentic orchestration, a Memory Library for brand preferences, and connectors to Slack, Notion, Zoom, Gmail, and Calendar. Canva is responding directly to the Claude Design pressure we covered yesterday. The design tools market is now a full-on agentic dogfight.
Sunday big picture, Marcus. The threads?
Three this week. First, the public-leaderboard race is no longer the whole race. Anthropic admitting Mythos is stronger than Opus 4.7 and holding it back, OpenAI gating GPT-5.4-Cyber to vetted users only, the frontier is increasingly private and policy-driven. Second, capital and consolidation keep accelerating. OpenAI at eight hundred and fifty-two billion, Snap firing a thousand people and getting rewarded for it, EY rewriting white-collar work. Third, the physical and social frontier is catching up to the digital one. Spot reads gauges. EY automates audits. And a man with a manifesto throws a firebomb at Sam Altman's house.
AI has officially left the chatbot.
In every direction at once, Kate. That's the line of the week.
That's your AI in 15 for Sunday, April 19, 2026. See you tomorrow.