The Big Picture
Monday, and the dominant story is bifurcation — on two axes at once. Axis one: geography. DeepSeek V4 is expected to ship this month as the first frontier model trained and served on Huawei Ascend 950PR silicon, no Nvidia, native CANN instead of CUDA. Jensen Huang called the prospect “a horrible outcome” for the United States on the Dwarkesh Podcast over the weekend, which is the most candid thing he’s said in two years. Meanwhile Reflection AI is closing a $2.5B round at a $25B valuation as the West’s open-frontier counter-move, with JPMorgan’s new $1.5T “Security and Resiliency Initiative” writing the equity check alongside Nvidia itself. You can see the structure clearly now: China gets a state-industrial stack that runs end-to-end on domestic chips with open weights; the US gets a defense-bank-and-GPU-maker coalition funding a Western open lab to match it. The era where “frontier AI” and “American AI” were the same sentence is ending in real time. Axis two: software structure. Salesforce used TDX last Wednesday to announce Headless 360 — every object, record, and workflow in Salesforce is now an API, an MCP tool, or a CLI command, usable by any agent. That’s not a feature release, that’s the world’s largest B2B SaaS vendor admitting its UI layer is the optional part. Agentforce Vibes 2.0 ships with Claude Sonnet 4.5 as the default model. Salesforce just ceded its intelligence layer to Anthropic. And sitting a week out: Musk v. Altman, the first AI trial that will matter, begins jury selection at a federal courthouse in Oakland next Monday. The Ringer dropped its 5,000-word primer this morning. Four weeks of sworn testimony about whether OpenAI betrayed its charitable mission to enrich a handful of people — with the founders’ actual 2015–2019 texts, emails, and board minutes as evidence — is the thing that could actually reshape the industry’s narrative in 2026. Everything else is warm-up.
Midday refresh: Today’s crystallizing story is Project Glasswing — the Anthropic cybersecurity coalition whose contours we’ve caught in fragments all month (NSA deployment, Pentagon blacklist, ECB alarm) but whose substance lands now via Foreign Policy’s explainer, Bruce Schneier’s blog post, Zvi’s analysis, and the Cloud Security Alliance briefing covered by IT Brew. The capability: Claude Mythos Preview autonomously found a 27-year-old OpenBSD bug in hardened security-critical code, a 16-year-old FFmpeg flaw that survived 5 million automated fuzz runs, a Linux-kernel privilege-escalation chain, a 17-year-old FreeBSD RCE (CVE-2026-4747), and “thousands” of high-severity zero-days across every major operating system and web browser — with over 99% still unpatched. Anthropic won’t release the model publicly; 11 partners (AWS, Apple, Google, Microsoft, Nvidia, JPMorgan, Cisco, Broadcom, CrowdStrike, Palo Alto, Linux Foundation) get access with $100M in usage credits. CSA’s line is the honest one: defenders got a window, not an advantage, and IT pros should brace for a flood of patches as AI-driven vuln discovery laps patch cycles that weren’t built for this cadence. If the capability claims hold up, this is the most significant AI cybersecurity disclosure of 2026 — and the first time a frontier lab has published specific CVEs its model found autonomously. Separately: ChatGPT and Codex ate a 90-minute outage starting 7:05 AM PDT — the second major frontier-AI outage in a week after Claude’s April 15 incident. Reliability is quietly becoming its own story.
Evening update: Two late-Monday developments bookend the day’s AI-security theme with grim symmetry. Vercel disclosed this afternoon that attackers breached its internal systems by hijacking an employee’s corporate Google Workspace account via OAuth through Context AI — a small third-party AI analytics tool whose own supply chain was quietly compromised back in February when a Context employee’s PC got hit with Lumma Stealer malware after searching for Roblox cheats (Hudson Rock’s forensics). Vercel customer API keys, source code, and database data are now being offered on BreachForums for $2M by a user called ShinyHunters. The attack path is the whole story: this is the first high-profile case of a compromised AI productivity tool becoming the hinge of a major cloud-hosting breach. Every engineering org that’s let staff OAuth an AI integration into a corporate Google or Microsoft account — which is, roughly, every engineering org — is now looking at consent logs differently tonight. Set it next to this morning’s Glasswing disclosure — a frontier lab publishes CVEs its model found autonomously while another AI tool becomes the pivot for real credential theft, the same Monday — and you have the cleanest capsule possible of where AI and security stand in April 2026: offense and defense both accelerating, defenders drowning, attackers improvising. Separately, and genuinely good news for a change: Claude Code 2.1.116 shipped with the biggest /resume speed-up in months (67% faster on 40MB+ sessions), inline thinking-spinner progress, /doctor while Claude is responding, and a quietly important sandbox fix where auto-allow no longer bypasses the dangerous-path check on rm/rmdir.
The Ringer drops its Musk v. Altman primer: $134B suit, four weeks of testimony, starts next Monday
Today The Ringer
The Ringer published a deep-dive this morning on the Musk, et al. v. Altman, et al. civil trial, framing it as the next “techstravaganza.” Jury selection begins Monday, April 27 at a federal courthouse in Oakland; the trial is expected to run roughly four weeks. Musk alleges he was defrauded after donating roughly $38M in seed funding to OpenAI on the explicit promise that it would remain a nonprofit committed to open, safe AI research. The prayer for relief includes Altman’s removal, restoration of OpenAI’s nonprofit structure, and disgorgement of gains. Damages framing in recent filings: $134B. Evidence set to be entered includes the founders’ 2015–2019 private texts, board minutes, and the contemporaneous emails in which OpenAI’s for-profit transition was discussed and justified.
Why The Ringer is running a 5,000-word primer on a corporate trial: because the case is going to produce, on the record and under oath, the actual paper trail behind every narrative OpenAI has told about its mission since 2019. Altman, Brockman, Musk, and almost every early OpenAI board member will be on the witness stand. For a press corps that has spent two years arguing about “what OpenAI said it was versus what it became,” the evidence is about to stop being a rhetorical question and become a docket. Regardless of who wins on the merits, the exhibits are the story.
DeepSeek V4 is expected this month on Huawei Ascend 950PR — Jensen Huang calls it “horrible” for the US
Yesterday Gizmochina / SCMP / TNW / Dataconomy
Reporting over the weekend crystallized what’s coming later this month: DeepSeek V4 will launch as the first frontier AI model trained and served end-to-end on Huawei’s Ascend 950PR accelerators, using Huawei’s CANN framework rather than Nvidia’s CUDA. The model is reported to be a ~1 trillion parameter Mixture-of-Experts with 32–37B active parameters per token and a 1M-token context window (unofficial). DeepSeek spent months rewriting its core inference and training code to target CANN, and Alibaba, ByteDance, and Tencent have reportedly placed advance orders for hundreds of thousands of next-generation Chinese AI accelerators. Nvidia CEO Jensen Huang told the Dwarkesh Podcast that DeepSeek going Huawei-native would be “a horrible outcome” for the United States. The quote immediately became the lead paragraph in roughly every China-tech story filed over the weekend.
SCMP’s framing: this is the first time a headline-grade model is shipping without any dependency on the American compute stack, and it’s arriving just as US export controls were supposed to slow Chinese AI down. If DeepSeek V4 benchmarks near GPT-5.4 or Claude Opus 4.7 on common evals while running on domestic silicon, the “Nvidia + CUDA” moat — which has been the entire theory of why China can’t catch up — becomes narrower by the size of one working frontier lab. Huang’s “horrible outcome” line is the CEO of the company with the most to lose saying the quiet part out loud.
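The MoE arithmetic behind those reported specs is worth a back-of-envelope pass; every figure below is an unofficial press number, not a confirmed spec:

```python
# Back-of-envelope Mixture-of-Experts arithmetic using the reported
# (unofficial) DeepSeek V4 figures. Nothing here is a confirmed spec.
total_params = 1.0e12    # ~1 trillion total parameters
active_params = 37e9     # upper end of the reported 32-37B active per token

# Fraction of the network actually consulted for each token
active_fraction = active_params / total_params        # ~3.7%

# Rough forward-pass cost: ~2 FLOPs per active parameter per token
flops_per_token_moe = 2 * active_params               # ~7.4e10 FLOPs
flops_per_token_dense = 2 * total_params              # ~2.0e12 FLOPs

print(f"active fraction: {active_fraction:.1%}")
print(f"MoE per-token cost is ~{flops_per_token_dense / flops_per_token_moe:.0f}x "
      f"cheaper than a dense 1T model")
```

The point of the sketch: if the numbers hold, serving cost tracks the ~37B active slice, not the 1T total, which is what makes a frontier-scale model tractable on less mature accelerator silicon.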
Reflection AI in talks for $2.5B at $25B — JPMorgan’s new defense-equity program is writing the check
Yesterday WSJ / TechFundingNews / Tekedia
Reflection AI — the open-frontier lab founded last year by former DeepMind Gemini reward-modeling lead Misha Laskin and AlphaGo co-creator Ioannis Antonoglou — is in talks to close a $2.5B round at a $25B valuation, per the WSJ. That more than triples the $8B valuation from its October 2025 raise. Nvidia is in again as a strategic investor. The new entrant that makes the round actually interesting: JPMorgan Chase, participating through its recently announced Security and Resiliency Initiative, a $1.5 trillion, ten-year program with $10B earmarked for direct equity stakes in strategic US companies including AI. Disruptive, DST, 1789, B Capital, Lightspeed, GIC, and Sequoia are also named. Reflection’s stated mission: ship open-weight frontier models as an American counter to DeepSeek, Mistral, and Meta’s open lineage.
TechFundingNews’ read: this is the first public example of a US bank explicitly positioning an open-source AI lab as a “strategic resilience” asset and writing an equity check under a defense-industrial mandate, not a venture one. Whether the round closes at $25B on an unreleased product is its own question (AInvest has a skeptical piece arguing repricing risk), but the structural move matters even if the number doesn’t: American open-weight AI is being treated as critical infrastructure and funded out of a bucket that was previously reserved for semiconductors and energy. Pair this with the DeepSeek-on-Huawei story above and you get both sides of the bifurcation in the same weekend.
Salesforce’s Headless 360 at TDX 2026: the UI is officially optional, Claude Sonnet is the brain
Apr 15 Salesforce / Salesforce Ben / SalesforceDevops.net
At TDX 2026 in San Francisco on April 15, Salesforce shipped Headless 360 alongside Agentforce Vibes 2.0. The positioning is unambiguous: every Salesforce object, record, workflow, and automation is now simultaneously exposed as an API, an MCP tool, and a CLI command — all of them callable by any agent, Salesforce-native or third-party. Agentforce Vibes 2.0 itself ships with multi-model support (Claude Sonnet and GPT-5) and full org-awareness of metadata, configurations, and business context, plus natural-language DevOps and native React component support for building micro-frontends inside Lightning Experience. Every Salesforce Developer Edition org now includes Agentforce Vibes with Claude Sonnet 4.5 as the default model, plus Salesforce-hosted MCP servers, at no cost.
SalesforceDevops.net’s framing: “Salesforce goes headless — and widens the builder gap.” The UI was the moat. Admin-and-clicks was the whole product-differentiation story for two decades. Headless 360 is Salesforce conceding that in an agentic world the customer-facing surface is whatever the agent decides to render, and that the platform’s real value is the data graph and the workflow primitives underneath. Picking Claude Sonnet 4.5 as the default model in every Developer Edition is a second concession — Salesforce has effectively outsourced its intelligence layer to Anthropic for the developer surface. Whatever you think of TDX as an event, this is the biggest enterprise-SaaS architectural admission of 2026 so far.
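On the agent side, the wire format is plain MCP. Here is a sketch of the JSON-RPC 2.0 request an agent would send to invoke a Salesforce-exposed tool; the tool name and arguments are hypothetical illustrations, not documented Headless 360 identifiers:

```python
import json

# Sketch of the JSON-RPC 2.0 message an agent sends to an MCP server to
# invoke a tool. "tools/call" is the standard MCP method; the tool name and
# arguments below are made-up examples, not real Headless 360 identifiers.
def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(msg)

# Hypothetical: an agent asking a Salesforce-hosted MCP server for records.
payload = mcp_tool_call(1, "salesforce_query_records",
                        {"object": "Opportunity", "limit": 5})
```

The design point Headless 360 is betting on is exactly this: once every workflow answers to a uniform tools/call envelope, the rendering surface becomes whatever the calling agent decides to show.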
Project Glasswing & Claude Mythos Preview: CSA warns of “flood of vulnerabilities,” Foreign Policy calls it a cyber inflection point
Today Anthropic / Foreign Policy / IT Brew / VentureBeat / Schneier
The Anthropic Project Glasswing initiative — an AI-for-defensive-cybersecurity coalition first sketched earlier this month — crystallized today with a wave of serious analysis: Foreign Policy’s explainer (“changes cyber calculus”), Bruce Schneier’s blog post, Zvi’s breakdown, and a Cloud Security Alliance briefing covered today by IT Brew warning IT pros to “prepare for a flood of vulnerabilities.” The underlying disclosures, from Anthropic’s red-team paper at red.anthropic.com, are extraordinary: Claude Mythos Preview autonomously identified and exploited a 27-year-old vulnerability in hardened OpenBSD code, a 16-year-old FFmpeg bug that automated fuzzers had tested roughly 5 million times without detection, a full Linux-kernel privilege-escalation chain, a 17-year-old FreeBSD RCE (CVE-2026-4747), and a browser exploit chaining four separate flaws into a JIT heap spray that escaped both renderer and OS sandboxes. Anthropic says the model has found “thousands” of high-severity zero-days across every major OS and web browser; over 99% remain unpatched. Benchmarks vs. Opus 4.6: CyberGym 83.1% (up from 66.6%), SWE-bench Verified 93.9% (up from 80.8%), Terminal-Bench 2.0 82.0% (up from 65.4%). Anthropic is not releasing Mythos Preview publicly. Glasswing partners getting access: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with $100M in Mythos Preview credits and $4M to OSS security (Alpha-Omega, OpenSSF via Linux Foundation, Apache).
The CSA briefing lays out the frame Anthropic’s launch elides: “Attackers gain disproportionate benefit, and current patch cycles, response processes, and risk metrics were not built for this environment.” Foreign Policy’s point is that frontier AI has crossed a threshold where a single model outperforms all but the world’s most elite vulnerability researchers — and that threshold won’t stay proprietary to one lab. Glasswing is a bet that 11 of the world’s most critical software vendors can patch faster than adversarial labs can weaponize the same capability. It’s also the first time an AI company has published specific CVE numbers for zero-days its model found autonomously. Defenders got a window — how narrow it is depends on when the next lab ships something comparable.
ChatGPT & Codex down for 90 minutes this morning — second frontier-AI outage in a week
Today OpenAI status / TechRadar / Tom’s Guide
OpenAI acknowledged a “partial outage” on its status page starting at approximately 7:05 AM PDT (10:05 AM ET) today, hitting ChatGPT, Codex, and the API platform. Downdetector logged over 13,000 reports globally at peak, including 8,700+ in the UK and 1,900+ in the US. Affected services: conversations, logins, voice mode, image generation, and Codex coding sessions. OpenAI’s status line: “Users unable to load ChatGPT and Codex.” A mitigation was deployed within roughly 90 minutes and the status page marked recovery shortly after.
Five days ago Claude ate a three-hour outage across api.anthropic.com and Claude Code. Today it’s ChatGPT and Codex for 90 minutes. That’s two major frontier-AI outages inside a single week — and in both cases the coding tool (Claude Code, Codex) is a direct casualty because it rides the same infrastructure. For developers who’ve moved agentic coding workflows onto either stack, the dependency is no longer a theoretical reliability risk; it’s a weekly-scale one.
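For teams building on either stack, the minimum hedge is retry-plus-failover at the call site. A minimal sketch, with zero-argument callables standing in for real provider clients:

```python
import time

# Minimal failover sketch for agentic pipelines: try each provider with
# exponential backoff, then fall through to the next. `providers` is any
# ordered list of zero-argument callables that raise on failure; real code
# would wrap actual API clients and catch their specific error types.
def call_with_failover(providers, retries=3, base_delay=0.01):
    last_err = None
    for call in providers:
        delay = base_delay
        for _ in range(retries):
            try:
                return call()
            except Exception as e:      # illustration only; narrow this in prod
                last_err = e
                time.sleep(delay)       # back off before the next attempt
                delay *= 2
    raise RuntimeError("all providers failed") from last_err

def primary():                          # simulates the provider mid-outage
    raise ConnectionError("503 Service Unavailable")

def backup():
    return "ok-from-backup"

result = call_with_failover([primary, backup])   # falls through to backup
```

It’s ten lines of plumbing, but it turns a provider outage from a hard stop into a degraded mode, which is the difference that mattered twice this week.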
Vercel breached via an employee OAuthing Context AI into corporate Google Workspace — customer API keys, source code on BreachForums
Today Vercel / TechCrunch / The Register / CyberScoop / DarkReading / Help Net Security
Vercel disclosed an April 2026 security incident today: attackers gained unauthorized access to internal Vercel systems and compromised credentials belonging to “a limited subset” of customers. The attack chain is the lesson — a Vercel employee had connected Context AI (a third-party AI analytics and office-automation tool) to their corporate Google Workspace account via OAuth. Context AI itself had been quietly compromised in February, when one of its employees’ laptops picked up the Lumma Stealer infostealer after a Google search for Roblox game exploits (forensics: Hudson Rock). The attackers used the stolen Context AI OAuth trust to hijack the Vercel employee’s Google Workspace account, which got them into Vercel environments and into environment variables that hadn’t been marked as “sensitive” — including, per multiple outlets, plaintext credentials. CEO Guillermo Rauch is urging all customers to rotate any keys and credentials marked non-sensitive. On BreachForums, a user going by ShinyHunters has claimed to be selling the Vercel data for $2M — customer API keys, source code, and database data. Vercel has engaged Google-owned Mandiant, notified law enforcement, and says Next.js and Turbopack themselves were not affected. Vercel internally identified the incident on April 19; public disclosure landed today.
CyberScoop’s framing is the right one: this is the first high-profile case of a compromised AI productivity tool being used as the supply-chain pivot into a major cloud hosting provider. The OAuth consent an individual employee gave to a low-profile AI side-tool widened the blast radius from “one person’s docs” to “everything that person’s corporate account can see,” and in a cloud-hosting company that includes customer credentials. For every engineering org that has greenlit staff connecting Claude, Granola, Superhuman, Context, or any other AI integration to corporate Google or Microsoft accounts — which is essentially every engineering org in 2026 — this is the template for how the next breach happens. The Mandiant engagement suggests Vercel thinks the scope may widen.
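For teams doing that consent-log review, the triage is mechanical once grants are exported. A sketch over illustrative grant records (the field names and the broad-scope list are assumptions for the example, not any vendor's actual schema):

```python
# Offline triage sketch: flag third-party OAuth grants that hold broad scopes.
# The record fields ("app", "user", "scopes") and the scope set below are
# illustrative assumptions, not a specific Workspace API schema.
BROAD_SCOPES = {
    "https://mail.google.com/",                              # full mailbox
    "https://www.googleapis.com/auth/drive",                 # full Drive
    "https://www.googleapis.com/auth/admin.directory.user",  # directory admin
}

def flag_risky_grants(grants):
    """Return (app, user, scopes) for every grant that includes a broad scope."""
    risky = []
    for g in grants:
        hit = set(g["scopes"]) & BROAD_SCOPES
        if hit:
            risky.append((g["app"], g["user"], sorted(hit)))
    return risky

# Hypothetical exported grants for one employee.
grants = [
    {"app": "context-ai", "user": "dev@example.com",
     "scopes": ["https://mail.google.com/", "openid"]},
    {"app": "calendar-widget", "user": "dev@example.com",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
flagged = flag_risky_grants(grants)   # only the broad-scope grant is flagged
```

The hard part isn’t the filter; it’s deciding, grant by grant, which of those broad scopes an AI side-tool ever actually needed.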
Claude Code v2.1.116: /resume 67% faster on big sessions, inline thinking progress, sandbox auto-allow hardening
Today Claude Code changelog
Claude Code 2.1.116 shipped today. Headline fix: /resume on large sessions (40MB+) is now up to 67% faster, with dead-fork entries handled more efficiently — a direct fix for the coffee-break-on-resume behavior that long agentic sessions had started producing. UX: the thinking spinner now shows inline progress (“still thinking”, “thinking more”, “almost done thinking”), replacing the separate hint row. /config search now matches option values (search “vim” to find the Editor mode setting). /doctor can be opened while Claude is still responding — no more waiting out a long turn to investigate. MCP gets a per-tool result-size override of up to 500K characters via _meta["anthropic/maxResultSizeChars"], meaningfully widening what can be passed through. Write-tool diff computation is 60% faster on files containing tab, &, or $ characters. Agent frontmatter hooks now fire when running as the main-thread agent via --agent. Security: sandbox auto-allow no longer bypasses the dangerous-path safety check for rm/rmdir on critical directories. Plus fixes for Kitty keyboard protocol terminals, scrollback duplication in inline mode, and modal search at short terminal heights.
The /resume speed-up is the one that matters for anyone running long agentic coding sessions — the 40MB+ penalty had turned resume into a background task. Fixing that plus letting /doctor open mid-response closes two of the most common mid-session productivity paper-cuts in a single release. The sandbox auto-allow hardening on rm/rmdir is quietly the most important fix: an auto-allow path that bypassed dangerous-directory checks is the kind of footgun that only surfaces after something bad happens, and Anthropic patched it before that became the story.
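The sandbox bug class is easy to see in miniature. A toy sketch of a dangerous-path gate that auto-allow should never short-circuit (the protected list and function are invented for illustration, not Claude Code's actual logic):

```python
from pathlib import Path

# Toy illustration of the bug class the 2.1.116 fix addresses: an auto-allow
# decision must be ANDed with the dangerous-path check, never substituted for
# it. The protected set is a made-up example, not Claude Code's real list.
PROTECTED = {Path("/"), Path("/etc"), Path("/usr"), Path("/var")}

def may_run_rm_unprompted(target: str, auto_allowed: bool) -> bool:
    """True only if the command is auto-allowed AND the target is safe.

    The buggy variant would `return auto_allowed` alone, letting an
    auto-allow rule delete a critical directory without confirmation.
    """
    dangerous = Path(target).resolve() in PROTECTED
    return auto_allowed and not dangerous

# A scratch path sails through; a protected path always forces a prompt,
# even when the rm command itself matches an auto-allow rule.
ok = may_run_rm_unprompted("/tmp/scratch/build", auto_allowed=True)
blocked = may_run_rm_unprompted("/etc", auto_allowed=True)
```

The general lesson travels beyond this one changelog entry: permission fast-paths and safety gates have to compose, because any fast-path that returns early past the gate is a latent incident.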