The Big Picture
Wednesday is the day the bill comes due. Microsoft, Alphabet, Meta, and Amazon all report earnings tonight — combined 2026 AI capex of $600–645 billion, a number that exists in the same air as Tuesday’s WSJ leak that OpenAI’s CFO is privately worried about paying for $600B in compute commitments. The Street will measure Microsoft’s 38% Azure constant-currency growth target against the just-rewritten OpenAI deal, where Redmond stopped paying revenue share and the AGI clause vanished. Across the bay, Musk returns to the witness stand for cross-examination — Savitt’s job today is to make the “speciesist” speech of yesterday afternoon look like vintage Musk performance art rather than evidence. Meanwhile Anthropic’s Claude got blocked at Goldman Sachs Hong Kong via a strict contract interpretation that Anthropic itself confirmed (“Claude has never been officially supported in Hong Kong”), Congress got its first classified briefing on the new generation of cyber-capable models from OpenAI and Anthropic, and Poolside open-sourced a 33B agentic coding model that runs on a Mac. The pattern: every direction the agentic-AI buildout pushed in the past 72 hours is hitting a real-world constraint — financial, legal, geopolitical, biosecurity. The capex bet meets the gravity of an earnings call.
Evening update: Bloomberg dropped the kicker — Anthropic is weighing offers at $900B+ valuation with a potential $50B raise, doubling February’s $380B mark and topping OpenAI’s $852B. Earnings day two played out cruelly: Microsoft CFO Amy Hood guided 2026 capex to $190B (+61% Y/Y, $35B above consensus, with $25B of the hike pinned on memory pricing), Meta’s after-hours rout deepened to ~6% after Zuckerberg called an ROI question “a very technical question,” and Alphabet held its +4%. Combined four-hyperscaler 2026 capex now lands near $650B — almost exactly the WSJ figure that was supposed to be panic-inducing yesterday. Mistral Medium 3.5 (128B dense open weight) dropped to Hugging Face. Musk’s cross still has ~90 minutes left for Thursday morning; Birchall didn’t take the stand, Brockman is confirmed for Thursday.
Big Tech earnings: Microsoft 2026 capex guides to $190B (+61% Y/Y, $25B from memory inflation), Meta AH selloff deepens to ~6% on Zuckerberg ROI deflection, Alphabet holds +4% — four-hyperscaler 2026 capex tally near $650B
Today Updated 8:00 PM CNBC, Bloomberg, Fortune, 9to5Google
All four mega-cap hyperscalers reported Wednesday after the close, and the call commentary reset the spend trajectory upward through the evening. Microsoft Q3 FY26: revenue $82.89B, EPS $4.27, Azure +40% constant currency, AI run-rate $37B (+123% Y/Y) — but CFO Amy Hood guided FY26 capex to $190B (+61% Y/Y, $35B above the $154.6B Visible Alpha consensus), pinning $25B of the increase on memory and component pricing and warning Microsoft will remain capacity-constrained through 2026; next-quarter capex steps to >$40B. Alphabet Q1: revenue $109.9B (+22%, fastest since 2022), Google Cloud $20.02B (+63% vs. +50.1% est.), backlog doubled QoQ to $460B, EPS $5.11 (+81% Y/Y); 2026 capex raised to $180–$190B; Pichai called enterprise AI “the primary growth driver for cloud for the first time in Q1.” Amazon Q1: revenue $181.52B, AWS $37.59B (+28%, fastest in 15 quarters), Bedrock token volume in Q1 alone exceeded all prior years combined, EPS $2.78 vs. $1.64 est. Meta Q1: revenue $56.31B (+33%, fastest since 2021), profit $26.8B (+61%, helped by an $8B tax benefit); FY26 capex raised to $125–145B from $115–135B. After-hours by 8 PM PT: MSFT roughly flat, GOOGL +4%, AMZN mixed, META ~-6% as Zuckerberg deflected an ROI question on the call as “a very technical question.”
Per the call commentary, two stories diverged: Alphabet’s revenue acceleration is keeping pace with the spend curve and the market is paying for it; Meta’s capex hike of $10B at the midpoint with no fresh ROI framing got a roughly $103B market-cap haircut. The new through-line: combined 2026 hyperscaler capex now lands near $650B (MSFT $190B + GOOGL $185B mid + META $135B mid + AMZN ~$140B implied) — almost exactly the WSJ figure that was supposed to be panic-inducing yesterday. The interesting micro-trade is memory: Hood explicitly named it as a $25B line item, and TD Cowen raised Micron’s PT $550→$660 the same day. The $600B-compute frame just got refinanced upward by the people who have to pay for it.
Musk v. Altman day three closes mid-cross: Musk admits actual donations were ~$38M (vs. $1B pledged), calls Microsoft’s $10B investment “the key tipping point,” volunteers he was “a fool” — Birchall held over, Brockman confirmed for Thursday
Today Updated 8:00 PM CNBC, NBC News, NPR, CNN, SF Standard
Day 3 ended with Musk still under cross-examination by William Savitt — OpenAI’s lead trial counsel — who put his close-of-day estimate at roughly another hour and a half (~90 minutes) of cross for Thursday morning. Afternoon highlights: Savitt walked Musk into admitting his actual donations to OpenAI totaled $38 million — “shy of the ‘up to $1 billion’ you offered the nonprofit”; Musk testified Microsoft’s $10B investment was “the key tipping point” that convinced him OpenAI was violating its mission, arguing the size precluded a traditional donation framing; on the for-profit, Musk allowed it could exist “as long as it was a subsidiary of the nonprofit… what you can’t have is the for-profit become the main event, and that’s what we have here”; pressed on his own intended cap-table majority and board control with documents in evidence, Musk volunteered he was “a fool” for putting money into OpenAI; later he snapped at Savitt, “You’re being misleading.” Judge Gonzalez Rogers intervened repeatedly to slow the talkover. Birchall did not take the stand. Brockman is confirmed for Thursday.
Per the live coverage, Savitt’s afternoon was the cleanest cross of the week: he got Musk on the record that pledged-vs-actual giving was $1B vs. $38M, that the $10B Microsoft investment — not the for-profit conversion itself — was Musk’s personal trigger, and that Musk’s own cap-table demands assumed a for-profit existed. The “a fool” admission is the kind of line a jury writes down. Brockman on Thursday is the witness OpenAI most wants in front of an advisory jury — cofounder, founding-CTO credibility, the only witness besides Altman who can speak to the 2017–2018 negotiations from inside the room.
Goldman Sachs cuts Anthropic Claude access for Hong Kong bankers via strict contract interpretation; Anthropic confirms Claude was “never officially supported in Hong Kong” — Gemini and ChatGPT remain available
Today FT, Bloomberg, Reuters, Yahoo Finance, Forex News
The FT reported Wednesday that Goldman Sachs employees in Hong Kong have been unable for several weeks to access Anthropic’s Claude models — either directly or through Goldman’s internal AI platforms. Per the FT and Bloomberg, the cutoff stemmed from a strict interpretation of Goldman’s contractual arrangements with Anthropic, made after consultation with Anthropic itself. Anthropic confirmed to the FT that its Claude models had never been officially supported in Hong Kong. Other AI tools remain available on Goldman’s platform — Google Gemini and OpenAI’s ChatGPT included. Goldman did not comment on the underlying contract clause that triggered the cutoff.
Per the FT, the interesting tell is that this isn’t a US export-control action or a Trump-administration directive — it’s a private contractual restriction that Anthropic itself confirmed. The implication is that Anthropic’s commercial agreements have explicit territorial carve-outs around mainland-adjacent jurisdictions that OpenAI and Google have not adopted, leaving Anthropic uniquely exposed in the geo-fragmenting compliance regime. Trump and Xi meet in Beijing in mid-May, and AI access is on the agenda; the Goldman/HK precedent is the kind of operational artifact that gets cited in those rooms.
Claude Code v2.1.123 lands the morning after v2.1.122 — OAuth 401 retry-loop hot patch when CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 is set; three releases in 24 hours
Today Claude Code changelog, GitHub releases
Anthropic shipped Claude Code v2.1.123 Wednesday morning, a single-line patch fixing OAuth authentication that was failing with a 401 retry loop when the environment variable CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 was set. That makes three releases in 24 hours: v2.1.121 in Tuesday morning’s big drop (PostToolUse generalization, alwaysLoad MCP, plugin prune), v2.1.122 Tuesday afternoon (ANTHROPIC_BEDROCK_SERVICE_TIER, PR-URL paste in /resume, /branch fork-crash fix), and v2.1.123 this morning fixing a regression specific to enterprise users running with experimental betas disabled.
The cadence is the story: the typical release rhythm is every 24–48 hours, not three patches inside a business day. v2.1.123’s symptom — OAuth failing only when the betas-disabled flag is set — is the kind of conditional regression that almost certainly reflects an enterprise customer screaming first thing PT, not a routine cleanup. After last week’s post-mortem on the March/April quality regressions Cherny called “the most complex investigation we’ve had,” the team is visibly running tighter loops between detection and ship.
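The failure mode is generic enough to sketch. A minimal illustration, not Anthropic's implementation (the helper names and status sets are assumptions), of why a 401 inside a blanket retry policy spins forever, and the usual fix of classifying authentication failures as terminal:

```python
# Illustrative sketch only: a 401 means the credentials themselves are bad or
# expired, so re-sending the same token can never succeed. If the retry policy
# treats it like a transient error, the client loops until someone notices.

RETRYABLE = {429, 500, 502, 503}   # transient: worth retrying with backoff
TERMINAL = {400, 401, 403}         # auth/request errors: retrying cannot help

def call_with_retries(request, max_attempts=5):
    """Retry transient failures; surface terminal ones immediately."""
    for attempt in range(max_attempts):
        status, body = request()
        if status == 200:
            return body
        if status in TERMINAL:
            # The fix: refresh credentials (or fail loudly) instead of looping.
            raise PermissionError(f"HTTP {status}: re-authenticate, don't retry")
        # Otherwise treat as transient and try again (backoff elided for brevity).
    raise TimeoutError(f"gave up after {max_attempts} attempts")
```

The reported symptom, a loop only when CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 is set, suggests the beta and non-beta auth paths classified 401 differently; the sketch above shows the shape of the non-looping behavior, not the actual patch.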
Poolside open-sources Laguna XS.2 (33B/3B-active MoE, Apache 2.0): SWE-bench Verified 68.2%, runs on a Mac with 36 GB RAM — first US-lab open-weight agentic coding model since Llama 4
Yesterday Poolside blog, VentureBeat, MarkTechPost, Hugging Face
Poolside — the three-year-old NVIDIA-backed lab led by former GitHub CTO Jason Warner — released two coding models Tuesday: the proprietary Laguna M.1 (their flagship foundation model) and the open-weight Laguna XS.2 under Apache 2.0. XS.2 is a 33B-total / 3B-active MoE with 256 experts plus one shared expert, 131K context, FP8 KV cache, mixed sliding-window/global attention in a 3:1 ratio across 40 layers, trained with the Muon optimizer. Benchmarks: SWE-bench Verified 68.2%, SWE-bench Multilingual 62.4%, SWE-bench Pro (Public) 44.5%, Terminal-Bench 2.0 30.1%. Native interleaved reasoning between tool calls; full OpenAI-compatible tool calling; ships with vLLM, Transformers, TensorRT-LLM, Ollama, MLX support and FP8/NVFP4 quantized variants. Compact enough to run on a Mac with 36 GB RAM via Ollama.
Per VentureBeat, Poolside’s framing is government and public-sector deployment into “the highest-security environments” — XS.2 is the open-weight surface, M.1 is what they’ll sell. Substantively, this is the first new American open-weight agentic coding model since Llama 4, and the SWE-bench Verified 68.2% number puts it in the same neighborhood as much larger Chinese MoE drops at a fraction of the deployment footprint. The 36-GB Mac claim is the part developers will actually test — if it holds, XS.2 becomes the strongest local-coding model that Patrick can run without renting a GPU.
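The 36-GB claim is easy to sanity-check with back-of-envelope arithmetic. A sketch under stated assumptions (weights dominate memory, 1 GB = 1e9 bytes, and the FP8/NVFP4 quantized variants the release lists):

```python
# Rough weight-memory footprint for a 33B-total-parameter model at the
# quantizations Poolside ships. KV cache, runtime overhead, and the OS are
# extra, so treat these numbers as floors, not totals.

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

bf16 = weight_gb(33, 16)  # 66.0 GB: does not fit in a 36 GB machine
fp8 = weight_gb(33, 8)    # 33.0 GB: tight, little headroom for KV cache
fp4 = weight_gb(33, 4)    # 16.5 GB: comfortable
print(bf16, fp8, fp4)
```

At 4-bit the weights take roughly half the machine's RAM, which is consistent with the 36-GB Mac claim; at FP8 the fit depends on context length, since even an FP8 KV cache grows toward the 131K window.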
Anthropic ships “Claude for Creative Work”: nine first-party connectors live across all plans (Adobe Creative Cloud, Blender, Ableton, Autodesk Fusion, SketchUp, Splice, Affinity, Resolume Arena/Wire) — Anthropic joins Blender Development Fund as patron
Yesterday Anthropic, 9to5Mac, MacRumors, Neowin, Dataconomy
Anthropic announced “Claude for Creative Work” Tuesday with nine creative-tool connectors live immediately on all Claude plans (Free through Enterprise). The Adobe connector is the broadest — reaching Photoshop, Illustrator, Firefly, Express, Premiere, Lightroom, InDesign, and Stock (50+ tools). The Blender connector exposes Blender’s full Python API to Claude, letting it analyze and debug entire scenes or batch-script object operations; Anthropic joined the Blender Development Fund as a patron alongside the announcement. Ableton ships as a Live/Push documentation assistant. Autodesk Fusion lets paid Fusion users create and modify 3D models conversationally. SketchUp, Splice, Affinity (now under Canva), Resolume Arena and Resolume Wire round out the set.
Per Neowin, the strategic frame is “the front door for creative work” — Anthropic positioning Claude as the orchestration layer over apps that have spent decades building proprietary scripting surfaces (Blender Python, ExtendScript, Max/MSP, etc.). The Blender Development Fund patronage is the sharpest signal: Anthropic is paying upstream into the Python API that makes the integration possible, which is what a serious long-term commitment looks like (vs. OpenAI’s historical pattern of API consumption without contribution). For Patrick’s own use this is mostly a 3D/audio/video play; for Anthropic, it’s the consumer-app surface they’ve been missing.
OpenAI and Anthropic give first classified briefings to House Homeland Security on cyber-capable models: Anthropic Mythos held back over autonomous-vuln-exploit risk, OpenAI tiering GPT-5.4-Cyber rollout
Yesterday Axios, CNN Business, SecurityWeek, Schneier on Security
Per Axios, OpenAI and Anthropic each delivered separate classified briefings to House Homeland Security Committee staff Tuesday on their new cyber-focused models and the implications for critical infrastructure. Anthropic’s Mythos Preview — which the company has held back from public release because it can autonomously find and weaponize software vulnerabilities including OS and internet-infrastructure flaws human reviewers missed — was the focal model on the Anthropic side. OpenAI’s GPT-5.4-Cyber is shipping under a tiered access program. Per CNN, OpenAI is also pushing to put its “most powerful model” at all levels of US government to fight hackers. Chair Andrew Garbarino (R-NY) is hosting ongoing private roundtables with AI executives; Rep. Jay Obernolte’s federal-AI-framework bill is the legislative vehicle in motion.
Per Axios, this is the first time lawmakers have been briefed by both labs on the new generation of cyber-capable models — not the older generic-LLM threat model but the new “model finds and exploits novel CVEs autonomously” threat model. Anthropic’s Mythos hold-back is itself the strongest argument that the safety case is real: a company about to IPO is sitting on a finished frontier model rather than shipping it, which is behavior the market does not normally reward. Schneier’s read — “Mythos shifts the asymmetry permanently in favor of attackers” — is what the briefings are trying to forestall.
Bloomberg: Anthropic weighing offers at $900B+ valuation with potential $50B raise — doubling February’s $380B mark and topping OpenAI’s $852B
Today Bloomberg, CNBC, TechCrunch, Reuters
Bloomberg reported Wednesday that Anthropic is weighing investor offers at a valuation above $900 billion, with a potential $50 billion primary raise — the largest single AI fundraise on record if it closes near the targeted size. The framing more than doubles Anthropic’s February 2026 mark of $380B (the round that priced when the Google $40B add-on was announced) and prices the company above OpenAI’s $852B. Bloomberg’s sources said no terms are finalized and the sweep is in early stages; CNBC, TechCrunch, and Reuters confirmed the same shape via separate sourcing. Anthropic declined to comment.
Per Bloomberg’s framing, the timing is the tell: this leaks the same day Microsoft told the Street its 2026 capex is going to $190B and 24 hours after the WSJ’s “OpenAI CFO worried about $600B compute” story landed, with the OpenAI/Microsoft contract rewrite (revenue-share gone, AGI clause deleted) still in the air. A $900B mark with a $50B add isn’t a financing — it’s the public answer to “is the OpenAI–Microsoft renegotiation a sign the buildout is faltering?” The market has been waiting to price the OpenAI-vs-Anthropic spread; tonight it has a number.
Mistral drops Medium 3.5: 128B dense open-weight model lands on Hugging Face, pitched as enterprise self-host with reasoning + coding + agentic in one package
Today Hugging Face, Startup Fortune
Mistral released Medium 3.5 Wednesday, a 128B-parameter dense model (every parameter active on every token, not MoE), live on Hugging Face as mistralai/Mistral-Medium-3.5-128B. Per Mistral’s framing — covered by Startup Fortune — the model is “built to combine reasoning, coding and agentic workflows in one self-hostable package,” pitched at enterprises that want a single open-weight surface rather than a stack of specialized models. The model card was trending on HF within hours of posting (115+ likes despite a low download count, an early-acceleration signal). Per project rules, the open-weights page only updates on the morning run, so the formal entry will land tomorrow morning.
Per Startup Fortune, the strategic frame is making “open-weight AI feel enterprise-grade again” — explicitly contrasted with the MoE-everything trend (Poolside Laguna XS.2 yesterday, Tencent Hy3 Sunday, DeepSeek V4 last week, Nemotron 3 Nano Omni Tuesday). A 128B dense model is heavier to deploy than a 33B/3B-active MoE, but it is also predictable in the way enterprise procurement people care about: every parameter does work on every token, with no expert-routing variance. With Anthropic’s $900B leak the same day, the open-vs-closed framing got sharper.
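The dense-vs-MoE trade above can be put in rough numbers. A sketch assuming the common decode approximation of ~2 FLOPs per active parameter per generated token (an assumption that ignores attention cost, which grows with context length), using the figures from the two announcements:

```python
# Per-token decode cost comparison: a 128B dense model activates every
# parameter on every token; Laguna XS.2 activates ~3B of its 33B total.

def decode_flops_per_token(active_params_billions: float) -> float:
    """Approximate decode FLOPs per token under the 2 * N_active rule of thumb."""
    return 2.0 * active_params_billions * 1e9

dense = decode_flops_per_token(128)  # Mistral Medium 3.5: all 128B active
moe = decode_flops_per_token(3)      # Laguna XS.2: ~3B active of 33B total
print(dense / moe)                   # dense does ~42.7x the arithmetic per token
```

That ratio is the price of the predictability the enterprise pitch leans on: no expert-routing variance, but roughly 43x the arithmetic per generated token.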