Claude Code adds MCP elicitation — servers can now ask you questions mid-task
Today Claude Code changelog (v2.1.76)
Claude Code now supports MCP elicitation, letting MCP servers request structured input from the user via an interactive dialog while a task is running. New Elicitation and ElicitationResult hooks let developers tap into this flow. Also added a -n / --name CLI flag to name sessions at startup.
The changelog describes this as enabling “more sophisticated interactions with external tools” — MCP tools can now have a back-and-forth with the user, turning one-shot tool calls into interactive workflows.
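Under the hood, elicitation is a JSON-RPC exchange defined by the MCP spec: the server sends an `elicitation/create` request carrying a message and a flat JSON Schema, and the client replies with the user's answer (or a decline/cancel). A sketch of the two message shapes as plain dicts — the deployment question and its fields are invented for illustration:

```python
# Sketch of an MCP elicitation round-trip. Field names follow the MCP
# spec's elicitation/create request; the example question is made up.

# Server -> client: ask the user a structured question mid-task.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        "message": "Which environment should I deploy to?",
        "requestedSchema": {  # flat JSON Schema of primitive fields
            "type": "object",
            "properties": {
                "environment": {"type": "string", "enum": ["staging", "production"]},
                "confirm": {"type": "boolean"},
            },
            "required": ["environment"],
        },
    },
}

# Client -> server: the user's answer. `action` is one of
# "accept" | "decline" | "cancel"; `content` is present only on accept.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "action": "accept",
        "content": {"environment": "staging", "confirm": True},
    },
}
```

The new Elicitation/ElicitationResult hooks presumably fire around the two halves of this exchange, letting developers observe or intercept the dialog.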
1M context now generally available for Opus 4.6 and Sonnet 4.6
Yesterday Simon Willison (@simonw)
Anthropic made 1M-token context windows generally available for Claude Opus 4.6 and Sonnet 4.6 at standard pricing — no premium charges for longer prompts. As Willison notes, this contrasts with OpenAI and Google, which charge extra fees when token usage exceeds certain thresholds, making Claude significantly more cost-effective for large documents and extended conversations.
Willison’s point: at flat pricing, you can now feed entire codebases or long documents without hitting the cost cliffs that Gemini (200K) and GPT-5.4 (272K) impose.
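To make the cost-cliff point concrete, here is a toy comparison of flat vs. tiered pricing. All rates and the 2x premium multiplier are hypothetical, chosen only to illustrate the shape of the difference — they are not any provider's actual prices:

```python
# Hypothetical rates to illustrate flat vs. tiered long-context pricing.

def flat_cost(tokens, rate_per_mtok):
    """Flat pricing: every input token costs the same."""
    return tokens / 1_000_000 * rate_per_mtok

def tiered_cost(tokens, base_rate, premium_rate, threshold):
    """Tiered pricing: tokens past `threshold` cost a premium rate."""
    base = min(tokens, threshold)
    extra = max(tokens - threshold, 0)
    return base / 1_000_000 * base_rate + extra / 1_000_000 * premium_rate

prompt = 800_000  # e.g. a large codebase dumped into context

print(f"flat:   ${flat_cost(prompt, 3.0):.2f}")                      # flat:   $2.40
print(f"tiered: ${tiered_cost(prompt, 3.0, 6.0, 200_000):.2f}")      # tiered: $4.20
```

With a 200K threshold and a 2x premium, the tiered bill is already 75% higher at 800K tokens — and the gap widens as prompts grow.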
Claude Code gets /color, session names, and smarter memory
Yesterday Claude Code changelog (v2.1.75)
Anthropic shipped a batch of Claude Code updates: a /color command to tag sessions with a prompt-bar color, session name display when using /rename, and last-modified timestamps on memory files so Claude can reason about which memories are fresh vs. stale. The release also fixes bugs in voice mode and model switching, plus HTTP 400 errors for users behind proxies on Bedrock/Vertex.
The memory timestamp feature is a prerequisite for agents that work reliably across sessions — without it, Claude can’t tell if saved context is from today or last month.
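The underlying idea is simple: use a file's last-modified time as a staleness signal. A minimal sketch — the threshold, function name, and example filename are all invented; Claude Code's actual heuristics are not public:

```python
import os
import time
from pathlib import Path

def memory_freshness(path, stale_after_days=30):
    """Classify a memory file as 'fresh' or 'stale' by its mtime.

    Toy sketch of the idea behind last-modified timestamps on memory
    files; the 30-day cutoff is an arbitrary illustrative choice.
    """
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    return "fresh" if age_days <= stale_after_days else "stale"

# Demo with a file we just created (so it is trivially fresh).
p = Path("example-memory.md")
p.write_text("# project notes\n")
print(memory_freshness(p))  # fresh
p.unlink()
```

An agent could surface this classification alongside each memory when loading context, so stale notes are double-checked rather than trusted blindly.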
Karpathy launches AgentHub — “GitHub, but for AI agents”
Mar 11 Andrej Karpathy (@karpathy)
Karpathy dropped AgentHub, an open-source collaboration platform designed for swarms of AI agents working on the same codebase. Instead of branches, PRs, and merges, it exposes a bare git DAG where agents push commits via bundles and coordinate through a built-in message board. Built as the organizational layer for autoresearch, it’s explicitly a sketch — but hit 2,000+ GitHub stars in under 24 hours.
Karpathy’s argument: GitHub has “a softly built-in assumption of one master branch” — agents need a DAG they can push to in parallel, not PRs and merge conflicts designed for humans.
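The "bare DAG, no master branch" model can be sketched in a few lines: agents append commits that name their parents, and divergent lines of work simply coexist as multiple heads instead of triggering a merge. Everything here — the data structure, the `push` helper, the commit messages — is an invented illustration, not AgentHub's actual implementation:

```python
# Conceptual sketch of agents pushing to a shared commit DAG in
# parallel. A commit is just an id plus a list of parent ids.
import hashlib

dag = {}  # commit id -> {"parents": [...], "msg": ..., "author": ...}

def push(parents, msg, author):
    """Append a commit to the DAG; no branch pointer, no merge step."""
    cid = hashlib.sha1(f"{parents}{msg}{author}".encode()).hexdigest()[:8]
    dag[cid] = {"parents": list(parents), "msg": msg, "author": author}
    return cid

root = push([], "init", "human")
# Two agents build on the same parent concurrently -- both commits
# land, and the DAG simply has two heads.
a = push([root], "try fused optimizer", "agent-1")
b = push([root], "tune lr schedule", "agent-2")

heads = [c for c in dag if not any(c in v["parents"] for v in dag.values())]
print(sorted(heads) == sorted([a, b]))  # True
```

In real git terms, this corresponds to agents exchanging commits via `git bundle` and letting the object DAG grow many unnamed heads, which matches the post's description of pushing bundles rather than opening PRs.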
Latent Space hosting Notion AI team — swyx calls them “the most important knowledge work agent lab”
Mar 10 swyx (@swyx)
swyx announced the Latent Space podcast will feature the Notion AI team, including Simon Last. He framed Notion as “probably the most important knowledge work agent lab in the world” — a bold take that reflects how AI-native productivity tools are becoming the real battleground, not standalone chat interfaces.
swyx has been tracking the “AI-native vs. AI-added” split — his bet is that tools with existing workflow lock-in (like Notion) will outcompete standalone AI products.
Autoresearch goes distributed — 35 agents, 333 experiments, zero humans
Mar 10 Andrej Karpathy (@karpathy)
Following Karpathy’s autoresearch release, Varun Mathur (Hyperspace AI) distributed the single-agent loop across a peer-to-peer network. On the night of March 8–9, 35 autonomous agents ran 333 experiments completely unsupervised. Karpathy’s original 2-day run found ~20 real improvements that cut time-to-GPT-2 by 11% — the distributed version is scaling that approach to a research community.
Latent Space called this “sparks of recursive self-improvement” — all ~20 improvements were additive (they stack without canceling each other) and transferred cleanly from small to large models.
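"Additive" here means each improvement's measured saving survives when all of them are applied together, rather than canceling out. A toy back-of-the-envelope — the three tweak names and their savings are made up, chosen only so they total the ~11% the run reported:

```python
# Illustration of additive improvements: each tweak's time saving,
# measured alone, contributes its full amount to the combined run.
# All numbers are invented for the arithmetic, not real measurements.
baseline = 100.0  # minutes to reach the target loss
savings = {"fused-optimizer": 4.0, "lr-schedule": 3.5, "data-order": 3.5}

combined = baseline - sum(savings.values())   # 89.0 minutes
speedup_pct = (baseline - combined) / baseline * 100
print(speedup_pct)  # 11.0
```

If the improvements interfered, the combined run would finish later than this sum predicts — so comparing the predicted and measured combined times is a cheap check for additivity.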