The Sequence Radar #811: Last Week in AI: OpenAI's Capital Leap, India's Summit, and the Next Frontier of Models
New models, massive funding rounds, and India's AI ambitions
Next Week in The Sequence:
Our series about world models continues with an exploration of video models as physics engines.
We dive into the incredible GLM-5 release.
We explore reinforcement learning with verifiable rewards, which is becoming the most important post-training technique in the new wave of AI models.
A surprise interview is coming.
Subscribe and don’t miss out:
📝 Editorial: OpenAI’s Capital Leap, India’s Summit, and the Next Frontier of Models
This week in artificial intelligence felt like an inflection point where the sheer scale of capital, algorithmic breakthroughs, and geopolitical maneuvering collided. From New Delhi to Silicon Valley, the events of the past few days underscore a fundamental transition: AI is no longer just a software layer; it is quickly becoming the most capital-intensive infrastructure project in human history.
The most staggering news of the week is OpenAI’s impending capitalization. The ChatGPT developer is reportedly finalizing the first phase of a historic $100 billion funding round. This unprecedented injection of capital is expected to push OpenAI’s post-money valuation beyond $850 billion, up from a pre-money valuation of $730 billion. What is particularly notable is the consortium of strategic corporate investors. Reports indicate that Amazon is discussing a $50 billion investment, alongside $30 billion from SoftBank and $20 billion from Nvidia, with Microsoft also participating. This massive capital formation is explicitly aimed at giving OpenAI the resources to prepare for multi-trillion-dollar infrastructure projects, highlighting that the primary bottleneck for advanced AI is now silicon, power, and data centers.
As OpenAI gathers capital, the model-layer competition remains fierce. Anthropic released Claude Sonnet 4.6, a model that radically advances agentic computer use and software engineering. Featuring a massive 1-million-token context window in beta, Sonnet 4.6 exhibits human-level capabilities in multi-step tasks, such as navigating complex spreadsheets, and often outperforms Anthropic’s own Opus 4.5.
Google swiftly countered with the release of Gemini 3.1 Pro. Positioned as an incremental but important enterprise upgrade, Gemini 3.1 Pro delivers a verified 77.1% on the ARC-AGI-2 benchmark—more than doubling the reasoning performance of its predecessor. The model also introduces the novel ability to natively generate website-ready, animated Scalable Vector Graphics (SVGs) using pure code directly from text prompts. Furthermore, Google recently deployed a major upgrade to its Gemini 3 Deep Think mode, targeting complex research, mathematics, and physics challenges.
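To make the SVG claim concrete: an "animated SVG in pure code" is a self-contained markup snippet that animates without any JavaScript. The example below is hand-written for illustration (not actual Gemini output) and simply checks that such a snippet parses as well-formed XML:

```python
import xml.etree.ElementTree as ET

# A minimal animated SVG: a circle whose radius pulses between 20 and 45
# pixels forever, driven entirely by declarative SVG markup.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
  <circle cx="60" cy="60" r="20" fill="steelblue">
    <animate attributeName="r" values="20;45;20" dur="2s"
             repeatCount="indefinite"/>
  </circle>
</svg>"""

# Parse the markup to confirm it is well-formed; in a browser, pasting
# this string into an .svg file or inline HTML renders the animation.
root = ET.fromstring(svg)
print(root.tag)
```

Because the animation lives in the markup itself, the output drops straight into a web page with no build step, which is what makes this generation target attractive.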
The deployment of these models requires a global footprint, a reality on full display at the India AI Impact Summit 2026 in New Delhi. Hosted at Bharat Mandapam, the summit cemented India’s role as a critical hub for global AI infrastructure, securing over $250 billion in infrastructure-linked investment commitments for data centers, power systems, and digital connectivity.
Tech giants aggressively courted the Global South at the event. Microsoft committed to investing $50 billion by the end of the decade to expand AI access across developing nations. Meanwhile, Google announced the “America-India Connect” initiative, which promises new strategic fiber-optic routes linking the U.S., India, and the Southern Hemisphere, complemented by DeepMind partnerships tailored for Indian national priorities. The summit even secured a Guinness World Record for gathering over 250,000 pledges for “Responsible AI” within 24 hours.
This week makes one thing abundantly clear: the AI ecosystem has matured into a sovereign-level discipline. As models like Sonnet 4.6 and Gemini 3.1 Pro unlock autonomous agentic capabilities, the underlying hardware and infrastructure are receiving hundred-billion-dollar commitments. For researchers, developers, and enterprise leaders, the gap between prototype and planetary-scale deployment is closing faster than ever.
🔎 AI Research
Towards a Science of AI Agent Reliability
AI Lab: Princeton University
Summary: This paper proposes a holistic performance profile for AI agents by introducing twelve concrete metrics that decompose reliability into consistency, robustness, predictability, and safety. By evaluating 14 agentic models across two benchmarks, the authors demonstrate that recent capability gains have only yielded small improvements in agent reliability.
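One of the paper's axes, consistency, can be illustrated with a toy metric: run an agent several times on the same task and measure how often the modal answer recurs. The formula and the `flaky_agent` stub below are my own illustration, not the authors' definitions:

```python
import random
from collections import Counter

def consistency(run_agent, task, k=10):
    """Fraction of k independent runs that return the modal answer.

    A toy stand-in for a 'consistency' metric in the spirit of the
    paper's reliability decomposition (consistency, robustness,
    predictability, safety); the exact formula is an assumption.
    """
    answers = [run_agent(task) for _ in range(k)]
    _top, count = Counter(answers).most_common(1)[0]
    return count / k

# Stubbed agent that answers correctly ~80% of the time.
# (The default rng is created once, so repeated calls keep sampling.)
def flaky_agent(task, rng=random.Random(0)):
    return "42" if rng.random() < 0.8 else "41"

print(consistency(flaky_agent, "task", k=100))
```

A perfectly deterministic agent scores 1.0 under this metric; the point of the paper is that capability benchmarks alone do not reveal how far real agents fall short of that.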
GLM-5: from Vibe Coding to Agentic Engineering
AI Lab: Zhipu AI & Tsinghua University
Summary: This paper presents GLM-5, a next-generation foundation model that adopts DeepSeek Sparse Attention (DSA) to significantly reduce training and inference costs while maintaining long-context fidelity. It utilizes a new asynchronous reinforcement learning infrastructure to decouple generation from training, achieving state-of-the-art performance on complex, end-to-end software engineering challenges.
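The "decouple generation from training" idea can be sketched as a producer-consumer pipeline: one worker streams rollouts into a buffer while another consumes them, so neither blocks on the other. The queue-and-threads sketch below is illustrative only; the reward function and all names are stand-ins, not GLM-5's actual infrastructure:

```python
import queue
import threading

buffer = queue.Queue(maxsize=8)  # bounded rollout buffer
STOP = object()                  # sentinel to end the trainer loop

def generator(prompts):
    """Stand-in for rollout workers: sample responses, score them."""
    for p in prompts:
        response = p[::-1]                 # pretend model sampling
        reward = float(len(response) % 2)  # pretend verifiable reward
        buffer.put((p, response, reward))
    buffer.put(STOP)

def trainer(results):
    """Stand-in for the learner: consume rollouts as they arrive."""
    while True:
        item = buffer.get()
        if item is STOP:
            break
        results.append(item)  # pretend gradient step

results = []
gen = threading.Thread(target=generator, args=(["abc", "de"],))
trn = threading.Thread(target=trainer, args=(results,))
gen.start(); trn.start()
gen.join(); trn.join()
print(results)
```

The design point is throughput: slow rollouts no longer stall optimizer steps, and vice versa, which is the property the paper's asynchronous RL infrastructure exploits at scale.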
HLE-Verified: A Systematic Verification and Structured Revision of Humanity’s Last Exam
AI Lab: Alibaba Group
Summary: To address concerns about noisy and ambiguous items in the Humanity’s Last Exam (HLE) benchmark, this paper introduces a verified and revised version called HLE-Verified. Through a rigorous two-stage validation and repair workflow, the authors correct systematic errors in problem statements and reference answers, enabling more faithful measurements of language model capabilities.
How Much Reasoning Do Retrieval-Augmented Models Add beyond LLMs? A Benchmarking Framework for Multi-Hop Inference over Hybrid Knowledge
AI Lab: IBM Research, Massachusetts Institute of Technology, Cornell University, & University of Central Florida
Summary: The authors introduce HYBRIDRAG-BENCH, an automated framework for constructing benchmarks to evaluate retrieval-intensive, multi-hop reasoning over hybrid unstructured text and structured knowledge graphs. By utilizing recent scientific literature to minimize pretraining contamination, the framework generates challenging question-answer pairs that genuinely test a model’s retrieval and reasoning abilities.
Experiential Reinforcement Learning
AI Lab: University of Southern California, Microsoft, & University of Pennsylvania
Summary: This paper introduces Experiential Reinforcement Learning (ERL), a training paradigm that embeds an explicit experience-reflection-consolidation loop into the reinforcement learning process. By having the model generate self-reflections to guide refined attempts and internalizing successful corrections, ERL significantly improves learning efficiency and final performance in sparse-reward control environments and agentic reasoning tasks.
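The experience-reflection-consolidation loop can be sketched as plain control flow: attempt a task, reflect on failures, retry with the reflections in context, and consolidate what worked. The callables below are hypothetical stand-ins for the model's rollout, self-reflection, and weight update; real ERL trains with reinforcement learning rather than Python callbacks:

```python
def experiential_loop(attempt, reflect, consolidate, task, max_tries=3):
    """Toy sketch of an experience-reflection-consolidation loop."""
    memory = []  # accumulated reflections that guide later attempts
    for _ in range(max_tries):
        result, success = attempt(task, memory)
        if success:
            consolidate(task, memory, result)  # internalize the fix
            return result
        memory.append(reflect(task, result))   # note what went wrong
    return None

# Demo: an agent with an off-by-one bug until it has reflected once.
def attempt(task, memory):
    guess = sum(task) + (1 if not memory else 0)
    return guess, guess == sum(task)

def reflect(task, result):
    return f"guess {result} was too high"

consolidated = []
def consolidate(task, memory, result):
    consolidated.append((task, result))

print(experiential_loop(attempt, reflect, consolidate, [1, 2, 3]))
```

The key distinction from vanilla RL is that the reflection is an explicit artifact fed back into the next attempt, and only the corrections that actually succeeded get consolidated.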
Multi-agent cooperation through in-context co-player inference
AI Lab: Google
Summary: This research demonstrates that training sequence model agents against a diverse distribution of co-players naturally induces in-context best-response strategies without requiring hardcoded assumptions about learning rules. The resulting in-context adaptation makes agents vulnerable to extortion, creating a mutual pressure that resolves into the emergence of robust cooperative behaviors in decentralized multi-agent reinforcement learning.
🤖 AI Tech Releases
Gemini 3.1 Pro
Google DeepMind released Gemini 3.1 Pro, the newest version of its marquee model, which is setting records across different benchmarks.
Sonnet 4.6
Anthropic released Claude Sonnet 4.6, which excels in computer use and long-context reasoning tasks.
Tiny Aya
Cohere open sourced Tiny Aya, a new series of small models for multilingual operations.
📡 AI Radar
OpenAI’s $100 Billion Funding Round: OpenAI is reportedly finalizing a historic $100 billion funding round that would catapult the ChatGPT-maker’s valuation to $850 billion. The consortium of backers reportedly includes massive strategic investments from Microsoft, Nvidia, Amazon, and SoftBank; the round would make OpenAI the most valuable AI company in the world.
India AI Impact Summit 2026: Hosted in New Delhi, the India AI Impact Summit served as the first major global AI gathering in the Global South. It brought together heads of state and global tech leaders to focus on democratizing AI, fostering multilingual models, and building sovereign compute infrastructure, underpinned by the nation’s ₹10,000 crore IndiaAI Mission.
Freeform’s $67M Series B for Laser AI Manufacturing: Freeform, an AI-native 3D metal printing startup founded by former SpaceX engineers, raised $67 million backed by Nvidia’s NVentures, Founders Fund, and others. The capital will help scale its “GoldenEye” system into a next-generation “Skyfall” platform, utilizing hundreds of lasers and real-time GPU physics simulations to mass-produce metal parts for aerospace and defense.
Reliance’s $110B AI Investment Plan: At the India AI Impact Summit, Mukesh Ambani announced a massive $110 billion investment plan over seven years through Reliance Industries and Jio. The initiative is aimed at building gigawatt-scale, green-energy-powered data centers and a sovereign AI ecosystem in India, strategically positioning Jio as a dominant AI entity ahead of its anticipated IPO.
World Labs Lands $200M from Autodesk: Spatial intelligence startup World Labs, co-founded by AI pioneer Fei-Fei Li, secured a $200 million strategic investment from design software giant Autodesk. The partnership will integrate World Labs’ “world models”—AI that understands 3D spatial geometry and physical constraints—directly into professional 3D workflows for media, entertainment, and architectural design.
Sarvam AI Expands to Feature Phones, Cars, and Smart Glasses: Indian startup Sarvam AI is bringing its highly compressed edge AI models directly to consumer hardware to function fully offline. The company announced partnerships with HMD to put conversational AI on basic feature phones (running its Sarvam 30B model), teamed up with Bosch for automotive integrations, and is launching its own AI-powered smart glasses called “Sarvam Kaze.”
Mistral AI Acquires Koyeb: French AI champion Mistral AI made its first acquisition by buying Koyeb, a serverless cloud startup founded by former Scaleway executives. The acquisition aims to integrate Koyeb’s serverless infrastructure and sandboxing technologies into Mistral Compute, offering enterprise customers low-latency, secure, and sovereign full-stack AI deployment in Europe.
SpaceX Vets Raise $50M for Data Center Links: Mesh Optical Technologies, a startup founded by three former SpaceX engineers, raised a $50 million Series A led by Thrive Capital. The company plans to mass-produce advanced optical transceivers that convert optical signals to electrical ones, a critical hardware component for linking GPUs together in massive AI data center clusters.
Ricursive Intelligence’s $335M Mega-Round: Founded by former Google Brain and Anthropic researchers Anna Goldie and Azalia Mirhoseini, AI chip design startup Ricursive Intelligence raised $335 million at a $4 billion valuation just four months after launching. The startup uses reinforcement learning and LLMs to automate semiconductor layouts, expanding on the founders’ acclaimed “Alpha Chip” research.
Blackstone Backs Neysa in $1.2B Financing: Indian AI infrastructure provider Neysa secured up to $1.2 billion in financing led by Blackstone, consisting of $600 million in primary equity and $600 million in debt. The monumental deal gives Blackstone a majority stake and aims to establish a localized “neo-cloud” in India to ease global AI compute shortages and build sovereign capacity.
Peak XV Raises $1.3B for AI in India: Venture capital firm Peak XV Partners (formerly Sequoia India) announced a $1.3 billion fundraise divided across seed, venture, and APAC vehicles. The capital injection is geared heavily toward backing Indian startups, highlighting an intensifying clash among global VC titans to dominate the region’s booming AI and tech ecosystem.