Robotics is Inching Towards its ChatGPT Moment
Major developments in robotics from NVIDIA, Meta and MIT.
Next Week in The Sequence:
Edge 445: We start a new series about one of the most exciting topics in generative AI: model distillation.
The Sequence Chat: We discuss some controversial points in the debate between small and large foundation models.
Edge 446: We dive into OpenAI’s MLE-Bench, one of the craziest benchmarks ever created.
You can subscribe to The Sequence below:
📝 Editorial: Robotics is Inching Towards its ChatGPT Moment
The field of AI robotics is experiencing a surge in innovation, with researchers developing new techniques and technologies that push the boundaries of what robots can do. One of the most exciting areas of development is the use of large language models (LLMs) to train robots. LLMs are AI models trained on massive datasets of text and code, and they have shown a remarkable ability to generate text, translate languages, and produce many kinds of creative content. Researchers are now exploring how to use them to train robots to perform a wide range of tasks, from simple household chores to complex industrial operations.
This week we saw several major research contributions in the field of robotics from NVIDIA, MIT and Meta among others.
A major challenge in robotics is the heterogeneity of data. Robots generate data from a variety of sources, including vision sensors, robotic arm position encoders, and simulations. These data are often difficult to combine and use to train robots. Researchers at MIT have developed a new technique called Heterogeneous Pretrained Transformers (HPT) that addresses this challenge. HPT aligns data from different sources into a shared "language" that a generative AI model can process. This approach allows robots to be trained on a much larger and more diverse dataset, which can lead to significant improvements in performance.
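To make the idea concrete, here is a minimal sketch of the HPT pattern: modality-specific "stems" project heterogeneous robot data (camera frames, joint states) into a shared token space that a single transformer trunk can process. All class names, dimensions, and the token budget below are illustrative assumptions, not MIT's actual implementation.

```python
# Illustrative sketch only: modality-specific stems feed one shared trunk.
import torch
import torch.nn as nn

class VisionStem(nn.Module):
    """Encodes camera frames into a fixed number of tokens."""
    def __init__(self, d_model=256, n_tokens=16):
        super().__init__()
        self.patchify = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        self.n_tokens = n_tokens

    def forward(self, img):                                  # img: (B, 3, H, W)
        patches = self.patchify(img).flatten(2).transpose(1, 2)  # (B, P, d)
        return patches[:, :self.n_tokens]                    # fixed token budget

class ProprioStem(nn.Module):
    """Encodes joint positions/velocities into a single token."""
    def __init__(self, state_dim=14, d_model=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(state_dim, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, state):                                 # state: (B, state_dim)
        return self.mlp(state).unsqueeze(1)                   # (B, 1, d)

class SharedTrunk(nn.Module):
    """One transformer trunk shared across embodiments and datasets."""
    def __init__(self, d_model=256, n_layers=4, n_heads=8, action_dim=7):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, action_dim)     # task-specific head

    def forward(self, tokens):                                # tokens: (B, T, d)
        h = self.encoder(tokens)
        return self.action_head(h.mean(dim=1))                # (B, action_dim)

# Usage: align both modalities into the same token space, then train jointly.
vision, proprio, trunk = VisionStem(), ProprioStem(), SharedTrunk()
img, state = torch.randn(2, 3, 224, 224), torch.randn(2, 14)
tokens = torch.cat([vision(img), proprio(state)], dim=1)      # shared "language"
actions = trunk(tokens)                                       # (2, 7)
```

The point of the pattern is that only the lightweight stems are specific to a sensor or embodiment; the trunk sees one uniform token stream and can therefore be pretrained on pooled, heterogeneous datasets.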
Beyond the technical advancements, the industry is witnessing a growing focus on the integration of touch perception, dexterity, and human-robot interaction. Meta's Fundamental AI Research (FAIR) team is actively working on creating embodied AI agents capable of perceiving and interacting with their surroundings, while also coexisting safely with humans. Their efforts are leading to advancements in areas such as tactile sensing, which allows robots to "feel" and manipulate objects with greater precision. This is exemplified by their development of Meta Sparsh, a general-purpose touch representation that works across various sensors and tasks, and Meta Digit 360, a breakthrough tactile fingertip with human-level multimodal sensing capabilities.
The drive towards more versatile and adaptable robots is also evident in the development of new control frameworks. NVIDIA's research on HOVER (Humanoid Versatile Controller) showcases a multi-mode policy distillation framework that consolidates diverse control modes into a unified policy. HOVER allows robots to seamlessly switch between different control modes, such as navigation, manipulation, and human interaction, without the need for retraining. This development marks a significant step toward creating more flexible and adaptable robots that can perform a wide range of tasks.
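A hedged sketch of what multi-mode policy distillation can look like: a single student policy is conditioned on a "mode mask" selecting which command channels are active (root velocity, joint targets, and so on) and is trained to imitate a teacher's actions across randomly sampled modes. The names, dimensions, and loss below are assumptions for illustration, not NVIDIA's code.

```python
# Illustrative sketch of multi-mode policy distillation (not the HOVER codebase).
import torch
import torch.nn as nn

class UnifiedPolicy(nn.Module):
    def __init__(self, obs_dim=64, cmd_dim=24, act_dim=19, hidden=512):
        super().__init__()
        # The mode mask is appended to the input so one network covers
        # navigation, manipulation, and other command spaces.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2 * cmd_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, cmd, mode_mask):
        # Zero out inactive command channels, then condition on the mask itself.
        x = torch.cat([obs, cmd * mode_mask, mode_mask], dim=-1)
        return self.net(x)

def distill_step(student, teacher, batch, optimizer):
    """One distillation step: match the teacher's actions on the sampled
    observations and command modes, so a single policy absorbs them all."""
    obs, cmd, mode_mask = batch
    with torch.no_grad():
        target = teacher(obs, cmd, mode_mask)      # teacher action labels
    pred = student(obs, cmd, mode_mask)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the mode mask is just another input, switching between control modes at deployment time is a matter of changing the mask rather than swapping or retraining policies.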
The advancements in AI robotics, as highlighted by these recent developments, demonstrate a clear momentum in the field. With the continuous development of new techniques and technologies, we can expect even more impressive progress in the near future. These breakthroughs not only promise to revolutionize industries but also hold the potential to significantly enhance our daily lives.
📍 Event
You’re invited to an exclusive fireside chat with Ben Orkin, VP of Engineering - MLOps at North, hosted by Tecton and Data Science Connect. Discover how this leading fintech company leveraged Tecton to build a system that detects fraud at scale with millisecond-level response times while adapting to emerging fraud patterns.
You’ll learn:
The architecture behind North's transition from third-party to in-house ML
How they maintain high performance at massive transaction volumes
Strategies for rapid iteration on fraud detection models
Don't miss this deep dive into building mission-critical ML systems that balance speed, scale, and adaptability! —> Register here.
🔎 ML Research
HOVER
NVIDIA, Carnegie Mellon University, UC Berkeley and other AI research labs published the research around HOVER (Humanoid Versatile Controller), a 1.5 million parameter neural network to control humanoid robots. HOVER is based on a distillation method that consolidates various control modes into the same policy —> Read more.
NotebookLM Audio
Google DeepMind published some details about the speech generation technologies behind NotebookLM and Illuminate. The solution includes audio generation models such as AudioLM and SoundStream, as well as specialized transformers for handling audio tokens —> Read more.
Advancing Embodied AI
Meta FAIR published several papers and research artifacts advancing different areas of embodied AI. The research includes areas such as perception, dexterity, and human-robot interaction —> Read more.
Stealing User Prompts from MoEs
Google DeepMind published a paper proposing an attack against MoE models that can unveil the user’s input prompt. The core of the technique centers on manipulating the expert routing system within the MoE model to capture the entire input —> Read more.
LLMs as Data Scientists
Snowflake AI Research published a paper proposing FeatEng, a benchmark designed to evaluate LLMs on data science tasks such as writing feature engineering code. The benchmark presents a model with a dataset and a series of prompts and scores the generated code —> Read more.
Memorization in LLMs
Researchers from Princeton University, Google, Allen AI and the University of Illinois published a paper proposing a quantitative approach to measuring memorization in LLMs. The paper proposes a benchmark based on Knights and Knaves (K&K) puzzles to evaluate memorization in reasoning tasks —> Read more.
🤖 AI Tech Releases
ChatGPT Search
OpenAI unveiled ChatGPT Search, which allows ChatGPT to search web sources —> Read more.
MobileLLM
Meta AI open sourced MobileLLM, a foundation model optimized for on-device scenarios —> Read more.
TensorFlow 2.18
The new version of TensorFlow is out —> Read more.
SmolLM2
HuggingFace open sourced a series of small models optimized for edge computing —> Read more.
🛠 Real World AI
Conversational AI at Airbnb
Airbnb revealed some details about the architecture powering its conversational AI experiences —> Read more.
📡AI Radar
Agentic platform DevRev raised a $100 million Series A.
Betaworks announced a new batch of AI startups.
Data security startup Noma came out of stealth mode with $32 million in funding.
Microsoft introduced its GitHub Copilot for Azure.
Small language model platform Moondream emerged from stealth mode with $4.5 million in new funding.
LinkedIn introduced a Hiring Assistant for recruiting tasks.
AI financial research platform Brightwave raised $15 million in a new round.
GPU platform GMI Cloud raised $82 million in new funding.
Read AI raised $50 million for its text summarization bot.