Gemma 2: A Release That Matters
A new model, a guardrails framework and an interpretability tool.
Next Week in The Sequence:
Edge 419: We provide a summary of our very long series about autonomous agents. We also reveal our new series :).
Edge 420: We dive into FlashAttention-3, the improved technique powering many of the performance improvements in transformers.
You can subscribe to The Sequence below:
📝 Editorial: Gemma 2: A Release That Matters
Small language models (SLMs) are gaining a lot of momentum in both research and product releases. SLMs have the potential to streamline the adoption of generative AI and unlock use cases such as on-device AI. Most of the SLM releases from the past year have been limited to the models themselves, and that’s hardly enough. Adopting SLMs (just like larger models) comes with significant challenges. Interpretability and security certainly rank at the top of the list. Last week, Google took a major step in addressing some of these challenges with a new release of its Gemma stack.
In June, Google announced the release of Gemma 2, a series of mid-size models with 9 billion and 27 billion parameters. A few days ago, Google extended that release with some exciting additions:
Gemma 2 2B: A brand-new version of the popular 2 billion (2B) parameter model, featuring built-in safety advancements and a powerful balance of performance and efficiency.
ShieldGemma: A suite of safety content classifier models, built upon Gemma 2, designed to filter the inputs and outputs of AI models to keep users safe.
Gemma Scope: A new model interpretability tool that offers unparalleled insight into our models' inner workings.
The Gemma release is significant because it is open source. This should encourage researchers and developers to expand on these capabilities, particularly in the case of Gemma Scope, given that interpretability is such a challenge in generative AI applications. I have been tinkering with Gemma Scope over the past few days, and it’s pretty useful.
With this release, Gemma seems to be evolving from a family of models into a complete stack that enables SLM adoption in real-world applications. This release definitely matters.
What’s the best model for RAG? Our latest LLM Hallucination Index ranks 22 of the leading models on their performance across 3 different RAG tasks, evaluating the correctness of their responses and propensity to hallucinate. See which model comes out on top and why larger is not always better.
🔎 ML Research
Gemma 2
Google DeepMind published a paper detailing Gemma 2, a new set of small models ranging from 2 to 27 billion parameters. Gemma 2 extends its predecessors with new techniques such as interleaved local-global attention and grouped-query attention —> Read more.
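Grouped-query attention saves memory by letting several query heads share a single key/value head. The following is a minimal NumPy sketch of the idea, not Gemma 2's actual implementation; the shapes and the `grouped_query_attention` function are illustrative assumptions.

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Toy grouped-query attention: multiple query heads share each KV head.
    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d)."""
    n_q_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads  # query heads per shared KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group  # map this query head to its shared KV head
        scores = q[h] @ k[kv].T / np.sqrt(d)
        # numerically stable softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out

# 8 query heads sharing 2 KV heads: the KV cache is 4x smaller
# than full multi-head attention, while output shape matches q.
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4, 16))
k = rng.normal(size=(2, 4, 16))
v = rng.normal(size=(2, 4, 16))
print(grouped_query_attention(q, k, v).shape)  # (8, 4, 16)
```

The payoff is the smaller KV cache at inference time, which matters most for the on-device use cases SLMs target.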
SAM 2
Meta AI published a paper introducing the second version of its Segment Anything Model (SAM) for real-time object segmentation. SAM 2 expands on its predecessor by providing a unified model for object segmentation in both videos and images —> Read more.
Trace
Microsoft Research published a paper and open-source code for Trace, a framework for AI systems optimization. Trace is an AutoDiff-like tool that can be used in systems without gradients —> Read more.
CMU-MATH
Carnegie Mellon University researchers published details about CMU-MATH, a model that took second place in the AI Mathematical Olympiad. CMU-MATH uses a dual-model system that includes a policy model that produces multiple solutions and a reward model that chooses the answer with the highest weight —> Read more.
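The policy-plus-reward pattern described above amounts to best-of-n selection: sample several candidate solutions, then keep the one the reward model scores highest. Here is a minimal sketch of that pattern with stand-in stubs; `toy_policy` and `toy_reward` are hypothetical placeholders, not the actual CMU-MATH models.

```python
def best_of_n(policy, reward, problem, n=4):
    """Sample n candidate solutions from the policy and return
    the candidate the reward model scores highest."""
    candidates = [policy(problem, seed=i) for i in range(n)]
    return max(candidates, key=reward)

# Toy stand-ins: the "policy" proposes integer answers 40..43,
# and the "reward" prefers candidates closest to the true answer 42.
def toy_policy(problem, seed):
    return 40 + seed

def toy_reward(answer):
    return -abs(answer - 42)

print(best_of_n(toy_policy, toy_reward, "2 * 21 = ?", n=4))  # 42
```

In practice the policy would be a sampled LLM generation and the reward a learned scorer, but the selection loop is the same.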
MoMa
Meta AI published a paper introducing MoMa, a mixture-of-experts architecture for mixed-modal models. MoMa is based on the Chameleon architecture and processes images and text in arbitrary sequences —> Read more.
Berkeley Humanoid
AI researchers from UC Berkeley published a paper introducing Berkeley Humanoid, a research framework for learning-based control. The framework includes a robot designed for learning algorithms with low simulation complexity —> Read more.
🤖 AI Tech Releases
Gemma 2
Google open sourced new additions to its Gemma models, including a 2B parameter model, a guardrails framework, and an interpretability tool —> Read more.
torchchat
The PyTorch team released torchchat, a library for accelerated inference on laptop, desktop, and mobile devices —> Read more.
Stable Fast 3D
Stability AI open sourced Stable Fast 3D, a model for rapid 3D asset generation —> Read more.
🛠 Real World AI
Agents in the Enterprise
Salesforce shares some perspectives about the impact of AI agents in enterprise automation tasks —> Read more.
📡AI Radar
AI agent platform Ema raised $36 million in new funding.
Black Forest Labs, a new startup launched by the creators of Stable Diffusion, raised $31 million.
The CEO of high-profile startup Character.AI returned to work for Google.
Airtable expanded its AI capabilities with the acquisition of Dopt.
Robotics startup NEURA showcased its new humanoid robot called 4NE-1.
Self-driving truck startup Aurora Innovation raised $820 million in funding.
AI education startup Heeyo came out of stealth mode with $3.5 million in funding.
AI drug discovery platform Healx raised a $47 million Series C.
Perplexity AI announced a new plan for publishers.
Google Cloud announced a GPU cluster dedicated to Y Combinator startups.