NVIDIA Releases Nemotron 70B
The new model has been making headlines thanks to its impressive performance.
Next Week in The Sequence:
Edge 441: We are closing our series about SSMs with an exploration of SSMs for non-language modalities. We discuss Meta AI’s research about SSMs for speech recognition and dive into the Llama-Factory framework.
Edge 442: We dive into DeepMind’s fascinating AlphaProteo model for protein design.
📝 Editorial: NVIDIA Releases Nemotron 70B
NVIDIA made headlines in AI again this week, but surprisingly, it wasn’t about GPUs. Beyond its hardware dominance, the tech giant has been making waves in the AI software space by releasing advanced models built on Llama technology.
This week, NVIDIA unveiled its latest foundation model, Nemotron 70B. This sleek new language model is turning heads with its impressive performance, surpassing even heavyweights like OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet in benchmark tests. Nemotron 70B is based on Meta's open-source Llama 3.1 model but has been meticulously fine-tuned by NVIDIA, using advanced techniques such as Reinforcement Learning from Human Feedback (RLHF) to achieve exceptional "helpfulness." This makes Nemotron 70B capable of delivering more natural, context-aware, and accurate responses, positioning it as a serious contender among advanced language models.
What makes Nemotron 70B stand out is its ability to handle complex queries without requiring extra prompting or specialized tokens. For instance, it can accurately respond to tricky questions like "How many r’s are in strawberry?" with a detailed breakdown. The model’s outstanding performance on benchmarks such as Arena Hard, AlpacaEval 2 LC, and GPT-4-Turbo MT-Bench demonstrates its ability to generate human-like text while prioritizing user alignment and helpfulness.
NVIDIA is also democratizing access to this powerful AI by offering free hosted inference through its build.nvidia.com platform, which supports an OpenAI-compatible API interface. This initiative lowers the barrier to entry for businesses of all sizes, enabling them to experiment with and implement cutting-edge language models. Nemotron 70B’s flexibility and adaptability make it a versatile tool for various applications, ranging from customer service interactions to generating complex reports.
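Because the hosted endpoint follows the OpenAI chat-completions convention, existing client code can usually be repointed with little more than a base-URL change. A minimal sketch using only the Python standard library, assuming an API key in a hypothetical NVIDIA_API_KEY environment variable and the model identifier nvidia/llama-3.1-nemotron-70b-instruct (check build.nvidia.com for the current endpoint and model name):

```python
import json
import os
import urllib.request

# Assumed values -- verify against the build.nvidia.com documentation.
BASE_URL = "https://integrate.api.nvidia.com/v1"
MODEL = "nvidia/llama-3.1-nemotron-70b-instruct"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the hosted model."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.5,
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '')}",
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions", data=body, headers=headers
    )

if __name__ == "__main__":
    # Sends the request only when run directly (requires a valid key).
    req = build_chat_request("How many r's are in strawberry?")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
```

The same request shape works with the official OpenAI client libraries by overriding their base URL, which is what makes the "OpenAI-compatible" framing practical for teams migrating existing applications.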
However, like all AI systems, Nemotron 70B has its limitations. NVIDIA cautions that the model is not optimized for highly specialized domains, such as math or legal reasoning, where absolute accuracy is essential. Users are advised to implement appropriate safeguards to mitigate potential errors or misuse.
NVIDIA's venture into high-performance AI software with Nemotron 70B signals a significant shift in the AI landscape. By challenging established players and pushing the boundaries of open-source collaboration, NVIDIA is helping to shape a new era in AI development. The focus on accessibility and high-performance solutions promises to pave the way for innovative breakthroughs in the near future.
💎 GenAI app development tips from NVIDIA, Databricks, HP, and more
Do you know how NVIDIA, Databricks, Twilio, HP, and ServiceNow get their GenAI apps into production?
Learn their best practices at GenAI Productionize 2.0, including:
How to design a GenAI stack for enterprise scale
Techniques for AI governance, evaluation, and observability
Proven strategies for getting GenAI apps into production
🔎 ML Research
Agent as a Judge
Meta FAIR and KAUST published a paper introducing an agent-as-a-judge framework for evaluating agentic systems. The paper presents practical results from applying the evaluation framework to coding scenarios and introduces DevAI, a new benchmark with over 55 dev tasks —> Read more.
Reconstructing LLM Training
In a fascinating paper, researchers from Harvard University and Imperial College London proposed an inverse reinforcement learning method to recover the reward functions used in RLHF. The paper also sheds more light on the relationship between model size and interpretability, along with interesting findings about the impact of RLHF processes —> Read more.
Thinking LLMs
Researchers from Meta FAIR, UC Berkeley and NYU published a paper proposing a training method for improving the ability of LLMs to “think” before producing an output. The technique is based on a search and optimization procedure that allows the LLM to explore the space of possible thoughts for a given instruction —> Read more.
OMNI-MATH
Researchers from several top AI labs collaborated on the creation of OMNI-MATH, a math olympiad level benchmark for LLMs. The benchmark includes over 4400 olympiad-level problems with human annotations —> Read more.
LONGMEMEVAL
AI researchers from UCLA, UC San Diego and Tencent published a paper introducing LONGMEMEVAL, a benchmark for evaluating long-term memory capabilities in LLMs. The benchmark evaluates five key long-term memory functions: information extraction, multi-session reasoning, temporal reasoning, knowledge updates, and abstention —> Read more.
OMCAT
NVIDIA published a paper introducing the Omni Context Aware Transformer (OMCAT), an LLM optimized for the understanding of temporal data. OMCAT shows impressive performance when processing multimodal temporal inputs such as audio or video —> Read more.
🤖 AI Tech Releases
Nemotron 70B
NVIDIA released Nemotron-70B, an instruction-tuned version of Llama 3.1 that has shown impressive performance against much larger models —> Read more.
Janus
DeepSeek open sourced Janus, an autoregressive framework for multimodal understanding and generation —> Read more.
Ministral
Mistral open sourced Ministral 3B and 8B, two models optimized for edge computing use cases —> Read more.
Arch
Katanemo open sourced Arch, an intelligent gateway for LLMs —> Read more.
NotebookLM
NotebookLM released some cool updates including audio customizations —> Read more.
🛠 Real World AI
Meta AI Hardware
Meta AI discusses its vision for open AI hardware —> Read more.
📡AI Radar
Former OpenAI CTO Mira Murati is in talks to raise capital for a new startup.
Alphabet’s AI spin-off SandboxAQ is raising funds at a $5 billion valuation.
Arcee.ai released SuperNova-Medius, a 14B SLM that exhibits very strong performance.
Magic Leap creator Rony Abovitz raised $20 million for a new enterprise AI startup.
Photonic supercomputing platform Lightmatter raised an astonishing $400 million to build the next generation of AI datacenters —> Read more.
Live Aware, a platform that uses AI to provide insights about games, raised a $4.8 million seed round.
Former Palantir chief security officer joined OpenAI.
Microsoft’s VP of AI Research also joined OpenAI.
AI-powered user assistant platform Command AI was acquired by Amplitude.
Adobe unveiled Firefly’s new video generation interface.
Conversational avatar platform Beyond Presence raised $3.1 million in pre-seed funding.
Galileo raised $45 million for its monitoring and evaluation platform.