Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum
Phi-3 and OpenELM, two major small model releases this week.
Next Week in The Sequence:
Edge 391: Our series about autonomous agents continues with the fascinating topic of function calling. We explore UC Berkeley’s research on LLMCompiler for function calling and we review the PhiData framework for building agents.
Edge 392: We dive into RAFT, UC Berkeley’s technique for improving RAG scenarios.
You can subscribe to The Sequence below:
📝 Editorial: Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum
Last year, Microsoft coined the term 'small language model' (SLM) following the publication of the influential paper 'Textbooks Are All You Need', which introduced the initial Phi model. Since then, there has been a tremendous market uptake in this area, and SLMs are starting to make inroads as one of the next big things in generative AI.
The case for SLMs is pretty clear. Massively large foundation models are likely to dominate generalist use cases, but they remain incredibly expensive to run and are plagued by hallucinations, security vulnerabilities, and reliability issues when applied in domain-specific scenarios. Add to that environments such as mobile or IoT, which are computation-constrained by definition. SLMs are likely to fill that gap in the market with hyper-specialized models that are more secure and affordable to execute. This week we had two major developments in the SLM space:
Microsoft released the Phi-3 family of models. Although not that small anymore at 3.8 billion parameters, Phi-3 continues to outperform much larger alternatives. The model also boasts an impressive 128k token window. Again, not that small, but small enough ;)
Apple open-sourced OpenELM, a family of LLMs optimized for mobile scenarios. Obviously, OpenELM has raised speculation about Apple’s ambitions to incorporate native LLM capabilities in the iPhone.
Large foundation models have commanded the narrative in generative AI and will continue to do so while the scaling laws hold. But SLMs are certainly going to capture an important segment of the market. After all, nobody likes a know-it-all ;)
🔎 ML Research
Phi-3
Microsoft Research published the technical report of Phi-3, their famous small language models that excel at math and computer science tasks. The new models are not that small anymore, with phi-3-mini at 3.8B parameters and phi-3-small and phi-3-medium at 7B and 14B parameters respectively —> Read more.
The Instruction Hierarchy
OpenAI published a paper introducing the instruction hierarchy, which defines how a model should behave when confronted with conflicting instructions. The method has profound implications for LLM security scenarios such as preventing prompt injections, jailbreaks, and other attacks —> Read more.
MAIA
Researchers from MIT published a paper introducing the Multimodal Automated Interpretability Agent (MAIA), an AI agent that can design experiments to answer queries about other AI models. The method is an interesting interpretability approach that probes generative AI models to understand their behavior —> Read more.
LayerSkip
Meta AI Research published a paper introducing LayerSkip, a method for accelerated inference in LLMs. The method introduces modifications to both the pretraining and inference processes of LLMs as well as a novel decoding solution —> Read more.
Gecko
Google DeepMind published a paper introducing Gecko, a new benchmark for text-to-image models. Gecko is structured as a skill-based benchmark that can discriminate between models across different human templates —> Read more.
🤖 Cool AI Tech Releases
OpenELM
Apple open sourced OpenELM, a family of small LLMs optimized to run on-device —> Read more.
Arctic
Snowflake open sourced Arctic, an MoE model specialized in enterprise workloads such as SQL, coding, and RAG —> Read more.
Meditron
Researchers from EPFL’s School of Computer and Communication Sciences and Yale School of Medicine released Meditron, an open source family of models tailored to the medical field —> Read more.
Cohere Toolkit
Cohere released a new toolkit to accelerate generative AI app development —> Read more.
Penzai
Google DeepMind open sourced Penzai, a research toolkit for editing and visualizing neural networks and injecting custom logic —> Read more.
🛠 Real World ML
Fixing Code Builds
Google discusses how they trained a model to predict and fix build failures —> Read more.
Data Science Teams at Lyft
Lyft shared some of the best practices and processes it follows for building its data science teams —> Read more.
📡AI Radar
Perplexity announced it has raised $63 million at over a $1 billion valuation.
Elon Musk’s xAI is closing in on a $6 billion valuation.
Microsoft and Alphabet beat Wall Street expectations with strong earnings fueled by AI adoption.
NVIDIA is acquiring AI infrastructure startup Run:ai for a reported $700 million.
Cognition, the startup behind coding assistant Devin, raised a $175 million round at a $2 billion valuation.
Salesforce released Einstein Copilot Actions to bring actionability to its AI platform.
Adobe introduced Firefly 3 with new image generation capabilities.
Higher-than-expected AI investments had a negative impact on Meta’s earnings report.
Augment emerged from stealth mode with a monster $227 million round.
AI-biotech company Xaira Therapeutics launched with $1 billion in funding.
AI sales platform Nooks raised $22 million.
Snorkel AI announced major generative AI updates to its Snorkel Flow platform.
Flex AI raised $30 million for a new AI compute platform.
The OpenAI Fund closed a $15 million tranche.