The Next RLHF Effect: Three Breakthroughs That Can Unlock the Next Wave of Innovation in Foundation Models
On Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases, and VC funding deals in the artificial intelligence space.
Next Week in The Sequence:
Edge 297: Covers one of my favorite subjects in foundation models: tool-augmented LLMs. It also reviews Meta AI’s famous Toolformer paper and the LlamaIndex framework. You can’t miss this one!
Edge 298: A review of MiniGPT-4, one of the most impressive open source multimodal foundation models released to date.
📝 Editorial: Three Breakthroughs That Can Unlock the Next Wave of Innovation in Foundation Models
Reinforcement learning with human feedback (RLHF) could arguably be credited as the technique that unlocked the generative AI frenzy we are currently experiencing. While RLHF is not as fundamental a breakthrough as the transformer architecture, it facilitated the transition from powerful yet uninteresting models like GPT-3 to world-changing phenomena like ChatGPT. In essence, RLHF enabled large language models (LLMs) to overcome a significant hurdle by offering two key capabilities:
Producing output aligned with human intentions.
Following instructions.
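To make this concrete, here is a toy sketch of the reward modeling stage at the heart of RLHF: a small model is trained on preference pairs to score the human-chosen response higher than the rejected one. The pairwise objective follows the standard Bradley-Terry formulation, but the data, features, and linear reward model below are purely illustrative stand-ins for an LLM-based reward model.

```python
import numpy as np

# Toy version of RLHF's reward modeling stage: learn a scalar reward r(x)
# so that human-preferred responses score higher than rejected ones.
# Responses are stand-in feature vectors and the reward is linear; a real
# pipeline would put a reward head on an LLM backbone instead.

rng = np.random.default_rng(0)
dim = 8
true_w = rng.normal(size=dim)            # hidden "human preference" direction

# Simulated preference pairs: (chosen, rejected) response features.
chosen = rng.normal(size=(256, dim)) + 0.5 * true_w
rejected = rng.normal(size=(256, dim))

w = np.zeros(dim)                        # reward model parameters
lr = 0.1
for _ in range(200):
    # Bradley-Terry objective: maximize log sigmoid(r(chosen) - r(rejected)).
    margin = (chosen - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))    # model's P(chosen is preferred)
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

accuracy = float((((chosen - rejected) @ w) > 0).mean())
print(f"reward model ranks the chosen response higher {accuracy:.0%} of the time")
```

In a full RLHF pipeline, this learned reward then drives a reinforcement learning step (typically PPO) that fine-tunes the LLM itself.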
Alignment and instruction following serve as the foundation of ChatGPT and the new generation of LLMs we have witnessed in recent months. If RLHF was the major breakthrough behind ChatGPT, we must now ask which research milestones will unlock the next wave of innovation in LLMs. Currently, three key areas of LLM research have the potential to produce an "RLHF effect" in the next generation of models:
Chain of Thought Reasoning: Techniques that simulate reasoning by breaking tasks into smaller steps (a minimal sketch follows this list).
Knowledge Augmentations: Methods that equip LLMs with access to real-time knowledge sources or tools.
Continual Learning: Techniques that regularly update the knowledge in LLMs.
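To make the first of these areas concrete, here is a minimal chain-of-thought prompting sketch. The `complete` function is a hypothetical placeholder for any text-completion API; the technique lives entirely in the prompt, which asks the model to externalize intermediate steps before answering.

```python
# Minimal chain-of-thought prompting sketch. `complete` is a hypothetical
# placeholder for any text-completion API.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in any LLM completion API here")

question = "A cafeteria had 23 apples. It used 20 and bought 6 more. How many does it have now?"

# Direct prompting asks for the answer in one shot.
direct_prompt = f"Q: {question}\nA: The answer is"

# Chain-of-thought prompting nudges the model to break the task into
# smaller steps, typically eliciting "23 - 20 = 3, then 3 + 6 = 9"
# before the final answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```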
If ChatGPT felt magical with knowledge frozen in 2021, just imagine the impact of LLMs that can reason through tasks, access real-time data, and expand their knowledge continually. These areas represent some of the most active research domains in the LLM space, and their techniques are gradually being incorporated into new models. It won't be long before we witness another magical moment akin to the "RLHF effect."
🔎 ML Research
Neuralangelo
NVIDIA unveiled Neuralangelo, a generative AI model that can turn 2D video clips into 3D objects. The model can translate complex aspects of video, such as textures and materials, into the 3D representation —> Read more.
Gorilla
Researchers from UC Berkeley and Microsoft Research published a paper detailing Gorilla, an LLM able to use APIs and tools. Gorilla is a fine-tuned version of LLaMA and surpasses GPT-4 on API-calling tasks —> Read more.
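Gorilla's exact prompt and output formats are defined in the paper; the sketch below only illustrates the general tool-use pattern that API-calling LLMs enable, and every name and format in it is a hypothetical example rather than Gorilla's actual API.

```python
import json

# Illustrative tool-use loop in the style of API-calling LLMs: the model
# emits a structured API call, which the host program parses and executes.
# All names and formats here are hypothetical, not Gorilla's actual API.

def complete(prompt: str) -> str:
    # Stand-in for an LLM call; pretend the model returned this JSON.
    return json.dumps({"api": "weather.lookup", "args": {"city": "Taipei"}})

TOOLS = {
    "weather.lookup": lambda args: f"24C and humid in {args['city']}",
}

prompt = (
    "You can call these APIs: weather.lookup(city).\n"
    'Respond with a JSON object {"api": ..., "args": ...}.\n'
    "User: what's the weather in Taipei?"
)

call = json.loads(complete(prompt))
print(TOOLS[call["api"]](call["args"]))  # -> 24C and humid in Taipei
```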
Self-Learning
MIT researchers published a paper outlining a self-learning LLM that is able to outperform much larger alternatives. The model uses a technique called self-training, in which it trains on its own predictions —> Read more.
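The self-training loop itself is simple to sketch. Below is a toy scikit-learn version, a generic illustration of the technique rather than the paper's implementation: train on a tiny labeled set, pseudo-label the unlabeled pool with the model's most confident predictions, and retrain on the enlarged set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy self-training loop (a generic illustration, not the paper's method):
# train on a tiny labeled set, pseudo-label the unlabeled pool with the
# model's most confident predictions, and retrain on the enlarged set.

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:50], y[:50]          # tiny labeled set
X_pool = X[50:1500]                    # unlabeled pool (labels hidden)
X_test, y_test = X[1500:], y[1500:]    # held-out evaluation set

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
print(f"before self-training: {model.score(X_test, y_test):.2f}")

for _ in range(5):
    proba = model.predict_proba(X_pool)
    confident = proba.max(axis=1) > 0.95        # trust only confident calls
    pseudo_y = proba.argmax(axis=1)[confident]  # model's own predictions
    X_aug = np.vstack([X_lab, X_pool[confident]])
    y_aug = np.concatenate([y_lab, pseudo_y])
    model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

print(f"after self-training:  {model.score(X_test, y_test):.2f}")
```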
REVEAL
Google Research published a paper introducing Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory (REVEAL), a visual-language model that can answer knowledge-intensive queries. The model employs neural representation learning to encode knowledge as memory structures that can be efficiently accessed —> Read more.
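REVEAL's memory is multimodal and trained end to end; the sketch below only shows the generic retrieval pattern it builds on: encode knowledge entries as vectors once, then fetch nearest neighbors for each query to augment the model's input. The bag-of-words encoder is an illustrative stand-in for a learned neural encoder.

```python
import numpy as np

# Generic retrieval-from-memory pattern (an illustrative simplification of
# what models like REVEAL do with learned multimodal encoders): knowledge
# entries are encoded into a memory once; queries fetch nearest neighbors.

MEMORY_TEXTS = [
    "The Eiffel Tower is 330 meters tall.",
    "Mount Everest rises 8,849 meters above sea level.",
    "Python was first released in 1991.",
]

# Toy encoder: normalized bag-of-words over a shared vocabulary. A real
# system would use a trained neural encoder producing dense embeddings.
vocab = sorted({w.lower().strip(".,?") for t in MEMORY_TEXTS for w in t.split()})

def encode(text: str) -> np.ndarray:
    words = {w.lower().strip(".,?") for w in text.split()}
    v = np.array([1.0 if w in words else 0.0 for w in vocab])
    norm = np.linalg.norm(v)
    return v / norm if norm else v

memory = np.stack([encode(t) for t in MEMORY_TEXTS])  # precomputed once

query = encode("How tall is the Eiffel Tower?")
scores = memory @ query                               # cosine similarity
print("retrieved:", MEMORY_TEXTS[int(scores.argmax())])
# The retrieved entry is then added to the model's input as extra context.
```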
Differential Privacy in ML
Google Research published a paper reviewing the current state of differential privacy (DP) methods in ML. The paper discusses the core DP ML techniques from both an engineering and a research perspective —> Read more.
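Among the core techniques any such review covers is DP-SGD, which bounds each example's influence by clipping per-example gradients and then adds noise calibrated to the clipping norm. Here is a minimal numpy sketch on a linear model; the privacy constants are illustrative only.

```python
import numpy as np

# Minimal DP-SGD sketch on linear regression: clip each per-example
# gradient to bound any single example's influence, then add Gaussian
# noise scaled to the clipping norm. Constants are illustrative, not a
# calibrated privacy budget.

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=512)

w = np.zeros(10)
clip_norm, noise_mult, lr, batch = 1.0, 1.1, 0.1, 64

for _ in range(200):
    idx = rng.choice(len(X), size=batch, replace=False)
    residual = X[idx] @ w - y[idx]
    grads = residual[:, None] * X[idx]                   # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)   # clip to <= clip_norm
    noise = rng.normal(size=10) * noise_mult * clip_norm / batch
    w -= lr * (grads.mean(axis=0) + noise)               # noisy average update

print("final training MSE:", float(((X @ w - y) ** 2).mean()))
```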
Improved Mathematical Reasoning
OpenAI published a paper detailing a new technique for mathematical problem solving. The method relies on rewarding a model for each correct reasoning step rather than only the final result —> Read more.
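This idea, process supervision, is easy to contrast with outcome supervision in a toy example. In the hypothetical solution below, the model reaches the correct final answer through flawed steps: an outcome-only reward scores it perfectly, while a step-level reward penalizes it. In the paper's setting, the step labels come from human annotators.

```python
# Toy contrast between outcome and process supervision (an illustration of
# the idea, not the paper's implementation). The solution below reaches
# the correct final answer (9) through two incorrect steps; step labels
# like these would come from human annotators in practice.

steps = [
    ("23 - 18 = 5", False),  # should have subtracted 20
    ("5 + 4 = 9", False),    # should have added 6
]
final_answer_correct = True

outcome_reward = 1.0 if final_answer_correct else 0.0     # whole-solution signal
process_reward = sum(ok for _, ok in steps) / len(steps)  # per-step signal

print("outcome reward:", outcome_reward)  # 1.0 -> flawed reasoning rewarded
print("process reward:", process_reward)  # 0.0 -> flawed steps penalized
```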
📌 Event: LLMs in Production II — A Virtual Conference From the Forefront of AI - Jun 15-16
There’s endless chatter about LLMs these days. But the people who actually have interesting things to say are the ones using LLMs in the wild. Join us at the upcoming LLMs in Production Conference to hear from the experts at the forefront of using LLMs. There will be over 50 speakers from Stripe, Meta, Canva, Databricks, Anthropic, Microsoft, Cohere, Redis, Langchain, Chroma, Humanloop, Jasper, Salesforce, and so many more. Be there on June 15-16!
Register for the virtual conference for free, or join in-person workshops (SF).
🤖 Cool AI Tech Releases
Falcon 40B
The Technology Innovation Institute (TII) open sourced Falcon 40B, an LLM trained on one trillion tokens —> Read more.
Aviary
Anyscale, the company behind the Ray platform, open sourced a new LLM serving platform called Aviary —> Read more.
📡 AI Radar
NVIDIA had several exciting generative AI announcements at the COMPUTEX conference in Taipei.
AI-powered customer service platform 8Flow.ai announced a $6.6 million seed round.
Baidu announced a $145 million venture fund to back Chinese AI startups.
Lightmatter, a startup using light to enable deep learning computations, announced a $154 million round.
Enterprise conversational platform Hyro raised $20 million in a new financing round.
Blink released an AI-powered copilot for security workflows.
Automation Anywhere announced partnerships with Google and AWS to enable intelligent automation workflows.
ML platforms Aporia and Databricks announced a partnership to streamline real-time monitoring of ML models.