🔎🧠 Improving Language Models by Learning from the Human Brain
Weekly news digest curated by the industry insiders
📝 Editorial
For the last few years, language models have been the hottest area in the deep learning space. Models like OpenAI’s GPT-3, NVIDIA’s MT-NLG, and Google’s Switch Transformer have achieved milestones in natural language understanding (NLU) that were unimaginable just a few years ago. However, that generation of models remains a collection of sophisticated machines for predicting the next word given a specific text. The next generation of NLU models is expected to come closer to resembling human cognitive abilities. Getting there, however, will require a deep understanding of how the human brain processes language, which in turn requires strong collaboration between leading researchers in ML and neuroscience.
Meta AI Research (FAIR) has been one of the top AI research labs embarking on initiatives to understand the human brain and improve NLU models. FAIR announced a long-term collaboration with neuroscience labs to study how language models and the human brain respond to written or spoken sentences. Initial results show some astonishing similarities in how the brain and NLU models predict the next word when the surrounding context is close at hand. However, the results also highlighted the human brain’s ability to forecast words far ahead in a sentence, something that is hard to recreate in current NLU methods. More importantly, FAIR believes this type of study will help transition NLU models from sophisticated word prediction engines into systems with genuine text comprehension capabilities. Based on the initial results, the FAIR study could become a highly influential source of ideas for the next few years of research and development in language models.
🔺🔻TheSequence Scope – our Sunday edition with the industry’s development overview – is free. To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#187: we overview the different types of data parallelism; +explain TF-Replicator, DeepMind’s framework for distributed ML training; +explore FairScale, a PyTorch-based library for scaling the training of neural networks.
Edge#188: a deep dive into continuous model observability with Superwise.ai.
Now, let’s review the most important developments in the AI industry this week
🔎 ML Research
Studying the Human Brain to Build Better Language Models
Meta AI Research (FAIR) announced a long-term initiative to study the human brain to drive insights that can improve NLU models →read more on FAIR blog
Multi-Task Visual Language Model
DeepMind published a paper introducing Flamingo, a visual language model that was able to master multiple tasks using a few-shot learning approach →read more on DeepMind blog
Privacy Protection and Fairness in ML
Amazon Research published a blog post summarizing some of their recent papers in areas such as privacy-preserving ML, federated learning and ML fairness →read more on Amazon Research blog
Removing Exogenous Noise in RL
Microsoft Research published a paper detailing Path Predictive Elimination (PPE), a reinforcement learning algorithm that eliminates exogenous noise →read more on Microsoft Research blog
Offline RL vs. Imitation Learning
Berkeley AI Research (BAIR) lab published a detailed blog post and paper outlining the differences between offline reinforcement learning and imitation learning →read more on BAIR blog
🤖 Cool AI Tech Releases
Amazon Rekognition Streaming Video Events
AWS unveiled the general availability of Rekognition Streaming Video Events, a service that produces notifications based on objects detected in a video stream →read more on AWS ML team blog
📌 Event: Understanding performance and availability for feature stores
Performance and availability come up constantly in discussions of data infrastructure for AI, but the terms are rarely defined precisely. Join Hopsworks at their upcoming event, where Jim Dowling will illustrate what lies behind these terms, the three different facets of performance, and the different levels of high availability.
What: Understanding performance and availability for feature stores
Who: Jim Dowling, CEO at Hopsworks
When: Wednesday, May 4th | 6 PM CEST
🛠 Real World ML
ML and LinkedIn’s Economic Graph
LinkedIn published a blog post describing the ML architecture used to match external companies to their economic graph →read more on LinkedIn blog
ML at Monzo
Online banking startup Monzo offered some details about their internal ML architecture →read more on Monzo engineering blog
💸 Money in AI
AI safety and research company Anthropic raised $580 million in a Series B led by Sam Bankman-Fried, CEO of FTX. Hiring in San Francisco/US.
Contact center automation company Replicant raised $78 million in Series B funding led by Stripes. Hiring remote in the US and Canada.
Relational knowledge graph system creator RelationalAI raised $75 million in Series B funding led by Tiger Global. Hiring remote.
ML application builder Baseten raised $20 million in funding led by Greylock. Hiring remote, San Francisco/US.
Synthetic data startup Synthesis AI raised $17 million in Series A financing led by 468 Capital. Hiring in San Francisco/US.
Deepset, the startup behind the open-source NLP framework Haystack, raised $14 million in a Series A investment led by GV. Hiring remote, Europe/UK.
Conversational AI software company Loris raised a $12 million Series A funding round led by Bow Capital. Hiring in Tel Aviv/Israel and New York/US.