🧬 The AlphaFold Race is On!
We keep you updated with the most important things that happen in the ML world
Last year, DeepMind shocked the biology and artificial intelligence (AI) worlds with the unveiling of AlphaFold2, a deep learning model able to predict the structure of proteins. The model blew away the competition in the famous CASP14 challenge, a contest between different algorithms to predict the physical structure of a protein given a sequence of amino acids. This is considered one of the iconic problems in biology, key to understanding the structure of cells and accelerating drug discovery. Often referred to by the biology community as “the algorithm that will change everything”, AlphaFold2 triggered new levels of hope and innovation at the intersection of AI and biology. However, despite its promise, the internals of AlphaFold2 remained relatively opaque, which raised some concerns within the scientific community.
Inspired by AlphaFold2 and aiming to address some of the transparency criticism, a stellar team from the University of Washington created an alternative model known as RoseTTAFold. The details of the new model were published in a paper in Science magazine this week, and the code was open-sourced. RoseTTAFold claims to achieve results similar to AlphaFold2's at lower computational cost. In a surprising turn of events, DeepMind matched the publication of RoseTTAFold with a new paper in Nature magazine describing some of the details behind AlphaFold2, and also open-sourced its code. Two concurrent publications in the top two scientific journals, each accompanied by an open-source release. How is that for a crazy week in deep learning land? If nothing else, the beneficiary of this level of innovation is the deep learning community, and the little bit of drama in the competition was a nice touch 😉.
🔺🔻TheSequence Scope – our Sunday edition with the industry’s development overview – is free. To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#107: Crowdsourcing in Data Labeling; overview of a few data labeling companies.
Edge#108: Toloka’s Data Labeling Platform for ML and a few interesting use cases with crowdsourced data labeling.
Now, let’s review the most important developments in the AI industry this week
🔎 ML Research
Researchers from the University of Washington published a paper detailing RoseTTAFold, a neural network model that builds on ideas from DeepMind’s AlphaFold2 to achieve similar levels of accuracy with improved performance ->read more in the original paper
Suspiciously timed with the publication of the RoseTTAFold model, DeepMind published a paper outlining some of the details behind AlphaFold2 ->read more in the original paper
Efficient PPO Methods in Reinforcement Learning
Berkeley AI Research (BAIR) published a paper that challenges some of the common assumptions about the effectiveness of on-policy methods in multi-agent reinforcement learning models ->read more on BAIR blog
Noisy Student Training
Google Research published a paper detailing noisy student training, a semi-supervised learning technique that can work effectively in large-scale data scenarios ->read more on Google Research blog
🤖 Cool AI Tech Releases
Model Fine Tuning at Uber
The Uber engineering team published an insightful blog post detailing the architecture used to streamline the optimization of machine learning models in their infrastructure ->read more on Uber blog
Model Health Assurance at LinkedIn
The LinkedIn engineering team published a great overview of the model health assurance capabilities of its Pro-ML architecture ->read more on LinkedIn blog
Microsoft open-sourced torch-ort, a module for optimizing the training of large language models across specific hardware topologies ->read more on Microsoft blog
A Benchmark for Learning from Human Feedback
Berkeley AI Research (BAIR) launched BASALT, a new set of Minecraft environments for evaluating machine learning models that learn to solve tasks from human feedback rather than a predefined reward function ->read more on BAIR blog
💎 We recommend
Our friends from the MLOps community built a useful feature: the ability to compare feature store providers and model monitoring providers. They promise to soon add experiment tracking and deployment tools, with the end goal of covering the whole MLOps lifecycle.
💸 Money in AI
For devs and engineers: