DeepMind's AlphaFold-Latest is Pushing the Boundaries of Scientific Exploration
The model continues making breakthroughs in digital biology.
Next Week in The Sequence:
Edge 341: Our series about fine-tuning dives into a concept everyone should understand: prompt-tuning. We review the original prompt-tuning paper and the Axolotl fine-tuning framework.
Edge 342: Reviews one of the most fascinating papers of the year, "Who is Harry Potter?", which explores techniques for selective forgetting in LLMs.
You can subscribe below:
📝 Editorial: AlphaFold-Latest is Pushing the Boundaries of Scientific Exploration
Powering scientific breakthroughs might be the purest form of a Turing Test. New science requires the combination of sophisticated cognitive skills such as reasoning across disparate domains, experimentation, and a non-trivial dose of creativity and intuition. In recent years, we have seen various AI efforts venturing into scientific discovery. Among these, DeepMind's AlphaFold has been widely regarded as the model that exemplifies the potential of AI for scientific exploration. Last week, DeepMind shared an update on their progress with the latest version of AlphaFold, known as AlphaFold-Latest.
AlphaFold stands as a prime example of innovation at the intersection of the two hottest trends in the current market: artificial intelligence and digital biology. The second version of AlphaFold amazed the scientific community a few years ago by achieving unprecedented accuracy in predicting protein structures from a given sequence of amino acids. This release was followed by AlphaFold-Multimer, which expanded the method to include other complexes containing protein elements. AlphaFold 2.3 took this work to a different scale by applying it to very large complexes. These incremental breakthroughs have paved the way for AlphaFold-Latest.
The current iteration of AlphaFold focuses on predicting structures of more complex biological systems. The work extends beyond proteins to structures containing nucleic acids, small molecule ligands, and modified or non-canonical residues. The combination of these structures is essential for understanding and predicting the behavior of biological mechanisms within a cell, which can, in turn, unlock predictions for incredibly complex biological systems. The implications are broad and far-reaching, from cancer drug discovery to new vaccines and pollution-resistant materials.
AlphaFold-Latest has demonstrated impressive performance, generating accurate predictions that go far beyond protein folding. The release of AlphaFold-Latest can accelerate scientific breakthroughs at an unprecedented speed and, hopefully, inspire a new generation of models in this space.
AI Hot Takes - Agree or Disagree?
Frank Liu, head of AI & ML at Zilliz, the company behind the widely adopted open source vector database Milvus, shares his red-hot takes on the latest topics in AI, ML, LLMs and more! Hear why long-context models are NOT going to replace vector databases for RAG and why data contamination in LLMs isn't a new problem in ML —> Watch Videos
🔎 ML Research
AlphaFold vNext
Google DeepMind shared an update on its protein structure prediction model AlphaFold. The new version can generate predictions for most of the molecules included in the Protein Data Bank and is expanding into other areas of digital biology —> Read more.
Zero-Shot-Reasoning Prompting
Google Research published a paper detailing a zero-shot reasoning technique with self-adaptive prompting. The method constructs pseudo-demonstrations for LLMs that can improve reasoning capabilities —> Read more.
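The core idea, sketched minimally below, is to sample several zero-shot answers per question, keep the most self-consistent ones, and reuse them as in-context demonstrations. The `generate_answers` helper is a hypothetical placeholder for an LLM call, and the scoring is simplified relative to the paper.

```python
import random
from collections import Counter

def generate_answers(question: str, n: int = 8) -> list[str]:
    """Hypothetical placeholder for sampling n zero-shot chain-of-thought
    answers from an LLM; swap in a real model call here."""
    return [random.choice(["answer A", "answer B"]) for _ in range(n)]

def build_pseudo_demonstrations(questions: list[str], k: int = 3) -> str:
    """Keep the k most self-consistent question/answer pairs and format them
    as in-context demonstrations (a simplified take on the idea; the paper's
    selection criteria are more involved)."""
    scored = []
    for q in questions:
        answers = generate_answers(q)
        # The majority answer and its agreement rate act as a consistency score.
        answer, count = Counter(answers).most_common(1)[0]
        scored.append((count / len(answers), q, answer))
    scored.sort(reverse=True)
    return "\n\n".join(f"Q: {q}\nA: {a}" for _, q, a in scored[:k])

# The resulting string can be prepended to a new prompt as few-shot context.
print(build_pseudo_demonstrations(["What is 2 + 2?", "Capital of France?"]))
```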
EdiT5
Google Research published a paper detailing EdiT5, a transformer model based on the T5 architecture fine-tuned for grammar correction and text editing. EdiT5 is the model behind the grammar correction experience in Google Search → Read more.
DataFormulator
Microsoft Research published a paper detailing DataFormulator, an AI-first approach to data visualization. DataFormulator translates high-level visualization intent into low-level data visualization actions —> Read more.
Foundation Models and Video Representations
Amazon Science published a paper introducing a technique called motion-guided masking (MGM) to track motion across video frames. The technique can help improve the extraction of semantic representations in foundation models for video —> Read more.
🤖 Cool AI Tech Releases
Embed v3
Cohere released the newest version of its embedding model —> Read more.
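For context, embedding a document with the new model looks roughly like the snippet below, a sketch using Cohere's Python SDK; the model name and `input_type` argument follow the Embed v3 announcement, but consult Cohere's docs for current usage.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # assumption: your Cohere API key goes here

response = co.embed(
    texts=["AlphaFold predicts protein structures from amino acid sequences."],
    model="embed-english-v3.0",
    input_type="search_document",  # v3 models distinguish documents from queries
)

print(len(response.embeddings[0]))  # dimensionality of the returned embedding
```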
Stability AI Releases
Stability AI unveiled a series of new additions to its platform in areas such as image transformation, 3D and fine-tuning —> Read more.
🛠 Real World ML
LLM Architectures at GitHub
GitHub ML engineers discuss the architecture of LLM apps —> Read more.
Walmart Enterprise Chatbot
Walmart discusses an architecture used to build enterprise chatbots based on LangChain, a vector database, and GPT-4 —> Read more.
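As a rough illustration of that pattern (not Walmart's actual implementation), a retrieval-augmented chatbot wired together with the pre-1.0 LangChain APIs, a FAISS vector store, and GPT-4 can be sketched as follows.

```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Toy documents standing in for an enterprise knowledge base.
docs = [
    "Store hours are 8am to 10pm on weekdays.",
    "Associates can reset their password through the internal portal.",
]

# Index the documents in a vector store (FAISS here; any vector DB works).
vectorstore = FAISS.from_texts(docs, OpenAIEmbeddings())

# Retrieval-augmented chain: fetch relevant chunks, then answer with GPT-4.
chatbot = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4"),
    retriever=vectorstore.as_retriever(),
)

print(chatbot.run("How do I reset my password?"))
```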
📡AI Radar
Elon Musk’s xAI may release the first version of its model to a select group next week.
AWS announced EC2 Capacity Blocks for ML, an interesting service that enables the reservation of GPU clusters in advance for a specific amount of time.
More than 70 attendees at the AI Safety Summit in the UK signed a letter about safety and openness in AI development.
Snowflake unveiled Cortex, a new platform for enterprise LLMs.
Vespa, the big-data AI startup spun out of Yahoo, announced a $31 million Series A.
Cranium raised $25 million for its enterprise AI security platform.
AMD delivered strong quarterly results driven by the demand for AI chips.
AI defense platform Shield AI raised $200 million in new funding.
HubSpot announced the acquisition of Clearbit to enhance its AI capabilities.
Brave’s AI assistant is now available on its desktop edition.