✨ DeepMind and OpenAI's Magical Week in AI
Weekly news digest curated by industry insiders
Artificial intelligence (AI) is advancing at an astonishing rate, with new milestones reached on a regular basis. Even though we have grown accustomed to regular breakthroughs, from time to time we see AI research that challenges the boundaries of human comprehension. Just think of releases such as AlphaGo, GPT-3, and AlphaFold, models that account for some of the most important milestones of the last decade in AI. This week surprised us with two groundbreaking releases from DeepMind and OpenAI.
Earlier in the week, DeepMind announced AlphaCode, a new transformer model that can generate high-quality programming code. AlphaCode follows OpenAI's work on Codex, but DeepMind disclosed that AlphaCode has been able to solve complex problems from programming competitions, which certainly sets a new bar for AI-powered code generation models.
As if the release of AlphaCode were not impressive enough, OpenAI published research on a neural theorem prover that works in the Lean formal math environment and can solve complex mathematical problems, including some from high-school math Olympiads. Mathematical reasoning is one of the ultimate tests for AI, and the model was able to solve problems that only elite high school students (those who represent their countries in the math Olympiads) are able to tackle.
DeepMind and OpenAI continue to push the boundaries of AI research and development. AlphaCode and OpenAI's theorem prover represent remarkable breakthroughs in two of the most challenging areas of AI. Certainly an impressive week.
🔺🔻 TheSequence Scope is our Sunday free digest. To receive high-quality educational content about the most relevant concepts, research papers, and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#163: we explain Variational Autoencoders; +VQ-VAE, DeepMind's variational autoencoder for large-scale image generation; +Pixyz, a simple library for building generative models in PyTorch.
Edge#164: we deep dive into Meta’s Data2vec, a new self-supervised model that works for speech, vision, and text.
Now, let’s review the most important developments in the AI industry this week.
🔎 ML Research
OpenAI published a paper detailing a neural theorem prover that can solve complex math problems, including some from high-school Olympiads →read more on the OpenAI blog
DeepMind published a paper unveiling AlphaCode, a transformer model that can generate code at the level of human programming competitions →read more on the DeepMind blog
Google Research published a paper unveiling BC-Z, a model that helps robots generalize to new tasks they were not trained to perform →read more on the Google Research blog
Improving User Experience in Conversational Agents
Stanford University published three papers detailing techniques to improve the user experience of conversational agents →read more on the Stanford University blog
🤖 Cool AI Tech Releases
Scale AI, one of the hottest startups in the ML space, released a new product focused on synthetic data generation →read more on the Scale AI blog
Quantum ML Course
The Qiskit team, which is behind the most popular quantum computing framework on the market, released a course on quantum ML →read more on the Qiskit blog
New Body Segmentation Models
The TensorFlow team released two new body segmentation models that work efficiently with TensorFlow.js →read more on the TensorFlow blog
🛠 Real World ML
Uber published a blog post detailing the architecture behind RADAR, its fraud detection system →read more on the Uber blog
LinkedIn discussed DARWIN, its platform for enabling tool diversity in data science experimentation →read more in this blog post from the LinkedIn engineering team
💸 Money in AI