Google’s Big ML Week
Weekly news digest curated by the industry insiders
For years, Google I/O has been one of the most exciting conferences in the tech world, given the number of products regularly unveiled at the event. The 2022 edition of Google I/O took place last week, and machine learning (ML) was front and center. Just like Microsoft's Ignite and AWS re:Invent, I/O provides a front-row seat to the ML innovation happening at Google and the new additions to its ML stack.
This year's edition of I/O was packed with ML announcements across software, hardware, and research. On the hardware and infrastructure front, Google announced the general availability of its Cloud TPU VMs as well as what can be considered the largest ML compute cluster available. The new Cloud ML Hub boasts an astonishing 9 exaflops of compute power. On the software side, Google announced support for 24 new low-resource languages in Google Translate, new ML capabilities for Google Maps, and new libraries added to TensorFlow. Google also made available new versions of LaMDA (Language Model for Dialog Applications) and the Pathways Language Model (PaLM), which power systems such as the Google Assistant. Another interesting release was the AI Test Kitchen, which provides users with an interactive experience for exploring the capabilities of these models. Finally, there was an unexpected announcement of a new form of augmented reality glasses that leverage sophisticated computer vision and language models.
I/O 2022 provided a glimpse of Google's investments in ML research and technology. While no major new product lines were announced, these incremental releases should consolidate Google's position as one of the main ML platform ecosystems on the market.
🔺🔻TheSequence Scope – our Sunday edition with the industry’s development overview – is free. To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#191: we discuss the fundamental enabler of distributed training: the message passing interface (MPI); +Google's paper about General and Scalable Parallelization for ML Computation Graphs; +the most relevant technology stacks that enable distributed training in TensorFlow applications.
Edge#192: a deep dive into Predibase, the first declarative ML platform.
Now, let's review the most important developments in the AI industry this week.
🔎 ML Research
A Generalist Agent
DeepMind published a paper introducing Gato, a large model that can perform multiple tasks in highly heterogeneous environments spanning computer vision, physical embodiment, and language →read more in this summary published by DeepMind
Foundation Models
IBM Research published a detailed blog post discussing how foundation models can serve as a way to build more reusable ML models →read more on IBM Research
Chain of Thought in Language Models
Google Research published a paper about a chain-of-thought prompting method that enables reasoning in language models →read more on Google Research blog
Improved Model-Based RL
Carnegie Mellon University published a paper describing a technique to improve training data selection in model-based RL →read more on CMU ML Lab blog
🤖 Cool AI Tech Releases
Largest Available ML Cluster
Google announced what is effectively the largest publicly available ML cluster, with eight TPU v4 pods providing 9 exaflops of compute power →read more on Google Cloud blog
Cloud TPU VMs
At its I/O conference, Google announced the general availability of Cloud TPU VMs →read more on Google Cloud blog
New Languages in Google Translate
Google announced a new version of Google Translate with support for 24 new languages →read more from Google Translate team
On-Device ML Search
TensorFlow added a new library that enables on-device image, text, or audio search across large datasets →read more on TensorFlow blog
🛠 Real World ML
TensorFlow at Karrot
The engineering team at Karrot, the local marketplace and community app, published details about their TensorFlow-based ML architecture →read more on their blog
ML Model Persistence at Walmart
Walmart shared insights about the techniques it uses to persist and retrieve models with PySpark →read more on Walmart Global Tech blog
💸 Money in AI