💥 The “What’s New in AI” recap #2️⃣
TheSequence is the best way to build and reinforce your knowledge about machine learning and AI
Every six months we provide a summary of what we’ve recently covered in TheSequence. Catch up with what you missed and prepare for the next half of the year. This issue is the second part of the “What’s New in AI” recap. What’s New in AI is our deep dive into one of the freshest research papers or technology frameworks worth your attention.
Starting next week, look for our fresh Edges about data labeling, our series on Transformers, and other fascinating ML topics. Now, let’s review what we have:
Edge#74: How Uber, Google, DeepMind and Microsoft Train Models at Scale. Large-scale training is one of the most challenging aspects of building deep learning solutions in the real world. In this issue, we review some of the top architectures these companies use.
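To make the core idea behind those architectures concrete, here is a minimal sketch of synchronous data parallelism, the pattern underlying many large-scale training setups: each worker computes a gradient on its own data shard, the gradients are averaged (an “all-reduce”), and one shared model is updated. This is a pedagogical toy in plain Python, not the API of any of the frameworks reviewed in the Edge.

```python
# Toy synchronous data-parallel training step: N simulated workers each
# compute a gradient on their shard, gradients are averaged, and a single
# shared model parameter is updated.

def gradient(w, shard):
    # Gradient of mean squared error for a 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    # Each "worker" computes its local gradient, then we "all-reduce"
    # by averaging before applying one update to the shared weight.
    grads = [gradient(w, shard) for shard in shards]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Data generated by y = 3x, split across two "workers".
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Real systems differ mainly in *how* the averaging happens (parameter servers, ring all-reduce, pipeline and model parallelism), which is exactly what the Edge compares.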
Edge#80: TensorFlow Quantum Just Turned One Year Old. In this Edge, we overview:
Quantum machine learning and its two main components: Quantum Datasets and Hybrid Quantum Models.
Quantum computing framework Cirq.
TensorFlow Quantum (TFQ) – a framework for building QML applications, as well as the steps TFQ follows to build and train QML models.
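The hybrid quantum-classical training loop at the heart of frameworks like TFQ can be sketched without any quantum library: a parameterized “circuit” produces an expectation value, and a classical optimizer updates the parameter using the parameter-shift rule. Here the circuit is a single-qubit RY(θ) rotation simulated analytically (for RY(θ)|0⟩, ⟨Z⟩ = cos θ); this illustrates the principle only, not TFQ’s actual API.

```python
import math

# Hybrid quantum-classical loop, sketched: a "quantum" expectation value
# is evaluated for a parameterized circuit, and a classical optimizer
# updates the circuit parameter.

def expectation_z(theta):
    # For RY(theta)|0>, the expectation value of the Z observable is cos(theta).
    return math.cos(theta)

def parameter_shift_grad(theta):
    # Parameter-shift rule: the gradient of a quantum circuit can be
    # obtained from two shifted circuit evaluations instead of backprop.
    return (expectation_z(theta + math.pi / 2) - expectation_z(theta - math.pi / 2)) / 2

# Classical gradient descent minimizing <Z>, driving the qubit toward |1>.
theta, lr = 0.5, 0.4
for _ in range(100):
    theta -= lr * parameter_shift_grad(theta)
print(round(expectation_z(theta), 3))  # approaches -1.0
```

In TFQ the same loop runs with real circuits (built in Cirq) as layers inside a TensorFlow model, so quantum and classical parameters train together.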
Edge#86: How DeepMind Prevents RL Agents from Getting “Too Clever”: Recently, DeepMind started analyzing the problem of task specification in more detail and proposed the notion of aligned RL agents that have the objective of achieving the best possible result in their environment… ->read more
Edge#90: OpenAI Safety Gym is an Environment to Improve Safety in RL Models: The idea of incorporating safety as a measure in RL agents is certainly intriguing, and Safety Gym is one of the first fully automated approaches to enable these ideas for the next generation of RL agents -> read our deep dive
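Safety Gym pairs the usual task reward with a separate cost signal, and a common way to train under such constraints is a Lagrangian approach: maximize reward minus λ·cost, adapting λ so that average cost stays under a budget. The sketch below illustrates that principle on a two-action bandit standing in for the environment; it is a toy, not Safety Gym’s API.

```python
# Toy Lagrangian constrained RL: the agent greedily maximizes
# reward - lambda * cost, while lambda adapts to keep average cost
# under a budget.

ACTIONS = {"risky": (1.0, 1.0), "safe": (0.6, 0.0)}  # action: (reward, cost)
BUDGET = 0.1     # maximum acceptable average cost
lam, lr = 0.0, 0.01
costs = []

for _ in range(2000):
    # Greedy policy on the Lagrangian objective: reward - lambda * cost.
    action = max(ACTIONS, key=lambda a: ACTIONS[a][0] - lam * ACTIONS[a][1])
    _, cost = ACTIONS[action]
    costs.append(cost)
    # Dual update: raise lambda when over budget, lower it when under.
    lam = max(0.0, lam + lr * (cost - BUDGET))

avg_cost = sum(costs) / len(costs)
print(round(avg_cost, 2))  # hovers near the 0.1 budget
```

The multiplier λ acts as an automatic “price” on unsafe behavior: it rises until the risky action stops being worth it, then relaxes, keeping average cost near the budget.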
Edge#98: OpenAI Built RL Agents that Mastered Montezuma’s Revenge by Going Backwards. It turns out that replaying a knowledge sequence backwards for small time intervals is an incredibly captivating method of learning and also a marvel of human cognition. How would a similar skill look in artificial intelligence (AI) systems? Read more ->
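The “going backwards” idea can be sketched as a backward curriculum: start the agent at states near the end of a demonstration, and move the start point earlier each time the agent reliably reaches the goal from there. The toy below uses a 1-D chain world (reward only at the far end, which naive exploration rarely reaches) with tabular Q-learning; it illustrates the curriculum, not OpenAI’s actual implementation.

```python
import random

# Backward-curriculum sketch: learn to reach the goal from states near
# the end of a "demonstration", then walk the start point backwards.

GOAL = 20                      # reward only at the last state of the chain
demo = list(range(GOAL + 1))   # the "demonstration": states 0, 1, ..., 20

def run_episode(q, start, eps=0.2):
    # One eps-greedy Q-learning episode; actions are -1 (left), +1 (right).
    s = start
    for _ in range(60):
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q.get((s, act), 0.0))
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(q.get((s2, -1), 0.0), q.get((s2, 1), 0.0))
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + 0.5 * (r + 0.9 * best_next - old)
        s = s2
        if s == GOAL:
            return True
    return False

random.seed(0)
q = {}
# Start from the demo state closest to the goal; every time the agent
# succeeds from there, move the start one step earlier in the demo.
start_idx = GOAL - 1
for _ in range(5000):
    if start_idx == 0:
        break
    if run_episode(q, demo[start_idx]):
        start_idx -= 1
print(start_idx)  # 0: the agent eventually succeeds from the very start
```

Because each new start state is only one step away from states the agent already masters, the sparse goal reward propagates backwards along the chain instead of being a needle-in-a-haystack exploration problem.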
Edge#100: Facebook NetHack Challenge is Likely to Become One of the Toughest Reinforcement Learning Benchmarks in History. Despite all its success, Facebook AI Research (FAIR) believes that RL needs to be pushed to new levels and, for that, they are turning their attention to a new game: NetHack -> learn more about the NetHack competition and how it can advance the research
Edge#102: DeepMind Wants to Redefine One of the Most Important Algorithms in Machine Learning as a Game. The DeepMind work is one of those papers you can’t resist reading based on the title alone -> what did DeepMind come up with, and how can it be extended to some of the fundamental optimization problems in machine learning? Read further
(read along, no subscription is needed) Edge#104: AllenNLP Makes Cutting-Edge NLP Models Look Easy: Created by the Allen Institute for AI, AllenNLP provides a simple and modular programming model for applying advanced deep learning techniques to NLP research, streamlining the creation of NLP experiments and abstracting the core building blocks of NLP models -> read this overview that got over 90 likes ;)
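The “modular programming model” idea can be illustrated with a toy pipeline built from small, swappable components (tokenizer -> vocabulary/indexer -> model), which is the abstraction style AllenNLP popularized. The class names below are simplified stand-ins, not AllenNLP’s actual API.

```python
# Toy modular NLP pipeline in the spirit of AllenNLP's building blocks:
# each component has a narrow interface and can be swapped independently.

class WhitespaceTokenizer:
    def tokenize(self, text):
        return text.lower().split()

class Vocabulary:
    def __init__(self):
        self.token_to_id = {"@@UNKNOWN@@": 0}

    def add(self, token):
        self.token_to_id.setdefault(token, len(self.token_to_id))

    def index(self, token):
        return self.token_to_id.get(token, 0)

class BagOfWordsModel:
    """A trivial 'model' that scores text by its share of known tokens."""
    def __init__(self, vocab):
        self.vocab = vocab

    def forward(self, token_ids):
        return sum(1 for i in token_ids if i != 0) / max(len(token_ids), 1)

# Wire the components together; replacing the tokenizer or the model
# does not require touching the other pieces.
tokenizer = WhitespaceTokenizer()
vocab = Vocabulary()
for tok in tokenizer.tokenize("deep learning for NLP research"):
    vocab.add(tok)
model = BagOfWordsModel(vocab)

ids = [vocab.index(t) for t in tokenizer.tokenize("NLP research is fun")]
print(model.forward(ids))  # fraction of in-vocabulary tokens: 0.5
```

In AllenNLP proper, the analogous pieces (dataset readers, token indexers, models) are declared in config files, so experiments can be re-composed without changing code.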
Next week we start with Data Labeling. Stay tuned!