♟♟ Chess Learning Explainability
Weekly news digest curated by industry insiders
Chess has long been considered a solved problem in machine learning (ML), as many chess programs have achieved superhuman performance. However, chess continues to contribute to the ML field in surprising ways. One of those new areas of contribution has to do with understanding how deep learning models build knowledge representations in complex domains such as chess. Traditional chess engines often start with extensive collections of games as well as established knowledge pools of openings, middlegame tactics, and endgame technique. That approach was challenged by recent chess models like DeepMind’s AlphaZero, which mastered chess simply by playing games against itself. AlphaZero quickly became the strongest chess engine in the world and also discovered a whole set of new lines in chess openings that challenged conventional wisdom. Despite the success and popularity of AlphaZero, we still know very little about how it builds its knowledge. This is starting to change thanks to a collaboration between DeepMind, Google Brain, and one of the brightest minds in chess history.
In a paper released this week, DeepMind and Google Brain collaborated with former chess world champion Vladimir Kramnik to evaluate how AlphaZero develops knowledge representations of chess positions. This level of analysis is incredibly relevant for adding a layer of interpretability to superhuman neural networks. The general assumption is that complex neural networks build opaque knowledge representations that are nearly impossible to interpret. However, some recent empirical evidence seems to challenge that belief, suggesting that those neural networks develop plenty of human-understandable concepts. The study of AlphaZero added more evidence to this thesis, illustrating how the superhuman neural network developed several widely understood human chess concepts during its learning process. Furthermore, the research showed exactly when AlphaZero developed these concepts during training, helping us understand how explainable knowledge representations are built in complex neural networks. Certainly one of the most fascinating papers of this year.
🍂🍁 TheSequence Scope is our Sunday free digest. To receive high-quality educational content about the most relevant concepts, research papers and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🍂🍁
🗓 Next week in TheSequence Edge is Thanksgiving Week: we will share a few of our most important content series, deep dives, and best interviews.
Now, let’s review the most important developments in the AI industry this week.
🔎 ML Research
Understanding Chess Knowledge Acquisition
DeepMind collaborated with former world chess champion Vladimir Kramnik in a fascinating paper about how AlphaZero acquires and develops chess knowledge →read more in this article from Chessbase
Self-Supervised Speech in 128 Languages
Facebook AI Research (FAIR) published a paper detailing XLS-R, a self-supervised model that can master speech tasks in 128 languages →read more on FAIR blog
Evaluation and Reporting in Reinforcement Learning
Google Research published a paper and open-sourced RLiable, a method for quantifying uncertainty in RL models →read more on Google Research blog
Predicting Text Readability
Google Research published a paper proposing a method to predict text readability based on screen interactions, such as scrolls →read more on Google Research blog
🛠 Real World ML
DataOps vs. MLOps
Walmart Labs published a blog post explaining their ideas about DataOps and its relevance in MLOps pipelines →read more on Walmart Global Tech blog
🤖 Cool AI Tech Releases
GNNs in TensorFlow
TensorFlow open-sourced TensorFlow Graph Neural Networks (GNNs), a new framework designed to streamline GNNs implementation and graph data processing in deep learning models →read more on TensorFlow blog
SynapseML
Microsoft Research open-sourced SynapseML (formerly MMLSpark), a library that enables the implementation of massively parallel machine learning pipelines →read more on Microsoft Research blog
OpenAI API General Availability
OpenAI removed the waitlist requirement to access its popular API that includes models like GPT-3 and Codex →read more on OpenAI blog
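With the waitlist gone, any developer can sign up and call the API directly. As a rough illustration, the sketch below builds a GPT-3 completion request using only Python's standard library; the endpoint path, model name, and parameters are assumptions based on OpenAI's public documentation at the time, and actually sending the request would require a real API key.

```python
import json
import os
import urllib.request

# Assumed endpoint for GPT-3 "davinci" completions (per OpenAI's public docs).
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_request(prompt: str, max_tokens: int = 32) -> urllib.request.Request:
    """Builds (but does not send) a text-completion request."""
    payload = {"prompt": prompt, "max_tokens": max_tokens, "temperature": 0.7}
    headers = {
        "Content-Type": "application/json",
        # The key is read from the environment; without one, the server rejects the call.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )

req = build_request("Write a haiku about chess:")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req) with a valid OPENAI_API_KEY set.
```

The request is only constructed here, not sent, so the sketch runs without credentials; swapping in the official `openai` Python client would make the same call in a couple of lines.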
💸 Money in AI
Writing assistant Grammarly raised a $200 million round led by Baillie Gifford. Its post-money valuation increased to $13 billion, making Grammarly one of the 10 most valuable US startups. Hiring in San Francisco and New York/US, Kyiv/Ukraine, Vancouver/Canada.
Chargeback prevention solution Justt emerged from stealth with $70 million raised across three funding rounds, including a Series B led by Oak HC/FT and two previously unannounced rounds led by Zeev Ventures and F2 Venture Capital, respectively. Hiring.
Voice AI platform SoundHound goes public through a merger with Archimedes Tech SPAC Partners Co.