🧠🧠🧠 The Thousand Brains Theory, A New AI Book You Must Read
The Scope covers the most relevant ML papers, real-world ML use cases, cool tech releases, and $ in AI. Weekly
Jeff Hawkins is one of my AI heroes. His book On Intelligence is one of the clearest, most insightful works about the deep relationship between neuroscience and artificial intelligence (AI). The fascinating thing about Jeff is that he is not only a theoretician but a world-class technologist. He was one of the co-founders of Palm Computing and is now doing some groundbreaking AI work at Numenta. Now Jeff is back with a remarkable new book. A Thousand Brains: A New Theory of Intelligence has to be one of the most fascinating AI books I have read in recent years, challenging some of the core foundations of AI and proposing alternative paths.
The book proposes a unique theory of intelligence based on a very intriguing principle. When presented with an environment, instead of building one model of objects, our brain builds many models based on different inputs. Some inputs involve movement, others texture or distance. The ultimate model of an object is reached by achieving some form of consensus among all those representations. Hawkins backs this idea with decades of neuroscience research. The thousand brains theory could have profound implications for the design of neural network architectures, facilitating modularity and knowledge transfer over super large models. Whether you agree or disagree with Hawkins’s ideas, they will definitely make you think. You know, using our thousand brains 😉.
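To make the "many models reaching consensus" idea concrete, here is a loose, hypothetical sketch in Python. This is not Numenta's implementation or anything from the book's code; it simply illustrates the voting intuition with made-up model names, where several independent models, each attending to a different cue (texture, shape, movement), classify the same object and a majority vote settles the answer.

```python
# Hypothetical illustration of the thousand-brains "consensus" idea:
# several independent models vote, and the majority label wins.
# All names here (columns, cues, labels) are invented for this sketch.
from collections import Counter

def consensus(predictions):
    """Return the label most models agree on (simple majority vote)."""
    votes = Counter(predictions)
    label, _count = votes.most_common(1)[0]
    return label

# Each "column" stands in for a model keyed by the cue it observes.
columns = {
    "texture":  lambda obs: obs["texture_guess"],
    "shape":    lambda obs: obs["shape_guess"],
    "movement": lambda obs: obs["movement_guess"],
}

# One observation of an object, with each cue producing its own guess.
observation = {
    "texture_guess": "coffee cup",
    "shape_guess": "coffee cup",
    "movement_guess": "bowl",  # one model disagrees
}

predictions = [model(observation) for model in columns.values()]
print(consensus(predictions))  # majority vote -> "coffee cup"
```

In a real system each column would be a learned model over its own sensory stream, and the consensus mechanism would be richer than a vote, but the modular structure is what makes the theory interesting for neural architecture design.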
🔺🔻TheSequence Scope – our Sunday edition with the industry’s development overview – is free. To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#131: we discuss Self-Supervised Learning for Language; we explore XLM-R, one of the most powerful SSL cross-lingual models ever built.
Edge#132: we overview WhyLabs, an end-to-end AI observability and monitoring platform that enables transparency across the different stages of ML pipelines.
📌 Join us free
We are happy to support Feature Store Summit by Hopsworks! It’s dedicated to cutting-edge technologies that facilitate bringing ML models into production. The event, taking place on October 12-13, is 100% digital and free to attend.
Now, let’s review the most important developments in the AI industry this week
🔎 ML Research
Predicting Gene Expressions
DeepMind published a paper proposing a transformer-based method called Enformer that can predict gene expression from DNA sequences →read more on DeepMind blog
Common Sense Evaluation
IBM Research published a paper proposing a benchmark for evaluating psychological reasoning capabilities in deep learning models →read more on IBM Research blog
Pragmatic Image Compression
Berkeley AI Research (BAIR) published a paper proposing a technique for image compression optimized for computer vision models →read more on BAIR blog
Instruction Fine Tuning
Google Research published a paper proposing a new technique to improve the fine-tuning of pretrained NLP models →read more on Google Research blog
🛠 Real World ML
Explainable AI at LinkedIn
The LinkedIn engineering team published a blog post about the architecture and techniques used to enable explainability in their ML infrastructure →read more on LinkedIn blog
Training Large Deep Learning Models on a Budget
ML startup AssemblyAI published an insightful blog post about how to train large-scale deep learning models without breaking the bank →read more on their blog
🤖 Cool AI Tech Releases
Google open-sourced FedJAX, a library for federated learning implemented in JAX →read more on Google Research blog
🗯 Useful Tweet
That’s just super cool
💸 Money in AI
For ML&AI teams:
Software test automation startup Autify raised a $10 million Series A round led by World Innovation Lab (WiL). Hiring in Japan and remote.
PaaS for synthetic data generation Rendered.ai raised a $6 million seed round led by Space Capital.