Sitemap - 2021 - TheSequence

🕋 8 Free ML courses – our favorites

🫀 MLOps recap, part 1

🔮🤓 Some Non-Obvious ML Predictions for 2022

🚚 🚛 Edge#152: How DeepMind and Waymo Train Self-Driving Car Models

📚 Your Reading List for 2022

👯‍♀️📦 Edge#151: Model Packaging

🥜 Meta’s Clever Idea to Handle Uncertainty in ML models

🏷 Data Labeling for ML, part 3

✨ Edge#150: Microsoft’s SynapseML is a New Framework for Large-Scale Machine Learning

📺 3 tips to optimize your infrastructure for data-driven targeting*

🔎 Edge#149: Model Tracing and Lineage

🕹 A Massive Leap in ML for Gaming

📝 The Definitive ML Observability Checklist*

➗✖️ Edge#148: The OpenAI Model to Solve Text Math Problems

🎙 Doug Downey/Semantic Scholar: Applying Cutting Edge NLP at scale

🍮 Edge#147: MLOps – Model Serving

✴️ Amazon’s Big ML Week

📕 If only someone wrote the book on ML Observability*

🔺 Edge#146: A Deep Dive Into Arize AI ML Observability Platform  

📝 Guest post: A Guide to Leveraging Active Learning for Data Labeling

🔬 Edge#145: MLOps – Model Observability

👨🏼‍🎓👩🏽‍🎓 The Standard for Scalable Deep Learning Models

▪️▫️▪️▫️ Edge#144: How Many AI Neurons Does It Take to Simulate a Brain Neuron?

🎙 TheSequence Chat: a year of interesting conversations

🏗 Edge#143: Feature Stores in ML Pipelines: A Recap

♟♟ Chess Learning Explainability

🤖Edge#142: How Microsoft Built a 530 Billion Parameter Model

☝️ CoreWeave is allocating $50 million to scale the best compute infrastructure for ML*

🩺 Edge#141: MLOps – Model Monitoring

🐉 NVIDIA's ML Software Moment

🌟Share your opinion on MLOps and get rewarded*

☁️ Edge#140: cnvrg.io’s Metacloud aims to help AI developers fight vendor lock-in

🎙 Brian Venturo/CoreWeave about GPU-first ML infrastructures

🔥 Edge#139: MLOps – one of the hottest topics in the ML space

➗✖️ OpenAI New NLP Challenge: Mathematical Reasoning

📝 Guest post: How to build SuperData for AI [Full Checklist]*

🏷 Edge#138: Toloka App Services Aims to Make Data Labeling Easier for AI Startups

📌 Event: MLOps Cocktails Done Right: How to Mix Data Science, ML Engineering, and DevOps*

🤓 Self-Supervised Learning Recap

🤔🤯 Addressing One of the Fundamental Questions in Machine Learning

🤩 Early access: try the world's most flexible AI cloud*

🔵⚪️Edge#136: Kili Technology and Its Automated Data-Centric Training Platform

🎙 Olga Megorskaya/Toloka: Practical Lessons About Data Labeling

👓 Edge#135: Self-Supervised Learning for Computer Vision 

🤖✨ ML, Physics and Robotics

🎙 Rinat Gareev/Provectus About Data Quality and Enterprise ML Solutions

🔵🔴 Edge#134: Run:AI's 2021 AI Infrastructure Survey

📝 Guest post: 7 key considerations to develop a scalable annotation pipeline

🗣 Edge#133: Self-Supervised Learning for Speech 

🗣🏎 The Race for Big Language Models Continues

🟠🟣 Edge#132: WhyLabs, AI Observability as a Service

🎙 Paroma Varma/Snorkel on programmatic approaches to data labeling

✍🏽 Edge#131: Self-Supervised Learning for Language

🧠🧠🧠 The Thousand Brains Theory, A New AI Book You Must Read

📝 Guest post: Data Aggregation is Unavoidable! (And Other Big Data Lies)

🧙🏻‍♂️ Edge#130: The ML Engineering Magic Behind OpenAI Codex

🐹 Edge#129: Self-Supervised Learning as Non-Contrastive Learning 

🌋 The Biggest Problems in ML Safety

📌 Join us free for TransformX on Oct 6-7

🤖 Edge#128: Wu Dao – the Biggest Transformer Model in History

🎙 Judah Phillips / Squark about No-Code Predictive Analytics

🐼 Edge#127: How Contrastive Learning Influences Self-Supervised Learning methods

👑 The GPT-3 Influence Factor

📌 Join us on Oct 6: Data Aggregation is Unavoidable! (And Other Big Data Lies)*

🔷🟥 Edge#126: Pachyderm 2 Brings New Data Capabilities to Accelerate Your ML Lifecycle

📌 Event: Data Discovery and Observability for ML Engineers

⚡️Edge#125: Self-Supervised Learning as Energy Based Methods

👑 Big Tech and their Favorite Deep Learning Schools

🏷 Data Labeling for ML, part 2

🦾Transformer Architectures Recap

🎙 German Osin/Provectus About Data Discovery and Observability in ML Solutions

🌌 Edge#123: A New Series About Self-Supervised Learning

🏷🥊 The Fight Against Labeled Dataset Dependencies

🌟 Take part in the ML Insider Survey

🎭 Edge#122: Unified VLP is a Transformer Model for Visual Question Answering

🎙 Bryce Daines/CDS at Modulus Therapeutics: Using ML to Power Next Generation Cell Therapy

🕐🕚 Edge#121: Transformers and Time Series    

🥗 Will Machine Learning Data Infrastructures Become Commoditized?

⚪️🟠️ Edge#120: How to Leverage Open-Source Data Labeling for your Business

⚒ Edge#119: Data Labeling – Build vs. Buy vs. Customize

🗄 ML to Power a New Generation of Databases 

🔴 Cutting-Edge, No-Code Data Science: Powerful, Flexible, Nimble and Explainable AI Automation*

👯‍♀️ Edge#118: DeepMind Releases Two New Super Models to Handle Any Type of Dataset

☝️ The #1 Easy Way You Win Machine Learning*

👁 Edge#117: Transformers and Computer Vision 

😱 Distributed ML Training is the Problem Everyone is Going to Have

📝 Guest post: Introducing Low-Latency Streaming Pipelines for Real-Time ML

💪🏻 Edge#116: AI2-Thor is an Open-Source Framework for Embodied AI Research

🎙 Greg Finak/CTO of Ozette: using ML to extract intelligence from the immune system

🤩Edge#115: OpenAI GPT-3, OpenAI API for GPT-3; and how to make transformers more efficient

💻 OpenAI Codex, a Programming Challenge and one of the Most Impressive AI Demos Ever Created

👏 Edge#114: AI2’s Longformer is a Transformer Model for Long Documents

🍢 Edge#113: Google BERT; TAPAS that Queries Tabular Datasets; and AutoNLP

👾Transformers are Getting More Ambitious

🧠 Edge#112: How DeepMind’s Compressive Transformer Improves Long-Term Memory in Transformer Architectures

🎙 Emad Elwany/CTO at Lexion on using deep learning to reimagine contract management systems

🤗 Edge#111: The concept of Attention; Google Switch Transformer; and Hugging Face

🔱 Triton: GPU Programming for Deep Neural Networks

🔷🟥 Edge#110: How The Pachyderm Platform Delivers the Data Foundation for Machine Learning

🤖 Edge#109: What are Transformers?

🛠 Introducing the Real World ML Section

⚪️⚫️ Edge#108: How to Improve Model Accuracy with Crowdsourced Data Labeling – Real World Use Cases

🎙Albert Azout/Level Ventures on the state of AI market and the areas to pay attention to

🗂 Edge#107: Crowdsourced vs. Automated vs. Hybrid Data Labeling

🧬 The AlphaFold Race is On!

Edge#106: 💥 The “What’s New in AI” recap #2️⃣

Edge#105: 💥 The “What’s New in AI” recap #1️⃣

🔮 The Future of Deep Learning According to Three Legends 🧙🏻‍♂️ 🧙🏻‍♂️ 🧙🏻‍♂️

🏷 Data Labeling for ML

🔹🔸Edge#104: AllenNLP Makes Cutting-Edge NLP Models Look Easy

🎙 Joe Doliner/CEO of Pachyderm on developing a canonical ML stack and main challenges for mainstream developer adoption

🏆📚 Reinforcement Learning Recap

👩🏻‍✈️The GitHub CoPilot Milestone

💥 Edge#102: DeepMind Redefines One of the Most Important Algorithms in ML as a Game

🤔 Edge#101: The Exploration-Exploitation Dilemma in RL; Bayesian Exploration; and TF-Agents for TensorFlow

⛲️ The Importance of Open-Source ML Datasets

📌 Join us July 14th at MLCon – The AI & ML Developer Conference

Edge#💯: Will NetHack Challenge Become One of the Toughest RL Benchmarks in History?

🎙 Piero Molino on creating Ludwig and the Importance of Low-Code ML

🌐 Edge#99: What are Trust Region and Proximal Policy Optimization; PPO to master Dota2; and RLlib

🔥 A New Release of PyTorch is Here

🗿 Edge#98: OpenAI Built RL Agents that Mastered Montezuma’s Revenge by Going Backwards

⚽️ Edge#97: Policy Optimization in RL; how to master football with RL; and DeepMind’s bsuite

📲 Why Mobile Deep Learning is Tougher Than You Think

🔹◽️ Edge#96: Molecula is a Feature Extraction and Storage Platform Designed for Enterprise ML Workloads

🎙Oren Etzioni/CEO of Allen Institute for AI (AI2) on advancing AI research for the common good

🚩 Edge#95: What is DQN; how DeepMind masters Quake III; and OpenAI Gym as a must-have tool

🖼 AI Incumbents and Their Favorite ML Frameworks

🔸◽️Edge#94: Determined AI Tackles the Monster Challenge of Distributed Training

🕵🏻‍♀️ Edge#93: Q-Learning, Google SEED RL architecture, and Facebook’s ReAgent

🔥 PyTorch is Getting Serious About the Enterprise

🤹‍♂️ Edge#92: Cogito Brings Human-in-the-Loop Data Annotation to Enterprises

🎙 Hyun Kim/CEO of Superb AI on true data labeling automation

🕹 Edge#91: Model-Free RL; Atari57 that outperformed humans; and DeepMind's OpenSpiel, an RL framework for games

🌊 Google’s New Wave of Machine Learning Capabilities

💥 Edge#90: OpenAI Safety Gym is an Environment to Improve Safety in Reinforcement Learning Models

⛑👑 Edge#89: Feature Learning – What Makes Some Features Better Than Others?

🧠 What is TheSequence? The guide to our content universe

🤓😡👹 Open Source ML from Large Tech Incumbents: The Good, The Bad and the Ugly

🎙H.O. Maycotte/CEO of Molecula on shifting from “data as fuel” to “features as fuel”

🗣🤖 Edge#88: How IBM Uses Weak Supervision to Bootstrap Chatbots and How You Can Do It Too With Snorkel Flow

🧘‍♀️📚 Edge#87: Model-Based Reinforcement Learning, Google Dreamer, and Uber Fiber

🤜🤛 AI/ML startups align to build a canonical stack and compete with the incumbents

🧠🤖 Edge#86: How DeepMind Prevents RL Agents from Getting "Too Clever"

🏆📚 Edge#85: Reinforcement Learning – a very popular and yet misunderstood deep learning discipline

👀👀 Self-Supervised Learning is Making Inroads 

⚪️⚪️🔵 Edge#84: Snorkel Flow – One of the Most Comprehensive ML Platforms on the Market

🎙 William Falcon: "We did our job right if the term MLOps disappears"

🥃 👯 Edge#83: One-Shot Learning, Siamese Networks, and ONNX standard

🛴🚲 The Race to Improve Reinforcement Learning

⚪️🔵 Edge#82: Fiddler is Bringing ML Monitoring to Enterprises

🎙 François Chollet: Keras, TensorFlow and New Ways to Measure Machine Intelligence

🥛 Edge#81: Zero-Shot Learning and How It Can Be Used

❇️ The Nvidia AI Network Effect Goes Beyond Hardware

📌 Event on April 21-22: apply() – the ML Data Engineering Conference

💻⚛️* Edge#80: Some Things You Should Know about TensorFlow Quantum

🥃🥃 Edge#79: Few-Shot Learning, Prototypical networks, and TorchMeta

Ⓜ️🌀 The MLOps Space is Getting Crowded and Confusing

💪🏻 AutoML recap

🥗🥩 Edge#78: Feast is an Open Source, Lightweight Feature Store You Should Know About

🎙 Adam Wenchel/CEO of Arthur AI on ML explainability, interpretability, and fairness

🏗🏪 Edge#77: How Feature Stores Were Started

☝️⚖️ ML Fairness is Everybody’s Problem

🔎 👯‍♂️ Edge#76: Google’s Model Search is a New, Open-Source Framework for Finding Optimal ML Models

🥃📚 Edge#75: N-Shot Learning; how OpenAI uses it; and the learn2learn Meta-Learning Framework

🔆🔅 Go Big First, Then Compress

🎙 Iskandar Sitdikov/Provectus: Healthcare has it all: NLP, computer vision, recommendations, and a whole lot more

👯‍♀️👯 Edge#74: How Uber, Google, DeepMind and Microsoft Train Models at Scale

📌 Event: AI & Automation for Document Processing in Healthcare

🐍🦎 Edge#73: Meta-Learning and AutoML, OpenAI’s Reptile Model, and the Auto-Keras Framework

👉🕳👈 Closing the Gap Between Deep Learning Software and Hardware

🤩💥 'What's New in AI' Recap 2

🔴⚪️ Edge#72: Tecton – The Enterprise Feature Store Built by the Creators of Uber's ML Platform

🔎◼️ Edge#71: What is Differentiable Architecture Search?

🚜 🛵 Using Transformers in Mainstream Deep Learning Applications

🤩💥 'What's New in AI' Recap

💨 Edge#70: Typed Features to Accelerate ML Experimentation at Scale

🎙 Manu Sharma/CEO of Labelbox about the future of data labeling automation

🔎🔍 Edge#69: Search Strategies in Neural Architecture Search

♨️ Making Sense of Microsoft’s Recent Machine Learning Announcements 

📕📖📗 Natural Language Understanding Recap

🔵🔴 Edge#68: Run:AI Decouples Machine Learning Pipelines from the Underlying Hardware

🔪 Edge#67: Dissecting Neural Architecture Search in the context of AutoML

👩🏽‍🔧👨🏻‍🔧 Continuous Data Improvements and ML Performance

🃏😎 Edge#66: Pluribus – superhuman AI for multiplayer poker

🎙 Mike Del Balso/CEO of Tecton: There is too much depth in this space for feature stores to be just a “feature”

◻️◼️ Edge#65: Bayesian hyperparameter optimization; how Amazon uses AutoML for the entire lifecycle of ML models; and Azure AutoML

🏎🏎 The AI Chip Race is Getting More Specialized

🤓😎 Emerging ML Methods Recap

🛠 Edge#64: The Architectures Powering ML at Google, Facebook, Uber, LinkedIn

🔳🔳 Edge#63: Blackbox Hyperparameter Optimization; AutoML to train Waymo’s self-driving cars; and H2O AutoML

🙈🙉🙊 GPT-3 and Large Language Models can Get Out of Control

🔐🔏 Security and Privacy Recap

👩‍💻 Edge#62: Data Discovery and Management Architectures at LinkedIn, Uber, Lyft, Airbnb and Netflix

🎙 Jan Beitner, Creator of PyTorch Forecasting

🏋️‍♂️🤸‍♀️Edge#61: Understanding AutoML and its Different Disciplines

🏷 🔥Training Data Labeling is One of the Hottest Markets in Machine Learning

⏳⌛️Time-Series Forecasting Wrap-Up

🕹🤖 Edge#60: Google’s Switch Transformer

☝️⏰ Edge#59: NeuralProphet, the final chapter on time series

💸🤷🏽‍♂️ Running AI Compute Infrastructures Without Breaking the Bank

🎙 Jim Dowling/CEO Logical Clocks: The future of feature stores

👄👁 Edge#58: OpenAI’s CLIP and DALL·E Draw Inspiration from GPT-3 to Connect Language and Computer Vision

🤖 🕕 Edge#57: Transformer Architectures for Time Series

🤓☝️The Need for Open-Source Datasets and Benchmarks

⚪️ ⚫️ Edge#56: DeepMind’s MuZero that Mastered Go, Chess, Shogi and Atari Without Knowing the Rules

⏳ Edge#55: DeepAR, multi-dimensional time-series forecasting, and Sktime

🎈 Are Feature Stores the Next Bubble in AI?

🎙 Krishna Gade/CEO Fiddler AI: Challenges with model explainability

♣️ Edge#54: Facebook ReBeL That Can Master Poker

💬 Edge#53: What are Facebook’s Prophet and AR-Net, and how PyTorch Forecasting enables deep learning models for time-series forecasting

🚜 Transformers Continue Setting Records

🤖 Edge#52: Google Meena That Can Chat About Anything

⏱ Edge#51: ARIMA, GluonTS, and AutoML for Time Series Forecasting

1️⃣2️⃣3️⃣ Three Data Science Trends that are Hard to Live Without in 2021