Interpretability is one of the most important areas of the new generation of machine learning (ML) platforms. Within the space of interpretability, nothing requires more attention than handling uncertainty in ML models. Uncertainty is one of the top factors that degrade the performance of ML models, and one that is particularly difficult to model. After all, how can you effectively plan for things you haven’t seen before? Dealing with uncertainty in the real world forces us to think in terms of probabilities. Can we use probabilistic programming languages (PPLs) to improve interpretability in ML models? Meta (Facebook) thinks we can and should.
This week, Meta (Facebook) AI Research (FAIR) released Bean Machine, a PPL optimized for estimating uncertainty in the outputs of ML models. Essentially, Bean Machine can take into account the impact that random events have on a model’s predictions. More specifically, Bean Machine uses generative models to automatically learn the unobserved properties of ML models. Not surprisingly, the FAIR team built the initial implementation of Bean Machine on PyTorch, which guarantees native interoperability with the rest of that ecosystem. PPLs have always been an important component of the ML ecosystem, but using them to improve interpretability in ML models is a super clever idea. We hope to see FAIR iterate on the Bean Machine release in the near future.
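Under the hood, a PPL like Bean Machine returns samples from a posterior distribution rather than a single point estimate, which is what makes the uncertainty in a prediction visible. As a minimal, library-free sketch of that idea (this is not Bean Machine’s API; the model and data below are invented for illustration), here is a tiny Metropolis-Hastings sampler estimating the posterior over an unknown mean:

```python
import math
import random

random.seed(0)

# Observed data: noisy measurements of an unknown quantity mu.
observations = [2.3, 1.9, 2.8, 2.1, 2.6]

def log_prior(mu):
    # Prior belief: mu ~ Normal(0, 10), up to an additive constant.
    return -0.5 * (mu / 10.0) ** 2

def log_likelihood(mu):
    # Each observation ~ Normal(mu, 1), up to an additive constant.
    return sum(-0.5 * (x - mu) ** 2 for x in observations)

def log_posterior(mu):
    return log_prior(mu) + log_likelihood(mu)

def metropolis_hastings(n_samples=5000, step=0.5):
    mu = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = mu + random.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(mu)).
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
            mu = proposal
        samples.append(mu)
    return samples

samples = metropolis_hastings()
burned = samples[1000:]  # discard burn-in
mean = sum(burned) / len(burned)
print(f"posterior mean ~ {mean:.2f}")
```

Instead of one number, you get a cloud of samples whose spread quantifies how uncertain the estimate is; a PPL automates exactly this kind of inference for arbitrary generative models.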
🔺🔻 TheSequence Scope is our Sunday free digest. To receive high-quality educational content about the most relevant concepts, research papers and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#151: we discuss Model Packaging; explore Typed Features at LinkedIn; overview ONNX, a key framework for ML interoperability.
Edge#152: we explain how DeepMind and Waymo train self-driving car models.
Now, let’s review the most important developments in the AI industry this week.
🔎 ML Research
Interpretability and Time Series Forecasting
Google Research published a paper proposing a transformer-based technique for time series analysis which is both highly accurate and interpretable →read more on Google Research blog
Florence 1.0
Microsoft published a paper detailing Florence 1.0, a state-of-the-art model for vision and vision-language tasks →read more on Microsoft Research blog
Dataset Distillation
Google Research published a paper discussing a dataset distillation technique used to improve the efficiency of ML model training →read more on Google Research blog
Tailored Text Summarization
Salesforce Research published a paper detailing a text-summarization method that can be customized based on a user’s preferences →read more on Salesforce Research blog
🤖 Cool AI Tech Releases
Bean Machine
Meta (Facebook) AI Research (FAIR) released Bean Machine, a new probabilistic programming language optimized for fast experimentation →read more on FAIR blog
Fine Tuning GPT-3
OpenAI expanded its API to enable developers to fine-tune GPT-3 models →read more on OpenAI blog
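At the time of this release, OpenAI’s fine-tuning workflow expected a JSONL file of prompt/completion pairs. A minimal sketch of preparing such a file follows (the example rows are invented, and the CLI command in the comment reflects the tooling at the time of the announcement; check OpenAI’s current documentation for the exact format):

```python
import json

# Hypothetical training examples; the documented format was one JSON
# object per line with "prompt" and "completion" keys.
examples = [
    {"prompt": "Q: What is ONNX?\n\nA:",
     "completion": " An open format for ML model interoperability."},
    {"prompt": "Q: What is a PPL?\n\nA:",
     "completion": " A probabilistic programming language."},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file was then used to launch a fine-tune via the OpenAI CLI, e.g.:
#   openai api fine_tunes.create -t train.jsonl -m curie
```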
ONNX in Mobile
Microsoft released a version of the ONNX Runtime for its Xamarin mobile platform, which allows developers to easily add ML models to mobile applications →read more on Microsoft blog
🔊 Shout out
Check out the podcast called How AI Happens. It features conversations with experts and practitioners at the cutting edge of Artificial Intelligence and includes guests from Dell, Google, Facebook AI, RedPoint Global, Walmart, Microsoft and many more.
💸 Money in AI