🔂 Edge#217: ML Testing Series – Recap
Last week we finished our mini-series about ML testing, one of the most critical elements of the ML model lifecycle. Here is a full recap so you can catch up on the topics we covered. As the proverb (and many ML people) says: Repetition is the mother of learning ;)
The essence of ML testing is to execute explicit checks that validate the behavior of an ML model. This approach contrasts with testing in traditional software applications. In a web or mobile application, developers write the logic and supply the data, and tests verify that the system behaves as expected. The cycle is inverted in ML: a test starts with the expected behavior and the corresponding dataset, and the model's logic is the output.
Plenty of taxonomies can be used to organize ML testing techniques. A very general approach segments them into two main groups relative to the ML model lifecycle:
Pre-Train Tests: Designed to catch problems early, before a training run, helping optimize the training workflow.
Post-Train Tests: The most important type of test in ML, designed to validate the behavior of trained models.
Typically, both types of tests should be incorporated into an MLOps pipeline, and they should cover both code and data. Over the years, many ML testing techniques have been developed and widely covered in research. Examples include invariance tests, minimum functionality tests, directional tests and many others, sketched below.
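To make these categories concrete, here is a minimal, hypothetical sketch of what such tests can look like in a pytest suite. It assumes a scikit-learn-style sentiment classifier pipeline; `model`, `train_df`, and the example sentences are placeholders for illustration, not from any of the issues linked below.

```python
# Hypothetical pytest-style checks for a binary sentiment classifier.
# Assumes: `train_df` is a pandas DataFrame with "text" and "label" columns,
# and `model` is a fitted sklearn Pipeline that accepts raw strings.

def test_pre_train_data(train_df):
    """Pre-train test: catch data problems before spending time on training."""
    assert train_df["label"].isin([0, 1]).all()      # labels are valid
    assert not train_df["text"].isna().any()         # no missing inputs
    assert 0.05 < train_df["label"].mean() < 0.95    # classes not wildly imbalanced


def test_invariance(model):
    """Post-train invariance test: a perturbation that should NOT change the prediction."""
    original = model.predict(["The hotel was great"])[0]
    perturbed = model.predict(["The hotel in Berlin was great"])[0]  # irrelevant detail added
    assert original == perturbed


def test_directional(model):
    """Post-train directional test: a perturbation that SHOULD move the prediction."""
    positive = model.predict_proba(["The food was good"])[0][1]
    negative = model.predict_proba(["The food was terrible"])[0][1]
    assert negative < positive                       # predicted positive score should drop


def test_minimum_functionality(model):
    """Post-train minimum functionality test: simple cases the model must get right."""
    assert model.predict(["I love this product"])[0] == 1
    assert model.predict(["I hate this product"])[0] == 0
```

Pre-train checks like the first one run before any training happens; the other three run against the trained artifact and can sit in the same CI pipeline that gates model promotion.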
→ In Edge#209 (read it without a subscription): we explore how Uber backtests time-series forecasting models at scale; and discuss Deepchecks, an ML testing platform you should know about.
→ In Edge#211: we discuss what to test in ML models; explain how Meta uses A/B testing to improve Facebook’s newsfeed algorithm; and explore Meta’s Ax, a framework for A/B testing in PyTorch.
→ In Edge#213: we overview the fundamental types of tests to be applied to trained models; explain how Meta uses Bayesian Optimization to conduct better experiments in ML models; and explore TensorFlow’s What-If Tool, one of the most commonly used testing tools in the machine learning space.
→ In Edge#215: we discuss Pre-Train Model Testing; overview the pillars of robust machine learning; and explore Great Expectations, one of the most complete data validation frameworks used in ML pipelines.
Next week we are going back to deep learning theory. Our next mini-series will cover a new generation of text-image models and their underlying techniques. Fascinating!