Krishna Gade, CEO of Fiddler AI: Challenges with model explainability
TheSequence interviews ML practitioners to immerse you in the real world of machine learning and artificial intelligence
There is nothing more inspiring than learning from practitioners. Getting to know the experience gained by researchers, engineers, and entrepreneurs doing real ML work can become a great source of insights and inspiration. We'd like to introduce to you TheSequence Chat: the interviews that bring you closer to real ML practitioners. Please share these interviews if you find them enriching. No subscription is needed.
Quick bio / Krishna Gade
Tell us a bit about yourself: your background, your current role, and how you got started in machine learning.
Krishna Gade (KG): I am the Founder/CEO of Fiddler, an Explainable AI (XAI) platform. Prior to founding Fiddler, I led engineering teams at Facebook, Pinterest, Twitter, and Microsoft.
For most of the last two decades, I have built scalable platforms at these internet companies to convert data into intelligent insights using big data, machine learning, and deep learning technologies. At Facebook, I led the News Feed Ranking Platform, which created the infrastructure for ranking content in News Feed and powered use cases like Facebook Stories and recommendations like People You May Know, Groups You Should Join, etc. My team built Facebook's explainability features like "Why am I seeing this?", which helped bring much-needed algorithmic transparency, and thereby accountability, to the News Feed for both internal and external users.
Research
You are currently focused on one of the toughest areas of modern machine learning. Can you describe the challenges with model explainability and monitoring?
KG: Machine learning is being increasingly applied by enterprises across a variety of use cases, but AI is not the easiest technology to build and operationalize. However, the past few years have seen the emergence of GUI-based ML tools and open-source libraries to help enterprises that are less inclined to build in-house successfully train and deploy ML models. Training highly accurate ML models has become relatively fast and cheap. But deploying, monitoring, governing, and ensuring responsible use of those models over time has become the main challenge.
Data science and engineering teams have come to realize they must monitor and manage their models to ensure risk-free and reliable business outcomes. With the rise of higher-performing black-box models, the need to explain models has become both more necessary and more challenging. ML models are essentially stochastic entities, and their performance can degrade over time due to changes in input data distribution, seasonality, or unexpected events like COVID-19. Therefore, models require continuous monitoring to ensure their fidelity while in production. An explainable ML monitoring system extends traditional monitoring to provide deep model insights with actionable steps.
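As one concrete illustration of what continuous monitoring of input drift can mean in practice, here is a minimal sketch of a population stability index (PSI) check on a single numeric feature. The feature, the synthetic data, and the 0.2 rule of thumb are illustrative assumptions, not Fiddler's specific method.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training (expected) and a
    production (actual) sample of one numeric feature."""
    # Bin edges come from the training distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions, adding a small epsilon to avoid log(0).
    eps = 1e-6
    exp_pct = exp_counts / exp_counts.sum() + eps
    act_pct = act_counts / act_counts.sum() + eps
    return np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct))

# Illustrative data: production incomes shift upward relative to training.
rng = np.random.default_rng(0)
train_income = rng.normal(60_000, 15_000, 10_000)
prod_income = rng.normal(70_000, 18_000, 10_000)

score = psi(train_income, prod_income)
print(f"PSI = {score:.3f}")  # rough rule of thumb: > 0.2 suggests significant drift
```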
A classic challenge in machine learning interpretability is the difference between simple models like decision trees and complex neural networks. How does Fiddler manage to provide a single platform for explainability across such different architectures?
KG: Great question!
At Fiddler, we chose a particular family of techniques that rely on attribution-based methods to explain ML models. These algorithms attribute a given model's prediction to its input features, relative to a certain baseline input. For example, we can attribute a lending model's prediction to features like FICO score, income, and previous debt; we can also attribute an object recognition network's prediction to its pixels, or a text sentiment network's prediction to individual words. This is a reductive formulation of answering the question "why this prediction", but it is surprisingly effective :)
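One well-known attribution method of this baseline-relative kind is Integrated Gradients. The following minimal PyTorch sketch illustrates the idea only; the toy linear "lending model" and the zero baseline are stand-ins for a real model and a realistic reference input, and this is not Fiddler's implementation.

```python
import torch

def integrated_gradients(model, x, baseline, steps=50):
    """Attribute model(x) to input features relative to a baseline input by
    accumulating gradients along the straight-line path baseline -> x."""
    # Points interpolated between the baseline and the actual input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)      # shape: (steps, n_features)
    path.requires_grad_(True)
    model(path).sum().backward()
    avg_grads = path.grad.mean(dim=0)              # average gradient along the path
    return (x - baseline) * avg_grads              # per-feature attributions

# Toy "lending model": a linear layer over three pre-scaled features.
model = torch.nn.Linear(3, 1)
x = torch.tensor([[0.7, 0.2, 0.9]])               # e.g. FICO, income, previous debt
baseline = torch.zeros_like(x)
print(integrated_gradients(model, x, baseline))
```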
Oftentimes, the features used in an ML model are not independent, so the task of attributing the impact of each feature on the model's final prediction is non-trivial. Therefore, we leverage the well-established Shapley value concept from cooperative game theory, proposed by Lloyd Shapley in 1953. We've found that our novel and scalable implementation (here is a research paper about it) performs well across different model types and scales to large production datasets.
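The Shapley formulation treats features as players in a cooperative game and averages each feature's marginal contribution over all coalitions. Fiddler's scalable implementation is described in the paper above; as a self-contained illustration of the underlying concept only, here is a brute-force exact computation that is feasible for a handful of features. The toy scoring function and the zero baseline are assumptions for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a single prediction, practical only for a
    few features. Features outside a coalition are set to the baseline."""
    n = len(x)
    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                values[i] += weight * (predict(with_i) - predict(without_i))
    return values

# Toy credit-risk score over (FICO, income, previous debt), all pre-scaled.
def predict(features):
    fico, income, debt = features
    return 0.5 * fico + 0.3 * income - 0.4 * debt

print(shapley_values(predict, x=[0.7, 0.6, 0.8], baseline=[0.0, 0.0, 0.0]))
```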
TheSequence Edge is a summary of groundbreaking ML research papers, engaging explanations of ML concepts, and exploration of new ML frameworks and platforms. Stay up to date with the news, trends, and tech developments in the AI field. Practical. No hype.
Is machine learning explainability a feature or a product? Can explainability platforms remain standalone companies, or will they eventually be merged into the larger machine learning stacks from companies like Microsoft, Google, or AWS?
KG: Sure. As teams build more and more sophisticated AI applications, they recognize that a single monolithic AI platform stops working for them after a point. They are looking for a machine learning workflow built from best-of-breed tools.
Image credit: a16z.com
We've built a pluggable platform that allows our customers to integrate Fiddler with their favorite frameworks and model types, like Scikit-Learn, TensorFlow, PyTorch, MLflow, Spark, etc., and seamlessly Monitor, Explain, and Analyze their AI in production.
We've seen this succeed in the traditional software development lifecycle, where developers use a myriad of tools (e.g., the LAMP stack) to build, deploy, and monitor their applications. The machine learning development lifecycle is going through the same evolutionary process, where the AI developer's toolkit is literally being built right now by a handful of exciting startups like Fiddler! Given that businesses, consumers, and regulators are calling for more transparency and accountability in AI solutions, XAI platforms like Fiddler can provide more trustworthy, transparent, and accountable AI by plugging into every stage of the AI lifecycle.
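To illustrate the "pluggable" idea in the abstract, a best-of-breed tool typically wraps models from different frameworks behind one common prediction interface so the monitoring and explanation code does not care where a model came from. This is a hypothetical sketch of that pattern, not Fiddler's actual client API; all names here are invented for the example.

```python
from typing import Any, Callable, Dict

class ModelAdapter:
    """Hypothetical adapter: exposes any framework's model through a single
    predict() signature so downstream tooling stays framework-agnostic."""

    def __init__(self, predict_fn: Callable[[Dict[str, Any]], float], framework: str):
        self.predict_fn = predict_fn
        self.framework = framework

    def predict(self, row: Dict[str, Any]) -> float:
        return self.predict_fn(row)

# Usage: the same monitoring/explanation code can score sklearn, PyTorch,
# or hand-rolled models registered under a common interface.
registry = {
    "credit_risk_v3": ModelAdapter(lambda row: 0.42, framework="scikit-learn"),  # stub model
}
print(registry["credit_risk_v3"].predict({"fico": 700, "income": 60_000}))
```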
What are some of your most ambitious ideas about machine learning explainability? Can we get to the point of using machine learning to explain the behavior of other machine learning models?
KG: Since I come from a software engineering background, I will use an analogy from there to illustrate my point. When we root-cause software failures in production, teams use a "Five Whys" method to perform the diagnosis. One can extend that to model explainability. Say, for example, a credit risk ML model denies a loan application. We can ask a series of questions like the ones below to get at the truth.
Why did the credit risk model deny the loan application?
Why was the credit risk score so high for this loan?
What are the most influential factors that affected the credit risk score?
What are other rejected loans that are similar to this loan application?
Would the model approve the loan if the user requested a $1,000 smaller amount?
What can the loan applicant do to improve their case for getting approved in the future?
...
As we answer one question, ten more come to mind that we want answered, so fully explaining a complex ML model becomes a holy grail. The goal of any XAI tool is to get the user as close to the truth as possible.
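To make one of the questions above concrete, say the counterfactual about the requested amount, here is a toy sketch that decrements the loan amount until a stand-in approval model flips its decision. The model, the 0.5 threshold, the field names, and the search strategy are illustrative assumptions.

```python
def counterfactual_loan_amount(predict, application, step=1000, max_steps=20):
    """Search, in fixed decrements, for a loan amount at which a hypothetical
    approval model flips from 'deny' to 'approve'."""
    candidate = dict(application)
    for _ in range(max_steps):
        candidate["loan_amount"] -= step
        if predict(candidate) >= 0.5:        # model outputs P(approve)
            return candidate["loan_amount"]
    return None                              # no approval within the search range

# Toy stand-in for a credit model: approval probability falls with amount, rises with income.
def predict(app):
    return min(1.0, max(0.0, 0.9 - app["loan_amount"] / 50_000 + app["income"] / 200_000))

application = {"loan_amount": 40_000, "income": 60_000}   # denied as submitted
print(counterfactual_loan_amount(predict, application))    # amount at which it would flip
```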
We recognize that the XAI space is evolving quickly and lots of new algorithms are being invented, so we've taken the pragmatic approach of focusing our offering on tools and high-quality implementations of methods that have largely proven their usefulness in deployed AI applications. As the space evolves, we are committed to improving our existing offering as well as adding new methods and techniques to meet our users' needs.
The ethos of the machine learning community is tightly aligned with the spirit of open source. Why is Fiddler not open source in a market with several open-source machine learning interpretability stacks?
KG: At Fiddler, we've productized several open-source XAI algorithms, such as SHAP and Integrated Gradients, along with our proprietary techniques. We've published papers on our proprietary techniques and created video tutorials that deep-dive into how they work. We're committed to publishing our research because we believe in creating transparency around how our algorithms work for our users: data scientists, ML engineers, analysts, and regulators.
What OSS stacks like Microsoft Interpret or the IBM XAI toolkit provide is a collection of libraries; developers still have to do all the heavy lifting to operationalize them. The value Fiddler provides to our users is that all of these algorithms (including some of our proprietary ones) are packaged in a scalable, operationalized XAI platform that helps them monitor and explain complex models and large datasets.
Miscellaneous: a set of rapid-fire questions
TensorFlow or PyTorch?
KG: PyTorch if I am mostly prototyping, but TensorFlow for production models. By the way, check out this talk by Facebook and Fiddler on partnering to explain PyTorch models.
Favorite math paradox?
KG: Simpson's paradox. It is fascinating because it shows that data can both guide and deceive us at the same time :)
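For readers who haven't met it, a tiny worked example shows how a treatment can look better within every subgroup yet worse in aggregate. The numbers below are illustrative, in the spirit of the classic kidney-stone study.

```python
# Simpson's paradox in miniature: the treatment wins inside each subgroup
# but loses once the subgroups are pooled together.
groups = {
    #            treated: (successes, total)   control: (successes, total)
    "mild":    {"treated": (81, 87),   "control": (234, 270)},
    "severe":  {"treated": (192, 263), "control": (55, 80)},
}

def rate(successes, total):
    return successes / total

for name, g in groups.items():
    print(name,
          f"treated={rate(*g['treated']):.2f}",
          f"control={rate(*g['control']):.2f}")

# Aggregate over both groups: the direction of the comparison flips.
treated = [sum(x) for x in zip(*(g["treated"] for g in groups.values()))]
control = [sum(x) for x in zip(*(g["control"] for g in groups.values()))]
print("overall", f"treated={rate(*treated):.2f}", f"control={rate(*control):.2f}")
```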
Any book you would recommend to aspiring data scientists?
KG: Many popular books have come into the field over time. For someone new to the field who wants a rigorous text, I would recommend Pattern Classification by Duda, Hart, and Stork. I am also a big fan of "The Book of Why" by Judea Pearl; the first chapter in particular is super insightful.
Does P equal NP?
KG: Hmm... If P equaled NP, every NP problem would contain a hidden shortcut, allowing computers to quickly find perfect solutions. But if P does not equal NP, then no such shortcuts exist, and computers' problem-solving powers will remain fundamentally and permanently limited. We can make a philosophical argument that P just can't equal NP: if it did, it would mean that finding the solution to a problem is as easy as verifying that the solution is correct, and that factoring large numbers is easy.
We should be happy that P does not equal NP, because if it did, all internet security would be broken and our identities and credit histories would have been stolen by now :)
TheSequence's goal is to make you smarter about artificial intelligence. 5 minutes of your time, 3 times a week, and you will steadily become knowledgeable about everything happening in the AI space. Subscribe to receive it straight into your inbox. Support the project and our mission to simplify AI education, one newsletter at a time. Thank you.