How did you know? Explaining Black Box Model Predictions in Python

Description

As algorithms grow more complex (e.g. ensemble models such as XGBoost and Random Forests, or neural networks), it becomes harder to explain the predictions they make. These 'black box' models may produce more accurate results, but they can be hard to operationalize in the real world because it becomes increasingly difficult to explain to business decision makers how a model arrived at a prediction. In certain cases, such as credit scoring, model interpretability is crucial, particularly for regulatory compliance. This talk will highlight Python tools and libraries such as LIME, ELI5 and Skater that allow data scientists to finally explain how their models came up with their predictions.
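As a taste of the approach the talk covers, here is a minimal sketch of a local explanation with LIME, assuming scikit-learn and the lime package are installed. The dataset, model, and parameter choices are illustrative, not taken from the talk itself.

# A minimal sketch: explain one prediction of a "black box" ensemble
# model with LIME. The breast cancer dataset and Random Forest are
# illustrative choices, not the talk's own example.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a black-box ensemble model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Build an explainer from the training data and explain a single prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

# Each pair is a feature condition and its weight toward the prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")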
