A Python library for explaining machine learning models using black-box, white-box, local, and global interpretation methods.
Alibi is a Python library dedicated to explaining machine learning models. It implements a wide range of algorithms for model inspection and interpretation, helping data scientists and ML engineers understand model predictions, debug performance, and support regulatory compliance. The library covers both classification and regression models across tabular, text, and image data.
Machine learning practitioners, data scientists, and researchers who need to interpret, debug, or validate their models, especially those working on projects requiring transparency, fairness, or regulatory compliance.
Developers choose Alibi for its comprehensive, production-ready collection of state-of-the-art explanation methods, its clean API inspired by scikit-learn, and its focus on high-quality implementations that work across diverse data types and model architectures.
Algorithms for explaining machine learning models
Implements over 15 state-of-the-art methods including SHAP, Anchors, and Counterfactuals, covering black-box and white-box scenarios as detailed in the supported methods table.
Handles tabular, text, and image data with proper categorical feature handling, demonstrated in examples like Anchor explanations for images and Integrated Gradients for text.
Features a scikit-learn-inspired API with fit/explain steps, making it intuitive for practitioners familiar with machine learning workflows in Python.
Offers optional Ray integration for distributed computation of explanations, allowing scaling for large datasets or complex models.
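The scikit-learn-inspired fit/explain workflow mentioned above can be sketched with a toy stand-in. The class and field names below are illustrative only, not Alibi's actual API (its real explainers, such as AnchorTabular, live in alibi.explainers); the sketch just shows the black-box pattern: construct with a predict function, fit on reference data, then call explain on an instance to get an explanation object.

```python
# Minimal sketch of the fit/explain pattern; Explanation and
# TabularExplainer are hypothetical stand-ins, not Alibi classes.

class Explanation:
    """Container for an explanation result, keyed by method-specific fields."""
    def __init__(self, data):
        self.data = data

class TabularExplainer:
    """Toy black-box explainer: fit() records per-feature means of reference
    data; explain() attributes a prediction to the feature that deviates most
    from its mean."""
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn   # black-box: only predictions are needed
        self.feature_means = None

    def fit(self, X):
        n = len(X)
        self.feature_means = [sum(row[i] for row in X) / n
                              for i in range(len(X[0]))]
        return self

    def explain(self, x):
        deviations = [abs(v - m) for v, m in zip(x, self.feature_means)]
        top = max(range(len(x)), key=lambda i: deviations[i])
        return Explanation({"prediction": self.predict_fn(x),
                            "top_feature": top})

# Usage: wrap any predict function, fit on reference data, explain an instance.
predict = lambda x: int(sum(x) > 3)
explainer = TabularExplainer(predict).fit([[1, 2], [3, 4], [0, 1]])
exp = explainer.explain([5, 2])
print(exp.data["top_feature"])  # → 0: feature 0 deviates most from its mean
```

The same shape (constructor, optional fit on reference data, explain returning a structured result) is what makes the library feel familiar to scikit-learn users.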
Some advanced methods like Integrated Gradients and Counterfactuals are primarily optimized for TensorFlow/Keras models, with limited native support for PyTorch or other frameworks, as noted in the method tables.
Explanation methods such as Kernel SHAP and Anchor explanations can be resource-intensive, potentially slowing down inference pipelines without distributed setups or careful optimization.
Requires separate installations for key features like SHAP support or Ray integration, adding steps and potential version conflicts in deployment environments, as highlighted in the installation section.
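The optional dependencies are installed as pip extras. The extras names below follow Alibi's install documentation at the time of writing; verify them against the version you pin, as extras can change between releases.

```shell
# Base library only
pip install alibi

# With SHAP support (KernelShap, TreeShap)
pip install alibi[shap]

# With Ray for distributed explanation computation
pip install alibi[ray]
```

Pinning the extras alongside the base version in a requirements file helps avoid the version conflicts noted above.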