A model-agnostic method for generating high-precision rule-based explanations for black-box classifier predictions.
Anchor is a Python library for generating high-precision, rule-based explanations for predictions made by black-box machine learning classifiers. It helps users understand why a model made a specific decision by identifying a minimal set of input conditions that 'anchor' the prediction: when those conditions hold, changes to the remaining features almost never alter the outcome. The method is model-agnostic, meaning it works with any classifier that provides predictions, from deep neural networks to ensemble models.
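A minimal sketch of what this looks like for a binary sentiment classifier, adapted from the project's README; `predict_fn` is a placeholder for any function mapping a list of strings to predicted class indices, and exact signatures may differ between library versions:

```python
import spacy
from anchor import anchor_text

nlp = spacy.load('en_core_web_sm')  # any spaCy English model works

# predict_fn is a placeholder: a callable taking a list of raw strings and
# returning an array of predicted class indices for the wrapped model.
explainer = anchor_text.AnchorText(nlp, ['negative', 'positive'],
                                   use_unk_distribution=True)
exp = explainer.explain_instance('This is a good book.', predict_fn,
                                 threshold=0.95)

# The anchor is a conjunction of words: whenever they appear together, the
# model keeps predicting the same class with estimated precision >= 0.95.
print('Anchor: %s' % ' AND '.join(exp.names()))
print('Precision: %.2f' % exp.precision())
```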
Data scientists, machine learning engineers, and researchers who need to interpret, debug, or audit black-box models, particularly in domains that require transparency, such as healthcare, finance, and law.
Developers choose Anchor for its precise, interpretable local explanations that require no access to the model's internals, its support for text and tabular data, and its foundation in peer-reviewed research (Ribeiro, Singh & Guestrin, AAAI 2018).
Code for "High-Precision Model-Agnostic Explanations" paper
Works with any black-box classifier that exposes a prediction function, regardless of internal architecture; no access to gradients, weights, or other internals is required, as shown in the sketch below.
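Concretely, the entire interface between Anchor and the model can be a single function; a hypothetical sketch wrapping an already-fitted scikit-learn text pipeline (`vectorizer` and `model` are assumed to exist):

```python
# Hypothetical wrapper: Anchor only ever calls this function, so the model
# behind it can be an sklearn estimator, a neural network, or a remote API.
# 'vectorizer' and 'model' are assumed to be fitted beforehand.
def predict_fn(texts):
    return model.predict(vectorizer.transform(texts))
```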
Produces if-then rules (anchors) that guarantee, with high probability, that perturbed inputs satisfying the rule receive the same prediction, giving explanations a quantifiable notion of faithfulness, as formalized in the research paper.
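In the paper's notation, a rule A is an anchor for an instance x under perturbation distribution D if

    prec(A) = E_{D(z|A)}[1{f(x) = f(z)}] ≥ τ,

and since precision can only be estimated from samples, the algorithm returns rules for which P(prec(A) ≥ τ) ≥ 1 − δ, preferring the anchor with the largest coverage.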
Currently supports explaining predictions for text classifiers and for tabular data (NumPy arrays), covering the most common classification use cases, as stated in the README.
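For tabular data the flow is analogous; a sketch adapted from the README's tabular example (`class_names`, `feature_names`, `train_data`, `categorical_names`, `test_row`, and `clf` are placeholders, and constructor arguments have shifted between versions):

```python
from anchor import anchor_tabular

# Placeholders: train_data is a 2-D NumPy array of training rows; clf is any
# fitted classifier exposing a predict() method over such rows.
explainer = anchor_tabular.AnchorTabularExplainer(
    class_names,        # e.g. ['<=50K', '>50K']
    feature_names,      # one name per column
    train_data,         # used to sample realistic perturbations
    categorical_names)  # dict mapping column index -> list of category labels
exp = explainer.explain_instance(test_row, clf.predict, threshold=0.95)

print('Anchor: %s' % ' AND '.join(exp.names()))
print('Precision: %.2f, Coverage: %.2f' % (exp.precision(), exp.coverage()))
```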
Available on PyPI via 'pip install anchor-exp', with clear instructions for additional dependencies such as spaCy for text explanations, simplifying setup.
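The basic setup, per the README (the exact spaCy model name may differ by version):

```
pip install anchor-exp
# needed only for text explanations:
python -m spacy download en_core_web_sm
```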
The README notes that image support is contingent on community interest, so Anchor is not readily usable for computer-vision tasks without custom implementation.
Text explanations require installing spaCy (and optionally BERT), which adds complexity and potential setup hurdles, as noted in the installation steps.
Perturbation-based explanation generation can be slow for large inputs or expensive models, since each candidate rule is evaluated by querying the model on many perturbed samples; this limits the method in time-sensitive applications.