A Python library that explains predictions of any machine learning classifier using local interpretable model-agnostic explanations.
LIME is a Python library that provides local interpretable model-agnostic explanations for machine learning classifiers. It explains individual predictions by approximating the complex model with a simpler, interpretable linear model in the neighborhood of a specific instance, so users can understand model behavior without access to the model's internals.
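The core idea can be sketched in a few lines of NumPy: perturb the instance, query the black box, weight the samples by proximity, and fit a weighted linear surrogate. This is an illustrative sketch of the technique, not LIME's actual implementation; the black-box function, kernel width, and sample count are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: returns the probability of class 1.
# (Stands in for any opaque model; not part of the LIME library.)
def black_box(X):
    z = 3.0 * X[:, 0] - 2.0 * X[:, 1]
    return 1.0 / (1.0 + np.exp(-z))

instance = np.array([0.5, -0.2])

# 1. Perturb the instance with Gaussian noise.
X_pert = instance + rng.normal(scale=0.5, size=(500, 2))

# 2. Query the black box on the perturbed samples.
y = black_box(X_pert)

# 3. Weight samples by proximity to the original instance (RBF kernel,
#    arbitrary width 0.5 for this sketch).
dists = np.linalg.norm(X_pert - instance, axis=1)
weights = np.exp(-(dists ** 2) / (2 * 0.5 ** 2))

# 4. Fit a weighted linear surrogate around the instance.
A = np.column_stack([X_pert, np.ones(len(X_pert))])  # features + intercept
W = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)

# coef[0] and coef[1] are the local feature weights: here the surrogate
# recovers a positive weight for x0 and a negative weight for x1.
print(coef[:2])
```

The surrogate's coefficients are what LIME reports as an explanation: which features pushed this particular prediction up or down, locally.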
Data scientists, machine learning engineers, and researchers who need to interpret, debug, or validate black-box models in production or research settings, particularly those working with text, tabular, or image data.
Developers choose LIME because it works with any classifier, requires only prediction probabilities, and offers intuitive visual explanations, making it a versatile tool for enhancing transparency and trust in machine learning systems.
Lime: Explaining the predictions of any machine learning classifier
Works with any classifier that outputs probabilities, including scikit-learn, H2O, Keras, and PyTorch models, as shown in the extensive tutorials.
Generates explanations for text, tabular data, and images, with dedicated tutorials for each type, making it versatile across domains.
Produces HTML and matplotlib visualizations that highlight feature contributions, such as words in text or regions in images, evident in the screenshots.
Offers detailed notebooks and API documentation for various use cases, from basic usage to advanced frameworks, lowering the learning curve.
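Because the explainer needs only prediction probabilities, any model can be plugged in by wrapping it in a function that maps a batch of inputs to class probabilities. A minimal sketch of such an adapter (the `score` function is a hypothetical stand-in for a model that outputs raw scores, not part of LIME):

```python
import numpy as np

# Hypothetical model that outputs raw scores rather than probabilities.
def score(X):
    return X @ np.array([1.5, -0.8])

# Adapter: a LIME-style explainer only needs a callable of shape
# (n_samples, n_features) -> (n_samples, n_classes), rows summing to 1.
def predict_proba(X):
    p1 = 1.0 / (1.0 + np.exp(-score(X)))   # probability of class 1
    return np.column_stack([1.0 - p1, p1])  # [P(class 0), P(class 1)]

X = np.array([[0.0, 0.0], [2.0, -1.0]])
probs = predict_proba(X)
print(probs)  # each row sums to 1.0
```

This is why frameworks as different as scikit-learn, Keras, and PyTorch can all be explained through the same interface: the explainer never sees the model, only the probability function.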
Only approximates model behavior around individual predictions, lacking global interpretability, which can miss systemic biases or patterns.
Requires perturbing each instance and fitting a local surrogate model, which adds significant latency for large datasets or expensive models, as acknowledged in the methodology.
Matplotlib visualizations are less polished than HTML ones, as admitted in the README, which may affect presentation for stakeholders.
Dropped Python 2 support in version 0.2.0, forcing updates for older systems; its reliance on probability outputs also excludes models that expose only hard class labels.