Open Source Libs
Explainable AI
128 Open Source Explainable AI Software Projects
Free and open source explainable AI code projects including engines, APIs, generators, and tools.
Interpret
4502 ⭐
Fit interpretable models. Explain black-box machine learning.
Tensorwatch
3202 ⭐
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Mindsdb
4494 ⭐
In-Database Machine Learning
Awesome Interpretable Machine Learning
807 ⭐
Aix360
1036 ⭐
Interpretability and explainability of data and machine learning models
Dalex
988 ⭐
moDel Agnostic Language for Exploration and eXplanation
Machine_learning_tutorials
667 ⭐
Code, exercises and tutorials from my personal blog! 📝
Xai
766 ⭐
XAI - An eXplainability toolbox for machine learning
Interpretml Dice
759 ⭐
Generate Diverse Counterfactual Explanations for any machine learning model.
Lofo Importance
431 ⭐
Leave One Feature Out Importance
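The idea behind leave-one-feature-out importance can be sketched in a few lines of pure Python: retrain the model with each feature removed and measure the drop in validation accuracy. Everything below is a hypothetical toy (a 1-nearest-neighbour "model" and hand-made data), not the lofo-importance API, which works with any sklearn-style estimator and cross-validation.

```python
# Leave-one-feature-out (LOFO) importance, minimal pure-Python sketch.
# Importance of a feature = drop in validation accuracy when that feature
# is removed and the model is refit on the remaining features.

def knn_predict(train_X, train_y, x):
    """Predict the label of x with 1-NN (squared Euclidean distance)."""
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best]

def accuracy(train_X, train_y, val_X, val_y):
    hits = sum(knn_predict(train_X, train_y, x) == y for x, y in zip(val_X, val_y))
    return hits / len(val_X)

def lofo_importance(train_X, train_y, val_X, val_y):
    base = accuracy(train_X, train_y, val_X, val_y)
    importances = []
    for f in range(len(train_X[0])):
        # Drop column f from every row, then "retrain" (1-NN just stores data).
        drop_f = lambda rows: [[v for j, v in enumerate(r) if j != f] for r in rows]
        importances.append(base - accuracy(drop_f(train_X), train_y,
                                           drop_f(val_X), val_y))
    return importances

# Toy data: feature 0 determines the label, feature 1 is noise.
train_X = [[0.0, 0.9], [0.1, 0.1], [1.0, 0.8], [0.9, 0.2]]
train_y = [0, 0, 1, 1]
val_X   = [[0.05, 0.7], [0.95, 0.3]]
val_y   = [0, 1]

print(lofo_importance(train_X, train_y, val_X, val_y))  # → [0.5, 0.0]
```

Removing the informative feature costs half the validation accuracy, while removing the noise feature costs nothing, which is exactly the signal LOFO importance is after.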
Cnn Exposed
172 ⭐
🕵️‍♂️ Interpreting Convolutional Neural Network (CNN) results.
Modelstudio
213 ⭐
📍 Interactive Studio for Explanatory Model Analysis
Path_explain
136 ⭐
A repository for explaining feature attributions and feature interactions in deep neural networks.
Scalaconsultants Aspect Based Sentiment Analysis
324 ⭐
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Hierarchical Dnn Interpretations
101 ⭐
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Xnm Net
88 ⭐
PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs"
Explainx
280 ⭐
Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code.
Cxplain
99 ⭐
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Interpretability By Parts
103 ⭐
Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Shapr
95 ⭐
Explaining the output of machine learning models with more accurately estimated Shapley values
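The Shapley values that shapr estimates can be computed exactly on a toy problem by enumerating every feature coalition. The sketch below is illustrative only (shapr estimates them far more efficiently and accounts for feature dependence); the value function `v` is a hypothetical additive model, chosen so the exact answer is easy to check.

```python
# Exact Shapley values by enumerating all feature coalitions.
# v(S) is the model's expected output when only the features in S are
# "known"; phi_i weights each marginal contribution v(S+{i}) - v(S) by
# |S|! * (n-|S|-1)! / n!.
from itertools import combinations
from math import factorial

def shapley_values(n_features, v):
    n = n_features
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phis.append(phi)
    return phis

# Toy value function: additive model f(x) = 2*x0 + 1*x1 at x = (1, 1),
# with unknown features contributing their baseline of 0.
def v(S):
    return (2.0 if 0 in S else 0.0) + (1.0 if 1 in S else 0.0)

print(shapley_values(2, v))   # → [2.0, 1.0]
```

For an additive model the attributions recover the individual terms exactly; the interesting (and expensive) cases are models with interactions, which is where approximate estimators like shapr earn their keep.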
Deep Explanation Penalization
93 ⭐
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Fast Tsetlin Machine With Mnist Demo
55 ⭐
A fast Tsetlin Machine implementation employing bit-wise operators, with MNIST demo.
Eclique Rise
89 ⭐
Detect a model's attention with RISE (Randomized Input Sampling for Explanation)
Datascience_artificialintelligence_utils
175 ⭐
Examples of Data Science projects and Artificial Intelligence use cases
Xaiaterum2020
50 ⭐
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Ml Fairness Framework
60 ⭐
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Relational_deep_reinforcement_learning
44 ⭐
Fastshap
77 ⭐
Fast approximate Shapley values in R
Representer_point_selection
55 ⭐
Code release for "Representer Point Selection for Explaining Deep Neural Networks" (NeurIPS 2018)
Textgain Grasp
49 ⭐
Essential NLP & ML, short & fast pure Python code
Xaience
85 ⭐
All about explainable AI, algorithmic fairness and more
Visner
49 ⭐
In-the-wild extraction of entities using Flair, displayed with an elegant front-end.
Fat Forensics
48 ⭐
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
Contrastiveexplanation
38 ⭐
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Bert_attn_viz
37 ⭐
Visualize BERT's self-attention layers on text classification tasks
Shap_fold
34 ⭐
(Explainable AI) - Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Iba
51 ⭐
Information Bottlenecks for Attribution
Self_critical_vqa
36 ⭐
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Goneat_ns
34 ⭐
A Go implementation of NeuroEvolution of Augmenting Topologies (NEAT) with Novelty Search optimization, aimed at solving deceptive tasks with strong local optima
Sagemaker Explaining Credit Decisions
76 ⭐
Amazon SageMaker Solution for explaining credit decisions.
Whitebox Part1
34 ⭐
Introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch)
Plaquebox Paper
26 ⭐
Repo for Tang et al, bioRxiv 454793 (2018)
Ddsm Visual Primitives
23 ⭐
Using deep learning to discover interpretable representations for mammogram classification and explanation
Minotor
20 ⭐
Open source software for machine learning production monitoring: maintain control over production models, detect bias, and explain your results.
Pyceterisparibus
19 ⭐
Python library for Ceteris Paribus Plots (What-if plots)
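A ceteris paribus profile is simple to sketch: hold every feature of one observation fixed, sweep a single feature over a grid, and record the model's prediction at each grid point. The snippet below is a minimal illustration with a hypothetical pricing model, not the pyCeterisParibus API, which works with arbitrary models and adds interactive plots.

```python
# Ceteris paribus ("what-if") profile, minimal sketch.

def ceteris_paribus(predict, instance, feature, grid):
    """Sweep one feature over `grid`, holding the rest of `instance` fixed."""
    profile = []
    for value in grid:
        x = list(instance)        # copy so the original observation is untouched
        x[feature] = value
        profile.append((value, predict(x)))
    return profile

# Hypothetical model: price = 50 * area + 10000 * rooms
predict = lambda x: 50 * x[0] + 10000 * x[1]
instance = [80, 3]               # 80 m^2, 3 rooms
profile = ceteris_paribus(predict, instance, feature=0, grid=[60, 80, 100])
print(profile)                   # → [(60, 33000), (80, 34000), (100, 35000)]
```

Plotting `profile` gives the what-if curve: how the predicted price would change if only the area of this particular flat changed.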
Disentangled Attribution Curves
20 ⭐
Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"
Relative_attributing_propagation
22 ⭐
Interpreting DNNs, Relative attributing propagation
Javaanchorexplainer
17 ⭐
Explains machine learning models quickly using the Anchors algorithm, originally proposed by marcotcr in 2018
Hc_ml
20 ⭐
Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning.
Toybox Rs Toybox
26 ⭐
The Machine Learning Toybox for testing the behavior of autonomous agents.
Anchorsonr
13 ⭐
Implementation of the Anchors algorithm: Explain black-box ML models
Awesome Adversarial Interpretable Machine Learning
121 ⭐
💡 Adversarial attacks on model explanations, and evaluation approaches
Modeloriented Vivo
14 ⭐
Variable importance via oscillations
Dbountouridis Siren
21 ⭐
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
Datafsm
13 ⭐
Machine Learning Finite State Machine Models from Data with Genetic Algorithms
Dlime_experiments
17 ⭐
A deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experiments on three medical datasets show the superiority of Deterministic LIME (DLIME).
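The core of a LIME-style explanation is a weighted linear model fitted around one instance; making it deterministic means replacing random perturbations with a fixed neighbour set. The 1-D sketch below is a toy in that spirit (DLIME itself selects neighbours via hierarchical clustering and KNN); the black box, neighbours, and kernel are all illustrative assumptions.

```python
# Deterministic local surrogate, minimal 1-D sketch: weight fixed
# neighbours by proximity to x0 and fit a weighted least-squares line;
# its slope is the local explanation.
import math

def local_slope(black_box, x0, neighbours, kernel_width=1.0):
    ws = [math.exp(-((x - x0) / kernel_width) ** 2) for x in neighbours]
    ys = [black_box(x) for x in neighbours]
    xbar = sum(w * x for w, x in zip(ws, neighbours)) / sum(ws)
    ybar = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, neighbours, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, neighbours))
    return num / den

# Black box f(x) = x**2: with neighbours symmetric around x0 = 3, the
# fitted slope recovers the derivative, 6.
print(local_slope(lambda x: x * x, 3.0, [2.5, 3.0, 3.5]))
```

Because the neighbour set is fixed, running the explainer twice gives the same slope, which is exactly the stability property the deterministic variant argues for.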
Article Information 2019
13 ⭐
Article for Special Edition of Information: Machine Learning with Python
Amirhk Mace
44 ⭐
Model Agnostic Counterfactual Explanations
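The objective behind counterfactual explanations can be shown with a brute-force toy: search a small grid of candidate inputs for the one closest to the original instance whose prediction flips to the desired class. This only illustrates the objective; MACE itself finds provably nearest counterfactuals with an SMT solver, and the loan model below is entirely hypothetical.

```python
# Nearest counterfactual by exhaustive search (L1 distance), minimal sketch.
from itertools import product

def nearest_counterfactual(predict, instance, grids, target):
    best, best_dist = None, float("inf")
    for candidate in product(*grids):
        if predict(list(candidate)) == target:
            dist = sum(abs(a - b) for a, b in zip(candidate, instance))
            if dist < best_dist:
                best, best_dist = list(candidate), dist
    return best

# Hypothetical loan model: approve iff income + 5 * credit_bucket >= 100.
predict = lambda x: int(x[0] + 5 * x[1] >= 100)
instance = [70, 4]                              # rejected: 70 + 20 = 90
grids = [range(50, 121, 10), range(0, 9)]       # candidate values per feature
print(nearest_counterfactual(predict, instance, grids, target=1))
```

Here the answer is "raise the credit bucket from 4 to 6", the smallest change (L1 distance 2) that crosses the decision boundary, which is the kind of actionable statement counterfactual methods aim to produce.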
U Cam
13 ⭐
Visual Explanation using Uncertainty based Class Activation Maps
Awesome Explainable Ai
510 ⭐
A collection of research materials on explainable AI/ML
Ceml
22 ⭐
CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox
Dialogue Understanding
96 ⭐
This repository contains PyTorch implementation for the baseline models from the paper Utterance-level Dialogue Understanding: An Empirical Study
Awesome Explainable Graph Reasoning
1759 ⭐
A collection of research papers and software related to explainability in graph machine learning.
Transformers Interpret
544 ⭐
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Bcg Gamma Facet
361 ⭐
Human-explainable AI.
Responsible Ai Widgets
399 ⭐
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Interpretability Implementations Demos
438 ⭐
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Transformer Mm Explainability
289 ⭐
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Vit Explain
231 ⭐
Explainability for Vision Transformers
Carla Recourse Carla
149 ⭐
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Shapley
134 ⭐
The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
Awesome Explanatory Supervision
93 ⭐
List of relevant resources on machine learning from explanatory supervision
Selfexplainml Aletheia
53 ⭐
A Python package for unwrapping ReLU DNNs
Shapleyexplanationnetworks
55 ⭐
Implementation of the paper "Shapley Explanation Networks"
Whyisyoung Cade
60 ⭐
Code for our USENIX Security 2021 paper -- CADE: Detecting and Explaining Concept Drift Samples for Security Applications
Da_visualization
57 ⭐
[CVPR 2021] "Visualizing Adapted Knowledge in Domain Transfer". Visualization for domain adaptation. #explainable-ai
Seggradcam
42 ⭐
SEG-GRAD-CAM: Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping
Color_distillation
36 ⭐
[CVPR 2020] "Learning to Structure an Image with Few Colors". Critical structure for network recognition. #explainable-ai
Graphlime
35 ⭐
A PyTorch implementation of GraphLIME
Fact Checking Survey
34 ⭐
Repository for the COLING 2020 paper "Explainable Automated Fact-Checking: A Survey."
Xplique
61 ⭐
👋 Xplique is a Neural Networks Explainability Toolbox
Concept Based Xai
31 ⭐
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Clustershapley
28 ⭐
Explaining dimensionality reduction results using SHAP values
3D Guidedgradcam For Medical Imaging
34 ⭐
This repo contains an implementation of Guided Grad-CAM for 3D medical imaging using NIfTI files in TensorFlow 2.0. Other input formats can be used by editing the input to the Guided Grad-CAM model.
Pytorch_explain
24 ⭐
PyTorch Explain: Logic Explained Networks in Python.
Health Fact Checking
23 ⭐
Dataset and code for "Explainable Automated Fact-Checking for Public Health Claims" from EMNLP 2020.
Prototree
27 ⭐
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Automated Fact Checking Resources
48 ⭐
Links to conference/journal publications in automated fact-checking (resources for the TACL21 paper).
Gradcam_pytorch
20 ⭐
Grad-CAM in PyTorch
Fastcam
20 ⭐
A toolkit for efficient computation of saliency maps for explainable AI attribution. Developed at Lawrence Livermore National Laboratory.
Recourse
20 ⭐
Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831
Koriavinash1 Bioexp
22 ⭐
Explainability of Deep Learning Models
Global Attribution Mapping
18 ⭐
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Logic_explainer_networks
21 ⭐
Deep Logic is a Python package providing utilities to build deep learning models that are explainable by design.
Lemna
16 ⭐
Source code for "LEMNA: Explaining Deep Learning based Security Applications".
Strategic Decisions
17 ⭐
Code and data for decision making under strategic behavior
Cfai
15 ⭐
A collection of algorithms of counterfactual explanations.
Mtunet
17 ⭐
MTUNet: Few-shot Image Classification with Visual Explanations
Atgfe
14 ⭐
Automated Transparent Genetic Feature Engineering
Gebi
14 ⭐
GEBI: Global Explanations for Bias Identification. Open source code for discovering bias in data, demonstrated on a skin lesion dataset.