128 Open Source Explainable AI Software Projects
Free and open source explainable AI code projects, including engines, APIs, generators, and tools.
Tensorwatch 3202 ⭐
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Path_explain 136 ⭐
A repository for explaining feature attributions and feature interactions in deep neural networks.
Scalaconsultants Aspect Based Sentiment Analysis 324 ⭐
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Hierarchical Dnn Interpretations 101 ⭐
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Xnm Net 88 ⭐
PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs"
Explainx 280 ⭐
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Cxplain 99 ⭐
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Interpretability By Parts 103 ⭐
Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Shapr 95 ⭐
Explaining the output of machine learning models with more accurately estimated Shapley values
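shapr itself is an R package, but the quantity it estimates is easy to illustrate. Below is a minimal, exponential-time exact Shapley value computation in pure Python for a toy cooperative game; the function names and the toy payoff are illustrative, not shapr's API, and real explainers (shapr included) approximate this sum rather than enumerating every coalition.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's weighted average marginal
    contribution over all coalitions of the other players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy game: payoff is the sum of stand-alone worths,
# plus a bonus of 10 when players "a" and "b" cooperate.
worth = {"a": 1.0, "b": 2.0, "c": 3.0}
def value(coalition):
    bonus = 10.0 if {"a", "b"} <= coalition else 0.0
    return sum(worth[p] for p in coalition) + bonus

phi = shapley_values(["a", "b", "c"], value)
```

By symmetry, the cooperation bonus is split equally between "a" and "b", and the values sum to the grand coalition's payoff (the efficiency axiom) — in model explanation, that sum is the model's prediction minus a baseline.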
Deep Explanation Penalization 93 ⭐
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Fast Tsetlin Machine With Mnist Demo 55 ⭐
A fast Tsetlin Machine implementation employing bit-wise operators, with MNIST demo.
Datascience_artificialintelligence_utils 175 ⭐
Examples of Data Science projects and Artificial Intelligence use cases
Xaiaterum2020 50 ⭐
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Ml Fairness Framework 60 ⭐
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Representer_point_selection 55 ⭐
Code release for "Representer Point Selection for Explaining Deep Neural Networks" (NeurIPS 2018)
Visner 49 ⭐
In-the-wild extraction of entities found using Flair, displayed with an elegant front-end.
Contrastiveexplanation 38 ⭐
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Shap_fold 34 ⭐
(Explainable AI) - Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Self_critical_vqa 36 ⭐
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Goneat_ns 34 ⭐
This project provides a Go implementation of NeuroEvolution of Augmenting Topologies (NEAT) with Novelty Search optimization, aimed at solving deceptive tasks with strong local optima.
Sagemaker Explaining Credit Decisions 76 ⭐
Amazon SageMaker Solution for explaining credit decisions.
Whitebox Part1 34 ⭐
This part introduces and experiments with ways to interpret and evaluate models in the image domain. (PyTorch)
Ddsm Visual Primitives 23 ⭐
Using deep learning to discover interpretable representations for mammogram classification and explanation
Minotor 20 ⭐
Open source software for machine learning production monitoring: maintain control over production models, detect bias, and explain your results.
Disentangled Attribution Curves 20 ⭐
Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"
Javaanchorexplainer 17 ⭐
Quickly explains machine learning models using the Anchor algorithm, originally proposed by marcotcr in 2018
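The Anchor approach accepts a rule as an explanation when perturbed samples that satisfy it almost always receive the same prediction as the instance being explained. A minimal sketch of that precision estimate in Python (not JavaAnchorExplainer's actual code; the function and the toy black box are illustrative):

```python
import numpy as np

def anchor_precision(predict_fn, rule, samples, x):
    """Estimate the precision of a candidate anchor rule around instance x.

    rule(z) -> bool: does perturbed sample z satisfy the anchor's predicates?
    Precision = fraction of rule-satisfying samples whose prediction
    matches the model's prediction for x.
    """
    target = predict_fn(x[None])[0]
    mask = np.array([rule(z) for z in samples])
    if not mask.any():
        return 0.0
    return float(np.mean(predict_fn(samples[mask]) == target))

# Toy black box: class 1 iff the first feature is positive.
predict = lambda Z: (Z[:, 0] > 0).astype(int)
x = np.array([1.0, 0.0])
samples = np.array([[0.6, 1.0], [0.9, -1.0], [-0.5, 0.3], [0.1, 2.0]])
prec = anchor_precision(predict, lambda z: z[0] > 0.5, samples, x)
```

A full implementation searches over candidate rules (typically with a multi-armed-bandit strategy) for the highest-coverage rule whose estimated precision clears a threshold; the sketch above only scores one candidate.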
Hc_ml 20 ⭐
Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning.
Awesome Adversarial Interpretable Machine Learning 121 ⭐
💡 Adversarial attacks on model explanations, and evaluation approaches
Dbountouridis Siren 21 ⭐
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
Dlime_experiments 17 ⭐
A deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
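The core idea both LIME and DLIME share is fitting a weighted linear surrogate to the black-box model in a neighborhood of one instance. A minimal NumPy sketch of that step (illustrative names; DLIME replaces the random neighborhood with one chosen by hierarchical clustering to make results deterministic — here a fixed seed stands in for that determinism):

```python
import numpy as np

def local_surrogate(predict_fn, x, n_samples=500, sigma=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate to predict_fn around x;
    the surrogate's coefficients serve as per-feature attributions."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Exponential kernel: perturbations closer to x get larger weight.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # drop the intercept

# Sanity check: for a linear black box the surrogate recovers its weights.
f = lambda Z: Z @ np.array([3.0, -2.0])
attributions = local_surrogate(f, np.array([1.0, 1.0]))
```

For a genuinely nonlinear model the coefficients instead describe the model's local behavior around `x`, which is exactly what a LIME-style explanation reports.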
Article Information 2019 13 ⭐
Article for Special Edition of Information: Machine Learning with Python
Dialogue Understanding 96 ⭐
This repository contains PyTorch implementation for the baseline models from the paper Utterance-level Dialogue Understanding: An Empirical Study
Awesome Explainable Graph Reasoning 1759 ⭐
A collection of research papers and software related to explainability in graph machine learning.
Transformers Interpret 544 ⭐
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Responsible Ai Widgets 399 ⭐
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Interpretability Implementations Demos 438 ⭐
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Transformer Mm Explainability 289 ⭐
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Carla Recourse Carla 149 ⭐
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Shapley 134 ⭐
The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021).
Awesome Explanatory Supervision 93 ⭐
List of relevant resources for machine learning from explanatory supervision
Whyisyoung Cade 60 ⭐
Code for our USENIX Security 2021 paper -- CADE: Detecting and Explaining Concept Drift Samples for Security Applications
Da_visualization 57 ⭐
[CVPR2021] "Visualizing Adapted Knowledge in Domain Transfer". Visualization for domain adaptation. #explainable-ai
Seggradcam 42 ⭐
SEG-GRAD-CAM: Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping
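SEG-GRAD-CAM extends Grad-CAM from classification to semantic segmentation; the underlying weighting it builds on is simple. A minimal NumPy sketch of the core Grad-CAM computation (not the repository's code; it assumes you have already extracted a conv layer's activations and the score's gradients with respect to them):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer's activations and gradients.

    feature_maps: (C, H, W) activations A^k
    gradients:    (C, H, W) d(score)/dA^k
    Returns an (H, W) map: ReLU(sum_k alpha_k * A^k),
    where alpha_k is the global average pool of channel k's gradients.
    """
    alphas = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted channel sum
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

For classification the "score" is one class logit; SEG-GRAD-CAM's change is to take the gradient of a segmentation score aggregated over a chosen region of output pixels, so the heatmap shows which input evidence supports that region's labeling.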
Color_distillation 36 ⭐
[CVPR 2020] "Learning to Structure an Image with Few Colors". Critical structure for network recognition. #explainable-ai
Fact Checking Survey 34 ⭐
Repository for the COLING 2020 paper "Explainable Automated Fact-Checking: A Survey."
Concept Based Xai 31 ⭐
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
3D Guidedgradcam For Medical Imaging 34 ⭐
This repo contains an implementation of Guided Grad-CAM for 3D medical imaging using NIfTI files in TensorFlow 2.0. Different input formats can be used; in that case, edit the input to the Guided Grad-CAM model.
Health Fact Checking 23 ⭐
Dataset and code for "Explainable Automated Fact-Checking for Public Health Claims" from EMNLP 2020.
Prototree 27 ⭐
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Automated Fact Checking Resources 48 ⭐
Links to conference/journal publications in automated fact-checking (resources for the TACL21 paper).
Fastcam 20 ⭐
A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore National Laboratory.
Recourse 20 ⭐
Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831
Global Attribution Mapping 18 ⭐
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Logic_explainer_networks 21 ⭐
Deep Logic is a Python package providing a set of utilities to build deep learning models that are explainable by design.