Transparent/Interpretable Model

Models in this family are inherently self-explanatory and transparent to users: their learned structure (e.g., rules, trees, or additive terms) serves directly as the explanation.

Papers

On the Power of Decision Trees in Auto-Regressive Language Modeling, NeurIPS 2024

PICNN: A Pathway towards Interpretable Convolutional Neural Networks, AAAI 2024

A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation, AAAI 2024

Self-Interpretable Graph Learning with Sufficient and Necessary Explanations, AAAI 2024

Learning Performance Maximizing Ensembles with Explainability Guarantees, AAAI 2024

NeSyFOLD: A Framework for Interpretable Image Classification, AAAI 2024

Pantypes: Diverse Representatives for Self-Explainable Models, AAAI 2024

Towards Modeling Uncertainties of Self-Explaining Neural Networks, AAAI 2024

FEAMOE: Fair, Explainable and Adaptive Mixture of Experts, IJCAI 2023

Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint, ICLR 2023

A Framework for Learning Ante-hoc Explainable Models via Concepts, CVPR 2022

Explainable Reinforcement Learning via Model Transforms, NeurIPS 2022

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy, NeurIPS 2022

Decision Trees with Short Explainable Rules, NeurIPS 2022

Hierarchical Shrinkage: Improving the Accuracy and Interpretability of Tree-Based Methods, ICML 2022

Entropy-based Logic Explanations of Neural Networks, AAAI 2022

Scalable Rule-Based Representation Learning for Interpretable Classification, NeurIPS 2021

Neural Additive Models: Interpretable Machine Learning with Neural Nets, NeurIPS 2021

Self-Interpretable Model with Transformation Equivariant Interpretation, NeurIPS 2021

Interpretable Compositional Convolutional Neural Networks, IJCAI 2021

Connecting Interpretability and Robustness in Decision Trees through Separation, ICML 2021

Explanation Consistency Training: Facilitating Consistency-Based Semi-Supervised Learning with Interpretability, AAAI 2021

TabNet: Attentive Interpretable Tabular Learning, AAAI 2021

Building Interpretable Interaction Trees for Deep NLP Models, AAAI 2021

Shapley Explanation Networks, ICLR 2021

NBDT: Neural-Backed Decision Trees, ICLR 2021

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters, ECCV 2020

Transparent Classification with Multilayer Logical Perceptrons and Random Binarization, AAAI 2020

Generalized Linear Rule Models, ICML 2019

Axiomatic Interpretability for Multiclass Additive Models, KDD 2019

Interpretable Convolutional Neural Networks, CVPR 2018

Boolean Decision Rules via Column Generation, NeurIPS 2018

Towards Robust Interpretability with Self-Explaining Neural Networks, NeurIPS 2018

Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, The Annals of Applied Statistics 2015

Comprehensible Classification Models: A Position Paper, KDD 2015

Making machine learning models interpretable, ESANN 2012

Predictive learning via rule ensembles, The Annals of Applied Statistics 2008

Other transparent models (see the example sketch after this list):

  • Decision Tree
  • Linear Models
  • Rule-based Models
  • k-Nearest Neighbors (k-NN)
  • Generalized Additive Models (GAMs)
  • RuleFit
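
To make concrete why these model families count as transparent, here is a minimal sketch (assuming scikit-learn and its bundled Iris dataset, neither of which is part of this list). It fits a shallow decision tree and a logistic regression, then prints the learned rules and coefficients; those printed artifacts are the explanation, with no separate post-hoc explainer needed.

```python
# Minimal sketch (assumes scikit-learn is installed): transparent models whose
# learned structure is itself the explanation.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# Decision tree: the if-then rules printed below are the entire model.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(iris.feature_names)))

# Linear model: each coefficient states how a feature shifts each class score.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, weights in zip(iris.feature_names, linear.coef_.T):
    print(f"{name}: {[round(float(w), 2) for w in weights]}")
```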