Uncertainty quantification (UQ) plays a vital role in decision-making. Bayesian approximation and ensemble learning are two of the most widely used families of UQ methods in the literature. This project explores several UQ methods and compares their performance on different datasets in order to validate predictive machine learning models.
In this repository you will find:
- [UQ methods](UQ%20methods/): uncertainty quantification methods applied to two types of data (dense and sparse).
- [Validation metrics](Validation-Metrics/): several metrics to evaluate each method's uncertainty estimates.
- [Analysis](Analysis/): analysis and discussion of the results.

To give a broad view of the methods' performance across the two types of data, the comparison figures and tables are gathered in the [Analysis](Analysis/) folder.
```
Uncertainty-quantification
├── Analysis/
├── utils.py
├── pictures/
├── UQ methods/
│   ├── Bayesian Neural Networks/
│   │   ├── BNN_dense.ipynb
│   │   └── BNN_sparse.ipynb
│   ├── Bagging/
│   │   ├── Bagging_dense.ipynb
│   │   └── Bagging_sparse.ipynb
│   ├── Conformalized Quantile/
│   │   ├── CQR_dense.ipynb
│   │   └── CQR_sparse.ipynb
│   ├── Deep Evidential Regression/
│   │   ├── DER_dense.ipynb
│   │   └── DER_sparse.ipynb
│   ├── Gaussian Processes/
│   │   ├── GP_dense.ipynb
│   │   └── GP_sparse.ipynb
│   ├── Monte Carlo Dropout/
│   │   ├── MCD_dense.ipynb
│   │   └── MCD_sparse.ipynb
│   ├── Random Forest/
│   │   ├── RF_dense.ipynb
│   │   └── RF_sparse.ipynb
│   └── Snapshot Ensemble/
│       ├── Models/
│       └── SnapEns_dense.ipynb
└── Validation-Metrics/
    └── metrics.py
```
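Several of the methods above (Bagging, Random Forest, Snapshot Ensembles) share the same ensemble recipe: train several models, use the mean of their predictions as the point estimate, and the spread across members as the uncertainty. A minimal numpy sketch of bagging, on hypothetical toy data (the polynomial model and `n_models` are illustrative choices, not taken from the notebooks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D toy data: y = sin(x) + noise
x = np.linspace(0, 6, 200)
y = np.sin(x) + rng.normal(0, 0.2, size=x.size)

def fit_model(x, y, degree=5):
    """One base learner: a polynomial least-squares fit."""
    return np.polynomial.Polynomial.fit(x, y, degree)

# Bagging: train each member on a bootstrap resample of the data
n_models = 20
preds = []
for _ in range(n_models):
    idx = rng.integers(0, x.size, size=x.size)  # sample with replacement
    preds.append(fit_model(x[idx], y[idx])(x))
preds = np.stack(preds)

# Ensemble mean is the prediction; the spread across members is the
# (epistemic) uncertainty estimate
mean = preds.mean(axis=0)
std = preds.std(axis=0)
lower, upper = mean - 2 * std, mean + 2 * std  # ~95% band under a Gaussian assumption
```

The same aggregation step applies whether the members come from bootstrap resampling, random feature subsets, or snapshots of a single training run.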
Here is a non-exhaustive list of ready-to-use tools and APIs for uncertainty quantification in machine learning:
| Name | Description | License |
|---|---|---|
| IBM UQ360 | Extensible open-source toolkit to estimate, communicate, and use uncertainty in machine learning model predictions. | MIT |
| Uncertainty Toolbox | A Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualizations. | MIT |
| MAPIE | A scikit-learn-compatible module for estimating prediction intervals. | BSD-3-Clause |
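The core idea behind tools like MAPIE (and the conformalized-quantile notebooks above) is split conformal prediction: held-out calibration residuals turn any point predictor into a prediction interval with a marginal coverage guarantee. A hedged numpy sketch of the split-conformal recipe on hypothetical toy data (this is the underlying idea, not MAPIE's actual API):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data and a deliberately simple point predictor
x = rng.uniform(0, 10, 1000)
y = 2.0 * x + rng.normal(0, 1.0, size=x.size)
predict = lambda x: 2.0 * x              # stand-in for any fitted model

# Split: one half for calibration, the other as "test"
x_cal, y_cal = x[:500], y[:500]
x_test = x[500:]

# Conformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - predict(x_cal))

# Quantile with a finite-sample correction gives 1 - alpha marginal coverage
alpha = 0.1
n = scores.size
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Symmetric prediction interval around each point prediction
lower = predict(x_test) - q
upper = predict(x_test) + q
```

Conformalized quantile regression refines this by calibrating the two ends of a quantile regressor separately, so the intervals adapt to heteroscedastic noise.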
I- Uncertainty
- Hüllermeier, Eyke, and Willem Waegeman. "Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods." Machine Learning 110.3 (2021): 457-506.
II- UQ Methods
- Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." International Conference on Machine Learning, pages 1050-1059. PMLR, 2016.
- Huang, Gao, et al. "Snapshot ensembles: Train 1, get M for free." arXiv preprint arXiv:1704.00109 (2017).
- Amini, Alexander, et al. "Deep evidential regression." Advances in Neural Information Processing Systems 33 (2020): 14927-14937.
- Blundell, Charles, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. "Weight uncertainty in neural networks." arXiv preprint arXiv:1505.05424 (2015).
- Rasmussen, Carl Edward, and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
- Breiman, Leo. "Bagging predictors." Machine Learning 24.2 (1996): 123-140.
- Romano, Yaniv, Evan Patterson, and Emmanuel Candès. "Conformalized quantile regression." Advances in Neural Information Processing Systems 32 (2019).
III- Validation metrics
- Tran, Kevin, et al. "Methods for comparing uncertainty quantifications for material property predictions." Machine Learning: Science and Technology 1.2 (2020): 025006.
- Sluijterman, Laurens, Eric Cator, and Tom Heskes. "How to evaluate uncertainty estimates in machine learning for regression." arXiv preprint arXiv:2106.03395 (2021).
- Levi, Dan, Liran Gispan, Niv Giladi, and Ethan Fetaya. "Evaluating and calibrating uncertainty prediction in regression tasks." (2020).
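The papers above evaluate uncertainty estimates mainly through calibration (do the stated intervals cover the truth at the stated rate?) and sharpness (how narrow are the intervals?). A minimal numpy sketch of these two metrics for Gaussian predictive distributions, on hypothetical perfectly calibrated predictions (this is an illustration of the idea, not the code in `metrics.py`):

```python
import numpy as np

def coverage_and_sharpness(y_true, mean, std, z=1.96):
    """Empirical coverage and mean width of the central ~95% interval
    implied by Gaussian predictive distributions N(mean, std**2)."""
    lower, upper = mean - z * std, mean + z * std
    coverage = np.mean((y_true >= lower) & (y_true <= upper))
    sharpness = np.mean(upper - lower)
    return coverage, sharpness

# Hypothetical, perfectly calibrated predictions: the true targets are
# drawn from exactly the predicted distributions
rng = np.random.default_rng(2)
mean = rng.normal(0, 1, 10_000)
std = np.full(10_000, 0.5)
y_true = mean + rng.normal(0, 0.5, 10_000)

cov, sharp = coverage_and_sharpness(y_true, mean, std)
# For calibrated predictions, cov should be close to 0.95
```

Both metrics are needed together: a method can reach nominal coverage with uselessly wide intervals, so among calibrated methods the sharpest one is preferred.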
@author: Mohamed El Baha