
Vector Quantized Bayesian Neural Network Inference for Data Streams

This repository provides a TensorFlow implementation of "Vector Quantized Bayesian Neural Network (VQ-BNN) Inference for Data Streams" (AAAI 2021), a temporal smoothing method for efficient uncertainty estimation, along with baselines on vision tasks, e.g., semantic segmentation. This implementation can serve as a starting point for uncertainty estimation research.

Motivation. Although Bayesian neural networks (BNNs) have many theoretical merits, such as the ability to estimate uncertainty, they have a major disadvantage that makes them difficult to use as a practical tool: the predictive inference of BNNs is dozens of times slower than that of deterministic NNs.

VQ-DNN and VQ-BNN, which temporally smooth the recent predictions of deterministic NNs and BNNs, have been proposed to improve both the uncertainty estimation and the computational performance of deterministic NNs and BNNs. They have the following advantages:

  • The computational performance of VQ-BNN is almost the same as that of a deterministic NN, and its predictive performance is comparable to or even superior to that of a BNN. Likewise, VQ-DNN estimates uncertainty better than a deterministic NN.
  • Deterministic NNs and BNNs predict noisy results, while VQ-DNN and VQ-BNN predict stabilized results.
  • This method is easy to implement.

For more details, please refer to the paper, blog (theory), and blog (semantic segmentation).

[Animated figure: input sequence and the predictive results and uncertainties of DNN (11 FPS), VQ-DNN (10 FPS), BNN (0.8 FPS), and VQ-BNN (9 FPS).]

These are the predictive results and uncertainties of a deterministic NN (DNN), VQ-DNN, BNN, and VQ-BNN on CamVid. The results of DNN and BNN change irregularly and randomly. In contrast, the predictive results of VQ-DNN and VQ-BNN change smoothly, i.e., we may get more natural results by using temporal smoothing.

See the qualitative results document for more examples.

Getting Started

The following packages are required:

  • python==3.6
  • matplotlib>=3.1.1
  • tensorflow-gpu==2.0
  • tensorboard

Then, see semantic-segmentation.ipynb for semantic segmentation experiment.

The notebook provides deterministic and Bayesian U-Net and SegNet by default. The Bayesian U-Net and Bayesian SegNet contain MC dropout layers.
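As a reference, here is a minimal sketch of MC dropout inference in TensorFlow 2; the function name and sample count are illustrative, not the repository's exact code. The idea is to keep the dropout layers active at test time and average the softmax outputs of several stochastic forward passes:

```python
# Minimal sketch of MC dropout inference (illustrative, not this repo's exact code).
import tensorflow as tf

def mc_dropout_predict(model, x, num_samples=16):
    """Average softmax outputs over stochastic forward passes.

    Passing training=True keeps the dropout layers active at test time,
    which turns each forward pass into a sample from the BNN.
    """
    probs = [
        tf.nn.softmax(model(x, training=True), axis=-1)
        for _ in range(num_samples)
    ]
    return tf.reduce_mean(tf.stack(probs, axis=0), axis=0)
```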

To run the experiment, you first need to download the training, test, and sequence datasets manually. Snippets are provided for handling the CamVid and Cityscapes datasets.

We define several metrics for measuring accuracy and uncertainty: Accuracy (Acc) and Acc for certain pixels (Acc-90), Intersection-over-Union (IoU) and IoU for certain pixels (IoU-90), negative log-likelihood (NLL), Expected Calibration Error (ECE), Unconfidence (Unc-90), and Frequency for certain pixels (Freq-90). We also provide reliability diagrams for visualization.
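As an illustration, here is a minimal sketch of ECE with equal-width confidence bins, assuming flattened NumPy arrays of per-pixel predictions; the repository's exact binning scheme may differ:

```python
# Minimal sketch of Expected Calibration Error (illustrative binning scheme).
import numpy as np

def expected_calibration_error(probs, labels, num_bins=10):
    """probs: (N, C) softmax probabilities; labels: (N,) integer class ids."""
    conf = probs.max(axis=-1)                # confidence of the predicted class
    correct = (probs.argmax(axis=-1) == labels).astype(np.float64)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |accuracy - confidence| in the bin, weighted by bin frequency
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```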

Results

Method    NLL (↓)    Acc (%, ↑)    ECE (%, ↓)
DNN       0.314      91.1          4.31
BNN       0.276      91.8          3.71
VQ-DNN    0.284      91.2          3.00
VQ-BNN    0.253      92.0          2.24

This table shows the performance of the methods on the semantic segmentation task with the CamVid dataset. Arrows indicate which direction is better. According to these results, VQ-BNN performs significantly faster than BNN while giving predictive results comparable to or even superior to those of BNN.

The reliability diagram shows consistent results: temporal smoothing is an effective method for calibrating predictions. For a more detailed discussion, please refer to the blog.

How to Apply Temporal Smoothing to Your Own Model

It is very easy to implement VQ-BNN inference: it is simply the temporal exponential smoothing, i.e., the exponential moving average (EMA), of the recent prediction sequence of a BNN (or a deterministic NN). No additional training or modification of the model is needed.

More precisely, the predictive distribution of VQ-BNN is

$$p(y \mid \mathbf{x}_0, \mathbf{x}_{-1}, \cdots) \simeq \sum_{t=0}^{-\infty} \pi_t \, p(y \mid \mathbf{x}_t)$$

where $t$ is an integer timestamp ($t = 0$ is the most recent), $\mathbf{x}_t$ are recent inputs, $p(y \mid \mathbf{x}_t)$ are recent NN predictions (e.g., the softmax of NN logits for classification tasks), and $\pi_t = (1 - \alpha)\,\alpha^{-t}$ are exponentially decaying importances of the predictions with hyperparameter $0 < \alpha < 1$.
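As a minimal sketch (the class and variable names are illustrative, not the repository's exact API), this reduces to a one-line EMA update per frame:

```python
# Minimal sketch of VQ-BNN/VQ-DNN inference as temporal smoothing of
# predictions; illustrative code, not the repository's exact API.
class TemporalSmoothing:
    """Exponential moving average of recent predictive distributions."""

    def __init__(self, alpha=0.8):
        self.alpha = alpha      # decay hyperparameter alpha from the formula
        self.smoothed = None    # running predictive distribution

    def update(self, probs):
        """probs: the NN prediction p(y|x_t) (e.g. softmax) for the newest frame."""
        if self.smoothed is None:
            self.smoothed = probs
        else:
            # The newest prediction gets weight (1 - alpha); older predictions
            # decay geometrically, realizing pi_t = (1 - alpha) * alpha^(-t).
            self.smoothed = (1 - self.alpha) * probs + self.alpha * self.smoothed
        return self.smoothed
```

For VQ-BNN, feed `update` the MC dropout prediction of each frame; for VQ-DNN, feed it the deterministic softmax output.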

Citation

If you find this useful, please consider citing 📑 the paper and starring 🌟 this repository. Please do not hesitate to contact Namuk Park (email: namuk.park at gmail dot com, twitter: xxxnell) with any comments or feedback.

@inproceedings{park2021vector,
  title={Vector Quantized Bayesian Neural Network Inference for Data Streams},
  author={Park, Namuk and Lee, Taekyu and Kim, Songkuk},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={10},
  pages={9322--9330},
  year={2021}
}

License

All code is available to you under the Apache License 2.0.

Copyright the maintainers.
