This repository contains code and data to reproduce the results presented in the manuscript *Minimizing the Expected Posterior Entropy Yields Optimal Summaries*.
Figures and tables can be regenerated by executing the following steps:

- Ensure a recent Python version is installed; this code has been tested with Python 3.10 on Ubuntu and macOS.
- Optionally, create a new virtual environment.
- Install the Python requirements by executing `pip install -r requirements.txt` from the root directory of the repository.
- Install CmdStan by executing `python -m cmdstanpy.install_cmdstan --version 2.31.0`. Other recent versions of CmdStan may also work but have not been tested.
- Optionally, verify the installation by executing `pytest -v`; a lighter-weight CmdStan check is sketched after this list.
- Execute `cook exec "*:evaluation"`, which will run all experiments and generate evaluation metrics saved at `workspace/[experiment name]/evaluation.csv`.
- Execute each of the Jupyter notebooks (saved as markdown files) in the `notebooks` folder to generate the figures.
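If the test suite passes, CmdStan is set up correctly. For a quicker, standalone check, the following sketch (not part of the repository) only asks `cmdstanpy` where its CmdStan installation lives and which version it is, using the `cmdstanpy.cmdstan_path` and `cmdstanpy.cmdstan_version` helpers.

```python
# Minimal sketch to confirm that cmdstanpy can locate a CmdStan installation
# before the experiments are run. Illustrative only; not part of the repository.
import cmdstanpy

path = cmdstanpy.cmdstan_path()        # raises ValueError if CmdStan is missing
version = cmdstanpy.cmdstan_version()  # (major, minor) tuple of the installation

print(f"CmdStan {version} found at {path}")
```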
After running the experiments (see above), the `workspace` folder contains all results. It is structured as follows, and the folder structure is repeated for each experiment.
```
benchmark-large       # One folder for each experiment.
    data              # Train, validation, and test splits as pickle files; other temporary files may also be present.
        test.pkl
        train.pkl
        validation.pkl
        ...
    samples           # (Approximate) posterior samples as pickle files.
        [sampler configuration name].pkl
        ...
    transformers      # Trained transformers, e.g., posterior mean estimators, as pickle files.
        [transformer configuration name]-[digits].pkl  # One of three replications with different seeds.
        [transformer configuration name].pkl           # Best transformer amongst the three replications.
    evaluation.csv    # Evaluation of different summary statistic extraction methods.
benchmark-small
    ...
coalescent
    ...
tree-large
    ...
figures               # Contains PDF figures after executing notebooks.
```
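The pickled artifacts can be inspected with the standard `pickle` module. The sketch below is illustrative only: the experiment name and file are taken from the listing above, and the structure of the unpickled objects is an assumption rather than a documented interface, so only a rough summary is printed.

```python
# Illustrative sketch for peeking at a result file; "benchmark-small" and
# "test.pkl" are taken from the workspace listing above, but the layout of the
# unpickled object is not documented here.
import pickle
from pathlib import Path

path = Path("workspace") / "benchmark-small" / "data" / "test.pkl"
with path.open("rb") as fp:
    test_split = pickle.load(fp)

print(type(test_split))
if isinstance(test_split, dict):
    print(sorted(test_split))
```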
Each `evaluation.csv` file has seven columns:

- `path`, which refers to one of the methods used to extract summaries.
- three columns `{nlp,rmise,mise}`, which are best estimates of the negative log probability loss, root mean integrated squared error, and mean integrated squared error, respectively. The estimates are obtained by averaging over all samples in the corresponding test set.
- three columns `{nlp,rmise,mise}_err`, which are standard errors obtained as `sqrt(var / (n - 1))`, where `var` is the variance of the metric in the test set and `n` is the size of the test set. A short loading example follows this list.
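For example, the evaluation files can be compared with pandas. The snippet below is a sketch: the experiment name `benchmark-small` is borrowed from the workspace listing, and the per-sample losses in the second half are a hypothetical stand-in used only to spell out the standard-error formula.

```python
# Sketch: compare methods by their negative log probability loss and report the
# corresponding standard error. Column names follow the description above.
import numpy as np
import pandas as pd

evaluation = pd.read_csv("workspace/benchmark-small/evaluation.csv")
for _, row in evaluation.iterrows():
    print(f"{row['path']}: nlp = {row['nlp']:.3f} ± {row['nlp_err']:.3f}")

# The reported standard errors follow sqrt(var / (n - 1)); for a hypothetical
# vector of per-sample losses this reads:
losses = np.random.normal(size=100)  # placeholder for per-sample metric values
standard_error = np.sqrt(losses.var() / (losses.size - 1))
print(standard_error)
```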