LCA-on-the-Line: Benchmarking Out-of-Distribution Generalization with Class Taxonomies (ICML 2024 Oral Presentation)
# Create a Virtual Environment
conda create --name lca python=3.9
conda activate lca
# Install Dependencies
pip install -r requirements.txt
1). Pre-extract model logits using the provided extraction scripts. Alternatively, download pre-extracted logits from this link.
2). Modify `logit_folder_path` in `main.py` to point to the directory where the logits are stored.
3). Launch the experiment:
python main.py
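For intuition about the metric that `main.py` evaluates, below is a minimal, self-contained sketch of an LCA-height computation over a toy class taxonomy. The toy tree, class names, and helper functions are illustrative assumptions only; the repository computes LCA against its own hierarchy files (e.g. `wordNet_tree.npy`), not this one.

```python
# Minimal sketch of an LCA-distance metric over a class taxonomy.
# The toy taxonomy and helpers below are illustrative, not the repository's API.

# Toy taxonomy: child -> parent (None marks the root).
PARENT = {
    "entity": None,
    "animal": "entity", "vehicle": "entity",
    "dog": "animal", "cat": "animal",
    "car": "vehicle", "truck": "vehicle",
}

def ancestors(node):
    """Return the path from a node up to the root, inclusive."""
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def lca_height(pred, gt):
    """Height above `gt` of the lowest common ancestor of `pred` and `gt`.
    A related mistake (dog vs. cat) gives a small value; an unrelated one
    (dog vs. truck) gives a larger value."""
    pred_ancestors = set(ancestors(pred))
    for height, node in enumerate(ancestors(gt)):
        if node in pred_ancestors:
            return height
    return len(ancestors(gt))  # disjoint trees; cannot happen with one root

if __name__ == "__main__":
    print(lca_height("dog", "dog"))    # 0: correct prediction
    print(lca_height("cat", "dog"))    # 1: sibling class, mild mistake
    print(lca_height("truck", "dog"))  # 2: unrelated class, severe mistake
```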
1). Pre-extract backbone model features using `extract_feature_linear_probe.py`. Alternatively, download pre-extracted backbone features from this link.
2). Modify `DATASET_PATHS` in `linear_probe_runner.py` to the output path from the previous step.
3). Launch the experiment:
python linear_probe_runner.py
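For context, a linear probe here means fitting a linear classifier on frozen backbone features. The sketch below illustrates that idea with scikit-learn on randomly generated stand-in features; the shapes and the choice of logistic regression are assumptions, not the implementation in `linear_probe_runner.py`.

```python
# Illustrative linear-probe sketch: fit a linear classifier on frozen
# backbone features. Shapes and the use of scikit-learn are assumptions;
# see linear_probe_runner.py for the repository's actual implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for pre-extracted features: (num_samples, feature_dim) plus labels.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 512))
train_labels = rng.integers(0, 10, size=1000)
val_feats = rng.normal(size=(200, 512))
val_labels = rng.integers(0, 10, size=200)

# The probe itself: multinomial logistic regression on the frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_feats, train_labels)
print("probe accuracy:", probe.score(val_feats, val_labels))
```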
Follow the instructions in `create_hierarchy.py` (models' logits are required).

To use the latent hierarchy in the previous experiments, follow the `use_latent_hierarchy` section in `main.py` and update `tree_list` in `linear_probe_runner.py`.
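Conceptually, a latent hierarchy can be obtained by hierarchically clustering classes according to how a source model's logits relate them. The sketch below shows one plausible variant using SciPy; the per-class signatures, linkage choice, and array shapes are assumptions, and `create_hierarchy.py` defines the actual procedure.

```python
# Illustrative sketch: derive a latent class hierarchy by hierarchically
# clustering per-class mean softmax outputs computed from a model's logits.
# The averaging scheme and 'average'/cosine linkage are assumptions;
# create_hierarchy.py is the repository's actual procedure.
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.special import softmax

num_samples, num_classes = 5000, 10
rng = np.random.default_rng(0)
logits = rng.normal(size=(num_samples, num_classes))      # stand-in logits
labels = rng.integers(0, num_classes, size=num_samples)   # stand-in labels

# Per-class signature: mean softmax vector over samples of that class.
probs = softmax(logits, axis=1)
class_signatures = np.stack(
    [probs[labels == c].mean(axis=0) for c in range(num_classes)]
)

# Agglomerative clustering of class signatures yields a binary tree
# (the latent hierarchy) over the classes.
Z = linkage(class_signatures, method="average", metric="cosine")
root = to_tree(Z)
print("leaf classes under the root:", root.get_count())
```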
To evaluate the correlation between ID LCA and soft-label quality, refer to the "Predict soft label quality with source model LCA that construct latent hierarchy" section in `main.py`.
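As a pointer to what such a correlation check looks like, the snippet below computes a Spearman rank correlation between two per-model score lists. The numbers are made up, and the choice of Spearman correlation is an assumption rather than necessarily the statistic used in `main.py`.

```python
# Illustrative sketch: rank correlation between per-model ID LCA scores and a
# measure of soft-label quality. The values below are toy numbers; the actual
# analysis lives in the corresponding section of main.py.
from scipy.stats import spearmanr

# Hypothetical per-source-model scores (one entry per model).
id_lca_scores = [4.1, 3.7, 3.9, 3.2, 2.8]
soft_label_quality = [0.52, 0.58, 0.55, 0.63, 0.70]

rho, p_value = spearmanr(id_lca_scores, soft_label_quality)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```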
The files `hier.py`, `datasets/`, and `wordNet_tree.npy` are adapted from Jack Valmadre's hierarchical classification repository.
- Valmadre, Jack. "Hierarchical classification at multiple operating points." Advances in Neural Information Processing Systems 35 (2022): 18034-18045.
- Clean up code for readability.
- Make variables/paths configurable as flags.
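A possible direction for the second item is an argparse-based command-line interface; the flag names below are hypothetical suggestions rather than options that currently exist in `main.py`.

```python
# Hypothetical sketch of exposing hard-coded paths as command-line flags.
# Flag names are suggestions only; main.py does not currently define them.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="LCA-on-the-Line experiments")
    parser.add_argument("--logit-folder-path", type=str, required=True,
                        help="Directory containing pre-extracted model logits")
    parser.add_argument("--hierarchy-file", type=str, default="wordNet_tree.npy",
                        help="Class taxonomy used to compute LCA distances")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(args.logit_folder_path, args.hierarchy_file)
```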