From 567f01c655514f5f3caaea55d73d2c946f6bc1a1 Mon Sep 17 00:00:00 2001
From: Mehdi Cherti
Date: Sun, 27 Aug 2023 13:29:21 +0200
Subject: [PATCH] Update README.md

---
 README.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/README.md b/README.md
index 5c15b72..c83105b 100644
--- a/README.md
+++ b/README.md
@@ -305,6 +305,16 @@ clip_benchmark eval --pretrained_model benchmark/models.txt \
 
 Examples are available in [benchmark/datasets.txt](benchmark/datasets.txt) and [benchmark/models.txt](benchmark/models.txt)
 
+### Multiple checkpoints from the same model
+
+It is also common to evaluate multiple checkpoints from the same model:
+
+```bash
+clip_benchmark eval --model ViT-B-32 --pretrained *.pt \
+--dataset benchmark/datasets.txt --dataset_root "clip_benchmark_datasets/{dataset}" \
+--output "{dataset}_{pretrained}_{model}_{language}_{task}.json"
+```
+
 ### Model and dataset collections
 
 We can also provide model collection names (`openai`, `openclip_base`, `openclip_multilingual`, `openclip_full` are supported) or dataset collection names (`vtab`, `vtab+`, `retrieval`, `imagenet_robustness` are supported):
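A note on the `--output` template the patch introduces: each `{...}` placeholder is filled in per evaluation run, so one JSON file is produced per dataset/checkpoint combination. The sketch below illustrates the expansion for a single hypothetical run; the concrete values (`cifar10`, `epoch_32`, `en`, `zeroshot_classification`) are illustrative assumptions, not from the patch.

```shell
# Illustrative expansion of the output template for one run.
# All values below are hypothetical examples, not produced by clip_benchmark itself.
dataset=cifar10
pretrained=epoch_32        # checkpoint file stem, e.g. from epoch_32.pt
model=ViT-B-32
language=en
task=zeroshot_classification

echo "${dataset}_${pretrained}_${model}_${language}_${task}.json"
```

With `--pretrained *.pt`, the shell expands the glob to every matching checkpoint in the current directory, and the `{pretrained}` slot keeps the resulting result files distinguishable.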