This repository contains the code of the ECB (Explicitly Class-specific Boundaries) method for classification in domain adaptation.
Ba-Hung Ngo*, Nhat-Tuong Do-Tran*, Tuan-Ngoc Nguyen, Hae-Gon Jeon and Tae Jong Choi†
Accepted at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024).
- Supervised Training: We train both the ViT and CNN branches on labeled samples.
- Finding To Conquering Strategy (FTC): We first find class-specific boundaries on top of the fixed ViT Encoder E1 by maximizing the discrepancy between the Classifiers F1 and F2. Subsequently, the CNN Encoder E2 clusters the target features inside those class-specific boundaries by minimizing the discrepancy (see the sketch after this list).
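Below is a minimal, hedged sketch of the FTC two-step idea, not the official implementation: component names (`e1`, `e2`, `f1`, `f2`, the optimizers) and the L1 discrepancy between softmax outputs are assumptions for illustration, and the supervised loss on labeled samples is omitted for brevity.

```python
# Hedged sketch of the FTC idea (illustrative only, not the repo's code).
# Assumed components: e1 = frozen ViT encoder, e2 = CNN encoder,
# f1, f2 = classifiers sharing the same feature dimensionality.
import torch


def discrepancy(logits1, logits2):
    # Mean absolute difference between the two class-probability vectors.
    return (logits1.softmax(dim=1) - logits2.softmax(dim=1)).abs().mean()


def ftc_step(e1, e2, f1, f2, x_target, opt_cls, opt_enc):
    # Step A (Finding): keep the ViT encoder E1 fixed and update F1/F2 to
    # maximize their disagreement on unlabeled target samples, which
    # exposes class-specific decision boundaries.
    with torch.no_grad():
        feat_vit = e1(x_target)
    loss_find = -discrepancy(f1(feat_vit), f2(feat_vit))
    opt_cls.zero_grad()
    loss_find.backward()
    opt_cls.step()

    # Step B (Conquering): update only the CNN encoder E2 so its target
    # features move inside those boundaries, i.e. minimize the
    # disagreement between the two classifiers.
    feat_cnn = e2(x_target)
    loss_conquer = discrepancy(f1(feat_cnn), f2(feat_cnn))
    opt_enc.zero_grad()
    loss_conquer.backward()
    opt_enc.step()
    return loss_find.item(), loss_conquer.item()
```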
Please follow the instructions in DATASET.md to download datasets.
conda env create -f environment.yml
- train.yaml is the configuration file for training our method. You can change its arguments to run Semi-Supervised Domain Adaptation (SSDA) or Unsupervised Domain Adaptation (UDA).
python train.py --cfg configs/train.yaml
- If you want to evaluate on the test dataset with our pretrained models, you need to download the checkpoints.
sh download_pretrain.sh
- For evaluation, you need to modify the configuration arguments in test.yaml in the configs folder. These arguments are described in CONFIG.md.
python test.py --cfg configs/test.yaml
- The visualization compares features from the two networks (CNN and ViT) for the real --> sketch scenario on the DomainNet dataset under the 3-shot setting, before and after adaptation with the FTC strategy.
- We visualize a few samples with the Grad-CAM technique to show the performance of the CNN and ViT branches when applying the ECB method (see the sketch below).
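The following is a hedged sketch of how a Grad-CAM heatmap could be computed for the CNN branch with forward/backward hooks; the function name `grad_cam` and the choice of a ResNet-style last convolutional block as `target_layer` are assumptions for illustration, not the exact procedure used in the paper.

```python
# Hedged Grad-CAM sketch (illustrative only).
import torch


def grad_cam(model, target_layer, image, class_idx):
    feats, grads = {}, {}
    # Capture the activations and gradients of the chosen layer.
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))

    logits = model(image)                 # image: (1, 3, H, W)
    logits[0, class_idx].backward()       # gradient of the target class score
    h1.remove(); h2.remove()

    # Weight each feature map by its average gradient, sum, then ReLU.
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = cam / (cam.max() + 1e-8)        # normalize to [0, 1]
    return cam                            # upsample to image size for overlay
```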
@InProceedings{Ngo_2024_CVPR,
author = {Ngo, Ba Hung and Do-Tran, Nhat-Tuong and Nguyen, Tuan-Ngoc and Jeon, Hae-Gon and Choi, Tae Jong},
title = {Learning CNN on ViT: A Hybrid Model to Explicitly Class-specific Boundaries for Domain Adaptation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
pages = {28545-28554}
}
This project is licensed under the MIT License - see the LICENSE file for details.