This is a fork of official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows", adapted to do multi-label classification for NIH (ChestX-ray14) dataset.

Swin Transformer

By Ze Liu*, Yutong Lin*, Yue Cao*, Han Hu*, Yixuan Wei, Zheng Zhang, Stephen Lin and Baining Guo.

This repository is based on the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". It currently includes code and models for the following tasks:

Image Classification: Included in this repo. See get_started.md for a quick start.

Object Detection and Instance Segmentation: See Swin Transformer for Object Detection.

Semantic Segmentation: See Swin Transformer for Semantic Segmentation.

Self-Supervised Learning: See Transformer-SSL.

Updates

05/12/2021

  1. Used as a backbone for Self-Supervised Learning: Transformer-SSL

Using Swin Transformer as the backbone for self-supervised learning enables us to evaluate the transfer performance of the learnt representations on downstream tasks, which was missing in previous works that relied on ViT/DeiT, backbones that have not been well adapted to downstream tasks.

04/12/2021

Initial commits:

  1. Pretrained models on ImageNet-1K (Swin-T-IN1K, Swin-S-IN1K, Swin-B-IN1K) and ImageNet-22K (Swin-B-IN22K, Swin-L-IN22K) are provided.
  2. Code and models supporting ImageNet-1K image classification, COCO object detection and ADE20K semantic segmentation are provided.
  3. The CUDA kernel implementation for the local relation layer is provided in the LR-Net branch.

Introduction

Swin Transformer (the name Swin stands for Shifted window) was initially described in the arXiv paper and capably serves as a general-purpose backbone for computer vision. It is basically a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections.
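
To make the windowing idea concrete, here is a minimal PyTorch-style sketch of the window-partition and cyclic-shift steps that shifted-window attention builds on. The shapes and the helper name are illustrative assumptions in the spirit of common Swin implementations, not necessarily the exact code in this repository.

    import torch

    def window_partition(x, window_size):
        # Split a (B, H, W, C) feature map into non-overlapping windows of
        # shape (num_windows * B, window_size, window_size, C).
        B, H, W, C = x.shape
        x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
        return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

    x = torch.randn(1, 56, 56, 96)                  # e.g. a stage-1 Swin-T feature map
    windows = window_partition(x, window_size=7)    # (64, 7, 7, 96); attention runs per window

    # In alternating blocks the map is cyclically shifted before partitioning, so the new
    # windows straddle the previous windows' boundaries and information flows across them.
    shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))   # shift_size = window_size // 2
    shifted_windows = window_partition(shifted, window_size=7)

Because self-attention is restricted to these fixed-size windows, its cost grows linearly with the number of tokens rather than quadratically.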

Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.


Main Results on ImageNet with Pretrained Models

ImageNet-1K and ImageNet-22K Pretrained Models

name pretrain resolution acc@1 acc@5 #params FLOPs FPS 22K model 1K model
Swin-T ImageNet-1K 224x224 81.2 95.5 28M 4.5G 755 - github/baidu
Swin-S ImageNet-1K 224x224 83.2 96.2 50M 8.7G 437 - github/baidu
Swin-B ImageNet-1K 224x224 83.5 96.5 88M 15.4G 278 - github/baidu
Swin-B ImageNet-1K 384x384 84.5 97.0 88M 47.1G 85 - github/baidu
Swin-B ImageNet-22K 224x224 85.2 97.5 88M 15.4G 278 github/baidu github/baidu
Swin-B ImageNet-22K 384x384 86.4 98.0 88M 47.1G 85 github/baidu github/baidu
Swin-L ImageNet-22K 224x224 86.3 97.9 197M 34.5G 141 github/baidu github/baidu
Swin-L ImageNet-22K 384x384 87.3 98.2 197M 103.9G 42 github/baidu github/baidu

Note: the access code for the Baidu links is swin.

Main Results on Downstream Tasks

COCO Object Detection (2017 val)

Backbone Method pretrain Lr Schd box mAP mask mAP #params FLOPs
Swin-T Mask R-CNN ImageNet-1K 3x 46.0 41.6 48M 267G
Swin-S Mask R-CNN ImageNet-1K 3x 48.5 43.3 69M 359G
Swin-T Cascade Mask R-CNN ImageNet-1K 3x 50.4 43.7 86M 745G
Swin-S Cascade Mask R-CNN ImageNet-1K 3x 51.9 45.0 107M 838G
Swin-B Cascade Mask R-CNN ImageNet-1K 3x 51.9 45.0 145M 982G
Swin-T RepPoints V2 ImageNet-1K 3x 50.0 - 45M 283G
Swin-T Mask RepPoints V2 ImageNet-1K 3x 50.3 43.6 47M 292G
Swin-B HTC++ ImageNet-22K 6x 56.4 49.1 160M 1043G
Swin-L HTC++ ImageNet-22K 3x 57.1 49.5 284M 1470G
Swin-L HTC++* ImageNet-22K 3x 58.0 50.4 284M -

Note: * indicates multi-scale testing.

ADE20K Semantic Segmentation (val)

Backbone Method pretrain Crop Size Lr Schd mIoU mIoU (ms+flip) #params FLOPs
Swin-T UPerNet ImageNet-1K 512x512 160K 44.51 45.81 60M 945G
Swin-S UPerNet ImageNet-1K 512x512 160K 47.64 49.47 81M 1038G
Swin-B UPerNet ImageNet-1K 512x512 160K 48.13 49.72 121M 1188G
Swin-B UPerNet ImageNet-22K 640x640 160K 50.04 51.66 121M 1841G
Swin-L UPerNet ImageNet-22K 640x640 160K 52.05 53.53 234M 3230G

Citing Swin Transformer

@article{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal={arXiv preprint arXiv:2103.14030},
  year={2021}
}

Getting Started

For image classification setup and pretrained models, please see get_started.md. Instructions for training on the NIH (ChestX-ray14) dataset are given at the end of this README.

Third-party Usage and Experiments

In this paragraph, we cross-link third-party repositories that use Swin and report results. You can let us know by raising an issue.

(Note: please report accuracy numbers and provide trained models in your new repository so that others can get a sense of correctness and model behavior.)

[04/14/2021] Swin for RetinaNet in Detectron2: https://github.com/xiaohu2015/SwinT_detectron2.

[04/16/2021] Included in a famous model zoo: https://github.com/rwightman/pytorch-image-models.

[04/20/2021] Swin-Transformer classifier inference using TorchServe: https://github.com/kamalkraj/Swin-Transformer-Serve

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Training on NIH (ChestX-ray14) dataset

Instructions

  • You can find the initial setup and pretrained models in get_started.md. We used the ImageNet-22K pre-trained Swin-L model at 224x224 resolution for our training.

  • If you have problems installing Apex, you can install it from conda-forge:

    conda install -c conda-forge nvidia-apex
    

    Additionally, install the numpy, pillow, pandas, scikit-learn and scipy packages with pip or conda:

    pip install numpy pillow pandas scikit-learn scipy
    
  • Download the NIH (ChestX-ray14) dataset from Kaggle and merge the images from its different sub-folders into one folder (a short merge-script sketch is given after these instructions). Alternatively, you can keep separate folders for the train, validation and test data, but this is not necessary.

  • To train the model on one GPU, run:

    python -m torch.distributed.launch --nproc_per_node 1 --master_port 12345 main.py \
    --cfg configs/swin_large_patch4_window7_224.yaml --resume path/to/pretrain/swin_large_patch4_window7_224_22k.pth \
    --trainset path/to/train_data/ --validset path/to/validation_data/ --testset path/to/test_data/ \
    --train_csv_path configs/NIH/train.csv --valid_csv_path configs/NIH/validation.csv --test_csv_path configs/NIH/test.csv \
    --batch-size 32 [--output <output-directory> --tag <job-tag> --num_mlp_heads 3] > log.txt
    

    You can extract the validation and test ROC-AUC scores from the resulting log.txt file (a sketch of how per-class ROC-AUC is typically computed is given after these instructions).

    Note: As mentioned above, if you have all the data in one folder, the trainset, validset and testset arguments point to the same folder; alternatively, you can keep them in separate folders.

    Note: If you want to resume training from a checkpoint, you should comment out the line marked with "TODO" in utils.py (sorry for the inconvenience).
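
As referenced in the dataset step above, the following is a small sketch for flattening the Kaggle download into a single image folder. The source layout (images_001/images/, images_002/images/, ... sub-folders) and both paths are assumptions about the usual Kaggle archive, so adjust them to your setup.

    import glob
    import os
    import shutil

    src_root = "path/to/kaggle_download"   # assumed folder containing images_001, images_002, ... sub-folders
    dst_dir = "path/to/all_images"         # single folder passed to --trainset/--validset/--testset
    os.makedirs(dst_dir, exist_ok=True)

    # Copy every PNG from the per-archive sub-folders into one flat folder.
    for png in glob.glob(os.path.join(src_root, "images_*", "images", "*.png")):
        shutil.copy(png, dst_dir)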
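
For reference, the per-class ROC-AUC for a 14-label ChestX-ray14 classifier is typically computed with scikit-learn as sketched below. The arrays here are random placeholders; the actual scores in log.txt are produced by the training script itself.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # y_true: (num_samples, 14) binary ground-truth labels for the 14 findings
    # y_score: (num_samples, 14) per-label sigmoid outputs of the model
    y_true = np.random.randint(0, 2, size=(100, 14))
    y_score = np.random.rand(100, 14)

    per_class_auc = roc_auc_score(y_true, y_score, average=None)   # one AUC per finding
    mean_auc = roc_auc_score(y_true, y_score, average="macro")     # common summary metric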
