This is the official implementation 🍎. We aim to implement an effective RGB-infrared multi-modal attack in the physical world.
Install
Python>=3.6.0 is required, with all dependencies in requirements.txt installed, including PyTorch>=1.7:
$ git clone https://github.com/Aries-iai/Cross-modal_Patch_Attack
$ cd Cross-modal_Patch_Attack
$ pip install -r requirements.txt
Data Convention
The data is organized as follows:

dataset
|-- attack_infrared
    |-- 000.png    # images in the infrared modality
    |-- 001.png
    ...
|-- attack_visible
    |-- 000.png    # images in the visible modality
    |-- 001.png
    ...
Note that the infrared and visible images of each pair must share the same filename.
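As a quick sanity check, a small script along the following lines (a minimal sketch, not part of the repository; the directory paths are assumptions based on the layout above) can confirm that every infrared image has a visible counterpart with the same name:

import os

infrared_dir = "dataset/attack_infrared"   # assumed path, adjust to your layout
visible_dir = "dataset/attack_visible"

infrared_names = set(os.listdir(infrared_dir))
visible_names = set(os.listdir(visible_dir))

# files present in only one modality break the pairing assumption
unpaired = infrared_names ^ visible_names
if unpaired:
    print("Unpaired files:", sorted(unpaired))
else:
    print(f"All {len(infrared_names)} image pairs are consistent.")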
Running
python spline_DE_attack.py
Notes
- When you prepare the dataset, you need to change the directory paths in spline_DE_attack.py and DE.py.
- ⚠️ If you want to attack other detection models, you need to replace the yolov3 folder with the folder of the model you want to attack and add detect_infrared.py and detect_visible.py to this new folder so that targets' detection confidence scores are returned to DE (see the sketch after this list).
- The weights of the YOLOv3 models can be downloaded from: https://drive.google.com/file/d/1gpPnHcGRjrJAComQety__dWVwJTWCnTk/view?usp=drive_link.
- A subset of the attacked images can be downloaded from: https://drive.google.com/file/d/1C7mhrr94lXu4qw_P1dX5hwpRuDc4iI-4/view?usp=drive_link.
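For reference, below is a minimal sketch of the kind of entry point detect_infrared.py / detect_visible.py could expose so that DE can read back a target's detection confidence for each candidate patch. The function name, the (N, 6) detection layout, and the model interface are assumptions, not the repository's actual API, and must be adapted to the detector you attack.

import torch

def detect_confidence(model, image, target_class=0):
    """Return the highest confidence the detector assigns to target_class.

    Assumes the detector outputs an (N, 6) tensor of detections in the
    common [x1, y1, x2, y2, conf, cls] layout; adapt to your model's output.
    """
    model.eval()
    with torch.no_grad():
        detections = model(image)
    if detections.numel() == 0:
        return 0.0
    mask = detections[:, 5] == target_class
    if not mask.any():
        return 0.0
    return detections[mask, 4].max().item()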
Please cite as below if you find this repository helpful to your project:
@misc{wei2023unified,
  title={Unified Adversarial Patch for Cross-modal Attacks in the Physical World},
  author={Xingxing Wei and Yao Huang and Yitong Sun and Jie Yu},
  year={2023},
  eprint={2307.07859},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
The dataset is built from LLVIP: A Visible-infrared Paired Dataset for Low-light Vision, and the YOLOv3 code is based on ultralytics-yolov3. Thanks to these great projects.