Code and dataset for the paper: VehiGAN: Generative Adversarial Networks for Adversarially Robust V2X Misbehavior Detection Systems
This implementation has been tested only on the following setup (changes may be needed for other systems):
- Ubuntu 20.04
- Python 3.10
- Nvidia GPU
wget https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
chmod +x Mambaforge-Linux-x86_64.sh
./Mambaforge-Linux-x86_64.sh
git clone https://github.com/shahriar0651/VehiGAN.git VehiGAN
cd VehiGAN
conda env create --file dependency/environment.yaml
conda activate vehigan
If you have already created the environment, update it with:
mamba env update --file dependency/environment.yaml --prune
mamba activate vehigan
Download the dataset folder from IEEE Data Port or from a temporary Google Drive folder, and place it just outside the VehiGAN workspace as follows.
├── datasets
│ └── MisbehaviorX
│ ├── ambients
│ └── attacks
└── VehiGAN
Note: To process the complete dataset, add the parameter dataset.run_type=full to the commands; for unit testing, use dataset.run_type=unit fast_load=True.
Navigate to the src directory and execute the data curation pipeline to process the training and testing datasets. This step organizes raw data into the MBDS format.
cd src
python step_1_run_data_curation_pipeline.py -m dataset=training,testing
Run the training pipeline for various models, specifying the model types (wgan, autoencoder, baseline) and limiting the dataset to a unit sample (dataset.run_type=unit) for testing purposes.
python step_2_run_ind_training_pipeline.py -m models=wgan,autoencoder,baseline dataset=training dataset.run_type=unit fast_load=True
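The core of the WGAN-based detectors is a critic trained to separate benign traffic from generated samples; its score later doubles as a misbehavior score. Below is a minimal, illustrative sketch of one critic update using the original WGAN weight-clipping rule on a linear critic; the repository's models are deep Keras networks, and all names, shapes, and hyperparameters here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_step(w, real, fake, lr=0.05, clip=0.01):
    """One WGAN critic update: ascend on mean f(real) - mean f(fake)
    for the linear critic f(x) = w @ x, then clip the weights."""
    grad = real.mean(axis=0) - fake.mean(axis=0)  # d/dw of the objective
    w = w + lr * grad                             # gradient ascent step
    return np.clip(w, -clip, clip)                # WGAN weight clipping

# Toy stand-ins: "real" benign feature windows vs. generator samples.
real = rng.normal(loc=1.0, size=(64, 8))
fake = rng.normal(loc=-1.0, size=(64, 8))

w = np.zeros(8)
for _ in range(20):
    w = critic_step(w, real, fake)

# After training, the critic score f(x) acts as an anomaly score:
# low scores suggest the input does not look like benign traffic.
score_real = real @ w
score_fake = fake @ w
print(score_real.mean() > score_fake.mean())  # -> True
```

In the actual pipeline the critic is a convolutional network and training uses mini-batches, but the objective and the role of the score are the same.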
Evaluate each trained model on the testing dataset.
python step_3_run_ind_detect_evaluation.py -m models=wgan,autoencoder,baseline dataset=testing dataset.run_type=unit fast_load=True
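Evaluation reduces each model's per-sample anomaly scores to a detection metric such as AUC. As a rough sketch of that reduction (function and variable names are illustrative, not the repository's API), AUC can be computed directly from the two score populations via the Mann-Whitney statistic:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen attack sample scores higher than a benign one."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Toy anomaly scores: attacks should score higher than ambient traffic.
attack_scores = [0.9, 0.8, 0.75]
benign_scores = [0.1, 0.2, 0.8]
print(roc_auc(attack_scores, benign_scores))  # -> 0.8333...
```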
- First, load and merge the individual WGAN model performances (from step 3) and save them in CSV format.
- Then, select the top m models based on their scores (Generator/Discriminator) and evaluate the detection performance of the ensemble model.
python step_4_run_ens_fixed_detect_evaluation.py models=wgan dataset=testing dataset.run_type=unit fast_load=True
Generate adversarial samples to evaluate the robustness of individual models against benign, adversarial, and noisy datasets. The parameters include:
- advCap: the type of evaluation (indv for individual models).
- advFnc: the adversarial attack function (fgsm in this case).
- epsilon: the perturbation level for adversarial samples.
python step_5_run_ind_robustness_evaluation.py -m dataset=testing dataset.run_type=unit fast_load=True advCap=indv evalType=adversarial m_max=5 advRandom=True epsilon=0.00,0.005,0.010,0.015,0.020 advFnc='fgsm'
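FGSM perturbs each input by epsilon in the direction of the loss gradient's sign. As a minimal, illustrative sketch (the paper's attacks target the GAN-based detectors; the logistic-regression "model" and all values here are assumptions for demonstration):

```python
import numpy as np

def fgsm(x, y, w, b, epsilon):
    """x_adv = x + epsilon * sign(grad_x loss). For binary cross-entropy
    on p = sigmoid(w @ x + b), the input gradient is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, 0.1, -0.3])   # a sample labeled as misbehavior (y = 1)
x_adv = fgsm(x, y=1.0, w=w, b=b, epsilon=0.01)

# The perturbation is bounded by epsilon in every coordinate, which is
# what the epsilon=0.00,...,0.020 sweep above controls.
print(np.max(np.abs(x_adv - x)))  # -> 0.01
```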
Test the ensemble model (using the m_max and k_max parameters) on benign datasets.
python step_6_run_ens_random_robust_evaluation.py -m dataset=testing dataset.run_type=unit fast_load=True evalType=benign
Evaluate the ensemble model's robustness on adversarial datasets using various perturbation levels (epsilon) and attack capabilities (advCap).
python step_6_run_ens_random_robust_evaluation.py -m dataset=testing dataset.run_type=unit fast_load=True evalType=adversarial advCap=indv,trans,multi epsilon=0.00,0.005,0.010,0.015,0.020
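One way to read the m_max/k_max parameters (an assumption about the semantics, not the repository's exact implementation) is a randomized ensemble: at inference time, k of the m trained detectors are sampled and their scores averaged, which makes the ensemble a moving target for adversarial inputs. A minimal sketch:

```python
import random

random.seed(42)

# Toy per-model anomaly scores for one test window (m = 5 detectors).
model_scores = {f"wgan_{i:02d}": 0.5 + 0.05 * i for i in range(1, 6)}

def random_ensemble_score(scores, k):
    """Sample k models uniformly at random and average their scores."""
    chosen = random.sample(sorted(scores), k)
    return sum(scores[name] for name in chosen) / k

s = random_ensemble_score(model_scores, k=3)
# Any k-subset average stays within the range of the individual scores.
print(min(model_scores.values()) <= s <= max(model_scores.values()))  # -> True
```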
To visualize and analyze the results, open and run the step_final_visualize_results.ipynb notebook. This generates graphs and provides a summary of the experiment outcomes.
├── datasets
│ └── MisbehaviorX
│ ├── ambients
│ └── attacks
└── VehiGAN
├── artifacts
├── config
│ ├── config.yaml
│ ├── dataset
│ └── models
├── dependency
│ └── environment.yaml
├── docs
├── README.md
├── references
├── results
└── src
├── dataset
├── helper
├── models
├── step_1_run_data_curation_pipeline.py
├── step_2_run_ind_training_pipeline.py
├── step_3_run_ind_detect_evaluation.py
├── step_4_run_ens_fixed_detect_evaluation.py
├── step_5_run_adv_robust_evaluation_pipeline_gan.py
├── step_6_run_ens_robust_evaluation_pipeline_adv.py
└── step_final_visualize_results.ipynb
@inproceedings{shahriar2024vehigan,
title={VehiGAN: Generative Adversarial Networks for Adversarially Robust V2X Misbehavior Detection Systems},
author={Shahriar, Md Hasan and Ansari, Mohammad Raashid and Monteuuis, Jean-Philippe and Chen, Cong and Petit, Jonathan and Hou, Y. Thomas and Lou, Wenjing},
booktitle={The 44th IEEE International Conference on Distributed Computing Systems (ICDCS)},
year={2024}
}
- This project relies on VASP for V2X simulation and attack data generation.
- This project relies on keras.io/examples for the GAN implementation.
This work was supported in part by the US National Science Foundation under grants 1837519, 2235232 and 2312447, and by the Office of Naval Research under grant N00014-19-1-2621.