
Security and Privacy in Machine Learning

Implementations of security and privacy attacks in machine learning: evasion attacks, model stealing, model poisoning, membership inference attacks, and more.

Adversarial Malware Generator

MalwareDetector.py detects malware with a CNN over raw bytes. Adversarial_Malware_Generator.py generates adversarial malware by appending bytes to the end of the file and perturbing them, so the original executable bytes (and hence its functionality) are untouched. Malware_DoNotExecute.exe is the malware sample used as the starting point for the adversarial example.
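A minimal sketch of the append-and-perturb idea: the appended bytes are free variables, so their embeddings can be optimized by gradient descent to lower the detector's malware score. Everything below (TinyByteCNN, append_and_perturb, the hyperparameters) is an illustrative stand-in assuming a PyTorch byte-level CNN; the repo's actual architecture may differ.

```python
import torch
import torch.nn as nn

class TinyByteCNN(nn.Module):
    """Hypothetical byte-level detector: embed bytes, convolve, pool."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 8)
        self.conv = nn.Conv1d(8, 16, kernel_size=7, padding=3)
        self.head = nn.Linear(16, 1)  # logit > 0 -> "malware"

    def forward(self, byte_embeddings):           # (B, L, 8), already embedded
        h = self.conv(byte_embeddings.transpose(1, 2)).relu()
        return self.head(h.mean(dim=2)).squeeze(1)

def append_and_perturb(model, file_bytes, n_append=128, steps=50, lr=0.5):
    """Append n_append bytes and optimize their embeddings to lower the
    malware logit; the original bytes are never modified."""
    body = model.embed(torch.tensor(file_bytes).unsqueeze(0)).detach()
    tail = torch.randn(1, n_append, 8, requires_grad=True)  # free to optimize
    opt = torch.optim.Adam([tail], lr=lr)
    for _ in range(steps):
        logit = model(torch.cat([body, tail], dim=1))
        opt.zero_grad(); logit.sum().backward(); opt.step()
    # Map each optimized embedding back to the nearest real byte value.
    dists = torch.cdist(tail.detach().squeeze(0), model.embed.weight)
    return dists.argmin(dim=1).tolist()           # the adversarial tail bytes
```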

Evasion Attack and Defense

Implementations of targeted and untargeted evasion attacks: random noise, FGSM (Explaining and Harnessing Adversarial Examples), and PGD (Towards Deep Learning Models Resistant to Adversarial Attacks), with their success rates measured against models hardened by FGSM/PGD adversarial training.
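For reference, a minimal sketch of both attacks, assuming a PyTorch classifier with inputs in [0, 1]; the epsilon values are common CIFAR-style defaults, not necessarily the repo's settings.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One signed-gradient step increasing the loss on the true label y
    (untargeted). For a targeted attack, step in the negative direction
    using the target label instead."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterated FGSM with projection back into the L-inf eps-ball around x."""
    x0 = x.clone().detach()
    x_adv = x0 + torch.empty_like(x0).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x0 + (x_adv - x0).clamp(-eps, eps)         # project
    return x_adv.clamp(0, 1).detach()
```

Adversarial training then simply replaces clean training batches with batches produced by these attacks.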

JBDA Model Stealing and Obfuscated Gradients

Jacobian-based Dataset Augmentation (JBDA), from Practical Black-Box Attacks against Machine Learning, trains a surrogate model whose gradients are then used to attack a target model defended with an obfuscated-gradients mechanism. The black-box attack performs better than the white-box one, as noted in Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.
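A rough sketch of one augmentation round: step each point along the surrogate's Jacobian for the label the target assigns, which pushes the substitute data toward the target's decision boundary. Assumes a PyTorch surrogate; target_query is an illustrative stand-in for black-box label access.

```python
import torch

def jbda_round(surrogate, target_query, X, lam=0.1):
    """Expand the substitute training set with points near the target's
    decision boundary (one Jacobian-based augmentation round)."""
    X = X.clone().detach().requires_grad_(True)
    labels = target_query(X.detach())             # black-box labels, (B,) long
    logits = surrogate(X)                         # (B, C)
    logits.gather(1, labels.unsqueeze(1)).sum().backward()
    X_new = (X + lam * X.grad.sign()).detach()    # step along the Jacobian
    return torch.cat([X.detach(), X_new])         # augmented substitute set
```

The surrogate is retrained on the augmented set (labeled by querying the target) after each round; once it tracks the target well, white-box attacks like FGSM/PGD on the surrogate transfer to the defended target.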

Membership Inference Attack

A brief implementation of Membership Inference Attacks Against Machine Learning Models; a good inference rate was achievable with only two shadow models on CIFAR-10.
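A minimal sketch of the shadow-model pipeline, assuming PyTorch shadow models and a scikit-learn logistic regression as the attack model (an illustrative choice; the paper trains one attack model per class):

```python
import torch
from sklearn.linear_model import LogisticRegression

def build_attack_dataset(shadow_models, member_sets, nonmember_sets):
    """Label each shadow model's own training points 1 ("in") and its
    held-out points 0 ("out"), using the softmax vector as features."""
    feats, labels = [], []
    for model, members, nonmembers in zip(shadow_models, member_sets,
                                          nonmember_sets):
        with torch.no_grad():
            for x, y in [(members, 1), (nonmembers, 0)]:
                feats.append(model(x).softmax(dim=1))
                labels.append(torch.full((len(x),), y))
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# The attack model then predicts membership from the target's outputs:
# X_att, y_att = build_attack_dataset(shadows, member_sets, nonmember_sets)
# attack = LogisticRegression(max_iter=1000).fit(X_att, y_att)
# is_member = attack.predict(target(x).softmax(dim=1).detach().numpy())
```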

Poisoning Attack

A clean-label poisoning attack based on Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks.
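The crafting step minimizes a feature-collision objective: the poison should match the target in feature space while staying close to the base image in input space, so it keeps the base's (clean) label. A minimal sketch, assuming feature_extractor exposes the victim's penultimate-layer features (a hypothetical handle); Adam here stands in for the paper's forward-backward splitting.

```python
import torch

def craft_poison(feature_extractor, base, target, beta=0.1, steps=200, lr=0.01):
    """Minimize ||f(p) - f(t)||^2 + beta * ||p - b||^2: look like the base
    in pixel space, like the target in feature space."""
    poison = base.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    with torch.no_grad():
        target_feat = feature_extractor(target)
    for _ in range(steps):
        loss = (feature_extractor(poison) - target_feat).pow(2).sum() \
               + beta * (poison - base).pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            poison.clamp_(0, 1)                   # keep a valid image
    return poison.detach()
```

Injecting the crafted poison (still correctly labeled) into the victim's training set then causes the retrained model to misclassify the chosen target instance.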
