
Awesome AI Security

A curated list of AI security resources, inspired by awesome-adversarial-machine-learning and awesome-ml-for-cybersecurity.

Legend (resource types):

- Research
- Slides
- Video
- Website / Blog post
- Code
- Other

Keywords: Adversarial examples · Evasion · Poisoning · Feature selection · Misc · Code · Links

Adversarial examples

- Explaining and Harnessing Adversarial Examples
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
- Delving into Transferable Adversarial Examples and Black-box Attacks
- On the (Statistical) Detection of Adversarial Examples
- The Space of Transferable Adversarial Examples
- Adversarial Attacks on Neural Network Policies
- Adversarial Perturbations Against Deep Neural Networks for Malware Classification
- Crafting Adversarial Input Sequences for Recurrent Neural Networks
- Practical Black-Box Attacks against Machine Learning
- Adversarial examples in the physical world
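The first entry above, "Explaining and Harnessing Adversarial Examples", introduced the fast gradient sign method (FGSM): perturb an input by a small step in the direction of the sign of the loss gradient, x' = x + ε · sign(∇ₓL). A minimal sketch on a toy logistic-regression model (the weights, input, and ε below are arbitrary illustrative values, not from any paper):

```python
import numpy as np

# Hypothetical fixed logistic-regression "victim" model.
w = np.array([0.5, -1.2, 0.8])
b = 0.1

def loss_and_grad(x, y):
    """Cross-entropy loss and its gradient w.r.t. the INPUT x (not the weights)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))          # predicted probability of class 1
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w                             # dL/dx for logistic regression
    return loss, grad_x

x = np.array([1.0, 2.0, -0.5])   # clean input with true label y = 1
y = 1.0
eps = 0.25                       # perturbation budget (L-infinity)

loss_clean, grad = loss_and_grad(x, y)
x_adv = x + eps * np.sign(grad)  # the FGSM step
loss_adv, _ = loss_and_grad(x_adv, y)
```

Because each coordinate moves exactly ε in the locally loss-increasing direction, `loss_adv` exceeds `loss_clean` while the perturbation stays within the ε ball in the max norm.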

Evasion

- Query Strategies for Evading Convex-Inducing Classifiers
- Evasion attacks against machine learning at test time
- Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers
- Looking at the Bag is not Enough to Find the Bomb: An Evasion of Structural Methods for Malicious PDF Files Detection
- Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers

Poisoning

- Poisoning Behavioral Malware Clustering
- Efficient Label Contamination Attacks Against Black-Box Learning Models

Feature selection

- Is Feature Selection Secure against Training Data Poisoning?

Misc

- Can Machine Learning Be Secure?
- On The Integrity Of Deep Learning Systems In Adversarial Settings
- Stealing Machine Learning Models via Prediction APIs
- Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
- Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- A Methodology for Formalizing Model-Inversion Attacks
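"Stealing Machine Learning Models via Prediction APIs" (listed above) includes an equation-solving extraction attack that is simplest against linear models: if the API returns real-valued scores, d + 1 queries suffice to recover a d-dimensional model exactly. A sketch under that assumption, with a made-up secret model standing in for the remote API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim": a secret linear model behind a prediction API.
secret_w = np.array([2.0, -1.0, 0.5])
secret_b = -0.3

def api_predict(x):
    """The attacker's only access: query in, raw real-valued score out."""
    return secret_w @ x + secret_b

d = 3
# Query d + 1 random points, then solve [X | 1] @ [w; b] = y for w and b.
X = rng.standard_normal((d + 1, d))
A = np.hstack([X, np.ones((d + 1, 1))])          # augment with a bias column
y = np.array([api_predict(x) for x in X])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)    # least squares; exact for full-rank A
stolen_w, stolen_b = theta[:d], theta[d]
```

Random Gaussian queries make the augmented system full rank with probability 1, so the recovered `stolen_w` and `stolen_b` match the secret parameters up to numerical precision; the paper extends the idea to logistic regression, trees, and neural networks under confidence-score access.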

Code

- CleverHans - Python library to benchmark machine learning systems' vulnerability to adversarial examples
- Model extraction attacks on Machine-Learning-as-a-Service platforms

Links

- EvadeML - Machine Learning in the Presence of Adversaries
- Adversarial Machine Learning - PRA Lab
