wanglouis49/pytorch-adversarial_box

Adversarial Box - PyTorch Adversarial Attack and Training

Luyu Wang and Gavin Ding, Borealis AI

Motivation

CleverHans comes in handy for TensorFlow, but PyTorch has no comparable library at the moment. Foolbox supports multiple deep learning frameworks, yet it lacks many major implementations (e.g., black-box attacks, the Carlini-Wagner attack, adversarial training). We feel there is a need for an easy-to-use and versatile library to help our fellow researchers and engineers.

A much more up-to-date version, AdverTorch, is now available; you can find most of the popular attacks there. This repo is no longer maintained.

Usage

from adversarialbox.attacks import FGSMAttack

# Build an FGSM adversary around a trained PyTorch model, with perturbation magnitude epsilon
adversary = FGSMAttack(model, epsilon=0.1)
# Generate adversarial examples for a batch of inputs X_i with labels y_i
X_adv = adversary.perturb(X_i, y_i)
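
The same perturb call can also be dropped into a training loop to do adversarial training. The sketch below is not code from this repo: it reuses only the constructor and perturb call shown above, while the SmallNet model, the MNIST data loader, the 50/50 loss mixing, and the tensor/NumPy conversion are assumptions added to keep the example self-contained.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from adversarialbox.attacks import FGSMAttack

class SmallNet(nn.Module):
    # Tiny MNIST classifier, included only to make the sketch self-contained.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.fc2(F.relu(self.fc1(x)))

model = SmallNet()
adversary = FGSMAttack(model, epsilon=0.1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=128, shuffle=True)

for X_i, y_i in train_loader:
    # Loss on the clean batch.
    loss_clean = F.cross_entropy(model(X_i), y_i)

    # Adversarial examples via the documented perturb() call; whether it
    # expects tensors or NumPy arrays may depend on the library version.
    X_adv = adversary.perturb(X_i, y_i)
    X_adv = torch.as_tensor(X_adv, dtype=X_i.dtype)

    # Mix clean and adversarial losses (a common 50/50 scheme).
    loss_adv = F.cross_entropy(model(X_adv), y_i)
    loss = 0.5 * (loss_clean + loss_adv)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()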

Examples

  1. MNIST with FGSM (code)
  2. Adversarial Training on MNIST (code)
  3. MNIST using a black-box attack (code)

List of supported attacks

  1. FGSM (a plain-PyTorch sketch of the idea appears after this list)
  2. PGD
  3. Black-box
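
For reference, FGSM is a single-step attack: nudge the input by epsilon in the direction of the sign of the loss gradient. The function below is a generic plain-PyTorch sketch of that idea, not the implementation inside adversarialbox; the clamp to [0, 1] assumes image inputs scaled to that range.

import torch
import torch.nn.functional as F

def fgsm_reference(model, X, y, epsilon=0.1):
    # Generic FGSM sketch (hypothetical helper, not adversarialbox internals):
    # one signed-gradient step of size epsilon, clamped to the [0, 1] image range.
    X = X.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(X), y)
    loss.backward()
    X_adv = X + epsilon * X.grad.sign()
    return X_adv.clamp(0, 1).detach()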
