White-box adversarial attacks on images

In the first post, we introduced the concept of adversarial attacks and contextualized them in the case of images. In this post, we are going to explore the first category of attacks, namely, white-box attacks.

Under this setting, the adversary has full access to and knowledge of the model, that is, its architecture, its parameters, the gradients and the loss with respect to the input, as well as any defense mechanisms that may be in place.

It is thus not particularly difficult to attack models under these conditions, and common methods exploit the gradient of the loss with respect to the input to generate adversarial examples. More specifically, we are going to learn about the following attacks:

  • FGSM;
  • FGM;
  • I-FGSM;
  • MI-FGSM;
  • PGD;
  • DeepFool;
  • C&W;
  • JSMA;
  • BPDA;
  • BSA.
Different types of adversarial attacks

Overview

This post is part of a collection including the following 6 posts:

  1. Threat models;
  2. White-box attacks (current post);
  3. Black-box transfer-based attacks;
  4. Black-box score-based attacks;
  5. Black-box decision-based attacks;
  6. Defence methods.

FGSM

Fast Gradient Sign Method (FGSM) is a basic one-step gradient-based approach that finds an adversarial example in a single step: it takes the gradient of the loss function L(x, y) with respect to the input x and adds its sign, scaled by Ɛ, back to x to produce the adversarial example xadv:

x_{adv}=x+\epsilon \cdot sign(\nabla_x L(x, y)),

where ∇xL(x,y) is the gradient of the loss with respect to the input x, and the perturbation satisfies the l∞ norm bound by design.

This method works because adding a perturbation to the legitimate input so as to maximize its loss with respect to the correct label y decreases the likelihood that y is predicted given the input xadv.

Mathematically, it moves the adversarial example in one direction toward the border between the true class and some other class.

FGSM attack on GoogLeNet, which was trained on the ImageNet classification dataset
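For concreteness, below is a minimal PyTorch sketch of FGSM. The names model, x, y, and eps are illustrative: they assume a classifier returning logits, a batch of images with pixel values in [0, 1], integer labels, and an l∞ budget Ɛ.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step attack: perturb x by eps in the direction of the sign
    # of the gradient of the loss with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range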

FGM

We can consider Fast Gradient Method (FGM) as a generalization of FGSM where the only difference is that FGM doesn’t apply the sign operator to the calculated gradient. It can be defined as

x_{adv}=x+\epsilon \cdot \frac{\nabla_x L(x, y)}{||\nabla_x L(x, y)||_p},

Note that in this case the gradient is divided by its lp norm to prevent the adversarial noise from breaking the constraint ||xadv − x||p ≤ Ɛ.

Although the sign operator can be regarded as a normalization step, it also changes the most effective direction, which can make FGSM less effective than FGM. Given their similarity, all the methods that build on FGSM can also be applied to FGM and vice versa.
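Under the same assumptions as the FGSM sketch above, FGM only changes how the gradient is turned into a perturbation; the l2 norm is used here as an example of an lp norm, and the input is assumed to be a 4-D batch of images.

import torch
import torch.nn.functional as F

def fgm(model, x, y, eps):
    # One-step attack: move along the lp-normalized gradient (here p = 2)
    # so that the perturbation satisfies ||x_adv - x||_2 <= eps.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    g = x.grad
    g_norm = g.flatten(1).norm(p=2, dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    x_adv = x + eps * g / g_norm
    return x_adv.clamp(0, 1).detach()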

I-FGSM

Basic iterative methods iteratively apply fast gradient multiple times with a small step size α. Thus, the iterative version of FGSM (I-FGSM) can be expressed as

x_{adv}^{t+1}=x_{adv}^{t}+\alpha \cdot sign(\nabla_x L(x_{adv}^{t}, y)),

where xadv0 = x (the legitimate example). There are several ways to make the adversarial example satisfy the norm bound: for example, xadv can be clipped into the Ɛ-vicinity of x after each step, or α can be set to Ɛ/T with T being the number of iterations.
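As a rough sketch (same illustrative names as above), the iterative version with α = Ɛ/T and clipping into the Ɛ-vicinity of x could look like this:

import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps, n_iter):
    # Iterative FGSM with step size alpha = eps / T and clipping of
    # x_adv back into the eps-vicinity of x after every step.
    alpha = eps / n_iter
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # l_inf clipping
        x_adv = x_adv.clamp(0, 1)
    return x_adv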

It has been shown that iterative methods produce much finer perturbations that do not destroy the image even for larger Ɛ, while at the same time fooling the classifier at a higher rate.

Their drawbacks are that iterative methods are somewhat slower than their one-step counterparts and, more importantly, that they transfer poorly to other models, transferability being a fundamental property in black-box attacks.

Evaluation of the success rates of I-FGSM attacks on Inception V3, Inception V4, Inception ResNet V2, and ResNet-152

MI-FGSM

Momentum iterative gradient-based methods integrate momentum into iterative fast gradient method to generate adversarial examples satisfying the lp norm bound.

Traditionally, momentum is a technique for accelerating gradient descent algorithms by accumulating a velocity vector in the gradient direction of the loss function across iterations.

However, this concept can also be applied to the generation of adversarial examples with great benefit. As in momentum-based optimization, the first step consists of updating the momentum term gt by accumulating the velocity vector in the gradient direction as

g_{t+1}=\mu \cdot g_{t} + \frac{\nabla_x L(x_{adv}^{t}, y)}{||\nabla_x L(x_{adv}^{t}, y)||_p},

where µ is a decay factor. Next, the adversarial example xadvt is perturbed in the direction of the sign of gt+1 with a step size α as

x_{adv}^{t+1}=x_{adv}^{t}+\alpha \cdot sign(g_{t+1}).

In each iteration, the current gradient ∇xL(xadvt, y) is normalized by its own lp norm, because the authors noticed that the scale of the gradients varies in magnitude across iterations.
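A sketch of the resulting procedure, with the same illustrative names as before; the l1 norm is used here for the normalization, but any lp norm follows the same pattern.

import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps, n_iter, mu=1.0):
    # Momentum iterative FGSM: accumulate normalized gradients in g
    # and step in the direction of sign(g).
    alpha = eps / n_iter
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # normalize the current gradient by its l1 norm (per example)
        grad = grad / grad.abs().flatten(1).sum(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        g = mu * g + grad
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv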

Evaluation of the success rates of I-FGSM and MI-FGSM attacks on Inception V3, Inception V4, Inception ResNet V2, and ResNet-152

PGD

PGD is another iterative algorithm that exploits projected gradient descent to iteratively craft adversarial examples as

x_{adv}^{t+1}=\Pi_{x+S}(x_{adv}^{t}+\alpha \cdot sign(\nabla_x L(x_{adv}^{t}, y))),

where S is the set of all the allowed perturbations and Π denotes the projection onto it. Projected gradient descent performs one standard gradient step and then clips all the coordinates so that they remain within the allowed box.

Moreover, in order to explore a large part of the loss landscape, the algorithm is restarted from many random points within the l∞-balls around data points taken from the evaluation set.
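A minimal sketch of l∞ PGD with a random start inside the Ɛ-ball (again, model, x, y, eps, alpha, and n_iter are illustrative names):

import torch
import torch.nn.functional as F

def pgd(model, x, y, eps, alpha, n_iter, random_start=True):
    # Projected gradient descent on the loss, restricted to the
    # l_inf ball of radius eps around the clean input x.
    x_adv = x.clone().detach()
    if random_start:
        # restart from a random point inside the eps-ball
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # projection onto x + S
        x_adv = x_adv.clamp(0, 1)
    return x_adv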

Thanks to this large number of observations, the authors found that all the local maxima reached by PGD have similar loss values, both for normally trained and for adversarially trained networks, which suggests that robustness against the PGD adversary yields robustness against all first-order adversaries, such as attacks based on (stochastic) gradient descent.

This conclusion also leads to the fact that as long as the adversary only uses gradients of the loss function with respect to the input, it will not find significantly better local maxima than PGD.

DeepFool

DeepFool is an iterative algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers.

At each iteration t, it approximates the distance between xadvt and the complement of the convex polyhedron P which defines the region of the space where f(x)=y. Consequently, at each iteration of the algorithm, the perturbation vector that reaches the boundary of the polyhedron P is computed, and the current estimate is updated.

The iterations continue until f(xadvt) ≠ f(x), so as to craft an adversarial example with a minimal perturbation. DeepFool is thus a greedy algorithm that is not guaranteed to converge to the optimal perturbation.

However, the authors observed that in practice the algorithm yields very small perturbations, which are believed to be good approximations of the minimal ones.
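The sketch below shows a simplified multi-class DeepFool for a single image: it linearizes the classifier at each step and takes the smallest step that crosses the nearest linearized boundary. The names model, x, and n_classes are illustrative, and the small constants and the overshoot factor are implementation details rather than part of the method's definition.

import torch

def deepfool(model, x, n_classes, max_iter=50, overshoot=0.02):
    # Simplified DeepFool for a single image x of shape (1, C, H, W).
    orig_label = model(x).argmax(dim=1).item()
    x_adv = x.clone().detach()
    r_total = torch.zeros_like(x)
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig_label:
            break  # the current estimate is already misclassified
        grads = [torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
                 for k in range(n_classes)]
        best_ratio, best_step = None, None
        for k in range(n_classes):
            if k == orig_label:
                continue
            # distance to the linearized boundary between orig_label and class k
            w_k = grads[k] - grads[orig_label]
            f_k = (logits[k] - logits[orig_label]).item()
            ratio = abs(f_k) / (w_k.norm().item() + 1e-8)
            if best_ratio is None or ratio < best_ratio:
                best_ratio = ratio
                best_step = (abs(f_k) + 1e-4) / (w_k.norm() ** 2 + 1e-8) * w_k
        r_total = r_total + best_step
        x_adv = (x + (1 + overshoot) * r_total).detach()
    return x_adv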

An example of adversarial perturbations. The original image x is classified as “whale”. First row: the image x + r classified as “turtle” and the corresponding perturbation r computed by DeepFool. Second row: the image classified as “turtle” and the corresponding perturbation computed by the fast gradient sign method. DeepFool leads to a smaller perturbation

C&W

The method proposed by Carlini & Wagner relies on the initial formulation of adversarial examples and formally defines the problem of finding an adversarial instance for an image x as follows:

minimize \: ||\delta||_p+c \cdot L(x+\delta, y),

under the condition that (x+δ)∈[0,1]n. Here L is an objective function chosen such that the model's prediction on x+δ differs from y if and only if L(x+δ, y) ≤ 0. Note that x+δ is equivalent to xadv by the definition of an adversarial example.

The value c>0 is a constant that is often chosen as the smallest value of c for which the resulting solution xadv has L(xadv, y)<=0. Moreover, to ensure that the modification yields a valid image, the adversarial noise δ is constrained such that 0<=xi+δi<=1 ∀ i (this method assumes image pixels to be in the range [0,1]).

One way to do this is to replace x+δ with (1+tanh(w))/2, so that the optimization problem defined above becomes an unconstrained minimization problem in w. Finally, the loss is iteratively optimized with the Adam optimizer, which the authors found to be the most effective at quickly finding adversarial examples.
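A simplified, untargeted sketch of the l2 variant with the tanh change of variables is shown below. It assumes pixel values in [0, 1] and uses a common hinge-style objective that becomes non-positive exactly when the model misclassifies; the constant c is kept fixed here instead of being searched for.

import torch

def cw_l2(model, x, y, c=1.0, steps=200, lr=0.01, kappa=0.0):
    # Optimize in w-space so that x_adv = (tanh(w) + 1) / 2 is always a valid image.
    w = torch.atanh((x * 2 - 1).clamp(-0.999999, 0.999999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2
        logits = model(x_adv)
        true_logit = logits.gather(1, y.view(-1, 1)).squeeze(1)
        other_logit = logits.scatter(1, y.view(-1, 1), float('-inf')).max(dim=1).values
        # objective term: zero once the true logit falls below the best other logit
        f = (true_logit - other_logit + kappa).clamp_min(0)
        l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
        loss = (l2 + c * f).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return ((torch.tanh(w) + 1) / 2).detach()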

The authors also showed that this method is particularly effective at breaking models trained with defensive distillation, a defense that had been considered very robust against most gradient-based attack methods.

JSMA

JSMA, the Jacobian-based Saliency Map Attack, is an attack optimized under the l0 distance. A saliency map rates each input feature xi (e.g. each pixel) on how influential it is in causing the model to predict a particular class.

A saliency map S+(xi, c) is computed using the output gradients to measure how much each xi positively correlates with the class y = c and how negatively it correlates with all other classes y’ ≠ c. An attacker can exploit this saliency map by targeting an adversarial class y’ that does not match the true class label y of a given sample x.

By increasing a few high-saliency pixels xi according to S+(xi,c=y’), the modified image xadv will have an increased prediction confidence f(xadv) for the adversarial class y’, and thus might result in misclassification.
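A simplified single-pixel greedy step illustrating the idea (the full attack modifies pairs of pixels and iterates until success or a budget is exhausted; model, x, target, and theta are illustrative names):

import torch

def jsma_step(model, x, target, theta=1.0):
    # One greedy step of a simplified JSMA: compute a positive saliency
    # map for the target class and increase the most salient input feature.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)[0]
    n_classes = logits.numel()
    # Jacobian of the logits with respect to the input (one row per class)
    jac = torch.stack([torch.autograd.grad(logits[k], x, retain_graph=True)[0][0]
                       for k in range(n_classes)])
    d_target = jac[target]
    d_others = jac.sum(dim=0) - d_target
    # S+(x_i, target): positive only where the target score increases
    # and the total score of the other classes decreases
    saliency = torch.where((d_target > 0) & (d_others < 0),
                           d_target * d_others.abs(),
                           torch.zeros_like(d_target))
    idx = saliency.flatten().argmax()
    x_adv = x.detach().clone()
    x_adv.view(-1)[idx] = (x_adv.view(-1)[idx] + theta).clamp(0, 1)
    return x_adv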

There exist also four variants of this attack denoted as JSMA+Z, JSMA-Z, JSMA+F, and JSMA-F, based on the choices of adopting (S+) (positive saliency map) or (S-) (negative saliency map), and using Z (gradients of the logits) versus F (gradients of the outputs).

This method’s results are useful for finding adversarial examples both quickly and with limited perceptual differences.

Illustration of JSMA+F algorithm

BPDA

All previous methods work only under the condition that the gradients are exactly known. However, defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks.

To break those defences, Backward Pass Differentiable Approximation (BPDA) attacks approximate derivatives by computing the forward pass normally and computing the backward pass using a differentiable approximation of the non-differentiable layers.

It works by iteratively generating adversarial examples with PGD confined to a specified l∞ ball, using an optimization objective as in C&W. To approximate ∇xf(x), it first finds a differentiable approximation g(x) such that g(x) ≈ fi(x), where fi(x) is a non-differentiable layer (e.g. a randomized transformation of the input).

Afterward, ∇xf(x) can be approximated by performing the forward pass through f as usual but replacing fi(x) with g(x) on the backward pass. One minor drawback is that, since each individual gradient descent step is not exactly correct, applying BPDA often requires more iterations of gradient descent.
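In frameworks with automatic differentiation, this can be implemented with a custom backward pass. The sketch below assumes the non-differentiable defense is an input preprocessing step that preserves the input shape and uses the identity as its differentiable approximation g(x) = x; preprocess is an illustrative name.

import torch

class BPDAWrapper(torch.autograd.Function):
    # Forward: apply the true (non-differentiable) preprocessing.
    # Backward: pretend it was the identity, i.e. use g(x) = x.
    @staticmethod
    def forward(ctx, x, preprocess):
        with torch.no_grad():
            return preprocess(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # no gradient for the preprocess argument

def defended_logits(model, preprocess, x):
    # Gradients flow "through" the preprocessing as if it were the identity,
    # so PGD-style attacks can still be run against the defended model.
    return model(BPDAWrapper.apply(x, preprocess))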

BSA

A new class of attacks known as blind-spot attacks (BSA) exploits blind spots in the empirical distribution of the training data to fool adversarially trained models. “Blind spots” are input images that are far enough from any training example in some embedding space but still lie within the ground-truth data distribution, so they are well recognized by humans and correctly classified by the model.

Note that blind-spot images are not adversarial images themselves but constitute a convenient starting point for crafting them. Blind-spot images can easily be found by applying a few simple transformations to the ground-truth images, such as slightly changing their contrast and background, so that they are still correctly classified by the target model.

Next, traditional methods like the C&W attack are applied to these transformed images to find their corresponding adversarial examples. An interesting property of blind spots is that, if the transformation parameters are tuned carefully, the model’s performance on the transformed images rarely decreases, yet adversarial examples for them can be found easily.
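As a rough sketch, a blind-spot starting point can be obtained with a simple scale-and-shift of the pixel values, after which any standard attack (such as the C&W sketch above) is run on the transformed image; alpha and beta are illustrative values that would need tuning so that the transformed image is still correctly classified.

import torch

def blind_spot_transform(x, alpha=0.9, beta=0.05):
    # Scale and shift pixel values to move the image away from the
    # empirical training distribution while keeping it recognizable.
    return (alpha * x + beta).clamp(0, 1)

# hypothetical usage: transform first, then attack the transformed image
# x_blind = blind_spot_transform(x)
# x_adv = cw_l2(model, x_blind, y)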

Continues in the third part (black-box transfer-based attacks).

