In the previous post, we reviewed a series of black-box transfer-based adversarial attacks where the adversary has to generate adversarial examples against a substitute model. In this post, we are going to explore the second category of black-box attacks, namely, **black-box score-based attacks**.

Under this setting, the attacked model's gradients are not accessible. The only knowledge about the target model consists of the input images and the corresponding output confidence scores. However, the gradient can still be estimated by querying the target model and retrieving the labels' confidence scores.

More specifically, we are going to learn about the following attacks:

- ZOO;
- NES;
- SPSA;
- P-RGF;
- N-ATTACK;
- TREMBA.

### Overview

This post is part of a **collection** including the following 6 posts:

- Threat models;
- White-box attacks;
- Black-box transfer-based attacks;
- Black-box score-based attacks (current post);
- Black-box decision-based attacks;
- Defence methods.

### ZOO

The Zeroth Order Optimization (**ZOO**) method, inspired by the formulation of the C&W attack, uses a symmetric difference quotient to estimate the gradient with respect to the input *x* as

$$\hat{g}_i := \frac{\partial f(x)}{\partial x_i} \approx \frac{f(x + h e_i) - f(x - h e_i)}{2h}$$

where *h* is a small constant and *e*_{i} is a one-hot vector with 1 at the *i*-th position. Given *x* ∈ ℝ^{p}, 2*p* queries are required to estimate the gradient for all *p* coordinates, which would be infeasible for large images.

To reduce the attack space, a function D can be used to reduce the permissible dimension of the adversarial noise from p to m, with m < p, and then upscale it to a size-*p* image using an operator such as bilinear interpolation.

This method can be further improved with a hierarchical attack scheme, where a series of transformations D_{1}, D_{2}, … with dimensions *m*_{1}, *m*_{2}, … is used to gradually increase *m* over the iterations of the optimization process.

Regarding the optimization algorithm, Adam is typically used with stochastic coordinate descent and importance sampling, so that the coordinate descent updates are selectively applied to the most important pixels inferred from the adversarial noise. (link)
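As a minimal sketch of the coordinate-wise estimator above (function and variable names are my own, not from the paper), the following snippet estimates the gradient of a black-box loss `f` one coordinate at a time:

```python
import numpy as np

def zoo_gradient(f, x, h=1e-4):
    """Estimate the gradient of a black-box scalar loss f at x with the
    symmetric difference quotient, one coordinate at a time (2p queries)."""
    grad = np.zeros_like(x, dtype=np.float64)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=np.float64)
        e.flat[i] = 1.0  # one-hot vector with 1 at the i-th position
        grad.flat[i] = (f(x + h * e) - f(x - h * e)) / (2.0 * h)
    return grad
```

For a 299×299×3 ImageNet input this loop would issue roughly half a million queries per gradient, which is exactly why ZOO resorts to dimensionality reduction and coordinate-wise updates.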

### NES

The Natural Evolution Strategies (**NES**) attack is a black-box gradient estimation technique that employs PGD with the estimated gradient to construct adversarial examples.

Rather than maximizing an objective function as in ZOO, NES maximizes the expected value of the loss function L under the search distribution *π*(*x*_{t+1}|*x*_{t}), with *x*_{t+1}=*x*_{t}+σδ, where δ∼N(0, I) and σ defines the search variance. The resulting gradient estimate is

$$\nabla_x \mathbb{E}[L(x)] \approx \frac{1}{\sigma n} \sum_{i=1}^{n} \delta_i \, L(x_t + \sigma \delta_i), \qquad \delta_i \sim \mathcal{N}(0, I)$$

Thus, the expected gradient is obtained by sampling *n* points from random Gaussian noise around the current image *x*_{t}. Next, in each iteration, PGD is performed using the sign of the estimated gradient to update the current *x*_{t+1} as

$$x_{t+1} = \Pi_S\!\left(x_t + \alpha \cdot \mathrm{sign}(\hat{g}_t)\right)$$

where Π_{S} represents a projection onto the *l*_{p}-ball. This method has been specifically designed to work under query-limited settings, where the attacker has a query upper-bound T and aims to cause a targeted misclassification in T queries or fewer. In this case, N queries are used to estimate each gradient, performing T/N steps of PGD. (link)
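A sketch of the NES estimator with antithetic sampling, followed by one sign-PGD step (a simplified illustration, not the authors' implementation):

```python
import numpy as np

def nes_gradient(loss, x, sigma=0.1, n=100, rng=None):
    """NES gradient estimate: average n antithetic Gaussian directions
    weighted by the black-box loss evaluated at the perturbed points."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n // 2):
        delta = rng.standard_normal(x.shape)
        g += delta * loss(x + sigma * delta)
        g -= delta * loss(x - sigma * delta)  # antithetic pair
    return g / (n * sigma)

def pgd_step(x, grad, alpha, x_orig, eps):
    """One sign-PGD step, then projection onto the l_inf ball around x_orig."""
    return np.clip(x + alpha * np.sign(grad), x_orig - eps, x_orig + eps)
```

With a query budget T and n queries per estimate, the attack runs T/n such PGD steps.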

### SPSA

**SPSA** is another iterative black-box gradient estimation method based on the C&W optimization. Its main difference from other gradient estimation methods is that the gradients are estimated with the SPSA optimization algorithm, which is well-suited for high-dimensional optimization problems, even in the case of noisy objectives.

The noise δ is not sampled from a Gaussian as in NES but from a Rademacher distribution (each component is ±1 with equal probability), which is required for the estimator to be well-defined. Comparing SPSA with NES and ZOO, the authors found that the specific choice of gradient-free optimizer is relatively unimportant and that all the methods produce reliable adversarial examples.

In fact, with sufﬁcient computation, both NES and SPSA decrease the accuracy to 0% on ImageNet (link).
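A sketch of the SPSA estimator (illustrative, with hypothetical names): all coordinates are perturbed simultaneously with Rademacher noise, so each two-sided finite difference costs only two queries regardless of dimension:

```python
import numpy as np

def spsa_gradient(loss, x, delta=0.01, n=64, rng=None):
    """SPSA gradient estimate: average n two-sided finite differences
    taken along random Rademacher (+/-1) directions."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n):
        v = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher noise
        # since v_i is +/-1, dividing by v_i equals multiplying by v_i
        g += (loss(x + delta * v) - loss(x - delta * v)) / (2.0 * delta) * v
    return g / n
```

The Rademacher choice matters because the estimator divides by the perturbation components, which must be bounded away from zero; Gaussian noise would make that ratio explode.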

### P-RGF

The authors of the P-RGF method argued that transfer-based black-box attacks often don't perform very well because the gradient of the surrogate model points to a non-adversarial region of the target model. At the same time, pure score-based attacks require a tremendous number of queries to perform well.

Thus, they developed a new method called Prior-guided Random Gradient-Free (**P-RGF**), which improves black-box adversarial attacks by taking advantage of both a transfer-based prior, given by the gradient of a surrogate model, and the query information used to further refine it.

Besides the transfer-based prior, this algorithm can also incorporate a data-dependent prior to reduce query complexity. Compared with NES and SPSA, P-RGF leads to much higher attack success rates, with improvements ranging between 20% and 40% in many cases.

Since the math behind the P-RGF method is quite involved, its details are not reported here but can be found in the original paper (link).

### N-ATTACK

The authors of **N-ATTACK** realized that gradient-estimation-based methods often don't perform very well because some neural networks are not smooth, so the estimate of the gradient is not reliable enough.

Thus, instead of estimating the gradient, N-ATTACK estimates a probability density over the *l*_{p}-ball S centered around an input *x* to the neural network. Assuming that the distribution is such that the loss is small, a sample drawn from the distribution is likely to be adversarial. The parameters of the distribution can be updated as

$$\mu_{t+1} = \mu_t - \frac{\eta}{b\sigma} \sum_{k=1}^{b} L\big(g(z_k)\big)\, \varepsilon_k, \qquad z_k = \mu_t + \sigma \varepsilon_k, \quad \varepsilon_k \sim \mathcal{N}(0, I)$$

In the above formula, *z*∼N(μ,σ²) is a normal distribution whose mean μ has to be learned. The function g(*z*) is computed as g(*z*)=1/2(tanh(g_{0}(*z*))+1), where g_{0}(*z*) maps the sample *z* to the same dimension as *x* (e.g. *z* lies in the space of the CIFAR-10 images and g_{0}(*z*) upscales it to the ImageNet resolution with bilinear interpolation).

We also define the difference δ=clip_{p}(g(*z*)−*x*), so that the projection is Π_{S}(g(*z*))=*x*_{t+1}=*x*_{t}+δ, where Π is the projection operator. Finally, *b* is the number of samples per iteration and the loss function L is the C&W loss.

Since the initialization of the distribution parameters plays a key role in the run time of the algorithm, a regression network is also trained that takes a sample *x* as input and outputs μ_{0} to initialize N-ATTACK.

The advantage of this method is that, once the distribution has been learned, the model no longer needs to be queried: a virtually infinite number of adversarial examples can be drawn from the learned distribution (link).
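A toy sketch of one N-ATTACK distribution update (a simplified version with score normalization; names are illustrative, not from the paper):

```python
import numpy as np

def nattack_step(loss, mu, sigma=0.1, b=40, lr=0.02, rng=None):
    """One update of the mean mu of the search distribution N(mu, sigma^2):
    sample b points, score them with the black-box loss, and move mu
    down the NES estimate of the expected loss."""
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal((b,) + mu.shape)
    scores = np.array([loss(mu + sigma * e) for e in eps])
    # z-score normalisation keeps the step size stable across iterations
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    grad = np.tensordot(scores, eps, axes=1) / (b * sigma)
    return mu - lr * grad
```

Iterating this update drives the distribution mean toward a region where sampled points achieve low loss, i.e. are likely adversarial.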

### TREMBA

The TRansferable EMbedding-based Black-box Attack (**TREMBA**) learns a low-dimensional embedding using a pre-trained source model F_{s}, and then performs an efficient search within the embedding space to attack an unknown target network F_{t}.

The first phase consists of training an encoder-decoder G that can effectively generate adversarial perturbations for the source network from a low-dimensional embedding space. The encoder E takes an input *x* and outputs a latent vector *z*=E(*x*), where *dim*(*z*)<*dim*(*x*).

The decoder D takes *z* as input and outputs an adversarial perturbation δ=εtanh(D(*z*)) with *dim*(δ)=*dim*(*x*). In this way, G is trained such that its generated perturbations δ fool the source network F_{s}, minimizing the hinge loss used in the C&W attack.

In the second phase, NES is used to attack F_{t}. However, instead of searching in the input space, TREMBA searches in the embedding space *z*. Given an input *x* with label *y* and a starting point *z*_{0}=E(*x*), the gradient at *z*_{t} can be estimated as

$$\hat{g} = \frac{1}{n\sigma^2} \sum_{k=1}^{n} L\big(F_t(x + \delta(v_k)),\, y\big)\,(v_k - z_t)$$

where δ(*v*)=εtanh(D(*v*)) and *v*_{k} is sampled from the Gaussian distribution N(*z*_{t},σ^{2}). Note that no explicit projection is required, since δ already satisfies ||δ||_{∞}<ε by design.

The benefit TREMBA introduces is that searching over *z* amounts to searching for adversarial examples in a lower-dimensional space where adversarial perturbations are less concentrated; thus, it is easier to find adversarial patterns (link).
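A toy sketch of the embedding-space search (the decoder and all names here are stand-ins, not the paper's architecture): because the perturbation is εtanh(D(z)), every candidate is feasible by construction:

```python
import numpy as np

def tremba_step(loss, decoder, x, z, eps=0.03, sigma=0.1, n=20, lr=10.0, rng=None):
    """One NES step in the embedding space: estimate the gradient of the
    black-box loss w.r.t. z with antithetic samples and descend it.
    The perturbation eps*tanh(decoder(z)) satisfies ||.||_inf < eps."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(z)
    for _ in range(n // 2):
        u = rng.standard_normal(z.shape)
        l_plus = loss(x + eps * np.tanh(decoder(z + sigma * u)))
        l_minus = loss(x + eps * np.tanh(decoder(z - sigma * u)))
        g += (l_plus - l_minus) / (2.0 * sigma) * u  # antithetic pair
    return z - lr * g / (n // 2)
```

Since dim(*z*) is much smaller than dim(*x*), each NES estimate needs far fewer samples to be useful than a search in pixel space would.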

### Continue on (5th part)

### References

- ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
- Black-box Adversarial Attacks with Limited Queries and Information
- Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
- Improving Black-box Adversarial Attacks with a Transfer-based Prior
- NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
- Black-Box Adversarial Attack with Transferable Model-based Embedding