
A geometry-inspired decision-based attack

Yujia Liu∗
University of Science and Technology of China, Hefei, China
[email protected]

Seyed-Mohsen Moosavi-Dezfooli
École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
[email protected]

Pascal Frossard
École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
[email protected]

Abstract

Deep neural networks have recently achieved tremendous success in image classification. Recent studies have however shown that they are easily misled into incorrect classification decisions by adversarial examples. Adversaries can even craft attacks by querying the model in black-box settings, where no information about the model is released except its final decision. Such decision-based attacks usually require lots of queries, while real-world image recognition systems might actually restrict the number of queries. In this paper, we propose qFool, a novel decision-based attack algorithm that can generate adversarial examples using a small number of queries. The qFool method can drastically reduce the number of queries compared to previous decision-based attacks while reaching the same quality of adversarial examples. We also enhance our method by constraining adversarial perturbations in a low-frequency subspace, which makes qFool even more computationally efficient. Altogether, we manage to fool commercial image recognition systems with a small number of queries, which demonstrates the actual effectiveness of our new algorithm in practice.

1. Introduction

Deep neural networks have led to major breakthroughs in recent years and have been developed into powerful tools in various applications, including computer vision [15, 14], speech [19, 11], health-care [13, 17], etc. Despite their huge success, it has been shown that these networks are vulnerable to deliberate attacks. An adversary can generate adversarial examples [26] that look very similar to the original images, yet can mislead a deep neural network into giving an incorrect output.

∗ Work done during a summer internship at EPFL-LTS4.

Many existing attacks focus on white-box settings [8, 4], where adversaries have full access to all information of the model. They can attack models by computing gradients of the loss function. White-box attacks are interesting to derive a better understanding of deep neural networks, but they are unrealistic in a more practical setting, where adversaries have no knowledge about model parameters. In this case, called the black-box setting, some recent works focus on score-based attacks [21, 5], which make a large number of queries to the target model and exploit the output probabilities to generate adversarial examples. Most existing black-box attacks, however, do not consider that queries are usually costly in terms of both time and money, especially for commercial models. In fact, some commercial models only provide users with the final decision (top-1 label), and not even any output probabilities. This scenario corresponds to decision-based settings, where attacks [3] are actually the most challenging for adversaries.

In this paper, we propose a novel decision-based attack called qFool that computes adversarial examples with only a few queries. We consider both non-targeted and targeted attacks, where we advantageously exploit the fact that the decision boundary is locally flat around adversarial examples. In non-targeted attacks, we estimate the gradient direction of the decision boundary by relying only on the top-1 label of each query result. We then search for an adversarial example directly from the original image in the estimated direction. In targeted attacks, we make iterative gradient estimations at multiple points along the boundary and search for adversarial examples with the help of a target image. Both our non-targeted and targeted attacks can mislead networks with only a few queries. Moreover, our experiments show that searching for perturbations in


a low-dimensional subspace is even more efficient in query-limited settings. We thus propose to extend qFool to subspaces in order to further reduce the number of queries. Finally, we use qFool to fool a well-known commercial image recognition service with only a small number of queries to the model.

Our main contributions are the following:

• We propose a novel method, qFool, for generating adversarial examples when only having access to the top-1 label decision of a trained deep network model.

• We demonstrate that qFool needs fewer queries than previous decision-based attacks in both non-targeted and targeted settings.

• We enhance the proposed qFool with properly chosen subspace constraints, thus further reducing the number of queries while maintaining a high fooling rate.

• We apply qFool to Google Cloud Vision, an example of a commercially deployed black-box machine learning system. We demonstrate that such commercial classifiers can be fooled with a small number of queries.

2. Related Work

Szegedy et al. [26] were the first to show the vulnerability of deep networks to adversarial examples. Depending on how much adversaries know about networks, adversarial attacks are divided into white-box and black-box attacks.

In the white-box setting, adversaries have all the knowledge about the network itself. It is a more favorable situation for adversaries to attack.

Gradient-based attacks. Most white-box attacks need the gradient of the loss function of the network. Goodfellow et al. [8] proposed the fast gradient sign method (FGSM), which explores the gradient direction. The Jacobian saliency map attack (JSMA) by Papernot et al. [25] uses the saliency map to compute the adversarial perturbation. DeepFool by Moosavi-Dezfooli et al. [20] generates an adversarial example by exploring the nearest decision boundary and crossing it to deceive the network. The C&W attack by Carlini and Wagner [4] solves a more efficient optimization problem under three distance measures with the Adam optimizer. Houdini by Cisse [6] is an approach for generating adversarial examples specifically tailored for task losses, and it can be applied to a range of applications. Baluja and Fischer [1] trained Adversarial Transformation Networks to generate adversarial examples with a very large diversity against a target network or set of networks.

All these white-box attacks need to compute gradients of models, but sometimes attackers might have no access to the model or it may contain non-differentiable operations. So black-box attacks have gained more attention recently.

In black-box attacks, adversaries do not have knowledge of the network. They only have access to the predictions obtained by querying it. Black-box attacks can roughly be divided into three families: transfer-based, score-based and decision-based attacks.

Transfer-based attacks. Szegedy et al. [26] found that adversarial examples can transfer between models, thus enabling black-box attacks on deployed models. Transfer-based attacks aim to train a surrogate model by exploiting predictions from an underlying target model. Papernot et al. [22] showed that adversarial examples crafted on surrogate models can mislead the target model. Liu et al. [18] developed an ensemble transfer-based attack and showed its high success rate against Clarifai.com, a commercial image classification service.

Score-based attacks. In score-based attacks, adversaries only rely on the predicted scores of a model to generate adversarial examples. Narodytska et al. [21] proposed the local-search attack, which measures a model's sensitivity to individual pixels. Chen et al. [5] proposed Zeroth Order Optimization (ZOO), which approximates gradient information by the symmetric difference quotient to generate adversarial examples. Hayes et al. [9] used the predicted scores to train an attacker model for black-box attacks.

Decision-based attacks. Decision-based attacks rely only on the class decision (top-1 label) of the model. A simple method is the additive Gaussian noise attack [23], which probes the robustness of a model to i.i.d. Gaussian noise. Adversaries can also use uniform noise, salt-and-pepper noise, contrast reduction or Gaussian blur. But perturbations computed by these simple methods are usually very perceptible. Brendel et al. [3] proposed the Boundary attack, which can generate much smaller adversarial perturbations effectively. It starts from a large adversarial perturbation and iteratively reduces the norm of the perturbation by making the perturbed data point walk along the decision boundary while ensuring that it stays in the adversarial region.

Most black-box attacks usually require a large number of queries to the network model, while this is an important constraint in practice. Li et al. [16] introduced an active learning strategy to significantly reduce the number of queries for transfer-based attacks. For score-based attacks, Ilyas et al. [12] applied natural evolution strategies and used two to three orders of magnitude fewer queries than previous methods. Bhagoji et al. [2] proposed random feature grouping and principal component analysis, which can achieve both high query efficiency and a high success rate. Finally, in decision-based attacks, since adversaries get less information from each query, the required number of queries is particularly large. Reducing the number of queries in decision-based attacks is challenging and in line with practical settings. In this paper, we propose a novel decision-based attack that greatly improves query efficiency, even against commercial black-box systems.

3. Non-targeted decision-based attacks

We now describe our new decision-based attack algorithm. We consider a trained model with parameters θ that can be represented as fθ : x → y, where x ∈ R^d is a normalized input image and y is the final decision of the model, i.e., the top-1 classification label.

We first consider the non-targeted attack problem, where an adversary computes an adversarial perturbation v to change the estimated label of an image x0, i.e., fθ(x0 + v) ≠ fθ(x0). Furthermore, there shall be no perceptible difference between the perturbed and original image, i.e., the Euclidean norm ||v||2 ≤ τ for some small τ. The adversarial example is constructed by making queries to the unknown deep network. Theoretically, the more queries, the more information one can get about the target model, and the smaller the adversarial perturbation. We aim to use as few queries as possible such that the norm of the perturbation is smaller than a certain threshold τ.

3.1. Query-efficient design

The basic idea of our decision-based attack algorithm is the following. Fawzi et al. [7] have shown that the decision boundary has quite small curvature in the vicinity of adversarial examples. This indicates that some geometric properties of the decision boundary can be approximated in a similar way at different neighboring points around adversarial examples. We therefore exploit this observation to compute the adversarial perturbation v. The direction of the minimal adversarial perturbation v for the input sample x0 is theoretically the gradient direction of the decision boundary at xadv. As we do not have full access to the classifier's structure in the black-box scenario, it is impossible to compute this direction directly. But, since the boundary is quite flat, the gradient of the classifier at point xadv is almost the same as the gradient at other neighboring points on the boundary. Hence, the direction of v can be approximated well by ξ, the gradient estimated at a neighboring point P. We can then search for an adversarial example xadv from x0 along the approximated direction ξ. The basic logic is illustrated in Fig. 1. More details about each step are given below.

(1) Initial point. First, we find a small random noise r to perturb the original image x0 and identify a starting point P on the boundary. Formally,

P := x0 + r*,  where r* = argmin_r ||r||2  s.t.  fθ(x0 + r) ≠ fθ(x0),  r ∼ N(0, σ)    (1)

To do that, we add random Gaussian noises ri ∼ N(0, σi) (i = 1, 2, ..., k) to the original image and make queries to the system, until the image P = x0 + rj is misclassified. In order to move the starting point closer to the decision boundary, we then do a binary search in the direction of rj to find a small Gaussian noise r that places the perturbed image on the boundary.
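To make the initialization concrete, here is a minimal Python sketch of step (1), assuming a hypothetical black-box oracle query_label(x) that returns the model's top-1 label for a NumPy image array; the noise-scale schedule and tolerance are illustrative choices rather than values from the paper.

```python
import numpy as np

def find_starting_point(x0, query_label, sigmas=(0.01, 0.02, 0.05, 0.1, 0.2),
                        tol=1e-3, rng=np.random.default_rng(0)):
    """Step (1) of qFool: perturb x0 with Gaussian noise until the label
    changes, then binary-search along that noise direction so the starting
    point P lies approximately on the decision boundary."""
    y0 = query_label(x0)
    # Escalate the noise scale until a query is misclassified.
    for sigma in sigmas:
        r = rng.normal(0.0, sigma, size=x0.shape)
        if query_label(x0 + r) != y0:
            break
    else:
        raise RuntimeError("no adversarial noise found; enlarge sigmas")
    # Binary search on the noise magnitude: lo stays benign, hi stays adversarial.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if query_label(x0 + mid * r) != y0:
            hi = mid
        else:
            lo = mid
    return x0 + hi * r   # P: just on the adversarial side of the boundary
```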


Figure 1: A simple illustration of the three steps to compute an adversarial example by qFool. (1) Compute starting point P with a random perturbation r. (2) Estimate gradient direction ξ at P by n perturbation vectors η1, ..., ηn. (3) Search an adversarial example xadv with a perturbation v in the estimated gradient direction ξ.

(2) Gradient estimation. The gradient of the boundary ∇f(P) is estimated using only the top-1 label of the classifier. We randomly generate n small vectors η1, ..., ηn with the same norm to perturb P and query the classifier to get the predicted label f(P + ηi). We define:

zi = { −1  if f(P + ηi) = f(x0)
     { +1  if f(P + ηi) ≠ f(x0),    i = 1, 2, ..., n    (2)

Theoretically, the vectors (η1, ..., ηn) are likely to be symmetrically distributed on both sides of the decision boundary. For large enough n, (1/n) Σ_{i=1}^{n} zi ηi converges to the normal of the decision boundary, as the components of the ηi's along the decision boundary cancel each other out. Hence, the direction of the gradient ∇f(P) can be estimated as

ξ = Σ_{i=1}^{n} zi ηi / ||Σ_{i=1}^{n} zi ηi||2.    (3)
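A minimal sketch of this estimator, under the same assumption of a hypothetical query_label oracle (the probe norm omega is an illustrative value; in the paper it is adapted by Eq. (4) below):

```python
import numpy as np

def estimate_gradient(P, x0_label, query_label, n=100, omega=1e-2,
                      rng=np.random.default_rng(0)):
    """Step (2) of qFool: estimate the boundary normal at a boundary point P
    from n label-only queries, following Eqs. (2)-(3)."""
    acc = np.zeros_like(P)
    n_plus = 0
    for _ in range(n):
        eta = rng.normal(size=P.shape)
        eta *= omega / np.linalg.norm(eta)   # all probes share the same norm
        z = 1.0 if query_label(P + eta) != x0_label else -1.0   # Eq. (2)
        n_plus += z > 0
        acc += z * eta                        # tangential components cancel out
    xi = acc / np.linalg.norm(acc)            # Eq. (3): estimated direction
    return xi, n_plus / n                     # direction, fraction of +1 labels
```

The returned fraction of +1 labels is what the norm-adaptation rule of Eq. (4) below consumes.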

(3) Directional search. Due to the flatness of the boundary, the direction of the gradient at point xadv can be approximated by the one at point P, i.e., ∇f(P) ≈ ξ, which can be computed by Eq. (3). We can thus find the adversarial example xadv by perturbing the original image x0 in the direction of ξ until we hit the decision boundary. This only costs a few queries to the classifier by using a binary search algorithm.
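The directional search itself can be sketched as an exponential-growth line search followed by bisection, again assuming the hypothetical query_label oracle (the cap beta_max is an illustrative safeguard, not part of the paper):

```python
import numpy as np

def search_along(x0, xi, query_label, beta_max=10.0, tol=1e-3):
    """Step (3) of qFool: binary search along the estimated direction xi for
    the smallest step beta such that x0 + beta * xi is adversarial."""
    y0 = query_label(x0)
    # Grow the step until the perturbed image crosses the boundary.
    hi = tol
    while query_label(x0 + hi * xi) == y0:
        hi *= 2.0
        if hi > beta_max:
            raise RuntimeError("direction never crosses the boundary")
    lo = hi / 2.0   # benign by construction of the growth loop
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if query_label(x0 + mid * xi) == y0:
            lo = mid
        else:
            hi = mid
    return x0 + hi * xi   # adversarial example x_adv on the boundary
```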

The qFool design described above is highly parallelizable, which can bring important speed-ups. In the gradient estimation step, where the vast majority of queries is consumed, we can issue the queries in parallel, as they are independent of each other. On the contrary, in all previous decision-based attacks, the algorithms are usually executed in an iterative way, and the results are highly dependent on the ones of the previous iterations. Hence, our algorithm can not only reduce the number of queries, but also save considerable computation time thanks to parallelization.
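For instance, when the model is exposed as a remote API, the n probes of one estimation round can be dispatched concurrently; a small hypothetical sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_labels(images, query_label, workers=8):
    """The n gradient-estimation probes are independent of each other, so
    they can be issued concurrently against a remote classification API."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(query_label, images))
```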

3.2. qFool algorithm

Algorithm 1: Non-targeted qFool

Input: original image x0, initial estimation number nu, a threshold ε.
1:  i = 0
2:  Search starting point P0
3:  while Σ_{j=0}^{i} n(j) < n do
4:      n(i) = 0
5:      x(i)_adv = x′_adv = Pi
6:      while ||x(i)_adv − Pi||_{n(i)+nu} ≤ ε · ||x′_adv − Pi||_{n(i)} and Σ_{j=0}^{i} n(j) < n do
7:          Make nu new queries
8:          x′_adv = x(i)_adv
9:          n(i) = n(i) + nu
10:         Estimate gradient ξi at Pi with n(i) queries in full/sub space
11:         Binary search for x(i)_adv = x0 + β · ξi
12:     end
13:     Pi+1 = x(i)_adv, i = i + 1
14: end
15: return x(i−1)_adv

The distance between the starting point P and the original point x0 directly impacts the quality of the adversarial perturbations computed in Section 3.1. If we find a better starting point P that is closer to the boundary, the final perturbation is generally smaller and more likely to be imperceptible.

Therefore, we design the qFool attack as an iterative algorithm where the starting point P is iteratively updated. The total number of queries n is divided into s iterations, i.e., n = n(0) + n(1) + ... + n(s−1), where s and the n(i) (i = 0, ..., s−1) are found adaptively. At the i-th iteration, we query the model n(i) times. These queries permit to estimate the gradient direction ξi at starting point Pi, and to search for the adversarial example x(i)_adv in the direction of ξi. We then update the starting point Pi+1 to x(i)_adv in the next iteration.

The allocation of the number of queries n = n(0) + n(1) + ... + n(s−1) across the gradient estimations plays an important role in the efficiency of the algorithm. Here we find n(i) (i = 0, 1, ..., s−1) in a heuristic and greedy way. In one iteration, we first initialize the number of queries with a small nu and compute an adversarial perturbation following the three steps described in Section 3.1. We then gradually increase the number of queries in this iteration to get a smaller perturbation. This process stops when the reduction rate of the perturbation is less than a preset threshold or the number of used queries reaches n. With this method, we can find an appropriate number of queries for each iteration n(i) (i = 0, 1, ..., s−1).
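Putting these pieces together, the following condensed sketch shows the outer loop of Alg. 1, reusing find_starting_point, estimate_gradient and search_along from the sketches above; the query bookkeeping is simplified with respect to Alg. 1, and n_u and eps are illustrative values.

```python
import numpy as np

def qfool_nontargeted(x0, query_label, n_total=10000, n_u=50, eps=0.9):
    """Outer loop of non-targeted qFool: iteratively move the starting point
    P to the latest adversarial example and re-estimate the gradient with a
    growing probe budget until the perturbation stops shrinking."""
    y0 = query_label(x0)
    P = find_starting_point(x0, query_label)
    x_adv, used = P, 0
    while used < n_total:
        n_i, prev_norm = 0, np.inf
        while used < n_total:
            n_i += n_u
            used += n_i   # simplified accounting of the fresh probes
            xi, _ = estimate_gradient(P, y0, query_label, n=n_i)
            x_adv = search_along(x0, xi, query_label)
            norm = np.linalg.norm(x_adv - x0)
            if norm > eps * prev_norm:   # reduction rate below threshold
                break
            prev_norm = norm
        P = x_adv   # restart from a boundary point closer to x0
    return x_adv
```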

The norm of the noise vectors in the gradient estimation step is another important parameter. It should be small enough that the boundary between the adversarial and non-adversarial regions can be considered approximately linear. If so, we expect to get an equal split of +1 and −1 for the zi in Eq. (2). We dynamically adjust the norm ω(k) in the k-th iteration according to the norm and the percentage of zi = +1 in previous iterations. We formalize this adjustment in Eq. (4):

ω(k+1) = ω(k) · (1 + φ(k) · ρ(k)),   φ(k) = −sign(ρ(k)) · φ(k−1)    (4)

where ω(0) = ω0 and φ(0) = −1. φ(k) controls the increase or decrease of ω(k), and ρ(k) is the difference between 0.5 and the proportion of zi that are equal to +1. In this way, we can always ensure that the noise vectors are evenly distributed on both sides of the boundary.
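A direct transcription of this update rule (a sketch; p_plus denotes the observed fraction of zi = +1 in the last estimation round):

```python
import numpy as np

def update_noise_norm(omega_k, phi_prev, p_plus):
    """One step of Eq. (4): steer the probe norm omega so that the z_i of
    Eq. (2) split roughly evenly between +1 and -1 across the boundary.
    Initial values: omega_0 and phi_0 = -1."""
    rho = 0.5 - p_plus                  # deviation from an even split
    phi = -np.sign(rho) * phi_prev      # phi(k) = -sign(rho(k)) * phi(k-1)
    return omega_k * (1.0 + phi * rho), phi   # rho = 0 leaves omega unchanged
```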

3.3. qFool in subspace

In order to further reduce the number of queries, we now propose to concentrate on perturbations that belong to a properly chosen subspace, in order to simplify the search for adversarial perturbations. Indeed, it has been shown that adversarial perturbations that cause data misclassification can be found within specific subspaces [7]. Hence, in the gradient estimation step, instead of using n noise vectors η1, ..., ηn of dimension d, we randomly choose n noise vectors in a pre-defined subspace of dimension m ≪ d. The other steps of the algorithm remain the same. The non-targeted qFool algorithm is described in Alg. 1.

Generating adversarial perturbations in subspaces rather than the full data space clearly reduces the complexity of querying the model. The more symmetrically the sampled noise vectors are distributed, the better the components in directions other than the gradient direction cancel out, and thus the more accurate the estimated gradient direction. Given a limited number of sampled noise vectors, these vectors are distributed more sparsely in the full space than in a lower-dimensional subspace. Thus, the probability that these vectors are symmetrically distributed on both sides of the gradient direction is larger in a subspace than in the full space. This leads to a smaller adversarial perturbation in a subspace. Alternatively, given a threshold on the norm of the perturbation, the subspace method requires fewer queries.



Figure 2: An illustration of qFool in targeted attacks.

4. Targeted attacks

We now extend our method to the targeted attack problem. In this case, an adversary tries to compute an adversarial perturbation v that changes the label towards a specific target class t, i.e., fθ(x0 + v) = t. The norm of v should be small enough. The only information available from the classification system is the top-1 label, and the number of queries is assumed to be limited.

Similarly to the non-targeted attack, we need a starting point in the target adversarial class, close to the decision boundary. But it is almost impossible to change the label of the original image to a specific target label by adding random noise. We therefore utilize an arbitrary image xt belonging to the target class t to find a proper starting point P.

As the distance between x0 and xt can be quite large, the decision boundary between the original and the target adversarial region cannot be treated as flat. We cannot find a good estimation around P as we did in the non-targeted attack case. In the targeted attack, we therefore use a new way to compute and update the starting point Pi in each iteration, as shown in Fig. 2. We first perform a linear interpolation in the direction of (xt − x0) to find a starting point P0,

P0 := min_α ( x0 + α · (xt − x0)/||xt − x0||2 )   s.t.  fθ(P0) = t    (5)

The gradient direction ξ0 at P0 can be estimated with the same method as for non-targeted attacks. We then get a point Q0, which is slightly away from the boundary, by adding a small perturbation at P0 in this direction, i.e., Q0 = P0 + δ · ξ0. In the direction of (Q0 − x0), P1 can be found with the method in Eq. (5). Starting from Qi (i = 0, 1, ..., m), another point Pi+1 near the boundary can be found quickly. The points Pi thus walk along the boundary until the computed adversarial perturbation converges, i.e., (Pm − x0) ∥ ∇f(Pm), and finally Pm is the adversarial example we obtain.

Algorithm 2: Targeted qFool

Input: original image x0, target image xt, initial estimation number nu, a threshold ε.
1:  i = 0
2:  Compute direction ν0 = (xt − x0)/||xt − x0||2
3:  Search starting point P0 in direction ν0 by Eq. (5)
4:  while Σ_{j=0}^{i} n(j) < n do
5:      n(i) = 0
6:      x(i)_adv = x′_adv = Pi
7:      while ||x(i)_adv − Pi||_{n(i)+nu} ≤ ε · ||x′_adv − Pi||_{n(i)} and Σ_{j=0}^{i} n(j) < n do
8:          Make nu new queries
9:          x′_adv = x(i)_adv
10:         n(i) = n(i) + nu
11:         Estimate gradient ξi at Pi with n(i) queries in full/sub space
12:         Qi = Pi + δ · ξi
13:         νi = (Qi − x0)/||Qi − x0||2
14:         Search x(i)_adv in the direction νi by Eq. (5)
15:     end
16:     Pi+1 = x(i)_adv, i = i + 1
17: end
18: return x(i−1)_adv

Alg. 2 describes the algorithm in detail. We note that the targeted qFool attack can also be executed either in the full data space or limited to a pre-defined subspace.
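The following sketch shows one iteration of this boundary walk under the same hypothetical query_label oracle; the probe count, probe norm omega and step size delta are illustrative values.

```python
import numpy as np

def search_toward_target(x0, anchor, target, query_label, tol=1e-3):
    """Eq. (5): binary search on the segment from x0 toward `anchor` (x_t in
    the first iteration, Q_i afterwards) for the point closest to x0 that
    the model still labels as the target class t."""
    lo, hi = 0.0, 1.0                   # the anchor itself must be in class t
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if query_label(x0 + mid * (anchor - x0)) == target:
            hi = mid
        else:
            lo = mid
    return x0 + hi * (anchor - x0)      # P_i, just on the target side

def targeted_step(x0, P_i, target, query_label, n=100, omega=1e-2, delta=1e-2,
                  rng=np.random.default_rng(0)):
    """One iteration of targeted qFool: estimate the boundary normal at P_i
    (here z_i = +1 when a probe lands in the target class), push P_i slightly
    into the target region to get Q_i, then line-search from x0 toward Q_i."""
    acc = np.zeros_like(P_i)
    for _ in range(n):
        eta = rng.normal(size=P_i.shape)
        eta *= omega / np.linalg.norm(eta)
        z = 1.0 if query_label(P_i + eta) == target else -1.0
        acc += z * eta
    xi = acc / np.linalg.norm(acc)      # estimated gradient direction at P_i
    Q_i = P_i + delta * xi              # slightly off the boundary
    return search_toward_target(x0, Q_i, target, query_label)
```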

5. Experimental Results

We evaluate qFool using 250 images randomly chosen from the validation set of ImageNet on the networks VGG-19 [24], ResNet-50 [10] and Inception-v3 [25]. We measure the median of the average norm of all adversarial perturbations after a specific number of queries, defined by

M_X(n) = median_{xi ∈ X} ( (1/m) ||v(xi, n)||2^2 ),    (6)

where v(xi, n) ∈ R^m is the adversarial perturbation generated for sample xi using n queries to the model. The median is taken over the images in the dataset X.
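In code, this metric is a short reduction over the set of computed perturbations (a sketch; the argument is a list of NumPy perturbation arrays v(xi, n)):

```python
import numpy as np

def median_mse(perturbations):
    """Eq. (6): per-sample mean squared norm (1/m)*||v||_2^2, then the
    median over the evaluation set X."""
    return float(np.median([np.sum(v ** 2) / v.size for v in perturbations]))
```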

5.1. Non-targeted attacks

For the non-targeted attack, Fig. 3 shows samples of adversarial perturbations on different models. We compare qFool with the Boundary attack [3] on ImageNet; the results can be seen in Fig. 4.


Figure 3: Original images and adversarial perturbations by qFool in the full space and subspace on different models (columns: original image; full-space and subspace perturbations for VGG-19, ResNet-50 and Inception-v3) for a “frog” and a “vase” (20,000 queries). The resulting perturbed images are classified as “snake” (first row) and “goblet” (second row). In addition, qFool in the subspace gets a smaller perturbation than in the full space for the same number of queries.

Figure 4: Distance (the median of the MSE in Eq. (6)) between the adversarial example and the original image over the number of queries for non-targeted qFool and the Boundary attack [3] against different models: (a) VGG-19, (b) ResNet-50, (c) Inception-v3.

When the distances between the adversarial examples generated by the two methods and the original images are similar, qFool always spends fewer queries. qFool converges much faster: if the number of queries is limited to a small value (e.g., 10,000), our method achieves much better performance.

In addition, compared with qFool in the full space, the subspace version can reduce the number of queries even more. Specifically, in this work, we use a 2-dimensional Discrete Cosine Transform (DCT) basis to define the low-dimensional subspace. Let S = {ψi,j}, i, j = 0, ..., √m − 1, be the basis of the subspace of dimension m. When estimating gradients in the subspace, we use n noise vectors η*i = Sγi, where γi ∼ N(0, Im), instead of vectors ηi ∼ N(0, Id).
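Concretely, such a low-frequency probe can be drawn by filling only the top-left block of a DCT coefficient grid and applying the inverse transform; a sketch using SciPy's orthonormal inverse DCT for a single 224×224 channel (for RGB images, one such plane would be drawn per channel):

```python
import numpy as np
from scipy.fft import idctn

def low_freq_noise(d=224, sqrt_m=75, rng=np.random.default_rng(0)):
    """Sample eta* = S @ gamma with gamma ~ N(0, I_m): place the m = sqrt_m^2
    Gaussian coefficients in the low-frequency (top-left) corner of a d x d
    DCT coefficient grid and transform back to the image domain."""
    coeffs = np.zeros((d, d))
    coeffs[:sqrt_m, :sqrt_m] = rng.normal(size=(sqrt_m, sqrt_m))  # gamma
    return idctn(coeffs, norm="ortho")    # spatial-domain noise vector
```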

In our experiments, the dimension d of the full space is 224×224 or 299×299, and we use 250 images from the ImageNet training set to find the best subspace. From Fig. 5, the best subspace dimensions m range from 70×70 to 90×90. We choose √m = 75, i.e., we use a low-frequency subspace S of dimension m = 75×75. Using perturbation vectors in the subspace to estimate the gradient of the boundary achieves higher efficiency, as shown in Fig. 4.

We also compare qFool with several white-box methods on 50 images when applying no more than 10,000 queries, as shown in Table 1. FGSM and DeepFool are gradient-based attacks, which unsurprisingly need fewer predictions (queries). C&W, a gradient-based attack as well, achieves the smallest perturbation with more queries. Among decision-based attacks, qFool obtains much smaller perturbations than the Boundary attack. Besides, qFool has an obvious advantage in the number of iterations, which can bring significant speed-ups because of its high degree of parallelization.

In addition, we observe in our experiments that, for 250 different images, the number of iterations is no more than 4, and the allocation of the n queries across consecutive iterations computed by our algorithm follows a similar rule: in the first few iterations, the numbers of queries are small, while the last iteration needs a much larger number of queries, as shown in Fig. 6.


Table 1: Comparison with different non-targeted attacks on the total number of backward gradient computations (#gradients), the total number of forward predictions (#predictions) and the total number of iterations (#iterations). (FGSM∗ is a modified version of FGSM that tries to find the smallest perturbation in the direction of the sign of the gradient.)

                  #gradients  #predictions  #iterations  MSE (VGG-19)  MSE (ResNet-50)  MSE (Inception-v3)
FGSM∗             1           ∼69           ∼69          1.28e-6       6.37e-7          1.20e-5
DeepFool          ∼20         ∼20           ∼2           2.76e-7       1.71e-7          6.12e-7
C&W               10000       10000         10000        2.22e-7       2.26e-7          3.25e-7
Boundary attack   0           ∼10000        ∼500         4.61e-4       1.15e-3          1.79e-3
qFool-fullspace   0           ∼10000        ∼3           1.16e-5       1.03e-4          2.48e-4
qFool-subspace    0           ∼10000        ∼3           6.15e-6       4.14e-5          1.80e-4

Figure 5: ℓ2-distance between adversarial examples and original images in the ImageNet training set over different subspace dimensions √m (from 40 to 140) in non-targeted qFool attacks against three classifier models. (Results for VGG-19 are multiplied by 10 to make the dynamic range roughly the same.)

This indicates that the basic logic of the qFool algorithm is reasonable and effective. Allocating small numbers of queries to the first several iterations brings the starting point closer to the theoretical adversarial example, which leads to a more locally flat boundary at the starting point and thus a more accurate approximation of the adversarial perturbation direction.

5.2. Targeted attacks

In targeted attacks, we use 250 samples from ImageNet. For each of them, we draw a target label randomly and pick one target image belonging to the target label randomly. We keep this original-target image pair consistent in all experiments of targeted attacks. The results are shown in Fig. 8. We can see that, when the number of queries is not too large, our algorithm is more efficient than the Boundary attack [3]. Although the two curves cross at 100,000 queries, qFool converges much faster. In the first several thousand queries, qFool finds a much smaller adversarial perturbation than the Boundary attack. Therefore, in settings where the number of queries is limited, our algorithm is more efficient.

Figure 6: Distribution of 10,000 queries over the different iterations for 250 images. We have grouped the images according to the number of used iterations. More than 30% of the images only need 2 iterations, and most of their queries appear in the second iteration. The other images need three or four iterations, and the last iteration similarly contains the most queries.

Some samples are shown in Fig. 9.

Figure 7: Attacking Google Cloud Vision. (a) original: ‘dog like mammal’ (b) adversarial: ‘hair’ (1500 queries) (c) original: ‘flag’ (d) adversarial: ‘sky’ (500 queries)

Different target labels or target images obviously result in different numbers of queries, as shown in Table 2. We list results at around 100,000 queries. They show that the MSE of adversarial examples is not positively correlated with the distance between the original and target image: two images that are close in the spatial domain may not be similar in the feature domain.

5.3. Attacks on commercial classifiers

In order to demonstrate the effectiveness of qFool, we apply our non-targeted attack algorithm to one sample commercial system, namely the Google Cloud Vision image recognition service.


Figure 8: Distance between the adversarial example and the original image versus the number of queries for targeted qFool and the Boundary attack [3] on three different network models: (a) VGG-19, (b) ResNet-50, (c) Inception-v3.

Figure 9: qFool in targeted attacks on VGG-19. (a) original: ‘cardoon’ (b) target: ‘vase’ (c) adversarial perturbation in full space (d) adversarial example in full space (e) adversarial perturbation in subspace (f) adversarial example in subspace. We use 50,000 queries in this experiment. The MSE is 4.40e-4 in full space and 1.12e-4 in subspace.

Table 2: Targeted qFool attacks on different target labels and target images. Our goal is to fool the “school bus” to be classified as three different classes: “French loaf”, “golden retriever” and “sunflower”. For each class, we select three different images randomly. The original distance is the ℓ2-distance between the original image and the target image; the adversarial distance is the ℓ2-distance between the original image and the generated adversarial example after approximately 100,000 queries of the targeted qFool attack. (Across the nine original-target pairs, the original distances range from 0.122 to 0.211, the numbers of queries from 99,127 to 99,998, and the adversarial distances from 3.08e-4 to 8.91e-4; the per-pair assignment is not recoverable from the extracted layout.)

nition service, where users can get some tags associated with the input image and the corresponding confidence scores after querying the model. But we only use the final decision (top-1 tag) in order to attack this classifier.

Adversaries would ideally want to use the smallest number of queries to get an adversarial example with relatively good visual quality. Since qFool performs particularly well when the number of queries is not very large, it is well suited to attacking commercial classifiers. Fig. 7 shows some adversarial examples generated with only 500 ∼ 1500 queries. For the first ‘dog’ image, the MSE is 7.53e-5, which is imperceptible to the human eye, and we use only 1500 queries to the model. Its new top-1 label is ‘hair’, which suggests that the generated perturbation misleads the model into ignoring the overall content of the image and focusing on local information instead. For the second ‘flag’ image, we use only 500 queries and the MSE reaches 2.66e-6. The perturbation makes the model ignore the flag object and classify the image as ‘sky’.
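Because the attack consumes a hard query budget, it helps to make that budget explicit in code. Below is a minimal, hypothetical sketch of the decision-only interface assumed throughout: a wrapper that exposes nothing but the top-1 tag and refuses to answer once the budget is spent. The names (`DecisionOracle`, `top1_label_fn`) are ours for illustration, not an actual API.

```python
from typing import Any, Callable

class DecisionOracle:
    """Decision-only view of a black-box classifier with a query budget.

    `top1_label_fn` stands in for a real query (e.g., a vision API call
    reduced to its top-1 tag); the interface is hypothetical.
    """

    def __init__(self, top1_label_fn: Callable[[Any], str], budget: int):
        self.top1_label_fn = top1_label_fn
        self.budget = budget
        self.queries = 0  # number of queries issued so far

    def __call__(self, image: Any) -> str:
        if self.queries >= self.budget:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        return self.top1_label_fn(image)
```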



6. Conclusion

In this work, we propose a novel query-efficient decision-based attack algorithm, namely qFool, for settings where the attacker only has access to the final decision (top-1 label) of a model. Our algorithm is based on the observation that the curvature of the boundary is small around adversarial examples. We use a simple method to estimate the gradient direction of the boundary, and construct imperceptible adversarial examples using the estimated direction. With this method, the required number of queries in both our non-targeted and targeted attacks can be reduced drastically compared to previous decision-based attacks. We also enhance qFool by constraining the gradient estimation to a pre-defined subspace, which further reduces the number of queries. We finally apply qFool to Google Cloud Vision, demonstrating the effectiveness of our attack on a sample real-world black-box classifier.
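To make the gradient-estimation step concrete, the sketch below shows one standard way to approximate the boundary direction from top-1 decisions alone: probe random directions around a point near the boundary and average them, with a sign given by whether the decision flips. This is a sketch of the general Monte-Carlo idea under our own naming, not a verbatim transcription of qFool; restricting the probes to a pre-defined low-frequency subspace (e.g., via the DCT) would correspond to the subspace variant.

```python
import numpy as np

def estimate_boundary_direction(x_b: np.ndarray, oracle, original_label: str,
                                n_probes: int = 200, sigma: float = 1e-2) -> np.ndarray:
    """Monte-Carlo estimate of the boundary normal at a point x_b lying
    near the decision boundary, using only top-1 decisions.

    `oracle(image) -> str` returns the top-1 label (e.g., the
    DecisionOracle sketched earlier); all names here are ours.
    """
    direction = np.zeros_like(x_b)
    for _ in range(n_probes):
        eta = np.random.randn(*x_b.shape)  # random probe direction
        eta /= np.linalg.norm(eta)         # normalize to unit length
        # Probes whose perturbed image is misclassified point toward the
        # adversarial side of the boundary; the others point away from it.
        sign = 1.0 if oracle(x_b + sigma * eta) != original_label else -1.0
        direction += sign * eta
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction
```

Each probe costs one query, so `n_probes` directly trades estimation accuracy against the query budget.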

Acknowledgements

This work has been supported by a Google Faculty Research Award, and the Hasler Foundation, Switzerland, in the framework of the ROBERT project. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the GTX Titan X GPU used for this research.

References

[1] S. Baluja and I. Fischer. Adversarial transformation networks: Learning to generate adversarial examples. arXiv preprint arXiv:1703.09387, 2017.

[2] A. N. Bhagoji, W. He, B. Li, and D. Song. Practical black-box attacks on deep neural networks using efficient query mechanisms. In Proceedings of the European Conference on Computer Vision, 2018.

[3] W. Brendel, J. Rauber, and M. Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017.

[4] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), 2017.

[5] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017.

[6] M. Cisse, Y. Adi, N. Neverova, and J. Keshet. Houdini: Fooling deep structured prediction models. arXiv preprint arXiv:1707.05373, 2017.

[7] A. Fawzi, S.-M. Moosavi-Dezfooli, and P. Frossard. Robustness of classifiers: from adversarial to random noise. In Proceedings of Advances in Neural Information Processing Systems, 2016.

[8] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In Proceedings of International Conference on Learning Representations, 2015.

[9] J. Hayes and G. Danezis. Machine learning as an adversarial service: Learning black-box adversarial examples. arXiv preprint arXiv:1708.05207, 2017.

[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

[11] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.

[12] A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Query-efficient black-box adversarial examples. arXiv preprint arXiv:1712.07113, 2017.

[13] N. Kriegeskorte. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 2015.

[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Proceedings of Advances in Neural Information Processing Systems, 2012.

[15] Y. LeCun, K. Kavukcuoglu, C. Farabet, et al. Convolutional networks and applications in vision. In ISCAS, 2010.

[16] P. Li, J. Yi, and L. Zhang. Query-efficient black-box attack by active learning. arXiv preprint arXiv:1809.04913, 2018.

[17] Z. Liang, G. Zhang, J. X. Huang, and Q. V. Hu. Deep learning for healthcare decision making with EMRs. In Bioinformatics and Biomedicine, 2014.

[18] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.

[19] T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Cernocky. Strategies for training large scale neural network language models. In Automatic Speech Recognition and Understanding, 2011.

[20] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

[21] N. Narodytska and S. P. Kasiviswanathan. Simple black-box adversarial perturbations for deep networks. arXiv preprint arXiv:1612.06299, 2016.

[22] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In Proceedings of the Asia Conference on Computer and Communications Security, 2017.

[23] J. Rauber, W. Brendel, and M. Bethge. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131, 2017.

[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[25] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.


[26] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
