
NIPS 2018 Adversarial Vision Challenge results announced: Eric Xing's CMU team wins two tracks




Author: Wieland Brendel

Compiled by Machine Heart (Synced)

Contributors: Zhang Qian, Wang Shuting

Today, the results of the NIPS 2018 Adversarial Vision Challenge were announced. The competition was divided into three tracks: defense, untargeted attack, and targeted attack. Eric Xing's CMU team won two of the three tracks; the remaining one was taken by the Canadian LIVIA team, and Tsinghua's TSAIL team was runner-up in the untargeted attack track. This article outlines each winning team's methodology; more details will be presented at the NIPS Competition Workshop on December 7, from 9:15 to 10:30.

NIPS 2018 Adversarial Vision Challenge address: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-robust-model-track

The NIPS Adversarial Vision Challenge 2018 results were released today, with participating teams submitting more than 3,000 models and attack methods. This year's competition focused on real-world scenarios in which access to the model is limited (at most 1,000 queries per sample) and the model returns only its final decision rather than gradients or confidence scores. This setting is meant to simulate the typical threat scenario faced by deployed machine learning systems, to advance the development of effective decision-based attack methods, and to drive the construction of more robust models.
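The decision-only, query-limited setting described above can be sketched as a thin wrapper around any classifier. This is a minimal illustration; the wrapper class, the toy linear model, and all names here are hypothetical, not the competition's actual evaluation code:

```python
import numpy as np

class DecisionOnlyModel:
    """Expose a classifier to an attacker as top-1 labels only,
    with a hard query budget (illustrative interface)."""

    def __init__(self, predict_logits, max_queries=1000):
        self._predict_logits = predict_logits
        self.max_queries = max_queries
        self.queries = 0

    def label(self, x):
        if self.queries >= self.max_queries:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        # Only the final decision leaks out: no gradients, no scores.
        return int(np.argmax(self._predict_logits(x)))

# Toy linear "classifier" standing in for the deployed model.
weights = np.array([[1.0, -1.0], [-1.0, 1.0]])
model = DecisionOnlyModel(lambda x: weights @ x)

x = np.array([2.0, 0.5])
print(model.label(x))   # prints 0 (the top-1 label)
print(model.queries)    # prints 1 (one query consumed)
```

An attacker in this threat model can only probe `model.label` repeatedly, which is exactly what decision-based attacks such as the boundary attack are designed for.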

The model track was run on the CrowdAI platform.

All winning entries performed orders of magnitude better than the standard baselines (e.g., a vanilla model or a plain boundary attack), measured by the median size of the L2 perturbation. We asked the top three entries in each track (defense, untargeted attack, targeted attack) for an overview of their approach. The winners will present their approaches at the NIPS Competition Workshop on December 7, from 9:15 to 10:30.

A common theme among the attack-track winners was combining low-frequency versions of the boundary attack with substitute models trained against various defenses. On the model track, the winners used a new robust-model approach (details may not be revealed until the workshop) together with adversarial training based on a new gradient-based iterative L2 attack. In the coming weeks, more details about the results will be published, including visualizations of the adversarial examples generated against the defense models.

Defense

1st place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors analyzed the generalization performance of robust models on adversarial examples in order to learn a deep network that is robust against such examples. Based on this analysis, they propose a new formulation for learning robust models that comes with guarantees on both generalization and robustness.

2nd place: Wilson team (we have not yet received a reply from the team)

3rd place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal, Canada)

The authors trained a robust model using their newly proposed gradient-based iterative L2 attack (Decoupled Direction and Norm, DDN), which is fast enough to be used during training. In each training step, the authors find an adversarial example (using DDN) that is close to the decision boundary, and minimize the cross-entropy on this example. The model architecture is unchanged, and inference time is unaffected.
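The decoupling of direction and norm can be illustrated on a toy linear classifier: step in the gradient direction, but adjust the perturbation's L2 norm separately, shrinking it when the current point is already adversarial and growing it otherwise. This is a rough sketch under stated assumptions, not the authors' implementation; the logistic-regression setup and all step sizes are invented for the example:

```python
import numpy as np

def ddn_like_attack(w, b, x, y, steps=50, gamma=0.05, alpha=0.5):
    """DDN-style L2 attack sketch on a linear model sign(w @ x + b).

    Direction: gradient ascent on the loss (normalized, direction only).
    Norm: a separate budget eps, shrunk by gamma when the current point
    is adversarial and grown otherwise, so the iterate hugs the boundary.
    """
    delta = np.zeros_like(x)
    eps = 1.0                  # current L2 norm budget
    best_eps = None            # smallest norm that was adversarial
    for _ in range(steps):
        margin = y * (w @ (x + delta) + b)   # > 0 means still correct
        grad = -y * w                        # loss-gradient direction w.r.t. input
        delta = delta + alpha * grad / (np.linalg.norm(grad) + 1e-12)
        if margin <= 0:                      # adversarial: tighten the budget
            eps *= (1 - gamma)
            best_eps = eps
        else:                                # not adversarial: expand it
            eps *= (1 + gamma)
        norm = np.linalg.norm(delta)         # project onto the L2 ball of radius eps
        if norm > eps:
            delta = delta * (eps / norm)
    return x + delta, best_eps

w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([1.0, 1.0]), 1               # correctly classified point
x_adv, eps = ddn_like_attack(w, b, x, y)     # x_adv ends up near the boundary
```

In adversarial training, one would then take a gradient step minimizing the cross-entropy of the model on `x_adv`, which is what keeps the method cheap: the attack itself stays inside the training loop.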

Untargeted attack

1st place: LIVIA team (code name "Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, École de technologie supérieure (ÉTS Montreal, Canada)

The attack is based on multiple surrogate models (including a robust model trained with the authors' newly proposed DDN attack). For each model, the authors choose two attack directions: the gradient of the cross-entropy loss with respect to the original class, and the direction given by running the DDN attack. For each direction, they perform a binary search on the perturbation norm to find the decision boundary. They then take the best attack found and refine it using the boundary attack from Decision-Based Adversarial Attacks.
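The binary-search step along a fixed direction can be sketched as follows. The oracle `is_adversarial` stands in for a decision-only query to the target model; the search range and the toy oracle in the usage line are illustrative assumptions:

```python
import numpy as np

def boundary_norm_search(is_adversarial, x, direction, hi=10.0, steps=20):
    """Binary-search the perturbation norm along `direction` for the
    smallest norm that crosses the decision boundary (sketch)."""
    d = direction / (np.linalg.norm(direction) + 1e-12)
    lo = 0.0
    if not is_adversarial(x + hi * d):
        return None           # boundary not reached within the search range
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if is_adversarial(x + mid * d):
            hi = mid          # adversarial: the boundary is at a smaller norm
        else:
            lo = mid
    return hi                 # smallest adversarial norm found

# Toy oracle: points with first coordinate above 3 are "adversarial".
found = boundary_norm_search(lambda p: p[0] > 3.0,
                             np.zeros(2), np.array([1.0, 0.0]))
print(round(found, 3))  # 3.0
```

Running this search once per surrogate model and direction, and keeping the direction with the smallest boundary-crossing norm, matches the selection step described above.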

2nd place: TSAIL team (code name "csy530216" on the leaderboard)

Authors: Shuyu Cheng & Yinpeng Dong

The authors use a heuristic search algorithm, similar to the boundary attack, to improve adversarial examples. The starting point is found with a transfer-based BIM attack against an Adversarial Logit Pairing baseline. In each iteration, a random perturbation is sampled from a Gaussian distribution whose diagonal covariance matrix is updated using past successful trials, to model promising search directions. The authors restrict the perturbation to a 40×40×3 region of the 64×64×3 image: they first generate 10×10×3 noise and then resize it to 40×40×3 using bilinear interpolation. Restricting the search space increases the efficiency of the algorithm.
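The low-dimensional proposal step can be sketched directly: sample a small noise tensor, upscale it bilinearly, and place it in a sub-region of the image. The minimal bilinear resize below stands in for a library call, and the region offset is an arbitrary choice for illustration (the article does not say where the 40×40 region sits):

```python
import numpy as np

def bilinear_upsample(small, out_h, out_w):
    """Upsample an (h, w, c) array with bilinear interpolation
    (minimal numpy sketch standing in for a library resize)."""
    h, w, c = small.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]            # vertical blend weights
    wx = (xs - x0)[None, :, None]            # horizontal blend weights
    top = small[y0][:, x0] * (1 - wx) + small[y0][:, x1] * wx
    bot = small[y1][:, x0] * (1 - wx) + small[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.1, size=(10, 10, 3))   # low-dimensional proposal
patch = bilinear_upsample(noise, 40, 40)          # smooth 40x40x3 perturbation
perturbation = np.zeros((64, 64, 3))
perturbation[12:52, 12:52, :] = patch             # confined to a 40x40 region
```

Searching over 10×10×3 = 300 noise values instead of 64×64×3 = 12,288 pixels is what makes each Gaussian proposal far more sample-efficient.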

3rd place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors generated perturbations by attacking a variety of robust models with a range of adversarial attacks, using the multiple distance metrics available in Foolbox. They then selected the best attack by minimizing the maximum distance obtained when attacking the robust models under the different distance metrics.
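The selection rule, minimize the maximum distance across models, is a plain minimax choice over a table of attack results. The distance values below are made up for illustration:

```python
import numpy as np

# Hypothetical distance table: rows = candidate attacks, columns = robust
# models; each entry is the L2 perturbation size that attack needs to
# fool that model (smaller is better for the attacker).
distances = np.array([
    [0.8, 1.9, 1.2],   # attack A
    [1.1, 1.0, 0.9],   # attack B
    [0.5, 2.5, 2.0],   # attack C
])

worst_case = distances.max(axis=1)       # each attack's worst model
best_attack = int(worst_case.argmin())   # minimize the maximum distance
print(best_attack)  # 1 (attack B: worst case 1.1 beats 1.9 and 2.5)
```

The minimax criterion favors attacks that stay small against every robust model, rather than ones that excel on some models but fail badly on others.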

Targeted attack

1st place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

Using Foolbox, the authors attacked a variety of robust models with various adversarial attack methods to generate perturbations. They found that an ensemble approach made the targeted attack more effective across a variety of robust models.
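One common way an ensemble helps a targeted attack is to average the attack direction over all models, so the perturbation pushes toward the target class for every ensemble member at once. This is a generic sketch of that idea with toy linear models, not the authors' method; all weights and step sizes are invented:

```python
import numpy as np

# Toy linear score functions standing in for an ensemble of robust models
# (illustrative weights only; score > 0 means "target class wins").
models = [np.array([1.0, 0.5]), np.array([0.8, -0.2]), np.array([0.3, 1.0])]

def ensemble_step(x, target_sign, lr=0.1):
    """One step of a targeted attack against the ensemble: average the
    per-model directions so the update helps against all members."""
    grad = np.mean([target_sign * w for w in models], axis=0)
    return x + lr * grad / np.linalg.norm(grad)

x = np.array([0.0, 0.0])
for _ in range(10):
    x = ensemble_step(x, target_sign=+1.0)
# After a few steps, every model in the ensemble scores the target class
# positively, which is what makes the attack transfer to unseen robust models.
```

A perturbation that fools only one surrogate often fails to transfer; averaging directions trades a little per-model strength for much better transfer, matching the authors' observation.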

2nd place: fortiss team (code name "ttbrunner" on the leaderboard)

Authors: Thomas Brunner & Frederick Schaefer, fortiss research institute, Germany

This attack is similar to the boundary attack, but it does not sample perturbations from a normal distribution. Instead, the authors sample low-frequency patterns, which transfer well and cannot easily be filtered out by a defender. They also use the projected gradient of a surrogate model as a prior for sampling. This combines the advantages of both PGD and the boundary attack into a single flexible, sample-efficient attack.
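One simple way to realize the low-frequency sampling described above is to draw white noise and keep only the lowest spatial frequencies of its FFT; a surrogate gradient can then be mixed in as a prior. This is an illustrative construction, not necessarily the authors' exact sampler, and the mixing weights are invented:

```python
import numpy as np

def low_freq_noise(shape, keep=0.1, rng=None):
    """Sample a unit-norm perturbation whose energy sits in low spatial
    frequencies, by zeroing all but the lowest `keep` fraction of FFT
    coefficients (one simple low-frequency sampler)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = shape
    spectrum = np.fft.fft2(rng.standard_normal(shape))
    kh, kw = max(1, int(h * keep)), max(1, int(w * keep))
    mask = np.zeros((h, w))
    mask[:kh, :kw] = 1; mask[:kh, -kw:] = 1   # low frequencies sit at the
    mask[-kh:, :kw] = 1; mask[-kh:, -kw:] = 1 # corners of the FFT layout
    noise = np.fft.ifft2(spectrum * mask).real
    return noise / (np.linalg.norm(noise) + 1e-12)

# A surrogate model's gradient can be mixed in as a sampling prior
# (the 50/50 weighting is illustrative, not the authors' value).
rng = np.random.default_rng(1)
surrogate_grad = rng.standard_normal((64, 64))
prior = surrogate_grad / np.linalg.norm(surrogate_grad)
proposal = 0.5 * prior + 0.5 * low_freq_noise((64, 64), rng=rng)
```

Low-frequency proposals are smooth, so defenses that suppress high-frequency noise (blurring, quantization) leave them largely intact, which is why they both transfer well and resist filtering.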

3rd place: LIVIA team (code name "Jerome R" on the leaderboard)

Author: Jerome RONY & Lewis Gustavo Hafemann at Montreal, Quebec, Canada Higher Tech School,

