ROBUSTNESS OF DEEP NEURAL NETWORKS TO ADVERSARIAL EXAMPLES

Authors

  • Da Teng, Beihang University
  • Xiao Song, Beihang University
  • Guanghong Gong, Beihang University
  • Liang Han, Beihang University

DOI:

https://doi.org/10.23055/ijietap.2017.24.2.2840

Keywords:

machine learning, deep learning, neural networks, adversarial examples

Abstract

Deep neural networks have achieved state-of-the-art performance in many areas of artificial intelligence, such as object recognition, speech recognition, and machine translation. While deep neural networks have high expressive capacity, their high dimensionality makes them prone to overfitting. Deep neural networks have recently been found to be unstable under adversarial perturbations: small input changes crafted to maximize the network's prediction error. This paper proposes a novel training algorithm to improve the robustness of neural networks to adversarial examples.
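The paper's own training algorithm is not reproduced here. As background for the phenomenon the abstract describes, adversarial perturbations of this kind are commonly illustrated with the fast gradient sign method (FGSM), which nudges the input in the direction that most increases the loss. The sketch below applies it to a toy one-layer model; all weights and inputs are made-up values for illustration, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the sign of the input-gradient of the loss.

    For a single logistic unit with cross-entropy loss, the gradient of
    the loss with respect to the input has the closed form (p - y) * w.
    """
    p = sigmoid(w @ x + b)        # model confidence for class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx, closed form here
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and a correctly classified input (label y = 1).
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.2])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
p, p_adv = sigmoid(w @ x + b), sigmoid(w @ x_adv + b)
# Clean input: classified as class 1 (p ≈ 0.79); after the bounded
# sign-gradient perturbation the same model predicts class 0 (p_adv ≈ 0.39).
```

Each coordinate moves by at most `eps`, so the perturbation is bounded in the max norm, yet it is enough to flip this toy model's decision, which is the instability the abstract refers to.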

Author Biography

Xiao Song, Beihang University

Associate Professor, School of Automation, Beihang University.

Published

2017-09-12

How to Cite

Teng, D., Song, X., Gong, G., & Han, L. (2017). ROBUSTNESS OF DEEP NEURAL NETWORKS TO ADVERSARIAL EXAMPLES. International Journal of Industrial Engineering: Theory, Applications and Practice, 24(2). https://doi.org/10.23055/ijietap.2017.24.2.2840

Issue

Section

Special Issue: Asia Simulation 2015