Paper: | PS-1A.17 |
Session: | Poster Session 1A |
Location: | Symphony/Overture |
Session Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation: | Poster |
Publication: |
2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: |
Representation of adversarial images in deep neural networks and the human brain |
DOI: |
https://doi.org/10.32470/CCN.2018.1066-0 |
Authors: |
Chi Zhang, Xiaohan Duan, National Digital Switching System Engineering and Technological Research Center, China; Ruyuan Zhang, University of Minnesota, United States; Li Tong, National Digital Switching System Engineering and Technological Research Center, China |
Abstract: |
Many studies have demonstrated a prominent similarity between deep neural networks (DNNs) and human vision. However, one recent study (Nguyen et al., 2015) challenged this idea, showing that some artificially generated adversarial images can successfully 'fool' even the most state-of-the-art DNNs but not human vision. Specifically, DNNs can accurately recognize adversarial noise (AN) images but not adversarial interference (AI) images, whereas the opposite holds for humans. In this paper, we use functional magnetic resonance imaging (fMRI) to elucidate the neural mechanisms underlying these dissociable behaviors. We measured neural responses in the human brain to regular, AN, and AI images, and quantified the representational similarity among the three image types in a DNN and in the human brain. Results demonstrated that representational similarity in the DNN reflects image similarity more than perceptual similarity. We also found that the DNN misrepresents low- and mid-level visual features relative to human vision. These results offer new insights for the future development of both human visual models and deep neural networks. |
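The quantification described in the abstract is an instance of representational similarity analysis (RSA). The paper itself does not provide code; the following is a minimal generic sketch of RSA, assuming correlation-distance dissimilarity matrices and a Pearson comparison of their upper triangles. All variable names and the toy data are hypothetical illustrations, not the authors' actual stimuli or recordings.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between each pair of condition response patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles
    (a common RSA summary statistic; rank correlation is also typical)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Toy example: 6 images x 100 features (e.g. DNN-layer units or fMRI voxels).
rng = np.random.default_rng(0)
dnn_acts = rng.normal(size=(6, 100))                           # hypothetical DNN responses
brain_acts = dnn_acts + rng.normal(scale=2.0, size=(6, 100))   # noisy hypothetical brain analogue

similarity = rsa_score(rdm(dnn_acts), rdm(brain_acts))
print(similarity)
```

In the study's setting, rows of the two activation matrices would correspond to the regular, AN, and AI images, letting one ask whether the DNN's representational geometry tracks raw image similarity or human perceptual similarity.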