Intriguing properties of neural networks
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties.
First, we find that, according to various methods of unit analysis, there is no distinction between individual high-level units and random linear combinations of high-level units. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
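As a rough illustration of this kind of unit analysis, the sketch below retrieves the images that most strongly activate a single hidden unit versus a random direction in the same activation space. Everything here is a hypothetical stand-in rather than the paper's own setup: the PyTorch/torchvision model `resnet18` (the paper predates these libraries), the image `loader`, and the chosen unit index are all assumptions.

```python
import torch
import torch.nn as nn
import torchvision

# Hypothetical setup: a pretrained torchvision classifier standing in for the
# paper's networks, and some image `loader` (omitted here) to scan.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
features = nn.Sequential(*list(model.children())[:-1])  # drop the final fc layer

@torch.no_grad()
def top_activating(direction, loader, k=8):
    """Indices of the k images whose penultimate-layer activations project
    most strongly onto `direction` (a vector in activation space)."""
    scores = []
    for images, _ in loader:
        acts = features(images).flatten(1)  # (batch, 512) activations
        scores.append(acts @ direction)
    return torch.cat(scores).topk(k).indices

d = 512                               # penultimate width of resnet18
e_i = torch.zeros(d); e_i[123] = 1.0  # natural-basis direction: a single unit
v = torch.randn(d); v /= v.norm()     # random direction in the same space

# The paper's observation: the top images for e_i and for v turn out to be
# equally semantically coherent, so single units are not semantically privileged.
# top_unit = top_activating(e_i, loader)
# top_rand = top_activating(v, loader)
```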
Second, we find that deep neural networks learn input-output mappings that are discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input.
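A minimal sketch of how such a perturbation can be found, assuming a PyTorch classifier `model` and an input batch `x` with true class indices `label`. The paper itself solves a box-constrained L-BFGS problem that minimizes the perturbation norm; the version below substitutes a simpler iterative gradient ascent on the prediction error, with illustrative hyperparameters `eps`, `steps`, and `lr`.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbation(model, x, label, eps=0.007, steps=10, lr=0.01):
    """Nudge input `x` so that `model` misclassifies it, by iteratively
    ascending the prediction error on the true `label`.

    Simplified stand-in for the paper's box-constrained L-BFGS formulation;
    `eps`, `steps`, and `lr` are illustrative values, not from the paper.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)  # prediction error
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + lr * grad.sign()    # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # keep ||r||_inf <= eps
        x_adv = x_adv.clamp(0.0, 1.0)                # stay a valid image
    return x_adv.detach()
```

Here the perturbation is kept small in the L-infinity sense; the paper instead minimizes the L2 norm of the perturbation subject to the predicted label changing, but either constraint yields perturbations that remain hardly perceptible to a human observer.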