
Adversarial Examples in Random Neural Networks with General Activations

Posted: 2022-07-25 15:00


Title: Adversarial Examples in Random Neural Networks with General Activations

Time: 2022-07-28, 10:00 - 11:00

Speaker: Dr. Yuchen Wu, Stanford University

Zoom ID: 561 420 9883  Password: tmcc2022

Abstract: A substantial body of empirical work documents the lack of robustness of deep learning models to adversarial examples. Recent theoretical work proved that adversarial examples are ubiquitous in two-layer networks with sub-exponential width and ReLU or smooth activations, and in multi-layer ReLU networks with sub-exponential width. We present a result of the same type, with no restriction on width and for general locally Lipschitz continuous activations.

More precisely, given a neural network f(·; θ) with random weights θ and a feature vector x, we show that an adversarial example x′ can be found with high probability along the direction of the gradient ∇_x f(x; θ). Our proof is based on a Gaussian conditioning technique. Instead of proving that f is approximately linear in a neighborhood of x, we characterize the joint distribution of f(x; θ) and f(x′; θ) for x′ = x − s(x) ∇_x f(x; θ).
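The construction above can be illustrated numerically. The following is a minimal sketch, not the paper's proof: it builds a two-layer network with random Gaussian weights and a tanh activation (an illustrative choice of locally Lipschitz activation), then perturbs the input along the negative gradient direction with a small hypothetical step size s, and checks that the output moves accordingly. All variable names and sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 200  # input dimension and hidden width (illustrative sizes)

# Random two-layer network f(x) = a^T tanh(W x) / sqrt(m),
# with Gaussian weights as in the random-weights setting of the talk.
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.standard_normal(m)

def f(x):
    return a @ np.tanh(W @ x) / np.sqrt(m)

def grad_f(x):
    # Chain rule: grad_x f = W^T (a * tanh'(W x)) / sqrt(m),
    # where tanh'(u) = 1 - tanh(u)^2.
    return W.T @ (a * (1.0 - np.tanh(W @ x) ** 2)) / np.sqrt(m)

x = rng.standard_normal(d)
g = grad_f(x)
s = 0.3  # hypothetical step size; the paper's s(x) is chosen in the analysis

# Candidate adversarial example along the gradient direction:
x_adv = x - s * g

print("f(x)     =", f(x))
print("f(x_adv) =", f(x_adv))
print("perturbation norm =", np.linalg.norm(x_adv - x))
```

Stepping against the gradient lowers the network output by roughly s·‖∇f(x)‖² to first order, while the perturbation norm stays small relative to ‖x‖; the talk's result is that with high probability over the random weights, such a small step suffices to change the output substantially, without any width restriction.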

