10:20 〜 10:40
[4S1-IS-2f-02] Adversarial attack detection on graph classification by autoencoder-based analysis of hidden layers in graph convolutional networks
Regular
Keywords: graph neural networks, adversarial attacks, autoencoders
Graph neural networks (GNNs) have been applied to various fields in recent years owing to their high performance. However, GNNs are not always robust against adversarial attacks, which hinders their deployment in safety-critical areas.
In this study, targeting graph classification models based on graph convolutional networks (GCNs), we propose a method for detecting adversarial attacks that add edges. The proposed method obtains a latent representation of each input graph by autoencoding the output of a hidden layer of the trained GCN, and then determines whether the input graph has been attacked using a classification or outlier-detection model.
We conducted empirical experiments on attack detection using four real-world datasets. The results confirmed that, although performance depends on the combination of autoencoder and classification models employed, the proposed method can detect a certain proportion of attacks.
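The detection pipeline described in the abstract (extract a hidden-layer representation from the trained GCN, compress it with an autoencoder, then score inputs for attack) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the single GCN layer, mean pooling, PCA-style linear autoencoder, and reconstruction-error score are all simplifying assumptions standing in for the unspecified architectures in the paper.

```python
import numpy as np

def gcn_hidden(adj, feats, weight):
    """One GCN layer: symmetric-normalized propagation followed by ReLU.

    adj: (n, n) adjacency without self-loops; feats: (n, d); weight: (d, h).
    """
    a = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)

def graph_embedding(adj, feats, weight):
    """Mean-pool node-level hidden outputs into one graph-level vector."""
    return gcn_hidden(adj, feats, weight).mean(axis=0)

def fit_linear_autoencoder(embs, k):
    """PCA-style linear autoencoder: keep the top-k principal directions.

    Returns the mean and an orthonormal (k, h) basis; the encoder projects
    onto the basis and the decoder maps back with its transpose.
    """
    mu = embs.mean(axis=0)
    _, _, vt = np.linalg.svd(embs - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(emb, mu, basis):
    """Outlier score: squared error between an embedding and its
    reconstruction; attacked graphs are expected to score higher."""
    centered = emb - mu
    recon = basis.T @ (basis @ centered)
    return float(np.sum((centered - recon) ** 2))
```

In this sketch, the autoencoder would be fit on embeddings of clean training graphs, and a threshold on the reconstruction error (or a classifier on the latent codes, the paper's other variant) would flag suspected attacks at test time.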