Hello,

We are glad to inform you that the submitted paper, “Optimized Parameter Search Approach For Weight Modification Attack Targeting Deep Learning Models”, has been accepted for publication in the journal Applied Sciences. Please see its abstract below.

Moreover, the paper “Understanding Deep Learning Defenses Against Adversarial Examples Through Visualizations for Dynamic Risk Assessment”, submitted to the journal Neural Computing and Applications, has also been accepted. Please see its abstract below.


Best wishes,
Xabi 


Title: Optimized Parameter Search Approach For Weight Modification Attack Targeting Deep Learning Models
Authors: Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Raul Orduna-Urrutia, Iñigo Mendialdua
Abstract. Deep Neural Network models have been developed in many different fields, bringing advances to a wide range of tasks. However, they have also started to be incorporated into tasks where the risk is critical. This concern has led researchers to study possible attacks on these models, uncovering a long list of threats against which every model should be defended.
The weight modification attack has been presented and discussed by researchers, who have published several versions and analyses of this threat. It focuses on detecting vulnerable weights and modifying them so that the desired input data are misclassified. Analyzing the different approaches to this attack can therefore help in understanding more precisely how to defend against such vulnerabilities.
In this work, a new version of the weight modification attack is presented. The approach is based on three processes: input data clustering, weight selection, and weight modification. Clustering the input data allows the model to be attacked more precisely; the weight selection uses the gradients induced by the input data to identify the target parameters; and the modification is applied gradually via small amounts of noise.
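
To give a feel for the three processes described above, here is a minimal, hypothetical PyTorch sketch. The function name, parameters (noise_scale, top_k, steps), and the choice of k-means for the clustering step are all illustrative assumptions on our part, not the implementation from the paper.

import torch
from sklearn.cluster import KMeans

def weight_modification_attack(model, inputs, labels, n_clusters=5,
                               target_cluster=0, top_k=10,
                               noise_scale=1e-3, steps=100):
    # 1) Cluster the input data so the attack targets one group of inputs.
    flat = inputs.reshape(len(inputs), -1).cpu().numpy()
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
    mask = torch.tensor(clusters == target_cluster)
    x, y = inputs[mask], labels[mask]

    # 2) Rank weights by the gradient of the loss on the chosen cluster.
    model.zero_grad()
    torch.nn.functional.cross_entropy(model(x), y).backward()
    grads = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    top = torch.topk(grads, top_k).indices  # most loss-sensitive weights

    # 3) Modify the selected weights little by little with small noise.
    with torch.no_grad():
        params = torch.nn.utils.parameters_to_vector(model.parameters())
        for _ in range(steps):
            params[top] += noise_scale * torch.randn(top_k)
        torch.nn.utils.vector_to_parameters(params, model.parameters())
    return model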



Title: Understanding Deep Learning Defenses Against Adversarial Examples Through Visualizations for Dynamic Risk Assessment
Authors: Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Jon Egaña-Zubia, Raul Orduna-Urrutia
Abstract. In recent years, Deep Neural Network models have been developed in many different fields, where they have brought many advances. However, they have also started to be used in tasks where risk is critical: a misdiagnosis by these models can lead to serious accidents or even death. This concern has led researchers to study possible attacks on these models, uncovering a long list of vulnerabilities against which every model should be defended.
The adversarial example attack is widely known among researchers, who have developed several defenses against this threat. However, these defenses are as opaque as the deep neural network models themselves: how they work is still not well understood. Visualizing how they change the behavior of the target model is therefore of interest, as it allows a more precise understanding of how the defended model's performance is being modified.
In this work, several defenses against the adversarial example attack have been selected in order to visualize how each of them modifies the behavior of the defended model. The selected defenses are adversarial training, dimensionality reduction, and prediction similarity, each implemented on a model composed of convolutional and dense neural network layers. For each defense, the behavior of the original model has been compared with that of the defended model, representing the target model as a graph in the visualization.
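
As an illustration of the first of these defenses, here is a minimal, hypothetical PyTorch sketch of adversarial training using FGSM perturbations. The generic model, loader, and optimizer objects and the eps value are placeholders of ours, not the configuration used in the paper.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Craft adversarial examples with the fast gradient sign method.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.03):
    # One epoch of training on a mix of clean and adversarial batches.
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)  # attack the current model
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(x), y)
                + F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()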

--

Xabier Etxeberria Barrio
Researcher

xetxeberria@vicomtech.org
+34 943 30 92 30
Digital Security

The information contained in this electronic message is intended only for the personal and confidential use of the recipients designated in the original message. If you have received this communication in error, please notify us immediately by replying to the message and deleting it from your computer.