Hi,
we have also received an acceptance notice for a paper submitted to IEEE Transactions on Software Engineering:
F. Ebbers, "A Large-Scale Analysis of IoT Firmware Version Distribution in the Wild," in IEEE Transactions on Software Engineering, doi: 10.1109/TSE.2022.3163969.
Abstract:
This paper examines the up-to-dateness of installed firmware versions of Internet of Things devices accessible via the public Internet. It takes a novel approach, identifying versions based on the source code of the devices' web interfaces. It analyzes a data set of 1.06 million devices collected using the IoT search engine Censys and then maps the results against the latest version each manufacturer offers. A fully scalable and adaptive approach is developed by applying the SEMMA data mining process. This approach relies on three data artifacts: raw data from Censys, a mapping table with firmware versions, and a keyword search list. The results confirm the heterogeneity of connected IoT devices and show that only 2.45 percent of the IoT devices in the wild run the latest available firmware. Installed versions are 19.2 months old on average. This real-world evidence suggests that the updating processes and methods used by engineers so far are not sufficient to keep IoT devices up-to-date. This paper identifies and quantifies influencing factors and captures the global and diverse distribution of IoT devices. It finds that manufacturer and device type influence the up-to-dateness of firmware, whereas the country in which the device is deployed is less significant.
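The core of the pipeline described above (keyword-based version extraction from web-interface source code, plus a mapping table of latest versions) might be sketched roughly as follows. This is a minimal illustrative reconstruction, not the paper's actual implementation: the vendor names, regex patterns, and mapping-table entries are all invented for the example.

```python
import re
from typing import Optional

# Hypothetical mapping table: vendor -> latest firmware version offered.
LATEST = {
    "acme-cam": "2.4.1",
    "foo-router": "1.9.0",
}

# Hypothetical keyword search list, expressed as regex patterns that
# pull a version string out of a device web interface's page source.
VERSION_PATTERNS = [
    re.compile(r"[Ff]irmware\s*[Vv]ersion[:\s]*([\d.]+)"),
    re.compile(r"FW\s*v?([\d.]+)"),
]

def extract_version(html: str) -> Optional[str]:
    """Return the first firmware version found in the page source, if any."""
    for pat in VERSION_PATTERNS:
        m = pat.search(html)
        if m:
            return m.group(1)
    return None

def up_to_date_share(devices: list) -> float:
    """Fraction of (vendor, html) records running the vendor's latest version."""
    current = 0
    identified = 0
    for vendor, html in devices:
        version = extract_version(html)
        if version is None or vendor not in LATEST:
            continue  # unidentifiable devices are excluded from the ratio
        identified += 1
        if version == LATEST[vendor]:
            current += 1
    return current / identified if identified else 0.0
```

In this toy form, the Censys raw data would supply the `(vendor, html)` pairs, and the resulting ratio corresponds to the 2.45-percent figure reported in the abstract.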
Best
Michael
---
Dr. Michael Friedewald
Fraunhofer-Institut für System- und Innovationsforschung ISI
Competence Center Emerging Technologies
Coordinator ICT Research
Breslauer Straße 48 | 76139 Karlsruhe
phone: +49 721 6809-146 (-166, assistant)
michael.friedewald(a)isi.fraunhofer.de
http://www.isi.fraunhofer.de
New publications:
Friedewald, M.; Schiffner, S.; Krenn, S. (Eds.) (2021): Privacy and Identity Management. 15th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School, Maribor, Slovenia, September 20-23, 2020, Revised Selected Papers. Cham: Springer International (IFIP Advances in Information and Communication Technology, 619).
Stapf, I.; Ammicht Quinn, R.; Friedewald, M.; Heesen, J.; Krämer, N. C. (Eds.) (2021): Aufwachsen in überwachten Umgebungen: Interdisziplinäre Positionen zu Privatheit und Datenschutz in Kindheit und Jugend. Baden-Baden: Nomos (Kommunikations- und Medienethik, 14). Open access: https://www.nomos-elibrary.de/10.5771/9783748921639.pdf
Martin, N.; Friedewald, M.; Schiering, I. et al. (2020): Die Datenschutz-Folgenabschätzung nach Art. 35 DSGVO: Ein Handbuch für die Praxis. Stuttgart: Fraunhofer Verlag. Open access: http://publica.fraunhofer.de/documents/N-586394.html
Hello,
We are glad to inform you that the submitted paper titled "Optimized Parameter Search Approach For Weight Modification Attack Targeting Deep Learning Models" has been accepted by the journal Applied Sciences. Please see its abstract below.
Moreover, the paper submitted to the journal Neural Computing and Applications, titled "Understanding Deep Learning Defenses Against Adversarial Examples Through Visualizations for Dynamic Risk Assessment", has also been accepted. Please see its abstract below.
Best wishes,
Xabi
Title: Optimized Parameter Search Approach For Weight Modification Attack
Targeting Deep Learning Models
Authors: Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Raul
Orduna-Urrutia, Iñigo Mendialdua
Abstract. Deep Neural Network models have been developed in many fields, bringing advances to a variety of tasks. However, they have also begun to be incorporated into tasks with critical risk. This concern has led researchers to study possible attacks on these models, uncovering a long list of threats against which every model should be defended.
The weight modification attack has been presented and discussed by researchers, who have published several versions of and analyses of this threat. It focuses on detecting vulnerable weights and modifying them so that the desired input data are misclassified. Analyzing the different approaches to this attack can therefore help in understanding more precisely how to defend against such vulnerabilities.
In this work, a new version of the weight modification attack is presented. The approach is based on three processes: clustering of the input data, weight selection, and weight modification. Clustering the input data allows the model to be attacked more precisely. Weight selection uses the gradients induced by the input data to identify the target parameters. The modification is then applied gradually, as small amounts of noise.
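The three processes named in the abstract (cluster the inputs, select weights by gradient, perturb them gradually) might be sketched for a toy linear classifier roughly as follows. All shapes, values, and names here are illustrative assumptions for the sketch, not the paper's actual models or algorithm.

```python
import numpy as np

# Toy target model: a linear classifier with weight matrix W
# (2 classes, 4 features). Purely illustrative.
W = np.array([[ 2.0, -1.0, 0.5, 0.0],
              [-1.0,  2.0, 0.0, 0.5]])

def predict(W, x):
    return int(np.argmax(W @ x))

# Step 1: "cluster" the input data -- here simply the group of inputs
# the attacker wants to misclassify (all currently predicted as class 0).
targets = [np.array([1.0, 0.2, 0.1, 0.0]),
           np.array([0.9, 0.1, 0.2, 0.1])]

# Step 2: weight selection via the gradient. For a linear model, the
# gradient of the class-0 score w.r.t. row 0 of W is just the input x,
# so rank the class-0 weights by mean |gradient| over the cluster.
grad = np.mean(targets, axis=0)              # d(score_0)/dW[0]
k = 2
selected = np.argsort(-np.abs(grad))[:k]     # indices of the top-k weights

# Step 3: modify only the selected weights, little by little, with a
# small noise step, until every input in the cluster flips class.
W_adv = W.copy()
step = 0.05
for _ in range(200):
    if all(predict(W_adv, x) != 0 for x in targets):
        break
    W_adv[0, selected] -= step * np.sign(grad[selected])
```

Restricting the perturbation to a few gradient-ranked weights and applying it in small increments keeps the modified model close to the original, which is the intuition behind making the attack both targeted and hard to notice.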
Title: Understanding Deep Learning Defenses Against Adversarial Examples
Through Visualizations for Dynamic Risk Assessment
Authors: Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Jon Egaña-Zubia,
Raul Orduna-Urrutia
Abstract. In recent years, Deep Neural Network models have been developed in many fields, where they have brought significant advances. However, they have also begun to be used in tasks where risk is critical, and a misdiagnosis by these models can lead to serious accidents or even death. This concern has led researchers to study possible attacks on these models, uncovering a long list of vulnerabilities against which every model should be defended.
The adversarial example attack is widely known among researchers, who have developed several defenses to counter the threat. However, these defenses are as opaque as the deep neural network models themselves; how they work is still not well understood. Visualizing how they change the behavior of the target model is therefore valuable for understanding more precisely how the performance of the defended model is being modified.
For this work, several defenses against the adversarial example attack were selected in order to visualize how each one modifies the behavior of the defended model. Adversarial training, dimensionality reduction, and prediction similarity were the selected defenses, applied to a model composed of convolutional and dense neural network layers. For each defense, the behavior of the original model was compared with that of the defended model, with the target model represented as a graph in the visualization.
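One of the defenses named above, prediction similarity, can be sketched in a toy form to show the kind of behavior change being compared: the defended model rejects a prediction when nearby reference points disagree with it. Everything below (the linear stand-in model, reference grid, and `agreement` summary) is an illustrative assumption, not the paper's actual models or visualization.

```python
import numpy as np

# Reference set: points on a line, labeled by the sign of the first feature.
reference_x = np.array([[i / 10, 0.0] for i in range(-25, 25)])
reference_y = (reference_x[:, 0] > 0).astype(int)

def original_model(x):
    # Toy undefended classifier.
    return int(x[0] > 0)

def defended_model(x, k=5):
    """Prediction-similarity defense (toy): keep the model's prediction only
    if the majority label of the k nearest reference points agrees with it;
    otherwise return -1, flagging the input as potentially adversarial."""
    pred = original_model(x)
    dists = np.linalg.norm(reference_x - x, axis=1)
    neighbours = reference_y[np.argsort(dists)[:k]]
    majority = int(neighbours.sum() * 2 > k)
    return pred if pred == majority else -1

def agreement(inputs):
    """Fraction of inputs on which defended and original models behave alike --
    a crude numeric stand-in for the graph-based behavior comparison."""
    same = sum(defended_model(x) == original_model(x) for x in inputs)
    return same / len(inputs)
```

On inputs far from the decision boundary the two models agree, while an input nudged just across the boundary (as an adversarial example would be) is rejected; comparing these behaviors per input is the kind of original-versus-defended contrast the visualizations in the paper aim to expose.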
--
<https://www.vicomtech.org/>
Xabier Etxeberria Barrio
Researcher | Investigador
xetxeberria(a)vicomtech.org
+34 943 30 92 30
Digital Security | Seguridad digital
member of: <https://graphicsvision.ai/>
The information contained in this electronic message is intended only for
the personal and confidential use of the recipients designated in the
original message. If you have received this communication in error, please
notify us immediately by replying to the message and deleting it from your
computer.