Dear all,
We submitted the paper "Incremental Common Criteria Certification
Processes using DevSecOps Practices" to EuroSPW 2021. The paper will
acknowledge SPARTA if it is accepted.
Abstract:
The growing digitalisation of our economies and societies is driving the
need for increased connectivity of critical applications and
infrastructures, to the point where failures can lead to major
disruptions and consequences in our lives. One growing source of
failures for critical applications and infrastructures originates from
cybersecurity threats and vulnerabilities that can be exploited in
attacks. One approach to mitigating these risks is verifying that
critical applications and infrastructures are sufficiently protected by
certification of products and services. However, reaching sufficient
assurance levels for product certification may require detailed
evaluation of product properties. An important challenge for product
certification is dealing with product evolution: now that critical
applications and infrastructures are connected, they are updated more
frequently. To ensure continuity of certification, updates
must be analysed to verify the impact on certified cybersecurity
properties. Impacted properties need to be re-certified. This paper
proposes a lightweight and flexible incremental certification process
that can be integrated with DevSecOps practices to automate evidence
gathering and certification activities as much as possible. The approach
is illustrated on the Common Criteria product certification scheme and a
firewall update in an automotive case study. Only the impact analysis
phase of the incremental certification process is illustrated.
Best Regards,
--
Sebastien Dupont
Expert Research Engineer
Model-Based Engineering and Distributed Systems
CETIC
Avenue Jean Mermoz 28
B-6041 Charleroi
Tel: +32 488 237 483
Dear All,
We submitted the paper "VulnEx: Exploring Open-Source Software
Vulnerabilities in Large Development Organizations to Understand Risk
Exposure" to the IEEE Symposium on Visualization for Cyber Security (at
IEEE VIS 2021). The paper will acknowledge SPARTA if it is accepted.
Abstract: "The prevalent usage of open-source software (OSS) has led
to an increased interest in resolving potential third-party security
risks by fixing common vulnerabilities and exposures (CVEs).
However, even with automated code analysis tools in place, security
analysts often lack the means to obtain an overview of vulnerable
OSS reuse in large software organizations. In this design study, we
propose VulnEx (Vulnerability Explorer), a tool to audit entire
software development organizations. We introduce three complementary
table-based representations to identify and assess vulnerability
exposures due to OSS, which we designed in collaboration with
security analysts. The presented tool allows examining problematic
projects and applications (repositories), third-party libraries, and
vulnerabilities across a software organization. We show the
applicability of our tool through a use case and preliminary expert
feedback."
Best Regards,
Eren Cakmak
--
Research Associate
Department of Computer and Information Science
Data Analysis and Visualization Group
78457 Konstanz, Germany
Website: http://infovis.uni.kn/~cakmak
Phone: +49 (0)7531 88 2507
Room: ZT1107
Dear all,
I would like to inform you that the following SPARTA paper has been published:
Damaševičius, Robertas; Venčkauskas, Algimantas; Toldinas, Jevgenijus; Grigaliūnas, Šarūnas. 2021. "Ensemble-Based Classification Using Neural Networks and Machine Learning Models for Windows PE Malware Detection." Electronics 10, no. 4: 485. https://doi.org/10.3390/electronics10040485 (https://www.mdpi.com/2079-9292/10/4/485)
All the best,
Algimantas Venčkauskas
______________________________________________________
Abstract
The security of information is among the greatest challenges facing organizations and institutions. Cybercrime has risen in frequency and magnitude in recent years, with new ways to steal, change and destroy information or disable information systems appearing every day. One common means of penetrating information systems that process confidential information is malware: an attacker injects malware into a computer system, after which they gain full or partial access to critical information in the information system. This paper proposes an ensemble classification-based methodology for malware detection. The first-stage classification is performed by a stacked ensemble of dense (fully connected) and convolutional neural networks (CNN), while the final-stage classification is performed by a meta-learner. For the meta-learner, we explore and compare 14 classifiers. For a baseline comparison, 13 machine learning methods are used: K-Nearest Neighbors, Linear Support Vector Machine (SVM), Radial basis function (RBF) SVM, Random Forest, AdaBoost, Decision Tree, ExtraTrees, Linear Discriminant Analysis, Logistic, Neural Net, Passive Classifier, Ridge Classifier and Stochastic Gradient Descent classifier. We present the results of experiments performed on the Classification of Malware with PE headers (ClaMP) dataset. The best performance is achieved by an ensemble of five dense and CNN neural networks, with the ExtraTrees classifier as a meta-learner.
Dear all,
I would like to inform you that the following paper
A Comparative Study of Automatic Software Repair Techniques for Security Vulnerabilities
Eduard Pinconschi, Rui Abreu, and Pedro Adão
will be published in the conference (core-A ranking)
The 32nd International Symposium on Software Reliability Engineering (ISSRE 2021)
http://2021.issre.net/
Oct 25 - 28, 2021, Wuhan, China
and will acknowledge SPARTA.
I will make the paper available as soon as we have the camera ready version (28th of August).
Do let me know if you need a draft in advance.
Best regards,
Pedro
Abstract:
In the past years, research on automatic program repair (APR), in particular on test-suite-based approaches, has attracted significant attention from researchers. Despite the advances in the field, it remains unclear how these techniques fare in the context of security: most approaches are evaluated using benchmarks of bugs that do not (only) contain security vulnerabilities.
In this paper, we present our observations using 10 state-of-the-art test-suite-based automatic program repair tools on the DARPA Cyber Grand Challenge benchmark of vulnerabilities in C/C++. Our intention is to have a better understanding of the current state of automatic program repair tools when addressing security issues.
In particular, our study is guided by the hypothesis that the efficiency of repair tools may not generalize to security vulnerabilities. We found that the 10 analyzed tools can only fix 30 out of 55 vulnerable programs (54.5% of the considered issues). In particular, we found that APR tools with atomic change operators and a brute-force search strategy (AE and GenProg) and brute-force functionality deletion (Kali) overall perform better at repairing security vulnerabilities, considering both efficiency and effectiveness. AE is the tool that individually repairs the most programs, fixing 20 out of 55 programs (36.4%).
The causes for failing to repair are discussed in the paper, which can help repair tool designers improve their techniques and tools.