id | title | categories | abstract | created_at
---|---|---|---|---|
2306.01220 | Is Model Attention Aligned with Human Attention? An Empirical Study on
Large Language Models for Code Generation | cs.SE cs.HC cs.LG | Large Language Models (LLMs) have been demonstrated effective for code
generation. Due to the complexity and opacity of LLMs, little is known about
how these models generate code. To deepen our understanding, we investigate
whether LLMs attend to the same parts of a natural language description as
human programmers during code generation. An analysis of five LLMs on a popular
benchmark, HumanEval, revealed a consistent misalignment between LLMs' and
programmers' attention. Furthermore, we found that there is no correlation
between the code generation accuracy of LLMs and their alignment with human
programmers. Through a quantitative experiment and a user study, we confirmed
that, among twelve different attention computation methods, attention computed
by the perturbation-based method is most aligned with human attention and is
consistently favored by human programmers. Our findings highlight the need for
human-aligned LLMs for better interpretability and programmer trust.
| 2023-06-02 00:57:03.000000000 |
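As an illustration of the best-performing method above, the sketch below computes perturbation-based attention in its common form: a prompt token's importance is the drop in the model's likelihood of the generated code when that token is removed. The `log_likelihood` scoring function is a placeholder, not the paper's implementation.

```python
# A minimal sketch of perturbation-based attention, assuming a (hypothetical)
# log_likelihood(prompt, code) function that scores generated code under a prompt.

def perturbation_attention(prompt_tokens, code, log_likelihood):
    """Attribute importance to each prompt token by measuring how much
    removing it changes the model's likelihood of the generated code."""
    base = log_likelihood(" ".join(prompt_tokens), code)
    scores = []
    for i in range(len(prompt_tokens)):
        perturbed = prompt_tokens[:i] + prompt_tokens[i + 1:]
        drop = base - log_likelihood(" ".join(perturbed), code)
        scores.append(max(drop, 0.0))  # larger drop => more attended token
    total = sum(scores) or 1.0
    return [s / total for s in scores]  # normalize to a distribution
```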
1804.03589 | ConPredictor: Concurrency Defect Prediction in Real-World Applications | cs.SE | Concurrent programs are difficult to test due to their inherent
non-determinism. To address this problem, testing often requires the
exploration of thread schedules of a program; this can be time-consuming when
applied to real-world programs. Software defect prediction has been used to
help developers find faults and prioritize their testing efforts. Prior studies
have used machine learning to build such predicting models based on designed
features that encode the characteristics of programs. However, research has
focused on sequential programs; to date, no work has considered defect
prediction for concurrent programs, whose characteristics differ from those of
sequential programs. In this paper, we present ConPredictor, an approach
to predict defects specific to concurrent programs by combining both static and
dynamic program metrics. Specifically, we propose a set of novel static code
metrics based on the unique properties of concurrent programs. We also leverage
additional guidance from dynamic metrics constructed based on mutation
analysis. Our evaluation on four large open source projects shows that
ConPredictor improved both within-project defect prediction and cross-project
defect prediction compared to traditional features.
| 2018-04-10 15:33:29.000000000 |
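A hedged sketch of the prediction setup described above: static concurrency metrics and mutation-based dynamic metrics are combined into one feature vector per code unit and fed to a standard classifier. The feature names and synthetic data are illustrative, not the paper's actual metric suite.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Illustrative features per code unit: [sync_blocks, shared_vars,
# lock_acquisitions, mutants_killed_ratio, concurrent_mutant_survival]
X = rng.random((200, 5))
# Synthetic defect labels correlated with two of the features.
y = (X[:, 1] + X[:, 4] + 0.2 * rng.random(200) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```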
1612.03813 | Spreadsheet Guardian: An Approach to Protecting Semantic Correctness
throughout the Evolution of Spreadsheets | cs.SE cs.PL | Spreadsheets are powerful tools which play a business-critical role in many
organizations. However, many bad decisions taken due to faulty spreadsheets
show that these tools need serious quality assurance. Furthermore, while
collaboration on spreadsheets for maintenance tasks is common, there has been
almost no support for ensuring that the spreadsheets remain correct during this
process.
We have developed an approach named Spreadsheet Guardian which separates the
specification of spreadsheet test rules from their execution. By automatically
executing user-defined test rules, our approach is able to detect semantic
faults. It also protects all collaborating spreadsheet users from introducing
faults during maintenance, even if only a few end-users specify test rules. To
evaluate Spreadsheet Guardian, we implemented a representative testing
technique as an add-in for Microsoft Excel.
We evaluated the testing technique in two empirical evaluations with 29
end-users and 42 computer science students. The results indicate that the
technique is easy to learn and to apply. Furthermore, after finishing
maintenance, participants with spreadsheets "protected" by the technique are
more realistic about the correctness of their spreadsheets than participants
who employ only "classic", non-interactive test rules based on static analysis
techniques. Hence, we believe Spreadsheet Guardian can be of use for
business-critical spreadsheets.
| 2016-11-30 21:31:22.000000000 |
2302.14091 | Implementing a Model-based Engineering Tool as Web Application | cs.SE | This paper reports on a study of transferring a desktop-based model-based
engineering tool to a web application. The study has been conducted in the
WEBMODEL project where the well-established technology stack around the Eclipse
platform and the Eclipse Modeling Framework was lifted into a cloud-based
environment. As results, a modeling language independent tooling kernel for
web-based modeling tools and a minimal prototypical web-based implementation of
the AutoFOCUS 3 model-based engineering tool are presented. Furthermore, the
report documents experiences and implementation advice gained during the
implementation.
| 2023-02-27 19:08:07.000000000 |
2309.04142 | Trustworthy and Synergistic Artificial Intelligence for Software
Engineering: Vision and Roadmaps | cs.SE cs.AI | For decades, much software engineering research has been dedicated to
devising automated solutions aimed at enhancing developer productivity and
elevating software quality. The past two decades have witnessed an unparalleled
surge in the development of intelligent solutions tailored for software
engineering tasks. This momentum established the Artificial Intelligence for
Software Engineering (AI4SE) area, which has swiftly become one of the most
active and popular areas within the software engineering field.
This Future of Software Engineering (FoSE) paper navigates through several
focal points. It commences with a succinct introduction and history of AI4SE.
Thereafter, it underscores the core challenges inherent to AI4SE, particularly
highlighting the need to realize trustworthy and synergistic AI4SE.
Progressing, the paper paints a vision for the potential leaps achievable if
AI4SE's key challenges are surmounted, suggesting a transition towards Software
Engineering 2.0. Two strategic roadmaps are then laid out: one centered on
realizing trustworthy AI4SE, and the other on fostering synergistic AI4SE.
While this paper may not serve as a conclusive guide, its intent is to catalyze
further progress. The ultimate aspiration is to position AI4SE as a linchpin in
redefining the horizons of software engineering, propelling us toward Software
Engineering 2.0.
| 2023-09-08 05:53:24.000000000 |
2105.00041 | Towards Certified Analysis of Software Product Line Safety Cases | cs.SE | Safety-critical software systems are in many cases designed and implemented
as families of products, usually referred to as Software Product Lines (SPLs).
Products within an SPL vary from each other in terms of which features they
include. Applying existing analysis techniques to SPLs and their safety cases
is usually challenging because of the potentially exponential number of
products with respect to the number of supported features. In this paper, we
present a methodology and infrastructure for certified \emph{lifting} of
existing single-product safety analyses to product lines. To ensure certified
safety of our infrastructure, we implement it in an interactive theorem prover,
including formal definitions, lemmas, correctness criteria theorems, and
proofs. We apply this infrastructure to formalize and lift a Change Impact
Assessment (CIA) algorithm. We present a formal definition of the lifted
algorithm, outline its correctness proof (with the full machine-checked proof
available online), and discuss its implementation within a model management
framework.
| 2021-04-30 18:51:46.000000000 |
2104.07460 | Automated Conformance Testing for JavaScript Engines via Deep Compiler
Fuzzing | cs.SE cs.PL | JavaScript (JS) is a popular, platform-independent programming language. To
ensure the interoperability of JS programs across different platforms, the
implementation of a JS engine should conform to the ECMAScript standard.
However, doing so is challenging as there are many subtle definitions of API
behaviors, and the definitions keep evolving.
We present COMFORT, a new compiler fuzzing framework for detecting JS engine
bugs and behaviors that deviate from the ECMAScript standard. COMFORT leverages
the recent advance in deep learning-based language models to automatically
generate JS test code. As a departure from prior fuzzers, COMFORT utilizes the
well-structured ECMAScript specifications to automatically generate test data
along with the test programs to expose bugs that could be overlooked by the
developers or manually written test cases. COMFORT then applies differential
testing methodologies on the generated test cases to expose standard
conformance bugs. We apply COMFORT to ten mainstream JS engines. In 200 hours
of automated concurrent testing runs, we discover bugs in all tested JS
engines. We identified 158 unique JS engine bugs, of which 129 have been
verified, and 115 have already been fixed by the developers. Furthermore, 21 of
the COMFORT-generated test cases have been added to Test262, the official
ECMAScript conformance test suite.
| 2021-04-15 13:47:42.000000000 |
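The differential-testing core of such a framework is simple to sketch: run the same generated test on several engines and flag any disagreement as a candidate conformance bug. The engine binary names below are assumptions, and the LLM-based test generation that COMFORT adds is out of scope here.

```python
import subprocess
from collections import defaultdict

# Assumed engine shells on PATH; adjust to your local installs.
ENGINES = {"v8": ["d8"], "spidermonkey": ["js"], "jsc": ["jsc"]}

def run(engine_cmd, test_file, timeout=10):
    try:
        p = subprocess.run(engine_cmd + [test_file], capture_output=True,
                           text=True, timeout=timeout)
        return (p.returncode, p.stdout.strip())
    except subprocess.TimeoutExpired:
        return ("timeout", "")

def differential_test(test_file):
    # Group engines by observed behavior; >1 group means they disagree.
    groups = defaultdict(list)
    for name, cmd in ENGINES.items():
        groups[run(cmd, test_file)].append(name)
    return dict(groups) if len(groups) > 1 else None  # None => all agree

disagreement = differential_test("test_case.js")
if disagreement:
    print("Potential conformance bug:", disagreement)
```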
2005.06279 | Failure Mode Reasoning in Model Based Safety Analysis | cs.SE | Failure Mode Reasoning (FMR) is a novel approach for analyzing failure in a
Safety Instrumented System (SIS). The method uses an automatic analysis of an
SIS program to calculate potential failures in parts of the SIS. In this paper
we use a case study from the power industry to demonstrate how FMR can be
utilized in conjunction with other model-based safety analysis methods, such as
HiP-HOPS and CFT, in order to achieve a comprehensive safety analysis of SIS.
In this case study, FMR covers the analysis of SIS inputs while HiP-HOPS/CFT
models the faults of logic solver and final elements. The SIS program is
analyzed by FMR and the results are exported to HiP-HOPS/CFT via automated
interfaces. The final outcome is the collective list of SIS failure modes along
with their reliability measures. We present and review the results from both
qualitative and quantitative perspectives.
| 2020-05-11 23:14:59.000000000 |
2307.09964 | Towards green AI-based software systems: an architecture-centric
approach (GAISSA) | cs.SE cs.LG | Nowadays, AI-based systems have achieved outstanding results and have
outperformed humans in different domains. However, the processes of training AI
models and inferring from them require substantial computational resources, which
poses a significant challenge given current societal demands for energy efficiency. To
cope with this challenge, this research project paper describes the main
vision, goals, and expected outcomes of the GAISSA project. The GAISSA project
aims to provide data scientists and software engineers with tool-supported,
architecture-centric methods for the modelling and development of green
AI-based systems. Although the project is in an initial stage, we describe the
current research results, which illustrate the potential to achieve GAISSA
objectives.
| 2023-07-19 13:14:47.000000000 |
2007.14070 | Anomaly detection in Context-aware Feature Models | cs.AI cs.SE | Feature Models are a mechanism to organize the configuration space and
facilitate the construction of software variants by describing configuration
options using features, i.e., a name representing a functionality. The
development of Feature Models is an error-prone activity and detecting their
anomalies is a challenging and important task needed to promote their usage.
Recently, Feature Models have been extended with context to capture the
correlation of configuration options with contextual influences and user
customizations. Unfortunately, this extension makes the task of detecting
anomalies harder. In this paper, we formalize the anomaly analysis in
Context-aware Feature Models and we show how Quantified Boolean Formula (QBF)
solvers can be used to detect anomalies without relying on iterative calls to a
SAT solver. By extending the reconfigurator engine HyVarRec, we present
findings evidencing that QBF solvers can outperform the common techniques for
anomaly analysis.
| 2020-07-28 08:59:14.000000000 |
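For context, the iterative-SAT baseline that the paper improves on looks roughly like the sketch below, which checks each feature for "deadness" with a separate solver call (here via Z3); the paper's contribution is replacing such per-feature calls with a single QBF query once contexts are involved. The toy feature model is invented.

```python
from z3 import Bool, Solver, And, Or, Implies, Not, sat

A, B, C = Bool("A"), Bool("B"), Bool("C")
model = And(A,                 # root feature is mandatory
            Implies(B, A),     # B and C are children of A
            Implies(C, A),
            Implies(A, Or(B, C)),
            Not(And(B, C)),    # B and C exclude each other
            Not(C))            # a contextual constraint that kills C

def dead_features(constraints, features):
    """A feature is dead if no valid configuration selects it."""
    dead = []
    for f in features:
        s = Solver()
        s.add(constraints, f)  # any valid configuration with f selected?
        if s.check() != sat:
            dead.append(f)
    return dead

print(dead_features(model, [A, B, C]))  # -> [C]
```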
2310.11171 | Improving Testing Behavior by Gamifying IntelliJ | cs.SE | Testing is an important aspect of software development, but unfortunately, it
is often neglected. While test quality analyses such as code coverage or
mutation analysis inform developers about the quality of their tests, such
reports are viewed only sporadically during continuous integration or code
review, if they are considered at all, and their impact on the developers'
testing behavior therefore tends to be negligible. To actually influence
developer behavior, it may rather be necessary to motivate developers directly
within their programming environment, while they are coding. We introduce
IntelliGame, a gamified plugin for the popular IntelliJ Java Integrated
Development Environment, which rewards developers for positive testing behavior
using a multi-level achievement system: A total of 27 different achievements,
each with incremental levels, provide affirming feedback when developers
exhibit commendable testing behavior, and provide an incentive to further
continue and improve this behavior. A controlled experiment with 49
participants given a Java programming task reveals substantial differences in
the testing behavior triggered by IntelliGame: Incentivized developers write
more tests, achieve higher coverage and mutation scores, run their tests more
often, and achieve functionality earlier.
| 2023-10-17 11:40:55.000000000 |
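A toy sketch of the multi-level achievement mechanism described above: each achievement tracks a counter and unlocks the next level at fixed thresholds. Names and thresholds are invented, not IntelliGame's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class Achievement:
    name: str
    thresholds: tuple  # counter values at which the next level unlocks
    count: int = 0

    @property
    def level(self):
        return sum(self.count >= t for t in self.thresholds)

    def record(self, n=1):
        """Increment the counter and announce any newly unlocked level."""
        before = self.level
        self.count += n
        if self.level > before:
            print(f"Achievement unlocked: {self.name} level {self.level}!")

test_runner = Achievement("Test Runner", thresholds=(1, 10, 50, 100))
for _ in range(10):
    test_runner.record()  # fires at the 1st and 10th test run
```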
2403.06723 | A SysML Profile for the Standardized Description of Processes during
System Development | cs.SE cs.SY eess.SY | A key aspect in creating models of production systems with the use of
model-based systems engineering (MBSE) lies in the description of system
functions. These functions should be described in a clear and standardized
manner. The VDI/VDE 3682 standard for Formalised Process Description (FPD)
provides a simple and easily understandable representation of processes. These
processes can be conceptualized as functions within the system model, making
the FPD particularly well-suited for the standardized representation of the
required functions. Hence, this contribution focuses on the development of a
Domain-Specific Modeling Language (DSML) that facilitates the integration of
VDI/VDE 3682 into the Systems Modeling Language (SysML). The presented approach
not only extends classical SysML with domain-specific requirements but also
facilitates model verification through constraints modeled in Object Constraint
Language (OCL). Additionally, it enables automatic serialization of process
descriptions into the Extensible Markup Language (XML) using the Velocity
Template Language (VTL). This serialization enables the use of process modeling
in applications outside of MBSE. The approach was validated using a collar
screwing use case in major component assembly in aircraft production.
| 2024-03-11 13:44:38.000000000 |
1811.11940 | Tracking Systems as Thinging Machine: A Case Study of a Service Company | cs.SE | Object tracking systems play important roles in tracking moving objects and
in addressing concerns such as safety, security, and other location-related
applications. Problems arise from the difficulties in creating a well-defined
and understandable description of tracking systems. Nowadays, describing such
processes results in fragmental representation that most of the time leads to
difficulties creating documentation. Additionally, once learned by assigned
personnel, repeated tasks result in them continuing on autopilot in a way that
often degrades their effectiveness. This paper proposes the modeling of
tracking systems in terms of a new diagrammatic methodology to produce
engineering-like schemata. The resultant diagrams can be used in documentation,
explanation, communication, education and control.
| 2018-11-28 09:37:28.000000000 |
2206.14606 | Building a Secure Software Supply Chain with GNU Guix | cs.SE cs.PL | The software supply chain is becoming a widespread analogy to designate the
series of steps taken to go from source code published by developers to
executables running on the users' computers. A security vulnerability in any of
these steps puts users at risk, and evidence shows that attacks on the supply
chain are becoming more common. The consequences of an attack on the software
supply chain can be tragic in a society that relies on many interconnected
software systems, and this has led both research interest and governmental
incentives for supply chain security to rise.
GNU Guix is a software deployment tool and software distribution that
supports provenance tracking, reproducible builds, and reproducible software
environments. Unlike many software distributions, it consists exclusively of
source code: it provides a set of package definitions that describe how to
build code from source. Together, these properties set it apart from many
deployment tools that center on the distribution of binaries.
This paper focuses on one research question: how can Guix and similar systems
allow users to securely update their software? Guix source code is distributed
using the Git version control system; updating Guix-installed software packages
means, first, updating the local copy of the Guix source code. Prior work on
secure software updates focuses on systems very different from Guix -- systems
such as Debian, Fedora, or PyPI where updating consists in fetching metadata
about the latest binary artifacts available -- and is largely inapplicable in
the context of Guix. By contrast, the main threats for Guix are attacks on its
source code repository, which could lead users to run inauthentic code or to
downgrade their system. Deployment tools that more closely resemble Guix, from
Nix to Portage, either lack secure update mechanisms or suffer from
shortcomings.
Our main contribution is a model and tool to authenticate new Git revisions.
We further show how, building on Git semantics, we build protections against
downgrade attacks and related threats. We explain implementation choices. This
work was deployed in production two years ago, giving us insight into its
actual use at scale every day. The Git checkout authentication at its core is
applicable beyond the specific use case of Guix, and we think it could benefit
developer teams that use Git.
As attacks on the software supply chain appear, security research is now
looking at every link of the supply chain. Secure updates are one important
aspect of the supply chain, but this paper also looks at the broader context:
how Guix models and implements the supply chain, from upstream source code to
binaries running on computers. While much recent work focuses on attestation --
certifying each link of the supply chain -- Guix takes a more radical approach:
enabling independent verification of each step, building on reproducible
builds, "bootstrappable" builds, and provenance tracking. The big picture shows
how Guix can be used as the foundation of secure software supply chains.
| 2022-06-28 08:53:21.000000000 |
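A much-simplified sketch of the commit-authentication model the paper describes: an update is accepted only if it fast-forwards the last trusted commit (downgrade protection) and every new commit is signed by an authorized key. This shells out to plain git and elides Guix's actual per-commit authorization mechanism, so treat it as an illustration, not Guix code.

```python
import subprocess

def git(*args):
    return subprocess.run(["git", *args], capture_output=True, text=True)

def signer_fingerprint(commit):
    # %GF prints the fingerprint of the key that validly signed the commit.
    return git("log", "-1", "--format=%GF", commit).stdout.strip()

def is_ancestor(old, new):
    return git("merge-base", "--is-ancestor", old, new).returncode == 0

def authenticate_update(last_trusted, new_head, authorized_keys):
    # Downgrade protection: the new head must descend from the trusted commit.
    if not is_ancestor(last_trusted, new_head):
        raise RuntimeError("possible downgrade attack: history rewritten")
    commits = git("rev-list", f"{last_trusted}..{new_head}").stdout.split()
    for c in commits:
        if signer_fingerprint(c) not in authorized_keys:
            raise RuntimeError(f"commit {c} not signed by an authorized key")
    return new_head  # safe to advance the trusted pointer
```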
2109.06404 | Detecting Multi-Sensor Fusion Errors in Advanced Driver-Assistance
Systems | cs.RO cs.AI cs.LG cs.SE | Advanced Driver-Assistance Systems (ADAS) have been thriving and widely
deployed in recent years. In general, these systems receive sensor data,
compute driving decisions, and output control signals to the vehicles. To
smooth out the uncertainties brought by sensor outputs, they usually leverage
multi-sensor fusion (MSF) to fuse the sensor outputs and produce a more
reliable understanding of the surroundings. However, MSF cannot completely
eliminate the uncertainties since it lacks the knowledge about which sensor
provides the most accurate data and how to optimally integrate the data
provided by the sensors. As a result, critical consequences might happen
unexpectedly. In this work, we observed that the popular MSF methods in an
industry-grade ADAS can mislead the car control and result in serious safety
hazards. We define the failures (e.g., car crashes) caused by the faulty MSF as
fusion errors and develop a novel evolutionary-based domain-specific search
framework, FusED, for the efficient detection of fusion errors. We further
apply causality analysis to show that the found fusion errors are indeed caused
by the MSF method. We evaluate our framework on two widely used MSF methods in
two driving environments. Experimental results show that FusED identifies more
than 150 fusion errors. Finally, we provide several suggestions to improve the
MSF methods we study.
| 2021-09-14 02:35:34.000000000 |
1608.00656 | Parametric, Probabilistic, Timed Resource Discovery System | cs.DC cs.SE | This paper presents a fully distributed resource discovery and reservation
system. Verification of such a system is important to ensure the execution of
distributed applications on a set of resources in appropriate conditions. A
semi-formal model for this system is presented using probabilistic timed
automata. This model is timed, parametric and probabilistic, making it a
challenge for the parameter synthesis community.
| 2016-08-02 00:37:01.000000000 |
2205.09552 | Hybrid Intelligent Testing in Simulation-Based Verification | cs.AR cs.AI cs.LG cs.SE | Efficient and effective testing for simulation-based hardware verification is
challenging. Using constrained random test generation, several millions of
tests may be required to achieve coverage goals. The vast majority of tests do
not contribute to coverage progress, yet they consume verification resources.
In this paper, we propose a hybrid intelligent testing approach combining two
methods that have previously been treated separately, namely Coverage-Directed
Test Selection and Novelty-Driven Verification. Coverage-Directed Test
Selection learns from coverage feedback to bias testing toward the most
effective tests. Novelty-Driven Verification learns to identify and simulate
stimuli that differ from previous stimuli, thereby reducing the number of
simulations and increasing testing efficiency. We discuss the strengths and
limitations of each method, and we show how our approach addresses each
method's limitations, leading to hardware testing that is both efficient and
effective.
| 2022-05-19 13:22:08.000000000 |
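The novelty-driven half of this combination can be sketched with a simple distance-based filter: only stimuli sufficiently far from everything already simulated are kept. The feature vectors, distance metric, and threshold below are illustrative assumptions, not the paper's actual novelty model.

```python
import numpy as np

def novelty_filter(candidates, simulated, threshold=0.5):
    """Keep only candidate stimuli whose nearest simulated neighbour is
    farther than `threshold` (Euclidean distance on feature vectors)."""
    selected = []
    history = list(simulated)
    for c in candidates:
        if not history or min(np.linalg.norm(c - h) for h in history) > threshold:
            selected.append(c)
            history.append(c)  # later candidates must also differ from this one
    return selected

rng = np.random.default_rng(1)
pool = [rng.random(4) for _ in range(100)]
print(len(novelty_filter(pool, simulated=[])), "of 100 stimuli deemed novel")
```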
1112.5774 | Dynamic Composition of Evolving Process Types | cs.SE | Classical approaches like process algebras or labelled transition systems
deal with static composition to model non-trivial concurrent or distributed
systems; this is not sufficient for systems with dynamic architecture and with
variable number of components. We introduce a method to guide the modelling and
the dynamic composition of processes to build large distributed systems with
dynamic ad hoc architecture. The modelling and the composition are based on an
event-based approach that favours the decoupling of the system components. The
composition uses the sharing of abstract communication channels. The method is
appropriate for dealing with evolving processes (with mobility and mutation). The
Event-B method is used for practical support. A fauna and its evolution are
considered as a working example; this system has some specific characteristics:
its behaviour is not foreseeable, and it has an ad hoc (not statically fixed)
architecture.
| 2011-12-25 10:09:58.000000000 |
2303.10015 | Where and What do Software Architects blog? An Exploratory Study on
Architectural Knowledge in Blogs, and their Relevance to Design Steps | cs.SE | Software engineers share their architectural knowledge (AK) in different
places on the Web. Recent studies show that architectural blogs contain the
most relevant AK, which can help software engineers to make design steps.
Nevertheless, we know little about blogs, and specifically architectural blogs,
where software engineers share their AK. In this paper, we conduct an
exploratory study on architectural blogs to explore their types, topics, and
their AK. Moreover, we determine the relevance of architectural blogs to make
design steps. Our results support researchers and practitioners to find and
re-use AK from blogs.
| 2023-03-17 14:44:13.000000000 |
2108.10381 | On The (In)Effectiveness of Static Logic Bomb Detector for Android Apps | cs.CR cs.SE | Android is present in more than 85% of mobile devices, making it a prime
target for malware. Malicious code is becoming increasingly sophisticated and
relies on logic bombs to hide itself from dynamic analysis. In this paper, we
perform a large scale study of TSOPEN, our open-source implementation of the
state-of-the-art static logic bomb scanner TRIGGERSCOPE, on more than 500k
Android applications. Results indicate that the approach scales. Moreover, we
investigate the discrepancies and show that the approach can reach a very low
false-positive rate, 0.3%, but at a particular cost, e.g., removing 90% of
sensitive methods. Therefore, it might not be realistic to rely on such an
approach to automatically detect all logic bombs in large datasets. However, it
could be used to speed up the location of malicious code, for instance, while
reverse engineering applications. We also present TRIGDB a database of 68
Android applications containing trigger-based behavior as a ground-truth to the
research community.
| 2021-08-23 19:36:01.000000000 |
1912.09519 | Analyzing Web Search Behavior for Software Engineering Tasks | cs.SE cs.IR | Web search plays an integral role in software engineering (SE) to help with
various tasks such as finding documentation, debugging, installation, etc. In
this work, we present the first large-scale analysis of web search behavior for
SE tasks using the search query logs from Bing, a commercial web search engine.
First, we use distant supervision techniques to build a machine learning
classifier to extract the SE search queries with an F1 score of 93%. We then
perform an analysis on one million search sessions to understand how software
engineering related queries and sessions differ from other queries and
sessions. Subsequently, we propose a taxonomy of intents to identify the
various contexts in which web search is used in software engineering. Lastly,
we analyze millions of SE queries to understand the distribution, search
metrics and trends across these SE search intents. Our analysis shows that SE
related queries form a significant portion of the overall web search traffic.
Additionally, we found that there are six major intent categories for which web
search is used in software engineering. The techniques and insights can not
only help improve existing tools but can also inspire the development of new
tools that aid in finding information for SE related tasks.
| 2019-12-19 19:46:26.000000000 |
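A minimal sketch of the distant-supervision step described above: noisy labels come from a simple heuristic (here, whether the clicked domain is a known SE site, which is an assumption about the heuristic used), and a text classifier then generalizes beyond it. The query log is toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

SE_DOMAINS = {"stackoverflow.com", "docs.python.org", "github.com"}  # illustrative

def distant_label(clicked_domain):
    # Noisy label: a click on a known SE site marks the query as SE-related.
    return 1 if clicked_domain in SE_DOMAINS else 0

log = [("how to parse json in python", "stackoverflow.com"),
       ("segfault in linked list c", "stackoverflow.com"),
       ("best pizza near me", "yelp.com"),
       ("pip install permission denied", "github.com"),
       ("weather tomorrow", "weather.com")]

X = [query for query, _ in log]
y = [distant_label(domain) for _, domain in log]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)
print(clf.predict(["undefined reference to main"]))  # likely SE
```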
1312.0461 | Abmash: Mashing Up Legacy Web Applications by Automated Imitation of
Human Actions | cs.SE | Many business web-based applications do not offer applications programming
interfaces (APIs) to enable other applications to access their data and
functions in a programmatic manner. This makes their composition difficult (for
instance to synchronize data between two applications). To address this
challenge, this paper presents Abmash, an approach to facilitate the
integration of such legacy web applications by automatically imitating human
interactions with them. By automatically interacting with the graphical user
interface (GUI) of web applications, the system supports all forms of
integrations including bi-directional interactions and is able to interact with
AJAX-based applications. Furthermore, the integration programs are easy to
write since they deal with end-user, visual user-interface elements. The
integration code is simple enough to be called a "mashup".
| 2013-12-02 14:04:50.000000000 |
1809.02724 | An automated model-based test oracle for access control systems | cs.SE | In the context of XACML-based access control systems, an intensive testing
activity is among the most adopted means to ensure that sensitive information or
resources are correctly accessed. Unfortunately, it requires a huge effort for
manual inspection of results: thus automated verdict derivation is a key aspect
for improving the cost-effectiveness of testing. To this purpose, we introduce
XACMET, a novel approach for automated model-based oracle definition. XACMET
defines a typed graph, called the XAC-Graph, that models the XACML policy
evaluation. The expected verdict of a specific request execution can thus be
automatically derived by executing the corresponding path in such graph. Our
validation of the XACMET prototype implementation confirms the effectiveness of
the proposed approach.
| 2018-09-08 00:38:16.000000000 |
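The oracle idea can be sketched as a walk over a guarded evaluation graph: the expected verdict for a request is whatever terminal node the request's attributes steer it to, with guards tried in order. The two-rule policy below is invented and far simpler than a real XAC-Graph.

```python
# Each node maps to an ordered list of (target, guard) edges; the first
# guard that holds for the request decides the next node.
GRAPH = {
    "start": [("rule1", lambda r: r["role"] == "doctor"),
              ("deny",  lambda r: True)],   # not applicable -> deny by default
    "rule1": [("permit", lambda r: r["action"] == "read"),
              ("deny",   lambda r: True)],
}

def expected_verdict(request, node="start"):
    while node not in ("permit", "deny"):
        node = next(target for target, guard in GRAPH[node] if guard(request))
    return node

assert expected_verdict({"role": "doctor", "action": "read"}) == "permit"
assert expected_verdict({"role": "doctor", "action": "write"}) == "deny"
assert expected_verdict({"role": "nurse", "action": "read"}) == "deny"
```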
1612.05975 | D-LITe: Building Internet of Things Choreographies | cs.SE | In this work, we present a complete architecture for designing Internet of
Things applications. While a main issue in this domain is the heterogeneity of
Objects hardware, networks and protocols, we propose D-LITe, a solution to hide
this wide range of low-layer technologies. By abstracting the hardware, we
focus on an object's features rather than its physical characteristics. D-LITe aims to
give universal access to an object's internal processing and computational
power. A small virtual machine embedded in each object gives a universal view
of its functionalities. Each object's features are discovered and programmed
through the network, without any physical access. D-LITe comes with the SALT
language that describes the logical behaviour needed to include user's Objects
into an IoT application. This communication is based on REST architecture.
Gathering all these logical units into a global composition is our way to build
a services Choreography, in which each Object has its own task to achieve. This
paper also presents an analysis of the gain obtained when a choreography is
used instead of the more common orchestration of services.
| 2016-12-18 19:21:19.000000000 |
2005.14510 | Avoiding Unnecessary Information Loss: Correct and Efficient Model
Synchronization Based on Triple Graph Grammars | cs.SE | Model synchronization, i.e., the task of restoring consistency between two
interrelated models after a model change, is a challenging task. Triple Graph
Grammars (TGGs) specify model consistency by means of rules that describe how
to create consistent pairs of models. These rules can be used to automatically
derive further rules, which describe how to propagate changes from one model to
the other or how to change one model in such a way that propagation is
guaranteed to be possible. Restricting model synchronization to these derived
rules, however, may lead to unnecessary deletion and recreation of model
elements during change propagation. This is inefficient and may cause
unnecessary information loss, i.e., when deleted elements contain information
that is not represented in the second model, this information cannot be
recovered easily. Short-cut rules have recently been developed to avoid
unnecessary information loss by reusing existing model elements. In this paper,
we show how to automatically derive (short-cut) repair rules from short-cut
rules to propagate changes such that information loss is avoided and model
synchronization is accelerated. The key ingredients of our rule-based model
synchronization process are these repair rules and an incremental pattern
matcher informing about suitable applications of them. We prove the termination
and the correctness of this synchronization process and discuss its
completeness. As a proof of concept, we have implemented this synchronization
process in eMoflon, a state-of-the-art model transformation tool with inherent
support of bidirectionality. Our evaluation shows that repair processes based
on (short-cut) repair rules have considerably decreased information loss and
improved performance compared to former model synchronization processes based
on TGGs.
| 2020-05-29 11:48:16.000000000 |
1902.10517 | On the validation of complex systems operating in open contexts | cs.SE | In the recent years, there has been a rush towards highly autonomous systems
operating in public environments, such as automated driving of road vehicles,
passenger shuttle systems and mobile robots. These systems, operating in
unstructured, public real-world environments (the operational design domain can
be characterized as open context) per se bear a serious safety risk. The
serious safety risk, the complexity of the necessary technical systems, the
openness of the operational design domain and the regulatory situation pose a
fundamental challenge to the automotive industry.
Many different approaches to the validation of autonomous driving functions
have been proposed over the course of the last years. However, although partly
announced as the solution to the validation challenge, many of the praised
approaches leave open crucial parts.
To illustrate the contributions as well as the limitations of the individual
approaches, and to provide strategies for 'viable' validation and approval of
such systems, the first part of the paper gives an analysis of the fundamental
challenges related to the valid design and operation of complex autonomous
systems operating in open contexts. In the second part, we formalize the
problem statement and provide algorithms for an iterative development and
validation. In the last part we give a high level overview of a practical,
holistic development process which we refer to as systematic, system view based
approach to validation (in short sys2val) and comment on the contributions from
ISO26262 and current state of ISO/PAS 21448 (SOTIF).
| 2019-01-22 09:32:16.000000000 |
1505.00005 | A Case Study on Quality Attribute Measurement using MARF and GIPSY | cs.SE | This paper presents a comparative analysis between the Modular Audio
Recognition Framework (MARF) and the General Intentional Programming System
(GIPSY) with the help of different software metrics. At first, we understand
the general principles, architecture and working of MARF and GIPSY by looking
at their frameworks and running them in the Eclipse environment. Then, we study
some of the important metrics including a few state of the art metrics and rank
them in terms of their usefulness and their influence on the different quality
attributes of software. The quality attributes are viewed and computed with
the help of the Logiscope and McCabe IQ tools. These tools perform a
comprehensive analysis on the case studies and generate a quality report at the
factor level, criteria level and metrics level. In the next step, we identify the
worst code at each of these levels, extract the worst code and provide
recommendations to improve the quality. We implement and test some of the
metrics which are ranked as the most useful metrics with a set of test cases in
JDeodorant. Finally, we perform an analysis on both MARF and GIPSY by doing a
fuzzy code scan using MARFCAT to find the list of weak and vulnerable classes.
| 2015-04-30 00:42:18.000000000 |
2307.14406 | Demystifying Code Snippets in Code Reviews: A Study of the OpenStack and
Qt Communities and A Practitioner Survey | cs.SE | Code review is widely known as one of the best practices for software quality
assurance in software development. In a typical code review process, reviewers
check the code committed by developers to ensure the quality of the code,
during which reviewers and developers would communicate with each other in
review comments to exchange necessary information. As a result, understanding
the information in review comments is a prerequisite for reviewers and
developers to conduct an effective code review. Code snippet, as a special form
of code, can be used to convey necessary information in code reviews. For
example, reviewers can use code snippets to make suggestions or elaborate their
ideas to meet developers' information needs in code reviews. However, little
research has focused on the practices of providing code snippets in code
reviews. To bridge this gap, we conduct a mixed-methods study to mine
information and knowledge related to code snippets in code reviews, which can
help practitioners and researchers get a better understanding about using code
snippets in code review. Specifically, our study includes two phases: mining
code review data and conducting practitioners' survey. The study results
highlight that reviewers can provide code snippets in appropriate scenarios to
meet developers' specific information needs in code reviews, which will
facilitate and accelerate the code review process.
| 2023-07-26 17:49:19.000000000 |
1807.07387 | The State of Sustainable Research Software: Results from the Workshop on
Sustainable Software for Science: Practice and Experiences (WSSSPE5.1) | cs.SE | This article summarizes motivations, organization, and activities of the
Workshop on Sustainable Software for Science: Practice and Experiences
(WSSSPE5.1) held in Manchester, UK in September 2017. The WSSSPE series
promotes sustainable research software by positively impacting principles and
best practices, careers, learning, and credit. This article discusses the Code
of Conduct, idea papers, position papers, experience papers, demos, and
lightning talks presented during the workshop. The main part of the article
discusses the speed-blogging groups that formed during the meeting, along with
the outputs of those sessions.
| 2018-07-19 13:20:11.000000000 |
2010.12282 | Exploring Research Interest in Stack Overflow -- A Systematic Mapping
Study and Quality Evaluation | cs.SE | Platforms such as Stack Overflow are available for software practitioners to
solicit solutions to their challenges and knowledge needs. The practices
therein have in recent times however triggered quality related concerns. This
is a noteworthy issue when considering that the Stack Overflow platform is used
by numerous software developers. Academic research tends to provide validation
for the practices and processes employed by Stack Overflow and other such
forums. However, previous work did not review the scale of scientific attention
that is given to this cause. Continuing from our preliminary work, we conducted
a Systematic Mapping study involving 265 papers from six relevant databases to
address this gap. In this work, we explored the level of academic interest
Stack Overflow has generated, the publication venues that are targeted, the
topics that are studied, approaches used, types of contributions and the
quality of the publications that are written about Stack Overflow. Outcomes
show that Stack Overflow has attracted increasing research interest over the
years, with topics relating to both community dynamics and human factors, and
technical issues. In addition, research studies have been largely evaluative or
proposed solutions; however, the latter approach tends to lack validation. The
contributions of these studies are often techniques or answers to a specific
problem. Evaluating the quality of all studies that were dedicated to software
programming (58 papers), our outcomes show that on average only 58% of the
developed quality criteria were met. Notwithstanding that research is
continually aiming to understand Stack Overflow and other similar communities,
further investigations are required to validate such studies and the solutions
they propose.
| 2020-10-23 10:27:29.000000000 |
2305.05303 | ENCOVIZ: An open-source, secure and multi-role energy consumption
visualisation platform | cs.SE | The need for a more energy efficient future is now more evident than ever and
has led to the continuous growth of sectors with greater potential for energy
savings, such as smart buildings, energy consumption meters, etc. The large
volume of energy-related data produced is a huge advantage but, at the same
time, it creates a new problem: the need to structure, organize, and efficiently
present this meaningful information. In this context, we present the ENCOVIZ
platform, a multi-role, extensible, secure, energy consumption visualization
platform with built-in analytics. ENCOVIZ has been built in accordance with the
best visualisation practices, on top of open source technologies and includes
(i) multi-role functionalities, (ii) the automated ingestion of energy
consumption data and (iii) proper visualisations and information to support
effective decision making both for energy providers and consumers.
| 2023-05-09 09:48:09.000000000 |
2309.00900 | Large Process Models: Business Process Management in the Age of
Generative AI | cs.SE cs.AI | The continued success of Large Language Models (LLMs) and other generative
artificial intelligence approaches highlights the advantages that large
information corpora can have over rigidly defined symbolic models, but also
serves as a proof-point of the challenges that purely statistics-based
approaches have in terms of safety and trustworthiness. As a framework for
contextualizing the potential, as well as the limitations of LLMs and other
foundation model-based technologies, we propose the concept of a Large Process
Model (LPM) that combines the correlation power of LLMs with the analytical
precision and reliability of knowledge-based systems and automated reasoning
approaches. LPMs are envisioned to directly utilize the wealth of process
management experience that experts have accumulated, as well as process
performance data of organizations with diverse characteristics, e.g., regarding
size, region, or industry. In this vision, the proposed LPM would allow
organizations to receive context-specific (tailored) process and other business
models, analytical deep-dives, and improvement recommendations. As such, they
would make it possible to substantially decrease the time and effort required for business
transformation, while also allowing for deeper, more impactful, and more
actionable insights than previously possible. We argue that implementing an LPM
is feasible, but also highlight limitations and research challenges that need
to be solved to implement particular aspects of the LPM vision.
| 2023-09-02 10:32:53.000000000 |
2107.13902 | Developers perception on the severity of test smells: an empirical study | cs.SE | Unit testing is an essential component of the software development
life-cycle. A developer could easily and quickly catch and fix software faults
introduced in the source code by creating and running unit tests. Despite their
importance, unit tests are subject to bad design or implementation decisions,
the so-called test smells. These might decrease software systems quality from
various aspects, making it harder to understand, more complex to maintain, and
more prone to errors and bugs. Many studies discuss the likely effects of test
smells on test code. However, there is a lack of studies that capture
developers perceptions of such issues. This study empirically analyzes how
developers perceive the severity of test smells in the test code they develop.
Severity refers to the degree to how a test smell may negatively impact the
test code. We selected six open-source software projects from GitHub and
interviewed their developers to understand whether and how the test smells
affected the test code. Although most of the interviewed developers considered
the test smells as having a low severity to their code, they indicated that
test smells might negatively impact the project, particularly in test code
maintainability and evolution. Also, detecting and removing test smells from
the test code may be positive for the project.
| 2021-07-29 11:26:08.000000000 |
2312.08477 | E&V: Prompting Large Language Models to Perform Static Analysis by
Pseudo-code Execution and Verification | cs.SE | Static analysis, the process of examining code without executing it, is
crucial for identifying software issues. Yet, static analysis is hampered by
its complexity and the need for customization for different targets.
Traditional static analysis tools require extensive human effort and are often
limited to specific target programs and programming languages. Recent
advancements in Large Language Models (LLMs), such as GPT-4 and Llama, offer
new capabilities for software engineering tasks. However, their application in
static analysis, especially in understanding complex code structures, remains
under-explored. This paper introduces a novel approach named E&V, which
leverages LLMs to perform static analysis. Specifically, E&V employs LLMs to
simulate the execution of pseudo-code, effectively conducting static analysis
encoded in the pseudo-code with minimal human effort, thereby improving the
accuracy of results. E&V includes a verification process for pseudo-code
execution without needing an external oracle. This process allows E&V to
mitigate hallucinations of LLMs and enhance the accuracy of static analysis
results. We have implemented E&V in a prototype tool designed for triaging
crashes through backward taint analysis. This prototype, paired with GPT-4-32k,
has been applied to triage 170 recently fixed Linux kernel bugs across seven
bug categories. Our experiments demonstrate that the prototype correctly
identifies the blamed function in 81.2% of the cases. Additionally, we observe
that our novel verification process significantly improves the accuracy,
increasing it from 28.2% to 81.2%.
| 2023-12-13 19:31:00.000000000 |
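A hedged sketch of the execute-then-verify loop described above; `llm()` is a placeholder for any chat-completion call, and the prompts and pseudo-code are paraphrases rather than the paper's actual templates.

```python
PSEUDOCODE = """
for each frame F in the stack trace (innermost first):
    taint = variables of F influencing the crashing value
    if taint originates in F: report F as the blamed function
"""

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual model call here")

def analyze(crash_report: str) -> str:
    # Step 1: ask the model to simulate the pseudo-code on the input.
    trace = llm(f"Execute this pseudo-code step by step on the input.\n"
                f"Pseudo-code:\n{PSEUDOCODE}\nInput:\n{crash_report}\n"
                f"Show every intermediate step, then the blamed function.")
    # Step 2: verify the trace against the pseudo-code, without an external oracle.
    verdict = llm(f"Check the following execution trace for steps that do not "
                  f"follow the pseudo-code. Answer CONSISTENT or list errors.\n{trace}")
    if "CONSISTENT" not in verdict:
        trace = llm(f"Re-execute the pseudo-code, fixing these errors:\n{verdict}")
    return trace
```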
2010.02716 | AI Lifecycle Models Need To Be Revised. An Exploratory Study in Fintech | cs.SE | Tech-leading organizations are embracing the forthcoming artificial
intelligence revolution. Intelligent systems are replacing and cooperating with
traditional software components. Thus, the same development processes and
standards used in software engineering ought to be applied to artificial
intelligence systems. This study aims to understand the processes by which
artificial intelligence-based systems are developed and how state-of-the-art
lifecycle models fit the current needs of the industry. We conducted an
exploratory case study at ING, a global bank with a strong European base. We
interviewed 17 people with different roles and from different departments
within the organization. We have found that the following stages have been
overlooked by previous lifecycle models: data collection, feasibility study,
documentation, model monitoring, and model risk assessment. Our work shows that
the real challenges of applying Machine Learning go much beyond sophisticated
learning algorithms - more focus is needed on the entire lifecycle. In
particular, regardless of the existing development tools for Machine Learning,
we observe that they are still not meeting the particularities of this field.
| 2020-10-03 19:25:01.000000000 |
1810.08289 | Sample-Free Learning of Input Grammars for Comprehensive Software
Fuzzing | cs.SE cs.PL | Generating valid test inputs for a program is much easier if one knows the
input language. We present first successes for a technique that, given a
program P without any input samples or models, learns an input grammar that
represents the syntactically valid inputs for P -- a grammar which can then be
used for highly effective test generation for P . To this end, we introduce a
test generator targeted at input parsers that systematically explores parsing
alternatives based on dynamic tracking of constraints; the resulting inputs go
into a grammar learner producing a grammar that can then be used for fuzzing.
In our evaluation on subjects such as JSON, URL, or Mathexpr, our PYGMALION
prototype took only a few minutes to infer grammars and generate thousands of
valid high-quality inputs.
| 2018-10-18 22:12:26.000000000 |
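Once a grammar has been learned, generating test inputs is straightforward random expansion of its rules; the toy arithmetic grammar below stands in for a mined grammar (the mining itself, via dynamic constraint tracking, is the hard part the paper addresses).

```python
import random

GRAMMAR = {
    "<expr>":   [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>":   [["<factor>", "*", "<term>"], ["<factor>"]],
    "<factor>": [["(", "<expr>", ")"], ["<digit>"]],
    "<digit>":  [[d] for d in "0123456789"],
}

def generate(symbol="<expr>", depth=0, max_depth=8):
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol, emit as-is
    options = GRAMMAR[symbol]
    # Bias toward the shortest expansion past max_depth so generation terminates.
    rule = min(options, key=len) if depth > max_depth else random.choice(options)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

print([generate() for _ in range(3)])  # e.g. ['(4+2)*7', '3', '8+1*5']
```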
2306.01404 | Reducing Large Adaptation Spaces in Self-Adaptive Systems Using Machine
Learning | cs.SE | Modern software systems often have to cope with uncertain operation
conditions, such as changing workloads or fluctuating interference in a
wireless network. To ensure that these systems meet their goals these
uncertainties have to be mitigated. One approach to realize this is
self-adaptation that equips a system with a feedback loop. The feedback loop
implements four core functions -- monitor, analyze, plan, and execute -- that
share knowledge in the form of runtime models. For systems with a large number
of adaptation options, i.e., large adaptation spaces, deciding which option to
select for adaptation may be time consuming or even infeasible within the
available time window to make an adaptation decision. This is particularly the
case when rigorous analysis techniques are used to select adaptation options,
such as formal verification at runtime, which is widely adopted. One technique
to deal with the analysis of a large number of adaptation options is reducing
the adaptation space using machine learning. State-of-the-art work has shown the
effectiveness of this technique; yet, a systematic solution that is able to
handle different types of goals is lacking. In this paper, we present ML2ASR+,
short for Machine Learning to Adaptation Space Reduction Plus. Central to
ML2ASR+ is a configurable machine learning pipeline that supports effective
analysis of large adaptation spaces for threshold, optimization, and setpoint
goals. We evaluate ML2ASR+ for two applications with different sizes of
adaptation spaces: an Internet-of-Things application and a service-based
system. The results demonstrate that ML2ASR+ can be applied to deal with
different types of goals and is able to reduce the adaptation space and hence
the time to make adaptation decisions by over 90%, with negligible effect on
the realization of the adaptation goals.
| 2023-06-02 09:49:33.000000000 |
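The core reduction step can be sketched as follows: a classifier trained on past verification outcomes predicts which adaptation options are likely to satisfy a threshold goal, and only those are passed to expensive runtime verification. The data and goal predicate are synthetic; ML2ASR+ wraps this in a fuller configurable pipeline covering optimization and setpoint goals as well.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Past cycles: adaptation-option features plus the verified outcome (1 = goal met).
X_hist = rng.random((500, 3))
y_hist = (X_hist @ [0.5, 1.0, -0.3] > 0.6).astype(int)

clf = LogisticRegression().fit(X_hist, y_hist)

options = rng.random((1000, 3))  # the current, large adaptation space
keep = clf.predict_proba(options)[:, 1] > 0.5
reduced = options[keep]
print(f"verify {len(reduced)} of {len(options)} options "
      f"({100 * (1 - len(reduced) / len(options)):.0f}% reduction)")
```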
2006.02155 | MLOS: An Infrastructure for Automated Software Performance Engineering | cs.DC cs.DB cs.LG cs.PF cs.SE | Developing modern systems software is a complex task that combines business
logic programming and Software Performance Engineering (SPE). The latter is an
experimental and labor-intensive activity focused on optimizing the system for
a given hardware, software, and workload (hw/sw/wl) context.
Today's SPE is performed during build/release phases by specialized teams,
and cursed by: 1) lack of standardized and automated tools, 2) significant
repeated work as hw/sw/wl context changes, 3) fragility induced by a
"one-size-fit-all" tuning (where improvements on one workload or component may
impact others). The net result: despite costly investments, system software is
often outside its optimal operating point - anecdotally leaving 30% to 40% of
performance on the table.
The recent developments in Data Science (DS) hint at an opportunity:
combining DS tooling and methodologies with a new developer experience to
transform the practice of SPE. In this paper we present: MLOS, an ML-powered
infrastructure and methodology to democratize and automate Software Performance
Engineering. MLOS enables continuous, instance-level, robust, and trackable
systems optimization. MLOS is being developed and employed within Microsoft to
optimize SQL Server performance. Early results indicated that component-level
optimizations can lead to 20%-90% improvements when custom-tuning for a
specific hw/sw/wl, hinting at a significant opportunity. However, several
research challenges remain that will require community involvement. To this
end, we are in the process of open-sourcing the MLOS core infrastructure, and
we are engaging with academic institutions to create an educational program
around Software 2.0 and MLOS ideas.
| 2020-06-01 22:38:30.000000000 |
1312.1040 | A Deployment Process for Strategic Measurement Systems | cs.SE | Explicitly linking software-related activities to an organisation's
higher-level goals has been shown to be critical for organizational success.
GQM+Strategies provides mechanisms for explicitly linking goals and strategies,
based on goal-oriented strategic measurement systems. Deploying such strategic
measurement systems in an organization is highly challenging. Experience has
shown that a clear deployment strategy is needed for achieving sustainable
success. In particular, an adequate deployment process as well as corresponding
tool support can facilitate the deployment. This paper introduces the
systematic GQM+Strategies deployment process and gives an overview of
GQM+Strategies modelling and associated tool support. Additionally, it provides
an overview of industrial applications and describes success factors and
benefits for the usage of GQM+Strategies.
| 2013-12-04 06:42:04.000000000 |
1809.00143 | Test Prioritization in Continuous Integration Environments | cs.SE | Two heuristics namely diversity-based (DBTP) and history-based test
prioritization (HBTP) have been separately proposed in the literature. Yet,
their combination has not been widely studied in continuous integration (CI)
environments. The objective of this study is to catch regression faults
earlier, allowing developers to integrate and verify their changes more
frequently and continuously. To achieve this, we investigated six open-source
projects, each of which included several builds over a large time period.
Findings indicate that previous failure knowledge seems to have strong
predictive power in CI environments and can be used to effectively prioritize
tests. HBTP does not necessarily need to have large data, and its effectiveness
improves to a certain degree with larger history interval. DBTP can be used
effectively during the early stages, when no historical data is available, and
also combined with HBTP to improve its effectiveness. Among the investigated
techniques, we found that history-based diversity using NCD Multiset is
superior in terms of effectiveness but comes with relatively higher overhead in
terms of method execution time. Test prioritization in CI environments can be
effectively performed with negligible investment using previous failure
knowledge, and its effectiveness can be further improved by considering
dissimilarities among the tests.
| 2018-09-01 09:38:57.000000000 |
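For concreteness, here is a compact sketch of diversity-based prioritization using the pairwise Normalized Compression Distance approximated with zlib, ordering tests greedily farthest-first; the paper's NCD Multiset variant generalizes this pairwise form, and the tests below are toy data.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, approximated with zlib."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def prioritize(tests):
    """Greedy farthest-first ordering of (name, body) test pairs."""
    remaining = dict(tests)
    name, body = remaining.popitem()  # arbitrary seed test
    order, chosen = [name], [body]
    while remaining:
        # Pick the test most distant from everything already chosen.
        name = max(remaining,
                   key=lambda n: min(ncd(remaining[n], b) for b in chosen))
        chosen.append(remaining.pop(name))
        order.append(name)
    return order

tests = [("t1", b"assert add(2,2)==4"), ("t2", b"assert add(2,3)==5"),
         ("t3", b"assert parse('<a>')")]
print(prioritize(tests))
```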
1306.1772 | Are Happy Developers more Productive? The Correlation of Affective
States of Software Developers and their self-assessed Productivity | cs.SE cs.HC | For decades now, it has been claimed that a way to improve software
developers' productivity is to focus on people. Indeed, while human factors
have been recognized in Software Engineering research, few empirical
investigations have attempted to verify the claim. Development tasks are
undertaken through cognitive processing abilities. Affective states - emotions,
moods, and feelings - have an impact on work-related behaviors, cognitive
processing activities, and the productivity of individuals. In this paper, we
report an empirical study on the impact of affective states on software
developers' performance while programming. Two affective states dimensions are
positively correlated with self-assessed productivity. We demonstrate the value
of applying psychometrics in Software Engineering studies and echo a call to
valorize the human, individualized aspects of software developers. We introduce
and validate a measurement instrument and a linear mixed-effects model to study
the correlation of affective states and the productivity of software
developers.
| 2013-06-07 16:51:39.000000000 |
2106.06652 | Lessons learned from hyper-parameter tuning for microservice candidate
identification | cs.SE | When optimizing software for the cloud, monolithic applications need to be
partitioned into many smaller *microservices*. While many tools have been
proposed for this task, we warn that the evaluation of those approaches has
been incomplete; e.g. minimal prior exploration of hyperparameter optimization.
Using a set of open source Java EE applications, we show here that (a) such
optimization can significantly improve microservice partitioning; and that (b)
an open issue for future work is how to find which optimizer works best for
different problems. To facilitate that future work, see
[https://github.com/yrahul3910/ase-tuned-mono2micro](https://github.com/yrahul3910/ase-tuned-mono2micro)
for a reproduction package for this research.
| 2021-06-12 00:51:25.000000000 |
1712.01718 | An LLVM Instrumentation Plug-in for Score-P | cs.SE cs.PF cs.PL | Reducing application runtime, scaling parallel applications to higher numbers
of processes/threads, and porting applications to new hardware architectures
are necessary tasks in the software development process. Therefore, developers
have to investigate and understand application runtime behavior. Tools such as
monitoring infrastructures that capture performance relevant data during
application execution assist in this task. The measured data forms the basis
for identifying bottlenecks and optimizing the code. Monitoring infrastructures
need mechanisms to record application activities in order to conduct
measurements. Automatic instrumentation of the source code is the preferred
method in most application scenarios. We introduce a plug-in for the LLVM
infrastructure that enables automatic source code instrumentation at
compile-time. In contrast to available instrumentation mechanisms in
LLVM/Clang, our plug-in can selectively include/exclude individual application
functions. This enables developers to fine-tune the measurement to the required
level of detail while avoiding large runtime overheads due to excessive
instrumentation.
| 2017-12-01 13:15:49.000000000 |
1807.03280 | Adversarial Symbolic Execution for Detecting Concurrency-Related Cache
Timing Leaks | cs.CR cs.DC cs.PL cs.SE | The timing characteristics of cache, a high-speed storage between the fast
CPU and the slow memory, may reveal sensitive information of a program, thus
allowing an adversary to conduct side-channel attacks. Existing methods for
detecting timing leaks either ignore the cache altogether or focus only on
passive leaks generated by the program itself, without considering leaks that
are made possible by concurrently running some other threads. In this work, we
show that timing-leak-freedom is not a compositional property: a program that
is not leaky when running alone may become leaky when interleaved with other
threads. Thus, we develop a new method, named adversarial symbolic execution,
to detect such leaks. It systematically explores both the feasible program
paths and their interleavings while modeling the cache, and leverages an SMT
solver to decide if there are timing leaks. We have implemented our method in
LLVM and evaluated it on a set of real-world ciphers with 14,455 lines of C
code in total. Our experiments demonstrate both the efficiency of our method
and its effectiveness in detecting side-channel leaks.
| 2018-07-09 17:32:09.000000000 |
2009.00999 | An Automatically Verified Prototype of the Tokeneer ID Station
Specification | cs.SE | The Tokeneer project was an initiative set forth by the National Security
Agency (NSA, USA) to demonstrate that highly secure systems can be developed
by applying rigorous methods in a cost-effective manner.
Altran Praxis (UK) was selected by NSA to carry out the development of the
Tokeneer ID Station. The company wrote a Z specification later implemented in
the SPARK Ada programming language, which was verified using the SPARK Examiner
toolset. In this paper, we show that the Z specification can be easily and
naturally encoded in the {log} set constraint language, thus generating a
functional prototype. Furthermore, we show that {log}'s automated proving
capabilities can discharge all the proof obligations concerning state
invariants as well as important security properties. As a consequence, the
prototype can be regarded as correct with respect to the verified properties.
This provides empirical evidence that Z users can use {log} to generate correct
prototypes from their Z specifications. In turn, these prototypes enable or
simplify some verification activities discussed in the paper.
| 2020-09-02 12:26:56.000000000 |
2206.06406 | Consent verification monitoring | cs.SE | Advances in service personalization are driven by low-cost data collection
and processing, in addition to the wide variety of third-party frameworks for
authentication, storage, and marketing. New privacy regulations, such as the
General Data Protection Regulation (GDPR) and the California Consumer Privacy
Act (CCPA), increasingly require organizations to explicitly state their data
practices in privacy policies. When data practices change, a new version of the
policy is released. This can occur a few times a year, when data collection or
processing requirements are rapidly changing. Consent evolution raises specific
challenges to ensuring GDPR compliance. We propose a formal consent framework
to support organizations, data users and data subjects in their understanding
of policy evolution under a consent regime that supports both the retroactive
and non-retroactive granting and withdrawal of consent. The contributions
include: (i) a formal framework to reason about data collection and access
under multiple consent granting and revocation scenarios; (ii) a scripting
language that implements the consent framework for encoding and executing
different scenarios; (iii) five consent evolution use cases that illustrate how
organizations would evolve their policies using this framework; and (iv) a
scalability evaluation of the reasoning framework. The framework models are
used to verify when user consent prevents or detects unauthorized data
collection and access. The framework can be integrated into a runtime
architecture to monitor policy violations as data practices evolve in
real-time. The framework was evaluated using the five use cases and a
simulation to measure the framework scalability. The simulation results show
that the approach is computationally scalable for use in runtime consent
monitoring under a standard model of data collection and access, and practice
and policy evolution.
| 2022-06-13 18:23:36.000000000 |
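As a rough illustration of the retroactive versus non-retroactive consent semantics described in the entry above, the following Python sketch models a consent event log and answers whether a given access is permitted; the classes and rules are hypothetical simplifications, not the paper's formal framework.

```python
# A minimal sketch of reasoning about retroactive vs. non-retroactive
# consent grants and withdrawals. All names and rule details here are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ConsentEvent:
    time: int          # when the event occurred
    granted: bool      # True = grant, False = withdrawal
    retroactive: bool  # does the event apply to already-collected data?

def access_allowed(events: list,
                   collected_at: int, accessed_at: int) -> bool:
    """Data collected at `collected_at` may be accessed at `accessed_at`
    only if a grant covered the collection and no applicable withdrawal
    revokes it by access time."""
    allowed = False
    for e in sorted(events, key=lambda e: e.time):
        if e.time > accessed_at:
            break
        if e.granted:
            # A non-retroactive grant only covers data collected after it.
            if e.retroactive or e.time <= collected_at:
                allowed = True
        else:
            # A non-retroactive withdrawal leaves later collections unaffected.
            if e.retroactive or e.time <= collected_at:
                allowed = False
    return allowed and collected_at <= accessed_at

if __name__ == "__main__":
    log = [ConsentEvent(0, True, False), ConsentEvent(5, False, True)]
    print(access_allowed(log, collected_at=2, accessed_at=7))  # False
```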
2305.10910 | Analysis of Library Dependency Networks of Package Managers Used in iOS
Development | cs.SE | Reusing existing solutions in the form of third-party libraries is common
practice when writing software. Package managers are used to manage
dependencies to third-party libraries by automating the process of installing
and updating the libraries. Library dependencies themselves can have
dependencies to other libraries creating a dependency network with several
levels of indirections. The library dependency network in the Swift ecosystem
encompasses libraries from CocoaPods, Carthage and Swift Package Manager (PM).
These package managers are used when developing, for example, iOS or Mac OS
applications in Swift and Objective-C. We provide the first analysis of the
library dependency network evolution in the Swift ecosystem. Although CocoaPods
is the package manager with the biggest set of libraries, the difference to
other package managers is not as big as expected. The youngest package manager
and official package manager for Swift, Swift PM, is becoming more and more
popular, resulting in a gradual slow-down of the growth of the other two
package managers. When analyzing direct and transitive dependencies, we found
that the mean total number of dependencies is lower in the Swift ecosystem
compared to many other ecosystems. Still, the total number of dependencies
shows a clear growing trend over the last five years.
| 2023-05-18 12:14:19.000000000 |
1404.6801 | Nothing is Certain but Doubt and Tests | cs.SE | Effective software safety standards will contribute to confidence, or
assurance, in the safety of the systems in which the software is used. It is
infeasible to demonstrate a correlation between standards and accidents, but
there is an alternative view that makes standards "testable". Software projects
are subject to uncertainty; good standards reduce uncertainty more than poor
ones. Similarly assurance or integrity levels in standards should define an
uncertainty gradient. The paper proposes an argument-based method of reasoning
about uncertainty that can be used as a basis for conducting experiments
(tests) to evaluate standards.
| 2014-04-27 18:10:44.000000000 |
2309.00329 | Mi-Go: Test Framework which uses YouTube as Data Source for Evaluating
Speech Recognition Models like OpenAI's Whisper | cs.SD cs.LG cs.SE eess.AS | This article introduces Mi-Go, a novel testing framework aimed at evaluating
the performance and adaptability of general-purpose speech recognition machine
learning models across diverse real-world scenarios. The framework leverages
YouTube as a rich and continuously updated data source, accounting for multiple
languages, accents, dialects, speaking styles, and audio quality levels. To
demonstrate the effectiveness of the framework, the Whisper model, developed by
OpenAI, was employed as a test object. The tests involve using a total of 124
YouTube videos to test all Whisper model versions. The results underscore the
utility of YouTube as a valuable testing platform for speech recognition
models, ensuring their robustness, accuracy, and adaptability to diverse
languages and acoustic conditions. Additionally, by contrasting the
machine-generated transcriptions against human-made subtitles, the Mi-Go
framework can help pinpoint potential misuse of YouTube subtitles, like Search
Engine Optimization.
| 2023-09-01 08:31:35.000000000 |
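The contrast between machine-generated transcriptions and human-made subtitles that Mi-Go relies on is typically quantified with Word Error Rate (WER); a self-contained Python sketch of that scoring step follows. The video retrieval and model execution parts of the framework are out of scope here, and the example strings are invented.

```python
# A minimal sketch of transcript-vs-subtitle scoring via Word Error
# Rate, computed as Levenshtein distance over word tokens.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

if __name__ == "__main__":
    subtitles = "testing speech recognition models at scale"
    transcript = "testing speech recognition model at scale"
    print(f"WER = {wer(subtitles, transcript):.2f}")  # 1 substitution / 6 words
```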
2103.09113 | EtherSolve: Computing an Accurate Control-Flow Graph from Ethereum
Bytecode | cs.SE cs.CR | Motivated by the immutable nature of Ethereum smart contracts and of their
transactions, quite many approaches have been proposed to detect defects and
security problems before smart contracts become persistent in the blockchain
and they are granted control on substantial financial value.
Because smart contracts' source code might not be available, static analysis
approaches mostly face the challenge of analysing compiled Ethereum bytecode,
which is available directly from the official blockchain. However, due to the
intrinsic complexity of Ethereum bytecode (especially in jump resolution),
static analysis encounters significant obstacles that reduce the accuracy of
existing automated tools.
This paper presents a novel static analysis algorithm based on the symbolic
execution of the Ethereum operand stack that allows us to resolve jumps in
Ethereum bytecode and to construct an accurate control-flow graph (CFG) of the
compiled smart contracts. EtherSolve is a prototype implementation of our
approach. Experimental results on a significant set of real-world Ethereum
smart contracts show that EtherSolve improves the accuracy of the extracted
CFGs with respect to the state-of-the-art approaches.
Many static analysis techniques are based on the CFG representation of the
code and would therefore benefit from the accurate extraction of the CFG. For
example, we implemented a simple extension of EtherSolve that detects
instances of the re-entrancy vulnerability.
| 2021-03-16 14:51:53.000000000 |
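To illustrate the jump-resolution idea at the heart of the approach above, here is a deliberately tiny Python sketch that symbolically executes an operand stack over a toy EVM fragment. Real bytecode requires all PUSH1..PUSH32 variants, JUMPI, DUP/SWAP, and per-path stacks, so this is a conceptual sketch rather than EtherSolve's actual algorithm.

```python
# A toy symbolic execution of the EVM operand stack that resolves
# straight-line PUSH1/JUMP targets into CFG edges.
PUSH1, JUMP, JUMPDEST, STOP = 0x60, 0x56, 0x5B, 0x00

def resolve_jumps(code: bytes) -> list:
    """Return CFG edges (jump_pc, target_pc) resolvable from pushes."""
    edges, stack, pc = [], [], 0
    while pc < len(code):
        op = code[pc]
        if op == PUSH1:                 # push a concrete 1-byte value
            stack.append(code[pc + 1])
            pc += 2
            continue
        if op == JUMP and stack:        # target comes from the stack top
            target = stack.pop()
            if target < len(code) and code[target] == JUMPDEST:
                edges.append((pc, target))
            pc = target                 # follow the resolved edge
            continue
        if op == STOP:
            break
        pc += 1
    return edges

if __name__ == "__main__":
    # PUSH1 0x04; JUMP; (dead byte); JUMPDEST; STOP
    print(resolve_jumps(bytes([PUSH1, 0x04, JUMP, 0xFE, JUMPDEST, STOP])))
```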
1608.02621 | The Machine that Builds Itself: How the Strengths of Lisp Family
Languages Facilitate Building Complex and Flexible Bioinformatic Models | q-bio.OT cs.SE | We address the need for expanding the presence of the Lisp family of
programming languages in bioinformatics and computational biology research.
Languages of this family, like Common Lisp, Scheme, or Clojure, facilitate the
creation of powerful and flexible software models that are required for complex
and rapidly evolving domains like biology. We will point out several important
key features that distinguish languages of the Lisp family from other
programming languages and we will explain how these features can aid
researchers in becoming more productive and creating better code. We will also
show how these features make these languages ideal tools for artificial
intelligence and machine learning applications. We will specifically stress the
advantages of domain-specific languages (DSL): languages which are specialized
to a particular area and thus not only facilitate easier research problem
formulation, but also aid in the establishment of standards and best
programming practices as applied to the specific research field at hand. DSLs
are particularly easy to build in Common Lisp, the most comprehensive Lisp
dialect, which is commonly referred to as the "programmable programming
language." We are convinced that Lisp grants programmers unprecedented power to
build increasingly sophisticated artificial intelligence systems that may
ultimately transform machine learning and AI research in bioinformatics and
computational biology.
| 2016-08-08 20:58:32.000000000 |
2203.04519 | Efficient Search of Live-Coding Screencasts from Online Videos | cs.SE cs.MM | Programming videos on the Internet are valuable resources for learning
programming skills. To find relevant videos, developers typically search online
video platforms (e.g., YouTube) with keywords on topics they wish to learn.
Developers often look for live-coding screencasts, in which the videos' authors
perform live coding. Yet, not all programming videos are live-coding
screencasts. In this work, we develop a tool named PSFinder to identify
live-coding screencasts. PSFinder leverages a classifier to identify whether a
video frame contains an IDE window. It uses a sampling strategy to pick a
number of frames from an input video, runs the classifier on these frames, and
then determines whether the video is a live-coding screencast based on the
frames classified as containing an IDE window. In our preliminary experiment, PSFinder
can effectively identify live-coding screencasts as it achieves an F1-score of
0.97.
| 2022-03-09 04:32:37.000000000 |
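The pipeline shape described in the entry above (sample frames, classify each for an IDE window, decide by threshold) can be sketched in a few lines of Python; the frame extraction and the trained classifier are stubbed here, and the sample count and threshold values are assumptions rather than the paper's settings.

```python
# A minimal sketch of a frame-sampling video classifier in the shape of
# PSFinder. `is_ide_frame` stands in for the paper's trained model.
from typing import Callable, Sequence

def is_live_coding(frames: Sequence,
                   is_ide_frame: Callable,
                   sample_count: int = 30,
                   threshold: float = 0.5) -> bool:
    """Classify a video from a uniform sample of its frames."""
    step = max(len(frames) // sample_count, 1)
    sampled = frames[::step][:sample_count]
    ide_hits = sum(1 for f in sampled if is_ide_frame(f))
    return ide_hits / max(len(sampled), 1) >= threshold

if __name__ == "__main__":
    # Fake frames: 1 marks an IDE-like frame, 0 anything else.
    video = [1, 1, 0, 1, 1, 1, 0, 1] * 40
    print(is_live_coding(video, is_ide_frame=lambda f: f == 1))  # True
```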
2403.09474 | An Industrial Experience Report about Challenges from Continuous
Monitoring, Improvement, and Deployment for Autonomous Driving Features | cs.SE cs.SY eess.SY | Using continuous development, deployment, and monitoring (CDDM) to understand
and improve applications in a customer's context is widely used for non-safety
applications such as smartphone apps or web applications to enable rapid and
innovative feature improvements. Having demonstrated its potential in such
domains, it may also have the potential to improve the software development for
automotive functions, as some OEMs have described at a high level in their
financial communiqués. However, the application of a CDDM strategy also faces
challenges from a process adherence and documentation perspective as required
by safety-related products such as autonomous driving systems (ADS) and guided
by industry standards such as ISO 26262 and ISO 21448. Existing publications on
CDDM in safety-relevant contexts either focus on safety-critical functions at a
rather generic level, and thus not specifically on ADS or automotive, or
concentrate only on software and hence miss the particular context of an
automotive OEM: well-established legacy processes and the need to adapt them,
and aspects originating from the role of being a system integrator for
software/software, hardware/hardware, and hardware/software. In this paper,
particular challenges from the automotive domain to better adopt CDDM are
identified and discussed to shed light on research gaps to enhance CDDM,
especially for the software development of safe ADS. The challenges are
identified from today's industrial well-established ways of working by
conducting interviews with domain experts and complemented by a literature
study.
| 2024-03-14 15:14:24.000000000 |
1708.09002 | Verification of Programs via Intermediate Interpretation | cs.PL cs.SE | We explore an approach to verification of programs via program transformation
applied to an interpreter of a programming language. A specialization technique
known as Turchin's supercompilation is used to specialize some interpreters
with respect to the program models. We show that several safety properties of
functional programs modeling a class of cache coherence protocols can be proved
by a supercompiler and compare the results with our earlier work on direct
verification via supercompilation not using intermediate interpretation.
Our approach was in part inspired by an earlier work by E. De Angelis et al.
(2014-2015) where verification via program transformation and intermediate
interpretation was studied in the context of specialization of constraint logic
programs.
| 2017-08-24 00:19:15.000000000 |
1811.03045 | Descartes: A PITest Engine to Detect Pseudo-Tested Methods - Tool
Demonstration | cs.SE | Descartes is a tool that implements extreme mutation operators and aims at
finding pseudo-tested methods in Java projects. It leverages the efficient
transformation and runtime features of PIT. The demonstration compares
Descartes with Gregor, the default mutation engine provided by PIT, in a set of
real open source projects. It considers the execution time, number of mutants
created and the relationship between the mutation scores produced by both
engines. It provides some insights on the main features exposed by Descartes.
| 2018-11-07 18:00:58.000000000 |
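For readers unfamiliar with extreme mutation, the following Python sketch conveys the idea behind the tool above (Descartes itself mutates Java bytecode through PIT): replace an entire method body with a constant return and call the method pseudo-tested if the test suite still passes. All helper names and the constant list are illustrative assumptions.

```python
# A minimal sketch of extreme mutation, transplanted to Python.
from typing import Callable

def extreme_mutant(return_value):
    """Build a mutant that ignores the original body entirely."""
    def mutant(*args, **kwargs):
        return return_value
    return mutant

def pseudo_tested(obj, method: str,
                  run_tests: Callable,
                  constants=(None, 0, True, False)) -> bool:
    """A method is pseudo-tested if every extreme mutant survives."""
    original = getattr(obj, method)
    try:
        for c in constants:
            setattr(obj, method, extreme_mutant(c))
            if not run_tests():      # some test noticed -> mutant killed
                return False
        return True                  # no mutant killed: pseudo-tested
    finally:
        setattr(obj, method, original)

class Calc:
    def add(self, a, b):
        return a + b

c = Calc()
# A suite that actually asserts on the result kills every mutant:
print(pseudo_tested(c, "add", run_tests=lambda: c.add(2, 2) == 4))  # False
```

In practice the constants worth trying depend on the method's return type, which Descartes derives from the bytecode signature.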
2301.03863 | Robust web element identification for evolving applications by
considering visual overlaps | cs.SE | Fragile (i.e., non-robust) test execution is a common challenge for automated
GUI-based testing of web applications as they evolve. Despite recent progress,
there is still room for improvement since test execution failures caused by
technical limitations result in unnecessary maintenance costs that limit its
effectiveness and efficiency. One of the most reported technical challenges for
web-based tests concerns how to reliably locate a web element used by a test
script. This paper proposes the novel concept of Visually Overlapping Nodes
(VON) that reduces fragility by utilizing the phenomenon that visual web
elements (observed by the user) are constructed from multiple web elements in
the Document Object Model (DOM) that overlap visually. We demonstrate the
approach in a tool, VON Similo, which extends the state-of-the-art
multi-locator approach (Similo) that is also used as the baseline for an
experiment. In the experiment, a ground truth set of 1163 manually collected
web element pairs, from different releases of the 40 most popular websites on
the internet, are used to compare the approaches' precision, recall, and
accuracy. Our results show that VON Similo provides 94.7% accuracy in
identifying a web element in a new release of the same SUT. In comparison,
Similo provides 83.8% accuracy. These results demonstrate the applicability of
the visually overlapping nodes concept/tool for web element localization in
evolving web applications and contribute a novel way of thinking about web
element localization in future research on GUI-based testing.
| 2023-01-10 09:22:13.000000000 |
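A minimal Python sketch of the visually-overlapping-nodes idea from the entry above follows: DOM nodes whose rendered bounding boxes almost entirely overlap a candidate element are treated as alternative anchors for locating it. The rectangle representation and the 0.9 overlap threshold are assumptions, not values from the paper.

```python
# A minimal sketch of grouping DOM nodes by visual overlap.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def overlap_ratio(a: Rect, b: Rect) -> float:
    """Intersection area divided by the smaller rectangle's area."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    smaller = min(a.w * a.h, b.w * b.h)
    return (ix * iy) / smaller if smaller > 0 else 0.0

def visually_overlapping(target: Rect, nodes: dict,
                         threshold: float = 0.9) -> list:
    """Return node ids whose boxes overlap the target almost entirely."""
    return [nid for nid, r in nodes.items()
            if overlap_ratio(target, r) >= threshold]

if __name__ == "__main__":
    button = Rect(10, 10, 100, 30)
    dom = {"span#label": Rect(12, 14, 80, 20), "div#ad": Rect(500, 0, 50, 50)}
    print(visually_overlapping(button, dom))  # ['span#label']
```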
1806.04055 | The History of Software Architecture - In the Eye of the Practitioner | cs.SE | Software architecture (SA) is celebrating 25 years. This is so if we consider
the seminal papers establishing SA as a distinct discipline and scientific
publications that have identified cornerstones of both research and practice,
like architecture views, architecture description languages, and architecture
evaluation. With the pervasive use of cloud provisioning, the dynamic
integration of multi-party distributed services, and the steep increase in the
digitalization of business and society, making sound design decisions
encompasses an increasingly-large and complex problem space. The role of SA is
essential as never before, so much so that no organization undertakes `serious'
projects without the support of suitable architecture practices. But how did
SA practice evolve in the past 25 years, and what are the challenges ahead?
There have been various attempts to summarize the state of research and
practice of SA. Still, we miss the practitioners' view on the questions above.
To fill this gap, we have first extracted the top-10 topics resulting from the
analysis of 5,622 scientific papers. Then, we have used such topics to design
an online survey filled out by 57 SA practitioners with 5 to 20+ years of
experience. We present the results of the survey with a special focus on the SA
topics that SA practitioners perceive, in the past, present and future, as the
most impactful. We finally use the results to draw preliminary takeaways.
| 2018-06-11 15:25:14.000000000 |
2201.06235 | Characterizing Sensor Leaks in Android Apps | cs.CR cs.SE | While extremely valuable to achieve advanced functions, mobile phone sensors
can be abused by attackers to implement malicious activities in Android apps,
as experimentally demonstrated by many state-of-the-art studies. There is hence
a strong need to regulate the usage of mobile sensors so as to keep them from
being exploited by malicious attackers. However, despite the fact that various
efforts have been put in achieving this, i.e., detecting privacy leaks in
Android apps, we have not yet found approaches to automatically detect sensor
leaks in Android apps. To fill the gap, we designed and implemented a novel
prototype tool, SEEKER, that extends the famous FlowDroid tool to detect
sensor-based data leaks in Android apps. SEEKER conducts sensor-focused static
taint analyses directly on the Android apps' bytecode and reports not only
sensor-triggered privacy leaks but also the sensor types involved in the leaks.
Experimental results using over 40,000 real-world Android apps show that SEEKER
is effective in detecting sensor leaks in Android apps, and malicious apps are
more interested in leaking sensor data than benign apps.
| 2022-01-17 06:40:21.000000000 |
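The sensor-focused taint analysis in the entry above can be pictured with a toy forward-propagation sketch in Python; SEEKER itself extends FlowDroid over Android bytecode, so the source/sink names and the three-address statement shape below are purely illustrative.

```python
# A toy taint propagation from sensor sources to network sinks.
SOURCES = {"getAccelerometer": "ACCELEROMETER", "getGps": "GPS"}  # assumed names
SINKS = {"sendToServer", "writeLog"}                              # assumed names

def find_sensor_leaks(statements: list) -> list:
    """Each statement is (lhs, callee, args); '' as lhs means no result."""
    taint = {}            # variable -> sensor type
    leaks = []
    for lhs, callee, args in statements:
        if callee in SOURCES and lhs:
            taint[lhs] = SOURCES[callee]           # introduce taint
        elif callee in SINKS:
            for a in args:
                if a in taint:                     # tainted data escapes
                    leaks.append(f"{taint[a]} leaked via {callee}")
        elif lhs:                                  # plain call copies taint
            for a in args:
                if a in taint:
                    taint[lhs] = taint[a]
    return leaks

if __name__ == "__main__":
    prog = [("x", "getGps", []), ("y", "copy", ["x"]),
            ("", "sendToServer", ["y"])]
    print(find_sensor_leaks(prog))  # ['GPS leaked via sendToServer']
```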
1909.06251 | V2: Fast Detection of Configuration Drift in Python | cs.SE | Code snippets are prevalent, but are hard to reuse because they often lack an
accompanying environment configuration. Most are not actively maintained,
allowing for drift between the most recent possible configuration and the code
snippet as the snippet becomes out-of-date over time. Recent work has
identified the problem of validating and detecting out-of-date code snippets as
the most important consideration for code reuse. However, determining if a
snippet is correct, but simply out-of-date, is a non-trivial task. In the best
case, breaking changes are well documented, allowing developers to manually
determine when a code snippet contains an out-of-date API usage. In the worst
case, determining if and when a breaking change was made requires an exhaustive
search through previous dependency versions.
We present V2, a strategy for determining if a code snippet is out-of-date by
detecting discrete instances of configuration drift, where the snippet uses an
API which has since undergone a breaking change. Each instance of configuration
drift is classified by a failure encountered during validation and a
configuration patch, consisting of dependency version changes, which fixes the
underlying fault. V2 uses feedback-directed search to explore the possible
configuration space for a code snippet, reducing the number of potential
environment configurations that need to be validated. When run on a corpus of
public Python snippets from prior research, V2 identifies 248 instances of
configuration drift.
| 2019-09-13 14:25:06.000000000 |
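A minimal Python sketch of feedback-directed configuration search in the spirit of V2 is shown below: the error produced by a failed run steers which dependency version to change next, pruning the version grid instead of exhaustively validating it. The runner, the blame heuristic, and the pandas example are assumptions made for illustration.

```python
# A minimal sketch of feedback-directed search over dependency versions.
from typing import Callable, Optional

def search_config(deps: dict,
                  run: Callable,
                  blamed: Callable) -> Optional[dict]:
    """deps maps package -> versions, newest first. `run` returns None on
    success or an error message; `blamed(error)` names the package to move."""
    index = {pkg: 0 for pkg in deps}              # start at newest versions
    while True:
        config = {pkg: deps[pkg][i] for pkg, i in index.items()}
        error = run(config)
        if error is None:
            return config                          # snippet validated
        pkg = blamed(error)                        # feedback picks a package
        if pkg not in deps or index[pkg] + 1 >= len(deps[pkg]):
            return None                            # search space exhausted
        index[pkg] += 1                            # try the next-older version

if __name__ == "__main__":
    deps = {"pandas": ["2.2", "1.5", "0.25"]}
    # Pretend the snippet needs a pre-2.0 pandas API.
    run = lambda c: None if c["pandas"] < "2" else "AttributeError: pandas"
    print(search_config(deps, run, blamed=lambda e: e.split(": ")[1]))
```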
2311.08424 | Exploring Multi-Programming-Language Commits and Their Impacts on
Software Quality: An Empirical Study on Apache Projects | cs.SE | Context: Modern software systems (e.g., Apache Spark) are usually written in
multiple programming languages (PLs). There is little understanding on the
phenomenon of multi-programming-language commits (MPLCs), which involve
modified source files written in multiple PLs. Objective: This work aims to
explore MPLCs and their impacts on development difficulty and software quality.
Methods: We performed an empirical study on eighteen non-trivial Apache
projects with 197,566 commits. Results: (1) the most commonly used PL
combination consists of all the four PLs, i.e., C/C++, Java, JavaScript, and
Python; (2) 9% of the commits from all the projects are MPLCs, and the
proportion of MPLCs in 83% of the projects converges to a relatively stable level;
(3) more than 90% of the MPLCs from all the projects involve source files in
two PLs; (4) the change complexity of MPLCs is significantly higher than that
of non-MPLCs; (5) issues fixed in MPLCs take significantly longer to be
resolved than issues fixed in non-MPLCs in 89% of the projects; (6) MPLCs do
not show significant effects on issue reopen; (7) source files undergoing MPLCs
tend to be more bug-prone; and (8) MPLCs introduce more bugs than non-MPLCs.
Conclusions: MPLCs are related to increased development difficulty and
decreased software quality.
| 2023-11-12 09:55:10.000000000 |
2110.04951 | Bug Prediction Using Source Code Embedding Based on Doc2Vec | cs.SE | Bug prediction is a resource demanding task that is hard to automate using
static source code analysis. In many fields of computer science, machine
learning has proven to be extremely useful in tasks like this, however, for it
to work we need a way to use source code as input. We propose a simple, but
meaningful representation for source code based on its abstract syntax tree and
the Doc2Vec embedding algorithm. This representation maps the source code to a
fixed-length vector which can be used for various downstream tasks -- one of
which is bug prediction. We measured this approach's validity by itself and its
effectiveness compared to bug prediction based solely on code metrics. We also
experimented with numerous machine learning approaches to check the interaction
of different embedding parameters with different machine learning models.
Our results show that this representation provides meaningful information as it
improves the bug prediction accuracy in most cases, and is always at least as
good as only using code metrics as features.
| 2021-10-11 01:07:42.000000000 |
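The representation described in the entry above is straightforward to prototype; the sketch below serializes an abstract syntax tree into a token sequence and embeds it with gensim's Doc2Vec. Python's `ast` module stands in for the paper's parser, and the hyperparameters are illustrative rather than the paper's.

```python
# A minimal sketch: AST -> token sequence -> fixed-length Doc2Vec vector.
import ast
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def ast_tokens(source: str) -> list:
    """Flatten the AST into a sequence of node-type names."""
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

corpus = {
    "a.py": "def add(a, b):\n    return a + b\n",
    "b.py": "def sub(a, b):\n    return a - b\n",
}
docs = [TaggedDocument(words=ast_tokens(src), tags=[name])
        for name, src in corpus.items()]

model = Doc2Vec(docs, vector_size=32, window=5, min_count=1, epochs=40)

# Fixed-length vectors usable as features for a bug-prediction model.
vec = model.infer_vector(ast_tokens("def mul(a, b):\n    return a * b\n"))
print(len(vec))  # 32
```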
2308.10759 | EALink: An Efficient and Accurate Pre-trained Framework for Issue-Commit
Link Recovery | cs.SE | Issue-commit links, as a type of software traceability links, play a vital
role in various software development and maintenance tasks. However, they are
typically deficient, as developers often forget or fail to create tags when
making commits. Existing studies have deployed deep learning techniques,
including pretrained models, to improve automatic issue-commit link
recovery. Despite their promising performance, we argue that previous approaches
have four main problems, hindering them from recovering links in large software
projects. To overcome these problems, we propose an efficient and accurate
pre-trained framework called EALink for issue-commit link recovery. EALink
requires much fewer model parameters than existing pre-trained methods,
bringing efficient training and recovery. Moreover, we design various
techniques to improve the recovery accuracy of EALink. We construct a
large-scale dataset and conduct extensive experiments to demonstrate the power
of EALink. Results show that EALink outperforms the state-of-the-art methods by
a large margin (15.23%-408.65%) on various evaluation metrics. Meanwhile, its
training and inference overhead is orders of magnitude lower than existing
methods.
| 2023-08-21 14:46:43.000000000 |
2104.02513 | Logging Practices with Mobile Analytics: An Empirical Study on Firebase | cs.SE | Software logs are of great value in both industrial and open-source projects.
Mobile analytics logging enables developers to collect logs remotely from their
apps running on end user devices at the cost of recording and transmitting logs
across the Internet to a centralised infrastructure.
This paper makes a first step in characterising logging practices with a
widely adopted mobile analytics logging library, namely Firebase Analytics. We
provide an empirical evaluation of the use of Firebase Analytics in 57
open-source Android applications by studying the evolution of code-bases to
understand: a) the needs-in-common that push practitioners to adopt logging
practices on mobile devices, and b) the differences in the ways developers use
local and remote logging.
Our results indicate mobile analytics logs are less pervasive and less
maintained than traditional logging code. Based on our analysis, we believe
logging using mobile analytics is more user-centered than traditional
logging, where the latter is mainly used to record information for debugging
purposes.
| 2021-04-06 13:47:58.000000000 |
1711.07451 | AndroVault: Constructing Knowledge Graph from Millions of Android Apps
for Automated Analysis | cs.SE cs.CR | Data-driven research on Android has gained great momentum in recent years. The
abundance of data facilitates knowledge learning; however, it also increases the
difficulty of data preprocessing. Therefore, it is non-trivial to prepare a
demanding and accurate set of data for research. In this work, we put forward
AndroVault, a framework for the Android research composing of data collection,
knowledge representation and knowledge extraction. It has started with a
long-running web crawler for data collection (both apps and description) since
2013, which guarantees the timeliness of data; With static analysis and dynamic
analysis of the collected data, we compute a variety of attributes to
characterize Android apps. After that, we employ a knowledge graph to connect
all these apps by computing their correlation in terms of attributes; Last, we
leverage multiple technologies such as logical inference, machine learning, and
correlation analysis to extract facts (more accurate and targeted data, whether
high-level or not) that are beneficial for a specific research problem. With
the produced data of high quality, we have successfully conducted many research
works including malware detection, code generation, and Android testing. We
would like to release our data to the research community in an authenticated
manner, and encourage them to conduct productive research.
| 2017-11-20 18:26:36.000000000 |
1805.05518 | Formal Modelling of Ontologies : An Event-B based Approach Using the
Rodin Platform | cs.SE cs.AI cs.LO | This paper reports on the results of the French ANR IMPEX research project
dealing with making explicit domain knowledge in design models. Ontologies are
formalised as theories with sets, axioms, theorems and reasoning rules. They
are integrated to design models through an annotation mechanism. Event-B has
been chosen as the ground formal modelling technique for all our developments.
In this paper, we particularly describe how ontologies are formalised as
Event-B theories.
| 2018-05-15 01:20:18.000000000 |
2211.15207 | Multiple Query Satisfiability of Constrained Horn Clauses | cs.LO cs.PL cs.SE | We address the problem of checking the satisfiability of a set of constrained
Horn clauses (CHCs) possibly including more than one query. We propose a
transformation technique that takes as input a set of CHCs, including a set of
queries, and returns as output a new set of CHCs, such that the transformed
CHCs are satisfiable if and only if so are the original ones, and the
transformed CHCs incorporate in each new query suitable information coming from
the other ones so that the CHC satisfiability algorithm is able to exploit the
relationships among all queries. We show that our proposed technique is
effective on a non-trivial benchmark of sets of CHCs that encode many
verification problems for programs manipulating algebraic data types such as
lists and trees.
| 2022-11-28 10:30:04.000000000 |
2401.05673 | Analyzing and Debugging Normative Requirements via Satisfiability
Checking | cs.SE | As software systems increasingly interact with humans in application domains
such as transportation and healthcare, they raise concerns related to the
social, legal, ethical, empathetic, and cultural (SLEEC) norms and values of
their stakeholders. Normative non-functional requirements (N-NFRs) are used to
capture these concerns by setting SLEEC-relevant boundaries for system
behavior. Since N-NFRs need to be specified by multiple stakeholders with
widely different, non-technical expertise (ethicists, lawyers, regulators, end
users, etc.), N-NFR elicitation is very challenging. To address this challenge,
we introduce N-Check, a novel tool-supported formal approach to N-NFR analysis
and debugging. N-Check employs satisfiability checking to identify a broad
spectrum of N-NFR well-formedness issues (WFI), such as conflicts, redundancy,
restrictiveness, insufficiency, yielding diagnostics which pinpoint their
causes in a user-friendly way that enables non-technical stakeholders to
understand and fix them. We show the effectiveness and usability of our
approach through nine case studies in which teams of ethicists, lawyers,
philosophers, psychologists, safety analysts, and engineers used N-Check to
analyse and debug 233 N-NFRs comprising 62 issues for the software underpinning
the operation of systems ranging from assistive-care robots and tree-disease
detection drones to manufacturing collaborative robots.
| 2024-01-11 05:32:31.000000000 |
2012.06822 | Digital Twins Are Not Monozygotic -- Cross-Replicating ADAS Testing in
Two Industry-Grade Automotive Simulators | cs.SE cs.AI | The increasing levels of software- and data-intensive driving automation call
for an evolution of automotive software testing. As a recommended practice of
the Verification and Validation (V&V) process of ISO/PAS 21448, a candidate
standard for safety of the intended functionality for road vehicles,
simulation-based testing has the potential to reduce both risks and costs.
There is a growing body of research on devising test automation techniques
using simulators for Advanced Driver-Assistance Systems (ADAS). However, how
similar are the results if the same test scenarios are executed in different
simulators? We conduct a replication study of applying a Search-Based Software
Testing (SBST) solution to a real-world ADAS (PeVi, a pedestrian vision
detection system) using two different commercial simulators, namely,
TASS/Siemens PreScan and ESI Pro-SiVIC. Based on a minimalistic scene, we
compare critical test scenarios generated using our SBST solution in these two
simulators. We show that SBST can be used to effectively and efficiently
generate critical test scenarios in both simulators, and the test results
obtained from the two simulators can reveal several weaknesses of the ADAS
under test. However, executing the same test scenarios in the two simulators
leads to notable differences in the details of the test outputs, in particular,
related to (1) safety violations revealed by tests, and (2) dynamics of cars
and pedestrians. Based on our findings, we recommend future V&V plans to
include multiple simulators to support robust simulation-based testing and to
base test objectives on measures that are less dependent on the internals of
the simulators.
| 2020-12-12 14:00:33.000000000 |
1508.00618 | ViSpec: A graphical tool for elicitation of MTL requirements | cs.SE | One of the main barriers preventing widespread use of formal methods is the
elicitation of formal specifications. Formal specifications facilitate the
testing and verification process for safety critical robotic systems. However,
handling the intricacies of formal languages is difficult and requires a high
level of expertise in formal logics that many system developers do not have. In
this work, we present a graphical tool designed for the development and
visualization of formal specifications by people who do not have training in
formal logic. The tool enables users to develop specifications using a
graphical formalism which is then automatically translated to Metric Temporal
Logic (MTL). In order to evaluate the effectiveness of our tool, we have also
designed and conducted a usability study with cohorts from the academic student
community and industry. Our results indicate that both groups were able to
define formal requirements with high levels of accuracy. Finally, we present
applications of our tool for defining specifications for operation of robotic
surgery and autonomous quadcopter safe operation.
| 2015-08-03 23:25:34.000000000 |
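For readers unfamiliar with MTL, an illustrative requirement of the kind such a tool elicits (invented here, not taken from the paper) is a timed recovery property for safe quadcopter operation:

```latex
% "Throughout the first 60 s of the mission, whenever altitude drops
% below 0.5 m, it must recover to at least 0.5 m within 2 s."
\[
  \Box_{[0,60]}\bigl( \mathit{alt} < 0.5 \;\rightarrow\;
                      \Diamond_{[0,2]}\, \mathit{alt} \geq 0.5 \bigr)
\]
```

The graphical formalism lets users assemble exactly such always/eventually templates with bounded intervals without writing the logic by hand.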
0904.4709 | Software Model Checking via Large-Block Encoding | cs.SE cs.PL | The construction and analysis of an abstract reachability tree (ART) are the
basis for a successful method for software verification. The ART represents
unwindings of the control-flow graph of the program. Traditionally, a
transition of the ART represents a single block of the program, and therefore,
we call this approach single-block encoding (SBE). SBE may result in a huge
number of program paths to be explored, which constitutes a fundamental source
of inefficiency. We propose a generalization of the approach, in which
transitions of the ART represent larger portions of the program; we call this
approach large-block encoding (LBE). LBE may reduce the number of paths to be
explored up to exponentially. Within this framework, we also investigate
symbolic representations: for representing abstract states, in addition to
conjunctions as used in SBE, we investigate the use of arbitrary Boolean
formulas; for computing abstract-successor states, in addition to Cartesian
predicate abstraction as used in SBE, we investigate the use of Boolean
predicate abstraction. The new encoding leverages the efficiency of
state-of-the-art SMT solvers, which can symbolically compute abstract
large-block successors. Our experiments on benchmark C programs show that the
large-block encoding outperforms the single-block encoding.
| 2009-04-29 21:53:56.000000000 |
1808.01100 | Code Shrew: Software platform for teaching programming through drawings
and animations | cs.CY cs.SE | In this paper, we present Code Shrew, a new software platform accompanied by
an interactive programming course. Its aim is to teach the fundamentals of
computer programming by enabling users to create their own drawings and
animations. The programming language has a straightforward syntax based on
Python, with additions that enable easy drawing and animating using
object-oriented code. The editor reacts seamlessly and instantly, providing an
engaging and interactive environment for experimenting and testing ideas. The
programming course consists of lessons that cover essential programming
principles, as well as challenges to test users' skills as they progress
through the course. Both the lessons and challenges take advantage of the
editor's instant feedback, allowing for a focus on learning-by-doing. We
describe the software and the content, the motivation behind them, and their
connection to constructionism.
| 2018-08-03 07:27:26.000000000 |
2104.13992 | Challenges of Adopting SAFe in the Banking Industry -- A Study Two Years
after its Introduction | cs.SE | The Scaled Agile Framework (SAFe) is a framework for scaling agile methods in
large organizations. We have found several experience reports and white papers
describing SAFe adoptions in different banks, which indicates that SAFe is
being used in the banking industry. However, there is a lack of academic
publications on the topic, the banking industry is missing in the scientific
reports analyzing SAFe transformations. To fill this gap, we present a study on
the main challenges with a SAFe transformation at a large full-service bank. We
identify the challenges in the bank under study and compare the findings with
experience reports from other banks, as well as with research on SAFe
transformations in other domains. Many of the challenges reported in this paper
overlap with the generic SAFe challenges, including management and
organization, education and training, culture and mindset, requirements
engineering, quality assurance, and systems architecture. However, we also
report some novel challenges specific to the banking domain, e.g., the risk of
jeopardizing customer relations, stability, and trust of external stakeholders.
This study validates several SAFe-related challenges reported in previous work
in the banking context. It also brings up some novel challenges specific to the
banking industry. Therefore, we believe our results are particularly useful to
practitioners responsible for SAFe transformations at other banks.
| 2021-04-28 19:44:53.000000000 |
2403.13220 | Elevating Software Quality in Agile Environments: The Role of Testing
Professionals in Unit Testing | cs.SE | Testing is an essential quality activity in the software development process.
Usually, a software system is tested on several levels, starting with unit
testing that checks the smallest parts of the code until acceptance testing,
which is focused on the validations with the end-user. Historically, unit
testing has been the domain of developers, who are responsible for ensuring the
accuracy of their code. However, in agile environments, testing professionals
play an integral role in various quality improvement initiatives throughout
each development cycle. This paper explores the participation of test engineers
in unit testing within an industrial context, employing a survey-based research
methodology. Our findings demonstrate that testing professionals have the
potential to strengthen unit testing by collaborating with developers to craft
thorough test cases and fostering a culture of mutual learning and cooperation,
ultimately contributing to increasing the overall quality of software projects.
| 2024-03-20 00:41:49.000000000 |
0909.2103 | MESURE Tool to benchmark Java Card platforms | cs.SE | The advent of the Java Card standard has been a major turning point in smart
card technology. With the growing acceptance of this standard, understanding
the performance behavior of these platforms is becoming crucial. To meet this
need, we present in this paper a novel benchmarking framework to test and
evaluate the performance of Java Card platforms. The MESURE tool is the first
framework whose accuracy and effectiveness are independent of the particular
Java Card platform tested and the CAD used.
| 2009-09-11 07:37:04.000000000 |
1805.04825 | Deep Learning in Software Engineering | cs.SE | In recent years, deep learning has become increasingly prevalent in the field of
Software Engineering (SE). However, many open issues remain to be
investigated. How do researchers integrate deep learning into SE problems?
Which SE phases are facilitated by deep learning? Do practitioners benefit from
deep learning? The answers help practitioners and researchers develop practical
deep learning models for SE tasks. To answer these questions, we conduct a
bibliography analysis on 98 research papers in SE that use deep learning
techniques. We find that 41 SE tasks in all SE phases have been facilitated by
deep-learning-integrated solutions. Among these, 84.7% of the papers use only
standard deep learning models and their variants to solve SE problems.
Practicability remains a concern in utilizing deep learning techniques. How to
improve the effectiveness, efficiency, understandability, and testability of
deep learning based solutions may attract more SE researchers in the future.
| 2018-05-13 06:01:39.000000000 |
2003.05155 | Towards CRISP-ML(Q): A Machine Learning Process Model with Quality
Assurance Methodology | cs.LG cs.SE stat.ML | Machine learning is an established and frequently used technique in industry
and academia but a standard process model to improve success and efficiency of
machine learning applications is still missing. Project organizations and
machine learning practitioners have a need for guidance throughout the life
cycle of a machine learning application to meet business expectations. We
therefore propose a process model for the development of machine learning
applications, that covers six phases from defining the scope to maintaining the
deployed machine learning application. The first phase combines business and
data understanding as data availability oftentimes affects the feasibility of
the project. The sixth phase covers state-of-the-art approaches for monitoring
and maintenance of machine learning applications, as the risk of model
degradation in a changing environment is imminent. With each task of the
process, we propose a quality assurance methodology that is suitable to address
challenges in machine learning development that we identify in the form of
risks. The methodology is drawn from practical experience and scientific
literature and has proven to be general and stable. The process model expands
on CRISP-DM, a data mining process model that enjoys strong industry support
but fails to address machine-learning-specific tasks. Our work proposes an
industry- and application-neutral process model tailored for machine learning
applications, with a focus on technical tasks for quality assurance.
| 2020-03-11 08:25:49.000000000 |
2203.08877 | Code Smells in Elixir: Early Results from a Grey Literature Review | cs.SE | Elixir is a new functional programming language whose popularity is rising in
the industry. However, there are few works in the literature focused on
studying the internal quality of systems implemented in this language.
Particularly, to the best of our knowledge, there is currently no catalog of
code smells for Elixir. Therefore, in this paper, through a grey literature
review, we investigate whether Elixir developers discuss code smells. Our
preliminary results indicate that 11 of the 22 traditional code smells
cataloged by Fowler and Beck are discussed by Elixir developers. We also
propose a list of 18 new smells specific for Elixir systems and investigate
whether these smells are currently identified by Credo, a well-known static
code analysis tool for Elixir. We conclude that only two traditional code
smells and one Elixir-specific code smell are automatically detected by this
tool. Thus, these early results represent an opportunity for extending tools
such as Credo to detect code smells and then contribute to improving the
internal quality of Elixir systems.
| 2022-03-16 18:44:52.000000000 |
2311.07495 | The Last Decade in Review: Tracing the Evolution of Safety Assurance
Cases through a Comprehensive Bibliometric Analysis | cs.SE | Safety assurance is of paramount importance across various domains, including
automotive, aerospace, and nuclear energy, where the reliability and
acceptability of mission-critical systems are imperative. This assurance is
effectively realized through the utilization of Safety Assurance Cases. The use
of safety assurance cases allows for verifying the correctness of the created
system's capabilities, preventing system failure. The latter may result in loss
of life, severe injuries, large-scale environmental damage, property
destruction, and major economic loss. Still, the emergence of complex
technologies such as cyber-physical systems (CPSs), characterized by their
heterogeneity, autonomy, machine learning capabilities, and the uncertainty of
their operational environments poses significant challenges for safety
assurance activities. Several papers have tried to propose solutions to tackle
these challenges, but to the best of our knowledge, no secondary study
investigates the trends, patterns, and relationships characterizing the safety
case scientific literature. This makes it difficult to have a holistic view of
the safety case landscape and to identify the most promising future research
directions. In this paper, we, therefore, rely on state-of-the-art bibliometric
tools (e.g., VOSviewer) to conduct a bibliometric analysis that allows us to
generate valuable insights, identify key authors and venues, and gain a
bird's-eye view of the current state of research in the safety assurance area. By
revealing knowledge gaps and highlighting potential avenues for future
research, our analysis provides an essential foundation for researchers,
corporate safety analysts, and regulators seeking to embrace or enhance safety
practices that align with their specific needs and objectives.
| 2023-11-13 17:34:23.000000000 |
1701.05650 | Demand-Driven Pointer Analysis with Strong Updates via Value-Flow
Refinement | cs.PL cs.SE | We present a new demand-driven flow- and context-sensitive pointer analysis
with strong updates for C programs, called SUPA, that enables computing
points-to information via value-flow refinement, in environments with small
time and memory budgets such as IDEs. We formulate SUPA by solving a graph
reachability problem on an inter-procedural value-flow graph representing a
program's def-use chains, which are pre-computed efficiently but
over-approximately. To answer a client query (a request for a variable's
points-to set), SUPA reasons about the flow of values along the pre-computed
def-use chains sparsely (rather than across all program points), by performing
only the work necessary for the query (rather than analyzing the whole
program). In particular, strong updates are performed to filter out spurious
def-use chains through value-flow refinement as long as the total budget is not
exhausted. SUPA facilitates efficiency and precision tradeoffs by applying
different pointer analyses in a hybrid multi-stage analysis framework.
We have implemented SUPA in LLVM (3.5.0) and evaluate it by choosing
uninitialized pointer detection as a major client on 18 open-source C programs.
As the analysis budget increases, SUPA achieves improved precision, with its
single-stage flow-sensitive analysis reaching 97.4% of that achieved by
whole-program flow-sensitive analysis by consuming about 0.18 seconds and 65KB
of memory per query, on average (with a budget of at most 10000 value-flow
edges per query). With context-sensitivity also considered, SUPA's two-stage
analysis becomes more precise for some programs but also incurs longer analysis
times. SUPA is also amenable to parallelization. A parallel implementation of
its single-stage flow-sensitive analysis achieves a speedup of up to 6.9x, with
an average of 3.05x, on an 8-core machine with respect to its sequential version.
| 2017-01-20 00:51:09.000000000 |
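The demand-driven flavor of the analysis above can be conveyed with a small Python sketch: a points-to query triggers a budgeted backward traversal of the pre-computed def-use graph rather than a whole-program fixpoint. Strong updates and flow- and context-sensitivity are omitted, so this is only the skeleton of the idea, with all names assumed.

```python
# A minimal sketch of demand-driven value-flow reachability: answer
# "what may pointer p point to?" by backward BFS over def-use edges,
# capped at a budget, collecting allocation sites.
from collections import deque

def points_to(query: str,
              defs: dict,        # var -> vars/objects it copies from
              allocs: set,       # allocation-site names
              budget: int = 100) -> set:
    result, seen, work, visited = set(), {query}, deque([query]), 0
    while work and visited < budget:
        v = work.popleft()
        for src in defs.get(v, []):
            visited += 1
            if src in allocs:
                result.add(src)        # reached an allocation site
            elif src not in seen:
                seen.add(src)
                work.append(src)
    return result

if __name__ == "__main__":
    # p = &o1; q = p; r = q  =>  r points to o1
    defs = {"r": ["q"], "q": ["p"], "p": ["o1"]}
    print(points_to("r", defs, allocs={"o1"}))  # {'o1'}
```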
1704.02418 | Proceedings Tenth Workshop on Programming Language Approaches to
Concurrency- and Communication-cEntric Software | cs.PL cs.DC cs.SE | PLACES 2017 (full title: Programming Language Approaches to Concurrency- and
Communication-cEntric Software) is the tenth edition of the PLACES workshop
series. After the first PLACES, which was affiliated to DisCoTec in 2008, the
workshop has been part of ETAPS every year since 2009 and is now an established
part of the ETAPS satellite events. PLACES 2017 was held on 29th April in
Uppsala, Sweden. The workshop series was started in order to promote the
application of novel programming language ideas to the increasingly important
problem of developing software for systems in which concurrency and
communication are intrinsic aspects. This includes software for both multi-core
systems and large-scale distributed and/or service-oriented systems. The scope
of PLACES includes new programming language features, whole new programming
language designs, new type systems, new semantic approaches, new program
analysis techniques, and new implementation mechanisms. This volume consists of
the papers accepted for presentation at the workshop.
| 2017-04-08 01:32:06.000000000 |
2112.01218 | GraphCode2Vec: Generic Code Embedding via Lexical and Program Dependence
Analyses | cs.SE | Code embedding is a keystone in the application of machine learning to
several Software Engineering (SE) tasks. To effectively support a plethora of
SE tasks, the embedding needs to capture program syntax and semantics in a way
that is generic. To this end, we propose the first self-supervised pre-training
approach (called GraphCode2Vec) which produces task-agnostic embedding of
lexical and program dependence features. GraphCode2Vec achieves this via a
synergistic combination of code analysis and Graph Neural Networks.
GraphCode2Vec is generic, it allows pre-training, and it is applicable to
several SE downstream tasks. We evaluate the effectiveness of GraphCode2Vec on
four (4) tasks (method name prediction, solution classification, mutation
testing and overfitted patch classification), and compare it with four (4)
similarly generic code embedding baselines (Code2Seq, Code2Vec, CodeBERT,
GraphCodeBERT) and 7 task-specific, learning-based methods. In particular,
GraphCode2Vec is more effective than both generic and task-specific
learning-based baselines. It is also complementary and comparable to
GraphCodeBERT (a larger and more complex model). We also demonstrate through a
probing and ablation study that GraphCode2Vec learns lexical and program
dependence features and that self-supervised pre-training improves
effectiveness.
| 2021-12-02 13:39:10.000000000 |
1310.6686 | Current State of Software Engineering Practice in Mexico (Estado
Actual de la Práctica de la Ingeniería de Software en México) | cs.SE | Software engineering is a relatively new discipline compared to other
sciences, since the origins of the term itself date back to the years 1968 and
1969. At present, the market and the software industry have significant
relevance in several countries of the world; however, although Mexico is
immersed in this race, it has not yet reached the level of success achieved in
this sector by other countries. This paper presents an overview of the current
state of software engineering practice in Mexico, with emphasis on the academic
realm. It presents a compilation of the scientific research activity carried
out in universities, as well as a brief analysis of undergraduate educational
programs that include the software engineering discipline. Finally, future work
is proposed in order to find a point of convergence between academia and
industry, and to support the flourishing of this industry, which would in turn
have a positive impact on the country's economy.
| 2013-10-24 17:58:50.000000000 |
1310.0802 | Introducing Enriched Concrete Syntax Trees | cs.SE cs.PL | In our earlier research, we explored the consistent and systematic application
of software metrics. The strong dependency of software metrics' applicability
on the input programming language was recognized as one of the main weaknesses
in this field. Introducing the enriched Concrete Syntax Tree (eCST) as an
internal and intermediate representation of the source code resulted in a step
forward in overcoming this weakness. In this paper, we explain the innovation
made by introducing eCST and outline ideas for the broader applicability of
eCST in other fields of software engineering.
| 2013-10-02 19:39:11.000000000 |
2302.07190 | Context Query Simulation for Smart Carparking Scenarios in the Melbourne
CDB | cs.DB cs.SE eess.SP | The rapid growth of the Internet of Things (IoT) has paved the way for
better context-awareness, enabling smarter applications. Despite the growth in
the number of IoT devices, Context Management Platforms (CMPs) that integrate
different IoT domains to produce context information lack the scalability to
cater to a high volume of context queries. Research on scalability and
adaptation in CMPs is therefore of significant importance. However, there are
limited methods to benchmark and validate research in this area, due to the
lack of sizable sets of context queries that could
simulate real-world situations, scenarios, and scenes. Commercially collected
context query logs are not publicly accessible and deploying IoT devices, and
context consumers in the real-world at scale is expensive and consumes a
significant effort and time. Therefore, there is a need to develop a method to
reliably generate and simulate context query loads that resembles real-world
scenarios to test CMPs for scale. In this paper, we propose a context query
simulator for the context-aware smart car parking scenario in Melbourne Central
Business District in Australia. We present the process of generating context
queries using multiple real-world datasets and publicly accessible reports,
followed by the context query execution process. The context query generator
matches the popularity of places with the different profiles of commuters,
preferences, and traffic variations to produce a dataset of context query
templates containing 898,050 records. The simulator is executable over a
seven-day profile which far exceeds the simulation time of any IoT system
simulator. The context query generation process is also generic and context
query language independent.
| 2023-02-13 14:23:35.000000000 |
2105.04767 | A Value-driven Approach for Software Process Improvement -- A Solution
Proposal | cs.SE | Software process improvement (SPI) is a means to an end, not an end in itself
(e.g., a goal is to achieve shorter time to market and not just compliance to a
process standard). Therefore, SPI initiatives ought to be streamlined to meet
the desired values for an organization. Through a literature review, seven
secondary studies aggregating maturity models and assessment frameworks were
identified. Furthermore, we identified six proposals for building a new
maturity model. We analyzed the existing maturity models for (a) their purpose,
structure, guidelines, and (b) the degree to which they explicitly consider
values and benefits. Based on this analysis and utilizing the guidelines from
the proposals to build maturity models, we have introduced an approach for
developing a value-driven approach for SPI. The proposal leverages
benefits-dependency networks. We argue that our approach enables the following
key benefits: (a) as a value-driven approach, it streamlines value-delivery and
helps to avoid unnecessary process interventions, (b) as a
knowledge-repository, it helps to codify lessons learned i.e. whether adopted
practices lead to value realization, and (c) as an internal process maturity
assessment tool, it tracks the progress of process realization, which is
necessary to monitor progress towards the intended values.
| 2021-05-11 03:27:02.000000000 |
2302.06065 | A Systematic Literature Review of Explainable AI for Software
Engineering | cs.SE | Context: In recent years, leveraging machine learning (ML) techniques has
become one of the main solutions to tackle many software engineering (SE)
tasks in research studies (ML4SE). This has been achieved by utilizing
state-of-the-art models that tend to be complex and black-box, which has led
to less explainable solutions that reduce the trust in and uptake of ML4SE
solutions by professionals in the industry.
Objective: One potential remedy is to offer explainable AI (XAI) methods to
provide the missing explainability. In this paper, we aim to explore to what
extent XAI has been studied in the SE community (XAI4SE) and provide a
comprehensive view of the current state-of-the-art as well as the challenges
and a roadmap for future work.
Method: We conduct a systematic literature review of the 24 most relevant
published studies in XAI4SE, selected out of 869 primary studies found by
keyword search. We answer three research questions through a meta-analysis of
the data collected per paper.
Results: Our study reveals that, among the identified studies, software
maintenance (68\%), and particularly defect prediction, has the highest share
of the SE stages and tasks being studied. Additionally, we found that XAI
methods were mainly applied to classic ML models rather than more complex
ones. We also noticed a clear lack of standard evaluation metrics for XAI
methods in the literature, which has caused confusion among researchers and a
lack of benchmarks for comparison.
Conclusions: XAI has been identified as a helpful tool by most of the studies
covered in the systematic review. However, XAI4SE is a relatively new domain
with much untapped potential, including the SE tasks to help with, the ML4SE
methods to explain, and the types of explanations to offer. This study
encourages researchers to work on the identified challenges and roadmap
reported in the paper.
| 2023-02-13 02:59:41.000000000 |
2312.16791 | Error Propagation Analysis for Multithreaded Programs: An Empirical
Approach | cs.SE cs.DC | Fault injection is a technique to measure the robustness of a program to
errors by introducing faults into the program under test. Following a fault
injection experiment, Error Propagation Analysis (EPA) is deployed to
understand how errors affect a program's execution. EPA typically compares the
traces of a fault-free (golden) run with those from a faulty run of the
program. While this suffices for deterministic programs, EPA approaches are
unsound for multithreaded programs with non-deterministic golden runs. In this
paper, we propose Invariant Propagation Analysis (IPA) as the use of
automatically inferred likely invariants ("invariants" in the following) in
lieu of golden traces for conducting EPA in multithreaded programs. We evaluate
the stability and fault coverage of invariants derived by IPA through fault
injection experiments across six different fault types and six representative
programs that can be executed with varying numbers of threads. We find that
stable invariants can be inferred in all cases, but their fault coverage
depends on the application and the fault type. We also find that fault
coverage for multithreaded executions with IPA can be even higher than for
traditional single-threaded EPA, which emphasizes that IPA results cannot be
trivially extrapolated from traditional EPA results.
| 2023-12-28 02:36:02.000000000 |
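To make the contrast in the abstract above concrete: classic EPA diffs a
faulty trace against a single golden trace, which breaks down when thread
interleaving makes golden runs non-deterministic; IPA instead checks faulty
runs against likely invariants inferred from many fault-free runs. The sketch
below is a toy illustration assuming simple min/max range invariants, not the
authors' inference engine.

    # Toy sketch of the IPA idea (not the authors' tool): check observed
    # values against likely invariants inferred from many fault-free runs
    # instead of diffing against one non-deterministic golden trace.

    def infer_range_invariants(fault_free_runs):
        """Infer per-variable (min, max) invariants from fault-free traces."""
        inv = {}
        for run in fault_free_runs:              # run: dict var -> value
            for var, val in run.items():
                lo, hi = inv.get(var, (val, val))
                inv[var] = (min(lo, val), max(hi, val))
        return inv

    def violations(faulty_run, invariants):
        """Report variables whose values escape the inferred ranges."""
        return [var for var, val in faulty_run.items()
                if var in invariants
                and not (invariants[var][0] <= val <= invariants[var][1])]

    golden_runs = [{"sum": 10, "workers": 4}, {"sum": 10, "workers": 8}]
    inv = infer_range_invariants(golden_runs)
    print(violations({"sum": 7, "workers": 8}, inv))   # ['sum']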
2402.05256 | IRFuzzer: Specialized Fuzzing for LLVM Backend Code Generation | cs.SE | Modern compilers, such as LLVM, are complex pieces of software. Due to their
complexity, manual testing is unlikely to suffice, yet formal verification is
difficult to scale. End-to-end fuzzing can be used, but it has difficulties in
achieving high coverage of some components of LLVM.
In this paper, we implement IRFuzzer to investigate the effectiveness of
specialized fuzzing of the LLVM compiler backend. We focus on two approaches to
improve the fuzzer: guaranteed input validity using constrained mutations and
improved feedback quality. The mutator in IRFuzzer is capable of generating a
wide range of LLVM IR inputs, including structured control flow, vector types,
and function definitions. The system instruments coding patterns in the
compiler to monitor the execution status of instruction selection. The
instrumentation not only provides a new coverage feedback called matcher table
coverage, but also provides architecture-specific guidance to the mutator.
We show that IRFuzzer is more effective than existing fuzzers by fuzzing on
29 mature LLVM backend targets. In the process, we reported 74 confirmed new
bugs in LLVM upstream, out of which 49 have been fixed and five have been
backported to LLVM 15, showing that specialized fuzzing provides useful and
actionable insights to LLVM developers.
| 2024-02-07 21:02:33.000000000 |
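A toy illustration of the "guaranteed input validity via constrained
mutations" idea from the abstract above: rather than flipping raw bytes of IR
text, mutate a typed in-memory form so every output stays well-formed. The
opcode list and IR rendering below are drastic simplifications invented for
illustration, not IRFuzzer's mutator.

    import random

    # Same-signature i32 binary ops: swapping among them keeps validity.
    OPS = ["add", "sub", "mul", "and", "or", "xor"]

    def mutate(instrs, rng):
        """Swap one binary opcode for another with the same type signature."""
        out = list(instrs)
        i = rng.randrange(len(out))
        name, op, a, b = out[i]
        out[i] = (name, rng.choice([o for o in OPS if o != op]), a, b)
        return out

    def render(instrs):
        return "\n".join(f"%{n} = {op} i32 {a}, {b}" for n, op, a, b in instrs)

    rng = random.Random(0)
    seed = [("1", "add", "%x", "%y"), ("2", "mul", "%1", "%x")]
    print(render(mutate(seed, rng)))   # output stays syntactically valid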
2403.05873 | LEGION: Harnessing Pre-trained Language Models for GitHub Topic
Recommendations with Distribution-Balance Loss | cs.SE cs.IR cs.LG | Open-source development has revolutionized the software industry by promoting
collaboration, transparency, and community-driven innovation. Today, a vast
amount of open-source software of various kinds, forming networks of
repositories, is hosted on GitHub - a popular software development platform.
To enhance the discoverability of the repository networks, i.e.,
groups of similar repositories, GitHub introduced repository topics in 2017
that enable users to more easily explore relevant projects by type, technology,
and more. It is thus crucial to accurately assign topics for each GitHub
repository. Current methods for automatic topic recommendation rely heavily on
TF-IDF for encoding textual data, presenting challenges in understanding
semantic nuances. This paper addresses the limitations of existing techniques
by proposing Legion, a novel approach that leverages Pre-trained Language
Models (PTMs) for recommending topics for GitHub repositories. The key novelty
of Legion is three-fold. First, Legion leverages the extensive capabilities of
PTMs in language understanding to capture contextual information and semantic
meaning in GitHub repositories. Second, Legion overcomes the challenge of
long-tailed distribution, which results in a bias toward popular topics in
PTMs, by proposing a Distribution-Balanced Loss (DB Loss) to better train the
PTMs. Third, Legion employs a filter to eliminate vague recommendations,
thereby improving the precision of PTMs. Our empirical evaluation on a
benchmark dataset of real-world GitHub repositories shows that Legion can
improve vanilla PTMs by up to 26% on recommending GitHub topics. Legion can
also suggest GitHub topics more precisely and effectively than the
state-of-the-art baseline, with an average improvement of 20% and 5% in terms of
Precision and F1-score, respectively.
| 2024-03-09 10:49:31.000000000 |
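To illustrate the long-tail problem the abstract above addresses: a
distribution-balanced loss re-weights training so that head (popular) topics
do not dominate. The PyTorch sketch below shows one simple inverse-frequency
re-weighting; it is not Legion's exact DB Loss formulation, and all numbers
are made up.

    import torch
    import torch.nn.functional as F

    # Illustrative re-weighting in the spirit of a distribution-balanced
    # loss for long-tailed multi-label topic prediction. NOT Legion's exact
    # formulation; it only shows down-weighting of head topics.
    def balanced_bce(logits, targets, topic_freq, eps=1e-8):
        """BCE where each topic's positive term scales by inverse frequency."""
        weights = 1.0 / (topic_freq + eps)   # rare topics get large weight
        weights = weights / weights.mean()   # normalise around 1.0
        return F.binary_cross_entropy_with_logits(
            logits, targets, pos_weight=weights, reduction="mean")

    logits = torch.randn(4, 6)                     # 4 repositories, 6 topics
    targets = torch.randint(0, 2, (4, 6)).float()
    freq = torch.tensor([0.40, 0.25, 0.15, 0.10, 0.07, 0.03])
    print(balanced_bce(logits, targets, freq))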
2210.06840 | Forensic-Ready Risk Management Concepts | cs.CR cs.SE | Currently, numerous approaches exist supporting the implementation of
forensic readiness and, indirectly, forensic-ready software systems. However,
the terminology used in the approaches and their focus tends to vary. To
facilitate the design of forensic-ready software systems, the clarity of the
underlying concepts needs to be established so that their requirements can be
unambiguously formulated and assessed. This is especially important when
considering forensic readiness as an add-on to information security. In this
paper, the concepts relevant to forensic readiness are derived and aligned
based on six existing approaches. The results then serve as a stepping stone
for enhancing Information Systems Security Risk Management (ISSRM) with
forensic readiness.
| 2022-10-13 08:50:09.000000000 |
2309.01379 | MLGuard: Defend Your Machine Learning Model! | cs.SE | Machine Learning (ML) is used in critical, highly regulated, and high-stakes
fields such as finance, medicine, and transportation. The correctness of these
ML applications is important for human safety and economic benefit. Progress
has been made on improving ML testing and monitoring. However, these
approaches do not provide i) pre/post conditions to handle uncertainty, ii)
corrective actions defined on probabilistic outcomes, or iii) continual
verification during system operation. In this paper, we propose MLGuard, a new
approach to specify contracts for ML applications. Our approach consists of a)
an ML contract specification defining pre/post conditions, invariants, and
altering behaviours, b) generated validation models to determine the
probability of contract violation, and c) an ML wrapper generator to enforce
the contract and respond to violations. Our work is intended to provide the
overarching framework required for building ML applications and monitoring
their safety.
| 2023-09-04 06:08:11.000000000 |
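A minimal sketch of the contract-wrapper idea from the abstract above,
assuming a hypothetical API: a precondition gates inputs, a validation model
estimates the probability of a contract violation, and a fallback implements
the corrective action. This is not the authors' implementation.

    # Hypothetical ML contract wrapper in the spirit of MLGuard: gate inputs
    # with a precondition, estimate the violation probability of each
    # prediction, and fall back to a corrective action when risk is high.
    class MLContractWrapper:
        def __init__(self, model, precondition, violation_prob,
                     threshold=0.1, fallback=lambda x: None):
            self.model = model                    # input -> prediction
            self.precondition = precondition      # input -> bool
            self.violation_prob = violation_prob  # input, pred -> float
            self.threshold = threshold
            self.fallback = fallback

        def predict(self, x):
            if not self.precondition(x):
                return self.fallback(x)           # reject out-of-contract input
            pred = self.model(x)
            if self.violation_prob(x, pred) > self.threshold:
                return self.fallback(x)           # corrective action on risk
            return pred

    # Hypothetical usage: a regression model whose inputs must lie in [0, 100].
    wrapper = MLContractWrapper(
        model=lambda x: 2 * x,
        precondition=lambda x: 0 <= x <= 100,
        violation_prob=lambda x, p: 0.0 if p < 150 else 0.5,
        fallback=lambda x: "REJECTED")
    print(wrapper.predict(10), wrapper.predict(90), wrapper.predict(-5))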
1111.1022 | Towards the integration of formal specification in the \'Ancora
methodology | cs.SE | There are non-formal methodologies such as RUP and OpenUP, agile
methodologies such as SCRUM and XP, and techniques like those proposed by UML,
which support the development of software. The software industry has struggled
to produce quality software, as insufficient importance has been given to
requirements engineering, resulting in poor requirements specifications and
software of poor quality. In order to contribute to the specification of
requirements, this article describes a methodological proposal that applies
formal methods to the results of the requirements analysis process of the
\'Ancora methodology.
| 2011-11-04 00:52:06.000000000 |
2303.16989 | Applications of Causality and Causal Inference in Software Engineering | cs.SE | Causal inference is a study of causal relationships between events and the
statistical study of inferring these relationships through interventions and
other statistical techniques. Causal reasoning is any line of work toward
determining causal relationships, including causal inference. This paper
explores the relationship between causal reasoning and various fields of
software engineering. This paper aims to uncover which software engineering
fields are currently benefiting from the study of causal inference and causal
reasoning, as well as which aspects of various problems are best addressed
using this methodology. With this information, this paper also aims to find
future subjects and fields that would benefit from this form of reasoning and
to provide that information to future researchers. This paper follows a
systematic literature review process, including: the formulation of a search
query, inclusion and exclusion criteria for the search results, clarifying
questions answered by the retrieved literature, and synthesizing the results
of the literature review. Close examination of the 45 papers relevant to the
research questions revealed that the majority of causal reasoning work related
to software engineering concerns testing, through root cause
localization. Furthermore, most causal reasoning is done informally through an
exploratory process of forming a Causality Graph as opposed to strict
statistical analysis or introduction of interventions. Finally, causal
reasoning is also used as a justification for many tools intended to make
software more human-readable by providing additional causal information to
logging processes or modeling languages.
| 2023-03-29 19:38:19.000000000 |
2109.02001 | Proceedings of the 9th International Workshop on Verification and
Program Transformation | cs.SC cs.PL cs.SE | The previous VPT 2020 workshop was organized in honour of Professor Alberto
Pettorossi on the occasion of his academic retirement from Universit\`a di Roma
Tor Vergata. Due to the pandemic, the VPT 2020 meeting was cancelled, but its
proceedings have already appeared in the EPTCS 320 volume. The joint VPT-20-21
event has subsumed the original programme of VPT 2020 and provided an
opportunity to meet and celebrate the achievements of Professor Alberto
Pettorossi; its programme was further expanded with the newly submitted
presentations for VPT 2021. The aim of the VPT workshop series is to provide a
forum where people from the areas of program transformation and program
verification can fruitfully exchange ideas and gain a deeper understanding of
the interactions between those two fields.
| 2021-09-05 05:42:21.000000000 |
1910.06500 | DeepVS: An Efficient and Generic Approach for Source Code Modeling Usage | cs.NE cs.PL cs.SE | The source code suggestions provided by current IDEs are mostly dependent on
static type learning. These suggestions often turn out to be irrelevant for a
particular context. Recently, deep learning-based approaches have shown great
potential in the modeling of source code for various software engineering
tasks. However, these techniques lack the generalization and robustness
required to adopt such models in a real-world software development
environment. This letter presents \textit{DeepVS}, an end-to-end
deep neural code completion tool that learns from existing codebases by
exploiting the bidirectional Gated Recurrent Unit (BiGRU) neural net. The
proposed tool is capable of providing source code suggestions instantly in an
IDE by using the pre-trained BiGRU neural net. The evaluation of this work is
two-fold: quantitative and qualitative. Through extensive evaluation on ten
real-world open-source software systems, the proposed method shows significant
performance enhancement and practicality. Moreover, the results also suggest
that the \textit{DeepVS} tool is capable of suggesting zero-day (unseen) code
tokens by learning coding patterns from real-world software systems.
| 2019-10-15 02:59:52.000000000 |
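For concreteness, a minimal PyTorch sketch of a BiGRU next-token model of the
kind the abstract above describes; the architecture, sizes, and vocabulary are
placeholders, not DeepVS's actual configuration.

    import torch
    import torch.nn as nn

    # Minimal BiGRU next-token model: encode the preceding code tokens with
    # a bidirectional GRU and predict a distribution over the next token.
    class BiGRUCompleter(nn.Module):
        def __init__(self, vocab_size, embed_dim=64, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.bigru = nn.GRU(embed_dim, hidden, batch_first=True,
                                bidirectional=True)
            self.out = nn.Linear(2 * hidden, vocab_size)

        def forward(self, token_ids):                 # (batch, seq_len)
            h, _ = self.bigru(self.embed(token_ids))  # (batch, seq, 2*hidden)
            return self.out(h[:, -1, :])              # next-token logits

    model = BiGRUCompleter(vocab_size=1000)
    context = torch.randint(0, 1000, (1, 12))     # 12 preceding code tokens
    suggestions = model(context).topk(5).indices  # top-5 candidate token ids
    print(suggestions)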
2201.02926 | Variational design for a structural family of CAD models | stat.ME cs.GR cs.SE | Variational design is a well-recognized CAD technique due to the increased
design efficiency it offers. It is often realized as a parametric family of
CAD models.
Although effective, this way of working cannot handle design requirements that
go beyond parametric changes. Such design requirements are not uncommon today
due to the increasing popularity of product customization. In particular, there
is often a need for designing a new model out of an existing structural family
of models, which share a structural pattern but have individually varied detail
features. To facilitate such design requirements, a new method is presented in
this paper. The idea is to express the underlying structural pattern in terms
of a submodel composed of the maximum common design features of the family, and
then to build a single master model by attaching to the submodel all detail
design features in the family. This master model is a representative model for
the family and contains all the features. By removing unwanted detail features
and adding new features, the master model can be easily adapted into a new
design, while keeping aligned with the family, structurally. Effectiveness of
this method has been validated by a series of case studies and comparisons of
increasing complexity.
| 2022-01-09 04:49:51.000000000 |
1604.03184 | Desiree - a Refinement Calculus for Requirements Engineering | cs.SE | The requirements elicited from stakeholders suffer from various afflictions,
including informality, incompleteness, ambiguity, vagueness, inconsistencies,
and more. It is the task of requirements engineering (RE) processes to derive
from these an eligible (formal, complete enough, unambiguous, consistent,
measurable, satisfiable, modifiable and traceable) requirements specification
that truly captures stakeholder needs.
We propose Desiree, a refinement calculus for systematically transforming
stakeholder requirements into an eligible specification. The core of the
calculus is a rich set of requirements operators that iteratively transform
stakeholder requirements by strengthening or weakening them, thereby reducing
incompleteness, removing ambiguities and vagueness, eliminating unattainability
and conflicts, turning them into an eligible specification. The framework also
includes an ontology for modeling and classifying requirements, a
description-based language for representing requirements, as well as a
systematic method for applying the concepts and operators. In addition, we
define the semantics of the requirements concepts and operators, and develop a
graphical modeling tool in support of the entire framework.
To evaluate our proposal, we have conducted a series of empirical
evaluations, including an ontology evaluation by classifying a large public
requirements set, a language evaluation by rewriting the large set of
requirements using our description-based syntax, a method evaluation through a
realistic case study, and an evaluation of the entire framework through three
controlled experiments. The results of our evaluations show that our ontology,
language, and method are adequate in capturing requirements in practice, and
offer strong evidence that with sufficient training, our framework indeed helps
people conduct more effective requirements engineering.
| 2016-04-12 00:32:53.000000000 |
2007.02652 | Rethinking IoT Security: A Protocol Based on Blockchain Smart Contracts
for Secure and Automated IoT Deployments | cs.CR cs.SE | Proliferation of IoT devices in society demands a renewed focus on securing
the use and maintenance of such systems. IoT-based systems will have a great
impact on society and therefore such systems must have guaranteed resilience.
We introduce cryptographic-based building blocks that strive to ensure that
distributed IoT networks remain in a healthy condition throughout their
lifecycle. Our presented solution utilizes deterministic and interlinked smart
contracts on the Ethereum blockchain to enforce secured management and
maintenance for hardened IoT devices. A key issue investigated is the protocol
development for securing IoT device deployments and means for communicating
securely with devices. By supporting values of openness, automation, and
provenance, we can introduce novel means that reduce the threats of
surveillance and theft, while also improving operator accountability and trust
in IoT technology.
| 2020-07-06 11:20:45.000000000 |
1412.3726 | Considering Polymorphism in Change-Based Test Suite Reduction | cs.SE | With the increasing popularity of continuous integration, algorithms for
selecting the minimal test-suite to cover a given set of changes are in order.
This paper reports on how polymorphism can handle false negatives in a
previous algorithm, which uses method-level changes in the base code to deduce
which tests need to be rerun. We compare the approach with and without
polymorphism on two distinct cases ---PMD and CruiseControl--- and discover an
interesting trade-off: incorporating polymorphism results in more relevant
tests being included in the test suite (hence improving accuracy); however, it
comes at the cost of a larger test suite (hence increasing the time to run the
minimal test suite).
| 2014-12-11 17:09:28.000000000 |
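To illustrate the trade-off in the abstract above: when a changed method
overrides a superclass method, a purely static change-to-test mapping misses
tests that reach the change through a polymorphic call site. The toy sketch
below (hypothetical class and test names, not the paper's algorithm) shows how
adding the override edge grows the selected suite while removing the false
negative.

    # Toy change-based test selection with and without polymorphism.
    CALLS = {  # test -> set of methods it (statically) calls
        "test_report": {"Renderer.render"},
        "test_html":   {"HtmlRenderer.render"},
    }
    OVERRIDES = {"HtmlRenderer.render": "Renderer.render"}  # sub -> super

    def select_tests(changed_methods, use_polymorphism=True):
        relevant = set(changed_methods)
        if use_polymorphism:
            # A changed override also affects callers of the overridden method.
            relevant |= {OVERRIDES[m] for m in changed_methods
                         if m in OVERRIDES}
        return {t for t, called in CALLS.items() if called & relevant}

    print(select_tests({"HtmlRenderer.render"}, use_polymorphism=False))
    # {'test_html'}  -- misses test_report (a false negative)
    print(select_tests({"HtmlRenderer.render"}, use_polymorphism=True))
    # {'test_html', 'test_report'}  -- larger suite, better accuracy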
2208.01105 | How to characterize the health of an Open Source Software project? A
snowball literature review of an emerging practice | cs.SE | Motivation: Society's dependence on Open Source Software (OSS) and the
communities that maintain the OSS is ever-growing. So are the potential risks
of, e.g., vulnerabilities being introduced in projects not actively maintained.
By assessing an OSS project's capability to stay viable and maintained over
time without interruption or weakening, i.e., the OSS health, users can
consider the risk implied by using the OSS as is, and if necessary, decide
whether to help improve the health or choose another option. However, such
assessment is complex as OSS health covers a wide range of sub-topics, and
existing support is limited. Aim: We aim to create an overview of
characteristics that affect the health of an OSS project and enable the
assessment thereof. Method: We conduct a snowball literature review based on a
start set of 9 papers, and identify 146 relevant papers over two iterations of
forward and backward snowballing. Health characteristics are elicited and coded
using structured and axial coding into a framework structure. Results: The
final framework consists of 104 health characteristics divided among 15 themes.
Characteristics address the socio-technical spectrum of the community of actors
maintaining the OSS project, the software and other deliverables being
maintained, and the orchestration facilitating the maintenance. Characteristics
are further divided based on the level of abstraction they address, i.e., the
OSS project-level specifically, or the project's overarching ecosystem of
related OSS projects. Conclusion: The framework provides an overview of the
wide span of health characteristics that may need to be considered when
evaluating OSS health and can serve as a foundation both for research and
practice.
| 2022-08-01 19:17:24.000000000 |