id | title | categories | abstract | created_at |
---|---|---|---|---|
2310.11248 | CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code
Completion | cs.LG cs.CL cs.SE | Code completion models have made significant progress in recent years, yet
current popular evaluation datasets, such as HumanEval and MBPP, predominantly
focus on code completion tasks within a single file. This over-simplified
setting falls short of representing the real-world software development
scenario where repositories span multiple files with numerous cross-file
dependencies, and accessing and understanding cross-file context is often
required to complete the code correctly.
To fill in this gap, we propose CrossCodeEval, a diverse and multilingual
code completion benchmark that necessitates an in-depth cross-file contextual
understanding to complete the code accurately. CrossCodeEval is built on a
diverse set of real-world, open-sourced, permissively-licensed repositories in
four popular programming languages: Python, Java, TypeScript, and C#. To create
examples that strictly require cross-file context for accurate completion, we
propose a straightforward yet efficient static-analysis-based approach to
pinpoint the use of cross-file context within the current file.
Extensive experiments on state-of-the-art code language models like CodeGen
and StarCoder demonstrate that CrossCodeEval is extremely challenging when the
relevant cross-file context is absent, and we see clear improvements when
adding this context to the prompt. However, despite such improvements,
performance remains far from ideal even with the highest-performing model,
indicating that CrossCodeEval is also capable of assessing a model's ability
to leverage extensive context for better code completion. Finally, we
benchmark various methods for retrieving cross-file context and show that
CrossCodeEval can also be used to measure the capability of code retrievers.
| 2023-10-17 13:18:01.000000000 |
1601.05976 | Business Process Modeling and Execution -- A Compiler for Distributed
Microservices | cs.SE | In this paper, we propose to rethink the dominant logic of how to model
business processes. We think that an actor-based approach better supports the
fundamental nature of business processes. We present a proposal
for a compiler architecture to model and execute business processes as a set of
communicating microservices that are hosted on a general purpose virtual
machine for distributed execution.
| 2016-01-22 12:40:29.000000000 |
1312.0354 | PN2SC Case Study: An EMF-IncQuery solution | cs.SE | The paper presents a solution for the Petri-Net to Statecharts case study of
the Transformation Tool Contest 2013, using EMF-IncQuery and Xtend for
implementing the model transformation.
| 2013-12-02 07:01:54.000000000 |
2309.03044 | Method-Level Bug Severity Prediction using Source Code Metrics and LLMs | cs.SE | In the past couple of decades, significant research efforts have been devoted to
the prediction of software bugs. However, most existing work in this domain
treats all bugs the same, which is not the case in practice. It is important
for a defect prediction method to estimate the severity of the identified bugs
so that the higher-severity ones get immediate attention. In this study, we
investigate source code metrics, source code representation using large
language models (LLMs), and their combination in predicting bug severity labels
of two prominent datasets. We leverage several source metrics at method-level
granularity to train eight different machine-learning models. Our results
suggest that Decision Tree and Random Forest models outperform other models
regarding our several evaluation metrics. We then use the pre-trained CodeBERT
LLM to study the source code representations' effectiveness in predicting bug
severity. CodeBERT finetuning improves the bug severity prediction results
significantly in the range of 29%-140% for several evaluation metrics, compared
to the best classic prediction model on source code metrics. Finally, we
integrate source code metrics into CodeBERT as an additional input, using our
two proposed architectures, which both enhance the CodeBERT model
effectiveness.
| 2023-09-06 14:38:07.000000000 |
1805.09919 | DesignBIP: A Design Studio for Modeling and Generating Systems with BIP | cs.SE | The Behavior-Interaction-Priority (BIP) framework, rooted in rigorous
semantics, allows the construction of systems that are correct-by-design. BIP
has been effectively used for the construction and analysis of large systems
such as robot controllers and satellite on-board software. Nevertheless, the
specification of BIP models is done in a purely textual manner without any code
editor support. To facilitate the specification of BIP models, we present
DesignBIP, a web-based, collaborative, version-controlled design studio. To
promote model scaling and reusability of BIP models, we use a graphical
language for modeling parameterized BIP models with rigorous semantics. We
present the various services provided by the design studio, including model
editors, code editors, consistency checking mechanisms, code generators, and
integration with the JavaBIP tool-set.
| 2018-05-24 22:04:37.000000000 |
2004.04477 | Demo Abstract: Contract-based Hierarchical Resilience Framework for
Cyber-Physical Systems | cs.SE | This demonstration presents a framework for building a resilient
Cyber-Physical Systems (CPS) cyber-infrastructure through the use of
hierarchical parametric assume-guarantee contracts. A Fischertechnik Sorting
Line with Color Detection training model is used to showcase our framework.
| 2020-04-09 10:54:51.000000000 |
1312.2359 | Towards Ontological Support for Principle Solutions in Mechanical
Engineering | cs.SE | The engineering design process follows a series of standardized stages of
development, which have many aspects in common with software engineering. Among
these stages, the principle solution can be regarded as an analogue of the
design specification, fixing as it does the way the final product works. It is
usually constructed as an abstract sketch (hand-drawn or constructed with a CAD
system) where the functional parts of the product are identified, and geometric
and topological constraints are formulated. Here, we outline a semantic
approach where the principle solution is annotated with ontological assertions,
thus making the intended requirements explicit and available for further
machine processing; this includes the automated detection of design errors in
the final CAD model, making additional use of a background ontology of
engineering knowledge. We embed this approach into a document-oriented design
workflow, in which the background ontology and semantic annotations in the
documents are exploited to trace parts and requirements through the design
process and across different applications.
| 2013-12-09 09:47:08.000000000 |
2302.07445 | Silent Vulnerable Dependency Alert Prediction with Vulnerability Key
Aspect Explanation | cs.CR cs.SE | Due to its convenience, open-source software is widely used. For beneficial
reasons, open-source maintainers often fix vulnerabilities silently, leaving
their users unaware of the updates and thus exposed to threats. Previous works all
focus on black-box binary detection of the silent dependency alerts that suffer
from high false-positive rates. Open-source software users need to analyze and
explain AI predictions themselves. Explainable AI has become notable as a
complement to black-box AI models, providing details in various forms to
explain AI decisions. Noticing that there is still no technique that can discover
silent dependency alerts in time, in this work, we propose a framework using an
encoder-decoder model with a binary detector to provide explainable silent
dependency alert prediction. Our model generates 4 types of vulnerability key
aspects including vulnerability type, root cause, attack vector, and impact to
enhance the trustworthiness and users' acceptance of alert prediction. By
experiments with several models and inputs, we confirm CodeBERT with both
commit messages and code changes achieves the best results. Our user study
shows that explainable alert predictions can help users find silent dependency
alerts more easily than black-box predictions. To the best of our knowledge,
this is the first research work on the application of Explainable AI in silent
dependency alert prediction, which opens the door to related domains.
| 2023-02-15 03:32:03.000000000 |
2207.13827 | Declarative Smart Contracts | cs.SE | This paper presents DeCon, a declarative programming language for
implementing smart contracts and specifying contract-level properties. Driven
by the observation that smart contract operations and contract-level properties
can be naturally expressed as relational constraints, DeCon models each smart
contract as a set of relational tables that store transaction records. This
relational representation of smart contracts enables convenient specification
of contract properties, facilitates run-time monitoring of potential property
violations, and brings clarity to contract debugging via data provenance.
Specifically, a DeCon program consists of a set of declarative rules and
violation query rules over the relational representation, describing the smart
contract implementation and contract-level properties, respectively. We have
developed a tool that can compile DeCon programs into executable Solidity
programs, with instrumentation for run-time property monitoring. Our case
studies demonstrate that DeCon can implement realistic smart contracts such as
ERC20 and ERC721 digital tokens. Our evaluation results reveal the marginal
overhead of DeCon compared to the open-source reference implementation,
incurring 14% median gas overhead for execution, and another 16% median gas
overhead for run-time verification.
| 2022-07-27 23:36:22.000000000 |
2209.11103 | To Fix or Not to Fix: A Critical Study of Crypto-misuses in the Wild | cs.CR cs.SE | Recent studies have revealed that 87 % to 96 % of the Android apps using
cryptographic APIs have a misuse which may cause security vulnerabilities. As
previous studies did not conduct a qualitative examination of the validity and
severity of the findings, our objective was to understand the findings in more
depth. We analyzed a set of 936 open-source Java applications for cryptographic
misuses. Our study reveals that 88.10 % of the analyzed applications fail to
use cryptographic APIs securely. Through our manual analysis of a random
sample, we gained new insights into effective false positives. For example,
every fourth misuse of the frequently misused JCA class MessageDigest is an
effective false positive due to its occurrence in a non-security context. As we
wanted to gain deeper insights into the security implications of these misuses,
we created an extensive vulnerability model for cryptographic API misuses. Our
model includes previously undiscussed attacks in the context of cryptographic
APIs such as DoS attacks. This model reveals that nearly half of the misuses
are of high severity, e.g., hard-coded credentials and potential
Man-in-the-Middle attacks.
| 2022-09-22 15:40:41.000000000 |
2207.03277 | A Comprehensive Empirical Study of Bias Mitigation Methods for Machine
Learning Classifiers | cs.SE cs.AI | Software bias is an increasingly important operational concern for software
engineers. We present a large-scale, comprehensive empirical study of 17
representative bias mitigation methods for Machine Learning (ML) classifiers,
evaluated with 11 ML performance metrics (e.g., accuracy), 4 fairness metrics,
and 20 types of fairness-performance trade-off assessment, applied to 8
widely-adopted software decision tasks. The empirical coverage is much more
comprehensive, covering the largest numbers of bias mitigation methods,
evaluation metrics, and fairness-performance trade-off measures compared to
previous work on this important software property. We find that (1) the bias
mitigation methods significantly decrease ML performance in 53% of the studied
scenarios (ranging between 42%~66% according to different ML performance
metrics); (2) the bias mitigation methods significantly improve fairness
measured by the 4 used metrics in 46% of all the scenarios (ranging between
24%~59% according to different fairness metrics); (3) the bias mitigation
methods even lead to decrease in both fairness and ML performance in 25% of the
scenarios; (4) the effectiveness of the bias mitigation methods depends on
tasks, models, the choice of protected attributes, and the set of metrics used
to assess fairness and ML performance; (5) there is no bias mitigation method
that can achieve the best trade-off in all the scenarios. The best method that
we find outperforms other methods in 30% of the scenarios. Researchers and
practitioners need to choose the bias mitigation method best suited to their
intended application scenario(s).
| 2022-07-07 13:14:49.000000000 |
2211.07604 | The OpenDC Microservice Simulator: Design, Implementation, and
Experimentation | cs.DC cs.SE | Microservices is an architectural style that structures an application as a
collection of loosely coupled services, making it easy for developers to build
and scale their applications. The microservices architecture approach differs
from the traditional monolithic style of treating software development as a
single entity. Microservice architecture is becoming more and more widely adopted.
However, microservice systems can be complex due to dependencies between the
microservices, resulting in unpredictable performance at a large scale.
Simulation is a cheap and fast way to investigate the performance of
microservices in more detail. This study aims to build a microservices
simulator for evaluating and comparing microservices-based applications. A
microservices reference architecture is designed and used as
the basis for the simulator. The simulator implementation uses statistical models
to generate the workload. The compelling features added to the simulator
include concurrent execution of microservices, configurable request depth,
three load-balancing policies and four request execution order policies. This
paper contains two experiments to show the simulator usage. The first
experiment covers request execution order policies at the microservice
instance. The second experiment compares load balancing policies across
microservice instances.
| 2022-11-08 10:52:21.000000000 |
2101.05877 | "How Was Your Weekend?" Software Development Teams Working From Home
During COVID-19 | cs.SE | The mass shift to working at home during the COVID-19 pandemic radically
changed the way many software development teams collaborate and communicate. To
investigate how team culture and team productivity may also have been affected,
we conducted two surveys at a large software company. The first, an exploratory
survey during the early months of the pandemic with 2,265 developer responses,
revealed that many developers faced challenges reaching milestones and that
their team productivity had changed. We also found through qualitative analysis
that important team culture factors such as communication and social connection
had been affected. For example, the simple phrase "How was your weekend?" had
become a subtle way to show peer support.
In our second survey, we conducted a quantitative analysis of the team
cultural factors that emerged from our first survey to understand the
prevalence of the reported changes. From 608 developer responses, we found that
74% of these respondents missed social interactions with colleagues and 51%
reported a decrease in their communication ease with colleagues. We used data
from the second survey to build a regression model to identify important team
culture factors for modeling team productivity. We found that the ability to
brainstorm with colleagues, difficulty communicating with colleagues, and
satisfaction with interactions from social activities are important factors
that are associated with how developers report their software development
team's productivity. Our findings inform how managers and leaders in large
software companies can support sustained team productivity during times of
crisis and beyond.
| 2021-01-14 21:47:42.000000000 |
2103.12298 | Revisiting Dockerfiles in Open Source Software Over Time | cs.SE | Docker is becoming ubiquitous with containerization for developing and
deploying applications. Previous studies have analyzed Dockerfiles that are
used to create container images in order to better understand how to improve
Docker tooling. These studies obtain Dockerfiles using either Docker Hub or
GitHub. In this paper, we revisit the findings of previous studies using the
largest set of Dockerfiles known to date with over 9.4 million unique
Dockerfiles found in the World of Code infrastructure spanning from 2013-2020.
We contribute a historical view of the Dockerfile format by analyzing the
Docker engine changelogs and use the history to enhance our analysis of
Dockerfiles. We also reconfirm previous findings of a downward trend in using
OS images and an upward trend of using language images. We also reconfirm
that Dockerfile smell counts are slightly decreasing, meaning that Dockerfile
authors are likely getting better at following best practices. These
findings indicate that previous analyses from prior works have been
correct in many of their conclusions, and their suggestions to build better tools
for Docker image creation are further substantiated.
| 2021-03-23 04:27:23.000000000 |
2309.16798 | Expert-sourcing Domain-specific Knowledge: The Case of Synonym
Validation | cs.SE | One prerequisite for supervised machine learning is high quality labelled
data. Acquiring such data is, particularly if expert knowledge is required,
costly or even impossible if the task needs to be performed by a single expert.
In this paper, we illustrate tool support that we adopted and extended to
source domain-specific knowledge from experts. We provide insight into design
decisions that aim to motivate experts to dedicate their time to performing
the labelling task. We are currently using the approach to identify true
synonyms from a list of candidate synonyms. The identification of synonyms is
important in scenarios where stakeholders from different companies and
backgrounds need to collaborate, for example when defining and negotiating
requirements. We foresee that the approach of expert-sourcing is applicable to
any data labelling task in software engineering. The discussed design decisions
and implementation are an initial draft that can be extended, refined and
validated with further application.
| 2023-09-28 19:02:33.000000000 |
2203.12065 | Dozer: Migrating Shell Commands to Ansible Modules via Execution
Profiling and Synthesis | cs.SE | Software developers frequently use the system shell to perform configuration
management tasks. Unfortunately, the shell does not scale well to large
systems, and configuration management tools like Ansible are more difficult to
learn. We address this problem with Dozer, a technique to help developers push
their shell commands into Ansible task definitions. It operates by tracing and
comparing system calls to find Ansible modules with similar behaviors to shell
commands, then generating and validating migrations to find the task which
produces the most similar changes to the system. Dozer is syntax agnostic,
which should allow it to generalize to other configuration management
platforms. We evaluate Dozer using datasets from open source configuration
scripts.
| 2022-03-22 21:54:44.000000000 |
2205.04883 | Identical Image Retrieval using Deep Learning | cs.CV cs.SE | In recent years, interaction with images has increased.
Image similarity involves fetching images that look similar to a given
reference image. The goal is to find out whether the image searched as a
query can retrieve similar pictures. We use the BigTransfer model, which
is a state-of-the-art model itself. BigTransfer (BiT) is essentially a ResNet but
pre-trained on a larger dataset like ImageNet and ImageNet-21k with additional
modifications. Using the fine-tuned pre-trained Convolution Neural Network
Model, we extract the key features and train on the K-Nearest Neighbor model to
obtain the nearest neighbor. The application of our model is to find similar
images, which is hard to achieve through text queries within a low inference
time. We analyse the benchmark of our model based on this application.
| 2022-05-10 13:34:41.000000000 |
2212.11140 | Benchmarking Large Language Models for Automated Verilog RTL Code
Generation | cs.PL cs.LG cs.SE | Automating hardware design could obviate a significant amount of human error
from the engineering process and lead to fewer errors. Verilog is a popular
hardware description language to model and design digital systems, thus
generating Verilog code is a critical first step. Emerging large language
models (LLMs) are able to write high-quality code in other programming
languages. In this paper, we characterize the ability of LLMs to generate
useful Verilog. For this, we fine-tune pre-trained LLMs on Verilog datasets
collected from GitHub and Verilog textbooks. We construct an evaluation
framework comprising test-benches for functional analysis and a flow to test
the syntax of Verilog code generated in response to problems of varying
difficulty. Our findings show that across our problem scenarios, the
fine-tuning results in LLMs more capable of producing syntactically correct
code (25.9% overall). Further, when analyzing functional correctness, a
fine-tuned open-source CodeGen LLM can outperform the state-of-the-art
commercial Codex LLM (6.5% overall). Training/evaluation scripts and LLM
checkpoints are available: https://github.com/shailja-thakur/VGen.
| 2022-12-13 16:34:39.000000000 |
2310.18385 | Matching of Descriptive Labels to Glossary Descriptions | cs.CL cs.AI cs.SE | Semantic text similarity plays an important role in software engineering
tasks in which engineers are requested to clarify the semantics of descriptive
labels (e.g., business terms, table column names) that often consist of
too short or too generic words and appear in their IT systems. We formulate
this type of problem as a task of matching descriptive labels to glossary
descriptions. We then propose a framework to leverage an existing semantic text
similarity measurement (STS) and augment it using semantic label enrichment and
set-based collective contextualization where the former is a method to retrieve
sentences relevant to a given label and the latter is a method to compute
similarity between two contexts each of which is derived from a set of texts
(e.g., column names in the same table). We performed an experiment on two
datasets derived from publicly available data sources. The result indicated
that the proposed methods helped the underlying STS correctly match more
descriptive labels with the descriptions.
| 2023-10-27 07:09:04.000000000 |
2404.03624 | Standardizing Knowledge Engineering Practices with a Reference
Architecture | cs.AI cs.SE | Knowledge engineering is the process of creating and maintaining
knowledge-producing systems. Throughout the history of computer science and AI,
knowledge engineering workflows have been widely used given the importance of
high-quality knowledge for reliable intelligent agents. Meanwhile, the scope of
knowledge engineering, as apparent from its target tasks and use cases, has
been shifting, together with its paradigms such as expert systems, semantic
web, and language modeling. The intended use cases and supported user
requirements between these paradigms have not been analyzed globally, as new
paradigms often satisfy prior pain points while possibly introducing new ones.
The recent abstraction of systemic patterns into a boxology provides an opening
for aligning the requirements and use cases of knowledge engineering with the
systems, components, and software that can satisfy them best. This paper
proposes a vision of harmonizing the best practices in the field of knowledge
engineering by leveraging the software engineering methodology of creating
reference architectures. We describe how a reference architecture can be
iteratively designed and implemented to associate user needs with recurring
systemic patterns, building on top of existing knowledge engineering workflows
and boxologies. We provide a six-step roadmap that can enable the development
of such an architecture, providing an initial design and outcome of the
definition of architectural scope, selection of information sources, and
analysis. We expect that following through on this vision will lead to
well-grounded reference architectures for knowledge engineering, will advance
the ongoing initiatives of organizing the neurosymbolic knowledge engineering
space, and will build new links to the software architectures and data science
communities.
| 2024-04-04 17:46:32.000000000 |
2201.06720 | DeepRelease: Language-agnostic Release Notes Generation from Pull
Requests of Open-source Software | cs.SE | The release note is an essential software artifact of open-source software
that documents crucial information about changes, such as new features and bug
fixes. With the help of release notes, both developers and users could have a
general understanding of the latest version without browsing the source code.
However, it is a daunting and time-consuming job for developers to produce
release notes. Although prior studies have provided some automatic approaches,
they generate release notes mainly by extracting information from code changes.
This makes them language-specific and not general enough to be broadly
applicable. Therefore, helping developers produce release notes effectively
remains an unsolved challenge. To address the problem, we first conduct a
manual study on the release notes of 900 GitHub projects, which reveals that
more than 54% of projects produce their release notes with pull requests. Based
on the empirical finding, we propose a deep learning based approach named
DeepRelease (Deep learning based Release notes generator) to generate release
notes according to pull requests. The process of release notes generation in
DeepRelease includes the change entries generation and the change category
(i.e., new features or bug fixes) generation, which are formulated as a text
summarization task and a multi-class classification problem, respectively.
Since DeepRelease fully employs text information from pull requests to
summarize changes and identify the change category, it is language-agnostic and
can be used for projects in any language. We build a dataset with over 46K
release notes and evaluate DeepRelease on the dataset. The experimental results
indicate that DeepRelease outperforms four baselines and can generate release
notes similar to manually written ones in a fraction of the time.
| 2022-01-18 03:42:42.000000000 |
1804.03919 | An Experimental Evaluation of a De-biasing Intervention for Professional
Software Developers | cs.SE cs.CY | CONTEXT: The role of expert judgement is essential in our quest to improve
software project planning and execution. However, its accuracy is dependent on
many factors, not least the avoidance of judgement biases, such as the
anchoring bias, arising from being influenced by initial information, even when
it's misleading or irrelevant. This strong effect is widely documented.
OBJECTIVE: We aimed to replicate this anchoring bias using professionals and,
novel in a software engineering context, explore de-biasing interventions
through increasing knowledge and awareness of judgement biases. METHOD: We ran
two series of experiments in company settings with a total of 410 software
developers. Some developers took part in a workshop to heighten their awareness
of a range of cognitive biases, including anchoring. Later, the anchoring bias
was induced by presenting low or high productivity values, followed by the
participants' estimates of their own project productivity. Our hypothesis was
that the workshop would lead to reduced bias, i.e., work as a de-biasing
intervention. RESULTS: The anchors had a large effect (robust Cohen's $d=1.19$)
in influencing estimates. This was substantially reduced in those participants
who attended the workshop (robust Cohen's $d=0.72$). The reduced bias related
mainly to the high anchor. The de-biasing intervention also led to a threefold
reduction in estimate variance. CONCLUSIONS: The impact of anchors upon
judgement was substantial. Learning about judgement biases does appear capable
of mitigating, although not removing, the anchoring bias. The positive effect
of de-biasing through learning about biases suggests that it has value.
| 2018-04-11 10:47:27.000000000 |
2205.13522 | Dynamically Relative Position Encoding-Based Transformer for Automatic
Code Edit | cs.SE | Adapting Deep Learning (DL) techniques to automate non-trivial coding
activities, such as code documentation and defect detection, has been
intensively studied recently. Learning to predict code changes is one of the
popular and essential investigations. Prior studies have shown that DL
techniques such as Neural Machine Translation (NMT) can benefit meaningful code
changes, including bug fixing and code refactoring. However, NMT models may
encounter bottleneck when modeling long sequences, thus are limited in
accurately predicting code changes. In this work, we design a Transformer-based
approach, considering that Transformer has proven effective in capturing
long-term dependencies. Specifically, we propose a novel model named DTrans.
To better incorporate the local structure of code, i.e., statement-level
information in this paper, DTrans is designed with dynamically relative
position encoding in the multi-head attention of Transformer. Experiments on
benchmark datasets demonstrate that DTrans can more accurately generate patches
than the state-of-the-art methods, increasing the performance by at least
5.45\%-46.57\% in terms of the exact match metric on different datasets.
Moreover, DTrans can locate the lines to change with 1.75\%-24.21\% higher
accuracy than the existing methods.
| 2022-05-26 17:41:07.000000000 |
2306.01509 | EvLog: Identifying Anomalous Logs over Software Evolution | cs.SE | Software logs record system activities, aiding maintainers in identifying the
underlying causes for failures and enabling prompt mitigation actions. However,
maintainers need to inspect a large volume of daily logs to identify the
anomalous logs that reveal failure details for further diagnosis. Thus, how to
automatically distinguish these anomalous logs from normal logs becomes a
critical problem. Existing approaches alleviate the burden on software
maintainers, but they are built upon an improper yet critical assumption:
logging statements in the software remain unchanged. While software keeps
evolving, our empirical study finds that evolving software brings three
challenges: log parsing errors, evolving log events, and unstable log
sequences.
In this paper, we propose a novel unsupervised approach named Evolving Log
analyzer (EvLog) to mitigate these challenges. We first build a multi-level
representation extractor to process logs without parsing to prevent errors from
the parser. The multi-level representations preserve the essential semantics of
logs while leaving out insignificant changes in evolving events. EvLog then
implements an anomaly discriminator with an attention mechanism to identify the
anomalous logs and avoid the issue brought by the unstable sequence. EvLog has
shown effectiveness in two real-world system evolution log datasets with an
average F1 score of 0.955 and 0.847 in the intra-version setting and
inter-version setting, respectively, which outperforms other state-of-the-art
approaches by a wide margin. To the best of our knowledge, this is the first study on
localizing anomalous logs over software evolution. We believe our work sheds
new light on the impact of software evolution with the corresponding solutions
for the log analysis community.
| 2023-06-02 12:58:00.000000000 |
2203.14093 | MQDD: Pre-training of Multimodal Question Duplicity Detection for
Software Engineering Domain | cs.CL cs.LG cs.PL cs.SE | This work proposes a new pipeline for leveraging data collected on the Stack
Overflow website for pre-training a multimodal model for searching duplicates
on question answering websites. Our multimodal model is trained on question
descriptions and source codes in multiple programming languages. We design two
new learning objectives to improve duplicate detection capabilities. The result
of this work is a mature, fine-tuned Multimodal Question Duplicity Detection
(MQDD) model, ready to be integrated into a Stack Overflow search system, where
it can help users find answers for already answered questions. Alongside the
MQDD model, we release two datasets related to the software engineering domain.
The first Stack Overflow Dataset (SOD) represents a massive corpus of paired
questions and answers. The second Stack Overflow Duplicity Dataset (SODD)
contains data for training duplicate detection models.
| 2022-03-26 15:01:26.000000000 |
1202.1953 | Arduino Tool: For Interactive Artwork Installations | cs.SE | The emergence of digital media and computational tools has widened the
doors for creativity. The cutting edge in the digital arts and the role of new
technologies can be explored for possible creativity. This gives an
opportunity to involve arts with technologies to make creative works. The
interactive artworks are often installed in the places where multiple people
can interact with the installation, which allows the art to achieve its purpose
by allowing the people to observe and involve with the installation. The level
of engagement of the audience depends on the various factors such as aesthetic
satisfaction, how the audience constructs meaning, pleasure and enjoyment. The
method to evaluate these experiences is challenging as it depends on
integration between the artificial life and real life by means of human
computer interaction. This research investigates "How does Arduino fit creative
and interactive artwork installations?" using an artwork installation in the
campus of NTNU (Norwegian University of Science & Technology). The main focus
of this investigation has been to get an overview on the intersection between
information technology and Arts. This gives an opportunity to understand
various attributes like creativity, cooperation and openness of processes
influencing creative artworks. The artwork is a combination of Arduino and
other auxiliary components such as sensors, LEDs, and speakers.
| 2012-02-09 11:12:09.000000000 |
1411.3790 | Monotonic Abstraction Techniques: from Parametric to Software Model
Checking | cs.LO cs.SE | Monotonic abstraction is a technique introduced in model checking
parameterized distributed systems in order to cope with transitions containing
global conditions within guards. The technique has been re-interpreted in a
declarative setting in previous papers of ours and applied to the verification
of fault tolerant systems under the so-called "stopping failures" model. The
declarative reinterpretation consists in logical techniques (quantifier
relativizations and, especially, quantifier instantiations) making sense in a
broader context. In fact, we recently showed that such techniques can
over-approximate array accelerations, so that they can be employed as a
meaningful (and practically effective) component of CEGAR loops in software
model checking too.
| 2014-11-14 04:38:40.000000000 |
2107.07357 | One Thousand and One Stories: A Large-Scale Survey of Software
Refactoring | cs.SE | Despite the availability of refactoring as a feature in popular IDEs, recent
studies revealed that developers are reluctant to use them, and still prefer
the manual refactoring of their code. At JetBrains, our goal is to fully
support refactoring features in IntelliJ-based IDEs and improve their adoption
in practice. Therefore, we start by raising the following main questions. How
exactly do people refactor code? What refactorings are the most popular? Why do
some developers tend not to use convenient IDE refactoring tools?
In this paper, we investigate the raised questions through the design and
implementation of a survey targeting 1,183 users of IntelliJ-based IDEs. Our
quantitative and qualitative analysis of the survey results shows that almost
two-thirds of developers spend more than one hour in a single session
refactoring their code; that refactoring types vary greatly in popularity; and
that a lot of developers would like to know more about IDE refactoring features
but lack the means to do so. These results serve us internally to support the
next generation of refactoring features, and can also help our research
community establish new directions in refactoring usability research.
| 2021-07-15 14:34:33.000000000 |
2209.14071 | Towards Auditable Distributed Systems | cs.SE | The emerging trend towards distributed (cloud) systems (DS) has widely
arrived, whether in the automotive, public, or financial sector, but the
execution of services from heterogeneous service providers is exposed to several
risks. Besides hardware/software faults or cyber attacks that can influence the
correctness of the system, fraud is also an issue. In such cases it is not only
important to verify the correctness of the system, but also to have evidence of which
component and participant behaves faultily. This makes it possible, e.g., to claim
compensation after system execution, but also to assure that information for
verification can be trusted. The main goal of our research is to assure the
monitoring of DS based on auditable information. We follow a decentralized
monitoring strategy and envision a distributed monitoring approach for system
properties based on distributed logic programs that consider auditability. The
expected contribution of this work is to establish, with the application of our
framework, the mutual trust of distributed parties, as well as the trust of clients
in the system's execution. We showcase our ideas on a DS for booking services
with unmanned air vehicles.
| 2022-09-28 13:13:26.000000000 |
2104.00452 | Semantic XAI for contextualized demand forecasting explanations | cs.AI cs.SE | The paper proposes a novel architecture for explainable AI based on semantic
technologies and AI. We tailor the architecture for the domain of demand
forecasting and validate it on a real-world case study. The provided
explanations combine concepts describing features relevant to a particular
forecast, related media events, and metadata regarding external datasets of
interest. The knowledge graph provides concepts that convey feature information
at a higher abstraction level. By using them, explanations do not expose
sensitive details regarding the demand forecasting models. The explanations
also emphasize actionable dimensions where suitable. We link domain knowledge,
forecasted values, and forecast explanations in a Knowledge Graph. The ontology
and dataset we developed for this use case are publicly available for further
research.
| 2021-04-01 13:08:53.000000000 |
2306.10429 | An Architectural Design Decision Model for Resilient IoT Application | cs.SE | The Internet of Things is a paradigm that refers to the ubiquitous presence
around us of physical objects equipped with sensing, networking, and processing
capabilities that allow them to cooperate with their environment to reach
common goals. However, any threat affecting the availability of IoT
applications can be critical, both financially and for the physical
safety of users. This calls for IoT applications that remain
operational and efficiently handle possible threats. However, designing an IoT
application that can handle threats is challenging for stakeholders due to the
high susceptibility to threats of IoT applications and the lack of modeling
mechanisms that contemplate resilience as a first-class representation. In this
paper, an architectural Design Decision Model for Resilient IoT applications is
presented to reduce the difficulty of stakeholders in designing resilient IoT
applications. Our approach is illustrated and its value demonstrated through
the modeling of a case.
| 2023-06-17 21:44:07.000000000 |
1912.10198 | Automatically Extracting Subroutine Summary Descriptions from
Unstructured Comments | cs.SE | Summary descriptions of subroutines are short (usually one-sentence) natural
language explanations of a subroutine's behavior and purpose in a program.
These summaries are ubiquitous in documentation, and many tools such as
JavaDocs and Doxygen generate documentation built around them. And yet,
extracting summaries from unstructured source code repositories remains a
difficult research problem -- it is very difficult to generate clean structured
documentation unless the summaries are annotated by programmers. This becomes a
problem in large repositories of legacy code, since it is cost prohibitive to
retroactively annotate summaries in dozens or hundreds of old programs.
Likewise, it is a problem for creators of automatic documentation generation
algorithms, since these algorithms usually must learn from large annotated
datasets, which do not exist for many programming languages. In this paper, we
present a semi-automated approach via crowdsourcing and a fully-automated
approach for annotating summaries from unstructured code comments. We present
experiments validating the approaches, and provide recommendations and cost
estimates for automatically annotating large repositories.
| 2019-12-21 05:03:10.000000000 |
1905.01213 | Matlab vs. OpenCV: A Comparative Study of Different Machine Learning
Algorithms | cs.LG cs.MS cs.SE stat.ML | Scientific Computing relies on executing computer algorithms coded in some
programming languages. Given particular available hardware, algorithm speed
is a crucial factor. There are many scientific computing environments used to
code such algorithms. Matlab is one of the most tremendously successful and
widespread scientific computing environments, rich in toolboxes,
libraries, and data visualization tools. OpenCV is a (C++)-based library
written primarily for Computer Vision and its related areas. This paper
presents a comparative study using 20 different real datasets to compare the
speed of Matlab and OpenCV for some Machine Learning algorithms. Although
Matlab is more convenient in developing and data presentation, OpenCV is much
faster in execution, where the speed ratio reaches more than 80 in some cases.
The best of both worlds can be achieved by using Matlab or similar
environments to select the most successful algorithm; then, implementing the
selected algorithm using OpenCV or similar environments to gain a speed factor.
| 2019-05-03 14:58:58.000000000 |
1905.11366 | Supporting Software Engineering Research and Education by Annotating
Public Videos of Developers Programming | cs.SE | Software engineering has long studied how software developers work, building
a body of work which forms the foundation of many software engineering best
practices, tools, and theories. Recently, some developers have begun recording
videos of themselves engaged in programming tasks contributing to open source
projects, enabling them to share knowledge and socialize with other developers.
We believe that these videos offer an important opportunity for both software
engineering research and education. In this paper, we discuss the potential use
of these videos as well as open questions for how to best enable this
envisioned use. We propose creating a central repository of programming videos,
enabling analyzing and annotating videos to illustrate specific behaviors of
interest such as asking and answering questions, employing strategies, and
software engineering theories. Such a repository would offer an important new
way in which both software engineering researchers and students can understand
how software developers work.
| 2019-05-09 04:42:39.000000000 |
2111.01415 | Callee: Recovering Call Graphs for Binaries with Transfer and
Contrastive Learning | cs.SE cs.AI cs.CR | Recovering binary programs' call graphs is crucial for inter-procedural
analysis tasks and applications based on them. One of the core
challenges is recognizing targets of indirect calls (i.e., indirect callees).
Existing solutions all have high false positives and negatives, making call
graphs inaccurate. In this paper, we propose a new solution Callee combining
transfer learning and contrastive learning. The key insight is that deep
neural networks (DNNs) can automatically identify patterns concerning indirect
calls, which can be more efficient than designing approximation algorithms or
heuristic rules to handle various cases. Inspired by the advances in
question-answering applications, we utilize contrastive learning to answer the
callsite-callee question. However, one of the toughest challenges is that DNNs
need large datasets to achieve high performance, while collecting large-scale
indirect-call ground truths can be computationally expensive. Since direct calls
and indirect calls share similar calling conventions, it is possible to
transfer knowledge learned from direct calls to indirect ones. Therefore, we
leverage transfer learning to pre-train DNNs with easy-to-collect direct calls
and further fine-tune the indirect-call DNNs. We evaluate Callee on several
groups of targets, and results show that our solution could match callsites to
callees with an F1-Measure of 94.6%, much better than state-of-the-art
solutions. Further, we apply Callee to binary code similarity detection and
hybrid fuzzing, and find that it can greatly improve their performance.
| 2021-11-02 08:08:18.000000000 |
cs/0106030 | Logic, Individuals and Concepts | cs.LO cs.DB cs.DM cs.SE | This extended abstract gives a brief outline of the connections between the
descriptions and variable concepts. Thus, the notion of a concept is extended
to include both the syntax and semantics features. The evaluation map in use is
parameterized by a kind of computational environment, the index, giving rise to
indexed concepts. The concepts are inhabited into language by the descriptions
from the higher order logic. In general the idea of object-as-functor should
assist the designer to outline a programming tool in conceptual shell style.
| 2001-06-12 12:58:27.000000000 |
2209.04514 | Compiler Testing using Template Java Programs | cs.PL cs.SE | We present JAttack, a framework that enables template-based testing for
compilers. Using JAttack, a developer writes a template program that describes
a set of programs to be generated and given as test inputs to a compiler. Such
a framework enables developers to incorporate their domain knowledge on testing
compilers, giving a basic program structure that allows for exploring complex
programs that can trigger sophisticated compiler optimizations. A developer
writes a template program in the host language (Java) that contains holes to be
filled by JAttack. Each hole, written using a domain-specific language,
constructs a node within an extended abstract syntax tree (eAST). An eAST node
defines the search space for the hole, i.e., a set of expressions and values.
JAttack generates programs by executing templates and filling each hole by
randomly choosing expressions and values (available within the search space
defined by the hole). Additionally, we introduce several optimizations to
reduce JAttack's generation cost. While JAttack could be used to test various
compiler features, we demonstrate its capabilities in helping test just-in-time
(JIT) Java compilers, whose optimizations occur at runtime after a sufficient
number of executions. Using JAttack, we have found six critical bugs that were
confirmed by Oracle developers. Four of them were previously unknown, including
two unknown CVEs (Common Vulnerabilities and Exposures). JAttack shows the
power of combining developers' domain knowledge (via templates) with random
testing to detect bugs in JIT compilers.
| 2022-09-09 20:31:38.000000000 |
1912.05937 | Inferring Input Grammars from Dynamic Control Flow | cs.SE cs.PL | A program is characterized by its input model, and a formal input model can
be of use in diverse areas including vulnerability analysis, reverse
engineering, fuzzing and software testing, clone detection and refactoring.
Unfortunately, input models for typical programs are often unavailable or out
of date. While there exist algorithms that can mine the syntactical structure
of program inputs, they either produce unwieldy and incomprehensible grammars,
or require heuristics that target specific parsing patterns.
In this paper, we present a general algorithm that takes a program and a
small set of sample inputs and automatically infers a readable context-free
grammar capturing the input language of the program. We infer the syntactic
input structure only by observing access of input characters at different
locations of the input parser. This works on all stack-based recursive
descent input parsers, including PEG and parser combinators, and can do so
entirely without program-specific heuristics. Our Mimid prototype produced
accurate and readable grammars for a variety of evaluation subjects, including
expr, URLparse, and microJSON.
| 2019-12-12 13:35:09.000000000 |
1406.3554 | Methodological Societies | cs.SE | The evolution of self-adaptive systems poses the problems of their coherence
and of resuming the systems' functioning while taking into account the accomplished
work. While they are the basis of self-adaptive systems, these two aspects
are not considered in related works. In this paper, we propose a
methodology-based approach. In this approach, the adaptive system's
evolution is designed at the model level, and its execution is carried out on the
system by exploiting a methodological process. For its concretization, we use
colored Petri nets to describe the agents' individual tasks. To handle the
resumption of the system's functioning, we exploit the property of Petri nets that
the control flow depends only on the last marking.
| 2014-06-13 14:52:30.000000000 |
2307.15930 | Tailoring Stateless Model Checking for Event-Driven Multi-Threaded
Programs | cs.PL cs.SE | Event-driven multi-threaded programming is an important idiom for structuring
concurrent computations. Stateless Model Checking (SMC) is an effective
verification technique for multi-threaded programs, especially when coupled
with Dynamic Partial Order Reduction (DPOR). Existing SMC techniques are often
ineffective in handling event-driven programs, since they will typically
explore all possible orderings of event processing, even when events do not
conflict. We present Event-DPOR, a DPOR algorithm tailored to event-driven
multi-threaded programs. It is based on Optimal-DPOR, an optimal DPOR algorithm
for multi-threaded programs; we show how it can be extended for event-driven
programs. We prove correctness of Event-DPOR for all programs, and optimality
for a large subclass. One complication is that an operation in Event-DPOR,
which checks for redundancy of new executions, is NP-hard, as we show in this
paper; we address this by a sequence of inexpensive (but incomplete) tests
which check for redundancy efficiently. Our implementation and experimental
evaluation show that, in comparison with other tools in which handler threads
are simulated using locks, Event-DPOR can be exponentially faster than other
state-of-the-art DPOR algorithms on a variety of programs and manages to
completely avoid unnecessary exploration of executions.
| 2023-07-29 08:43:49.000000000 |
2004.05851 | Detecting Latency Degradation Patterns in Service-based Systems | cs.SE | Performance in heterogeneous service-based systems shows non-deterministic
trends. Even for the same request type, latency may vary from one request to
another. These variations can occur due to several reasons on different levels
of the software stack: operating system, network, software libraries,
application code or others. Furthermore, a request may involve several Remote
Procedure Calls (RPC), where each call can be subject to performance variation.
Performance analysts inspect distributed traces and seek recurrent patterns
in trace attributes, such as RPCs execution time, in order to cluster traces in
which variations may be induced by the same cause. Clustering "similar" traces
is a prerequisite for effective performance debugging. Given the scale of the
problem, such activity can be tedious and expensive. In this paper, we present
an automated approach that detects relevant RPCs execution time patterns
associated to request latency degradation, i.e. latency degradation patterns.
The presented approach is based on a genetic search algorithm driven by an
information retrieval relevance metric and an optimized fitness evaluation.
Each latency degradation pattern identifies a cluster of requests subject to
latency degradation with similar patterns in RPCs execution time. We show on a
microservice-based application case study that the proposed approach can
effectively detect clusters identified by artificially injected latency
degradation patterns. Experimental results show that our approach outperforms
in terms of F-score a state-of-the-art approach for latency profile analysis and
widely popular machine learning clustering algorithms. We also show how our
approach can be easily extended to trace attributes other than RPC execution
time (e.g. HTTP headers, execution node, etc.).
| 2020-04-13 10:10:39.000000000 |
2109.11896 | A Model-Driven Approach to Reengineering Processes in Cloud Computing | cs.SE | The reengineering process of large data-intensive legacy software
applications to cloud platforms involves different interrelated activities.
These activities are related to planning, architecture design,
re-hosting/lift-shift, code refactoring, and other related ones. In this
regard, the cloud computing literature has seen the emergence of different
methods with disparate points of view on the same underlying legacy
application reengineering process to cloud platforms. As such, the effective
interoperability and tailoring of these methods become problematic due to the
lack of integrated and consistent standard models.
| 2021-09-24 11:36:55.000000000 |
2308.16670 | Safety of the Intended Functionality Concept Integration into a
Validation Tool Suite | cs.SE | Nowadays, the increasing complexity of Advanced Driver Assistance Systems
(ADAS) and Automated Driving (AD) means that the industry must move towards a
scenario-based approach to validation rather than relying on established
technology-based methods. This new focus also requires the validation process
to take into account Safety of the Intended Functionality (SOTIF), as many
scenarios may trigger hazardous vehicle behaviour. Thus, this work demonstrates
how the integration of the SOTIF process within an existing validation tool
suite can be achieved. The necessary adaptations are explained with
accompanying examples to aid comprehension of the approach.
| 2023-08-31 12:22:35.000000000 |
2204.09758 | Lowering Barriers to Application Development With Cloud-Native
Domain-Specific Functions | cs.SE | Creating and maintaining a modern, heterogeneous set of client applications
remains an obstacle for many businesses and individuals. While simple
domain-specific graphical languages and libraries can empower a variety of
users to create application behaviors and logic, using these languages to
produce and maintain a set of heterogeneous client applications is a challenge,
primarily because each client typically requires the developers to both
understand and embed the domain-specific logic. This is because application
logic must be encoded to some extent in both the server and client sides.
In this paper, we propose an alternative approach, which allows the
specification of application logic to reside solely on the cloud. We have built
a system where reusable application components can be assembled on the cloud in
different logical chains and the client is largely decoupled from this logic
and is solely concerned with how data is displayed and gathered from users of
the application. In this way, the chaining of requests and responses is done by
the cloud and the client side has no knowledge of the application logic.
An additional effect of our approach is that the client-side developer is
able to immediately see any changes they make, while executing the logic
residing on the cloud. This further allows more novice programmers to perform
these customizations, as they do not need to `get the full application working'
and are able to see the results of their code as they go, thereby lowering the
obstacles to businesses and individuals to produce and maintain applications.
Furthermore, this decoupling enables the quick generation and customization of
a variety of application clients, ranging from web to mobile devices and
personal assistants, while customizing one or more as needed.
| 2022-04-20 19:45:13.000000000 |
2209.04791 | Towards Understanding the Faults of JavaScript-Based Deep Learning
Systems | cs.SE | Quality assurance is of great importance for deep learning (DL) systems,
especially when they are applied in safety-critical applications. While quality
issues of native DL applications have been extensively analyzed, the issues of
JavaScript-based DL applications have never been systematically studied.
Compared with native DL applications, JavaScript-based DL applications can run on major browsers, making them platform- and device-independent. Specifically, the quality of a JavaScript-based DL application depends on 3 parts: the application itself, the third-party DL library used, and the underlying DL framework (e.g., TensorFlow.js), collectively called the JavaScript-based DL system. In this paper, we
conduct the first empirical study on the quality issues of JavaScript-based DL
systems. Specifically, we collect and analyze 700 real-world faults from
relevant GitHub repositories, including the official TensorFlow.js repository,
13 third-party DL libraries, and 58 JavaScript-based DL applications. To better
understand the characteristics of these faults, we manually analyze and
construct taxonomies for the fault symptoms, root causes, and fix patterns,
respectively. Moreover, we also study the fault distributions of symptoms and
root causes, in terms of the different stages of the development lifecycle, the
3-level architecture in the DL system, and the 4 major components of
TensorFlow.js framework. Based on the results, we suggest actionable
implications and research avenues that can potentially facilitate the
development, testing, and debugging of JavaScript-based DL systems.
| 2022-09-11 05:39:54.000000000 |
2402.09299 | Trained Without My Consent: Detecting Code Inclusion In Language Models
Trained on Code | cs.SE cs.LG | Code auditing ensures that the developed code adheres to standards,
regulations, and copyright protection by verifying that it does not contain
code from protected sources. The recent advent of Large Language Models (LLMs)
as coding assistants in the software development process poses new challenges
for code auditing. The dataset for training these models is mainly collected
from publicly available sources. This raises the issue of intellectual property
infringement as developers' codes are already included in the dataset.
Therefore, auditing code developed using LLMs is challenging, as it is
difficult to reliably assert if an LLM used during development has been trained
on specific copyrighted codes, given that we do not have access to the training
datasets of these models. Given the non-disclosure of the training datasets,
traditional approaches such as code clone detection are insufficient for
asserting copyright infringement. To address this challenge, we propose a new
approach, TraWiC: a model-agnostic and interpretable method based on membership
inference for detecting code inclusion in an LLM's training dataset. We extract
syntactic and semantic identifiers unique to each program to train a classifier
for detecting code inclusion. In our experiments, we observe that TraWiC is
capable of detecting 83.87% of codes that were used to train an LLM. In
comparison, the prevalent clone detection tool NiCad is only capable of
detecting 47.64%. In addition to its remarkable performance, TraWiC has low
resource overhead in contrast to pair-wise clone detection that is conducted
during the auditing process of tools like CodeWhisperer reference tracker,
across thousands of code snippets.
| 2024-02-14 16:41:35.000000000 |
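The following is a rough, hypothetical sketch of the membership-inference idea summarized in the TraWiC abstract above: mask identifiers extracted from a program, ask the model under audit to reproduce them, and feed the hit statistics to a downstream classifier. The `query_model` stub and the feature names are invented placeholders, not the authors' implementation.

```python
# Hypothetical sketch of membership-inference-style code-inclusion detection:
# identifiers are masked one at a time, the suspect model is asked to fill
# them back in, and the hit statistics feed a downstream classifier.
import ast
import re

def extract_identifiers(source: str) -> list:
    """Collect function names, variable names, and string literals."""
    idents = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            idents.append(node.name)
        elif isinstance(node, ast.Name):
            idents.append(node.id)
        elif isinstance(node, ast.Constant) and isinstance(node.value, str):
            idents.append(node.value)
    return idents

def query_model(masked_source: str) -> str:
    """Placeholder for querying the LLM under audit (hypothetical stub)."""
    return "<no-prediction>"

def inclusion_features(source: str) -> dict:
    idents = extract_identifiers(source)
    hits = 0
    for ident in idents:
        masked = re.sub(rf"\b{re.escape(ident)}\b", "<MASK>", source, count=1)
        if query_model(masked) == ident:
            hits += 1
    return {"n_identifiers": len(idents),
            "hit_rate": hits / max(len(idents), 1)}

# A real pipeline trains a classifier on such features over programs with
# known membership labels and uses it to flag suspected training-set code.
print(inclusion_features("def add(a, b):\n    return a + b\n"))
```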
2112.02807 | MDPFuzz: Testing Models Solving Markov Decision Processes | cs.SE cs.LG | The Markov decision process (MDP) provides a mathematical framework for
modeling sequential decision-making problems, many of which are crucial to
security and safety, such as autonomous driving and robot control. The rapid
development of artificial intelligence research has created efficient methods
for solving MDPs, such as deep neural networks (DNNs), reinforcement learning
(RL), and imitation learning (IL). However, these popular models solving MDPs
are neither thoroughly tested nor rigorously reliable.
We present MDPFuzz, the first blackbox fuzz testing framework for models
solving MDPs. MDPFuzz forms testing oracles by checking whether the target
model enters abnormal and dangerous states. During fuzzing, MDPFuzz decides
which mutated state to retain by measuring if it can reduce cumulative rewards
or form a new state sequence. We design efficient techniques to quantify the
"freshness" of a state sequence using Gaussian mixture models (GMMs) and
dynamic expectation-maximization (DynEM). We also prioritize states with high
potential of revealing crashes by estimating the local sensitivity of target
models over states.
MDPFuzz is evaluated on five state-of-the-art models for solving MDPs,
including supervised DNN, RL, IL, and multi-agent RL. Our evaluation includes
scenarios of autonomous driving, aircraft collision avoidance, and two games
that are often used to benchmark RL. During a 12-hour run, we find over 80
crash-triggering state sequences on each model. We show inspiring findings that
crash-triggering states, though they look normal, induce distinct neuron
activation patterns compared with normal states. We further develop an abnormal
behavior detector to harden all the evaluated models and repair them with the
findings of MDPFuzz to significantly enhance their robustness without
sacrificing accuracy.
| 2021-12-06 06:35:55.000000000 |
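As a loose illustration of the freshness-guided seed retention described in the MDPFuzz abstract above, the sketch below scores a state sequence against a Gaussian mixture fitted on previously seen states and keeps a mutated seed when the cumulative reward drops or the sequence looks sufficiently novel. The threshold, dimensions, and the plain refit (instead of the paper's dynamic expectation-maximization) are assumptions.

```python
# Minimal sketch of "freshness"-guided seed retention (not MDPFuzz itself).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
seen_states = rng.normal(0.0, 1.0, size=(500, 4))   # states visited so far
gmm = GaussianMixture(n_components=3, random_state=0).fit(seen_states)

def freshness(state_sequence):
    """Lower average log-likelihood under the GMM means a fresher sequence."""
    return -float(np.mean(gmm.score_samples(state_sequence)))

def keep_mutated_seed(reward_before, reward_after, state_sequence,
                      fresh_threshold=8.0):
    # Retain the seed if it reduces cumulative reward or reaches novel states.
    return (reward_after < reward_before
            or freshness(state_sequence) > fresh_threshold)

normal_run = rng.normal(0.0, 1.0, size=(50, 4))
odd_run = rng.normal(5.0, 1.0, size=(50, 4))          # far from seen states
print(keep_mutated_seed(10.0, 10.0, normal_run))       # False: nothing new
print(keep_mutated_seed(10.0, 10.0, odd_run))          # True: novel states
```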
2104.14296 | Storytelling in human--centric software engineering research | cs.SE | BACKGROUND: Software engineering is a human activity. People naturally make
sense of their activities and experience through storytelling. But storytelling
does not appear to have been properly studied by software engineering research.
AIM: We explore the question: what contribution can storytelling make to
human--centric software engineering research? METHOD: We define concepts,
identify types of story and their purposes, outcomes and effects, briefly
review prior literature, identify several contributions and propose next steps.
RESULTS: Storytelling can, amongst other contributions, contribute to data
collection, data analyses, ways of knowing, research outputs, interventions in
practice, and advocacy, and can integrate with evidence and arguments. Like all
methods, storytelling brings risks. These risks can be managed. CONCLUSION:
Storytelling provides a potential counter--balance to abstraction, and an
approach to retain and honour human meaning in software engineering.
| 2021-04-29 12:31:40.000000000 |
2203.13231 | A Broad Comparative Evaluation of x86-64 Binary Rewriters | cs.SE | Binary rewriting is a rapidly-maturing technique for modifying software for
instrumentation, customization, optimization, and hardening without access to
source code. Unfortunately, the practical applications of binary rewriting
tools are often unclear to users because their limitations are glossed over in
the literature. This, among other challenges, has prohibited the widespread
adoption of these tools. To address this shortcoming, we collect ten popular
binary rewriters and assess their generality across a broad range of input
binary classes and the functional reliability of the resulting rewritten
binaries. Additionally, we evaluate the performance of the rewriters themselves
as well as the rewritten binaries they produce.
The goal of this broad evaluation is to establish a shared context for future
research and development of binary rewriting tools by providing a state of the
practice for their capabilities. To support potential binary rewriter users, we
also identify input binary features that are predictive of tool success and
show that a simple decision tree model can accurately predict whether a
particular tool can rewrite a target binary. The binary rewriters, our corpus
of 3344 sample binaries, and the evaluation infrastructure itself are all
freely available as open-source software.
| 2022-03-24 17:43:56.000000000 |
0907.2640 | Towards Hybrid Intensional Programming with JLucid, Objective Lucid, and
General Imperative Compiler Framework in the GIPSY | cs.PL cs.SE | Pure Lucid programs are concurrent with very fine granularity. Sequential
Threads (STs) are functions introduced to enlarge the grain size; they are
passed from server to workers by Communication Procedures (CPs) in the General
Intensional Programming System (GIPSY). A JLucid program combines Java code for
the STs with Lucid code for parallel control. Thus first, in this thesis, we
describe the way in which the new JLucid compiler generates STs and CPs. JLucid
also introduces array support.
Further exploration goes through the additional transformations that the
Lucid family of languages has undergone to enable the use of Java objects and
their members, in the Generic Intensional Programming Language (GIPL), and
Indexical Lucid: first, in the form of JLucid, allowing the use of pseudo-objects, and then through the specifically designed Objective Lucid language. The syntax and semantic definitions of Objective Lucid and the
meaning of Java objects within an intensional program are provided with
discussions and examples.
Finally, there are many useful scientific and utility routines written in
many imperative programming languages other than Java, for example in C, C++,
Fortran, Perl, etc. Therefore, it is wise to provide a framework to facilitate
inclusion of these languages into the GIPSY and their use by Lucid programs. A
General Imperative Compiler Framework and its concrete implementation is
proposed to address this issue.
| 2009-07-15 16:24:05.000000000 |
1708.07229 | A Survey of Runtime Monitoring Instrumentation Techniques | cs.LO cs.SE | Runtime Monitoring is a lightweight and dynamic verification technique that
involves observing the internal operations of a software system and/or its
interactions with other external entities, with the aim of determining whether
the system satisfies or violates a correctness specification. Compilation
techniques employed in Runtime Monitoring tools allow monitors to be
automatically derived from high-level correctness specifications (aka.
properties). This allows the same property to be converted into different types
of monitors, which may apply different instrumentation techniques for checking
whether the property was satisfied or not. In this paper we compare and
contrast the various types of monitoring methodologies found in the current
literature, and classify them into a spectrum of monitoring instrumentation
techniques, ranging from completely asynchronous monitoring on the one end and
completely synchronous monitoring on the other.
| 2017-08-24 00:38:23.000000000 |
2310.10687 | An Exploration Into Web Session Security- A Systematic Literature Review | cs.SE cs.CR | The most common attacks against web sessions are reviewed in this paper, for
example, attacks against honest users who legitimately attempt to establish a session with a trusted web application through their browser. We review existing security solutions that prevent or halt these attacks and assess the viability of each solution against four different criteria. We then point out guidelines that have been taken into account by the designers of the proposals we reviewed. The identified guidelines will be helpful for designing solutions that advance web security in a more structured and holistic way.
| 2023-10-14 16:22:07.000000000 |
1812.00140 | The Art, Science, and Engineering of Fuzzing: A Survey | cs.CR cs.SE | Among the many software vulnerability discovery techniques available today,
fuzzing has remained highly popular due to its conceptual simplicity, its low
barrier to deployment, and its vast amount of empirical evidence in discovering
real-world software vulnerabilities. At a high level, fuzzing refers to a
process of repeatedly running a program with generated inputs that may be
syntactically or semantically malformed. While researchers and practitioners
alike have invested a large and diverse effort towards improving fuzzing in
recent years, this surge of work has also made it difficult to gain a
comprehensive and coherent view of fuzzing. To help preserve and bring
coherence to the vast literature of fuzzing, this paper presents a unified,
general-purpose model of fuzzing together with a taxonomy of the current
fuzzing literature. We methodically explore the design decisions at every stage
of our model fuzzer by surveying the related literature and innovations in the
art, science, and engineering that make modern-day fuzzers effective.
| 2018-12-01 04:16:27.000000000 |
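To make the fuzzing process the survey above models concrete, here is a deliberately naive, self-contained mutation-based fuzzer over an in-process toy target; the magic-byte crash condition and mutation strategy are invented for illustration and omit coverage feedback, seed scheduling, and the other stages real fuzzers implement.

```python
# Toy mutation-based fuzzer: pick a seed, mutate it, run the target, and
# report a crash-triggering input. The target is a hypothetical in-process
# function with a deliberately shallow bug (a single magic byte).
import random

def target(data: bytes) -> None:
    if data and data[0] == 0x42:           # "crash" on a magic first byte
        raise RuntimeError("crash")

def mutate(data: bytes, rng: random.Random) -> bytes:
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)   # random byte flips
    return bytes(buf)

def fuzz(seeds, iterations=50_000):
    rng = random.Random(0)
    corpus = list(seeds)
    for i in range(iterations):
        candidate = mutate(rng.choice(corpus), rng)
        try:
            target(candidate)
        except RuntimeError:
            return i, candidate             # iteration count and crashing input
        if rng.random() < 0.05:             # keep a fraction as new seeds
            corpus.append(candidate)
    return None

print(fuzz([b"AAAAAAAA"]))
```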
2006.01476 | Kaya: A Testing Framework for Blockchain-based Decentralized
Applications | cs.SE | In recent years, many decentralized applications based on blockchain (DApp)
have been developed. However, due to inadequate testing, DApps are easily
exposed to serious vulnerabilities. We find three main challenges for DApp
testing, i.e., the inherent complexity of DApp, inconvenient pre-state setting,
and not-so-readable logs. In this paper, we propose a testing framework named
Kaya to bridge these gaps. Kaya has three main functions. Firstly, Kaya
proposes DApp behavior description language (DBDL) to make writing test cases
easier. Test cases written in DBDL can also be automatically executed by Kaya.
Secondly, Kaya supports a flexible and convenient way for test engineers to set
the blockchain pre-states easily. Thirdly, Kaya transforms incomprehensible
addresses into readable variables for easy comprehension. With these functions,
Kaya can help test engineers test DApps more easily. Besides, to fit the
various application environments, we provide two ways for test engineers to use
Kaya, i.e., UI and command-line. Our experimental case demonstrates the
potential of Kaya in helping test engineers to test DApps more easily.
| 2020-06-02 09:22:18.000000000 |
2208.11352 | Ai4EComponentLib.jl: A Component-base Model Library in Julia | cs.SE | Ai4EComponentLib.jl (Ai4EComponentLib) is a component-based model library based on the Julia language, which relies on the differential equation solver DifferentialEquations.jl and the symbolic modeling tool Modelingtoolkit.jl. To handle problems in different physical domains, Ai4EComponentLib tries to build them with component-based models. Supported by a new generation of symbolic modeling tools, models built with Ai4EComponentLib are more flexible and scalable than models built with traditional tools like Modelica. This paper introduces instances and the general modeling methods of the Ai4EComponentLib model library.
| 2022-08-24 08:03:07.000000000 |
1409.5718 | Convolutional Neural Networks over Tree Structures for Programming
Language Processing | cs.LG cs.NE cs.SE | Programming language processing (similar to natural language processing) is a
hot research topic in the field of software engineering; it has also aroused
growing interest in the artificial intelligence community. However, different
from a natural language sentence, a program contains rich, explicit, and
complicated structural information. Hence, traditional NLP models may be
inappropriate for programs. In this paper, we propose a novel tree-based
convolutional neural network (TBCNN) for programming language processing, in
which a convolution kernel is designed over programs' abstract syntax trees to
capture structural information. TBCNN is a generic architecture for programming
language processing; our experiments show its effectiveness in two different
program analysis tasks: classifying programs according to functionality, and
detecting code snippets of certain patterns. TBCNN outperforms baseline
methods, including several neural models for NLP.
| 2014-09-18 06:50:52.000000000 |
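To illustrate what a convolution kernel over an abstract syntax tree might look like, here is a toy numpy sketch in the spirit of the TBCNN abstract above: random (untrained) weights, a single shared child matrix, and max pooling over node activations. The dimensions and weight scheme are assumptions; the paper's model learns its weights and uses position-dependent weight mixing over "continuous binary trees".

```python
# Rough numpy sketch of a tree-based convolution over a Python AST
# (toy dimensions and random weights; not the trained TBCNN model).
import ast
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
node_types = {}                                   # node-type name -> embedding

def embed(node):
    key = type(node).__name__
    if key not in node_types:
        node_types[key] = rng.normal(size=DIM)
    return node_types[key]

W_self = rng.normal(scale=0.1, size=(DIM, DIM))   # weight for the node itself
W_child = rng.normal(scale=0.1, size=(DIM, DIM))  # shared weight for children
bias = np.zeros(DIM)

def conv(node):
    """One convolution step over a node and its direct children."""
    out = W_self @ embed(node) + bias
    for child in ast.iter_child_nodes(node):
        out += W_child @ embed(child)
    return np.tanh(out)

def program_vector(source: str):
    tree = ast.parse(source)
    feats = np.stack([conv(n) for n in ast.walk(tree)])
    return feats.max(axis=0)                      # dynamic (max) pooling

vec = program_vector("def add(a, b):\n    return a + b\n")
print(vec.shape)    # (16,) -- fed to a classifier in the full model
```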
2209.04006 | What is Software Supply Chain Security? | cs.CR cs.SE | The software supply chain involves a multitude of tools and processes that
enable software developers to write, build, and ship applications. Recently,
security compromises of tools or processes have led to a surge in proposals to
address these issues. However, these proposals commonly overemphasize specific
solutions or conflate goals, resulting in unexpected consequences, or unclear
positioning and usage.
In this paper, we make the case that developing practical solutions is not
possible until the community has a holistic view of the security problem; this
view must include both the technical and procedural aspects. To this end, we
examine three use cases to identify common security goals, and present a
goal-oriented taxonomy of existing solutions demonstrating a holistic overview
of software supply chain security.
| 2022-09-08 19:13:53.000000000 |
2201.10160 | Data-driven Mutation Analysis for Cyber-Physical Systems | cs.SE | Cyber-physical systems (CPSs) typically consist of a wide set of integrated,
heterogeneous components; consequently, most of their critical failures relate
to the interoperability of such components. Unfortunately, most CPS test
automation techniques are preliminary and industry still heavily relies on
manual testing. With potentially incomplete, manually-generated test suites, it
is of paramount importance to assess their quality. Though mutation analysis
has demonstrated to be an effective means to assess test suite quality in some
specific contexts, we lack approaches for CPSs. Indeed, existing approaches do
not target interoperability problems and cannot be executed in the presence of
black-box or simulated components, a typical situation with CPSs.
In this paper, we introduce data-driven mutation analysis, an approach that
consists in assessing test suite quality by verifying if it detects
interoperability faults simulated by mutating the data exchanged by software
components. To this end, we describe a data-driven mutation analysis technique
(DaMAT) that automatically alters the data exchanged through data buffers. Our
technique is driven by fault models in tabular form where engineers specify how
to mutate data items by selecting and configuring a set of mutation operators.
We have evaluated DaMAT with CPSs in the space domain; specifically, the test
suites for the software systems of a microsatellite and nanosatellites launched
on orbit last year. Our results show that the approach effectively detects test
suite shortcomings, is not affected by equivalent and redundant mutants, and
entails acceptable costs.
| 2022-01-25 08:05:09.000000000 |
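A minimal, invented example of what a tabular, data-driven fault model can look like in the spirit of the DaMAT abstract above: each row names a field of an exchanged message, a mutation operator, and its argument. The field names and operators are hypothetical; the actual tool mutates raw data buffers at run time.

```python
# Illustrative sketch of data-driven mutation: a tabular fault model tells
# the tool how to corrupt fields of the data exchanged between components.
def bit_flip(value: int, bit: int) -> int:
    return value ^ (1 << bit)

def stuck_at(value, constant):
    return constant

OPERATORS = {"bit_flip": bit_flip, "stuck_at": stuck_at}

# One row per mutation: which field to corrupt, how, and with which argument.
FAULT_MODEL = [
    {"field": "altitude_m", "op": "bit_flip", "arg": 3},
    {"field": "mode",       "op": "stuck_at", "arg": "SAFE"},
]

def mutate_message(message: dict, row: dict) -> dict:
    mutated = dict(message)
    mutated[row["field"]] = OPERATORS[row["op"]](message[row["field"]], row["arg"])
    return mutated

telemetry = {"altitude_m": 512, "mode": "NOMINAL", "battery_pct": 87}
for row in FAULT_MODEL:
    print(mutate_message(telemetry, row))
# A test suite that still passes on these mutated exchanges has a gap in its
# interoperability checks.
```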
1111.5251 | Evolution of a Modular Software Network | cs.OS cs.SE | "Evolution behaves like a tinkerer" (Francois Jacob, Science, 1977). Software
systems provide a unique opportunity to understand biological processes using
concepts from network theory. The Debian GNU/Linux operating system allows us
to explore the evolution of a complex network in a novel way. The modular
design detected during its growth is based on the reuse of existing code in
order to minimize costs during programming. The increase of modularity
experienced by the system over time has not counterbalanced the increase in
incompatibilities between software packages within modules. This negative
effect is far from being a failure of design. A random process of package
installation shows that the higher the modularity the larger the fraction of
packages working properly in a local computer. The decrease in the relative
number of conflicts between packages from different modules avoids a failure in
the functionality of one package spreading throughout the entire system. Some
potential analogies with the evolutionary and ecological processes determining
the structure of ecological networks of interacting species are discussed.
| 2011-11-22 17:00:50.000000000 |
2305.09864 | The Jaseci Programming Paradigm and Runtime Stack: Building Scale-out
Production Applications Easy and Fast | cs.CL cs.DC cs.PL cs.SE | Today's production scale-out applications include many sub-application
components, such as storage backends, logging infrastructure and AI models.
These components have drastically different characteristics, are required to
work in collaboration, and interface with each other as microservices. This
leads to increasingly high complexity in developing, optimizing, configuring,
and deploying scale-out applications, raising the barrier to entry for most
individuals and small teams. We developed a novel co-designed runtime system,
Jaseci, and programming language, Jac, which aims to reduce this complexity.
The key design principle throughout Jaseci's design is to raise the level of
abstraction by moving as much of the scale-out data management, microservice
componentization, and live update complexity into the runtime stack to be
automated and optimized automatically. We use real-world AI applications to
demonstrate Jaseci's benefit for application performance and developer
productivity.
| 2023-05-17 00:34:36.000000000 |
1001.0683 | Introducing Automated Regression Testing in Open Source Projects | cs.SE | To learn how to introduce automated regression testing to existing medium
scale Open Source projects, a long-term field experiment was performed with the
Open Source project FreeCol. Results indicate that (1) introducing testing is
both beneficial for the project and feasible for an outside innovator, (2)
testing can enhance communication between developers, (3) signaling is
important for engaging the project participants to fill a newly vacant position
left by a withdrawal of the innovator. Five prescriptive strategies are
extracted for the innovator and two conjectures offered about the ability of an
Open Source project to learn about innovations.
| 2010-01-05 11:50:23.000000000 |
2109.08896 | Steps Before Syntax: Helping Novice Programmers Solve Problems using the
PCDIT Framework | cs.CY cs.SE | Novice programmers often struggle with problem solving due to the high
cognitive loads they face. Furthermore, many introductory programming courses
do not explicitly teach it, assuming that problem solving skills are acquired
along the way. In this paper, we present 'PCDIT', a non-linear problem solving
framework that provides scaffolding to guide novice programmers through the
process of transforming a problem specification into an implemented and tested
solution for an imperative programming language. A key distinction of PCDIT is
its focus on developing concrete cases for the problem early without actually
writing test code: students are instead encouraged to think about the abstract
steps from inputs to outputs before mapping anything down to syntax. We reflect
on our experience of teaching an introductory programming course using PCDIT,
and report the results of a survey that suggests it helped students to break
down challenging problems, organise their thoughts, and reach working
solutions.
| 2021-09-18 10:31:15.000000000 |
1403.7258 | Verifying Web Applications: From Business Level Specifications to
Automated Model-Based Testing | cs.SE | One of reasons preventing a wider uptake of model-based testing in the
industry is the difficulty which is encountered by developers when trying to
think in terms of properties rather than linear specifications. A disparity has
traditionally been perceived between the language spoken by customers who
specify the system and the language required to construct models of that
system. The dynamic nature of the specifications for commercial systems further
aggravates this problem in that models would need to be rechecked after every
specification change. In this paper, we propose an approach for converting
specifications written in the commonly-used quasi-natural language Gherkin into
models for use with a model-based testing tool. We have instantiated this
approach using QuickCheck and demonstrate its applicability via a case study on
the eHealth system, the national health portal for Maltese residents.
| 2014-03-28 01:04:29.000000000 |
1807.00182 | EnFuzz: Ensemble Fuzzing with Seed Synchronization among Diverse Fuzzers | cs.SE | Fuzzing is widely used for software vulnerability detection. There are
various kinds of fuzzers with different fuzzing strategies, and most of them
perform well on their targets. However, in industry practice and empirical
study, the performance and generalization ability of those well-designed
fuzzing strategies are challenged by the complexity and diversity of real-world
applications.
In this paper, inspired by the idea of ensemble learning, we first propose an
ensemble fuzzing approach EnFuzz, that integrates multiple fuzzing strategies
to obtain better performance and generalization ability than that of any
constituent fuzzer alone. First, we define the diversity of the base fuzzers
and choose those most recent and well-designed fuzzers as base fuzzers. Then,
EnFuzz ensembles those base fuzzers with seed synchronization and result
integration mechanisms. For evaluation, we implement EnFuzz, a prototype based on four strong open-source fuzzers (AFL, AFLFast, AFLGo, FairFuzz), and
test them on Google's fuzzing test suite, which consists of widely used
real-world applications. The 24-hour experiment indicates that, with the same
resources usage, these four base fuzzers perform variously on different
applications, while EnFuzz shows better generalization ability and always
outperforms others in terms of path coverage, branch coverage and crash
discovery. Even compared with the best cases of AFL, AFLFast, AFLGo and
FairFuzz, EnFuzz discovers 26.8%, 117%, 38.8% and 39.5% more unique crashes,
executes 9.16%, 39.2%, 19.9% and 20.0% more paths and covers 5.96%, 12.0%,
21.4% and 11.1% more branches respectively.
| 2018-06-30 14:07:12.000000000 |
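The ensemble idea in the EnFuzz abstract above hinges on seed synchronization. The snippet below is a simplified, in-memory sketch of that mechanism: the union of all base fuzzers' queues is pushed back to every fuzzer so discoveries are shared. Queue contents and fuzzer names are illustrative; the real implementation synchronizes AFL-style on-disk queues.

```python
# Simplified sketch of seed synchronization across an ensemble of fuzzers.
queues = {
    "afl":      {b"seed-0"},
    "aflfast":  {b"seed-0"},
    "aflgo":    {b"seed-0"},
    "fairfuzz": {b"seed-0"},
}

def synchronize(queues):
    """Union all queues and push missing seeds to every fuzzer."""
    union = set().union(*queues.values())
    copied = 0
    for queue in queues.values():
        missing = union - queue
        queue.update(missing)
        copied += len(missing)
    return copied

# afl and aflgo each discover a new interesting input...
queues["afl"].add(b"covers-branch-17")
queues["aflgo"].add(b"reaches-target-site")

print(synchronize(queues), "seeds copied")     # 6 seeds copied
print(sorted(queues["fairfuzz"]))              # all queues now share progress
```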
1206.5104 | Automatic Test Generation for Space | cs.SE | The European Space Agency (ESA) uses an engine to perform tests in the Ground
Segment infrastructure, especially the Operational Simulator. This engine uses
many different tools to ensure the development of regression testing
infrastructure and these tests perform black-box testing to the C++ simulator
implementation. VST (VisionSpace Technologies) is one of the companies that
provides these services to ESA and they need a tool to infer automatically
tests from the existing C++ code, instead of writing manually scripts to
perform tests. With this motivation in mind, this paper explores automatic
testing approaches and tools in order to propose a system that satisfies VST
needs.
| 2012-06-22 10:38:07.000000000 |
physics/0405154 | The ATLAS Tile Calorimeter Test Beam Monitoring Program | physics.ins-det cs.PF cs.SE | During the 2003 test beam session for the ATLAS Tile Calorimeter, a monitoring program was developed to ease the setup of correct running conditions and the
assessment of data quality. The program has been built using the Online
Software services provided by the ATLAS Online Software group. The first part
of this note contains a brief overview of these services followed by the full
description of Tile Calorimeter monitoring program architecture and features.
Performances and future upgrades are discussed in the final part of this note.
| 2004-05-29 10:05:51.000000000 |
2303.03999 | Combining static analysis and dynamic symbolic execution in a toolchain
to detect fault injection vulnerabilities | cs.SE | Certification through auditing allows to ensure that critical embedded
systems are secure. This entails reviewing their critical components and
checking for dangerous execution paths. This latter task requires the use of
specialized tools which make it possible to explore and replay executions but are also
difficult to use effectively within the context of the audit, where time and
knowledge of the code are limited. Fault analysis is especially tricky as the
attacker may actively influence execution, rendering some common methods
unusable and increasing the number of possible execution paths exponentially.
In this work, we present a new method which mitigates these issues by reducing
the number of fault injection points considered to only the most relevant ones
relatively to some security properties. We use fast and robust static analysis
to detect injection points and assert their impactfulness. A more precise
dynamic/symbolic method is then employed to validate attack paths. This way the
insight required to find attacks is reduced and dynamic methods can better
scale to realistically sized programs. Our method is implemented into a
toolchain based on Frama-C and KLEE and validated on WooKey, a case-study
proposed by the National Cybersecurity Agency of France.
| 2023-03-07 15:59:59.000000000 |
2208.00304 | Conceptual Modeling of Objects | cs.SE | In this paper, we concentrate on object-related analysis in the field of
general ontology of reality as related to software engineering (e.g., UML
classes). Such a venture is similar to many studies in which researchers have
enhanced modeling through ontological analysis of the underlying paradigm of
UML models. We attempt to develop a conceptual model that consists of a
foundation of things that is supplemented with a second level of designated
objects. According to some researchers, the problem of the difference between
things and objects is one of the most decisive issues for the conception of
reality. In software engineering, objects serve two purposes: they promote
understanding of the real world and provide a practical basis for computer
implementation. The notion of object plays a central role in the
object-oriented approach, in which other notions are viewed by decomposing them
into objects and their relationships. This paper contributes to the
establishment of a broader understanding of the notion of object in conceptual
modeling based on things that are simultaneously machines. In this study, we
explored the underlying hypothesis of conceptual models (e.g., UML) to enhance
their ontological analysis by using the thing/machine (TM) model, which
presents the domain as thimacs. Following the philosophical distinction between
things and objects, we can specify modeling at two levels: the thinging stage
and the objectification stage. Objects are thimacs that control the
handleability of their sub-parts when interacting with the outside of the object
(analogous to the body parts holding together in an assemblage when interacting
with the outside). The results promise a more refined modeling process to
develop a high-level description of the involved domain.
| 2022-07-30 20:32:54.000000000 |
1812.05033 | Differentially Testing Soundness and Precision of Program Analyzers | cs.SE | In the last decades, numerous program analyzers have been developed both by
academia and industry. Despite their abundance however, there is currently no
systematic way of comparing the effectiveness of different analyzers on
arbitrary code. In this paper, we present the first automated technique for
differentially testing soundness and precision of program analyzers. We used
our technique to compare six mature, state-of-the-art analyzers on tens of
thousands of automatically generated benchmarks. Our technique detected
soundness and precision issues in most analyzers, and we evaluated the
implications of these issues to both designers and users of program analyzers.
| 2018-12-12 17:09:03.000000000 |
2305.04228 | Heterogeneous Directed Hypergraph Neural Network over abstract syntax
tree (AST) for Code Classification | cs.SE cs.AI cs.LG | Code classification is a difficult issue in program understanding and
automatic coding. Due to the elusive syntax and complicated semantics in
programs, most existing studies use techniques based on abstract syntax tree
(AST) and graph neural network (GNN) to create code representations for code
classification. These techniques utilize the structure and semantic information
of the code, but they only take into account pairwise associations and neglect
the high-order correlations that already exist between nodes in the AST, which
may result in the loss of code structural information. On the other hand, while
a general hypergraph can encode high-order data correlations, it is homogeneous
and undirected which will result in a lack of semantic and structural
information such as node types, edge types, and directions between child nodes
and parent nodes when modeling AST. In this study, we propose to represent AST
as a heterogeneous directed hypergraph (HDHG) and process the graph by
heterogeneous directed hypergraph neural network (HDHGN) for code
classification. Our method improves code understanding and can represent
high-order data correlations beyond paired interactions. We assess
our HDHGN on public datasets of
Python and Java programs. Our method outperforms previous AST-based and
GNN-based methods, which demonstrates the capability of our model.
| 2023-05-07 09:28:16.000000000 |
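As a data-structure illustration of the representation described in the HDHGN abstract above (not the neural model itself), the sketch below turns a Python AST into a heterogeneous directed hypergraph: one hyperedge per parent node connecting it to all of its children, with node types and the parent/child direction preserved.

```python
# Data-structure sketch only: one hyperedge per AST parent node, keeping
# heterogeneous node types and the parent ("head") vs. children ("tail")
# direction. The HDHGN model itself is not reproduced here.
import ast

def ast_to_hypergraph(source: str):
    tree = ast.parse(source)
    ids, nodes, hyperedges = {}, {}, []
    for i, node in enumerate(ast.walk(tree)):
        ids[id(node)] = i
        nodes[i] = type(node).__name__            # heterogeneous node types
    for node in ast.walk(tree):
        children = list(ast.iter_child_nodes(node))
        if children:
            hyperedges.append({
                "head": ids[id(node)],             # parent side of the edge
                "tail": [ids[id(c)] for c in children],
                "edge_type": type(node).__name__,
            })
    return nodes, hyperedges

nodes, edges = ast_to_hypergraph("def add(a, b):\n    return a + b\n")
print(len(nodes), "nodes,", len(edges), "hyperedges")
print(edges[1])                                    # the FunctionDef hyperedge
```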
2103.02870 | Robustness Evaluation of Stacked Generative Adversarial Networks using
Metamorphic Testing | cs.SE cs.CV | Synthesising photo-realistic images from natural language is one of the
challenging problems in computer vision. Over the past decade, a number of
approaches have been proposed, of which the improved Stacked Generative
Adversarial Network (StackGAN-v2) has proven capable of generating high
resolution images that reflect the details specified in the input text
descriptions. In this paper, we aim to assess the robustness and
fault-tolerance capability of the StackGAN-v2 model by introducing variations
in the training data. However, due to the working principle of Generative
Adversarial Network (GAN), it is difficult to predict the output of the model
when the training data are modified. Hence, in this work, we adopt Metamorphic
Testing technique to evaluate the robustness of the model with a variety of
unexpected training dataset. As such, we first implement StackGAN-v2 algorithm
and test the pre-trained model provided by the original authors to establish a
ground truth for our experiments. We then identify a metamorphic relation, from
which test cases are generated. Further, metamorphic relations were derived
successively based on the observations of prior test results. Finally, we
synthesise the results from our experiments on all the metamorphic relations and find that the StackGAN-v2 algorithm is susceptible to input images with obtrusive
objects, even if it overlaps with the main object minimally, which was not
reported by the authors and users of StackGAN-v2 model. The proposed
metamorphic relations can be applied to other text-to-image synthesis models to
not only verify the robustness but also to help researchers understand and
interpret the results made by the machine learning models.
| 2021-03-04 07:29:17.000000000 |
2203.12085 | Characterizing High-Quality Test Methods: A First Empirical Study | cs.SE | To assess the quality of a test suite, one can rely on mutation testing,
which computes whether the overall test cases are adequately exercising the
covered lines. However, this high level of granularity may overshadow the
quality of individual test methods. In this paper, we propose an empirical
study to assess the quality of test methods by relying on mutation testing at
the method level. We find no major differences between high-quality and
low-quality test methods in terms of size, number of asserts, and
modifications. In contrast, high-quality test methods are less affected by
critical test smells. Finally, we discuss practical implications for
researchers and practitioners.
| 2022-03-22 22:47:47.000000000 |
1405.0786 | Fault Localization in a Software Project using Back-Tracking Principles
of Matrix Dependency | cs.SE | Fault identification and testing have always been central concerns in the field of software development. To identify and test a bug, we should be aware of the source of the failure or any unwanted issue. In this paper, we try to extract the location of a failure and to cope with the bug. Using a directed graph, we obtain the dependencies among multiple activities in a live environment to trace the origin of a fault. Software development comprises a series of activities, and we show the dependencies of these activities on each other. Critical activities are considered because they cause abnormal functioning of the whole system. The paper discusses the priorities of activities and the dependency of software failures on the critical activities. A matrix representation of the activities that make up the software is chosen to determine the root of the failure using the concept of dependency; it can vary with the topography of the network and the software environment. When faults occur, the possible symptoms are reflected in the dependency matrix, with high probability at the fault itself. Thus, independent faults are located on the main diagonal of the dependency matrix.
| 2014-05-05 06:22:08.000000000 |
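To make the dependency-matrix backtracking idea above concrete, here is a small invented example: dep[i][j] = 1 means activity i depends on activity j, and a failing activity none of whose dependencies are failing is reported as a candidate root cause. The activities and matrix are hypothetical.

```python
# Toy illustration of tracing a failure back through an activity-dependency
# matrix (invented example, not the paper's case study).
import numpy as np

activities = ["requirements", "design", "coding", "testing", "deployment"]
dep = np.array([
    [0, 0, 0, 0, 0],   # requirements depends on nothing
    [1, 0, 0, 0, 0],   # design      <- requirements
    [0, 1, 0, 0, 0],   # coding      <- design
    [0, 0, 1, 0, 0],   # testing     <- coding
    [0, 0, 1, 1, 0],   # deployment  <- coding, testing
])

failing = {"testing", "deployment"}

def root_causes(dep, activities, failing):
    idx = {name: i for i, name in enumerate(activities)}
    roots = []
    for name in failing:
        deps = [activities[j] for j in np.flatnonzero(dep[idx[name]])]
        if not any(d in failing for d in deps):
            roots.append(name)                 # no failing dependency: root
    return roots

print(root_causes(dep, activities, failing))   # ['testing']
```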
1606.05092 | Adaptable Symbol Table Management by Meta Modeling and Generation of
Symbol Table Infrastructures | cs.SE | Many textual software languages share common concepts such as defining and
referencing elements, hierarchical structures constraining the visibility of
names, and allowing for identical names for different element kinds. Symbol
tables are useful to handle those reference and visibility concepts. However,
developing a symbol table can be a tedious task that leads to an additional
effort for the language engineer. This paper presents a symbol table meta model
usable to define language-specific symbol tables. Furthermore, we integrate this
symbol table meta model with a meta model of a grammar-based language
definition. This enables the language engineer to switch between the model
structure and the symbol table as needed. Finally, based on a grammar-annotation
mechanism, our approach is able to generate a symbol table infrastructure that
can be used as is or serve as a basis for custom symbol tables.
| 2016-06-16 08:36:10.000000000 |
1902.00526 | Applications of Multi-view Learning Approaches for Software
Comprehension | cs.SE | Program comprehension concerns the ability of an individual to make an
understanding of an existing software system to extend or transform it.
Software systems comprise data that are noisy and missing, which makes
program understanding even more difficult. A software system consists of
various views including the module dependency graph, execution logs,
evolutionary information and the vocabulary used in the source code, that
collectively defines the software system. Each of these views contain unique
and complementary information; together which can more accurately describe the
data. In this paper, we investigate various techniques for combining different
sources of information to improve the performance of a program comprehension
task. We employ state-of-the-art techniques from learning to 1) find a suitable
similarity function for each view, and 2) compare different multi-view learning
techniques to decompose a software system into high-level units and give
component-level recommendations for refactoring of the system, as well as
cross-view source code search. The experiments conducted on 10 relatively large
Java software systems show that by fusing knowledge from different views, we
can guarantee a lower bound on the quality of the modularization and even
improve upon it. We proceed by integrating different sources of information to
give a set of high-level recommendations as to how to refactor the software
system. Furthermore, we demonstrate how learning a joint subspace allows for
performing cross-modal retrieval across views, yielding results that are more
aligned with what the user intends by the query. The multi-view approaches
outlined in this paper can be employed for addressing problems in software
engineering that can be encoded in terms of a learning problem, such as
software bug prediction and feature location.
| 2019-02-01 19:07:49.000000000 |
1402.3821 | Efficient and Generalized Decentralized Monitoring of Regular Languages | cs.SE | The main contribution of this paper is an efficient and generalized
decentralized monitoring algorithm allowing the detection of satisfaction or violation
of any regular specification by local monitors alone in a system without
central observation point. Our algorithm does not assume any form of
synchronization between system events and communication of monitors, uses state
machines as underlying mechanism for efficiency, and tries to keep the number
and size of messages exchanged between monitors to a minimum. We provide a full
implementation of the algorithm with an open-source benchmark to evaluate its
efficiency in terms of number, size of exchanged messages, and delay induced by
communication between monitors. Experimental results demonstrate the
effectiveness of our algorithm which outperforms the previous most general one
along several (new) monitoring metrics.
| 2014-02-16 17:49:57.000000000 |
1209.1949 | Improved Robust DWT-Watermarking in YCbCr Color Space | cs.CR cs.MM cs.SE | Digital watermarking is an effective way to protect copyright. In this paper,
a robust watermarking algorithm based on wavelet transformation is proposed
which can confirm the copyright without original image. The wavelet
transformation technique is effective in image analysis and processing. Thus the color-image watermarking algorithm based on discrete wavelet transformation (DWT) has begun to draw increasing attention. In the proposed approach, the watermark is encrypted by the Arnold transform and the host image is converted into the YCbCr color space. Its Y channel is then decomposed into wavelet coefficients, the selected approximation coefficients are quantized, and the least significant bit of the quantized coefficients is replaced by the encrypted watermark using the LSB insertion technique. The experimental results show that watermarks embedded by this algorithm have better robustness and imperceptibility, and are more robust against wavelet compression, compared to traditional embedding methods in the RGB color space.
| 2012-09-10 11:35:39.000000000 |
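The embedding step described above can be sketched with plain numpy as follows: a one-level Haar DWT of the Y channel, quantization of the approximation (LL) coefficients, and LSB substitution with the watermark bits. The quantization step, image size, and bit layout are assumptions; the RGB-to-YCbCr conversion, Arnold scrambling, inverse DWT, and extraction step are omitted.

```python
# Minimal sketch of LSB substitution in quantized Haar-DWT approximation
# coefficients (illustrative only; quantization step Q is invented).
import numpy as np

def haar_ll(y):
    """Approximation sub-band of a one-level Haar DWT (2x2 block averages)."""
    return (y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2]) / 4.0

def embed(y, watermark_bits, q=4.0):
    ll = haar_ll(y.astype(float))
    coeffs = np.round(ll / q).astype(int)       # quantized LL coefficients
    flat = coeffs.ravel()
    n = min(len(flat), watermark_bits.size)
    flat[:n] = (flat[:n] & ~1) | watermark_bits[:n]   # LSB substitution
    return flat[:n] * q   # a full scheme writes these back via the inverse DWT

rng = np.random.default_rng(0)
y_channel = rng.integers(0, 256, size=(8, 8))   # toy 8x8 luminance block
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # (scrambled) watermark bits
print(embed(y_channel, bits))
```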
1402.2271 | An Optimized Semantic Web Service Composition Method Based on Clustering
and Ant Colony Algorithm | cs.SE | In today's Web, Web Services are created and updated on the fly. To answer the complex needs of users, the construction of new web services based on existing ones is required; this has received great attention from different communities. The problem is known as web service composition, and it remains one of the big challenges of recent years in a distributed and dynamic environment. Web services can be composed manually, but this is a time-consuming task. Automatic web service composition is one of the key features of the future semantic web, and various approaches have been proposed by researchers in this field. In this paper, we propose a novel architecture for semantic web service composition using clustering and an ant colony algorithm.
| 2014-02-10 20:59:22.000000000 |
2111.14142 | Agility in Software 2.0 -- Notebook Interfaces and MLOps with Buttresses
and Rebars | cs.SE cs.AI | Artificial intelligence through machine learning is increasingly used in the
digital society. Solutions based on machine learning bring both great
opportunities, thus coined "Software 2.0," but also great challenges for the
engineering community to tackle. Due to the experimental approach used by data
scientists when developing machine learning models, agility is an essential
characteristic. In this keynote address, we discuss two contemporary
development phenomena that are fundamental in machine learning development,
i.e., notebook interfaces and MLOps. First, we present a solution that can
remedy some of the intrinsic weaknesses of working in notebooks by supporting
easy transitions to integrated development environments. Second, we propose
reinforced engineering of AI systems by introducing metaphorical buttresses and
rebars in the MLOps context. Machine learning-based solutions are dynamic in
nature, and we argue that reinforced continuous engineering is required to
quality assure the trustworthy AI systems of tomorrow.
| 2021-11-28 13:40:30.000000000 |
1409.6604 | Scaling the Management of Extreme Programming Projects | cs.SE | XP is a code-oriented, light-weight software engineering methodology, suited
merely for small-sized teams who develop software that relies on vague or
rapidly changing requirements. Being very code-oriented, the discipline of
systems engineering knows it as approach of incremental system change. In this
contribution, we discuss the enhanced version of a concept on how to extend XP
on large scale projects with hundreds of software engineers and programmers,
respectively. Previous versions were already presented in [1] and [12]. The
basic idea is to apply the "hierarchical approach", a management principle of
reorganizing companies, as well as well-known moderation principles to XP
project organization. We show similarities between software engineering methods
and company reorganization processes and discuss how the elements of the
hierarchical approach can improve XP. We provide guidelines on how to scale up
XP to very large projects e.g. those common in telecommunication industry and
IT technology consultancy firms by using moderation techniques.
| 2014-09-22 17:29:53.000000000 |
2004.10350 | Modeling Network Architecture: A Cloud Case Study | cs.SE cs.NI | The Internet's ability to support a wide range of services depends on the
network architecture and theoretical and practical innovations necessary for
future networks. Network architecture in this context refers to the structure
of a computer network system as well as interactions among its physical
components, their configuration, and communication protocols. Various
descriptions of architecture have been developed over the years with an
unusually large number of superficial icons and symbols. This situation has
created a need for more coherent systematic representations of network
architecture. This paper is intended to refine the design, analysis, and
documentation of network architecture by adopting a conceptual model called a
thinging (abstract) machine (TM), which views all components of a network in
terms of a single notion: the flow of things in a TM. Since cloud computing has
become increasingly popular in the last few years as a model for a shared pool
of networks, servers, storage, and applications, we apply the TM to model a
real case study of cloud networks. The resultant model introduces an integrated
representation of computer networks.
| 2020-04-22 00:12:13.000000000 |
1511.02725 | Integrating a large-scale testing campaign in the CK framework | cs.SE | We consider the problem of conducting large experimental campaigns in
programming languages research. Most research efforts require a certain level
of bookkeeping of results. This is manageable via quick, on-the-fly
infrastructure implementations. However, it becomes a problem for large-scale
testing initiatives, especially as the needs of the project evolve along the
way. We look at how the Collective Knowledge generalized testing framework can
help with such a project and its overall applicability and ease of use. The
project in question is an OpenCL compiler testing campaign. We investigate how
to use the Collective Knowledge framework to lead the experimental campaign, by
providing storage and representation of test cases and their results. We also
provide an initial implementation, publicly available.
| 2015-11-09 15:54:36.000000000 |
1909.06353 | That's C, baby. C! | cs.PL cs.SE | Hardly a week goes by at BUGSENG without having to explain to someone that
almost any piece of C text, considered in isolation, means absolutely nothing.
The belief that C text has meaning in itself is so common, also among seasoned
C practitioners, that I thought writing a short paper on the subject was a good
time investment. The problem is due to the fact that the semantics of the C
programming language is not fully defined: non-definite behavior, predefined
macros, different library implementations, peculiarities of the translation
process, . . . : all these contribute to the fact that no meaning can be
assigned to source code unless full details about the build are available. The
paper starts with an exercise that admits a solution. The existence of this
solution will hopefully convince anyone that, in general, unless the toolchain
and the build procedure are fully known, no meaning can be assigned to any
nontrivial piece of C code.
| 2019-09-13 17:56:40.000000000 |
1610.03960 | Multi-view Consistency in UML | cs.SE | We study the question of consistency of multi-view models in UML and OCL. We
first critically survey the large amount of literature that already exists. We
find that only limited subsets of the UML/OCL have been covered so far and that
consistency checks mostly only cover structural aspects, whereas only few
methods also address behaviour. We also give a classification of different
techniques for multi-view UML/OCL consistency: consistency rules, the system
model approach, dynamic meta-modelling, universal logic, and heterogeneous
transformation. Finally, we elaborate cornerstones of a comprehensive
distributed semantics approach to consistency using OMG's Distributed Ontology,
Model and Specification Language (DOL).
| 2016-10-13 07:21:36.000000000 |
1704.04189 | Seamless Requirements | cs.SE | Popular notations for functional requirements specifications frequently
ignore developers' needs, target specific development models, or require
translation of requirements into tests for verification; the results can give
out-of-sync or downright incompatible artifacts. Seamless Requirements, a new
approach to specifying functional requirements, contributes to developers'
understanding of requirements and to software quality regardless of the
process, while the process itself becomes lighter due to the absence of tests
in the presence of formal verification. A development case illustrates these
benefits, and a discussion compares seamless requirements to other approaches.
| 2017-04-13 15:58:52.000000000 |
2001.02467 | Comparing Constraints Mined From Execution Logs to Understand Software
Evolution | cs.SE | Complex software systems evolve frequently, e.g., when introducing new
features or fixing bugs during maintenance. However, understanding the impact
of such changes on system behavior is often difficult. Many approaches have
thus been proposed that analyze systems before and after changes, e.g., by
comparing source code, model-based representations, or system execution logs.
In this paper, we propose an approach for comparing run-time constraints,
synthesized by a constraint mining algorithm, based on execution logs recorded
before and after changes. Specifically, automatically mined constraints define
the expected timing and order of recurring events and the values of data
elements attached to events. Our approach presents the differences of the mined
constraints to users, thereby providing a higher-level view on software
evolution and supporting the analysis of the impact of changes on system
behavior. We present a motivating example and a preliminary evaluation based on
a cyber-physical system controlling unmanned aerial vehicles. The results of
our preliminary evaluation show that our approach can help to analyze changed
behavior and thus contributes to understanding software evolution.
| 2020-01-08 12:01:07.000000000 |
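A bare-bones illustration of comparing mined constraints as described above: mine simple "A is always directly followed by B" rules from logs recorded before and after a change, then report rules that hold in only one version. The event names are invented, and the real approach mines much richer timing, order, and value constraints.

```python
# Mine trivial "always directly followed by" rules from two sets of traces
# and diff them to surface behavioral changes (illustrative event names).
def mine_follows(traces):
    """(a, b) pairs such that, whenever a is followed by anything,
    it is always the same event b."""
    events = {e for t in traces for e in t}
    rules = set()
    for a in events:
        followers = {b for t in traces for x, b in zip(t, t[1:]) if x == a}
        if len(followers) == 1:
            rules.add((a, followers.pop()))
    return rules

before = [["takeoff", "climb", "cruise", "land"],
          ["takeoff", "climb", "cruise", "land"]]
after = [["takeoff", "climb", "cruise", "hold", "cruise", "land"]]

old_rules, new_rules = mine_follows(before), mine_follows(after)
print("no longer holds:", old_rules - new_rules)   # {('cruise', 'land')}
print("newly holds:    ", new_rules - old_rules)   # {('hold', 'cruise')}
```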
2308.00886 | Enhancing Machine Learning Performance with Continuous In-Session Ground
Truth Scores: Pilot Study on Objective Skeletal Muscle Pain Intensity
Prediction | cs.LG cs.AI cs.SE eess.SP | Machine learning (ML) models trained on subjective self-report scores
struggle to objectively classify pain accurately due to the significant
variance between real-time pain experiences and recorded scores afterwards.
This study developed two devices for acquisition of real-time, continuous
in-session pain scores and gathering of ANS-modulated electrodermal activity (EDA). The experiment recruited N = 24 subjects who underwent a post-exercise
circulatory occlusion (PECO) with stretch, inducing discomfort. Subject data
were stored in a custom pain platform, facilitating extraction of time-domain
EDA features and in-session ground truth scores. Moreover, post-experiment
visual analog scale (VAS) scores were collected from each subject. Machine
learning models, namely Multi-layer Perceptron (MLP) and Random Forest (RF),
were trained using corresponding objective EDA features combined with
in-session scores and post-session scores, respectively. Over a 10-fold
cross-validation, the macro-averaged geometric mean score revealed MLP and RF
models trained with objective EDA features and in-session scores achieved
superior performance (75.9% and 78.3%) compared to models trained with
post-session scores (70.3% and 74.6%) respectively. This pioneering study
demonstrates that using continuous in-session ground truth scores significantly
enhances ML performance in pain intensity characterization, overcoming ground
truth sparsity-related issues, data imbalance, and high variance. This study
informs future objective-based ML pain system training.
| 2023-08-02 00:28:22.000000000 |
2401.15545 | PPM: Automated Generation of Diverse Programming Problems for
Benchmarking Code Generation Models | cs.SE cs.AI cs.CL cs.PL | In recent times, a plethora of Large Code Generation Models (LCGMs) have been
proposed, showcasing significant potential in assisting developers with complex
programming tasks. Benchmarking LCGMs necessitates the creation of a set of
diverse programming problems, and each problem comprises the prompt (including
the task description), canonical solution, and test inputs. The existing
methods for constructing such a problem set can be categorized into two main
types: manual methods and perturbation-based methods. However, manual methods
demand high effort and lack scalability, while also risking data integrity due
to LCGMs' potentially contaminated data collection, and perturbation-based
approaches mainly generate semantically homogeneous problems with the same
canonical solutions and introduce typos that can be easily auto-corrected by
IDE, making them ineffective and unrealistic. In this work, we propose the idea
of programming problem merging (PPM) and provide two implementations of this idea. We apply our tool to two widely used datasets and compare it against nine baseline methods using eight code generation models. The results demonstrate the effectiveness of our tool in generating more challenging, diverse, and natural programming problems, compared to the baselines.
| 2024-01-28 02:27:38.000000000 |
2303.14713 | Engineering Software Systems for Quantum Computing as a Service: A
Mapping Study | cs.SE | Quantum systems have started to emerge as a disruptive technology and
enabling platforms - exploiting the principles of quantum mechanics - to
achieve quantum supremacy in computing. Academic research, industrial projects
(e.g., Amazon Braket), and consortiums like 'Quantum Flagship' are striving to
develop practically capable and commercially viable quantum computing (QC)
systems and technologies. Quantum Computing as a Service (QCaaS) is viewed as a
solution attuned to the philosophy of service-orientation that can offer QC
resources and platforms, as utility computing, to individuals and organisations
who do not own quantum computers. To understand the quantum service development
life cycle and pinpoint emerging trends, we used an evidence-based software
engineering approach to conduct a systematic mapping study (SMS) of research
that enables or enhances QCaaS. The SMS process retrieved a total of 55
studies, and based on their qualitative assessment we selected 9 of them to
investigate (i) the functional aspects, design models, patterns, programming
languages, deployment platforms, and (ii) trends of emerging research on QCaaS.
The results indicate three modelling notations and a catalogue of five design
patterns to architect QCaaS, whereas Python (native code or frameworks) and
Amazon Braket are the predominant solutions to implement and deploy QCaaS
solutions. From the quantum software engineering (QSE) perspective, this SMS
provides empirically grounded findings that could help derive processes,
patterns, and reference architectures to engineer software services for QC.
| 2023-03-26 12:50:22.000000000 |
1703.09567 | Classifying and Qualifying GUI Defects | cs.SE | Graphical user interfaces (GUIs) are integral parts of software systems that
require interactions from their users. Software testers have paid special
attention to GUI testing in the last decade, and have devised techniques that
are effective in finding several kinds of GUI errors. However, the introduction
of new types of interactions in GUIs (e.g., direct manipulation) presents new
kinds of errors that are not targeted by current testing techniques. We believe
that to advance GUI testing, the community needs a comprehensive and high level
GUI fault model, which incorporates all types of interactions. The work
detailed in this paper establishes 4 contributions: 1) A GUI fault model
designed to identify and classify GUI faults. 2) An empirical analysis for
assessing the relevance of the proposed fault model against failures found in
real GUIs. 3) An empirical assessment of two GUI testing tools (i.e. GUITAR and
Jubula) against those failures. 4) GUI mutants we've developed according to our
fault model. These mutants are freely available and can be reused by developers
for benchmarking their GUI testing tools.
| 2017-03-28 13:22:04.000000000 |
1901.10926 | Safe Compilation for Hidden Deterministic Hardware Aliasing and
Encrypted Computing | cs.CR cs.PL cs.SE | Hardware aliasing occurs when the same logical address sporadically accesses
different physical memory locations and is a problem encountered by systems
programmers (the opposite, software aliasing, when different addresses access
the same location, is more familiar to application programmers). This paper
shows how to compile so that code works in the presence of {\em hidden
deterministic} hardware aliasing. That means that a copy of an address always
accesses the same location, and recalculating it exactly the same way also
always gives the same access, but otherwise access appears arbitrary and
unpredictable. The technique is extended to cover the emerging technology of
encrypted computing too.
| 2019-01-30 16:19:12.000000000 |
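A toy Python model of the aliasing behaviour the abstract describes (an illustrative assumption, not the paper's compiler): the physical cell reached depends on how an address was derived, so copying the original address is safe, while recomputing an arithmetically equal address along a different route is not.

class Addr:
    def __init__(self, value, derivation):
        self.value = value
        self.derivation = derivation   # hidden tag: same derivation reaches the same cell

    def cell(self):
        return (self.value, self.derivation)

memory = {}
base = Addr(0x1000, "r1 = base")
memory[base.cell()] = 42

alias_copy = base                               # safe: a copy reaches the same cell
recomputed = Addr(0x0800 + 0x0800, "r2 = x+y")  # equal value, different derivation
print(memory.get(alias_copy.cell()))            # 42
print(memory.get(recomputed.cell()))            # None -> aliasing hazard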
2110.06253 | StateAFL: Greybox Fuzzing for Stateful Network Servers | cs.CR cs.OS cs.SE | Fuzzing network servers is a technical challenge, since the behavior of the
target server depends on its state over a sequence of multiple messages.
Existing solutions are costly and difficult to use, as they rely on
manually-customized artifacts such as protocol models, protocol parsers, and
learning frameworks. The aim of this work is to develop a greybox fuzzer
(StateAFL) for network servers that relies only on lightweight analysis of the
target program, with no manual customization, in a similar way to what the AFL
fuzzer achieved for stateless programs. The proposed fuzzer instruments the
target server at compile-time, to insert probes on memory allocations and
network I/O operations. At run-time, it infers the current protocol state of
the target server by taking snapshots of long-lived memory areas, and by
applying a fuzzy hashing algorithm (Locality-Sensitive Hashing) to map memory
contents to a unique state identifier. The fuzzer incrementally builds a
protocol state machine for guiding fuzzing.
We implemented and released StateAFL as open-source software. As a basis for
reproducible experimentation, we integrated StateAFL with a large set of
network servers for popular protocols, with no manual customization to
accommodate the protocol. The experimental results show that the fuzzer can
be applied with no manual customization on a large set of network servers for
popular protocols, and that it can achieve comparable, or even better code
coverage and bug detection than customized fuzzing. Moreover, our qualitative
analysis shows that states inferred from memory better reflect the server
behavior than only using response codes from messages.
| 2021-10-12 18:08:38.000000000 |
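The mechanism sketched in the abstract -- snapshot long-lived memory, map it to a state identifier with a fuzzy (locality-sensitive) hash, and grow a protocol state machine from observed transitions -- can be illustrated with the toy Python below; StateAFL itself is a compile-time instrumented tool and does not use this code.

import hashlib
from collections import defaultdict

def state_id(snapshot: bytes, chunk: int = 64, bucket: int = 16) -> str:
    # quantise each chunk's mean byte value so small mutations map to the same id
    digest = hashlib.sha1()
    for i in range(0, len(snapshot), chunk):
        part = snapshot[i:i + chunk]
        digest.update(bytes([(sum(part) // len(part)) // bucket]))
    return digest.hexdigest()[:8]

state_machine = defaultdict(set)   # inferred protocol state -> successor states

def observe_transition(before: bytes, after: bytes) -> None:
    state_machine[state_id(before)].add(state_id(after))

observe_transition(b"\x00" * 256, b"\x00" * 255 + b"\x07")  # small change, same state
print(dict(state_machine))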
2309.11926 | Quantum Microservices Development and Deployment | cs.SE | Early advances in the field of quantum computing have provided new
opportunities to tackle intricate problems in areas as diverse as mathematics,
physics, or healthcare. However, the technology required to construct such
systems where different pieces of quantum and classical software collaborate is
currently lacking. For this reason, significant advancements in quantum
service-oriented computing are necessary to enable developers to create and
operate quantum services and microservices comparable to their classical
counterparts. Therefore, the core objective of this work is to establish the
necessary technological infrastructure that enables the application of the
benefits and lessons learned from service-oriented computing to the domain of
quantum software engineering. To this end, we propose a pipeline for the
continuous deployment of services. Additionally, we have validated the proposal
by making use of a modification of the OpenAPI specification, GitHub
Actions, and AWS.
| 2023-09-21 09:40:55.000000000 |
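As a hedged illustration of what a quantum microservice endpoint comparable to a classical one might look like (the endpoint name and service body below are assumptions built on Flask and the Amazon Braket local simulator, not the authors' implementation or their OpenAPI/GitHub Actions pipeline):

from flask import Flask, jsonify
from braket.circuits import Circuit
from braket.devices import LocalSimulator

app = Flask(__name__)
device = LocalSimulator()   # a managed AWS Braket device could be used instead

@app.route("/bell", methods=["GET"])
def bell_state():
    circuit = Circuit().h(0).cnot(0, 1)   # prepare a Bell pair
    counts = device.run(circuit, shots=100).result().measurement_counts
    return jsonify(dict(counts))

if __name__ == "__main__":
    app.run(port=8080)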
1703.10862 | Edit Transactions: Dynamically Scoped Change Sets for Controlled Updates
in Live Programming | cs.SE cs.PL | Live programming environments enable programmers to edit a running program
and obtain immediate feedback on each individual change. The liveness quality
is valued by programmers to help work in small steps and continuously add or
correct small functionality while maintaining the impression of a direct
connection between each edit and its manifestation at run-time. Such immediacy
may conflict with the desire to perform a combined set of intermediate steps,
such as a refactoring, without immediately taking effect after each individual
edit. This becomes important when an incomplete sequence of small-scale changes
can easily break the running program. State-of-the-art solutions focus on
retroactive recovery mechanisms, such as debugging or version control. In
contrast, we propose a proactive approach: Multiple individual changes to the
program are collected in an Edit Transaction, which can be made effective if
deemed complete. Upon activation, the combined steps become visible together.
Edit Transactions are capable of dynamic scoping, allowing a set of changes to
be tested in isolation before being extended to the running application. This
enables a live programming workflow with full control over change granularity,
immediate feedback on tests, delayed effect on the running application, and
coarse-grained undos. We present an implementation of Edit Transactions along
with Edit-Transaction-aware tools in Squeak/Smalltalk. We assess this
implementation by conducting a case study with and without the new tool
support, comparing programming activities, errors, and detours for implementing
new functionality in a running simulation. We conclude that workflows using
Edit Transactions have the potential to increase confidence in a change, reduce
potential for run-time errors, and eventually make live programming more
predictable and engaging.
| 2017-03-31 11:46:52.000000000 |
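A minimal Python sketch of the Edit Transaction idea (the actual implementation is in Squeak/Smalltalk; class and method names here are hypothetical): individual edits are staged in a transaction and only become visible to the running program together, once activated.

class EditTransaction:
    def __init__(self, target):
        self.target = target   # live class or module being edited
        self.pending = {}      # name -> new definition, staged but not yet visible

    def edit(self, name, value):
        self.pending[name] = value

    def activate(self):
        # the combined steps become visible to the running program together
        for name, value in self.pending.items():
            setattr(self.target, name, value)
        self.pending.clear()

class Simulation:
    def step(self):
        return "old behaviour"

sim = Simulation()
tx = EditTransaction(Simulation)
tx.edit("step", lambda self: "new behaviour")   # staged; sim.step() still runs the old code
tx.activate()
print(sim.step())                               # "new behaviour"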
2204.05345 | Towards Automatically Generating Release Notes using Extractive
Summarization Technique | cs.SE | Release notes are admitted as an essential document by practitioners. They
contain the summary of the source code changes for the software releases, such
as issue fixes, added new features, and performance improvements. Manually
producing release notes is a time-consuming and challenging task. For that
reason, sometimes developers neglect to write release notes. For example, we
collect data from GitHub covering over 1,900 releases, of which 37% have empty
release notes. To mitigate this problem, we propose an approach that automatically
generates release notes based on the commit messages and merge pull-request (PR)
titles. We implement one of the popular extractive text
summarization techniques, i.e., the TextRank algorithm. However, accurate
keyword extraction is a vital issue in text processing. The keyword matching
and topic extraction process of the TextRank algorithm ignores the semantic
similarity among texts. To improve the keyword extraction method, we integrate
the GloVe word embedding technique with TextRank. We develop a dataset with
1,213 release notes (after null filtering) and evaluate the generated release
notes through the ROUGE metric and human evaluation. We also compare the
performance of our technique with another popular extractive algorithm, latent
semantic analysis (LSA). Our evaluation results show that the improved TextRank
method outperforms LSA.
| 2022-04-11 18:03:14.000000000 |
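A hedged sketch of the TextRank-with-embeddings idea (toy placeholder vectors stand in for real GloVe embeddings; this is not the authors' implementation): build a similarity graph over commit messages / PR titles from averaged word vectors and rank them with PageRank.

import numpy as np
import networkx as nx

glove = {"fix": np.array([0.9, 0.1]), "bug": np.array([0.8, 0.2]),
         "add": np.array([0.1, 0.9]), "feature": np.array([0.2, 0.8])}  # placeholders

def embed(sentence):
    vecs = [glove[w] for w in sentence.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def textrank_summary(sentences, k=2):
    emb = [embed(s) for s in sentences]
    n = len(sentences)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            denom = np.linalg.norm(emb[i]) * np.linalg.norm(emb[j])
            if i != j and denom:
                sim[i, j] = float(emb[i] @ emb[j]) / denom   # embedding-based similarity
    ranks = nx.pagerank(nx.from_numpy_array(sim))
    order = sorted(range(n), key=lambda i: ranks[i], reverse=True)
    return [sentences[i] for i in order[:k]]

commits = ["fix bug in config parser", "add feature flag support", "fix typo in docs"]
print(textrank_summary(commits))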
2302.00551 | Triggering Conditions Analysis and Use Case for Validation of ADAS/ADS
Functions | cs.SE | Safety in the automotive domain is a well-known topic, which has been in
constant development in the past years. The complexity of new systems that add
more advanced components in each function has opened new trends that have to be
covered from the safety perspective. In this case, not only specifications and
requirements have to be covered but also scenarios, which cover all relevant
information of the vehicle environment. Many of them are not yet sufficiently
defined or considered. In this context, Safety of the Intended
Functionality (SOTIF) appears to ensure the system when it might fail because
of technological shortcomings or misuse by users. The plausible
insufficiencies of ADAS/ADS functions have to be identified to discover the
potential triggering conditions that can lead to these unknown scenarios, which
might result in hazardous behaviour. The main goal of this publication is the
definition of a use case to identify these triggering conditions, which has
been applied to the collision avoidance function implemented in our
self-developed mobile Hardware-in-Loop (HiL) platform.
| 2023-01-31 16:23:59.000000000 |
1809.01940 | Standards of Validity and the Validity of Standards in Behavioral
Software Engineering Research: The Perspective of Psychological Test Theory | cs.SE | Background. There are some publications in software engineering research that
aim at guiding researchers in assessing validity threats to their studies.
Still, many researchers fail to address many aspects of validity that are
essential to quantitative research on human factors. Goal. This paper has the
goal of triggering a change of mindset in what types of studies are the most
valuable to the behavioral software engineering field, and also provide more
details of what construct validity is. Method. The approach is based on
psychological test theory and draws upon methods used in psychology in relation
to construct validity. Results. In this paper, I suggest a different approach
to validity threats than what is commonplace in behavioral software engineering
research. Conclusions. While this paper focuses on behavioral software
engineering, I believe other types of software engineering research might also
benefit from an increased focus on construct validity.
| 2018-09-06 12:21:31.000000000 |
1512.06178 | Proceedings XV Jornadas sobre Programaci\'on y Lenguajes | cs.PL cs.SE | This volume contains a selection of the papers presented at the XV Jornadas
sobre Programaci\'on y Lenguajes (PROLE 2015), held at Santander, Spain, during
September 15th-17th, 2015. Previous editions of the workshop were held in
C\'adiz (2014), Madrid (2013), Almer\'ia (2012), A Coru\~na (2011), Val\`encia
(2010), San Sebasti\'an (2009), Gij\'on (2008), Zaragoza (2007), Sitges (2006),
Granada (2005), M\'alaga (2004), Alicante (2003), El Escorial (2002), and
Almagro (2001). Programming languages provide a conceptual framework which is
necessary for the development, analysis, optimization and understanding of
programs and programming tasks. The aim of the PROLE series of conferences
(PROLE stems from PROgramaci\'on y LEnguajes) is to serve as a meeting point
for Spanish research groups which develop their work in the area of programming
and programming languages. The organization of this series of events aims at
fostering the exchange of ideas, experiences and results among these groups.
Promoting further collaboration is also one of its main goals.
| 2015-12-19 02:15:49.000000000 |
2401.14617 | A Systematic Literature Review on Explainability for Machine/Deep
Learning-based Software Engineering Research | cs.SE cs.AI | The remarkable achievements of Artificial Intelligence (AI) algorithms,
particularly in Machine Learning (ML) and Deep Learning (DL), have fueled their
extensive deployment across multiple sectors, including Software Engineering
(SE). However, due to their black-box nature, these promising AI-driven SE
models are still far from being deployed in practice. This lack of
explainability poses unwanted risks for their applications in critical tasks,
such as vulnerability detection, where decision-making transparency is of
paramount importance. This paper endeavors to elucidate this interdisciplinary
domain by presenting a systematic literature review of approaches that aim to
improve the explainability of AI models within the context of SE. The review
canvasses work appearing in the most prominent SE & AI conferences and
journals, and spans 63 papers across 21 unique SE tasks. Based on three key
Research Questions (RQs), we aim to (1) summarize the SE tasks where XAI
techniques have shown success to date; (2) classify and analyze different XAI
techniques; and (3) investigate existing evaluation approaches. Based on our
findings, we identified a set of challenges remaining to be addressed in
existing studies, together with a roadmap highlighting potential opportunities
we deemed appropriate and important for future work.
| 2024-01-26 03:20:40.000000000 |
2309.07103 | Comparing Llama-2 and GPT-3 LLMs for HPC kernels generation | cs.SE cs.AI cs.DC cs.PL | We evaluate the use of the open-source Llama-2 model for generating
well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on
different parallel programming models and languages (e.g., C++: OpenMP, OpenMP
Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python:
numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We built
upon our previous work that is based on the OpenAI Codex, which is a descendant
of GPT-3, to generate similar kernels with simple prompts via GitHub Copilot.
Our goal is to compare the accuracy of Llama-2 and our original GPT-3 baseline
by using a similar metric. Llama-2 has a simplified model that shows
competitive or even superior accuracy. We also report on the differences
between these foundational large language models as generative AI continues to
redefine human-computer interactions. Overall, Copilot generates codes that are
more reliable but less optimized, whereas codes generated by Llama-2 are less
reliable but more optimized when correct.
| 2023-09-12 01:19:54.000000000 |
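For reference, the three BLAS-style kernels named in the abstract, written in the Python/numpy variant it lists (a straightforward illustration, not code generated by either model):

import numpy as np

def axpy(alpha, x, y):
    # y <- alpha * x + y
    return alpha * x + y

def gemv(alpha, A, x, beta, y):
    # y <- alpha * (A @ x) + beta * y
    return alpha * (A @ x) + beta * y

def gemm(alpha, A, B, beta, C):
    # C <- alpha * (A @ B) + beta * C
    return alpha * (A @ B) + beta * C

A, B = np.ones((4, 4)), np.eye(4)
print(gemm(2.0, A, B, 0.5, np.zeros((4, 4))))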