id | title | categories | abstract | created_at
---|---|---|---|---|
1903.04165 | Object-oriented requirements: reusable, understandable, verifiable | cs.SE | Insufficient requirements reusability, understandability and verifiability
jeopardize software projects. Empirical studies show little success in
improving these qualities separately. Applying object-oriented thinking to
requirements leads to their unified treatment. An online library of reusable
requirement templates implements recurring requirement structures, offering a
starting point for practicing the unified approach.
| 2019-03-11 08:18:32.000000000 |
2304.05327 | SciKGTeX -- A LaTeX Package to Semantically Annotate Contributions in
Scientific Publications | cs.DL cs.SE | Scientific knowledge graphs have been proposed as a solution to structure the
content of research publications in a machine-actionable way and enable more
efficient, computer-assisted workflows for many research activities.
Crowd-sourcing approaches are used frequently to build and maintain such
scientific knowledge graphs. To contribute to scientific knowledge graphs,
researchers need simple and easy-to-use solutions to generate new knowledge
graph elements and establish the practice of semantic representations in
scientific communication. In this paper, we present a workflow for authors of
scientific documents to specify their contributions with a LaTeX package,
called SciKGTeX, and upload them to a scientific knowledge graph. The SciKGTeX
package allows authors of scientific publications to mark the main
contributions of their work directly in LaTeX source files. The package embeds
marked contributions as metadata into the generated PDF document, from where
they can be extracted automatically and imported into a scientific knowledge
graph, such as the ORKG. This workflow is simpler and faster than current
approaches, which make use of external web interfaces for data entry. Our user
evaluation shows that SciKGTeX is easy to use, with a score of 79 out of 100 on
the System Usability Scale; participants of the study needed only 7 minutes
on average to annotate the main contributions in a sample abstract of a
published paper. Further testing shows that the embedded contributions can be
successfully uploaded to ORKG within ten seconds. SciKGTeX simplifies the
process of manual semantic annotation of research contributions in scientific
articles. Our workflow demonstrates how a scientific knowledge graph can
automatically ingest research contributions from document metadata.
| 2023-04-11 16:36:29.000000000 |
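The extraction step described in the abstract above can be approximated in a few lines of Python. A minimal sketch, assuming the contributions surface as custom keys in the PDF's document-information dictionary; the `/SciKG` key prefix and the file name are hypothetical (the real SciKGTeX annotations may live in XMP metadata instead):

```python
# Minimal sketch: pull contribution annotations out of a PDF's metadata.
# The "/SciKG" key prefix is an illustrative assumption, not SciKGTeX's
# actual schema.
from pypdf import PdfReader

def extract_contributions(pdf_path: str, prefix: str = "/SciKG") -> dict:
    reader = PdfReader(pdf_path)
    meta = reader.metadata or {}
    # Keep only the custom entries that look like contribution annotations.
    return {k: str(v) for k, v in meta.items() if k.startswith(prefix)}

if __name__ == "__main__":
    for key, value in extract_contributions("paper.pdf").items():
        print(f"{key}: {value}")
```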
1510.01421 | From Network Traces to System Responses: Opaquely Emulating Software
Services | cs.SE | Enterprise software systems interact in complex ways with other services in
their environment. Developing and testing under production-like conditions is
therefore a challenging task. Prior approaches include emulations of the
dependency services using either explicit modelling or record-and-replay
approaches. Models require deep knowledge of the target services while
record-and-replay is limited in accuracy. We present a new technique that
improves the accuracy of record-and-replay approaches, without requiring prior
knowledge of the services. The approach uses multiple sequence alignment to
derive message prototypes from recorded system interactions and a scheme to
match incoming request messages against message prototypes to generate response
messages. We introduce a modified Needleman-Wunsch algorithm for distance
calculation during message matching, wildcards in message prototypes for high
variability sections, and entropy-based weightings in distance calculations for
increased accuracy. Combined, our new approach has shown greater than 99%
accuracy for four evaluated enterprise system messaging protocols.
| 2015-10-06 03:45:07.000000000 |
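The matching scheme above rests on sequence alignment. A minimal sketch of a Needleman-Wunsch-style distance between an incoming request and a message prototype, where wildcard positions match for free; the wildcard symbol and unit costs are illustrative assumptions, and the entropy-based weightings mentioned in the abstract are omitted:

```python
WILDCARD = "*"

def nw_distance(request: str, prototype: str,
                gap: int = 1, mismatch: int = 1) -> int:
    """Global-alignment (Needleman-Wunsch) edit cost with wildcards."""
    m, n = len(request), len(prototype)
    # dp[i][j] = minimal cost of aligning the first i request symbols
    # with the first j prototype symbols.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # Wildcards in the prototype absorb high-variability sections.
            cost = 0 if (prototype[j - 1] == WILDCARD
                         or request[i - 1] == prototype[j - 1]) else mismatch
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # align / substitute
                           dp[i - 1][j] + gap,       # gap in prototype
                           dp[i][j - 1] + gap)       # gap in request
    return dp[m][n]

# The prototype with the smallest distance supplies the response template.
print(nw_distance("getUser(42)", "getUser(*)"))
```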
0811.3621 | Description of the CUDF Format | cs.SE | This document contains several related specifications; together, they describe
the document formats related to the solver competition that will be organized
by Mancoosi. In particular, this document describes:
- DUDF (Distribution Upgradeability Description Format), the document format
used to submit upgrade problem instances from user machines to a
(distribution-specific) database of upgrade problems;
- CUDF (Common Upgradeability Description Format), the document format used to
encode upgrade problems, abstracting over distribution-specific details.
Solvers taking part in the competition will be fed input in CUDF format.
| 2008-11-21 19:46:46.000000000 |
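For illustration, a schematic CUDF-style document with two package stanzas and a request stanza might look as follows; the field names follow the published CUDF specification as best recalled here, so treat the exact syntax as indicative rather than normative:

```
package: car
version: 1
depends: engine, wheel > 2
installed: true

package: engine
version: 3

request: example-upgrade
install: bicycle
upgrade: car
```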
1706.06369 | Towards the Trustworthy Development of Active Medical Devices: A
Hemodialysis Case Study | cs.SE | The use of embedded software in modern medical devices is advancing, and so
are its capabilities and complexity. This paradigm shift brings many
challenges, such as an increased rate of medical device failures due to
software faults. In this letter, we present a rigorous correct-by-construction
approach for the
trustworthy development of hemodialysis machines, a sub-class of active medical
devices. We show how informal requirements of hemodialysis machines are modeled
and analyzed through a rigorous process and suggest a generalization to a
larger class of active medical devices.
| 2017-06-20 11:28:02.000000000 |
1612.01053 | Proceedings Second Graphs as Models Workshop | cs.DS cs.LO cs.SE | Graphs are used as models in all areas of computer science: examples are
state space graphs, control flow graphs, syntax graphs, UML-type models of all
kinds, network layouts, social networks, dependency graphs, and so forth. Once
such graphical models are constructed, they can be analysed and transformed to
verify their correctness within a domain, discover new properties, or produce
new equivalent and/or optimised versions.
The main focus of Graphs as Models is exchange and collaboration among
researchers from different backgrounds. The workshop serves as a platform to
boost inter- and transdisciplinary research and as an outlet for new ideas.
Thus, besides classical research presentations, the workshop is geared toward
numerous interactive sessions.
The second edition of the Graphs as Models workshop was held on 2-3 June 2016
in Eindhoven, The Netherlands, colocated with the 19th European Joint
Conferences on Theory and Practice of Software (ETAPS 2016).
| 2016-12-04 03:15:16.000000000 |
2204.05561 | Toward Granular Automatic Unit Test Case Generation | cs.SE | Unit testing verifies the presence of faults in individual software
components. Previous research has targeted the automatic generation of
unit tests through the adoption of random or search-based algorithms. Despite
their effectiveness, these approaches do not implement any strategy that allows
them to create unit tests in a structured manner: indeed, they aim at creating
tests by optimizing metrics like code coverage without ensuring that the
resulting tests follow good design principles. In order to structure the
automatic test case generation process, we propose a two-step systematic
approach to the generation of unit tests: we first force search-based
algorithms to create tests that cover individual methods of the production
code, hence implementing the so-called intra-method tests; then, we relax the
constraints to enable the creation of intra-class tests that target the
interactions among production code methods.
| 2022-04-12 06:47:03.000000000 |
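A toy sketch of the two-step idea (not the authors' tool): phase one restricts each generated test to exercise a single production method, and phase two relaxes the constraint so a test may chain methods; the `Account` class and the random generator are illustrative assumptions standing in for a search-based generator.

```python
import random

class Account:
    """Illustrative production class under test."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount

METHODS = ["deposit", "withdraw"]

def generate_test(intra_method: bool, length: int = 3):
    """Return a call sequence; intra-method tests target one method only."""
    allowed = [random.choice(METHODS)] if intra_method else METHODS
    return [(random.choice(allowed), random.randint(1, 100))
            for _ in range(length)]

def run_test(calls):
    obj = Account()
    for name, arg in calls:
        getattr(obj, name)(arg)
    return obj.balance

random.seed(0)
phase1 = generate_test(intra_method=True)    # step 1: intra-method test
phase2 = generate_test(intra_method=False)   # step 2: intra-class test
print(phase1, run_test(phase1))
print(phase2, run_test(phase2))
```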
2205.15904 | Synthesizing Configuration Tactics for Exercising Hidden Options in
Serverless Systems | cs.SE | A proper configuration of an information system can ensure accuracy and
efficiency, among other system objectives. Conversely, a poor configuration can
have a significant negative impact on the system's performance, reliability,
and cost. Serverless systems, which comprise many functions and managed
services, are especially at risk of misconfiguration, with many provider-
and platform-specific, often opaque and 'hidden' settings. In this
paper, we argue for paying close attention to the configuration of serverless
systems to exercise options with known accuracy, cost and time. Based on a
literature study and long-term serverless systems development experience, we
present nine tactics to unlock potentially neglected and unknown options in
serverless systems.
| 2022-05-31 15:53:33.000000000 |
2002.05760 | An Exploratory Study of Code Smells in Web Games | cs.SE | With the continuous growth of the internet market, games are becoming more
and more popular worldwide. However, increased market competition demands that
game developers write more efficient games in terms of performance, security,
and maintainability. The continuous evolution of software systems and their
increasing complexity may result in bad design decisions. Researchers have
analyzed the cognitive, behavioral and social effects of games. Gameplay and
game mechanics have also been a research area for enhancing game playing, but
to the best of our knowledge, there is hardly any research that studies bad
coding practices in game development. Hence, through our study, we try to
analyze and identify the presence of bad coding practices called code smells
that may cause quality issues in games. To accomplish this, we created a
dataset of 361 web games written in JavaScript. On this dataset, we ran a
JavaScript code smell detection tool, JSNose, to find the occurrence and
distribution of code smells in web games. Further, we did a manual study of 9
web games to find violations of existing game programming patterns. Our results
show that existing tools are mostly language-specific and are insufficient in
the context of games, as they were not able to detect the anti-patterns or bad
coding practices that are game-specific, motivating the need for game-specific
code smell detection tools.
| 2020-02-13 19:54:29.000000000 |
2112.12398 | Towards Fully Declarative Program Analysis via Source Code
Transformation | cs.SE cs.PL | Advances in logic programming and increasing industrial uptake of
Datalog-inspired approaches demonstrate the emerging need to express powerful
code analyses more easily. Declarative program analysis frameworks (e.g., using
logic programming like Datalog) significantly ease defining analyses compared
to imperative implementations. However, the declarative benefits of these
frameworks only materialize after parsing and translating source code to
generate facts. Fact generation remains a non-declarative precursor to analysis
where imperative implementations first parse and interpret program structures
(e.g., abstract syntax trees and control-flow graphs). The procedure of fact
generation thus remains opaque and difficult for non-experts to understand or
modify. We present a new perspective on this analysis workflow by proposing
declarative fact generation to ease specification and exploration of
lightweight declarative analyses. Our approach demonstrates the first venture
towards fully declarative analysis specification across multiple languages. The
key idea is to translate source code directly to Datalog facts in the analysis
domain using declarative syntax transformation. We then reuse existing Datalog
analyses over generated facts, yielding an end-to-end declarative pipeline. As
a first approximation we pursue a syntax-driven approach and demonstrate the
feasibility of generating and using lightweight versions of liveness and call
graph reachability properties. We then discuss the workability of extending
declarative fact generation to also incorporate semantic information.
| 2021-12-23 07:49:05.000000000 |
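To make the fact-generation step concrete, here is a minimal sketch that walks a Python AST and emits Datalog-style facts; note that the paper pursues declarative syntax transformation rather than an imperative walker like this one, and the predicate names are illustrative, not the paper's schema.

```python
import ast

SOURCE = """
def greet(name):
    msg = "hello " + name
    print(msg)
"""

def emit_facts(source: str):
    """Emit Datalog-style facts (def/2, call/2, assign/2) from source."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            print(f'def("{node.name}", {node.lineno}).')
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            print(f'call("{node.func.id}", {node.lineno}).')
        elif isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    print(f'assign("{target.id}", {node.lineno}).')

emit_facts(SOURCE)
```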
2110.00782 | Recommending Code Understandability Improvements based on Code Reviews | cs.SE | Developers spend 70% of their time understanding code. Code that is easy to
read can save time, while hard-to-read code can lead to the introduction of
bugs. However, it is difficult to establish what makes code more
understandable. Although there are guides and directives on improving code
understandability, in some contexts, these practices can have a detrimental
effect. Practical software development projects often employ code review to
improve code quality, including understandability. Reviewers are often senior
developers who have contributed extensively to projects and have an in-depth
understanding of the impacts of different solutions on code understandability.
This paper is an early research proposal to recommend code understandability
improvements based on code reviewer knowledge. The core of the proposal
comprises a dataset of code understandability improvements extracted from code
reviews. This dataset will serve as a basis to train machine learning systems
to recommend understandability improvements.
| 2021-10-02 11:10:50.000000000 |
2207.06515 | Automated Cause Analysis of Latency Outliers Using System-Level
Dependency Graphs | cs.PF cs.SE | Detecting performance issues and identifying their root causes at runtime
is a challenging task. Typically, developers use methods such as logging and
tracing to identify bottlenecks. These solutions are, however, not ideal as
they are time-consuming and require manual effort. In this paper, we propose a
method to automate the task of detecting latency outliers using system-level
traces and then comparing them to identify the root cause(s). Our method makes
use of dependency graphs to show internal interactions between threads and
system resources. With these graphs, one can pinpoint where performance issues
occur. However, a single trace can be composed of a large number of requests,
each generating one graph. To automate the task of identifying outliers within
the dataset, we use machine learning density-based models and statistical
calculations such as the Z-score. Our evaluation shows an accuracy greater
than 97% for outlier detection, making the approach appropriate for
in-production servers and
industry-level use cases.
| 2022-07-13 20:30:41.000000000 |
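The statistical half of the outlier step can be sketched in a few lines; the latency values and the threshold are illustrative, and the density-based models applied to the dependency graphs are omitted:

```python
import statistics

def zscore_outliers(latencies_ms, threshold=2.0):
    """Return (index, latency) pairs whose Z-score exceeds the threshold."""
    mean = statistics.mean(latencies_ms)
    stdev = statistics.stdev(latencies_ms)
    return [(i, x) for i, x in enumerate(latencies_ms)
            if abs(x - mean) / stdev > threshold]

# Illustrative latencies: one request is an order of magnitude slower.
latencies = [12.1, 11.8, 12.4, 11.9, 12.0, 118.0, 12.2, 11.7]
print(zscore_outliers(latencies))  # [(5, 118.0)]
```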
2402.11581 | Using rule engine in self-healing systems and MAPE model | cs.SE | Software malfunction presents a significant hurdle within the computing
domain, carrying substantial risks for systems, enterprises, and users
universally. To produce software with high reliability and quality, effective
debugging is essential. Program debugging is an activity that reduces software
maintenance costs. In this study, a failure repair method that uses a rule
engine is presented. The simulation on mRUBIS showed that the proposed method
could be efficient in the operational environment. Through a thorough grasp of
software failure and the adoption of efficient mitigation strategies,
stakeholders can bolster the dependability, security, and adaptability of
software systems. This, in turn, reduces the repercussions of failures and
cultivates increased confidence in digital technologies.
| 2024-02-18 13:03:11.000000000 |
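A minimal sketch of a rule engine driving a MAPE (Monitor-Analyze-Plan-Execute) loop; the rules, metrics, and repair actions are illustrative assumptions, not the paper's mRUBIS setup:

```python
# Each rule pairs a condition over monitored metrics with a repair action.
RULES = [
    (lambda m: m["error_rate"] > 0.10, "restart_component"),
    (lambda m: m["response_ms"] > 500, "scale_out"),
]

def monitor():
    # Stand-in for real instrumentation.
    return {"error_rate": 0.15, "response_ms": 320}

def analyze_and_plan(metrics):
    # Analyze the symptoms and plan repairs by matching rules.
    return [action for condition, action in RULES if condition(metrics)]

def execute(actions):
    for action in actions:
        print(f"executing repair: {action}")

metrics = monitor()               # Monitor
plan = analyze_and_plan(metrics)  # Analyze + Plan via the rule engine
execute(plan)                     # Execute
```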
2009.03678 | An Efficient Approach for Reviewing Security-Related Aspects in Agile
Requirements Specifications of Web Applications | cs.SE | Defects in requirements specifications can have severe consequences during
the software development lifecycle. Some of them may result in poor product
quality and/or time and budget overruns due to incorrect or missing quality
characteristics, such as security. This characteristic requires special
attention in web applications because they have become a target for
manipulating sensitive data. Several concerns make security difficult to deal
with. For instance, security requirements are often misunderstood and
improperly specified due to lack of security expertise and emphasis on security
during early stages of software development. This often leads to unspecified or
ill-defined security-related aspects. These concerns become even more
challenging in agile contexts, where lightweight documentation is typically
produced. To tackle this problem, we designed an approach for reviewing
security-related aspects in agile requirements specifications of web
applications. Our proposal considers user stories and security specifications
as inputs and relates those user stories to security properties via Natural
Language Processing. Based on the related security properties, our approach
identifies high-level security requirements from the Open Web Application
Security Project (OWASP) to be verified, and generates a reading technique to
support reviewers in detecting defects. We evaluate our approach via three
experiment trials conducted with 56 novice software engineers, measuring
effectiveness, efficiency, usefulness, and ease of use. We compare our approach
against using: (1) the OWASP high-level security requirements, and (2) a
perspective-based approach as proposed in contemporary state of the art. The
results strengthen our confidence that using our approach has a positive impact
(with large effect size) on the performance of inspectors in terms of
effectiveness and efficiency.
| 2020-09-06 08:21:37.000000000 |
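One plausible realization of the NLP matching step (the abstract does not fix the exact pipeline, so treat this as a sketch under assumptions): relate a user story to security properties by TF-IDF cosine similarity; the story, property descriptions, and threshold are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

story = ["As a user, I want to log in with my password so that my account is protected."]
properties = {
    "authentication": "verify user identity credentials password login",
    "confidentiality": "encrypt data transmission storage privacy",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform(story + list(properties.values()))
sims = cosine_similarity(matrix[:1], matrix[1:]).ravel()

for (name, _), score in zip(properties.items(), sims):
    if score > 0.1:  # illustrative threshold
        print(f"story relates to {name} (similarity {score:.2f})")
```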
1405.1618 | Complete Separation of the 3 Tiers - Divide and Conquer | cs.SE cs.PL | Most Java applications, including web based ones, follow the 3-tier
architecture. Although Java provides standard tools for tier-to-tier
interfaces, the separation of the tiers is usually not perfect. E.g. the
database interface, JDBC, assumes that SQL statements are issued from the
application server. Similarly, in web based Java applications, HTML code is
assumed to be produced by servlets. In terms of syntax, this turns Java source
code into mixtures of languages: Java and SQL, Java and HTML. These language
mixtures are difficult to read, modify, and maintain.
In this paper we examine criteria and methods to achieve a good separation of
the 3 tiers and propose a technique to provide a clean separation. Our proposed
technique requires an explicit Interface and Data Definitions. These allow
isolation of the back-end, application server, and front-end development. The
Definitions also enable application design in terms of aggregated data
structures. As a result, significant amounts of auxiliary code can be generated
from the Definitions, enabling the developers to concentrate on the business
logic. By and large the proposed approach greatly facilitates development and
maintenance, and overall improves the quality of the products.
| 2014-05-02 20:01:38.000000000 |
2208.00443 | Taming Multi-Output Recommenders for Software Engineering | cs.SE cs.HC | Recommender systems are a valuable tool for software engineers. For example,
they can provide developers with a ranked list of files likely to contain a
bug, or multiple auto-complete suggestions for a given method stub. However,
the way these recommender systems interact with developers is often rudimentary
-- a long list of recommendations only ranked by the model's confidence. In
this vision paper, we lay out our research agenda for re-imagining how
recommender systems for software engineering communicate their insights to
developers. When issuing recommendations, our aim is to recommend diverse
rather than redundant solutions and present them in ways that highlight their
differences. We also want to allow for seamless and interactive navigation of
suggestions while striving for holistic end-to-end evaluations. By doing so, we
believe that recommender systems can play an even more important role in
helping developers write better software.
| 2022-07-31 14:44:37.000000000 |
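One way to operationalize "diverse rather than redundant" is a maximal-marginal-relevance-style re-ranking, sketched below under the assumption that each recommendation carries a model confidence and that a pairwise similarity function is available; all names and numbers are illustrative.

```python
def diverse_rerank(candidates, similarity, k=3, trade_off=0.7):
    """Greedy MMR-style selection: balance confidence against redundancy.

    candidates: list of (item, confidence) pairs.
    similarity: function(item, item) -> value in [0, 1].
    """
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr(entry):
            item, confidence = entry
            redundancy = max((similarity(item, s) for s, _ in selected),
                             default=0.0)
            return trade_off * confidence - (1 - trade_off) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return [item for item, _ in selected]

# Illustrative: files ranked by fault likelihood; same-directory files
# count as redundant recommendations.
def path_sim(a, b):
    return 1.0 if a.rsplit("/", 1)[0] == b.rsplit("/", 1)[0] else 0.0

ranked = [("ui/panel.py", 0.90), ("ui/button.py", 0.88), ("core/db.py", 0.60)]
print(diverse_rerank(ranked, path_sim))  # surfaces core/db.py early
```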
2303.05947 | Automotive Perception Software Development: An Empirical Investigation
into Data, Annotation, and Ecosystem Challenges | cs.SE cs.LG | Software that contains machine learning algorithms is an integral part of
automotive perception, for example, in driving automation systems. The
development of such software, specifically the training and validation of the
machine learning components, require large annotated datasets. An industry of
data and annotation services has emerged to serve the development of such
data-intensive automotive software components. Widespread difficulties in
specifying data and annotation needs challenge collaborations between OEMs
(Original Equipment Manufacturers) and their suppliers of software components,
data, and annotations. This paper investigates the reasons for these
difficulties for practitioners in the Swedish automotive industry to arrive at
clear specifications for data and annotations. The results from an interview
study show that a lack of effective metrics for data quality aspects,
ambiguities in the way of working, unclear definitions of annotation quality,
and deficits in the business ecosystems are causes for the difficulty in
deriving the specifications. We provide a list of recommendations that can
mitigate challenges when deriving specifications and we propose future research
opportunities to overcome these challenges. Our work contributes to the
ongoing research on accountability of machine learning as applied to complex
software systems, especially for high-stake applications such as automated
driving.
| 2023-03-10 14:29:06.000000000 |
2003.03001 | The Cost and Benefits of Static Analysis During Development | cs.SE | Without quantitative data, deciding whether and how to use static analysis in
a development workflow is a matter of expert opinion and guesswork rather than
an engineering trade-off. Moreover, relevant data collected under real-world
conditions is scarce. Important but unknown quantitative parameters include,
but are not limited to, the effort to apply the techniques, the effectiveness
of removing defects, where in the workflow the analysis should be applied, and
how static analysis interacts with other quality techniques. This study
examined detailed development process data from 35 industrial development
projects that included static analysis and that were also instrumented with the
Team Software Process. We collected data from project plans; logs of effort,
defects, and size; and postmortem reports, and analyzed the performance of
their development activities to populate a parameterized performance model. We
compared effort
and defect levels with and without static analysis using a planning model that
includes feedback for defect removal effectiveness and fix effort. We found
evidence that, using each tool, developers found and removed defects at a
higher rate than with alternative removal techniques. Moreover, the early and
inexpensive
removal reduced not only final defect density but also total development
effort. The contributions of this paper include real-world benchmarks of
process data from projects using static analysis tools, a demonstration of a
cost-effectiveness analysis using this data, and a finding that these tools
were consistently cost-effective in operation.
| 2020-03-06 02:11:36.000000000 |
2108.06705 | A Qualitative Study of Architectural Design Issues in DevOps | cs.SE | Software architecture is critical in succeeding with DevOps. However,
designing software architectures that enable and support DevOps (DevOps-driven
software architectures) is a challenge for organizations. We assert that one of
the essential steps towards characterizing DevOps-driven architectures is to
understand architectural design issues raised in DevOps. At the same time, some
of the architectural issues that emerge in the DevOps context (and their
corresponding architectural practices or tactics) may stem from the context
(i.e., domain) and characteristics of software organizations. To this end, we
conducted a mixed-methods study that consists of a qualitative case study of
two teams in a company during their DevOps transformation and a content
analysis of Stack Overflow and DevOps Stack Exchange posts to understand
architectural design issues in DevOps. Our study found eight specific and
contextual architectural design issues faced by the two teams and classified
architectural design issues discussed in Stack Overflow and DevOps Stack
Exchange into 11 groups. Our aggregated results reveal that the main
characteristics of DevOps-driven architectures are: being loosely coupled and
prioritizing deployability, testability, supportability, and modifiability over
other quality attributes. Finally, we discuss some concrete implications for
research and practice.
| 2021-08-15 09:49:06.000000000 |
1206.5279 | Making life better one large system at a time: Challenges for UAI
research | cs.SE cs.AI | The rapid growth and diversity in service offerings and the ensuing
complexity of information technology ecosystems present numerous management
challenges (both operational and strategic). Instrumentation and measurement
technology is, by and large, keeping pace with this development and growth.
However, the algorithms, tools, and technology required to transform the data
into relevant information for decision making are not. The claim in this paper
(and the invited talk) is that the line of research conducted in Uncertainty in
Artificial Intelligence is very well suited to address the challenges and close
this gap. I will support this claim and discuss open problems using recent
examples in diagnosis, model discovery, and policy optimization on three real
life distributed systems.
| 2012-06-20 15:09:50.000000000 |
2208.00047 | What to share, when, and where: balancing the objectives and
complexities of open source software contributions | cs.SE | Context: Software-intensive organizations' rationale for sharing Open Source
Software (OSS) may be driven by idealistic, strategic, and commercial
objectives, and include both monetary and non-monetary benefits. To gain
the potential benefits, an organization may need to consider what they share
and how, while taking into account risks, costs and other complexities.
Objective: This study aims to empirically investigate objectives and
complexities organizations need to consider and balance between when deciding
on what software to share as OSS, when to share it, and whether to create a new
or contribute to an existing community.
Method: A multiple-case study of three case organizations was conducted in
two research cycles, with data gathered from interviews with 20 practitioners
from these organizations. The data was analyzed qualitatively in an inductive
and iterative coding process.
Results: 12 contribution objectives and 15 contribution complexities were
found. Objectives include opportunities for improving reputation, managing
suppliers, managing partners and competitors, and exploiting externally
available knowledge and resources. Complexities include risk of losing
control, risk of giving away competitive advantage, risk of creating negative
exposure, costs of contributing, and the possibility and need to contribute to
an existing or new community.
Conclusions: Cross-case analysis and interview validation show that the
identified objectives and complexities offer organizations a possibility to
reflect on and adapt their contribution strategies based on their specific
contexts and business goals.
| 2022-07-29 19:28:26.000000000 |
2207.04093 | An Integrated Framework for DevSecOps Adoption | cs.SE | The introduction of DevOps into the software development life cycle
represents a cultural shift in IT, amalgamating development and operations to
improve delivery speed in a rapid and maintainable manner. At the same time,
security threats and breaches are expected to grow as more enterprises move to
new agile frameworks for rapid product delivery. Meanwhile, DevSecOps is a
mindset change that revolutionizes software development by embedding security
at each step of the software cycle, leading to resilient software. This paper
discusses a framework organizations can use to embed DevSecOps swiftly and
efficiently into the general IT culture.
| 2022-07-07 17:23:59.000000000 |
2004.05705 | Are Game Engines Software Frameworks? A Three-perspective Study | cs.SE | Game engines help developers create video games and avoid duplication of code
and effort, like frameworks for traditional software systems. In this paper, we
explore open-source game engines along three perspectives: literature, code,
and human. First, we explore and summarise the academic literature on game
engines. Second, we compare the characteristics of the 282 most popular engines
and the 282 most popular frameworks in GitHub. Finally, we survey 124 engine
developers about their experience with the development of their engines. We
report that: (1) Game engines are not well-studied in software-engineering
research, with few studies having engines as the object of research. (2) Open-source
game engines are slightly larger in terms of size and complexity and less
popular and engaging than traditional frameworks. Their programming languages
differ greatly from frameworks. Engine projects have shorter histories with
fewer releases. (3) Developers perceive game engines as different from
traditional frameworks. Generally, they build game engines to (a) better
control the environment and source code, (b) learn about game engines, and (c)
develop specific games. We conclude that open-source game engines have
differences compared to traditional open-source frameworks, although these
differences do not demand special treatment.
| 2020-04-12 21:57:12.000000000 |
1906.11351 | Software Engineering Research Community Viewpoints on Rapid Reviews | cs.SE | Background: One of the most important current challenges of Software
Engineering (SE) research is to provide relevant evidence to practice. In
health related fields, Rapid Reviews (RRs) have shown to be an effective method
to achieve that goal. However, little is known about how the SE research
community perceives the potential applicability of RRs. Aims: The goal of this
study is to understand the SE research community viewpoints towards the use of
RRs as a means to provide evidence to practitioners. Method: To understand
their viewpoints, we invited 37 researchers to analyze 50 opinion statements
about RRs, and rate them according to what extent they agree with each
statement. Q-Methodology was employed to identify the most salient viewpoints,
represented by so-called factors. Results: Four factors were identified:
Factor A groups undecided researchers that need more evidence before using RRs;
Researchers grouped in Factor B are generally positive about RRs, but highlight
the need to define minimum standards; Factor C researchers are more skeptical
and reinforce the importance of high quality evidence; Researchers aligned to
Factor D have a pragmatic point of view, considering RRs can be applied based
on the context and constraints faced by practitioners. Conclusions: In
conclusion, although there are opposing viewpoints, there are also some common
grounds. For example, all viewpoints agree that both RRs and Systematic Reviews
can be poorly or well conducted.
| 2019-06-26 21:08:04.000000000 |
1110.1866 | Putting Instruction Sequences into Effect | cs.PL cs.SE | An attempt is made to define the concept of execution of an instruction
sequence. It is found to be a special case of directly putting into effect of
an instruction sequence. Directly putting into effect of an instruction
sequences comprises interpretation as well as execution. Directly putting into
effect is a special case of putting into effect with other special cases
classified as indirectly putting into effect.
| 2011-10-09 18:46:04.000000000 |
cs/0207054 | Enhancing Usefulness of Declarative Programming Frameworks through
Complete Integration | cs.SE | The Gisela framework for declarative programming was developed with the
specific aim of providing a tool that would be useful for knowledge
representation and reasoning within real-world applications. To achieve this, a
complete integration into an object-oriented application development
environment was used. The framework and methodology developed provide two
alternative application programming interfaces (APIs): Programming using
objects or programming using a traditional equational declarative style. In
addition to providing complete integration, Gisela also allows extensions and
modifications due to the general computation model and well-defined APIs. We
give a brief overview of the declarative model underlying Gisela and we present
the methodology proposed for building applications together with some real
examples.
| 2002-07-12 01:17:13.000000000 |
1706.09357 | Differential Testing for Variational Analyses: Experience from
Developing KConfigReader | cs.SE | Differential testing to solve the oracle problem has been applied in many
scenarios where multiple supposedly equivalent implementations exist, such as
multiple implementations of a C compiler. If the multiple systems disagree on
the output for a given test input, we have likely discovered a bug without
every having to specify what the expected output is. Research on variational
analyses (or variability-aware or family-based analyses) can benefit from
similar ideas. The goal of most variational analyses is to perform an analysis,
such as type checking or model checking, over a large number of configurations
much faster than an existing traditional analysis could by analyzing each
configuration separately. Variational analyses are very suitable for
differential testing, since an existing nonvariational analysis can provide
the oracle for test cases that would otherwise be tedious or difficult to
write. In this experience paper, I report how differential testing has helped
in developing KConfigReader, a tool for translating the Linux kernel's kconfig
model into a propositional formula. Differential testing allows us to quickly
build a large test base and incorporate external tests that avoided many
regressions during development and made KConfigReader likely the most precise
kconfig extraction tool available.
| 2017-06-28 16:48:35.000000000 |
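The core of differential testing fits in a few lines: run the same inputs through two supposedly equivalent implementations and flag any disagreement; the two toy implementations below are illustrative stand-ins for, say, a variational analysis and its brute-force counterpart.

```python
import random

def impl_reference(xs):
    """Brute-force 'oracle' implementation."""
    return sorted(xs)

def impl_under_test(xs):
    """Supposedly equivalent implementation under test."""
    return sorted(xs, key=lambda v: v)

def differential_test(n_cases=1000, seed=42):
    rng = random.Random(seed)
    for _ in range(n_cases):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 10))]
        a, b = impl_reference(xs), impl_under_test(xs)
        if a != b:
            print(f"disagreement on input {xs}: {a} vs {b}")
            return False
    print(f"{n_cases} cases, no disagreement")
    return True

differential_test()
```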
1509.09067 | Semantic issues in model-driven management of information system
interoperability | cs.SE | The MISE Project (Mediation Information System Engineering) aims at providing
collaborating organizations with a Mediation Information System (MIS) in charge
of supporting interoperability of a collaborative network. MISE proposes an
overall MIS design method according to a model-driven approach, based on model
transformations. This MIS is in charge of managing (i) information, (ii)
functions and (iii) processes among the information systems (IS) of partner
organizations involved in the network. Semantic issues are accompanying this
triple objective: How to deal with information reconciliation? How to ensure
the matching between business functions and technical services? How to identify
workflows among business processes? This article aims, first, at presenting the
MISE approach; second, at defining the semantic gaps along the MISE approach;
and third, at describing some past, current and future research works that deal with
these issues. Finally and as a conclusion, the very "design-oriented" previous
considerations are confronted with "run-time" requirements.
| 2015-09-30 08:31:46.000000000 |
1209.4922 | Monitoring Control Updating Period In Fast Gradient Based NMPC | cs.SY cs.SE | In this paper, a method is proposed for on-line monitoring of the control
updating period in fast-gradient-based Model Predictive Control (MPC) schemes.
Such schemes are currently under intense investigation as a way to accommodate
real-time requirements when dealing with systems showing fast dynamics. The
method needs cheap computations that use the algorithm on-line behavior in
order to recover the optimal updating period in terms of cost function
decrease. A simple example of constrained triple integrator is used to
illustrate the proposed method and to assess its efficiency.
| 2012-09-21 21:22:56.000000000 |
2012.05563 | Combined Intuition and Rationality Increases Software Feature Novelty
for Female Software Designers | cs.SE | Overcoming society's complex problems requires novel solutions. Applying
different cognitive styles can promote novelty when designing software aimed at
these problems. Through an experiment with 80 software design practitioners, we
found that female practitioners who had a preference for more than one
cognitive style (intuition and rationality) produced the most novel software
features of all participants.
| 2020-12-10 10:21:50.000000000 |
2104.13982 | Challenges Women in Software Engineering Leadership Roles Face: A
Qualitative Study | cs.SE | Software engineering is not only about technical solutions. To a large
extent, it is also concerned with organizational issues, project management,
and human behavior. There are serious gender issues that can severely limit the
participation of women in science and engineering careers. It is claimed that
women lead differently than men and are more collaboration-oriented,
communicative, and less aggressive than their male counterparts. However, when
talking with women in leadership roles at technology companies, the list of
problems women face grows fast. We invited women in software engineering
management roles to answer the questions from an empathy map canvas. We used
thematic analysis to code the answers and group the codes into themes. From
the analysis, we identified seven themes that allowed us to list the main
challenges they face in their careers.
| 2021-04-28 19:22:09.000000000 |
2103.13154 | Exploiting the Unique Expression for Improved Sentiment Analysis in
Software Engineering Text | cs.SE | Sentiment analysis on software engineering (SE) texts has been widely used in
SE research, for example to evaluate app reviews or analyze developers'
sentiments in commit messages. To better support the use of automated sentiment
analysis for SE tasks, researchers built an SE-domain-specific sentiment
dictionary to further improve the accuracy of the results. Unfortunately,
recent work reported that current mainstream tools for sentiment analysis still
cannot provide reliable results when analyzing the sentiments in SE texts. We
suggest that the reason for this situation is because the way of expressing
sentiments in SE texts is largely different from the way in social network or
movie comments. In this paper, we propose to improve sentiment analysis in SE
texts by using sentence structures, a different perspective from building a
domain dictionary. Specifically, we use sentence structures to first identify
whether the author is expressing her sentiment in a given clause of an SE text,
and to further adjust the calculation of sentiments which are confirmed in the
clause. An empirical evaluation based on four different datasets shows that our
approach can outperform two dictionary-based baseline approaches, and is more
generalizable compared to a learning-based baseline approach.
| 2021-03-24 12:55:19.000000000 |
2102.06292 | Improving Fault Localization by Integrating Value and Predicate Based
Causal Inference Techniques | cs.SE | Statistical fault localization (SFL) techniques use execution profiles and
success/failure information from software executions, in conjunction with
statistical inference, to automatically score program elements based on how
likely they are to be faulty. SFL techniques typically employ one type of
profile data: either coverage data, predicate outcomes, or variable values.
Most SFL techniques actually measure correlation, not causation, between
profile values and success/failure, and so they are subject to confounding bias
that distorts the scores they produce. This paper presents a new SFL technique,
named \emph{UniVal}, that uses causal inference techniques and machine learning
to integrate information about both predicate outcomes and variable values to
more accurately estimate the true failure-causing effect of program statements.
\emph{UniVal} was empirically compared to several coverage-based,
predicate-based, and value-based SFL techniques on 800 program versions with
real faults.
| 2021-02-11 22:29:30.000000000 |
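For contrast with the coverage-based techniques UniVal is compared against (UniVal itself adds causal inference and machine learning, which this sketch does not attempt), here is the classic Ochiai suspiciousness score computed from coverage counts; the counts are illustrative.

```python
from math import sqrt

def ochiai(failed_cov: int, passed_cov: int, total_failed: int) -> float:
    """Suspiciousness of a statement from coverage counts.

    failed_cov / passed_cov: failing / passing runs covering the statement.
    total_failed: total number of failing runs.
    """
    denom = sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# A statement covered by all 3 failing runs and 1 passing run is suspicious.
print(ochiai(failed_cov=3, passed_cov=1, total_failed=3))  # ~0.87
print(ochiai(failed_cov=1, passed_cov=9, total_failed=3))  # ~0.18
```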
2207.13263 | Software Engineering for Serverless Computing | cs.SE | Serverless computing is an emerging cloud computing paradigm that has been
applied to various domains, including machine learning, scientific computing,
video processing, etc. To develop serverless computing-based software
applications (a.k.a., serverless applications), developers follow the new
cloud-based software architecture, where they develop event-driven applications
without the need for complex and error-prone server management. The great
demand for developing serverless applications poses unique challenges to
software developers. However, Software Engineering (SE) has not yet
wholeheartedly tackled these challenges. In this paper, we outline a vision for
how SE can facilitate the development of serverless applications and call for
actions by the SE research community to reify this vision. Specifically, we
discuss possible directions in which researchers and cloud providers can
facilitate serverless computing from the SE perspective, including
configuration management, data security, application migration, performance,
testing and debugging, etc.
| 2022-07-27 02:57:19.000000000 |
1909.08378 | Anomaly Detection As-a-Service | cs.SE | Cloud systems are complex, large, and dynamic systems whose behavior must be
continuously analyzed to timely detect misbehaviors and failures. Although
there are solutions to flexibly monitor cloud systems, cost-effectively
controlling the anomaly detection logic is still a challenge. In particular,
cloud operators may need to quickly change the types of detected anomalies and
the scope of anomaly detection, for instance based on observations. This kind
of intervention still consists of a largely manual and inefficient ad-hoc
effort.
In this paper, we present Anomaly Detection as-a-Service (ADaaS), which uses
the same as-a-service paradigm often exploited in cloud systems to declaratively
control the anomaly detection logic. Operators can use ADaaS to specify the set
of indicators that must be analyzed and the types of anomalies that must be
detected, without having to address any operational aspect. Early results with
lightweight detectors show that the presented approach is a promising solution
to deliver better control of the anomaly detection logic.
| 2019-09-18 11:58:51.000000000 |
2309.12938 | Frustrated with Code Quality Issues? LLMs can Help! | cs.AI cs.SE | As software projects progress, the quality of code assumes paramount
importance, as it affects the reliability, maintainability and security of the
software. For this reason, static analysis tools are used in developer
workflows to flag code quality issues. However, developers need to spend extra
effort to revise their code to improve its quality based on the tool findings.
In this work, we
investigate the use of (instruction-following) large language models (LLMs) to
assist developers in revising code to resolve code quality issues. We present a
tool, CORE (short for COde REvisions), architected as a pair of LLMs: a
proposer and a ranker. Providers of static
analysis tools recommend ways to mitigate the tool warnings and developers
follow them to revise their code. The \emph{proposer LLM} of CORE takes the
same set of recommendations and applies them to generate candidate code
revisions. The candidates which pass the static quality checks are retained.
However, the LLM may introduce subtle, unintended functionality changes which
may go undetected by the static analysis. The \emph{ranker LLM} evaluates the
changes made by the proposer using a rubric that closely follows the acceptance
criteria that a developer would enforce. CORE uses the scores assigned by the
ranker LLM to rank the candidate revisions before presenting them to the
developer. CORE could revise 59.2% of Python files (across 52 quality checks)
so that they pass scrutiny by both a tool and a human reviewer. The ranker LLM
is able to reduce false positives by 25.8% in these cases. CORE produced
revisions that passed the static analysis tool in 76.8% of Java files (across
10 quality checks), comparable to the 78.3% of a specialized program repair
tool, with significantly less engineering effort.
| 2023-09-22 15:37:07.000000000 |
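The duo architecture reads as a propose-filter-rank pipeline; in the sketch below the LLM calls are hypothetical stubs (no real API is assumed) and the static check stands in for re-running the external analysis tool.

```python
def llm_propose(code: str, recommendation: str, n: int = 5) -> list[str]:
    """Proposer LLM: generate n candidate revisions (hypothetical stub)."""
    return [code.replace("== None", "is None") for _ in range(n)]

def static_check_passes(code: str) -> bool:
    """Stand-in for re-running the static analysis tool on a candidate."""
    return "== None" not in code

def llm_rank(original: str, candidate: str) -> float:
    """Ranker LLM: score a candidate against acceptance criteria (stub)."""
    return 1.0 if candidate != original else 0.0

def core_pipeline(code: str, recommendation: str) -> list[str]:
    candidates = llm_propose(code, recommendation)
    retained = [c for c in candidates if static_check_passes(c)]
    return sorted(retained, key=lambda c: llm_rank(code, c), reverse=True)

buggy = "if user == None:\n    return"
print(core_pipeline(buggy, "Use 'is None' for None comparisons")[0])
```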
1412.3687 | Modelling common cause failures of large digital I&C systems with
coloured Petri nets | cs.SE cs.PF | The purpose of this study is the representation of Common Cause Failures
(CCF) in large digital systems. The system under study is representative of a
control system of a nuclear plant. The model for CCF is the generalized Atwood
model. It can represent independent failures, CCFs that are non-lethal for some
system elements, and CCFs lethal to all. The Atwood model was modified to
"direct" non-lethal CCFs onto certain parts of the system and to take into
account the different possible origins of CCFs. Maintenance and repairs are
taken into account in the model, which is thus dynamic. The main evaluation
results are probabilistic; the considered indicator is the probability of
failure on demand
(PFD). A comparison is made between the estimator of the PFD taking into
account all the failures and the estimator taking into account only the
detected failures.
| 2014-12-09 18:06:33.000000000 |
1803.10324 | Ten Diverse Formal Models for a CBTC Automatic Train Supervision System | cs.SE cs.FL cs.LO cs.SY | Communications-based Train Control (CBTC) systems are metro signalling
platforms, which coordinate and protect the movements of trains within the
tracks of a station, and between different stations. In CBTC platforms, a
prominent role is played by the Automatic Train Supervision (ATS) system, which
automatically dispatches and routes trains within the metro network. Among the
various functions, an ATS needs to avoid deadlock situations, i.e., cases in
which a group of trains block each other. In the context of a technology
transfer study, we designed an algorithm for deadlock avoidance in train
scheduling. In this paper, we present a case study in which the algorithm has
been applied. The case study has been encoded using ten different formal
verification environments, namely UMC, SPIN, NuSMV/nuXmv, mCRL2, CPN Tools,
FDR4, CADP, TLA+, UPPAAL and ProB. Based on our experience, we observe
commonalities and differences among the modelling languages considered, and we
highlight the impact of the specific characteristics of each language on the
presented models.
| 2018-03-27 20:59:25.000000000 |
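In graph terms, the deadlocks the ATS must avoid are cycles in a wait-for relation between trains. A minimal cycle-detection sketch (illustrative only; the paper's avoidance algorithm must act before such a cycle can form):

```python
def has_deadlock(waits_for: dict[str, set[str]]) -> bool:
    """Detect a cycle in the trains' wait-for graph via depth-first search."""
    visited, on_stack = set(), set()

    def dfs(train):
        visited.add(train)
        on_stack.add(train)
        for blocker in waits_for.get(train, ()):
            if blocker in on_stack:
                return True  # back edge: a group of trains blocks itself
            if blocker not in visited and dfs(blocker):
                return True
        on_stack.discard(train)
        return False

    return any(dfs(t) for t in waits_for if t not in visited)

# Train A waits for B's section while B waits for A's: deadlock.
print(has_deadlock({"A": {"B"}, "B": {"A"}}))   # True
print(has_deadlock({"A": {"B"}, "B": set()}))   # False
```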
2107.13320 | A Case Study on the Stability of Performance Tests for Serverless
Applications | cs.DC cs.SE | Context. While in serverless computing, application resource management and
operational concerns are generally delegated to the cloud provider, ensuring
that serverless applications meet their performance requirements is still a
responsibility of the developers. Performance testing is a commonly used
performance assessment practice; however, it traditionally requires visibility
of the resource environment.
Objective. In this study, we investigate whether performance tests of
serverless applications are stable, that is, if their results are reproducible,
and what implications the serverless paradigm has for performance tests.
Method. We conduct a case study where we collect two datasets of performance
test results: (a) repetitions of performance tests for varying memory size and
load intensities and (b) three repetitions of the same performance test every
day for ten months.
Results. We find that performance tests of serverless applications are
comparatively stable if conducted on the same day. However, we also observe
short-term performance variations and frequent long-term performance changes.
Conclusion. Performance tests for serverless applications can be stable;
however, the serverless model impacts the planning, execution, and analysis of
performance tests.
| 2021-07-28 12:32:00.000000000 |
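Stability across repetitions can be quantified with the coefficient of variation of the measured metric; the numbers below are illustrative, not the study's data.

```python
import statistics

def coefficient_of_variation(samples):
    """Relative dispersion of repeated measurements (stdev / mean)."""
    return statistics.stdev(samples) / statistics.mean(samples)

same_day = [231.0, 228.5, 233.1]           # ms, three same-day repetitions
long_term = [231.0, 228.5, 233.1, 302.7]   # plus a much later repetition

print(f"same-day CV:  {coefficient_of_variation(same_day):.1%}")
print(f"long-term CV: {coefficient_of_variation(long_term):.1%}")
```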
2107.12136 | The Role of Functional Programming in Management and Orchestration of
Virtualized Network Resources Part I. System structure for Complex Systems
and Design Principles | cs.SE cs.PL cs.SY eess.SY | This is part I of the follow-up lecture notes of the lectures given by the
authors at the Three "CO" (Composability, Comprehensibility, Correctness)
Winter School held in Košice, Slovakia, in January 2018, and the Summer School
held in Budapest, Hungary, in June 2019. In this part, we explain the role of
the functional programming paradigm in the management of complex software
systems, and how functional programming concepts play an important role in the
design of such systems. A key prerequisite for implementing functional
programming concepts is a properly designed system structure following
well-defined design principles and rules. The main goal of this lecture is to
introduce students to proper system modeling. Furthermore, we also explain
how new emerging technologies are designed in such a way that they enforce the
development of systems that comply to the design rules inspired by the
functional programming. This is extremely important in view of the current
network evolution and virtualization concepts, which will require many
functional programming concepts in the network services and functions, as will
be discussed in part II of these lecture notes. These notes provide an
introduction to the subject, with the goal of explaining the problems and the
principles, methods and techniques used for their solution. The worked examples
and exercises serve students as the teaching material, from which they can
learn how to use design principles to model effective system structures. Here
we focus on students understanding of importance of effective system structures
for coordination of development and management processes that are driven by
business goals and further evolution.
| 2021-07-26 12:14:50.000000000 |
2103.11739 | Mine Me but Don't Single Me Out: Differentially Private Event Logs for
Process Mining | cs.CR cs.SE | The applicability of process mining techniques hinges on the availability of
event logs capturing the execution of a business process. In some use cases,
particularly those involving customer-facing processes, these event logs may
contain private information. Data protection regulations restrict the use of
such event logs for analysis purposes. One way of circumventing these
restrictions is to anonymize the event log to the extent that no individual can
be singled out using the anonymized log. This paper addresses the problem of
anonymizing an event log in order to guarantee that, upon disclosure of the
anonymized log, the probability that an attacker may single out any individual
represented in the original log does not increase by more than a threshold.
The paper proposes a differentially private disclosure mechanism, which
oversamples the cases in the log and adds noise to the timestamps to the extent
required to achieve the above privacy guarantee. The paper reports on an
empirical evaluation of the proposed approach using 14 real-life event logs in
terms of data utility loss and computational efficiency.
| 2021-03-22 11:39:11.000000000 |
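The timestamp half of the mechanism can be sketched with Laplace noise scaled by sensitivity/epsilon; the sensitivity and epsilon values are illustrative, and the oversampling of cases is omitted.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def anonymize_timestamps(timestamps_s, epsilon=0.5, sensitivity_s=60.0):
    """Shift each event timestamp by Laplace(sensitivity/epsilon) noise."""
    rng = random.Random(7)
    scale = sensitivity_s / epsilon
    return [t + laplace_noise(scale, rng) for t in timestamps_s]

events = [0.0, 42.0, 95.0]  # illustrative event times within one case
print(anonymize_timestamps(events))
```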
2102.05310 | Controlled Experimentation in Continuous Experimentation: Knowledge and
Challenges | cs.SE | Context: Continuous experimentation and A/B testing is an established
industry practice that has been researched for more than 10 years. Our aim is
to synthesize the conducted research.
Objective: We wanted to find the core constituents of a framework for
continuous experimentation and the solutions that are applied within the field.
Finally, we were interested in the challenges and benefits reported of
continuous experimentation.
Method: We applied forward snowballing on a known set of papers and
identified a total of 128 relevant papers. Based on this set of papers we
performed two qualitative narrative syntheses and a thematic synthesis to
answer the research questions.
Results: The framework constituents for continuous experimentation include
experimentation processes as well as supportive technical and organizational
infrastructure. The solutions found in the literature were synthesized to nine
themes, e.g. experiment design, automated experiments, or metric specification.
Concerning the challenges of continuous experimentation, the analysis
identified cultural, organizational, business, technical, statistical, ethical,
and domain-specific challenges. Further, the study concludes that the benefits
of experimentation are mostly implicit in the studies.
Conclusions: The research on continuous experimentation has yielded a large
body of knowledge on experimentation. The synthesis of published research
presented herein includes recommended infrastructure and experimentation
process models, guidelines to mitigate the identified challenges, and the
problems that the various published solutions solve.
| 2021-02-10 08:15:12.000000000 |
1312.0356 | Applying AOSE Concepts to Model Crosscutting Variability in Variant-Rich
Processes | cs.SE | Software process models need to be variant-rich, in the sense that they
should be systematically customizable to specific project goals and project
environments. It is currently very difficult to model Variant-Rich Process
(VRP) because variability mechanisms are largely missing in modern process
modeling languages. Variability mechanisms from other domains, such as
programming languages, might be suitable for the representation of variability
and could be adapted to the modeling of software processes. Mechanisms from
Software Product Line Engineering (SPLE) and concepts from Aspect-Oriented
Software Engineering (AOSE) show particular promise when modeling variability.
This paper presents an approach that integrates variability concepts from SPLE
and AOSE in the design of a VRP approach for the systematic support of
tailoring in software processes. This approach has also been implemented in
SPEM, resulting in the vSPEM notation. It has been used in a pilot application,
which indicates that our approach based on AOSE can make process tailoring
easier and more productive.
| 2013-12-02 07:17:10.000000000 |
cs/0607121 | Object-Based Groupware: Theory, Design and Implementation Issues | cs.SE | Document management software systems have a wide audience at present.
However, groupware as a term has a wide variety of possible definitions. An
attempt at groupware classification is made in this paper. Possible approaches to
groupware are considered including document management, document control and
mailing systems. Lattice theory and concept modelling are presented as a
theoretical background for the systems in question. Current technologies in
state-of-the-art document management software are discussed. Design and
implementation aspects for user-friendly integrated enterprise systems are
described. Results for a real system to be implemented are given. Perspectives
of the field in question are discussed.
| 2006-07-27 10:16:16.000000000 |
1809.04041 | Identifying Unmaintained Projects in GitHub | cs.SE | Background: Open source software has an increasing importance in modern
software development. However, there is also a growing concern on the
sustainability of such projects, which are usually managed by a small number of
developers, frequently working as volunteers. Aims: In this paper, we propose
an approach to identify GitHub projects that are not actively maintained. Our
goal is to alert users about the risks of using these projects and possibly
motivate other developers to assume the maintenance of the projects. Method: We
train machine learning models to identify unmaintained or sparsely maintained
projects, based on a set of features about project activity (commits, forks,
issues, etc). We empirically validate the model with the best performance with
the principal developers of 129 GitHub projects. Results: The proposed machine
learning approach has a precision of 80%, based on the feedback of real open
source developers; and a recall of 96%. We also show that our approach can be
used to assess the risks of projects becoming unmaintained. Conclusions: The
model proposed in this paper can be used by open source users and developers to
identify GitHub projects that are not actively maintained anymore.
| 2018-09-11 17:15:56.000000000 |
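The modeling step can be sketched with scikit-learn; the feature vectors and labels below are illustrative placeholders showing the shape of the approach, not the paper's data or feature set.

```python
from sklearn.ensemble import RandomForestClassifier

# Activity features per project: [commits_last_year, forks, open_issues,
# days_since_last_commit]. All values are illustrative placeholders.
X = [
    [250, 40, 12, 3],
    [180, 12, 30, 10],
    [2, 5, 45, 400],
    [0, 30, 80, 700],
]
y = [0, 0, 1, 1]  # 1 = unmaintained or sparsely maintained

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
candidate = [[1, 8, 60, 500]]  # a quiet project with a growing issue backlog
print(model.predict(candidate), model.predict_proba(candidate))
```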
2210.02506 | Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors | cs.CL cs.SE | Video game testing requires game-specific knowledge as well as common sense
reasoning about the events in the game. While AI-driven agents can satisfy the
first requirement, it is not yet possible to meet the second requirement
automatically. Therefore, video game testing often still relies on manual
testing, and human testers are required to play the game thoroughly to detect
bugs. As a result, it is challenging to fully automate game testing. In this
study, we explore the possibility of leveraging the zero-shot capabilities of
large language models for video game bug detection. By formulating the bug
detection problem as a question-answering task, we show that large language
models can identify which event is buggy in a sequence of textual descriptions
of events from a game. To this end, we introduce the GameBugDescriptions
benchmark dataset, which consists of 167 buggy gameplay videos and a total of
334 question-answer pairs across 8 games. We extensively evaluate the
performance of six models across the OPT and InstructGPT large language model
families on our benchmark dataset. Our results show promise for
employing language models to detect video game bugs. With the proper prompting
technique, we could achieve an accuracy of 70.66%, and on some video games, up
to 78.94%. Our code, evaluation data and the benchmark can be found on
https://asgaardlab.github.io/LLMxBugs
| 2022-10-05 18:44:35.000000000 |
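The question-answering formulation described above can be illustrated with a
small sketch. The prompt wording and the `query_llm` placeholder are our own
assumptions, not the paper's exact setup.

```python
# Format a sequence of textual game-event descriptions as a QA prompt asking
# an LLM to identify the buggy event.
def build_prompt(events: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {e}" for i, e in enumerate(events))
    return (
        "The following is a sequence of events from a video game:\n"
        f"{numbered}\n"
        "Question: Which event looks like a bug? Answer with the event number.\n"
        "Answer:"
    )

events = [
    "The player walks toward the door.",
    "The player opens the door.",
    "The player falls through the floor and keeps falling forever.",
]
print(build_prompt(events))
# answer = query_llm(prompt)  # placeholder for an OPT/InstructGPT-style API call
```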
1810.09704 | Understanding and Formalizing Accountability for Cyber-Physical Systems | cs.SE | Accountability is the property of a system that enables the uncovering of
causes for events and helps understand who or what is responsible for these
events. Definitions and interpretations of accountability differ; however, they
are typically expressed in natural language that obscures design decisions and
the impact on the overall system. This paper presents a formal model to express
the accountability properties of cyber-physical systems. To illustrate the
usefulness of our approach, we demonstrate how three different interpretations
of accountability can be expressed using the proposed model and describe the
implementation implications through a case study. This formal model can be used
to highlight context specific-elements of accountability mechanisms, define
their capabilities, and express different notions of accountability. In
addition, it makes design decisions explicit and facilitates discussion,
analysis and comparison of different approaches.
| 2018-10-23 07:49:58.000000000 |
2111.04302 | How Developers and Managers Define and Trade Productivity for Quality | cs.SE | In this paper, we present the findings from a survey study to investigate how
developers and managers define and trade-off developer productivity and
software quality (two related lenses into software development). We found that
developers and managers, as cohorts, are not well aligned in their views of
what it means to be productive (developers think of productivity in terms of
activity, while more managers think of productivity in terms of performance).
We also found that developers are not accurate at predicting their managers'
views of productivity. In terms of quality, we found that individual developers
and managers have quite varied views of what quality means to them, but as
cohorts they are closely aligned in their different views, with the majority in
both groups defining quality in terms of robustness. Over half of the
developers and managers reported that quality can be traded for higher
productivity and why this trade-off can be justified, while one third consider
quality as a necessary part of productivity that cannot be traded. We also
present a new descriptive framework for quality, TRUCE, that we synthesize from
the survey responses. We call for more discussion between developers and
managers about what they each consider as important software quality
attributes, and to have open debate about how software quality relates to
developer productivity and what trade-offs should or should not be made.
| 2021-11-08 07:14:32.000000000 |
2302.03723 | "STILL AROUND": Experiences and Survival Strategies of Veteran Women
Software Developers | cs.SE | The intersection of ageism and sexism can create a hostile environment for
veteran software developers belonging to marginalized genders. In this study,
we conducted 14 interviews to examine the experiences of people at this
intersection, primarily women, in order to discover the strategies they
employed in order to successfully remain in the field. We identified 283 codes,
which fell into three main categories: Strategies, Experiences, and Perception.
Several strategies we identified, such as (Deliberately) Not Trying to Look
Younger, were not previously described in the software engineering literature.
We found that, in some companies, older women developers are recognized as
having particular value, further strengthening the known benefits of diversity
in the workforce. Based on the experiences and strategies, we suggest that
organizations employing software developers consider the benefits of hiring
veteran women software developers. For example, companies can draw upon the
life experiences of older women developers in order to better understand the
needs of customers from a similar demographic. While we recognize that many of
the strategies employed by our study participants are a response to systemic
issues, we still consider that, in the short-term, there is benefit in
describing these strategies for developers who are experiencing such issues
today.
| 2023-02-07 19:26:15.000000000 |
2211.14607 | Sketch2FullStack: Generating Skeleton Code of Full Stack Website and
Application from Sketch using Deep Learning and Computer Vision | cs.CV cs.AI cs.NE cs.SE | Full-stack web or app development requires a software firm or, more
specifically, a team of experienced developers to contribute a large portion of
their time and resources to design the website and then convert it to code. As
a result, the efficiency of the development team is significantly reduced when
it comes to converting UI wireframes and database schemas into an actual
working system. It would save valuable resources and speed up the overall
workflow if clients or developers could automate the process of converting a
pre-made full-stack website design into partially, if not fully, working code.
In this paper, we present a novel approach to generating skeleton code from
sketched images using Deep Learning and Computer Vision techniques. The
training dataset consists of hand-drawn sketches of low-fidelity wireframes,
database schemas and class diagrams. The approach consists of three parts:
first, detecting and extracting front-end or UI elements from custom-made UI
wireframes; second, creating individual database tables from schema designs;
and lastly, creating a class file from class diagrams.
| 2022-11-26 16:32:13.000000000 |
1407.6099 | Autonomous requirements specification processing using natural language
processing | cs.CL cs.SE | We describe our ongoing research that centres on the application of natural
language processing (NLP) to software engineering and systems development
activities. In particular, this paper addresses the use of NLP in the
requirements analysis and systems design processes. We have developed a
prototype toolset that can assist the systems analyst or software engineer to
select and verify terms relevant to a project. In this paper we describe the
processes employed by the system to extract and classify objects of interest
from requirements documents. These processes are illustrated using a small
example.
| 2014-07-23 03:29:44.000000000 |
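The term-extraction step described in the abstract above can be sketched in a
few lines. This is not the authors' prototype toolset, only an illustration of
the idea; it assumes spaCy and its small English model are installed.

```python
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")  # pip install spacy; download the model first
text = ("The system shall allow the librarian to register new members. "
        "The system shall send an overdue notice to each member.")

# Noun chunks are a simple first approximation of candidate domain objects.
doc = nlp(text)
candidates = Counter(chunk.lemma_.lower() for chunk in doc.noun_chunks)
for term, freq in candidates.most_common():
    print(f"{term}: {freq}")  # e.g., system, librarian, member, overdue notice
```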
2401.07696 | Towards Automatic Translation of Machine Learning Visual Insights to
Analytical Assertions | cs.SE | We present our vision for developing an automated tool capable of translating
visual properties observed in Machine Learning (ML) visualisations into Python
assertions. The tool aims to streamline the process of manually verifying these
visualisations in the ML development cycle, which is critical as real-world
data and assumptions often change post-deployment. In a prior study, we mined
$54,070$ Jupyter notebooks from Github and created a catalogue of $269$
semantically related visualisation-assertion (VA) pairs. Building on this
catalogue, we propose to build a taxonomy that organises the VA pairs based on
ML verification tasks. The input feature space comprises a rich source of
information mined from the Jupyter notebooks -- visualisations, Python source
code, and associated markdown text. The effectiveness of various AI models,
including traditional NLP4Code models and modern Large Language Models, will be
compared using established machine translation metrics and evaluated through a
qualitative study with human participants. The paper also plans to address the
challenge of extending the existing VA pair dataset with additional pairs from
Kaggle and to compare the tool's effectiveness with commercial generative AI
models like ChatGPT. This research not only contributes to the field of ML
system validation but also explores novel ways to leverage AI for automating
and enhancing software engineering practices in ML.
| 2024-01-15 14:11:59.000000000 |
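An illustrative visualisation-assertion (VA) pair of the kind the paper mines
from notebooks is shown below: a plot a developer inspects visually, next to a
Python assertion capturing the same property. The data and threshold are
synthetic assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2 * x + rng.normal(scale=0.5, size=200)

# Visualisation: the scatter plot "shows" a positive linear relationship.
plt.scatter(x, y)
plt.title("feature vs. target")
plt.savefig("scatter.png")

# Assertion: the same visual property expressed as a checkable statement.
assert np.corrcoef(x, y)[0, 1] > 0.8, "expected a strong positive correlation"
```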
2109.07251 | Towards a new approach of continuous process improvement based on CMMI
and PMBOK | cs.SE | A process-centric approach helps an organization to improve the way it
works. It allows scalability and provides a way to capitalize knowledge on best
practices. It also makes better use of resources and helps to understand
trends. PMBOK is a project management methodology, while CMMI is a model for
process improvement. In this paper, we conduct a study on PMBOK and CMMI
frameworks to show that they can be converged and are complementary. We expect
this research to be useful for organizations deploying a new approach of
continuous process improvement based on pooling CMMI and PMBOK.
| 2021-09-13 14:58:26.000000000 |
1105.1191 | Software Architecture for Fiji National University Campus Information
Systems | cs.SE | Software Architecture defines the overview of the system, which consists of
various components and their relationships within the software. Architectural
design is very important in the development of large-scale software solutions
and plays a very active role in achieving business goals, quality and reusable
solutions. It is often difficult to choose the best software architecture for
your system from the several candidate types available. In this paper we look
at the several architectural types and compare them based on the key
requirements of our system, and select the most appropriate architecture for
the implementation of campus information systems at Fiji National University.
Finally we provide details of proposed architecture and outline future plans
for implementation of our system.
| 2011-05-05 23:17:47.000000000 |
2301.08022 | Source Code Metrics for Software Defects Prediction | cs.SE | In current research, there are contrasting results about the applicability of
software source code metrics as features for defect prediction models. The goal
of the paper is to evaluate the adoption of software metrics in models for
software defect prediction, identifying the impact of individual source code
metrics. With an empirical study on 275 release versions of 39 Java projects
mined from GitHub, we compute 12 software metrics and collect software defect
information. We train and compare three defect classification models. The
results across all projects indicate that Decision Tree (DT) and Random Forest
(RF) classifiers show the best results. Among the highest-performing individual
metrics are NOC, NPA, DIT, and LCOM5, while other metrics, such as CBO, do not
bring significant improvements to the models.
| 2023-01-19 11:46:06.000000000 |
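A hypothetical sketch of the study's setup: train the defect classifiers on
source code metrics and inspect the impact of individual metrics. Only the
metric names follow the abstract; the values and labels below are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

metrics = ["NOC", "NPA", "DIT", "LCOM5", "CBO"]
X = np.array([
    [2, 5, 1, 0.9, 12],
    [0, 1, 3, 0.2, 4],
    [5, 9, 2, 0.8, 20],
    [1, 0, 1, 0.1, 3],
])
y = np.array([1, 0, 1, 0])  # 1 = class had a post-release defect

for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    clf.fit(X, y)
    # Rank metrics by learned importance to see each metric's impact.
    ranked = sorted(zip(metrics, clf.feature_importances_), key=lambda p: -p[1])
    print(type(clf).__name__, ranked)
```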
1910.01309 | Seamless design of information system architecture based on adaptive
clustering method | cs.SE | The paper considers the concept of building the architecture of an
information system that provides a seamless connection between architectural
representations of various levels of abstraction. The concept is based on the
application of the adaptive clustering method of information systems developed
by the author. Seamless connection is understood as the presence of connections
between elements of architectural models related to architectural
representations of various levels of abstraction.
| 2019-10-03 05:35:13.000000000 |
2102.10985 | Software Architecture for Next-Generation AI Planning Systems | cs.AI cs.SE | Artificial Intelligence (AI) planning is a flourishing research and
development discipline that provides powerful tools for searching a course of
action that achieves some user goal. While these planning tools show excellent
performance on benchmark planning problems, they represent challenging software
systems when it comes to their use and integration in real-world applications.
In fact, even in-depth understanding of their internal mechanisms does not
guarantee that one can successfully set up, use and manipulate existing
planning tools. We contribute toward alleviating this situation by proposing a
service-oriented planning architecture to be at the core of the ability to
design, develop and use next-generation AI planning systems. We collect and
classify common planning capabilities to form the building blocks of the
planning architecture. We incorporate software design principles and patterns
into the architecture to allow for usability, interoperability and reusability
of the planning capabilities. Our prototype planning system demonstrates the
potential of our approach for rapid prototyping and flexibility of system
composition. Finally, we provide insight into the qualitative advantages of our
approach when compared to a typical planning tool.
| 2021-02-22 13:43:45.000000000 |
1910.09911 | Designing Security and Privacy Requirements in Internet of Things: A
Survey | cs.SE cs.HC | The design and development process for Internet of Things (IoT)
applications is more complicated than that for desktop, mobile, or web
applications. First, IoT applications require both software and hardware to
work together across different nodes with different capabilities under
different conditions. Secondly, IoT application development requires different
kinds of software engineers, such as desktop, web, embedded and mobile
developers, to cooperate. In addition, the development process requires
different software/hardware stacks to be integrated together. Due to the above
complexities, non-functional requirements (such as security and privacy) often
tend to get ignored in the IoT application development process.
In this paper, we have reviewed techniques, methods and tools that are being
developed to support incorporating security and privacy requirements into
traditional application designs. By doing so, we aim to explore how those
techniques could be applicable to the IoT domain.
In this paper, we primarily focused on two different aspects: (1) design
notations, models, and languages that facilitate capturing non-functional
requirements (i.e., security and privacy), and (2) proactive and reactive
interaction techniques that can be used to support and augment the IoT
application design process. Our goal is not only to analyse past research work
but also to discuss their applicability towards the IoT.
| 2019-10-22 12:04:07.000000000 |
2201.08167 | Chatbot Based Solution for Supporting Software Incident Management
Process | cs.SE cs.AI | A set of steps for implementing a chatbot, to support decision-making
activities in the software incident management process is proposed and
discussed in this article. Each step is presented independently of the platform
used for the construction of chatbots and is detailed with its respective
activities. The proposed steps can be carried out in a continuous and adaptable
way, favoring the constant training of a chatbot and allowing an increasingly
cohesive interpretation of the intentions of the specialists who work in the
Software Incident Management Process. The software incident resolution process,
according to the ITIL framework, is considered for the experiment. The
results of the work present the steps for the chatbot construction, the
solution based on DialogFlow platform and some conclusions based on the
experiment.
| 2022-01-15 23:16:13.000000000 |
1805.07354 | A Testing Scheme for Self-Adaptive Software Systems with Architectural
Runtime Models | cs.SE | Self-adaptive software systems (SASS) are equipped with feedback loops to
adapt autonomously to changes of the software or environment. In established
fields, such as embedded software, sophisticated approaches have been developed
to systematically study feedback loops early during the development. In order
to cover the particularities of feedback, techniques like one-way and
in-the-loop simulation and testing have been included. However, a related
approach to systematically test SASS is currently lacking. In this paper we
therefore propose a systematic testing scheme for SASS that allows engineers to
test the feedback loops early in the development by exploiting architectural
runtime models. These models that are available early in the development are
commonly used by the activities of a feedback loop at runtime and they provide
a suitable high-level abstraction to describe test inputs as well as expected
test results. We further outline our ideas with some initial evaluation results
by means of a small case study.
| 2018-05-17 18:46:04.000000000 |
2111.02038 | Fair-SSL: Building fair ML Software with less data | cs.SE cs.LG | Ethical bias in machine learning models has become a matter of concern in the
software engineering community. Most of the prior software engineering works
concentrated on finding ethical bias in models rather than fixing it. After
finding bias, the next step is mitigation. Prior researchers mainly tried to
use supervised approaches to achieve fairness. However, in the real world,
getting data with trustworthy ground truth is challenging and also ground truth
can contain human bias. Semi-supervised learning is a machine learning
technique where, incrementally, labeled data is used to generate pseudo-labels
for the rest of the data (and then all that data is used for model training).
In this work, we apply four popular semi-supervised techniques as
pseudo-labelers to create fair classification models. Our framework, Fair-SSL,
takes a very small amount (10%) of labeled data as input and generates
pseudo-labels for the unlabeled data. We then synthetically generate new data
points to balance the training data based on class and protected attribute as
proposed by Chakraborty et al. in FSE 2021. Finally, the classification model
is trained on the balanced pseudo-labeled data and validated on test data.
After experimenting on ten datasets and three learners, we find that Fair-SSL
achieves similar performance as three state-of-the-art bias mitigation
algorithms. That said, the clear advantage of Fair-SSL is that it requires only
10% of the labeled training data. To the best of our knowledge, this is the
first SE work where semi-supervised techniques are used to fight against
ethical bias in SE ML models.
| 2021-11-03 06:47:47.000000000 |
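The pseudo-labeling step described above can be sketched as plain
self-training: a model trained on a small labeled subset (e.g., 10% of the
data) generates pseudo-labels for the rest. The balancing by class and
protected attribute that Fair-SSL performs is omitted here for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

n_labeled = 100  # the small trusted subset (10%)
X_lab, y_lab = X[:n_labeled], y[:n_labeled]
X_unlab = X[n_labeled:]

# Step 1: the pseudo-labeler learns from labeled data only.
pseudo_labeler = LogisticRegression().fit(X_lab, y_lab)
pseudo_labels = pseudo_labeler.predict(X_unlab)

# Step 2: the final classifier trains on labeled + pseudo-labeled data.
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo_labels])
final_model = LogisticRegression().fit(X_all, y_all)
print("train accuracy:", final_model.score(X_all, y_all))
```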
1703.06568 | Evaluating the Stream Control Transmission Protocol Using Uppaal | cs.LO cs.NI cs.SE | The Stream Control Transmission Protocol (SCTP) is a Transport Layer protocol
that has been proposed as an alternative to the Transmission Control Protocol
(TCP) for the Internet of Things (IoT). SCTP, with its four-way handshake
mechanism, claims to protect the Server from a Denial-of-Service (DoS) attack
by ensuring the legitimacy of the Client, which has been a known issue
pertaining to the three-way handshake of TCP. This paper compares the
handshakes of TCP and SCTP to discuss its shortcomings and strengths. We
present an Uppaal model of the TCP three-way handshake and SCTP four-way
handshake and show that SCTP is able to cope with the presence of an
Illegitimate Client, while TCP fails. The results confirm that SCTP is better
equipped to deal with this type of attack.
| 2017-03-20 02:46:57.000000000 |
2209.12984 | Characteristics and Main Threats about Multi-Factor Authentication: A
Survey | cs.CR cs.SE | This work uses the Systematic Literature Review process to provide
theoretical support to research on threat models and Multi-Factor
Authentication. However, differently from related works, this study aims to
evaluate the main characteristics of authentication solutions and their threat
models. It also intends to catalogue characteristics, threats, and related
content as a state of the art. As a result, we present a portfolio analysis
through the charts, figures, and tables given in the discussion section.
| 2022-09-26 19:42:59.000000000 |
1909.00805 | CrowdOS: A Ubiquitous Operating System for Crowdsourcing and Mobile
Crowd Sensing | cs.CY cs.OS cs.SE | With the rise of crowdsourcing and mobile crowdsensing techniques, a large
number of crowdsourcing applications or platforms (CAP) have appeared.
Meanwhile, CAP-related models and frameworks based on different research
hypotheses are rapidly emerging, and they usually address specific issues from
a certain perspective. Due to different settings and conditions, different
models are not compatible with each other. However, CAP urgently needs to
combine these techniques to form a unified framework. In addition, these models
need to be learned and updated online with the extension of crowdsourced data
and task types, thus requiring a unified architecture that integrates lifelong
learning concepts and breaks down the barriers between different modules. This
paper draws on the idea of ubiquitous operating systems and proposes a novel OS
(CrowdOS), which is an abstract software layer running between native OS and
application layer. In particular, based on an in-depth analysis of the complex
crowd environment and diverse characteristics of heterogeneous tasks, we
construct the OS kernel and three core frameworks including Task Resolution and
Assignment Framework (TRAF), Integrated Resource Management (IRM), and Task
Result quality Optimization (TRO). In addition, we validate the usability of
CrowdOS, module correctness and development efficiency. Our evaluation further
reveals that TRO brings an enormous improvement in efficiency and a reduction in energy
consumption.
| 2019-09-02 17:28:05.000000000 |
0901.4404 | Performance of Buchberger's Improved Algorithm using Prime Based
Ordering | cs.SE cs.SC | Prime-based ordering, which is proved to be admissible, is the encoding of
indeterminates in power-products with prime numbers and ordering them by using
the natural number order. Using Eiffel, four versions of Buchberger's improved
algorithm for obtaining Groebner Bases have been developed: two total degree
versions, representing power products as strings and the other two as integers
based on prime-based ordering. The versions are further distinguished by
implementing coefficients as 64-bit integers and as multiple-precision
integers. By using prime-based power product coding, iterative or recursive
operations on power products are replaced with integer operations. It is found
that on a series of example polynomial sets, significant reductions in
computation time of 30% or more are almost always obtained.
| 2009-01-28 05:47:24.000000000 |
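A small worked sketch of the prime-based encoding described above: each
indeterminate is mapped to a prime, a power product becomes the integer
product of those primes raised to the exponents, and power products are
compared via the natural order on these integers. Multiplying power products
then reduces to plain integer multiplication. The variable-to-prime mapping
below is an illustrative assumption.

```python
PRIMES = {"x": 2, "y": 3, "z": 5}

def encode(power_product: dict[str, int]) -> int:
    """Encode e.g. x^2*y as 2**2 * 3**1 = 12."""
    code = 1
    for var, exp in power_product.items():
        code *= PRIMES[var] ** exp
    return code

p = encode({"x": 2, "y": 1})              # x^2*y -> 12
q = encode({"x": 1, "y": 2})              # x*y^2 -> 18
print(p, q, p < q)                        # ordering via natural number order
print(p * q == encode({"x": 3, "y": 3}))  # product of power products -> True
```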
2103.08644 | A Systematic Literature Review on Model-driven Engineering for
Cyber-Physical Systems | cs.SE | This technical report presents a Systematic Literature Review (SLR) study
that focuses on identifying and classifying the recent research practices
pertaining to CPS development through MDE approaches. The study evaluates 140
research papers published during 2010-2018. Accordingly, a comprehensive
analysis of various MDE approaches used in the development life-cycle of CPS is
presented. Furthermore, the study identifies the research gaps and areas that
need more investigation. The contribution helps researchers and practitioners
to get an overall understanding of the research trends and existing challenges
for further research/development.
| 2021-03-15 18:34:46.000000000 |
2008.05804 | Process Discovery for Structured Program Synthesis | cs.AI cs.LG cs.SE | A core task in process mining is process discovery which aims to learn an
accurate process model from event log data. In this paper, we propose to use
(block-) structured programs directly as target process models so as to
establish connections to the field of program synthesis and facilitate the
translation from abstract process models to executable processes, e.g., for
robotic process automation. Furthermore, we develop a novel bottom-up
agglomerative approach to the discovery of such structured program process
models. In comparison with the popular top-down recursive inductive miner, our
proposed agglomerative miner enjoys a similar theoretical guarantee to
produce sound process models (without deadlocks and other anomalies) while
exhibiting some advantages like avoiding silent activities and accommodating
duplicate activities. The proposed algorithm works by iteratively applying a
few graph rewriting rules to the directly-follows-graph of activities. For
real-world (sparse) directly-follows-graphs, the algorithm has quadratic
computational complexity with respect to the number of distinct activities. To
our knowledge, this is the first process discovery algorithm that is made for
the purpose of program synthesis. Experiments on the BPI-Challenge 2020 dataset
and the Karel programming dataset have demonstrated that our proposed algorithm
can outperform the inductive miner not only according to the traditional
process discovery metrics but also in terms of the effectiveness in finding out
the true underlying structured program from a small number of its execution
traces.
| 2020-08-13 10:33:10.000000000 |
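To illustrate the starting point of such discovery algorithms, the sketch
below builds the directly-follows-graph (DFG) of activities from an event log;
this is the structure to which the paper's graph rewriting rules are then
applied iteratively. The toy event log is our own assumption.

```python
from collections import Counter

event_log = [
    ["register", "check", "pay", "ship"],
    ["register", "pay", "check", "ship"],
    ["register", "check", "ship"],
]

dfg = Counter()
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1  # activity b directly follows activity a

for (a, b), count in sorted(dfg.items()):
    print(f"{a} -> {b}: {count}")
```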
1805.07353 | Model-Driven Engineering of Self-Adaptive Software with EUREMA | cs.SE | The development of self-adaptive software requires the engineering of an
adaptation engine that controls the underlying adaptable software by feedback
loops. The engine often describes the adaptation by runtime models representing
the adaptable software and by activities such as analysis and planning that use
these models. To systematically address the interplay between runtime models
and adaptation activities, runtime megamodels have been proposed. A runtime
megamodel is a specific model capturing runtime models and adaptation
activities. In this article, we go one step further and present an executable
modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that eases the
development of adaptation engines by following a model-driven engineering
approach. We provide a domain-specific modeling language and a runtime
interpreter for adaptation engines, in particular feedback loops. Megamodels
are kept alive at runtime and by interpreting them, they are directly executed
to run feedback loops. Additionally, they can be dynamically adjusted to adapt
feedback loops. Thus, EUREMA supports development by making feedback loops
explicit at a higher level of abstraction and it enables solutions where
multiple feedback loops interact or operate on top of each other and
self-adaptation co-exists with offline adaptation for evolution.
| 2018-05-17 18:24:11.000000000 |
0712.2943 | Software (Re-)Engineering with PSF | cs.SE | This paper investigates the usefulness of PSF in software engineering and
reengineering. PSF is based on ACP (Algebra of Communicating Processes) and as
some architectural description languages are based on process algebra, we
investigate whether PSF can be used at the software architecture level, but we
also use PSF at lower abstract levels. As a case study we reengineer the
compiler from the Toolkit of PSF.
| 2007-12-18 12:25:02.000000000 |
2104.07847 | Exploring software developers' work practices: Task differences,
participation, engagement, and speed of task resolution | cs.SE | In seeking to understand the processes enacted during software development,
an increasing number of studies have mined software repositories. In
particular, studies have endeavored to show how teams resolve software defects.
Although much of this work has been useful, we contend that large-scale
examinations across the range of activities that are commonly performed, beyond
defect-related issues alone, would help us to more fully understand the reasons
why defects occur as well as their consequences. More generally, these
explorations would reveal how team processes occur during all software
development efforts. We thus extend such studies by investigating how software
practitioners work while undertaking the range of software tasks that are
typically performed. Multiple forms of analyses of a longitudinal case study
reveal that software practitioners were mostly involved in fixing defects, and
that their engagement covaried depending on the nature of the work they were
performing. Furthermore, multiple external factors affected speed of task
resolution. Our outcomes suggest that behavioral and intrinsic issues may
interact with extrinsic factors becoming significant predictors of the speed of
software task resolution.
| 2021-04-16 01:55:45.000000000 |
1503.05530 | Exploration of the scalability of LocFaults | cs.AI cs.SE | A model checker can produce a counterexample trace for an erroneous
program, which is often long and difficult to understand. In general,
instructions from loops make up the largest part of this trace. This makes
locating errors in loops critical for analyzing errors in the overall
program. In this paper, we explore the scalability of LocFaults,
our error localization approach exploiting paths of the CFG (Control Flow Graph)
from a counterexample to calculate the MCDs (Minimal Correction Deviations),
and MCSs (Minimal Correction Subsets) from each found MCD. We present the times
of our approach on programs with While-loops unfolded b times, and a number of
deviated conditions ranging from 0 to n. Our preliminary results show that the
running times of our approach, which is constraint-based and flow-driven, are
better than those of BugAssist, which is based on SAT and transforms the entire
program into a Boolean formula; furthermore, the information provided by
LocFaults is more expressive for the user.
| 2015-03-18 18:40:55.000000000 |
0907.3983 | Service-oriented high level architecture | cs.SE | Service-oriented High Level Architecture (SOHLA) refers to the high level
architecture (HLA) enabled by techniques such as Service-Oriented Architecture
(SOA) and Web Services, which support distributed interoperating services.
The detailed comparisons between HLA and SOA are made to illustrate the
importance of their combination. Then several key enhancements and changes of
HLA Evolved Web Service API are introduced in comparison with native APIs, such
as Federation Development and Execution Process, communication mechanisms, data
encoding, session handling, testing environment and performance analysis. Some
approaches are summarized including Web-Enabling HLA at the communication
layer, HLA interface specification layer, federate interface layer and
application layer. Finally the problems of current research are discussed, and
the future directions are pointed out.
| 2009-07-23 04:39:37.000000000 |
2306.08869 | Detecting Misuses of Security APIs: A Systematic Review | cs.CR cs.SE | Security Application Programming Interfaces (APIs) play a vital role in
ensuring software security. However, misuse of security APIs may introduce
vulnerabilities that can be exploited by hackers. API design complexities,
inadequate documentation and insufficient security training are some of the
reasons for misusing security APIs. In order to help developers and
organizations, the software security community has devised and evaluated several
approaches to detecting misuses of security APIs. We rigorously analyzed and
synthesized the literature on security API misuses to build a body of
knowledge on the topic. Our review has identified and discussed the security
APIs studied from misuse perspective, the types of reported misuses and the
approaches developed to detect misuses and how the proposed approaches have
been evaluated. Our review has also highlighted the open research issues for
advancing the state-of-the-art of detecting misuse of security APIs.
| 2023-06-15 05:53:23.000000000 |
2003.03172 | Detecting and Characterizing Bots that Commit Code | cs.SE cs.CR cs.LG cs.SI stat.ML | Background: Some developer activity traditionally performed manually, such as
making code commits, opening, managing, or closing issues, is increasingly
subject to automation in many OSS projects. Specifically, such activity is
often performed by tools that react to events or run at specific times. We
refer to such automation tools as bots and, in many software mining scenarios
related to developer productivity or code quality it is desirable to identify
bots in order to separate their actions from actions of individuals. Aim: Find
an automated way of identifying bots and code committed by these bots, and to
characterize the types of bots based on their activity patterns. Method and
Result: We propose BIMAN, a systematic approach to detect bots using author
names, commit messages, files modified by the commit, and projects associated
with the commits. For our test data, the value for AUC-ROC was 0.9. We also
characterized these bots based on the time patterns of their code commits and
the types of files modified, and found that they primarily work with
documentation files and web pages, and these files are most prevalent in HTML
and JavaScript ecosystems. We have compiled a shareable dataset containing
detailed information about 461 bots we found (all of whom have more than 1000
commits) and 13,762,430 commits they created.
| 2020-03-02 21:54:07.000000000 |
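One of BIMAN's signals, author names, can be caricatured with a simple
pattern check. The regex below is a guess for illustration only; the full
approach also uses commit messages, modified files, and associated projects.

```python
import re

# Heuristic patterns for bot-like author names (illustrative, not BIMAN's).
BOT_NAME = re.compile(r"(\bbot\b|-bot$|\[bot\]$|^dependabot|^renovate)", re.I)

def looks_like_bot(author_name: str) -> bool:
    return bool(BOT_NAME.search(author_name))

for author in ["dependabot[bot]", "alice", "travis-ci bot", "Bob Otto"]:
    print(author, "->", looks_like_bot(author))
```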
2308.02955 | An Empirical Study of AI-based Smart Contract Creation | cs.SE cs.LG | The introduction of large language models (LLMs) like ChatGPT and Google
Palm2 for smart contract generation seems to be the first well-established
instance of an AI pair programmer. LLMs have access to a large number of
open-source smart contracts, enabling them to utilize more extensive code in
Solidity than other code generation tools. Although the initial and informal
assessments of LLMs for smart contract generation are promising, a systematic
evaluation is needed to explore the limits and benefits of these models. The
main objective of this study is to assess the quality of generated code
provided by LLMs for smart contracts. We also aim to evaluate the impact of the
quality and variety of input parameters fed to LLMs. To achieve this aim, we
created an experimental setup for evaluating the generated code in terms of
validity, correctness, and efficiency. Our study finds crucial evidence of
security bugs getting introduced in the generated smart contracts as well as
the overall quality and correctness of the code getting impacted. However, we
also identified the areas where it can be improved. The paper also proposes
several potential research directions to improve the process, quality and
safety of generated smart contract codes.
| 2023-08-05 21:38:57.000000000 |
0803.0163 | Rapid Spreadsheet Reshaping with Excelsior: multiple drastic changes to
content and layout are easy when you represent enough structure | cs.SE | Spreadsheets often need changing in ways made tedious and risky by Excel. For
example: simultaneously altering many tables' size, orientation, and position;
inserting cross-tabulations; moving data between sheets; splitting and merging
sheets. A safer, faster restructuring tool is, we claim, Excelsior. The result
of a research project into reducing spreadsheet risk, Excelsior is the first
ever tool for modularising spreadsheets; i.e. for building them from components
which can be independently created, tested, debugged, and updated. It
represents spreadsheets in a way that makes these components explicit,
separates them from layout, and allows both components and layout to be changed
without breaking dependent formulae. Here, we report experiments to test that
this does indeed make such changes easier. In one, we automatically generated a
cross-tabulation and added it to a spreadsheet. In the other, we generated new
versions of a 10,000-cell housing-finance spreadsheet containing many
interconnected 20*40 tables. We varied table sizes from 5*10 to 200*2,000;
moved tables between sheets; and flipped table orientations. Each change
generated a spreadsheet with different structure but identical outputs; each
change took just a few minutes.
| 2008-03-03 01:11:39.000000000 |
2112.06222 | Rise of Distributed Deep Learning Training in the Big Model Era: From a
Software Engineering Perspective | cs.SE | Deep learning (DL) has become a key component of modern software. In the "big
model" era, the rich features of DL-based software substantially rely on
powerful DL models, e.g., BERT, GPT-3, and the recently emerging GPT-4, which
are trained on the powerful cloud with large datasets. Hence, training
effective DL models has become a vital stage in the whole software lifecycle.
When training deep learning models, especially those big models, developers
need to parallelize and distribute the computation and memory resources amongst
multiple devices in the training process, which is known as distributed deep
learning training, or distributed training for short. However, the unique
challenges that developers encounter in distributed training process have not
been studied in the software engineering community. Given the increasingly
heavy dependence of current DL-based software on distributed training, this
paper aims to fill in the knowledge gap and presents the first comprehensive
study on developers' issues in distributed training. To this end, we analyze
1,131 real-world developers' issues about using these frameworks reported on
Stack Overflow and GitHub. We construct a fine-grained taxonomy consisting of
30 categories regarding the fault symptoms and summarize common fix patterns
for different symptoms. Based on the results, we suggest actionable
implications on research avenues that can potentially facilitate the
distributed training to develop DL-based software, such as focusing on the
frequent and common fix patterns when designing testing or debugging tools,
developing efficient testing and debugging techniques for communication
configuration along with the synthesis of network configuration analysis,
designing new multi-device checkpoint-and-replay techniques to help
reproduction, and designing serverless APIs for cloud platforms.
| 2021-12-12 12:58:48.000000000 |
2302.01707 | Parfum: Detection and Automatic Repair of Dockerfile Smells | cs.SE | Docker is a popular tool for developers and organizations to package, deploy,
and run applications in a lightweight, portable container. One key component of
Docker is the Dockerfile, a simple text file that specifies the steps needed to
build a Docker image. While Dockerfiles are easy to create and use, creating an
optimal image is complex, in particular because it is easy not to follow best
practices; when this happens, we call it a Docker smell. To improve the quality of
Dockerfiles, previous works have focused on detecting Docker smells, but they
do not offer suggestions or repair the smells. In this paper, we propose,
Parfum, a tool that detects and automatically repairs Docker smells while
producing minimal patches. Parfum is based on a new Dockerfile AST parser
called Dinghy. We evaluate the effectiveness of Parfum by analyzing and
repairing a large set of Dockerfiles and comparing it against existing tools.
We also measure the impact of the repair on the Docker image in terms of build
failure and image size. Finally, we opened 35 pull requests to collect
developers' feedback and ensure that the repairs and the smells are meaningful.
Our results show that Parfum is able to repair 806 245 Docker smells and has a
significant impact on the Docker image size; finally, developers are
welcoming the patches generated by Parfum, having merged 20 pull requests.
| 2023-02-03 13:04:47.000000000 |
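The sketch below is not Parfum itself, only a toy illustration of rule-based
Dockerfile smell detection: flagging `apt-get install` steps that skip
`--no-install-recommends`, one commonly cited best practice. Parfum works on a
real Dockerfile AST (Dinghy) and also generates repairs; this sketch only
detects.

```python
import re

dockerfile = """\
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl
"""

# Flag apt-get install commands that are not followed by the flag.
smell = re.compile(r"apt-get install(?!.*--no-install-recommends)")
for lineno, line in enumerate(dockerfile.splitlines(), 1):
    if line.startswith("RUN") and smell.search(line):
        print(f"line {lineno}: consider adding --no-install-recommends")
```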
2309.12347 | Transitioning a Project-Based Course between Onsite and Online. An
Experience Report | cs.CY cs.SE | We present an investigation regarding the challenges faced by student teams
across four consecutive iterations of a team-focused, project-based course in
software engineering. The studied period includes the switch to fully online
activities in the spring of 2020, and covers the return to face-to-face
teaching two years later. We cover the feedback provided by over 1,500
students, collected in a free-text form on the basis of a survey. A qualitative
research method was utilized to discern and examine the challenges and
perceived benefits of a course that was conducted entirely online. We show that
technical challenges remain a constant in project-based courses, with time
management being the most affected by the move to online. Students reported
that the effective use of collaborative tools eased team organization and
communication while online. We conclude by providing a number of action points
regarding the integration of online activities in face-to-face course unfolding
related to project management, communication tools, the importance of teamwork,
and of active mentor participation.
| 2023-08-28 07:37:20.000000000 |
2403.07506 | Robustness, Security, Privacy, Explainability, Efficiency, and Usability
of Large Language Models for Code | cs.SE | Large language models for code (LLM4Code), which demonstrate strong
performance (e.g., high accuracy) in processing source code, have significantly
transformed software engineering. Many studies separately investigate the
non-functional properties of LLM4Code, but there is no systematic review of how
these properties are evaluated and enhanced. This paper fills this gap by
thoroughly examining 146 relevant studies, thereby presenting the first
systematic literature review to identify seven important properties beyond
accuracy, including robustness, security, privacy, explainability, efficiency,
and usability. We discuss the current state-of-the-art methods and trends,
identify gaps in existing research, and present promising directions for future
study.
| 2024-03-12 10:43:26.000000000 |
1402.6045 | Multi-Dimensional Customization Modelling Based On Metagraph For Saas
Multi-Tenant Applications | cs.SE | Software as a Service (SaaS) is a new software delivery model in which
pre-built applications are delivered to customers as a service. SaaS providers
aim to attract a large number of tenants (users) with minimal system
modifications to benefit from economies of scale. To achieve this aim, SaaS
applications have to be customizable to meet requirements of each tenant.
However, due to the rapid growth of SaaS, SaaS applications could have
thousands of tenants with a huge number of ways to customize applications.
Modularizing such customizations still is a highly complex task. Additionally,
due to the big variation of requirements for tenants, no single customization
model is appropriate for all tenants. In this paper, we propose a
multi-dimensional customization model based on metagraph. The proposed mode
addresses the modelling variability among tenants, describes customizations and
their relationships, and guarantees the correctness of SaaS customizations made
by tenants.
| 2014-02-25 03:44:09.000000000 |
2108.04640 | Modeling and Evaluating Personas with Software Explainability
Requirements | cs.HC cs.CY cs.SE | This work focuses on the context of software explainability, which is the
production of software capable of explaining to users the dynamics that govern
its internal functioning. User models that include information about their
requirements and their perceptions of explainability are fundamental when
building software with such capability. This study investigates the process of
creating personas that include information about users' explainability
perceptions and needs. The proposed approach is based on data collection with
questionnaires, modeling of empathy maps, grouping the maps, generating
personas from them and evaluation employing the Persona Perception Scale
method. In an empirical study, personas are created from 61 users' response
data to a questionnaire. The generated personas are evaluated by 60 users and
38 designers considering attributes of the Persona Perception Scale method. The
results include a set of 5 distinct personas that users rate as representative
of them at an average level of 3.7 out of 5, and designers rate as having
quality 3.5 out of 5. The median rate is 4 out of 5 in the majority of criteria
judged by users and designers. Both the personas and their creation and
evaluation approach are contributions of this study to the design of software
that satisfies the explainability requirement.
| 2021-08-10 12:43:18.000000000 |
2304.12562 | Empirical Evaluation of ChatGPT on Requirements Information Retrieval
Under Zero-Shot Setting | cs.SE cs.AI | Recently, various illustrative examples have shown the impressive ability of
generative large language models (LLMs) to perform NLP related tasks. ChatGPT
undoubtedly is the most representative model. We empirically evaluate ChatGPT's
performance on requirements information retrieval (IR) tasks to derive insights
into designing or developing more effective requirements retrieval methods or
tools based on generative LLMs. We design an evaluation framework considering
four different combinations of two popular IR tasks and two common artifact
types. Under zero-shot setting, evaluation results reveal ChatGPT's promising
ability to retrieve requirements relevant information (high recall) and limited
ability to retrieve more specific requirements information (low precision). Our
evaluation of ChatGPT on requirements IR under zero-shot setting provides
preliminary evidence for designing or developing more effective requirements IR
methods or tools based on LLMs.
| 2023-04-25 04:09:45.000000000 |
1610.09012 | Empirical Evaluation of Effort on Composing Design Models | cs.SE | Model composition plays a central role in many software engineering
activities such as evolving models to add new features and reconciling
conflicting design models developed in parallel by different development teams.
As model composition is usually an error-prone and effort-consuming task, its
potential benefits, such as gains in productivity can be compromised. However,
there is no empirical knowledge nowadays about the effort required to compose
design models. Only feedbacks of model composition evangelists are available,
and they often diverge. Consequently, developers are unable to conduct any
cost-effectiveness analysis as well as identify, predict, or reduce composition
effort. The inability of evaluating composition effort is due to three key
problems. First, the current evaluation frameworks do not consider fundamental
concepts in model composition such as conflicts and inconsistencies. Second,
researchers and developers do not know what factors can influence the
composition effort in practice. Third, practical knowledge about how such
influential factors may affect the developers' effort is severely lacking. In
this context, the contributions of this thesis are threefold: (i) a quality
model for supporting the evaluation of model composition effort, (ii) practical
knowledge, derived from a family of quantitative and qualitative empirical
studies, about model composition effort and its influential factors, and (iii)
insight about how to evaluate model composition efforts and tame the side
effects of such influential factors.
| 2016-10-27 20:51:55.000000000 |
1702.07484 | Featured Weighted Automata | cs.FL cs.LO cs.SE | A featured transition system is a transition system in which the transitions
are annotated with feature expressions: Boolean expressions on a finite number
of given features. Depending on its feature expression, each individual
transition can be enabled when some features are present, and disabled for
other sets of features. The behavior of a featured transition system hence
depends on a given set of features. There are algorithms for featured
transition systems which can check their properties for all sets of features at
once, for example for LTL or CTL properties.
Here we introduce a model of featured weighted automata which combines
featured transition systems and (semiring-) weighted automata. We show that
methods and techniques from weighted automata extend to featured weighted
automata and devise algorithms to compute quantitative properties of featured
weighted automata for all sets of features at once. We show applications to
minimum reachability and to energy properties.
| 2017-02-24 07:40:36.000000000 |
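A minimal sketch of the featured weighted transitions described above: each
transition carries a feature expression over a finite feature set and is
enabled only for products whose feature selection satisfies it. The
representation and the choice of weights (an energy-like cost here) are our
own assumptions.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

FeatureExpr = Callable[[FrozenSet[str]], bool]  # Boolean expression on features

@dataclass
class Transition:
    source: str
    target: str
    weight: float          # semiring weight, e.g. an energy cost
    guard: FeatureExpr     # feature expression annotating the transition

transitions = [
    Transition("s0", "s1", 2.0, lambda fs: "turbo" in fs),
    Transition("s0", "s1", 5.0, lambda fs: "turbo" not in fs),
]

def enabled(ts, features):
    return [t for t in ts if t.guard(frozenset(features))]

# The same system behaves differently per product (set of features):
print([t.weight for t in enabled(transitions, {"turbo"})])  # [2.0]
print([t.weight for t in enabled(transitions, set())])      # [5.0]
```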
1701.06146 | The Influence of Teamwork Quality on Software Team Performance | cs.SE | Traditionally, software quality is thought to depend on sound software
engineering and development methodologies such as structured programming and
agile development. However, high quality software depends just as much on high
quality collaboration within the team. Since the success rate of software
development projects is low (Wateridge, 1995; The Standish Group, 2009), it is
important to understand which characteristics of interactions within software
development teams significantly influence performance. Hoegl and Gemuenden
(2001) reported empirical evidence for the relation between teamwork quality
and software quality, using a six-factor teamwork quality (TWQ) model. This
article extends the work of Hoegl and Gemuenden (2001) with the aim of finding
additional factors that may influence software team performance. We introduce
three new TWQ factors: trust, value sharing, and coordination of expertise. The
relationship between TWQ and team performance and the improvement of the model
are tested using data from 252 team members and stakeholders. Results show that
teamwork quality is significantly related to team performance, as rated by both
team members and stakeholders: TWQ explains 81% of the variance of team
performance as rated by team members and 61% as rated by stakeholders. This
study shows that trust, shared values, and coordination of expertise are
important factors for team leaders to consider in order to achieve high quality
software team work.
| 2017-01-22 10:00:32.000000000 |
1209.1428 | Challenges and Directions for Engineering Multi-agent Systems | cs.MA cs.SE | In this talk I review where we stand regarding the engineering of multi-agent
systems. There is both good news and bad news. The good news is that over the
past decade we've made considerable progress on techniques for engineering
multi-agent systems: we have good, usable methodologies, and mature tools.
Furthermore, we've seen a wide range of demonstrated applications, and have
even begun to quantify the advantages of agent technology. However, industry
involvement in AAMAS appears to be declining (as measured by industry
sponsorship of the conference), and industry affiliated attendants at AAMAS
2012 were few (1-2%). Furthermore, looking at the applications of agents being
reported at recent AAMAS, usage of Agent Oriented Software Engineering (AOSE)
and of Agent Oriented Programming Languages (AOPLs) is quite limited. This
observation is corroborated by the results of a 2008 survey by Frank and
Virginia Dignum. Based on these observations, I make five recommendations: (1)
Re-engage with industry; (2) Stop designing AOPLs and AOSE methodologies ...
and instead ... (3) Move to the "macro" level: develop techniques for designing
and implementing interaction, integrate micro (single cognitive agent) and
macro (MAS) design and implementation; (4) Develop techniques for the Assurance
of MAS; and (5) Re-engage with the US.
| 2012-09-07 00:27:39.000000000 |
2303.07826 | Implant Global and Local Hierarchy Information to Sequence based Code
Representation Models | cs.SE cs.AI | Source code representation with deep learning techniques is an important
research field. There have been many studies that learn sequential or
structural information for code representation, but sequence-based models and
non-sequence models both have their limitations. Researchers have attempted to
incorporate structural information into sequence-based models, but they only mine
part of token-level hierarchical structure information. In this paper, we
analyze how the complete hierarchical structure influences the tokens in code
sequences and abstract this influence as a property of code tokens called
hierarchical embedding. The hierarchical embedding is further divided into
statement-level global hierarchy and token-level local hierarchy. Furthermore,
we propose the Hierarchy Transformer (HiT), a simple but effective sequence
model to incorporate the complete hierarchical embeddings of source code into a
Transformer model. We demonstrate the effectiveness of hierarchical embedding
on learning code structure with an experiment on the variable scope detection
task. Further evaluation shows that HiT outperforms SOTA baseline models and shows
stable training efficiency on three source code-related tasks involving
classification and generation tasks across 8 different datasets.
| 2023-03-14 12:01:39.000000000 |
cs/0105006 | Reverse Engineering from Assembler to Formal Specifications via Program
Transformations | cs.SE cs.PL | The FermaT transformation system, based on research carried out over the last
sixteen years at Durham University, De Montfort University and Software
Migrations Ltd., is an industrial-strength formal transformation engine with
many applications in program comprehension and language migration. This paper
is a case study which uses automated plus manually-directed transformations and
abstractions to convert an IBM 370 Assembler code program into a very
high-level abstract specification.
| 2001-05-04 09:21:21.000000000 |
2104.12295 | Vulnerabilities and Open Issues of Smart Contracts: A Systematic Mapping | cs.SE cs.CR | Smart Contracts (SCs) are programs stored in a Blockchain to ensure
agreements between two or more parties. Due to the immutable nature of
Blockchain, failures or errors in SCs become permanent once published. The
reliability of SCs is essential to avoid financial losses. So, SCs must be
checked to ensure the absence of errors. Hence, many studies addressed new
methods and tools for zero-bug software in SCs. This paper conducted a
systematic literature mapping identifying initiatives and tools to analyze SCs
and how to deal with the identified vulnerabilities. Besides, this work
identifies gaps that may lead to research topics for future work.
| 2021-04-26 00:46:39.000000000 |
2105.10466 | Setting Out a Software Stack Capable of Hosting a Virtual ROS-based
Competition | cs.SE | Traditional academic competitions that foster collaboration from student
communities all around the globe have either been postponed indefinitely or
cancelled due to restrictions posed upon in-person gathering. Owing to this,
virtual competitions are gaining importance as they provide the student
community with an outlet for exposure and academic growth whilst having the
convenience of being online and safe. Mars Society South Asia (MSSA) has
developed a software stack capable of replicating on-site competition tasks via
ROS-based simulations. The software enables users to perform necessary
simulations without them having to interact with ROS or Gazebo. Currently, the
software has been set out to be used for the Virtual Mars Rover Challenge
(VMRC), a competition that focuses on simulating the next generation of Mars
rovers and thereby providing the student community with a substitute to their
on-site competitions. The objective of this white paper is to explain the
software architecture, installation, usage, maintenance and ease of use which
is the singular most important factor in software adoption.
| 2021-05-21 17:09:48.000000000 |
1305.0189 | A Comparative Study of Web Services Composition Networks | cs.SE | The growth of Web services makes the composition process a hard task to solve.
These numerous interacting elements can be adequately represented by a network.
Discovery and composition can benefit from the knowledge of the network
structure. In this paper, we investigate the topological properties of two
models of syntactic and semantic Web services composition networks: dependency
and interaction. Results show that they share a similar organization
characterized by the small-world property, a heavy-tailed degree distribution
and a low transitivity value. Furthermore, the networks are disassortative.
| 2013-05-01 14:47:44.000000000 |
2309.07026 | APICom: Automatic API Completion via Prompt Learning and Adversarial
Training-based Data Augmentation | cs.SE | Based on developer needs and usage scenarios, API (Application Programming
Interface) recommendation is the process of assisting developers in finding the
required API among numerous candidate APIs. Previous studies mainly modeled API
recommendation as the recommendation task, which can recommend multiple
candidate APIs for the given query, and developers may not yet be able to find
what they need. Motivated by the neural machine translation research domain, we
can model this problem as the generation task, which aims to directly generate
the required API for the developer query. After our preliminary investigation,
we find the performance of this intuitive approach is not promising. The reason
is that there exists an error when generating the prefixes of the API. However,
developers may know certain API prefix information during actual development in
most cases. Therefore, we model this problem as the automatic completion task
and propose a novel approach APICom based on prompt learning, which can
generate API related to the query according to the prompts (i.e., API prefix
information). Moreover, the effectiveness of APICom highly depends on the
quality of the training dataset. In this study, we further design a novel
gradient-based adversarial training method for data augmentation,
which can improve the normalized stability when generating adversarial
examples. To evaluate the effectiveness of APICom, we consider a corpus of 33k
developer queries and corresponding APIs. Compared with the state-of-the-art
baselines, our experimental results show that APICom can outperform all
baselines by at least 40.02%, 13.20%, and 16.31% in terms of the performance
measures EM@1, MRR, and MAP. Finally, our ablation studies confirm the
effectiveness of our component setting (such as our designed adversarial
training method, our used pre-trained model, and prompt learning) in APICom.
| 2023-09-13 15:31:50.000000000 |
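A hedged sketch of the prompt-learning idea the abstract describes: the developer query and the known API prefix are combined into a single prompt so that a generative code model completes only the remainder. The prompt template, the model choice (a public CodeT5 checkpoint), and the decoding settings below are illustrative assumptions, not APICom's actual configuration.

```python
# Illustrative prompt construction for API completion with a known prefix.
from transformers import pipeline

generator = pipeline("text2text-generation", model="Salesforce/codet5-small")

def complete_api(query: str, api_prefix: str) -> str:
    # Hypothetical template: the real APICom prompt format may differ.
    prompt = (
        f"query: {query} "
        f"complete the API starting with: {api_prefix}"
    )
    return generator(prompt, max_new_tokens=16)[0]["generated_text"]

print(complete_api("parse a JSON string into a map", "com.google.gson."))
```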
2212.03404 | Towards using Few-Shot Prompt Learning for Automating Model Completion | cs.SE cs.CL | We propose a simple yet novel approach to improve completion in domain
modeling activities. Our approach exploits the power of large language models
by using few-shot prompt learning without the need to train or fine-tune those
models with large datasets that are scarce in this field. We implemented our
approach and tested it on the completion of static and dynamic domain diagrams.
Our initial evaluation shows that such an approach is effective and can be
integrated in different ways during the modeling activities.
| 2022-12-07 02:11:26.000000000 |
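A minimal sketch of the few-shot prompt construction this abstract relies on: a handful of worked completion examples are prepended to the partial domain model before querying a large language model, with no training or fine-tuning. The UML-like notation, the example models, and the template are invented for illustration; the paper's actual prompt format may differ.

```python
# Few-shot prompt assembly for domain-model completion (illustrative only).
FEW_SHOT_EXAMPLES = """\
model: class Library; class Book; Library "1" -- "*" Book
completion: class Member; Library "1" -- "*" Member; Member "1" -- "*" Book

model: class Store; class Product; Store "1" -- "*" Product
completion: class Order; Order "*" -- "*" Product; class Customer; Customer "1" -- "*" Order
"""

def build_prompt(partial_model: str) -> str:
    return (
        "Complete the following domain model with missing classes and "
        "associations, following the examples.\n\n"
        + FEW_SHOT_EXAMPLES
        + f"\nmodel: {partial_model}\ncompletion:"
    )

prompt = build_prompt('class School; class Teacher; School "1" -- "*" Teacher')
# `prompt` would then be sent to a large language model's completion endpoint.
```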
1508.00037 | Neuro-Fuzzy Algorithmic (NFA) Models and Tools for Estimation | cs.SE cs.AI | Accurate estimation such as cost estimation, quality estimation and risk
analysis is a major issue in management. We propose a patent-pending soft
computing framework to tackle this challenging problem. Our generic framework
is independent of the nature and type of estimation. It consists of neural
network, fuzzy logic, and an algorithmic estimation model. We made use of the
Constructive Cost Model (COCOMO), Analysis of Variance (ANOVA), and Function
Point Analysis as the algorithmic models and validated the accuracy of the
Neuro-Fuzzy Algorithmic (NFA) Model in software cost estimation using
industrial project data. Our model produces more accurate estimates than
an algorithmic model alone. We also discuss the prototypes of our tools that
implement the NFA Model. We conclude with our roadmap and direction to enrich
the model in tackling different estimation challenges.
| 2015-07-31 21:30:42.000000000 |
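The first algorithmic model the abstract names, COCOMO, has a well-known closed form: in basic COCOMO's organic mode, effort in person-months is 2.4 * KLOC^1.05. The sketch below pairs that formula with a toy fuzzy multiplier to suggest, not reproduce, the neuro-fuzzy adjustment that the NFA model learns from project data; the triangular membership and its range are invented for illustration.

```python
# Worked example: basic COCOMO plus a toy fuzzy adjustment factor.

def cocomo_basic(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO, organic mode: effort (person-months) = a * KLOC^b."""
    return a * kloc ** b

def fuzzy_complexity_factor(rating: float) -> float:
    """Toy mapping of a 0-1 complexity rating to a multiplier in [0.8, 1.4];
    a real neuro-fuzzy system would tune this from data."""
    return 0.8 + 0.6 * max(0.0, min(1.0, rating))

kloc = 32.0
effort = cocomo_basic(kloc) * fuzzy_complexity_factor(0.7)
print(f"Estimated effort: {effort:.1f} person-months")
```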
cs/0503068 | A Survey of Reverse Engineering and Program Comprehension | cs.SE | Reverse engineering has been a standard practice in the hardware community
for some time. It has only been within the last ten years that reverse
engineering, or "program comprehension", has grown into the current
sub-discipline of software engineering. Traditional software engineering is
primarily focused on the development and design of new software. However, most
programmers work on software that other people have designed and developed. Up
to 50% of a software maintainer's time can be spent determining the intent of
source code. The growing demand to reevaluate and reimplement legacy software
systems, brought on by the proliferation of client-server and World Wide Web
technologies, has underscored the need for reverse engineering tools and
techniques. This paper introduces the terminology of reverse engineering and
gives some of the obstacles that make reverse engineering difficult. Although
reverse engineering remains heavily dependent on the human component, a number
of automated tools are presented that aid the reverse engineer.
| 2005-03-24 13:55:53.000000000 |
2005.14015 | MACER: A Modular Framework for Accelerated Compilation Error Repair | cs.SE cs.LG cs.PL stat.ML | Automated compilation error repair, the problem of suggesting fixes to buggy
programs that fail to compile, has generated significant interest in recent
years. Apart from being a tool of general convenience, automated code repair
has significant pedagogical applications for novice programmers who find
compiler error messages cryptic and unhelpful. Existing approaches largely
solve this problem using a black-box application of a heavy-duty generative
learning technique, such as sequence-to-sequence prediction (TRACER) or
reinforcement learning (RLAssist). Although convenient, such black-box
application of learning techniques makes existing approaches bulky in terms of
training time, as well as inefficient at targeting specific error types.
We present MACER, a novel technique for accelerated error repair based on a
modular segregation of the repair process into repair identification and repair
application. MACER uses powerful yet inexpensive discriminative learning
techniques such as multi-label classifiers and rankers to first identify the
type of repair required and then apply the suggested repair.
Experiments indicate that the fine-grained approach adopted by MACER offers
not only superior error correction, but also much faster training and
prediction. On a benchmark dataset of 4K buggy programs collected from actual
student submissions, MACER outperforms existing methods by 20% at suggesting
fixes for popular errors that exactly match the fix desired by the student.
MACER is also competitive or better than existing methods at all error types --
whether popular or rare. MACER offers a training time speedup of 2x over TRACER
and 800x over RLAssist, and a test time speedup of 2-4x over both.
| 2020-05-28 14:00:03.000000000 |
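MACER's modular segregation of repair identification from repair application can be suggested with a small two-stage sketch: a cheap discriminative classifier (TF-IDF features plus logistic regression) picks a repair class from the compiler message, then a per-class template applies the fix. The training data, repair classes, and templates below are invented miniatures, not MACER's actual dataset or repair vocabulary.

```python
# Two-stage repair sketch: identify the repair class, then apply a template.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "error: expected ';' before 'return'",
    "error: 'x' undeclared (first use in this function)",
    "error: expected ';' before '}' token",
    "error: 'count' undeclared (first use in this function)",
]
repair_classes = ["insert_semicolon", "declare_variable",
                  "insert_semicolon", "declare_variable"]

# Stage 1: repair identification with an inexpensive classifier.
identifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
identifier.fit(messages, repair_classes)

# Stage 2: repair application via per-class templates (toy placeholders).
TEMPLATES = {
    "insert_semicolon": lambda line: line.rstrip() + ";",
    "declare_variable": lambda line: "int x;\n" + line,
}

def repair(compiler_message: str, buggy_line: str) -> str:
    repair_class = identifier.predict([compiler_message])[0]
    return TEMPLATES[repair_class](buggy_line)

print(repair("error: expected ';' before 'return'", "int y = 0"))
```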
2304.04347 | Taming Android Fragmentation through Lightweight Crowdsourced Testing | cs.SE cs.PL | Android fragmentation refers to the overwhelming diversity of Android devices
and OS versions. These lead to the impossibility of testing an app on every
supported device, leaving a number of compatibility bugs scattered in the
community and thereby resulting in poor user experiences. To mitigate this, our
fellow researchers have designed various approaches to automatically detect such
compatibility issues. However, current state-of-the-art tools can only detect
specific kinds of compatibility issues (namely, those caused by API signature
evolution), leaving many other essential types of compatibility issues
unrevealed. For example, customized OS versions on real devices and semantic
changes of the OS could lead to serious compatibility issues, which are
non-trivial to detect statically. To this end, we
propose a novel, lightweight, crowdsourced testing approach, LAZYCOW, to fill
this research gap and enable the possibility of taming Android fragmentation
through crowdsourced efforts. Specifically, crowdsourced testing is an emerging
alternative to conventional mobile testing mechanisms that allows developers to
test their products on real devices to pinpoint platform-specific issues.
Experimental results on thousands of test cases on real-world Android devices
show that LAZYCOW is effective in automatically identifying and verifying
API-induced compatibility issues. Also, a qualitative investigation of the user
experience shows that users' satisfaction provides strong evidence that
LAZYCOW is useful and welcomed in practice.
| 2023-04-10 01:37:16.000000000 |
2202.13830 | Curb Your Self-Modifying Code | cs.SE | Self-modifying code has many intriguing applications in a broad range of
fields including software security, artificial general intelligence, and
open-ended evolution. Having control over self-modifying code, however, is
still an open challenge since it is a balancing act between providing as much
freedom as possible so as not to limit possible solutions and, at the same
time, imposing restrictions to avoid security issues and invalid code or
solutions. In the present study, I provide a prototype implementation of how
one might curb self-modifying code by introducing control mechanisms for code
modifications within specific regions and for specific transitions between code
and data. I show that this is possible to achieve with the so-called allagmatic
method - a framework to formalise, model, implement, and interpret complex
systems inspired by Gilbert Simondon's philosophy of individuation and Alfred
North Whitehead's philosophy of organism. Thereby, the allagmatic method serves
as guidance for self-modification based on concepts defined in a metaphysical
framework. I conclude that the allagmatic method seems to be a suitable
framework for control mechanisms in self-modifying code and that there are
intriguing analogies between the presented control mechanisms and gene
regulation.
| 2022-02-28 14:39:34.000000000 |
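A rough illustration of the abstract's "control mechanisms for code modifications within specific regions", not the paper's allagmatic implementation: self-modification is permitted only at declared instruction indices, so edits inside protected regions are rejected. The instruction format and region scheme are invented for illustration.

```python
# Guarded self-modification: only declared regions may be rewritten.

class GuardedProgram:
    def __init__(self, instructions, modifiable):
        self.instructions = list(instructions)
        self.modifiable = set(modifiable)  # indices open to self-modification

    def modify(self, index, new_instruction):
        if index not in self.modifiable:
            raise PermissionError(f"instruction {index} is in a protected region")
        self.instructions[index] = new_instruction

prog = GuardedProgram(
    ["load x", "add 1", "store x", "halt"],
    modifiable={1},  # only the arithmetic step may evolve
)
prog.modify(1, "add 2")      # allowed: inside the modifiable region
# prog.modify(3, "jump 0")   # would raise PermissionError: protected region
```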
2110.09371 | Proceedings of the 19th International Overture Workshop | cs.SE | This volume contains the papers presented at the 19th International Overture
Workshop, which was held in a hybrid format: online and physically at Aarhus,
Denmark, on 22nd October 2021. This event was the latest in a series of
workshops around the Vienna Development Method (VDM), the open-source project
Overture, and related tools and formalisms. VDM is one of the longest
established formal methods for systems development. A lively community of
researchers and practitioners has grown up in academia and industry around the
modelling languages (VDM-SL, VDM++, VDM-RT, CML) and tools (VDMTools, Overture,
VDM VSCode extension, Crescendo, Symphony, the INTO-CPS chain, and ViennaTalk).
Together, these provide a platform for work on modelling and analysis
technology that includes static and dynamic analysis, test generation,
execution support, and model checking. This workshop provided updates on the
emerging technology of VDM/Overture, including collaboration infrastructure,
collaborative modelling and co-simulation for Cyber-Physical Systems.
| 2021-10-18 14:57:12.000000000 |