Columns: input (string, 29 to 3.27k characters), created_at (string, 29 characters), __index_level_0__ (int64, 0 to 16k)
Background: Requirements engineering is of principal importance when starting a new project. However, the number of requirements involved in a single project can reach into the thousands. Controlling and assuring the quality of natural language requirements (NLRs) in these quantities is challenging. Aims: In a field study with the Swedish Transportation Agency (STA), we investigated to what extent the characteristics of requirements influenced change requests and budget changes in the project. Method: We chose the following models to characterize system requirements formulated in natural language: the Concern-based Model of Requirements (CMR), the Requirements Abstractions Model (RAM), and the Software-Hardware model (SHM). The classification of the NLRs was conducted by the three authors. The robust statistical measure Fleiss' Kappa was used to verify the reliability of the results. We used descriptive statistics, contingency tables, and results from the Chi-Square test of association along with post hoc tests. Finally, correspondence analysis, a multivariate statistical technique, was used to display the set of requirements in two-dimensional graphical form. Results: The results showed that software requirements are associated with lower budget costs than hardware requirements. Moreover, software requirements tend to stay open for a longer period, indicating that they are "harder" to handle. Finally, more discussion or interaction on a change request can lower its actual estimated cost. Conclusions: The results point to a need to further investigate why software requirements are treated differently from hardware requirements, to interview the project managers, to better understand how those requirements are formulated, and to propose effective ways of software management.
2023-10-01 14:08:02.000000000
4,518
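The abstract above relies on Fleiss' Kappa to check inter-rater agreement among the three authors. As an illustration only (not the authors' analysis code), a minimal sketch of the standard Fleiss' Kappa computation for a ratings matrix might look like this; the categories and counts are hypothetical.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' Kappa for a (subjects x categories) matrix of rating counts.

    counts[i, j] = number of raters that assigned subject i to category j.
    Every row must sum to the same number of raters n.
    """
    N, _ = counts.shape
    n = counts.sum(axis=1)[0]               # raters per subject
    p_j = counts.sum(axis=0) / (N * n)      # proportion of assignments per category
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar = P_i.mean()                      # observed agreement
    P_e = np.square(p_j).sum()              # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 3 raters classify 4 requirements into SW / HW / mixed.
ratings = np.array([
    [3, 0, 0],
    [2, 1, 0],
    [0, 3, 0],
    [1, 1, 1],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```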
Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program. Traditionally, designers of machine learning models have relied predominantly on either Structure or Context. We propose a new model, which jointly learns on Context and Structure of source code. In contrast to previous approaches, our model uses only language-agnostic features, i.e., source code and features that can be computed directly from the AST. Besides obtaining state-of-the-art results on monolingual code summarization for all five programming languages considered in this work, we propose the first multilingual code summarization model. We show that jointly training on non-parallel data from multiple programming languages improves results on all individual languages, with the strongest gains on low-resource languages. Remarkably, multilingual training only from Context does not lead to the same improvements, highlighting the benefits of combining Structure and Context for representation learning on code.
2021-03-19 20:32:33.000000000
2,575
Learning new programming skills requires tailored guidance. With the emergence of advanced Natural Language Generation models like the ChatGPT API, there is now a possibility of creating a convenient and personalized tutoring system with AI for computer science education. This paper presents GPTutor, a ChatGPT-powered programming tool: a Visual Studio Code extension that uses the ChatGPT API to provide programming code explanations. By integrating the Visual Studio Code API, GPTutor can comprehensively analyze the provided code by referencing the relevant source code. As a result, GPTutor can use designed prompts to explain the selected code in a pop-up message. GPTutor is now published on the Visual Studio Code Extension Marketplace, and its source code is openly accessible on GitHub. Preliminary evaluation indicates that GPTutor delivers the most concise and accurate explanations compared to vanilla ChatGPT and GitHub Copilot. Moreover, feedback from students and teachers indicated that GPTutor is user-friendly and can explain the given code satisfactorily. Finally, we discuss possible future research directions for GPTutor, including enhancing its performance and personalization via further prompt programming, as well as evaluating its effectiveness with real users.
2023-05-01 18:52:11.000000000
15,442
This paper presents the modular automation for reuse in manufacturing systems (modAT4rMS) approach to support the model-driven engineering (MDE) of object-oriented manufacturing automation software with regard to its usability and software modularity. By usability we refer to the aspects of effectiveness, efficiency, and user acceptance, as defined in ISO 9241-11. The modAT4rMS notations are based on selected features from the Unified Modeling Language (UML) and the Systems Modeling Language (SysML) and were iteratively developed further through a series of empirical studies with industrial practitioners as well as mechatronics trainees. With modAT4rMS, an MDE approach for Programmable Logic Controller (PLC) programming was developed with the goal of facilitating modular object-oriented programming of PLC software by improving the representation of the relationships between the structure and behavior diagram types and by reducing the level of abstraction in the structure model. modAT4rMS notations for PLC software structure and software behavior modeling are presented and illustrated with a modeling example using a modAT4rMS editor prototype. For the evaluation of the developed notations, results from a study with 168 participants are presented, showing the benefits of this new approach in comparison to the classic procedural paradigm (IEC 61131-3) and the domain-specific UML profile plcML with regard to programming performance and usability aspects. Finally, the advantages and limitations of the approach are discussed and an outlook on further development is given.
2022-12-16 15:43:08.000000000
549
The number of companies opting for remote working has been increasing over the years, and Agile methodologies, such as Scrum, were adapted to mitigate the challenges caused by distributed teams. However, the COVID-19 pandemic imposed a fully working-from-home context, which had never existed before. This paper presents a two-phased multi-method study. In the first phase, we uncover how working from home impacted Scrum practitioners through a qualitative survey. Then, in the second phase, we propose a theoretical model that we test and generalize using Partial Least Squares - Structural Equation Modeling (PLS-SEM) through a sample study of 200 software engineers who worked from home within Scrum projects. From assessing our model, we can conclude that all the latent variables are reliable and all the hypotheses are significant. Moreover, we performed an Importance-Performance Map Analysis (IPMA), highlighting the benefits of the home working environment and the use of Scrum for project success. We emphasize the importance of supporting the three innate psychological needs of autonomy, competence, and relatedness in the home working environment. We conclude that the home working environment and the use of Scrum both contribute to project success, with Scrum acting as a mediator.
2021-07-12 13:32:27.000000000
3,213
A means of building safe critical systems consists of formally modeling the requirements formulated by stakeholders and ensuring their consistency with respect to application domain properties. This paper proposes a metamodel for an ontology modeling formalism based on OWL and PLIB. This modeling formalism is part of a method for modeling the domain of systems whose requirements are captured through SysML/KAOS. The formal semantics of SysML/KAOS goals are represented using Event-B specifications. Goals provide the set of events, while domain models provide the structure of the system state of the Event-B specification. Our proposal is illustrated through a case study dealing with a Cycab localization component specification. The case study deals with the specification of a localization software component that uses GPS, Wi-Fi, and sensor technologies for the real-time localization of the Cycab vehicle, an autonomous ground transportation system designed to be robust and completely independent.
2017-09-29 07:04:08.000000000
12,663
This paper investigates the application of eXplainable Artificial Intelligence (XAI) in the design of embedded systems using machine learning (ML). As a case study, it addresses the challenging problem of static silent store prediction. This involves identifying redundant memory writes based only on static program features. Eliminating such stores enhances performance and energy efficiency by reducing memory access and bus traffic, especially in the presence of emerging non-volatile memory technologies. To achieve this, we propose a methodology consisting of: 1) the development of relevant ML models for explaining silent store prediction, and 2) the application of XAI to explain these models. We employ two state-of-the-art model-agnostic XAI methods to analyze the causes of silent stores. Through the case study, we evaluate the effectiveness of the methods. We find that these methods provide explanations for silent store predictions, which are consistent with known causes of silent store occurrences from previous studies. Typically, this allows us to confirm the prevalence of silent stores in operations that write the zero constant into memory, or the absence of silent stores in operations involving loop induction variables. This suggests the potential relevance of XAI in analyzing ML models' decision in embedded system design. From the case study, we share some valuable insights and pitfalls we encountered. More generally, this study aims to lay the groundwork for future research in the emerging field of XAI for embedded system design.
2024-03-06 09:19:35.000000000
4,335
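The abstract above trains ML models on static program features and then explains them with model-agnostic XAI methods. As a rough sketch of that workflow on synthetic stand-in features (not the paper's dataset or feature set), using permutation importance as a simple model-agnostic explanation step in place of the dedicated XAI methods the paper uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical static features of a store instruction (stand-ins only).
feature_names = ["writes_zero_constant", "inside_loop", "uses_induction_variable"]
X = rng.integers(0, 2, size=(500, 3)).astype(float)
# Toy label: zero-constant stores not tied to an induction variable tend to be silent.
y = ((X[:, 0] == 1) & (X[:, 2] == 0)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```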
Nowadays, working from home (WFH) has become a popular work arrangement due to its many potential benefits for both companies and employees (e.g., increasing job satisfaction and retention of employees). Many previous studies have investigated the impact of working from home on the productivity of employees. However, most of these studies use qualitative analysis methods such as surveys and interviews, and the studied participants did not work from home for a long, continuous time. Due to the outbreak of coronavirus disease 2019 (COVID-19), a large number of companies asked their employees to work from home, which provided us with an opportunity to investigate whether working from home affects their productivity. In this study, to investigate the difference in developer productivity between working from home and working onsite, we conduct a quantitative analysis based on a dataset of developers' daily activities from Baidu Inc., one of the largest IT companies in China. In total, we collected approximately four thousand records of 139 developers' activities over 138 working days. Of these records, 1,103 were submitted when developers worked from home due to the COVID-19 pandemic. We find that WFH has both positive and negative impacts on developer productivity in terms of different metrics, e.g., the number of builds/commits/code reviews. We also notice that working from home has different impacts on projects with different characteristics, including programming language and project type/age/size. For example, working from home has a negative impact on developer productivity for large projects. Additionally, we find that productivity varies across developers. Based on these findings, we collected feedback from developers at Baidu to understand why WFH has different impacts on developer productivity.
2020-05-26 00:04:10.000000000
9,043
This volume contains the proceedings of F-IDE 2016, the third international workshop on Formal Integrated Development Environment, which was held as an FM 2016 satellite event on November 8, 2016, in Limassol (Cyprus). High levels of safety, security, and also privacy standards require the use of formal methods to specify and develop compliant software (sub)systems. Any standard comes with an assessment process, which requires complete documentation of the application in order to ease the justification of design choices and the review of code and proofs. Thus, tools are needed for handling specifications, program constructs, and verification artifacts. The aim of the F-IDE workshop is to provide a forum for presenting and discussing research efforts as well as experience reports on the design, development, and usage of formal IDEs, aiming at making formal methods "easier" for both specialists and non-specialists.
2017-01-20 22:25:27.000000000
2,748
Exascale computing will feature novel and potentially disruptive hardware architectures. Exploiting these to their full potential is non-trivial. Numerical modelling frameworks involving finite difference methods are currently limited by the 'static' nature of the hand-coded discretisation schemes and repeatedly may have to be re-written to run efficiently on new hardware. In contrast, OpenSBLI uses code generation to derive the model's code from a high-level specification. Users focus on the equations to solve, whilst not concerning themselves with the detailed implementation. Source-to-source translation is used to tailor the code and enable its execution on a variety of hardware.
2016-08-31 19:06:06.000000000
9,957
Issue Tracking Systems (ITS) such as Bugzilla can be viewed as Process Aware Information Systems (PAIS) generating event logs during the life-cycle of a bug report. Process Mining consists of mining event logs generated from PAIS for process model discovery, conformance, and enhancement. We apply process map discovery techniques to mine event trace data generated from the ITS of the open-source Firefox browser project to generate and study process models. Bug life-cycles exhibit diversity and variance. Therefore, the process models generated from the event logs are spaghetti-like, with a large number of edges, inter-connections, and nodes. Such models are complex to analyse and difficult for a process analyst to comprehend. We improve the Goodness (fitness and structural complexity) of the process models by splitting the event log into homogeneous subsets by clustering structurally similar traces. We adapt the K-Medoid clustering algorithm with two different distance metrics: Longest Common Subsequence (LCS) and Dynamic Time Warping (DTW). We evaluate the goodness of the process models generated from the clusters using complexity and fitness metrics. We study back-and-forth and self-loops, bug reopening, and bottlenecks in the obtained clusters and show that clustering enables better analysis. We also propose an algorithm to automate the clustering process: the algorithm takes the event log as input and returns the best cluster set.
2015-11-17 11:49:36.000000000
5,576
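The abstract above clusters structurally similar bug life-cycle traces with K-Medoids and an LCS-based distance. A compact, illustrative sketch of that idea (toy traces and a plain PAM-style loop, not the authors' implementation) could look as follows:

```python
import random
from functools import lru_cache

def lcs_distance(a, b):
    """1 - LCS(a, b) / max(len): 0 for identical traces, 1 for fully dissimilar ones."""
    @lru_cache(maxsize=None)
    def lcs(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + lcs(i + 1, j + 1)
        return max(lcs(i + 1, j), lcs(i, j + 1))
    return 1.0 - lcs(0, 0) / max(len(a), len(b))

def k_medoids(traces, k, iters=20, seed=0):
    random.seed(seed)
    medoids = random.sample(range(len(traces)), k)
    for _ in range(iters):
        # Assign each trace to its closest medoid.
        clusters = {m: [] for m in medoids}
        for i, t in enumerate(traces):
            best = min(medoids, key=lambda m: lcs_distance(t, traces[m]))
            clusters[best].append(i)
        # Re-pick each medoid as the member minimising total intra-cluster distance.
        new_medoids = [
            min(members, key=lambda c: sum(lcs_distance(traces[c], traces[o]) for o in members))
            for members in clusters.values() if members
        ]
        if sorted(new_medoids) == sorted(medoids):
            break
        medoids = new_medoids
    return clusters

# Toy event traces of a bug life-cycle (hypothetical event names).
traces = [
    ("NEW", "ASSIGNED", "RESOLVED", "VERIFIED"),
    ("NEW", "ASSIGNED", "RESOLVED", "REOPENED", "RESOLVED"),
    ("NEW", "RESOLVED", "VERIFIED"),
    ("NEW", "ASSIGNED", "REOPENED", "ASSIGNED", "RESOLVED"),
]
print(k_medoids(traces, k=2))
```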
Testing and code reviews are known techniques to improve the quality and robustness of software. Unfortunately, the complexity of modern software systems makes it impossible to anticipate all possible problems that can occur at runtime, which limits what issues can be found using testing and reviews. Thus, it is of interest to consider autonomous self-healing software systems, which can automatically detect, diagnose, and contain unanticipated problems at runtime. Most research in this area has adopted a model-driven approach, where actual behavior is checked against a model specifying the intended behavior, and a controller takes action when the system behaves outside of the specification. However, it is not easy to develop these specifications, nor to keep them up to date as the system evolves. We posit that, with the recent advances in machine learning, such models may be learned by observing the system. Moreover, we argue that artificial immune systems (AISs) are particularly well suited for building self-healing systems, because of their anomaly detection and diagnosis capabilities. We present the state of the art in self-healing systems and in AISs, surveying some of the research directions that have been considered up to now. To help advance the state of the art, we develop a research agenda for building self-healing software systems using AISs, identifying required foundations and promising research directions.
2021-01-06 16:01:49.000000000
14,433
AI-powered education technologies can support students and teachers in computer science education. However, with the recent developments in generative AI, and especially the increasingly emerging popularity of ChatGPT, the effectiveness of using large language models for solving programming tasks has been underexplored. The present study examines ChatGPT's ability to generate code solutions at different difficulty levels for introductory programming courses. We conducted an experiment in which ChatGPT was tested on 127 randomly selected programming problems provided by Kattis, an automatic software grading tool for computer science programs often used in higher education. The results showed that ChatGPT could independently solve 19 of the 127 programming tasks generated and assessed by Kattis. Further, ChatGPT was found to be able to generate accurate code solutions for simple problems but encountered difficulties with more complex programming tasks. The results contribute to the ongoing debate on the utility of AI-powered tools in programming education.
2023-11-30 18:08:02.000000000
5,369
Over the last decade, researchers and engineers have developed a vast body of methodologies and technologies in requirements engineering for self-adaptive systems. Although existing studies have explored various aspects of this topic, few of them have categorized and summarized these areas of research in requirements modeling and analysis. This study aims to investigate the research themes based on the utilized modeling methods and RE activities. We conduct a thematic study within a systematic literature review. The results are derived by synthesizing the extracted data with statistical methods. This paper provides an updated review of the research literature, enabling researchers and practitioners to better understand the research themes in these areas and identify research gaps which need to be further studied.
2017-03-31 12:47:50.000000000
4,666
Deep learning has gained substantial popularity in recent years. Developers mainly rely on libraries and tools to add deep learning capabilities to their software. What kinds of bugs are frequently found in such software? What are the root causes of such bugs? What impacts do such bugs have? Which stages of the deep learning pipeline are more bug-prone? Are there any antipatterns? Understanding such characteristics of bugs in deep learning software has the potential to foster the development of better deep learning platforms, debugging mechanisms, and development practices, and to encourage the development of analysis and verification frameworks. Therefore, we study 2716 high-quality posts from Stack Overflow and 500 bug-fix commits from GitHub about five popular deep learning libraries (Caffe, Keras, TensorFlow, Theano, and Torch) to understand the types of bugs, their root causes and impacts, the bug-prone stages of the deep learning pipeline, and whether there are common antipatterns in this buggy software. The key findings of our study include: data bugs and logic bugs are the most severe bug types in deep learning software, appearing more than 48% of the time; the major root causes of these bugs are Incorrect Model Parameter (IPS) and Structural Inefficiency (SI), showing up more than 43% of the time. We also found that bugs in the usage of deep learning libraries have some common antipatterns that lead to a strong correlation of bug types among the libraries.
2019-06-03 09:52:26.000000000
11,716
The role of Enterprise Resource Planning (ERP) systems in digital transformation strategies has become an important aspect of how modern businesses stay competitive in the fast-paced digital landscape. ERP modernization refers to the process of updating an organization's ERP system to take advantage of the latest technological advancements and features. This article aims to provide insights into the role of ERP modernization in digital transformation and its impact on organizations. In particular, the article focuses on the PeopleSoft ERP system and its capabilities to support digital transformation initiatives. PeopleSoft, a widely used ERP system, has served organizations for over three decades. This article covers the functionalities of a modern ERP system, the benefits organizations can derive from implementing a modern ERP system, and the challenges organizations face in modernizing their ERP systems. This article also provides practical recommendations for overcoming these challenges and highlights the importance of considering factors such as scalability, security, and user experience while modernizing ERP systems.
2023-03-04 14:36:53.000000000
5,461
Automatic program repair (APR) techniques have the potential to reduce manual efforts in uncovering and repairing program defects during the code review (CR) process. However, the limited accuracy and considerable time costs associated with existing APR approaches hinder their adoption in industrial practice. One key factor is the under-utilization of review comments, which provide valuable insights into defects and potential fixes. Recent advancements in Large Language Models (LLMs) have enhanced their ability to comprehend natural and programming languages, enabling them to generate patches based on review comments. This paper conducts a comprehensive investigation into the effective utilization of LLMs for repairing CR defects. In this study, various prompts are designed and compared across mainstream LLMs using two distinct datasets from human reviewers and automated checkers. Experimental results demonstrate a remarkable repair rate of 72.97% with the best prompt, highlighting a substantial improvement in the effectiveness and practicality of automatic repair techniques.
2023-12-28 02:36:02.000000000
13,267
Dynamic program analysis (also known as profiling) is well known for its powerful capabilities of identifying performance inefficiencies in software packages. Although a large number of dynamic program analysis techniques have been developed in academia and industry, very few of them are widely used by software developers in their regular software development activities. There are three major reasons. First, the dynamic analysis tools (also known as profilers) are disjoint from coding environments such as IDEs and editors; frequently switching focus between them significantly complicates the entire cycle of software development. Second, mastering various tools to interpret their analysis results requires substantial effort; even worse, many tools have their own design of graphical user interfaces (GUIs) for data presentation, which steepens the learning curves. Third, most existing tools expose few interfaces to support user-defined analysis, which makes the tools less customizable to fulfill diverse user demands. We develop EasyView, a general solution to integrate the interpretation and visualization of various profiling results into coding environments, which bridges software developers with profilers to provide easy and intuitive dynamic analysis during the code development cycle. The novelty of EasyView is three-fold. First, we develop a generic data format, which enables EasyView to support mainstream profilers for different languages. Second, we develop a set of customizable schemes to analyze and visualize the profiles in intuitive ways. Third, we tightly integrate EasyView with popular coding environments, such as Microsoft Visual Studio Code, with easy code exploration and user interaction. Our evaluation shows that EasyView is able to support various profilers for different languages and provide unique insights into performance inefficiencies in different domains.
2023-12-24 13:12:39.000000000
205
Despite the recent trend of developing and applying neural source code models to software engineering tasks, the quality of such models is insufficient for real-world use. This is because there could be noise in the source code corpora used to train such models. In this paper, we adapt data-influence methods to detect such noise. Data-influence methods are used in machine learning to evaluate the similarity of a target sample to the correct samples in order to determine whether or not the target sample is noisy. Our evaluation results show that data-influence methods can identify noisy samples from neural code models in classification-based tasks. This approach will contribute to the larger vision of developing better neural source code models from a data-centric perspective, which is a key driver for developing useful source code models in practice.
2022-05-24 03:14:25.000000000
14,298
Machine Learning (ML) is an application of Artificial Intelligence (AI) that uses big data to produce complex predictions and decision-making systems, which would be challenging to obtain otherwise. To ensure the success of ML-enabled systems, it is essential to be aware of certain qualities of ML solutions (performance, transparency, fairness), known from a Requirement Engineering (RE) perspective as non-functional requirements (NFRs). However, when systems involve ML, NFRs for traditional software may not apply in the same ways; some NFRs may become more prominent or less important; NFRs may be defined over the ML model, data, or the entire system; and NFRs for ML may be measured differently. In this work, we aim to understand the state-of-the-art and challenges of dealing with NFRs for ML in industry. We interviewed ten engineering practitioners working with NFRs and ML. We find examples of (1) the identification and measurement of NFRs for ML, (2) identification of more and less important NFRs for ML, and (3) the challenges associated with NFRs and ML in the industry. This knowledge paints a picture of how ML-related NFRs are treated in practice and helps to guide future RE for ML efforts.
2021-09-01 09:41:11.000000000
9,560
Blockchain is a disruptive technology aimed at implementing secure decentralized distributed systems, in which transactional data can be shared, stored, and verified by participants of a system using cryptographic and consensus mechanisms, alleviating the need for a central authentication/verification authority. Contrary to popular belief, blockchain-based systems are not inherently secure by design; it is crucial for security software engineers to be aware of the various blockchain-specific architectural design decisions and choices and their consequences for the dependability of the software system. We argue that sub-optimal and ill-informed design decisions and choices of blockchain components and their configurations, including smart contracts, key management, cryptographic and consensus mechanisms, and on-chain vs. off-chain storage choices, can introduce security technical debt into the system. The technical debt metaphor can serve as a powerful tool for early, preventive, and transparent evaluation of the security design of blockchain-based systems by making the potential security technical debt visible to security software engineers. We review the core architectural components of blockchain-based systems and show how ill-informed or sub-optimal design decisions and configurations of these components can manifest as security technical debt. We contribute a taxonomy that classifies the blockchain-specific design decisions and choices and describe their connection to potential debts. The taxonomy can help architects of this category of systems avoid potential security risks by visualising the security technical debts and raising their visibility. We use examples from two case studies to discuss the taxonomy and its application.
2019-01-08 23:34:43.000000000
5,030
Setting up effective and efficient mechanisms for controlling software and system development projects is still challenging in industrial practice. On the one hand, necessary prerequisites such as established development processes, understanding of cause-effect relationships on relevant indicators, and sufficient sustainability of measurement programs are often missing. On the other hand, there are more fundamental methodological deficits related to the controlling process itself and to appropriate tool support. Additional activities that would guarantee the usefulness, completeness, and precision of the resulting controlling data are widely missing. This article presents a conceptual architecture for so-called Software Project Control Centers (SPCC) that addresses these challenges. The architecture includes mechanisms for getting sufficiently precise and complete data and for supporting the information needs of different stakeholders. In addition, an implementation of this architecture, the so-called Specula Project Support Environment, is sketched, and results from evaluating this implementation in industrial settings are presented.
2014-01-06 02:32:51.000000000
9,599
We propose the use of structured natural language (English) in specifying service choreographies, focusing on the what rather than the how of the required coordination of participant services in realising a business application scenario. The declarative approach we propose uses the OMG standard Semantics of Business Vocabulary and Rules (SBVR) as a modelling language. The service choreography approach has been proposed for describing the global orderings of the invocations on interfaces of participant services. We therefore extend SBVR with a notion of time which can capture the coordination of the participant services, in terms of the observable message exchanges between them. The extension is done using existing modelling constructs in SBVR, and hence respects the standard specification. The idea is that users - domain specialists rather than implementation specialists - can verify the requested service composition by directly reading the structured English used by SBVR. At the same time, the SBVR model can be represented in formal logic so it can be parsed and executed by a machine.
2015-12-19 02:15:49.000000000
12,872
The Distributed Messaging Systems (DMSs) used in IoT systems require timely and reliable data dissemination, which can be achieved through configurable parameters. However, the high-dimensional configuration space makes it difficult for users to find the best options that maximize application throughput while meeting specific latency constraints. Existing approaches to automatic software profiling have limitations, such as only optimizing throughput, not guaranteeing explicit latency limitations, and resulting in local optima due to discretizing parameter ranges. To overcome these challenges, a novel configuration tuning system called DMSConfig is proposed that uses machine learning and deep reinforcement learning. DMSConfig interacts with a data-driven environment prediction model, avoiding the cost of online interactions with the production environment. DMSConfig employs the deep deterministic policy gradient (DDPG) method and a custom reward mechanism to make configuration decisions based on predicted DMS states and performance. Experiments show that DMSConfig performs significantly better than the default configuration, is highly adaptive to serve tuning requests with different latency boundaries, and has similar throughput to prevalent parameter tuning tools with fewer latency violations.
2023-02-14 18:47:21.000000000
284
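The DMSConfig abstract above mentions a custom reward that trades off throughput against an explicit latency bound. One plausible shape for such a reward (purely illustrative; the paper's actual formulation may differ) is:

```python
def reward(throughput_msgs_per_s: float,
           p99_latency_ms: float,
           latency_bound_ms: float,
           penalty: float = 100_000.0) -> float:
    """Reward a configuration by its throughput, but penalise latency-bound violations.

    A hypothetical shaping: within the bound the agent simply maximises throughput;
    outside the bound it is penalised proportionally to how far it overshoots.
    """
    if p99_latency_ms <= latency_bound_ms:
        return throughput_msgs_per_s
    overshoot = (p99_latency_ms - latency_bound_ms) / latency_bound_ms
    return throughput_msgs_per_s - penalty * overshoot

# Example: a faster configuration that violates the latency bound can still score lower.
print(reward(50_000, 80, 100))   # within bound        -> 50000.0
print(reward(60_000, 150, 100))  # 50% overshoot       -> 10000.0
```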
Code changes are performed differently on mobile and non-mobile platforms. Prior work has investigated the differences in specific platforms. However, we still lack a deeper understanding of how code changes evolve across different software platforms. In this paper, we present a study investigating the frequency of changes and how source code, build, and test changes co-evolve on mobile and non-mobile platforms. We developed regression models to explain which factors influence the frequency of changes and applied the Apriori algorithm to find types of changes that frequently co-occur. Our findings show that non-mobile repositories have a higher number of commits per month, and our regression models suggest that being mobile significantly and negatively impacts the number of commits when controlling for confounding factors such as code size. We also found that developers do not usually change source code files together with build or test files. We argue that our results can provide valuable information for developers on how changes are performed on different platforms, so that practices adopted in successful software systems can be followed.
2019-10-22 18:33:28.000000000
13,627
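The abstract above applies the Apriori algorithm to find change types that frequently co-occur in commits. A self-contained sketch of the pairwise case (frequent 1- and 2-itemsets over hypothetical commit "baskets") follows; the actual study presumably used a full Apriori implementation over richer change types.

```python
from itertools import combinations
from collections import Counter

# Hypothetical commits, each tagged with the types of files it changed.
commits = [
    {"source", "test"},
    {"source"},
    {"source", "build"},
    {"source", "test"},
    {"build"},
    {"source"},
]
min_support = 0.3  # an itemset must appear in at least 30% of commits

# Frequent single items.
singles = Counter(item for c in commits for item in c)
frequent_1 = {frozenset([i]) for i, n in singles.items() if n / len(commits) >= min_support}

# Candidate pairs are built only from frequent items (the Apriori pruning step).
candidates = {a | b for a, b in combinations(frequent_1, 2)}
pair_counts = Counter(p for p in candidates for c in commits if p <= c)
frequent_2 = {p: n / len(commits) for p, n in pair_counts.items() if n / len(commits) >= min_support}

print("frequent pairs:", {tuple(sorted(p)): round(s, 2) for p, s in frequent_2.items()})
```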
Open source software (OSS) licenses regulate the conditions under which users can reuse, modify, and distribute the software legally. However, there exist various OSS licenses in the community, written in formal language, which are typically long and complicated to understand. In this paper, we conducted a 661-participant online survey to investigate the perspectives and practices of developers towards OSS licenses. The user study revealed a clear need for an automated tool to facilitate license understanding. Motivated by the user study and the fast growth of licenses in the community, we propose the first study towards automated license summarization. Specifically, we released the first high-quality text summarization dataset and designed two tasks, i.e., license text summarization (LTS), aiming at generating a relatively short summary for an arbitrary license, and license term classification (LTC), focusing on attitude inference towards a predefined set of key license terms (e.g., Distribute). Aiming at the two tasks, we present LiSum, a multi-task learning method to help developers overcome the obstacles of understanding OSS licenses. Comprehensive experiments demonstrated that the proposed joint training objective boosted the performance on both tasks, surpassing state-of-the-art baselines with gains of at least 5 points w.r.t. the F1 scores of four summarization metrics and achieving a 95.13% micro-average F1 score for classification simultaneously. We released all the datasets, the replication package, and the questionnaires for the community.
2023-09-08 14:17:00.000000000
2,608
Obtaining a relevant dataset is central to conducting empirical studies in software engineering. However, in the context of mining software repositories, the lack of appropriate tooling for large scale mining tasks hinders the creation of new datasets. Moreover, limitations related to data sources that change over time (e.g., code bases) and the lack of documentation of extraction processes make it difficult to reproduce datasets over time. This threatens the quality and reproducibility of empirical studies. In this paper, we propose a tool-supported approach facilitating the creation of large tailored datasets while ensuring their reproducibility. We leveraged all the sources feeding the Software Heritage append-only archive which are accessible through a unified programming interface to outline a reproducible and generic extraction process. We propose a way to define a unique fingerprint to characterize a dataset which, when provided to the extraction process, ensures that the same dataset will be extracted. We demonstrate the feasibility of our approach by implementing a prototype. We show how it can help reduce the limitations researchers face when creating or reproducing datasets.
2023-06-17 08:51:42.000000000
3,794
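The abstract above proposes a unique fingerprint that, given to the extraction process, pins down exactly which dataset gets extracted. A minimal sketch of that idea, hashing a canonical description of the extraction parameters (all field names here are hypothetical, not the tool's actual schema), might be:

```python
import hashlib
import json

def dataset_fingerprint(spec: dict) -> str:
    """Deterministic fingerprint of an extraction specification.

    Canonicalising the spec (sorted keys, no extra whitespace) means the same
    parameters always hash to the same fingerprint, which can be published with
    a study so the dataset can later be re-extracted identically.
    """
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical extraction spec against an append-only archive snapshot.
spec = {
    "archive_snapshot": "2023-06-01",
    "origins": ["github", "gitlab"],
    "language": "Python",
    "min_commits": 100,
    "selected_fields": ["revision_id", "author_date", "message"],
}
print(dataset_fingerprint(spec))
```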
Declarative approaches to process modeling are regarded as well suited for highly volatile environments as they provide a high degree of flexibility. However, problems in understanding and maintaining declarative business process models impede often their usage. In particular, how declarative models are understood has not been investigated yet. This paper takes a first step toward addressing this question and reports on an exploratory study investigating how analysts make sense of declarative process models. We have handed out real-world declarative process models to subjects and asked them to describe the illustrated process. Our qualitative analysis shows that subjects tried to describe the processes in a sequential way although the models represent circumstantial information, namely, conditions that produce an outcome, rather than a sequence of activities. Finally, we observed difficulties with single building blocks and combinations of relations between activities.
2015-11-05 19:57:03.000000000
7,791
Recently, requirements for the explainability of software systems have gained prominence. One of the primary motivators for such requirements is that explainability is expected to facilitate stakeholders' trust in a system. Although this seems intuitively appealing, recent psychological studies indicate that explanations do not necessarily facilitate trust. Thus, explainability requirements might not be suitable for promoting trust. One way to accommodate this finding is, we suggest, to focus on trustworthiness instead of trust. While these two may come apart, we ideally want both: a trustworthy system and the stakeholder's trust. In this paper, we argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness -- and that a system's explainability can crucially contribute to its trustworthiness.
2021-08-08 12:10:20.000000000
437
The embedding of fault tolerance provisions into the application layer of a programming language is a non-trivial task that has not yet found a satisfactory solution. Such a solution is very important, and the lack of a simple, coherent, and effective structuring technique for fault tolerance has been termed by researchers in this field the "software bottleneck of system development". The aim of this paper is to report on the current status of a novel fault tolerance linguistic structure for distributed applications characterized by soft real-time requirements. A compliant prototype architecture is also described. The key aspect of this structure is that it allows decomposing the target fault-tolerant application into three distinct components, respectively responsible for (1) the functional service, (2) the management of the fault tolerance provisions, and (3) the adaptation to the current environmental conditions. The paper also briefly mentions a few case studies and preliminary results obtained by exercising the prototype.
2014-01-13 09:08:53.000000000
2,417
The objective of this research work is to improve the degree of excellence by reducing the number of exceptions in the software. The modern age is increasingly concerned with the quality of software, and extensive research is being carried out in this direction. The rate of improvement of software quality largely depends on the development time, which is chiefly measured in clock hours. However, development time does not reflect the effort put in by the developer. A better parameter is the rate of improvement of the quality level, or the rate of improvement of the degree of excellence, with respect to time. This parameter requires predicting the error level and degree of excellence at a particular stage of development of the software. This paper describes an attempt to develop a system that predicts the rate of improvement of software quality at a particular point in time with respect to the number of lines of code present in the software. Having calculated the error level and degree of excellence at two points in time, we can estimate the rate of improvement of software quality with respect to time. This parameter can estimate the effort put in during development of the software and can add a new dimension to the understanding of software quality in the software engineering domain. To obtain the results, we used an indigenous tool for software quality prediction, and for graphical representation of data we used Microsoft Office 2007 charts.
2014-04-14 10:57:29.000000000
8,504
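The abstract above reasons about a rate of improvement computed from the error level and "degree of excellence" measured at two points in time. One hedged reading of that arithmetic (the paper's exact definitions are not reproduced here; both functions below are stand-ins) is:

```python
def degree_of_excellence(errors: int, loc: int) -> float:
    """A simple stand-in: the fraction of lines of code free of exceptions/errors."""
    return 1.0 - errors / loc

def improvement_rate(q1: float, q2: float, hours_between: float) -> float:
    """Rate of improvement of quality per hour of development between two snapshots."""
    return (q2 - q1) / hours_between

# Hypothetical snapshots of the same project.
q_week1 = degree_of_excellence(errors=120, loc=10_000)   # 0.98800
q_week2 = degree_of_excellence(errors=45,  loc=12_000)   # 0.99625
print(f"rate = {improvement_rate(q_week1, q_week2, hours_between=40):.6f} per hour")
```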
Software is now ubiquitous and involved in complex interactions with the human users and the physical world in so-called cyber-physical systems where the management of time is a major issue. Separation of concerns is a key asset in the development of these ever more complex systems. Two different kinds of separation exist: a first one corresponds to the different steps in a development leading from the abstract requirements to the system implementation and is qualified as vertical. It matches the commonly used notion of refinement. A second one corresponds to the various components in the system architecture at a given level of refinement and is called horizontal. Refinement has been studied thoroughly for the data, functional and concurrency concerns while our work focuses on the time modeling concern. This contribution aims at providing a formal construct for the verification of refinement in time models, through the definition of an order between strict partial orders used to relate the different instants in asynchronous systems. This relation allows the designer at the concrete level to distinguish events that are coincident at the abstract level while preserving the properties assessed at the abstract level. This work has been conducted using the proof assistant Agda and is connected to a previous work on the asynchronous language CCSL, which has also been modelled using the same tool.
2018-10-19 15:00:34.000000000
9,628
To address the increasing size and complexity of modern software systems, compositional verification separates the verification of single components from the verification of their composition. In architecture-based verification, the former is done using Model Checking, while the latter (although this does not seem to be the case in general) is done using interactive theorem proving (ITP). As of today, however, architects are usually not trained in using a full-fledged interactive theorem prover. Thus, to bridge the gap between ITP and the architecture domain, we developed APML: an architecture proof modeling language. APML allows one to sketch proofs about component composition at the level of architecture using notations similar to Message Sequence Charts. With this paper, we introduce APML: we describe the language, show its soundness and completeness for the verification of architecture contracts, and provide an algorithm to map an APML proof to a corresponding proof for the interactive theorem prover Isabelle. Moreover, we describe its implementation in terms of an Eclipse/EMF modeling application, demonstrate it by means of a running example, and evaluate it in terms of a larger case study. Although our results are promising, the case study also reveals some limitations, which lead to new directions for future work.
2019-07-04 18:31:09.000000000
12,396
The ongoing fourth Industrial Revolution depends mainly on robust Industrial Cyber-Physical Systems (ICPS). ICPS includes computing (software and hardware) abilities to control complex physical processes in distributed industrial environments. Industrial agents, originating from the well-established multi-agent systems field, provide complex and cooperative control mechanisms at the software level, allowing us to develop larger and more feature-rich ICPS. The IEEE P2660.1 standardisation project, "Recommended Practices on Industrial Agents: Integration of Software Agents and Low Level Automation Functions" focuses on identifying Industrial Agent practices that can benefit ICPS systems of the future. A key problem within this project is identifying the best-fit industrial agent practices for a given ICPS. This paper reports on the design and development of a tool to address this challenge. This tool, called IASelect, is built using graph databases and provides the ability to flexibly and visually query a growing repository of industrial agent practices relevant to ICPS. IASelect includes a front-end that allows industry practitioners to interactively identify best-fit practices without having to write manual queries.
2021-08-02 06:51:13.000000000
10,047
The rise of Generative Artificial Intelligence systems ("AI systems") has created unprecedented social engagement. AI code generation systems provide responses (output) to questions or requests by accessing the vast library of open-source code created by developers over the past few decades. However, they do so by allegedly stealing the open-source code stored in virtual libraries, known as repositories. This Article focuses on how this happens and whether there is a solution that protects innovation and avoids years of litigation. We also touch upon the array of issues raised by the relationship between AI and copyright. Looking ahead, we propose the following: (a) immediate changes to the licenses for open-source code created by developers that will limit access and/or use of any open-source code to humans only; (b) we suggest revisions to the Massachusetts Institute of Technology ("MIT") license so that AI systems are required to procure appropriate licenses from open-source code developers, which we believe will harmonize standards and build social consensus for the benefit of all of humanity, rather than promote profit-driven centers of innovation; (c) we call for urgent legislative action to protect the future of AI systems while also promoting innovation; and (d) we propose a shift in the burden of proof to AI systems in obfuscation cases.
2023-06-14 16:20:59.000000000
7,719
Dependency injection (DI) is generally known to improve maintainability by keeping application classes separate from the library. Particularly within the Java environment, there are many applications using the principles of DI with the aim of improving maintainability. Some existing work provides inferences about the impact of DI on maintainability, but no conclusive evidence is provided. The fact that there are no publicly available tools for quantifying DI makes such evidence more difficult to produce. In this paper, we propose a novel metric, DCBO, to measure the proportion of DI in a project based on weighted couplings. We describe how DCBO can serve as a more meaningful metric in computing maintainability when DI is also considered. The metric is implemented in the CKJM-Analyzer, an extension of the CKJM tool that utilizes static code analysis to detect DI. We discuss the algorithmic approach behind the static analysis and prove the soundness of the tool using a set of open-source Java projects.
2022-05-10 16:17:20.000000000
15,479
Search is an integral part of the software development process. Developers often use search engines to look for information during development, including reusable code snippets, API understanding, and reference examples. Developers tend to prefer general-purpose search engines like Google, which are often not optimized for code-related documents and use search strategies and ranking techniques that are more optimized for generic, non-code-related information. In this paper, we explore whether a general-purpose search engine like Google is an optimal choice for code-related searches. In particular, we investigate whether the performance of searching with Google varies for code vs. non-code-related searches. To analyze this, we collect search logs from 310 developers containing nearly 150,000 search queries issued to Google and the associated result clicks. To differentiate between code-related and non-code-related searches, we build a model which identifies the code intent of queries. Leveraging this model, we build an automatic classifier that detects code-related and non-code-related queries. We confirm the effectiveness of the classifier on manually annotated queries, where the classifier achieves a precision of 87%, a recall of 86%, and an F1-score of 87%. We apply this classifier to automatically annotate all the queries in the dataset. Analyzing this dataset, we observe that code-related searching often requires more effort (e.g., time, result clicks, and query modifications) than general non-code search, which indicates that code search with a general-purpose search engine is less effective.
2018-03-20 16:52:12.000000000
11,035
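The abstract above trains a classifier to label search queries as code-related or not and reports precision/recall/F1 around 87%. A small sketch in the same spirit (tiny hypothetical query set, TF-IDF features, logistic regression; not the authors' model) could be:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labelled queries: 1 = code-related, 0 = not.
queries = [
    "python list comprehension filter none", "java NullPointerException in stream map",
    "convert string to datetime pandas", "git rebase onto another branch",
    "best pizza near office", "weather tomorrow", "cheap flights to berlin",
    "how to sort dict by value python", "difference between tcp and udp",
    "movie showtimes tonight", "regex to match email address", "news today",
]
labels = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    queries, labels, test_size=0.33, random_state=42, stratify=labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

p, r, f1, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="binary", zero_division=0)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```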
Sampling techniques, such as t-wise interaction sampling, are used to enable efficient testing of configurable systems. This is achieved by generating a small yet representative sample of configurations for a system, which circumvents testing the entire solution space. However, by design, most recent approaches for t-wise interaction sampling only consider combinations of configuration options from a configurable system's variability model and do not take into account their mapping onto the solution space, thus potentially leaving critical implementation artifacts untested. Tartler et al. address this problem by considering presence conditions of implementation artifacts rather than pure configuration options, but do not consider the possible interactions between these artifacts. In this paper, we introduce t-wise presence condition coverage, which extends the approach of Tartler et al. by using presence conditions extracted from the code as the basis for covering t-wise interactions. This ensures that all t-wise interactions of implementation artifacts are included in the sample and that the chance of detecting combinations of faulty configuration options is increased. We evaluate our approach in terms of testing efficiency and testing effectiveness by comparing it to existing t-wise interaction sampling techniques. We show that t-wise presence condition sampling is able to produce mostly smaller samples compared to t-wise interaction sampling, while guaranteeing a t-wise presence condition coverage of 100%.
2022-05-27 14:10:12.000000000
9,958
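The abstract above measures whether all t-wise interactions of implementation artifacts, via their presence conditions, are exercised by a configuration sample. A toy sketch for t = 2, with hypothetical presence conditions given as Boolean functions over configurations, is shown below. Note that it naively treats every value combination as required; in practice, combinations that contradict the variability model would be excluded from the "wanted" set.

```python
from itertools import combinations, product

# Hypothetical presence conditions of implementation artifacts (feature map -> bool).
presence_conditions = {
    "net_stack":  lambda cfg: cfg["NETWORKING"],
    "tls_layer":  lambda cfg: cfg["NETWORKING"] and cfg["CRYPTO"],
    "log_module": lambda cfg: cfg["LOGGING"],
}

def pairwise_pc_coverage(sample):
    """Fraction of 2-wise (present/absent) combinations of artifacts covered by the sample."""
    artifacts = sorted(presence_conditions)
    covered, wanted = set(), set()
    # Which value combinations does the sample actually exercise?
    for cfg in sample:
        values = {a: presence_conditions[a](cfg) for a in artifacts}
        for a, b in combinations(artifacts, 2):
            covered.add((a, values[a], b, values[b]))
    # Naively, every (True/False, True/False) combination per pair is wanted.
    for a, b in combinations(artifacts, 2):
        for va, vb in product([True, False], repeat=2):
            wanted.add((a, va, b, vb))
    return len(covered & wanted) / len(wanted)

sample = [
    {"NETWORKING": True,  "CRYPTO": True,  "LOGGING": False},
    {"NETWORKING": True,  "CRYPTO": False, "LOGGING": True},
    {"NETWORKING": False, "CRYPTO": False, "LOGGING": False},
]
print(f"2-wise presence-condition coverage: {pairwise_pc_coverage(sample):.0%}")
```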
The authors' industry experiences suggest that compiler warnings, a lightweight version of program analysis, are valuable early bug detection tools. Significant costs are associated with patches and security bulletins for issues that could have been avoided if compiler warnings were addressed. Yet, the industry's attitude towards compiler warnings is mixed. Practices range from silencing all compiler warnings to having a zero-tolerance policy as to any warnings. Current published data indicates that addressing compiler warnings early is beneficial. However, support for this value theory stems from grey literature or is anecdotal. Additional focused research is needed to truly assess the cost-benefit of addressing warnings.
2022-01-24 05:27:20.000000000
1,253
In industry, RESTful APIs are widely used to build modern Cloud Applications. Testing them is challenging, because not only do they rely on network communications, but they also deal with external services like databases. Therefore, a large amount of research has sprouted in recent years on how to automatically verify this kind of web service. In this paper, we review the current state-of-the-art on testing RESTful APIs, based on an analysis of 92 scientific articles. This review categorizes and summarizes the existing scientific work on this topic and discusses the current challenges in the verification of RESTful APIs.
2022-12-27 04:39:26.000000000
6,228
Bitcoin's success has led to significant interest in its underlying components, particularly Blockchain technology. Over 10 years after Bitcoin's initial release, the community still suffers from a lack of clarity regarding what properties define Blockchain technology, its relationship to similar technologies, and which of its proposed use-cases are tenable and which are little more than hype. In this paper we answer four common questions regarding Blockchain technology: (1) what exactly Blockchain technology is, (2) what capabilities it provides, (3) what good applications exist for it, and (4) how it relates to other approaches to distributed technologies (e.g., distributed databases). We accomplish this goal by using grounded theory (a structured approach to gathering and analyzing qualitative data) to thoroughly analyze a large corpus of literature on Blockchain technology. This method enables us to answer the above questions while limiting researcher bias, separating thought leadership from peddled hype, and identifying open research questions related to Blockchain technology. The audience for this paper is broad, as it aims to help researchers in a variety of areas come to a better understanding of Blockchain technology and identify whether it may be of use in their own research.
2019-09-24 13:23:16.000000000
365
By adequately employing complex event processing (CEP), valuable information can be extracted from the underlying complex system and used in control and decision situations. An example application area is the management of IT systems for maintaining required dependability attributes of services based on the infrastructure. In practice, one usually faces the problem of a vast number of distributed event sources, which makes depicting complex event patterns a non-trivial task. In this paper, I present a novel, model-driven approach to define complex event patterns and directly generate the event processing configuration for an open-source CEP engine widely used in industry. One of the key results of my research work is a textual modeling language called Complex Event Description Language (CEDL), which is presented through its algebraic semantics and some typical examples.
2012-04-09 14:05:02.000000000
3,486
Automatically generated static code warnings suffer from a large number of false alarms. Hence, developers only take action on a small percent of those warnings. To better predict which static code warnings should not be ignored, we suggest that analysts need to look deeper into their algorithms to find choices that better improve the particulars of their specific problem. Specifically, we show here that effective predictors of such warnings can be created by methods that locally adjust the decision boundary (between actionable warnings and others). These methods yield a new high water-mark for recognizing actionable static code warnings. For eight open-source Java projects (cassandra, jmeter, commons, lucene-solr, maven, ant, tomcat, derby) we achieve perfect test results on 4/8 datasets and, overall, a median AUC (area under the true negatives, true positives curve) of 92%.
2022-05-02 02:42:16.000000000
2,300
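The abstract above reports that locally adjusting the decision boundary between actionable and ignorable warnings improves results. A small sketch of one such adjustment, picking a per-dataset probability threshold on a validation split instead of the default 0.5 (synthetic data; not the paper's method in detail), follows:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for static-warning features with few actionable (positive) warnings.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.85, 0.15], random_state=1)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=1, stratify=y)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=1, stratify=y_tmp)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# "Local" boundary adjustment: choose the threshold that maximises F1 on validation data.
val_scores = clf.predict_proba(X_val)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
best_t = max(thresholds, key=lambda t: f1_score(y_val, val_scores >= t))

test_scores = clf.predict_proba(X_te)[:, 1]
print(f"chosen threshold: {best_t:.2f}")
print(f"test AUC: {roc_auc_score(y_te, test_scores):.3f}, "
      f"test F1 at tuned threshold: {f1_score(y_te, test_scores >= best_t):.3f}")
```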
When Requirements Engineering (RE) models are unreasonably complex, they cannot support efficient decision making. SHORT is a tool that simplifies that reasoning by exploiting the "key" decisions within RE models. These "keys" have the property that, once values are assigned to them, it is very fast to reason over the remaining decisions. Using these "keys", reasoning about RE models can be greatly SHORTened by focusing stakeholder discussion on just these key decisions. This paper evaluates the SHORT tool on eight complex RE models. We find that the keys typically make up only 12% of all decisions. Since they are so few in number, keys can be used to reason faster about models. For example, using keys, we can optimize over those models (to achieve the most goals at least cost) two to three orders of magnitude faster than standard methods. Better yet, finding those keys is not difficult: SHORT runs in low-order polynomial time and terminates in a few minutes for the largest models.
2017-02-14 17:51:33.000000000
862
This paper presents a transpiler framework for converting RTL pseudo code of the POWER Instruction Set Architecture (ISA) to C code, enabling its execution on the Cavatools simulator. The transpiler consists of a lexer and parser, which parse the RTL pseudo code and generate corresponding C code representations. The lexer tokenizes the input code, while the parser applies grammar rules to build an abstract syntax tree (AST). The transpiler ensures compatibility with the Cavatools simulator by generating C code that adheres to its requirements. The resulting C code can be executed on the Cavatools simulator, allowing developers to analyze the instruction-level performance of the Power ISA in real time. The proposed framework facilitates the seamless integration of RTL pseudo code into the Cavatools ecosystem, enabling comprehensive performance analysis and optimization of Power ISA-based code.
2023-06-13 16:28:37.000000000
11,137
Call graphs depict the static, caller-callee relation between "functions" in a program. With most source/target languages supporting functions as the primitive unit of composition, call graphs naturally form the fundamental control flow representation available to understand/develop software. They are also the substrate on which various interprocedural analyses are performed and are an integral part of program comprehension/testing. Given their universality and usefulness, it is imperative to ask if call graphs exhibit any intrinsic graph theoretic features -- across versions, program domains and source languages. This work is an attempt to answer these questions: we present and investigate a set of meaningful graph measures that help us understand call graphs better; we establish how these measures correlate, if at all, across different languages and program domains; we also assess the overall, language-independent software quality by suitably interpreting these measures.
2008-03-27 22:58:43.000000000
6,798
We present TEGCER, an automated feedback tool for novice programmers. TEGCER uses supervised classification to match compilation errors in new code submissions with relevant pre-existing errors, submitted by other students before. The dense neural network used to perform this classification task is trained on 15000+ error-repair code examples. The proposed model yields a test set classification Pred[USER] accuracy of 97.7% across 212 error category labels. Using this model as its base, TEGCER presents students with the closest relevant examples of solutions for their specific error on demand.
2019-08-30 08:55:44.000000000
13,695
The stakeholders of a system are legitimately interested in whether and how its architecture reflects their respective concerns at each point of its development and maintenance processes. Having such knowledge available at all times would enable them to continually adjust their system's structure at each juncture and reduce the buildup of technical debt that can be hard to reduce once it has persisted over many iterations. Unfortunately, software systems often lack reliable and current documentation about their architecture. In order to remedy this situation, researchers have conceived a number of architectural recovery methods, some of them concern-oriented. However, the design choices forming the bases of most existing recovery methods mean that none of them has a complete set of desirable qualities for the purpose stated above. Tailoring a recovery to a system is either not possible or possible only through iterative experiments with numeric parameters. Furthermore, limitations in their scalability make it prohibitive to apply the existing techniques to large systems. Finally, since several current recovery methods employ non-deterministic sampling, their inconsistent results do not lend themselves well to tracking a system's course over several versions, as needed by its stakeholders. RELAX (RELiable Architecture EXtraction), a new concern-based recovery method that uses text classification, addresses these issues efficiently by (1) assembling the overall recovery result from smaller, independent parts, (2) basing it on an algorithm with linear time complexity and (3) being tailorable to the recovery of a single system or a sequence thereof through the selection of meaningfully named, semantic topics. An intuitive, informative architectural visualization rounds out RELAX's contributions. RELAX is illustrated on a number of existing open-source systems and compared to other recovery methods.
2019-03-14 12:33:22.000000000
1,407
Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. These approaches treat queries and APIs as bags of words (i.e., keyword matching or word-to-word alignment) and lack a deep understanding of the semantics of the query. We propose DeepAPI, a deep learning based approach to generate API usage sequences for a given natural language query. Instead of a bag-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches.
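A minimal sketch of an RNN Encoder-Decoder of the kind DeepAPI adapts, written in PyTorch: a query word sequence is encoded into a fixed-length context vector from which API tokens are generated. Vocabulary sizes, embedding and hidden dimensions are illustrative assumptions, not DeepAPI's actual configuration.

```python
# Hedged sketch of an RNN encoder-decoder: a query word sequence is encoded
# into a fixed-length context vector, from which an API token sequence is
# generated.  Vocabulary sizes and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqAPI(nn.Module):
    def __init__(self, q_vocab=5000, api_vocab=3000, emb=128, hid=256):
        super().__init__()
        self.enc_emb = nn.Embedding(q_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.dec_emb = nn.Embedding(api_vocab, emb)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, api_vocab)

    def forward(self, query_ids, api_ids):
        _, context = self.encoder(self.enc_emb(query_ids))   # fixed-length context vector
        dec_out, _ = self.decoder(self.dec_emb(api_ids), context)
        return self.out(dec_out)                              # logits over the API vocabulary

model = Seq2SeqAPI()
query = torch.randint(0, 5000, (2, 7))   # batch of 2 queries, 7 words each
apis = torch.randint(0, 3000, (2, 5))    # teacher-forced API prefixes
print(model(query, apis).shape)          # torch.Size([2, 5, 3000])
```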
2016-05-23 17:26:33.000000000
5,773
While there has been great progress regarding the technology and its implementation for vehicles equipped with automated driving systems (ADS), the problem of how to prove their safety as a necessary precondition prior to market launch remains unsolved. One promising solution is scenario-based test approaches; however, there is no commonly accepted way to systematically generate and extract the set of relevant scenarios to be tested that sufficiently captures the real-world traffic dynamics, especially for urban operational design domains. Within the scope of this paper, the overall concept of a novel simulation-based toolchain for the development and testing of ADS-equipped vehicles in urban environments is presented. Based on previous work regarding highway environments, the developed enhancements aim to enable the toolchain to deal with the increased complexity of more complex road networks with multi-modal interactions of various traffic participants. Based on derived requirements, a thorough explanation of the different modules constituting the toolchain is given, showing first results and identified research gaps, respectively. A closer look is taken at two use cases: First, it is investigated whether the toolchain is capable of serving as a synthetic data source within the development phase of ADS-equipped vehicles to enrich a scenario database in terms of the extent, complexity and impacts of different what-if scenarios for future mixed traffic. Second, it is analyzed how to combine the individual advantages of real recorded data and an agent-based simulation within a so-called adaptive replay-to-sim approach to support the testing phase of an ADS-equipped vehicle. The developed toolchain contributes to the overarching goal of a commonly accepted methodology for the validation and safety proof of ADS-equipped vehicles, especially in urban environments.
2021-09-07 10:41:02.000000000
11,060
This Master thesis examines issues of interoperability and integration between Classic Information Science (CIS) and Quantum Information Science (QIS). It provides a short introduction to the Extensible Markup Language (XML) and proceeds to describe the development steps that have led to a prototype XML specification for quantum computing (QIS-XML). QIS-XML is a proposed framework, based on the widely used standard (XML), to describe, visualize, exchange and process quantum gates and quantum circuits. It also provides a potential approach to a generic programming language for quantum computers through the concept of XML-driven compilers. Examples are provided for the description of commonly used quantum gates and circuits, accompanied by tools to visualize them in standard web browsers. An algorithmic example is also presented, performing a simple addition operation with quantum circuits and running the program on a quantum computer simulator. Overall, this initial effort demonstrates how XML technologies could be at the core of the architecture for describing and programming quantum computers. By leveraging a widely accepted standard, QIS-XML also builds a bridge between classic and quantum IT, which could foster the acceptance of QIS by the ICT community and facilitate the understanding of quantum technology by IT experts. This would support the consolidation of Classic Information Science and Quantum Information Science into a Complete Information Science, a challenge that could be referred to as the "Information Science Grand Unification Challenge".
2011-06-08 15:25:40.000000000
12,545
Multi-tenancy is one of the most important concepts for any Software as a Service (SaaS) application. A multi-tenant SaaS application serves a large number of tenants with a single application instance. A complex SaaS application that serves a significant number of tenants could have a huge number of customizations with complicated relationships, which increases the customization complexity and reduces the customization understandability. Modeling such customizations, validating each tenant's customization, and adapting SaaS applications on the fly based on each tenant's requirements become very complex tasks. To mitigate these challenges, we propose an aspect-oriented approach that makes use of the Orthogonal Variability Model (OVM) and Metagraphs. The OVM is used to provide the tenants with a simple and understandable customization model. A Metagraph-based algorithm has been developed to validate tenants' customizations. On the other hand, the aspect-oriented approach offers a high level of runtime adaptability.
2014-09-01 13:24:14.000000000
4,955
There is an urgent societal need to assess whether autonomous vehicles (AVs) are safe enough. From published quantitative safety and reliability assessments of AVs, we know that, given the goal of predicting very low rates of accidents, road testing alone requires infeasible numbers of miles to be driven. However, previous analyses do not consider any knowledge prior to road testing - knowledge which could bring substantial advantages if the AV design allows strong expectations of safety before road testing. We present the advantages of a new variant of Conservative Bayesian Inference (CBI), which uses prior knowledge while avoiding optimistic biases. We then study the trend of disengagements (take-overs by human drivers) by applying Software Reliability Growth Models (SRGMs) to data from Waymo's public road testing over 51 months, in view of the practice of software updates during this testing. Our approach is to not trust any specific SRGM, but to assess forecast accuracy and then improve forecasts. We show that, coupled with accuracy assessment and recalibration techniques, SRGMs could be a valuable test planning aid.
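For illustration only, the sketch below fits one classic SRGM (the Goel-Okumoto model, with mean value function m(t) = a(1 - e^(-bt))) to cumulative disengagement counts and forecasts the next period; the data are synthetic placeholders, and the study above deliberately assesses and recalibrates several SRGMs rather than trusting a single one.

```python
# Hedged sketch: fitting the Goel-Okumoto SRGM, m(t) = a * (1 - exp(-b t)),
# to cumulative disengagement counts and forecasting the next period.
# The observations below are synthetic placeholders, not Waymo data.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

months = np.arange(1, 25)                                        # observation periods
observed = 120 * (1 - np.exp(-0.08 * months)) \
           + np.random.default_rng(1).normal(0, 2, 24)           # synthetic cumulative counts

(a_hat, b_hat), _ = curve_fit(goel_okumoto, months, observed, p0=(100.0, 0.1))
forecast = goel_okumoto(25, a_hat, b_hat) - goel_okumoto(24, a_hat, b_hat)
print(f"expected new disengagements in month 25: {forecast:.2f}")
```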
2019-08-16 09:10:48.000000000
3,662
Software reliability analysis is performed at various stages during the process of engineering software as an attempt to evaluate if the software reliability requirements have been (or might be) met. In this report, I present a summary of some fundamental black-box and white-box software reliability models. I also present some general shortcomings of these models and suggest avenues for further research.
2013-04-11 11:35:44.000000000
5,411
Can we simplify explanations for software analytics? Maybe. Recent results show that systems often exhibit a "keys effect"; i.e. a few key features control the rest. Just to say the obvious, for systems controlled by a few keys, explanation and control is just a matter of running a handful of "what-if" queries across the keys. By exploiting the keys effect, it should be possible to dramatically simplify even complex explanations, such as those required for ethical AI systems.
2021-07-06 20:32:14.000000000
11,756
We study the problem of safety verification of direct perception neural networks, where camera images are used as inputs to produce high-level features for autonomous vehicles to make control decisions. Formal verification of direct perception neural networks is extremely challenging, as it is difficult to formulate the specification that requires characterizing input as constraints, while the number of neurons in such a network can reach millions. We approach the specification problem by learning an input property characterizer which carefully extends a direct perception neural network at close-to-output layers, and address the scalability problem by a novel assume-guarantee based verification approach. The presented workflow is used to understand a direct perception neural network (developed by Audi) which computes the next waypoint and orientation for autonomous vehicles to follow.
2019-04-05 12:30:31.000000000
3,845
The creation of Linked Data from raw data sources is, in theory, no rocket science (pun intended). Depending on the nature of the input and the mapping technology in use, it can become a quite tedious task. For our work on mapping real-life touristic data to the schema.org vocabulary we used RML, but soon found that the existing Java mapper implementations reached their limits and were not sufficient for our use cases. In this paper we describe a new implementation of an RML mapper. Written with the JavaScript-based NodeJS framework, it performs quite well for our use cases, where we work with large XML and JSON files. The performance testing and the execution of the RML test cases have shown that the implementation has great potential to perform heavy mapping tasks in reasonable time, but comes with some limitations regarding JOINs, Named Graphs and inputs other than XML and JSON - which is fine at the moment, due to the nature of the given use cases.
2019-03-08 08:21:09.000000000
11,377
Increasing code velocity is a common goal for a variety of software projects. The efficiency of the code review process significantly impacts how fast the code gets merged into the final product and reaches the customers. We conducted a survey to study the code velocity-related beliefs and practices in place. We analyzed 75 completed surveys from 39 participants from the industry and 36 from the open-source community. Our critical findings are (a) the industry and open-source community hold a similar set of beliefs, (b) quick reaction time is of utmost importance and applies to the tooling infrastructure and the behavior of other engineers, (c) time-to-merge is the essential code review metric to improve, (d) engineers have differing opinions about the benefits of increased code velocity for their career growth, and (e) the controlled application of the commit-then-review model can increase code velocity. Our study supports the continued need to invest in and improve code velocity regardless of the underlying organizational ecosystem.
2023-11-02 11:54:58.000000000
2,198
Machine learning-based malware detection dominates current security defense approaches for Android apps. However, due to the evolution of Android platforms and malware, such existing techniques are widely limited by their need for constant retraining, which is costly, and by their reliance on new malware samples that may not be timely available. As a result, new and emerging malware slips through, as seen from the continued surge of malware in the wild. Thus, a more practical detector needs not only to be accurate but, more critically, to be able to sustain its capabilities over time without frequent retraining. In this paper, we study how Android apps evolve as a population over time, in terms of their behaviors related to accesses to sensitive information and operations. We first perform a longitudinal characterization of 6K benign and malicious apps developed across seven years, with a focus on these sensitive accesses in app executions. Our study reveals, during this long evolution, a consistent, clear differentiation between malware and benign apps regarding such accesses, measured by relative statistics of relevant method calls. Following these findings, we developed DroidSpan, a novel classification system based on a new behavioral profile for Android apps. Through an extensive evaluation, we showed that DroidSpan can not only effectively detect malware but also sustain high detection accuracy (93% F1 measure) for four years (with 81% F1 for five years). Through a dedicated study, we also showed its resiliency to sophisticated evasion schemes. By comparing to a state-of-the-art malware detector, we demonstrated the largely superior sustainability of our approach at reasonable costs.
2018-07-19 23:35:14.000000000
665
Software engineering is knowledge-intensive and requires software developers to continually search for knowledge, often on community question answering platforms such as Stack Overflow. Such information sharing platforms do not exist in isolation, and part of the evidence that they exist in a broader software documentation ecosystem is the common presence of hyperlinks to other documentation resources found in forum posts. With the goal of helping to improve the information diffusion between Stack Overflow and other documentation resources, we conducted a study to answer the question of how and why documentation is referenced in Stack Overflow threads. We sampled and classified 759 links from two different domains, regular expressions and Android development, to qualitatively and quantitatively analyze the links' context and purpose, including attribution, awareness, and recommendations. We found that links on Stack Overflow serve a wide range of distinct purposes, ranging from citation links attributing content copied into Stack Overflow, over links clarifying concepts using Wikipedia pages, to recommendations of software components and resources for background reading. This purpose spectrum has major corollaries, including our observation that links to documentation resources are a reflection of the information needs typical to a technology domain. We contribute a framework and method to analyze the context and purpose of Stack Overflow links, a public dataset of annotated links, and a description of five major observations about linking practices on Stack Overflow. We further point to potential tool support to enhance the information diffusion between Stack Overflow and other documentation resources.
2019-06-07 09:01:22.000000000
3,170
Stepwise refinement and Design-by-Contract are two formal approaches for modelling systems. These approaches are widely used in the development of systems, and both have advantages and disadvantages. This thesis aims to answer whether it is possible to combine both approaches in the development of systems, providing the user with the benefits of both. We answer this question by translating the stepwise refinement method with Event-B to Design-by-Contract with Java and JML, so users can take full advantage of both formal approaches without losing their benefits. This thesis presents a set of syntactic rules that translate Event-B to JML-annotated Java code. It also presents the implementation of the syntactic rules as the EventB2Java tool. We used the tool to translate several Event-B models. It generated JML-annotated Java code for all the considered models, and this code serves as an initial implementation. We also used EventB2Java for the development of two software applications. Additionally, we compared EventB2Java against two other tools for Event-B code generation. EventB2Java enables users to start the software development process in Event-B, where users can model the system and prove its consistency, and then transition to JML-annotated Java code, where users can continue the development process.
2016-01-30 18:13:15.000000000
12,697
Measuring and evaluating source code similarity is a fundamental software engineering activity that embraces a broad range of applications, including but not limited to code recommendation, duplicate code, plagiarism, malware, and smell detection. This paper proposes a systematic literature review and meta-analysis on code similarity measurement and evaluation techniques to shed light on the existing approaches and their characteristics in different applications. We initially found over 10000 articles by querying four digital libraries and ended up with 136 primary studies in the field. The studies were classified according to their methodology, programming languages, datasets, tools, and applications. A deep investigation reveals 80 software tools, working with eight different techniques on five application domains. Nearly 49% of the tools work on Java programs and 37% support C and C++, while there is no support for many programming languages. A noteworthy point was the existence of 12 datasets related to source code similarity measurement and duplicate codes, of which only eight were publicly accessible. The lack of reliable datasets, empirical evaluations, hybrid methods, and focus on multi-paradigm languages are the main challenges in the field. Emerging applications of code similarity measurement concentrate on the development phase in addition to the maintenance phase.
2023-06-27 00:26:15.000000000
13,119
The overall goal of the described research project was to create applicable quality assurance patterns for Java software systems using the aspect-oriented programming language extension AspectJ 5. We tried to develop aspects to check static quality criteria such as a variable mutator convention and architectural layering rules. We successfully developed aspects for automating the following dynamic quality criteria: Parameterized Exception Chaining, Comfortable Declaration of Parameterized Exceptions, and Not-Null Checking of Reference Variables.
2010-06-25 01:00:30.000000000
8,173
This project aims to build an elastic web application that can automatically scale out and scale in on-demand and cost-effectively by utilizing cloud resources, specifically from Amazon Web Services (AWS). The application is an image classification service exposed as a RESTful web service for clients to access. The infrastructure is divided into two tiers, the Web-Tier and the Application-Tier, with the former providing a user interface for uploading images, while the latter contains core functionality for image classification, business logic, and database manipulation functions. AWS EC2, SQS, and S3 resources are utilized to create this infrastructure, and scaling in and out of resources is determined by the number of incoming images. The project successfully demonstrated the implementation of an image classification application using AWS, which can be used in various industries, such as medical diagnosis, agriculture, retail, security, environmental monitoring, and manufacturing. However, the evaluation of the system based on metrics of response time, boot time, and accuracy highlighted some issues that need to be addressed to improve the application's performance. Overall, the scalability and cost-effectiveness of the infrastructure make it a suitable choice for developing image classification applications.
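A hedged boto3 sketch of the scale-out decision described above: read the SQS backlog of pending images and launch Application-Tier EC2 workers when it grows. The queue URL, AMI ID, instance type, tag names, and thresholds are placeholders, not the project's actual configuration.

```python
# Hedged sketch of the Application-Tier scale-out decision: read the SQS
# backlog of pending images and launch EC2 workers when it grows.
# Queue URL, AMI ID, instance type, tags and limits are placeholders.
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-requests"   # placeholder
AMI_ID, INSTANCE_TYPE, MAX_WORKERS = "ami-0abcdef1234567890", "t2.micro", 20    # placeholders

sqs = boto3.client("sqs")
ec2 = boto3.client("ec2")

def pending_images():
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"])
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])

def running_workers():
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:tier", "Values": ["app"]},
        {"Name": "instance-state-name", "Values": ["pending", "running"]}])
    return sum(len(r["Instances"]) for r in resp["Reservations"])

def scale_out():
    needed = min(pending_images(), MAX_WORKERS) - running_workers()
    if needed > 0:
        ec2.run_instances(ImageId=AMI_ID, InstanceType=INSTANCE_TYPE,
                          MinCount=needed, MaxCount=needed,
                          TagSpecifications=[{"ResourceType": "instance",
                                              "Tags": [{"Key": "tier", "Value": "app"}]}])
```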
2023-05-01 15:37:39.000000000
15,423
Recent years have seen the successful application of deep learning to software engineering (SE). In particular, the development and use of pre-trained models of source code has enabled state-of-the-art results to be achieved on a wide variety of SE tasks. This paper provides an overview of this rapidly advancing field of research and reflects on future research directions.
2022-05-23 05:58:28.000000000
7,596
Bug-fix benchmarks are fundamental in advancing various sub-fields of software engineering such as automatic program repair (APR) and fault localization (FL). A good benchmark must include recent examples that accurately reflect the technologies and development practices of today. To be executable in the long term, a benchmark must feature test suites that do not degrade over time due to, for example, dependencies that are no longer available. Existing benchmarks fail to meet both criteria. For instance, Defects4J, one of the foremost Java benchmarks, last received an update in 2020. Moreover, full reproducibility has been neglected by the majority of existing benchmarks. In this paper, we present GitBug-Actions: a novel tool for building bug-fix benchmarks with modern and fully-reproducible bug-fixes. GitBug-Actions relies on the most popular CI platform, GitHub Actions, to detect bug-fixes and smartly execute the CI pipeline locally in a controlled and reproducible environment. To the best of our knowledge, we are the first to rely on GitHub Actions to collect bug-fixes. To demonstrate our toolchain, we deploy GitBug-Actions to build a proof-of-concept Go bug-fix benchmark containing executable, fully-reproducible bug-fixes from different repositories. A video demonstrating GitBug-Actions is available at: [LINK].
2023-10-22 07:16:02.000000000
12,056
Feedback is an essential composition operator in many classes of reactive and other systems. This paper studies feedback in the context of compositional theories with refinement. Such theories allow reasoning about systems on a component-by-component basis, and characterize substitutability as a refinement relation. Although compositional theories of feedback do exist, they are limited either to deterministic systems (functions) or input-receptive systems (total relations). In this work we propose a compositional theory of feedback which applies to non-deterministic and non-input-receptive systems (e.g., partial relations). To achieve this, we use the semantic frameworks of predicate and property transformers, and relations with fail and unknown values. We show how to define instantaneous feedback for stateless systems and feedback with unit delay for stateful systems. Both operations preserve the refinement relation, and both can be applied to non-deterministic and non-input-receptive systems.
2015-10-16 13:16:24.000000000
12,287
Context: Social aspects are of high importance for being successful using agile methods in software development. People are influenced by their cultural imprint, as the underlying cultural values guide how we think and act. Thus, one may assume that in multicultural agile software development teams, cultural characteristics influence the result in terms of the quality of the teamwork and, consequently, of the product to be delivered. Objective: We aim to identify barriers and potentials that may arise in multicultural agile software development teams to provide valuable strategies for both researchers and practitioners faced with barriers or unrealized potentials of cultural diversity. Method: The study is designed as a single-case study with two units of analysis, using a mixed-method design consisting of quantitative and qualitative methods. Results: First, our results suggest that the cultural characteristics at the team level need to be analyzed individually in intercultural teams. Second, we identified key potentials regarding cultural characteristics, such as an individual team subculture that fits agile values like open communication. Third, we derived strategies supporting the potentials of cultural diversity in agile software development teams. Conclusion: Our findings show that a deeper understanding of cultural influences in multicultural agile software development teams is needed. Based on the results, we are already preparing future work to validate the results in other industries.
2023-11-19 21:06:00.000000000
6,682
Logic programs have been used as a representation of object-oriented source code in academic prototypes for about a decade. This representation allows a clear and concise implementation of analyses of the object-oriented source code. The full potential of this approach is far from being explored. In this paper, we report on an application of the well-established theory of update propagation within logic programs. Given the representation of the object-oriented code as facts in a logic program, a change to the code corresponds to an update of these facts. We demonstrate how update propagation provides a generic way to generate incremental versions of such analyses.
2013-01-09 16:22:40.000000000
14,391
Security vulnerabilities in modern software are prevalent and harmful. While automated vulnerability detection tools have made promising progress, their scalability and applicability remain challenging. Recently, Large Language Models (LLMs), such as GPT-4 and CodeLlama, have demonstrated remarkable performance on code-related tasks. However, it is unknown whether such LLMs can do complex reasoning over code. In this work, we explore whether pre-trained LLMs can detect security vulnerabilities and address the limitations of existing tools. We evaluate the effectiveness of pre-trained LLMs on a set of five diverse security benchmarks spanning two languages, Java and C/C++, and including code samples from synthetic and real-world projects. We evaluate the effectiveness of LLMs in terms of their performance, explainability, and robustness. By designing a series of effective prompting strategies, we obtain the best results on the synthetic datasets with GPT-4: F1 scores of 0.79 on OWASP, 0.86 on Juliet Java, and 0.89 on Juliet C/C++. Expectedly, the performance of LLMs drops on the more challenging real-world datasets: CVEFixes Java and CVEFixes C/C++, with GPT-4 reporting F1 scores of 0.48 and 0.62, respectively. We show that LLMs can often perform better than existing static analysis and deep learning-based vulnerability detection tools, especially for certain classes of vulnerabilities. Moreover, LLMs also often provide reliable explanations, identifying the vulnerable data flows in code. We find that fine-tuning smaller LLMs can outperform the larger LLMs on synthetic datasets but provide limited gains on real-world datasets. When subjected to adversarial attacks on code, LLMs show mild degradation, with average accuracy reduction of up to 12.67%. Finally, we share our insights and recommendations for future work on leveraging LLMs for vulnerability detection.
2023-11-24 14:33:41.000000000
5,616
Background: Test suites are frequently used to quantify relevant software attributes, such as quality or productivity. Problem: We have detected that the same response variable, measured using different test suites, yields different experimental results. Aims: Assess to which extent differences in test case construction influence measurement accuracy and experimental outcomes. Method: Two industry experiments have been measured using two different test suites, one generated using an ad-hoc method and another using equivalence partitioning. The accuracy of the measures has been studied using standard procedures, such as ISO 5725, Bland-Altman and Intraclass Correlation Coefficients. Results: There are differences in the values of the response variables of up to ±60%, depending on the test suite (ad-hoc vs. equivalence partitioning) used. Conclusions: The disclosure of datasets and analysis code is insufficient to ensure the reproducibility of SE experiments. Experimenters should disclose all experimental materials needed to perform independent measurement and re-analysis.
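A minimal sketch of the Bland-Altman style agreement check applied to the same response variable measured with the two test suites; the measurement vectors below are synthetic placeholders standing in for the experimental data.

```python
# Hedged sketch of a Bland-Altman agreement check between the same response
# variable measured with two different test suites.  The measurement vectors
# are synthetic placeholders standing in for the experimental data.
import numpy as np

adhoc = np.array([0.82, 0.65, 0.91, 0.40, 0.77, 0.58])     # ad-hoc suite
eqpart = np.array([0.60, 0.71, 0.55, 0.45, 0.69, 0.62])    # equivalence partitioning suite

diff = adhoc - eqpart
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement

print(f"bias = {bias:+.3f}, limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}]")
# A large bias or wide limits signal that the two test suites do not measure
# the response variable interchangeably.
```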
2021-11-05 16:56:28.000000000
15,249
Spreadsheets that are informally created are harder to test than they should be. Simple cross-foot checks or being easily readable are modest but attainable goals for every spreadsheet developer. This paper lists some tips on building self-checking into a spreadsheet in order to provide more confidence to the reader that a spreadsheet is robust.
2009-08-06 19:03:02.000000000
4,409
T-Reqs is a text-based requirements management solution based on the git version control system. It combines useful conventions, templates and helper scripts with powerful existing solutions from the git ecosystem and provides a working solution to address some known requirements engineering challenges in large-scale agile system development. Specifically, it allows agile cross-functional teams to be aware of requirements at system level and enables them to efficiently propose updates to those requirements. Based on our experience with T-Reqs, we i) relate known requirements challenges of large-scale agile system development to tool support; ii) list key requirements for tooling in such a context; and iii) propose concrete solutions for challenges.
2018-05-04 23:29:19.000000000
7,254
Deep learning models are increasingly used in mobile applications as critical components. Unlike program bytecode, whose vulnerabilities and threats have been widely discussed, whether and how the deep learning models deployed in the applications can be compromised are not well understood, since neural networks are usually viewed as a black box. In this paper, we introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models. The core of the attack is a neural conditional branch constructed with a trigger detector and several operators and injected into the victim model as a malicious payload. The attack is effective as the conditional logic can be flexibly customized by the attacker, and scalable as it does not require any prior knowledge of the original model. We evaluated the attack effectiveness using 5 state-of-the-art deep learning models and real-world samples collected from 30 users. The results demonstrated that the injected backdoor can be triggered with a success rate of 93.5%, while introducing less than 2 ms of latency overhead and no more than a 1.4% accuracy decrease. We further conducted an empirical study on real-world mobile deep learning apps collected from Google Play. We found 54 apps that were vulnerable to our attack, including popular and security-critical ones. The results call for the awareness of deep learning application developers and auditors to enhance the protection of deployed models.
2021-01-15 04:15:26.000000000
7,352
Before any software maintenance can occur, developers must read the identifier names found in the code to be maintained. Thus, high-quality identifier names are essential for productive program comprehension and maintenance activities. With developers free to construct identifier names to their liking, it can be difficult to automatically reason about the quality and semantics behind an identifier name. Studying the structure of identifier names can help alleviate this problem. Existing research focuses on studying words within identifiers, but there are other symbols that appear in identifier names -- such as digits. This paper explores the presence and purpose of digits in identifier names through an empirical study of 800 open-source Java systems. We study how digits contribute to the semantics of identifier names and how identifier names that contain digits evolve over time through renaming. We envision our findings improving the efficiency of name appraisal and recommendation tools and techniques.
2022-02-27 18:05:21.000000000
11,654
Software safety is a crucial aspect during the development of modern safety-critical systems. Software is becoming responsible for most of the critical functions of systems. Therefore, the software components in the systems need to be tested extensively against their safety requirements to ensure a high level of system safety. However, performing testing exhaustively to test all software behaviours is impossible. Numerous testing approaches exist; however, they do not directly use the information derived during the safety analysis. STPA (Systems-Theoretic Process Analysis) is a unique safety analysis approach based on system and control theory, and was developed to identify unsafe scenarios of a complex system including software. In this paper, we present a systematic and semi-automatic testing approach based on STPA to generate test cases from the STPA safety analysis results to help software and safety engineers recognize and reduce the associated software risks. We also provide an open-source safety-based testing tool called STPA TCGenerator to support the proposed approach. We illustrate the proposed approach with a software prototype of an Adaptive Cruise Control (ACC) system with a stop-and-go function on a Lego Mindstorms EV3 robot.
2016-12-06 02:36:36.000000000
9,012
Software development is changing rapidly. It would be highly beneficial if we could accelerate the software development process by guiding developers. Appropriate guidelines and accurate recommendations during the development process can reduce software development expenses and save developers' valuable time. There are a number of approaches to speed up the software development process. One of them is code assistance tools that help developers by recommending relevant items from searching a particular repository of Application Programming Interfaces (APIs). Some approaches are based on online searching, which has drawbacks due to request and response latency, as it has to deal with extra-large files on a server. Developers generally use previously completed resources as well as libraries or frameworks to generate relevant snippets, which are supplied by the referral repository of APIs. Developers find it hard to choose the appropriate methods, as there are thousands of methods, some of which are not properly documented. In this paper we propose a concept and its respective framework to guide developers by suggesting relevant API methods from an offline-mined repository. From the investigation we made, our approach works much better than some of the existing approaches.
2020-06-19 08:02:31.000000000
13,303
Third-party libraries with rich functionalities facilitate the fast development of Node.js software, but also bring new security threats, as vulnerabilities can be introduced through dependencies. In particular, these threats can be excessively amplified by transitive dependencies. Existing research either considers direct dependencies or reasons about transitive dependencies based on reachability analysis, which neglects the NPM-specific dependency resolution rules, resulting in wrongly resolved dependencies. Consequently, further fine-grained analysis, such as vulnerability propagation and its evolution in dependencies, cannot be carried out precisely at a large scale, nor can ecosystem-wide solutions for vulnerabilities in dependencies be derived. To fill this gap, we propose a knowledge graph-based dependency resolution, which resolves the dependency relations of dependencies as trees (i.e., dependency trees), and investigates the security threats from vulnerabilities in dependency trees at a large scale. We first construct a complete dependency-vulnerability knowledge graph (DVGraph) that captures the whole NPM ecosystem (over 10 million library versions and 60 million well-resolved dependency relations). Based on it, we propose DTResolver to statically and precisely resolve dependency trees, as well as transitive vulnerability propagation paths, by considering the official dependency resolution rules. Based on that, we carry out an ecosystem-wide empirical study on vulnerability propagation and its evolution in dependency trees. Our study unveils many useful findings, and we further discuss the lessons learned and solutions for different stakeholders to mitigate the vulnerability impact in NPM. For example, we implement a dependency tree based vulnerability remediation method (DTReme) for NPM packages, and it achieves much better performance than the official tool (npm audit fix).
2022-01-09 13:39:10.000000000
402
Context: Model-Driven Security (MDS) is a specialised Model-Driven Engineering research area for supporting the development of secure systems. Over a decade of research on MDS has resulted in a large number of publications. Objective: To provide a detailed analysis of the state of the art in MDS, a systematic literature review (SLR) is essential. Method: We conducted an extensive SLR on MDS. Derived from our research questions, we designed a rigorous, extensive search and selection process to identify a set of primary MDS studies that is as complete as possible. Our three-pronged search process consists of automatic searching, manual searching, and snowballing. After discovering and considering more than a thousand relevant papers, we identified, strictly selected, and reviewed 108 MDS publications. Results: The results of our SLR show the overall status of the key artefacts of MDS and the identified primary MDS studies. For example, regarding the security modelling artefact, we found that developing domain-specific languages plays a key role in many MDS approaches. The current limitations in each MDS artefact are pointed out and corresponding potential research directions are suggested. Moreover, we categorise the identified primary MDS studies into 5 principal MDS studies, and other emerging or less common MDS studies. Finally, some trend analyses of MDS research are given. Conclusion: Our results suggest the need for addressing multiple security concerns more systematically and simultaneously, for tool chains supporting the MDS development cycle, and for more empirical studies on the application of MDS methodologies. To the best of our knowledge, this SLR is the first in the field of Software Engineering that combines a snowballing strategy with database searching. This combination has delivered an extensive literature study on MDS.
2015-05-18 10:18:45.000000000
6,893
We present our vision for developing an automated tool capable of translating visual properties observed in Machine Learning (ML) visualisations into Python assertions. The tool aims to streamline the process of manually verifying these visualisations in the ML development cycle, which is critical as real-world data and assumptions often change post-deployment. In a prior study, we mined 54,070 Jupyter notebooks from GitHub and created a catalogue of 269 semantically related visualisation-assertion (VA) pairs. Building on this catalogue, we propose to build a taxonomy that organises the VA pairs based on ML verification tasks. The input feature space comprises a rich source of information mined from the Jupyter notebooks -- visualisations, Python source code, and associated markdown text. The effectiveness of various AI models, including traditional NLP4Code models and modern Large Language Models, will be compared using established machine translation metrics and evaluated through a qualitative study with human participants. The paper also plans to address the challenge of extending the existing VA pair dataset with additional pairs from Kaggle and to compare the tool's effectiveness with commercial generative AI models like ChatGPT. This research not only contributes to the field of ML system validation but also explores novel ways to leverage AI for automating and enhancing software engineering practices in ML.
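A hedged example of what a single visualisation-assertion (VA) pair might look like: a class-balance bar chart from a notebook paired with the Python assertion that makes the visually checked property executable. The column names and the 10% tolerance are illustrative assumptions, not items from the mined catalogue.

```python
# Hedged example of one visualisation-assertion (VA) pair: the first half is a
# typical class-balance bar chart from a notebook, the second half is the
# assertion that encodes the visually checked property.
import pandas as pd

df = pd.DataFrame({"label": ["spam", "ham", "spam", "ham", "ham", "spam"]})

# Visualisation half of the pair (what the developer eyeballs):
df["label"].value_counts().plot(kind="bar", title="class balance")

# Assertion half of the pair (what such a tool would generate):
counts = df["label"].value_counts(normalize=True)
assert abs(counts["spam"] - counts["ham"]) < 0.10, "classes are imbalanced"
```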
2024-01-14 18:12:03.000000000
5,209
Engineering more secure software has become a critical challenge in the cyber world. It is very important to develop methodologies, techniques, and tools for developing secure software. To develop secure software, software developers need to think like an attacker through mining software repositories. These aim to analyze and understand the data repositories related to software development. The main goal is to use these software repositories to support the decision-making process of software development. There are different vulnerability databases like Common Weakness Enumeration (CWE), Common Vulnerabilities and Exposures database (CVE), and CAPEC. We utilized a database called MITRE. MITRE ATT&CK tactics and techniques have been used in various ways and methods, but tools for utilizing these tactics and techniques in the early stages of the software development life cycle (SDLC) are lacking. In this paper, we use machine learning algorithms to map requirements to the MITRE ATT&CK database and determine the accuracy of each mapping depending on the data split.
2023-02-10 04:33:11.000000000
2,900
Synchronous programming is a paradigm of choice for the design of safety-critical reactive systems. Runtime enforcement is a technique to ensure that the output of a black-box system satisfies some desired properties. This paper deals with the problem of runtime enforcement in the context of synchronous programs. We propose a framework where an enforcer monitors both the inputs and the outputs of a synchronous program and (minimally) edits erroneous inputs/outputs in order to guarantee that a given property holds. We define enforceability conditions, develop an online enforcement algorithm, and prove its correctness. We also report on an implementation of the algorithm on top of the KIELER framework for the SCCharts synchronous language. Experimental results show that enforcement has minimal execution time overhead, which decreases proportionally with larger benchmarks.
2016-12-13 20:35:24.000000000
15,063
Context: In the Requirements Engineering (RE) process of an Open Source Software (OSS) community, an involved firm is a stakeholder among many. Conflicting agendas may create misalignment with the firm's internal requirements strategy. In communities with meritocratic governance, or with aspects thereof, a firm has the opportunity to affect the RE process in line with its own agenda by gaining influence through active and symbiotic engagements. Objective: The focus of this study has been to identify what aspects firms should consider when they assess their need to influence the RE process in an OSS community, as well as what engagement practices should be considered in order to gain this influence. Method: Using a design science approach, 21 interviews with 18 industry professionals from 12 different software-intensive firms were conducted to explore, design and validate an artifact for the problem context. Results: A Community Strategy Framework (CSF) is presented to help firms create community strategies that describe if and why they need influence on the RE process in a specific (meritocratic) OSS community, and how the firm could gain it. The framework consists of aspects and engagement practices. The aspects help determine how important an OSS project and its community are from business and technical perspectives. A community perspective is used when considering the feasibility of and potential for gaining influence. The engagement practices are intended as a toolbox for how a firm can engage with a community in order to build the influence needed. Conclusion: It is concluded from interview-based validation that the proposed CSF may provide support for firms in creating and tailoring community strategies and help them to focus resources on communities that matter and gain the influence needed on their respective RE processes.
2022-08-04 06:39:48.000000000
5,772
In source code search, a common information-seeking strategy involves providing a short initial query with a broad meaning, and then iteratively refining the query using terms gleaned from the results of subsequent searches. This strategy requires programmers to spend time reading search results that are irrelevant to their development needs. In contrast, when programmers seek information from other humans, they typically refine queries by asking and answering clarifying questions. Clarifying questions have been shown to benefit general-purpose search engines, but have not been examined in the context of code search. We present a method for generating natural-sounding clarifying questions using information extracted from function names and comments. Our method outperformed a keyword-based method for single-turn refinement in synthetic studies, and was associated with shorter search duration in human studies.
2022-01-19 06:32:47.000000000
4,486
Commit messages record code changes (e.g., feature modifications and bug repairs) in natural language, and are useful for program comprehension. Due to the frequent updates of software and the time cost, developers are generally unmotivated to write commit messages for code changes. Therefore, automating the message writing process is necessary. Previous studies on commit message generation have benefited from generation models or retrieval models, but the code structure of changed code, i.e., the AST, which can be important for capturing code semantics, has not been explicitly involved. Moreover, although generation models have the advantage of synthesizing commit messages for new code changes, it is not easy for them to bridge the semantic gap between code and natural language, which could be mitigated by retrieval models. In this paper, we propose a novel commit message generation model, named ATOM, which explicitly incorporates the abstract syntax tree for representing code changes and integrates both retrieved and generated messages through hybrid ranking. Specifically, the hybrid ranking module can prioritize the most accurate message from both retrieved and generated messages regarding one code change. We evaluate the proposed model ATOM on our dataset crawled from 56 popular Java repositories. Experimental results demonstrate that ATOM improves over the state-of-the-art models by 30.72% in terms of BLEU-4 (an accuracy measure that is widely used to evaluate text generation systems). Qualitative analysis also demonstrates the effectiveness of ATOM in generating accurate code commit messages.
2019-12-03 10:45:53.000000000
6,736
Formal methods are widely recognized as a powerful engineering method for the specification, simulation, development, and verification of distributed interactive systems. However, most formal methods rely on a two-valued logic, and are therefore limited to the axioms of that logic: a specification is valid or invalid, component behavior is realizable or not, safety properties hold or are violated, systems are available or unavailable. Especially when the problem domain entails uncertainty, impreciseness, and vagueness, applying such methods becomes a challenging task. In order to overcome the limitations resulting from the strict modus operandi of formal methods, the main objective of this work is to relax the Boolean notion of formal specifications by using fuzzy logic. The present approach is based on Focus theory, a model-based and strictly formal method for component-based interactive systems. The contribution of this work is twofold: i) we introduce a specification technique based on fuzzy logic which can be used on top of Focus to develop formal specifications in a qualitative fashion; ii) we partially extend Focus theory to a fuzzy one which allows the specification of fuzzy components and fuzzy interactions. While the former provides a methodology for approximating I/O behaviors under imprecision, the latter enables capturing a more quantitative view of specification properties such as realizability.
2015-03-16 02:07:50.000000000
4,632
In a recent empirical study we found that evaluating abstractions of Model-Driven Engineering (MDE) is not as straight forward as it might seem. In this paper, we report on the challenges that we as researchers faced when we conducted the aforementioned field study. In our study we found that modeling happens within a complex ecosystem of different people working in different roles. An empirical evaluation should thus mind the ecosystem, that is, focus on both technical and human factors. In the following, we present and discuss five lessons learnt from our recent work.
2012-09-24 16:33:13.000000000
6,256
Clone group mapping is of great significance in the evolution of code clones. We apply topic modeling techniques to code clones for the first time and propose a new clone group mapping method. The method is effective not only for Type-1 and Type-2 clones but also for Type-3 clones. By making full use of the source text and structure information, topic modeling transforms the mapping problem from a high-dimensional code space into a low-dimensional topic space; the goal of clone group mapping is reached indirectly by mapping clone group topics. Experiments on four open-source software systems show that recall and precision are up to 0.99; thus the method can effectively and accurately achieve clone group mapping.
2015-02-12 02:14:57.000000000
635
Algebraic specification has a long tradition in bridging the gap between specification and programming by making specifications executable. Building on extensive experience in designing, implementing and using specification formalisms that are based on algebraic specification and term rewriting (namely Asf and Asf+Sdf), we are now focusing on using the best concepts from algebraic specification and integrating these into a new programming language: Rascal. This language is easy to learn by non-experts but is also scalable to very large meta-programming applications. We explain the algebraic roots of Rascal and its main application areas: software analysis, software transformation, and design and implementation of domain-specific languages. Some example applications in the domain of Model-Driven Engineering (MDE) are described to illustrate this.
2011-06-30 09:33:18.000000000
4,228
Quantum computing (QC) represents the future of computing systems, but the tools for reasoning about the quantum model of computation, in which the laws obeyed are those of the quantum mechanical scale, are still a mix of linear algebra and Dirac notation; two subjects more suitable for physicists than for computer scientists and software engineers. On this ground, we believe it is possible to provide a more intuitive approach to thinking and writing about quantum computing systems, in order to simplify the design of quantum algorithms and the development of quantum software. In this paper, we take a first step in this direction, introducing a specification language as the tool to represent the operations of a quantum computer via axiomatic definitions, by adopting the same symbolisms and reasoning principles used by formal methods in software engineering. We name this approach formal quantum software engineering (F-QSE). This work assumes familiarity with the basic principles of quantum mechanics (QM), with the use of Zed (Z), which is a formal language of software engineering (SE), and with the notation and techniques of first-order logic (FOL) and functional programming (FP).
2021-11-15 13:40:03.000000000
13,085
Input constraints are useful for many software development tasks. For example, input constraints of a function enable the generation of valid inputs, i.e., inputs that follow these constraints, to test the function deeper. API functions of deep learning (DL) libraries have DL specific input constraints, which are described informally in the free form API documentation. Existing constraint extraction techniques are ineffective for extracting DL specific input constraints. To fill this gap, we design and implement a new technique, DocTer, to analyze API documentation to extract DL specific input constraints for DL API functions. DocTer features a novel algorithm that automatically constructs rules to extract API parameter constraints from syntactic patterns in the form of dependency parse trees of API descriptions. These rules are then applied to a large volume of API documents in popular DL libraries to extract their input parameter constraints. To demonstrate the effectiveness of the extracted constraints, DocTer uses the constraints to enable the automatic generation of valid and invalid inputs to test DL API functions. Our evaluation on three popular DL libraries (TensorFlow, PyTorch, and MXNet) shows that the precision of DocTer in extracting input constraints is 85.4%. DocTer detects 94 bugs from 174 API functions, including one previously unknown security vulnerability that is now documented in the CVE database, while a baseline technique without input constraints detects only 59 bugs. Most (63) of the 94 bugs are previously unknown, 54 of which have been fixed or confirmed by developers after we report them. In addition, DocTer detects 43 inconsistencies in documents, 39 of which are fixed or confirmed.
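A hedged sketch of the core idea only: applying a syntactic pattern over a dependency parse (here via spaCy) of an API parameter description to extract a dtype constraint. The single pattern, the type vocabulary, and the example sentence are illustrative assumptions, not DocTer's actual rule set.

```python
# Hedged sketch: apply one syntactic pattern over a dependency parse of an API
# parameter description to extract a dtype constraint.  The pattern, the type
# vocabulary and the example sentence are illustrative assumptions.
# Requires the spaCy model 'en_core_web_sm' to be installed.
import spacy

nlp = spacy.load("en_core_web_sm")
TYPE_WORDS = {"integer": "int", "int": "int", "float": "float",
              "string": "str", "boolean": "bool", "tensor": "tensor"}

def extract_dtype_constraint(description):
    """Pattern: '<param> must/should be a <type>' -> {'param': ..., 'dtype': ...}."""
    doc = nlp(description)
    for tok in doc:
        if tok.lemma_.lower() in TYPE_WORDS and tok.head.lemma_ == "be":
            subj = [c for c in tok.head.children if c.dep_ in ("nsubj", "nsubjpass")]
            if subj:
                return {"param": subj[0].text, "dtype": TYPE_WORDS[tok.lemma_.lower()]}
    return None

print(extract_dtype_constraint("axis must be an integer specifying the dimension."))
# e.g. {'param': 'axis', 'dtype': 'int'}
```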
2021-09-01 14:26:58.000000000
14,563
Code classification is a difficult issue in program understanding and automatic coding. Due to the elusive syntax and complicated semantics of programs, most existing studies use techniques based on the abstract syntax tree (AST) and graph neural networks (GNN) to create code representations for code classification. These techniques utilize the structure and semantic information of the code, but they only take into account pairwise associations and neglect the high-order correlations that already exist between nodes in the AST, which may result in the loss of code structural information. On the other hand, while a general hypergraph can encode high-order data correlations, it is homogeneous and undirected, which results in a lack of semantic and structural information, such as node types, edge types, and the direction between child nodes and parent nodes, when modeling the AST. In this study, we propose to represent the AST as a heterogeneous directed hypergraph (HDHG) and process the graph with a heterogeneous directed hypergraph neural network (HDHGN) for code classification. Our method improves code understanding and can represent high-order data correlations beyond paired interactions. We assess the HDHGN on public datasets of Python and Java programs. Our method outperforms previous AST-based and GNN-based methods, which demonstrates the capability of our model.
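A hedged sketch of the representation step only: parsing Python code with the standard ast module and grouping each parent node with all of its children into one directed, type-labelled hyperedge. The downstream hypergraph neural network is omitted, and the exact encoding is an illustrative assumption rather than the paper's.

```python
# Hedged sketch of the representation step only: parse code into an AST and
# group each parent node with all of its children into one directed,
# type-labelled hyperedge (parent -> {children}).  The downstream hypergraph
# neural network is omitted; the exact encoding is an illustrative assumption.
import ast

code = "def add(a, b):\n    return a + b\n"
tree = ast.parse(code)

nodes, hyperedges = [], []
for node in ast.walk(tree):
    nodes.append(type(node).__name__)                 # heterogeneous node types
    children = list(ast.iter_child_nodes(node))
    if children:
        hyperedges.append({
            "edge_type": type(node).__name__,         # e.g. FunctionDef, Return
            "head": id(node),                         # direction: parent ...
            "tail": [id(c) for c in children],        # ... to all children at once
        })

print(len(nodes), "nodes,", len(hyperedges), "hyperedges")
```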
2023-05-06 00:28:49.000000000
6,520
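The sketch below illustrates the data representation only, not the neural network: each parent node of a Python AST, together with all of its children, is recorded as one directed, typed hyperedge. The encoding details are our own reading of the idea, not the paper's implementation.

```python
# A minimal sketch of the data representation only (not the HDHGN model):
# each AST parent node and all of its children form one directed, typed
# hyperedge, keeping node types and the parent -> children direction.
import ast

def ast_to_hyperedges(source: str):
    tree = ast.parse(source)
    nodes, hyperedges = {}, []
    for node in ast.walk(tree):
        nodes[id(node)] = type(node).__name__          # node type
        children = list(ast.iter_child_nodes(node))
        if children:
            hyperedges.append({
                "edge_type": type(node).__name__,      # typed hyperedge
                "head": id(node),                      # direction: parent ...
                "tail": [id(c) for c in children],     # ... to all children
            })
    return nodes, hyperedges

nodes, edges = ast_to_hyperedges("def f(x):\n    return x + 1\n")
print(len(nodes), "nodes,", len(edges), "hyperedges")
for e in edges[:3]:
    print(e["edge_type"], "->", [nodes[t] for t in e["tail"]])
```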
The increased popularity of 'intelligent' web services provides end-users with machine-learnt functionality at little effort to developers. However, these services require a decision threshold to be set, which is dependent on problem-specific data. Developers lack a systematic approach for evaluating intelligent services, and existing evaluation tools are predominantly targeted at data scientists for pre-development evaluation. This paper presents a workflow and supporting tool, Threshy, to help software developers select a decision threshold suited to their problem domain. Unlike existing tools, Threshy is designed to operate in multiple workflows, including pre-development, pre-release, and support. Threshy is designed for tuning the confidence scores returned by intelligent web services and does not deal with the hyper-parameter optimisation used in ML models. Additionally, it considers the financial impact of false positives. Threshold configuration files exported by Threshy can be integrated into client applications and monitoring infrastructure. Demo: [LINK].
2020-08-17 19:30:43.000000000
2,398
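A back-of-the-envelope sketch of cost-aware threshold selection is shown below: it picks the threshold that minimises the combined monetary cost of false positives and false negatives on labelled confidence scores. The cost figures and scores are invented for illustration, and this is not Threshy's actual algorithm.

```python
# A back-of-the-envelope sketch of cost-aware threshold selection (not
# Threshy's actual algorithm): pick the threshold minimising the total
# monetary cost of false positives and false negatives on labelled data.
COST_FP, COST_FN = 5.0, 1.0   # hypothetical per-error costs

# Hypothetical confidence scores from an intelligent web service + labels.
scores = [0.15, 0.35, 0.55, 0.62, 0.71, 0.80, 0.92]
labels = [0, 0, 1, 0, 1, 1, 1]

def total_cost(threshold):
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return COST_FP * fp + COST_FN * fn

candidates = [i / 100 for i in range(1, 100)]
best = min(candidates, key=total_cost)
print(f"best threshold: {best:.2f}, expected cost: {total_cost(best):.2f}")
```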
In this position paper, we elaborate on the possibilities and needs to integrate Design Thinking into Requirements Engineering. We draw from our research and project experiences to compare what is understood as Design Thinking and Requirements Engineering considering their involved artifacts. We suggest three approaches for tailoring and integrating Design Thinking and Requirements Engineering with complementary synergies and point at open challenges for research and practice.
2019-08-19 00:10:37.000000000
14,685
Continuous Integration (CI) and Continuous Delivery (CD) have been demonstrated to be effective in facilitating software building, testing, and deployment. Many research studies have investigated and subsequently improved their working processes. Unfortunately, such research efforts have largely not touched on the usage of CI/CD in the development of Android apps. We fill this gap by conducting an exploratory study of CI/CD adoption in open-source Android apps. We start by collecting a set of 84,475 open-source Android apps from the three most popular online code hosting sites, namely GitHub, GitLab, and Bitbucket. We then look into those apps and find that (1) only around 10% of the apps have leveraged CI/CD services, i.e., the majority of open-source Android apps are developed without accessing CI/CD services, (2) a small number of apps (291) have even adopted multiple CI/CD services, (3) nearly half of the apps that adopted CI/CD services have not really used them, and (4) CI/CD services are useful for improving the popularity of projects.
2022-10-20 17:59:19.000000000
8,572
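One plausible detection step in such a study is sketched below: checking a locally cloned repository for well-known CI/CD configuration files. The marker list is illustrative and not the study's actual instrumentation.

```python
# A small sketch of one plausible detection step (not the study's actual
# tooling): check a locally cloned repository for well-known CI/CD
# configuration files to flag whether a project has adopted a service.
from pathlib import Path

# Illustrative markers for a few popular CI/CD services.
CI_MARKERS = {
    "Travis CI": [".travis.yml"],
    "GitHub Actions": [".github/workflows"],
    "GitLab CI": [".gitlab-ci.yml"],
    "CircleCI": [".circleci/config.yml"],
}

def detect_ci_services(repo_path: str) -> list[str]:
    root = Path(repo_path)
    return [name for name, paths in CI_MARKERS.items()
            if any((root / p).exists() for p in paths)]

# Example: print the services detected in one cloned app repository.
print(detect_ci_services("./some-cloned-android-app"))
```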
Today, reusable components are available in several repositories. These repositories are certainly designed for reuse; however, reuse is not immediate and requires passing through a number of essential conceptual operations, in particular search, integration, adaptation, and composition. In the present work, we are interested in the problem of the semantic integration of heterogeneous Business Components. This problem is often framed in syntactic terms, while the real stake is semantic. Our contribution is a proposed model for Business Component integration, as well as a resolution method for the semantic naming conflicts encountered during the integration of Business Components.
2010-05-24 07:45:33.000000000
7,515
In order to properly train a machine learning model, data must be properly collected. One possible way to guarantee proper data collection is to verify that the collected data set satisfies certain properties, for example, that the data set contains samples across the whole input space, or that the data set is balanced w.r.t. the different classes. We present a formal approach for verifying a set of arbitrarily stated properties over a data set. The proposed approach relies on the transformation of the data set into a first-order logic formula, which can later be verified w.r.t. the different properties, also stated in the same logic. A prototype tool, which uses the z3 solver, has been developed; the prototype can take as input a set of properties stated in a formal language and formally verify a given data set w.r.t. the given set of properties. Preliminary experimental results show the feasibility and performance of the proposed approach, as well as its flexibility in expressing properties of interest.
2021-08-25 02:03:14.000000000
8,963
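A minimal sketch of the idea, assuming a toy data set and the z3py bindings: the data set is encoded as a disjunction of its rows, and a property is checked by asking the solver for a counterexample. The encoding is simplified for illustration and is not the prototype's actual translation.

```python
# A minimal sketch (not the prototype's actual encoding): a toy data set is
# translated into a first-order formula and a property is checked by asking
# z3 for a counterexample sample inside the data set.
from z3 import Solver, Real, Int, Or, And, sat

# Hypothetical tiny data set of (feature, label) rows.
rows = [(0.1, 0), (0.4, 1), (0.9, 0), (0.7, 1)]

x, y = Real("x"), Int("y")
# "in_dataset" holds iff (x, y) matches some row of the data set.
in_dataset = Or([And(x == v, y == c) for v, c in rows])

# Property: every sample has its feature value in [0, 1].
# Ask z3 for a violating sample; UNSAT means the property holds.
s = Solver()
s.add(in_dataset, Or(x < 0, x > 1))
print("property holds:", s.check() != sat)
```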
Agile methods in undergraduate courses have been explored in an effort to close the gap between industry practice and graduates' professional profiles. We have structured an Android application development course based on a tailored user-centered Agile process for the development of educational digital tools. This process is based on Scrum and Extreme Programming in combination with User Experience (UX) approaches. The course is executed in two phases: the first half of the semester presents theory on Agile and mobile application development, while the latter half is managed as a workshop where students develop for an actual client. The introduction of UX and user-centered design, exploiting the close relationship with stakeholders expected from Agile processes, allows different quality features to be developed. Since 2019, two of the projects have been extended, and one project has been developed with course alumni using the described process. Students and stakeholders have found value in the generated products and the process.
2023-11-04 19:22:23.000000000
8,347
This paper introduces a software system that includes widely used Swarm Intelligence algorithms and approaches, intended for scientific research studies in this subject area. The programmatic infrastructure of the system provides a fast, easy-to-use, interactive platform for performing Swarm Intelligence based studies in a more effective, efficient, and accurate way. To this end, the system employs all of the necessary controls for the algorithms and offers an interactive platform on which users can perform studies on a wide spectrum of solution approaches to both simple and more advanced problems.
2017-04-01 22:50:42.000000000
4,447
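As a compact illustration of the kind of algorithm such a platform hosts, the sketch below implements a bare-bones Particle Swarm Optimization minimising a toy two-dimensional function. The parameter values are common defaults chosen for illustration, not the system's settings.

```python
# A bare-bones Particle Swarm Optimization minimising a toy 2-D function.
# Illustrative only; parameter values are common defaults, not the system's.
import random

def sphere(x):                      # toy objective: f(x, y) = x^2 + y^2
    return sum(v * v for v in x)

def pso(obj, dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]             # personal best positions
    gbest = min(pbest, key=obj)             # global best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=obj)
    return gbest, obj(gbest)

print(pso(sphere))   # should converge close to (0, 0)
```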
The use of semi-autonomous Unmanned Aerial Vehicles (UAVs or drones) to support emergency response scenarios, such as fire surveillance and search-and-rescue, has the potential for huge societal benefits. Onboard sensors and artificial intelligence (AI) allow these UAVs to operate autonomously in the environment. However, human intelligence and domain expertise are crucial in planning and guiding UAVs to accomplish the mission. Therefore, humans and multiple UAVs need to collaborate as a team to conduct a time-critical mission successfully. We propose a meta-model to describe the interactions among the human operators and the autonomous swarm of UAVs. The meta-model also provides a language to describe the roles of UAVs and humans and the autonomous decisions. We complement the meta-model with a template of requirements elicitation questions to derive models for specific missions. We also identify common scenarios where humans should collaborate with UAVs to augment the autonomy of the UAVs. We introduce the meta-model and the requirements elicitation process with examples drawn from a search-and-rescue mission in which multiple UAVs collaborate with humans to respond to the emergency. We then apply it to a second scenario in which UAVs support first responders in fighting a structural fire. Our results show that the meta-model and the template of questions support the modeling of human-on-the-loop interactions for these complex missions, suggesting that it is a useful tool for modeling human-on-the-loop interactions for multi-UAV missions.
2020-09-20 21:56:04.000000000
12,026
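As a tiny sketch of how the core concepts of such a meta-model might be rendered in code, the data classes below capture UAV roles, human operators, and autonomous decisions with a human-on-the-loop approval flag. The class and field names are our own reading of the abstract, not the paper's actual meta-model.

```python
# A tiny, hypothetical rendering of the meta-model's core concepts as data
# types (our own reading of the abstract, not the paper's meta-model).
from dataclasses import dataclass, field

@dataclass
class AutonomousDecision:
    description: str                 # e.g. "descend for closer imagery"
    needs_human_approval: bool       # human-on-the-loop checkpoint

@dataclass
class UAV:
    uav_id: str
    role: str                        # e.g. "surveyor", "delivery"
    decisions: list[AutonomousDecision] = field(default_factory=list)

@dataclass
class HumanOperator:
    name: str
    role: str                        # e.g. "mission commander"
    supervises: list[str] = field(default_factory=list)   # UAV ids

# Example: one operator supervising two UAVs in a search-and-rescue mission.
uav1 = UAV("uav-1", "surveyor",
           [AutonomousDecision("descend for closer imagery", True)])
uav2 = UAV("uav-2", "delivery")
operator = HumanOperator("Alex", "mission commander", ["uav-1", "uav-2"])
print(operator)
print(uav1)
```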