Columns: input (string, 29 to 3.27k characters), created_at (string, 29 characters), __index_level_0__ (int64, 0 to 16k)
Quality requirements typically differ among software features, e.g., due to different usage contexts of the features, different impacts of related quality deficiencies on overall user satisfaction, or long-term plans of the developing organization. For instance, maintainability requirements might be particularly high for software features which are frequently used or bear strategic value for the developing organization. Also, software features where even the smallest delays are perceived as negative by the user will be subject to especially tight performance requirements. We designed an operational DSL to specify software quality requirements as individual feature-level constraints based on quantitative measures. The DSL provides language elements to define the operationalization of measures from external systems, time series operations, time filters, and the automatic evaluation of these feature-level constraints in DevOps based on comparison operators and threshold values. In addition, quality ratings summarize evaluation results of features on an ordinal grading scheme. Likewise, quality gates use these quality ratings to reflect the fitness of software features or the overall software product using different states. Finally, we show an example based on a widely adopted secure mobile messaging app that illustrates the interplay of the different DSL elements.
2022-03-01 14:29:37.000000000
6,678
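To make the interplay of such DSL elements concrete, here is a minimal Python sketch of a feature-level constraint evaluated over a measurement time series and folded into an ordinal rating. The measure name, threshold, and grading scheme are hypothetical illustrations, not the paper's actual DSL syntax.

```python
from statistics import mean

# Hypothetical illustration of a feature-level quality constraint:
# a time series of response-time measurements for one feature is
# aggregated, compared against a threshold, and mapped to a grade.

def evaluate_constraint(samples_ms, op, threshold_ms):
    """Evaluate a feature-level constraint over a measurement series."""
    value = mean(samples_ms)                  # time series aggregation
    ops = {"<=": value <= threshold_ms, ">=": value >= threshold_ms}
    return value, ops[op]

def quality_rating(passed, total):
    """Summarize constraint results on an ordinal grading scheme."""
    ratio = passed / total
    return "A" if ratio >= 0.9 else "B" if ratio >= 0.7 else "C"

# Feature "search" must answer in <= 200 ms on average.
value, ok = evaluate_constraint([120, 180, 240, 150], "<=", 200)
print(value, ok, quality_rating(1 if ok else 0, 1))
```

A quality gate would then combine several such ratings into a single pass/fail state for the feature or product.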
Microservices have become popular in the past few years, attracting the interest of both academia and industry. Despite its benefits, this new architectural style still poses important challenges, such as resilience, performance, and evolution. Self-adaptation techniques have recently been applied as an alternative to solve or mitigate those problems. However, due to the range of quality attributes that affect microservice architectures, many different self-adaptation strategies can be used. Thus, to understand the state of the art of the use of self-adaptation techniques and mechanisms in microservice-based systems, this work conducted a systematic mapping in which 21 primary studies were analyzed considering qualitative and quantitative research questions. The results show that most studies focus on the Monitor phase (28.57%) of the adaptation control loop, address the self-healing property (23.81%), apply a reactive adaptation strategy (80.95%) at the system infrastructure level (47.62%), and use a centralized approach (38.10%). From these results, we propose some research directions to fill existing gaps.
2021-03-15 10:46:36.000000000
7,689
Surprise Adequacy (SA) is one of the emerging and most promising adequacy criteria for Deep Learning (DL) testing. As an adequacy criterion, it has been used to assess the strength of DL test suites. In addition, it has also been used to find inputs to a Deep Neural Network (DNN) which were not sufficiently represented in the training data, or to select samples for DNN retraining. However, computation of the SA metric for a test suite can be prohibitively expensive, as it involves a quadratic number of distance calculations. Hence, we developed and released a performance-optimized, but functionally equivalent, implementation of SA, reducing the evaluation time by up to 97%. We also propose refined variants of the SA computation algorithm, aiming to further increase the evaluation speed. We then performed an empirical study on MNIST, focused on the out-of-distribution detection capabilities of SA, which allowed us to reproduce parts of the results presented when SA was first released. The experiments show that our refined variants are substantially faster than plain SA, while producing comparable outcomes. Our experimental results also exposed an overlooked issue of SA: it can be highly sensitive to the non-determinism associated with the DNN training procedure.
2021-03-09 15:28:34.000000000
8,196
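The distance-based variant of Surprise Adequacy (DSA) illustrates why naive SA computation is quadratic: each test input requires nearest-neighbour searches over all training activation traces. Below is a simplified NumPy sketch following the published DSA definition, not the authors' optimized implementation; the random data stands in for real activation traces.

```python
import numpy as np

# Simplified distance-based Surprise Adequacy (DSA) for a single input.
# The nearest-neighbour searches over all training activation traces are
# what makes naive SA computation quadratic for a whole test suite.

def dsa(trace, train_traces, train_labels, predicted_label):
    same = train_traces[train_labels == predicted_label]
    other = train_traces[train_labels != predicted_label]
    d_same = np.linalg.norm(same - trace, axis=1)
    nearest = same[np.argmin(d_same)]              # closest same-class trace
    dist_a = d_same.min()
    dist_b = np.linalg.norm(other - nearest, axis=1).min()
    return dist_a / dist_b                         # higher = more surprising

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 8))                 # stand-in activation traces
labels = rng.integers(0, 10, size=1000)
print(dsa(rng.normal(size=8), train, labels, predicted_label=3))
```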
Context and motivation: Contribution Management helps firms engaged in Open Source Software (OSS) ecosystems to justify what they should contribute and when, but also what they should focus their resources on and to what extent. Such guidelines are also referred to as contribution strategies. The motivation for developing tailored contribution strategies is to maximize return on investment and sustain the influence needed in the ecosystem. Question/Problem: We aim to develop a framework to help firms understand their current situation and create a starting point to develop an effective contribution management process. Principal ideas/results: Through a design science approach, a prototype framework is created based on literature and validated iteratively with expert opinions through interviews. Contribution: In this research preview, we present our initial results after our first design cycle and consultation with one experienced OSS manager at a large, OSS-oriented, software-intensive firm. The initial validation highlights the importance of stakeholder identification and analysis, as well as the general need for contribution management and alignment with internal product planning. This encourages future work to develop the framework further using expert and case validation.
2022-07-31 18:21:09.000000000
2,125
Recently, GitHub introduced a new social feature, named reactions, which are "pictorial characters" similar to emoji symbols widely used nowadays in text-based communications. Particularly, GitHub users can use a pre-defined set of such symbols to react to issues and pull requests. However, little is known about the real usage and impact of GitHub reactions. In this paper, we analyze the reactions provided by developers to more than 2.5 million issues and 9.7 million issue comments, in order to answer an extensive list of nine research questions about the usage and adoption of reactions. We show that reactions are being increasingly used by open source developers. Moreover, we also found that issues with reactions usually take more time to be handled and have longer discussions.
2019-09-28 13:15:38.000000000
9,460
Recent technological developments and advances in Artificial Intelligence (AI) have enabled sophisticated capabilities to be a part of Digital Twin (DT), virtually making it possible to introduce automation into all aspects of work processes. Given these possibilities that DT can offer, practitioners are facing increasingly difficult decisions regarding what capabilities to select while deploying a DT in practice. The lack of research in this field has not helped either. It has resulted in the rebranding and reuse of emerging technological capabilities like prediction, simulation, AI, and Machine Learning (ML) as necessary constituents of DT. Inappropriate selection of capabilities in a DT can result in missed opportunities, strategic misalignments, inflated expectations, and the risk of it being rejected as just hype by practitioners. To alleviate this challenge, this paper proposes the digitalization framework, designed and developed by following a Design Science Research (DSR) methodology over a period of 18 months. The framework can help practitioners select an appropriate level of sophistication in a DT by weighing the pros and cons of each level, deciding evaluation criteria for the digital twin system, and assessing the implications of the selected DT for organizational processes, strategies, and value creation. Three real-life case studies illustrate the application and usefulness of the framework.
2022-01-17 10:16:59.000000000
742
Communication between practitioners is essential for the system's quality in the DevOps context. To improve this communication, practitioners often use informal diagrams to represent the components of a system. However, as systems evolve, it is a challenge to keep diagrams consistently synchronized with production environments. Hence, inconsistent architectural diagrams can affect communication between practitioners and their understanding of systems. In this paper, we propose the use of system descriptors to improve deployment diagram consistency. We state two main hypotheses: (1) if an architectural diagram is generated from a valid system descriptor, then the diagram is consistent; (2) if a valid system descriptor is generated from an architectural diagram, then the diagram is consistent. We report a case study to explore our hypotheses. Furthermore, we constructed a system descriptor from the Netflix deployment diagram, and we applied our tool to generate a new architectural diagram. Finally, we compare the original and generated diagrams to evaluate our proposal. Our case study shows that all Docker Compose description elements can be graphically represented in the generated architectural diagram, and that the generated diagram does not present the inconsistent aspects of the original one. Thus, our preliminary results motivate further evaluation in controlled and empirical experiments to test our hypotheses.
2021-03-22 22:04:26.000000000
4,019
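The first hypothesis can be illustrated with a small sketch: a system descriptor (here, a Docker Compose file) is mechanically translated into a Graphviz DOT deployment diagram, so the diagram cannot drift from the descriptor. This is an illustrative analogue, not the authors' tool; it assumes PyYAML is available and the compose content is made up.

```python
import yaml  # PyYAML

# Minimal sketch of the idea: derive a deployment diagram directly
# from a system descriptor (a Docker Compose file) so the diagram
# cannot drift from what is actually deployed.

COMPOSE = """
services:
  web:
    image: nginx
    depends_on: [api]
  api:
    image: myorg/api
    depends_on: [db]
  db:
    image: postgres
"""

def compose_to_dot(compose_text):
    services = yaml.safe_load(compose_text)["services"]
    lines = ["digraph deployment {"]
    for name, spec in services.items():
        lines.append(f'  "{name}" [label="{name}\\n{spec.get("image", "?")}"];')
        for dep in spec.get("depends_on", []):
            lines.append(f'  "{name}" -> "{dep}";')
    lines.append("}")
    return "\n".join(lines)

print(compose_to_dot(COMPOSE))
```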
Model deployment in machine learning has emerged as an intriguing field of research in recent years. It is comparable to the procedure defined for conventional software development. Continuous Integration and Continuous Delivery (CI/CD) have been shown to streamline software development and speed up businesses when used in conjunction with development and operations (DevOps). Using CI/CD pipelines in an application that includes Machine Learning Operations (MLOps) components, on the other hand, poses difficult challenges, and pioneers in the area solve them by using unique tools, which are typically provided by cloud providers. This research provides a more in-depth look at the machine learning lifecycle and the key distinctions between DevOps and MLOps. In the MLOps approach, we discuss tools and approaches for executing the CI/CD pipeline of machine learning frameworks. Following that, we take a deep look into push- and pull-based deployments in GitOps. Open research issues are also identified, which may guide future study.
2022-02-05 15:15:36.000000000
15,160
In modern Web technology, JavaScript (JS) code plays an important role. To avoid the exposure of original source code, the variable names in JS code deployed in the wild are often replaced by short, meaningless names, thus making the code extremely difficult to manually understand and analyze. This paper presents JSNeat, an information retrieval (IR)-based approach to recover the variable names in minified JS code. JSNeat follows a data-driven approach to recover names by searching for them in a large corpus of open-source JS code. We use three types of contexts to match a variable in given minified code against the corpus: the context of properties and roles of the variable, the context of that variable and its relations with other variables under recovery, and the context of the task of the function to which the variable contributes. We performed several empirical experiments to evaluate JSNeat on a dataset of more than 322K JS files with 1M functions, and 3.5M variables with 176K unique variable names. We found that JSNeat achieves a high accuracy of 69.1%, a relative improvement of 66.1% and 43% over the two state-of-the-art approaches JSNice and JSNaughty, respectively. JSNeat recovers names for a file or a variable twice as fast as JSNice and four times as fast as JSNaughty.
2019-06-02 13:47:10.000000000
6,825
Context: Software startups are newly created companies with no operating history that are fast in producing cutting-edge technologies. These companies develop software under highly uncertain conditions, tackling fast-growing markets under a severe lack of resources. Therefore, software startups present a unique combination of characteristics which pose several challenges to software development activities. Objective: This study aims to structure and analyze the literature on software development in startup companies, determining thereby the potential for technology transfer and identifying software development work practices reported by practitioners and researchers. Method: We conducted a systematic mapping study, developing a classification schema, ranking the selected primary studies according to their rigor and relevance, and analyzing reported software development work practices in startups. Results: A total of 43 primary studies were identified and mapped, synthesizing the available evidence on software development in startups. Only 16 studies are entirely dedicated to software development in startups, of which 10 result in a weak contribution (advice and implications (6); lessons learned (3); tool (1)). Nineteen studies focus on managerial and organizational factors. Moreover, only 9 studies exhibit high scientific rigor and relevance. From the reviewed primary studies, 213 software engineering work practices were extracted, categorized, and analyzed. Conclusion: This mapping study provides the first systematic exploration of the state of the art of software startup research. The existing body of knowledge is limited to a few high-quality studies. Furthermore, the results indicate that software engineering work practices are chosen opportunistically, adapted and configured to provide value under the constraints imposed by the startup context.
2023-07-24 02:39:53.000000000
9,308
The COVID-19 outbreak has admittedly caused a major disruption worldwide. The interruptions to production, transportation, and mobility have clearly had a significant impact on the well-functioning of the global supply and demand chain. But what happened to the companies developing digital services, such as software? Were they interrupted as much, or at all? And how has the enforced Working-From-Home (WFH) mode impacted their ability to continue to deliver software? We hear that some managers are concerned that their engineers are not working effectively from home, or even lack the motivation to work in general, that teams lose touch, and that managers do not notice when things go wrong. In this article, we share our findings from monitoring the situation in an international software company with engineers located in Sweden, the USA, and the UK. We analyzed different aspects of productivity, such as developer satisfaction and well-being, activity, communication and collaboration, and efficiency and flow, based on the archives of commit data, calendar invites, and Slack communication, as well as internal reports of WFH experiences and 18 interviews. We find that the company's engineers continue to commit code and carry out their daily duties without significant disruptions, while their routines have gradually adjusted to the new norm with new emerging practices and various changes to the old ones. In a way, our message is that there is no news, which is good news. Yet, the experiences gained with WFH at such scale have already made significant changes in the software industry's future, work from anywhere being an example of major importance.
2021-01-18 19:52:29.000000000
8,440
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only few, experts exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
2018-09-14 03:07:40.000000000
1,776
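A hypothetical sketch of the core idea above, expertise elicited from aggregated contribution metrics over time: per-author complexity deltas are decayed by age and summed per component. The weighting scheme and data shape are illustrative, not the paper's actual formula.

```python
from collections import defaultdict

# Illustrative sketch: aggregate per-author changes, weighted by the
# complexity each change touched and decayed over time, to rank
# expert candidates per software component.

def rank_experts(changes, half_life_days=180.0):
    """changes: (author, component, complexity_delta, age_days) tuples."""
    scores = defaultdict(float)
    for author, component, delta, age in changes:
        decay = 0.5 ** (age / half_life_days)   # older work counts less
        scores[(component, author)] += abs(delta) * decay
    by_component = defaultdict(list)
    for (component, author), score in scores.items():
        by_component[component].append((score, author))
    return {c: sorted(v, reverse=True) for c, v in by_component.items()}

changes = [("ana", "parser", 12, 30), ("bob", "parser", 4, 400),
           ("bob", "ui", 9, 10)]
print(rank_experts(changes))
```

Components whose top score is low (or held by a single person) would be flagged as knowledge-risk areas.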
This paper introduces Jasper, a web programming framework which allows web applications to be developed in an essentially platform-independent manner and which is also suited to a formal treatment. It outlines Jasper conceptually and shows how Jasper is implemented on several commonplace platforms. It also introduces the Jasper Music Store, a web application powered by Jasper and implemented on each of these platforms. Finally, it briefly describes a formal treatment and outlines the planned tools and languages that will allow this treatment to be automated.
2012-10-19 11:31:40.000000000
9,190
In software engineering, conceptual modeling focuses on creating representations of the world that are as faithful and rich as possible, with the aim of guiding the development of software systems. In contrast, in the computing realm, the notion of ontology has been characterized as being closely related to conceptual modeling and is often viewed as a specification of a conceptualization. Accordingly, conceptual modeling and ontology engineering now address the same problem of representing the world in a suitable fashion. A high-level ontology provides a means to describe concepts and their interactions with each other and to capture structural and behavioral features in the intended domain. This paper aims to analyze ontological concepts and semantics of modeling notations to provide a common understanding among software engineers. An important issue in this context concerns the question of whether the modeled world might be stratified into ontological levels. We introduce an abstract system of two-level domain ontology to be used as a foundation for conceptual models. We study the two levels of staticity and dynamics in the context of the thinging machine (TM) model using the notions of potentiality and actuality that the Franco-Romanian philosopher Stéphane Lupasco developed in logic. He provided a quasi-universal rejection of contradiction where every event is always associated with a non-event, such that the actualization of an event entails the potentialization of the non-event and vice versa, without either ever disappearing completely. This approach is illustrated by re-modeling UML state machines in TM modeling. The results strengthen the semantics of static versus dynamic levels in conceptual modeling and sharpen the notion of events as a phenomenon without negativity alternating between the two levels of dynamics and staticity.
2022-10-25 21:07:37.000000000
4,685
A growing number of researchers suggest that software process must be tailored to a project's context to achieve maximal performance. Researchers have studied 'context' in an ad-hoc way, with focus on those contextual factors that appear to be of significance. The result is that we have no useful basis upon which to contrast and compare studies. We are currently researching a theoretical basis for software context for the purpose of tailoring and note that a deeper consideration of the meaning of the term 'context' is required before we can proceed. In this paper, we examine the term and present a model based on insights gained from our initial categorisation of contextual factors from the literature. We test our understanding by analysing a further six documents. Our contribution thus far is a model that we believe will support a theoretical operationalisation of software context for the purpose of process tailoring.
2021-02-17 02:56:31.000000000
4,738
Collaborative activities among knowledge workers such as software developers underlie the development of modern society, but an in-depth understanding of their behavioral patterns in open online communities is very challenging. The availability of large volumes of data in open-source software (OSS) repositories (e.g., bug tracking data, emails, and comments) enables us to investigate this issue in a quantitative way. In this paper, we conduct an empirical analysis of online collaborative activities closely related to assuring software quality in two well-known OSS communities, namely Eclipse and Mozilla. Our main findings include two aspects: (1) developers exhibit two diametrically opposite behavioral patterns at spatial and temporal scales when they work under two different states (i.e., normal and overload), and (2) the processing times (including bug fixing times and bug tossing times) follow a stretched exponential distribution instead of the common power law distribution. Our work reveals regular patterns in human dynamics beyond online collaborative activities among skilled developers who work under different task-driven load conditions, and it could be an important supplement to the current work on human dynamics.
2017-02-23 12:36:14.000000000
7,190
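The stretched exponential reported above has survival function P(T > t) = exp(-(t/tau)^beta) with beta < 1, and can be fitted directly. A sketch using SciPy on synthetic stand-in data; Weibull-distributed times have exactly this survival function, so the fit should recover beta near 0.6.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the stretched exponential survival function
# P(T > t) = exp(-(t / tau) ** beta) to (synthetic) fixing times.

def survival(t, tau, beta):
    return np.exp(-((t / tau) ** beta))

rng = np.random.default_rng(1)
times = np.sort(rng.weibull(0.6, size=5000) * 40.0)    # stand-in data
emp = 1.0 - np.arange(1, len(times) + 1) / len(times)  # empirical P(T>t)

(tau, beta), _ = curve_fit(survival, times, emp, p0=(10.0, 0.5))
print(f"tau={tau:.2f}, beta={beta:.2f}")  # beta < 1 => stretched, not power law
```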
Testing networked systems is challenging. The client or server side cannot be tested by itself. We present a solution using tool "Modbat" that generates test cases for Java's network library java.nio, where we test both blocking and non-blocking network functions. Our test model can dynamically simulate actions in multiple worker and client threads, thanks to a carefully orchestrated design that covers non-determinism while ensuring progress.
2017-03-20 02:49:04.000000000
11,208
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code -- supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. Though hybrid approaches aim for the "best of both worlds," using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution -- avoiding performance bottlenecks and semantically inequivalent results. We present our ongoing work on an automated refactoring approach that assists developers in specifying whether and how their otherwise eagerly-executed imperative DL code could be reliably and efficiently executed as graphs at run-time in a semantics-preserving fashion. The approach, based on a novel tensor analysis specifically for imperative DL code, consists of refactoring preconditions for automatically determining when it is safe and potentially advantageous to migrate imperative DL code to graph execution and modifying decorator parameters or eagerly executing code already running as graphs. The approach is being implemented as a PyDev Eclipse IDE plug-in and uses the WALA Ariadne analysis framework. We discuss our ongoing work towards optimizing imperative DL code to its full potential.
2023-08-22 07:23:12.000000000
15,060
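A minimal example of the eager-versus-graph trade-off the abstract targets, using TensorFlow's real tf.function API to trace an imperative function into a graph. Whether such a migration is safe and profitable is precisely what the described refactoring approach checks; this snippet only shows the mechanics.

```python
import tensorflow as tf

# Eagerly-executed imperative DL code: easy to debug, slower at scale.
def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Hybrid style: tf.function traces the Python function into a graph.
# This is only safe when the code has no Python side effects that must
# run on every call, which is the kind of precondition the abstract's
# refactoring approach verifies before migrating code to graph execution.
graph_mse = tf.function(mse)

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([1.5, 2.0, 2.0])
print(mse(x, y).numpy(), graph_mse(x, y).numpy())  # same result
```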
When we consider the application layer of networked infrastructures, data and control flow are important concerns in distributed systems integration. Modularity is a fundamental principle in software design, in particular for distributed system architectures. Modularity emphasizes high cohesion of individual modules and low coupling between modules. Microservices are a recent modularization approach with the specific requirements of independent deployability and, in particular, decentralized data management. Cohesiveness of microservices goes hand-in-hand with loose coupling, making the development, deployment, and evolution of microservice architectures flexible and scalable. However, in our experience with microservice architectures, interactions and flows among microservices are usually more complex than in traditional, monolithic enterprise systems, since services tend to be smaller and only have one responsibility, causing collaboration needs. We suggest that for loose coupling among microservices, explicit control-flow modeling and execution with central workflow engines should be avoided on the application integration level. On the level of integrating microservices, data-flow modeling should be dominant. Control-flow should be secondary and preferably delegated to the microservices. We discuss coupling in distributed systems integration and reflect on the history of business process modeling with respect to data and control flow. To illustrate our recommendations, we present some results for flow-based programming in our Industrial DevOps project Titan, where we employ flow-based programming for the Industrial Internet of Things.
2021-08-17 14:30:35.000000000
3,492
All non-trivial software systems suffer from unanticipated production failures. However, those systems are passive with respect to failures and do not take advantage of them in order to improve their future behavior: they simply wait for them to happen and trigger hard-coded failure recovery strategies. Instead, I propose a new paradigm in which software systems learn from their own failures. By using an advanced monitoring system they have a constant awareness of their own state and health. They are designed in order to automatically explore alternative recovery strategies inferred from past successful and failed executions. Their recovery capabilities are assessed by self-injection of controlled failures; this process produces knowledge in preparation for future unanticipated failures.
2015-01-28 16:17:58.000000000
8,610
In this paper, a new hierarchical software architecture is proposed to improve the safety and reliability of a safety-critical drone system from the perspective of its source code. The proposed architecture uses formal verification methods to ensure that the implementation of each module satisfies its expected design specification, so that it prevents a drone from crashing due to unexpected software failures. This study builds on top of a formally verified operating system kernel, certified kit operating system (CertiKOS). Since device drivers are considered the most important parts affecting the safety of the drone system, we focus mainly on verifying bus drivers such as the serial peripheral interface and the inter-integrated circuit drivers in a drone system using a rigorous formal verification method. Experiments have been carried out to demonstrate the improvement in reliability in case of device anomalies.
2019-05-15 21:16:12.000000000
2,675
In agile ontology-based software engineering projects support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The contribution of this paper is a new design artifact called OntoMaven combined with a unified approach to ontology modularization, aspect-oriented ontology development, which was inspired by aspect-oriented programming. OntoMaven adopts the Apache Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories. The combination with aspect-oriented ontology development allows for fine-grained, declarative configuration of ontology modules.
2015-06-24 21:07:47.000000000
8,942
Despite recent initiatives aimed at improving accessibility, the field of digital accessibility remains markedly behind contemporary advancements in the software industry as a large number of real world software and web applications continue to fall short of accessibility requirements. A persisting skills deficit within the existing technology workforce has been an enduring impediment, hindering organizations from delivering truly accessible software products. This, in turn, elevates the risk of isolating and excluding a substantial portion of potential users. In this paper, we report lessons learned from a training program for teaching digital accessibility using the Communities of Practice (CoP) framework to industry professionals. We recruited 66 participants from a large multi-national software company and assigned them to two groups: one participating in a CoP and the other using self-paced learning. We report experiences from designing the training program, conducting the actual training, and assessing the efficiency of the two approaches. Based on these findings, we provide recommendations for practitioners in Learning and Development teams and educators in designing accessibility courses for industry professionals.
2023-12-28 15:47:30.000000000
609
Model-based safety assessment has been one of the leading research thrusts of the System Safety Engineering community for over two decades. However, there is still a lack of consensus on what MBSA is. The ambiguity in the identity of MBSA impedes the advancement of MBSA as an active research area. For this reason, this paper aims to investigate the identity of MBSA to help achieve a consensus across the community. Towards this end, we first reason about the core activities that an MBSA approach must conduct. Second, we characterize the core patterns in which the core activities must be conducted for an approach to be considered MBSA. Finally, a recently published MBSA paper is reviewed to test the effectiveness of our characterization of MBSA.
2022-12-08 16:31:30.000000000
4,734
Mocking in the context of automated software tests allows testing program units in isolation. Designing realistic interactions between a unit and its environment, and understanding the expected impact of these interactions on the behavior of the unit, are two key challenges that software testers face when developing tests with mocks. In this paper, we propose to monitor an application in production to generate tests that mimic realistic execution scenarios through mocks. Our approach operates in three phases. First, we instrument a set of target methods for which we want to generate tests, as well as the methods that they invoke, which we refer to as mockable method calls. Second, in production, we collect data about the context in which target methods are invoked, as well as the parameters and the returned value for each mockable method call. Third, offline, we analyze the production data to generate test cases with realistic inputs and mock interactions. The approach is automated and implemented in an open-source tool called RICK. We evaluate our approach with 3 real-world, open-source Java applications. RICK monitors the invocation of 128 methods in production across the 3 applications and captures their behavior. Based on this captured data, RICK generates test cases that include realistic initial states and test inputs, mocks, and stubs. The three kinds of mock-based oracles generated by RICK verify the actual interactions between the method and its environment. All the generated test cases are executable, and 52.4% of them successfully mimic the complete execution context of the methods observed in production. The mock-based oracles are effective at detecting regressions within the target methods, complementing each other in their fault-finding ability. We interview 5 developers from the industry who confirm the relevance of using production observations to design mocks and stubs.
2022-07-31 20:26:46.000000000
9,688
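RICK targets Java, but the essence of production-driven mocking can be sketched in Python with unittest.mock: captured parameters and return values become stubs, and the recorded interaction becomes an oracle. The captured data and method names below are hypothetical.

```python
from unittest.mock import Mock

# Python analogue of the idea (RICK itself targets Java): data captured
# in production -- the parameters and return value of a mockable call --
# is replayed as a stub, and the interaction itself becomes an oracle.

captured = {"args": ("user-42",), "return": {"plan": "pro"}}

def charge(account_service, user_id):
    profile = account_service.lookup(user_id)      # mockable method call
    return 99 if profile["plan"] == "pro" else 0

def test_charge_mimics_production():
    service = Mock()
    service.lookup.return_value = captured["return"]   # stub from production
    assert charge(service, *captured["args"]) == 99    # output oracle
    service.lookup.assert_called_once_with("user-42")  # interaction oracle

test_charge_mimics_production()
print("ok")
```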
We review a case study of a UI design project for a complete travel search engine system prototype for regular and corporate users. We discuss various usage scenarios, guidelines, and so for, and put them into a web-based prototype with screenshots and the like. We combined into our prototype the best features found at the time (2002) on most travel-like sites and added more to them as a part of our research. We conducted feasibility studies, review common design guidelines and Nelson's heuristics while constructing this work. The prototype is itself open-source, but has no backend functionality, as the focus is the user-centered design of such a system. While the prototype is mostly static, some dynamic activity is present through the use of PHP.
2010-05-10 07:57:14.000000000
14,302
Software developers cannot always anticipate how users will actually use their software as it may vary from user to user, and even from use to use for an individual user. In order to address questions raised by system developers and evaluators about software usage, we define new probabilistic models that characterise user behaviour, based on activity patterns inferred from actual logged user traces. We encode these new models in a probabilistic model checker and use probabilistic temporal logics to gain insight into software usage. We motivate and illustrate our approach by application to the logged user traces of an iOS app.
2014-03-25 11:23:14.000000000
11,415
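One way to picture the approach above: infer a discrete-time Markov chain from logged traces and query it. The paper encodes such models in a probabilistic model checker with temporal logics; the sketch below computes a simple usage probability directly, on made-up traces.

```python
from collections import Counter, defaultdict

# Sketch: infer a discrete-time Markov chain from logged user traces,
# then answer a usage question directly. States are app screens.

traces = [["home", "search", "detail", "quit"],
          ["home", "search", "quit"],
          ["home", "detail", "quit"]]

counts = defaultdict(Counter)
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1

# Maximum-likelihood transition probabilities P[state][next_state].
P = {s: {t: n / sum(c.values()) for t, n in c.items()}
     for s, c in counts.items()}

# "What fraction of visits to 'search' continue to 'detail'?"
print(P["search"].get("detail", 0.0))  # 0.5 from the traces above
```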
Background: A systematic literature review (SLR) is a methodology used to aggregate all relevant existing evidence to answer a research question of interest. Although crucial, the process used to select primary studies can be arduous, time consuming, and must often be conducted manually. Objective: We propose a novel approach, known as 'Systematic Literature Review based on Visual Text Mining' or simply SLR-VTM, to support the primary study selection activity using visual text mining (VTM) techniques. Method: We conducted a case study to compare the performance and effectiveness of four doctoral students in selecting primary studies manually and using the SLR-VTM approach. To enable the comparison, we also developed a VTM tool that implemented our approach. We hypothesized that students using SLR-VTM would present improved selection performance and effectiveness. Results: Our results show that incorporating VTM in the SLR study selection activity reduced the time spent in this activity and also increased the number of studies correctly included. Conclusions: Our pilot case study presents promising results suggesting that the use of VTM may indeed be beneficial during the study selection activity when performing an SLR.
2021-02-03 03:39:19.000000000
1,041
Scripting languages are becoming more and more important as a tool for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, in a high abstraction level.
1998-10-26 16:52:34.000000000
1,736
Application Programming Interfaces (APIs) are designed to help developers build software more effectively. Recommending the right APIs for specific tasks has gained increasing attention among researchers and developers in recent years. To comprehensively understand this research domain, we surveyed and analyzed API recommendation studies published in the last 10 years. Our study begins with an overview of the structure of API recommendation tools. Subsequently, we systematically analyze prior research and pose four key research questions. For RQ1, we examine the volume of published papers and the venues in which these papers appear within the API recommendation field. In RQ2, we categorize and summarize the prevalent data sources and collection methods employed in API recommendation research. In RQ3, we explore the types of data and common data representations utilized by API recommendation approaches. We also investigate the typical data extraction procedures and collection approaches employed by the existing approaches. RQ4 delves into the modeling techniques employed by API recommendation approaches, encompassing both statistical and deep learning models. Additionally, we compile an overview of the prevalent ranking strategies and evaluation metrics used for assessing API recommendation tools. Drawing from our survey findings, we identify current challenges in API recommendation research that warrant further exploration, along with potential avenues for future research.
2023-12-15 14:33:37.000000000
13,547
Detecting large-variance code clones (i.e., clones with relatively more differences) in large-scale code repositories is difficult because most current tools can only detect almost identical or very similar clones. Detecting such clones would benefit software applications such as bug detection, code completion, and software analysis. Recently, CCAligner made an attempt to detect clones with relatively concentrated modifications, called large-gap clones. Our contribution is a novel and effective approach that detects large-variance clones in more general cases, covering not only concentrated but also scattered code modifications. A detector named LVMapper is proposed, borrowing and adapting the sequence alignment approach from bioinformatics, which can find two similar sequences with more differences. The ability of LVMapper was tested on both self-synthesized datasets and real cases, and the results show substantial improvement in detecting large-variance clones compared with other state-of-the-art tools including CCAligner. Furthermore, our new tool also shows good recall and precision for general Type-1, Type-2 and Type-3 clones on the widely used benchmarking dataset, BigCloneBench.
2019-09-08 14:18:31.000000000
13,810
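The bioinformatics technique LVMapper borrows from is local sequence alignment, which tolerates scattered insertions and deletions. A plain Smith-Waterman-style sketch over token sequences follows; the scoring values are illustrative, and real clone detectors are heavily optimized beyond this.

```python
# Local sequence alignment (Smith-Waterman style) over token sequences:
# it tolerates scattered insertions/deletions, so two code fragments can
# align well despite large variance between them.

def local_align(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(0, diag, score[i-1][j] + gap, score[i][j-1] + gap)
            best = max(best, score[i][j])
    return best

x = "for i in range n : total += a [ i ]".split()
y = "for k in range m : if a [ k ] > 0 : total += a [ k ]".split()
print(local_align(x, y))  # high score despite inserted tokens
```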
Static analysis tools are traditionally used to detect and flag programs that violate properties. We show that static analysis tools can also be used to perturb programs that satisfy a property to construct variants that violate the property. Using this insight we can construct paired datasets of unsafe-safe program pairs, and learn strategies to automatically repair property violations. We present a system called \sysname, which automatically repairs information flow vulnerabilities using this approach. Since information flow properties are non-local (both to check and repair), \sysname also introduces a novel domain specific language (DSL) and strategy learning algorithms for synthesizing non-local repairs. We use \sysname to synthesize strategies for repairing two types of information flow vulnerabilities, unvalidated dynamic calls and cross-site scripting, and show that \sysname successfully repairs several hundred vulnerabilities from open source JavaScript repositories, outperforming neural baselines built using CodeT5 and Codex. Our datasets can be downloaded from [LINK].
2023-07-21 18:46:45.000000000
3,182
Self-adaptive systems manage themselves to deal with uncertainties that can only be resolved during operation. A common approach to realize self-adaptation is by adding a feedback loop to the system that monitors the system and adapts it to realize a set of adaptation goals. ActivFORMS (Active FORmal Models for Self-adaptation) provides an end-to-end approach for engineering self-adaptive systems. ActivFORMS relies on feedback loops that consist of formally verified models that are directly deployed and executed at runtime to realize self-adaptation. At runtime, the approach relies on statistical verification techniques that allow efficient analysis of the possible options for adaptation. Further, ActivFORMS supports on-the-fly changes of adaptation goals and updates of the verified models to meet the new goals. ActivFORMSi provides a tool-supported instance of ActivFORMS. The approach has been validated using an IoT application for building security monitoring. This report provides complementary material to the paper "ActivFORMS: A Formally-Founded Model-Based Approach to Engineer Self-Adaptive Systems" [Weyns and Iftikhar 2019].
2021-12-09 13:32:41.000000000
8,262
Programming language documentation refers to the set of technical documents that provide application developers with a description of the high-level concepts of a language. Such documentation is essential to support application developers in the effective use of a programming language. One of the challenges faced by documenters (i.e., personnel that produce documentation) is to ensure that documentation has relevant information that aligns with the concrete needs of developers. In this paper, we present an automated approach to support documenters in evaluating the differences and similarities between the concrete information needs of developers and the current state of documentation (a problem that we refer to as the topical alignment of a programming language documentation). Our approach leverages semi-supervised topic modelling to assess the similarities and differences between the topics of Q&A posts and the official documentation. To demonstrate the application of our approach, we perform a case study on the documentation of Rust. Our results show that there is a relatively high level of topical alignment in the Rust documentation. Still, information about specific topics is scarce in both the Q&A websites and the documentation, particularly topics related to programming niches such as network, game, and database development. For other topics (e.g., topics related to language features such as structs, pattern matching, and the foreign function interface), information is only available on Q&A websites while lacking in the official documentation. Finally, we discuss implications for programming language documenters, particularly how to leverage our approach to prioritize topics that should be added to the documentation.
2022-02-07 22:04:38.000000000
2,063
Today, service-oriented systems need to be enhanced to sense and react to users' context in order to provide a better user experience. To meet this requirement, Context-Aware Services (CAS) have emerged as an underlying design and development paradigm for the development of context-aware systems. The fundamental challenges for such systems' development are context-awareness management and service adaptation to the user's context. To cope with such requirements, we propose a well-designed architecture, named ACAS, to support the development of Context-Aware Service Oriented Systems (CASOS). This architecture relies on a set of context-awareness and CAS specifications and metamodels to enhance a core service, in service-oriented systems, to be context-aware. This enhancement is fulfilled by the Aspect Adaptations Weaver (A2W) which, based on the Aspect Paradigm (AP) concepts, treats service adaptations as aspects.
2012-11-12 14:04:38.000000000
4,894
Registries play a key role in service-oriented applications. Originally, they were neutral players between service providers and clients. The UDDI Business Registry (UBR) was meant to foster these concepts and provide a common reference for companies interested in Web services. The more Web services were used, the more companies started to create their own local registries: more efficient discovery processes, better control over the quality of published information, and also more sophisticated publication policies motivated the creation of private repositories. The number and heterogeneity of the different registries, together with the decision to close the UBR, are pushing for new and sophisticated means to make different registries cooperate. This paper proposes DIRE (DIstributed REgistry), a novel approach based on a publish and subscribe (P/S) infrastructure to federate different heterogeneous registries and make them exchange information about published services. The paper discusses the main motivations for the P/S-based infrastructure, proposes an integrated service model, introduces the main components of the framework, and exemplifies them on a simple case study.
2010-07-19 17:51:54.000000000
3,185
A major determinant of the quality of software systems is the quality of their requirements, which should be both understandable and precise. Most requirements are written in natural language, good for understandability but lacking in precision. To make requirements precise, researchers have for years advocated the use of mathematics-based notations and methods, known as "formal". Many exist, differing in their style, scope and applicability. The present survey discusses some of the main formal approaches and compares them to informal methods. The analysis uses a set of 9 complementary criteria, such as level of abstraction, tool availability, traceability support. It classifies the approaches into five categories: general-purpose, natural-language, graph/automata, other mathematical notations, seamless (programming-language-based). It presents approaches in all of these categories, altogether 22 different ones, including for example SysML, Relax, Eiffel, Event-B, Alloy. The review discusses a number of open questions, including seamlessness, the role of tools and education, and how to make industrial applications benefit more from the contributions of formal approaches. (This is the full version of the survey, including some sections and two appendices which, because of length restrictions, do not appear in the submitted version.)
2019-11-04 08:52:16.000000000
4,764
The emergence of open-source ML libraries such as TensorFlow and Google Auto ML has enabled developers to harness state-of-the-art ML algorithms with minimal overhead. However, during this accelerated ML development process, said developers may often make sub-optimal design and implementation decisions, leading to the introduction of technical debt that, if not addressed promptly, can have a significant impact on the quality of the ML-based software. Developers frequently acknowledge these sub-optimal design and development choices through code comments during software development. These comments, which often highlight areas requiring additional work or refinement in the future, are known as self-admitted technical debt (SATD). This paper aims to investigate SATD in ML code by analyzing 318 open-source ML projects across five domains, along with 318 non-ML projects. We detected SATD in source code comments throughout the different project snapshots, conducted a manual analysis of the identified SATD sample to comprehend the nature of technical debt in the ML code, and performed a survival analysis of the SATD to understand the evolution of such debts. We observed: i) Machine learning projects have a median percentage of SATD that is twice the median percentage of SATD in non-machine learning projects. ii) ML pipeline components for data preprocessing and model generation logic are more susceptible to debt than model validation and deployment components. iii) SATDs appear in ML projects earlier in the development process compared to non-ML projects. iv) Long-lasting SATDs are typically introduced during extensive code changes that span multiple files exhibiting low complexity.
2023-11-18 22:12:59.000000000
11,735
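SATD is typically surfaced from comment patterns. Below is a minimal keyword-based detector in the spirit of early SATD studies, not the paper's actual classifier; the pattern list and sample code are illustrative.

```python
import re

# Minimal keyword-based SATD detector over Python source comments.
# Pattern lists like this seeded early SATD detection work; real
# classifiers are considerably more involved.

SATD_PATTERNS = re.compile(
    r"\b(todo|fixme|hack|workaround|temporary|kludge)\b", re.IGNORECASE)

def satd_comments(source):
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        comment = line.partition("#")[2]          # text after '#', if any
        if comment and SATD_PATTERNS.search(comment):
            hits.append((lineno, comment.strip()))
    return hits

code = """x = load()  # TODO: validate schema before training
model = fit(x)  # HACK: hard-coded hyperparameters for the demo
save(model)
"""
print(satd_comments(code))
```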
The general acceptance of sequence diagrams can be attributed to their relatively intuitive nature and ability to describe partial behaviors (as opposed to such diagrams as state charts). However, studies have shown that over 80 percent of graduating students were unable to create a software design or even a partial design, and many students had no idea how sequence diagrams were constrained by other models. Many students exhibited difficulties in identifying valid interacting objects and constructing messages with appropriate arguments. Additionally, according to authorities, even though many different semantics have been proposed for sequence diagrams (e.g., translations to state machines), there exists no suitable semantic basis for refinement of required sequence diagram behavior, because direct-style semantics do not precisely capture required sequence diagram behaviors, and translations to other formalisms disregard essential features of sequence diagrams such as guard conditions and critical regions. This paper proposes an alternative to sequence diagrams, a generalized model that provides further understanding of sequence diagrams to assimilate them into a new modeling language called thinging machine (TM). The sequence diagram is extended horizontally by removing the superficial vertical-only dimensional limitation of expansion to preserve the logical chronology of events. TM diagramming is spread nonlinearly in terms of actions. Events and their chronology are constructed on a second plane of description that is superimposed on the initial static description. The result is a more refined representation that would simplify the modeling process. This is demonstrated through remodeling sequence diagram cases from the literature.
2021-05-31 06:50:58.000000000
5,702
The ever-increasing amount and variety, as well as the generation and processing speed, of today's data pose a variety of new challenges for developing Data-Intensive Software Systems (DISS). As with developing other kinds of software systems, developing DISS is often done under severe pressure and strict schedules. Thus, developers of DISS often have to make technical compromises to meet business concerns. This position paper proposes a conceptual model that outlines where Technical Debt (TD) can emerge and proliferate within such data-centric systems by separating a DISS into three parts (Software Systems, Data Storage Systems, and Data). Further, the paper illustrates the proliferation of Database Schema Smells as TD items within a relational database-centric software system based on two examples.
2019-05-28 16:08:07.000000000
7,087
During the earliest phase of the architectural design process, practitioners, after analyzing the client's design program, legal requirements, topographic constraints, and preferences, synthesize these requirements into architectural floor plan drawings. Design decisions taken in this phase may significantly contribute to the building's performance. For this reason, it is important to estimate and compare alternative solutions while it is still manageable to change the building design. The authors have been developing a prototype tool to assist architects during this initial design phase. It is made up of two algorithms. The first algorithm generates alternative floor plans according to the architect's preferences and requirements, and the client's design program. It consists of an evolutionary strategy approach enhanced with a local search technique to allocate rooms on several levels in two-dimensional space. The second algorithm evaluates, ranks, and optimizes those floor plans according to thermal performance criteria. The prototype tool is coupled with a dynamic simulation program, which estimates the thermal behavior of each solution. A sequential variable optimization is used to change several geometric values of different architectural elements in the floor plans to explore the improvement potential. In the present communication, the two algorithms are used in an iterative process to generate and optimize the thermal performance of alternative floor plans. In the building simulation specifications of the EnergyPlus program, the airflow network model has been used in order to adequately model air infiltration and airflows through indoor spaces. A case study of a single-family house with three rooms on a single level is presented.
2014-09-27 04:00:59.000000000
13,466
Software fault localization is one of the most expensive, tedious, and time-consuming activities in program debugging. This activity becomes even much more challenging in Software Product Line (SPL) systems due to the variability of failures in SPL systems. These unexpected behaviors are caused by variability faults which can only be exposed under some combinations of system features. Although localizing bugs in non-configurable code has been investigated in-depth, variability fault localization in SPL systems still remains mostly unexplored. To approach this challenge, we propose a benchmark for variability fault localization with a large set of 1,570 buggy versions of six SPL systems and baseline variability fault localization performance results. Our hope is to engage the community to propose new and better approaches to the problem of variability fault localization in SPL systems.
2021-07-09 01:08:49.000000000
1,151
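Baselines for such a benchmark are usually spectrum-based fault localization formulas. Below is a sketch of the standard Ochiai score, which ranks a statement as suspicious when it is covered mostly by failing tests; this is the general technique, not specifically the paper's baseline.

```python
from math import sqrt

# Spectrum-based fault localization: the Ochiai suspiciousness score
# rises for statements executed by many failing and few passing tests.

def ochiai(failed_cov, passed_cov, total_failed):
    """failed_cov/passed_cov: failing/passing tests covering the statement."""
    denom = sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# coverage[stmt] = (failing tests covering it, passing tests covering it)
coverage = {"s1": (3, 5), "s2": (3, 0), "s3": (1, 7)}
total_failed = 3
ranking = sorted(coverage, key=lambda s: -ochiai(*coverage[s], total_failed))
print(ranking)  # 's2' first: covered by all failing tests, no passing tests
```

In the SPL setting, the benchmark's point is that such scores must additionally account for which feature combinations expose the fault.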
Service Oriented Architecture (SOA) is a loosely coupled architecture designed to tackle the problem of aligning business infrastructure with the needs of an organization. An SOA-based platform enables enterprises to develop applications in the form of independent services. To provide scalable service interactions, there is a need to maintain service performance and have a good sizing guideline for the underlying software platform. Sizing aids in finding the optimum resources required to configure and implement a system that would satisfy the requirements of the Business Process Integration being planned. A web-based Sizing Tool prototype was developed using Java Application Programming Interfaces to automate the process of sizing applications deployed on an SOA platform; it not only scales the performance of the system but also predicts its business growth in the future.
2014-11-17 14:41:07.000000000
939
In today's competitive environment, companies have given increasing importance to the IT sector and regard the resources it delivers as strategic. As a result, IT becomes a living organism within the company, subject to continuous change. These changes can occur within the IT sector itself or flow from IT to other sectors of the company. In both scenarios, it is important to have good change control to avoid unnecessary trouble and expense. This paper aims to show, through a case study, the benefits and results obtained with the implementation of a process for managing and controlling changes in the information technology environment of a large government company in Brazil.
2016-03-04 15:05:54.000000000
5,082
Performance is an important non-functional aspect of the software requirement. Modern software systems are highly-configurable and misconfigurations may easily cause performance issues. A software system that suffers performance issues may exhibit low program throughput and long response time. However, the sheer size of the configuration space makes it challenging for administrators to manually select and adjust the configuration options to achieve better performance. In this paper, we propose ConfRL, an approach to tune software performance automatically. The key idea of ConfRL is to use reinforcement learning to explore the configuration space by a trial-and-error approach and to use the feedback received from the environment to tune configuration option values to achieve better performance. To reduce the cost of reinforcement learning, ConfRL employs sampling, clustering, and dynamic state reduction techniques to keep states in a large configuration space manageable. Our evaluation of four real-world highly-configurable server programs shows that ConfRL can efficiently and effectively guide software systems to achieve higher long-term performance.
2020-09-30 12:52:35.000000000
15,096
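The trial-and-error loop behind ConfRL can be caricatured with a tiny epsilon-greedy learner over discretized option values. The "server" below is simulated and the update rule is a generic value estimate; ConfRL's actual state reduction and clustering are not shown.

```python
import random

# Toy epsilon-greedy sketch of reinforcement-learning-style configuration
# tuning: pick option values, observe a performance reward, and shift
# toward better-performing settings.

OPTIONS = {"threads": [2, 4, 8, 16], "cache_mb": [64, 256, 1024]}

def measure(config):                       # stand-in for a real benchmark
    return -abs(config["threads"] - 8) - abs(config["cache_mb"] - 256) / 100

q = {}                                     # running value estimate per config
def choose(epsilon=0.2):
    if not q or random.random() < epsilon:
        return {k: random.choice(v) for k, v in OPTIONS.items()}  # explore
    return dict(max(q, key=q.get))                                # exploit

for _ in range(200):
    cfg = choose()
    key = tuple(sorted(cfg.items()))
    r = measure(cfg)
    q[key] = q.get(key, 0.0) + 0.5 * (r - q.get(key, 0.0))

print(max(q, key=q.get))  # should settle near threads=8, cache_mb=256
```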
This volume contains the final and revised versions of the papers presented at the 8th International Workshop on Automated Specification and Verification of Web Systems (WWV 2012). The workshop was held in Stockholm, Sweden, on June 16, 2012, as part of DisCoTec 2012. WWV is a yearly workshop that aims at providing an interdisciplinary forum to facilitate the cross-fertilization and the advancement of hybrid methods that exploit concepts and tools drawn from Rule-based programming, Software engineering, Formal methods and Web-oriented research. WWV has a reputation for being a lively, friendly forum for presenting and discussing work in progress. The proceedings have been produced after the symposium to allow the authors to incorporate the feedback gathered during the event in the published papers. All papers submitted to the workshop were reviewed by at least three Program Committee members or external referees. The Program Committee held an electronic discussion leading to the acceptance of all papers for presentation at the workshop. In addition to the presentation of the contributed papers, the scientific programme included invited talks by two outstanding speakers: Rocco De Nicola (IMT, Institute for Advanced Studies Lucca, Italy) and José Luiz Fiadeiro (Royal Holloway, United Kingdom).
2012-10-19 10:58:10.000000000
7,004
Variability management of process models is a major challenge for Process-Aware Information Systems. Process model variants can be attributed to any of the following reasons: new technologies, governmental rules, organizational context, or adoption of new standards. Current approaches to managing variants of process models address issues such as reducing the huge effort of modeling from scratch, preventing redundancy, and controlling inconsistency in process models. Although effort has been devoted to managing process model variants, limitations remain. Furthermore, existing approaches do not focus on variants that come from changes in the organizational perspective of process models. Organization-driven variant management, which we focus on in this paper, is an important area that still needs more study. The Object Life Cycle (OLC) is an important aspect that may change from one organization to another. This paper introduces an approach, inspired by a real-life scenario, to generate consistent process model variants that come from adaptations in the OLC.
2017-08-15 19:57:23.000000000
8,433
We formalize automated analysis techniques for the validation of web services specified in BPEL and an RBAC variant tailored to BPEL. The idea is to use decidable fragments of first-order logic to describe the state space of a certain class of web services and then use state-of-the-art SMT solvers to handle their reachability problems. To assess the practical viability of our approach, we have developed a prototype tool implementing our techniques and applied it to a digital contract-signing service inspired by an industrial case study.
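The flavor of such SMT-backed reachability checks can be sketched with the Z3 solver's Python bindings (pip install z3-solver); the toy authorization policy below is an invented stand-in for the paper's actual BPEL/RBAC encoding.

```python
from z3 import Solver, Bools, Implies, And, Not, sat

signed, is_manager, approved = Bools("signed is_manager approved")

s = Solver()
# Policy: a contract may only be signed by a manager after approval.
s.add(Implies(signed, And(is_manager, approved)))
# Bad state we ask the solver to reach: signed without approval.
s.add(signed, Not(approved))

print("violation reachable" if s.check() == sat else "policy holds")  # policy holds
```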
2010-09-22 04:03:04.000000000
4,469
Testing is one of the most indispensable tasks in software engineering. The role of testing in software development has grown significantly because testing can reveal defects in the code at an early stage of development. Many unit test frameworks compatible with C/C++ code exist, but a standard one is missing. Unfortunately, the existing methods suffer from several unsolved problems; for example, external tools are usually necessary for testing C++ programs. In this paper we present a new approach for testing C++ programs. Our solution is based on C++ template metaprogramming facilities, so it works with any standards-compliant compiler. The metaprogramming approach ensures that the overhead of testing is minimal at runtime. Among other advantages, this approach also allows the specification language to be customized. Nevertheless, the only necessary tool is the compiler itself.
2010-11-30 05:24:01.000000000
2,856
Asynchronous waits are one of the most prevalent root causes of flaky tests and a major factor in the execution time of web application testing. To investigate the characteristics of asynchronous wait flaky tests and their fixes in web testing, we build a dataset of 49 reproducible flaky tests, from 26 open-source projects, caused by asynchronous waits, along with their corresponding developer-written fixes. Our study of these flaky tests reveals that in approximately 63% of them (31 out of 49), developers addressed asynchronous wait flaky tests by adapting the wait time, even in cases where the root cause lay elsewhere. Based on this finding, we propose TRaf, an automated time-based repair method for asynchronous wait flaky tests in web applications. TRaf tackles the flakiness issue by suggesting a proper waiting time for each asynchronous call in a web application, using code similarity and past change history. The core insight is that, since developers often make similar mistakes more than once, hints about an efficient wait time exist in the current or past codebase. Our analysis shows that TRaf can suggest shorter wait times to resolve test flakiness than developer-written fixes, reducing test execution time by 11.1%. With additional dynamic tuning of the new wait time, TRaf further reduces the execution time by 20.2%.
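A minimal sketch of the similarity-driven suggestion step, assuming a hypothetical history of (call-site snippet, wait time) pairs; TRaf's real similarity metric and data model are likely more sophisticated than the textual matching shown here.

```python
import difflib

# Hypothetical history of (async call-site snippet, wait time in ms) pairs
# recovered from past flakiness fixes in the codebase.
HISTORY = [
    ("await page.click('#submit'); waitFor('#toast')", 500),
    ("await fetch('/api/items'); waitFor('.list')", 1500),
]

def suggest_wait(snippet):
    # Pick the wait time of the most textually similar past call site.
    best = max(HISTORY,
               key=lambda e: difflib.SequenceMatcher(None, snippet, e[0]).ratio())
    return best[1]

print(suggest_wait("await page.click('#save'); waitFor('#toast')"))  # -> 500
```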
2023-05-14 04:10:56.000000000
13,221
This work presents an approach for using GitHub Classroom as a shared, structured, and persistent repository to support project-based courses in the Software Engineering undergraduate program at PUC Minas, in Brazil. We discuss the needs of the different stakeholders that guided the development of the approach. Results on the perceptions of professors and students show that the approach brings benefits. Besides the lessons learned, we present insights on improving the education of the next generation of software engineers by employing metrics to monitor skill development, verifying student work portfolios, and employing tooling support in project-based courses.
2021-03-12 07:26:25.000000000
128
The cloud computing model is rapidly transforming the IT landscape. Cloud computing is a new computing paradigm that delivers computing resources as a set of reliable and scalable internet-based services, allowing customers to remotely run and manage these services. Infrastructure-as-a-Service (IaaS) is one of the popular cloud computing services. IaaS allows customers to increase their computing resources on the fly without investing in new hardware. IaaS adopts virtualization to enable on-demand access to a pool of virtual computing resources. Although there are great benefits to be gained from cloud computing, it also introduces new categories of threats. These threats result from the complexity of the cloud virtual infrastructure created by the adoption of virtualization technology. Breaching the security of any component in the cloud virtual infrastructure significantly impacts the security of the other components and consequently affects the overall system security. This paper explores the security problem of the cloud platform's virtual infrastructure, identifying the existing security threats and the complexities of this virtual infrastructure. The paper also discusses the existing approaches to securing the cloud virtual infrastructure and their drawbacks. Finally, we propose and explore some key research challenges of implementing new virtualization-aware security solutions that can provide pre-emptive protection for complex and ever-dynamic cloud virtual infrastructures.
2016-12-19 12:29:27.000000000
9,681
Objective: To create a commons for infectious disease (ID) epidemiology in which epidemiologists, public health officers, data producers, and software developers can not only share data and software but also receive assistance in improving their interoperability. Materials and Methods: We represented 586 datasets, 54 software tools, and 24 data formats in OWL 2 and then used logical queries to infer potentially interoperable combinations of software and datasets, as well as statistics about the FAIRness of the collection. We represented the objects in DATS 2.2 and a software metadata schema of our own design. We used these representations as the basis for the Content, Search, FAIR-o-meter, and Workflow pages that constitute the MIDAS Digital Commons. Results: Interoperability was limited by the lack of standardization of input and output formats of software. Where formats existed, they were human-readable specifications (22/24; 92%); only 3 formats (13%) had machine-readable specifications. Nevertheless, logical search of a triple store based on named data formats was able to identify scores of potentially interoperable combinations of software and datasets. Discussion: We improved the findability and availability of a sample of software and datasets and developed metrics for assessing interoperability. The barriers to interoperability included poor documentation of software input/output formats and little attention to the standardization of most types of data in this field. Conclusion: Centralizing and formalizing the representation of digital objects within a commons promotes FAIRness, enables its measurement over time, and supports the identification of potentially interoperable combinations of data and software.
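The triple-store search can be pictured with a small RDF graph and a SPARQL query that joins software and datasets on a shared format; the ex: vocabulary and resource names below are invented for illustration and are not the commons' actual OWL 2/DATS 2.2 schema.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# A tool that reads a format, and a dataset published in that format.
g.add((EX.FluSim, EX.readsFormat, EX.CSVLineList))
g.add((EX.CaseData2020, EX.hasFormat, EX.CSVLineList))

q = """
PREFIX ex: <http://example.org/>
SELECT ?software ?dataset WHERE {
  ?software ex:readsFormat ?fmt .
  ?dataset  ex:hasFormat   ?fmt .
}"""
for sw, ds in g.query(q):
    print(f"{sw} can consume {ds}")
```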
2023-11-09 16:37:34.000000000
6,826
TouchDevelop is a new programming environment that allows users to create applications on mobile devices. Applications created with TouchDevelop have continued to grow in popularity since TouchDevelop was first released to the public in 2011. This paper presents a field study of 31,699 applications, focusing on the differing characteristics of 539 game scripts versus all other, non-game applications, as well as what makes some game applications more popular with users than others. The study provides a list of findings on the characteristics of game scripts, along with implications for improving end-user programming of game applications.
2013-10-09 13:27:14.000000000
5,265
Android app developers extensively employ code reuse, integrating many third-party libraries into their apps. While such integration is practical for developers, it can be challenging for static analyzers to achieve scalability and precision when libraries account for a large part of the code. As a direct consequence, it is common practice in the literature to consider developer code only during static analysis, under the assumption that the sought issues lie in developer code rather than in the libraries. However, analysts need a way to distinguish between library and developer code. Currently, many static analyses rely on white lists of libraries. However, these white lists are unreliable, inaccurate, and largely non-comprehensive. In this paper, we propose a new approach to address the lack of comprehensive and automated solutions for producing accurate and always up-to-date sets of libraries. First, we demonstrate the continued need for a white list of libraries. Second, we propose an automated approach to produce an accurate and up-to-date set of third-party libraries in the form of a dataset called AndroLibZoo. Our dataset, which we make available to the community, contains 34,813 libraries to date and is meant to evolve.
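To show how such a library set is typically consumed, here is a hedged sketch of a package-prefix filter that keeps only developer code for analysis; the prefixes and class names are examples, not AndroLibZoo's contents.

```python
# Example library package prefixes; in practice these would come from a
# dataset such as AndroLibZoo.
LIBRARY_PREFIXES = {"com.google.gson", "okhttp3", "retrofit2", "androidx"}

def is_library_class(fqcn: str) -> bool:
    # A class belongs to a library if its fully qualified name falls
    # under a known library package.
    return any(fqcn == p or fqcn.startswith(p + ".") for p in LIBRARY_PREFIXES)

classes = ["com.example.app.MainActivity", "okhttp3.OkHttpClient"]
developer_code = [c for c in classes if not is_library_class(c)]
print(developer_code)  # ['com.example.app.MainActivity']
```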
2023-07-23 06:06:38.000000000
12,411
We present our ongoing work on requirements specification and analysis for geographically distributed software and systems. Developing software and systems within or for different countries or states, or even within or for different organisations, means that the requirements for them can differ in each particular case. These aspects naturally impact the software architecture and the development process as a whole. The challenge is to deal with this diversity in a systematic way, avoiding contradictions and non-compliance. In this paper, we present a formal framework for the analysis of requirements diversity, which stems from differences in the regulations, laws, and cultural aspects of different countries or organisations. The framework also provides the corresponding architectural view and methods for requirements structuring and optimisation.
2015-07-31 21:30:42.000000000
512
Programming language development has intensified in recent years. New languages are created; new features, often cross-paradigm, appear in old ones. This new programming landscape makes language selection a more complex decision, both from the companies' point of view (technical, recruiting) and from the developers' point of view (career development). In this paper, however, we argue that programming languages play a secondary role in software development design decisions. We illustrate, based on a practical example, that the main influencers are higher-level traits: those traditionally associated with programming paradigms. Following this renovated perspective, concerns about language choice shift for all parties. Beyond particular syntax, grammar, execution model, or code organization, the main consequence of the predominance of one paradigm or another in the mind of the developer is the way solutions are designed.
2020-10-14 14:25:20.000000000
3,906
Smartphones and tablets have established themselves as mainstays in the modern computing landscape. It is conceivable that in the near future such devices may supplant laptops and desktops, becoming many users' primary means of carrying out typical computer-assisted tasks. In turn, this means that mobile applications will continue on a trajectory toward becoming more complex, and the primary focus of millions of developers worldwide. In order to properly create and maintain these apps, developers will need support, especially with regard to the prompt confirmation and resolution of bug reports. Unfortunately, current issue tracking systems typically only collect coarse-grained natural language descriptions and lack features that help reporters include important information in their reports. This illustrates the lexical information gap that exists in current bug reporting systems for mobile and GUI-based apps. This paper outlines promising preliminary work towards addressing this problem and proposes a comprehensive research program which aims to implement new bug reporting mechanisms and examine the impact that they might have on related software maintenance tasks.
2018-01-15 23:09:59.000000000
6,095
Software effort estimation requires high accuracy, but accurate estimations are difficult to achieve. Increasingly, data mining is used to improve an organization's software process quality, e.g., the accuracy of effort estimations. A large number of method combinations exist for software effort estimation, so selecting the most suitable combination is the subject of the research in this paper. In this study, three simple preprocessors are considered (none, norm, log), and effort is measured using the COCOMO model. The results obtained with the different preprocessors are then compared, and the norm preprocessor proves to be more accurate than the others.
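As a hedged illustration of the setup (the project sizes and the basic-COCOMO coefficients below are stand-ins, not the study's dataset), the three preprocessors can be applied to project size data before an effort model is fitted:

```python
import numpy as np

kloc = np.array([10.0, 50.0, 120.0, 400.0])  # made-up project sizes

# The three preprocessors compared in the study.
preprocessors = {
    "none": lambda x: x,
    "norm": lambda x: (x - x.min()) / (x.max() - x.min()),  # min-max scaling
    "log":  np.log,
}

def cocomo_effort(size_kloc):
    # Basic COCOMO, organic mode: effort = 2.4 * KLOC^1.05 person-months.
    return 2.4 * size_kloc ** 1.05

for name, pre in preprocessors.items():
    print(name, np.round(pre(kloc), 3))
print("effort (person-months):", np.round(cocomo_effort(kloc), 1))
```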
2013-10-18 12:21:54.000000000
7,711
Security patterns are a means to encapsulate and communicate proven security solutions. They are well-established approaches for introducing security into the software development process. Our objective is to explore the research efforts on security patterns and discuss the current state of the art. This study will serve as a guideline for researchers, practitioners, and teachers interested in this field. We have conducted a systematic mapping study of relevant literature from 1997 until the end of 2017 and identified 403 relevant papers, 274 of which were selected for analysis based on quality criteria. This study derives a customized research strategy from established systematic approaches in the literature. We have utilized an exhaustive 3-tier search strategy to ensure a high degree of completeness during the study collection and used a test set to evaluate our search. The first 3 research questions address the demographics of security pattern research such as topic classification, trends, and distribution between academia and industry, along with prominent researchers and venues. The next 9 research questions focus on more in-depth analyses such as pattern presentation notations and classification criteria, pattern evaluation techniques, and pattern usage environments. The results and discussions of this study have significant implications for researchers, practitioners, and teachers in software engineering and information security.
2018-11-29 12:41:38.000000000
9,218
Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand which Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review of 384 unique papers published until 2018, following a consolidated methodology applied in software engineering. We included 38 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and optimizing on different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We report an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary and there is no consensus on what the important factors are and how to measure them. Consequently, we cannot consider current research conclusive; in this paper, we outline different directions for necessary future investigations.
2019-04-27 03:16:07.000000000
15,543
Software systems often leverage open source software libraries to reuse functionality. Such libraries are readily available through software package managers like npm for JavaScript. Due to the huge number of packages available in such package distributions, developers often decide whether to rely on or contribute to a software package based on its popularity. Moreover, it is common practice for researchers to depend on popularity metrics for data sampling and for choosing the right candidates for their studies. However, the meaning of popularity is relative and can be defined and measured in a diversity of ways that might produce different outcomes even when considered for the same studies. In this paper, we show evidence of how variable the meaning of popularity is in software engineering research. Moreover, we empirically analyse the relationships between different software popularity measures. As a case study, for a large dataset of 175k npm packages, we computed and extracted 9 different popularity metrics from three open source tracking systems: libraries.io, npmjs.com, and GitHub. We found that popularity can indeed be measured with different, unrelated metrics, each defined within a specific context. This indicates the need for a generic framework that would use a portfolio of popularity metrics drawing from different concepts.
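The kind of relationship analysis the paper performs can be sketched as a rank correlation between two popularity metrics over the same packages; the numbers below are made up, and scipy's spearmanr is used for the statistic.

```python
from scipy.stats import spearmanr

# Two hypothetical popularity metrics for the same five packages.
github_stars     = [12000, 300, 4500, 80, 9000]
weekly_downloads = [2_000_000, 150_000, 90_000, 40_000, 1_200_000]

# Spearman rank correlation: do the two metrics rank packages similarly?
rho, p = spearmanr(github_stars, weekly_downloads)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```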
2019-01-10 20:04:33.000000000
2,480
This paper presents some of the results of the first year of DANSE, one of the first EU IP projects dedicated to systems of systems (SoS). Concretely, we offer a tool chain that allows SoS and SoS requirements to be specified at a high level and analysed using powerful toolsets from the formal verification area. At the high level, we use UPDM, the system model provided by the British Army, as well as a new type of contract based on behavioural patterns. At the low level, we rely on a powerful simulation toolset combined with recent advances in statistical model checking. The approach has been applied to a case study developed at EADS Innovation Works.
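The statistical-model-checking step can be pictured as Monte Carlo estimation of the probability that a simulated run satisfies a property; the simulator and the property below are invented stand-ins for DANSE's actual toolset.

```python
import random

def simulate_run():
    # Stand-in for one stochastic SoS simulation; returns an event trace.
    return ["ok" if random.random() < 0.95 else "fault" for _ in range(20)]

def satisfies(trace):
    # Toy safety property: no fault occurs anywhere in the run.
    return "fault" not in trace

N = 10_000
estimate = sum(satisfies(simulate_run()) for _ in range(N)) / N
print(f"P(property) ~ {estimate:.3f}")
```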
2013-11-14 19:40:41.000000000
9,319
Software testing is an expensive and important task. Plenty of research and industrial effort has been invested in improving software testing techniques, including criteria, tools, etc. These studies can provide guidelines for software engineers selecting suitable test techniques. However, in some engineering projects, business issues may be more important than technical ones, and hence we need to lobby non-technical members to support our decisions. In this paper, a well-known investment model, the Nelson-Siegel model, is introduced to evaluate and forecast the process of testing with different testing criteria. Through this model, we provide a new perspective for understanding the short-term, medium-term, and long-term returns of investments throughout the testing process. A preliminary experiment is conducted to investigate three testing criteria from the viewpoint of investments. The results show that the statement-coverage criterion performs best in gaining long-term yields; the short-term and medium-term yields of testing depend on the scale of the programs and the number of faults they contain.
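For reference, the Nelson-Siegel curve the paper borrows from finance is, in its standard three-factor form, y(tau) = beta0 + beta1*(1-e^(-tau/lambda))/(tau/lambda) + beta2*[(1-e^(-tau/lambda))/(tau/lambda) - e^(-tau/lambda)]. A direct transcription follows, with illustrative parameter values; how the paper maps tau and the betas onto testing effort is not spelled out in the abstract.

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    # Standard three-factor form: long-term level (beta0), short-term
    # slope (beta1), and medium-term curvature (beta2) components.
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

horizons = np.array([1.0, 5.0, 30.0])  # illustrative "testing time" points
print(nelson_siegel(horizons, 0.04, -0.02, 0.01, 2.0))
```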
2017-07-27 07:47:55.000000000
7,175
Starting from an informal requirements description of a toy production cell used in an intra-project competition in 1994, we give a formal specification that is as close as possible to the requirements. We use the deductive program synthesis approach of Manna and Waldinger (1980) to obtain verified TTL-like circuitry to control the cell. The formal specification also covers mechanical aspects and thus allows reasoning not only about software issues but also about issues of mechanical engineering. Besides an approach confined to first-order predicate logic with explicit, continuous time, we present an attempt to employ application-specific, user-defined logical operators to obtain a more concise specification as well as proof.
2014-04-03 10:44:42.000000000
2,664
Real-time systems are computing systems in which meeting their timing requirements is vital for correctness. Consequently, if the real-time requirements of these systems are poorly understood and verified, the results can be disastrous and lead to irremediable project failures in the early phases of development. The present work addresses the problem of detecting deadlock situations early, in the requirements specification phase of a concurrent real-time system, proposing a simple proof-of-concept prototype that joins scenario-based requirements specifications with techniques based on topology. The effort concentrates on integrating the formal representation of Message Sequence Chart scenarios into the deadlock detection algorithm of Fajstrup et al., which is based on geometric and algebraic topology.
2008-02-01 22:12:47.000000000
9,499
Software architecture is defined as the process of creating a well-structured solution that meets all of the technical and operational requirements while improving quality attributes of the system such as readability, reliability, maintainability, and performance. It involves a series of design decisions that can have a considerable impact on the system's quality attributes and on the overall success of the application. In this work, we start with an analysis and investigation of two open source software (OSS) platforms, DMARF and GIPSY, predominantly implemented in Java. Many research papers were studied in order to gain insight and a clear background on their architectures, enhancement, evolution, challenges, and features. Subsequently, we extract their needs, high-level requirements, and architectural structures, which lead to important design decisions and thus influence their quality attributes. Primarily, we reverse engineered each system's source code to reconstruct its domain model and class diagram model. We worked to make the traceability between requirements and other design artifacts consistent. Additionally, we conducted both manual and automated refactoring to remove some existing code smells, ending up with more readable and understandable code without affecting its observable behavior.
2014-12-16 05:13:29.000000000
7,851
Runtime enforcement is a dynamic analysis technique that instruments a monitor with a system in order to ensure its correctness as specified by some property. This paper explores bidirectional enforcement strategies for properties describing the input and output behaviour of a system. We develop an operational framework for bidirectional enforcement and use it to study the enforceability of the safety fragment of Hennessy-Milner logic with recursion (sHML). We provide an automated synthesis function that generates correct monitors from sHML formulas, and show that this logic is enforceable via a specific type of bidirectional enforcement monitors called action disabling monitors.
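To give a feel for action-disabling enforcement (a toy sketch only; it does not implement sHML or the paper's synthesis function), a monitor can sit between the environment and the system and drop any action that would violate a simple safety property:

```python
def make_disabling_monitor(system, forbidden_pairs):
    last = [None]                      # previously observed action
    def monitored(action):
        if (last[0], action) in forbidden_pairs:
            return None                # disable: action never reaches the system
        last[0] = action
        return system(action)
    return monitored

run = lambda a: f"executed {a}"
# Toy safety property: never perform 'close' twice in a row.
m = make_disabling_monitor(run, {("close", "close")})
print(m("open"), m("close"), m("close"))  # executed open / executed close / None
```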
2022-01-06 18:47:50.000000000
11,102