input: string, lengths 29 to 3.27k
created_at: string, length 29
scrum, the most prominent agile framework, is often criticized for the number of meetings it involves. however, these regular events are essential to the empirical inspect-and-adapt cycle proposed by agile methods. scrum meetings face several challenges, such as being perceived as boring, repetitive, or irrelevant, leading to decreased cooperation in teams and less successful projects. in an attempt to address these challenges, agile practitioners have adopted teamwork, innovation, and design techniques geared towards improving collaboration. additionally, they have developed their own activities to be used in scrum meetings, most notably for conducting retrospective and planning events. design thinking incorporates non-designers and designers in design and conceptualization activities, including user research, ideation, or testing. accordingly, the design thinking approach provides a process with different phases and accompanying techniques for each step. these design thinking techniques can support shared understanding in teams and can improve collaboration, creativity, and product understanding. for these reasons, design thinking techniques represent a worthwhile addition to the scrum meeting toolkit and can support agile meetings in preventing or countering common meeting challenges and achieving meeting goals. this chapter explores how techniques from the design thinking toolkit can support scrum meetings from a theoretical and practical viewpoint. we analyze scrum meetings' requirements, goals, and challenges and link them to groups of techniques from the design thinking toolkit. in addition, we review interview and observational data from two previous studies with software development practitioners and derive concrete examples. as a result, we present initial guidelines on integrating design thinking techniques into scrum meetings to make them more engaging, collaborative, and interactive.
2021-09-13 17:05:43.000000000
comprehensive quality-aware automated semantic web service composition is an np-hard problem, where service composition workflows are unknown, and comprehensive quality, i.e., quality of services (qos) and quality of semantic matchmaking (qosm), are simultaneously optimized. the objective of this problem is to find a solution with optimized or near-optimized overall qos and qosm within polynomial time over a service request. in this paper, we propose novel memetic eda-based approaches to tackle this problem. the proposed method investigates the effectiveness of several neighborhood structures of composite services by proposing domain-dependent local search operators. in addition, a joint strategy for the local search procedure is proposed and integrated with a modified eda to reduce the overall computation time of our memetic approach. to better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on wsc-08 \cite{bansal2008wsc} and wsc-09 \cite{kona2009wsc}. experimental results on this benchmark show that one of our proposed memetic eda-based approaches (i.e., meeda-lop) significantly outperforms existing state-of-the-art algorithms.
2019-06-17 15:24:28.000000000
a product line approach can save valuable resources by reusing artifacts. especially for software artifacts, the reuse of existing components is highly desirable. in recent literature, the creation of software product lines is mainly proposed from a top-down point of view regarding features which are visible to customers. in practice, however, the design for a product line often arises from one or few existing products that descend from a very first product starting with copy-paste and evolving individually. in this contribution, we propose the theoretical basis to derive a set of metrics for evaluating similar software products in an objective manner. these metrics are used to evaluate the ability of the set of products to form a product line.
2014-09-22 11:46:37.000000000
literature and intuition suggest that a developer's intelligence and personality have an impact on their performance in comprehending source code. researchers made this suggestion in the past when discussing threats to validity of their study results. however, the lack of studies investigating the relationship of intelligence and personality to performance in code comprehension makes scientifically sound reasoning about their influence difficult. we conduct the first empirical evaluation, a correlational study with undergraduates, to investigate the correlation of intelligence and personality with performance in code comprehension, that is, with correctness in answering comprehension questions on code snippets. we found that personality traits are unlikely to impact code comprehension performance, at least not considered in isolation. conscientiousness, in combination with other factors, however, explains some of the variance in code comprehension performance. for intelligence, significant small to moderate positive effects on code comprehension performance were found for three of four factors measured, i.e., fluid intelligence, visual perception, and cognitive speed. crystallized intelligence has a positive but statistically insignificant effect on code comprehension performance. according to our results, several intelligence facets as well as the personality trait conscientiousness are potential confounders that should not be neglected in code comprehension studies of individual performance and should be controlled for via an appropriate study design. we call for further studies on the relationship of intelligence and personality with code comprehension, in part because code comprehension involves more facets than we can measure in a single study and because our regression model explains only a small portion of the variance in code comprehension performance.
2021-09-27 09:04:19.000000000
as neural networks are increasingly included as core components of safety-critical systems, developing effective testing techniques specialized for them becomes crucial. the bulk of the research has focused on testing neural-network models, but these models are defined by writing programs, and there is growing evidence that these neural-network programs often have bugs too. this paper presents annotest: an approach to generating test inputs for neural-network programs. a fundamental challenge is that the dynamically-typed languages (e.g., python) commonly used to program neural networks cannot express detailed constraints about valid function inputs (e.g., matrices with certain dimensions). without knowing these constraints, automated test-case generation is prone to producing invalid inputs, which trigger spurious failures and are useless for identifying real bugs. to address this problem, we introduce a simple annotation language tailored for concisely expressing valid function inputs in neural-network programs. annotest takes as input an annotated program, and uses property-based testing to generate random inputs that satisfy the validity constraints. in the paper, we also outline guidelines that simplify writing annotest annotations. we evaluated annotest on 19 neural-network programs from islam et al.'s survey, which we manually annotated following our guidelines -- producing 6 annotations per tested function on average. annotest automatically generated test inputs that revealed 94 bugs, including 63 bugs that the survey reported for these projects. these results suggest that annotest can be a valuable approach to finding widespread bugs in real-world neural-network programs.
2021-12-07 17:07:53.000000000
recent years have seen a rise in the popularity of quality diversity (qd) optimization, a branch of optimization that seeks to find a collection of diverse, high-performing solutions to a given problem. to grow further, we believe the qd community faces two challenges: developing a framework to represent the field's growing array of algorithms, and implementing that framework in software that supports a range of researchers and practitioners. to address these challenges, we have developed pyribs, a library built on a highly modular conceptual qd framework. by replacing components in the conceptual framework, and hence in pyribs, users can compose algorithms from across the qd literature; equally important, they can identify unexplored algorithm variations. furthermore, pyribs makes this framework simple, flexible, and accessible, with a user-friendly api supported by extensive documentation and tutorials. this paper overviews the creation of pyribs, focusing on the conceptual framework that it implements and the design principles that have guided the library's development.
2023-02-28 10:08:57.000000000
concurrent software for engineering computations consists of multiple cooperating modules. the behavior of individual modules is described by means of state diagrams. in the paper, the constraints on state diagrams are proposed, allowing for the specification of the designer's intentions as to the synchronization of modules. also, the translation of state diagrams (with enforcement constraints) into concurrent state machines is shown, which provides a formal framework for the verification of inter-module synchronization. an example of engineering software design based on the method is presented.
2017-03-21 02:55:32.000000000
for the right application, the use of programming paradigms such as functional or logic programming can enormously increase productivity in software development. but these powerful paradigms are tied to exotic programming languages, while the management of software development dictates standardization on a single language. this dilemma can be resolved by using object-oriented programming in a new way. it is conventional to analyze an application by object-oriented modeling. in the new approach, the analysis identifies the paradigm that is ideal for the application; development starts with object-oriented modeling of the paradigm. in this paper we illustrate the new approach by giving examples of object-oriented modeling of dataflow and constraint programming. these examples suggest that it is no longer necessary to embody a programming paradigm in a language dedicated to it.
2005-10-04 08:53:10.000000000
business systems these days need to be agile to address the needs of a changing world. in particular, the discipline of enterprise application integration requires business process management to be highly reconfigurable with the ability to support dynamic workflows, inter-application integration and process reconfiguration. basing eai systems on a model-resident or so-called description-driven approach enables aspects of flexibility, distribution, system evolution and integration to be addressed in a domain-independent manner. such a system, called cristal, is described in this paper with particular emphasis on its application to eai problem domains. a practical example of the cristal technology in the domain of manufacturing systems, called agilium, is described to demonstrate the principles of model-driven system evolution and integration. the approach is compared to other model-driven development approaches such as the model-driven architecture of the omg and so-called adaptive object models.
2003-10-06 11:51:15.000000000
we propose a knowledge engine called sinoledge, mainly for doctors, physicians, and researchers in the medical field, to organize thoughts, manage reasoning processes, and test and deploy to production environments effortlessly. our proposal can be related to rule engines usually used in business or medical fields. more importantly, our proposal provides a user-friendly interface, an easy-to-maintain way of organizing knowledge, an understandable testing functionality, and a highly available and efficient back-end architecture.
2021-09-15 23:43:24.000000000
inverse transparency is created by making all usages of employee data visible to those employees. this requires tools that handle the logging and storage of usage information and make logged data visible to data owners. for research and teaching contexts that integrate inverse transparency, creating this required infrastructure can be challenging. the inverse transparency toolchain presents a flexible solution for such scenarios. it can be easily deployed and is tightly integrated. with it, we successfully handled use cases covering empirical studies with users, prototyping in university courses, and experimentation with our industry partner.
2023-08-07 18:50:57.000000000
developers' api needs should be more pragmatic, such as seeking suggestive, explainable, and extensible apis rather than the so-called best result. existing api search research cannot meet these pragmatic needs because it is solely concerned with query-api relevance. this necessitates a focus on enhancing the entire query process, from query definition, through query refinement via intent clarification, to query results that promote divergent thinking. this paper designs a novel knowledge-aware human-ai dialog agent (kahaid), which guides the developer to clarify the uncertain, under-specified query through multi-round question answering and recommends apis for the clarified query with relevance explanations and extended suggestions (e.g., alternative, collaborating, or opposite-function apis). we systematically evaluate kahaid. in terms of human-ai dialogue efficiency, it achieves a high diversity of question options and the ability to guide developers to find apis using fewer dialogue rounds. for api recommendation, kahaid achieves an mrr and map of 0.769 and 0.794, outperforming the state-of-the-art methods biker and clear by at least 47% in mrr and 226.7% in map. for knowledge extension, kahaid obtains an mrr and map of 0.815 and 0.864, surpassing zacq by at least 42% in mrr and 45.2% in map. furthermore, we conduct a user study. it shows that explainable api recommendations, as implemented by kahaid, can help developers identify the best api approach more easily or confidently, improving the inspiration provided by clarification question options by at least 20.83% and the extensibility of extended apis by at least 12.5%.
2023-04-26 11:18:39.000000000
large language models (llms) have proven to be effective at automated program repair (apr). however, using llms can be highly costly, with companies invoicing users by the number of tokens. in this paper, we propose cigar, the first llm-based apr tool that focuses on minimizing the repair cost. cigar works in two major steps: generating a plausible patch and multiplying plausible patches. cigar optimizes the prompts and the prompt setting to maximize the information given to llms in the smallest possible number of tokens. our experiments on 267 bugs from the widely used defects4j dataset show that cigar reduces the token cost by 62%. on average, cigar spends 171k tokens per bug while the baseline uses 451k tokens. on the subset of bugs that are fixed by both, cigar spends 20k tokens per bug while the baseline uses 695k tokens, a cost saving of 97%. our extensive experiments show that cigar is a cost-effective llm-based program repair tool that uses a low number of tokens to generate automatic patches.
2024-02-08 19:42:37.000000000
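as a quick check of the savings reported in the cigar abstract above (an illustrative calculation from the quoted per-bug averages, not a figure taken from the paper): $(451\mathrm{k} - 171\mathrm{k})/451\mathrm{k} \approx 0.62$ overall, and $(695\mathrm{k} - 20\mathrm{k})/695\mathrm{k} \approx 0.97$ on the subset of bugs fixed by both tools, consistent with the 62% and 97% cost reductions stated above.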
in this paper, we propose to develop a service model architecture by merging multi-agent systems and semantic web technology. the proposed architecture works in two stages, namely query identification and solution development. a person, referred to as the customer, will submit the problem details or requirements, which will be referred to as a query. anyone who can provide a service will need to register with the registrar module of the architecture. services can be anything ranging from expert consultancy in the field of agriculture to academic research, from selling products to manufacturing goods, from medical help to legal issues, or even providing logistics. the query submitted by the customer is first parsed and then iteratively understood with the help of domain experts and the customer to get a precise set of properties. the query thus identified is then solved, again with the help of intelligent agent systems, which search the semantic web for all those who can find or provide a solution. a workable solution workflow is created and then, depending on the requirements, using the techniques of negotiation or auctioning, the solution is implemented to complete the service for the customer. this part is termed solution development. in this service-oriented architecture, we first try to analyze the complex set of user requirements and then try to provide the best possible solution in an optimized way by combining better information searches through the semantic web and better workflow provisioning using multi-agent systems.
2012-08-29 14:44:39.000000000
in this paper we introduce the responsive graphical user interface (regui) approach to creating applications, and demonstrate how this approach can be implemented in matlab. the same general technique can be used in other programming languages.
2017-04-10 12:15:43.000000000
in this paper we describe a hepml format and a corresponding c++ library developed for keeping a complete description of parton-level events in a unified and flexible form. hepml tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. a hepml block can be included into event files in the lhef format. the structure of the hepml block is described by means of several xml schemas. the schemas define the necessary information for the hepml block and how this information should be located within the block. the library libhepml is a c++ library intended for parsing and serialization of hepml tags, and for representing the hepml block in computer memory. the library is an api for external software. for example, matrix element monte carlo event generators can use the library for preparing and writing a header of a lhef file in the form of hepml tags. in turn, showering and hadronization event generators can parse the hepml header and get the information in the form of c++ classes. libhepml can be used in c++, c, and fortran programs. all necessary parts of hepml have been prepared, and we present the project to the hep community.
2010-01-14 08:36:43.000000000
in this article we consider the role of policy and process in open source usage and propose in-workflow automation as the best path to promoting compliance.
2020-11-16 12:06:00.000000000
we formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. we propose an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to the existing comment to reflect the source code modifications. we train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment. we compare our approach against multiple baselines using both automatic metrics and human evaluation. results reflect the challenge of this task and show that our model outperforms baselines with respect to making edits.
2020-04-22 18:23:14.000000000
test doubles, such as mocks and stubs, are nifty fixtures in unit tests. they allow developers to test individual components in isolation from others that lie within or outside of the system. however, implementing test doubles within tests is not straightforward. with this demonstration, we introduce rick, a tool that observes executing applications in order to automatically generate tests with realistic mocks and stubs. rick monitors the invocation of target methods and their interactions with external components. based on the data collected from these observations, rick produces unit tests with mocks, stubs, and mock-based oracles. we highlight the capabilities of rick, and how it can be used with real-world java applications, to generate tests with mocks.
2023-02-08 13:04:52.000000000
context: stack overflow (so) has won the attention of software engineers (e.g., architects), who use it to learn, practice, and utilize development knowledge, such as architectural knowledge (ak). but little is known about the ak communicated in so, which is a type of high-level but important knowledge in development. objective: this study aims to investigate the ak in so posts in terms of its categories and characteristics as well as its usefulness from the point of view of so users. method: we conducted an exploratory study by qualitatively analyzing a statistically representative sample of 968 architecture related posts (arps) from so. results: the main findings are: (1) architecture related questions can be classified into 9 core categories, in which "architecture configuration" is the most common category, followed by the "architecture decision" category, and (2) architecture related questions that provide clear descriptions together with architectural diagrams increase their likelihood of getting more than one answer, while poorly structured architecture questions tend to only get one answer. conclusions: our findings suggest that future research can focus on enabling automated approaches and tools that could facilitate the search and (re)use of ak in so. so users can refer to our proposed guidelines to compose architecture related questions with a higher likelihood of getting more responses in so.
2023-01-02 01:03:47.000000000
in this paper, we investigate the naturalness of semantic-preserving transformations and their impacts on the evaluation of npr. to achieve this, we conduct a two-stage human study, including (1) interviews with senior software developers to establish the first concrete criteria for assessing the naturalness of code transformations and (2) a survey involving 10 developers to assess the naturalness of 1178 transformations, i.e., pairs of original and transformed programs, applied to 225 real-world bugs. our findings reveal that nearly 60% and 20% of these transformations are considered natural and unnatural, respectively, with substantially high agreement among human annotators. furthermore, the unnatural code transformations introduce a 25.2% false alarm rate on the robustness of five well-known npr systems. additionally, the performance of the npr systems drops notably when evaluated using natural transformations, i.e., a drop of up to 22.9% and 23.6% in terms of the numbers of correct and plausible patches generated by these systems. these results highlight the importance of robustness testing that considers the naturalness of code transformations, which unveils the true effectiveness of npr systems. finally, we conduct an exploration study on automating the assessment of the naturalness of code transformations by deriving a new naturalness metric based on cross-entropy. based on our naturalness metric, we can effectively assess the naturalness of code transformations automatically with an auc of 0.7.
2024-02-17 00:36:20.000000000
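to make the cross-entropy-based naturalness idea from the abstract above more concrete, here is a minimal, hypothetical sketch that scores a code transformation with an off-the-shelf causal language model through the hugging face transformers api; the model choice ("gpt2"), the function names, and the use of a simple cross-entropy gap are illustrative assumptions, not the authors' implementation:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# any causal (code) language model can stand in here; "gpt2" is only a placeholder
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def cross_entropy(code: str) -> float:
    # mean token-level cross-entropy of the snippet under the language model
    ids = tokenizer(code, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

def naturalness_gap(original: str, transformed: str) -> float:
    # a larger positive gap suggests the transformed program reads as less natural
    return cross_entropy(transformed) - cross_entropy(original)

in practice, such a score (or a threshold on it) would be compared against human naturalness labels to obtain an auc like the 0.7 reported above.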
progressive digitalization is changing the game in many industrial sectors. focusing on product quality, the main profitability driver of this so-called industry 4.0 will be the horizontal integration of information over the complete supply chain. therefore, the european rfcs project 'quality4.0' aims at developing an adaptive platform, which releases decisions on product quality and provides tailored information of high reliability that can be individually exchanged with customers. in this context, machine learning will be used to detect outliers in the quality data. this paper discusses the intermediate project results and the concepts developed so far for this horizontal integration of quality information.
2020-11-10 10:24:43.000000000
assuring traceability from requirements to implementation is a key element when developing safety-critical software systems. traditionally, this traceability is ensured by a waterfall-like process, where phases follow each other and tracing between different phases can be managed. however, new software development paradigms, such as continuous software engineering and devops, which encourage a steady stream of new features, committed by developers in a seemingly uncontrolled fashion in terms of former phasing, challenge this view. in this paper, we introduce our approach that adds traceability capabilities to github, so that developers can act like they normally do in the github context but produce the documentation needed for regulatory purposes in the process.
2021-10-25 01:55:24.000000000
automated landing for unmanned aerial vehicles (uavs), like multirotor drones, requires intricate software encompassing control algorithms, obstacle avoidance, and machine vision, especially when landing markers assist. failed landings can lead to significant costs from damaged drones or payloads and the time spent seeking alternative landing solutions. therefore, it is important to fully test auto-landing systems through simulations before deploying them in the real world to ensure safety. this paper proposes rlaga, a reinforcement learning (rl) augmented search-based testing framework, which constructs diverse and real marker-based landing cases that involve safety violations. specifically, rlaga introduces a genetic algorithm (ga) to conservatively search for diverse static environment configurations offline and rl to aggressively manipulate dynamic objects' trajectories online to find potential vulnerabilities in the target deployment environment. quantitative results reveal that our method generates up to 22.19% more violation cases and nearly doubles the diversity of generated violation cases compared to baseline methods. qualitatively, our method can discover corner cases that would be missed by state-of-the-art algorithms. we demonstrate that select types of these corner cases can be confirmed via real-world testing with drones in the field.
2023-10-10 16:32:21.000000000
machine learning for source code (ml4code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. with this in mind, we introduce jemma, an extensible java dataset for ml4code applications, which is a large-scale, diverse, and high-quality dataset targeted at ml4code. our goal with jemma is to lower the barrier to entry in ml4code by providing the building blocks to experiment with source code models and tasks. jemma comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, asts, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 java projects from the 50kc dataset, with over 1.2 million classes and over 8 million methods. jemma is also extensible, allowing users to add new properties and representations to the dataset, and evaluate tasks on them. thus, jemma becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. to demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that jemma is designed to help with.
2022-12-15 19:49:34.000000000
to effectively test parts of internet of things (iot) systems with a state machine character, a model-based testing (mbt) approach can be taken. in mbt, a system model is created and test cases are generated automatically from the model, and a number of strategies currently exist for this. in this paper, we propose a novel alternative strategy that concurrently allows us to flexibly adjust the preferred length of the generated test cases, as well as to mark the states in which a test case can start and end. compared with an intuitive n-switch coverage-based strategy that aims at the same goals, our proposal generates a lower number of shorter test cases with fewer test step duplications.
2020-05-19 09:19:10.000000000
code review is an important process for quality assurance in software development. for an effective code review, the reviewers must explain their feedback to enable the authors of the code change to act on it. however, the explanation needs may differ among developers, who may require different types of explanations. it is therefore crucial to understand what kinds of explanations reviewers usually use in code reviews. to the best of our knowledge, no study published to date has analyzed the types of explanations used in code review. in this study, we present the first analysis of explanations in useful code reviews. we extracted a set of code reviews based on their usefulness and labeled them based on whether they contained an explanation, a solution, or both a proposed solution and an explanation thereof. based on our analysis, we found that a significant portion of the code review comments (46%) only include solutions without providing an explanation. we further investigated the remaining 54% of code review comments containing an explanation and conducted an open card sorting to categorize the reviewers' explanations. we distilled seven distinct categories of explanations based on the expression forms developers used. then, we utilized large language models, specifically chatgpt, to assist developers in getting a code review explanation that suits their preferences. specifically, we created prompts to transform a code review explanation into a specific type of explanation. our evaluation results show that chatgpt correctly generated the specified type of explanation in 88/90 cases and that 89/90 of the cases have the correct explanation. overall, our study provides insights into the types of explanations that developers use in code review and showcases how chatgpt can be leveraged during the code review process to generate a specific type of explanation.
2023-11-14 19:24:31.000000000
daany is a .net, cross-platform data analytics and linear algebra library written in c#, intended to be a tool for data preparation, feature engineering, and other kinds of data transformations. the library is implemented on top of .net standard 2.1, supports .net core 3.0 and above, and is separated into several visual studio projects that can be installed as nuget packages. the library implements a dataframe as the core component, with extensions for a set of data science and linear algebra features. the library contains several implementations of time series decomposition (ssa, stl, arima), optimization methods (sgd), as well as plotting support. the library also implements a set of features based on matrices, vectors, and similar linear algebra operations. the main part of the library is daany.dataframe, with an implementation similar to that found in the python-based pandas library. the paper presents the main functionalities and the implementation behind the daany packages in the form of a developer guide and can be used as a manual for using daany in everyday work. finally, the paper lists papers that have used the library.
2021-07-06 12:55:58.000000000
fault identification and testing have always been central concerns in the field of software development. to identify and test for a bug, we should be aware of the source of the failure or any unwanted issue. in this paper, we try to extract the location of a failure and to cope with the bug. using a directed graph, we obtain the dependencies of multiple activities in a live environment to trace the origin of a fault. software development comprises a series of activities, and we show the dependencies of these activities on each other. critical activities are considered as they cause abnormal functioning of the whole system. the paper discusses the priorities of activities and the dependency of software failure on the critical activities. a matrix representation of activities as parts of the software is chosen to determine the root of the failure using the concept of dependency. it can vary with the topography of the network and the software environment. when faults occur, the possible symptoms will be reflected in the dependency matrix, with high probability at the fault itself. thus, independent faults are located on the main diagonal of the dependency matrix.
2014-05-03 05:06:29.000000000
the introduction of large language models has significantly advanced code generation. however, open-source models often lack the execution capabilities and iterative refinement of advanced systems like the gpt-4 code interpreter. to address this, we introduce opencodeinterpreter, a family of open-source code systems designed for generating, executing, and iteratively refining code. supported by code-feedback, a dataset featuring 68k multi-turn interactions, opencodeinterpreter integrates execution and human feedback for dynamic code refinement. our comprehensive evaluation of opencodeinterpreter across key benchmarks such as humaneval, mbpp, and their enhanced versions from evalplus reveals its exceptional performance. notably, opencodeinterpreter-33b achieves an accuracy of 83.2 (76.4) on the average (and plus versions) of humaneval and mbpp, closely rivaling gpt-4's 84.2 (76.2), and further elevates to 91.6 (84.6) with synthesized human feedback from gpt-4. opencodeinterpreter bridges the gap between open-source code generation models and proprietary systems like the gpt-4 code interpreter.
2024-02-22 06:34:50.000000000
github is a popular repository for hosting software projects, both due to ease of use and the seamless integration with its testing environment. native github actions make it easy for software developers to validate new commits and have confidence that new code does not introduce major bugs. the freely available test environments are limited to only a few popular setups but can be extended with custom action runners. our team had access to a kubernetes cluster with gpu accelerators, so we explored the feasibility of automatically deploying gpu-providing runners there. all available kubernetes-based setups, however, require cluster-admin level privileges. to address this problem, we developed a simple custom setup that operates in a completely unprivileged manner. in this paper we provide a summary description of the setup and our experience using it in the context of two knight lab projects on the prototype national research platform system.
2023-05-16 19:56:45.000000000
deep learning (dl) techniques have been used to support several code-related tasks such as code summarization and bug-fixing. in particular, pre-trained transformer models are on the rise, also thanks to the excellent results they achieved in natural language processing (nlp) tasks. the basic idea behind these models is to first pre-train them on a generic dataset using a self-supervised task (e.g., filling masked words in sentences). then, these models are fine-tuned to support specific tasks of interest (e.g., language translation). a single model can be fine-tuned to support multiple tasks, possibly exploiting the benefits of transfer learning. this means that knowledge acquired to solve a specific task (e.g., language translation) can be useful to boost performance on another task (e.g., sentiment classification). while the benefits of transfer learning have been widely studied in nlp, limited empirical evidence is available when it comes to code-related tasks. in this paper, we assess the performance of the text-to-text transfer transformer (t5) model in supporting four different code-related tasks: (i) automatic bug-fixing, (ii) injection of code mutants, (iii) generation of assert statements, and (iv) code summarization. we pay particular attention to studying the role played by pre-training and multi-task fine-tuning on the model's performance. we show that (i) the t5 can achieve better performance as compared to state-of-the-art baselines; and (ii) while pre-training helps the model, not all tasks benefit from multi-task fine-tuning.
2022-06-15 15:08:29.000000000
following the onset of the covid-19 pandemic and subsequent lockdowns, the daily lives of software engineers were heavily disrupted as they were abruptly forced to work remotely from home. to better understand and contrast typical working days in this new reality with work in pre-pandemic times, we conducted one exploratory (n = 192) and one confirmatory study (n = 290) with software engineers recruited remotely. specifically, we build on self-determination theory to evaluate whether and how specific activities are associated with software engineers' satisfaction and productivity. to explore the subject domain, we first ran a two-wave longitudinal study. we found that the time software engineers spent on specific activities (e.g., coding, bugfixing, helping others) while working from home was similar to pre-pandemic times. also, the amount of time developers spent on each activity was unrelated to their general well-being, perceived productivity, and other variables such as basic needs. our confirmatory study found that activity satisfaction and productivity are predicted by activity-specific variables (e.g., how much autonomy software engineers had during coding) but not by activity-independent variables such as general resilience or a good work-life balance. interestingly, we found that satisfaction and autonomy were significantly higher when software engineers were helping others and lower when they were bugfixing. finally, we discuss implications for software engineers, management, and researchers. in particular, active company policies to support developers' need for autonomy, relatedness, and competence appear particularly effective in a wfh context.
2021-07-15 11:14:03.000000000
generating a readable summary that describes the functionality of a program is known as source code summarization. in this task, learning code representation by modeling the pairwise relationship between code tokens to capture their long-range dependencies is crucial. to learn code representation for summarization, we explore the transformer model, which uses a self-attention mechanism and has been shown to be effective in capturing long-range dependencies. in this work, we show that, although the approach is simple, it outperforms the state-of-the-art techniques by a significant margin. we perform extensive analysis and ablation studies that reveal several important findings, e.g., that the absolute encoding of source code tokens' positions hinders summarization performance, while relative encoding significantly improves it. we have made our code publicly available to facilitate future research.
2020-04-29 13:33:18.000000000
feature models are a mechanism to organize the configuration space and facilitate the construction of software variants by describing configuration options using features, i.e., names representing functionalities. the development of feature models is an error-prone activity, and detecting their anomalies is a challenging and important task needed to promote their usage. recently, feature models have been extended with context to capture the correlation of configuration options with contextual influences and user customizations. unfortunately, this extension makes the task of detecting anomalies harder. in this paper, we formalize the anomaly analysis in context-aware feature models and we show how quantified boolean formula (qbf) solvers can be used to detect anomalies without relying on iterative calls to a sat solver. by extending the reconfigurator engine hyvarrec, we present findings evidencing that qbf solvers can outperform the common techniques for anomaly analysis.
2020-07-25 22:39:03.000000000
code generation aims to automatically generate source code from high-level task specifications, which can significantly increase the productivity of software engineering. recently, approaches based on large language models (llms) have shown remarkable code generation abilities on simple tasks. however, generating code for more complex tasks, such as competition-level problems, remains challenging. in this paper, we introduce the brainstorm framework for code generation. it leverages a brainstorming step that generates and selects diverse thoughts on the problem to facilitate algorithmic reasoning, where the thoughts are possible blueprints for solving the problem. we demonstrate that brainstorm significantly enhances the ability of llms to solve competition-level programming problems, resulting in a more than 50% increase in the pass@$k$ metrics for chatgpt on the codecontests benchmark, achieving state-of-the-art performance. furthermore, our experiments conducted on leetcode contests show that our framework boosts the ability of chatgpt to a level comparable to that of human programmers.
2023-05-16 21:57:16.000000000
software developers are faced with the issue of either adapting their programming model to the execution model (e.g. cloud platforms) or finding appropriate tools to adapt the model and code automatically. a recent execution model which would benefit from automated enablement is function-as-a-service. automating this process requires a pipeline which includes steps for code analysis, transformation and deployment. in this paper, we outline the design and runtime characteristics of podilizer, a tool which implements the pipeline specifically for java source code as input and aws lambda as output. we contribute technical and economic metrics about this concrete 'faasification' process by observing the behaviour of podilizer with two representative java software projects.
2017-02-13 18:15:19.000000000
an established trend in software engineering insists on using components (sometimes also called services or packages) to encapsulate a set of related functionalities or data. by defining interfaces specifying what functionalities they provide or use, components can be combined with others to form more complex components. in this way, it systems can be designed by mostly re-using existing components and developing new ones to provide new functionalities. in this paper, we introduce a notion of component and a combination mechanism for an important class of software artifacts, called security-sensitive workflows. these are business processes in which execution constraints on the tasks are complemented with authorization constraints (e.g., separation of duty) and authorization policies (constraining which users can execute which tasks). we show how well-known workflow execution patterns can be simulated by our combination mechanism and how authorization constraints can also be imposed across components. then, we demonstrate the usefulness of our notion of component by showing (i) the scalability of a technique for the synthesis of run-time monitors for security-sensitive workflows and (ii) the design of a plug-in for the re-use of workflows and related run-time monitors inside an editor for security-sensitive workflows.
2015-07-24 18:20:05.000000000
docker, a widely adopted tool for packaging and deploying applications, leverages dockerfiles to build images. however, creating an optimal dockerfile can be challenging, often leading to "docker smells" or deviations from best practices. this paper presents a study of the impact of 14 docker smells on the size of docker images. to assess the size impact of docker smells, we identified and repaired 16,145 docker smells from 11,313 open-source dockerfiles. we observe that the smells result in an average increase of 48.06 mb (4.6%) per smelly image. depending on the smell type, the size increase can be up to 10%, and for some specific cases, the smells can represent 89% of the image size. interestingly, the most impactful smells are related to package managers, which are commonly encountered and are relatively easy to fix. to collect the perspective of the developers regarding the size impact of the docker smells, we submitted 34 pull requests that repair the smells and reported their impact on the docker image to the developers. 26/34 (76.5%) of the pull requests have been merged, and they contribute to a saving of 3.46 gb (16.4%). the developers' comments demonstrate a positive interest in addressing those docker smells even when the pull requests have been rejected.
2023-12-20 15:23:00.000000000
large language models are powerful tools for program synthesis and advanced auto-completion, but come with no guarantee that their output code is syntactically correct. this paper contributes an incremental parser that allows early rejection of syntactically incorrect code, as well as efficient detection of complete programs for fill-in-the-middle (fitm) tasks. we develop earley-style parsers that operate over left and right quotients of arbitrary context-free grammars, and we extend our incremental parsing and quotient operations to several context-sensitive features present in the grammars of many common programming languages. the result of these contributions is an efficient, general, and well-grounded method for left and right quotient parsing. to validate our theoretical contributions -- and the practical effectiveness of certain design decisions -- we evaluate our method on the particularly difficult case of fitm completion for python 3. our results demonstrate that constrained generation can significantly reduce the incidence of syntax errors in recommended code.
2024-02-27 06:24:01.000000000
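the incremental-parser abstract above targets two behaviors during llm decoding: early rejection of syntactically invalid prefixes and detection of complete programs. as a much simpler stand-in (not the paper's earley/quotient-based parser), python's standard-library codeop module illustrates the distinction between complete, incomplete, and invalid input:

import codeop

def classify(source: str) -> str:
    # compile_command returns a code object for complete input,
    # None for syntactically valid but incomplete input,
    # and raises SyntaxError for invalid input
    try:
        code = codeop.compile_command(source)
    except SyntaxError:
        return "invalid"
    return "complete" if code is not None else "incomplete"

print(classify("x = 1"))               # complete
print(classify("for i in range(3):"))  # incomplete
print(classify("def f(:"))             # invalid

a constrained generator could use a check of this kind to discard candidate continuations that can no longer form valid code, which is the effect the paper's incremental quotient parsing achieves far more generally and efficiently.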
based on developer needs and usage scenarios, api (application programming interface) recommendation is the process of assisting developers in finding the required api among numerous candidate apis. previous studies mainly modeled api recommendation as a recommendation task, which can recommend multiple candidate apis for the given query, and developers may not yet be able to find what they need. motivated by the neural machine translation research domain, we can model this problem as a generation task, which aims to directly generate the required api for the developer query. after our preliminary investigation, we find that the performance of this intuitive approach is not promising. the reason is that errors arise when generating the prefixes of the api. however, developers may know certain api prefix information during actual development in most cases. therefore, we model this problem as an automatic completion task and propose a novel approach apicom based on prompt learning, which can generate apis related to the query according to the prompts (i.e., api prefix information). moreover, the effectiveness of apicom highly depends on the quality of the training dataset. in this study, we further design a novel gradient-based adversarial training method {\atpart} for data augmentation, which can improve the normalized stability when generating adversarial examples. to evaluate the effectiveness of apicom, we consider a corpus of 33k developer queries and corresponding apis. compared with the state-of-the-art baselines, our experimental results show that apicom can outperform all baselines by at least 40.02%, 13.20%, and 16.31% in terms of the performance measures em[USER], mrr, and map. finally, our ablation studies confirm the effectiveness of our component setting (such as our designed adversarial training method, our used pre-trained model, and prompt learning) in apicom.
2023-09-12 12:08:40.000000000
modern code review (mcr) is a widely known practice of software quality assurance. however, the existing body of knowledge of mcr is currently not understood as a whole. objective: our goal is to identify the state of the art on mcr, providing a structured overview and an in-depth analysis of the research done in this field. method: we performed a systematic literature review, selecting publications from four digital libraries. results: a total of 139 papers were selected and analyzed in three main categories. foundational studies are those that analyze existing or collected data from the adoption of mcr. proposals consist of techniques and tools to support mcr, while evaluations are studies to assess an approach or compare a set of them. conclusion: the most represented category is foundational studies, mainly aiming to understand the motivations for adopting mcr, its challenges and benefits, and which influence factors lead to which mcr outcomes. the most common types of proposals are code reviewer recommenders and support for code checking. evaluations of mcr-supporting approaches have been done mostly offline, without involving human subjects. five main research gaps have been identified, which point out directions for future work in the area.
2021-03-15 18:34:46.000000000
since its launch in november 2022, chatgpt has gained popularity among users, especially programmers who use it as a tool to solve development problems. however, while offering a practical solution to programming problems, chatgpt should be mainly used as a supporting tool (e.g., in software education) rather than as a replacement for the human being. thus, detecting source code automatically generated by chatgpt is necessary, and tools for identifying ai-generated content may need to be adapted to work effectively with source code. this paper presents an empirical study to investigate the feasibility of automated identification of ai-generated code snippets, and the factors that influence this ability. to this end, we propose a novel approach called gptsniffer, which builds on top of codebert to detect source code written by ai. the results show that gptsniffer can accurately classify whether code is human-written or ai-generated, and outperforms two baselines, gptzero and the openai text classifier. also, the study shows how similar training data or a classification context with paired snippets helps to boost classification performance.
2023-07-18 03:28:39.000000000
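as an illustration of the kind of codebert-based classifier the gptsniffer abstract above describes (a generic hugging face fine-tuning skeleton under assumed names and label assignments, not the actual gptsniffer pipeline):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# microsoft/codebert-base is the public codebert checkpoint; the two labels
# (0 = human-written, 1 = ai-generated) are an assumed convention
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2
)

def predict(snippet: str) -> int:
    inputs = tokenizer(snippet, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()  # 0 = human-written, 1 = ai-generated

note that the classification head is randomly initialized here, so the model would first have to be fine-tuned on labeled human/ai code pairs before predict gives meaningful output.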
ai-augmented business process management systems (abpmss) are an emerging class of process-aware information systems, empowered by trustworthy ai technology. an abpms enhances the execution of business processes with the aim of making these processes more adaptable, proactive, explainable, and context-sensitive. this manifesto presents a vision for abpmss and discusses research challenges that need to be surmounted to realize this vision. to this end, we define the concept of abpms, we outline the lifecycle of processes within an abpms, we discuss core characteristics of an abpms, and we derive a set of challenges to realize systems with these characteristics.
2022-01-28 12:45:56.000000000
studies of aspect-oriented programming (aop) usually focus on a language in which a specific aspect extension is integrated with a base language. languages specified in this manner have a fixed, non-extensible aop functionality. in this paper we consider the more general case of integrating a base language with a set of domain specific third-party aspect extensions for that language. we present a general mixin-based method for implementing aspect extensions in such a way that multiple, independently developed, dynamic aspect extensions can be subject to third-party composition and work collaboratively.
2005-02-08 01:15:47.000000000
cargo, the software packaging manager of rust, provides a yank mechanism to support release-level deprecation, which can prevent packages from depending on yanked releases. most prior studies focused on code-level (i.e., deprecated apis) and package-level deprecation (i.e., deprecated packages). however, few studies have focused on release-level deprecation. in this study, we investigate how often and how the yank mechanism is used, the rationales behind its usage, and the adoption of yanked releases in the cargo ecosystem. our study shows that 9.6% of the packages in cargo have at least one yanked release, and the proportion of yanked releases kept increasing from 2014 to 2020. package owners yank releases for reasons other than withdrawing a defective release, such as fixing a release that does not follow semantic versioning or indicating that a package has been removed or replaced. in addition, we found that 46% of the packages directly adopted at least one yanked release and that the yanked releases propagated through the dependency network, which leads to 1.4% of the releases in the ecosystem having unresolved dependencies.
2022-01-27 05:16:08.000000000
a risk in adopting third-party dependencies into an application is their potential to serve as a doorway for malicious code to be injected (most often unknowingly). while many initiatives from both industry and research communities focus on the most critical dependencies (i.e., those most depended upon within the ecosystem), little is known about whether the rest of the ecosystem suffers the same fate. our vision is to promote and establish safer practices throughout the ecosystem. to motivate our vision, in this paper, we present preliminary data based on three representative samples from a population of 88,416 pull requests (prs) and identify unsafe dependency updates (i.e., any pull request that risks being unsafe during runtime), which clearly shows that unsafe dependency updates are not limited to highly impactful libraries. to draw attention to the long tail, we propose a research agenda comprising six key research questions that further explore how to safeguard against these unsafe activities. this includes developing best practices to address unsafe dependency updates not only in top-tier libraries but throughout the entire ecosystem.
2023-09-07 08:53:16.000000000
as software systems evolve, their architecture is meant to adapt accordingly by following the changes in requirements, the environment, and the implementation. however, in practice, the evolving system often deviates from the architecture, causing severe consequences to system maintenance and evolution. this phenomenon of architecture erosion has been studied extensively in research, but has not yet been examined from the point of view of developers. in this exploratory study, we look into how developers perceive the notion of architecture erosion, its causes and consequences, as well as tools and practices to identify and control architecture erosion. to this end, we searched through several popular online developer communities to collect data from discussions related to architecture erosion. in addition, we identified developers involved in these discussions, conducted a survey with 10 participants, and held interviews with 4 participants. our findings show that: (1) developers either focus on the structural manifestation of architecture erosion or on its effect on run-time qualities, maintenance and evolution; (2) alongside technical factors, architecture erosion is caused to a large extent by non-technical factors; (3) despite the lack of dedicated tools for detecting architecture erosion, developers usually identify erosion through a number of symptoms; and (4) there are effective measures that can help to alleviate the impact of architecture erosion.
2021-03-21 02:07:17.000000000
supporting learners in introductory programming assignments at scale is a necessity. this support includes automated feedback on what learners did incorrectly. existing approaches cast the problem as automatically repairing learners' incorrect programs by extrapolating data from existing correct programs from other learners. however, such approaches are limited because they only compare programs with similar control flow and order of statements. a potentially valuable set of repair feedback from flexible comparisons is thus missing. in this paper, we present several modifications to clara, a data-driven automated repair approach that is open source, to deal with real-world introductory programs. we extend clara's abstract syntax tree processor to handle common introductory programming constructs. additionally, we propose a flexible alignment algorithm over control flow graphs, where we enrich nodes with semantic annotations extracted from programs using operations and calls. using this alignment, we modify an incorrect program's control flow graph to match the correct programs and then apply clara's original repair process. we evaluate our approach against a baseline on the twenty most popular programming problems in codeforces. our results indicate that flexible alignment has a significantly higher percentage of successful repairs, at 46% compared to 5% for baseline clara. our implementation is available at [LINK].
2024-01-02 03:39:19.000000000
to facilitate evaluation of code generation systems across diverse scenarios, we present codebenchgen, a framework to create scalable execution-based benchmarks that only requires light guidance from humans. specifically, we leverage a large language model (llm) to convert an arbitrary piece of code into an evaluation example, including test cases for execution-based evaluation. we illustrate the usefulness of our framework by creating a dataset, exec-csn, which includes 1,931 examples involving 293 libraries revised from code in 367 github repositories taken from the codesearchnet dataset. to demonstrate the complexity and solvability of examples in exec-csn, we present a human study demonstrating that 81.3% of the examples can be solved by humans and 61% are rated as ``requires effort to solve''. we conduct code generation experiments on open-source and proprietary models and analyze the performance of both humans and models. we will release the code of both the framework and the dataset upon acceptance.
2024-03-28 10:19:18.000000000
a container orchestrator (co) is a vital technology for managing clusters of containers, which may form a virtualized infrastructure for developing and operating software systems. like any other software system, securing a co is critical, but it can be quite a challenging task due to the large number of configurable options. manual configuration is not only knowledge intensive and time consuming, but also error prone. to automate the security configuration of cos, we propose a novel knowledge graph-based security configuration approach, kgsecconfig. our solution leverages keyword-based and learning-based models to systematically capture, link, and correlate the heterogeneous, multi-vendor configuration space in a unified structure that supports automation of co security configuration. we implement kgsecconfig on kubernetes, docker, azure, and vmware to build a secured configuration knowledge graph. our evaluation results show 0.98 and 0.94 accuracy for keyword-based and learning-based extraction of secured configuration options and concepts, respectively. we also demonstrate the use of the knowledge graph for automated misconfiguration mitigation in a kubernetes cluster. we assert that our knowledge graph-based approach can help address several challenges, e.g., security misconfiguration, associated with manually configuring the security of cos.
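the abstract does not detail the knowledge graph structure; as a hedged sketch, the snippet below stores configuration knowledge as subject-predicate-object triples and uses it to flag an insecure kubernetes option and look up its recommended value. the option names and recommended values are illustrative assumptions, not output of kgsecconfig.

```python
# illustrative sketch: a tiny configuration knowledge graph as triples,
# used to detect and mitigate a misconfiguration. option names and the
# recommended values below are assumptions for illustration only.

TRIPLES = [
    ("kubelet.anonymous-auth", "belongs_to", "kubernetes"),
    ("kubelet.anonymous-auth", "secure_value", "false"),
    ("kubelet.anonymous-auth", "mitigates", "unauthenticated-access"),
    ("apiserver.authorization-mode", "belongs_to", "kubernetes"),
    ("apiserver.authorization-mode", "secure_value", "RBAC"),
]

def secure_value(option: str):
    """look up the recommended secure value of a configuration option."""
    for s, p, o in TRIPLES:
        if s == option and p == "secure_value":
            return o
    return None

def audit(config: dict) -> list[str]:
    """return mitigation suggestions for options deviating from the graph."""
    suggestions = []
    for option, value in config.items():
        expected = secure_value(option)
        if expected is not None and str(value).lower() != expected.lower():
            suggestions.append(f"set {option} to {expected} (found {value})")
    return suggestions

if __name__ == "__main__":
    cluster_config = {"kubelet.anonymous-auth": "true",
                      "apiserver.authorization-mode": "AlwaysAllow"}
    print(audit(cluster_config))
```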
2021-12-22 00:32:18.000000000
deep learning (dl) techniques are on the rise in the software engineering research community. more and more approaches have been developed on top of dl models, also due to the unprecedented amount of software-related data that can be used to train these models. one of the recent applications of dl in the software engineering domain concerns the automatic detection of software vulnerabilities. while several dl models have been developed to approach this problem, there is still limited empirical evidence concerning their actual effectiveness, especially when compared with shallow machine learning techniques. in this paper, we partially fill this gap by presenting a large-scale empirical study using three vulnerability datasets and five different source code representations (i.e., the format in which the code is provided to the classifiers to assess whether it is vulnerable or not) to compare the effectiveness of two widely used dl-based models and of one shallow machine learning model in (i) classifying code functions as vulnerable or non-vulnerable (i.e., binary classification), and (ii) classifying code functions based on the specific type of vulnerability they contain (or "clean" if no vulnerability is present). as a baseline, we include in our study the automl utility provided by the google cloud platform. our results show that the experimented models are still far from ensuring reliable vulnerability detection, and that a shallow learning classifier represents a competitive baseline for the newest dl-based models.
2021-03-22 09:44:33.000000000
recent research explores optimization using large language models (llms) by either iteratively seeking next-step solutions from llms or directly prompting llms to act as optimizers. however, these approaches exhibit inherent limitations, including low operational efficiency, high sensitivity to prompt design, and a lack of domain-specific knowledge. we introduce llamoco, the first instruction-tuning framework designed to adapt llms to solve optimization problems in a code-to-code manner. specifically, we establish a comprehensive instruction set containing well-described problem prompts and effective optimization codes. we then develop a novel two-phase learning strategy that incorporates a contrastive learning-based warm-up procedure before the instruction-tuning phase to enhance convergence behavior during model fine-tuning. the experimental results demonstrate that a codegen (350m) model fine-tuned by our llamoco achieves superior optimization performance compared to gpt-4 turbo and other competitors across both synthetic and realistic problem sets. the fine-tuned model and the usage instructions are available at [LINK].
2024-03-01 11:07:41.000000000
microservice architecture has transformed the way developers build and deploy applications in today's cloud computing centers. this new approach provides increased scalability, flexibility, manageability, and performance while reducing the complexity of the whole software development life cycle. the increase in cloud resource utilization also benefits microservice providers. various microservice platforms have emerged to facilitate the devops of containerized services by enabling continuous integration and delivery. microservice platforms deploy application containers on virtual or physical machines provided by public/private cloud infrastructures in a seamless manner. in this paper, we study and evaluate the provisioning performance of microservice platforms by incorporating the details of all layers (i.e., both micro and macro layers) in the modelling process. to this end, we first build a microservice platform on top of the amazon ec2 cloud and then leverage it to develop a comprehensive performance model to perform what-if analysis and capacity planning for microservice platforms at scale. in other words, the proposed performance model provides a systematic approach to measure the elasticity of the microservice platform by analyzing the provisioning performance at both the microservice platform and the back-end macroservice infrastructures.
2019-02-07 13:13:13.000000000
companies that collaborate within product development processes need to implement effective management of their collaborative activities. despite the implementation of a plm system, collaborative activities are not as efficient as might be expected. this paper presents an analysis of the problems related to collaborative work using a plm system. from this analysis, we propose an approach for improving collaborative processes within a plm system, based on monitoring indicators. this approach leads to identifying, and therefore mitigating, the obstacles to collaborative work.
2008-03-05 14:04:19.000000000
internet of things (iot) systems allow software to directly interact with the physical world. recent iot failures can be attributed to recurring software design flaws, suggesting iot software engineers may not be learning from past failures. we examine the use of failure stories to improve iot system designs. we conducted an experiment to evaluate the influence of failure-related learning treatments on design decisions. our experiment used a between-subjects comparison of novices (computer engineering students) completing a design questionnaire. there were three treatments: a control group (n=7); a group considering a set of design guidelines (n=8); and a group considering failure stories (proposed treatment, n=6). we measured their design decisions and their design rationales. all subjects made comparable decisions. their rationales varied by treatment: subjects treated with guidelines and failure stories made greater use of criticality as a rationale, while subjects exposed to failure stories more frequently used safety as a rationale. building on these findings, we suggest several research directions toward a failure-aware iot engineering process.
2022-06-26 10:51:25.000000000
csdms, the community surface dynamics modeling system, is an nsf funded project whose focus is to aid a diverse community of earth and ocean system model users and developers to use and create robust software quickly. to this end, csdms develops, integrates, archives and disseminates earth-system models and tools to an international (67 country) community with the goal of building the set of tools necessary to model the earth system. modelers use csdms for access to hundreds of open source surface-dynamics models and tools, as well as model metadata. such a model repository increases model transparency and helps eliminate duplication by presenting the current state of modeling efforts. to increase software sustainability, composability and interoperability, csdms promotes standards that define common modeling interfaces, semantic mediation between models, and model metadata. through online resources and workshops, csdms promotes software engineering best practices, which are unfamiliar to many developers within our modeling community. for example, version control, unit testing, continuous integration, test-driven development, and well-written clean code are all topics of the educational mission of csdms.
2014-07-15 06:40:28.000000000
stackoverflow (so) is a widely used question-and-answer (q\&a) website for software developers and computer scientists. github is an online development platform used for storing, tracking, and collaborating on software projects. prior work relates the information mined from both platforms to link user accounts or compare developers' activities across platforms. however, little work has been done to characterize the so answers reused by github projects. in this paper, we conducted an empirical study by mining the so answers reused by java projects available on github. we created a hybrid approach of clone detection, keyword-based search, and manual inspection to identify the answer(s) actually leveraged by developers. based on the identified answers, we further studied the topics of the discussion threads, answer characteristics (e.g., scores, ages, code lengths, and text lengths), and developers' reuse practices. we observed that most reused answers offer programs to implement specific coding tasks. among all analyzed so discussion threads, the reused answers often have relatively higher scores, older ages, longer code, and longer text than unused answers. in only 9% of scenarios (40/430) did developers fully copy answer code for reuse. in the remaining scenarios, they reused partial code or created brand new code from scratch. our study characterized 130 so discussion threads referred to by java developers in 357 github projects. our empirical findings can guide so answerers to provide better answers, and shed light on future research related to so and github.
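the hybrid detection pipeline above starts from code similarity; the sketch below shows one simple way to score token-level similarity between a stack overflow answer snippet and project code, which could feed such a pipeline. the tokenisation, shingle size, and threshold are assumptions, not the study's actual clone detector.

```python
# illustrative sketch: token-shingle similarity between a stack overflow
# answer snippet and project code, a rough stand-in for the clone-detection
# step of the reuse-identification pipeline described above.

import re

def shingles(code: str, k: int = 4) -> set:
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
    return {tuple(tokens[i:i + k]) for i in range(max(len(tokens) - k + 1, 1))}

def similarity(answer_code: str, project_code: str) -> float:
    a, b = shingles(answer_code), shingles(project_code)
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    so_answer = "for (int i = 0; i < n; i++) { sum += a[i]; }"
    project   = "for (int i = 0; i < n; i++) { total += values[i]; }"
    score = similarity(so_answer, project)
    print(f"similarity = {score:.2f}, reused = {score > 0.6}")  # threshold is an assumption
```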
2023-08-18 02:30:19.000000000
cloud- and fog-based machine learning services raise several security concerns. securing the underlying cloud or fog services is essential, as successful attacks against these services, on which machine learning applications rely, can lead to significant impairments of these applications. because the requirements for ai applications can also differ, we distinguish between their use in the cloud and in a fog computing network, which results in different threats and attack possibilities. for cloud platforms, the responsibility for security can be divided between different parties. security deficiencies at a lower level can have a direct impact on the higher level where user data is stored. while responsibilities are simpler for fog computing networks, moving services to the edge of the network means we have to secure them against physical access to the devices. we conclude by outlining specific information security requirements for ai applications.
2023-10-28 22:28:32.000000000
program comprehension concerns the ability of an individual to make sense of an existing software system in order to extend or transform it. software systems comprise data that are noisy and incomplete, which makes program understanding even more difficult. a software system consists of various views, including the module dependency graph, execution logs, evolutionary information, and the vocabulary used in the source code, that collectively define the software system. each of these views contains unique and complementary information which, taken together, can describe the system more accurately. in this paper, we investigate various techniques for combining different sources of information to improve the performance of a program comprehension task. we employ state-of-the-art machine learning techniques to 1) find a suitable similarity function for each view, and 2) compare different multi-view learning techniques to decompose a software system into high-level units and give component-level recommendations for refactoring of the system, as well as cross-view source code search. the experiments conducted on 10 relatively large java software systems show that by fusing knowledge from different views, we can guarantee a lower bound on the quality of the modularization and even improve upon it. we proceed by integrating different sources of information to give a set of high-level recommendations as to how to refactor the software system. furthermore, we demonstrate how learning a joint subspace allows for performing cross-modal retrieval across views, yielding results that are more aligned with what the user intends by the query. the multi-view approaches outlined in this paper can be employed for addressing problems in software engineering that can be encoded in terms of a learning problem, such as software bug prediction and feature location.
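as a minimal sketch of the view-fusion idea, the snippet below combines per-view similarity matrices (e.g., from dependencies and vocabulary) into a single matrix with a weighted average before entities would be clustered into modules. the example matrices, weights, and the use of a plain weighted average are assumptions; they do not reproduce the multi-view learning techniques compared in the paper.

```python
# illustrative sketch: fuse similarity matrices computed from different
# software views via a weighted average. the example matrices and weights
# are assumptions; the paper compares more sophisticated multi-view methods.

import numpy as np

def fuse_views(view_similarities: list, weights: list) -> np.ndarray:
    """weighted average of per-view entity-by-entity similarity matrices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * s for wi, s in zip(w, view_similarities))

if __name__ == "__main__":
    # 3 entities, similarities from two views (dependency graph, vocabulary)
    dep_view   = np.array([[1.0, 0.8, 0.1], [0.8, 1.0, 0.2], [0.1, 0.2, 1.0]])
    vocab_view = np.array([[1.0, 0.5, 0.4], [0.5, 1.0, 0.3], [0.4, 0.3, 1.0]])
    fused = fuse_views([dep_view, vocab_view], weights=[0.6, 0.4])
    print(np.round(fused, 2))
```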
2019-01-31 07:50:09.000000000
rubrics and oral feedback are approaches to help students improve performance and meet learning outcomes. however, their effect on the actual improvement achieved is inconclusive. this paper evaluates the effect of rubrics and oral feedback on student learning outcomes. an experiment was conducted in a software engineering course on requirements engineering, using the two approaches in course assignments. both approaches led to statistically significant improvements, though no material improvement (i.e., a change by more than one grade) was achieved. the rubrics led to a significant decrease in the number of complaints and questions regarding grades.
2023-07-24 02:18:31.000000000
software vulnerabilities can have serious consequences, which is why many techniques have been proposed to defend against them. among these, vulnerability detection techniques are a major area of focus. however, there is a lack of a comprehensive approach for benchmarking these proposed techniques. in this paper, we present the first survey that comprehensively investigates and summarizes the current state of software vulnerability detection benchmarking. we review the current literature on benchmarking vulnerability detection, including benchmarking approaches in technique-proposing papers and empirical studies, and we separately discuss the benchmarking approaches for traditional and deep learning-based vulnerability detection techniques. finally, we summarize the challenges involved in benchmarking software vulnerability detection techniques and describe possible solutions for addressing them.
2023-03-28 06:13:16.000000000
architecture styles characterise families of architectures sharing common characteristics. we have recently proposed configuration logics for architecture style specification. in this paper, we study a graphical notation to enhance readability and ease of expression. we study simple architecture diagrams and a more expressive extension, interval architecture diagrams. for each type of diagram, we present its semantics, a set of necessary and sufficient consistency conditions, and a method that allows the specified architectures to be characterised compositionally. we provide several examples illustrating the application of the results. we also present a polynomial-time algorithm for checking that a given architecture conforms to the architecture style specified by a diagram.
2016-08-09 10:48:13.000000000
in natural ecosystems, animal life-spans are determined by genes and other biological characteristics. similarly, software project life-spans are related to certain internal or external characteristics. analyzing the relations between these characteristics and the project life-span may help developers, investors, and contributors to control the development cycle of a software project. this paper provides an insight into project life-spans in a free open source software ecosystem. a statistical analysis of several project characteristics in github is presented, and we find that the choice of programming languages, the number of files, the label format of the project, and the relevant membership expressions can impact the life-span of a project. based on these discovered characteristics, we also propose a prediction model to estimate the project life-span in open source software ecosystems. these results may help developers reschedule projects in open source software ecosystems.
2017-10-26 18:36:24.000000000
the concept of agile process models has attained great popularity in the software (sw) development community in the last few years. agile models promote fast development. fast development has certain drawbacks, such as weak documentation and performance issues for medium and large development projects. fast development also promotes the use of agile process models in small-scale projects. this paper modifies and evaluates the extreme programming (xp) process model and proposes a novel process model based on these modifications.
2012-02-12 08:38:28.000000000
contributors to open source software (oss) communities assume diverse roles to take different responsibilities. one major limitation of the current oss tools and platforms is that they provide a uniform user interface regardless of the activities performed by the various types of contributors. this paper serves as a non-trivial first step towards resolving this challenge by demonstrating a methodology and establishing knowledge to understand how the contributors' roles and their dynamics, reflected in the activities contributors perform, are exhibited in oss communities. based on an analysis of user action data from 29 github projects, we extracted six activities that distinguished four active roles and five supporting roles of oss contributors, as well as patterns in role changes. through the lens of the activity theory, these findings provided rich design guidelines for oss tools to support diverse contributor roles.
2019-03-10 20:25:11.000000000
this paper is a reproduction of work by ray et al., which claimed to have uncovered a statistically significant association between eleven programming languages and software defects in projects hosted on github. first, we conduct an experimental repetition; the repetition is only partially successful, but it does validate one of the key claims of the original work about the association of ten programming languages with defects. next, we conduct a complete, independent reanalysis of the data and statistical modeling steps of the original study. we uncover a number of flaws that undermine the conclusions of the original study, as only four languages are found to have a statistically significant association with defects, and even for those the effect size is exceedingly small. we conclude with some additional sources of bias that should be investigated in follow-up work and a few best-practice recommendations for similar efforts.
2019-01-25 19:51:37.000000000
good code quality is a prerequisite for efficiently developing maintainable software. in this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. we employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. the interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. we devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
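the template-based natural language generation step can be pictured with the small sketch below, which fills a textual template from software metric values and a threshold; the metric names, thresholds, and wording are assumptions, not the templates used in the paper.

```python
# illustrative sketch: template-based text generation from code metrics.
# metric names, thresholds, and phrasing are assumptions for illustration.

THRESHOLDS = {"cyclomatic_complexity": 10, "method_length": 50}

def describe(entity: str, metrics: dict) -> str:
    sentences = []
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is None:
            continue
        judgement = "exceeds" if value > limit else "stays below"
        sentences.append(
            f"the {name.replace('_', ' ')} of {entity} is {value}, "
            f"which {judgement} the recommended limit of {limit}."
        )
    return " ".join(sentences)

if __name__ == "__main__":
    print(describe("Parser.parse()", {"cyclomatic_complexity": 14, "method_length": 38}))
```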
2019-07-24 09:33:07.000000000
repairnator is a bot. it constantly monitors software bugs discovered during continuous integration of open-source software and tries to fix them automatically. if it succeeds in synthesizing a valid patch, repairnator proposes the patch to the human developers, disguised under a fake human identity. to date, repairnator has been able to produce 5 patches that were accepted by the human developers and permanently merged into the code base. this is a milestone for human-competitiveness in software engineering research on automatic program repair.
2018-10-10 14:09:38.000000000
the information system quality (isq) management discipline requires a set of assessment mechanisms to evaluate external quality characteristics that are influenced by environmental parameters and impacted by ecosystem factors. the present paper suggests a new assessment-oriented model that takes into consideration all facets of each external quality feature. the proposed model, named ratqual, gives a hierarchical categorization of quality. ratqual is designed to quantify environment-dependent qualities by considering internal, external, and in-use aspects. this model is supported by a tool that automates the assessment process. this tool assists in quality evolution planning and supports the periodic monitoring operations used to enhance and improve information system quality.
2013-10-24 17:58:50.000000000
software model optimization is the task of automatically generating design alternatives, usually to improve quantifiable quality aspects of software, such as performance and reliability. in this context, multi-objective optimization techniques have been applied to help the designer find suitable trade-offs among several non-functional properties. in this process, design alternatives can be generated through automated model refactoring and evaluated on non-functional models. due to their complexity, this type of optimization task requires considerable time and resources, often limiting its application in software engineering processes. in this paper, we investigate the effect of using a search budget, specifically a time limit, on the search for new solutions. we performed experiments to quantify the impact that a change in the search budget may have on the quality of solutions. furthermore, we analyzed how different genetic algorithms (i.e., nsga-ii, spea2, and pesa2) perform when imposing different budgets. we experimented on two case studies of different size, complexity, and domain. we observed that imposing a search budget considerably deteriorates the quality of the generated solutions, but the specific algorithm we choose seems to play a crucial role. from our experiments, nsga-ii is the fastest algorithm, while pesa2 generates solutions with the highest quality. in contrast, spea2 is the slowest algorithm and produces the solutions with the lowest quality.
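imposing a time budget on a search-based optimization run can be pictured with the toy genetic algorithm below, which stops evolving as soon as a wall-clock limit is reached; the problem, operators, and budget value are assumptions and do not reproduce the nsga-ii/spea2/pesa2 setups evaluated in the paper.

```python
# illustrative sketch: a toy single-objective genetic algorithm that stops
# when a wall-clock search budget expires. the fitness function, operators,
# and budget value are assumptions for illustration only.

import random
import time

def fitness(candidate: list) -> int:
    return sum(candidate)            # toy objective: maximise number of 1-bits

def evolve(bits: int = 32, pop_size: int = 20, budget_seconds: float = 0.5):
    rng = random.Random(42)
    population = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    deadline = time.monotonic() + budget_seconds
    generations = 0
    while time.monotonic() < deadline:            # search budget check
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        offspring = []
        while len(offspring) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # mutation
                i = rng.randrange(bits)
                child[i] = 1 - child[i]
            offspring.append(child)
        population = offspring
        generations += 1
    best = max(population, key=fitness)
    return generations, fitness(best)

if __name__ == "__main__":
    gens, best = evolve()
    print(f"generations completed within budget: {gens}, best fitness: {best}")
```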
2022-12-14 19:49:57.000000000
in recent years, open innovation (oi) has gained much attention and made firms aware that they need to consider the open environment surrounding them. to facilitate this shift, requirements engineering (re) needs to be adapted in order to manage the increase and complexity of new requirements sources as well as networks of stakeholders. in response, we build on and advance an earlier proposed software engineering framework for fostering oi, focusing on stakeholder management, when to open up, and prioritization and release planning. literature in open source re is contrasted against recent findings of oi in software engineering to establish a current view of the area. based on the synthesized findings, we propose a research agenda within the areas under focus, along with a framing model to help researchers frame and break down their research questions to consider the different angles implied by the oi model.
2022-07-31 18:17:40.000000000
modern code generation tools, utilizing ai models like large language models (llms), have gained popularity for producing functional code. however, their usage presents security challenges, often resulting in insecure code merging into the code base. evaluating the quality of generated code, especially its security, is crucial. while prior research explored various aspects of code generation, the focus on security has been limited, mostly examining code produced in controlled environments rather than real-world scenarios. to address this gap, we conducted an empirical study, analyzing code snippets generated by github copilot from github projects. our analysis identified 452 snippets generated by copilot, revealing a high likelihood of security issues, with 32.8% of python and 24.5% of javascript snippets affected. these issues span 38 different common weakness enumeration (cwe) categories, including significant ones like cwe-330: use of insufficiently random values, cwe-78: os command injection, and cwe-94: improper control of generation of code. notably, eight cwes are among the 2023 cwe top-25, highlighting their severity. our findings confirm that developers should be careful when adding code generated by copilot and should also run appropriate security checks as they accept the suggested code. it also shows that practitioners should cultivate corresponding security awareness and skills.
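security checks on generated snippets can be as lightweight as pattern matching; the sketch below flags two of the cwe categories mentioned above (cwe-330 and cwe-78) in python code with simple heuristics. the patterns are crude assumptions and are not the static analysis used in the study; a real setup would run a dedicated security scanner.

```python
# illustrative sketch: heuristic checks for two CWE categories mentioned
# above in python snippets. these regexes are crude assumptions and no
# substitute for the dedicated security scanners used in the study.

import re

CHECKS = [
    # CWE-330: use of insufficiently random values for security purposes
    (r"\brandom\.(random|randint|choice)\s*\(", "CWE-330: non-cryptographic randomness"),
    # CWE-78: OS command injection via string-built shell commands
    (r"os\.system\s*\(\s*[^)]*(\+|%|\bformat\b|f['\"])", "CWE-78: possible command injection"),
]

def scan(snippet: str) -> list:
    findings = []
    for lineno, line in enumerate(snippet.splitlines(), start=1):
        for pattern, label in CHECKS:
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {label}")
    return findings

if __name__ == "__main__":
    generated = (
        "import os, random\n"
        "token = random.randint(0, 999999)\n"
        "os.system('ping ' + host)\n"
    )
    print(scan(generated))
```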
2023-10-03 04:46:59.000000000
deep learning (dl) techniques have gained significant popularity among software engineering (se) researchers in recent years. this is because they can often solve many se challenges without enormous manual feature engineering effort and complex domain knowledge. although many dl studies have reported substantial advantages over other state-of-the-art models in terms of effectiveness, they often ignore two factors: (1) replicability - whether the reported experimental results can be approximately reproduced with high probability with the same dl model and the same data; and (2) reproducibility - whether reported experimental findings can be reproduced by new experiments with the same experimental protocol and dl model, but different sampled real-world data. unlike traditional machine learning (ml) models, dl studies commonly overlook these two factors and declare them as minor threats or leave them for future work. this is mainly due to high model complexity, with many manually set parameters, and the time-consuming optimization process. in this study, we conducted a literature review on 93 dl studies recently published in twenty se journals or conferences. our statistics show the urgency of investigating these two factors in se. moreover, we re-ran four representative dl models in se. experimental results show the importance of replicability and reproducibility, as the reported performance of a dl model could not be replicated due to an unstable optimization process. reproducibility could be substantially compromised if the model training is not convergent, or if performance is sensitive to the size of the vocabulary and the testing data. it is therefore urgent for the se community to provide a long-lasting link to a replication package, enhance dl-based solution stability and convergence, and avoid performance sensitivity to different sampled data.
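a minimal way to surface the replicability concern raised above is to repeat a training run with different seeds and report the spread of the metric; the sketch below does this for a small scikit-learn classifier on synthetic data, with the model, data, and number of repetitions chosen purely for illustration.

```python
# illustrative sketch: quantify replicability by repeating training with
# different random seeds and reporting mean and standard deviation of the
# metric. model, data, and repetition count are illustrative assumptions.

import statistics
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def repeated_runs(n_runs: int = 10):
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        model = RandomForestClassifier(n_estimators=50, random_state=seed)
        model.fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))
    return statistics.mean(scores), statistics.stdev(scores)

if __name__ == "__main__":
    mean_acc, std_acc = repeated_runs()
    print(f"accuracy over runs: {mean_acc:.3f} +/- {std_acc:.3f}")
```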
2020-06-22 21:41:47.000000000
this report is a high-level summary analysis of the 2017 github open source survey dataset, presenting frequency counts, proportions, and frequency or proportion bar plots for every question asked in the survey.
2017-06-04 23:08:12.000000000
during the life cycle of an xml application, both schemas and queries may change from one version to another. schema evolutions may affect query results and potentially the validity of produced data. nowadays, a challenge is to assess and accommodate the impact of these changes in rapidly evolving xml applications. this article proposes a logical framework and tool for verifying forward/backward compatibility issues involving schemas and queries. first, it allows analyzing relations between schemas. second, it allows xml designers to identify queries that must be reformulated in order to produce the expected results across successive schema versions. third, it allows examining more precisely the impact of schema changes over queries, therefore facilitating their reformulation.
2008-11-26 14:37:01.000000000
in previous work, we have introduced a contract-based realizability checking algorithm for assume-guarantee contracts involving infinite theories, such as linear integer/real arithmetic and uninterpreted functions over infinite domains. this algorithm can determine whether or not it is possible to construct a realization (i.e. an implementation) of an assume-guarantee contract. the algorithm is similar to k-induction model checking, but involves the use of quantifiers to determine implementability. while our work on realizability is inherently useful for virtual integration in determining whether it is possible for suppliers to build software that meets a contract, it also provides the foundations to solving the more challenging problem of component synthesis. in this paper, we provide an initial synthesis algorithm for assume-guarantee contracts involving infinite theories. to do so, we take advantage of our realizability checking procedure and a skolemization solver for forall-exists formulas, called ae-val. we show that it is possible to immediately adapt our existing algorithm towards synthesis by using this solver, using a demonstration example. we then discuss challenges towards creating a more robust synthesis algorithm.
2016-01-25 02:23:07.000000000
the regression test suite, a key resource for managing program evolution, needs to achieve 100% coverage, or very close, to be useful. devising a test suite manually is unacceptably tedious, but existing automated methods are often inefficient. the method described in this article, ``seeding contradiction'', inserts incorrect instructions into every basic block of the program, enabling an smt-based hoare-style prover to generate a counterexample for every branch of the program and, from the collection of all such counterexamples, a test suite. the method is static, works fast, and achieves excellent coverage.
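to make the ``seeding contradiction'' idea concrete outside its smt/hoare setting, the sketch below uses python's ast module to prepend a deliberately failing assertion to every block of a function, so any execution reaching a block trips its seeded assertion. this is only a loose analogy under assumed names and a dynamic setting, not the prover-based method of the paper.

```python
# illustrative sketch (loose analogy): seed a failing assertion at the start
# of every block of a function using the ast module, so that any input
# reaching a block makes that seeded assertion fail. names and the dynamic
# setting are assumptions; the paper works with an SMT-based prover instead.

import ast

class SeedContradictions(ast.NodeTransformer):
    def __init__(self):
        self.counter = 0

    def _seed(self, body):
        self.counter += 1
        marker = ast.parse(f'assert False, "seeded block {self.counter}"').body[0]
        return [marker] + body

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.body = self._seed(node.body)
        return node

    def visit_If(self, node):
        self.generic_visit(node)
        node.body = self._seed(node.body)
        if node.orelse:
            node.orelse = self._seed(node.orelse)
        return node

source = """
def classify(x):
    if x < 0:
        return "negative"
    else:
        return "non-negative"
"""

tree = SeedContradictions().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))   # shows the function with a seeded assert in every block
```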
2023-09-07 15:47:11.000000000
static analysis of smart contracts is becoming more widespread on blockchain platforms. analyzers rely on techniques like symbolic execution or model checking, but few of them can provide strong soundness properties and guarantee the termination of the analysis at the same time. as smart contracts often manipulate economic assets, proving numerical properties beyond the absence of runtime errors is also desirable. smart-contract execution models differ considerably from mainstream programming languages and vary from one blockchain to another, making state-of-the-art analyses hard to adapt. for instance, smart-contract calls may modify a persistent storage, impacting subsequent calls. this makes it difficult for tools to infer the invariants required to formally ensure the absence of exploitable vulnerabilities. the michelson smart-contract language, used in the tezos blockchain, is strongly typed, stack-based, and has a strict execution model leaving few opportunities for implicit runtime errors. we present a work-in-progress static analyzer for michelson based on abstract interpretation and implemented within mopsa, a modular static analyzer. our tool supports the michelson semantic features, including inner calls to external contracts. it can prove the absence of runtime errors and infer invariants on the persistent storage over an unbounded number of calls. it is also being extended to prove high-level numerical and security properties.
2022-10-10 13:10:56.000000000
software quality assurance activities become increasingly difficult as software systems become more and more complex and continuously grow in size. moreover, testing becomes even more expensive when dealing with large-scale systems. thus, to effectively allocate quality assurance resources, researchers have proposed fault prediction (fp), which utilizes machine learning (ml) to predict fault-prone code areas. however, ml algorithms typically make use of stochastic elements to increase the prediction models' generalizability and the efficiency of the training process. these stochastic elements, also known as nondeterminism-introducing (ni) factors, lead to variance in the training process and, as a result, to variance in prediction accuracy and training time. this variance poses a challenge for reproducibility in research. more importantly, while fault prediction models may have shown good performance in the lab (e.g., often involving multiple runs and averaging outcomes), high variance of results poses the risk that these models show low performance when applied in practice. in this work, we experimentally analyze the variance of a state-of-the-art fault prediction approach. our experimental results indicate that ni factors can indeed cause considerable variance in the fault prediction models' accuracy. we observed a maximum variance of 10.10% in terms of the per-class accuracy metric. we thus also discuss how to deal with such variance.
2023-10-25 06:10:22.000000000
in the realm of software applications in the transportation industry, domain-specific languages (dsls) have enjoyed widespread adoption due to their ease of use and various other benefits. with the ceaseless progress in computer performance and the rapid development of large-scale models, the possibility of programming using natural language in specified applications - referred to as application-specific natural language (asnl) - has emerged. asnl exhibits greater flexibility and freedom, which, in turn, leads to an increase in computational complexity for parsing and a decrease in processing performance. to tackle this issue, our paper proposes a design for an intermediate representation (ir) that caters to asnl and can uniformly process transportation data into a graph data format, improving data processing performance. experimental comparisons reveal that, in standard data query operations, our proposed ir design can achieve a speed improvement of over forty times compared to direct use of standard xml format data.
2023-07-12 09:32:31.000000000
peer code reviews are crucial for maintaining the quality of the code in software repositories. developers have introduced a number of software bots to help with the code review process. despite the benefits of automating code review tasks, many developers face challenges interacting with these bots due to non-comprehensive feedback and disruptive notifications. in this paper, we analyze how incorporating a bot in the software development cycle can decrease the turnaround time of pull requests. to address this issue, we created a bot called suggestion bot that automatically reviews the code base using github's suggested changes functionality. we also conducted a preliminary comparative empirical investigation between the use of this bot and manual review procedures. we evaluate suggestion bot concerning its impact on review time and also analyze whether the comments given by the bot are clear and useful for users. our results provide implications for the design of future systems and for improving human-bot interactions in code review.
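github's suggested-changes feature used by the bot is exposed through the pull request review comments api: a review comment whose body contains a suggestion-fenced block can be applied with one click. the sketch below posts such a comment with the requests library; the repository, token, commit id, file path, and suggested line are placeholder assumptions, and this is not the bot's actual implementation.

```python
# illustrative sketch: post a GitHub review comment carrying a suggested
# change on a pull request line. owner/repo, token, commit sha, path, and
# line number are placeholder assumptions for illustration only.

import requests

def post_suggestion(owner, repo, pull_number, token, commit_id, path, line, new_code):
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pull_number}/comments"
    fence = "```"
    body = f"consider this change:\n{fence}suggestion\n{new_code}\n{fence}"
    payload = {
        "body": body,
        "commit_id": commit_id,   # head commit of the pull request
        "path": path,             # file the comment refers to
        "line": line,             # diff line the suggestion replaces
        "side": "RIGHT",
    }
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    response = requests.post(url, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # placeholder values; a real call needs a valid token and an open pull request
    post_suggestion("octo-org", "octo-repo", 42, "ghp_exampletoken",
                    "0123abcd", "src/app.py", 17, "result = compute(value)")
```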
2023-05-10 04:58:39.000000000
context: software code review aims to find code anomalies early and to perform code improvements when they are less expensive. however, the issues and challenges faced by developers who do not apply code review practices regularly are unclear. goal: investigate the difficulties developers face in applying code review practices, without limiting the target audience to developers who already use this practice regularly. method: we conducted a web-based survey with 350 brazilian practitioners engaged in the software development industry. results: code review practices are widespread among brazilian practitioners, who recognize their importance. however, there is no routine for applying these practices. in addition, practitioners report difficulties in fitting static analysis tools into the software development process. one possible reason recognized by practitioners is that most of these tools use a single metric threshold, which might not be adequate to evaluate all system classes. conclusion: improving guidelines to fit code review practices into the software development process could help to make them widely used. additionally, future studies should investigate whether multiple metric thresholds that take source code context into account reduce static analysis tool false alarms. finally, these tools should allow their use in distinct phases of the software development process.
2020-07-27 14:18:59.000000000
the expanding hardware diversity in high performance computing adds enormous complexity to scientific software development. developers who aim to write maintainable software have two options: 1) to use a so-called data locality abstraction that handles portability internally, thereby turning performance versus productivity into a trade-off; such abstractions usually come in the form of libraries, domain-specific languages, and run-time systems. 2) to use generic programming, where performance, productivity, and portability are subject to software design. following the second option, this work describes a design approach that allows the integration of low-level and verbose programming tools into high-level generic algorithms based on template meta-programming in c++. this enables the development of performance-portable applications targeting host-device computer architectures, such as cpus and gpus. with a suitable design in place, the extensibility of generic algorithms to new hardware becomes a well-defined procedure that can be developed in isolation from other parts of the code. this allows scientific software to remain maintainable and efficient in a period of diversifying hardware in hpc. as a proof of concept, a finite-difference modelling algorithm for the acoustic wave equation is developed and benchmarked using roofline model analysis on an intel xeon gold 6248 cpu, an nvidia tesla v100 gpu, and an amd mi100 gpu.
2023-11-08 10:31:18.000000000
software development has been changing rapidly, and the development process can be accelerated by developer-friendly approaches. we can save time and speed up development if we can automatically guide programmers during software development. some approaches recommend relevant code snippets and api items to the developer; some apply general code-searching techniques, while others use online repository mining strategies. however, it becomes quite difficult to help programmers when they face particular type-conversion problems, more specifically when they want to adapt existing interfaces according to their expectations. one familiar way to guide developers in such situations is adapting collections and arrays through automated adaptation of object ensembles. but how does this help a novice developer in real-time software development when the conversion is not explicitly specified? in this paper, we develop a system that works as a plugin tool integrated with a particular data mining integrated environment (dmie) to recommend relevant interfaces when developers face a type-conversion situation. we mine a repository of the relevant adapter classes and related apis, against which developers can issue queries and obtain results based on the relevant transformer classes. the system that provides these recommendations is titled automated objective ensembles (aoe plugin). from our investigation, we find that our approach performs considerably better than some of the existing approaches.
2020-05-06 06:46:27.000000000
enterprise integration patterns (eip) are a collection of widely used stencils for integrating enterprise applications and business processes. these patterns represent a "de-facto" standard reference for design decisions when integrating enterprise applications. for each of these patterns we present the integration semantics (model) and the conceptual translation (syntax) to the business process model and notation (bpmn), which is a "de-facto" standard for modelling business process semantics and their runtime behavior.
2014-03-15 04:12:39.000000000
in a buggy configurable system, configuration-dependent bugs cause failures in only certain configurations due to unexpected interactions among features. manually localizing configuration-dependent faults in configurable systems can be highly time-consuming due to their complexity. however, the cause of configuration-dependent bugs is not considered by existing automated fault localization techniques, which are designed to localize bugs in non-configurable code; their capacity for efficient configuration-dependent fault localization is therefore limited. in this work, we propose cofl, a novel approach to localize configuration-dependent bugs by identifying and analyzing suspicious feature interactions that potentially cause the failures in buggy configurable systems. we evaluated the efficiency of cofl in localizing artificial configuration-dependent faults in a highly-configurable system. we found that cofl significantly improves on the baseline spectrum-based approaches. with cofl, on average, the correctness in ranking the buggy statements increases more than 7 times, and the search space is narrowed down significantly, by about 15 times.
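a spectrum-based score over feature interactions can be illustrated as below: for every pair of enabled features, an ochiai-style suspiciousness is computed from how often the pair appears in failing versus passing configurations. the sample configurations and the restriction to pairwise interactions are simplifying assumptions relative to the cofl approach.

```python
# illustrative sketch: ochiai-style suspiciousness of pairwise feature
# interactions from passing/failing configurations. the sample data and the
# pairwise restriction are simplifying assumptions relative to the approach.

from itertools import combinations
from math import sqrt

def interaction_suspiciousness(configs: list) -> dict:
    """configs: list of (enabled_features, failed) observations."""
    total_failed = sum(1 for _, failed in configs if failed)
    all_features = set().union(*(f for f, _ in configs))
    scores = {}
    for pair in combinations(sorted(all_features), 2):
        in_failed = sum(1 for f, failed in configs if failed and set(pair) <= f)
        in_passed = sum(1 for f, failed in configs if not failed and set(pair) <= f)
        denom = sqrt(total_failed * (in_failed + in_passed))
        scores[pair] = in_failed / denom if denom else 0.0
    return scores

if __name__ == "__main__":
    observations = [
        ({"A", "B"}, True),
        ({"A", "B", "C"}, True),
        ({"A", "C"}, False),
        ({"B", "C"}, False),
    ]
    ranked = sorted(interaction_suspiciousness(observations).items(),
                    key=lambda kv: kv[1], reverse=True)
    for pair, score in ranked:
        print(pair, round(score, 2))   # ('A', 'B') ranks highest
```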
2019-11-18 01:22:58.000000000
developers use question and answer (q&a) websites to exchange knowledge and expertise. stack overflow is a popular q&a website where developers discuss coding problems and share code examples. although all stack overflow posts are free to access, code examples on stack overflow are governed by the creative commons attribute-sharealike 3.0 unported license that developers should obey when reusing code from stack overflow or posting code to stack overflow. in this paper, we conduct a case study with 399 android apps, to investigate whether developers respect license terms when reusing code from stack overflow posts (and the other way around). we found 232 code snippets in 62 android apps from our dataset that were potentially reused from stack overflow, and 1,226 stack overflow posts containing code examples that are clones of code released in 68 android apps, suggesting that developers may have copied the code of these apps to answer stack overflow questions. we investigated the licenses of these pieces of code and observed 1,279 cases of potential license violations (related to code posting to stack overflow or code reuse from stack overflow). this paper aims to raise the awareness of the software engineering community about potential unethical code reuse activities taking place on q&a websites like stack overflow.
2017-03-07 14:40:58.000000000
recent advances in ict enable the evolution of the manufacturing industry to meet the new requirements of society. cyber-physical systems, the internet-of-things (iot), and cloud computing play a key role in the fourth industrial revolution known as industry 4.0. the microservice architecture has evolved as an alternative to soa and promises to address many of the challenges in software development. in this paper, we adopt the concept of microservice and describe a framework for manufacturing systems that has the cyber-physical microservice as its key construct. the manufacturing plant processes are defined as compositions of primitive cyber-physical microservices adopting either the orchestration or the choreography pattern. iot technologies are used for system integration, and model-driven engineering is utilized to semi-automate the development process for the industrial engineer, who is not familiar with microservices and iot. two case studies demonstrate the feasibility of the proposed approach.
2018-01-29 13:02:37.000000000
the sustainability of any data warehouse system (dws) is closely correlated with user satisfaction. therefore, analysts, designers, and developers have focused more on achieving full functionality, without considering other kinds of requirements, such as dependability aspects. moreover, the latter are often considered as properties of the system that must be checked and corrected once the project is completed. this practice of "fix it later" can cause the obsolescence of the entire data warehouse system. it therefore requires the adoption of a methodology that ensures the integration of dependability aspects from the early stages of a dws project. in this paper, we first define the concepts related to the dependability of dws. then we present our approach, inspired by the mda (model driven architecture) approach, to model dependability aspects, namely availability, reliability, maintainability, and security, taking into account their interaction.
2013-10-28 15:56:25.000000000
large language models (llms) have been garnering significant attention from ai researchers, especially following the widespread popularity of chatgpt. however, due to llms' intricate architecture and vast parameters, several concerns and challenges regarding their quality assurance need to be addressed. in this paper, a fine-tuned gpt-based sentiment analysis model is first constructed and studied as the reference in ai quality analysis. then, a quality analysis related to data adequacy is implemented, including employing a content-based approach to generate reasonable adversarial review comments as wrongly-annotated data, and developing surprise adequacy (sa)-based techniques to detect these abnormal data. experiments based on amazon.com review data and a fine-tuned gpt model were conducted. the results are discussed from the perspective of ai quality assurance, presenting the quality analysis of an llm model on generated adversarial textual data and the effectiveness of using sa for anomaly detection in data quality assurance.
2023-10-07 04:11:23.000000000
the choice of programming language is a very important decision, as it not only affects the performance and maintainability of the software but also dictates the talent pool and community support available. to better understand the trade-offs involved in making such a decision, we define and compute the popularity, demand, availability, and community engagement of programming languages through online collaboration platforms. we perform our analysis using data from github and stackoverflow, two of the most popular programming communities. from github, we obtain data on projects, languages, and developer engagement; from stackoverflow, programming questions with answers along with language tags. we compute metrics separately for the two data sources and then combine them to provide a holistic and robust picture of the communities for the most popular programming languages.
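combining metrics from the two platforms can be done by normalising each metric and averaging, as in the sketch below; the sample numbers and the equal weighting are assumptions, not the study's actual data or aggregation.

```python
# illustrative sketch: min-max normalise per-platform metrics and combine
# them into one score per language. numbers and equal weights are assumptions.

def normalise(values: dict) -> dict:
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1
    return {k: (v - lo) / span for k, v in values.items()}

def combined_score(per_metric: dict) -> dict:
    """per_metric maps metric name -> {language: raw value}."""
    normalised = {m: normalise(vals) for m, vals in per_metric.items()}
    languages = next(iter(per_metric.values())).keys()
    return {
        lang: sum(normalised[m][lang] for m in per_metric) / len(per_metric)
        for lang in languages
    }

if __name__ == "__main__":
    metrics = {
        "github_repos":    {"python": 900, "java": 700, "go": 300},
        "so_questions":    {"python": 1800, "java": 1500, "go": 250},
        "so_answer_ratio": {"python": 0.85, "java": 0.80, "go": 0.90},
    }
    print(combined_score(metrics))
```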
2020-05-31 19:11:25.000000000
since distributed software systems are ubiquitous, their correct functioning is crucially important. static verification is possible in principle, but requires high expertise and effort, which is not feasible in many ecosystems. runtime verification can serve as a lean alternative, where monitoring mechanisms are automatically generated from property specifications to check compliance at runtime. this paper contributes a practical solution for powerful and flexible runtime verification of distributed, object-oriented applications, via a combination of the runtime verification tool larva and the active object framework proactive. although larva by itself supports only the generation of local, sequential monitors, we empower larva for distributed monitoring by connecting monitors with active objects, turning them into active, communicating monitors. we discuss how this allows for a variety of monitoring architectures. further, we show how property specifications, and thereby the generated monitors, provide a model that splits the blame between the local object and its environment. while larva itself focuses on monitoring of control-oriented properties, we use the larva front-end starvoors to also capture data-oriented (pre/post) properties in the distributed monitoring. we demonstrate this approach to distributed runtime verification with a case study, a distributed key/value store.
2019-08-25 11:23:42.000000000
the fragmentation problem has extended from android to other platforms, such as ios, mobile web, and even mini-programs within some applications (apps). in such a situation, recording and replaying test scripts is a popular automated mobile app testing approach, but it encounters severe problems when crossing platforms. different versions of the same app need to be developed for different platforms, relying on different platform support. therefore, mobile app developers need to develop and maintain test scripts for multiple platforms aimed at exactly the same test requirements, greatly increasing testing costs. however, we observe that developers adopt highly similar user interface layouts for versions of the same app on different platforms. this phenomenon inspires us to replay test scripts from the perspective of similar ui layouts. we propose an image-driven mobile app testing framework utilizing widget feature matching and layout characterization matching. we use computer vision technologies to perform ui feature comparison and layout hierarchy extraction on app screenshots to obtain ui structures with rich contextual information, including coordinates, relative relationships, etc. based on the acquired ui structures, we can form a platform-independent test script and then locate the target widgets under test. thus, the proposed framework non-intrusively replays test scripts according to a novel platform-independent test script model. we also design and implement a tool named lit to put the proposed framework into practice, based on which we conduct an empirical study to evaluate the effectiveness and usability of the proposed testing framework. results show that the overall replay accuracy reaches around 63.39% on android (a 14% improvement over state-of-the-art approaches) and 21.83% on ios (a 98% improvement over state-of-the-art approaches).
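widget feature matching on screenshots can be approximated with classic template matching; the sketch below locates a widget crop inside a full screenshot using opencv and returns its coordinates when the match is confident enough. the file names and the confidence threshold are assumptions, and the framework above additionally uses layout-hierarchy matching rather than relying on image matching alone.

```python
# illustrative sketch: locate a widget inside a screenshot with OpenCV
# template matching. file paths and the confidence threshold are assumptions;
# the framework above additionally uses layout-hierarchy matching.

import cv2

def locate_widget(screenshot_path: str, widget_path: str, threshold: float = 0.8):
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    widget = cv2.imread(widget_path, cv2.IMREAD_GRAYSCALE)
    if screen is None or widget is None:
        raise FileNotFoundError("screenshot or widget image not found")
    result = cv2.matchTemplate(screen, widget, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                      # widget not found with enough confidence
    x, y = max_loc
    h, w = widget.shape
    return {"x": x, "y": y, "width": w, "height": h, "confidence": float(max_val)}

if __name__ == "__main__":
    # placeholder paths; replace with a real screenshot and widget crop
    match = locate_widget("screenshot_android.png", "login_button.png")
    print(match)
```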
2020-08-11 15:23:51.000000000
reproducibility and comparability of empirical results are at the core tenet of the scientific method in any scientific field. to ease reproducibility of empirical studies, several benchmarks in software engineering research, such as defects4j, have been developed and widely used. for quantum software engineering research, however, no benchmark has been established yet. in this position paper, we propose a new benchmark -- named qbugs -- which will provide experimental subjects and an experimental infrastructure to ease the evaluation of new research and the reproducibility of previously published results on quantum software engineering.
2021-03-30 08:51:33.000000000
semantic web technologies offer the prospect of significantly reducing the amount of effort required to integrate existing enterprise functionality in support of new composite processes, whether within a given organization or across multiple ones. a significant body of work in this area has aimed to fully automate this process, while assuming that all functionality has already been encapsulated in the form of semantic web services with rich and accurate annotations. in this article, we argue that this assumption is often unrealistic. instead, we describe a mixed-initiative framework for semantic web service discovery and composition that aims at flexibly interleaving human decision making and automated functionality in environments where annotations may be incomplete and even inconsistent.
2020-06-01 18:16:39.000000000
quantum software plays a critical role in exploiting the full potential of quantum computing systems. as a result, it has been drawing increasing attention recently. this paper defines the term "quantum software engineering" and introduces a quantum software life cycle. the paper also gives a generic view of quantum software engineering and discusses the quantum software engineering processes, methods, and tools. based on these, the paper provides a comprehensive survey of the current state of the art in the field and presents the challenges and opportunities we face. the survey summarizes the technology available in the various phases of the quantum software life cycle, including quantum software requirements analysis, design, implementation, test, and maintenance. it also covers the crucial issues of quantum software reuse and measurement.
2020-07-12 22:52:40.000000000
in imperative programming, the domain-driven design methodology helps in coping with the complexity of software development by materializing in code the invariants of a domain of interest. code is cleaner and more secure because any implicit assumption is removed in favor of invariants, thus enabling a fail-fast mindset and the immediate reporting of unexpected conditions. this article introduces a notion of template for answer set programming that, in addition to the don't-repeat-yourself principle, enforces the locality of some predicates by means of a simple naming convention. local predicates are mapped to the usual global namespace adopted by mainstream engines, using universally unique identifiers to avoid name clashes. this way, local predicates can be used to enforce invariants on the expected outcome of a template in a possibly empty context of application, independently of other rules that can be added to such a context. template applications transpiled this way can be processed by mainstream engines and safely shared with other knowledge designers, even when they have zero knowledge of templates.
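the uuid-based renaming of local predicates can be sketched as a small source-to-source step: predicates marked local (here, by a leading double underscore, an assumed convention) are rewritten to globally unique names before the template body is handed to a standard asp engine. this is only a rough illustration of the mapping, not the paper's transpiler.

```python
# illustrative sketch: rewrite local predicates (assumed to start with "__")
# in an ASP template to UUID-suffixed global names, so repeated template
# applications cannot clash. the naming convention is an assumption.

import re
import uuid

def instantiate_template(rules: str) -> str:
    """give every local predicate of one template application a unique name."""
    suffix = uuid.uuid4().hex[:8]
    mapping = {}

    def rename(match):
        name = match.group(0)
        mapping.setdefault(name, f"{name.lstrip('_')}_{suffix}")
        return mapping[name]

    return re.sub(r"__\w+", rename, rules)

if __name__ == "__main__":
    template = """
__reach(X) :- start(X).
__reach(Y) :- __reach(X), edge(X,Y).
connected :- __reach(goal).
"""
    print(instantiate_template(template))
```

calling instantiate_template twice on the same template yields two distinct suffixes, so the two applications' local predicates cannot interfere when combined in one program.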
2023-07-11 21:11:21.000000000
neural language models of code, or neural code models (ncms), are rapidly progressing from research prototypes to commercial developer tools. as such, understanding the capabilities and limitations of such models is becoming critical. however, the abilities of these models are typically measured using automated metrics that often only reveal a portion of their real-world performance. while, in general, the performance of ncms appears promising, currently much is unknown about how such models arrive at decisions. to this end, this paper introduces $do_{code}$, a post hoc interpretability method specific to ncms that is capable of explaining model predictions. $do_{code}$ is based upon causal inference to enable programming language-oriented explanations. while the theoretical underpinnings of $do_{code}$ are extensible to exploring different model properties, we provide a concrete instantiation that aims to mitigate the impact of spurious correlations by grounding explanations of model behavior in properties of programming languages. to demonstrate the practical benefit of $do_{code}$, we illustrate the insights that our framework can provide by performing a case study on two popular deep learning architectures and ten ncms. the results of this case study illustrate that our studied ncms are sensitive to changes in code syntax. all our ncms, except for the bert-like model, statistically learn to predict tokens related to blocks of code (e.g., brackets, parentheses, semicolons) with less confounding bias as compared to other programming language constructs. these insights demonstrate the potential of $do_{code}$ as a useful method to detect and facilitate the elimination of confounding bias in ncms.
2023-02-06 08:59:09.000000000
we present a novel verification technique to prove interesting properties of a class of array programs with a symbolic parameter $n$ denoting the size of arrays. the technique relies on constructing two slightly different versions of the same program. it infers difference relations between the corresponding variables at key control points of the joint control-flow graph of the two program versions. the desired post-condition is then proved by inducting on the program parameter $n$, wherein the difference invariants are crucially used in the inductive step. this contrasts with classical techniques that rely on finding potentially complex loop invariants for each loop in the program. our synergistic combination of inductive reasoning and finding simple difference invariants helps prove properties of programs that cannot be proved even by the winner of the arrays sub-category of sv-comp 2021. we have implemented a prototype tool called diffy to demonstrate these ideas. we present results comparing the performance of diffy with that of state-of-the-art tools.
2021-05-28 23:03:21.000000000