columns: input (string, lengths 29 to 3.27k), created_at (string, length 29)
a data warehouse stores integrated information as materialized views over data from one or more remote sources. these materialized views must be maintained in response to actual relation updates in the remote sources. data warehouse view maintenance techniques can be classified into four major categories: self-maintainable recomputation, not-self-maintainable recomputation, self-maintainable incremental maintenance, and not-self-maintainable incremental maintenance. this paper provides a comprehensive comparison of the techniques in these four categories in terms of data warehouse space usage and the number of rows accessed in order to propagate an update from a remote data source to a target materialized view in the data warehouse.
2010-02-10 20:08:36.000000000
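A minimal sketch of the distinction the abstract above draws (not taken from the paper, which is a comparison study): full recomputation rebuilds a materialized view from the source relation, while incremental maintenance folds in only the deltas. The table, predicate, and row shapes here are hypothetical.

```python
# Illustrative: maintaining a materialized selection view over a remote source,
# contrasting recomputation with incremental (delta-based) maintenance.

def recompute_view(source_rows, predicate):
    """Recomputation: rebuild the view from the full source relation."""
    return [r for r in source_rows if predicate(r)]

def apply_delta(view_rows, inserted, deleted, predicate):
    """Incremental maintenance: fold inserted/deleted source rows into the
    existing view. For a simple selection this is self-maintainable, since
    no further access to the remote source is needed."""
    kept = [r for r in view_rows if r not in deleted]
    kept.extend(r for r in inserted if predicate(r))
    return kept

if __name__ == "__main__":
    pred = lambda r: r["amount"] > 100                 # hypothetical view predicate
    source = [{"id": 1, "amount": 50}, {"id": 2, "amount": 150}]
    view = recompute_view(source, pred)                # initial materialization
    view = apply_delta(view, inserted=[{"id": 3, "amount": 200}],
                       deleted=[], predicate=pred)
    print(view)  # [{'id': 2, 'amount': 150}, {'id': 3, 'amount': 200}]
```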
cyber-physical systems (cpss) in critical infrastructure face a pervasive threat from attackers, motivating research into a variety of countermeasures for securing them. assessing the effectiveness of these countermeasures is challenging, however, as realistic benchmarks of attacks are difficult to manually construct, blindly testing is ineffective due to the enormous search spaces and resource requirements, and intelligent fuzzing approaches require impractical amounts of data and network access. in this work, we propose active fuzzing, an automatic approach for finding test suites of packet-level cps network attacks, targeting scenarios in which attackers can observe sensors and manipulate packets, but have no existing knowledge about the payload encodings. our approach learns regression models for predicting sensor values that will result from sampled network packets, and uses these predictions to guide a search for payload manipulations (i.e. bit flips) most likely to drive the cps into an unsafe state. key to our solution is the use of online active learning, which iteratively updates the models by sampling payloads that are estimated to maximally improve them. we evaluate the efficacy of active fuzzing by implementing it for a water purification plant testbed, finding it can automatically discover a test suite of flow, pressure, and over/underflow attacks, all with substantially less time, data, and network access than the most comparable approach. finally, we demonstrate that our prediction models can also be utilised as countermeasures themselves, implementing them as anomaly detectors and early warning systems.
2020-05-27 14:08:56.000000000
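A heavily simplified sketch of the core loop described in the abstract above: fit a regression model mapping payload bits to a predicted sensor value, then search for bit flips the model predicts will push the sensor toward an unsafe threshold. It uses scikit-learn, plain random sampling instead of the paper's online active learning, and a stand-in for the real plant; all names, widths, and thresholds are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
N_BITS, UNSAFE_LEVEL = 16, 0.9            # hypothetical payload width / safety threshold

def observe_sensor(bits):
    """Stand-in for the real CPS: sensor value as an unknown function of payload bits."""
    secret_weights = np.linspace(-0.1, 0.1, N_BITS)
    return float(bits @ secret_weights + 0.5)

# 1. Sample payloads and observe the resulting sensor values.
X = rng.integers(0, 2, size=(200, N_BITS))
y = np.array([observe_sensor(x) for x in X])

# 2. Learn a regression model predicting sensor values from payload bits.
model = LinearRegression().fit(X, y)

# 3. Greedily keep bit flips predicted to drive the sensor toward the unsafe region.
candidate = X[0].copy()
for bit in range(N_BITS):
    flipped = candidate.copy()
    flipped[bit] ^= 1
    if model.predict([flipped])[0] > model.predict([candidate])[0]:
        candidate = flipped

print("predicted:", model.predict([candidate])[0],
      "unsafe?", observe_sensor(candidate) > UNSAFE_LEVEL)
```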
app stores are highly competitive markets, sometimes offering dozens of apps for a single use case. unexpected app changes, such as a feature removal, might incite even loyal users to explore alternative apps. sentiment analysis tools can help monitor the emotions users express, e.g., in app reviews or tweets. we found that these emotions follow four recurring patterns corresponding to app releases. based on these patterns and online reports about popular apps, we derived five release lessons to help app vendors maintain positive emotions and gain competitive advantages.
2019-06-12 12:42:42.000000000
devops is a modern software engineering paradigm that is gaining widespread adoption in industry. the goal of devops is to bring software changes into production with a high frequency and fast feedback cycles. this conflicts with software quality assurance activities, particularly with respect to performance. for instance, performance evaluation activities -- such as load testing -- require a considerable amount of time to get statistically significant results. we conducted an industrial survey to get insights into how performance is addressed in industrial devops settings. in particular, we were interested in the frequency of executing performance evaluations, the tools being used, the granularity of the obtained performance data, and the use of model-based techniques. the survey responses, which come from a wide variety of participants from different industry sectors, indicate that the complexity of performance engineering approaches and tools is a barrier for wide-spread adoption of performance analysis in devops. the implication of our results is that performance analysis tools need to have a short learning curve, and should be easy to integrate into the devops pipeline.
2018-08-17 21:53:09.000000000
code generation tools driven by artificial intelligence have recently become more popular due to advancements in deep learning and natural language processing that have increased their capabilities. the proliferation of these tools may be a double-edged sword: while they can increase developer productivity by making it easier to write code, research has shown that they can also generate insecure code. in this paper, we perform a user-centered evaluation of github's copilot to better understand its strengths and weaknesses with respect to code security. we conduct a user study where participants solve programming problems (with and without copilot assistance) that have potentially vulnerable solutions. the main goal of the user study is to determine how the use of copilot affects participants' security performance. in our set of participants (n=25), we find that access to copilot accompanies a more secure solution when tackling harder problems. for the easier problem, we observe no effect of copilot access on the security of solutions. we also observe no disproportionate impact of copilot use on particular kinds of vulnerabilities. our results indicate that there are potential security benefits to using copilot, but more research is warranted on the effects of the use of code generation tools on technically complex problems with security requirements.
2023-08-10 19:31:02.000000000
ui design is an integral part of software development. for many developers who do not have much ui design experience, exposure to a large database of real-application ui designs can help them quickly build up a realistic understanding of the design space for a software feature and get design inspiration from existing applications. however, existing keyword-based, image-similarity-based, and component-matching-based methods cannot reliably find, in a large database, relevant high-fidelity ui designs similar to the ui wireframe that developers sketch, in the face of the great variation in ui designs. in this article, we propose a deep-learning-based ui design search engine to fill this gap. the key innovation of our search engine is to train a wireframe image autoencoder using a large database of real-application ui designs, without the need to label relevant ui designs. we implement our approach for android ui design search, and conduct extensive experiments with artificially created relevant ui designs and human evaluation of ui design search results. our experiments confirm the superior performance of our search engine over existing image-similarity- or component-matching-based methods and demonstrate the usefulness of our search engine in real-world ui design tasks.
2021-03-11 20:15:08.000000000
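A small sketch of the retrieval step implied by the abstract above: once an autoencoder has been trained, search reduces to nearest-neighbour lookup over encoder embeddings. The encoder itself is omitted here; the embeddings are random stand-ins, and the dimensions are hypothetical.

```python
import numpy as np

def cosine_search(query_vec, design_vecs, top_k=3):
    """Return indices of the designs whose embeddings are closest
    (by cosine similarity) to the query wireframe's embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = design_vecs / np.linalg.norm(design_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:top_k]

rng = np.random.default_rng(1)
database = rng.normal(size=(1000, 64))   # stand-in for encoder(UI screenshots)
query = rng.normal(size=64)              # stand-in for encoder(sketched wireframe)
print(cosine_search(query, database))
```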
the development process of web information systems has nowadays improved a lot regarding effectiveness and tool support, but it still contains many redundant steps for similar tasks. to overcome this, we use a model-driven approach to specify a web information system in an agile way and generate a full-fledged and runnable application from a set of models. the covered aspects of the system comprise the data structure, the page structure including views on data, page flow and workflow within the system, as well as the overall application structure and user rights management. appropriate tooling allows transforming these models into complete systems and thus gives us the opportunity for a lightweight development process based on models. in this paper, we describe how we approach the page-flow and workflow aspect by using activity diagrams as part of the agile modeling approach montiwis. we give an overview of the defined syntax, describe the supported forms of action contents, and finally explain how the behavior is realized in the generated application.
2014-08-25 09:11:11.000000000
the ease of using a large language model (llm) to answer a wide variety of queries and their high availability have resulted in llms being integrated into various applications. llm-based recommenders are now routinely used by students as well as professional software programmers for code generation and testing. though llm-based technology has proven useful, its unethical and unattributed use by students and professionals is a growing cause of concern. as such, there is a need for tools and technologies that may assist teachers and other evaluators in identifying whether any portion of a source code is llm-generated. in this paper, we propose a neural network-based tool that instructors can use to determine the original effort (and the llm's contribution) put in by students in writing source code. our tool is motivated by minimum description length measures like kolmogorov complexity. our initial experiments with moderate-sized source codes (up to 500 lines of code) have shown promising results that we report in this paper.
2023-07-08 15:37:48.000000000
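A minimal, hypothetical illustration of the minimum-description-length intuition mentioned above. The paper uses a neural network; this sketch only approximates description length with zlib compression: code that compresses poorly relative to its raw size is less redundant, loosely suggesting more original effort.

```python
import zlib

def description_length_ratio(source_code: str) -> float:
    """Approximate Kolmogorov complexity with a compressor: ratio of
    compressed size to raw size (higher = less redundant). Purely
    illustrative, not the paper's neural measure."""
    raw = source_code.encode("utf-8")
    return len(zlib.compress(raw, 9)) / max(len(raw), 1)

boilerplate = "print('hello')\n" * 50            # highly repetitive snippet
handwritten = "def gcd(a, b):\n    while b:\n        a, b = b, a % b\n    return a\n"
print(round(description_length_ratio(boilerplate), 3),
      round(description_length_ratio(handwritten), 3))
```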
safety critical java (scj) is a profile of the real-time specification for java that brings to the safety-critical industry the possibility of using java. scj defines three compliance levels: level 0, level 1 and level 2. the scj specification is clear on what constitutes a level 2 application in terms of its use of the defined api, but not on the occasions on which it should be used. this paper broadly classifies the features that are only available at level 2 into three groups: nested mission sequencers, managed threads, and global scheduling across multiple processors. we explore the first two groups to elicit the programming requirements that they support. we identify several areas where the scj specification needs modifications to support these requirements fully; these include: support for terminating managed threads, the ability to set a deadline on the transition between missions, and augmentation of the mission sequencer concept to support composability of timing constraints. we also propose simplifications to the termination protocol of missions and their mission sequencers. to illustrate the benefit of our changes, we present excerpts from a formal model of scj level 2 written in circus, a state-rich process algebra for refinement.
2018-05-23 13:37:31.000000000
in previous work we have described how refinements can be checked using a temporal logic based model-checker, and how we have built a model-checker for z by providing a translation of z into the sal input language. in this paper we draw these two strands of work together and discuss how we have implemented refinement checking in our z2sal toolset. the net effect of this work is that the sal toolset can be used to check refinements between z specifications supplied as input files written in the latex mark-up. two examples are used to illustrate the approach and compare it with a manual translation and refinement check.
2011-06-21 05:24:01.000000000
in order to optimize the usage of testing efforts and to assess risks of software-based systems, risk-based testing uses risk (re-)assessments to steer all phases in a test process. several risk-based testing approaches have been proposed in academia and/or applied in industry, so that the determination of principal concepts and methods in risk-based testing is needed to enable a comparison of the weaknesses and strengths of different risk-based testing approaches. in this chapter we provide an (updated) taxonomy of risk-based testing aligned with risk considerations in all phases of a test process. it consists of three top-level classes, i.e., contextual setup, risk assessment, and risk-based test strategy. this taxonomy provides a framework to understand, categorize, assess and compare risk-based testing approaches to support their selection and tailoring for specific purposes. furthermore, we position four recent risk-based testing approaches into the taxonomy in order to demonstrate its application and alignment with available risk-based testing approaches.
2018-01-19 02:21:57.000000000
the notion of quiescence - the absence of outputs - is vital in both behavioural modelling and testing theory. although the need for quiescence was already recognised in the 90s, it has only been treated as a second-class citizen thus far. this paper moves quiescence into the foreground and introduces the notion of quiescent transition systems (qtss): an extension of regular input-output transition systems (iotss) in which quiescence is represented explicitly, via quiescent transitions. four carefully crafted rules on the use of quiescent transitions ensure that our qtss naturally capture quiescent behaviour. we present the building blocks for a comprehensive theory on qtss supporting parallel composition, action hiding and determinisation. in particular, we prove that these operations preserve all the aforementioned rules. additionally, we provide a way to transform existing iotss into qtss, allowing even iotss as input that already contain some quiescent transitions. as an important application, we show how our qts framework simplifies the fundamental model-based testing theory formalised around ioco.
2012-02-28 05:33:24.000000000
how to apply automated verification technology such as model checking and static program analysis to millions of lines of embedded c/c++ code? how to package this technology in a way that it can be used by software developers and engineers who might have no background in formal verification? and how to convince business managers to actually pay for such software? this work addresses a number of those questions. based on our own experience of developing and distributing the goanna source code analyzer for detecting software bugs and security vulnerabilities in c/c++ code, we explain the underlying technology of model checking, static analysis and smt solving, and the steps involved in creating industrial-proof tools.
2012-12-28 17:47:10.000000000
we demonstrate the first recurrent neural network architecture for learning signal temporal logic formulas, and present the first systematic comparison of formula inference methods. legacy systems embed much expert knowledge which is not explicitly formalized. there is great interest in learning formal specifications that characterize the ideal behavior of such systems -- that is, formulas in temporal logic that are satisfied by the system's output signals. such specifications can be used to better understand the system's behavior and improve design of its next iteration. previous inference methods either assumed certain formula templates, or did a heuristic enumeration of all possible templates. this work proposes a neural network architecture that infers the formula structure via gradient descent, eliminating the need for imposing any specific templates. it combines learning of formula structure and parameters in one optimization. through systematic comparison, we demonstrate that this method achieves similar or better mis-classification rates (mcr) than enumerative and lattice methods. we also observe that different formulas can achieve similar mcr, empirically demonstrating the under-determinism of the problem of temporal logic inference.
2022-08-09 21:33:19.000000000
since the birth of web service composition, minimizing the number of web services in the resulting composition while satisfying the user request has been a significant line of research. with the increase in the number of services released across the internet, efficient algorithms for this problem are urgently needed. in this paper we present an efficient mechanism to solve the problem of web service composition. for a given request, a service dependency graph is first generated from the relevant services picked from an external repository. then, each search step on the graph is transformed into a dynamic knapsack problem by mapping services to items whose volume and cost are changeable, after which a knapsack-variant algorithm is applied to solve each problem after transformation. once the last search step is completed, the minimal composition that satisfies the request can be obtained. experiments on eight public datasets proposed for the web service challenge 2008 show that the proposed mechanism outperforms state-of-the-art ones by generating solutions containing the same or a smaller number of services with much higher efficiency.
2018-01-22 11:03:43.000000000
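The knapsack-variant search above is involved; below is a much-simplified greedy sketch of the general service-composition-as-graph-search idea it builds on (grow the set of known concepts until the requested outputs are produced). The service definitions are hypothetical, and the result is not guaranteed to be minimal, unlike the paper's algorithm.

```python
def compose(services, provided, wanted):
    """Greedy sketch: repeatedly add the invocable service that yields the
    most still-missing concepts, until all wanted concepts are produced.
    Returns the chosen service names, or None if the request is unsatisfiable."""
    known, chosen = set(provided), []
    while not set(wanted) <= known:
        best = max(
            (s for s in services if set(s["in"]) <= known and s["name"] not in chosen),
            key=lambda s: len(set(s["out"]) - known),
            default=None,
        )
        if best is None or not set(best["out"]) - known:
            return None
        chosen.append(best["name"])
        known |= set(best["out"])
    return chosen

services = [  # hypothetical repository
    {"name": "geocode", "in": ["address"], "out": ["lat", "lon"]},
    {"name": "weather", "in": ["lat", "lon"], "out": ["forecast"]},
]
print(compose(services, provided=["address"], wanted=["forecast"]))  # ['geocode', 'weather']
```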
this research, undertaken in highly structured software-intensive organizations, outlines challenges associated with the adoption of agile, lean and devops practices and principles. data were collected via a series of thirty (30) interviews with practitioners from the emea region (czech republic, estonia, italy, georgia, greece, the netherlands, saudi arabia, south africa, uae, uk), working in nine (9) different industry domains and ten (10) different countries. a set of agile, lean and devops practices and principles that organizations choose to include in their devops adoption journeys was identified. the most frequently adopted structured service management practices, contributing to devops practice adoption success, indicate that those with software development and operation roles in devops-oriented organizations benefit from the existence of highly structured service management approaches such as itil.
2020-08-21 01:21:59.000000000
because spreadsheets have a large and growing importance in real-world work, their contents need to be controlled and validated. generally spreadsheets have been difficult to verify, since data and executable information are stored together. spreadsheet applications with multiple authors are especially difficult to verify, since controls over access are difficult to enforce. facing similar problems, traditional software engineering has developed numerous tools and methodologies to control, verify and audit large applications with multiple developers. we present some tools we have developed to enable 1) the audit of selected, filtered, or all changes in a spreadsheet, that is, when a cell was changed, its original and new contents and who made the change, and 2) control of access to the spreadsheet file(s) so that auditing is trustworthy. our tools apply to openoffice.org calc spreadsheets, which can generally be exchanged with microsoft excel.
2008-07-20 17:28:25.000000000
large language models have been successfully adopted in software engineering, especially in code generation. updating these models with new knowledge is very expensive, yet it is often required to fully realize their value. in this paper, we propose a novel and effective model editing approach, \textsc{ment}, to patch llms in coding tasks. based on the mechanism of generative llms, \textsc{ment} enables model editing in next-token predictions, and further supports common coding tasks. \textsc{ment} is effective, efficient, and reliable: it can correct a neural model by patching 1 or 2 neurons. as pioneering work on neuron-level model editing of generative models, we formalize the editing process and introduce the involved concepts. besides, we also introduce new measures to evaluate its generalization ability, and build a benchmark for further study. our approach is evaluated on three coding tasks, including api-seq recommendation, line-level code generation, and pseudocode-to-code translation. it outperforms the state-of-the-art by a significant margin on both effectiveness and efficiency measures. in addition, we demonstrate the usage of \textsc{ment} for llm reasoning in software engineering. by editing the llm knowledge with \textsc{ment}, the directly or indirectly dependent behaviors in the chain-of-thought change accordingly and automatically.
2023-12-08 02:58:26.000000000
agent-based models play an important role in simulating complex emergent phenomena and supporting critical decisions. in this context, a software fault may result in poorly informed decisions that lead to disastrous consequences. the ability to rigorously test these models is therefore essential. in this systematic literature review, we answer five research questions related to the key aspects of test case generation in agent-based models: what are the information artifacts used to generate tests? how are these tests generated? how is a verdict assigned to a generated test? how is the adequacy of a generated test suite measured? what level of abstraction of an agent-based model is targeted by a generated test? our results show that whilst the majority of techniques are effective for testing functional requirements at the agent and integration levels of abstraction, there are comparatively few techniques capable of testing society-level behaviour. additionally, we identify a need for more thorough evaluation using realistic case studies that feature challenging properties associated with a typical agent-based model.
2021-03-12 10:51:18.000000000
automating hardware design could obviate a significant amount of human error from the engineering process and lead to fewer errors. verilog is a popular hardware description language to model and design digital systems, thus generating verilog code is a critical first step. emerging large language models (llms) are able to write high-quality code in other programming languages. in this paper, we characterize the ability of llms to generate useful verilog. for this, we fine-tune pre-trained llms on verilog datasets collected from github and verilog textbooks. we construct an evaluation framework comprising test-benches for functional analysis and a flow to test the syntax of verilog code generated in response to problems of varying difficulty. our findings show that across our problem scenarios, the fine-tuning results in llms more capable of producing syntactically correct code (25.9% overall). further, when analyzing functional correctness, a fine-tuned open-source codegen llm can outperform the state-of-the-art commercial codex llm (6.5% overall). training/evaluation scripts and llm checkpoints are available: [LINK].
2022-12-20 22:13:27.000000000
modern highly automated and autonomous traffic systems and subsystems require new approaches to test their functional safety in the context of validation and verification. one approach that has taken a leading role in current research is scenario-based testing. for various reasons, simulation is considered to be the most practicable solution for a wide range of test scenarios. however, this is where many existing simulation systems in research reach their limits. in order to be able to integrate the widest possible range of systems to be tested into the simulation, the use of co-simulation has proven to be particularly useful. in this work, the high level architecture defined in the ieee 1516-2010 standard is specifically addressed and a concept is developed that establishes the foundation for the feasible use of scenario-based distributed co-simulation on its basis. the main challenge identified and addressed is the resolution of the double-sided dependency between scenario and simulation models. the solution was to fully automate the generation and instantiation of the simulation environment on the basis of a scenario instance. finally, the developed concept was implemented as a prototype and the resulting process for its use is presented here using an example scenario. based on the experience gained during the creation of the concept and the prototype, the next steps for future work are outlined in conclusion.
2022-08-11 07:26:18.000000000
one of the great challenges the information society faces is dealing with the huge amount of information generated and handled daily on the internet. today, progress in big data proposals attempts to solve this problem, but there are certain limitations to information search and retrieval, due basically to the large volumes handled, the heterogeneity of the information, and its dispersion among a multitude of sources. in this article, a formal framework is defined to facilitate the design and development of an environmental management information system, which works with a heterogeneous and large amount of data. nevertheless, this framework can be applied to other information systems that work with big data, because it does not depend on the type of data and can be utilized in other domains. the framework is based on an ontological web-trading model (ontotrader), which follows model-driven engineering and ontology-driven engineering guidelines to separate the system architecture from its implementation. the proposal is accompanied by a case study, soleres-krs, an environmental knowledge representation system designed and developed using software agents and multi-agent systems.
2022-06-20 17:05:00.000000000
purpose - the research aims to show the relevance of company-client-sponsored student projects in the university of asia and the pacific information technology (ua&p it) capstone program through the use of an agile methodology with a scrum approach. method - the modified program is employed on two batches, with content analysis and survey results as benchmarks. results - surveys at the end of the sprints for both clients and students revealed that the length of the sprint was a critical factor in the development of the information system, and that students learned from addressing additional challenges such as academic load, team pressure and communication issues. conclusion - overall results showed that clients were impressed and keen to adopt the students' work. recommendations - maintainability aspects of the research can be analyzed in future studies. increasing the sample size with additional batches could lead to the discovery of additional factors not previously seen. research implications - the research could help improve other capstone programs while improving communication with company clients.
2019-02-01 17:49:47.000000000
serverless computing is an emerging cloud computing paradigm, being adopted to develop a wide range of software applications. it allows developers to focus on the application logic in the granularity of function, thereby freeing developers from tedious and error-prone infrastructure management. meanwhile, its unique characteristic poses new challenges to the development and deployment of serverless-based applications. to tackle these challenges, enormous research efforts have been devoted. this paper provides a comprehensive literature review to characterize the current research state of serverless computing. specifically, this paper covers 164 papers on 17 research directions of serverless computing, including performance optimization, programming framework, application migration, multi-cloud development, testing and debugging, etc. it also derives research trends, focus, and commonly-used platforms for serverless computing, as well as promising research opportunities.
2022-06-23 11:02:18.000000000
business processes may face a variety of problems due to the number of tasks that need to be handled within short time periods, resources' workload and working patterns, as well as bottlenecks. these problems may arise locally and be short-lived, but as the process is forced to operate outside its standard capacity, the effect on the underlying process instances can be costly. we use the term high-level behavior to cover all process behavior which cannot be captured in terms of the individual process instances. whenever such behavior emerges, we call the cases which are involved in it participating cases. the natural question arises as to how the characteristics of cases relate to the high-level behavior they give rise to. in this work, we first show how to detect and correlate observations of high-level problems, as well as determine the corresponding (non-)participating cases. then we show how to assess the connection between any case-level characteristic and any given detected sequence of high-level problems. applying our method to the event data of a real loan application process revealed which specific combinations of delays, batching and busy resources at which particular parts of the process correlate with an application's duration and chance of a positive outcome.
2023-09-02 08:22:21.000000000
product quality level has become a key factor for companies' competitiveness. a lot of time and money is required to ensure and guarantee it. besides, motivated by the need for traceability, collecting production data is now commonplace in most companies. our paper aims to show that we can ensure the required quality thanks to an "on-line quality approach" and proposes a neural-network-based process to determine the optimal setting for production machines. we illustrate this with the acta-mobilier case, a high-quality lacquering company.
2013-06-11 12:28:33.000000000
national digital identity systems have become a key requirement for easy access to online public services, especially during covid-19. while many countries have adopted a national digital identity system, many are still in the process of establishing one. through a comparative analysis of the technological and legal dimensions of a few selected national digital identity solutions currently being used in different countries, we highlight the diversity of technologies and architectures and the key role of the legal framework of a given digital identity solution. we also present several key issues related to the implementation of these solutions, how to ensure state sovereignty over them, and how to strike the right balance between private-sector and public-sector needs. this position paper aims to help policy makers, software developers and concerned users understand the challenges of designing, implementing and using a national digital identity management system and establishing a legal framework for digital identity management, including personal data protection measures. the authors of this paper take a favorable position towards self-sovereign identity management systems based on blockchain technology, and we believe they are the most suitable for national digital identity systems.
2023-09-30 20:36:23.000000000
white-box test generator tools rely only on the code under test to select test inputs, and capture the implementation's output as assertions. if there is a fault in the implementation, it could get encoded in the generated tests. tool evaluations usually measure fault-detection capability using the number of such fault-encoding tests. however, these faults are only detected if the developer can recognize that the encoded behavior is faulty. we designed an exploratory study to investigate how developers perform in classifying generated white-box tests as faulty or correct. we carried out the study in a laboratory setting with 54 graduate students. the tests were generated for two open-source projects with the help of the intellitest tool. the performance of the participants was analyzed using binary classification metrics and by coding their observed activities. the results showed that participants incorrectly classified a large number of both fault-encoding and correct tests (with median misclassification rates of 33% and 25%, respectively). thus the real fault-detection capability of test generators could be much lower than typically reported, and we suggest taking this human factor into account when evaluating generated white-box tests.
2017-06-04 17:57:47.000000000
devops is a collaborative and multidisciplinary organizational effort to automate continuous delivery of new software updates while guaranteeing their correctness and reliability. the present survey investigates and discusses devops challenges from the perspective of engineers, managers, and researchers. we review the literature and develop a devops conceptual map, correlating the devops automation tools with these concepts. we then discuss their practical implications for engineers, managers, and researchers. finally, we critically explore some of the most relevant devops challenges reported by the literature.
2019-09-09 10:15:48.000000000
context: when conducting a systematic literature review (slr), researchers usually face the challenge of designing a search strategy that appropriately balances result quality and review effort. using digital library (or database) searches or snowballing alone may not be enough to achieve high-quality results. on the other hand, using both digital library searches and snowballing together may increase the overall review effort. objective: the goal of this research is to propose and evaluate hybrid search strategies that selectively combine database searches with snowballing. method: we propose four hybrid search strategies combining database searches in digital libraries with iterative, parallel, or sequential backward and forward snowballing. we simulated the strategies over three existing slrs in se that adopted both database searches and snowballing. we compared the outcome of digital library searches, snowballing, and hybrid strategies using precision, recall, and f-measure to investigate the performance of each strategy. results: our results show that, for the analyzed slrs, combining database searches from the scopus digital library with parallel or sequential snowballing achieved the most appropriate balance of precision and recall. conclusion: we put forward that, depending on the goals of the slr and the available resources, using a hybrid search strategy involving a representative digital library and parallel or sequential snowballing tends to represent an appropriate alternative to be used when searching for evidence in slrs.
2020-04-19 13:20:05.000000000
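A small sketch of how the strategy comparison described above can be scored, assuming a known gold set of relevant papers for the SLR; the paper identifiers and sets are illustrative.

```python
def precision_recall_f1(retrieved, relevant):
    """Score a search strategy's result set against the SLR's gold set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {"p1", "p2", "p3", "p4"}            # papers the SLR should find
hybrid = {"p1", "p2", "p3", "p9"}          # e.g., database search + snowballing
print(precision_recall_f1(hybrid, gold))   # (0.75, 0.75, 0.75)
```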
code review is considered a key process in the software industry for minimizing bugs and improving code quality. inspection of review process effectiveness and continuous improvement can boost development productivity, but such inspection is a time-consuming and human-bias-prone task. we propose a semi-supervised-learning-based system, reviewranker, which is aimed at assigning each code review a confidence score that is expected to reflect the quality of the review. our proposed method is trained based on simple and well-defined labels provided by developers. the labeling task requires little to no effort from the developers and has an indirect relation to the end goal (assignment of a review confidence score). reviewranker is expected to improve industry-wide code review quality inspection by reducing the human bias and effort required for such a task. the system has the potential to minimize the back-and-forth cycle existing in the development and review process. usable code and the dataset for this research can be found at: [LINK]
2023-07-07 16:13:43.000000000
the vast majority of the long tail of scientific software, the myriads of tools that implement the many analysis and visualization methods for different scientific fields, is highly specialized, purpose-built for a research project, and has to rely on community uptake and reuse for its continued development and maintenance. although uptake cannot be controlled or even guaranteed, some of the key factors that influence whether new users or developers decide to adopt an existing tool or start a new one concern how easy or difficult it is to use or enhance a tool for a purpose for which it was not originally designed. the science of software engineering has produced techniques and practices that would reduce or remove a variety of barriers to community uptake of software, but for a variety of reasons employing trained software engineers as part of the development of long-tail scientific software has proven to be challenging. as a consequence, community uptake of long-tail tools is often far more difficult than it would need to be, even though opportunities for reuse abound. we discuss likely reasons why employing software engineering in the long tail is challenging, and propose that many of those obstacles could be addressed in the form of a cross-cutting non-profit center of excellence that makes software engineering broadly accessible as a shared service, conceptually and in its effect similar to shared instrumentation.
2013-09-06 21:57:09.000000000
background: hackathons have become popular events for teams to collaborate on projects and develop software prototypes. most existing research focuses on activities during an event with limited attention to the evolution of the code brought to or created during a hackathon. aim: we aim to understand the evolution of hackathon-related code, specifically, how much hackathon teams rely on pre-existing code or how much new code they develop during a hackathon. moreover, we aim to understand if and where that code gets reused. method: we collected information about 22,183 hackathon projects from devpost -- a hackathon database -- and obtained related code (blobs), authors, and project characteristics from the world of code. we investigated if code blobs in hackathon projects were created before, during, or after an event by identifying the original blob creation date and author, and also checked if the original author was a hackathon project member. we tracked code reuse by first identifying all commits containing blobs created during an event before determining all projects that contain those commits. result: while only approximately 9.14% of the code blobs are created during hackathons, this amount is still significant considering the time and member constraints of such events. approximately a third of these code blobs get reused in other projects. conclusion: our study demonstrates to what extent pre-existing code is used and new code is created during a hackathon and how much of it is reused elsewhere afterwards. our findings help to better understand code reuse as a phenomenon and the role of hackathons in this context and can serve as a starting point for further studies in this area.
2021-03-17 16:42:07.000000000
generation of software from modeling languages such as uml and domain specific languages (dsls) has become an important paradigm in software engineering. in this contribution, we present some positions on software development in a model based, generative manner based on home grown dsls as well as the uml. this includes development of dsls as well as development of models in these languages in order to generate executable code, test cases or models in different languages. development of formal dsls contains concepts of meta-models or grammars (syntax), context conditions (static analysis and quality assurance) as well as possibilities to define the semantics of a language. the growing number and complexity of dsls is addressed by concepts for the modular and compositional development of languages and their tools. moreover, we introduce approaches to code generation and model transformation. finally, we give an overview of the relevance of dsls for various steps of software development processes.
2014-09-08 14:22:45.000000000
widespread adoption of autonomous cars will require greater confidence in their safety than is currently possible. certified control is a new safety architecture whose goal is two-fold: to achieve a very high level of safety, and to provide a framework for justifiable confidence in that safety. the key idea is a runtime monitor that acts, along with sensor hardware and low-level control and actuators, as a small trusted base, ensuring the safety of the system as a whole. unfortunately, in current systems complex perception makes the verification even of a runtime monitor challenging. unlike traditional runtime monitoring, therefore, a certified control monitor does not perform perception and analysis itself. instead, the main controller assembles evidence that the proposed action is safe into a certificate that is then checked independently by the monitor. this exploits the classic gap between the costs of finding and checking. the controller is assigned the task of finding the certificate, and can thus use the most sophisticated algorithms available (including learning-enabled software); the monitor is assigned only the task of checking, and can thus run quickly and be smaller and formally verifiable. this paper explains the key ideas of certified control and illustrates them with a certificate for lidar data and its formal verification. it shows how the architecture dramatically reduces the amount of code to be verified, providing an end-to-end safety analysis that would likely not be achievable in a traditional architecture.
2021-04-13 01:01:48.000000000
quantum computing is a new paradigm that enables several advances which are impossible using classical technology. with the rise of quantum computers, software is also expected to change so that it can better fit this new way of computing. however, although a lot of research is being conducted in the quantum computing field, studies about the differences of software and software engineering in this new context are still scarce. therefore, this article presents a systematic mapping study to provide a broad review of the particularities and characteristics of software that is developed for quantum computers. a total of 24 papers were selected using digital libraries with the objective of answering three research questions elaborated in the conduct of this research.
2021-05-27 08:09:08.000000000
large language models of code (code-llms) have recently brought tremendous advances to code completion, a fundamental feature of programming assistance and code intelligence. however, most existing works ignore the possible presence of bugs in the code context for generation, which are inevitable in software development. therefore, we introduce and study the buggy-code completion problem, inspired by the realistic scenario of real-time code suggestion where the code context contains potential bugs -- anti-patterns that can become bugs in the completed program. to systematically study the task, we introduce two datasets: one with synthetic bugs derived from semantics-altering operator changes (buggy-humaneval) and one with realistic bugs derived from user submissions to coding problems (buggy-fixeval). we find that the presence of potential bugs significantly degrades the generation performance of the high-performing code-llms. for instance, the passing rates of codegen-2b-mono on test cases of buggy-humaneval drop more than 50% given a single potential bug in the context. finally, we investigate several post-hoc methods for mitigating the adverse effect of potential bugs and find that there remains a significant gap in post-mitigation performance.
2023-06-05 15:53:42.000000000
lack of security expertise among software practitioners is a problem with many implications. first, there is a deficit of security professionals to meet current needs. additionally, even practitioners who do not plan to work in security may benefit from increased understanding of security. the goal of this paper is to aid software engineering educators in designing a comprehensive software security course by sharing an experience running a software security course for the eleventh time. through all the eleven years of running the software security course, the course objectives have been comprehensive - ranging from security testing, to secure design and coding, to security requirements to security risk management. for the first time in this eleventh year, a theme of the course assignments was to map vulnerability discovery to the security controls of the open web application security project (owasp) application security verification standard (asvs). based upon student performance on a final exploratory penetration testing project, this mapping may have increased students' depth of understanding of a wider range of security topics. the students efficiently detected 191 unique and verified vulnerabilities of 28 different common weakness enumeration (cwe) types during a three-hour period in the openmrs project, an electronic health record application in active use.
2021-03-08 11:20:44.000000000
testing is widely recognized as an important stage of the software development lifecycle. effective software testing can provide benefits such as bug finding, preventing regressions, and documentation. in terms of documentation, unit tests express a unit's intended functionality, as conceived by the developer. a test oracle, typically expressed as a condition, documents the intended behavior of a unit under a given test prefix. synthesizing a functional test oracle is a challenging problem, as it must capture the intended functionality rather than the implemented functionality. in this paper, we propose toga (a neural method for test oracle generation), a unified transformer-based neural approach to infer both exceptional and assertion test oracles based on the context of the focal method. our approach can handle units with ambiguous or missing documentation, and even units with a missing implementation. we evaluate our approach on both oracle inference accuracy and functional bug-finding. our technique improves accuracy by 33% over existing oracle inference approaches, achieving 96% overall accuracy on a held-out test dataset. furthermore, we show that when integrated with an automated test generation tool (evosuite), our approach finds 57 real-world bugs in large-scale java programs, including 30 bugs that are not found by any other automated testing method in our evaluation.
2021-09-17 16:36:40.000000000
software comes in releases. an implausible change to software is something that has never been changed in prior releases. when planning how to reduce defects, it is better to use plausible changes, i.e., changes with some precedent in prior releases. to demonstrate these points, this paper compares several defect reduction planning tools. lime is a local sensitivity analysis tool that can report the fewest changes needed to alter the classification of some code module (e.g., from "defective" to "non-defective"). timelime is a new tool, introduced in this paper, that improves lime by restricting its plans to just those attributes which change the most within a project. in this study, we compared the performance of lime, timelime, and several other defect reduction planning algorithms. the generated plans were assessed via (a) the similarity scores between the proposed code changes and the real code changes made by developers; and (b) the improvement scores seen within projects that followed the plans. for nine project trials, we found that timelime outperformed all other algorithms (in 8 out of 9 trials). hence, we strongly recommend using past releases as a source of knowledge for computing fixes for new releases (using timelime). apart from these specific results about planning defect reductions and timelime, the more general point of this paper is that our community should be more careful about using off-the-shelf ai tools without first applying se knowledge. in this case study, it was not difficult to augment a standard ai algorithm with se knowledge (that past releases are a good source of knowledge for planning defect reductions). as shown here, once that se knowledge is applied, this can result in dramatically better systems.
2020-06-11 10:59:59.000000000
runtime monitoring is one of the central tasks in providing operational decision support to running business processes, and checking on-the-fly whether they comply with constraints and rules. we study runtime monitoring of properties expressed in ltl on finite traces (ltlf) and in its extension ldlf. ldlf is a powerful logic that captures all of monadic second-order logic on finite traces, obtained by combining regular expressions and ltlf, adopting the syntax of propositional dynamic logic (pdl). interestingly, in spite of its greater expressivity, ldlf has exactly the same computational complexity as ltlf. we show that ldlf is able to capture, in the logic itself, not only the constraints to be monitored, but also the de-facto standard rv-ltl monitors. this makes it possible to declaratively capture monitoring metaconstraints, and to check them by relying on usual logical services instead of ad-hoc algorithms. this, in turn, enables flexible monitoring of constraints depending on the monitoring state of other constraints, e.g., "compensation" constraints that are only checked when others are detected to be violated. in addition, we devise a direct translation of ldlf formulas into nondeterministic automata, avoiding the detour via buechi automata or alternating automata, and we use it to implement a monitoring plug-in for the prom suite.
2014-04-30 16:17:16.000000000
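The paper above encodes RV-LTL monitors inside LDLf itself and compiles formulas to automata; the sketch below only illustrates the four RV-LTL-style verdicts for two simple LTLf patterns, written as hand-coded monitors rather than the paper's automata construction. Constraint names and trace events are illustrative.

```python
# Four RV-LTL-style verdicts for monitoring finite, still-growing traces.
PERM_SAT, PERM_VIOL = "permanently satisfied", "permanently violated"
PRES_SAT, PRES_VIOL = "presumably satisfied", "presumably violated"

class ExistenceMonitor:
    """Monitors 'eventually p' (F p): presumably violated until p occurs,
    permanently satisfied afterwards."""
    def __init__(self):
        self.verdict = PRES_VIOL
    def step(self, event):
        if event == "p":
            self.verdict = PERM_SAT
        return self.verdict

class AbsenceMonitor:
    """Monitors 'globally not p' (G !p): presumably satisfied until p occurs,
    permanently violated afterwards."""
    def __init__(self):
        self.verdict = PRES_SAT
    def step(self, event):
        if event == "p":
            self.verdict = PERM_VIOL
        return self.verdict

trace = ["a", "b", "p", "c"]
existence, absence = ExistenceMonitor(), AbsenceMonitor()
for e in trace:
    print(e, "|", existence.step(e), "|", absence.step(e))
```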
software vulnerabilities in access control models can represent a serious threat in a system. in fact, owasp lists broken access control as number 5 in severity among the top 10 vulnerabilities. in this paper, we study the permission model of an emerging smart-home platform, smartthings, and explore an approach that detects privilege escalation in its permission model. our approach is based on model-driven engineering (mde) in addition to static analysis. this approach allows for better coverage of privilege escalation detection than static analysis alone, and takes advantage of analyzing free-form text that carries extra permission details. our experimental results demonstrate a very high accuracy for detecting over-privilege vulnerabilities in iot applications.
2022-05-23 03:49:48.000000000
"evolution behaves like a tinkerer" (francois jacob, science, 1977). software systems provide a unique opportunity to understand biological processes using concepts from network theory. the debian gnu/linux operating system allows us to explore the evolution of a complex network in a novel way. the modular design detected during its growth is based on the reuse of existing code in order to minimize costs during programming. the increase of modularity experienced by the system over time has not counterbalanced the increase in incompatibilities between software packages within modules. this negative effect is far from being a failure of design. a random process of package installation shows that the higher the modularity the larger the fraction of packages working properly in a local computer. the decrease in the relative number of conflicts between packages from different modules avoids a failure in the functionality of one package spreading throughout the entire system. some potential analogies with the evolutionary and ecological processes determining the structure of ecological networks of interacting species are discussed.
2011-11-22 12:07:40.000000000
we present a full-program induction technique for proving (a sub-class of) quantified as well as quantifier-free properties of programs manipulating arrays of parametric size n. instead of inducting over individual loops, our technique inducts over the entire program (possibly containing multiple loops) directly via the program parameter n. significantly, this does not require generation or use of loop-specific invariants. we have developed a prototype tool vajra to assess the efficacy of our technique. we demonstrate the performance of vajra vis-a-vis several state-of-the-art tools on a set of array manipulating benchmarks.
2020-02-18 17:53:55.000000000
recently, there have been original attempts to use the concept of "code similarity" in program repair, suggesting that similarity analysis has an important role in the repair process. however, there is no dedicated work that characterizes and quantifies the role of similarity in redundancy-based program repair, where the patch is composed from source code taken from somewhere else. this is where our paper makes a major contribution: we perform a deep and systematic analysis of the role of code similarity during the exploration of the repair search space. we define and set up a large-scale experiment based on four code similarity metrics that capture different similarities: character, token, semantic and structure similarity. overall, we have computed 56 million similarity scores over 15 million source code components. we show that with similarity analysis, at least 90% of the search space can be ignored to find the correct patch. code similarity is capable of ranking the correct repair ingredient first in 4-33% of the considered cases.
2018-11-08 19:31:33.000000000
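A minimal example of one of the four metric families mentioned above: token-level similarity, computed here as Jaccard overlap of Python token streams. The paper's exact metrics, languages, and tooling may differ; this is only a sketch of the idea of ranking repair ingredients by similarity to the buggy code.

```python
import io
import tokenize

def token_set(code: str) -> set:
    """Lexical token strings of a Python snippet (comments and layout tokens dropped)."""
    toks = tokenize.generate_tokens(io.StringIO(code).readline)
    skip = (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.ENDMARKER, tokenize.INDENT, tokenize.DEDENT)
    return {t.string for t in toks if t.type not in skip}

def jaccard(a: str, b: str) -> float:
    ta, tb = token_set(a), token_set(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

buggy = "total = 0\nfor x in xs:\n    total -= x\n"
donor = "total = 0\nfor x in xs:\n    total += x\n"
print(round(jaccard(buggy, donor), 2))   # high overlap -> promising repair ingredient
```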
self-adaptive software systems continuously adapt in response to internal and external changes in their execution environment, captured as contexts. the context-oriented programming (cop) paradigm posits a technique for the development of self-adaptive systems, capturing their main characteristics with specialized programming language constructs. cop adaptations are specified as independent modules composed in and out of the base system as contexts are activated and deactivated in response to sensed circumstances from the surrounding environment. however, the definitions of adaptations, their contexts, and associated specialized behavior need to be specified at design time. in complex cyber-physical systems (cps) this is intractable due to new, unpredicted operating conditions. we propose auto-cop, a new technique to enable generation of adaptations at run time. auto-cop uses reinforcement learning (rl) options to build action sequences, based on previous instances of the system execution. options are explored in interaction with the environment, and the most suitable options for each context are used to generate adaptations exploiting cop. to validate auto-cop, we present two case studies exhibiting different system characteristics and application domains: a driving assistant and a robot delivery system. we present examples of auto-cop code generated at run time, to illustrate the types of circumstances (contexts) requiring adaptation, and the corresponding generated adaptations for each context. we confirm that the generated adaptations exhibit correct system behavior measured by domain-specific performance metrics, while reducing the number of required execution/actuation steps by a factor of two, showing that the adaptations are regularly selected by the running system, as adaptive behavior is more appropriate than the execution of primitive actions.
2021-03-08 21:28:04.000000000
with the growth of the open-source data science community, both the number of data science libraries and the number of versions for the same library are increasing rapidly. to match the evolving apis from those libraries, open-source organizations often have to exert manual effort to refactor the apis used in the code base. moreover, due to the abundance of similar open-source libraries, data scientists working on a certain application may have an abundance of libraries to choose, maintain and migrate between. the manual refactoring between apis is a tedious and error-prone task. although recent research efforts were made on performing automatic api refactoring between different languages, previous work relies on statistical learning with collected pairwise training data for the api matching and migration. using large statistical data for refactoring is not ideal because such training data will not be available for a new library or a new version of the same library. we introduce synthesis for open-source api refactoring (soar), a novel technique that requires no training data to achieve api migration and refactoring. soar relies only on the documentation that is readily available at the release of the library to learn api representations and mapping between libraries. using program synthesis, soar automatically computes the correct configuration of arguments to the apis and any glue code required to invoke those apis. soar also uses the interpreter's error messages when running refactored code to generate logical constraints that can be used to prune the search space. our empirical evaluation shows that soar can successfully refactor 80% of our benchmarks corresponding to deep learning models with up to 44 layers with an average run time of 97.23 seconds, and 90% of the data wrangling benchmarks with an average run time of 17.31 seconds.
2021-02-12 11:49:18.000000000
continuous testing during development is a well-established technique for software-quality assurance. continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. we have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. the precision of an abstract domain specifies the level of abstraction that the analysis works on. precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. in particular, precisions are small and thus easy to store; they are easy to process and have a large impact on resource consumption. we experimentally show the impact of precision reuse on industrial verification problems, namely, 59 device drivers with 1119 revisions from the linux kernel.
2013-05-28 09:43:03.000000000
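a minimal sketch of the reuse idea described above, assuming a precision is stored as a set of predicate strings per program location and persisted between verification runs; the file name, data layout, and the placeholder verify function are assumptions for illustration only.

```python
import json
from pathlib import Path

PRECISION_FILE = Path("precisions.json")  # hypothetical on-disk store

def load_precisions():
    """load the precisions mined in the previous run (location -> set of predicates)."""
    if PRECISION_FILE.exists():
        stored = json.loads(PRECISION_FILE.read_text())
        return {loc: set(preds) for loc, preds in stored.items()}
    return {}

def save_precisions(precisions):
    PRECISION_FILE.write_text(
        json.dumps({loc: sorted(preds) for loc, preds in precisions.items()}))

def verify(revision, initial_precisions):
    """placeholder for an abstraction-refinement run seeded with the given precisions.

    starting from a non-empty precision typically saves refinement iterations
    on the next revision; here we simply pass the precisions through.
    """
    refined = {loc: set(preds) for loc, preds in initial_precisions.items()}
    return "safe", refined

verdict, precisions = verify("rev-2", load_precisions())
save_precisions(precisions)
```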
scientific models hold the key to better understanding and predicting the behavior of complex systems. the most comprehensive manifestation of a scientific model, including crucial assumptions and parameters that underpin its usability, is usually embedded in associated source code and documentation, which may employ a variety of (potentially outdated) programming practices and languages. domain experts cannot gain a complete understanding of the implementation of a scientific model if they are not familiar with the code. furthermore, rapid research and development iterations make it challenging to keep up with constantly evolving scientific model codebases. to address these challenges, we develop a system for the automated creation and human-assisted curation of a knowledge graph of computable scientific models that analyzes a model's code in the context of any associated inline comments and external documentation. our system uses knowledge-driven as well as data-driven approaches to identify and extract relevant concepts from code and equations from textual documents to semantically annotate models using domain terminology. these models are converted into executable python functions and then can further be composed into complex workflows to answer different forms of domain-driven questions. we present experimental results obtained using a dataset of code and associated text derived from nasa's hypersonic aerodynamics website.
2022-02-26 02:00:35.000000000
this study employs a simulation-based approach, adapting the waterfall model, to provide estimates for software project and individual phase completion times. additionally, it pinpoints potential efficiency issues stemming from suboptimal resource levels. we implement our software development lifecycle simulation using simpy, a python discrete-event simulation framework. our model is executed within the context of a software house on 100 projects of varying sizes, examining two scenarios. the first provides insight based on an initial set of resources, which reveals the presence of resource bottlenecks, particularly a shortage of programmers for the implementation phase. the second scenario uses a level of resources that would achieve zero wait time, identified using a stepwise algorithm. the findings illustrate the advantage of using simulations as a safe and effective way to experiment and plan for software development projects. such simulations allow those managing software development projects to make accurate, evidence-based projections as to phase and project completion times, as well as to explore the interplay with resources.
2023-08-07 05:48:53.000000000
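as a concrete illustration of the modelling style described above, the following is a minimal simpy sketch of a single bottlenecked implementation phase with a shared pool of programmers; the resource capacity, task duration distribution, and simultaneous project arrivals are illustrative assumptions, not the paper's calibration.

```python
import random
import simpy

random.seed(1)

def project(env, name, programmers, results):
    """a project requests a programmer for its implementation phase, then finishes."""
    arrival = env.now
    with programmers.request() as req:
        yield req                                      # wait for a free programmer
        yield env.timeout(random.expovariate(1 / 20))  # implementation effort (days)
    results.append(env.now - arrival)                  # total time incl. waiting

env = simpy.Environment()
programmers = simpy.Resource(env, capacity=3)          # illustrative resource level
results = []
for i in range(100):
    env.process(project(env, f"p{i}", programmers, results))
env.run()

print("mean completion time:", sum(results) / len(results))
```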
bioinformatics software plays a very important role in making critical decisions within many areas including medicine and health care. however, most of the research is directed towards developing tools, and little time and effort is spent on testing the software to assure its quality. in testing, a test oracle is used to determine whether a test is passed or failed during testing, and unfortunately, for much of bioinformatics software, the exact expected outcomes are not well defined. thus, the main challenge associated with conducting systematic testing on bioinformatics software is the oracle problem. metamorphic testing (mt) is a technique used to test programs that face the oracle problem. mt uses metamorphic relations (mrs), which specify how the output should change in response to a specific change made to the input, to determine whether a test has passed or failed. in this work, we use mt to test lingpipe, a tool for processing text using computational linguistics, often used in bioinformatics for bio-entity recognition from biomedical literature. first, we identify a set of mrs for testing any bio-entity recognition program. then we develop a set of test cases that can be used to test lingpipe's bio-entity recognition functionality using these mrs. to evaluate the effectiveness of this testing process, we automatically generate a set of faulty versions of lingpipe. according to our analysis of the experimental results, we observe that our mrs can detect the majority of these faulty versions, which shows the utility of this testing technique for quality assurance of bioinformatics software.
2018-02-19 02:05:54.000000000
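the following is a minimal sketch of one plausible mr for a bio-entity recognizer, assuming a recognize(text) function that returns the set of recognized entities; both the stand-in recognizer and the specific relation are illustrative assumptions, not lingpipe's api or the paper's exact mrs.

```python
def recognize(text):
    """stand-in for a bio-entity recognizer; returns the set of recognized entities."""
    known = {"BRCA1", "TP53", "aspirin"}
    return {tok.strip(".,") for tok in text.split() if tok.strip(".,") in known}

def mr_append_irrelevant_sentence(source_text, irrelevant="The weather was sunny."):
    """mr: appending an entity-free sentence must not change the recognized entities."""
    source_out = recognize(source_text)
    follow_up_out = recognize(source_text + " " + irrelevant)
    return source_out == follow_up_out   # False signals a violation (failed test)

assert mr_append_irrelevant_sentence("BRCA1 mutations respond to aspirin.")
```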
contract conformance is hard to determine statically, prior to the deployment of large pieces of software. a scalable alternative is to monitor for contract violations post-deployment: once a violation is detected, the trace characterising the offending execution is analysed to pinpoint the source of the offence. a major drawback with this technique is that, often, contract violations take time to surface, resulting in long traces that are hard to analyse. this paper proposes a methodology together with an accompanying tool for simplifying traces and assisting contract-violation debugging.
2012-09-12 08:09:51.000000000
crowdtesting is effective especially when it comes to the feedback on gui systems, or subjective opinions about features. despite this, we find crowdtesting reports are highly replicated, i.e., 82% of them are replicates of others. hence automatically detecting replicate reports could help reduce triaging efforts. most of the existing approaches mainly adopted textual information for replicate detection, and suffered from low accuracy because of the expression gap. our observation of real industrial crowdtesting data shows that crowdtesting reports of gui systems are typically accompanied by images, i.e., the screenshots of the app. we assume the screenshot to be valuable for replicate crowdtesting report detection because it reflects the real scenario of the failure and is not affected by the variety of natural languages. in this work, we propose a replicate detection approach, tsdetector, which combines information from the screenshots and the textual descriptions to detect replicate crowdtesting reports. we extract four types of features to characterize the screenshots and the textual descriptions, and design an algorithm to detect replicates based on four similarity scores derived from the four different features respectively. we investigate the effectiveness and advantage of tsdetector on 15 commercial projects with 4,172 reports from one of the largest crowdtesting platforms in china. results show that tsdetector can outperform existing state-of-the-art approaches significantly. in addition, we also evaluate its usefulness using real-world case studies. the feedback from real-world testers demonstrates its practical value.
2018-05-03 14:52:18.000000000
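a minimal sketch of the score-combination step described above, assuming four pre-computed similarity scores in [0, 1] per report pair (two screenshot-based, two text-based); the weights and threshold are hypothetical, not tsdetector's.

```python
def replicate_score(screenshot_sims, text_sims, weights=(0.3, 0.2, 0.3, 0.2)):
    """combine four similarity scores (two screenshot-based, two text-based) into one."""
    sims = (*screenshot_sims, *text_sims)
    return sum(w * s for w, s in zip(weights, sims))

def detect_replicates(pairs, threshold=0.75):
    """flag report pairs whose combined similarity exceeds the threshold."""
    return [(a, b) for a, b, sc_sims, tx_sims in pairs
            if replicate_score(sc_sims, tx_sims) >= threshold]

pairs = [("r1", "r2", (0.9, 0.8), (0.7, 0.9)),   # likely replicates
         ("r1", "r3", (0.2, 0.1), (0.4, 0.3))]   # likely distinct
print(detect_replicates(pairs))
```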
we present montre, a monitoring tool to search patterns specified by timed regular expressions over real-time behaviors. we use timed regular expressions as a compact, natural, and highly-expressive pattern specification language for monitoring applications involving quantitative timing constraints. our tool essentially incorporates online and offline timed pattern matching algorithms so it is capable of finding all occurrences of a given pattern over both logged and streaming behaviors. furthermore, montre is designed to work with other tools via standard interfaces to perform more complex and versatile tasks for analyzing and reasoning about cyber-physical systems. as the first of its kind, we believe montre will enable a new line of inquiries and techniques in these fields.
2016-05-12 16:43:16.000000000
this volume contains the papers presented at the 19th international overture workshop, which was held in a hybrid format, both online and physically in aarhus, denmark, on 22nd october 2021. this event was the latest in a series of workshops around the vienna development method (vdm), the open-source project overture, and related tools and formalisms. vdm is one of the longest established formal methods for systems development. a lively community of researchers and practitioners has grown up in academia and industry around the modelling languages (vdm-sl, vdm++, vdm-rt, cml) and tools (vdmtools, overture, vdm vscode extension, crescendo, symphony, the into-cps chain, and viennatalk). together, these provide a platform for work on modelling and analysis technology that includes static and dynamic analysis, test generation, execution support, and model checking. this workshop provided updates on the emerging technology of vdm/overture, including collaboration infrastructure, collaborative modelling and co-simulation for cyber-physical systems.
2021-10-16 21:19:59.000000000
the simulation of tactile sensation using haptic devices is increasingly investigated in conjunction with simulation and training. in this paper we explore the most popular haptic frameworks and apis. we provide a comprehensive review and comparison of their features and capabilities, from the perspective of developing a haptic simulator for medical training purposes. in order to compare the studied frameworks and apis, we identified and applied a set of 11 criteria and obtained a classification of the platforms from the perspective of our project. according to this classification, we used the best-ranked platform to develop a visual-haptic prototype for liver diagnostics.
2019-01-29 21:29:05.000000000
as program comprehension is a vast research area, it is necessary to get an overview of its rising and falling trends. we performed an n-gram frequency analysis on titles, abstracts and keywords of 1885 articles about program comprehension from the years 2000-2014. according to this analysis, the fastest-rising trends are feature location and open source systems, while the most sharply declining ones are program slicing and legacy systems.
2018-07-23 09:24:18.000000000
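a minimal sketch of the kind of n-gram counting involved, applied to a few made-up titles; the tokenization is deliberately naive and the data is illustrative.

```python
from collections import Counter

def ngrams(text, n=2):
    """yield the n-grams of a whitespace-tokenized, lowercased text."""
    tokens = [t.lower() for t in text.split()]
    return zip(*(tokens[i:] for i in range(n)))

titles = [
    "feature location in open source systems",
    "program slicing for legacy systems maintenance",
    "feature location using information retrieval",
]

counts = Counter(bg for title in titles for bg in ngrams(title, n=2))
for bigram, freq in counts.most_common(3):
    print(" ".join(bigram), freq)
```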
docker is a tool for lightweight os-level virtualization. docker images are created by performing a build, controlled by a source-level artifact called a dockerfile. we studied dockerfiles on github, and -- to our great surprise -- found that over a quarter of the examined dockerfiles failed to build (and thus to produce images). to address this problem, we propose shipwright, a human-in-the-loop system for finding repairs to broken dockerfiles. shipwright uses a modified version of the bert language model to embed build logs and to cluster broken dockerfiles. using these clusters and a search-based procedure, we were able to design 13 rules for making automated repairs to dockerfiles. with the aid of shipwright, we submitted 45 pull requests (with a 42.2% acceptance rate) to github projects with broken dockerfiles. furthermore, in a "time-travel" analysis of broken dockerfiles that were later fixed, we found that shipwright proposed repairs that were equivalent to human-authored patches in 22.77% of the cases we studied. finally, we compared our work with recent, state-of-the-art, static dockerfile analyses, and found that, while static tools detected possible build-failure-inducing issues in 20.6--33.8% of the files we examined, shipwright was able to detect possible issues in 73.25% of the files and, additionally, provide automated repairs for 18.9% of the files.
2021-03-03 08:39:53.000000000
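the abstract above embeds build logs with a modified bert model before clustering; the sketch below substitutes a plain tf-idf representation and scikit-learn's k-means purely to illustrate the clustering step on made-up logs, so it is not shipwright's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# made-up build logs: two distinct failure causes
build_logs = [
    "E: Unable to locate package libfoo-dev",
    "E: Unable to locate package libbar-dev",
    "npm ERR! 404 Not Found: left-pad@1.0.0",
    "npm ERR! 404 Not Found: my-lib@2.1.3",
]

X = TfidfVectorizer().fit_transform(build_logs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # logs sharing a failure cause should share a cluster label
```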
we report on our experiences in an e-government project for supporting the automatic generation of e-forms for services provided by local governments. the approach requires the integration of both the model-based user interface design (mbuid) and software product line engineering approaches. during the domain engineering activity, the commonality and variability of product services are modeled using feature diagrams, and the corresponding ui models are defined. the implemented feature models are in turn used to generate e-forms automatically, enhancing productivity, increasing quality, and reducing development cost. we have developed three different approaches for e-form generation of increasing complexity: (1) offline model transformation without interaction, (2) model transformation with initial interaction, and (3) model transformation with run-time interaction. we discuss the lessons learned and propose a systematic approach for defining model transformations that is based on an interactive paradigm.
2020-03-17 14:17:52.000000000
we aim to increase the flexibility at which a data worker can choose the right tool for the job, regardless of whether the tool is a code library or an interactive graphical user interface (gui). to achieve this flexibility, we extend computational notebooks with a new api mage, which supports tools that can represent themselves as both code and gui as needed. we discuss the design of mage as well as design opportunities in the space of flexible code/gui tools for data work. to understand tooling needs, we conduct a study with nine professional practitioners and elicit their feedback on mage and potential areas for flexible code/gui tooling. we then implement six client tools for mage that illustrate the main themes of our study findings. finally, we discuss open challenges in providing flexible code/gui interactions for data workers.
2020-09-22 01:43:05.000000000
logic programming through prolog has been widely used to provide persistence in many systems that need to store knowledge. some prolog implementations used to provide persistence have bidirectional interfaces with other programming languages, above all with object-oriented programming languages. however, tools and frameworks are currently lacking for developing systems that use logic predicate persistence in an easy and agile way. more specifically, an object-oriented and logic persistence provider is needed that allows objects to be manipulated in main memory while persisting them in the form of logic programming predicates. the present work introduces an object-prolog declarative mapping alternative to be supported by such an object-oriented and logic persistence provider. the proposed alternative consists of a correspondence between logic programming predicates and an object-oriented approach, where each element of logic programming is matched by a reciprocal object-oriented element. the object-oriented representation of logic programming predicates makes it easy to manipulate the elements that compose a knowledge base.
2017-04-27 02:22:26.000000000
in today's digital age, the imperative to protect data privacy and security is a paramount concern, especially for business-to-business (b2b) enterprises that handle sensitive information. these enterprises are increasingly constructing data platforms, which are integrated suites of technology solutions architected for the efficient management, processing, storage, and analysis of data. it has become critical to design these data platforms with mechanisms that inherently support data privacy and security, particularly as they encounter the added complexity of safeguarding unstructured data types such as log files and text documents. within this context, data masking stands out as a vital feature of data platform architecture. it proactively conceals sensitive elements, ensuring data privacy while preserving the information's value for business operations and analytics. this protective measure entails a strategic two-fold process: firstly, accurately pinpointing the sensitive data that necessitates concealment, and secondly, applying sophisticated methods to disguise that data effectively within the data platform infrastructure. this research delves into the nuances of embedding advanced data masking techniques within the very fabric of data platforms and explores in depth how enterprises can adopt a comprehensive approach toward effective data masking implementation using different identification and anonymization techniques.
2023-12-04 15:50:16.000000000
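a minimal sketch of the two-fold process described above (identify, then disguise) for unstructured text, assuming simple regular expressions for emails and card-like numbers; real identification steps would be far more sophisticated, so the patterns and placeholders here are illustrative only.

```python
import re

# identification step: patterns for two kinds of sensitive elements
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text):
    """disguise step: replace each match with a type-preserving placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}_MASKED>", text)
    return text

log_line = "payment failed for jane.doe@example.com using card 4111 1111 1111 1111"
print(mask(log_line))
```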
developers need to make a constant effort to improve the quality of their code if they want to stay productive. tools that highlight code locations that could benefit from refactoring are thus highly desirable. the most common name for such locations is "bad code smell". a number of tools offer such quality feedback and there is a substantial body of related research. however, all these tools, including those based on machine learning, still produce false positives. every single false positive shown to the developer places a cognitive burden on her and should thus be avoided. the literature discusses the choice of metric thresholds, the general subjectivity of such a judgment and the relation to conscious design choices, "design ideas". to examine false positives and the relation between bad smells and design ideas, we designed and conducted an exploratory case study. while previous research presented a broad overview, we have chosen a narrow setting to reach for even deeper insights: the framework jhotdraw had been designed so thoughtfully that most smell warnings are expected to be false positives. nevertheless, the "law of good style", better known as the "law of demeter", is a rather restrictive design rule so that we still expected to find some potential bad smells, i.e. violations of this "law". this combination led to 1215 potential smells of which at most 42 are true positives. we found generic as well as specific design ideas that were traded for the smell. our confidence in that decision ranged from high enough to very high. we were surprised to realize that the smell definition itself required the formulation of constructive design ideas. finally, we found some smells to be the result of limitations of the language, for which one could introduce auxiliary constructive design ideas to compensate. the decision whether a potential smell occurrence is actually a true positive was made very meticulously. for that purpose, we took into account three qualities that the smell could negatively affect and discussed the result of the recommended refactorings. if we were convinced that we had found a false positive, we described the relationships with design ideas. the realization that not only general design ideas but also specific design ideas have an influence on whether a potential smell is a true positive turns the problem of false positives from a scientific problem ("what is the true definition of the smell?") into an engineering problem ("how can we incorporate design ideas into smell definitions?"). we recommend adding adaptation points to the smell definitions. higher layers may then adapt the smell for specific contexts. after adaptation the tool may continuously provide distinct and precise quality feedback, reducing the cognitive load for the developer and preventing habituation. furthermore, the schema for the discussion of potential smells may be used to elaborate more sets of true and false smell occurrences. finally, it follows that smell detection based on machine learning should also take signs of design ideas into account.
2020-02-13 04:34:57.000000000
aspects such as limited resources, frequently changing market demands, and different technical restrictions regarding the implementation of software requirements (features) often call for the prioritization of requirements. the task of prioritization is the ranking and selection of requirements that should be included in future software releases. in this context, intelligent decision support for prioritization is extremely important. the prioritization approaches discussed in this paper are based on different artificial intelligence (ai) techniques that can help to improve the overall quality of requirements prioritization processes.
2021-07-31 12:19:47.000000000
the it industry should practice effective defect management on a continual basis to deploy a nearly zero-defect product to its customers. inspection is one of the most imperative and effective strategies of defect management. nevertheless, existing defect management strategies in leading software industries succeed in delivering at most a 96% defect-free product. an empirical study of various projects across several service-based and product-based industries confirms these claims. this paper provides an enhanced approach to inspection through a four-step approach model of inspection (fami). fami consists of i) integration of the inspection life cycle in the v-model of software development, ii) implementation of the process metric depth of inspection (di), iii) implementation of the people metric inspection performance metric (ipm), iv) application of a bayesian probability approach for selecting appropriate values of inspection-affecting parameters to achieve the desirable di. the managers of software houses can make use of the p2 metric as a benchmarking tool for the projects in order to improve the in-house defect management process. implementation of fami in software industries reflects continual process improvement and leads to the development of a nearly zero-defect product through effective defect management.
2012-09-26 00:27:33.000000000
code changes are performed differently in the mobile and non-mobile platforms. prior work has investigated the differences in specific platforms. however, we still lack a deeper understanding of how code changes evolve across different software platforms. in this paper, we present a study aiming at investigating the frequency of changes and how source code, build and test changes co-evolve in mobile and non-mobile platforms. we developed regression models to explain which factors influence the frequency of changes and applied the apriori algorithm to find types of changes that frequently co-occur. our findings show that non-mobile repositories have a higher number of commits per month, and our regression models suggest that being mobile has a significant negative impact on the number of commits when controlling for confounding factors such as code size. we also found that developers do not usually change source code files together with build or test files. we argue that our results can provide valuable information for developers on how changes are performed in different platforms so that practices adopted in successful software systems can be followed.
2019-10-22 18:33:28.000000000
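the study applies the apriori algorithm to find co-occurring change types; the sketch below shows only its first step, counting the support of change-type pairs per commit, on made-up commits.

```python
from collections import Counter
from itertools import combinations

# each commit is represented by the set of change types it touches (made-up data)
commits = [
    {"source"}, {"source", "test"}, {"source"}, {"build"},
    {"source", "build"}, {"source"}, {"source", "test"},
]

pair_counts = Counter(
    pair for changes in commits for pair in combinations(sorted(changes), 2)
)
min_support = 2 / len(commits)
frequent_pairs = {p: c for p, c in pair_counts.items() if c / len(commits) >= min_support}
print(frequent_pairs)   # e.g. ('source', 'test') co-occurring in 2 of 7 commits
```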
mutation testing is a standard technique to evaluate the quality of a test suite. due to its computationally intensive nature, many approaches have been proposed to make this technique feasible in real case scenarios. among these approaches, uniform random mutant selection has been demonstrated to be simple and promising. however, work in this area analyzes mutant samples at the project level, mainly on projects with adequate test suites. in this paper, we fill this lack of empirical validation by analyzing random mutant selection at the class level on projects with non-adequate test suites. first, we show that uniform random mutant selection falls short of the expected results. then, we propose a new approach named weighted random mutant selection, which generates more representative mutant samples. finally, we show that representative mutant samples are larger for projects with high test adequacy.
2016-07-06 17:42:24.000000000
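a minimal sketch of weighted random selection at the class level, assuming a per-class weight table; the data and the uniform placeholder weights are illustrative, since deriving the weights is the paper's actual contribution.

```python
import random

random.seed(0)

# mutants grouped by the class they were generated in (illustrative data)
mutants_by_class = {
    "Parser": [f"Parser_m{i}" for i in range(50)],
    "Lexer": [f"Lexer_m{i}" for i in range(10)],
    "Util": [f"Util_m{i}" for i in range(5)],
}

def weighted_sample(mutants_by_class, class_weights, k=10):
    """sample classes according to their weights, then draw one mutant per pick."""
    classes = list(mutants_by_class)
    weights = [class_weights[c] for c in classes]
    chosen_classes = random.choices(classes, weights=weights, k=k)
    return [random.choice(mutants_by_class[c]) for c in chosen_classes]

# placeholder weights; how they are derived is the core of the proposed approach
weights = {"Parser": 1.0, "Lexer": 1.0, "Util": 1.0}
print(weighted_sample(mutants_by_class, weights))
```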
[context] artificial intelligence (ai) components used in building software solutions have substantially increased in recent years. however, many of these solutions focus on technical aspects and ignore critical human-centered aspects. [objective] including human-centered aspects during requirements engineering (re) when building ai-based software can help achieve more responsible, unbiased, and inclusive ai-based software solutions. [method] in this paper, we present a new framework developed based on human-centered ai guidelines and a user survey to aid in collecting requirements for human-centered ai-based software. we provide a catalog to elicit these requirements and a conceptual model to present them visually. [results] the framework is applied to a case study to elicit and model requirements for enhancing the quality of 360-degree videos intended for virtual reality (vr) users. [conclusion] we found that our proposed approach helped the project team fully understand the human-centered needs of the project to deliver. furthermore, the framework helped to understand what requirements need to be captured at the initial stages versus later stages in the engineering process of ai-based software.
2023-03-03 23:14:15.000000000
natural language processing (nlp) is widely used to support the automation of different requirements engineering (re) tasks. most of the proposed approaches start with various nlp steps that analyze requirements statements, extract their linguistic information, and convert them to easy-to-process representations, such as lists of features or embedding-based vector representations. these nlp-based representations are usually used at a later stage as inputs for machine learning techniques or rule-based methods. thus, requirements representations play a major role in determining the accuracy of different approaches. in this paper, we conducted a survey in the form of a systematic literature mapping (classification) to find out (1) what are the representations used in re tasks literature, (2) what is the main focus of these works, (3) what are the main research directions in this domain, and (4) what are the gaps and potential future directions. after compiling an initial pool of 2,227 papers, and applying a set of inclusion/exclusion criteria, we obtained a final pool containing 104 relevant papers. our survey shows that the research direction has changed from the use of lexical and syntactic features to the use of advanced embedding techniques, especially in the last two years. using advanced embedding representations has proved its effectiveness in most re tasks (such as requirement analysis, extracting requirements from reviews and forums, and semantic-level quality tasks). however, representations that are based on lexical and syntactic features are still more appropriate for other re tasks (such as modeling and syntax-level quality tasks) since they provide the required information for the rules and regular expressions used when handling these tasks. in addition, we identify four gaps in the existing literature, why they matter, and how future research can begin to address them.
2022-05-31 14:39:49.000000000
with the increasing amount of big data generated each year, the tools and technologies developed and used for storing, processing, and analyzing big data have also improved. open-source software has been an important factor in the success and innovation in the field of big data, and the apache software foundation (asf) has played a crucial role in this success and innovation by providing a number of state-of-the-art projects, free and open to the public. asf classifies its projects into different categories. in this report, projects listed under the big data category are analyzed in depth and discussed with reference to one of the seven defined sub-categories. our investigation has shown that many of the apache big data projects are autonomous, while some are built on top of other apache projects and some work in conjunction with other projects to improve and ease development in the big data space.
2020-04-27 11:07:21.000000000
scenario-based development and test processes are a promising approach for verifying and validating automated driving functions. for this purpose, scenarios have to be generated during the development process in a traceable manner. in early development stages, the operating scenarios of the item to be developed are usually described in an abstract, linguistic way. within the scope of a simulation-assisted test process, these linguistically described scenarios have to be transformed into a state space representation and converted into data formats which can be used with the respective simulation environment. currently, this step of detailing scenarios requires considerable manual effort. furthermore, a standardized interpretation of the linguistically described scenarios and a consistent transformation into the data formats are not guaranteed due to multiple authors as well as many constraints between the scenario parameters. in this paper, the authors present an approach to automatically detail a keyword-based scenario description for execution in a simulation environment and provide a basis for test case generation. as a first step, the keyword-based description is transformed into a parameter space representation. at the same time, constraints regarding the selection and combination of parameter values are documented for the following process steps (e.g., evolutionary or stochastic test methods). as a second step, the parameter space representation is converted into data formats required by the simulation environment. as an example, the authors use scenarios on german freeways and convert them into the data formats opendrive (description of the road) and openscenario (description of traffic participants and environmental conditions) for execution in the simulation environment virtual test drive.
2019-05-06 20:45:25.000000000
with the increasing use of machine learning (ml) in critical autonomous systems, runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operations. monitors have been proposed for different applications involving diverse perception tasks and ml models, and specific evaluation procedures and metrics are used for different contexts. this paper introduces three unified safety-oriented metrics, representing the safety benefits of the monitor (safety gain), the remaining safety gaps after using it (residual hazard), and its negative impact on the system's performance (availability cost). to compute these metrics, one must define two return functions, representing how a given ml prediction will impact expected future rewards and hazards. three use-cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics. experimental results on these examples show how different evaluation choices impact the perceived performance of a monitor. as our formalism requires us to formulate explicit safety assumptions, it allows us to ensure that the evaluation conducted matches the high-level system requirements.
2022-08-29 10:14:53.000000000
defects in requirements specifications can have severe consequences during the software development lifecycle. some of them may result in poor product quality and/or time and budget overruns due to incorrect or missing quality characteristics, such as security. this characteristic requires special attention in web applications because they have become a target for manipulating sensitive data. several concerns make security difficult to deal with. for instance, security requirements are often misunderstood and improperly specified due to lack of security expertise and emphasis on security during early stages of software development. this often leads to unspecified or ill-defined security-related aspects. these concerns become even more challenging in agile contexts, where lightweight documentation is typically produced. to tackle this problem, we designed an approach for reviewing security-related aspects in agile requirements specifications of web applications. our proposal considers user stories and security specifications as inputs and relates those user stories to security properties via natural language processing. based on the related security properties, our approach identifies high-level security requirements from the open web application security project (owasp) to be verified, and generates a reading technique to support reviewers in detecting defects. we evaluate our approach via three experiment trials conducted with 56 novice software engineers, measuring effectiveness, efficiency, usefulness, and ease of use. we compare our approach against using: (1) the owasp high-level security requirements, and (2) a perspective-based approach as proposed in contemporary state of the art. the results strengthen our confidence that using our approach has a positive impact (with large effect size) on the performance of inspectors in terms of effectiveness and efficiency.
2020-09-07 13:39:42.000000000
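a minimal keyword-matching sketch standing in for the nlp step that relates user stories to security properties; the property names and keyword lists are illustrative assumptions, not the approach's actual vocabulary.

```python
# hypothetical security properties and trigger keywords
SECURITY_PROPERTIES = {
    "authentication": {"login", "log in", "password", "sign in"},
    "confidentiality": {"credit card", "personal data", "private", "payment"},
    "authorization": {"admin", "role", "permission", "access level"},
}

def related_properties(user_story):
    """return the security properties whose keywords appear in the user story."""
    text = user_story.lower()
    return {prop for prop, keywords in SECURITY_PROPERTIES.items()
            if any(kw in text for kw in keywords)}

story = "As a customer, I want to log in and pay with my credit card."
print(related_properties(story))   # {'authentication', 'confidentiality'}
```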
software reuse enables developers to reuse architecture, programs and other software artifacts. realizing systematic reuse in software brings many benefits to stakeholders, including lower maintenance effort, lower development costs, and faster time to market. unfortunately, implementing a framework for large-scale software reuse in android apps is currently still a huge problem, given the complexity of the task and the lack of practical technical support from either tools or domain experts. therefore, proposing a feature location benchmark for apps will help developers either optimize their feature location techniques or reuse the assets created in the benchmark. in this paper, we release a feature location benchmark, which can be used by developers who intend to compose software product lines (spls) and realize reuse in apps. the benchmark not only supports the research community in reuse research, but also helps industry practitioners optimize their architecture and enhance modularity. in addition, we also develop an android studio plugin named caide for developers to view and operate on the benchmark.
2020-05-06 19:21:39.000000000
the rapid growth in the application of machine learning (ml) in various domains has led to more attention being paid to the quality of ml components. as a result, there is a growing number of techniques and tools aimed at improving the quality of ml components and integrating them safely into ml-based systems. although most of these tools rely on the bug lifecycle, there is no standard benchmark of bugs with which to assess their performance, compare them, and discuss their advantages and weaknesses. in this study, we first investigate the reproducibility and verifiability of the bugs in ml-based systems and show the most important factors in each one. then, we explore the challenges of generating a benchmark of bugs in ml-based software systems and provide a bug benchmark, named defect4ml, that satisfies all criteria of a standard benchmark, i.e. relevance, reproducibility, fairness, verifiability, and usability. this faultload benchmark contains 100 bugs reported by ml developers in github and stack overflow, using two of the most popular ml frameworks: tensorflow and keras. defect4ml also addresses important challenges in software reliability engineering of ml-based software systems, like: 1) fast changes in frameworks, by providing various bugs for different versions of frameworks, 2) code portability, by delivering similar bugs in different ml frameworks, 3) bug reproducibility, by providing fully reproducible bugs with complete information about required dependencies and data, and 4) lack of detailed information on bugs, by presenting links to the bugs' origins. defect4ml can be of interest to ml-based systems practitioners and researchers to assess their testing tools and techniques.
2022-06-23 12:47:47.000000000
concolic testing is a popular software verification technique based on a combination of concrete and symbolic execution. its main focus is finding bugs and generating test cases with the aim of maximizing code coverage. a previous approach to concolic testing in logic programming was not sound because it only dealt with positive constraints (by means of substitutions) but could not represent negative constraints. in this paper, we present a novel framework for concolic testing of clp programs that generalizes the previous technique. in the clp setting, one can represent both positive and negative constraints in a natural way, thus giving rise to a sound and (potentially) more efficient technique. defining verification and testing techniques for clp programs is increasingly relevant since this framework is becoming popular as an intermediate representation to analyze programs written in other programming paradigms.
2020-07-30 17:31:31.000000000
large language models (llms) and generative pre-trained transformers (gpts) are reshaping the field of software engineering (se). they enable innovative methods for executing many software engineering tasks, including automated code generation, debugging, maintenance, etc. however, only a limited number of existing works have thoroughly explored the potential of gpt agents in se. this vision paper inquires about the role of gpt-based agents in se. our vision is to leverage the capabilities of multiple gpt agents to contribute to se tasks and to propose an initial road map for future work. we argue that multiple gpt agents can perform creative and demanding tasks far beyond coding and debugging. gpt agents can also do project planning, requirements engineering, and software design. these can be done through high-level descriptions given by the human developer. we have shown in our initial experimental analysis for simple software (e.g., snake game, tic-tac-toe, notepad) that multiple gpt agents can produce high-quality code and document it carefully. we argue that this shows promise of unforeseen efficiency and will dramatically reduce lead times. to this end, we intend to expand our efforts to understand how we can scale these autonomous capabilities further.
2023-11-29 02:15:16.000000000
reentrancy is one of the most notorious vulnerabilities in smart contracts, resulting in significant digital asset losses. however, many previous works indicate that current reentrancy detection tools suffer from high false positive rates. even worse, recent years have witnessed the emergence of new reentrancy attack patterns fueled by intricate and diverse vulnerability exploit mechanisms. unfortunately, current tools face a significant limitation in their capacity to adapt and detect these evolving reentrancy patterns. consequently, ensuring precise and highly extensible reentrancy vulnerability detection remains a critical challenge for existing tools. to address this issue, we propose a tool named reep, designed to reduce false positives in reentrancy vulnerability detection. additionally, reep can integrate multiple tools, expanding its capacity for vulnerability detection. it evaluates results from existing tools to verify vulnerability likelihood and reduce false positives. reep also offers excellent extensibility, enabling the integration of different detection tools to enhance precision and cover different vulnerability attack patterns. we apply reep to eight existing state-of-the-art reentrancy detection tools. the average precision of these eight tools increased from the original 0.5% to 73% without sacrificing recall. furthermore, reep exhibits robust extensibility. by integrating multiple tools, the precision further improved to a maximum of 83.6%. these results demonstrate that reep effectively unites the strengths of existing works and enhances the precision of reentrancy vulnerability detection tools.
2024-02-13 11:08:08.000000000
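a minimal sketch of one way to combine verdicts from several detectors by weighted voting; the tool names, trust weights, and threshold are hypothetical, and reep's actual evaluation of tool results is more involved than a vote.

```python
def combined_verdict(reports, weights, threshold=0.5):
    """reports: {tool_name: True/False flag for a given contract function}."""
    total = sum(weights[t] for t in reports)
    score = sum(weights[t] for t, flagged in reports.items() if flagged)
    return (score / total) >= threshold

weights = {"toolA": 1.0, "toolB": 1.0, "toolC": 2.0}   # hypothetical trust weights
reports = {"toolA": True, "toolB": False, "toolC": True}
print(combined_verdict(reports, weights))   # True: weighted majority flags reentrancy
```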
systems integration is a difficult matter, particularly when the components involved are varied. the problem becomes even more difficult when such components are heterogeneous, such as humans, robots and software systems. currently, humans are regarded as users of artificial systems (robots and software systems). this has several disadvantages, such as: (1) incoherent exploitation of artificial systems, where humans' roles are not clear, and (2) the futile search for a universal user model. in this paper, we adopted a cooperative approach where the system's components are regarded as being on the same level and cooperate in the service of the global system. we concretized such an approach by considering humans, robots and software systems as autonomous agents assuming roles in an organization. the latter will be implemented as a multi-agent system developed using a multi-agent development methodology.
2014-08-25 13:27:33.000000000
the human capital invested into software development plays a vital role in the success of any software project. by human capital, we do not mean the individuals themselves, but rather the range of knowledge and skills (i.e., human aspects) invested to create value during development. however, there is still no consensus on how these broad terms of human aspects relate to the health of a project. in this study, we reconceptualize human aspects of software engineering (se) into a framework (i.e., se human capital). the study presents a systematic mapping to survey and classify existing human aspect studies into four dimensions of the framework: capacity, deployment, development, and know-how (based on the global human capital index). from premium se publishing venues (five journals and four conferences), we extract 2,698 hits of papers published between 2013 and 2017. using search criteria, we then narrow our results to 340 papers. finally, we use inclusion and exclusion criteria to manually select 78 papers (49 quantitative and 29 qualitative studies). using research questions, we uncover related topics, theories and data origins. the key outcome of this paper is a set of indicators for se human capital. this work is a step towards the creation of an se human capital index (se-hci) to capture and rank human aspects, with the potential to assess progress within projects, and point to opportunities for cross-project learning and exchange across software projects.
2018-05-09 10:04:28.000000000
model driven engineering (mde) has been widely applied in software development, aiming to facilitate the coordination among various stakeholders. such a methodology allows for a more efficient and effective development process. nevertheless, modeling is a strenuous activity that requires proper knowledge of components, attributes, and logic to reach the level of abstraction required by the application domain. in particular, metamodels play an important role in several paradigms, and specifying wrong entities or attributes in metamodels can negatively impact the quality of the produced artifacts as well as other elements of the whole process. during the metamodeling phase, modelers can benefit from assistance to avoid mistakes, e.g., getting recommendations such as meta-classes and structural features relevant to the metamodel being defined. however, suitable machinery is needed to mine data from repositories of existing modeling artifacts and compute recommendations. in this work, we propose memorec, a novel approach that makes use of a collaborative filtering strategy to recommend valuable entities related to the metamodel under construction. our approach can provide suggestions related to both meta-classes and structural features that should be added to the metamodel under definition. we assess the quality of the work with respect to different metrics, i.e., success rate, precision, and recall. the results demonstrate that memorec is capable of suggesting relevant items given a partial metamodel and supporting modelers in their task.
2022-03-10 05:43:35.000000000
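a minimal item-recommendation sketch in the spirit of collaborative filtering over a binary metamodel-by-feature matrix; the metamodels, features, and similarity choice are made up, and memorec's actual encoding and rating scheme are richer.

```python
import numpy as np

features = ["Class", "Attribute", "Reference", "Package", "Operation"]
# rows: existing metamodels; columns: whether each contains the feature (made-up data)
M = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 1],
], dtype=float)

def recommend(partial, M, k=2, top_n=2):
    """recommend features for a partial metamodel via its k most similar neighbours."""
    sims = M @ partial / (np.linalg.norm(M, axis=1) * np.linalg.norm(partial) + 1e-9)
    neighbours = np.argsort(sims)[-k:]                   # k nearest by cosine similarity
    scores = M[neighbours].mean(axis=0) * (partial == 0)  # score only unseen features
    return [features[i] for i in np.argsort(scores)[::-1][:top_n] if scores[i] > 0]

partial = np.array([1, 1, 0, 0, 0], dtype=float)  # metamodel under construction
print(recommend(partial, M))
```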
many popular blockchain platforms are supporting smart contracts for building decentralized applications. however, the vulnerabilities within smart contracts have led to serious financial loss to their end users. for the eosio blockchain platform, effective vulnerability detectors are still limited. furthermore, existing vulnerability detection tools can only support one blockchain platform. in this work, we present wana, a cross-platform smart contract vulnerability detection tool based on the symbolic execution of webassembly bytecode. furthermore, wana proposes a set of test oracles to detect the vulnerabilities in eosio and ethereum smart contracts based on webassembly bytecode analysis. our experimental analysis shows that wana can effectively detect vulnerabilities in both eosio and ethereum smart contracts with high efficiency.
2020-07-28 14:18:37.000000000
model-based testing is a well-established approach to verify implementations modeled by i/o labeled transition systems (ioltss). one of the challenges stemming from model-based testing is the conformance checking and the generation of test suites, especially when completeness is a required property. in order to check whether an implementation under test is in compliance with its respective specification, one resorts to some form of conformance relation that guarantees the expected behavior of the implementations, given the behavior of the specification. the ioco conformance relation is an example of such a relation, especially suited for asynchronous models. in this work we study a more general conformance relation, show how to generate finite and complete test suites, and discuss the complexity of the test generation mechanism under this more general conformance relation. we also show that ioco conformance is a special case of this new conformance relation, and we investigate the complexity of classical ioco-complete test suites. further, we relate our contributions to more recent works, accommodating the restrictions of their classes of fault models within our more general approach as special cases, and expose the complexity of generating any complete test suite that must satisfy their restrictions.
2019-02-25 04:59:04.000000000
this paper presents an approach to model features and function nets of automotive systems comprehensively. in order to bridge the gap between feature requirements and function nets, we describe an approach to describe both using a sysml-based notation. if requirements on the automotive system are changed by several developers responsible for different features, it is important for developers to have a good overview and understanding of the functions affected. we show that this can be comprehensively modeled using so called "feature views". in order to validate these views against the complete function nets, consistency checks are provided.
2014-09-22 17:03:11.000000000
system-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. a common technique to verify programs is the analysis of their abstract syntax tree (ast). tree structures can be elegantly analyzed with the logic programming language prolog. moreover, prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. on the other hand, prolog's non-determinism and backtracking ease testing different variations of the program flow without much effort. a rule-based approach with prolog allows the verification goals to be characterized in a concise and declarative way. in this paper, we describe our approach to verify the source code of a flash file system with the help of prolog. the flash file system is written in c++ and has been developed particularly for the use in satellites. we transform a given abstract syntax tree of c++ source code into prolog facts and derive the call graph and the execution sequence (tree), which then are further tested against verification goals. the different program flow branchings due to control structures are derived by backtracking as subtrees of the full execution sequence. finally, these subtrees are verified in prolog. we illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system rodos. we rely on computation tree logic (ctl) and have designed an embedded domain specific language (dsl) in prolog to express the verification goals.
2016-12-29 08:15:05.000000000
web api specifications are machine-readable descriptions of apis. these specifications, in combination with related tooling, simplify and support the consumption of apis. however, despite the increased distribution of web apis, specifications are rare and their creation and maintenance heavily relies on manual efforts by third parties. in this paper, we propose an automatic approach and an associated tool called d2spec for extracting specifications from web api documentation pages. given a seed online documentation page on an api, d2spec first crawls all documentation pages on the api, and then uses a set of machine learning techniques to extract the base url, path templates, and http methods, which collectively describe the endpoints of an api. we evaluated whether d2spec can accurately extract endpoints from documentation on 120 web apis. the results showed that d2spec achieved a precision of 87.5% in identifying base urls, a precision of 81.3% and a recall of 80.6% in generating path templates, and a precision of 84.4% and a recall of 76.2% in extracting http methods. in addition, we found that d2spec was useful when applied to apis with pre-existing api specifications: d2spec revealed many inconsistencies between web api documentation and their corresponding publicly available specifications. thus, d2spec can be used by web api providers to keep documentation and specifications in synchronization.
2018-01-23 16:38:18.000000000
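a minimal regex sketch for one of the sub-tasks described above, pulling http methods and path templates out of free-text documentation; the patterns and example text are illustrative and far simpler than d2spec's machine-learning approach.

```python
import re

doc_text = """
GET /users/{id} returns a single user.
To create a user, send POST /users with a JSON body.
DELETE /users/{id}/tokens revokes all tokens.
"""

# method keyword followed by a slash-prefixed path, possibly containing {parameters}
ENDPOINT = re.compile(r"\b(GET|POST|PUT|PATCH|DELETE)\s+(/[\w{}/.-]*)")

endpoints = sorted(set(ENDPOINT.findall(doc_text)))
for method, path in endpoints:
    print(method, path)
```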
the overall objective of requirements engineering is to specify, in a systematic way, a system that satisfies the expectations of its stakeholders. despite tremendous effort in the field, recent studies demonstrate that this objective is not always achieved. in this paper, we discuss one particularly challenging factor in requirements engineering projects, namely the change of requirements. we propose a rough discussion of how learning and time explain requirements changes, how it can be introduced as a key variable in the formulation of the requirements engineering problem, and how this induces costs for a requirements engineering project. this leads to a new discipline of requirements economics.
2017-11-22 03:45:15.000000000
with the wide application of machine translation, the testing of machine translation systems (mtss) has attracted much attention. recent works apply metamorphic testing (mt) to address the oracle problem in mts testing. existing mt methods for mts generally follow the workflow of input transformation and output relation comparison, which generates a follow-up input sentence by mutating the source input and compares the source and follow-up output translations to detect translation errors, respectively. these methods use various input transformations to generate test case pairs and have successfully triggered numerous translation errors. however, they have limitations in performing fine-grained and rigorous output relation comparison and thus may report false alarms and miss true errors. in this paper, we propose a word closure-based output comparison method to address the limitations of the existing mts mt methods. specifically, we first build a new comparison unit called word closure, where each closure includes a group of correlated input and output words in the test case pair. word closures suggest the linkages between the appropriate fragment in the source output translation and its counterpart in the follow-up output for comparison. next, we compare the semantics on the level of word closure to identify the translation errors. in this way, we perform a fine-grained and rigorous semantic comparison for the outputs and thus realize more effective violation identification. we evaluate our method with the test cases generated by five existing input transformations and translation outputs from three popular mtss. results show that our method significantly outperforms the existing works in violation identification by improving the precision and recall and achieving an average increase of 29.8% in f1 score. it also helps to increase the f1 score of translation error localization by 35.9%.
2023-12-18 18:33:58.000000000
current research in the computer vision field mainly focuses on improving deep learning (dl) correctness and inference time performance. however, there is still little work on the huge carbon footprint of training dl models. this study aims to analyze the impact of the model architecture and training environment when training greener computer vision models. we divide this goal into two research questions. first, we analyze the effects of model architecture on achieving greener models while keeping correctness at optimal levels. second, we study the influence of the training environment on producing greener models. to investigate these relationships, we collect multiple metrics related to energy efficiency and model correctness during the models' training. then, we outline the trade-offs between the measured energy efficiency and the models' correctness regarding model architecture, and their relationship with the training environment. we conduct this research in the context of a computer vision system for image classification. in conclusion, we show that selecting the proper model architecture and training environment can reduce energy consumption dramatically (up to 81.38%) at the cost of negligible decreases in correctness. also, we find evidence that gpus should scale with the models' computational complexity for better energy efficiency.
2023-07-10 11:33:46.000000000
context: it is an enigma that agile projects can succeed 'without requirements' when weak requirements engineering is a known cause for project failures. while agile development projects often manage well without extensive requirements, test cases are commonly viewed as requirements and detailed requirements are documented as test cases. objective: we have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. method: we performed an iterative case study at three companies and collected data through 14 interviews and two focus groups. results: the use of test cases as requirements offers both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. we have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual, for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine-executable specification, and the use of tools which provide specific support for the practice of using test cases as requirements. conclusions: the findings provide empirical insight into how agile development projects manage and communicate requirements. the identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as the number of stakeholders and the rate of change.
2023-08-22 07:23:12.000000000
large language models (llms) have demonstrated exceptional coding capability. however, as another critical component of programming proficiency, the debugging capability of llms remains relatively unexplored. previous evaluations of llms' debugging ability are significantly limited by the risk of data leakage, the scale of the dataset, and the variety of tested bugs. to overcome these deficiencies, we introduce `debugbench', an llm debugging benchmark consisting of 4,253 instances. it covers four major bug categories and 18 minor types in c++, java, and python. to construct debugbench, we collect code snippets from the leetcode community, implant bugs into source data with gpt-4, and assure rigorous quality checks. we evaluate two commercial and three open-source models in a zero-shot scenario. we find that (1) while closed-source models like gpt-4 exhibit inferior debugging performance compared to humans, open-source models such as code llama fail to attain any pass rate scores; (2) the complexity of debugging notably fluctuates depending on the bug category; (3) incorporating runtime feedback has a clear impact on debugging performance, though it is not always helpful. as an extension, we also compare llm debugging and code generation, revealing a strong correlation between them for closed-source models. these findings will benefit the development of llms in debugging.
2024-01-08 13:36:31.000000000
automatic generators of gui tests often fail to generate semantically relevant test cases, and thus miss important test scenarios. to address this issue, test adaptation techniques can be used to automatically generate semantically meaningful gui tests from test cases of applications with similar functionalities. in this paper, we present adaptdroid, a technique that approaches the test adaptation problem as a search problem and uses evolutionary testing to adapt gui tests (including oracles) across similar android apps. in our evaluation with 32 popular android apps, adaptdroid successfully adapted semantically relevant test cases in 11 out of 20 cross-app adaptation scenarios.
2021-04-09 21:26:32.000000000
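the entry above frames gui test adaptation as a search problem solved with evolutionary testing. below is a minimal, hypothetical evolutionary loop showing the general shape of such a search; the fitness, crossover, and mutate operators are placeholders, not adaptdroid's actual operators.

```python
# illustrative evolutionary loop for adapting a gui event sequence across
# similar apps; operators and scoring are invented for illustration only.
import random

def fitness(test, target_app) -> float:
    """placeholder: score how well an adapted event sequence exercises the
    target app (e.g., matched gui widgets and satisfied oracle checks)."""
    raise NotImplementedError

def crossover(a, b):
    if min(len(a), len(b)) < 2:
        return list(a)
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(test, target_events):
    t = list(test)
    t[random.randrange(len(t))] = random.choice(target_events)
    return t

def adapt(seed_tests, target_app, target_events, generations=100, pop_size=50):
    # start from the donor app's tests, perturbed toward the target app's events
    population = [mutate(random.choice(seed_tests), target_events)
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda t: fitness(t, target_app),
                        reverse=True)
        survivors = ranked[: pop_size // 2]
        offspring = [mutate(crossover(random.choice(survivors),
                                      random.choice(survivors)), target_events)
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=lambda t: fitness(t, target_app))
```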
this work relates to the context-awareness of things that belong to iot networks. preferences, understood as priorities in selection, are considered, and dynamic preference models for such systems are built. the preference models are based on formal logic and are built on-the-fly by software agents that observe the behavior of users/inhabitants and gather knowledge about preferences expressed in terms of logical specifications. a 3-level structure of agents has been introduced to support iot inference. these agents cooperate with each other based on a graph representation of the system knowledge. an example of such a system is presented.
2014-04-04 09:39:06.000000000
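the entry above sketches a 3-level agent structure that learns preference models by observing users. the snippet below is a loose, hypothetical illustration of that layering in python; the class names and the preference representation are invented for illustration and are not taken from the paper.

```python
# hypothetical 3-level agent layering: observe -> infer preference rules -> decide.
from collections import Counter

class ObservingAgent:
    """lowest level: records user actions as (context, choice) facts."""
    def __init__(self):
        self.facts = []

    def observe(self, context: str, choice: str):
        self.facts.append((context, choice))

class PreferenceAgent:
    """middle level: turns observed facts into preference rules on the fly."""
    def infer_rules(self, facts):
        counts = Counter(facts)
        best = {}
        for (context, choice), n in counts.items():
            if n > best.get(context, ("", 0))[1]:
                best[context] = (choice, n)
        # e.g. {"evening": "dim_light"} ~ a rule prefers(evening, dim_light)
        return {ctx: choice for ctx, (choice, _) in best.items()}

class DecisionAgent:
    """top level: selects an action for the current context using the rules."""
    def decide(self, rules, context, default):
        return rules.get(context, default)
```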
software estimation is a crucial task in software engineering. software estimation encompasses cost, effort, schedule, and size. the importance of software estimation becomes critical in the early stages of the software life cycle, when the details of the software have not been revealed yet. several commercial and non-commercial tools exist to estimate software in the early stages. most software effort estimation methods require software size as one of their important inputs; consequently, software size estimation in the early stages becomes essential. one of the approaches that has been used for about two decades in early size and effort estimation is called use case points. the use case points method relies on the use case diagram to estimate the size and effort of software projects. although the use case points method has been widely used, it has some limitations that might adversely affect the accuracy of estimation. this paper presents some techniques using fuzzy logic and neural networks to improve the accuracy of the use case points method. results showed that an improvement of up to 22% can be obtained using the proposed approach.
2016-11-27 12:08:26.000000000
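the entry above builds on the use case points method, so a worked sketch of the underlying calculation may help. karner's standard constants (actor weights 1/2/3, tcf = 0.6 + 0.01·tf, ecf = 1.4 − 0.03·ef, and roughly 20 person-hours per ucp) are well known; the fuzzy interpolation of use-case weights shown here is only an illustration of where fuzzy logic could enter the calculation, not the paper's actual model.

```python
# sketch of the standard use case points calculation, with an illustrative
# fuzzy-style softening of the crisp 5/10/15 use-case complexity weights.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def fuzzy_use_case_weight(transactions: int) -> float:
    """crisp ucp assigns 5/10/15 by transaction count; here the weight is
    interpolated so use cases near a category boundary get a blended value."""
    if transactions <= 3:
        return 5.0
    if transactions <= 7:
        return 5.0 + (transactions - 3) * (10.0 - 5.0) / 4
    if transactions <= 12:
        return 10.0 + (transactions - 7) * (15.0 - 10.0) / 5
    return 15.0

def use_case_points(actors, use_case_transactions, tfactor, efactor):
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)            # unadjusted actor weight
    uucw = sum(fuzzy_use_case_weight(t) for t in use_case_transactions)
    tcf = 0.6 + 0.01 * tfactor                              # technical complexity factor
    ecf = 1.4 - 0.03 * efactor                              # environmental complexity factor
    return (uaw + uucw) * tcf * ecf

# effort estimate, assuming the commonly cited 20 person-hours per ucp
ucp = use_case_points(["simple", "complex"], [2, 6, 11], tfactor=30, efactor=15)
effort_hours = ucp * 20
```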
agile methods in undergraduate courses have been explored in an effort to close the gap between industry and professional profiles. we have structured an android application development course based on a tailored user-centered agile process for the development of educational digital tools. this process is based on scrum and extreme programming in combination with user experience (ux) approaches. the course is executed in two phases: the first half of the semester presents theory on agile and mobile application development, while the latter half is managed as a workshop where students develop for an actual client. the introduction of ux and user-centered design, exploiting the close relationship with stakeholders expected from agile processes, allows for the development of different quality features. since 2019, two of the projects have been extended, and one project has been developed with the described process and course alumni. students and stakeholders have found value in the generated products and process.
2023-11-06 04:26:20.000000000
quality assurance is of great importance for deep learning (dl) systems, especially when they are applied in safety-critical applications. while quality issues of native dl applications have been extensively analyzed, the issues of javascript-based dl applications have never been systematically studied. compared with native dl applications, javascript-based dl applications can run on major browsers, making them platform- and device-independent. specifically, the quality of a javascript-based dl application depends on three parts: the application itself, the third-party dl library used, and the underlying dl framework (e.g., tensorflow.js), together called the javascript-based dl system. in this paper, we conduct the first empirical study on the quality issues of javascript-based dl systems. specifically, we collect and analyze 700 real-world faults from relevant github repositories, including the official tensorflow.js repository, 13 third-party dl libraries, and 58 javascript-based dl applications. to better understand the characteristics of these faults, we manually analyze and construct taxonomies for the fault symptoms, root causes, and fix patterns, respectively. moreover, we also study the fault distributions of symptoms and root causes in terms of the different stages of the development lifecycle, the three-level architecture of the dl system, and the four major components of the tensorflow.js framework. based on the results, we suggest actionable implications and research avenues that can potentially facilitate the development, testing, and debugging of javascript-based dl systems.
2022-09-08 19:13:53.000000000
in ics, wut, the cosma design environment is being developed. cosma is based on the concurrent state machines (csm) formalism of system specification. it contains a graphical tool for system design, various tools for analysis (including a temporal model checker), a simulator, and a code generator. in many projects, some common subsystems recur; this concerns both complicated modules and simple counters. in the report, a concept of a macrogeneration technique for building libraries of automata is presented. the new technique will support compactness of projects and reusability of modules.
2017-10-20 06:47:58.000000000
automated program repair (apr) is a fast-growing area with numerous new techniques being developed to tackle one of the most challenging software engineering problems. apr techniques have shown promising results, giving us hope that one day it will be possible for software to repair itself. in this paper, we focus on the problem of objective performance evaluation of apr techniques. we introduce a new approach, explaining automated program repair (e-apr), which identifies features of buggy programs that explain why a particular instance is difficult for an apr technique. e-apr is used to examine the diversity and quality of the buggy programs used by most researchers, and to analyse the strengths and weaknesses of existing apr techniques. e-apr visualises an instance space of buggy programs, with each buggy program represented as a point in the space. the instance space is constructed to reveal areas of hard and easy buggy programs, and enables the strengths and weaknesses of apr techniques to be identified.
2020-02-07 22:47:51.000000000
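the entry above constructs an instance space of buggy programs to expose regions of hard and easy instances. the sketch below shows the general idea using a plain pca projection as a stand-in; e-apr learns its own projection tailored to apr hardness, so this is an assumption-laden illustration rather than the method itself.

```python
# illustrative 2d "instance space" of buggy programs from a feature matrix,
# using pca as a generic stand-in for e-apr's learned projection.
import numpy as np
from sklearn.decomposition import PCA

def instance_space(features: np.ndarray, repaired: np.ndarray):
    """features: one row of code/bug metrics per buggy program.
    repaired: boolean vector, whether a given apr technique fixed each program."""
    coords = PCA(n_components=2).fit_transform(features)
    easy = coords[repaired]
    hard = coords[~repaired]
    return easy, hard   # plot these to see regions of strength and weakness

# usage sketch with random placeholder data (not real benchmark programs)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))     # 12 hypothetical program features
fixed = rng.random(100) < 0.4      # hypothetical repair outcomes
easy_pts, hard_pts = instance_space(X, fixed)
```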
background: as improving code review (cr) effectiveness is a priority for many software development organizations, projects have deployed cr analytics platforms to identify potential improvement areas. the number of issues identified, which is a crucial metric to measure cr effectiveness, can be misleading if all issues are placed in the same bin. therefore, a finer-grained classification of issues identified during crs can provide actionable insights to improve cr effectiveness. although a recent work by fregnan et al. proposed automated models to classify cr-induced changes, we have noticed two potential improvement areas -- i) classifying comments that do not induce changes and ii) using deep neural networks (dnn) in conjunction with code context to improve performance. aims: this study aims to develop an automated cr comment classifier that leverages dnn models to achieve a more reliable performance than fregnan et al.'s approach. method: using a manually labeled dataset of 1,828 cr comments, we trained and evaluated supervised learning-based dnn models leveraging code context, comment text, and a set of code metrics to classify cr comments into one of the five high-level categories proposed by turzo and bosu. results: based on our 10-fold cross-validation-based evaluations of multiple combinations of tokenization approaches, we found a model using codebert achieving the best accuracy of 59.3%. our approach outperforms fregnan et al.'s approach by achieving 18.7% higher accuracy. conclusion: besides facilitating improved cr analytics, our proposed model can be useful for developers in prioritizing code review feedback and selecting reviewers.
2023-07-06 20:30:35.000000000
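the entry above fine-tunes codebert-style dnn models on review comments plus code context. a minimal sketch of such a classifier is shown below, assuming the hugging face transformers api; the label names and the way the code context is paired with the comment are hypothetical, not the paper's exact setup.

```python
# minimal sketch of a five-class code review comment classifier on top of
# codebert; label names and context pairing are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["functional", "refactoring", "discussion", "false positive", "other"]  # hypothetical

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=len(LABELS)
)

def classify(comment: str, code_context: str) -> str:
    # pair the review comment with the surrounding code context as a sentence pair
    inputs = tokenizer(comment, code_context, truncation=True,
                       padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]
```

fine-tuning on the labeled comments (e.g., with a standard cross-entropy training loop or the transformers trainer) would be needed before the predictions are meaningful; the snippet only shows the inference path.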
code generation problems differ from common natural language problems - they require matching the exact syntax of the target language, identifying happy paths and edge cases, paying attention to numerous small details in the problem spec, and addressing other code-specific issues and requirements. hence, many of the optimizations and tricks that have been successful in natural language generation may not be effective for code tasks. in this work, we propose a new approach to code generation by llms, which we call alphacodium - a test-based, multi-stage, code-oriented iterative flow that improves the performance of llms on code problems. we tested alphacodium on a challenging code generation dataset called codecontests, which includes competitive programming problems from platforms such as codeforces. the proposed flow consistently and significantly improves results. on the validation set, for example, gpt-4 accuracy (pass[USER]) increased from 19% with a single well-designed direct prompt to 44% with the alphacodium flow. many of the principles and best practices acquired in this work, we believe, are broadly applicable to general code generation tasks. full implementation is available at: [LINK]
2024-01-16 05:50:42.000000000
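the entry above describes a test-based, multi-stage iterative flow. the sketch below shows a much-simplified loop of that kind: generate a candidate, run it against public and ai-generated tests, and feed failures back to the model. generate_code, generate_extra_tests, and run_tests are hypothetical placeholders, and the real flow has additional stages such as problem reflection and solution ranking.

```python
# simplified test-driven iterative code generation loop; all helpers are
# hypothetical placeholders, not the published implementation.
def generate_code(problem: str, feedback: str = "") -> str:
    """placeholder for an llm call that returns a candidate solution."""
    raise NotImplementedError

def generate_extra_tests(problem: str) -> list:
    """placeholder for an llm call that proposes additional ai-generated tests."""
    raise NotImplementedError

def run_tests(code: str, tests: list):
    """placeholder: execute the candidate against the tests,
    returning (all_passed, failure_report)."""
    raise NotImplementedError

def iterative_flow(problem: str, public_tests: list, max_iters: int = 5) -> str:
    tests = public_tests + generate_extra_tests(problem)
    code = generate_code(problem)
    for _ in range(max_iters):
        ok, report = run_tests(code, tests)
        if ok:
            break
        code = generate_code(problem, feedback=report)  # iterate on failing cases
    return code
```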