1 - 50 of 110
  • 1.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A study of value in agile software development organizations. 2017. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, p. 271-288. Article in journal (Refereed)
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.

  • 2.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, p. 213-227. Article in journal (Refereed)
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and have taken no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information and communication related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was used for a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners’ perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature and impact on the improvement on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information flow related issues.

  • 3.
    Andersson, Niclas
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Overview and industrial application of code generator generators. 1996. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 32, no 3, p. 185-214. Article in journal (Refereed)
    Abstract [en]

    During the past 10 to 15 years, there has been active research in the area of automatically generating the code generator part of compilers from formal specifications. However, little has been reported on the application of these systems in an industrial setting. This paper attempts to fill this gap, in addition to providing a tutorial overview of the most well-known methods. Four systems for automatic generation of code generators are described in this paper: CGSS, BEG, TWIG and BURG. CGSS is an older Graham-Glanville style system based on pattern matching through parsing, whereas BEG, TWIG, and BURG are more recent systems based on tree pattern matching combined with dynamic programming. An industrial-strength code generator previously implemented for a special-purpose language using the CGSS system is described and compared in some detail to our new implementation based on the BEG system. Several problems of integrating local and global register allocations within automatically generated code generators are described, and some solutions are proposed. In addition, the specification of a full code generator for SUN SPARC with register windows using the BEG system is described. We finally conclude that current technology of automatically generating code generators is viable in an industrial setting. However, further research needs to be done on the problem of properly integrating register allocation and instruction scheduling with instruction selection, when both are generated from declarative specifications.

  • 4.
    Asplund, Fredrik
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    The Discourse on Tool Integration Beyond Technology, A Literature Survey. 2015. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 106, p. 117-131. Article in journal (Refereed)
    Abstract [en]

    The tool integration research area emerged in the 1980s. This survey focuses on those strands of tool integration research that discuss issues beyond technology.

    We reveal a discourse centered around six frequently mentioned non-functional properties. These properties have been discussed in relation to technology and high level issues. However, while technical details have been covered, high level issues and, by extension, the contexts in which tool integration can be found, are treated indifferently. We conclude that this indifference needs to be challenged, and research on a larger set of stakeholders and contexts initiated.

    An inventory of the use of classification schemes underlines the difficulty of evolving the classical classification scheme published by Wasserman. Two frequently mentioned redefinitions are highlighted to facilitate their wider use.

    A closer look at the limited number of research methods and the poor attention to research design indicates a need for a changed set of research methods. We propose more critical case studies and method diversification through theory triangulation.

    Additionally, among disparate discourses we highlight several focusing on standardization which are likely to contain relevant findings. This suggests that open communities employed in the context of (pre-)standardization could be especially important in furthering the targeted discourse.

  • 5.
    Avritzer, Alberto
    et al.
    Siemens Corporate Research, United States .
    Cole, R
    JHU/Applied Physics Laboratory, United States .
    Weyuker, Elaine
    AT and T Labs - Research, United States.
    Methods and Opportunities for Rejuvenation in Aging Distributed Software. 2010. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 83, no 9, p. 1568-1578. Article in journal (Refereed)
    Abstract [en]

    In this paper we describe several methods for detecting the need for software rejuvenation in mission critical systems that are subjected to worm infection, and introduce new software rejuvenation algorithms. We evaluate these algorithms' effectiveness using both simulation studies and analytic modeling, by assessing the probability of mission success. The system under study emulates a Mobile Ad-Hoc Network (MANET) of processing nodes. Our analysis determined that some of our rejuvenation algorithms are quite effective in maintaining a high probability of mission success while the system is under explicit attack by a worm infection.

  • 6.
    Axelsson, Jakob
    et al.
    RISE, Swedish ICT, SICS, Software and Systems Engineering Laboratory.
    Skoglund, Mats
    RISE, Swedish ICT, SICS.
    Quality assurance in software ecosystems: A systematic literature mapping and research agenda. 2015. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 114, p. 69-81. Article in journal (Refereed)
    Abstract [en]

    Software ecosystems are becoming a common model for software development in which different actors cooperate around a shared platform. However, it is not clear what the implications are on software quality when moving from a traditional approach to an ecosystem, and this is becoming increasingly important as ecosystems emerge in critical domains such as embedded applications. Therefore, this paper investigates the challenges related to quality assurance in software ecosystems, and identifies what approaches have been proposed in the literature. The research method used is a systematic literature mapping, which however only resulted in a small set of six papers. The literature findings are complemented with a constructive approach where areas are identified that merit further research, resulting in a set of research topics that form a research agenda for quality assurance in software ecosystems. The agenda spans the entire system life-cycle, and focuses on challenges particular to an ecosystem setting, which are mainly the results of the interactions across organizational borders, and the dynamic system integration being controlled by the users.

  • 7.
    Badampudi, D.
    et al.
    Blekinge Institute of Technology, Karlskrona, Sweden.
    Wnuk, K.
    Blekinge Institute of Technology, Karlskrona, Sweden.
    Wohlin, C.
    Blekinge Institute of Technology, Karlskrona, Sweden.
    Franke, U.
    Blekinge Institute of Technology, Karlskrona, Sweden.
    Smite, D.
    Blekinge Institute of Technology, Karlskrona, Sweden.
    Cicchetti, Antonio
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A decision-making process-line for selection of software asset origins and components. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, p. 88-104. Article in journal (Refereed)
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include: in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, which are described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution in our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted and the activities that were conducted were not executed in a specific order. Therefore, the refinement of the solution into a process-line approach increases the flexibility and hence it is better in capturing the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company. 

  • 8.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Claes, Wohlin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kai, Petersen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Component Decision-making: In-house, OSS, COTS or Outsourcing: A Systematic Literature Review. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, p. 105-124. Article in journal (Refereed)
    Abstract [en]

    Component-based software systems require decisions on component origins for acquiring components. A component origin is an alternative of where to get a component from. Objective: To identify factors that could influence the decision to choose among different component origins and solutions for decision-making (for example, optimization) in the literature. Method: A systematic review study of peer-reviewed literature has been conducted. Results: In total we included 24 primary studies. The component origins compared were mainly focused on in-house vs. COTS and COTS vs. OSS. We identified 11 factors affecting or influencing the decision to select a component origin. When component origins were compared, there was little evidence on the relative (either positive or negative) effect of a component origin on the factor. Most of the solutions were proposed for in-house vs. COTS selection, and time, cost and reliability were the most considered factors in the solutions. Optimization models were the most commonly proposed technique used in the solutions. Conclusion: The topic of choosing component origins is a green field for research, and in great need of empirical comparisons between the component origins, as well as of how to decide between different combinations of them.

  • 9.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Sweden.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Sweden.
    Wohlin, Claes
    Blekinge Institute of Technology, Sweden.
    Franke, Ulrik
    RISE - Research Institutes of Sweden, ICT, SICS.
    Smite, Darja
    Blekinge Institute of Technology, Sweden.
    Cicchetti, Antonio
    Mälardalen University, Sweden.
    A decision-making process-line for selection of software asset origins and components. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, no January, p. 88-104. Article in journal (Refereed)
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include: in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, which are described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution in our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted and the activities that were conducted were not executed in a specific order. Therefore, the refinement of the solution into a process-line approach increases the flexibility and hence it is better in capturing the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company

  • 10.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Franke, Ulrik
    Swedish Institute of Computer Science, SWE.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Cicchetti, Antonio
    Mälardalens högskola, SWE.
    A decision-making process-line for selection of software asset origins and components. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 135, p. 88-104. Article in journal (Refereed)
    Abstract [en]

    Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include: in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, which are described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution in our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted and the activities that were conducted were not executed in a specific order. Therefore, the refinement of the solution into a process-line approach increases the flexibility and hence it is better in capturing the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company. © 2017 Elsevier Inc.

  • 11.
    Bagheri, M.
    et al.
    Sharif University of Technology, Tehran, Iran.
    Sirjani, Marjan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Reykjavik University, Reykjavik, Iceland.
    Khamespanah, E.
    Reykjavik University, Reykjavik, Iceland.
    Khakpour, N.
    Linnaeus University, Växjö Campus, Sweden.
    Akkaya, I.
    University of California at Berkeley, CA, United States.
    Movaghar, A.
    Sharif University of Technology, Tehran, Iran.
    Lee, E. A.
    University of California at Berkeley, CA, United States.
    Coordinated actor model of self-adaptive track-based traffic control systems. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 143, p. 116-139. Article in journal (Refereed)
    Abstract [en]

    Self-adaptation is a well-known technique to handle growing complexities of software systems, where a system autonomously adapts itself in response to changes in a dynamic and unpredictable environment. With the increasing need for developing self-adaptive systems, providing a model and an implementation platform to facilitate integration of adaptation mechanisms into the systems and assuring their safety and quality is crucial. In this paper, we target Track-based Traffic Control Systems (TTCSs) in which the traffic flows through pre-specified sub-tracks and is coordinated by a traffic controller. We introduce a coordinated actor model to design self-adaptive TTCSs and provide a general mapping between various TTCSs and the coordinated actor model. The coordinated actor model is extended to build large-scale self-adaptive TTCSs in a decentralized setting. We also discuss the benefits of using Ptolemy II as a framework for model-based development of large-scale self-adaptive systems that supports designing multiple hierarchical MAPE-K feedback loops interacting with each other. We propose a template based on the coordinated actor model to design a self-adaptive TTCS in Ptolemy II that can be instantiated for various TTCSs. We enhance the proposed template with a predictive adaptation feature. We illustrate applicability of the coordinated actor model and consequently the proposed template by designing two real-life case studies in the domains of air traffic control systems and railway traffic control systems in Ptolemy II. 

  • 12.
    Bagheri, Maryam
    et al.
    Sharif Univ Technol, Iran.
    Sirjani, Marjan
    Mälardalen University;Reykjavik Univ, Iceland.
    Khamespanah, Ehsan
    Reykjavik Univ, Iceland;Univ Tehran, Iran.
    Khakpour, Narges
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Akkaya, Ilge
    Univ Calif Berkeley, USA.
    Movaghar, Ali
    Sharif Univ Technol, Iran.
    Lee, Edward A.
    Univ Calif Berkeley, USA.
    Coordinated actor model of self-adaptive track-based traffic control systems. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 143, p. 116-139. Article in journal (Refereed)
    Abstract [en]

    Self-adaptation is a well-known technique to handle growing complexities of software systems, where a system autonomously adapts itself in response to changes in a dynamic and unpredictable environment. With the increasing need for developing self-adaptive systems, providing a model and an implementation platform to facilitate integration of adaptation mechanisms into the systems and assuring their safety and quality is crucial. In this paper, we target Track-based Traffic Control Systems (TTCSs) in which the traffic flows through pre-specified sub-tracks and is coordinated by a traffic controller. We introduce a coordinated actor model to design self-adaptive TTCSs and provide a general mapping between various TTCSs and the coordinated actor model. The coordinated actor model is extended to build large-scale self-adaptive TTCSs in a decentralized setting. We also discuss the benefits of using Ptolemy II as a framework for model-based development of large-scale self-adaptive systems that supports designing multiple hierarchical MAPE-K feedback loops interacting with each other. We propose a template based on the coordinated actor model to design a self-adaptive TTCS in Ptolemy II that can be instantiated for various TTCSs. We enhance the proposed template with a predictive adaptation feature. We illustrate applicability of the coordinated actor model and consequently the proposed template by designing two real-life case studies in the domains of air traffic control systems and railway traffic control systems in Ptolemy II.

  • 13.
    Bate, Iain
    University of York.
    Systematic approaches to understanding and evaluating design trade-offs. 2008. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 81, no 8, p. 1253-1271. Article in journal (Refereed)
    Abstract [en]

    The use of trade-off analysis as part of optimising designs has been an emerging technique for a number of years. However, only recently has much work been done with respect to systematically deriving the understanding of the system problem to be optimised and using this information as part of the design process. As systems have become larger and more complex, a need has arisen for suitable approaches. The system problem consists of design choices, measures for individual values related to quality attributes and weights to balance the relative importance of each individual quality attribute. In this paper, a method is presented for establishing an understanding of a system problem using the goal structuring notation (GSN). The motivation for this work is borne out of experience working on embedded systems in the context of critical systems where the cost of change can be large and the impact of design errors potentially catastrophic. A particular focus is deriving an understanding of the problem so that different solutions can be assessed quantitatively, which allows more definitive choices to be made. A secondary benefit is that it also enables design using heuristic search approaches, which is another area of our research. The overall approach is demonstrated through a case study which is a task allocation problem.

  • 14.
    Berglund, Erik
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Designing electronic reference documentation for software component libraries. 2003. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 68, no 1, p. 65-75. Article in journal (Refereed)
    Abstract [en]

    Contemporary software development is based on global sharing of software component libraries. As a result, programmers spend much time reading reference documentation rather than writing code, making library reference documentation a central programming tool. Traditionally, reference documentation is designed for textbooks even though it may be distributed online. However, the computer provides new dimensions of change, evolution, and adaptation that can be utilized to support efficiency and quality in software development. What is difficult to determine is how the electronic text dimensions best can be utilized in library reference documentation.

    This article presents a study of the design of electronic reference documentation for software component libraries. Results are drawn from a study in an industrial environment based on the use of an experimental electronic reference documentation (called Dynamic Javadoc or DJavadoc) used in a real-work situation for 4 months. The results from interviews with programmers indicate that the electronic library reference documentation does not require adaptation or evolution on an individual level. More importantly, reference documentation should facilitate the transfer of code from documentation to source files and also support the integration of multiple documentation sources.

  • 15.
    Boucké, N.
    et al.
    Katholieke University Leuven.
    Weyns, Danny
    Katholieke University Leuven.
    Holvoet, Tom
    Katholieke University Leuven.
    Composition of architectural models: Empirical analysis and language support. 2010. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 83, no 11, p. 2108-2127. Article in journal (Refereed)
    Abstract [en]

    Managing the architectural description (AD) of a complex software system and maintaining consistency among the different models is a demanding task. To understand the underlying problems, we analyse several non-trivial software architectures. The empirical study shows that a substantial amount of information of ADs is repeated, mainly by integrating information of different models in new models. Closer examination reveals that the absence of rigorously specified dependencies among models and the lack of support for automated composition of models are primary causes of management and consistency problems in software architecture. To tackle these problems, we introduce an approach in which compositions of models, together with relations among models, are explicitly supported in the ADL. We introduce these concepts formally and discuss a proof-of-concept instantiation of composition in xADL and its supporting tools. The approach is evaluated by comparing the original and revised ADs in an empirical study. The study indicates that our approach reduces the number of manually specified elements by 29%, and reduces the number of manual changes to elements for several realistic change scenarios by 52%.

  • 16. Bousse, Erwan
    et al.
    Leroy, Dorian
    Combemale, Benoit
    Wimmer, Manuel
    Baudry, Benoit
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Omniscient debugging for executable DSLs. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 137, p. 261-288. Article in journal (Refereed)
    Abstract [en]

    Omniscient debugging is a promising technique that relies on execution traces to enable free traversal of the states reached by a model (or program) during an execution. While a few General-Purpose Languages (GPLs) already have support for omniscient debugging, developing such a complex tool for any executable Domain Specific Language (DSL) remains a challenging and error prone task. A generic solution must: support a wide range of executable DSLs independently of the metaprogramming approaches used for implementing their semantics; be efficient for good responsiveness. Our contribution relies on a generic omniscient debugger supported by efficient generic trace management facilities. To support a wide range of executable DSLs, the debugger provides a common set of debugging facilities, and is based on a pattern to define runtime services independently of metaprogramming approaches. Results show that our debugger can be used with various executable DSLs implemented with different metaprogramming approaches. As compared to a solution that copies the model at each step, it is on average six times more efficient in memory, and at least 2.2 times faster when exploring past execution states, while only slowing down the execution 1.6 times on average.

  • 17.
    Breivold, Hongyu Pei
    et al.
    ABB Corp Res.
    Crnkovic, Ivica
    Mälardalen University, School of Innovation, Design and Engineering.
    Larsson, Magnus
    ABB Corp Res.
    Software architecture evolution through evolvability analysis. 2012. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 85, no 11, p. 2574-2592. Article in journal (Refereed)
    Abstract [en]

    Software evolvability is a multifaceted quality attribute that describes a software system's ability to easily accommodate future changes. It is a fundamental characteristic for the efficient implementation of strategic decisions, and the increasing economic value of software. For long life systems, there is a need to address evolvability explicitly during the entire software lifecycle in order to prolong the productive lifetime of software systems. However, designing and evolving software architectures is a challenging task. To improve the ability to understand and systematically analyze the evolution of software system architectures, in this paper, we describe software architecture evolution characterization, and propose an architecture evolvability analysis process that provides replicable techniques for performing activities aimed at understanding and supporting software architecture evolution. The activities are embedded in: (i) the application of a software evolvability model; (ii) a structured qualitative method for analyzing evolvability at the architectural level; and (iii) a quantitative evolvability analysis method with explicit and quantitative treatment of stakeholders' evolvability concerns and the impact of potential architectural solutions on evolvability. The qualitative and quantitative assessments manifested in the evolvability analysis process have been applied in two large-scale industrial software systems at ABB and Ericsson, with experiences and reflections described. (c) 2012 Elsevier Inc. All rights reserved.

  • 18. Brodnik, Andrej
    et al.
    Carlsson, Svante
    Blekinge Institute of Technology, Karlskrona.
    Fredman, Michael L.
    Department of Computer Science, Rutgers University, New Brunswick, NJ.
    Karlsson, Johan
    Luleå tekniska universitet.
    Munro, J. Ian
    School of Computer Science, University of Waterloo, Waterloo, Ontario.
    Worst case constant time priority queue. 2005. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 78, no 3, p. 249-256. Article in journal (Refereed)
    Abstract [en]

    We present a new data structure of size 3M bits, where M is the size of the universe at hand, for realizing a discrete priority queue. When this data structure is used in combination with a new memory topology it executes all discrete priority queue operations in O(1) worst case time. In doing so we demonstrate how an unconventional, but practically implementable, memory architecture can be employed to sidestep known lower bounds and achieve constant time performance.

  • 19.
    Butler, Simon
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Gamalielsson, Jonas
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lundell, Björn
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Brax, Christoffer
    Combitech AB, Linköping, Sweden.
    Mattsson, Anders
    Husqvarna AB, Huskvarna, Sweden.
    Gustavsson, Tomas
    PrimeKey Solutions AB, Stockholm, Sweden.
    Feist, Jonas
    RedBridge AB, Stockholm, Sweden.
    Lönroth, Erik
    Scania IT AB, Södertälje, Sweden.
    Maintaining interoperability in open source software: A case study of the Apache PDFBox project. 2020. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 159, article id 110452. Article in journal (Refereed)
    Abstract [en]

    Software interoperability is commonly achieved through the implementation of standards for communication protocols or data representation formats. Standards documents are often complex, difficult to interpret, and may contain errors and inconsistencies, which can lead to differing interpretations and implementations that inhibit interoperability. Through a case study of two years of activity in the Apache PDFBox project we examine day-to-day decisions made concerning implementation of the PDF specifications and standards in a community open source software (OSS) project. Thematic analysis is used to identify semantic themes describing the context of observed decisions concerning interoperability. Fundamental decision types are identified including emulation of the behaviour of dominant implementations and the extent to which to implement the PDF standards. Many factors influencing the decisions are related to the sustainability of the project itself, while other influences result from decisions made by external actors, including the developers of dependencies of PDFBox. This article contributes a fine grained perspective of decision-making about software interoperability by contributors to a community OSS project. The study identifies how decisions made support the continuing technical relevance of the software, and factors that motivate and constrain project activity. 

  • 20.
    Campeanu, Gabriel
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Bombardier Transportation, Sweden.
    Carlson, Jan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Mälardalen University, School of Health, Care and Social Welfare.
    Sentilles, Séverine
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Component-based development of embedded systems with GPUs. 2020. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 161, article id 110488. Article in journal (Refereed)
    Abstract [en]

    One pressing challenge of many modern embedded systems is to successfully deal with the considerable amount of data that originates from the interaction with the environment. A recent solution comes from the use of GPUs, providing a significantly improved performance for data-parallel applications. Another trend in the embedded systems domain is component-based development. However, existing component-based approaches lack specific support to develop embedded systems with GPUs. As a result, components with GPU capability need to encapsulate all the required GPU information, leading to component specialization to specific platforms, hence drastically impeding component reusability. To facilitate component-based development of embedded systems with GPUs, we introduce the concept of flexible components. This increases the design flexibility by allowing the system developer to decide component allocation (i.e., either the CPU or GPU) at a later stage of the system development, with no change to the component implementation. Furthermore, we provide means to automatically generate code for adapting flexible components corresponding to their hardware placement, as well as code for component communication. Through the introduced support, components with GPU capability are platform-independent, and can be executed, without manual adjustment, on a large variety of hardware (i.e., platforms with different GPU characteristics).

  • 21.
    Caporuscio, Mauro
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ghezzi, Carlo
    Politecnico di Milano, Italy.
    Engineering Future Internet applications: The Prime approach. 2015. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 106, p. 9-27. Article in journal (Refereed)
    Abstract [en]

    The Future Internet is envisioned as a worldwide environment connecting a large open-ended collection of heterogeneous and autonomous resources, namely Things, Services and Contents, which interact with each other anywhere and anytime. Applications will possibly emerge dynamically as opportunistic aggregation of resources available at a given time, and will be able to self-adapt according to the environment dynamics. In this context, engineers should be provided with proper modeling and programming abstractions to develop applications able to benefit from Future Internet, by being at the same time fluid, as well as dependable. Indeed, such abstractions should (i) facilitate the development of autonomous and independent interacting resources (loose coupling), (ii) deal with the run-time variability of the application in terms of involved resources (flexibility), (iii) provide mechanisms for run-time resources discovery and access (dynamism), and (iv) enable the running application to accommodate unforeseen resources (serendipity).

    To this end, Prime (P-Rest at design/run tIME) defines the P-REST architectural style, and a set of P-REST oriented modeling and programming abstractions to provide engineers with both design-time and run-time support for specifying, implementing and operating P-RESTful applications.

  • 22.
    Caporuscio, Mauro
    et al.
    Università dell Aquila, Italy.
    Marco, Antinisca Di
    Università dell Aquila, Italy.
    Inverardi, Paola
    Università dell Aquila, Italy.
    Model-based system reconfiguration for dynamic performance management. 2007. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 80, no 4, p. 455-473. Article in journal (Refereed)
    Abstract [en]

    Recently, growing attention focused on run-time management of Quality of Service (QoS) of complex software systems. In this context, system reconfiguration is considered a useful technique to manage QoS. Several reconfiguration approaches to performance management exist that help systems to maintain performance requirements at run time. However, many of them use prefixed strategies that are in general coded in the application or in the reconfiguration framework.

    In this work we propose a framework to manage performance of software systems at run time based on monitoring and model-based performance evaluation. The approach makes use of software architectures as abstractions of the managed system to avoid unnecessary details that can heavily affect the model evaluation in terms of complexity and resolution time.

  • 23.
    Ciccozzi, Federico
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Explicit connection patterns (ECP) profile and semantics for modelling and generating explicit connections in complex UML composite structures. 2016. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, p. 329-344. Article in journal (Refereed)
    Abstract [en]

    Model-driven engineering can help in mitigating ever-growing complexity of modern software systems. In this sense, the Unified Modelling Language (UML) has gained a thick share in the market of modelling languages adopted in industry. Nevertheless, the generality of UML can make it hard to build complete code generators, simulators, model-based analysis or testing tools without setting variability in the semantics of the language. To tailor semantics variability the notion of semantic variation point has been introduced in UML 2.0. Our research focuses on the semantic variation point that leaves the rules for matching multiplicities of connected instances of components and ports undecided in UML composite structures. In order to allow model analysability, simulation and code generation, this semantics needs to be set. At the same time, leaving the burden of this task to the developers is often overwhelming for complex systems. In this paper we provide a solution for supporting modelling and automatic calculation and generation of explicit interconnections in complex UML composite structures. This is achieved by (i) defining a set of connection patterns, in terms of a UML profile, and related semantic rules for driving the calculation, (ii) providing a generation algorithm to calculate the explicit interconnections.

  • 24.
    Crnkovic, Ivica
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Heineman, George
    Schmidt, Heinz
    Stafford, Judith
    Wallnau, Kurt
    Guest Editorial. 2007. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 80, no 5, p. 641-642. Article in journal (Other academic)
  • 25. Damm, Lars-Ola
    et al.
    Lundberg, Lars
    Wohlin, Claes
    A model for software rework reduction through a combination of anomaly metrics. 2008. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 81, no 11, p. 1968-1982. Article in journal (Refereed)
    Abstract [en]

    Analysis of anomalies reported during testing of a project can tell a lot about how well the processes and products work. Still, organizations rarely use anomaly reports for more than progress tracking although projects commonly spend a significant part of the development time on finding and correcting faults. This paper presents an anomaly metrics model that organizations can use for identifying improvements in the development process, i.e. to reduce the cost and lead-time spent on rework-related activities and to improve the quality of the delivered product. The model is the result of a four year research project performed at Ericsson. © 2008 Elsevier Inc. All rights reserved.

  • 26.
    Danglot, Benjamin
    et al.
    INRIA, Lille, France..
    Vera-Perez, Oscar
    INRIA, Rennes, France..
    Yu, Zhongxing
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Zaidman, Andy
    Delft Univ Technol, Delft, Netherlands..
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Baudry, Benoit
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    A snowballing literature study on test amplification. 2019. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 157, article id UNSP 110398. Article in journal (Refereed)
    Abstract [en]

    The adoption of agile approaches has put an increased emphasis on testing, resulting in extensive test suites. These suites include a large number of tests, in which developers embed knowledge about meaningful input data and expected properties as oracles. This article surveys works that exploit this knowledge to enhance manually written tests with respect to an engineering goal (e.g., improve coverage or refine fault localization). While these works rely on various techniques and address various goals, we believe they form an emerging and coherent field of research, which we coin "test amplification". We devised a first set of papers from DBLP, searching for all papers containing "test" and "amplification" in their title. We reviewed the 70 papers in this set and selected the 4 papers that fit the definition of test amplification. We use them as the seeds for our snowballing study, and systematically followed the citation graph. This study is the first that draws a comprehensive picture of the different engineering goals proposed in the literature for test amplification. We believe that this survey will help researchers and practitioners entering this new field to understand more quickly and more deeply the intuitions, concepts and techniques used for test amplification.

  • 27.
    de Oliveira Neto, Francisco Gomes
    et al.
    Chalmers, SWE.
    Torkar, Richard
    Göteborgs universitet, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gren, Lucas
    Chalmers, SWE.
    Furia, Carlo Alberto
    Universitá della Svizzera italiana, ITA.
    Huang, Z.
    Göteborgs universitet, SWE.
    Evolution of statistical analysis in empirical software engineering research: Current state and steps forward. 2019. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 156, p. 246-267. Article in journal (Refereed)
    Abstract [en]

    Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if and to what degree empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers and in the second phase of our method, we conducted a more extensive semi-automatic classification of papers spanning the years 2001–2015 and 5196 papers. Results from both review steps were used to: i) identify and analyse the predominant practices in ESE (e.g., using t-test or ANOVA), as well as relevant trends in usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard to report practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and in the practitioner's context. © 2019 Elsevier Inc.

  • 28.
    Eklund, Ulrik
    et al.
    Malmö högskola, School of Technology.
    Bosch, Jan
    Architecture for embedded open software ecosystems. 2014. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 92, p. 128-142. Article in journal (Refereed)
    Abstract [en]

    Software is prevalent in embedded products and may be critical for the success of the products, but manufacturers may view software as a necessary evil rather than as a key strategic opportunity and business differentiator. One of the reasons for this can be extensive supplier and subcontractor relationships and the cost, effort or unpredictability of the deliverables from the subcontractors are experienced as a major problem. The paper proposes open software ecosystem as an alternative approach to develop software for embedded systems, and elaborates on the necessary quality attributes of an embedded platform underlying such an ecosystem. The paper then defines a reference architecture consisting of 17 key decisions together with four architectural patterns, and provides the rationale why they are essential for an open software ecosystem platform for embedded systems in general and automotive systems in particular. The reference architecture is validated through a prototypical platform implementation in an industrial setting, providing a deeper understanding of how the architecture could be realised in the automotive domain. Four potential existing platforms, all targeted at the embedded domain (Android, OKL4, AUTOSAR and Robocop), are evaluated against the identified quality attributes to see how they could serve as a basis for an open software ecosystem platform with the conclusion that while none of them is a perfect fit they all have fundamental mechanisms necessary for an open software ecosystem approach.

  • 29.
    Eriksson, Magnus
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Börstler, Jürgen
    Umeå universitet, Institutionen för datavetenskap.
    Borg, Kjell
    BAE Systems Hägglunds AB, Örnsköldsvik, Sweden.
    Managing requirements specifications for product lines: An approach and industry case study. 2009. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 82, no 3, p. 435-447. Article in journal (Refereed)
    Abstract [en]

    Software product line development has emerged as a leading approach for software reuse. This paper describes an approach to manage natural-language requirements specifications in a software product line context. Variability in such product line specifications is modeled and managed using a feature model. The proposed approach has been introduced in the Swedish defense industry. We present a multiple-case study covering two different product lines with in total eight product instances. These were compared to experiences from previous projects in the organization employing clone-and-own reuse. We conclude that the proposed product line approach performs better than clone-and-own reuse of requirements specifications in this particular industrial context.

  • 30.
    Eriksson, Magnus
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Börstler, Jürgen
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Borg, Kjell
    BAE Systems Hägglunds AB, Örnsköldsvik, Sweden.
    Managing requirements specifications for product lines: An approach and industry case study. 2009. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 82, no 3, p. 435-447. Article in journal (Refereed)
    Abstract [en]

    Software product line development has emerged as a leading approach for software reuse. This paper describes an approach to manage natural-language requirements specifications in a software product line context. Variability in such product line specifications is modeled and managed using a feature model. The proposed approach has been introduced in the Swedish defense industry. We present a multiple-case study covering two different product lines with in total eight product instances. These were compared to experiences from previous projects in the organization employing clone-and-own reuse. We conclude that the proposed product line approach performs better than clone-and-own reuse of requirements specifications in this particular industrial context.

  • 31. Etemaadi, R
    et al.
    Lind, K
    RISE, Swedish ICT, Viktoria.
    Heldal, R
    Chaudron, M
    Quality-Driven Optimization of System Architecture: Industrial Case Study on an Automotive Sub-System. 2013. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 86, no 10, p. 2559-. Article in journal (Refereed)
  • 32.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sandström, Kristian
    RISE SICS, Västerås, Sweden.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A resource efficient framework to run automotive embedded software on multi-core ECUs. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, p. 64-83. Article in journal (Refereed)
    Abstract [en]

    The increasing functionality and complexity of automotive applications requires not only the use of more powerful hardware, e.g., multi-core processors, but also efficient methods and tools to support design decisions. Component-based software engineering proved to be a promising solution for managing software complexity and allowing for reuse. However, there are several challenges inherent in the intersection of resource efficiency and predictability of multi-core processors when it comes to running component-based embedded software. In this paper, we present a software design framework addressing these challenges. The framework includes both mapping of software components onto executable tasks, and the partitioning of the generated task set onto the cores of a multi-core processor. This paper aims at enhancing resource efficiency by optimizing the software design with respect to: 1) the inter-software-components communication cost, 2) the cost of synchronization among dependent transactions of software components, and 3) the interaction of software components with the basic software services. An engine management system, one of the most complex automotive sub-systems, is considered as a use case, and the experimental results show a reduction of up to 11.2% total CPU usage on a quad-core processor, in comparison with the common framework in the literature.

  • 33.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, Sweden.
    Lisper, Björn
    Mälardalen University, Sweden.
    Sandström, Kristian
    RISE - Research Institutes of Sweden, ICT, SICS.
    Nolte, Thomas
    Mälardalen University, Sweden.
    A resource efficient framework to run automotive embedded software on multi-core ECUs. 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 139, p. 64-83. Article in journal (Refereed)
    Abstract [en]

    The increasing functionality and complexity of automotive applications requires not only the use of more powerful hardware, e.g., multi-core processors, but also efficient methods and tools to support design decisions. Component-based software engineering proved to be a promising solution for managing software complexity and allowing for reuse. However, there are several challenges inherent in the intersection of resource efficiency and predictability of multi-core processors when it comes to running component-based embedded software. In this paper, we present a software design framework addressing these challenges. The framework includes both mapping of software components onto executable tasks, and the partitioning of the generated task set onto the cores of a multi-core processor. This paper aims at enhancing resource efficiency by optimizing the software design with respect to: 1) the inter-software-components communication cost, 2) the cost of synchronization among dependent transactions of software components, and 3) the interaction of software components with the basic software services. An engine management system, one of the most complex automotive sub-systems, is considered as a use case, and the experimental results show a reduction of up to 11.2% total CPU usage on a quad-core processor, in comparison with the common framework in the literature.

  • 34.
    Felderer, Michael
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Holmström Olsson, Helena
    Malmö universitet, SWE.
    Rabiser, Rick
    Johannes Kepler Universitat, AUT.
    Introduction to the special issue on quality engineering and management of software-intensive systems2019In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 149, p. 533-534Article in journal (Refereed)
  • 35. Felderer, Michael
    et al.
    Olsson Holmström, Helena
    Malmö University, Faculty of Technology and Society (TS), Department of Computer Science and Media Technology (DVMT).
    Rabiser, Rick
    Introduction to the special issue on quality engineering and management of software-intensive systems2019In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 149, p. 533-534Article in journal (Other academic)
  • 36. Forsman, Mattias
    et al.
    Glad, Andreas
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Algorithms for Automated Live Migration of Virtual Machines2015In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 101, p. 110-126Article in journal (Refereed)
    Abstract [en]

    We present two strategies to balance the load in a system with multiple virtual machines (VMs) through automated live migration. When the push strategy is used, overloaded hosts try to migrate workload to less loaded nodes. On the other hand, when the pull strategy is employed, the lightly loaded hosts take the initiative to offload overloaded nodes. The performance of the proposed strategies was evaluated through simulations. We have discovered that the strategies complement each other, in the sense that each strategy comes out as “best” under different types of workload. For example, the pull strategy is able to quickly re-distribute the load of the system when the load is in the low-to-medium range, while the push strategy is faster when the load is medium-to-high. Our evaluation shows that when adding or removing a large number of virtual machines in the system, the “best” strategy can re-balance the system in 4–15 minutes.
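
    To make the two strategies concrete, here is a minimal sketch (Python; the thresholds, the host dictionaries, and the choice of which VM to move are illustrative assumptions, not the paper's simulation model) of one push round and one pull round over a set of hosts.

        # In "push", an overloaded host offers a VM to the least-loaded host;
        # in "pull", a lightly loaded host requests a VM from the most-loaded host.
        HIGH, LOW = 0.8, 0.3  # load thresholds (fraction of host capacity)

        def host_load(host):
            return sum(host["vms"].values())

        def push_step(hosts):
            """Each overloaded host pushes its smallest VM to the least-loaded host."""
            moves = []
            for src in hosts:
                if host_load(src) > HIGH and src["vms"]:
                    dst = min(hosts, key=host_load)
                    if dst is not src:
                        vm = min(src["vms"], key=src["vms"].get)
                        dst["vms"][vm] = src["vms"].pop(vm)
                        moves.append((vm, src["name"], dst["name"]))
            return moves

        def pull_step(hosts):
            """Each lightly loaded host pulls a VM from the most-loaded host."""
            moves = []
            for dst in hosts:
                if host_load(dst) < LOW:
                    src = max(hosts, key=host_load)
                    if src is not dst and src["vms"]:
                        vm = max(src["vms"], key=src["vms"].get)
                        dst["vms"][vm] = src["vms"].pop(vm)
                        moves.append((vm, src["name"], dst["name"]))
            return moves

        if __name__ == "__main__":
            hosts = [{"name": "h1", "vms": {"a": 0.5, "b": 0.4}},
                     {"name": "h2", "vms": {"c": 0.1}}]
            print(push_step(hosts))   # h1 is overloaded and pushes VM "b" to h2
            print(pull_step(hosts))   # afterwards no host is below LOW, so no pulls

    The point of the sketch is only the direction of initiative: the overloaded host acts in push, while the under-loaded host acts in pull.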

  • 37.
    Fritzson, Peter
    Linköping University.
    Symbolic Debugging through Incremental Compilation in an Integrated Environment1983In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 3, no 4, p. 285-294Article in journal (Refereed)
    Abstract [en]

    It is demonstrated that fine-grained incremental compilation is a relevant technique when implementing powerful debuggers and incremental programming environments. A debugger and an incremental compiler for Pascal have been implemented in the DICE system (Distributed Incremental Compiling Environment). Incremental compilation is performed at the statement level, which makes it useful for the debugger, which also operates at the statement level. The quality of code produced by the incremental compiler approaches that required for production use. The algorithms involved in incremental compilation are not very complicated, but they require information that is easily available only in an integrated system, like DICE, where editor, compiler, linker, debugger and program database are well integrated into a single system. The extra information that has to be kept around, like the cross-reference database, can be used for multiple purposes, which makes total system economics favorable.
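
    As an illustration of statement-level incremental compilation (a rough sketch, not the DICE implementation; the cache layout and the compile_stmt stand-in are assumptions made for illustration), only statements whose source text changed between edits are retranslated:

        # Toy statement-level incremental "compiler": reuse the compiled form of
        # every statement whose source is unchanged, recompile only the rest.
        cache = {}  # statement id -> (source text, compiled form)

        def compile_stmt(src):
            return f"<code for: {src}>"   # stand-in for real code generation

        def incremental_compile(program):
            """program: dict statement id -> source text. Returns compiled program."""
            compiled = {}
            for stmt_id, src in program.items():
                old = cache.get(stmt_id)
                if old and old[0] == src:
                    compiled[stmt_id] = old[1]             # unchanged, reuse
                else:
                    compiled[stmt_id] = compile_stmt(src)  # changed, recompile
                    cache[stmt_id] = (src, compiled[stmt_id])
            return compiled

        if __name__ == "__main__":
            v1 = {1: "x := 1", 2: "writeln(x)"}
            v2 = {1: "x := 2", 2: "writeln(x)"}
            incremental_compile(v1)
            incremental_compile(v2)   # only statement 1 is recompiled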

  • 38.
    Fritzson, Peter
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Auguston, Mikhail
    New Mexico State University, Las Cruces, USA.
    Shahmehri, Nahid
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Using assertions in declarative and operational models for automated debugging1994In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 25, no 3, p. 223-239Article in journal (Refereed)
    Abstract [en]

    This article presents an improved method for semiautomatic bug localization, by extending our previous generalized algorithmic debugging technique (GADT) [Fritzson et al. 1991], which uses declarative assertions about program units such as procedures and operational assertions about program behavior. For example, functional properties are best expressed through declarative assertions about procedure units, whereas order-dependent properties, or sequencing constraints in general, are more easily expressed using operational semantics. A powerful assertion language, called FORMAN, has been developed to this end. Such assertions can be collected into assertion libraries, which can greatly increase the degree of automation in bug localization. The long-range goal of this work is a semiautomatic debugging and testing system, which can be used during large-scale program development of nontrivial programs. To our knowledge, the extended GADT (EGADT) presented here is the first method that uses powerful operational assertions integrated with algorithmic debugging. In addition to providing support for local-level bug localization within procedures (which is not handled well by basic algorithmic debugging), the operational assertions reduce the number of irrelevant questions to the programmer during bug localization, thus further improving bug localization. A prototype of the GADT, implemented in Pascal, supports debugging in a subset of Pascal. An interpreter of FORMAN assertions has also been implemented in Pascal. During bug localization, both declarative and operational assertions are evaluated on execution traces.
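
    The two assertion flavours mentioned in the abstract can be illustrated in miniature. The sketch below is not FORMAN and does not follow its syntax; the decorator, the trace format, and the example rules are assumptions made purely for illustration. It shows a declarative assertion (a postcondition on a procedure) and an operational assertion (a sequencing constraint checked over an execution trace).

        # Declarative vs. operational assertions, in miniature (not FORMAN).
        def with_postcondition(post):
            """Declarative assertion: a relation between arguments and result."""
            def wrap(fn):
                def inner(*args):
                    result = fn(*args)
                    assert post(result, *args), f"postcondition of {fn.__name__} failed"
                    return result
                return inner
            return wrap

        @with_postcondition(lambda res, xs: res == sorted(xs))
        def my_sort(xs):
            return sorted(xs)

        def check_sequencing(events):
            """Operational assertion: 'write' may only occur after 'open', and
            every opened handle must eventually be closed."""
            open_handles = set()
            for op, handle in events:
                if op == "open":
                    open_handles.add(handle)
                elif op == "write":
                    assert handle in open_handles, f"write before open on {handle}"
                elif op == "close":
                    open_handles.discard(handle)
            assert not open_handles, f"never closed: {open_handles}"

        if __name__ == "__main__":
            my_sort([3, 1, 2])
            check_sequencing([("open", "f"), ("write", "f"), ("close", "f")])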

  • 39.
    Galster, Matthias
    et al.
    Univ Canterbury, Canterbury, New Zealand.
    Avgeriou, Paris
    Univ Groningen, NL-9700 AB Groningen, Netherlands.
    Mannisto, Tomi
    Univ Helsinki, FIN-00014 Helsinki, Finland.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Variability in software architecture: State of the art2014In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 91, p. 1-2Article in journal (Other academic)
  • 40.
    Gamalielsson, Jonas
    et al.
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Informatics.
    Lundell, Björn
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Informatics.
    Sustainability of Open Source software communities beyond a fork: How and why has the LibreOffice project evolved?2014In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 89, no 1, p. 128-145Article in journal (Refereed)
    Abstract [en]

    Many organisations are dependent upon long-term sustainable software systems and associated communities. In this paper we consider long-term sustainability of Open Source software communities in Open Source software projects involving a fork. There is currently a lack of studies in the literature that address how specific Open Source software communities are affected by a fork. We report from a study aiming to investigate the developer community around the LibreOffice project, which is a fork from the OpenOffice.org project. In so doing, our analysis also covers the OpenOffice.org project and the related Apache OpenOffice project. The results strongly suggest a long-term sustainable LibreOffice community and that there are no signs of stagnation in the LibreOffice project 33 months after the fork. Our analysis provides details on developer communities for the LibreOffice and Apache OpenOffice projects and specifically concerning how they have evolved from the OpenOffice.org community with respect to project activity, developer commitment, and retention of committers over time. Further, we present results from an analysis of first hand experiences from contributors in the LibreOffice community. Findings from our analysis show that Open Source software communities can outlive Open Source software projects and that LibreOffice is perceived by its community as supportive, diversified, and independent. The study contributes new insights concerning challenges related to long-term sustainability of Open Source software communities.

  • 41.
    Garcia-Valls, Marisol
    et al.
    Univ Carlos III Madrid, Spain.
    Perez-Palacin, Diego
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Mirandola, Raffaela
    Politecn Milan, Italy.
    Pragmatic cyber physical systems design based on parametric models2018In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 144, p. 559-572Article in journal (Refereed)
    Abstract [en]

    The adaptive nature of cyber physical systems (CPS) comes from the fact that they are deeply immersed in the physical environments that are inherently dynamic. CPS also have stringent requirements on real-time operation and safety that are fulfilled by rigorous model design and verification. In the real-time literature, adaptation is mostly limited to off-line modeling of well known and predicted transitions; but this is not appropriate for cyber physical systems as each transition can have unique and unknown characteristics. In the adaptive systems literature, adaptation solutions are silent about timely execution and about the underlying hardware possibilities that can potentially speed up execution. This paper presents a solution for designing adaptive cyber physical systems by using parametric models that are verified during the system execution (i.e., online), so that adaptation decisions are made based on the timing requirements of each particular adaptation event. Our approach allows the system to undergo timely adaptations that exploit the potential parallelism of the software and its execution over multicore processors. We exemplify the approach on a specific use case with autonomous vehicles communication, showing its applicability for situations that require time-bounded online adaptations.
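
    The core idea of deciding an adaptation online, within the time bound of the triggering event, can be sketched with a deliberately crude cost model (the segment list, the ideal-speedup assumption for parallel parts, and the numbers are illustrative assumptions, not the paper's parametric models):

        # Before committing to an adaptation, check that its estimated execution
        # time -- exploiting the available cores -- fits the event's time bound.
        def adaptation_fits(segments, n_cores, deadline_ms):
            """segments: list of (wcet_ms, parallelisable) pairs for the adaptation."""
            total = 0.0
            for wcet, parallel in segments:
                total += wcet / n_cores if parallel else wcet
            return total <= deadline_ms

        if __name__ == "__main__":
            plan = [(4.0, False), (12.0, True), (2.0, False)]  # ms
            # Accept the reconfiguration only if it can finish within 10 ms on 4 cores.
            if adaptation_fits(plan, n_cores=4, deadline_ms=10.0):
                print("adapt now")
            else:
                print("keep current configuration")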

  • 42.
    Garousi, Vahid
    et al.
    Wageningen University and Research Centre, NLD.
    Giray, Görkem
    Independent Researcher, TUR.
    Tüzün, Eray
    Bilkent Üniversitesi, TUR.
    Catal, Cagatay
    Wageningen University and Research Centre, NLD.
    Felderer, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Aligning software engineering education with industrial needs: A meta-analysis2019In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 156, p. 65-83Article in journal (Refereed)
    Abstract [en]

    Context: According to various reports, many software engineering (SE) graduates often face difficulties when beginning their careers, which is mainly due to misalignment of the skills learned in university education with what is needed in the software industry. Objective: Our objective is to perform a meta-analysis to aggregate the results of the studies published in this area to provide a consolidated view on how to align SE education with industry needs, to identify the most important skills and also existing knowledge gaps. Method: To synthesize the body of knowledge, we performed a systematic literature review (SLR), in which we systematically selected a pool of 35 studies and then conducted a meta-analysis using data extracted from those studies. Results: Via a meta-analysis and using data from 13 countries and over 4,000 data points, highlights of the SLR include: (1) software requirements, design, and testing are the most important skills; and (2) the greatest knowledge gaps are in configuration management, SE models and methods, SE process, design (and architecture), as well as in testing. Conclusion: This paper provides implications for both educators and hiring managers by listing the most important SE skills and the knowledge gaps in the industry. © 2019 Elsevier Inc.

  • 43.
    Grante, Christian
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mechanical Engineering, Fluid and Mechanical Engineering Systems.
    Papadopoulos, Y
    Evolving car designs using model-based automated safety analysis and optimisation techniques2004In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. x, no xArticle in journal (Refereed)
  • 44.
    Graziotin, Daniel
    et al.
    Universitat Stuttgart, DEU.
    Fagerholm, Fabian
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wang, Xiaofeng
    Free University of Bozen-Bolzano, ITA.
    Abrahamsson, Pekka
    Jyvaskylan Yliopisto, FIN.
    What happens when software developers are (un)happy2018In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 140, p. 32-47Article in journal (Refereed)
    Abstract [en]

    The growing literature on affect among software developers mostly reports on the linkage between happiness, software quality, and developer productivity. Understanding happiness and unhappiness in all their components – positive and negative emotions and moods – is an attractive and important endeavor. Scholars in industrial and organizational psychology have suggested that understanding happiness and unhappiness could lead to cost-effective ways of enhancing working conditions and job performance, and to limiting the occurrence of psychological disorders. Our comprehension of the consequences of (un)happiness among developers is still too shallow, being mainly expressed in terms of development productivity and software quality. In this paper, we study what happens when developers are happy and unhappy while developing software. Qualitative data analysis of responses given by 317 questionnaire participants identified 42 consequences of unhappiness and 32 of happiness. We found consequences of happiness and unhappiness that are beneficial and detrimental for developers’ mental well-being, the software development process, and the produced artifacts. Our classification scheme, available as open data, enables new happiness research opportunities of cause-effect type, and it can act as a guideline for practitioners for identifying damaging effects of unhappiness and for fostering happiness on the job. © 2018

  • 45.
    Gren, Lucas
    et al.
    Chalmers, SWE.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Group development and group maturity when building agile teams: A qualitative and quantitative investigation at eight large companies2017In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 124, p. 104-119Article in journal (Refereed)
    Abstract [en]

    The agile approach to projects focuses more on close-knit teams than traditional waterfall projects, which means that aspects of group maturity become even more important. This psychological aspect is not much researched in connection to the building of an “agile team.” The purpose of this study is to investigate how building agile teams is connected to a group development model taken from social psychology. We conducted ten semi-structured interviews with coaches, Scrum Masters, and managers responsible for the agile process from seven different companies, and collected survey data from 66 group members from four companies (a total of eight different companies). The survey included an agile measurement tool and one part of the Group Development Questionnaire. The results show that the practitioners define group developmental aspects as key factors to a successful agile transition. Also, the quantitative measurement of agility was significantly correlated to the group maturity measurement. We conclude that adding these psychological aspects to the description of the “agile team” could increase the understanding of agility and partly help define an “agile team.” We propose that future work should develop specific guidelines for how software development teams at different maturity levels might adopt agile principles and practices differently.

  • 46. Hansson, Christina
    et al.
    Dittrich, Yvonne
    Gustavsson, Björn
    Zarnaak, Stefan
    How agile are industrial software development practices?2006In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 79, no 9Article in journal (Refereed)
    Abstract [en]

    Representatives from the agile development movement claim that agile ways of developing software are more fitting to what is actually needed in industrial software development. If this is so, successful industrial software development should already exhibit agile characteristics. This article therefore aims to examine whether that is the case. It presents an analysis of interviews with software developers from five different companies. We asked about concrete projects, both about the project models and the methods used, but also about the real situation in their daily work. Based on the interviews, we describe and then analyze their development practices. The analysis shows that the software providers we interviewed have more agile practices than they might themselves be aware of. However, plans and more formal development models also are well established. The conclusions answer the question posed in the title: It all depends! It depends on which of the different principles you take to judge agility. And it depends on the characteristics not only of the company but also of the individual project.

  • 47.
    Hjertström, Andreas
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Nyström, Dag
    Mälardalen University, School of Innovation, Design and Engineering.
    Sjödin, Mikael
    Mälardalen University, School of Innovation, Design and Engineering.
    Data Management for Component-Based Embedded Real-Time Systems: the Database Proxy Approach2012In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 85, no 4, p. 821-834Article in journal (Refereed)
    Abstract [en]

    We introduce the concept of database proxies intended to mitigate the gap between two disjoint productivity-enhancing techniques: Component Based Software Engineering (CBSE) and Real-Time Database Management Systems (RTDBMS). The two techniques promote opposing design goals and their coexistence is neither obvious nor intuitive. CBSE promotes encapsulation and decoupling of component internals from the component environment, whilst an RTDBMS provides mechanisms for efficient and predictable global data sharing. A component with direct access to an RTDBMS is dependent on that specific RTDBMS and may not be usable in an alternative environment. For components to remain encapsulated and reusable, database proxies decouple components from an underlying database residing in the component framework, while providing temporally predictable access to data maintained in a database. Our approach provides access to features such as extensive data modeling tools, predictable access to hard real-time data, dynamic access to soft real-time data using standardized queries, and controlled data sharing; thus allowing developers to employ the full potential of both CBSE and an RTDBMS. Our approach primarily targets embedded systems in which a subset of the functionality has real-time requirements. The implementation results show that the benefits of using proxies do not come at the expense of significant run-time overheads or less accurate timing predictions.
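
    The decoupling role of a database proxy can be sketched as follows (Python; the class and method names are illustrative assumptions, not the paper's API): the component is written against plain data values, while a proxy, configured by the framework, moves those values to and from a named item in the database.

        # Minimal database-proxy idea: the component has no direct database dependency.
        class Database:
            def __init__(self):
                self._items = {}
            def read(self, name):
                return self._items.get(name)
            def write(self, name, value):
                self._items[name] = value

        class DatabaseProxy:
            """Binds one component port to one database item."""
            def __init__(self, db, item):
                self.db, self.item = db, item
            def pull(self):            # database -> component input port
                return self.db.read(self.item)
            def push(self, value):     # component output port -> database
                self.db.write(self.item, value)

        def speed_limiter(current_speed, limit):
            """A component: pure logic, unaware of where its data lives."""
            return min(current_speed, limit)

        if __name__ == "__main__":
            db = Database()
            db.write("vehicle_speed", 130)
            in_proxy = DatabaseProxy(db, "vehicle_speed")
            out_proxy = DatabaseProxy(db, "limited_speed")
            out_proxy.push(speed_limiter(in_proxy.pull(), limit=110))
            print(db.read("limited_speed"))   # 110

    Swapping the in-memory Database for an RTDBMS would then only affect the proxy configuration, not the component.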

  • 48.
    Inam, Rafia
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Carlson, Jan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sjödin, Mikael
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Kuncar, Jiri
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Predictable integration and reuse of executable real-time components2014In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 91, p. 147-162Article in journal (Refereed)
    Abstract [en]

    We present the concept of runnable virtual node (RVN) as a means to achieve predictable integration and reuse of executable real-time components in embedded systems. A runnable virtual node is a coarse-grained software component that provides functional and temporal isolation with respect to its environment. Its interaction with the environment is bounded both by a functional and a temporal interface, and the validity of its internal temporal behaviour is preserved when integrated with other components or when reused in a new environment. Our realization of RVN exploits the latest techniques for hierarchical scheduling to achieve temporal isolation, and the principles from component-based software engineering to achieve functional isolation. It uses a two-level deployment process, i.e. deploying functional entities to RVNs and then deploying RVNs to physical nodes, and thus also gives development benefits with respect to composability, system integration, testing, and validation. In addition, we have implemented a server-based inter-RVN communication strategy, not only to support the predictable integration and reuse properties of RVNs by keeping the communication code in a separate server, but also to increase the maintainability and flexibility to change the communication code without affecting the timing properties of RVNs. We have applied our approach to a case study, implemented in the ProCom component technology executing on top of a FreeRTOS-based hierarchical scheduling framework, and present the results as a proof-of-concept.
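
    The temporal interface of an RVN (a budget that may be consumed in each period) can be pictured with a small discrete-time model. This only illustrates budget enforcement, not the hierarchical scheduler, the ProCom deployment, or the server-based communication; all names and numbers below are assumptions.

        # Toy budget enforcement: each RVN has (budget, period) and may consume at
        # most 'budget' time units per period, regardless of its pending demand.
        def simulate(rvns, horizon):
            """rvns: list of dicts with 'name', 'budget', 'period', 'demand', 'remaining'."""
            log = []
            for t in range(horizon):
                for rvn in rvns:
                    if t % rvn["period"] == 0:          # replenish at period start
                        rvn["remaining"] = rvn["budget"]
                    if rvn["demand"] > 0 and rvn["remaining"] > 0:
                        rvn["demand"] -= 1              # one unit of work executed
                        rvn["remaining"] -= 1
                        log.append((t, rvn["name"]))
            return log

        if __name__ == "__main__":
            rvns = [{"name": "ctrl", "budget": 2, "period": 5, "demand": 6, "remaining": 0},
                    {"name": "log",  "budget": 1, "period": 5, "demand": 6, "remaining": 0}]
            for t, name in simulate(rvns, horizon=15):
                print(t, name)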

  • 49.
    Jayaputera, G. T.
    et al.
    Monash University, Melbourne, VIC.
    Zaslavsky, Arkady
    Loke, Seng Wai
    La Trobe University.
    Enabling run-time composition and support for heterogeneous pervasive multi-agent systems2007In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 80, no 12, p. 2039-2062Article in journal (Refereed)
    Abstract [en]

    User needs-driven and computer-supported development of pervasive, heterogeneous and dynamic multi-agent systems remains a great challenge for the agent research community. This paper presents an innovative approach to composing, validating and supporting multi-agent systems at run-time. Multi-agent systems (MASs) can and should be assembled quasi-automatically and dynamically based on high-level user specifications which are transformed into a shared and common goal-mission. Dynamically generating agents could also be supported as a pervasive service. Heterogeneity of MASs refers to the diverse functionality and constituency of the system, which includes mobile as well as host-associated software agents. This paper proposes and demonstrates an on-demand and just-in-time agent composition approach, combined with run-time support for MASs. Run-time support is based on mission cost-efficiency and shared objectives, which enable termination, generation, injection and replacement of software agents as the mission evolves at run-time. We present the formal underpinning of our approach and describe the prototype tool, called eHermes, which has been implemented using available agent platforms. Analysis and results of evaluating eHermes are presented and discussed.
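
    A very small sketch of the composition-and-supervision loop described above (not eHermes; the registry, the capability list, and the cost-efficiency metric are illustrative assumptions): agents are generated from a mission specification, and an agent whose efficiency drops is terminated and replaced at run-time.

        # On-demand composition plus run-time replacement, in miniature.
        AGENT_REGISTRY = {          # capability -> factory for an agent of that kind
            "collect": lambda: {"capability": "collect", "efficiency": 1.0},
            "analyse": lambda: {"capability": "analyse", "efficiency": 1.0},
            "report":  lambda: {"capability": "report",  "efficiency": 1.0},
        }

        def compose_mission(required_capabilities):
            """Generate one agent per capability named in the mission specification."""
            return [AGENT_REGISTRY[cap]() for cap in required_capabilities]

        def supervise(agents, threshold=0.5):
            """Replace agents whose observed cost-efficiency dropped below the threshold."""
            return [AGENT_REGISTRY[a["capability"]]() if a["efficiency"] < threshold else a
                    for a in agents]

        if __name__ == "__main__":
            mission = ["collect", "analyse", "report"]
            agents = compose_mission(mission)
            agents[1]["efficiency"] = 0.2        # the analyser degrades at run-time
            agents = supervise(agents)           # ...and is replaced with a fresh one
            print([a["efficiency"] for a in agents])  # [1.0, 1.0, 1.0]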

  • 50.
    Jägemar, Marcus
    et al.
    Ericsson, Stockholm, Sweden.
    Eldh, Sigrid
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ermedahl, Andreas
    Ericsson, Stockholm, Sweden.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Automatic Message Compression with Overload Protection2016In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, no 1 nov, p. 209-222Article in journal (Refereed)
    Abstract [en]

    In this paper, we show that it is possible to increase the message throughput of a large-scale industrial system by selectively compressing messages. The demand for new high-performance message processing systems conflicts with the cost effectiveness of legacy systems. The result is often a mixed environment with several concurrent system generations. Such a mixed environment does not allow a complete replacement of the communication backbone to provide the increased messaging performance. Thus, performance-enhancing software solutions are highly attractive. Our contribution is 1) an online compression mechanism that automatically selects the most appropriate compression algorithm to minimize the message round trip time; 2) a compression overload mechanism that ensures ample resources for other processes sharing the same CPU. We have integrated 11 well-known compression algorithms/configurations and tested them with production node traffic. In our target system, automatic message compression results in a 9.6% reduction of message round trip time. The selection procedure is fully automatic and does not require any manual intervention. The automatic behavior makes it particularly suitable for large systems where it is difficult to predict future system behavior.
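
    The selection mechanism and the overload guard can be approximated with standard-library codecs (a sketch only; the paper integrates 11 production algorithms and measures real round trip times, whereas the codecs, link speed, load threshold, and cost model here are assumptions):

        # Online codec selection with overload protection: estimate a "round trip"
        # cost per codec from compression time plus transmission time of the
        # compressed payload, and skip compression entirely when CPU load is high.
        import bz2, lzma, time, zlib

        CODECS = {"none": lambda d: d,
                  "zlib": zlib.compress,
                  "bz2": bz2.compress,
                  "lzma": lzma.compress}

        def pick_codec(sample, link_bytes_per_s, cpu_load, load_limit=0.8):
            if cpu_load > load_limit:          # overload protection: add no CPU work
                return "none"
            best, best_cost = "none", float("inf")
            for name, compress in CODECS.items():
                start = time.perf_counter()
                payload = compress(sample)
                cost = (time.perf_counter() - start) + len(payload) / link_bytes_per_s
                if cost < best_cost:
                    best, best_cost = name, cost
            return best

        if __name__ == "__main__":
            msg = b"signal=42;" * 2000
            print(pick_codec(msg, link_bytes_per_s=1_000_000, cpu_load=0.3))
            print(pick_codec(msg, link_bytes_per_s=1_000_000, cpu_load=0.95))  # "none"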
