• 1.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Performance Tradeoffs in Software Transactional Memory (2010). Independent thesis, advanced level (Master's degree, two years). Student thesis.

Transactional memory (TM), a new programming paradigm, is one of the latest approaches to writing programs for next-generation multicore and multiprocessor systems. TM is an alternative to lock-based programming. It is a promising solution to the serious and growing problem programmers face in developing programs for Chip Multi-Processor (CMP) architectures, because it simplifies synchronization on shared data structures in a way that is scalable and composable. Software Transactional Memory (STM), a purely software-based realization of TM, can be defined as a non-blocking synchronization mechanism in which sequential objects are automatically converted into concurrent objects. In this thesis, we present a performance comparison of four different STM implementations: RSTM by V. J. Marathe et al., TL2 by D. Dice et al., TinySTM by P. Felber et al., and SwissTM by A. Dragojevic et al. The comparison gives a deeper understanding of the potential tradeoffs involved and helps in assessing which design choices and configuration parameters may lead to better and more efficient STMs. In particular, the suitability of each STM is analyzed against the others. A literature study was carried out to select STM implementations for experimentation, and an experiment was performed to measure the performance tradeoffs between these implementations. The empirical evaluations done as part of this thesis conclude that SwissTM has significantly higher throughput than the other state-of-the-art STM implementations, namely RSTM, TL2, and TinySTM, performing consistently well on the execution-time and aborts-per-commit measures on the STAMP benchmarks. The transaction retry rate measurements show that TL2 performs better than RSTM, TinySTM and SwissTM.
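
To make the two headline metrics concrete, here is a minimal Python sketch, not tied to any of the four surveyed STMs: four threads run a toy optimistic transaction over one shared cell (speculative read, validate, commit), and throughput and aborts per commit are derived from the commit and abort counts.

```python
# Toy illustration of the two metrics compared in the thesis; the
# "transaction" is a hypothetical optimistic read-validate-commit loop.
import threading, time

class Cell:
    def __init__(self):
        self.value, self.version = 0, 0
        self.lock = threading.Lock()

cell = Cell()
commits, aborts = [0] * 4, [0] * 4

def worker(i, n_tx=10_000):
    for _ in range(n_tx):
        while True:
            v, ver = cell.value, cell.version      # speculative read
            new = v + 1                            # transaction body
            with cell.lock:                        # commit: validate + write
                if cell.version == ver:
                    cell.value, cell.version = new, ver + 1
                    commits[i] += 1
                    break
                aborts[i] += 1                     # conflict detected: retry

start = time.perf_counter()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.perf_counter() - start

print(f"throughput: {sum(commits) / elapsed:.0f} commits/s")
print(f"aborts per commit: {sum(aborts) / sum(commits):.3f}")
```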

• 2.
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
Designing Self-Adaptive Software Systems with Reuse (2018). Doctoral thesis, comprehensive summary (Other academic).

Modern software systems are increasingly more connected, pervasive, and dynamic, as such, they are subject to more runtime variations than legacy systems. Runtime variations affect system properties, such as performance and availability. The variations are difficult to anticipate and thus mitigate in the system design.

Self-adaptive software systems were proposed as a solution to monitor and adapt systems in response to runtime variations. Research has established a vast body of knowledge on engineering self-adaptive systems. However, there is a lack of systematic process support that leverages such engineering knowledge and provides for systematic reuse for self-adaptive systems development.

This thesis proposes Autonomic Software Product Lines (ASPL), a strategy for developing self-adaptive software systems with systematic reuse. The strategy exploits the separation of a managed and a managing subsystem and describes three steps that transform and integrate a domain-independent managing system platform into a domain-specific software product line for self-adaptive software systems.

Applying the ASPL strategy is, however, not straightforward, as it involves challenges related to variability and uncertainty. We analyzed variability and uncertainty to understand their causes and effects. Based on the results, we developed the Autonomic Software Product Lines engineering (ASPLe) methodology, which provides process support for the ASPL strategy. The ASPLe comprises three processes: 1) ASPL Domain Engineering, 2) Specialization, and 3) Integration. Each process maps to one of the steps in the ASPL strategy and defines roles, work products, activities, and workflows for requirements, design, implementation, and testing. The focus of this thesis is on requirements and design.

We validate the ASPLe through demonstration and evaluation. We developed three demonstrator product lines using the ASPLe. We also conducted an extensive case study to evaluate key design activities in the ASPLe with experiments, questionnaires, and interviews. The results show a statistically significant increase in quality and reuse levels for self-adaptive software systems designed using the ASPLe compared to current engineering practices.

• 3.
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
Architectural reasoning for dynamic software product lines (2013). In: Proceedings of the 17th International Software Product Line Conference co-located workshops, ACM Press, 2013, pp. 117-124. Conference paper (Refereed).

Software quality is critical in today's software systems. A challenge is the trade-off situation architects face in the design process. Designers often have two or more alternatives, which must be compared and put into context before a decision is made. The challenge becomes even more complex for dynamic software product lines, where domain designers have to take runtime variations into consideration as well. To address this problem, we propose extensions to an architectural reasoning framework with constructs/artifacts to define and model a domain's scope and dynamic variability. The extended reasoning framework encapsulates knowledge to understand and reason about domain quality behavior and self-adaptation as a primary variability mechanism. The framework is demonstrated for a self-configuration property, self-upgradability, on an educational product line.

• 4.
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM), Institutionen för datavetenskap (DV).
ASPLe: a methodology to develop self-adaptive software systems with reuse (2017). Report (Other academic).

Advances in computing technologies are pushing software systems and their operating environments to become more dynamic and complex. The growing complexity of software systems, coupled with uncertainties induced by runtime variations, leads to challenges in software analysis and design. Self-Adaptive Software Systems (SASS) have been proposed as a solution to address design-time complexity and uncertainty by adapting software systems at runtime. A vast body of knowledge on engineering self-adaptive software systems has been established. However, to the best of our knowledge, little or no work has considered systematic reuse of this knowledge. To that end, this study contributes an Autonomic Software Product Lines engineering (ASPLe) methodology. The ASPLe is based on a multi-product-lines strategy which leverages systematic reuse through separation of application and adaptation logic. It provides developers with repeatable process support to design and develop self-adaptive software systems with reuse across several application domains. The methodology is composed of three core processes, and each process is organized for requirements, design, implementation, and testing activities. To exemplify and demonstrate the use of the ASPLe methodology, three application domains are used as running examples throughout the report.

• 5.
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap (DV).
Rigorous architectural reasoning for self-adaptive software systems (2016). In: Proceedings: First Workshop on Qualitative Reasoning about Software Architectures, QRASA 2016 / [ed] Lisa O'Conner, IEEE, 2016, pp. 11-18. Conference paper (Refereed).

Designing a software architecture requires architectural reasoning, i.e., activities that translate requirements to an architecture solution. Architectural reasoning is particularly challenging in the design of product lines of self-adaptive systems, which involve variability both at development time and at runtime. In previous work we developed an extended Architectural Reasoning Framework (eARF) to address this challenge. However, evaluation of the eARF showed that the framework lacked support for rigorous reasoning, ensuring that the design complies with the requirements. In this paper, we introduce an analytical framework that enhances eARF with such support. The framework defines a set of artifacts and a series of activities. Artifacts include templates to specify domain quality attribute scenarios, concrete models, and properties. The activities support architects with transforming requirement scenarios into architecture models that comply with required properties. Our focus in this paper is on architectural reasoning support for a single product instance. We illustrate the benefits of the approach by applying it to an example client-server system, and outline challenges for future work. © 2016 IEEE.

• 6.
Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
Towards Autonomic Software Product Lines (ASPL) - A Technical Report (2011). Report (Other academic).

This report describes work in progress to develop Autonomic Software Product Lines (ASPL). ASPL is a dynamic software product line approach with a novel variability handling mechanism that enables traditional software product lines to adapt themselves at runtime in response to changes in their context, requirements and business goals. The ASPL variability mechanism is composed of three key activities: 1) context profiling, 2) context-aware composition, and 3) online learning. Context profiling is an offline activity that prepares a knowledge base for context-aware composition. Context-aware composition uses the knowledge base to derive a new product or adapt an existing product based on a product line's context attributes and goals. Online learning optimizes the knowledge base to remove errors and suboptimal information and to incorporate new knowledge. The three activities together form a simple yet powerful variability handling mechanism that learns and adapts a system at runtime in response to changes in system context and goals. We evaluated the ASPL variability mechanism on three small-scale software product lines and obtained promising results. The ASPL approach is, however, still at an initial stage and requires improved development support with more rigorous evaluation.
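
The three activities can be sketched in a few lines of Python; the context attributes, product variants and scoring scheme below are all invented for illustration, not taken from the report.

```python
# Hypothetical sketch of the three ASPL activities on a toy product line.

# 1) Context profiling (offline): seed a knowledge base scoring each
#    product variant under each context attribute.
knowledge_base = {
    "low_bandwidth":  {"compressed_ui": 0.9, "rich_ui": 0.2},
    "high_bandwidth": {"compressed_ui": 0.4, "rich_ui": 0.8},
}

def compose(context: str) -> str:
    """2) Context-aware composition: derive the best-scoring variant."""
    scores = knowledge_base[context]
    return max(scores, key=scores.get)

def learn(context: str, variant: str, reward: float, rate: float = 0.1):
    """3) Online learning: nudge the stored score toward the observed
    goal satisfaction, correcting errors and suboptimal entries."""
    old = knowledge_base[context][variant]
    knowledge_base[context][variant] = old + rate * (reward - old)

variant = compose("low_bandwidth")     # adapt the product to its context
learn("low_bandwidth", variant, 0.6)   # feed back a measured reward
print(variant, knowledge_base["low_bandwidth"])
```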

• 7.
Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
ASPLe: a methodology to develop self-adaptive software systems with systematic reuse. Manuscript (preprint) (Other academic).

More than two decades of research have demonstrated an increasing need for software systems to be self-adaptive. Self-adaptation is required to deal with runtime dynamics which are difficult to predict before deployment. A vast body of knowledge to develop Self-Adaptive Software Systems (SASS) has been established. We, however, discovered a lack of process support to develop self-adaptive systems with reuse. To that end, we propose a domain-engineering based methodology, Autonomic Software Product Lines engineering (ASPLe), which provides step-by-step guidelines for developing families of SASS with systematic reuse. The evaluation results from a case study show positive effects on quality and reuse for self-adaptive systems designed using the ASPLe compared to state-of-the-art engineering practices.

• 8.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
Optimal dynamic partial order reduction (2014). In: Proc. 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, New York: ACM Press, 2014, pp. 373-384. Conference paper (Refereed).

Stateless model checking is a powerful technique for program verification, which, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR). We present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, which replace the role of persistent sets in previous algorithms. First, we show how to modify an existing DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm. Second, we extend this algorithm with a novel mechanism, called wakeup trees, that allows it to achieve optimality. We have implemented both algorithms in a stateless model checking tool for Erlang programs. Experiments show that source sets significantly increase the performance and that wakeup trees incur only a small overhead in both time and space.
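
The exponential growth that DPOR attacks is easy to see in a few lines of Python. The toy below (not the paper's algorithm) enumerates every interleaving of two two-event threads; an optimal DPOR would explore only one execution per equivalence class of conflicting-access orderings.

```python
# Naive stateless exploration: enumerate all interleavings, although many
# are equivalent up to reordering of independent events.
from math import comb

def interleavings(a, b):
    """Yield all interleavings of two per-thread event sequences."""
    if not a: yield b; return
    if not b: yield a; return
    for rest in interleavings(a[1:], b): yield a[:1] + rest
    for rest in interleavings(a, b[1:]): yield b[:1] + rest

t1 = ["w(x)", "w(y)"]            # thread 1 writes x, then y
t2 = ["w(y)", "w(x)"]            # thread 2 writes y, then x
execs = list(interleavings(t1, t2))
print(len(execs), "==", comb(4, 2))   # C(n+m, n) executions: 6

# Only interleavings that order conflicting accesses (same variable, at
# least one write) differently can differ in outcome; here there are four
# such classes among the six executions, and an optimal DPOR explores
# exactly one execution per class.
```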

• 9.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction (2017). In: Journal of the ACM, ISSN 0004-5411, E-ISSN 1557-735X, Vol. 64, no. 4, article id 25. Journal article (Refereed).

Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss tradeoffs of the source- and optimal-DPOR algorithms and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.

• 10.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datalogi.
Stateless model checking for TSO and PSO (2015). In: Tools and Algorithms for the Construction and Analysis of Systems: TACAS 2015, Springer Berlin/Heidelberg, 2015, pp. 353-367. Conference paper (Refereed).
• 11.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik. Academia Sinica. Linköping University.
MEMORAX, a Precise and Sound Tool for Automatic Fence Insertion under TSO (2013). In: Tools and Algorithms for the Construction and Analysis of Systems, Springer Berlin/Heidelberg, 2013, pp. 530-536. Conference paper (Refereed).
• 12. Abel, John H.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för beräkningsvetenskap. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Tillämpad beräkningsvetenskap.
GillesPy: A Python package for stochastic model building and simulation (2016). In: IEEE Life Sciences Letters, E-ISSN 2332-7685, Vol. 2, pp. 35-38. Journal article (Refereed).
• 13.
Mälardalens högskola, Ekonomihögskolan.
Programmeringens grunder - med exempel i C# (2004). Book (Other (popular science, debate, etc.)).

This is a textbook on basic programming. It focuses on teaching what most programming languages have in common - the basic elements and program constructs and how they relate to each other, independently of the language. The language a program is written in must be subordinate to the task the program is meant to solve. The book therefore goes through the fundamentals of structured programming and shows numerous examples as flowcharts, structure diagrams, pseudocode and source code. The chapters at the end of the book also cover the fundamentals of object-oriented programming.

The book tones down the learning of a specific programming language, but in the end programs must still be written in some language. The examples are written in C#, which has established itself as a practical programming language in a very short time and has already begun to make its way into university programming courses.

The book is primarily intended for programming beginners at the university level who want to learn the fundamentals of programming.

• 14.
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden (2016). Independent thesis, advanced level (Master's degree, one year), 20 points / 30 HE credits. Student thesis.

Context. System Development Methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency of usage of development practices and the acceptance factors for adopting a development methodology are crucial for software organisations. Acceptance factors and development practices differ across geographical locations, and many challenges have been reported in the literature with respect to mismatched development practices when organisations collaborate in distributed development. Little research has been done on the differences in development practices and in the acceptance factors for adopting a particular development methodology. Objectives. The primary objectives of the research are to find out a) differences in (i) practice usage and (ii) acceptance factors such as organisational, social and cultural factors, and b) to explore the reasons for the differences and to investigate the consequences of such differences during collaboration, across organisations located in India and Sweden. Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find out the usage frequency of development practices and acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for, and the consequences of, the differences. Literature evidence was used to support the results collected from the interviews. Results. The survey shows that organisations in India have adopted plan-driven practices at a higher frequency than those in Sweden, while agile practices were adopted at a higher frequency in Sweden than in India. The number of organisations adopting "pure agile" methodologies was significantly higher in Sweden. Significant differences were found across acceptance factors such as cultural, organisational, image and career factors between India and Sweden. Cultural, social, human, business and organisational factors are responsible for these differences across development practices and acceptance factors. Challenges related to communication, coordination and control were found to arise from the differences when collaborating between Indian and Swedish sites. Conclusions. The study signifies the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations. A mismatch between these practices will lead to various challenges. The study draws insights into various non-technical factors, such as cultural, human, organisational, business and social factors, in collaboration between organisations. Variations across these factors will lead to many coordination, communication and control issues. Keywords: Development Practices, Agile Development, Plan Driven Development, Acceptance Factors, Global Software Development.

• 15.
KTH, Skolan för informations- och kommunikationsteknik (ICT). Technische Universität Braunschweig.
A Multi-leader Approach to Byzantine Fault Tolerance: Achieving Higher Throughput Using Concurrent Consensus (2015). Independent thesis, advanced level (Master's degree, two years), 20 points / 30 HE credits. Student thesis.

Byzantine Fault Tolerant protocols are complicated and hard to implement. Today's software industry is reluctant to adopt these protocols because of the high overhead of message exchange in the agreement phase and the high resource consumption necessary to tolerate faults (as 3f + 1 replicas are required to tolerate f faults). Moreover, total ordering of messages is needed by most classical protocols to provide strong consistency in both agreement and execution phases. Research has improved throughput of the execution phase by introducing concurrency using modern multicore infrastructures in recent years. However, improving the agreement phase remains an open area.

Byzantine Fault Tolerant systems use State Machine Replication to tolerate a wide range of faults. The approach uses leader-based consensus algorithms for the deterministic execution of the service on all replicas, to make sure all correct replicas reach the same state. For this purpose, several algorithms have been proposed to provide total ordering of messages through an elected leader. Usually, a single leader is considered to be a bottleneck, as it cannot provide the desired throughput for real-time software services. In order to achieve a higher throughput, there is a need for a solution which can execute multiple consensus rounds concurrently.

We present a solution that enables multiple consensus rounds in parallel by choosing multiple leaders. By enabling concurrent consensus, our approach can execute several requests in parallel. In our approach we incorporate application-specific knowledge to split the total order of events into multiple partial orders which are causally consistent, in order to ensure safety. Furthermore, a dependency check is required for every client request before it is assigned to a particular leader for agreement. This methodology relies on optimistic prediction of dependencies to provide higher throughput. We also propose a solution to correct the course of execution without rolling back if dependencies were wrongly predicted.

Our evaluation shows that in normal cases this approach can achieve up to 100% higher throughput than conventional approaches for large numbers of clients. We also show that this approach has the potential to perform better in complex scenarios.
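
A minimal Python sketch of the request-routing idea described above, with invented names: requests are assigned to leaders based on the state they touch, so requests with disjoint key sets can be agreed on in concurrent consensus rounds while the per-leader order stays causally consistent per key.

```python
# Hypothetical dependency check: route each request by the keys it reads
# or writes, so one consensus instance (queue) per leader can run in
# parallel with the others.
from collections import defaultdict

NUM_LEADERS = 3

def leader_for(keys: frozenset) -> int:
    """Requests over the same key always map to the same leader, so the
    per-leader partial order preserves per-key causal consistency."""
    return hash(min(keys)) % NUM_LEADERS

queues = defaultdict(list)   # one consensus round queue per leader

requests = [
    {"op": "credit",   "keys": frozenset({"acct:A"})},
    {"op": "debit",    "keys": frozenset({"acct:B"})},
    {"op": "transfer", "keys": frozenset({"acct:A", "acct:C"})},
]

for r in requests:
    queues[leader_for(r["keys"])].append(r["op"])

for leader, ops in queues.items():
    print(f"leader {leader} orders: {ops}")   # rounds run concurrently
```

A real implementation would also need the misprediction-recovery path the thesis proposes for requests whose key sets span multiple partitions.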

• 16.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
Reviewing and Evaluating Techniques for Modeling and Analyzing Security Requirements (2007). Independent thesis, advanced level (Master's degree, one year). Student thesis.

The software engineering community has recognized the importance of addressing security requirements alongside other functional requirements from the beginning of the software development life cycle, and several techniques have been developed to achieve this goal. We therefore conducted a theoretical study that reviews and evaluates some of the techniques used to model and analyze security requirements. The Abuse Cases, Misuse Cases, Data Sensitivity and Threat Analyses, Strategic Modeling, and Attack Trees techniques are investigated in detail to understand and highlight the similarities and differences between them. We found that using these techniques, in general, helps requirements engineers specify more detailed security requirements. All of these techniques cover the concepts of security, but at different levels, and the existence of different techniques provides a variety of levels for modeling and analyzing security requirements. This helps requirements engineers decide which technique to use to address security issues for the system under investigation. Finally, we found that using only one of these techniques is not sufficient to satisfy the security requirements of the system under investigation. Consequently, we consider it beneficial to combine the Abuse Cases or Misuse Cases techniques with the Attack Trees technique, or to combine the Strategic Modeling and Attack Trees techniques, in order to model and analyze the security requirements of the system under investigation. The concentration on the Attack Trees technique is due to the reusability of the produced attack trees; this technique also helps in covering a wide range of attacks, thus covering security concepts as well as security requirements in a proper way.
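
Since attack trees are central to the recommendation above, here is a small, self-contained Python sketch of the data structure (illustrative only, not taken from the thesis): internal AND/OR nodes combine child results, and evaluating the root asks whether any complete attack path is feasible.

```python
# Minimal attack-tree sketch in the style of Schneier's "open safe" example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "LEAF"            # "AND", "OR", or "LEAF"
    feasible: bool = False        # used only by leaves
    children: List["Node"] = field(default_factory=list)

    def evaluate(self) -> bool:
        if self.kind == "LEAF":
            return self.feasible
        results = (c.evaluate() for c in self.children)
        return all(results) if self.kind == "AND" else any(results)

tree = Node("open safe", "OR", children=[
    Node("pick lock", feasible=False),
    Node("learn combo", "OR", children=[
        Node("find written combo", feasible=False),
        Node("get combo from target", "AND", children=[
            Node("threaten", feasible=True),
            Node("target knows combo", feasible=True),
        ]),
    ]),
])
print(tree.evaluate())   # True: one OR branch is fully feasible
```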

• 17.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Coordination in Global Software Development: Challenges, associated threats, and mitigating practices (2012). Independent thesis, advanced level (Master's degree, two years). Student thesis.

Global Software Development (GSD) is an emerging trend in today's software world, in which teams are geographically dispersed, either in close proximity or globally. GSD offers certain advantages to development companies, such as low development cost and access to cheap and skilled labour, but it is noted as more risky and challenging than projects developed by teams under the same roof. GSD projects are inherently cooperative: many software developers work on a common project, share information and coordinate activities. Coordination is a fundamental part of software development. GSD comprises different types of development systems, i.e. insourcing, outsourcing, nearshoring, or farshoring, and whatever development system a company selects, challenges to coordination exist. Knowledge of the potential challenges, the associated threats to coordination, and the practices to mitigate them therefore plays a vital role in running a successful global project.

• 18.
Frederick University, Cyprus.
University of Cyprus, Cyprus. Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM). SYNYO GmbH, Austria. BioTalentum Ltd, Hungary.
SciChallenge: A Social Media Aware Platform for Contest-Based STEM Education and Motivation of Young Students (2018). In: IEEE Transactions on Learning Technologies, ISSN 1939-1382, E-ISSN 1939-1382. Journal article (Refereed).

Scientific and technological innovations have become increasingly important as we face the benefits and challenges of both globalization and a knowledge-based economy. Still, enrolment rates in STEM degrees are low in many European countries, and consequently there is a lack of an adequately educated workforce in industry. We believe that this can mainly be attributed to pedagogical issues, such as the lack of engaging hands-on activities in science and math education in middle and high schools. In this paper, we report our work in the SciChallenge European project, which aims at increasing the interest of pre-university students in STEM disciplines through its distinguishing feature, the systematic use of social media for providing and evaluating student-generated content. A social-media-aware contest and platform were developed and tested in a pan-European contest that attracted more than 700 participants. The statistical analysis revealed that the platform and contest positively influenced the participants' STEM learning and motivation, while only the gender factor in the younger study group appeared to affect the outcomes (p < .05).

• 19.
Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för informationsteknologi och medier.
Visualisering av datastrukturer: Utveckling av ett tolkningsverktyg (2013). Independent thesis, basic level (Bachelor's degree), 10 points / 15 HE credits. Student thesis.

Interpreting and assimilating data structures, organized information and program-code files is a frequent part of software development work. This information is stored in text-based form, and understanding it demands great care and a large time investment from the developer. In an attempt to simplify the process, this thesis describes the development of a prototype tool that automates the interpretation of XML data and of source files for the programming languages C and C++. The program then creates and presents a visual graph of the examined structure. The algorithm can present arbitrarily large XML files and a limited number of simultaneously loaded source files. The effects on interpretation time and reliability were evaluated in a study among software engineering students. The results showed a measurable increase in the number of correct conclusions users drew after studying the data context graphically, compared with its original text form. Time consumption was measured only subjectively by the users, a clear majority of whom felt that the graphical representation shortened the time needed. The thesis shows that using this or an equivalent tool can improve the assimilation of data structures by both raising the reliability of the extracted information and reducing the time required. However, the quantifiable gain indicated by these results is not statistically significant to a higher degree.
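
A toy Python sketch of the tool's central step, parsing XML and emitting the element hierarchy as a graph (here as Graphviz DOT text); the function name and output format are invented for illustration, not taken from the thesis.

```python
# Parse an XML document and emit its element tree as a DOT digraph.
import xml.etree.ElementTree as ET

def xml_to_dot(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    lines, counter = ["digraph xml {"], [0]

    def walk(elem, parent_id=None):
        node_id = counter[0]; counter[0] += 1
        lines.append(f'  n{node_id} [label="{elem.tag}"];')
        if parent_id is not None:
            lines.append(f"  n{parent_id} -> n{node_id};")
        for child in elem:
            walk(child, node_id)

    walk(root)
    lines.append("}")
    return "\n".join(lines)

print(xml_to_dot("<a><b><c/></b><d/></a>"))
```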

• 20.
Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap.
Parameterstyrd tillverkning av rör för marina fartyg (2012). Student thesis.

The content of this report is the result of a component of the Development Engineer programme in Mechanical Engineering. The work was carried out in collaboration between Linus Adolfsen, Kockums AB and Blekinge Tekniska Högskola. The report covers two main parts, one practical and one theoretical. The first, practical, part was about finding a method to bridge the step from model to reality in an efficient way. This resulted in in-house software that can read the output file from Tribon (CAD software) and translate it into a program file for a Herber CNC 90 bending machine. The second part is theoretical and analyses the operations from the perspective of enabling prefabrication. The result is an analysis of the operations concerned, with proposals for how to address the problems and obstacles that exist today. It also gave rise to many suggestions for further study.

• 21.
Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap.
The State of the Art in Distributed Mobile Robotics (2001). Independent thesis, advanced level (Master's degree, one year). Student thesis.

The thesis is a broad survey of ongoing research in distributed mobile robotics, i.e., how multiple robots can cooperate to solve tasks.

• 22.
Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Avdelningen för informationssystem och -teknologi.
Genomsökning av filsystem för att hitta personuppgifter: Med Linear chain conditional random field och Regular expression (2018). Independent thesis, basic level (Bachelor's degree), 10 points / 15 HE credits. Student thesis.

The new General Data Protection Regulation (GDPR) came into force for all companies within the European Union on 25 May. This means stricter legal requirements for companies that in any way store personal data. The goal of this project is therefore to make it easier for companies to meet the new requirements, by creating a tool that scans file systems and visually shows the user, in a graphical user interface, which files contain personal data. The tool uses Named Entity Recognition with the Linear Chain Conditional Random Field algorithm, a supervised learning method in machine learning, to find names and addresses in files. The models are trained with different parameters using the Stanford NER library in Java. The models are tested on a file containing 45,000 words, where each model predicts the class of every word in the file. The models are then compared with each other using the measures precision, recall and F-score to find the best model. The tool also uses regular expressions to find e-mail addresses, IP numbers and personal identity numbers. The results for the final machine-learning model show that it does not find all names and addresses, but this could be improved by increasing the training data, which, however, requires a more powerful computer than the one used in this project. A study of how the Swedish language is structured would also be needed in order to use the most suitable parameters when training the model.
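
The regular-expression half of such a scanner is easy to sketch in Python (the CRF half needs a trained model); the patterns below are simplified illustrations, not the expressions used in the thesis.

```python
# Walk a directory tree and report files matching personal-data patterns.
import re
from pathlib import Path

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    # Swedish personal identity number: YYMMDD-XXXX or YYYYMMDD-XXXX.
    "personnummer": re.compile(r"\b(?:\d{2})?\d{6}[-+]?\d{4}\b"),
}

def scan_file(path: Path) -> dict:
    text = path.read_text(errors="ignore")
    return {kind: pat.findall(text) for kind, pat in PATTERNS.items()}

def scan_tree(root: str):
    for path in Path(root).rglob("*"):
        if path.is_file():
            hits = {k: v for k, v in scan_file(path).items() if v}
            if hits:
                print(path, hits)   # a GUI would list these files instead

scan_tree(".")
```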

• 23.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems (2007). Independent thesis, advanced level (Master's degree, one year). Student thesis.

The goal of this master's thesis is to identify and evaluate data mining algorithms that are commonly implemented in modern Medical Decision Support Systems (MDSS). These systems are used in various healthcare units all over the world, and the institutions using them store large amounts of medical data that may contain relevant medical information hidden in various patterns buried among the records. Within the research, several popular MDSSs are analyzed in order to determine the most common data mining algorithms they utilize. Three algorithms were identified: Naïve Bayes, Multilayer Perceptron and C4.5. Prior to the analyses, the algorithms were calibrated: several configurations were tested in order to determine the best settings for each algorithm. Afterwards, a final comparison ordered the algorithms with respect to their performance, based on a set of performance metrics. The analyses were conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology disease and diabetes. The analyses showed that it is very difficult to name a single data mining algorithm as the most suitable for medical data, since the results obtained for the algorithms were very similar. However, the final evaluation of the outcomes allowed the Naïve Bayes to be singled out as the best classifier for the given domain, followed by the Multilayer Perceptron and the C4.5.
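
The same kind of comparison can be sketched outside WEKA. The Python example below uses scikit-learn stand-ins (GaussianNB for Naïve Bayes, MLPClassifier for the Multilayer Perceptron, and DecisionTreeClassifier, a CART implementation, in place of C4.5) on a built-in dataset rather than the UCI files, so the numbers are only illustrative.

```python
# 10-fold cross-validated AUC comparison of three classifiers.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "NaiveBayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name:>12}: mean AUC = {scores.mean():.3f}")
```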

• 24. Afzal, Wasif
Lessons from applying experimentation in software engineering prediction systems (2008). Conference paper (Refereed).

Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure and compare the accuracy of models. This paper discusses our experience and presents useful lessons and guidelines for experimenting with software engineering prediction systems. For this purpose, we use a typical software engineering experimentation process as a baseline. We found that this typical experimentation process is supportive in developing prediction systems, and we highlight issues more central to the domain of software engineering prediction systems.

• 25.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
Metrics in Software Test Planning and Test Design Processes (2007). Independent thesis, advanced level (Master's degree, one year). Student thesis.

Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to make the characteristics of, and relationships between, the attributes clearer, which in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete and inconsistent measurements. Software testing is an integral part of software development and provides opportunities for measuring process attributes; such measurement gives management better insight into the software testing process. The aim of this thesis is to investigate the metric support for software test planning and test design processes. The study comprises an extensive literature study and follows a methodical approach consisting of two steps. The first step analyzes key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of these processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, including metric support for each of the identified attributes. The results of the literature survey showed that there are a number of different measurable attributes for software test planning and test design processes. The study partitioned these attributes into multiple categories for each of the two processes, and for each attribute, different existing measurements were studied. A consolidation of these measurements is presented in this thesis, intended to provide an opportunity for management to consider improvements in these processes.

• 26. Afzal, Wasif
Search-based approaches to software fault prediction and software testing (2009). Licentiate thesis, comprehensive summary (Other academic).

Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Therefore, efficient and cost-effective software verification and validation activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, such as when to stop testing, the testing schedule and testing resource allocation, need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension to the application of search-based techniques within software verification and validation, this thesis further investigates the extent to which search-based techniques are applied for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence where other search-based techniques are applied for testing of non-functional system properties, hence contributing towards the growing application of search-based techniques in diverse activities within software verification and validation.

• 27. Afzal, Wasif
Search-Based Prediction of Software Quality: Evaluations and Comparisons (2011). Doctoral thesis, comprehensive summary (Other academic).

Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Therefore, efficient and effective software V&V activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, like many real-world problems, software V&V problems can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and doing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, while investigating a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable techniques to use in supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.

• 28. Afzal, Wasif
Using faults-slip-through metric as a predictor of fault-proneness (2010). Conference paper (Refereed).

The majority of software faults are present in a small number of modules, so accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches: a standard statistical classification technique (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects in the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in ROC space, GP showed impressive results in comparison with the other techniques for predicting fault-prone modules at both integration and system test levels, and the use of the faults-slip-through metric in general provided good prediction results at the two test levels. We conclude that (i) the accuracy of GP is statistically significant in comparison with the majority of the techniques for predicting fault-prone modules at integration and system test levels, and (ii) the faults-slip-through metric has the potential to be a generally useful predictor of fault-proneness at integration and system test levels.
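
For readers unfamiliar with the evaluation measures named above, this small Python sketch (with made-up labels and scores) computes PD (probability of detection, i.e. recall), PF (probability of false alarm) and the AUC using scikit-learn.

```python
# Locate a classifier in ROC space and summarize it with AUC.
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # 1 = fault-prone module
y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.8, 0.6, 0.1, 0.4, 0.5]
y_pred  = [int(s >= 0.5) for s in y_score]  # threshold the scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
pd = tp / (tp + fn)   # probability of detection
pf = fp / (fp + tn)   # probability of false alarm
auc = roc_auc_score(y_true, y_score)
print(f"(PF, PD) = ({pf:.2f}, {pd:.2f}), AUC = {auc:.2f}")
```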

• 29. Afzal, Wasif
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
An experiment on the effectiveness and efficiency of exploratory testing (2015). In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no. 3, pp. 844-878. Journal article (Refereed).

The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

• 30. Afzal, Wasif
A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data (2008). Conference paper (Refereed).

A number of software reliability growth models (SRGMs) have been proposed in the literature. For several reasons, such as violated model assumptions and model complexity, practitioners find it difficult to know which model to apply in practice. This paper presents a comparative evaluation of traditional models and of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data from three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data, without the need for underlying assumptions. The results show the strengths of using GP for predicting fault count data.
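
To give a flavour of the approach, here is a deliberately small, mutation-only GP sketch in Python that evolves arithmetic expressions over the week number t to fit an invented cumulative fault-count series; real GP systems add crossover, richer function sets and larger populations.

```python
# Minimal mutation-only genetic programming for symbolic regression.
import math, random
random.seed(1)

WEEKS  = list(range(1, 11))
FAULTS = [5, 9, 14, 17, 21, 23, 26, 27, 29, 30]   # made-up data

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def rand_tree(depth=3):
    """Grow a random expression tree over {t, constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return "t" if random.random() < 0.5 else random.uniform(-5.0, 5.0)
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, t):
    if tree == "t":
        return float(t)
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, t), evaluate(right, t))

def fitness(tree):
    """Mean squared error over the fault-count series (lower is fitter)."""
    err = 0.0
    for t, f in zip(WEEKS, FAULTS):
        d = evaluate(tree, t) - f
        err += d * d
    return err / len(WEEKS) if math.isfinite(err) else float("inf")

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return rand_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [rand_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:50]                 # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]

best = min(population, key=fitness)
print("best MSE:", round(fitness(best), 2))
```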

• 31.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
Incorporating Metrics in an Organizational Test Strategy (2008). Conference paper (Refereed).

An organizational level test strategy needs to incorporate metrics to make the testing activities visible and available to process improvements. The majority of testing measurements that are done are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support software test planning and test design processes. We have assembled metrics in these two process types to support management in carrying out evidence-based test process improvements and to incorporate suitable metrics as part of an organization level test strategy. The study is composed of two steps. The first step creates a relevant context by analyzing key phases in the software testing lifecycle, while the second step identifies the attributes of software test planning and test design processes along with metric(s) support for each of the identified attributes.

• 32. Afzal, Wasif
On the application of genetic programming for software engineering predictive modeling: A systematic review (2011). In: Expert Systems with Applications, ISSN 0957-4174, Vol. 38, no. 9, pp. 11984-11997. Review article (Refereed).

The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); and (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.

• 33. Afzal, Wasif
Suitability of Genetic Programming for Software Reliability Growth Modeling (2008). Conference paper (Refereed).

Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.

• 34. Afzal, Wasif
Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
Towards benchmarking feature subset selection methods for software fault prediction (2016). In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, pp. 33-58. Book chapter (Refereed).

Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve classification accuracy of NB and C4.5. There is no single best FSS method for all datasets, but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
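
As a concrete, if simplified, illustration of the before/after-FSS comparison, the Python sketch below ranks attributes with scikit-learn's mutual-information scorer (an information-gain-style criterion standing in for WEKA's IG) on synthetic data and reports naïve Bayes AUC before and after selection.

```python
# Feature subset selection via mutual-information ranking, then compare
# cross-validated AUC before and after selection.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)

nb = GaussianNB()
before = cross_val_score(nb, X, y, cv=10, scoring="roc_auc").mean()

X_sel = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)
after = cross_val_score(nb, X_sel, y, cv=10, scoring="roc_auc").mean()

print(f"AUC with all 20 features: {before:.3f}")
print(f"AUC with 5 selected features: {after:.3f}")
```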

• 35.
Mälardalens högskola, Akademin för innovation, design och teknik, Inbyggda system. Bahria University, Islamabad, Pakistan.
Blekinge Institute of Technology, Karlskrona, Sweden; Chalmers University of Technology, Sweden.
Towards benchmarking feature subset selection methods for software fault prediction (2016). In: Computational Intelligence and Quantitative Software Engineering / [ed] Witold Pedrycz, Giancarlo Succi and Alberto Sillitti, Springer-Verlag, 2016, pp. 33-58. Book chapter, part of anthology (Other academic)

Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve classification accuracy of NB and C4.5. There is no single best FSS method for all datasets but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries.

• 36. Afzal, Wasif
A Systematic Mapping Study on Non-Functional Search-Based Software Testing (2008). Conference paper (Refereed)
• 37. Afzal, Wasif
A systematic review of search-based testing for non-functional system properties (2009). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no. 6, pp. 957-976. Journal article (Refereed)

Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work into non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process and published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on the different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
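The core move described above, turning a test adequacy criterion into a fitness function, can be shown in a few lines. The sketch below maximizes the measured execution time of an invented function under test with simple hill climbing, the most basic relative of the metaheuristics the review covers; the function, input encoding and parameters are all assumptions for illustration.

import random
import time

def insertion_sort(xs):
    # Hypothetical software under test; nearly-sorted inputs run fast,
    # reverse-sorted inputs run slow.
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def fitness(candidate):
    # The adequacy criterion "provoke worst-case execution time" as a fitness value.
    start = time.perf_counter()
    insertion_sort(candidate)
    return time.perf_counter() - start

def neighbour(candidate):
    # Small move in the input search space: change one element.
    c = list(candidate)
    c[random.randrange(len(c))] = random.randint(0, 1000)
    return c

current = [random.randint(0, 1000) for _ in range(300)]
best_f = fitness(current)
for _ in range(500):
    cand = neighbour(current)
    f = fitness(cand)
    if f > best_f:                 # keep inputs that take longer to process
        current, best_f = cand, f
print(f'longest execution time found: {best_f:.6f} s')

Swapping hill climbing for simulated annealing or a genetic algorithm changes only the acceptance and variation rules; the fitness function remains the interface between the testing goal and the search.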

• 38. Afzal, Wasif
Prediction of fault count data using genetic programming (2008). Conference paper (Refereed)

Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, their inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.
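The distinction between goodness of fit (how well the evolved model matches the weeks it was trained on) and predictive accuracy (how well it extrapolates to later weeks) can be illustrated as below. The data, the stand-in model and the two measures shown (R-squared and mean magnitude of relative error) are illustrative assumptions; the paper's own five measures are not reproduced here.

import numpy as np

# Invented weekly cumulative fault counts: weeks 1-15 for fitting, 16-20 held out.
weeks = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(0)
faults = 50 * (1 - np.exp(-0.25 * weeks)) + rng.normal(0, 1, weeks.size)
fit_w, test_w = weeks[:15], weeks[15:]
fit_y, test_y = faults[:15], faults[15:]

# Stand-in for a GP-evolved model (in the paper this is evolved, not fixed).
model = lambda t: 50 * (1 - np.exp(-0.25 * t))

def r_squared(y, yhat):
    # Goodness of fit: fraction of variance the model explains.
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def mmre(y, yhat):
    # Predictive accuracy: mean magnitude of relative error on unseen weeks.
    return np.mean(np.abs((y - yhat) / y))

print('goodness of fit, R^2 on weeks 1-15:', round(r_squared(fit_y, model(fit_w)), 3))
print('predictive MMRE on weeks 16-20   :', round(mmre(test_y, model(test_w)), 3))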

• 39.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Resampling Methods in Software Quality Classification (2012). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 22, no. 2, pp. 203-223. Journal article (Refereed)

In the presence of a number of algorithms for classification and prediction in software engineering, there is a need to have a systematic way of assessing their performances. The performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on what modeling technique or set of predictor variables are the most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. Location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and area under an ROC curve (AUC) are used as accuracy indicators. Results: The results show that in terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: There can be certain data set properties responsible for insignificant differences between the resampling methods based on AUC. These include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
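The five resampling schemes compared in the study can be sketched as follows with scikit-learn on synthetic data. Logistic regression stands in for the study's GP and MLR classifiers, and pooling leave-one-out predictions before computing a single AUC is one common convention; both are assumptions made for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import (KFold, LeaveOneOut, ShuffleSplit,
                                     cross_val_score, train_test_split)

X, y = make_classification(n_samples=200, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000)

# 1. Hold-out validation: a single train/test split.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.33, random_state=1)
holdout = roc_auc_score(yte, clf.fit(Xtr, ytr).predict_proba(Xte)[:, 1])

# 2. Repeated random sub-sampling: many independent hold-out splits.
sub = cross_val_score(clf, X, y, scoring='roc_auc',
                      cv=ShuffleSplit(n_splits=10, test_size=0.33,
                                      random_state=1)).mean()

# 3. 10-fold cross-validation.
tenfold = cross_val_score(clf, X, y, scoring='roc_auc',
                          cv=KFold(n_splits=10, shuffle=True,
                                   random_state=1)).mean()

# 4. Leave-one-out cross-validation, pooling predictions for one AUC.
pred = np.empty(len(y))
for tr, te in LeaveOneOut().split(X):
    pred[te] = clf.fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
loocv = roc_auc_score(y, pred)

# 5. Non-parametric bootstrapping: train on a resample, test out-of-bag.
rng = np.random.default_rng(1)
aucs = []
for _ in range(30):
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    p = clf.fit(X[idx], y[idx]).predict_proba(X[oob])[:, 1]
    aucs.append(roc_auc_score(y[oob], p))
boot = float(np.mean(aucs))

print(f'hold-out {holdout:.3f}  sub-sampling {sub:.3f}  10-fold {tenfold:.3f}  '
      f'LOOCV {loocv:.3f}  bootstrap {boot:.3f}')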

• 40. Afzal, Wasif
Search-based prediction of fault count data (2009). Conference paper (Refereed)

Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantage that the evolution depends neither on a particular model structure nor on any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research applies genetic programming to fault count prediction and compares the results with traditional approaches in order to assess the efficiency gains.
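The contrast between typical and symbolic regression can be made concrete in a few lines: below, scipy's curve_fit fits the coefficients of one fixed model form, while a miniature "symbolic" search additionally chooses among several candidate forms (where GP would evolve the forms instead of enumerating a hand-written list). The data and the candidate forms are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1.0, 16.0)
y = 40 * (1 - np.exp(-0.2 * t))      # made-up fault-count-like data

# Typical regression: the model form is fixed, only coefficients a, b are found.
fixed_form = lambda t, a, b: a * t + b
coef, _ = curve_fit(fixed_form, t, y)

# Symbolic regression in miniature: the form itself is part of the search space.
forms = {
    'a*t + b':            lambda t, a, b: a * t + b,
    'a*log(t) + b':       lambda t, a, b: a * np.log(t) + b,
    'a*(1 - exp(-b*t))':  lambda t, a, b: a * (1 - np.exp(-b * t)),
}

def sse(f):
    p, _ = curve_fit(f, t, y, maxfev=10000)
    return float(np.sum((f(t, *p) - y) ** 2))

best = min(forms, key=lambda name: sse(forms[name]))
print('fixed-form SSE:', round(float(np.sum((fixed_form(t, *coef) - y) ** 2)), 3))
print('best form found by searching over forms:', best)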

• 41. Afzal, Wasif
Genetic programming for cross-release fault count predictions in large and complex software projects (2010). In: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, IGI Global, Hershey, USA, 2010. Book chapter, part of anthology (Refereed)

Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need (i) to improve the validity of results by having comparisons among a number of data sets from a variety of software, (ii) to use appropriate model evaluation measures and (iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet received sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together have several years of development and are from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria while remaining average on most of the qualitative measures.

• 42. Afzal, Wasif
Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation. Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
Prediction of faults-slip-through in large software projects: an empirical evaluation (2014). In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 22, no. 1, pp. 51-86. Journal article (Refereed)

A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determination of which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to unit, function, integration, and system test phases of a large industrial project. The objective is to quantify improvement potential in different test phases by striving toward finding the faults in the right phase. The results show that a range of techniques are found to be useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition system, and particle swarm optimization-based artificial neural network) consistently gives better predictions, having a representation at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that human predictions regarding the number of faults slipping through to various test phases can be well supported by the use of search-based techniques. A combination of humans and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results.

• 43. Afzal, Wasif
Search-based prediction of fault-slip-through in large software projects (2010). Conference paper (Refereed)

A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determination of which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.
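As a small illustration of the comparison measures named above, the snippet below computes simple residuals and absolute relative error for phase-wise fault-slip-through predictions; the counts are invented, not taken from the studied projects.

import numpy as np

phases = ['unit', 'function', 'integration', 'system']
actual = np.array([30, 22, 14, 9])      # invented faults-slip-through counts
predicted = np.array([27, 25, 12, 10])  # invented model output

residual = actual - predicted           # simple residuals
are = np.abs(residual) / actual         # absolute relative error per phase

for phase, r, e in zip(phases, residual, are):
    print(f'{phase:12s} residual = {r:+d}   ARE = {e:.2f}')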

• 44.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
An experimental comparison of five prioritization methods: Investigating ease of use, accuracy and scalability (2005). Independent thesis, advanced level (Master's degree), Student thesis (Degree project)

Requirements prioritization is an important part of developing the right product at the right time. There are different ideas about which method is the best to use when prioritizing requirements. This thesis takes a closer look at five different methods and puts them into a controlled experiment, in order to find out which of the methods is the best to use. The experiment was designed to find out which method yields the most accurate result, each method's ability to scale up to many more requirements, the time it took to prioritize with the method, and finally how easy the method was to use. These four criteria combined indicate which method is the most suitable for prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the computer algorithm binary search tree, and planning game, which stems from the ideas of extreme programming. The fourth method is an old but well-used one, the 100 points method. The last method is new and combines planning game with the analytic hierarchy process. Analysis of the data from the experiment indicates that planning game combined with the analytic hierarchy process could be a good candidate. However, the result from the experiment clearly indicates that the binary search tree yields accurate results, is able to scale up, and was the easiest method to use. For these three reasons the binary search tree is clearly the better method to use for prioritizing requirements.
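To see why binary search tree prioritization scales well, consider a minimal sketch: each requirement is inserted into a tree by answering "is this more important than the node?" questions, and an in-order traversal then yields the full ranking. The requirement names and the comparison oracle below are hypothetical.

class Node:
    def __init__(self, req):
        self.req, self.left, self.right = req, None, None

def insert(root, req, more_important):
    # Each comparison is one question to the person doing the prioritization.
    if root is None:
        return Node(req)
    if more_important(req, root.req):
        root.right = insert(root.right, req, more_important)
    else:
        root.left = insert(root.left, req, more_important)
    return root

def in_order(root, out):
    if root is not None:
        in_order(root.left, out)
        out.append(root.req)
        in_order(root.right, out)
    return out

# Hypothetical hidden priorities standing in for a stakeholder's judgement.
priority = {'login': 3, 'search': 5, 'export': 1, 'audit log': 2, 'dashboard': 4}
oracle = lambda a, b: priority[a] > priority[b]

root = None
for req in priority:
    root = insert(root, req, oracle)
print(in_order(root, []))   # least to most important

On average this needs on the order of n log n pairwise answers, compared with the n(n-1)/2 comparisons a full pairwise method such as AHP requires, which is one plausible reason behind the scalability result reported above.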

• 45.
Blekinge Tekniska Högskola, Sektionen för teknokultur, humaniora och samhällsbyggnad.
Blekinge Tekniska Högskola, Sektionen för teknokultur, humaniora och samhällsbyggnad.
24-timmarsmyndighetens användbarhet [The usability of the 24-hour government agency] (2004). Independent thesis, basic level (Bachelor's degree), Student thesis (Degree project)

Communication with government and municipalities through the Internet has increased during the last couple of years. We have therefore chosen to focus our bachelor thesis on this particular area and on the need for usable web services for citizens. In this bachelor thesis we study a growing user group, namely elderly citizens. During the study we analysed the usability of e-government services through usability tests. The combination of conversations and meetings with individuals, observations of interactions, and literature studies gave us the opportunity to explore the users' needs. The users' needs are central to how they understand and interact with e-government. The web sites we used during our user tests are all connected with e-government. Through an analytic study of the information we could formulate five important design proposals and guidelines that we suggest are required when e-services are developed for e-government.

• 46.
MobiAnn: androidapplikationen som underlättar lärares arbetsuppgifter [MobiAnn: the Android application that simplifies teachers' tasks] (2011). Independent thesis, basic level (Bachelor's degree), 10.5 credits / 16 HE credits, Student thesis (Degree project)

This degree project discusses the need for a support system for teachers in their teaching and addresses various aspects of teachers' work situation. As part of this discussion, a system has been implemented in the form of an Android application.

The application gives teachers a support system with several uses: attendance checking, the possibility to record remarks about late arrivals and disruptions during lessons, and tools for taking notes on student work and motivating grades directly on the spot.

Great emphasis has been placed on making the application easy to use and user-friendly, and user testing has therefore been a large part of the development.

• 47.
Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
How to support and enhance communication: in a student software development project (2002). Independent thesis, basic level (Bachelor's degree), Student thesis (Degree project)

In this report, which is based on a student project carried out during spring 2002, we have focused on the word communication. We describe how the use of design tools can play a key role in supporting communication in group activities, and to what extent communication can be supported and enhanced by tools such as mock-ups and metaphors. We also describe a design process from initial sketches to a finished mock-up of a graphical user interface for a demo application of a postcard service.

• 48.
Requirements prioritization with respect to Geographically Distributed Stakeholders (2011). Conference paper (Refereed)

Requirements selection for software releases can play a vital role in the success of a software product. This selection of requirements is done with different requirements prioritization techniques. This paper discusses limitations of two requirements prioritization techniques, the 100-dollar method and the binary search tree, with respect to geographically distributed stakeholders. We conducted two experiments in order to analyze the variations among the results of these requirements prioritization techniques: the first experiment used the 100-dollar method and the binary search tree technique, while the second used a modified 100-dollar method and the binary search tree technique. This paper also discusses attributes that can affect requirements prioritization when dealing with geographically distributed stakeholders, and it provides a framework that can be used to identify those requirements that can play an important role in a product's success during distributed development.
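For context, the 100-dollar (cumulative voting) method discussed above works as sketched below: every stakeholder distributes exactly 100 dollars over the requirements and the sums decide the ranking. The stakeholder sites, requirement names and allocations are invented; a site-weighting of votes, one conceivable distributed-stakeholder modification, is only hinted at in a comment, since the paper's actual modification is not reproduced here.

# Each stakeholder distributes exactly 100 dollars across the requirements.
votes = {
    'stakeholder_SE': {'R1': 50, 'R2': 30, 'R3': 20},
    'stakeholder_PK': {'R1': 10, 'R2': 60, 'R3': 30},
    'stakeholder_US': {'R1': 40, 'R2': 20, 'R3': 40},
}
assert all(sum(alloc.values()) == 100 for alloc in votes.values())

totals = {}
for alloc in votes.values():
    for req, dollars in alloc.items():
        # A distributed-stakeholder variant could weight each vote by site here.
        totals[req] = totals.get(req, 0) + dollars

for req, score in sorted(totals.items(), key=lambda item: -item[1]):
    print(req, score)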

• 49.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
The Importance of Knowledge Management Practices in Overcoming the Global Software Engineering Challenges in Requirements Understanding (2008). Independent thesis, advanced level (Master's degree), Student thesis (Degree project)

Going offshore has become a norm in current software organizations due to several benefits like availability of competent people, cost, proximity to market and customers, and time. Although Global Software Engineering (GSE) offers many benefits to software organizations, it has also created several challenges for practitioners and researchers, such as culture, communication, coordination and collaboration, and team building. Requirements Engineering (RE) is a human-intensive activity and one of the most challenging and important phases in software development; it becomes even more challenging in a GSE context because of culture, communication, coordination and collaboration issues. Due to these GSE factors, requirements understanding has become a challenge for software organizations involved in GSE. Furthermore, Knowledge Management (KM) is considered one of the most important assets of an organization because it not only enables organizations to efficiently share and create knowledge but also helps in resolving culture, communication and coordination issues, especially in GSE. The aim of this study is to present how KM practices help globally dispersed software organizations in requirements understanding. For this purpose a thorough literature study was performed, along with interviews at two companies, with the intent to identify useful KM practices and the challenges of requirements understanding in GSE. Based on an analysis of the challenges identified both in the literature review and in the industrial interviews, useful KM practices are presented and discussed to reduce the requirements understanding issues faced in GSE.

• 50.
Limitations of the analytic hierarchy process technique with respect to geographically distributed stakeholders (2010). In: Proceedings of World Academy of Science, Engineering and Technology, ISSN 2010-376X, E-ISSN 2070-3740, Vol. 70, Sept., pp. 111-116. Journal article (Refereed)

The selection of appropriate requirements for product releases can make a big difference in a product's success. The selection of requirements is done with different requirements prioritization techniques, which are based on pre-defined and systematic steps to calculate the requirements' relative weights. Prioritization is complicated by new development settings, shifting from traditional co-located development to geographically distributed development, where the stakeholders connected to a project are distributed all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perception and expectations of the requirements in a software project. This paper discusses limitations of the Analytic Hierarchy Process (AHP) with respect to geographically distributed stakeholders' (GDS) prioritization of requirements. It also provides a solution, in the form of a modified AHP, for prioritizing requirements for GDS. We conduct two experiments and analyze the results in order to discuss the AHP limitations with respect to GDS. The modified AHP variant is also validated in this paper.
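As background for the limitations discussed above, standard AHP derives relative weights from a pairwise comparison matrix, typically via its principal eigenvector, together with a consistency check. The 3-requirement matrix below is invented, and the sketch shows plain AHP only, not the paper's modified variant.

import numpy as np

# Invented pairwise comparison matrix on Saaty's 1-9 scale:
# A[i, j] says how much more important requirement i is than requirement j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
n = A.shape[0]

# Relative weights = principal eigenvector of A, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
weights = np.abs(vecs[:, k].real)
weights /= weights.sum()

# Consistency ratio; RI = 0.58 is Saaty's random index for n = 3.
lam_max = vals[k].real
ci = (lam_max - n) / (n - 1)
cr = ci / 0.58
print('weights:', np.round(weights, 3), ' consistency ratio:', round(cr, 3))

A full AHP session needs n(n-1)/2 pairwise judgements per stakeholder, and with geographically distributed stakeholders each site produces its own judgements, which is one place where the distributed-stakeholder complications discussed above arise.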
