  • 151. Artho, Cyrille
    et al.
    Ma, Lei
    Classification of Randomly Generated Test Cases (2016). In: Proc. 1st Int. Workshop on Validating Software Tests (VST 2016), IEEE conference proceedings, 2016. Conference paper (Refereed).
    Abstract [en]

    Random test case generation produces relatively diverse test sequences, but the validity of the test verdict is always uncertain. Because tests are generated without taking the specification and documentation into account, many tests are invalid. To understand the prevalent types of successful and invalid tests, we present a classification of 56 issues that were derived from 208 failed, randomly generated test cases. While the existing workflow successfully eliminated more than half of the tests as irrelevant, half of the remaining failed tests are false positives. We show that the new @NonNull annotation of Java 8 has the potential to eliminate most of the false positives, highlighting the importance of machine-readable documentation.
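
    To illustrate the annotation-based idea the abstract points to (a hedged sketch: Java 8's type annotations make such nullness annotations possible, but @NonNull itself comes from a library such as the Checker Framework rather than the JDK), a random test generator that reads the annotation can stop producing null-argument calls whose NullPointerExceptions would otherwise show up as false positives:

        import org.checkerframework.checker.nullness.qual.NonNull;

        public class Stack<E> {
            private final java.util.Deque<E> items = new java.util.ArrayDeque<>();

            // The parameter is documented, machine-readably, as never null. A test
            // generator that honours the annotation skips generating push(null),
            // so its NullPointerException no longer appears as a false failure.
            public void push(@NonNull E item) {
                items.addFirst(item);
            }

            public E pop() {
                return items.removeFirst();
            }
        }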

  • 152. Artho, Cyrille
    et al.
    Oiwa, Yutaka
    Suzaki, Kuniyasu
    Hagiya, Masami
    Extraction of properties in C implementations of security APIs for verification of Java applications (2009). In: Proc. 3rd Int. Workshop on Analysis of Security APIs, 2009. Conference paper (Refereed).
  • 153. Artho, Cyrille
    et al.
    Suzaki, Kuniyasu
    Cosmo, Roberto di
    Treinen, Ralf
    Zacchiroli, Stefano
    Why Do Software Packages Conflict? (2012). In: Proc. 9th Working Conf. on Mining Software Repositories (MSR 2012), 2012, p. 141-150. Conference paper (Refereed).
  • 154. Artho, Cyrille
    et al.
    Suzaki, Kuniyasu
    Cosmo, Roberto di
    Zacchiroli, Stefano
    Sources of Inter-package Conflicts in Debian (2011). In: Proc. Workshop on Logics for Component Configuration (LoCoCo 2011), 2011. Conference paper (Refereed).
  • 155. Artho, Cyrille
    et al.
    Suzaki, Kuniyasu
    Hagiya, Masami
    Leungwattanakit, Watcharin
    Potter, Richard
    Platon, Eric
    Tanabe, Yoshinori
    Weitl, Franz
    Yamamoto, Mitsuharu
    Using Checkpointing and Virtualization for Fault Injection (2015). In: International Journal of Networking and Computing, ISSN 2185-2839, E-ISSN 2185-2847, Vol. 5, no 2, p. 347-372. Article in journal (Refereed).
  • 156.
    Arts, Thomas
    et al.
    Quviq AB, Gothenburg, Sweden.
    Mousavi, Mohammad Reza
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Automatic Consequence Analysis of Automotive Standards (AUTO-CAAS) [Position Paper] (2015). In: WASA '15: Proceedings of the First International Workshop on Automotive Software Architecture / [ed] Yanja Dajsuren, Harald Altinger & Miroslaw Staron, New York, NY: ACM Press, 2015, p. 35-38. Conference paper (Refereed).
    Abstract [en]

    This paper provides background and the roadmap of the AUTO-CAAS project, a 3-year project financed by the Swedish Knowledge Foundation and ongoing as a joint project among three academic and industrial partners. The aim of the project is to exploit the formal models of the AUTOSAR standard, developed by the project's industrial partner Quviq AB, in order to predict possible future failures in concrete implementations of components. To this end, deviations from the formal specification will be exploited to generate test cases that can push concrete components into the corners where such deviations result in observable failures. The same information will also be used in the diagnosis of otherwise detected failures in order to pinpoint their root causes.

  • 157.
    Arvidsson, Carl
    et al.
    Linköping University, Department of Computer and Information Science.
    Bergström, David
    Linköping University, Department of Computer and Information Science.
    Eilert, Pernilla
    Linköping University, Department of Computer and Information Science.
    Gudmundsson, Håkan
    Linköping University, Department of Computer and Information Science.
    Henriksson, Christoffer
    Linköping University, Department of Computer and Information Science.
    Magnusson, Filip
    Linköping University, Department of Computer and Information Science.
    Nåbo, Henning
    Linköping University, Department of Computer and Information Science.
    Petersén, Elin
    Linköping University, Department of Computer and Information Science.
    Utveckling av en applikation för framtagning av hjärnresponser vid funktionell magnetresonanstomografi (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The report covers the development of the software JABE, which is to be used in research on brain responses. The purpose of the report is to investigate how such a system can be designed so that it creates value for the customer while also facilitating further development. The report also addresses how a user interface can be adapted to a user's level of knowledge and what lessons from the project in general can be documented. The problem is tackled with close customer contact, several questionnaire surveys, agile working methods and thorough documentation. The software JABE was commissioned by CMIV, the Center for Medical Image Science and Visualization at Linköping University, and is the only one of its kind. The result is, in addition to the software, a thorough account of lessons learned, a description of the system and an evaluation of the SEMAT Kernel ALPHA. The bachelor's report also contains eight individual contributions that delve deeper into areas connected to the project.

  • 158.
    Arvola Bjelkesten, Kim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Feasibility of Point Grid Room First Structure Generation: A bottom-up approach (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Procedural generation becomes increasingly important for video games in an age where the scope of the required content demands both a lot of time and work. One of the fronts of this field is structure generation, where algorithms create models for the game developers to use. Objectives. This study aims to explore the feasibility of the bottom-up approach within the field of structure generation for video games. Methods. Developing an algorithm using the bottom-up approach, PGRFSG, and utilizing a user study to prove the validity of the results. Each participant evaluates five structures, giving each a score based on whether it belongs in a video game. Results. The participants' evaluations show that among the structures generated were some that definitely belonged in a video game world. Two of the five structures received a high score, though for one structure that was deemed not to be the case. Conclusions. Based on the results presented, it can be concluded that the PGRFSG algorithm creates structures that belong in a video game world and that the bottom-up approach is a suitable one for structure generation.

  • 159.
    Aryal, Dhiraj
    et al.
    Blekinge Institute of Technology, School of Computing.
    Shakya, Anup
    Blekinge Institute of Technology, School of Computing.
    A Taxonomy of SQL Injection Defense Techniques (2011). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: SQL injection attacks (SQLIA) pose a serious threat to web applications by allowing attackers to gain unhindered access to the underlying databases containing potentially sensitive information. Many methods and techniques have been proposed by different researchers and practitioners to mitigate the SQL injection problem. However, deploying those methods and techniques without a clear understanding can induce a false sense of security. Classification of such techniques provides great assistance in getting rid of that false sense of security. Objectives: This paper is focused on classifying such techniques by building a taxonomy of SQL injection defense techniques. Methods: A systematic literature review (SLR) was conducted using five reputed and familiar e-databases: IEEE, ACM, Engineering Village (Inspec/Compendex), ISI Web of Science and Scopus. Results: 61 defense techniques were found, and based on these techniques a taxonomy of SQL injection defense techniques was built. Our taxonomy consists of various dimensions which can be grouped under two higher-order terms: detection method and evaluation criteria. Conclusion: The taxonomy provides a basis for comparison among different defense techniques. Organizations can use our taxonomy to choose suitable techniques depending on their available resources and environments. Moreover, this classification can lead towards a number of future research directions in the field of SQL injection.
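
    As an illustration of the kind of technique such a taxonomy classifies (not an example drawn from the thesis itself), parameterized queries are one widely cited SQL injection defense: user input is bound as data rather than concatenated into the SQL string. A minimal Java sketch using the standard JDBC PreparedStatement API:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class UserDao {
            // The ? placeholders are bound as data, so input such as
            // "' OR '1'='1" cannot change the structure of the statement.
            // (Plain-text password comparison is only to keep the sketch short;
            // real systems compare hashes.)
            public boolean authenticate(Connection conn, String user, String pass)
                    throws SQLException {
                String sql = "SELECT 1 FROM users WHERE username = ? AND password = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, user);
                    ps.setString(2, pass);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next();
                    }
                }
            }
        }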

  • 160.
    Asghari, Negin
    Blekinge Institute of Technology, School of Computing.
    Evaluating GQM+ Strategies Framework for Planning Measurement System (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Most organizations are aware of the significance of software measurement programs in helping organizations assess and improve the ways they develop software. Measurement plays a vital role in improving software processes and products. However, the number of failing measurement programs is high and the reasons vary. A recent approach for planning measurement programs is GQM+Strategies, which makes an important extension to existing approaches: it links measurements and improvement activities to strategic goals and to ways of achieving these goals. However, concrete guidance on how to collect the information needed to use GQM+Strategies has not yet been provided in the literature. Objectives. The contribution of this research is to propose and assess an elicitation approach (the Goal Strategy Elicitation (GSE) approach) for the information needed to apply GQM+Strategies in an organization, which also leads to a partial evaluation of GQM+Strategies as such. In this thesis, the initial focus is placed on eliciting the goals and strategies in the most efficient way. Methods. The primary research approach used is action research, which allows a new method or technique to be assessed flexibly in an iterative manner, where the feedback of one iteration is taken into the next iteration, thus improving on the method or technique proposed. Complementary to that, we used a literature review with the primary focus to position the work, explore GQM+Strategies, and determine which elicitation approaches for the support of measurement programs have been proposed. Results. The Goal Strategy Elicitation (GSE) approach has been developed as a tool for eliciting goals and strategies within the software organization to contribute to planning a measurement program. The iterations showed that the elicitation approach should not be too structured (e.g. template/notation based), but rather should support the stakeholders in expressing their thoughts relatively freely. Hence, the end result was an interview guide, not based on notations (as in the first iteration), asking questions in a way that the interviewees are able to express themselves easily without having to e.g. distinguish definitions for goals and strategies. Conclusions. We conclude that the GSE approach is a strong tool for the software organization to be able to elicit the goals and strategies to support GQM+Strategies. The GSE approach evolved in each iteration, and the latest iteration together with the guideline is still used within the studied company for eliciting goals and strategies; the organization acknowledged that they will continue to do so. Moreover, we conclude that there is a need for further empirical validation of the GSE approach in full-scale industry trials.

  • 161.
    Asif, Sajjad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating Web Size Metrics for Early Web Cost Estimation (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context. Web engineering is a new research field which utilizes engineering principles to produce quality web applications. Web applications have become more complex over time, and the wide range of web applications makes it difficult to analyze web metrics for estimation. Correct estimates for web development effort play a very important role in the success of large-scale web development projects.

    Objectives. In this study I investigated the size metrics and cost drivers used by web companies for early web cost estimation. I also aim to validate these through industrial interviews and a web quote form, designed based on the most frequently occurring metrics found when analyzing different companies. Secondly, this research aims to revisit previous work done by Mendes (a senior researcher and contributor in this research area) to validate whether early web cost estimation trends are the same or have changed. The ultimate goal is to help companies in web cost estimation.

    Methods. The first research question is answered by conducting an online survey of 212 web companies and examining their web predictor forms (quote forms). All companies included in the survey used web forms to give quotes on web development projects based on gathered size and cost measures. The second research question is answered by finding the most frequently occurring size metrics in the results of Survey 1. The list of size metrics is validated by two methods: (i) industrial interviews conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form designed using the validated results from the industrial interviews and sent to web companies around the world to seek data on real web projects. Data gathered from the web projects are analyzed using a CBR tool, and the results are validated against the industrial interview results along with Survey 1. The final results are compared with the earlier research to answer the third research question, whether size metrics have changed. All research findings are contributed to the Tukutuku research benchmark project.

    Results. “Number of pages/features” and “responsive implementation” are the top web size metrics for early web cost estimation.

    Conclusions. This research investigated metrics which can be used for cost estimation at the early stage of web application development: the stage where the application is not built yet, requirements are still being collected and an expected cost estimate is being evaluated. A list of new metric variables which can be added to the Tukutuku project is derived.
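
    To make the estimation step concrete: case-based reasoning tools such as the one mentioned above typically estimate a new project's effort from its closest analogues among finished projects. The following is a hedged Java sketch of that idea using the two metrics named in the results; the class and field names are hypothetical, and real CBR tools normalize metrics and may average several analogues:

        import java.util.Comparator;
        import java.util.List;

        public final class CbrEstimator {
            // One finished project: its size metrics and the actual effort spent.
            public record Project(double pages, double responsive, double effort) {}

            // Estimate by analogy: take the effort of the most similar past
            // project under squared Euclidean distance on the size metrics.
            public static double estimate(List<Project> history,
                                          double pages, double responsive) {
                Project nearest = history.stream()
                        .min(Comparator.comparingDouble((Project p) ->
                                Math.pow(p.pages() - pages, 2)
                              + Math.pow(p.responsive() - responsive, 2)))
                        .orElseThrow();
                return nearest.effort();
            }
        }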

  • 162.
    Ask, Anna Vikström
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Reasons for fire fighting in projects (2003). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    This work is a study examining the causes of fire fighting in software projects. Fire fighting is the practice of reactive management, i.e. focus being put on solving the problem of the moment. The study is performed in two parts: one part is a literature study examining what academia considers to be the reasons for fire fighting and how to minimize the problem. The other part is an interview series performed in industry with the purpose of finding what practitioners consider to be the causes of the fire fighting phenomenon. The interview series indicates that the main causes are problems related to requirements and problems caused by persons with key knowledge leaving the project.

  • 163.
    Aslam, Gulshan
    et al.
    Jönköping University, School of Engineering, JTH. Research area Information Engineering.
    Farooq, Faisal
    Jönköping University, School of Engineering, JTH. Research area Information Engineering.
    A comparative study on Traditional Software Development Methods and Agile Software Development Methods (2011). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Software development methods fall into two main categories: agile software development methods and traditional software development methods. Agile methods are generally considered quicker and suited to small teams. Our main mission is to check which type of method is better, so for that purpose we went out into the software development market to ask professionals about their satisfaction with these software development methods. Our research examines which method is more suitable for professionals, the challenges in adopting the methods, and which method is quicker. To perform this study we used a survey questionnaire, and the results are analyzed using a mixed-method approach. The results show that professionals using both types of methods are satisfied, but professionals using traditional methods are more satisfied with respect to the development of quality software, whereas agile professionals are more satisfied with respect to better communication with their customers. From an agility point of view, our study shows that both types of methods have characteristics which support agility, but not fully, so features from both types of methodologies need to be customized.

  • 164.
    Aslam, Khurum
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Khurum, Mahvish
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    A Model for Early Requirements Triage and Selection Utilizing Product Strategies (2007). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    In market-driven product development, large numbers of requirements flow in continuously. It is critical for product management to select the requirements aligned with the overall business goals and discard others as early as possible. It has been suggested in the literature to utilize product strategies for early requirements triage and selection; however, no explicit method, model or framework for how to do this has been suggested. This thesis presents a model for early requirements triage and selection utilizing product strategies, based on a literature study and interviews with people at two organizations about their requirements triage and selection processes and product strategy formulation. The model is validated statically within the same two organizations.

  • 165. Aspvall, Bengt
    et al.
    Halldorsson, MM
    Manne, F
    Approximations for the general block distribution of a matrix (2001). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 262, no 1-2, p. 145-160. Article in journal (Refereed).
    Abstract [en]

    The general block distribution of a matrix is a rectilinear partition of the matrix into orthogonal blocks such that the maximum sum of the elements within a single block is minimized. This corresponds to partitioning the matrix onto parallel processors so as to minimize processor load while maintaining regular communication patterns. Applications of the problem include various parallel sparse matrix computations, compilers for high-performance languages, particle-in-cell computations, video and image compression, and simulations associated with a communication network. We analyze the performance guarantee of a natural and practical heuristic based on iterative refinement, which has previously been shown to give good empirical results. When p² is the number of blocks, we show that the tight performance ratio is Θ(√p). When the matrix has rows of large cost, the details of the objective function of the algorithm are shown to be important, since a naive implementation can lead to an Ω(p) performance ratio. Extensions to more general cost functions, higher-dimensional arrays, and randomized initial configurations are also considered.
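
    For reference, the minimized quantity can be written out as follows. This is a reconstruction from the abstract's wording (an m×n matrix A = (a_kl) partitioned by p row cuts and p column cuts into p² orthogonal blocks), not a formula quoted from the paper:

        % choose row cuts 0 = r_0 < r_1 < ... < r_p = m and
        % column cuts 0 = c_0 < c_1 < ... < c_p = n, minimizing
        % the heaviest of the resulting p^2 blocks
        \min_{r,\,c} \; \max_{1 \le i,\,j \le p} \;
            \sum_{k = r_{i-1} + 1}^{r_i} \; \sum_{l = c_{j-1} + 1}^{c_j} a_{kl}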

  • 166.
    Ata-Ul-Nasar, Mansoor
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Modeling Intel® Cilk™ Plus Programs with Unified Modeling Languages (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Recently, multi-core processors have become very popular in computer systems. They allow multiple threads to be executed simultaneously. The advantage of multi-core comes from parallelizing code to spread the work across the hardware. This can be done using Intel Cilk Plus, a parallel programming environment that builds on M.I.T.'s Cilk and is designed to provide an easy and well-structured parallel programming approach.

         Intel Cilk Plus is an extension of the C and C++ programming languages that expresses data parallelism. This extension is very helpful and easy to use compared with others in this field. It has different features including keywords, reducers and array notations. In general, this article describes Intel Cilk Plus and its features. In addition, Unified Modelling Language (UML) activity diagrams are used for the graphical modelling of Intel Cilk Plus: describing the process of a system, capturing its dynamic behaviour and representing the flow from one activity to another using control flow. Later on, Intel Cilk Plus keywords and UML diagram tools are evaluated, and a comparison of different UML modelling tools is also provided.

  • 167. Aurum, Aybüke
    et al.
    Jeffery, Ross
    Wohlin, Claes
    Handzic, Meliha
    Managing Software Engineering Knowledge (2003). Collection (editor) (Other academic).
  • 168. Aurum, Aybüke
    et al.
    Petersson, Håkan
    Wohlin, Claes
    State-of-the-art: Software Inspections after 25 Years (2002). In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 12, no 3, p. 133-154. Article in journal (Refereed).
    Abstract [en]

    Software inspections, which were originally developed by Michael Fagan in 1976, are an important means to verify and achieve sufficient quality in many software projects today. Since Fagan's initial work, the importance of software inspections has long been recognized by software developers and many organizations. Various proposals have been made by researchers in the hope of improving Fagan's inspection method. The proposals include structural changes to the process and several types of support for the inspection process. Most of the proposals have been empirically investigated in different studies. This review paper focuses on the software inspection process in the light of Fagan's inspection method, and it summarizes and reviews other types of software inspection processes that have emerged in the last 25 years. The paper also addresses important issues related to the inspection process and examines experimental studies and their findings that are of interest, with the purpose of identifying future avenues of research in software inspection.

  • 169. Aurum, Aybüke
    et al.
    Wohlin, Claes
    A Value-Based Approach in Requirements Engineering: Explaining Some of the Fundamental Concepts (2007). Conference paper (Refereed).
  • 170. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Aligning Requirements with Business Objectives: A Framework for Requirements Engineering Decisions (2005). Conference paper (Refereed).
    Abstract [en]

    As software development continues to increase in complexity, involving far-reaching consequences, there is a need for decision support to improve the decision making process in requirements engineering (RE) activities. This research begins with a detailed investigation of the complexity of decision making during RE activities on organizational, product and project levels. Secondly, it presents a conceptual model which describes the RE decision making environment in terms of stakeholders, information requirements, decision types and business objectives. The purpose of this model is to facilitate the development of decision support systems in RE and to help further structure and analyse the decision making process in RE.

  • 171. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Applying Decision-Making Models in Requirements Engineering (2002). Conference paper (Refereed).
  • 172. Aurum, Aybüke
    et al.
    Wohlin, Claes
    The Fundamental Nature of Requirements Engineering Activities as a Decision-Making Process (2003). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 45, no 14, p. 945-954. Article in journal (Refereed).
    Abstract [en]

    The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.

  • 173. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Petersson, Håkan
    Increasing the Understanding of Effectiveness in Software Inspections Using Published Data Sets (2005). In: Journal of Research and Practice in Information Technology, ISSN 1443-458X, Vol. 37, no 3, p. 51-64. Article in journal (Refereed).
    Abstract [en]

    Since its introduction into software engineering, software inspection has been viewed as a cost-effective way of increasing software quality. Despite this, many questions remain unanswered regarding, for example, ideal team size or cost effectiveness. This paper addresses some of these questions by performing an analysis using 30 published data sets from empirical experiments of software inspections. The main question concerns determining a suitable team size for software inspections. The effectiveness of different team sizes is also studied. Furthermore, the differences in mean effectiveness between different team sizes are investigated based on the inspection environmental context, document types and reading technique. It is concluded that it is possible to choose a suitable team size based on the effectiveness of inspections, which can be used as a tool to assist in the planning of inspections. A particularly interesting result is that the variation in effectiveness between different teams is considerably higher for certain types of documents than for others. Our findings contain important information for anyone planning, controlling or managing software inspections.

  • 174. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Porter, A.
    Aligning Software Project Decisions: A Case Study (2006). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 16, no 6, p. 795-818. Article in journal (Refereed).
  • 175. Avatare, Anneli
    et al.
    Jää-Aro, Kai-Mikael
    The APS/AMID Project (1991). Conference paper (Other academic).
  • 176. Avritzer, Alberto
    et al.
    Beecham, Sarah
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kroll, Josiane
    Menaché, Daniel
    Noll, John
    Paasivaara, Maria
    Extending Survivability Models for Global Software Development with Media Synchronicity Theory (2015). In: Proceedings of the IEEE 10th International Conference on Global Software Engineering, IEEE Communications Society, 2015, p. 23-32. Conference paper (Refereed).
    Abstract [en]

    In this paper we propose a new framework to assess the survivability of software projects, accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework the assessment of the impact of inadequate conveyance and convergence, as available in the communication infrastructure selected by the project, on the system's ability to recover from project disasters. We propose an analytical model to assess how a project recovers from disasters related to process and communication failures. Our model is based on media synchronicity theory to account for how information exchange impacts recovery. Then, using the proposed model, we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

  • 177.
    Awan, Nasir Majeed
    et al.
    Blekinge Institute of Technology, School of Computing.
    Alvi, Adnan Khadem
    Blekinge Institute of Technology, School of Computing.
    Predicting software test effort in iterative development using a dynamic Bayesian network (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    It is important to manage iterative projects in a way that maximizes quality and minimizes cost. To achieve high quality, accurate project estimates are of high importance. It is challenging to predict the effort that is required to perform test activities in iterative development. If testers put extra effort into testing, the schedule might be delayed; however, if testers spend less effort, quality could be affected. Currently there is no model for test effort prediction in iterative development to overcome such challenges. This paper introduces and validates a dynamic Bayesian network to predict test effort in iterative software development. In this research work, the proposed framework is evaluated in a number of ways: first, the framework's behavior is observed by considering different parameters and performing an initial validation; then the framework is validated by incorporating data from two industrial projects. The accuracy of the results has been verified through different prediction accuracy measurements and statistical tests. The results from the verification confirmed that the framework has the ability to predict test effort in iterative projects accurately.

  • 178.
    Awan, Rashid
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Requirements Engineering Process Maturity Model for Market Driven Projects: The REPM-M Model (2005). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Several software projects are over budget or face failures during operations. One big reason for this is that software companies develop the wrong software due to wrong interpretation of requirements. Requirements engineering (RE) is a well-known discipline within software engineering which deals with this problem. RE is the process of eliciting, analyzing and specifying requirements so that there won't be any ambiguity between the development company and the customers. Another emerging discipline within requirements engineering is requirements engineering for market-driven projects, which deals with the requirements engineering of a product targeting a mass market. In this thesis, a maturity model is developed which can be used to assess the maturity of the requirements engineering process for market-driven projects. The objective of this model is to provide a quick assessment tool through which a company can learn the strengths and weaknesses of its requirements engineering process.

  • 179.
    Axelsson, Jesper
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Implementering av PostgreSQL som databashanterare för MONITOR (2014). Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE credits. Student thesis.
    Abstract [en]

    Monitor's ERP system MONITOR is under constant development, and in connection with this the company wanted to investigate whether PostgreSQL could be used as the DBMS instead of the current one, Sybase SQL Anywhere. The thesis work has therefore consisted of a comparison of how PostgreSQL stands relative to other DBMSs, an implementation of a PostgreSQL database for MONITOR to work against, and a performance test of creating the database.

    In many respects PostgreSQL appears to be an alternative to SQL Anywhere:

    1. All data types exist in both dialects.
    2. Backup of data is available in several variants and can be automated.
    3. Easy to install and update.
    4. No licensing cost exists.
    5. Support is available in various forms.

    However, PostgreSQL is not a good DBMS to switch to at present, since the system did not work: certain expressions were not translated properly, and no equivalent of LIST exists. An even bigger problem, however, is the time it takes to move the data to a PostgreSQL database; it is not worthwhile to solve problems with functions in the system if it cannot be used anyway because converting the data takes as long as it does.

  • 180.
    Axelsson, Markus
    et al.
    Halmstad University, School of Business, Engineering and Science.
    Lundgren, Oskar
    Halmstad University, School of Business, Engineering and Science.
    Raytelligent Cloud (2017). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Today more and more devices are connected to the internet, providing otherwise quite limited hardware with the ability to perform more complex calculations. This project aims to create a system for managing a user's radar devices using a cloud platform. The system also provides the ability for users to upload their own custom applications, which can make use of data provided by the radar device, run on virtual machines and, if required, push notifications to the user's mobile applications. To simplify development, the system has been divided into three separate subsystems: the radar device, the cloud service and the mobile application. The result of the project is a complete system with a web application which allows the user to register their radar device(s), upload source code which is compiled and run on the cloud platform, and send push notices to a mobile application.

  • 181.
    Axelsson, Mattias
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Sonesson, Johan
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Business Process Performance Measurement for Rollout Success (2004). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Business process improvement for increased product quality is of continuous importance in the software industry. Quality managers in this sector need effective, hands-on tools for decision-making in engineering projects and for rapidly spotting key improvement areas. Measurement programs are a widespread approach for introducing quality improvement in software processes, yet employing all-embracing state-of-the-art quality assurance models is labor intensive. Unfortunately, these do not primarily focus on measures, revealing a need for an instant and straightforward technique for identifying and defining measures in projects that lack the resources or need for entire measurement programs. This thesis explores and compares prevailing quality assurance models using measures, rendering the Measurement Discovery Process constructed from selected parts of the PSM and GQM techniques. The composed process is applied to an industrial project with the given prerequisites, providing a set of measures that are subsequently evaluated. In addition, the application provides a foundation for analysis of the Measurement Discovery Process. The application and analysis of the process show its general applicability to projects with similar constraints, as well as the importance of formal target processes and exhaustive project domain knowledge among measurement implementers. Even though the Measurement Discovery Process is subject to future refinement, it is clearly a step towards rapid delivery of tangible business performance indicators for process improvement.

  • 182. Axelsson, Stefan
    Using Normalized Compression Distance for Classifying File Fragments (2010). Conference paper (Refereed).
    Abstract [en]

    We have applied the generalised and universal distance measure NCD (Normalised Compression Distance) to the problem of determining the types of file fragments by example. A corpus of files that can be redistributed to other researchers in the field was developed, and the NCD algorithm, with k-nearest-neighbour as the classification algorithm, was applied to a random selection of file fragments. The experiment covered circa 2000 fragments from 17 different file types. While the overall accuracy of the n-valued classification only improved on the prior probability of the class from approximately 6% to circa 50%, the classifier reached accuracies of 85%-100% for the most successful file types.
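
    The NCD itself is simple to state: for a compressor C, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) denotes compressed length and xy is the concatenation. A minimal Java sketch using the standard java.util.zip.Deflater as the compressor (the choice of compressor matters and is studied in this line of work; DEFLATE here is only an assumption for illustration); a k-nearest-neighbour classifier would then label a fragment by the file types of its nearest neighbours under this distance:

        import java.util.zip.Deflater;

        public final class Ncd {
            // Compressed size of a byte array using DEFLATE at best compression.
            private static int compressedSize(byte[] data) {
                Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
                deflater.setInput(data);
                deflater.finish();
                byte[] buf = new byte[data.length + 64];
                int total = 0;
                while (!deflater.finished()) {
                    total += deflater.deflate(buf);
                }
                deflater.end();
                return total;
            }

            private static byte[] concat(byte[] x, byte[] y) {
                byte[] xy = new byte[x.length + y.length];
                System.arraycopy(x, 0, xy, 0, x.length);
                System.arraycopy(y, 0, xy, x.length, y.length);
                return xy;
            }

            // NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y));
            // values near 0 mean "similar", values near 1 mean "unrelated".
            public static double distance(byte[] x, byte[] y) {
                int cx = compressedSize(x);
                int cy = compressedSize(y);
                int cxy = compressedSize(concat(x, y));
                return (cxy - Math.min(cx, cy)) / (double) Math.max(cx, cy);
            }
        }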

  • 183.
    Axelsson, Stefan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Bajwa, Kamran Ali
    Srikanth, Mandhapati Venkata
    Blekinge Institute of Technology, School of Computing.
    File Fragment Analysis Using Normalized Compression Distance (2013). Conference paper (Refereed).
    Abstract [en]

    The first step when recovering deleted files using file carving is to identify the file type of a block, also called file fragment analysis. Several researchers have demonstrated the applicability of Kolmogorov complexity methods such as the normalized compression distance (NCD) to this problem. NCD methods compare the results of compressing a pair of data blocks with the compressed concatenation of the pair. One parameter that is required is the compression algorithm to be used. Prior research has identified the NCD compressor properties that yield good performance. However, no studies have focused on its applicability to file fragment analysis. This paper describes the results of experiments on a large corpus of files and file types with different block lengths. The experimental results demonstrate that, in the case of file fragment analysis, compressors with the desired properties do not perform statistically better than compressors with less computational complexity.

  • 184.
    Axelsson, Veronica
    Gotland University, School of Game Design, Technology and Learning Processes.
    What technique is most appropriate for 3D modeling a chair for a movie production? (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Making 3D models with polygon modeling is the most common technique used in 3D animated movie production, but there are also other good modeling techniques to work with. The aim of this thesis is to examine which of three chosen modeling techniques is most appropriate for modeling a chair for a 3D animated movie production. I made three models of the same chair design and compared the results. The modeling techniques used are polygon modeling, NURBS modeling and digital sculpting. A few factors were considered when judging which of the three techniques was most suitable: the model's geometry, the workflow and the rendering (material and lighting).

    The three chairs were rendered in the same scene with the same lighting and settings. The results showed that the model's geometry and how smooth the modeling technique is to work with matter most when judging which technique is the most appropriate. In addition, the results show that how the light falls on and reflects off the surface depends on how the geometry is placed on the model rather than on which modeling technique was used.

  • 185.
    Ayalew, Tigist
    et al.
    Blekinge Institute of Technology, School of Computing.
    Kidane, Tigist
    Blekinge Institute of Technology, School of Computing.
    Identification and Evaluation of Security Activities in Agile Projects: A Systematic Literature Review and Survey Study (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Today’s software development industry requires high-speed software delivery from the development team. In order to achieve this, organizations transition from their conventional software development method to an agile development method while preserving customer satisfaction. Even though this approach is becoming a popular development method, it has some disadvantages from a security point of view, because the method imposes several constraints such as the lack of a complete overview of the product, a higher development pace and a lack of documentation. Although a security engineering (SE) process is necessary in order to build secure software, no SE process has been developed specifically for the agile model. As a result, SE processes that are commonly used in the waterfall model are being used in agile models. However, there is a clash or disparity between the established waterfall SE processes and the ideas and methodologies proposed by the agile manifesto. This means that, while agile models work with short development increments that adapt easily to change, the existing SE processes work in a plan-driven development setting and try to reduce defects found in a program before the occurrence of threats through a heavy and inflexible process. This study aims at bridging the gap between the agile model and security by providing insightful understanding of the SE processes that are used in the current agile industry. Objectives: The objectives of this thesis are to identify and evaluate security activities from high-profile waterfall SE processes that are used in the current agile industry, and then to suggest the security activities that are most compatible with and beneficial to the agile model, based on the study results. Methods: The study involved two approaches: a systematic literature review and a survey. The systematic literature review has two main aims: the first is to gain a comprehensive understanding of security in an agile process model; the second is to identify high-profile SE processes that are commonly used in the waterfall model. Moreover, it helped to compare the thesis results with other previous work in the area. A survey was conducted to identify and evaluate waterfall security activities that are used in current agile industry projects. The evaluation criteria were based on the integration cost of a security activity and the benefit it provides to agile projects. Results: The results of the systematic review are organized in tabular form for clear understanding and easy analysis. High-profile SE processes and their activities were obtained and used as input for the survey study. From the survey study, security activities that are used in the current agile industry were identified. Furthermore, the identified security activities were evaluated in terms of benefit and cost, and as a result the most compatible and beneficial security activities for the agile process model were investigated. Conclusions: To develop secure software in the agile model, there is a need for an SE process or practice that can address security issues in every phase of the agile project lifecycle. This can be done either by integrating the most compatible and beneficial security activities from waterfall SE processes with the agile process or by creating a new SE process. In this thesis, it was found that, of the investigated high-profile waterfall SE processes, none was fully compatible with and beneficial to agile projects.

  • 186.
    Ayaz, Muhammad
    Linköping University, Department of Computer and Information Science.
    Model-Based Diagnosis of Software Functional Dependencies (2010). Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis.
    Abstract [en]

    Researchers have developed frameworks for diagnostic analysis called “Model-Based Diagnosis Systems”. These systems are very general in scope, covering a wide range of malfunctions and identifying repair measures. This thesis is an effort to diagnose complex and lengthy static source code. Without executing the source code, discrepancies can only be identified by finding procedural dependencies.

    With respect to modern programming languages, many software bugs arise due to logically erroneous calculations or mishandling of data structures. Modern Integrated Development Environments (IDEs) like Visual Studio, JBuilder and Eclipse are strong enough to analyze and parse static text code to identify syntactic and type-conversion errors. Some IDEs can automatically fix such errors or provide the developer with several possible suggestions.

    In this thesis we have analyzed and extracted functional dependencies of source code. This extracted information can increase programmers' understanding of code that is extremely large or complex. Modeling this information in a model system reduces the time needed to debug the code in case of a failure. This increases productivity in terms of software development and debugging skills as well. The main contribution of this thesis is the use of model-based diagnosis techniques on software functional dependency graphs and charts.

    Keywords: Model-Based Diagnosis Systems, Integrated Development Environments, Procedural Dependencies, Erroneous Calculations, Call Graphs, Directed Graph Markup Language.
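
    To illustrate the kind of structure such a diagnosis works over (a hypothetical sketch; the thesis's own representation is DGML-based and richer), procedures can be modeled as nodes of a directed graph with "calls/uses" edges, so that the transitive dependencies of a failing procedure form the initial candidate set for root-cause analysis:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.LinkedHashSet;
        import java.util.Map;
        import java.util.Set;

        public final class DependencyGraph {
            // caller -> set of procedures it calls or uses
            private final Map<String, Set<String>> callees = new HashMap<>();

            public void addDependency(String caller, String callee) {
                callees.computeIfAbsent(caller, k -> new LinkedHashSet<>()).add(callee);
            }

            // Everything the failing procedure transitively depends on:
            // the candidate root causes for an observed failure.
            public Set<String> candidateCauses(String failingProcedure) {
                Set<String> visited = new LinkedHashSet<>();
                Deque<String> stack = new ArrayDeque<>();
                stack.push(failingProcedure);
                while (!stack.isEmpty()) {
                    String current = stack.pop();
                    for (String dep : callees.getOrDefault(current, Set.of())) {
                        if (visited.add(dep)) {
                            stack.push(dep);
                        }
                    }
                }
                return visited;
            }
        }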

  • 187. Azhar, Damir
    et al.
    Riddle, Patricia
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Mittas, Nikolaos
    Angelis, Lefteris
    Using ensembles for web effort estimation (2013). Conference paper (Refereed).
    Abstract [en]

    Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared by effort estimation in the general software estimation domain. A previous study in this domain has shown that ensembles of estimation techniques can be used to address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques will be similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques, two approaches were investigated: the first involved replicating the methodology used in the previous study, while the second used the Scott-Knott algorithm. Both approaches were applied to the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data.
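
    The combination step of an ensemble is straightforward; the sketch below shows one plausible combiner (the median of the solo estimates, which is robust to a single outlying technique). This illustrates the general idea only, not the specific combination rules used in the paper:

        import java.util.List;
        import java.util.function.ToDoubleFunction;

        public final class EnsembleEstimator<P> {
            // Each solo technique maps a project description to an effort estimate.
            private final List<ToDoubleFunction<P>> soloTechniques;

            public EnsembleEstimator(List<ToDoubleFunction<P>> soloTechniques) {
                this.soloTechniques = soloTechniques;
            }

            // Median of the solo estimates.
            public double estimate(P project) {
                double[] e = soloTechniques.stream()
                        .mapToDouble(t -> t.applyAsDouble(project))
                        .sorted()
                        .toArray();
                int n = e.length;
                return n % 2 == 1 ? e[n / 2] : (e[n / 2 - 1] + e[n / 2]) / 2.0;
            }
        }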

  • 188.
    Azhar, Muhammad Saad Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aslam, Ammad
    Blekinge Institute of Technology, School of Computing.
    Multiple Coordinated Information Visualization Techniques in Control Room Environment (2009). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Presenting large amounts of multivariate data is not a simple problem. When multiple correlated variables are involved, it becomes difficult to comprehend the data using traditional means. Information visualization techniques provide an interactive way to present and analyze such data. This thesis was carried out at ABB Corporate Research, Västerås, Sweden. The use of parallel coordinates and multiple coordinated views was suggested to realize interactive reporting and trending of multivariate data for ABB’s Network Manager SCADA system. A prototype was developed and an empirical study was conducted to evaluate the suggested design and test it for usability from an actual industry perspective. With the help of this prototype and the evaluations carried out, we were able to achieve stronger results regarding the effectiveness and efficiency of the visualization techniques used. The results confirm that such interfaces are more effective, efficient and intuitive for filtering and analyzing multivariate data.

  • 189.
    Aziz, Yama
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Exploring a keyword driven testing framework: a case study at Scania IT (2017). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis is to investigate organizational quality assurance through the international testing standard ISO 29119. The focus is on how an organization carries out testing processes and designs and implements test cases. Keyword-driven testing is a test composition concept in ISO 29119 that is suitable for automation. This thesis answers how keyword-driven testing can facilitate the development of maintainable test cases and support test automation in an agile organization.

    The methodology used was a qualitative case study including semi-structured interviews and focus groups with agile business units within Scania IT. Among the interview participants were developers, test engineers, scrum masters and a unit manager.

    The results describe the testing practices carried out in several agile business units, maintainability issues with test automation, and general ideas on how test automation should be approached. Common issues with test automation were test cases failing due to changed test inputs, inexperience with test automation frameworks, and a lack of resources due to the project release cycle.

    This thesis concludes that keyword-driven testing has the potential to solve several maintainability issues with test cases breaking. However, the practicality and effectiveness of that potential remain unanswered. Moreover, successfully developing an automated keyword-driven testing framework requires integration with existing test automation tools and consideration of the agile organizational circumstances.
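
    A minimal Java sketch of the keyword-driven idea (a hypothetical API, not Scania IT's framework): test cases become data rows of a keyword plus arguments, and when the system under test changes only the keyword bindings need maintenance, not every test case:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public final class KeywordRunner {
            public interface Action {
                void run(List<String> args);
            }

            // Keyword bindings: the single place that knows how a step is performed.
            private final Map<String, Action> keywords = new HashMap<>();

            public void register(String keyword, Action action) {
                keywords.put(keyword, action);
            }

            // Executes one test step, e.g. execute("OpenPage", List.of("/login")).
            public void execute(String keyword, List<String> args) {
                Action action = keywords.get(keyword);
                if (action == null) {
                    throw new IllegalArgumentException("Unknown keyword: " + keyword);
                }
                action.run(args);
            }
        }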

  • 190.
    Aziz, Yassar
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    Aslam, Muhammad Naeem
    Blekinge Institute of Technology, School of Engineering, Department of Mathematics and Natural Sciences.
    Traffic Engineering with Multi-Protocol Label Switching, Performance Comparison with IP networks (2008). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Traffic Engineering (TE) deals with the geometric design planning and traffic operation of networks, network devices and the relationships of routers for the transportation of data. TE is the part of network engineering that concentrates on performance optimization of operational networks. It involves techniques and the application of knowledge to attain performance objectives, including the movement of data through the network, reliability, planning of network capacity and efficient use of network resources. This thesis addresses the problems of traffic engineering and suggests a solution using the concept of Multi-Protocol Label Switching (MPLS). MPLS is a modern technique for forwarding network data which extends routing with path control and packet forwarding. In this thesis MPLS is evaluated on the basis of its performance and efficiency in sending data from source to destination. A MATLAB-based simulation tool was developed to compare MPLS with an IP network in a simulated environment. The results show the performance of the MPLS network in comparison with the IP network.

  • 191. Azizyan, G.
    et al.
    Magarian, M. K.
    Kajko-Mattson, Mira
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Survey of agile tool usage and needs (2011). In: Proceedings - 2011 Agile Conference, 2011, p. 29-38. Conference paper (Refereed).
    Abstract [en]

    Today little is known about which tools software companies are using to support their Agile methods and whether they are satisfied or dissatisfied with them. This is due to a lack of objective surveys on the subject. The surveys that have been conducted so far are of a subjective nature and have mostly been performed by tool vendors. They are very limited in number and focus mainly on company structure and adherence to a specific Agile method rather than on tool usage and needs. For this reason many companies have difficulty choosing appropriate tools to support their Agile process. One such company is the Swedish telecommunications giant Ericsson. To account for this lack of data, Ericsson commissioned us to conduct an independent survey focusing on the tool usage and needs as experienced by the Agile software community today. In this paper we report on the results of our survey. The survey covers 121 responses from 120 different companies in 35 different countries. Our results show that the most satisfactory tool aspect is ease of use, whereas the least satisfactory one is lack of integration with other systems. Finally, our results provide a list of the features most desired by software companies today.

  • 192. Baca, Dejan
    Automated static code analysis: A tool for early vulnerability detection (2009). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Software vulnerabilities are introduced into programs during development. Architectural flaws are introduced during planning and design, while implementation faults are created during coding. Penetration testing is often used to detect these vulnerabilities. This approach is expensive because it is performed late in development, and any correction would increase lead time. An alternative would be to detect and correct vulnerabilities in the phase of development where they are the least expensive to detect and correct. Source code audits have often been suggested and used to detect implementation vulnerabilities. However, manual audits are time consuming and require extensive expertise to be efficient. A static code analysis tool could achieve the same results as a manual audit at a fraction of the time. Through a set of case studies and experiments at Ericsson AB, this thesis investigates the technical capabilities and limitations of using a static analysis tool as an early vulnerability detector. The investigation is extended to the human factor by examining how developers interact with and use the static analysis tool. The contributions of this thesis include the identification of the tool's capabilities, so that further security improvements can focus on other types of vulnerabilities. By using static analysis early in development, possible cost-saving measures are identified. Additionally, the thesis presents the limitations of static code analysis, the most important being the incorrect warnings reported by static analysis tools. In addition, a development process overhead was deemed necessary to successfully use static analysis in an industry setting.

  • 193.
    Baca, Dejan
    Blekinge Institute of Technology, School of Computing.
    Developing Secure Software: in an Agile Process2012Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Background: Software developers are facing increased pressure to lower development time, release new software versions more frequently, and adapt to a faster market. This new environment forces developers and companies to move from a plan-based waterfall development process to a flexible agile process. By minimizing pre-development planning and instead increasing communication between customers and developers, the agile process tries to create a new, more flexible way of working that allows developers to focus their efforts on the features customers want. With increased connectivity and faster feature releases, the security of the software product comes under stress. To develop secure software, many companies use security engineering processes that are plan-heavy and inflexible. The two approaches are each other's opposites and directly contradict each other. Objective: The objective of the thesis is to evaluate how to develop secure software in an agile process: in particular, which existing best practices can be incorporated into an agile project while still providing the same benefit as in a waterfall process, and how these practices can be incorporated and adapted to fit the process while the improvement is measured. Some security engineering concepts are useful even where the best practice is not agile-compatible and would require extensive adaptation to integrate with an agile project. Method: The primary research method used throughout the thesis is case studies conducted in a real industry setting. As secondary methods for data collection, a variety of approaches have been used, such as semi-structured interviews, workshops, literature study, and use of historical data from industry. Results: The security engineering best practices were investigated through a series of case studies. The basic compatibility of agile and security engineering was assessed in the literature, by developers, and in practical studies. The security engineering best practices were grouped based on their purpose and their compatibility with the agile process. One well-known and popular best practice, automated static code analysis, was thoroughly investigated for its usefulness, its deployment, and the risks of using it as part of the process. For the risk analysis practices, a novel approach was introduced and improved; in this way, a means of adapting existing practices to agile is proposed. Conclusion: With regard to agile and security engineering, we did not find any of the investigated processes to be agile-compatible. Agile is reaction-driven and adapts to change, while the security engineering processes are proactive and try to prevent threats before they happen. To develop secure software in an agile process, developers should adopt and adapt key concepts from security engineering. These changes will affect the flexibility of the agile process, but they are a necessity if developers want the same software security state that security engineering processes can provide.

  • 194. Baca, Dejan
    et al.
    Carlsson, Bengt
    Agile development with security engineering activities2011Conference paper (Refereed)
    Abstract [en]

    Agile software development has been used by industry to create a more flexible and lean software development process, i.e., making it possible to develop software at a faster rate and with more agility during development. There are, however, concerns that the higher development pace and lack of documentation are creating less secure software. We therefore looked at three known security engineering processes, Microsoft SDL, Cigital's Touchpoints, and Common Criteria, and identified which specific security activities they perform. We then compared these activities with an Agile development process used in industry. Developers from a large telecommunication manufacturer were interviewed to learn their impressions of using these security activities in an Agile development process. We produced a security-enhanced Agile development process that we present in this paper. This new Agile process uses activities from already established security engineering processes that provide the benefits the developers wanted without hindering or obstructing the Agile process in any significant way.

  • 195. Baca, Dejan
    et al.
    Carlsson, Bengt
    Lundberg, Lars
    Evaluating the Cost Reduction of Static Code Analysis for Software Security2008Conference paper (Refereed)
    Abstract [en]

    Automated static code analysis is an efficient technique for increasing the quality of software during early development. This paper presents a case study in which mature software with known vulnerabilities is subjected to a static analysis tool. The value of the tool is estimated based on failures reported by customers. An average of 17% cost savings would have been possible if the static analysis tool had been used. The tool also had a 30% success rate in detecting known vulnerabilities and at the same time found 59 new vulnerabilities in the three examined products.
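
    The savings estimate amounts to pricing each customer-reported failure twice: what it actually cost to correct after release versus what an early fix would have cost, counted only for the failures the tool could have detected. A hedged sketch of that bookkeeping (all costs and detection flags below are invented, not the study's data):

        # Hypothetical cost-savings bookkeeping: compare late correction
        # costs with early ones for tool-detectable failures only.
        failures = [
            # (late_cost, early_cost, detectable_by_tool)
            (10_000, 1_500, True),
            (25_000, 2_000, False),
            (8_000, 1_000, True),
        ]

        total_late = sum(late for late, _, _ in failures)
        saved = sum(late - early for late, early, det in failures if det)
        print(f"savings: {saved / total_late:.0%}")  # 36% on this toy data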

  • 196.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Carlsson, Bengt
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Improving software security with static automated code analysis in an industry setting2013In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 43, no 3, p. 259-279Article in journal (Refereed)
    Abstract [en]

    Software security can be improved by identifying and correcting vulnerabilities. In order to reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach to early detection. So far, only a few empirical studies have been conducted in an industrial context to evaluate static automated code analysis. A case study was conducted to evaluate static code analysis in industry, focusing on the defect detection capability, deployment, and usage of static automated code analysis with respect to software security. We identified that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, as did the developers' perception of the tool's merit. Classifying the warnings from the tool was harder for the developers than correcting them, and the correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug-reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose that multiple persons (at least two) classify each warning; the same goes for deciding how to act based on the warning.
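
    The "at least two classifiers" recommendation can be pictured as a small gate in the bug-reporting flow: a warning is acted on only once two independent reviewers agree on its class. A minimal sketch, with invented reviewer names and verdict labels:

        # A warning advances only when two reviewers give the same verdict.
        from dataclasses import dataclass, field

        @dataclass
        class ToolWarning:
            checker: str
            location: str
            votes: dict = field(default_factory=dict)  # reviewer -> verdict

            def classify(self, reviewer: str, verdict: str) -> None:
                self.votes[reviewer] = verdict

            def consensus(self):
                verdicts = list(self.votes.values())
                for v in set(verdicts):
                    if verdicts.count(v) >= 2:
                        return v
                return None  # needs another opinion before filing or dismissing

        w = ToolWarning("NULL_DEREF", "session.c:214")
        w.classify("alice", "true positive")
        w.classify("bob", "true positive")
        print(w.consensus())  # 'true positive' -> file in the bug-reporting system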

  • 197.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Countermeasure graphs for software security risk assessment: An action research2013In: Journal of Systems and Software, ISSN 0164-1212, Vol. 86, no 9, p. 2411-2428Article in journal (Refereed)
    Abstract [en]

    Software security risk analysis is an important part of improving software quality. In previous research we proposed countermeasure graphs (CGs), an approach to risk analysis that combines the ideas of different risk analysis approaches. The approach was designed for reuse and easy evolvability in order to support agile software development, but CGs had not been evaluated in industry practice in agile software development. In this research we evaluate the ability of CGs to support practitioners in identifying the most critical threats and countermeasures. The research method used is participatory action research, in which CGs were evaluated in a series of risk analyses on four different telecom products. With Peltier (used at the company prior to CGs), the practitioners identified attacks of low to medium risk level; CGs allowed practitioners to identify more serious risks (in the first iteration, 1 serious threat, 5 high-risk threats, and 11 medium threats). The need for tool support was identified very early; tool support allowed the practitioners to play through scenarios of which countermeasures to implement, and it supported reuse. The results indicate that CGs support practitioners in identifying high-risk security threats, work well in an agile software development context, and are cost-effective.
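
    The core mechanic behind such a ranking is that threats carry (voted) risk scores and countermeasures inherit the accumulated risk of the attacks they mitigate. A minimal sketch of that step, with threats, scores, and mappings invented for illustration:

        # Countermeasure-graph style ranking: a countermeasure's priority
        # is the summed risk of the attacks it stops. All values invented.
        attacks = {"sql_injection": 9, "session_hijack": 6, "dos_flood": 4}

        mitigates = {
            "input_validation": ["sql_injection"],
            "tls_everywhere": ["session_hijack"],
            "rate_limiting": ["dos_flood", "session_hijack"],
        }

        ranking = sorted(
            ((sum(attacks[a] for a in atk), cm) for cm, atk in mitigates.items()),
            reverse=True,
        )
        for score, cm in ranking:
            print(f"{cm}: {score}")  # rate_limiting: 10, input_validation: 9, ...

    Playing through a scenario then amounts to removing a chosen countermeasure's attacks from the table and re-running the ranking.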

  • 198. Baca, Dejan
    et al.
    Petersen, Kai
    Prioritizing Countermeasures through the Countermeasure Method for Software Security (CM-Sec)2010Conference paper (Refereed)
    Abstract [en]

    Software security is an important quality aspect of a software system. It is therefore important to integrate software security touch points throughout the development life-cycle. So far, the focus of touch points in the early phases has been on the identification of threats and attacks. In this paper we propose a novel method that focuses on the end product by prioritizing countermeasures. The method provides an extension to attack trees and a process for the identification and prioritization of countermeasures. The approach has been applied to an open-source application and showed that countermeasures could be identified. Furthermore, an analysis of the effectiveness and cost-efficiency of the countermeasures could be provided.
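
    One plausible reading of the effectiveness and cost-efficiency analysis is a ranking by risk reduction per unit implementation cost; a sketch with figures invented purely for illustration:

        # Prioritize countermeasures by cost-efficiency: risk reduced
        # divided by implementation cost. All figures are invented.
        countermeasures = [
            # (name, risk_reduced, implementation_cost)
            ("parameterized queries", 9.0, 2.0),
            ("web application firewall", 7.0, 5.0),
            ("captcha on login", 3.0, 1.0),
        ]

        for name, reduced, cost in sorted(
            countermeasures, key=lambda c: c[1] / c[2], reverse=True
        ):
            print(f"{name}: {reduced / cost:.1f} risk reduced per unit cost")
        # parameterized queries: 4.5, captcha on login: 3.0, firewall: 1.4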

  • 199. Baca, Dejan
    et al.
    Petersen, Kai
    Carlsson, Bengt
    Lundberg, Lars
    Static Code Analysis to Detect Software Security Vulnerabilities: Does Experience Matter?2009Conference paper (Refereed)
    Abstract [en]

    Code reviews with static analysis tools are today recommended by several security development processes. Developers are expected to use the tools' output to detect the security threats they themselves have introduced in the source code. This approach assumes that all developers can correctly identify a warning from a static analysis tool (SAT) as a security threat that needs to be corrected. We conducted an industry experiment with a state-of-the-art static analysis tool and real vulnerabilities. We found that average developers do not correctly identify the security warnings and that only developers with specific experience are better than chance at detecting the security vulnerabilities. Specific SAT experience more than doubled the number of correct answers, and a combination of security experience and SAT experience almost tripled the number of correct security answers.
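
    "Better than chance" is the usual one-sided binomial question: with two answer options, how likely is it to get at least k of n classifications right by pure guessing? A self-contained check (the counts are toy values, not the study's data):

        # One-sided binomial tail: probability of >= k correct answers out
        # of n by guessing with p = 0.5. Toy counts, not the study's data.
        from math import comb

        def p_at_least(k: int, n: int, p: float = 0.5) -> float:
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        print(f"{p_at_least(14, 20):.3f}")  # ~0.058: 14/20 could still be luck
        print(f"{p_at_least(17, 20):.3f}")  # ~0.001: 17/20 is clearly above chance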

  • 200. Backman, Anders
    et al.
    Bodin, Kenneth
    Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Lacoursière, Claude
    Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Servin, Martin
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Democratizing CAE with Interactive Multiphysics Simulation and Simulators2012Conference paper (Other academic)