151 - 200 of 2978
  • 151.
    Arsalan, Muhammad
    Blekinge Institute of Technology, School of Engineering.
    Future Tuning Process For Embedded Control Systems (2009). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    This master’s thesis concerns the development of embedded control systems. The development process for embedded control systems involves several steps, such as control design, rapid prototyping, fixed-point implementation and hardware-in-the-loop simulations. Another step, which Volvo is not currently (September 2009) using within climate control, is online tuning. One reason for not using this technique today is that the available tools for the task (ATI Vision, INCA from ETAS or CalDesk from dSPACE) do not handle parameter dependencies in a satisfactory way. With these constraints, online tuning cannot be used, and the controller development process is more laborious and time consuming. The main task of this thesis is to solve the problem with parameter dependencies and to make online tuning possible.
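    The parameter-dependency problem the abstract describes can be illustrated with a small sketch: when one calibration parameter changes during online tuning, every parameter derived from it is recomputed so the calibration stays consistent. This is a hypothetical illustration of the concept, not the thesis's actual solution; the class names and the fan/temperature example are invented.

    ```python
    # Hypothetical sketch of dependency-aware parameter tuning: when a base
    # calibration parameter changes, dependent parameters are recomputed so
    # the whole parameter set stays consistent. Names are illustrative only.

    class Param:
        def __init__(self, name, value):
            self.name = name
            self.value = value
            self._dependents = []

        def add_dependent(self, dependent):
            self._dependents.append(dependent)

        def set(self, value):
            self.value = value
            for d in self._dependents:
                d.recompute()

    class DependentParam(Param):
        def __init__(self, name, formula, *sources):
            super().__init__(name, None)
            self.formula = formula      # function of the source values
            self.sources = sources
            for s in sources:
                s.add_dependent(self)
            self.recompute()

        def recompute(self):
            self.value = self.formula(*(s.value for s in self.sources))

    # Example: a fan duty-cycle limit derived from a temperature setpoint.
    setpoint = Param("cabin_temp_setpoint", 22.0)
    fan_max = DependentParam("fan_max_duty", lambda t: min(1.0, t / 30.0), setpoint)
    setpoint.set(27.0)   # one online tuning step; fan_max updates automatically
    print(fan_max.value) # 0.9
    ```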

  • 152.
    Arslan, Muhammad
    et al.
    Blekinge Institute of Technology, School of Computing.
    Riaz, Muhammad Assad
    Blekinge Institute of Technology, School of Computing.
    A Roadmap for Usability and User Experience Measurement during early phases of Web Applications Development (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Web usability and User Experience (UX) play a vital role in the success or failure of web applications. However, usability and UX measurement during the software development life cycle poses many challenges. Based on a systematic literature review, this thesis discusses the current usability and user experience evaluation and measurement methods and the defined measures, as well as their applicability during the software development life cycle. The challenges of using those methods were also identified. In order to elaborate further on the challenges, we conducted informal interviews within a software company. Based on the findings, we defined a usability and user experience measurement and evaluation roadmap for web application development companies. The roadmap contains a set of usability evaluation and measurement methods, as well as measures, that we found suitable for use during the early stages (requirements, design, and development) of the web application development lifecycle. To validate the applicability of the defined roadmap, a case study was performed on a real-time, market-oriented real estate web application. The results and discussion of the findings, as well as future research directions, are presented.

  • 153. Artho, Cyrille
    et al.
    Biere, Armin
    Advanced Unit Testing—How to Scale Up a Unit Test Framework (2006). In: Proc. Workshop on Automation of Software Test (AST 2006), 2006, p. 462-465. Conference paper (Refereed).
  • 154. Artho, Cyrille
    et al.
    Biere, Armin
    Applying Static Analysis to Large-Scale, Multithreaded Java Programs (2001). In: Proc. 13th ASWEC, 2001, p. 68-75. Conference paper (Refereed).
  • 155. Artho, Cyrille
    et al.
    Chen, Zhongwei
    Honiden, Shinichi
    AOP-based automated unit test classification of large benchmarks (2007). In: Proc. 3rd Int. Workshop on Aspect-Oriented Software Development (AOAsia 2007), 2007, Vol. 2, p. 17-22. Conference paper (Refereed).
  • 156. Artho, Cyrille
    et al.
    Gros, Quentin
    Rousset, Guillaume
    Precondition Coverage in Software Testing (2016). In: Proc. 1st Int. Workshop on Validating Software Tests (VST 2016), IEEE conference proceedings, 2016. Conference paper (Refereed).
    Abstract [en]

    Preconditions indicate when it is permitted to use a given function. However, it is not always the case that both outcomes of a precondition are observed during testing. A precondition that is always false makes a function unusable; a precondition that is always true may actually turn out to be an invariant. In model-based testing, preconditions describe when a transition may be executed from a given state. If no outgoing transition is enabled in a given state because all preconditions of all outgoing transitions are false, the test model may be flawed. Experiments show a low test coverage of preconditions in the Scala library. We also investigate preconditions in Modbat models for model-based testing; in that case, a certain number of test cases is needed to produce sufficient coverage, but the remaining cases of low coverage indeed point to legitimate flaws in test models or code.
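    Precondition coverage in the paper's sense (observing both the true and the false outcome of each precondition during a test run) can be sketched in a few lines of Python. This is a minimal illustration of the measurement idea, not the authors' instrumentation; the require helper and the stack example are invented.

    ```python
    # Minimal sketch of precondition-coverage tracking: each precondition is
    # evaluated through a wrapper that records which outcomes (True/False)
    # have been observed across the test run. Illustrative only.

    from collections import defaultdict

    observed = defaultdict(set)  # precondition name -> set of observed outcomes

    def require(name, condition):
        """Evaluate a named precondition and record its outcome."""
        observed[name].add(bool(condition))
        if not condition:
            raise ValueError(f"precondition violated: {name}")

    def pop(stack):
        require("stack_nonempty", len(stack) > 0)
        return stack.pop()

    # A test suite that never calls pop() on an empty stack leaves the False
    # outcome uncovered, which is exactly the situation the paper measures.
    pop([1, 2])
    uncovered = {n for n, o in observed.items() if o != {True, False}}
    print(uncovered)  # {'stack_nonempty'}
    ```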

  • 157.
    Artho, Cyrille
    et al.
    KTH, School of Computer Science and Communication (CSC). National Institute of Advanced Industrial Science and Technology (AIST), Japan.
    Gros, Quentin
    Rousset, Guillaume
    Banzai, Kazuaki
    Ma, Lei
    Kitamura, Takashi
    Hagiya, Masami
    Tanabe, Yoshinori
    Yamamoto, Mitsuharu
    Model-based API Testing of Apache ZooKeeper (2017). In: 2017 10th IEEE International Conference on Software Testing, Verification and Validation (ICST), IEEE, 2017, p. 288-298. Conference paper (Refereed).
    Abstract [en]

    Apache ZooKeeper is a distributed data store that is highly concurrent and asynchronous due to network communication; testing such a system is very challenging. Our solution using the tool "Modbat" generates test cases for concurrent client sessions, and processes results from synchronous and asynchronous callbacks. We use an embedded model checker to compute the test oracle for non-deterministic outcomes; the oracle model evolves dynamically with each new test step. Our work has detected multiple previously unknown defects in ZooKeeper. Finally, a thorough coverage evaluation of the core classes shows how code and branch coverage strongly relate to feature coverage in the model, and hence to modeling effort.

  • 158. Artho, Cyrille
    et al.
    Havelund, Klaus
    Kumar, Rahul
    Yamagata, Yoriyuki
    Domain-Specific Languages with Scala (2015). In: Proc. 17th Int. Conf. on Formal Engineering Methods (ICFEM 2015), 2015, Vol. 9407, p. 1-16. Conference paper (Refereed).
  • 159. Artho, Cyrille
    et al.
    Hayamizu, Koji
    Ramler, Rudolf
    Yamagata, Yoriyuki
    With an Open Mind: How to Write Good Models (2014). In: Proc. 2nd Int. Workshop on Formal Techniques for Safety-Critical Systems (FTSCS 2013), 2014, p. 3-18. Conference paper (Refereed).
  • 160. Artho, Cyrille
    et al.
    Ma, Lei
    Classification of Randomly Generated Test Cases (2016). In: Proc. 1st Int. Workshop on Validating Software Tests (VST 2016), IEEE conference proceedings, 2016. Conference paper (Refereed).
    Abstract [en]

    Random test case generation produces relatively diverse test sequences, but the validity of the test verdict is always uncertain. Because tests are generated without taking the specification and documentation into account, many tests are invalid. To understand the prevalent types of successful and invalid tests, we present a classification of 56 issues that were derived from 208 failed, randomly generated test cases. While the existing workflow successfully eliminated more than half of the tests as irrelevant, half of the remaining failed tests are false positives. We show that the new @NonNull annotation of Java 8 has the potential to eliminate most of the false positives, highlighting the importance of machine-readable documentation.

  • 161. Artho, Cyrille
    et al.
    Oiwa, Yutaka
    Suzaki, Kuniyasu
    Hagiya, Masami
    Extraction of properties in C implementations of security APIs for verification of Java applications (2009). In: Proc. 3rd Int. Workshop on Analysis of Security APIs, 2009. Conference paper (Refereed).
  • 162. Artho, Cyrille
    et al.
    Suzaki, Kuniyasu
    Cosmo, Roberto di
    Treinen, Ralf
    Zacchiroli, Stefano
    Why Do Software Packages Conflict? (2012). In: Proc. 9th Working Conf. on Mining Software Repositories (MSR 2012), 2012, p. 141-150. Conference paper (Refereed).
  • 163. Artho, Cyrille
    et al.
    Suzaki, Kuniyasu
    Cosmo, Roberto di
    Zacchiroli, Stefano
    Sources of Inter-package Conflicts in Debian (2011). In: Proc. Workshop on Logics for Component Configuration (LoCoCo 2011), 2011. Conference paper (Refereed).
  • 164. Artho, Cyrille
    et al.
    Suzaki, Kuniyasu
    Hagiya, Masami
    Leungwattanakit, Watcharin
    Potter, Richard
    Platon, Eric
    Tanabe, Yoshinori
    Weitl, Franz
    Yamamoto, Mitsuharu
    Using Checkpointing and Virtualization for Fault Injection (2015). In: International Journal of Networking and Computing, ISSN 2185-2839, E-ISSN 2185-2847, Vol. 5, no 2, p. 347-372. Article in journal (Refereed).
  • 165.
    Arts, Thomas
    et al.
    Quviq AB, Gothenburg, Sweden.
    Mousavi, Mohammad Reza
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Automatic Consequence Analysis of Automotive Standards (AUTO-CAAS) [Position Paper] (2015). In: WASA '15: Proceedings of the First International Workshop on Automotive Software Architecture / [ed] Yanja Dajsuren, Harald Altinger & Miroslaw Staron, New York, NY: ACM Press, 2015, p. 35-38. Conference paper (Refereed).
    Abstract [en]

    This paper provides some background and the roadmap of the AUTO-CAAS project, a 3-year project financed by the Swedish Knowledge Foundation and ongoing as a joint project among three academic and industrial partners. The aim of the project is to exploit the formal models of the AUTOSAR standard, developed by the project's industrial partner Quviq AB, in order to predict possible future failures in concrete implementations of components. To this end, deviations from the formal specification will be exploited to generate test cases that can push concrete components to the corners where such deviations will result in observable failures. The same information will also be used in the diagnosis of otherwise detected failures in order to pinpoint their root causes.

  • 166.
    Arvidsson, Carl
    et al.
    Linköping University, Department of Computer and Information Science.
    Bergström, David
    Linköping University, Department of Computer and Information Science.
    Eilert, Pernilla
    Linköping University, Department of Computer and Information Science.
    Gudmundsson, Håkan
    Linköping University, Department of Computer and Information Science.
    Henriksson, Christoffer
    Linköping University, Department of Computer and Information Science.
    Magnusson, Filip
    Linköping University, Department of Computer and Information Science.
    Nåbo, Henning
    Linköping University, Department of Computer and Information Science.
    Petersén, Elin
    Linköping University, Department of Computer and Information Science.
    Utveckling av en applikation för framtagning av hjärnresponser vid funktionell magnetresonanstomografi [Development of an application for producing brain responses in functional magnetic resonance imaging] (2016). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [sv, translated]

    The report covers the development of the JABE software, which is to be used in research on brain responses. The purpose of the report is to investigate how such a system can be designed so that it creates value for the customer while facilitating further development. The report also covers how a user interface can be adapted to a user's level of knowledge, and which lessons from the project in general can be documented. The problem is tackled with close customer contact, several questionnaire surveys, agile working methods and thorough documentation. The JABE software was commissioned by CMIV, the Center for Medical Image Science and Visualization at Linköping University, and is the only one of its kind. The result is, besides a piece of software, a thorough account of lessons learned, a description of the system, and an evaluation of the SEMAT Kernel ALPHA. The bachelor's thesis also contains eight individual contributions that go deeper into areas connected to the project.

  • 167.
    Arvola Bjelkesten, Kim
    Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
    Feasibility of Point Grid Room First Structure Generation: A bottom-up approach (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Context. Procedural generation is becoming increasingly important for video games in an age where the scope of the required content demands both a lot of time and work. One of the fronts of this field is structure generation, where algorithms create models for the game developers to use. Objectives. This study aims to explore the feasibility of the bottom-up approach within the field of structure generation for video games. Methods. An algorithm using the bottom-up approach, PGRFSG, is developed, and a user study is used to validate the results. Each participant evaluated five structures, giving each a score based on whether it belonged in a video game. Results. The participants' evaluations show that among the structures generated were some that definitely belonged in a video game world. Two of the five structures got a high score, though for one structure that was deemed not to be the case. Conclusions. Based on the results presented, it can be concluded that the PGRFSG algorithm creates structures that belong in a video game world and that the bottom-up approach is a suitable one for structure generation.

  • 168.
    Aryal, Dhiraj
    et al.
    Blekinge Institute of Technology, School of Computing.
    Shakya, Anup
    Blekinge Institute of Technology, School of Computing.
    A Taxonomy of SQL Injection Defense Techniques (2011). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: SQL injection attacks (SQLIA) pose a serious threat to web applications by allowing attackers to gain unhindered access to the underlying databases containing potentially sensitive information. Many methods and techniques have been proposed by different researchers and practitioners to mitigate the SQL injection problem. However, deploying those methods and techniques without a clear understanding can induce a false sense of security. Classification of such techniques provides great assistance in getting rid of that false sense of security. Objectives: This paper is focused on classifying such techniques by building a taxonomy of SQL injection defense techniques. Methods: A systematic literature review (SLR) is conducted using five reputed and familiar e-databases: IEEE, ACM, Engineering Village (Inspec/Compendex), ISI Web of Science and Scopus. Results: 61 defense techniques are found, and based on these techniques, a taxonomy of SQL injection defense techniques is built. Our taxonomy consists of various dimensions which can be grouped under two higher-order terms: detection method and evaluation criteria. Conclusion: The taxonomy provides a basis for comparison among different defense techniques. Organizations can use our taxonomy to choose suitable ones depending on their available resources and environments. Moreover, this classification can lead towards a number of future research directions in the field of SQL injection.
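    One of the classic defense techniques that any such taxonomy covers is parameterized queries. The sketch below, using Python's built-in sqlite3 module, contrasts a vulnerable string-concatenated query with a parameterized one; it is a generic textbook illustration, not one of the thesis's 61 catalogued techniques.

    ```python
    # Sketch of one classic SQL injection defense: parameterized queries.
    # The database driver treats user input strictly as data, never as SQL.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # a typical SQLIA payload

    # Vulnerable: string concatenation lets the payload alter the query.
    vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())  # [('admin',)] -- injection succeeds

    # Safe: the ? placeholder binds the payload as a literal string value.
    safe = conn.execute("SELECT role FROM users WHERE name = ?",
                        (user_input,)).fetchall()
    print(safe)  # [] -- no row matches the literal payload string
    ```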

  • 169.
    Asghari, Negin
    Blekinge Institute of Technology, School of Computing.
    Evaluating GQM+ Strategies Framework for Planning Measurement System (2012). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context. Most organizations are aware of the significance of software measurement programs in helping organizations assess and improve the ways they develop software. Measurement plays a vital role in improving software processes and products. However, the number of failing measurement programs is high, and the reasons vary. A recent approach for planning measurement programs is GQM+Strategies, which makes an important extension to existing approaches: it links measurements and improvement activities to strategic goals and to ways to achieve these goals. However, concrete guidance on how to collect the information needed to use GQM+Strategies is not yet provided in the literature. Objectives. The contribution of this research is to propose and assess an elicitation approach (the Goal Strategy Elicitation (GSE) approach) for the information needed to apply GQM+Strategies in an organization, which also leads to a partial evaluation of GQM+Strategies as such. In this thesis, the initial focus is placed on eliciting the goals and strategies in the most efficient way. Methods. The primary research approach used is action research, which allows a new method or technique to be assessed flexibly in an iterative manner, where the feedback of one iteration is taken into the next iteration, thus improving the method or technique proposed. Complementary to that, we used a literature review with the primary focus of positioning the work, exploring GQM+Strategies, and determining which elicitation approaches for the support of measurement programs have been proposed. Results. The Goal Strategy Elicitation (GSE) approach, as a tool for eliciting goals and strategies within the software organization to contribute to planning a measurement program, has been developed. The iterations showed that the elicitation approach should not be too structured (e.g. template/notation based), but rather should support the stakeholders in expressing their thoughts relatively freely. Hence, the end result was an interview guide, not based on notations (as in the first iteration), asking questions in a way that the interviewees are able to express themselves easily without having to e.g. distinguish definitions for goals and strategies. Conclusions. We conclude that the GSE approach is a strong tool for the software organization to elicit the goals and strategies that support GQM+Strategies. The GSE approach evolved in each iteration, and the latest iteration together with the guideline is still used within the studied company for eliciting goals and strategies; the organization acknowledged that they will continue to do so. Moreover, we conclude that there is a need for further empirical validation of the GSE approach in full-scale industry trials.

  • 170.
    Asif, Sajjad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Investigating Web Size Metrics for Early Web Cost Estimation (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Context: Web engineering is a young research field which applies engineering principles to produce quality web applications. Web applications have become more complex with the passage of time, and the wide range of web applications makes it quite difficult to analyze the web metrics used for estimation. Correct estimates of web development effort play a very important role in the success of large-scale web development projects.

    Objectives: In this study I investigated the size metrics and cost drivers used by web companies for early web cost estimation. I also aim to validate them through industrial interviews and a web quote form, designed around the metrics that occurred most frequently in an analysis of different companies. Secondly, this research revisits previous work by Mendes (a senior researcher and contributor in this research area) to validate whether early web cost estimation trends are the same or have changed. The ultimate goal is to help companies with web cost estimation.

    Methods: The first research question is answered by conducting an online survey of 212 web companies and examining their web predictor forms (quote forms). All companies included in the survey used web forms to give quotes on web development projects based on gathered size and cost measures. The second research question is answered by finding the most frequently occurring size metrics in the results of Survey 1. The list of size metrics is validated by two methods: (i) industrial interviews conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form designed using the validated results from the industrial interviews and sent to web companies around the world to seek data on real web projects. Data gathered from web projects are analyzed using a CBR tool, and the results are validated against the industrial interview results along with Survey 1. The final results are compared with the earlier research to answer the third research question, whether size metrics have changed. All research findings are contributed to the Tukutuku research benchmark project.

    Results: "Number of pages/features" and "responsive implementation" are the top web size metrics for early web cost estimation.

    Conclusions: This research investigated metrics which can be used for early web cost estimation at the early stage of web application development. This is the stage where the application is not yet built, but requirements are being collected and an expected cost estimate is produced. A list of new metric variables which can be added to the Tukutuku project is provided.

  • 171.
    Ask, Anna Vikström
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Reasons for fire fighting in projects (2003). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    This work is a study examining the causes of fire fighting in software projects. Fire fighting is the practice of reactive management, i.e. focus being put on solving the problem of the moment. The study is performed in two parts. One part is a literature study examining what academia considers to be the reasons for fire fighting and how to minimize the problem. The other part is an interview series performed in industry with the purpose of finding what practitioners consider to be the causes of the fire fighting phenomenon. The interview series indicates that the main causes of the problems are requirements-related issues and problems caused by persons with key knowledge leaving the project.

  • 172.
    Asklund, Ulf
    et al.
    Lund University, SWE.
    Höst, Martin
    Lund University, SWE.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Monitoring Effect of Architectural Changes (2016). In: Software Quality: The Future of Systems- and Software Development / [ed] Winkler, Dietmar, Biffl, Stefan, Bergsmann, Johannes, 2016, p. 97-108. Conference paper (Refereed).
    Abstract [en]

    A common situation is that an initial architecture has been sufficient in the initial phases of a project, but when the size and complexity of the product increase, the architecture must be changed. In this paper experiences are presented from changing an architecture into independent units, providing basic reuse of main functionality although giving higher priority to independence than reuse. An objective was also to introduce metrics in order to monitor the architectural changes. The change was studied in a case study through weekly meetings with the team, collected metrics, and questionnaires. The new architecture was well received by the development team, who found it to be less fragile. Concerning the metrics for monitoring, it was concluded that a high abstraction level was useful for the purpose.

  • 173.
    Aslam, Gulshan
    et al.
    Jönköping University, School of Engineering, JTH. Research area Information Engineering.
    Farooq, Faisal
    Jönköping University, School of Engineering, JTH. Research area Information Engineering.
    A comparative study on Traditional Software Development Methods and Agile Software Development Methods (2011). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Everyone is talking about software development methods, but these methods fall into different categories, of which the two most important are agile software development methods and traditional software development methods. Agile methods are generally considered to be quick and suited to small teams. Our main mission is to check which type of method is better, so for that purpose we went out into the software development market and asked professionals about their satisfaction with these software development methods. Our research examines which method is suitable for professionals, the challenges in adopting each method, and which method is quicker. To perform this study we used a survey questionnaire, and the results were analyzed using a mixed-method approach. The results show that professionals using both types of methods are satisfied, but professionals using traditional methods are more satisfied with their methods with respect to the development of quality software, whereas agile professionals are more satisfied with their methods with respect to better communication with their customers. From an agility point of view, our study shows that both types of methods have characteristics that support agility but do not fully support it, so features from both types of methodologies need to be customized.

  • 174.
    Aslam, Khurum
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Khurum, Mahvish
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    A Model for Early Requirements Triage and Selection Utilizing Product Strategies (2007). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    In market-driven product development, large numbers of requirements flow in continuously. It is critical for product management to select the requirements aligned with overall business goals and discard others as early as possible. It has been suggested in the literature to utilize product strategies for early requirements triage and selection. However, no explicit method, model or framework has been suggested for how to do it. This thesis presents a model for early requirements triage and selection utilizing product strategies, based on a literature study and interviews with people at two organizations about their requirements triage and selection processes and product strategy formulation. The model is validated statically within the same two organizations.

  • 175. Aspvall, Bengt
    et al.
    Halldorsson, MM
    Manne, F
    Approximations for the general block distribution of a matrix (2001). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 262, no 1-2, p. 145-160. Article in journal (Refereed).
    Abstract [en]

    The general block distribution of a matrix is a rectilinear partition of the matrix into orthogonal blocks such that the maximum sum of the elements within a single block is minimized. This corresponds to partitioning the matrix onto parallel processors so as to minimize processor load while maintaining regular communication patterns. Applications of the problem include various parallel sparse matrix computations, compilers for high-performance languages, particle-in-cell computations, video and image compression, and simulations associated with a communication network. We analyze the performance guarantee of a natural and practical heuristic based on iterative refinement, which has previously been shown to give good empirical results. When p² is the number of blocks, we show that the tight performance ratio is Θ(√p). When the matrix has rows of large cost, the details of the objective function of the algorithm are shown to be important, since a naive implementation can lead to an Ω(p) performance ratio. Extensions to more general cost functions, higher-dimensional arrays, and randomized initial configurations are also considered.
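    A simplified sketch of the iterative-refinement idea analyzed in the paper: alternately re-optimize the row cuts (with the column cuts fixed) and the column cuts (with the row cuts fixed), each pass trying to shrink the heaviest block. The brute-force one-dimensional step below is for illustration only; the paper's actual algorithm and objective details differ.

    ```python
    # Sketch of iterative refinement for the general block distribution:
    # alternate between choosing row cuts (columns fixed) and column cuts
    # (rows fixed) so as to minimize the maximum block sum.

    import itertools

    def block_cost(M, row_cuts, col_cuts):
        """Maximum block sum of the rectilinear partition given by the cuts."""
        rows = zip([0] + row_cuts, row_cuts + [len(M)])
        cols = list(zip([0] + col_cuts, col_cuts + [len(M[0])]))
        return max(sum(M[r][c] for r in range(r0, r1) for c in range(c0, c1))
                   for r0, r1 in rows for c0, c1 in cols)

    def best_cuts_1d(M, fixed_cuts, k, n, axis):
        """Exhaustively pick k cuts on one axis, the other axis's cuts fixed."""
        best = None
        for cuts in itertools.combinations(range(1, n), k):
            rc, cc = (list(cuts), fixed_cuts) if axis == 0 else (fixed_cuts, list(cuts))
            cost = block_cost(M, rc, cc)
            if best is None or cost < best[0]:
                best = (cost, list(cuts))
        return best[1]

    def refine(M, p, iters=5):
        """Iteratively refine an even initial partition into p x p blocks."""
        n, m = len(M), len(M[0])
        row_cuts = [i * n // p for i in range(1, p)]
        col_cuts = [j * m // p for j in range(1, p)]
        for _ in range(iters):
            row_cuts = best_cuts_1d(M, col_cuts, p - 1, n, axis=0)
            col_cuts = best_cuts_1d(M, row_cuts, p - 1, m, axis=1)
        return row_cuts, col_cuts, block_cost(M, row_cuts, col_cuts)

    M = [[1, 9, 1, 1], [1, 1, 1, 8], [7, 1, 1, 1], [1, 1, 6, 1]]
    print(refine(M, 2))  # cuts and max block sum for a 2x2 partition
    ```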

  • 176.
    Astner, Thomas
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Forest thinning in VR: A VR application with the theme of forest thinning (2018). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The purpose of this project was to create a virtual reality game where users are able to carry out a thinning. The main goals are to use real forest terrains as terrain models in the game, to make the GameObjects and the teleportation system in the application able to handle changing terrains, and to ensure the application does not cause virtual reality sickness. The application has been developed with the help of the game engine Unity and plugins from Unity's own asset store. User tests and measurements were carried out in order to evaluate whether the game causes virtual reality sickness. The results show that it is possible to use real forest terrains and that the solution is suited to this application. The downside is that in order to use real-life terrains several steps have to be taken, and the terrain object has to be designed manually. It is also shown that the GameObjects and the teleportation system have been implemented in a way that lets them handle changing terrains. Furthermore, some of the functionalities of the application could be improved, especially the scoring system. The user tests and the measurements showed that the application does not cause virtual reality sickness, but also that the users feel there are things missing in the application.

  • 177.
    Ata-Ul-Nasar, Mansoor
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Modeling Intel® Cilk™ Plus Programs with Unified Modeling Languages (2015). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Recently, multi-core processors have become very popular in computer systems, allowing multiple threads to be executed simultaneously. The advantage of multi-core comes from parallelizing code to spread the work across the hardware. This can be done using Intel Cilk Plus, a parallel programming environment based on Cilk (originally developed at M.I.T.), which is designed to provide an easy and well-structured approach to parallel programming.

    Intel Cilk Plus is an extension of the C and C++ programming languages that expresses data parallelism. This extension is very helpful and easy to use compared with other approaches in this field. It has different features including keywords, reducers and array notations. In general, this article describes Intel Cilk Plus and its features. In addition, Unified Modeling Language (UML) activity diagrams are used for graphical modeling of Intel Cilk Plus programs, describing the process of a system, capturing its dynamic behavior, and representing the flow from one activity to another using control flow. Finally, the Intel Cilk Plus keywords and UML activity diagrams are evaluated, and a comparison of different UML modeling tools is provided.

  • 178. Aurum, Aybüke
    et al.
    Jeffery, Ross
    Wohlin, Claes
    Handzic, Meliha
    Managing Software Engineering Knowledge (2003). Collection (editor) (Other academic).
  • 179. Aurum, Aybüke
    et al.
    Petersson, Håkan
    Wohlin, Claes
    State-of-the-art: Software Inspections after 25 Years (2002). In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 12, no 3, p. 133-154. Article in journal (Refereed).
    Abstract [en]

    Software inspections, which were originally developed by Michael Fagan in 1976, are an important means to verify and achieve sufficient quality in many software projects today. Since Fagan's initial work, the importance of software inspections has been long recognized by software developers and many organizations. Various proposals have been made by researchers in the hope of improving Fagan's inspection method. The proposals include structural changes to the process and several types of support for the inspection process. Most of the proposals have been empirically investigated in different studies. This is a review paper focusing on the software inspection process in the light of Fagan's inspection method and it summarizes and reviews other types of software inspection processes that have emerged in the last 25 years. This paper also addresses important issues related to the inspection process and examines experimental studies and their findings that are of interest with the purpose of identifying future avenues of research in software inspection.

  • 180. Aurum, Aybüke
    et al.
    Wohlin, Claes
    A Value-Based Approach in Requirements Engineering: Explaining Some of the Fundamental Concepts (2007). Conference paper (Refereed).
  • 181. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Aligning Requirements with Business Objectives: A Framework for Requirements Engineering Decisions (2005). Conference paper (Refereed).
    Abstract [en]

    As software development continues to increase in complexity, involving far-reaching consequences, there is a need for decision support to improve the decision making process in requirements engineering (RE) activities. This research begins with a detailed investigation of the complexity of decision making during RE activities on organizational, product and project levels. Secondly, it presents a conceptual model which describes the RE decision making environment in terms of stakeholders, information requirements, decision types and business objectives. The purpose of this model is to facilitate the development of decision support systems in RE and to help further structure and analyse the decision making process in RE.

  • 182. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Applying Decision-Making Models in Requirements Engineering (2002). Conference paper (Refereed).
  • 183. Aurum, Aybüke
    et al.
    Wohlin, Claes
    The Fundamental Nature of Requirements Engineering Activities as a Decision-Making Process (2003). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 45, no 14, p. 945-954. Article in journal (Refereed).
    Abstract [en]

    The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.

  • 184. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Petersson, Håkan
    Increasing the Understanding of Effectiveness in Software Inspections Using Published Data Sets (2005). In: Journal of Research and Practice in Information Technology, ISSN 1443-458X, Vol. 37, no 3, p. 51-64. Article in journal (Refereed).
    Abstract [en]

    Since its inception into software engineering, software inspection has been viewed as a cost-effective way of increasing software quality. Despite this, many questions remain unanswered regarding, for example, ideal team size or cost effectiveness. This paper addresses some of these questions by performing an analysis using 30 published data sets from empirical experiments on software inspections. The main question concerns determining a suitable team size for software inspections. The effectiveness of different team sizes is also studied. Furthermore, the differences in mean effectiveness between different team sizes are investigated based on the inspection environmental context, document types and reading technique. It is concluded that it is possible to choose a suitable team size based on the effectiveness of inspections. This can be used as a tool to assist in the planning of inspections. A particularly interesting result is that the variation in effectiveness between different teams is considerably higher for certain types of documents than for others. Our findings contain important information for anyone planning, controlling or managing software inspections.

  • 185. Aurum, Aybüke
    et al.
    Wohlin, Claes
    Porter, A.
    Aligning Software Project Decisions: A Case Study (2006). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 16, no 6, p. 795-818. Article in journal (Refereed).
  • 186. Avatare, Anneli
    et al.
    Jää-Aro, Kai-Mikael
    The APS/AMID Project (1991). Conference paper (Other academic).
  • 187. Avritzer, Alberto
    et al.
    Beecham, Sarah
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kroll, Josiane
    Menaché, Daniel
    Noll, John
    Paasivaara, Maria
    Extending Survivability Models for Global Software Development with Media Synchronicity Theory (2015). In: Proceedings of the IEEE 10th International Conference on Global Software Engineering, IEEE Communications Society, 2015, p. 23-32. Conference paper (Refereed).
    Abstract [en]

    In this paper we propose a new framework to assess the survivability of software projects, accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework an assessment of the impact of inadequate conveyance and convergence, in the communication infrastructure selected for the project, on the system's ability to recover from project disasters. We propose an analytical model to assess how the project recovers from disasters related to process and communication failures. Our model is based on media synchronicity theory to account for how information exchange impacts recovery. Then, using the proposed model, we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data-gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

  • 188.
    Awan, Nasir Majeed
    et al.
    Blekinge Institute of Technology, School of Computing.
    Alvi, Adnan Khadem
    Blekinge Institute of Technology, School of Computing.
    Predicting software test effort in iterative development using a dynamic Bayesian network (2010). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    It is important to manage iterative projects in a way that maximizes quality and minimizes cost. To achieve high quality, accurate project estimates are of high importance. It is challenging to predict the effort that is required to perform test activities in iterative development. If testers put extra effort into testing, the schedule might be delayed; however, if testers spend less effort, quality could be affected. Currently there is no model for test effort prediction in iterative development to overcome such challenges. This paper introduces and validates a dynamic Bayesian network to predict test effort in iterative software development. In this research work, the proposed framework is evaluated in a number of ways: first, the framework's behavior is observed by considering different parameters and performing an initial validation; second, the framework is validated by incorporating data from two industrial projects. The accuracy of the results has been verified through different prediction accuracy measurements and statistical tests. The results confirm that the framework has the ability to predict test effort in iterative projects accurately.
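    The flavor of the approach can be sketched with a toy two-state dynamic Bayesian network: a hidden per-iteration state evolves through a transition model, the observed effort of past iterations updates the belief, and the filtered belief yields an effort prediction for the next iteration. All states, probabilities and effort values below are invented for illustration; the thesis's actual network is far richer.

    ```python
    # Toy dynamic Bayesian network for test-effort prediction: a hidden
    # "test complexity" state (low/high) evolves between iterations; the
    # observed effort bucket of each past iteration updates the belief.

    states = ["low", "high"]
    prior = {"low": 0.6, "high": 0.4}
    transition = {"low": {"low": 0.7, "high": 0.3},   # P(next | current)
                  "high": {"low": 0.2, "high": 0.8}}
    effort_given_state = {"low": 40.0, "high": 90.0}  # person-hours
    # P(observed effort bucket | state), buckets: "small", "large"
    emission = {"low": {"small": 0.8, "large": 0.2},
                "high": {"small": 0.25, "large": 0.75}}

    def filter_step(belief, observation):
        """One DBN time slice: predict with the transition model, then
        condition on the observed effort bucket."""
        predicted = {s: sum(belief[t] * transition[t][s] for t in states)
                     for s in states}
        unnorm = {s: predicted[s] * emission[s][observation] for s in states}
        z = sum(unnorm.values())
        return {s: unnorm[s] / z for s in states}

    belief = prior
    for obs in ["small", "large", "large"]:  # effort seen in iterations 1-3
        belief = filter_step(belief, obs)

    # Expected test effort for the next iteration.
    next_belief = {s: sum(belief[t] * transition[t][s] for t in states)
                   for s in states}
    print(round(sum(next_belief[s] * effort_given_state[s] for s in states), 1))
    ```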

  • 189.
    Awan, Rashid
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Requirements Engineering Process Maturity Model for Market Driven Projects: The REPM-M Model (2005). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Several software projects run over budget or face failures during operations. One big reason for this is that software companies develop the wrong software due to misinterpretation of requirements. Requirements engineering is a well-known discipline within software engineering which deals with this problem. RE is the process of eliciting, analyzing and specifying requirements so that there won't be any ambiguity between the development company and the customers. Another emerging discipline within requirements engineering is requirements engineering for market-driven projects, which deals with the requirements engineering of a product targeting a mass market. In this thesis, a maturity model is developed which can be used to assess the maturity of a requirements engineering process for market-driven projects. The objective of this model is to provide a quick assessment tool through which a company can learn the strengths and weaknesses of its requirements engineering process.

  • 190.
    Axelsson, Jesper
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Implementering av PostgreSQL som databashanterare för MONITOR [Implementation of PostgreSQL as the database management system for MONITOR] (2014). Independent thesis, Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis.
    Abstract [sv, translated]

    Monitor's ERP system MONITOR is under constant development, and in connection with this the company wanted to investigate whether PostgreSQL could be used as the DBMS instead of the current one, Sybase SQL Anywhere. The thesis work therefore consisted of a comparison of how PostgreSQL stands relative to other DBMSs, an implementation of a PostgreSQL database for MONITOR to work against, and a performance test of the creation of the database.

    In many respects PostgreSQL appears to be an alternative to SQL Anywhere:

    1. All data types exist in both dialects.
    2. Data backup is available in several variants and can be automated.
    3. Easy to install and update.
    4. No licensing cost exists.
    5. Support is available in various forms.

    However, PostgreSQL is not a good DBMS to switch to at present, since the system did not work: certain expressions were not translated properly, and no equivalent of LIST exists. An even bigger problem, though, is the time it takes to move data to a PostgreSQL database; it is not worthwhile to solve problems with functions in the system if it still cannot be used because data conversion takes as long as it does.

  • 191.
    Axelsson, Markus
    et al.
    Halmstad University, School of Business, Engineering and Science.
    Lundgren, Oskar
    Halmstad University, School of Business, Engineering and Science.
    Raytelligent Cloud (2017). Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Today more and more devices are connected to the internet, providing otherwise quite limited hardware with the ability to perform more complex calculations. This project aims to create a system for managing a user's radar devices using a cloud platform. The system also provides the ability for users to upload their own custom applications, which can make use of data provided by the radar device, run on virtual machines and, if required, push notifications to the user's mobile applications. To simplify development, the system has been divided into three separate subsystems: the radar device, the cloud service and the mobile application. The result of the project is a complete system with a web application which provides the user with the ability to register their radar device(s), upload source code which is compiled and run on the cloud platform, and send push notices to a mobile application.

  • 192.
    Axelsson, Mattias
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Sonesson, Johan
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Business Process Performance Measurement for Rollout Success (2004). Independent thesis, Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

    Business process improvement for increased product quality is of continuous importance in the software industry. Quality managers in this sector need effective, hands-on tools for decision-making in engineering projects and for rapidly spotting key improvement areas. Measurement programs are a widespread approach for introducing quality improvement in software processes, yet employing all-embracing state-of-the-art quality assurance models is labor intensive. Unfortunately, these do not primarily focus on measures, revealing a need for an instant and straightforward technique for identifying and defining measures in projects without the resources or need for entire measurement programs. This thesis explores and compares prevailing quality assurance models that use measures, rendering the Measurement Discovery Process constructed from selected parts of the PSM and GQM techniques. The composed process is applied to an industrial project with the given prerequisites, providing a set of measures that are subsequently evaluated. In addition, the application gives a foundation for analysis of the Measurement Discovery Process. The application and analysis of the process show its general applicability to projects with similar constraints, as well as the importance of formal target processes and exhaustive project domain knowledge among measurement implementers. Even though the Measurement Discovery Process is subject to future refinement, it is clearly a step towards rapid delivery of tangible business performance indicators for process improvement.

  • 193. Axelsson, Stefan
    Using Normalized Compression Distance for Classifying File Fragments (2010). Conference paper (Refereed).
    Abstract [en]

    We have applied the generalised and universal distance measure NCD (Normalised Compression Distance) to the problem of determining the types of file fragments by example. A corpus of files that can be redistributed to other researchers in the field was developed, and the NCD algorithm, using k-nearest-neighbour as the classification algorithm, was applied to a random selection of file fragments. The experiment covered circa 2000 fragments from 17 different file types. While the overall accuracy of the n-valued classification only improved on the prior probability of the class from approximately 6% to circa 50% overall, the classifier reached accuracies of 85%-100% for the most successful file types.
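    For reference, the normalised compression distance applied in the paper is standardly defined in terms of a real-world compressor, where C(x) is the compressed length of x and xy is the concatenation of x and y:

    ```latex
    \mathrm{NCD}(x, y) = \frac{C(xy) - \min\{C(x),\, C(y)\}}{\max\{C(x),\, C(y)\}}
    ```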

  • 194.
    Axelsson, Stefan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Bajwa, Kamran Ali
    Srikanth, Mandhapati Venkata
    Blekinge Institute of Technology, School of Computing.
    File Fragment Analysis Using Normalized Compression Distance (2013). Conference paper (Refereed).
    Abstract [en]

    The first step when recovering deleted files using file carving is to identify the file type of a block, also called file fragment analysis. Several researchers have demonstrated the applicability of Kolmogorov complexity methods such as the normalized compression distance (NCD) to this problem. NCD methods compare the results of compressing a pair of data blocks with the compressed concatenation of the pair. One parameter that is required is the compression algorithm to be used. Prior research has identified the NCD compressor properties that yield good performance. However, no studies have focused on its applicability to file fragment analysis. This paper describes the results of experiments on a large corpus of files and file types with different block lengths. The experimental results demonstrate that, in the case of file fragment analysis, compressors with the desired properties do not perform statistically better than compressors with less computational complexity.
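    A minimal sketch of the approach shared by these two papers, with zlib as the compressor and a 1-nearest-neighbour decision. The labelled fragments below are invented stand-ins for a real corpus, and the papers' compressor choices and block lengths are not reproduced.

    ```python
    # Minimal sketch of NCD-based file-fragment classification: zlib is the
    # compressor, and an unknown fragment is assigned the type of its
    # nearest labelled fragment under NCD. Illustrative data only.

    import zlib

    def clen(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy = clen(x), clen(y)
        return (clen(x + y) - min(cx, cy)) / max(cx, cy)

    def classify(fragment: bytes, labelled) -> str:
        """1-nearest-neighbour under NCD; labelled is [(bytes, type), ...]."""
        return min(labelled, key=lambda item: ncd(fragment, item[0]))[1]

    labelled = [(b'{"key": 123, "value": "abc", ' * 40 + b"}", "json"),
                (bytes(range(256)) * 4, "binary")]
    fragment = b'{"name": "x", "id": 7, "value": "y", ' * 30
    print(classify(fragment, labelled))  # expected: json
    ```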

  • 195.
    Axelsson, Veronica
    Gotland University, School of Game Design, Technology and Learning Processes.
    What technique is most appropriate for 3D modeling a chair for a movie production? (2013). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Making 3D models with polygon modeling is the most common technique used in 3D animated movie production, but there are also other good modeling techniques to work with. The aim of this thesis is to examine which of three chosen modeling techniques is most appropriate for modeling a chair for a 3D animated movie production. I made three models of the same chair design and compared the results. The modeling techniques used are polygon modeling, NURBS modeling and digital sculpting. A few factors were considered when I judged which of the three techniques was most suitable: the model's geometry, the workflow and the rendering (material and lighting).

    The three chairs were rendered in the same scene with the same lighting and settings. The results showed that the model's geometry and how smooth the modeling technique is to work with matter most for judging which technique is the most appropriate. In addition, the results show that how the light falls on and reflects off the surface depends on how the geometry is placed on the model rather than on which modeling technique was used.

  • 196.
    Ayalew, Tigist
    et al.
    Blekinge Institute of Technology, School of Computing.
    Kidane, Tigist
    Blekinge Institute of Technology, School of Computing.
    Identification and Evaluation of Security Activities in Agile Projects: A Systematic Literature Review and Survey Study (2012). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Context: Today's software development industry requires high-speed software delivery from the development team. In order to achieve this, organizations make the transformation from their conventional software development method to an agile development method while preserving customer satisfaction. Even though this approach is becoming a popular development method, from a security point of view it has some disadvantages, because the method imposes several constraints, such as the lack of a complete overview of the product, a higher development pace and a lack of documentation. Although a security engineering (SE) process is necessary in order to build secure software, no SE process has been developed specifically for agile models. As a result, SE processes that are commonly used in the waterfall model are being used in agile models. However, there is a clash or disparity between the established waterfall SE processes and the ideas and methodologies proposed by the agile manifesto. This means that, while agile models work with short development increments that adapt easily to change, the existing SE processes work in a plan-driven development setting and try to reduce defects found in a program before the occurrence of threats through a heavy and inflexible process. This study aims at bridging the gap between agile models and security by providing an insightful understanding of the SE processes that are used in the current agile industry. Objectives: The objectives of this thesis are to identify and evaluate security activities from high-profile waterfall SE processes that are used in the current agile industry, and then to suggest the most compatible and beneficial security activities for agile models based on the study results. Methods: The study involved two approaches: a systematic literature review and a survey. The systematic literature review has two main aims: the first is to gain a comprehensive understanding of security in an agile process model; the second is to identify high-profile SE processes that are commonly used in the waterfall model. Moreover, it helped to compare the thesis results with other previous work in the area. A survey was conducted to identify and evaluate waterfall security activities that are used in current agile industry projects. The evaluation criteria were based on the cost of integrating a security activity and the benefit it provides to agile projects. Results: The results of the systematic review are organized in tabular form for clear understanding and easy analysis. High-profile SE processes and their activities are obtained. These results are used as input for the survey study. From the survey study, security activities that are used in the current agile industry are identified. Furthermore, the identified security activities are evaluated in terms of benefit and cost. As a result, the security activities that are most compatible and beneficial for the agile process model are determined. Conclusions: To develop secure software in an agile model, there is a need for an SE process or practice that can address security issues in every phase of the agile project lifecycle. This can be done either by integrating the most compatible and beneficial security activities from waterfall SE processes into the agile process or by creating a new SE process. In this thesis, it has been found that, of the investigated high-profile waterfall SE processes, none was fully compatible with and beneficial to agile projects.

  • 197.
    Ayaz, Muhammad
    Linköping University, Department of Computer and Information Science.
    Model-Based Diagnosis of Software Functional Dependencies (2010). Independent thesis, Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis.
    Abstract [en]

    Researchers have developed frameworks for diagnosis analysis called "Model-Based Diagnosis Systems". These systems are very general in scope, covering a wide range of malfunctions and identifying repair measures. This thesis is an effort to diagnose complex and lengthy static source code. Without executing the source code, discrepancies can only be identified by finding procedural dependencies.

    With respect to modern programming languages, many software bugs arise due to erroneous logical calculations or mishandling of data structures. Modern Integrated Development Environments (IDEs) like Visual Studio, JBuilder and Eclipse are strong enough to analyze and parse static text code to identify syntactical and type conversion errors. Some IDEs can automatically fix such errors or offer the developer several possible suggestions.

    In this thesis we have analyzed and extracted functional dependencies of source code. This extracted information can increase programmers' understanding of code bases that are extremely large or complex. Modeling this information into a model system reduces the time needed to debug the code in case of failure. This increases productivity in terms of software development and debugging skills as well. The main contribution of this thesis is the use of model-based diagnosis techniques on software functional dependency graphs and charts.

    Keywords: Model-Based Diagnosis Systems, Integrated Development Environments, Procedural Dependencies, Erroneous Calculations, Call Graphs, Directed Graph Markup Language.

  • 198. Azhar, Damir
    et al.
    Riddle, Patricia
    Mendes, Emilia
    Blekinge Institute of Technology, School of Computing.
    Mittas, Nikolaos
    Angelis, Lefteris
    Using ensembles for web effort estimation (2013). Conference paper (Refereed).
    Abstract [en]

    Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared by effort estimation in the general software estimation domain. A previous study in this domain has shown that ensembles of estimation techniques can be used to address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques will be similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques, two approaches were investigated: the first involved replicating the methodology used in the previous study, while the second approach used the Scott-Knott algorithm. Both approaches were applied to the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data.
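    The ensemble idea itself (several solo techniques each produce an estimate, and the ensemble combines them, for example by taking the median) can be sketched with scikit-learn regressors as below. The three regressors and the toy Tukutuku-style feature set are stand-ins, not the 90 solo techniques the study used.

    ```python
    # Sketch of an effort-estimation ensemble: several solo techniques each
    # produce an estimate and the ensemble returns their median. The three
    # regressors below are generic stand-ins for the study's solo techniques.

    from statistics import median
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Toy project data: [web pages, features, images] -> effort (person-hours)
    X = [[5, 2, 10], [20, 8, 40], [12, 5, 25], [30, 12, 60], [8, 3, 15]]
    y = [100, 420, 260, 640, 170]

    solos = [LinearRegression(),
             KNeighborsRegressor(n_neighbors=2),
             DecisionTreeRegressor(random_state=0)]
    for model in solos:
        model.fit(X, y)

    new_project = [[15, 6, 30]]
    estimates = [float(m.predict(new_project)[0]) for m in solos]
    print(estimates, "-> ensemble:", median(estimates))
    ```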

  • 199.
    Azhar, Muhammad Saad Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aslam, Ammad
    Blekinge Institute of Technology, School of Computing.
    Multiple Coordinated Information Visualization Techniques in Control Room Environment (2009). Independent thesis, Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

    Presenting large amounts of multivariate data is not a simple problem. When there are multiple correlated variables involved, it becomes difficult to comprehend the data using traditional means. Information visualization techniques provide an interactive way to present and analyze such data. This thesis was carried out at ABB Corporate Research, Västerås, Sweden. The use of Parallel Coordinates and Multiple Coordinated Views has been suggested to realize interactive reporting and trending of multivariate data for ABB's Network Manager SCADA system. A prototype was developed, and an empirical study was conducted to evaluate the suggested design and test it for usability from an actual industry perspective. With the help of this prototype and the evaluations carried out, we achieve stronger results regarding the effectiveness and efficiency of the visualization techniques used. The results confirm that such interfaces are more effective, efficient and intuitive for filtering and analyzing multivariate data.
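    A parallel-coordinates view of multivariate data can be produced directly with pandas and matplotlib, as sketched below. The SCADA-style column names and values are invented for illustration; this is not the ABB prototype.

    ```python
    # Sketch of a parallel-coordinates view of multivariate records: each
    # line is one record, each vertical axis one variable, colored by class.

    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    df = pd.DataFrame({
        "voltage_kV": [132, 131, 129, 134, 128],
        "load_MW":    [45, 52, 61, 40, 66],
        "temp_C":     [35, 41, 55, 33, 60],
        "status":     ["ok", "ok", "alarm", "ok", "alarm"],
    })

    parallel_coordinates(df, class_column="status", colormap="coolwarm")
    plt.title("Records across coordinated variable axes")
    plt.show()
    ```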

  • 200.
    Aziz, Yama
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computing Science.
    Exploring a keyword driven testing framework: a case study at Scania IT (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis is to investigate organizational quality assurance through the international testing standard ISO 29119. The focus will be on how an organization carries out testing processes and designs and implements test cases. Keyword driven testing is a test composition concept in ISO 29119 and suitable for automation. This thesis will answer how keyword driven testing can facilitate the development of maintainable test cases and support test automation in an agile organization.

    The methodology used was a qualitative case study including semi-structured interviews and focus groups with agile business units within Scania IT. Among the interview participants were developers, test engineers, scrum masters and a unit manager.

    The results describe testing practices carried out in several agile business units, maintainability issues with test automation, and general ideas of how test automation should be approached. Common issues with test automation were test cases failing due to changed test inputs, inexperience with test automation frameworks, and a lack of resources due to the project release cycle.

    This thesis concludes that keyword-driven testing has the potential to solve several maintainability issues with test cases breaking. However, the practicality and effectiveness of that potential remain unanswered. Moreover, successfully developing an automated keyword-driven testing framework requires integration with existing test automation tools and consideration of the agile organizational circumstances.
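    The core mechanism of keyword-driven testing, test cases maintained as data with a small runner dispatching each keyword to an implementation function, can be sketched as follows. The keywords and the shopping-cart example are invented; frameworks such as Robot Framework implement the same idea at scale.

    ```python
    # Minimal sketch of keyword-driven testing: test cases are data
    # (sequences of keywords plus arguments), and a runner maps each
    # keyword to an implementation function. Names are illustrative only.

    keywords = {}

    def keyword(name):
        def register(fn):
            keywords[name] = fn
            return fn
        return register

    @keyword("Open Session")
    def open_session(ctx, user):
        ctx["user"] = user

    @keyword("Add Item")
    def add_item(ctx, item):
        ctx.setdefault("cart", []).append(item)

    @keyword("Cart Should Contain")
    def cart_should_contain(ctx, item):
        assert item in ctx.get("cart", []), f"{item} not in cart"

    test_case = [                      # maintained as data, not code
        ("Open Session", "alice"),
        ("Add Item", "book"),
        ("Cart Should Contain", "book"),
    ]

    ctx = {}
    for name, *args in test_case:
        keywords[name](ctx, *args)     # dispatch keyword to implementation
    print("test passed")
    ```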
