  • 51.
    Ahmed, Rajib and Razzak, Mohammad Abdur
    Blekinge Institute of Technology, School of Computing.
    Knowledge Management in Distributed Agile Projects (2013). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Knowledge management (KM) is essential for success in Global Software Development (GSD), Distributed Software Development (DSD) and Global Software Engineering (GSE). Software organizations are managing knowledge in innovative ways to increase productivity, and one of the major objectives of KM is to improve productivity through effective knowledge sharing and transfer. Therefore, to maintain effective knowledge sharing in distributed agile projects, practitioners need to adopt different types of knowledge sharing techniques and strategies. Distributed projects introduce new challenges to KM, so practices that are used in agile teams become difficult to put into action in distributed development. Although informal communication is the key enabler for knowledge sharing, when an agile project is distributed, informal communication and knowledge sharing are challenged by the low communication bandwidth between distributed team members, as well as by social and cultural distance. In the work presented in this thesis, we give an overview of empirical studies of knowledge management in distributed agile projects. Based on the main theme of this study, we have categorized and reported our findings on major concepts that need empirical investigation. We have classified the main research theme in this thesis within two sub-themes:
      • RT1: Knowledge sharing activities in distributed agile projects.
      • RT2: Spatial knowledge sharing in a distributed agile project.
    The main contributions are:
      • C1: Empirical observations regarding knowledge sharing activities in distributed agile projects.
      • C2: Empirical observations regarding spatial knowledge sharing in a distributed agile project.
      • C3: Process improvement scope and guidelines for the studied project.

  • 52.
    Ahmed, Qutub Uddin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mujib, Saifullah Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Context Aware Reminder System: Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems (2014). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Context. A reminder system offers flexibility in daily life activities and helps people stay independent. It not only helps in remembering daily activities, but also serves, to a great extent, people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems such as disabilities or mild dementia. Traditional reminders, which are based on a set of predefined activities, are not enough to address the necessity in a wider context. To make the reminder more flexible, the user's current activities or contexts need to be considered. To recognize the user's current activity, different types of sensors can be used; such sensors are available in Smartphones, which can assist in building a more contextual reminder system.

    Objectives. To make a reminder context-based, it is important to identify the context, and the user's activities in a particular moment need to be recognized. With this notion in mind, this research aims to understand the relevant contexts and activities, identify an effective way to recognize a user's three different activities (drinking, walking and jogging) using Smartphone sensors (accelerometer and gyroscope), and propose a model that uses the properties of the activity recognition.

    Methods. This research combined a survey and interviews with an exploratory Smartphone sensor experiment to recognize the user's activity. An online survey was conducted with 29 participants, and interviews were held in cooperation with the Karlskrona Municipality; four elderly people participated. For the experiment, data for three different user activities were collected using Smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA.

    Results. The survey and interviews helped in understanding which activities of daily living should be considered in designing the reminder system, and how and when it should be used. For instance, most of the survey participants already use some sort of reminder system, most of them use a Smartphone, and one of the most important tasks they forget is to take their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three activities: for walking and jogging the pattern is distinct, while for drinking the pattern is complex and can overlap with other activities or become noisy.

    Conclusions. The survey, interviews and background study provided evidence that a reminder system based on users' activity is essential in daily life. The large number of Smartphone users motivated this research to use Smartphone sensors to identify users' activity, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to the recorded Smartphone sensor (accelerometer and gyroscope) data, and the approach achieved 99% accuracy on the experimental data. The study concluded by proposing a model that uses the properties of the identified activities and by developing a prototype of a reminder system. Preliminary tests were performed on the model, but there is a need for further empirical validation and verification.
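
    To make the kind of analysis described above concrete, here is a minimal sketch (not the thesis's actual code) that computes the orientation-independent acceleration magnitude over a window of samples and applies simple thresholds to separate the three activities. The sample values and threshold constants are hypothetical assumptions.

```python
import math

# Hypothetical accelerometer samples (x, y, z in m/s^2); real data would
# come from the smartphone's accelerometer log.
samples = [(0.1, 9.8, 0.3), (2.5, 11.2, 1.9), (5.8, 14.0, 4.2)]

def magnitude(x, y, z):
    """Orientation-independent signal strength of one sample."""
    return math.sqrt(x * x + y * y + z * z)

def classify(window, walk_threshold=11.0, jog_threshold=14.0):
    """Label a window of samples by its mean magnitude.

    The thresholds are illustrative assumptions, not values from the thesis.
    """
    mean = sum(magnitude(*s) for s in window) / len(window)
    if mean >= jog_threshold:
        return "jogging"
    if mean >= walk_threshold:
        return "walking"
    return "other (e.g. drinking)"

print(classify(samples))  # -> "walking" for the sample data above
```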

  • 53.
    Ahmed, Syed Rizwan
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Secure Software Development: Identification of Security Activities and Their Integration in Software Development Lifecycle (2007). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    Today’s software is more vulnerable to attacks due to increased complexity, connectivity and extensibility. Securing software is usually considered a post-development activity, and not much importance is given to it during development. However, the losses that organizations have incurred over the years due to security flaws in software have led researchers to find better ways of securing software. In the light of research done by many researchers, this thesis presents how software can be secured by considering security in different phases of the software development life cycle. A number of security activities that are needed to build secure software have been identified, and it is shown how these security activities relate to the development activities of the software development lifecycle.

  • 54.
    Ahmed, Tanveer
    et al.
    Blekinge Institute of Technology, School of Computing.
    Raju, Madhu Sudhana
    Blekinge Institute of Technology, School of Computing.
    Integrating Exploratory Testing In Software Testing Life Cycle, A Controlled Experiment (2012). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Context. Software testing is one of the crucial phases in the software development life cycle (SDLC). Among the different manual testing methods, exploratory testing (ET) uses no predefined test cases to detect defects. Objectives. The main objective of this study is to test the effectiveness of ET in detecting defects at different software test levels. The objective is achieved by formulating hypotheses, which are later tested for acceptance or rejection. Methods. The methods used in this thesis are a literature review and an experiment. The literature review was conducted to get in-depth knowledge on the topic of ET and to collect data relevant to ET. The experiment tested hypotheses specific to three different testing levels: unit, integration and system. Results. The experimental results showed that using ET did not find all the seeded defects at the three levels of unit, integration and system testing. The results were analyzed using statistical tests and interpreted with the help of bar graphs. Conclusions. We conclude that more research is required to generalize the benefits of ET at different test levels. In particular, a qualitative study to highlight factors responsible for the success and failure of ET is desirable. We also encourage a replication of this experiment with subjects having sound technical and domain knowledge.

  • 55.
    Aihara, Diogo Satoru
    Blekinge Institute of Technology, School of Computing.
    Study About the Relationship Between the Model-View-Controller Pattern and Usability (2009). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Usability is one of the most important quality attributes in the new generation of software applications and computational devices. Model-View-Controller, on the other hand, is a well-known software architectural pattern, widely used in its original form or its variations. The relationship between usability and the usage of Model-View-Controller, however, is still unknown. This thesis contributes to this research question by providing the outcomes of a case study in which a prototype was developed in two different versions: one using Model-View-Controller and another using a widely known object-oriented guideline, the GRASP patterns. Both prototypes were developed based on a non-functional prototype with a good level of usability. The prototypes were then compared based on their design and on the usability heuristics proposed by Nielsen. From this study, we discovered that the usage of MVC brings more advantages and disadvantages to the usability of the system than those found in the literature review. In general, the relationship between MVC and usability is beneficial, easing the implementation of usability features such as validation of input data, evolutionary consistency, multiple views, informing the user of the results of actions, and skipping steps in a process.
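
    For readers unfamiliar with the pattern under study, the following is a minimal, hypothetical Python sketch of the Model-View-Controller separation, including input validation in the controller, one of the usability features the abstract mentions. All class and method names are illustrative, not taken from the thesis prototypes.

```python
class Model:
    """Holds application data, independent of presentation."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class View:
    """Renders model state; multiple views can observe one model."""
    def render(self, model):
        print("Items:", ", ".join(model.items))

class Controller:
    """Mediates user input; validating here keeps bad data out of the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_input(self, text):
        if not text.strip():          # input validation, a usability aid
            print("Error: empty input")
            return
        self.model.add(text.strip())
        self.view.render(self.model)

ctrl = Controller(Model(), View())
ctrl.handle_input("first entry")
```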

  • 56.
    Sablis, Aivars
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Benefits of transactive memory systems in large-scale development (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits, Student thesis.
    Abstract [en]

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work.

    Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether well-developed TMS leads to performance benefits as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on related TMS literature we also suggest to focus on task allocation strategies, task characteristics and management decisions regarding the project structure, team structure and team composition.

    Methods. We use the data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews to measure transactive memory systems and their role in determining team performance. We measure teams’ TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation.

    Results. Data from two companies and 9 teams were analyzed, and a positive influence of well-developed TMS on team performance was found. We found that in large-scale software development, teams need not only a well-developed internal TMS but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects.

    Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization. 

  • 57.
    Akhlaq, Usman
    et al.
    Blekinge Institute of Technology, School of Computing.
    Yousaf, Muhammad Usman
    Blekinge Institute of Technology, School of Computing.
    Impact of Software Comprehension in Software Maintenance and Evolution (2010). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    The ability to change is essential for a software system to remain in the market longer. Change is implemented through maintenance, and successful software maintenance gives birth to a new software release that is a refined form of the previous one. This phenomenon is known as the evolution of the software. To move software to a better form, maintainers have to become familiar with particular aspects of the software, i.e. the source code and documentation. Due to the poor quality of documentation, maintainers often have to rely on the source code, so a thorough understanding of the source code is necessary for effective change implementation. This study explores the code comprehension problems discussed in the literature and prioritizes them according to the severity level assigned by maintenance personnel in industry. Along with prioritizing the problems, the study also presents the methodologies that maintenance personnel suggest for improving code comprehension. Considering these suggestions in development might help shorten maintenance and evolution time.

  • 58.
    Akhter, Adeel
    et al.
    Blekinge Institute of Technology, School of Computing.
    Azhar, Hassan
    Blekinge Institute of Technology, School of Computing.
    Statistical Debugging of Programs written in Dynamic Programming Language: RUBY (2010). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Debugging is an important and critical phase of the software development process, and a serious and tough practice in functional test-driven development. Software vendors encourage their programmers to practice test-driven development during the initial development phases to capture bug traces and the code coverage affected by diagnosed bugs. Application source code with fewer threats of bugs or faulty executions is assumed to be highly efficient and stable, especially when real-time software products are under consideration. Because the development of software projects relies on a great number of users and testers, an effective fault localization technique is required: one that can highlight the most critical areas of the software system at the code as well as the modular level, so that a debugging algorithm can be used to debug the application source code. Nowadays many software systems, complex or simple, are connected to open bug repositories to localize bugs. Any inconsistency or imperfection in an early development phase of a software product results in a less efficient system and lower reliability. Statistical debugging of program source code for visualization of faults is an important and efficient way to select and rank suspicious lines of code. This research provides guidelines for practicing statistical debugging of programs coded in the Ruby programming language, and presents the statistical debugging techniques available for dynamic programming languages. First, statistical debugging techniques with the different predicate-based approaches followed in previous work in the subject area were thoroughly surveyed. Second, a new statistical debugging process for programs coded in Ruby is introduced, based on generating dynamic predicates. Results were analyzed by implementing multiple programs written in Ruby with different complexity levels. The analysis of the experimentation performed on the candidate programs shows that SOBER is more efficient and accurate in bug identification than the Cause Isolation Scheme. It is concluded that, despite extensive research in the field of statistical debugging and fault localization, it is not possible to identify the majority of bugs. Moreover, SOBER and the Cause Isolation Scheme are found to be the two most mature and effective statistical debugging algorithms for bug identification within software source code.
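
    As a rough illustration of predicate-based statistical debugging of the kind the thesis evaluates, the sketch below ranks predicates by the difference between their truth rates in failing and passing runs. This is a generic suspiciousness score under assumed counts; the actual ranking models of SOBER and the Cause Isolation Scheme are more elaborate.

```python
# Each predicate maps to counts of how often it was observed true
# in failing and in passing runs. All counts are hypothetical.
observations = {
    "x > len(buf)": {"fail_true": 18, "fail_runs": 20, "pass_true": 1, "pass_runs": 80},
    "ptr is None":  {"fail_true": 3,  "fail_runs": 20, "pass_true": 4, "pass_runs": 80},
}

def suspiciousness(o):
    """Difference between failure and success truth rates for a predicate.

    A generic score in the spirit of predicate-based statistical debugging;
    SOBER's actual statistical model is more sophisticated.
    """
    fail_rate = o["fail_true"] / o["fail_runs"]
    pass_rate = o["pass_true"] / o["pass_runs"]
    return fail_rate - pass_rate

# Print predicates from most to least suspicious.
for pred, o in sorted(observations.items(),
                      key=lambda kv: suspiciousness(kv[1]), reverse=True):
    print(f"{pred}: {suspiciousness(o):.2f}")
```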

  • 59. Akkermans, Hans
    et al.
    Gustavsson, Rune
    Ygge, Fredrik
    An Integrated Structured Analysis Approach to Intelligent Agent Communication (1998). Report (Other academic).
    Abstract [en]

    Intelligent multi-agent systems offer promising approaches for knowledge-intensive distributed applications. Now that such systems are being applied on a wider industrial scale, there is a practical need for structured analysis and design methods, similar to those that exist for more conventional information and knowledge systems; these are still lacking for intelligent agent software. In this paper, we describe how the process of agent communication specification can be carried out through a structured analysis approach. The structured analysis approach we propose is an integrated extension of the CommonKADS methodology, a widely used standard for knowledge analysis and systems development. Our approach is based on and illustrated by a large-scale multi-agent application for distributed energy load management in industries and households, called Homebots, which is discussed as an extensive industrial case study.

  • 60. Akkermans, Hans
    et al.
    Gustavsson, Rune
    Ygge, Fredrik
    Pragmatics of Agent Communication (1998). Report (Other academic).
    Abstract [en]

    The process of agent communication modeling has not yet received much attention in the knowledge systems area. Conventional knowledge systems are rather simple with respect to their communication structure: often it is a straightforward question-and-answer sequence between system and end user. However, this is different in recent intelligent multi-agent systems. Therefore, agent communication aspects are now in need of a much more advanced treatment in knowledge management, acquisition and modeling. In general, a much better integration between the respective achievements of multi-agent and knowledge-based systems modeling is an important research goal. In this paper, we describe how agent communications can be specified as an extension of well-known knowledge modeling techniques. The emphasis is on showing how a structured process of communication requirements analysis proceeds, based on existing results from agent communication languages. The guidelines proposed are illustrated by and based on a large-scale industrial multi-agent application for distributed energy load management in industries and households, called Homebots. Homebots enable cost savings in energy consumption by coordinating their actions through an auction mechanism.

  • 61. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    Homebots: Intelligent Decentralized Services for Energy Management (1996). Report (Other academic).
    Abstract [en]

    The deregulation of the European energy market, combined with emerging advanced capabilities of information technology, provides strategic opportunities for new knowledge-oriented services on the power grid. HOMEBOTS is the name we have coined for one of these innovative services: decentralized power load management at the customer side, automatically carried out by a ‘society’ of interactive household, industrial and utility equipment. They act as independent intelligent agents that communicate and negotiate in a computational market economy. The knowledge and competence aspects of this application are discussed, using an improved version of task analysis according to the COMMONKADS knowledge methodology. Illustrated by simulation results, we indicate how customer knowledge can be mobilized to achieve joint goals of cost and energy savings. General implications for knowledge creation and its management are discussed.
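
    As a loose illustration of the computational-market idea, the sketch below allocates a limited power budget to devices in bid order. The bids, budget and greedy allocation rule are assumptions made for illustration; the actual HOMEBOTS negotiation mechanism is more sophisticated.

```python
# Devices bid (price per kW, demand in kW) for a limited power budget.
# All figures are hypothetical.
bids = {"heater": (0.30, 3.0), "boiler": (0.25, 2.0), "sauna": (0.10, 5.0)}
budget_kw = 6.0

# Serve the highest bidders first until the budget is exhausted,
# a crude stand-in for the market-based coordination in HOMEBOTS.
allocation = {}
for device, (price, demand) in sorted(bids.items(),
                                      key=lambda kv: kv[1][0], reverse=True):
    granted = min(demand, budget_kw)
    allocation[device] = granted
    budget_kw -= granted

print(allocation)  # {'heater': 3.0, 'boiler': 2.0, 'sauna': 1.0}
```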

  • 62. Akkermans, Hans
    et al.
    Ygge, Fredrik
    Gustavsson, Rune
    HOMEBOTS: Intelligent Decentralized Services for Energy Management (1996). Conference paper (Refereed).
  • 63.
    Akkineni, Srinivasu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    The impact of RE process factors and organizational factors during alignment between RE and V&V: Systematic Literature Review and Survey (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis.
    Abstract [en]

    Context: Requirements engineering (RE) and verification and validation (V&V) need to be integrated to assure successful development of a software project. Activating both competences in the early stages of the project supports products in meeting customer expectations regarding quality and functionality; this quality can be achieved by aligning RE and V&V. Organizations follow different practices, concerning for example requirements, verification, validation, control and tools, for alignment and to address the challenges faced during alignment between RE and V&V. However, studies are needed to understand the alignment practices, challenges and factors that can enable successful alignment between RE and V&V.

    Objectives: In this study, an exploratory investigation is carried out to determine the impact of RE process factors and organizational factors on the alignment between RE and V&V. The main objectives of this study are:

    1. To find the list of RE practices that facilitate alignment between RE and V&V.
    2. To categorize RE practices with respect to their requirement phases.
    3. To find the list of RE process and organizational factors that influence alignment between RE and V&V besides their impact.
    4. To identify the challenges that are faced during the alignment between RE and V&V.
    5. To obtain list of challenges that are addressed by RE practices during the alignment between RE and V&V.

    Methods: In this study a Systematic Literature Review (SLR) was conducted using a snowballing procedure to identify relevant information about RE practices, challenges, RE process factors and organizational factors. The studies were retrieved from the Engineering Village database, and rigor and relevance analysis was performed to assess their quality. Further, a questionnaire intended for an industrial survey was prepared from the gathered literature and distributed to practitioners from the software industry in order to collect empirical information for this study. Thereafter, the data obtained from the industrial survey were analyzed using statistical analysis and a chi-square significance test, of the kind sketched below.
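
    A minimal sketch of such a test, using scipy's chi2_contingency on a hypothetical contingency table (the counts are illustrative, not data from the thesis):

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: organization size vs. perceived impact
# of a factor on RE/V&V alignment.
table = [
    [12, 5],   # small organizations: negative / positive impact
    [9, 22],   # large organizations: negative / positive impact
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
if p < 0.05:
    print("Reject independence: impact appears related to organization size.")
```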

    Results: 20 studies relevant to this study were identified through the SLR. From these studies, lists of RE process factors, organizational factors, challenges and RE practices during alignment between RE and V&V were compiled. An industrial survey based on the obtained literature received 48 responses. The respondents confirmed that RE process factors and organizational factors have an impact on the alignment between RE and V&V. Moreover, this study identified additional RE process factors and organizational factors, along with their impact. Another contribution is the identification of challenges from the literature that are not addressed by RE practices. Additionally, the categorization of RE practices with respect to their requirement phases was validated.

    Conclusions: The obtained results will help practitioners capture more insight into the alignment between RE and V&V. This study identified the impact of RE process factors and organizational factors on the alignment between RE and V&V, along with the importance of the challenges faced during that alignment, and it highlighted the challenges from the literature that RE practices leave unaddressed. Respondents of the survey believe that many RE process and organizational factors have a negative impact on the alignment between RE and V&V depending on the size of the organization. In addition, the results for applying RE practices at different requirement phases were validated through the survey. Practitioners can identify the benefits from this research, and researchers can extend this study to the remaining alignment practices.

  • 64.
    Al Tayr, Hydar
    et al.
    KTH, School of Information and Communication Technology (ICT).
    Al Hakim, Mahmud
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Mobile Ajax (2008). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis.
    Abstract [en]

    This report describes a master thesis performed at SICS (Swedish Institute of Computer Science) and KTH (The Royal Institute of Technology) in Stockholm.

    Ajax stands for "Asynchronous JavaScript and XML". It is not a programming language but a suite of technologies used to develop web applications with more interactivity than traditional web pages.

    Ajax applications can be adapted for mobile and constrained devices. This has been called Mobile Ajax. While the technique is the same, Mobile Ajax generally is considered to be a special case of Ajax, because it deals with problems specific to the mobile market.

    The purpose of this thesis has been to examine the possibilities and disadvantages of Mobile Ajax from the developer and user perspectives. In addition, we compare Mobile Ajax with Java Micro Edition (Java ME) and Flash Lite.

    This has been done through literature studies and the development of a database-backed chat client (MAIM, Mobile Ajax Instant Messenger). The application sends and receives direct messages in real time between different mobile devices. The MAIM application has then been compared with our own Java ME and Flash Lite chat clients.

    We have tested all three applications with different models of mobile devices and on different web browsers. The results have shown that Mobile Ajax makes possible the creation of sophisticated and dynamic mobile web applications and is better than the classic web application model, but this requires that the mobile device has a modern and compatible web browser such as Opera Mobile.

  • 65.
    Alahyari, Hiva
    et al.
    Chalmers; Göteborgs Universitet, SWE.
    Berntsson Svensson, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A study of value in agile software development organizations (2017). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 125, pp. 271-288. Article in journal (Refereed).
    Abstract [en]

    The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.

  • 66.
    Alam, Payam Norouzi
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Agile Process Recommendations for a Market-driven Company (2003). Independent thesis, Advanced level (degree of Master (One Year)), Student thesis.
    Abstract [en]

    In this master thesis, problems in a small market-driven software development company are discussed. Problems such as fast changes within the company are a result of the constantly changing market, and these fast changes must be managed with a tailored process covering needs like short time-to-market. Misunderstandings when managing ideas from marketing, and challenging issues such as communication gaps between marketing staff and developers, are also discussed. These problem areas were identified by the case study through interviews with selected staff; the problems stem from fast changes and a lack of processes and structures. This thesis recommends several points influenced by agile software development with Scrum and XP, chosen to fit the problem areas identified by the interviews. They will work as a starting point for the company to improve the situation and to use XP and Scrum for further improvements.

  • 67.
    Albihn, Amalia
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Husveterinären: Responsiv design med WordPress (2017). Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits, Student thesis.
    Abstract [en]

    The goal of this project has been to develop a responsive website for Husveterinären AB. Among the requirements were that the site should be built in WordPress, make it possible for customers to follow updates from the veterinarian's Facebook page, and include a booking system. The project was carried out according to the mobile-first method, which means designing for a small screen first and then scaling up. Media queries and a flexible layout based on relative units were used for the responsive design. To evaluate the usability of the website, user tests were conducted. The result is a website that can be used on devices with large as well as small screens. The user tests showed that it was easy to navigate between the pages and to use both the booking system and the contact form regardless of screen size. With WordPress it is, moreover, possible for the veterinarian to maintain the website without needing to know how to write code.

  • 68.
    Al-Daajeh, Saleh
    Blekinge Institute of Technology, School of Computing.
    Balancing Dependability Quality Attributes for Increased Embedded Systems Dependability (2009). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Embedded systems are used in many critical applications where a failure can have serious consequences. Achieving a high level of dependability is therefore an ultimate goal, but in order to achieve it we need to understand the interrelationships between the different dependability quality attributes and other embedded system quality attributes. This research study provides indicators of the relationship between the dependability quality attributes and other quality attributes of embedded systems by identifying the impact of architectural tactics as candidate solutions for constructing dependable embedded systems.

  • 69. Alegroth, Emil
    et al.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Kolstrom, Pirjo
    Maintenance of automated test suites in industry: An empirical study on Visual GUI Testing (2016). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 73, pp. 66-80. Article in journal (Refereed).
    Abstract [en]

    Context: Verification and validation (V&V) activities make up 20-50% of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. (C) 2016 Elsevier B.V. All rights reserved.
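
    The paper's cost model is not reproduced in this listing, but the break-even idea it describes can be sketched as follows; all effort figures are illustrative assumptions, not the paper's data.

```python
def weeks_to_roi(dev_cost, maint_per_week, manual_per_week):
    """Weeks until cumulative automation cost drops below manual-testing cost.

    dev_cost:        one-off cost of developing the automated suite (hours)
    maint_per_week:  recurring maintenance cost of the suite (hours/week)
    manual_per_week: manual testing effort the suite replaces (hours/week)
    """
    saved_per_week = manual_per_week - maint_per_week
    if saved_per_week <= 0:
        return None  # automation never pays off
    return dev_cost / saved_per_week

print(weeks_to_roi(dev_cost=200, maint_per_week=4, manual_per_week=20))  # 12.5
```

    Note that lowering manual_per_week in this sketch lengthens the payback time, mirroring the paper's conclusion that companies spending little on manual testing wait longer for positive ROI.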

  • 70.
    Alenkvist, Dennis
    Örebro University, School of Science and Technology.
    Gränssnitt till panel-PC med touch för inställningar till inmatningsenheten i en konverteringsmaskin för wellpapp (2013). Independent thesis, Basic level (professional degree), 10 credits / 15 HE credits, Student thesis.
    Abstract [en]

    This project was made to create a modern HMI for a touch screen in the form of a panel PC for industrial environments. The application was supposed to display data, such as alarm lists and machine settings, from the feed unit of a machine for conversion of corrugated cardboard. The project consisted of two main parts: creating a graphical interface to display the data, and establishing communication with the machine's HMI to exchange the data. The aim of this work was to replace the current display, which is old-fashioned, an annoyance to customers, and has limited potential for development.

  • 71.
    Alesand, Alexander
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Emulating 3G Network Characteristics on WiFi Networks (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis.
    Abstract [en]

    Mobile applications should work regardless of which type of wireless interface is used, and should be able to conceal unstable connections from the user to improve user experience. Therefore, network testing is important when developing mobile applications, but it is a challenge to reproduce network conditions when using real cellular networks since the test engineer has no control over the quality of the cellular network. Existing software tools can restrict bandwidth and add latency to the connection, but these tools do not accurately emulate cellular networks.

    This thesis proposes a system where it is possible to shape the network traffic for connected devices to mimic the network patterns of a real cellular connection when running on a WiFi connection. The design presented in this thesis is intended for testing mobile applications under diverse 3G connection parameters, such as latency, bandwidth and other characteristics.

    This thesis was conducted at Spotify, a company that provides a music streaming service and is a heavy consumer of network data traffic. The 3G emulator was evaluated using the Spotify Android application by measuring the correlation between packet traces from a real 3G connection and the 3G emulator. This correlation was compared to the correlation between packet traces from a real 3G connection and the current network emulator at Spotify. The evaluation shows that the proposed 3G emulator outperforms the current network emulator when performing tests on the Spotify application for Android. By using this emulator, we expect network testing to become more effective, as any 3G condition can be tested with repeatable results.
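
    One plausible building block for such an emulator, assuming a Linux host, is the tc/netem queueing discipline; the sketch below imposes static 3G-like delay, jitter and rate on a WiFi interface. The parameter values are assumptions, and the thesis's emulator goes further by reproducing measured 3G traces rather than fixed settings.

```python
import subprocess

def emulate_3g(interface="wlan0", delay_ms=120, jitter_ms=40, rate_kbit=384):
    """Impose 3G-like latency and bandwidth on a WiFi interface via netem.

    Requires root and an iproute2/kernel with netem rate support. The values
    are illustrative; a real emulator would vary them over time to follow
    measured 3G traces, as the system described in the thesis does.
    """
    subprocess.run(
        ["tc", "qdisc", "add", "dev", interface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "rate", f"{rate_kbit}kbit"],
        check=True,
    )

def reset(interface="wlan0"):
    """Remove the shaping qdisc again."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"],
                   check=True)
```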

  • 72.
    Ali, Asif
    et al.
    Linköping University, Department of Computer and Information Science.
    Ramzan, Faheem
    Linköping University, Department of Computer and Information Science.
    Analysis and Monitoring of Team Collaboration in Emergency Response Training supported by a Web Based Information Management System (2009). Independent thesis, Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits, Student thesis.
    Abstract [en]

    Our objective in this thesis work is to analyze and manage the log files which are generated after a number of experiment series on different groups using the C3Fire simulation environment. This includes analyzing and extracting information from the log files and then maintaining this information in a database, presented through a web interface built with the ICEfaces Ajax framework for Java. All sequences and information related to the tasks performed by a team in a group are organized in session log files. The work is divided into different steps: the first step is to analyze and extract data from the log files and arrange it properly in several tables in a database; a MySQL database is used to store the information. The web interface of the log file management system is implemented using the ICEfaces Ajax framework and is based on the statistics of the log files generated from the C3Fire environment. Users are able to add or remove log files, and can view or edit the details of each session log file in the database through the web interface. Different events can be generated and logged for the session information.

    C3Fire is an environment that supports training and research in team collaboration. The environment is mainly used in command, control and communication research, and in training of team decision making. Many humanitarian relief operations do their work without prior practice; when a disaster occurs, they cannot perform their jobs effectively. Effective and efficient relief operations are a humanitarian need, and it is not enough to move teams to the disaster site at the right time: communication and coordination among the team members are the major factors in making the work effective and well organized. C3Fire is a simulation system which provides training for team members to handle such disaster events, making their work more proficient through effective coordination.

  • 73.
    Ali, Israr
    et al.
    Blekinge Institute of Technology, School of Computing.
    Shah, Syed Shahab Ali
    Blekinge Institute of Technology, School of Computing.
    Usability Requirements for GIS Application: Comparative Study of Google Maps on PC and Smartphone (2011). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Context: The Smartphone is gaining popularity due to its mobility, computing capacity and energy efficiency. Email, text messaging, navigation and visualization of geo-spatial data through browsers are common features of the Smartphone. Geo-spatial data is collected in computing format and made publicly available, so the need for usability evaluation becomes important due to its increasing demand. Identifying usability requirements is as important as identifying conventional functional requirements in software engineering; non-functional usability requirements are objective and testable using measurable metrics. Objectives: Usability evaluation plays an important role in the interaction design process as well as in identifying user needs and requirements. Comparative usability requirements are identified for the evaluation of a geographical information system (Google Maps) on a personal computer (laptop) and a smartphone (iPhone). Methods: The ISO 9241-11 guide on usability is used as an input model for identifying and specifying the usability level of Google Maps on both devices. The authors set target values for the usability requirements of the tasks and questionnaire on each device, such as the acceptable level of task completion, the rate of efficiency, and participants' agreement with each measure, following ISO 9241-11. The usability test was conducted using the co-discovery technique on six pairs of graduate students. Interviews were conducted to validate the test results, and questionnaires were distributed to get feedback from the participants. Results: The non-functional usability requirements were tested using five metrics measuring user performance and satisfaction. In the usability test, the acceptable level of task completion and the rate of efficiency were met on the personal computer but not on the iPhone. In the questionnaire, neither device met participants' agreement with each measure; only effectiveness was matched on the personal computer. Usability test, interview and questionnaire feedback are included in the results. Conclusions: The authors provide suggestions based on the test results and identify usability issues for the improvement of Google Maps on the personal computer and the iPhone.

  • 74.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Is effectiveness sufficient to choose an intervention?: Considering resource use in empirical software engineering (2016). In: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2016, Ciudad Real, Spain, September 8-9, 2016, article 54. Conference paper (Refereed).
  • 75.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Edison, Henry
    Blekinge Institute of Technology, School of Computing.
    Towards Innovation Measurement in Software Industry (2010). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    Context: In today's highly competitive business environment, with shortened product and technology life-cycles, it is critical for the software industry to continuously innovate. To help an organisation achieve this goal, a better understanding and control of the activities and determinants of innovation is required. This can be achieved through an innovation measurement initiative which assesses innovation capability, output and performance. Objective: This study explores definitions of innovation, innovation measurement frameworks, key elements of innovation, and metrics that have been proposed in the literature and used in industry. The degree of empirical validation and the context of the studies were also investigated, as were the perception of innovation, its importance, its challenges, and the state of practice of innovation measurement in the software industry. Methods: In this study, a systematic literature review, followed by an online questionnaire and face-to-face interviews, was conducted. The systematic review used seven electronic databases: Compendex, Inspec, IEEE Xplore, ACM Digital Library, Business Source Premier, Science Direct and Scopus. Studies were subjected to preliminary, basic and advanced criteria to judge their relevance. The online questionnaire targeted software industry practitioners with different roles and firm sizes; a total of 94 completed and usable responses from 68 unique firms were collected. Seven face-to-face semi-structured interviews were conducted with four industry practitioners and three academics. Results: Based on the findings of the literature review, interviews and questionnaire, a comprehensive definition of innovation was identified which may be used in the software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were aggregated and categorised, and a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through the interviews. Conclusions: The importance of innovation measurement is well recognised in both academia and industry; however, innovation measurement is not a common practice in industry. Major reasons are the lack of available metrics and of data collection mechanisms to measure innovation, and the organisations which do measure innovation use only a few metrics that do not cover the entire spectrum of innovation. This is partly because of the lack of a consistent definition of innovation in industry. Moreover, there is a lack of empirical validation of the metrics and determinants of innovation; although there is some static validation, full-scale industry trials are currently missing. For the software industry, a unique challenge is the development of alternative measures, since some of the existing metrics are inapplicable in this context. The conceptual model constructed in this study is one step towards identifying the measurable key aspects of innovation, to understand the innovation capability and performance of software firms.

  • 76.
    Ali, Nauman Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    A consolidated process for software process simulation: State of the Art and Industry Experience (2012). Conference paper (Refereed).
    Abstract [en]

    Software process simulation is a complex task, and in order to conduct a simulation project practitioners require support through a process for software process simulation modelling (SPSM), including what steps to take and what guidelines to follow in each step. This paper provides a literature-based consolidated process for SPSM in which the steps and the guidelines for each step are identified through a review of the literature and complemented by experience from using these recommendations in action research at a large telecommunication vendor. We found five simulation processes in the SPSM literature, resulting in a seven-step consolidated process. The consolidated process was successfully applied at the studied company, and the experiences of doing so are reported.

  • 77.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development (2016). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, pp. 213-227. Article in journal (Refereed).
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and have taken no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information and communication related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was used for a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners’ perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature and impact on the improvement on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information flow related issues.

  • 78.
    Ali, Nauman Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Mäntylä, Mika
    Testing highly complex system of systems: An industrial case study (2012). Conference paper (Refereed).
    Abstract [en]

    Systems of systems (SoS) are highly complex and are integrated on multiple levels (unit, component, system, system of systems). Many of the characteristics of SoS (such as operational and managerial independence, integration of system into system of systems, SoS comprised of complex systems) make their development and testing challenging. Contribution: This paper provides an understanding of SoS testing in large-scale industry settings with respect to challenges and how to address them. Method: The research method used is case study research. As data collection methods we used interviews, documentation, and fault slippage data. Results: We identified challenges related to SoS with respect to fault slippage, test turn-around time, and test maintainability. We also classified the testing challenges to general testing challenges, challenges amplified by SoS, and challenges that are SoS specific. Interestingly, the interviewees agreed on the challenges, even though we sampled them with diversity in mind, which meant that the number of interviews conducted was sufficient to answer our research questions. We also identified solution proposals to the challenges that were categorized under four classes of developer quality assurance, function test, testing in all levels, and requirements engineering and communication. Conclusion: We conclude that although over half of the challenges we identified can be categorized as general testing challenges still SoS systems have their unique and amplified challenges stemming from SoS characteristics. Furthermore, it was found that interviews and fault slippage data indicated that different areas in the software process should be improved, which indicates that using only one of these methods would have led to an incomplete picture of the challenges in the case company.

  • 79.
    Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating strategies for study selection in systematic literature studies (2014). In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, 2014, article 45. Conference paper (Refereed).
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population, and there is a homogeneity of results and quality of the articles.

  • 80.
    Alipour, Philip Baback
    et al.
    Blekinge Institute of Technology, School of Computing.
    Ali, Muhammad
    Blekinge Institute of Technology, School of Computing.
    An Introduction and Evaluation of a Lossless Fuzzy Binary AND/OR Compressor (2010). Independent thesis, Advanced level (degree of Master (Two Years)), Student thesis.
    Abstract [en]

    We report a new lossless data compression algorithm (LDC) for implementing predictably fixed compression values. The fuzzy binary AND/OR algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not provide techniques for maximum LDCs generating fixed values of Cr >= 2:1. The current model, however, is evaluated to serve multidimensional LDCs with fixed value generation, in contrast to the popular methods used in probabilistic LDCs, such as Shannon entropy. The entropy introduced here is ‘fuzzy binary’, in a 4D hypercube bit-flag model, with a product value of at least 50% compression. We have implemented the compression and simulated the decompression phase for lossless versions of the FBAR logic. We further compared our algorithm with the results obtained by other compressors. Our statistical test shows that the presented algorithm mutably and significantly competes with other LDC algorithms on both temporal and spatial factors of compression. The current algorithm is a stepping stone to quantum information models solving complex negative entropies, giving double-efficient LDCs with more than 87.5% space savings.
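
    For contrast with the probabilistic baseline the abstract mentions, the sketch below computes the empirical Shannon-entropy bound and the corresponding best-case compression ratio for a byte string. It illustrates the Shannon baseline only and is not an implementation of FBAR.

```python
import math
from collections import Counter

def shannon_entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

data = b"abababababababab"
h = shannon_entropy_bits_per_byte(data)
# An entropy coder cannot beat h bits per byte on average under this
# source model, so the best-case compression ratio is 8 / h.
print(f"entropy: {h:.3f} bits/byte, best ratio ~= {8 / h:.1f}:1")  # 8.0:1 here
```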

  • 81.
    Alkrot, Magnus
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    React vs Angular: Slaget om användarupplevelsen (2016). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis.
    Abstract [en]

    Lately, various programming frameworks have been developed for building web applications. These frameworks focus on increasing the user experience through performance improvements such as faster render times and response times. One of these frameworks is React, which has introduced a completely new architectural pattern for managing both the state and the data flow of an application. React also offers support for native application development and makes server-side rendering possible, something that is difficult to accomplish with an application developed with Angular 1.5, which is used by the company Dewire today. The aim of this thesis was to compare React with an existing Angular project, in order to determine whether React could be a potential replacement for Angular. To gain knowledge about the subject, a theoretical study of web-based sources has been made, while the practical part has been to rebuild a web application with React together with the Flux architecture, based on a view from the Angular project. The implementation process was repeated until the view was completed and a desired data flow, as in the Angular application, was reached. The resulting React application was then compared with the Angular application developed by the company, and the outcome of the comparison showed that React performed better than Angular in all tests. In conclusion, due to the timeframe of the project, only the most important parts of the Angular project were implemented in order to carry out the measurements that were of interest to the company. By recreating most of the functionality, or the entire Angular application, more interesting comparisons could have been made.

  • 82.
    Allahyari, Hiva
    Blekinge Institute of Technology, School of Computing.
    On the concept of Understandability as a Property of Data mining Quality2010Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    This paper reviews methods for evaluating and analyzing the comprehensibility and understandability of models generated from data in the context of data mining and knowledge discovery. The motivation for this study is the fact that the majority of previous work has focused on increasing the accuracy of models, ignoring user-oriented properties such as comprehensibility and understandability. Approaches for analyzing the understandability of data mining models have been discussed on two different levels: one regarding the type of the models’ presentation and the other considering the structure of the models. In this study, we present a summary of existing assumptions regarding both approaches, followed by an empirical examination of understandability from the user’s point of view through a survey. From the results of the survey, we find that models represented as decision trees are more understandable than models represented as decision rules. Using the survey results regarding the understandability of a number of models in conjunction with quantitative measurements of the complexity of the models, we are able to establish a correlation between the complexity and the understandability of the models.
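    As a concrete illustration of the final step, a rank correlation between a complexity measure and survey-based understandability scores could be computed as in the sketch below; the data values, variable names and the choice of Spearman's rho are illustrative assumptions, not taken from the thesis:

        # Hedged sketch: correlating model complexity with survey-based
        # understandability. All data values here are hypothetical.
        from scipy.stats import spearmanr

        # Per-model data: complexity = rule/node count of the model,
        # understandability = mean survey rating (1-5, higher = clearer).
        complexity = [5, 12, 20, 35, 60, 110]
        understandability = [4.6, 4.1, 3.8, 3.0, 2.4, 1.9]

        rho, p_value = spearmanr(complexity, understandability)
        print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
        # A strongly negative rho indicates that more complex models are
        # rated as less understandable, in line with the study's finding.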

  • 83.
    Almkvist, Jimmy
    Örebro University, School of Science and Technology.
    Empirecraft2014Independent thesis Basic level (professional degree), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In my thesis I have produced the start of a multiplayer voxel strategy sandbox game with advanced AI. The world is made out of voxels in the form of blocks that both the players and other units can affect and change, in a world where every block, fluid or solid, follows the laws of physics. The game is designed for several players who fight for control over land and resources.

  • 84.
    Almström, Malin
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Olsson, Christina
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Förbättrad Kravhantering med hjälp av Lösningsinriktad Pedagogik2002Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    The purpose of writing this thesis was to improve methods used during the requirements engineering phase. User-centred system engineering has some problem areas, which are examined and verified in order to create a new guideline for developers. This guideline aims to make requirements engineering more effective and to help developers create more concrete requirements. It is not uncommon that system development projects end up with unsatisfied users or delayed deliveries. The reasons are different kinds of communication problems between users and developers during the verification of requirements. There is a therapy model, called solution-focused therapy, used in family and individual therapy. The model focuses on solutions for the future instead of problems in the past. This method had never been used in system development until today. Based on literature studies and scenarios, we have shown that it is possible to transfer this pedagogy to system development, especially to requirements engineering. During our investigation we have shown that the pedagogy counters the difficulties within user-centred design. The pedagogy model can be used in four kinds of methods for capturing requirements: questionnaires, interviews, workshops and observations. The model activates the users and makes them more involved. To show this, we have applied the pedagogy model to scenarios taken from earlier experiences of requirements engineering. The outcome of this investigation is that developers can create a more pleasant communication atmosphere with this pedagogy. As a result, users become more willing and helpful in creating a new system, which makes it easier for developers and users to cooperate. Many communication problems can be avoided if you know how to work around them.

  • 85.
    Al-Refai, Ali
    et al.
    Blekinge Institute of Technology, School of Computing.
    Pandiri, Srinivasreddy
    Blekinge Institute of Technology, School of Computing.
    Cloud Computing: Trends and Performance Issues2011Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Cloud Computing is a very fascinating concept these days; it is attracting many organizations to move their utilities and applications into dedicated data centers where they can be accessed from the Internet. This allows users to focus solely on their businesses while Cloud Computing providers handle the technology. Choosing the best provider is a challenge for organizations that are willing to step into the Cloud Computing world. A single cloud center generally cannot deliver large-scale resources for cloud tenants; therefore, multiple cloud centers need to collaborate to achieve some business goals and to provide the best possible services at the lowest possible costs. However, a number of aspects, legal issues, challenges, and policies should be taken into consideration when moving a service into the Cloud environment. Objectives: The aim of this research is to identify and elaborate the major technical and strategic differences between Cloud Computing providers, in order to enable organizations' management, system designers and decision makers to gain better insight into the strategies of the different Cloud Computing providers. It is also to understand the risks and challenges of implementing Cloud Computing, and how those issues can be moderated. This study tries to define Multi-Cloud Computing by studying the pros and cons of this new domain. It also aims to study the concept of load balancing in the cloud in order to examine the performance over multiple cloud environments. Methods: In this master thesis a number of research methods are used, including a systematic literature review, contacting experts from the relevant field (interviews) and performing a quantitative methodology (experiment). Results: Based on the findings of the literature review, interviews and experiment, we obtained the following results for the research questions: 1) a comprehensive study identifying and comparing the major Cloud Computing providers, 2) a list of impacts of Cloud Computing (legal aspects, trust and privacy), 3) a definition of Multi-Cloud Computing together with its benefits and drawbacks, and 4) performance results for the cloud environment from an experiment on a load balancing solution. Conclusions: Cloud Computing has become a central interest for many organizations nowadays. More and more companies are stepping into Cloud Computing service technologies; Amazon, Google, Microsoft, SalesForce, and Rackspace are the top five major providers in the market today. However, there is no Cloud that is perfect for all services. The legal framework is very important for the protection of the user's private data; it is a key factor for the safety of the user's personal and sensitive information. The privacy threats vary according to the nature of the cloud scenario: some clouds and services face very low privacy threats compared to others, and the public cloud that is accessed through the Internet is among the most exposed when it comes to privacy concerns. Lack of visibility into the provider's supply chain will lead to suspicion and ultimately distrust. The evolution of Cloud Computing indicates that, in the near future, the so-called Cloud will in fact be a Multi-Cloud environment composed of a mixture of private and public Clouds forming an adaptive environment.

    Load balancing in the Cloud Computing environment is different from typical load balancing. The architecture of cloud load balancing uses a number of commodity servers to perform the load balancing. The performance of the cloud differs depending on the cloud's location, even for the same provider. The HAProxy load balancer shows a positive effect on the cloud's performance at high amounts of load, while the effect is unnoticeable at lower amounts of load. These effects can vary depending on the location of the cloud.

  • 86.
    Altskog, Tomas
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Customized Analytics Software: Investigating efficient development of an application2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Google Analytics is the most widely used web traffic analytics program in the world, with a wide array of functionality serving several different purposes for its users. However, the cost of training employees in the usage of Google Analytics can be expensive and time consuming due to the generality of the software. The purpose of this thesis is to explore an alternative solution to having employees learn the default Google Analytics interface, thus possibly reducing training expenses. A prototype written in the Java programming language is developed which implements the MVC and facade software patterns for the purpose of making the development process more efficient. It contains a feature for retrieving custom reports from Google Analytics using Google's Core Reporting API; in addition, two web pages are integrated into the prototype using the Google Embed API. The prototype is then used along with the software estimation method COCOMO to estimate the amount of effort required to develop a similar program. This is done by counting the prototype's source lines of code manually, following the guidelines given by the COCOMO manual, and then inserting the result into the COCOMO estimation formula. The count of lines of code for the entire prototype is 567, and the count which considers reused code is 466. The value retrieved from the formula is 1.61±0.14 person-months for the estimation of the entire program and 1.31±0.16 for a program with reused code. The conclusion of the thesis is that the result of the estimation has several weaknesses and further research is necessary in order to improve its accuracy.
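    For orientation, the basic COCOMO effort equation that such SLOC counts are inserted into has the form below; the abstract does not state which COCOMO variant or cost-driver settings the thesis applied, so the organic-mode coefficients are an illustrative assumption:

        \[
          E = a \cdot (\mathrm{KSLOC})^{b} \cdot \prod_i \mathrm{EM}_i,
          \qquad a = 2.4,\; b = 1.05 \text{ (COCOMO 81, organic mode)}
        \]

    With KSLOC = 0.567 and all effort multipliers EM_i left at 1, this gives E ≈ 2.4 × 0.567^1.05 ≈ 1.3 person-months, the same order of magnitude as the estimates reported above.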

  • 87.
    Al-Zubaidy, Hussein
    et al.
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Fodor, Viktoria
    KTH, School of Electrical Engineering (EES), Network and Systems engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dán, György
    KTH, School of Electrical Engineering (EES), Network and Systems engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Information Science and Engineering. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Reliable Video Streaming With Strict Playout Deadline in Multihop Wireless Networks2017In: IEEE transactions on multimedia, ISSN 1520-9210, E-ISSN 1941-0077, Vol. 19, no 10, 2238-2251 p.Article in journal (Refereed)
    Abstract [en]

    Motivated by emerging vision-based intelligent services, we consider the problem of rate adaptation for high-quality and low-delay visual information delivery over wireless networks using scalable video coding. Rate adaptation in this setting is inherently challenging due to the interplay between the variability of the wireless channels, the queuing at the network nodes, and the frame-based decoding and playback of the video content at the receiver at very short time scales. To address the problem, we propose a low-complexity model-based rate adaptation algorithm for scalable video streaming systems, building on a novel performance model based on stochastic network calculus. We validate the analytic model using extensive simulations. We show that it allows fast near-optimal rate adaptation for fixed transmission paths, as well as cross-layer optimized routing and video rate adaptation in mesh networks, with less than 10% quality degradation compared to the best achievable performance.

  • 88.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Chalmers, SWE.
    On the long-term use of visual gui testing in industrial practice: a case study2017In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 22, no 6, 2937-2971 p.Article in journal (Refereed)
    Abstract [en]

    Visual GUI Testing (VGT) is a tool-driven technique for automated GUI-based testing that uses image recognition to interact with and assert the correctness of the behavior of a system through its GUI as it is shown to the user. The technique’s applicability, e.g. defect-finding ability, and feasibility, e.g. time to positive return on investment, have been shown through empirical studies in industrial practice. However, there is a lack of studies that evaluate the usefulness and challenges associated with VGT when used long-term (years) in industrial practice. This paper evaluates how VGT was adopted, applied and why it was abandoned at the music streaming application development company, Spotify, after several years of use. A qualitative study with two workshops and five well-chosen employees is performed at the company, supported by a survey, which is analyzed with a grounded theory approach to answer the study’s three research questions. The interviews provide insights into the challenges, problems and limitations, but also benefits, that Spotify experienced during the adoption and use of VGT. However, due to the technique’s drawbacks, VGT has been abandoned for a new technique/framework, simply called the Test interface. The Test interface is considered more robust and flexible for Spotify’s needs but has several drawbacks, including that it does not test the actual GUI as shown to the user like VGT does. From the study’s results it is concluded that VGT can be used long-term in industrial practice but it requires organizational change as well as engineering best practices to be beneficial. Through synthesis of the study’s results, and results from previous work, a set of guidelines is presented that aims to aid practitioners to adopt and use VGT in industrial practice. However, due to the abandonment of the technique, future research is required to analyze in what types of projects the technique is, and is not, long-term viable. To this end, we also present Spotify’s Test interface solution for automated GUI-based testing and conclude that it has its own benefits and drawbacks.

  • 89.
    Alégroth, Emil
    et al.
    Chalmers.
    Gustafsson, Johan
    SAAB AB, SWE.
    Ivarsson, Henrik
    SAAB AB, SWE.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Replicating Rare Software Failures with Exploratory Visual GUI Testing2017In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 34, no 5, 53-59 p., 8048660Article in journal (Refereed)
    Abstract [en]

    Saab AB developed software that had a defect that manifested itself only after months of continuous system use. After years of customer failure reports, the defect still persisted, until Saab developed failure replication based on visual GUI testing. © 1984-2012 IEEE.

  • 90.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Matsuki, Shinsuke
    Veriserve Corporation, JPN.
    Vos, Tanja
    Open University of the Netherlands, NLD.
    Akemine, Kinji
    Nippon Telegraph and Telephone Corporation, JPN.
    Overview of the ICST International Software Testing Contest2017In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, IEEE Computer Society, 2017, 550-551 p.Conference paper (Refereed)
    Abstract [en]

    In the software testing contest, practitioners and researchers are invited to pit their test approaches against similar approaches, to evaluate pros and cons and determine which is perceived to be the best. The 2017 iteration of the contest focused on Graphical User Interface-driven testing, which was evaluated on the testing tool TESTONA. The winner of the competition was announced at the closing ceremony of the International Conference on Software Testing (ICST), 2017. © 2017 IEEE.

  • 91.
    Amaradri, Anand Srivatsav
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nutalapati, Swetha Bindu
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Continuous Integration, Deployment and Testing in DevOps Environment2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Context. Owing to a multitude of factors like rapid changes in technology, market needs, and business competitiveness, software companies these days are facing pressure to deliver software rapidly and on a frequent basis. For frequent and faster delivery, companies should be lean and agile in all phases of the software development life cycle. An approach called DevOps, which is based on agile principles, has come into play. DevOps bridges the gap between development and operations teams and facilitates faster product delivery. The DevOps phenomenon has gained wide popularity in the past few years, and several companies are adopting DevOps to leverage its perceived benefits. However, organizations may face several challenges while adopting DevOps. There is a need to obtain a clear understanding of how DevOps functions in an organization.

    Objectives. The main aim of this study is to provide researchers and software practitioners with a clear understanding of how DevOps works in an organization. The objectives of the study are to identify the benefits of implementing DevOps in organizations where agile development is in practice, the challenges faced by organizations during DevOps adoption, the solutions/mitigation strategies to overcome the challenges, the DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Methods. A mixed methods approach having both qualitative and quantitative research methods is used to accomplish the research objectives. A systematic literature review is conducted to identify the benefits and challenges of DevOps adoption, and the DevOps practices. Interviews are conducted to further validate the SLR findings, and to identify the solutions to overcome DevOps adoption challenges, and the DevOps practices. The SLR and interview results are mapped, and a survey questionnaire is designed. The survey is conducted to validate the qualitative data, and to identify other benefits and challenges of DevOps adoption, solutions to overcome the challenges, DevOps practices, and the problems faced by DevOps teams during continuous integration, deployment and testing.

    Results. 31 primary studies relevant to the research are identified for conducting the SLR. After analysing the primary studies, an initial list of the benefits and challenges of DevOps adoption, and the DevOps practices is obtained. Based on the SLR findings, a semi-structured interview questionnaire is designed, and interviews are conducted. The interview data is thematically coded, and a list of the benefits, challenges of DevOps adoption and solutions to overcome them, DevOps practices, and problems faced by DevOps teams is obtained. The survey responses are statistically analysed, and a final list of the benefits of adopting DevOps, the adoption challenges and solutions to overcome them, DevOps practices and problems faced by DevOps teams is obtained.

    Conclusions. Using the mixed methods approach, a final list of the benefits of adopting DevOps, DevOps adoption challenges, solutions to overcome the challenges, practices of DevOps, and the problems faced by DevOps teams during continuous integration, deployment and testing is obtained. The list is clearly elucidated in the document. The final list can aid researchers and software practitioners in obtaining a better understanding regarding the functioning and adoption of DevOps. Also, it has been observed that there is a need for more empirical research in this domain.

  • 92. Ambreen, T.
    et al.
    Ikram, N.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Niazi, M.
    Empirical research in requirements engineering: trends and opportunities2016In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, 1-33 p.Article in journal (Refereed)
    Abstract [en]

    Requirements engineering (RE), being a foundation of software development, has gained great recognition in the recent era of the prevailing software industry. A number of journals and conferences have published a great amount of RE research in terms of various tools, techniques, methods, and frameworks, with a variety of processes applicable in different software development domains. The plethora of empirical RE research needs to be synthesized to identify trends and future research directions. To represent the state of the art of requirements engineering, along with various trends and opportunities of empirical RE research, we conducted a systematic mapping study to synthesize the empirical work done in RE. We used four major databases, IEEE, ScienceDirect, SpringerLink and ACM, and identified 270 primary studies up to the year 2012. An analysis of the data extracted from the primary studies shows that empirical research work in RE has been on the increase since the year 2000. Requirements elicitation with 22 % of the total studies, requirements analysis with 19 % and the RE process with 17 % are the major focus areas of empirical RE research. Non-functional requirements were found to be the most researched emerging area. The empirical work in the sub-area of requirements validation and verification is scarce and shows a decreasing trend. The majority of the studies (50 %) used a case study research method, followed by experiments (28 %), whereas experience reports are few (6 %). A common trend in almost all RE sub-areas is the proposal of new interventions. The leading intervention types are guidelines, techniques and processes. Interest in empirical RE research is on the rise as a whole. However, the requirements validation and verification area, despite its recognized importance, lacks empirical research at present. Furthermore, requirements evolution and privacy requirements also have little empirical research. These RE sub-areas need the attention of researchers for more empirical research. At present, the focus of empirical RE research is more on proposing new interventions. In the future, there is a need to replicate existing studies as well as to evaluate RE interventions in more real contexts and scenarios. Practitioners’ involvement in RE empirical research needs to be increased so that they share their experiences of using different RE interventions and also inform us about the current requirements-related challenges and issues that they face in their work. © 2016 Springer-Verlag London

  • 93.
    Ameerjan, Sharvathul Hasan
    Mälardalen University, School of Innovation, Design and Engineering.
    Predicting and Estimating Execution Time of Manual Test Cases - A Case Study in Railway Domain2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Testing plays a vital role in the software development life cycle by verifying and validating the software's quality. Since software testing is considered an expensive activity, and due to the limitations of budget and resources, it is necessary to know the execution time of test cases for efficient planning of test-related activities such as test scheduling, prioritizing test cases and monitoring the test progress. In this thesis, an approach is proposed to predict and estimate the execution time of manual test cases written in English natural language. The method uses test specifications and historical data that are available from previously executed test cases. Our approach works by obtaining timing information from each and every step of previously executed test cases. The collected data is used to estimate the execution time of non-executed test cases by mapping them using text from their test specifications. Using natural language processing, texts are extracted from the test specification document and mapped to the obtained timing information. After estimating the time from this mapping, a linear regression analysis is used to predict the execution time of non-executed test cases. A case study has been conducted at Bombardier Transportation (BT), where the proposed method is implemented and the results are validated. The obtained results show that the predicted execution time of the studied test cases is close to their actual execution time.
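    A minimal sketch of the pipeline described above is given below, with the NLP step reduced to simple string normalization; the step texts, durations and matching rule are hypothetical, not data from the thesis:

        # Hedged sketch: estimate a test case's execution time from per-step
        # timings of previously executed cases, then fit a linear regression
        # from estimated to actual times. All names and numbers are made up.
        import numpy as np

        # Historical timing data: average observed duration (s) per step text.
        step_durations = {
            "power on the unit": 30.0,
            "verify startup screen": 12.0,
            "send test telegram": 8.5,
        }

        def estimate_case_time(steps):
            """Sum historical durations of steps that have been seen before."""
            return sum(step_durations.get(s.lower().strip(), 0.0) for s in steps)

        # Estimated vs. actually measured total times (s) for executed cases.
        estimated = np.array([50.5, 42.0, 95.0, 120.5])
        actual = np.array([55.0, 44.5, 101.0, 131.0])

        # Ordinary least squares: actual ~ slope * estimated + intercept.
        slope, intercept = np.polyfit(estimated, actual, deg=1)

        new_case = ["Power on the unit", "Send test telegram"]
        prediction = slope * estimate_case_time(new_case) + intercept
        print(f"predicted execution time: {prediction:.1f} s")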

  • 94.
    Amiri, Javad Mohammadian
    et al.
    Blekinge Institute of Technology, School of Computing.
    Padmanabhuni, Venkata Vinod Kumar
    Blekinge Institute of Technology, School of Computing.
    A Comprehensive Evaluation of Conversion Approaches for Different Function Points2011Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context: Software cost and effort estimation are important activities for the planning and estimation of software projects. One major input to cost and effort estimation is the functional size of the software, which can be measured in a variety of ways. With several methods for measuring one entity, converting the outputs of these methods becomes important. Objectives: In this study we investigate different techniques that have been proposed for conversion between different Functional Size Measurement (FSM) methods. We address conceptual similarities and differences between the methods, empirical approaches proposed for conversion, evaluation of the proposed approaches and improvement opportunities available for the current approaches. Finally, we propose a new conversion model based on accumulated data. Methods: We conducted a systematic literature review to investigate the similarities and differences between FSM methods and the approaches proposed for conversion. We also identified some improvement opportunities for the current conversion approaches. Sources for articles were IEEE Xplore, Engineering Village, Science Direct, ISI, and Scopus. We also performed snowball sampling to decrease the chance of missing any relevant papers. In addition, we evaluated the existing models for conversion after merging the data from publicly available datasets. Building on suggestions for improvement, we developed a new model and then validated it. Results: The conceptual similarities and differences between methods are presented, along with all methods and models that exist for conversion between different FSM methods. We also made three major contributions to the existing empirical methods: for one existing method (piecewise linear regression) we used a systematic and rigorous way of finding the discontinuity point; we evaluated several existing models to test their reliability on a merged dataset; and finally we accumulated all data from the literature in order to determine the nature of the relation between IFPUG and COSMIC using the LOESS regression technique. Conclusions: We conclude that many concepts used by different FSM methods are common, which enables conversion. In addition, the statistical results show that the proposed enhancement of the piecewise linear regression model slightly improves the model's test results; even this small improvement can have a large effect on project costs. The results of the model evaluation show that it is not possible to say which method predicts unseen data better than the others; which model should be used depends on the practitioner's concerns. Finally, the accumulated data confirms that the empirical relation between IFPUG and COSMIC is not linear and is better represented by two separate lines than by other models. We also noted that, unlike the COSMIC manual's claim that the discontinuity point should be around 200 FP, in the merged dataset the discontinuity point is around 300 to 400 FP. Finally, we proposed a new conversion approach using a systematic approach and piecewise linear regression. When tested on new data, this model shows improvement in MMRE and Pred(25).
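    A minimal sketch of a systematic breakpoint search for such a piecewise linear fit is shown below; the IFPUG/COSMIC data points and the search range are hypothetical stand-ins, not the merged dataset from the study:

        # Hedged sketch: find the discontinuity point of a piecewise linear
        # IFPUG -> COSMIC conversion by exhaustively scoring candidate
        # breakpoints. All data values here are hypothetical.
        import numpy as np

        ifpug = np.array([50, 120, 180, 250, 320, 400, 520, 640])    # FP
        cosmic = np.array([45, 100, 160, 230, 330, 450, 610, 780])   # CFP

        def piecewise_sse(breakpoint):
            """Total squared error of two line fits split at breakpoint."""
            sse = 0.0
            for mask in (ifpug <= breakpoint, ifpug > breakpoint):
                if mask.sum() < 2:          # need two points per segment
                    return np.inf
                coeffs = np.polyfit(ifpug[mask], cosmic[mask], deg=1)
                resid = cosmic[mask] - np.polyval(coeffs, ifpug[mask])
                sse += float(resid @ resid)
            return sse

        # Systematic search: try every candidate breakpoint, keep the best.
        best = min(range(100, 600, 10), key=piecewise_sse)
        print(f"estimated discontinuity point: ~{best} FP")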

  • 95.
    Andersson, Alve
    Blekinge Institute of Technology, School of Computing.
    Att sticka ut i mängden: En studie av tekniker för variation av instansierade modeller2013Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Despite recent hardware developments, real-time rendering of large crowds is still not a trivial task. This task is described as crowd rendering. Efficient crowd rendering is often based on instancing, but instancing comes with a problem: it creates clones. This thesis aims to examine and evaluate a number of techniques used to create diversity among instanced models. These techniques will collectively be referred to as varied instancing. Another goal is to determine how many models are needed for varied instancing to pay off in comparison with non-instanced rendering. The method used is to measure the time of each update on the GPU for each technique with the help of a measurement instrument. Each technique has been implemented in an application created specifically for this purpose. The analysis of the measurements resulted in three categories of GPU percentage workload: rising per instance and declining per polygon, falling per instance and declining per polygon, and even across instances and polygons. The number of instances needed for varied instancing to pay off in comparison with non-instanced rendering was determined to be somewhere between 100 and 300 models, depending on the number of polygons.

  • 96.
    Andersson, Björn
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Persson, Marie
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Software Reliability Prediction – An Evaluation of a Novel Technique2004Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Along with continuously increasing computerization, our expectations of software and hardware reliability increase considerably. Therefore, software reliability has become one of the most important software quality attributes. Software reliability modeling based on test data is done to estimate whether the current reliability level meets the requirements for the product. Software reliability modeling also provides possibilities to predict reliability. The costs of software development and testing, together with profit issues in relation to software reliability, are among the main motivations for software reliability prediction. Software reliability prediction currently uses different models for this purpose. Parameters have to be set in order to tune a model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), is developed to challenge the models used today. An experiment is set up to investigate whether TIE could be found useful in a software reliability prediction context. The experiment is based on a comparison between ordinary reliability prediction models and TIE.

  • 97. Andersson, Emma
    et al.
    Peterson, Anders
    Törnquist Krasemann, Johanna
    Blekinge Institute of Technology, School of Computing.
    Quantifying railway timetable robustness in critical points2013In: Journal of Rail Transport Planning and Management, ISSN 2210-9706, Vol. 3, no 3, 95-110 p.Article in journal (Refereed)
    Abstract [en]

    Several European railway traffic networks experience high capacity consumption during large parts of the day, resulting in delay-sensitive traffic systems with insufficient robustness. One fundamental challenge is therefore to assess the robustness and find strategies to decrease the sensitivity to disruptions. Accurate robustness measures are needed to determine whether a timetable is sufficiently robust and to suggest where improvements should be made. Existing robustness measures are useful when comparing different timetables with respect to robustness. They are, however, not as useful for suggesting precisely where and how robustness should be increased. In this paper, we propose a new robustness measure that incorporates the concept of critical points. This concept can be used in the practical timetabling process to find weaknesses in a timetable and to provide suggestions for improvements. In order to quantitatively assess how crucial a critical point may be, we have defined the measure robustness in critical points (RCP). In this paper, we present results from an experimental study in which a benchmark of several measures as well as RCP has been performed. The results demonstrate the relevance of the concept of critical points and of RCP, and how it contributes to the set of already defined robustness measures.

  • 98.
    Andersson, Jesper
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Datalogi.
    Dynamic Software Architectures2007Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Software architecture is a software engineering discipline that provides notations and processes for high-level partitioning of systems' responsibilities early in the software design process. This thesis is concerned with a specific subclass of systems, systems with a dynamic software architecture. They have practical applications in various domains such as high-availability systems and ubiquitous computing.

    In a dynamic software architecture, the set of architectural elements and the configuration of these elements may change at run-time. These modifications are motivated by changed system requirements or by changed execution environments. The implications of change events may be the addition of new functionality or re-configuration to meet new Quality of Service requirements.

    This thesis investigates new modeling and implementation techniques for dynamic software architectures. The field of Dynamic Architecture is surveyed and a common ground defined. We introduce new concepts and techniques that simplify understanding, modeling, and implementation of systems with a dynamic architecture, with this common ground as our starting point. In addition, we investigate practical use and reuse of quality implementations, where a dynamic software architecture is a fundamental design principle.

    The main contributions are a taxonomy, a classification, and a set of architectural patterns for dynamic software architecture. The taxonomy and classification support analysis, while the patterns affect design and implementation work directly. The investigation of practical applications of dynamic architectures identifies several issues concerned with use and reuse, and discusses alternatives and solutions where possible.

    The results are based on surveys, case studies, and exploratory development of dynamic software architectures in different application domains using several approaches. The taxonomy, classification and architecture patterns are evaluated through several experimental prototypes, among others a high-performance scientific computing platform.

  • 99.
    Andersson, Jesper
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Bencomo, Nelly
    Baresi, Luciano
    Lemos, Rogerio de
    Gorla, Alessandra
    Inverardi, Paola
    Vogel, Thomas
    Software Engineering Processes for Self-adaptive Systems2012In: Software Engineering for Self-adaptive Software Systems, Springer, 2012Chapter in book (Refereed)
    Abstract [en]

    In this paper, we discuss how, for self-adaptive systems, some activities that traditionally occur at development-time are moved to run-time. Responsibilities for these activities shift from software engineers to the system itself, causing the traditional boundary between development-time and run-time to blur. As a consequence, we argue that the traditional software engineering process needs to be reconceptualized to distinguish both development-time and run-time activities, and to support designers in taking decisions on how to properly engineer such systems. Furthermore, we identify a number of challenges related to this required reconceptualization, and we propose initial ideas based on process modeling. We use the Software and Systems Process Engineering Meta-Model (SPEM) to specify which activities are meant to be performed off-line and on-line, and also the dependencies between them. The proposed models should capture information about the costs and benefits of shifting activities to run-time, since such models should support software engineers in their decisions when they are engineering self-adaptive systems.

  • 100.
    Andersson, Jesper
    et al.
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    de Lemos, Rogerio
    Malek, Sam
    Weyns, Danny
    Katholieke Universiteit Leuven.
    Reflecting on self-adaptive software systems2009In: Software Engineering for Adaptive and Self-Managing Systems, 2009. SEAMS '09. ICSE Workshop on, 2009, Vol. 0, 38-47 p.Conference paper (Refereed)