Accident models and analysis methods affect what accident investigators look for, which contributory factors are found, and which recommendations are issued. This paper contrasts the Sequentially Timed Events Plotting (STEP) method and the Functional Resonance Analysis Method (FRAM) for accident analysis and modelling. The main issue addressed in this paper is the comparison of the established multi-linear method STEP with the new systemic method FRAM, and the question of which new insights the latter provides for accident analysis. Since STEP and FRAM are based on different understandings of the nature of accidents, the comparison of the methods focuses on what we can learn from both methods, and how, when, and why to apply them. The main finding is that STEP helps to illustrate what happened, involving which actors at what time, whereas FRAM illustrates the dynamic interactions within socio-technical systems and lets the analyst understand the how and why by describing non-linear dependencies, performance conditions, variability, and their resonance across functions.
When shortening release cycles and moving towards continuous delivery, a different approach to quality assurance may be needed than in traditional release management. To allow the transition, all stakeholders must retain a sense of confidence in the quality of release candidates. This thesis proposes a definition of confidence consisting of 30 confidence factors to take into account in order to ensure confidence among all stakeholders. The confidence factors were found through interviews with 11 stakeholders, and were analyzed and categorized using grounded theory analysis. The factors found are grouped into two main categories: Process and Verification Results.
The thesis additionally contains a literature review of quality measurements and explores how confidence can be expressed in a continuous delivery pipeline. It is found that confidence cannot be comprehensively expressed with pipeline-displayable metrics alone when only currently well-researched metrics are included; however, combined with processes known to be followed in the organization, some metrics provide coverage for many of the confidence factors.
While GPS has long been an industry standard for localization of an entity or person anywhere in the world, it loses much of its accuracy and value when used indoors. To enable services such as indoor navigation, other methods must be used. A new standard of the Wi-Fi protocol, IEEE 802.11mc (Wi-Fi RTT), enables distance estimation between the transmitter and the receiver based on the Round-Trip Time (RTT) delay of the signal. Using these distance estimations and the known locations of the transmitting Access Points (APs), an estimate of the receiver’s location can be determined. In this thesis, a smartphone Wi-Fi RTT based Indoor Positioning System (IPS) is presented using an Unscented Kalman Filter (UKF). The UKF, using only RTT-based distance estimations as input, is established as a baseline implementation. Two extensions are then presented to improve the positioning performance: 1) a dead reckoning algorithm using smartphone sensors that are part of the Inertial Measurement Unit (IMU) as an additional input to the UKF, and 2) a method to detect and adjust distance measurements that have been made in Non-Line-of-Sight (NLoS) conditions. The implemented IPS is evaluated in an office environment in both favorable situations (plenty of Line-of-Sight conditions) and sub-optimal situations (dominant NLoS conditions). Using both extensions, meter-level accuracy is achieved in both cases, as well as a 90th percentile error of less than 2 meters.
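As a rough illustration of the baseline estimator, the sketch below implements a UKF whose measurement model maps a 2D position onto expected AP distances, using the filterpy library. The AP layout, noise levels, and update rate are invented for the example and are not the values used in the thesis.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# Illustrative AP positions in meters; the thesis's office layout differs.
APS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])
DT = 0.5  # assumed seconds between RTT ranging bursts

def fx(x, dt):
    # Constant-velocity motion model; state is [px, py, vx, vy].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    return F @ x

def hx(x):
    # Measurement model: expected distance from the position to each AP.
    return np.linalg.norm(APS - x[:2], axis=1)

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=len(APS), dt=DT,
                            fx=fx, hx=hx, points=points)
ukf.x = np.array([5.0, 4.0, 0.0, 0.0])  # initial position/velocity guess
ukf.P *= 10.0                           # initial uncertainty
ukf.R = np.eye(len(APS)) * 1.0          # RTT ranging noise (~1 m std, assumed)
ukf.Q = np.eye(4) * 0.01                # process noise (assumed)

rtt_distances = np.array([6.4, 6.5, 6.4, 6.6])  # one burst of RTT ranges (m)
ukf.predict()
ukf.update(rtt_distances)
print("estimated position:", ukf.x[:2])
```

The two extensions would enter this loop naturally: IMU-based dead reckoning as an extra input to the prediction step, and NLoS handling as an adjustment of `rtt_distances` or of the per-AP entries of `R` before the update.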
Virtual reality (VR) provides many exciting new application opportunities, but also presents new challenges. In contrast to 360° videos, which only allow a user to select their viewing direction, fully immersive VR also lets users move around and interact with objects in the virtual world. To deliver such services most effectively, it is therefore important to understand how users move around in relation to such objects. In this paper, we present a methodology and software tool for generating run-time datasets capturing a user’s interactions with such 3D environments, evaluate and compare different object identification methods that we implement within the tool, and use datasets collected with the tool to demonstrate example uses. The tool was developed in Unity and integrates easily with existing Unity applications through periodic calls that extract information about the environment using different ray-casting methods. The software tool and example datasets are made available with this paper.
Over the past decade, riding motorcycles has become increasingly popular in Sweden. A mandatory risk education for licence categories A and A1 was introduced as of 1 November 2009. As the risk education is new, few evaluations have been made. This study evaluates the risk education for motorcyclists from the perspective of driving instructors. The goal has been to compile the instructors' views on and experiences of the education. A further goal has been to examine the instructors' perceived effects of the education on students' traffic behavior. Six semi-structured interviews with driving instructors and an observational study at different driving schools were carried out. In addition, participant observation was conducted at a further-training course attended by 15 driving instructors. The results of the study show that the instructors consider the need for the risk education to be great and that the implementation of the new risk education has gone well. Besides students referring to the education afterwards, which according to the instructors indicates that they have absorbed what was said, it is now noticeable to a greater extent than before that students ride more calmly and think ahead more in certain situations. This was pointed out to be a clearly desirable result.
During the 1980s, the first train simulator was introduced in Swedish train driver education, and it is still the only full-scale simulator used to educate train drivers in Sweden. The reason for this seems to be a lack of educational and economic motives for expanded use of simulators within education and training. Energy savings within the railway domain, i.e. energy-efficient driving, is currently a topic for all train operators in Sweden. Some operators already educate their drivers in energy-efficient driving, and tests of energy efficiency in real traffic have shown a potential energy saving of 16% after drivers completed a theoretical education in energy-efficient driving. However, there were some uncertainties in the data from the tests carried out in real traffic: conditions and experimental procedures varied between the drivers, and it also turned out that education combined with access to a support system while driving resulted in a smaller energy saving (13%). There was therefore a need to examine the potential savings under controlled conditions, so a study was conducted using a train simulator. In the simulator, the researcher has full control over the data, and conditions are the same for all drivers. The simulator used in the study was developed by VTI (the Swedish National Road and Transport Research Institute) and modeled after an X50 Regina. The purpose of this study was to investigate whether the same theoretical education in energy-efficient driving, in combination with simulator training under ideal conditions, could contribute to the same or better energy savings compared to the results of the tests from real traffic. Furthermore, the effect of feedback during training on energy savings was also investigated. Twenty-four train driver students were divided into three groups of eight. Two of these groups completed two sessions (a reference and a test session) with theoretical education and simulator training between the sessions. The last group (the control group) completed two sessions (reference and test) without education or training in between. The two groups that were given theoretical education conducted their simulator training under two different conditions: one group trained with feedback (energy consumption and rail gradient) and the other group trained without feedback. The results show that a theoretical education in energy-efficient driving, combined with 30 minutes of simulator training, resulted in a total energy saving of about 24% for both groups. Considering that the control group improved their energy consumption simply by driving the simulator twice (an 8% total energy saving), the net energy saving was almost equal to the result of the tests in real traffic. Since the results were equal even though the conditions differed, there is reason to investigate how different driving conditions affect the outcome. There is also a need to better understand why education combined with a support system resulted in a lower energy saving than education alone during the tests in real traffic, and why feedback during training in the simulator did not give a detectable effect. In short, there are many reasons to further investigate how to design simulator training and support systems for train drivers. In addition to the energy savings, the results showed that drivers improved their arrival times, i.e. arrived more accurately in relation to the timetable.
The results suggest that there is great potential for train simulators in the Swedish train driver education, both for training and for evaluating the effects of the training.
In this thesis report, we conducted a research study on drivers' behavior at T-intersections using a simulated environment. The report describes and discusses a correlation analysis of drivers' personality traits and driving styles at T-intersections.
The experiments were performed in a multi-user driving simulator under controlled settings at Linköping University. A total of forty-eight people participated in the study; they were divided into groups of four, all driving in the same simulated world.
During the experiments, participants were asked to fill in a series of well-known self-report questionnaires, which we evaluated to gain insight into drivers' personality traits and driving styles. The self-report questionnaires consisted of Schwartz's configural model of 10 value types and the NEO Five-Factor Inventory. Drivers' behavior was also studied with the help of questionnaires on driver behavior, style, conflict avoidance, time horizon, and tolerance of uncertainty. The 10 Schwartz values were then correlated with the other questionnaires to give detailed insight into the driving habits and personality traits of the drivers.
We develop a tool to explore the behavior of parameterized systems (i.e., systems consisting of an arbitrary number of identical processes that synchronize using shared variables or global communications) and to ease user interaction with tools that verify them. The tool includes a user-friendly GUI that allows the user to describe a parameterized system and to perform guided, interactive, or random simulation. It also allows the user to plug in several independent verifiers to perform verification. A mockup verifier, which involves parsing descriptions of the parameterized systems to be analyzed, was developed to facilitate the development of the tool and to test the required functionality. The tool is flexible in the sense that the user can plug in a verifier developed in any language, as long as it can perform a number of basic computations on the parameterized system (such as computing the set of enabled transitions or the set of successor configurations). To plug in a new verifier, our tool needs to be able to make use of these operations, for instance through a wrapper written for a verifier particular to a class of parameterized systems. Given these operations, our tool enables the user to carry out various types of simulations, such as random, interactive, or guided simulations. Moreover, our tool can submit verification queries to the underlying verifier and walk the user through the generated counterexamples as if it were a simulation session.
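A minimal sketch of what such a wrapper contract could look like, here in Python with hypothetical method names (the actual tool's plug-in interface may differ): any verifier exposing these two operations is enough to drive a random simulation.

```python
import random
from abc import ABC, abstractmethod

class VerifierWrapper(ABC):
    """Minimal contract a plugged-in verifier must fulfil (hypothetical names)."""

    @abstractmethod
    def enabled_transitions(self, configuration):
        """Return the transitions enabled in the given configuration."""

    @abstractmethod
    def successors(self, configuration, transition):
        """Return the configurations reached by firing the transition."""

def random_simulation(wrapper, initial_configuration, steps=20):
    # Drive any wrapper through a random walk of the state space.
    config = initial_configuration
    trace = [config]
    for _ in range(steps):
        enabled = wrapper.enabled_transitions(config)
        if not enabled:
            break  # deadlock: no transition can fire
        transition = random.choice(list(enabled))
        config = random.choice(list(wrapper.successors(config, transition)))
        trace.append(config)
    return trace
```

Interactive and guided simulation follow the same pattern, with the transition choice made by the user or by a supplied trace instead of `random.choice`.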
At Ericsson, the Automation Team automates test cases that are frequently rerun. This process involves copying data related to a particular Configured Test Case from a database and pasting it into a Java file created to run a test case. A single Java file can contain more than one Configured Test Case, so the information can vary. The tester then has to add the package name, necessary imports, member variables, preamble and postamble methods, help methods, and main execution methods. A lot of time and effort is consumed in writing all of this code. The Automation Team therefore proposed a tool that can generate this information so that the tester only has to make minor changes, saving time and resources. Development began, and a tool named Automatic Test Builder, written in Java, was created to help the automation teams in Ottawa, Kista, and Linköping.
This document elaborates on the problem statement, the chosen approach, and the tools used in the development process, and gives a detailed overview of all development stages of Automatic Test Builder. It also explains issues that arose during development, together with the evaluation and usability analysis of Automatic Test Builder.
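As a rough sketch of the code-generation idea, the following fills a Java class skeleton from Configured Test Case data. The template and field names are hypothetical stand-ins; the real tool's template and package layout are Ericsson-internal and differ.

```python
from string import Template

# Hypothetical skeleton of a generated test class; the real tool's template,
# package layout and helper methods differ.
JAVA_TEMPLATE = Template("""\
package $package;

$imports

public class $class_name {
    $member_variables

    public void preamble() { /* generated setup */ }

    public void execute() {
        $test_body
    }

    public void postamble() { /* generated teardown */ }
}
""")

def build_test_file(package, class_name, imports, members, body):
    # Fill the template with data copied from the Configured Test Case.
    return JAVA_TEMPLATE.substitute(
        package=package,
        class_name=class_name,
        imports="\n".join(f"import {i};" for i in imports),
        member_variables="\n    ".join(members),
        test_body=body,
    )
```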
The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2021 campaign offered 13 tracks and was attended by 21 participants. This paper is an overall presentation of that campaign.
The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2020 campaign offered 12 tracks with 36 test cases, and was attended by 19 participants. This paper is an overall presentation of that campaign.
The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities. The OAEI 2022 campaign offered 14 tracks and was attended by 18 participants. This paper is an overall presentation of that campaign.
Ontologies have been proposed as a means towards making data FAIR (Findable, Accessible, Interoperable, Reusable). This has attracted much interest in several communities, and ontologies are being developed. However, to obtain good results when using ontologies in semantically-enabled applications, the ontologies need to be of high quality. One of the quality aspects is that the ontologies should be as complete as possible. In this paper we propose a first version of a tool that supports users in extending ontologies using a phrase-based approach. To demonstrate the usefulness of our proposed tool, we exemplify its use by extending the Materials Design Ontology.
Ontologies have been proposed as a means towards making data FAIR (Findable, Accessible, Interoperable, Reusable) and have recently attracted much interest in the materials science community. Ontologies for this domain are being developed, and one such effort is the Materials Design Ontology. However, to obtain good results when using ontologies in semantically-enabled applications, the ontologies need to be of high quality. One of the quality aspects is that the ontologies should be as complete as possible. In this paper we show preliminary results regarding extending the Materials Design Ontology using a phrase-based topic model.
Due to the importance of data FAIRness (Findable, Accessible, Interoperable, Reusable), ontologies as a means to make data FAIR have attracted more and more attention in different communities and are being used in semantically-enabled applications. However, to obtain good results when using ontologies in these applications, high-quality ontologies are needed, and completeness is one of the important quality aspects: an ontology lacking information can lead to missing results. In this paper we present a tool, Phrase2Onto, that supports users in extending ontologies to make them more complete. It is particularly suited for ontology extension using a phrase-based topic model approach, but the tool can support any extension approach where a user needs to make decisions regarding the appropriateness of using phrases to define new concepts. We describe the functionality of the tool and a user study using the Pizza Ontology. The user study showed good usability of the system and high task completion. Further, we report on a real application where we extend the Materials Design Ontology.
Despite the X.509 public key infrastructure (PKI) being essential for ensuring the trust we place in our communication with web servers, the revocation of the trust placed in individual X.509 certificates is neither transparent nor well-studied, leaving many unanswered questions. In this paper, we present a temporal analysis of 36 million certificates, whose revocation statuses we followed for 120 days from first being issued. We characterize the revocation rates of different certificate authorities (CAs) and how the rates change over the lifetime of the certificates. We identify and discuss several instances where the status changes from "revoked" to "good", "unauthorized", or "unknown", respectively, before the certificates' expiry. This complements prior work that has observed such inconsistencies in some CAs' behavior after expiry, but it also highlights a potentially more severe problem. Our results highlight heterogeneous revocation practices among the CAs.
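For illustration, the status of a single certificate can be polled with an OCSP request, for example using the Python cryptography library as below. This is a simplified stand-in for the paper's large-scale measurement pipeline, not the authors' actual tooling.

```python
import requests
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

def revocation_status(cert_pem: bytes, issuer_pem: bytes, ocsp_url: str) -> str:
    """Query an OCSP responder once and return the certificate's status."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)
    req = (ocsp.OCSPRequestBuilder()
           .add_certificate(cert, issuer, hashes.SHA1())
           .build())
    resp = requests.post(ocsp_url,
                         data=req.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})
    ocsp_resp = ocsp.load_der_ocsp_response(resp.content)
    if ocsp_resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
        return ocsp_resp.response_status.name  # e.g. UNAUTHORIZED
    return ocsp_resp.certificate_status.name   # GOOD, REVOKED or UNKNOWN
```

Repeating such a query daily per certificate, over 120 days, yields exactly the kind of status time series whose transitions (e.g., "revoked" back to "good") the paper analyzes.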
This thesis project was carried out at IDA (the Department of Computer and Information Science) at Linköping University.

The purpose of the thesis project was to develop a program that would create the conditions for generative art using MyPaint, a digital drawing/painting tool. The method consisted of recording the components the user created, i.e. mouse interactions and keyboard shortcuts, and then using them algorithmically.

The thesis project resulted in a program (SharpArt) that captures mouse interactions and simulates keystrokes (keyboard shortcuts) from and to MyPaint, which in turn creates components that are used algorithmically. The program can also position objects on the canvas according to the desired coordinate values.
Ontologies have become an important tool for representing data in a structured manner. Merging ontologies allows for the creation of ontologies that can later be composed into larger ontologies, as well as for recognizing patterns and similarities between ontologies. Ontologies are nowadays used in many areas, including bioinformatics. In this thesis, we present a desktop version of SAMBO, a system for merging ontologies represented in the languages OWL and DAML+OIL. The system has been developed in Java with JDK (Java Development Kit) 1.4.2. The user can open a file locally or from the network and can merge ontologies using suggestions generated by the SAMBO algorithm. SAMBO provides a user-friendly graphical interface, which guides the user through the merging process.
We present a decision procedure for a logic that combines (i) word equations over string variables denoting words of arbitrary lengths, together with (ii) constraints on the length of words, and on (iii) the regular languages to which words belong. Decidability of this general logic is still open. Our procedure is sound for the general logic, and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written. In contrast to many existing procedures, our method does not make assumptions about the maximum length of words. We have developed a prototypical implementation of our decision procedure, and integrated it into a CEGAR-based model checker for the analysis of programs encoded as Horn clauses. Our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods.
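For a flavor of the constraint class (word equations, length constraints, and regular membership), the snippet below poses such a query to the Z3 SMT solver's Python API. This is only illustrative of the logic; it is not the paper's prototype, which is integrated into a Horn-clause model checker.

```python
from z3 import String, StringVal, Concat, Length, InRe, Re, Star, Solver, sat

x, y = String('x'), String('y')

s = Solver()
s.add(Concat(x, StringVal("ab")) == Concat(StringVal("a"), y))  # word equation
s.add(Length(x) >= 2)                                           # length constraint
s.add(InRe(x, Star(Re("a"))))                                   # regular membership

if s.check() == sat:
    m = s.model()
    print(m[x], m[y])  # e.g. x = "aa", y = "aab"
```

Note that no bound on the length of `x` or `y` is stated anywhere, which mirrors the paper's point that the method does not assume a maximum word length.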
We propose an automatic fence insertion and verification framework for concurrent programs running under relaxed memory. Unlike previous approaches to this problem, which allow only variables of finite domain, we target programs with (unbounded) integer variables. The problem is difficult because it has two different sources of infiniteness: unbounded store buffers and unbounded integer variables. Our framework consists of three main components: (1) a finite abstraction technique for the store buffers, (2) a finite abstraction technique for the integer variables, and (3) a counterexample guided abstraction refinement loop of the model obtained from the combination of the two abstraction techniques. We have implemented a prototype based on the framework and run it successfully on all standard benchmarks together with several challenging examples that are beyond the applicability of existing methods.
We give a sound and complete fence insertion procedure for concurrent finite-state programs running under the classical TSO memory model. This model allows “write to read” relaxation corresponding to the addition of an unbounded store buffer between each processor and the main memory. We introduce a novel machine model, called the Single-Buffer (SB) semantics, and show that the reachability problem for a program under TSO can be reduced to the reachability problem under SB. We present a simple and effective backward reachability analysis algorithm for the latter, and propose a counter-example guided fence insertion procedure. The procedure is augmented by a placement constraint that allows the user to choose places inside the program where fences may be inserted. For a given placement constraint, we automatically infer all minimal sets of fences that ensure correctness. We have implemented a prototype and run it successfully on all standard benchmarks together with several challenging examples that are beyond the applicability of existing methods.
We introduce MEMORAX, a tool for the verification of control state reachability (i.e., safety properties) of concurrent programs manipulating finite range and integer variables and running on top of weak memory models. The verification task is non-trivial as it involves exploring state spaces of arbitrary or even infinite sizes. Even for programs that only manipulate finite range variables, the sizes of the store buffers could grow unboundedly, and hence the state spaces that need to be explored could be of infinite size. In addition, MEMORAX incorporates an interpolation-based CEGAR loop to make possible the verification of control state reachability for concurrent programs involving integer variables. The reachability procedure is used to automatically compute possible memory fence placements that guarantee the unreachability of bad control states under TSO. In fact, for programs only involving finite range variables and running on TSO, the fence insertion functionality is complete, i.e., it will find all minimal sets of memory fence placements (minimal in the sense that removing any fence would result in the reachability of the bad control states). This makes MEMORAX the first freely available, open source, push-button verification and fence insertion tool for programs running under TSO with integer variables.
We introduce TRAU, an SMT solver for an expressive constraint language, including word equations, length constraints, context-free membership queries, and transducer constraints. The satisfiability problem for such a class of constraints is in general undecidable. The key idea behind TRAU is a technique called flattening, which searches for satisfying assignments that follow simple patterns. TRAU implements a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The approximations are refined in an automatic manner by information flow between the two modules. The technique implemented by TRAU can handle a rich class of string constraints and has better performance than state-of-the-art string solvers.
We address the problem of parameterized verification of cache coherence protocols for hardware accelerated transactional memories. In this setting, transactional memories leverage the versioning capabilities of the underlying cache coherence protocol. The length of the transactions, their number, and the number of manipulated variables (i.e., cache lines) are parameters of the verification problem. Caches in such systems are finite-state automata communicating via broadcasts and shared variables. We augment our system with filters that restrict the set of possible executable traces according to existing conflict resolution policies. We show that the verification of coherence for parameterized cache protocols with filters can be reduced to systems with only a finite number of cache lines. For verification, we show how to account for the effect of the adopted filters in a symbolic backward reachability algorithm based on the framework of constrained monotonic abstraction. We have implemented our method and used it to verify transactional memory coherence protocols with respect to different conflict resolution policies.
We consider the verification of safety (strict serializability and abort consistency) and liveness (obstruction and livelock freedom) for the hybrid transactional memory framework FLEXTM. This framework allows for flexible implementations of transactional memories based on an adaptation of the MESI coherence protocol. FLEXTM allows for both eager and lazy conflict resolution strategies. As in the case of Software Transactional Memories, the verification problem is non-trivial, since the number of concurrent transactions, their size, and the number of accessed shared variables cannot be bounded a priori. This complexity is exacerbated by aspects that are specific to hardware and hybrid transactional memories. Our work takes into account intricate behaviours such as cache-line-based conflict detection, false sharing, invisible reads, and non-transactional instructions. We carry out the first automatic verification of a hybrid transactional memory and establish, by adopting a small model approach, challenging properties such as strict serializability, abort consistency, and obstruction freedom for both eager and lazy conflict resolution strategies. We also detect an example that refutes livelock freedom. To achieve this, our prototype tool makes use of the latest antichain-based techniques to handle systems with tens of thousands of states.
We present a technique for automatically verifying safety properties of concurrent programs, in particular programs which rely on subtle dependencies of local states of different threads, such as lock-free implementations of stacks and queues in an environment without garbage collection. Our technique addresses the joint challenges of infinite-state specifications, an unbounded number of threads, and an unbounded heap managed by explicit memory allocation. Our technique builds on the automata-theoretic approach to model checking, in which a specification is given by an automaton that observes the execution of a program and accepts executions that violate the intended specification. We extend this approach by allowing specifications to be given by a class of infinite-state automata. We show how such automata can be used to specify queues, stacks, and other data structures, by extending a data-independence argument. For verification, we develop a shape analysis, which tracks correlations between pairs of threads, and a novel abstraction to make the analysis practical. We have implemented our method and used it to verify programs, some of which have not been verified by any other automatic method before.
Organizations often have information systems belonging to different computer generations. These systems contain much data that is valuable to the organizations concerned. However, the systems are often unable to communicate with each other due to incompatibilities, and replacing them with new systems is very costly. Therefore, the latest trend is to integrate the existing systems with each other with the help of different system integration technologies. When systems are integrated with new technology, they bring about various effects for the organizations concerned.
The purpose of this thesis is to find out how system integration affects an organization in the photo and home electronics branch, namely Expert. The questions raised in this thesis are how system integration affects the organization's work processes and how it affects the organization's employees. I have studied how system integration has affected the work processes and employees of the retail stores. In order to find answers to these questions, three qualitative interviews were carried out: one in the central organization and the rest in retail stores in Linköping.
There are many reasons that led Expert to adopt system integration. Some of the main ones are increased profitability and decreased costs for the maintenance and upgrading of different systems. Further, the retail stores required better information channelling and streamlining of work processes, in order to give salesmen at the retail stores the possibility to concentrate more on customers by minimising administrative work.
I have found that system integration has affected the organization's work processes and its employees both positively and negatively. System integration has helped Expert to decrease administrative work, given salesmen at the retail stores more time to deliver better service to customers, and automated key work processes, saving time and reducing redundancy of work. Even if the organization is quite satisfied with the benefits the existing system integration technologies have rendered, there are many more benefits that can be achieved.
Cars are becoming more technically advanced, and more ECUs are being developed, which results in increased safety and comfort and a lower environmental impact. This leads to complex work to test and verify that all the different ECUs function as intended in various situations. Vehicle diagnostics often requires third-party software that is often expensive. Syntronic AB is currently using software with much larger functionality than is needed to perform vehicle diagnostics, and much of the unnecessary functionality leads to unnecessarily long runtimes. By studying CAN and UDS and analyzing how they interact, I was able to create a software tool, developing it systematically with two interfaces connected to each computer, continuously testing the implementation against the theoretical basis, and finally testing the software in a vehicle. The created software was better suited to the needs of the company, and the more functionality-adapted software could perform the same diagnostics faster than the company's current software. The UDS services most used by the company could be implemented, and the created software makes it possible to add more UDS services without modifying the main program or its features.
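As an illustration of the kind of request involved, the sketch below sends UDS service 0x22 (ReadDataByIdentifier) as a single ISO-TP frame using the python-can library. The CAN IDs and channel are vehicle- and adapter-specific assumptions, and multi-frame responses (which require ISO-TP flow control) are omitted; the thesis's actual implementation differs.

```python
import can

# Illustrative request/response IDs for a single ECU (real IDs are
# vehicle-specific); classic 11-bit diagnostic addressing assumed.
REQUEST_ID, RESPONSE_ID = 0x7E0, 0x7E8

def read_data_by_identifier(bus: can.Bus, did: int) -> bytes:
    """Send UDS service 0x22 (ReadDataByIdentifier) in one ISO-TP single frame."""
    payload = [0x03, 0x22, (did >> 8) & 0xFF, did & 0xFF]  # length + SID + DID
    payload += [0x00] * (8 - len(payload))                 # pad to 8 bytes
    bus.send(can.Message(arbitration_id=REQUEST_ID, data=payload,
                         is_extended_id=False))
    while True:
        msg = bus.recv(timeout=1.0)
        if msg is None:
            raise TimeoutError("no diagnostic response")
        if msg.arbitration_id == RESPONSE_ID:
            return bytes(msg.data)  # 0x62 ... on positive response

bus = can.Bus(interface="socketcan", channel="can0")  # adapter-specific
vin_part = read_data_by_identifier(bus, 0xF190)        # DID 0xF190 = VIN
```

Keeping the service logic in small functions like this is one way to let new UDS services be added without touching the main program, as the thesis describes.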
Many embedded systems are complex, and it is often required that the firmware in these systems is updatable by the end-user. For economic and confidentiality reasons, it is important that these systems only accept firmware approved by the firmware producer.
This thesis work focuses on creating a security-enhanced firmware update procedure suitable for use in embedded systems. The common elements of embedded systems are described, and various algorithms are compared as candidates for firmware verification. Patents are used as a basis for the proposal of a security-enhanced update procedure. We also use attack trees to perform a threat analysis on an update procedure.
The results are a threat analysis of a home office router and the proposal of an update procedure. The update procedure only accepts approved firmware and prevents reversion to old, vulnerable firmware versions. The firmware verification is performed using the hash function SHA-224 and the digital signature algorithm RSA with a key length of 2048 bits. The selection of algorithms and key lengths mitigates the threat of brute-force and cryptanalysis attacks on the verification algorithms and is believed to be secure through 2030.
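A minimal sketch of the verification step, using the Python cryptography library. It assumes a detached RSA-2048/SHA-224 signature with PKCS#1 v1.5 padding (the padding scheme is an assumption; the thesis specifies only the hash and key length), plus a simple anti-rollback check in the spirit of the proposed procedure.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_firmware(image: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Accept the image only if the RSA-2048/SHA-224 signature checks out."""
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA224())
        return True
    except InvalidSignature:
        return False  # reject unapproved or tampered firmware

def accept_update(image, signature, new_version, installed_version, pubkey_pem):
    # Anti-rollback: never accept a version older than the installed one.
    return (new_version > installed_version
            and verify_firmware(image, signature, pubkey_pem))
```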
Requirements Engineering (RE) in Agile Software Development (ASD) is a challenge that many face, and several techniques exist for addressing it. One such technique is prototyping, where a model of a product is used to gather important information in software development. To describe how much a prototype resembles the product, the notion of fidelity is used. The aim of this study is to contribute to research regarding prototyping in ASD, and to examine the effect of a prototype's fidelity when using prototypes in discussions during RE. A case study is performed at the company Exsitec, where staff are interviewed regarding prototyping in software development. Thereafter, two prototypes of low and high fidelity are developed and used in interviews as a basis for discussion. Based on this study, the use of prototypes in software projects can help customers trust the process, improve communication with customers, and facilitate reaching consensus among different stakeholders. Furthermore, depending on how they are used, prototypes can contribute to understanding the big picture of the requirements and can also serve as documentation. The study also shows some, albeit subtle, differences in the information collected using prototypes with low and high fidelity. The use of a high fidelity prototype seems to generate more requirements, but makes interviewees less likely to come up with larger, more comprehensive requirement changes.
A trend seen on the web today is to create a platform where externally developed applications can run inside some kind of main application. This is often done by providing an API for accessing the data and business logic of your service and a sandbox environment in which third-party applications can run. By providing this, external developers are enabled to come up with new ideas based on your service. Some good examples of this are Spotify Apps, Apps on Facebook, and SalesForce.com.
Ipendo Systems AB is a company that develops a web platform for intellectual property. Currently, most things on this platform are developed by developers at Ipendo Systems AB, but some interest has arisen in enabling external developers to create applications that run, in some way, inside the main platform.
In this thesis, an analysis of existing solutions was carried out, namely Spotify Apps and Apps on Facebook. The two have different approaches to enabling third-party applications to run inside their own services. Facebook's solution builds mainly on iframe-embedded web pages, where data access is provided through a web API. Spotify, on the other hand, hosts the third-party applications itself, but the applications may only consist of HTML5, CSS3, and JavaScript.
In addition to the analysis, a prototype was developed. The purpose of the prototype was to show possible ways of enabling third-party applications to run inside one's own service. Two solutions showing this were developed. The first was based on Facebook's approach, with iframing of external web pages. The second was a slightly modified version of Spotify's solution, with only client-side code hosted by the main application. To safely embed the client-side code in the main application, a sandboxing tool for JavaScript called Caja was used.
Of the two versions implemented in the prototype, the iframe solution was considered more ready to be used in a production environment than Caja. Caja can be seen as an interesting technique for the future, but might not be ready for use today. The reason behind this conclusion was that Caja decreased the performance of the written JavaScript and added complexity when developing the third-party applications.
The Swedish language should be available to everyone who lives and works in Sweden. It is therefore important that easy-to-read alternatives exist for those who have difficulty reading Swedish text. This work builds on showing that it is possible to create an automatic rewriting program that makes texts easier to read. The work is based on CogFLUX, a tool for automatic rewriting into easy-to-read Swedish. CogFLUX contains functions for syntactically rewriting texts into more easily read Swedish. The rewrites are made using rewriting rules developed in an earlier project. In this work, additional rewriting rules are implemented, along with a new module for handling synonyms. With these new rules and the module, the work investigates whether it is possible to create a system that produces a more easily read text according to established readability measures such as LIX, OVIX, and nominal ratio. The rewriting rules and the synonym handler are tested on three different texts with a total length of about one hundred thousand words. The work shows that both the LIX value and the nominal ratio can be lowered significantly using the rewriting rules and the synonym handler. It also shows that more remains to be done to produce a really good program for automatic rewriting into easy-to-read Swedish.
Creating native mobile applications on multiple platforms generates a lot of duplicate code. This thesis has evaluated whether the code quality attribute modifiability improves when migrating to React Native. One Android codebase and one iOS codebase existed for an application, and a third codebase was developed with React Native. The measurements of the codebases were based on the SQMMA-model, and the metrics for the model were collected with static analyzers created specifically for this project. The results consist of graphs that show the modifiability of some specific components over time and graphs that show the stability of the platforms. These graphs show that when measuring code metrics on applications over time, it is better to do so on a large codebase that has been developed for some time. When calculating a modifiability value, both the sum of the metrics and the average value of the metrics across files should be used; the results indicate that the React Native platform is more stable than the native ones.
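A minimal sketch of combining per-file metrics into sum and average modifiability values, as described above; the metric names and weights are invented for illustration and are not the thesis's SQMMA-derived set.

```python
# Hypothetical metric names and weights; the thesis derives its actual set
# from the SQMMA-model, which maps code metrics to quality attributes.
METRIC_WEIGHTS = {"cyclomatic_complexity": 0.4,
                  "lines_of_code": 0.3,
                  "comment_ratio": 0.3}

def modifiability(files: list[dict]) -> dict:
    """Combine per-file metrics into a weighted sum and a per-file average."""
    totals = {m: sum(f[m] for f in files) for m in METRIC_WEIGHTS}
    weighted_sum = sum(w * totals[m] for m, w in METRIC_WEIGHTS.items())
    return {"sum": weighted_sum, "average": weighted_sum / len(files)}

files = [
    {"cyclomatic_complexity": 12, "lines_of_code": 240, "comment_ratio": 0.1},
    {"cyclomatic_complexity": 5,  "lines_of_code": 90,  "comment_ratio": 0.3},
]
print(modifiability(files))
```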
This report covers a bachelor's project carried out for the Department of Management and Engineering (Institutionen för ekonomisk och industriell utveckling). The project consisted of modernizing the existing program MODEST, which has been used to compute optimal energy systems. The modernization was done by creating the program Humble, whose functionality is based on MODEST.

The report describes how programs can be developed so that they are easy to use, and how they can be designed to enable further development. These aspects were identified by the customer as important for the project.

The group's approach to developing the program is explained, and an overview of the architecture used is given. The experiences the group gained during the project are described and reflected upon, covering technical experiences as well as experiences related to the project process. The group members' personal experiences related to their roles in the project are described in individually written sections.

Finally, the project and how the result was achieved are discussed, after which conclusions related to the research questions are drawn. These conclusions are that prototyping and usability testing are effective methods for creating programs that are easy to use, and that programs that apply clearly documented design patterns and have a modular structure enable further development.
Associations between heterozygosity and fitness traits have typically been investigated in populations characterized by low levels of inbreeding. We investigated the associations between standardized multilocus heterozygosity (stMLH) in mother trees (obtained from 12 nuclear microsatellite markers) and five fitness traits measured in progenies from an inbred Scots pine population. The traits studied were proportion of sound seed, mean seed weight, germination rate, mean family height of one-year-old seedlings under greenhouse conditions (GH), and mean family height of three-year-old seedlings under field conditions (FH). The relatively high average inbreeding coefficient (F) in the population under study corresponds to a mixture of trees with different levels of co-ancestry, potentially resulting from a recent bottleneck. We used both frequentist and Bayesian methods of polynomial regression to investigate the presence of linear and non-linear relations between stMLH and each of the fitness traits. No significant associations were found for any of the traits except for GH, which displayed a negative linear effect with stMLH. A negative heterozygosity-fitness correlation (HFC) for GH could potentially be explained by heterosis caused by the mating of two inbred mother trees (Lippman and Zamir 2006), or by outbreeding depression in the most heterozygous trees and its negative impact on the fitness of the progeny, while their simultaneous action is also possible (Lynch 1991). However, since this effect was not detected for FH, we cannot rule out that the greenhouse conditions introduce artificial effects that disappear under more realistic field conditions.
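The frequentist side of such an analysis can be sketched as polynomial regression of a trait on stMLH, here with fabricated data and the statsmodels library; the study's actual data and Bayesian models are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins: stMLH per mother tree and a fitness trait of its progeny.
rng = np.random.default_rng(1)
stmlh = rng.uniform(0.6, 1.4, size=60)
trait = 2.0 - 0.8 * stmlh + rng.normal(scale=0.3, size=60)  # fabricated

# Fit linear and quadratic models; a significant quadratic term would
# indicate a non-linear heterozygosity-fitness correlation.
X1 = sm.add_constant(stmlh)
X2 = sm.add_constant(np.column_stack([stmlh, stmlh ** 2]))
linear = sm.OLS(trait, X1).fit()
quadratic = sm.OLS(trait, X2).fit()
print(linear.params, linear.pvalues)        # test for a linear effect
print(quadratic.params, quadratic.pvalues)  # test for a non-linear effect
```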
Despite a constantly growing selection of front-end JavaScript frameworks, there is a lack of research to guide the choice of which one to use in a software project. Instead, the decision is generally based on experience and personal preferences within the team. The aim of this thesis is therefore to present a structured evaluation model to provide for more informed decisions. A preliminary study is carried out where the most important qualities of a framework are identified, both according to previous literature and to practitioners. The pre-study result is used to construct a structured model to assess framework performance for the identified qualities. Finally, a test of the model is carried out to see if it can guide the choice of framework in a specific project. The study shows that the design of the model does contribute with important insights on framework performance in prioritized quality areas and the trade-offs that this entails for other important qualities. Thus, the model provides necessary information to make well-founded decisions. Furthermore, it fills the gap in contemporary research by providing an understanding of what is important in a framework according to practitioners.
Students’ learning processes can be affected negatively by long waiting times for assistance during lesson and lab sessions. Studies show that digital queuing systems decrease the waiting time. Thus, the purpose of this report is to investigate how to design a web-based queuing application that achieves high perceived usability for students and tutors, especially with regard to navigability and design, which according to research in the area have a direct impact on usability. To achieve high perceived usability, the application was developed iteratively. In the first version, the implemented functionality was built upon the results of the feasibility study combined with research in the area. After a set of user evaluations, changes to the first version were implemented to further improve the perceived usability. Lastly, another set of evaluations was performed to confirm the improvement in the final version. The results showed that the first version of the system was rated 84 out of 100 on the System Usability Scale (SUS) and the final version 88 out of 100, an improvement of four units. Uniform design, no irrelevant functionality, placing buttons in conspicuous positions, and having confirmation steps for “dangerous actions” all seem to be factors contributing to navigability and desirability, and thus to the usability of a queuing application.
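For reference, the SUS scores mentioned are computed with the standard scoring rule (10 items on a 1-5 scale, odd items positively worded), as in this small sketch:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5, alternating positive/negative."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive-worded
    return total * 2.5  # scales the 0-40 raw range to 0-100

print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # example: 87.5
```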
Background: Stereotactic radiosurgery (SRS) can be an effective primary or adjuvant treatment option for intracranial tumors. However, it carries risks of various radiation toxicities, which can lead to functional deficits for the patients. Current inverse planning algorithms for SRS provide an efficient way for sparing organs at risk (OARs) by setting maximum radiation dose constraints in the treatment planning process. Purpose: We propose using activation maps from functional MRI (fMRI) to map the eloquent regions of the brain and define functional OARs (fOARs) for Gamma Knife SRS treatment planning. Methods: We implemented a pipeline for analyzing patient fMRI data, generating fOARs from the resulting activation maps, and loading them onto the GammaPlan treatment planning software. We used the Lightning inverse planner to generate multiple treatment plans from open MRI data of five subjects, and evaluated the effects of incorporating the proposed fOARs. Results: The Lightning optimizer designs treatment plans with high conformity to the specified parameters. Setting maximum dose constraints on fOARs successfully limits the radiation dose incident on them, but can have a negative impact on treatment plan quality metrics. By masking out fOAR voxels surrounding the tumor target it is possible to achieve high quality treatment plans while controlling the radiation dose on fOARs. Conclusions: The proposed method can effectively reduce the radiation dose incident on the eloquent brain areas during Gamma Knife SRS of brain tumors.
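A minimal sketch of one pipeline step, turning a thresholded activation map into a binary fOAR mask with nibabel; the file names and the z-threshold are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
import nibabel as nib

# Hypothetical file names; thresholds depend on the fMRI analysis performed.
activation = nib.load("task_zmap.nii.gz")
zvals = activation.get_fdata()

foar = (zvals > 3.1).astype(np.uint8)  # keep significantly activated voxels

# Mask out fOAR voxels inside the tumor target so the dose constraint
# does not conflict with the prescription dose (as proposed in the paper).
target = nib.load("tumor_target_mask.nii.gz").get_fdata() > 0
foar[target] = 0

nib.save(nib.Nifti1Image(foar, activation.affine), "foar_mask.nii.gz")
```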
Anonymization of medical images is necessary for protecting the identity of the test subjects, and is therefore an essential step in data sharing. However, recent developments in deep learning may raise the bar on the amount of distortion that needs to be applied to guarantee anonymity. To test such possibilities, we have applied the novel CycleGAN unsupervised image-to-image translation framework on sagittal slices of T1 MR images, in order to reconstruct facial features from anonymized data. We applied the CycleGAN framework on both face-blurred and face-removed images. Our results show that face blurring may not provide adequate protection against malicious attempts at identifying the subjects, while face removal provides more robust anonymization, but is still partially reversible.
Brain activation mapping using functional magnetic resonance imaging (fMRI) has been extensively studied in brain gray matter (GM), whereas it has been largely disregarded for probing white matter (WM). This unbalanced treatment has been in part due to controversies regarding the nature of the blood oxygenation level-dependent (BOLD) contrast in WM and its detectability. However, an accumulating body of studies has provided solid evidence of the functional significance of the BOLD signal in WM and has revealed that it exhibits anisotropic spatio-temporal correlations and structure-specific fluctuations concomitant with those of the cortical BOLD signal. In this work, we present an anisotropic spatial filtering scheme for smoothing fMRI data in WM that accounts for known spatial constraints on the BOLD signal in WM. In particular, the spatial correlation structure of the BOLD signal in WM is highly anisotropic and closely linked to local axonal structure in terms of shape and orientation, suggesting that the isotropic Gaussian filters conventionally used for smoothing fMRI data are inadequate for denoising the BOLD signal in WM. The fundamental element in the proposed method is a graph-based description of WM that encodes the underlying anisotropy observed across WM, derived from diffusion-weighted MRI data. Based on this representation, and leveraging graph signal processing principles, we design subject-specific spatial filters that adapt to a subject’s unique WM structure at each position where they are applied. We use the proposed filters to spatially smooth fMRI data in WM, as an alternative to the conventional practice of using isotropic Gaussian filters. We test the proposed filtering approach on two sets of simulated phantoms, showcasing its greater sensitivity and specificity for the detection of slender anisotropic activations, compared to that achieved with isotropic Gaussian filters. We also present WM activation mapping results on the Human Connectome Project’s 100-unrelated-subject dataset, across seven functional tasks, showing that the proposed method enables the detection of streamline-like activations within axonal bundles.
Brain activation mapping using functional MRI (fMRI) based on blood oxygenation level-dependent (BOLD) contrast has been conventionally focused on probing gray matter, the BOLD contrast in white matter having been generally disregarded. Recent results have provided evidence of the functional significance of the white matter BOLD signal, showing at the same time that its correlation structure is highly anisotropic, and related to the diffusion tensor in shape and orientation. This evidence suggests that conventional isotropic Gaussian filters are inadequate for denoising white matter fMRI data, since they are incapable of adapting to the complex anisotropic domain of white matter axonal connections. In this paper we explore a graph-based description of the white matter developed from diffusion MRI data, which is capable of encoding the anisotropy of the domain. Based on this representation we design localized spatial filters that adapt to white matter structure by leveraging graph signal processing principles. The performance of the proposed filtering technique is evaluated on semi-synthetic data, where it shows potential for greater sensitivity and specificity in white matter activation mapping, compared to isotropic filtering.
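As a simplified stand-in for the graph filters designed in these papers, the sketch below performs heat-diffusion smoothing on a voxel graph whose edge weights would encode diffusion-derived axonal coherence between neighboring voxels. The actual methods design spectral filters via graph signal processing, which this sketch does not reproduce; it only illustrates how a graph, rather than an isotropic kernel, steers the smoothing.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian

def graph_smooth(signal, weights, tau=0.1, iterations=10):
    """Heat-diffusion smoothing on a voxel graph: x <- x - tau * L x.

    `weights` is a symmetric adjacency matrix whose edge weights encode
    diffusion-derived axonal coherence between neighboring voxels.
    """
    L = laplacian(csr_matrix(weights), normed=True)
    x = signal.astype(float).copy()
    for _ in range(iterations):
        x = x - tau * (L @ x)
    return x

# Toy 4-voxel chain with one strong (coherent) edge and two weak ones:
# smoothing spreads the signal along the strong edge, mimicking anisotropy.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 0.1],
              [0.0, 0.0, 0.1, 0.0]])
print(graph_smooth(np.array([0.0, 1.0, 0.0, 0.0]), W))
```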
Existing Bayesian spatial priors for functional magnetic resonance imaging (fMRI) data correspond to stationary isotropic smoothing filters that may oversmooth at anatomical boundaries. We propose two anatomically informed Bayesian spatial models for fMRI data with local smoothing in each voxel based on a tensor field estimated from a T1-weighted anatomical image. We show that our anatomically informed Bayesian spatial models result in posterior probability maps that follow the anatomical structure.
Software prototyping is considered to be one of the most important tools used by software engineers nowadays to understand the customer's requirements and develop software products that are efficient, reliable, and economically acceptable. Software engineers can choose any of the available prototyping approaches, based on the software they intend to develop and how fast they would like to proceed during development. Generally speaking, all prototyping approaches aim to help engineers understand the customer's true needs and examine different software solutions, quality aspects, and verification activities that might affect the quality of the software under development, as well as to avoid potential development risks. A combination of several prototyping approaches and brainstorming techniques, which fulfilled the aim of the knowledge extraction approach, resulted in a prototyping approach in which the engineers develop one, and only one, throwaway prototype to extract more knowledge than expected, in order to improve the quality of the software under development by spending more time studying it from different points of view. The knowledge extraction approach was then applied to the developed prototyping approach, with the developed model treated as a software prototype, in order to gain more knowledge out of it. This activity resulted in several points of view and improvements that were implemented in the developed model, and as a result Agile Prototyping (AP) was developed. AP integrates further development approaches into the first prototyping model, such as agile methods, documentation, software configuration management, and fractional factorial design. The main aim of developing one, and only one, prototype to help the engineers gain more knowledge while reducing the effort, time, and cost of development was thereby accomplished, while software products of satisfying quality are still achieved by developing an evolutionary prototype and building throwaway prototypes on top of it.
Bangladesh geographically comprises one of the largest delta landscapes in the world. Almost 6.7% of the country's total area (147,570 sq km) is covered by rivers and inland water bodies. These water bodies, being rich in fish production, meet the majority of the demand for protein. Bangladesh produces the world's fourth largest quantity of fish, collected from its inland water bodies. Though shrimps have been easily available in the inland water bodies for hundreds of years, shrimp culture as an export-oriented activity is a phenomenon of the recent past. Bangladesh earned US$ 2.9 million by exporting shrimp in 1972-73, which was 1% of the country's total exports. This increased to US$ 33 million in 1980 and to US$ 90.0 million in 1985. Until the mid-1980s, however, shrimp culture was principally dependent on open-water catches of shrimp, meaning that shrimps were not at that time cultivated in a properly planned way. Cultivation of shrimp entirely for export started after the mid-1980s. Since then, the professional cultivation of shrimp has had a very positive and effective impact on the economy of Bangladesh. Exports of shrimp from Bangladesh increased from US$ 91 million in FY (Fiscal Year) 1986 to US$ 280 million in FY 1997. During the corresponding period, the quantity of shrimp exports increased from 17.2 thousand tonnes to 25.2 thousand tonnes. As most of the shrimp farms have been developed without considering the sustainability of the environment and other factors, such as water pH and salinity, soil pH and salinity, and soil texture, the farmers are getting less return while affecting the environment most. Geographical Information System (GIS) can offer an easier but effective solution here by selecting the most suitable sites for shrimp cultivation. Moreover, GIS can solve the problem of transporting this perishable product from the production area to the harbor or airport through transport route selection. This can save a lot of money and time and consequently make shrimp cultivation more economic.
Recently, optimization problems have been revisited in many domains, and powerful search methods are needed to address them. In this paper, a novel hybrid optimization algorithm called IPDOA is proposed to solve various benchmark functions. The proposed method is based on enhancing the search process of the Prairie Dog Optimization Algorithm (PDOA) by using the primary updating mechanism of the Dwarf Mongoose Optimization Algorithm (DMOA). The main aim of IPDOA is to avoid the main weaknesses of the original methods: poor convergence, an imbalanced search process, and premature convergence. Experiments are conducted on 23 standard benchmark functions, and the results are compared with those of similar methods from the literature. The results, recorded in terms of best, worst, and average fitness, show that the proposed method deals with various problems better than the other methods.
A prerequisite for improving road safety is reliable and consistent sources of information about traffic and accidents, which help assess the prevailing situation and give a good indication of accident severity. In many countries there is under-reporting of road accidents, deaths, and injuries, no collection of data at all, or low quality of information. Potential knowledge remains hidden due to the large accumulation of traffic and accident data. This limits the investigative tasks of road safety experts and thus decreases the utilization of databases. All these factors can have serious effects on the analysis of the road safety situation, as well as on the results of the analyses.
This dissertation presents a three-tiered conceptual model to support the sharing of road safety–related information and a set of applications and analysis tools. The overall aim of the research is to build and maintain an information-sharing platform, and to construct mechanisms that can support road safety professionals and researchers in their efforts to prevent road accidents. GLOBESAFE is a platform for information sharing among road safety organizations in different countries developed during this research.
Several approaches were used. First, requirement elicitation methods were used to identify the exact requirements of the platform. This helped in developing a conceptual model, a common vocabulary, a set of applications, and various access modes to the system. The implementation of the requirements was based on iterative prototyping. Usability methods were introduced to evaluate the users’ interaction satisfaction with the system and the various tools. Second, a system-thinking approach and a technology acceptance model were used in the study of the Swedish traffic data acquisition system. Finally, visual data mining methods were introduced as a novel approach to discovering hidden knowledge and relationships in road traffic and accident databases. The results from these studies have been reported in several scientific articles.
Road accident statistics are collected and used by a large number of users, and this can result in a huge volume of data that needs to be explored in order to uncover hidden knowledge. Potential knowledge may remain hidden because of the accumulation of data, which limits the exploration task for the road safety expert and, hence, reduces the utilization of the database. In order to assist in solving these problems, this paper explores Automatic and Visual Data Mining (VDM) methods. The main purpose is to study VDM methods and their applicability to knowledge discovery in road accident databases. The basic feature of VDM is to involve the user in the exploration process. VDM uses direct interactive methods to allow the user to obtain an insight into, and recognize different patterns in, the dataset. In this paper, I apply a range of methods and techniques, including a paradigm for VDM, exploratory data analysis, and clustering methods, such as K-means algorithms, hierarchical agglomerative clustering (HAC), classification trees, and self-organizing maps (SOM). These methods assist in integrating VDM with automatic data mining algorithms. Open source VDM tools offering visualization techniques were used. The first contribution of this paper lies in the area of discovering clusters and different relationships (such as the relationship between socioeconomic indicators and fatalities, traffic risk and population, personal risk and cars per capita, etc.) in the road safety database. The methods used were very useful and valuable for detecting clusters of countries that share similar traffic situations. The second contribution was the exploratory data analysis, where the user can explore the contents and the structure of the data set at an early stage of the analysis. This is supported by the filtering components of VDM, and assists expert users with a strong background in traffic safety analysis in formulating assumptions and hypotheses concerning future situations. The third contribution involved interactive explorations based on brushing and linking methods; this novel approach assists both experienced and inexperienced users in detecting and recognizing interesting patterns in the available database. The results obtained showed that this approach offers a better understanding of the contents of road safety databases compared with the statistical techniques and approaches currently used for analyzing road safety situations.
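As a minimal illustration of the clustering step, the sketch below groups countries by fabricated safety indicators with K-means from scikit-learn; the real analysis draws on actual road safety databases and combines several additional methods (HAC, classification trees, SOM) with interactive visualization.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Fabricated country-level indicators: [fatalities per 100k inhabitants,
# cars per capita, GDP per capita in k$]; real analyses draw such values
# from road safety databases.
X = np.array([[5.2, 0.52, 48], [6.1, 0.48, 44], [19.8, 0.21, 9],
              [22.4, 0.18, 7], [11.3, 0.35, 21], [12.0, 0.31, 19]])

X_scaled = StandardScaler().fit_transform(X)  # put indicators on comparable scales
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
print(km.labels_)  # groups of countries sharing similar traffic situations
```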