Digitala Vetenskapliga Arkivet

301 - 350 of 6272
  • 301.
    Altarabichi, Mohammed Ghaith
    Halmstad University, School of Information Technology.
    Evolving intelligence: Overcoming challenges for Evolutionary Deep Learning (2024). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Deep Learning (DL) has achieved remarkable results in both academic and industrial fields over the last few years. However, DL models are often hard to design and require proper selection of features and tuning of hyper-parameters to achieve high performance. These selections are tedious for human experts and require substantial time and resources. This difficulty has encouraged a growing number of researchers to use Evolutionary Computation (EC) algorithms to optimize Deep Neural Networks (DNNs), a research branch called Evolutionary Deep Learning (EDL).

    This thesis is a two-fold exploration within the domains of EDL and, more broadly, Evolutionary Machine Learning (EML). The first goal is to make EDL/EML algorithms more practical by reducing the high computational cost associated with EC methods. In particular, we have proposed methods to alleviate the computation burden using approximate models. We show that surrogate models can speed up EC methods by three times without compromising the quality of the final solutions. Our surrogate-assisted approach allows EC methods to scale better for both expensive learning algorithms and large datasets with over 100K instances. Our second objective is to leverage EC methods for advancing our understanding of Deep Neural Network (DNN) design. We identify a knowledge gap in DL algorithms and introduce an EC algorithm precisely designed to optimize this uncharted aspect of DL design. Our analytical focus revolves around revealing avant-garde concepts and acquiring novel insights. In our study of randomness techniques in DNNs, we offer insights into the design and training of more robust and generalizable neural networks. In another study, we also propose a novel survival regression loss function discovered through evolutionary search.

  • 302.
    Altarabichi, Mohammed Ghaith
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Supervised Learning for Road Junctions Identification using IMU (2019). In: First International Conference on Advances in Signal Processing and Artificial Intelligence ASPAI' 2019, 2019. Conference paper (Refereed)
  • 303.
    Altarabichi, Mohammed Ghaith
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ciceri, Maria Rita
    Università Cattolica del Sacro Cuore di Milano, Italy.
    Balzarotti, Stefania
    Università Cattolica del Sacro Cuore di Milano, Italy.
    Biassoni, Federica
    Università Cattolica del Sacro Cuore di Milano, Italy.
    Lombardi, Debora
    Università Cattolica del Sacro Cuore di Milano, Italy.
    Perego, Paolo
    Università Cattolica del Sacro Cuore di Milano, Italy.
    Reaction Time Variability Association with Unsafe Driving (2020). In: Transport Research Arena TRA2020, Helsinki, Finland, 2020. Conference paper (Refereed)
    Abstract [en]

    This paper investigates several human factors, including visual field, reaction speed, driving behavior, and personality traits, based on the results of a cognitive assessment test targeting drivers in a Naturalistic Driving Study (NDS). The frequency of being involved in a near-miss event (fnm) and the frequency of committing a traffic violation (ftv) are defined as indexes of safe driving in this work. Inference of association shows a statistically significant correlation between the standard deviation of reaction time (σRT) and both safe driving indexes, fnm and ftv. Causal relationship analysis excludes age as a confounding factor, as variations in behavioral responses are observed in both younger and older drivers in this study.

  • 304.
    Altarabichi, Mohammed Ghaith
    et al.
    Halmstad University, School of Information Technology.
    Alabdallah, Abdallah
    Halmstad University, School of Information Technology.
    Pashami, Sepideh
    Halmstad University, School of Information Technology.
    Ohlsson, Mattias
    Halmstad University, School of Information Technology.
    Rögnvaldsson, Thorsteinn
    Halmstad University, School of Information Technology.
    Nowaczyk, Sławomir
    Halmstad University, School of Information Technology.
    Improving Concordance Index in Regression-based Survival Analysis: Discovery of Loss Function for Neural Networks (2024). Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work, we use an Evolutionary Algorithm (EA) to discover a novel Neural Network (NN) regression-based survival loss function with the aim of improving C-index performance. Our contribution is threefold. First, we propose SAGA_loss, an evolutionary meta-learning algorithm for optimizing a neural-network regression-based loss function that maximizes the C-index; our algorithm consistently discovers specialized loss functions that outperform MSCE. Second, based on our analysis of the evolutionary search results, we highlight a non-intuitive insight: the importance of a non-zero gradient in the censored-cases part of the loss function, a property shown to be useful in improving concordance. Finally, based on this insight, we propose MSCE_Sp, a novel survival regression loss function that can be used off the shelf and generally performs better than the mean squared error for censored cases. We performed extensive experiments on 19 benchmark datasets to validate our findings.
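The abstract does not give the formula for the proposed loss, so the following is only a hedged illustration of the stated insight (a non-zero gradient for censored cases); the function name and the slope parameter `eps` are hypothetical, not the authors' definition:

```python
import numpy as np

def censored_regression_loss(pred, time, event, eps=0.1):
    """Illustrative survival regression loss (NOT the paper's loss).

    Events (event == 1): plain squared error against the observed time.
    Censored (event == 0): squared error only when the prediction falls
    below the censoring time, plus a small linear term so the gradient
    never vanishes for censored cases -- the property the abstract
    highlights as important for concordance.
    """
    pred, time, event = map(np.asarray, (pred, time, event))
    event_term = event * (pred - time) ** 2
    under = np.maximum(time - pred, 0.0)  # penalise predicting earlier than censoring
    censored_term = (1 - event) * (under ** 2 + eps * np.abs(pred - time))
    return float(np.mean(event_term + censored_term))
```

Because of the `eps` term, a censored sample still contributes gradient even when the prediction already exceeds its censoring time, unlike a plain mean squared error on censored cases.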

  • 305.
    Altarabichi, Mohammed Ghaith
    et al.
    Halmstad University, School of Information Technology.
    Nowaczyk, Sławomir
    Halmstad University, School of Information Technology.
    Pashami, Sepideh
    Halmstad University, School of Information Technology.
    Sheikholharam Mashhadi, Peyman
    Halmstad University, School of Information Technology.
    Handl, Julia
    University of Manchester, Manchester, United Kingdom.
    Rolling the Dice for Better Deep Learning Performance: A Study of Randomness Techniques in Deep Neural Networks (2024). Manuscript (preprint) (Other (popular science, discussion, etc.))
    Abstract [en]

    This paper presents a comprehensive empirical investigation into the interactions between various randomness techniques in Deep Neural Networks (DNNs) and how they contribute to network performance. It is well-established that injecting randomness into the training process of DNNs, through various approaches at different stages, is often beneficial for reducing overfitting and improving generalization. However, the interactions between randomness techniques such as weight noise, dropout, and many others remain poorly understood. Consequently, it is challenging to determine which methods can be effectively combined to optimize DNN performance. To address this issue, we categorize the existing randomness techniques into four key types: data, model, optimization, and learning. We use this classification to identify gaps in the current coverage of potential mechanisms for the introduction of noise, leading to proposing two new techniques: adding noise to the loss function and random masking of the gradient updates.

    In our empirical study, we employ a Particle Swarm Optimizer (PSO) to explore the space of possible configurations and answer where, and how much, randomness should be injected to maximize DNN performance. We assess the impact of various types and levels of randomness for DNN architectures applied to standard computer vision benchmarks: MNIST, FASHION-MNIST, CIFAR10, and CIFAR100. Across more than 30,000 evaluated configurations, we perform a detailed examination of the interactions between randomness techniques and their combined impact on DNN performance. Our findings reveal that randomness in data augmentation and in weight initialization are the main contributors to performance improvement. Additionally, correlation analysis demonstrates that different optimizers, such as Adam and Gradient Descent with Momentum, prefer distinct types of randomization during the training process. A GitHub repository with the complete implementation and generated dataset is available at https://github.com/Ghaith81/Radnomness_in_Neural_Network.
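The abstract only names the two newly proposed techniques; as one hedged reading of "random masking of the gradient updates", a sketch along these lines is plausible (the function, the keep-probability `p`, and the toy update are assumptions, not the authors' implementation):

```python
import numpy as np

def mask_gradients(grads, p=0.9, rng=None):
    """Randomly zero out a fraction (1 - p) of gradient components
    before the update step -- a sketch of the 'random masking of the
    gradient updates' idea named in the abstract, not the paper's code."""
    rng = rng or np.random.default_rng(0)
    return [g * (rng.random(g.shape) < p) for g in grads]

# Toy usage: apply a masked SGD step to one weight matrix.
w = np.ones((2, 2))
g = np.full((2, 2), 0.5)
(masked_g,) = mask_gradients([g], p=0.5)
w -= 0.1 * masked_g  # surviving components move, masked ones stay put
```

Each surviving component keeps its full gradient value, so this injects noise into the optimization step itself rather than into the data or the weights.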

  • 306.
    Altayo Gonzalez, u1dr0yqp
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Stathis, Dimitrios
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Hemani, Ahmed
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    Synthesis of Predictable Global NoC by Abutment in Synchoros VLSI Design (2021). In: Proceedings - 2021 15th IEEE/ACM International Symposium on Networks-on-Chip, NOCS 2021, Association for Computing Machinery (ACM), 2021, p. 61-66. Conference paper (Refereed)
    Abstract [en]

    Synchoros VLSI design style has been proposed as an alternative to the standard cell-based design style; the word synchoros is derived from the Greek word choros, for space. Synchoricity discretises space with a virtual grid, the way synchronicity discretises time with clock ticks. SiLago (Silicon Lego) blocks are atomic synchoros building blocks, like Lego bricks. SiLago blocks absorb all metal-layer details, i.e., all wires, to enable composition by abutment of valid designs: valid in the sense of being technology design-rule compliant, timing clean, and OCV ruggedized. Effectively, composition by abutment eliminates logic and physical synthesis for the end user. Like the Lego system, synchoricity needs only a finite number of SiLago block types to cater to different types of designs. Global NoCs are important system-level design components. In this paper, we show how, with a small library of SiLago blocks for global NoCs, it is possible to automatically synthesize arbitrary global NoCs of different types, dimensions, and topologies. The synthesized global NoCs are not only valid VLSI designs, but their cost metrics (area, latency, and energy) are known with post-layout accuracy in linear time. We argue that this is essential for chip-level design space exploration. We show how the abstract timing model of such global NoC SiLago blocks can be built and used to analyse the timing of global NoC links with post-layout accuracy in linear time. We validate this claim by subjecting the same VLSI designs of global NoCs to a commercial EDA tool's static timing analysis and show that the abstract timing analysis enabled by synchoros VLSI design gives the same results as the commercial EDA tools.

  • 307.
    Altenbernd, Peter
    et al.
    University of Applied Sciences Darmstadt, Germany.
    Ermedahl, Andreas
    Mälardalen University, School of Innovation, Design and Engineering.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering.
    Gustafsson, Jan
    Mälardalen University, School of Innovation, Design and Engineering.
    Automatic Generation of Timing Models for Timing Analysis of High-Level Code (2011). In: 19th International Conference on Real-Time and Network Systems (RTNS2011), 2011. Conference paper (Refereed)
    Abstract [en]

    Traditional timing analysis is applied only in the late stages of embedded system software development, when the hardware is available and the code is compiled and linked. However, preliminary timing estimates are often needed in early stages of system development, both for hard and soft real-time systems. If the hardware is not yet fully accessible, or the code is not yet ready to compile or link, then the timing estimation must be done for the source code rather than for the binary. This paper describes how source-level timing models can be derived automatically for given combinations of hardware architecture and compiler. The models are identified from measured execution times for a set of synthetic "training programs" compiled for the hardware platform in question. The models can be used to derive source-level WCET estimates, as well as to estimate the execution times of single program runs. Our experiments indicate that the models can predict the execution times of the final, compiled code with a deviation of up to 20%.
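Identifying a timing model from measured execution times of training programs resembles a linear least-squares fit of per-construct costs; the following toy sketch assumes that framing, with all construct counts, costs, and timings invented for illustration:

```python
import numpy as np

# Rows: synthetic training programs; columns: counts of source-level
# constructs (e.g. additions, multiplications, loads) -- all hypothetical.
counts = np.array([[100, 20, 30],
                   [ 50, 80, 10],
                   [ 10,  5, 90],
                   [ 60, 60, 60]], dtype=float)
true_costs = np.array([1.0, 3.0, 2.0])   # cycles per construct (toy ground truth)
measured = counts @ true_costs           # stand-in for measured runtimes

# Identify the timing model: least-squares cost per construct.
costs, *_ = np.linalg.lstsq(counts, measured, rcond=None)

# Predict a new program's runtime from its source-level counts alone,
# without compiling or running it.
predicted = np.array([30.0, 10.0, 20.0]) @ costs
```

With real measurements the system is noisy and over-determined, so the fitted costs only approximate the hardware/compiler behaviour, which is consistent with the deviation the abstract reports.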

  • 308.
    Altenbernd, Peter
    et al.
    University of Applied Sciences, Germany.
    Gustafsson, Jan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Stappert, Friedhelm
    Siemens VDO Automotive AG, Germany.
    Early execution time-estimation through automatically generated timing models (2016). In: Real-time systems, ISSN 0922-6443, E-ISSN 1573-1383, Vol. 52, no 6, p. 731-760. Article in journal (Refereed)
    Abstract [en]

    Traditional timing analysis, such as worst-case execution time analysis, is normally applied only in the late stages of embedded system software development, when the hardware is available and the code is compiled and linked. However, preliminary timing estimates are often needed in early stages of system development as an essential prerequisite for the configuration of the hardware setup and dimensioning of the system. During this phase the hardware is often not available, and the code might not be ready to link. This article describes an approach to predict the execution time of software through an early, source-level timing analysis. A timing model for source code is automatically derived from a given combination of hardware architecture and compiler. The model is identified from measured execution times for a set of synthetic training programs, compiled for the hardware platform in question. It can be used to estimate the execution time for code running on the platform: the estimation is then done directly from the source code, without compiling and running it. Our experiments show that, using this model, we can predict the execution times of the final, compiled code surprisingly well. For instance, we achieve an average deviation of 8 % for a set of benchmark programs for the ARM7 architecture.

  • 309.
    Al-Trad, Anas
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Optimized Composition of Parallel Components on a Linux Cluster (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis
    Abstract [en]

    We develop a novel framework for optimized composition of explicitly parallel software components with different implementation variants given the problem size, data distribution scheme and processor group size on a Linux cluster. We consider two approaches (or two cases of the framework). 

    In the first approach, dispatch tables are built using measurement data obtained offline by executions for some (sample) points in the ranges of the context properties. Inter-/extrapolation is then used to perform the actual variant selection for a given execution context at run-time.

    In the second approach, a cost function for each component variant is provided by the component writer for variant selection. These cost functions can internally look up measurement tables, built either offline or at deployment time, for computation- and communication-specific primitives.

    In both approaches, the call to an explicitly parallel software component (with different implementation variants) is made via a dispatcher instead of calling a variant directly.

    As a case study, we apply both approaches to a parallel component for matrix multiplication with multiple implementation variants. We implemented our variants using the Message Passing Interface (MPI). The results show the reduction in execution time for the optimally composed applications compared to applications with hard-coded composition. In addition, the results compare estimated and measured times for each variant using different data distributions, processor group sizes, and problem sizes.
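The first approach, a dispatch table built from offline measurements plus inter-/extrapolation at call time, can be sketched as follows; the variant names, measured numbers, and one-dimensional context (problem size only) are simplifications for illustration:

```python
import bisect

# Hypothetical offline measurements: problem size -> seconds per variant.
table = {
    "blocked":   {256: 0.02, 1024: 0.9, 4096: 40.0},
    "recursive": {256: 0.05, 1024: 0.7, 4096: 25.0},
}

def predict(times, n):
    """Linearly inter-/extrapolate the measured time for problem size n."""
    xs = sorted(times)
    i = min(max(bisect.bisect_left(xs, n), 1), len(xs) - 1)
    x0, x1 = xs[i - 1], xs[i]
    t0, t1 = times[x0], times[x1]
    return t0 + (t1 - t0) * (n - x0) / (x1 - x0)

def dispatch(n):
    """Pick the variant with the smallest predicted time for this context,
    instead of calling a variant directly."""
    return min(table, key=lambda v: predict(table[v], n))
```

A real dispatcher would key the table on the full execution context (data distribution scheme and processor group size as well), but the selection logic stays the same.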

  • 310.
    Alvarez, Ines
    et al.
    Univ Balearic Isl UIB, Dept Math & Informat, Palma De Mallorca 07122, Spain..
    Moutinho, Luis
    Inst Telecomunicacoes IT, P-3810193 Aveiro, Portugal.;Escola Super Tecnol & Gestao Agueda ESTGA, P-3750127 Agueda, Portugal..
    Pedreiras, Paulo
    Inst Telecomunicacoes IT, P-3810193 Aveiro, Portugal.;Univ Aveiro UA, Dept Elect Telecommun & Informat, P-3810193 Aveiro, Portugal..
    Bujosa Mateu, Daniel
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Proenza, Julian
    Univ Balearic Isl UIB, Dept Math & Informat, Palma De Mallorca 07122, Spain..
    Almeida, Luis
    Univ Porto FEUP, Fac Engn, Elect & Comp Engn Dept, P-4200465 Porto, Portugal.;Res Ctr Real Time & Embedded Comp Syst CISTER, P-4249015 Porto, Portugal..
    Comparing Admission Control Architectures for Real-Time Ethernet (2020). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 136260-136260. Article in journal (Refereed)
    Abstract [en]

    Industry 4.0 and Autonomous Driving are emerging resource-intensive distributed application domains that deal with open and evolving environments. These systems are subject to stringent resource, timing, and other non-functional constraints, as well as frequent reconfiguration. Thus, real-time behavior must not preclude operational flexibility. This combination is motivating ongoing efforts within the Time Sensitive Networking (TSN) standardization committee to define admission control mechanisms for Ethernet. Existing mechanisms in TSN, like those of its predecessor AVB, follow a distributed architecture that favors scalability. Conversely, the new mechanisms envisaged for TSN (IEEE 802.1Qcc) follow a (partially) centralized architecture, favoring short reconfiguration latency. This paper presents the first quantitative comparison between distributed and centralized admission control architectures concerning reconfiguration latency. Here, we compare AVB against a dynamic real-time reconfigurable Ethernet technology with centralized management, namely HaRTES. Our experiments show a significantly lower latency using the centralized architecture. We also observe the dependence of the distributed architecture on the end nodes' performance and the benefit of having a protected channel for the admission control transactions.

  • 311.
    Alvarez Vadillo, Ines
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Servera, Andreu
    Universitat de les Illes Balears, Balears, Spain.
    Proenza, Julian
    Universitat de les Illes Balears, Balears, Spain.
    Ashjaei, Seyed Mohammad Hossein
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Implementing a First CNC for Scheduling and Configuring TSN Networks (2022). In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, Institute of Electrical and Electronics Engineers Inc., 2022, Vol. 2022-September. Conference paper (Refereed)
    Abstract [en]

    Novel industrial applications are leading to important changes in industrial systems. One of the most important changes is the need for systems that are capable of adapting to changes in the environment or the system itself. Because of their nature, many of these applications are distributed, and their network infrastructure is key to guaranteeing the correct operation of the overall system. Furthermore, in order for a distributed system to be able to adapt, its network must be flexible enough to support changes in the traffic during runtime. The Time-Sensitive Networking (TSN) Task Group has proposed a series of standards that aim at providing deterministic real-time communications over Ethernet. TSN also provides centralised online configuration and control architectures, which enable the online configuration of the network. A key part of TSN's centralised architectures is the Centralised Network Configuration element (CNC). In this work we present a first implementation of a CNC capable of scheduling time-triggered traffic and deploying the resulting configuration in the network using the Network Configuration (NETCONF) protocol. We also assess the correctness of our implementation using an industrial use case provided by Volvo Construction Equipment.

  • 312.
    Alvaro, Alexandre
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Land, Rikard
    Mälardalen University, Department of Computer Science and Electronics.
    Crnkovic, Ivica
    Mälardalen University, Department of Computer Science and Electronics.
    Software Component Evaluation: A Theoretical Study on Component Selection and Certification (2007). Report (Other academic)
    Abstract [en]

    Software components need to be evaluated at several points during their life cycle, by different actors and for different purposes. Besides the quality assurance performed by component developers, there are two main activities which include evaluation of components: component selection (i.e., evaluation performed by the system developer in order to select the component that best fits a system) and an envisioned component certification (i.e., evaluation made by an independent actor in order to increase trust in the component). This paper examines the fundamental similarities and differences between these two types of component evaluation and elaborates on how they fit into the overall process views of component-based development, for both COTS-based development and software product line development.

  • 313.
    Alveflo, Victor
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Virtual Training Tool: Mjukvarubaserat utbildningsverktyg (Software-based training tool) (2014). Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    An important task for companies that manufacture industrial machines is to provide training for the machines' end-users, such as daily machine operators and technicians, in order to give them a good understanding of how the machine concerned works. In such training it is very important that the training material provides a high level of user-friendliness.

    The goal of this thesis is to improve the user-friendliness of an existing training tool for a specific training course from the project's client. The training course to be improved currently uses a hardware-based simulator to manually simulate processes on a certain machine.

    The result was a solution in the shape of a software-based simulator with an associated graphical user interface. The user can thus, under safe circumstances, simulate the machine's behavior through a PC program and, e.g., create emergency situations without putting the user's safety in danger.

  • 314.
    Alves, Ricardo
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems. 2111 NE 25th Ave, Hillsboro, OR 97124 USA..
    Kaxiras, Stefanos
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Architecture and Computer Communication. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems.
    Black-Schaffer, David
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Architecture and Computer Communication.
    Early Address Prediction: Efficient Pipeline Prefetch and Reuse (2021). In: ACM Transactions on Architecture and Code Optimization (TACO), ISSN 1544-3566, E-ISSN 1544-3973, Vol. 18, no 3, article id 39. Article in journal (Refereed)
    Abstract [en]

    Achieving low load-to-use latency with low energy and storage overheads is critical for performance. Existing techniques either prefetch into the pipeline (via address prediction and validation) or provide data reuse in the pipeline (via register sharing or L0 caches). These techniques provide a range of tradeoffs between latency, reuse, and overhead. In this work, we present a pipeline prefetching technique that achieves state-of-the-art performance and data reuse without additional data storage, data movement, or validation overheads by adding address tags to the register file. Our addition of register file tags allows us to forward (reuse) load data from the register file with no additional data movement, keep the data alive in the register file beyond the instruction's lifetime to increase temporal reuse, and coalesce prefetch requests to achieve spatial reuse. Further, we show that we can use the existing memory order violation detection hardware to validate prefetches and data forwards without additional overhead. Our design achieves the performance of existing pipeline prefetching while also forwarding 32% of the loads from the register file (compared to 15% in state-of-the-art register sharing), delivering a 16% reduction in L1 dynamic energy (1.6% total processor energy), with an area overhead of less than 0.5%.

  • 315.
    Alvila, Markus
    et al.
    Linköping University, Department of Computer and Information Science.
    Johansson, Jonathan
    Linköping University, Department of Computer and Information Science.
    Johansson, Philip
    Linköping University, Department of Computer and Information Science.
    Lenz, Silas
    Linköping University, Department of Computer and Information Science.
    Lindmark, Sebastian
    Linköping University, Department of Computer and Information Science.
    Norberg, Emil
    Linköping University, Department of Computer and Information Science.
    Regard, Viktor
    Linköping University, Department of Computer and Information Science.
    Övervakning och bedömning av flygledares prestanda (Monitoring and assessment of air traffic controllers' performance) (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The ability to remotely control air traffic control towers will place higher demands on the concentration and multitasking abilities of air traffic controllers around Sweden. It is important that measures are taken to prevent a tired air traffic controller from making a mistake, and this is precisely what the developed system attempts to prevent.

    With the help of sensors and models, the system can determine the air traffic controller's fatigue, stress level, attention, and current work task. All values are presented in a simple graphical interface. Together with the results for the controller's health, all sensor data is also presented in the interface.

    The system is mainly built on two frameworks: Apache NiFi and Apache Spark. What the two frameworks have in common is functionality for building clusters, which means that only the number of nodes limits how many air traffic controllers can be connected at the same time.

    This prototype does not yet have all the functionality in place to handle several air traffic controllers. However, the foundation is laid for easily implementing additional functionality and, in the end, having several controllers connected simultaneously. The system opens up the possibility of distributing the work to the controllers who are most focused, and can therefore contribute to increased aviation safety.

  • 316.
    Alwan, Alaa
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Secure Application Development (2022). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Security testing is a widely applied measure to evaluate and improve software security by identifying vulnerabilities and ensuring security requirements related to properties like confidentiality, integrity, and availability. A confidentiality policy guarantees that attackers will not be able to expose secret information: in the context of software programs, the output that attackers observe carries no information about the confidential input. Integrity is the dual of confidentiality, i.e., unauthorized and untrusted data provided to the system will not affect or modify the system’s data. Availability means that systems must be accessible within a reasonable time. Information flow control is a mechanism to enforce confidentiality and integrity. An accurate security assessment is critical in an age when the open nature of modern software-based systems makes them vulnerable to exploitation. Security testing that verifies and validates software systems is prone to false positives, false negatives, and other such errors, requiring more resilient tools to provide an efficient way to evaluate the threats and vulnerabilities of a given system. Therefore, the newly developed tool Reax controls information flow in Java programs by synthesizing conditions under which a method or an application is secure. Reax is a command-line application, which makes it hard for developers to use. The primary goal of this project is to integrate Reax into Java IDEs via a plugin that performs advanced analysis of security flaws. Specifically, the graphical plugin detects and reacts directly to security flaws within the Standard Widget Toolkit (SWT) environment. As a second goal, the project proposes a new algorithm to find the root cause of security violations through a graphical interface. As a result, developers will be able to detect security violations and fix their code during the implementation phase, reducing costs.
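The confidentiality property described above (observable output must carry no information about secret input) can be illustrated with a minimal explicit-flow taint tracker. This is a hypothetical Python sketch for intuition only, not Reax's actual condition-synthesis analysis for Java; the statement format and variable names are invented:

```python
def is_confidential(statements, secrets, sink="output"):
    """Return True if no secret information reaches the sink.

    statements: ordered list of (target, sources) assignments; a target
    becomes tainted when any of its sources is tainted."""
    tainted = set(secrets)
    for target, sources in statements:
        if any(s in tainted for s in sources):
            tainted.add(target)
        else:
            tainted.discard(target)  # overwritten with public data only
    return sink not in tainted

# Insecure: the secret PIN flows (via tmp) to the observable output.
insecure = [("tmp", ["pin"]), ("output", ["tmp"])]
# Secure: the output depends only on public data.
secure = [("tmp", ["pin"]), ("output", ["greeting"])]
```

Real analyses must additionally handle implicit flows through control structures, which this explicit-flow sketch ignores.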

    Download full text (pdf)
    Alaa Alwan's thesis
  • 317.
    Alyoussef, Elyas
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Kategorisering på uppfattningar om digitala hot på webbapplikationer: Med en studie som visar de ekonomiska konsekvenserna av cyberattacker2022Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis presents a categorization of conceptions about digital threats to web applications, together with a study showing the economic consequences of cyber-attacks. The aim of this thesis is to contribute to a scientific article that can be valuable to the public as well as for future work and employment.

    The constant comparison method was used to analyse the aggregated perceptions. The results reveal several interesting findings for theory and practice: perceptions of the cyber world are presented in order to better understand how others see cybersecurity today, and they show significant variations among the participants' perceptions. This indicates that information security, even if it is gradually maturing, has a long way to go before it becomes an integral part of the business.

    This study can also serve as a guide to the different perceptions of cyber-attacks, as it provides an overview of the most relevant cyber-attacks today. This thesis was supplemented with a study that highlights the economic consequences of cyber-attacks. In addition, the cyber-attack on Coop during the summer of 2021 was also studied.

    Download full text (pdf)
    Elyas_Examensarbete
  • 318.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mattsson, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey2020In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 28, no 4, p. 1675-1707Article in journal (Refereed)
    Abstract [en]

    Modern software development relies on a combination of development and re-use of technical assets, e.g. software components, libraries and APIs. In the past, re-use was mostly conducted with internal assets, but today external assets, such as open source, commercial off-the-shelf (COTS) and assets developed through outsourcing, are also common. This access to more asset alternatives presents new challenges regarding which assets to choose and how to make this decision. To support decision-makers, decision theory has been used to develop decision models for asset selection. However, very little industrial data has been presented in the literature about the usefulness, or even perceived usefulness, of these models. Additionally, only limited information has been presented about which model characteristics determine practitioner preference for one model over another.

    Objective: The objective of this work is to evaluate which characteristics of decision models for asset selection determine industrial practitioners' preference when given the choice between a decision model with high precision and one with high speed.

    Method: An industrial questionnaire survey is performed in which a total of 33 practitioners, of varying roles, from 18 companies are tasked to compare two decision models for asset selection. Textual analysis and formal and descriptive statistics are then applied to the survey responses to answer the study's research questions.

    Results: The study shows that the practitioners had a clear preference for the decision model that emphasised speed over the one that emphasised decision precision. This preference was attributed to the model being perceived as faster, having lower complexity, being more flexible in use for different decisions, being more agile in how it could be used in operation, its emphasis on people, its emphasis on "good enough" precision, and its ability to fail fast if a decision was a failure: seven characteristics that the practitioners considered important for their acceptance of the model.

    Conclusion: Industrial practitioner preference, which relates to acceptance, of decision models for asset selection depends on multiple characteristics that must be considered when developing a model for different types of decisions, such as operational day-to-day decisions as well as more critical tactical or strategic decisions. The main contributions of this work are seven identified characteristics that can serve as industrial requirements for future research on decision models for asset selection.

    Download full text (pdf)
    fulltext
  • 319.
    Amaral, Vasco
    et al.
    Universidade Nova de Lisboa, Portugal.
    Norberto, Beatriz
    Universidade Nova de Lisboa, Portugal.
    Goulão, Miguel
    Universidade Nova de Lisboa, Portugal.
    Aldinucci, Marco
    University of Torino, Italy.
    Benkner, Siegfried
    University of Vienna, Austria.
    Bracciali, Andrea
    University of Stirling, UK.
    Carreira, Paulo
    Universidade de Lisboa, Portugal.
    Celms, Edgars
    University of Latvia, Latvia.
    Correia, Luís
    Universidade de Lisboa, Portugal.
    Grelck, Clemens
    University of Amsterdam, Netherlands.
    Karatza, Helen
    Aristotle University of Thessaloniki, Greece.
    Kessler, Christoph
    Linköping University, Sweden.
    Kilpatrick, Peter
    Queens University Belfast, UK.
    Martiniano, Hugo
    Universidade de Lisboa, Portugal.
    Mavridis, Ilias
    Aristotle University of Thessaloniki, Greece.
    Pllana, Sabri
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Respício, Ana
    Universidade de Lisboa, Portugal.
    Simão, José
    Instituto Politécnico de Lisboa, Portugal.
    Veiga, Luís
    Universidade de Lisboa, Portugal.
    Visa, Ari
    Tampere University, Finland.
    Programming Languages for Data-Intensive HPC Applications: a Systematic Mapping Study2020In: Parallel Computing, ISSN 0167-8191, E-ISSN 1872-7336, Vol. 91, p. 1-17, article id 102584Article in journal (Refereed)
    Abstract [en]

    A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of the software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on the topic of HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles are identified employing an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase by human inspection of article abstracts, titles and keywords to 152 relevant articles published in the period 2006–2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey that involved 57 HPC experts. The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of the programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically these have a steep learning curve, which makes them difficult to adopt. We believe that the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.

  • 320.
    Amatya, Suyesh
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Kurti, Arianit
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Cross-Platform Mobile Development: Challenges and Opportunities2014In: ICT Innovations 2013: ICT Innovations and Education / [ed] Vladimir Trajkovik and Misev Anastas, Springer, 2014, 1, p. 219-229Chapter in book (Refereed)
    Abstract [en]

    Mobile devices and mobile computing have made tremendous advances and become ubiquitous in the last few years. As a result, the landscape has become seriously fragmented, which brings many challenges for the mobile development process. Whilst the native approach to mobile development is still the predominant way to develop for a particular mobile platform, there has recently been a shift towards cross-platform mobile development as well. In this paper, we have performed a survey of the literature to identify trends in cross-platform mobile development over the last few years. Based on the results of the survey, we argue that the web-based approach, and in particular the hybrid approach, serves cross-platform development best. The results of this work indicate that even though cross-platform tools are not fully matured, they show great potential. Thus we consider that cross-platform development offers great opportunities for rapid development of high-fidelity prototypes of mobile applications.

  • 321.
    Ambrosius, Robin
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Machine Learning Based Optimizations for Bot Aided Interviews: In the Field of Due Diligence2018Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Startups need investments in order to scale their business. The value of such startups, especially software-based startups, is difficult to evaluate because there is no physical value that can be judged. The company DueDive has built experience in due diligence by conducting many interviews in this area, which form the basis for the due diligence. These interviews are time-consuming and require a lot of domain knowledge in the field, which makes them very expensive. This thesis evaluated different machine learning algorithms to integrate into software that supports such interview processes. The goal is to shorten the interview duration and lower the know-how required of the interviewer through suggestions made by the AI. The software uses completed interview sessions to provide enhanced suggestions through artificial intelligence. The proposed solution uses basket analysis and imputation to analyze the collected data. The result is a topic-independent software tool that is used to administer and carry out interviews with the help of AI. The results are validated and evaluated in a case study using a generic, self-defined interview.
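The basket-analysis step can be illustrated with plain support/confidence association rules mined over completed sessions, here modelled as sets of answered question IDs. This is a hypothetical sketch of the general technique (the session data, question IDs and thresholds are invented), not the thesis' implementation:

```python
from itertools import combinations

def association_rules(sessions, min_support=0.5, min_confidence=0.7):
    """Mine pairwise rules of the form 'if question A was answered,
    question B usually was too' from interview sessions (sets of IDs)."""
    n = len(sessions)
    counts = {}
    for session in sessions:
        for item in session:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
        for pair in combinations(sorted(session), 2):
            counts[frozenset(pair)] = counts.get(frozenset(pair), 0) + 1
    rules = []
    for itemset, c in counts.items():
        if len(itemset) != 2 or c / n < min_support:
            continue  # keep only frequent pairs
        a, b = sorted(itemset)
        for antecedent, consequent in ((a, b), (b, a)):
            confidence = c / counts[frozenset([antecedent])]
            if confidence >= min_confidence:
                rules.append((antecedent, consequent, confidence))
    return rules

# Invented completed sessions: which questions were answered in each.
sessions = [{"q1", "q2", "q3"}, {"q1", "q2"}, {"q2", "q3"}, {"q1", "q2"}]
```

Rules like `("q1", "q2", 1.0)` can then back suggestions of which question to ask next.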

  • 322.
    AMEEN HASHIM, FARHAN
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Al Eid, Jamal
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Al-Salem, Abdulkhaliq
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Comparing of Real-Time Properties in Networks Based On IPv6 and IPv42013Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Real-time applications over IP networks have become widely used in different fields: social video conferencing, online educational lectures, industrial and military applications, and online robotic medical surgery.

    Online medical surgery over IP network has experienced rapid growth in the last few years primarily due to advances in technology (e.g., increased bandwidth; new cameras, monitors, and coder/decoders (CODECs)) and changes in the medical care environment (e.g., increased outpatient care, remote surgeries).

    The purpose of this study was to examine and analyze the impact of IP network parameters (delay, jitter, throughput, and packet drop) on the performance of real-time medical surgery videos sent across different IP networks: native IPv6, native IPv4, and the 6to4 and 6in4 tunneling transition mechanisms, and to compare the behavior of video packets over these networks. The impact of each parameter is examined using the video codecs MPEG-1, MPEG-2, and MPEG-4.

    This study has been carried out in two main parts, theoretical and practical. The theoretical part focuses on the calculation of various delays in IP networks, such as transmission, processing, propagation, and queuing delays for video packets. The practical part includes examining video codec throughput over IP networks using the jperf tool, examining delay, jitter, and packet drops for different packet sizes using the IDT-G tool, and assessing how these parameters affect the quality of the received video.

    The obtained theoretical and practical results were presented in tables and plotted in graphs to show the performance of real-time video over IP networks. These results confirmed that the MPEG-1 and MPEG-2 codecs were highly impacted by the encapsulation and de-capsulation process, while MPEG-4 was the least impacted by IPv4, IPv6, and the IP transition mechanisms in terms of throughput and wasted bandwidth. They also indicated that using the 6to4 and 6in4 tunneling mechanisms caused more bandwidth wastage, higher delay, jitter, and packet drop than native IPv4 and IPv6.
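The delay components computed in the theoretical part combine additively: end-to-end delay is the sum of transmission, propagation, processing, and queuing delay. A minimal worked sketch with assumed link figures (the packet size, link speed, distance, and the fixed processing and queuing terms are invented, not the thesis' measured values):

```python
def transmission_delay(packet_bits, link_bps):
    """Time to push the packet onto the link."""
    return packet_bits / link_bps

def propagation_delay(distance_m, speed_mps=2e8):
    """Time for the signal to travel the link; ~2/3 of the speed of
    light is a common assumption for copper and fibre."""
    return distance_m / speed_mps

def end_to_end_delay(packet_bits, link_bps, distance_m,
                     processing_s=0.0, queuing_s=0.0):
    return (transmission_delay(packet_bits, link_bps)
            + propagation_delay(distance_m)
            + processing_s + queuing_s)

# Assumed example: a 1500-byte video packet on a 100 Mbit/s link
# over 100 km, with fixed processing and queuing delays.
delay = end_to_end_delay(1500 * 8, 100e6, 100e3,
                         processing_s=50e-6, queuing_s=200e-6)
```

Here transmission contributes 120 microseconds and propagation 500, showing that on long links propagation can dominate the per-hop budget.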

    Download full text (pdf)
    Comparing of Real-Time
  • 323.
    Ameri, Afshin
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Curuklu, Baran
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Miloradović, Branko
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ekström, Mikael
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Planning and Supervising Autonomous Underwater Vehicles through the Mission Management Tool2020In: Global OCEANS 2020 OCEANS, 2020Conference paper (Refereed)
    Abstract [en]

    Complex underwater missions involving heterogeneous groups of AUVs and other types of vehicles require a number of steps, from defining and planning the mission, orchestration during the mission execution, and recovery of the vehicles, to post-mission data analysis. In this work the Mission Management Tool (MMT), a software solution addressing the above-mentioned services, is proposed. As demonstrated in real-world tests, the MMT is able to support mission operators. The MMT hides the complex system consisting of software solutions, hardware, and vehicles from the user, and allows intuitive interaction with the vehicles involved in a mission. The tool can adapt to a wide spectrum of missions assuming different types of robotic systems and mission objectives.

  • 324.
    Amighi, Afshin
    et al.
    University of Twente.
    de Carvalho Gomes, Pedro
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Gurov, Dilian
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Huisman, Marieke
    University of Twente.
    Provably Correct Control-Flow Graphs from Java Programs with Exceptions2012Report (Other academic)
    Abstract [en]

    We present an algorithm to extract flow graphs from Java bytecode, including exceptional control flows. We prove its correctness, meaning that the behavior of the extracted control-flow graph is a sound over-approximation of the behavior of the original program. Thus any safety property that holds for the extracted control-flow graph also holds for the original program. This makes control-flow graphs suitable for performing various static analyses, such as model checking. The extraction is performed in two phases. In the first phase the program is transformed into a BIR program, a stack-less intermediate representation of Java bytecode, from which the control-flow graph is extracted in the second phase. We use this intermediate format because it results in compact flow graphs, with provably correct exceptional control flow. To prove the correctness of the two-phase extraction, we also define an idealized extraction algorithm, whose correctness can be proven directly. Then we show that the behavior of the control-flow graph extracted via the intermediate representation is an over-approximation of the behavior of the directly extracted graphs, and thus of the original program. We implemented the indirect extraction as the CFGEx tool and performed several test cases to show the efficiency of the algorithm.
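The idea of soundly over-approximating exceptional flow can be illustrated with a toy extractor: every instruction that may throw gets an extra edge to the enclosing handler, or to an exceptional exit node when no handler covers it, while the normal fall-through edge is always kept. This is a hypothetical Python sketch for intuition, not the paper's BIR-based two-phase algorithm; the instruction format is invented:

```python
def extract_cfg(instructions, handler=None):
    """instructions: list of (index, opcode, may_throw) triples executed
    in sequence. Returns a set of edges (from, to); 'exit' and
    'exc_exit' denote the normal and exceptional exit nodes."""
    edges = set()
    last = len(instructions) - 1
    for i, (_idx, _op, may_throw) in enumerate(instructions):
        # Normal fall-through edge: kept unconditionally, so the graph
        # over-approximates behavior even when an exception may occur.
        edges.add((i, i + 1 if i < last else "exit"))
        if may_throw:
            # Exceptional edge: to the handler if one covers this
            # instruction, otherwise the exception propagates out.
            edges.add((i, handler if handler is not None else "exc_exit"))
    return edges

# Toy method: the division at index 1 may throw (e.g. ArithmeticException).
prog = [(0, "load", False), (1, "div", True), (2, "store", False)]
```

With `handler=2` the throwing instruction gains an edge into the handler block; without one, the exception escapes to `exc_exit`, mirroring propagation of uncaught exceptions.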

    Download full text (pdf)
    fulltext
  • 325.
    Amighi, Afshin
    et al.
    University of Twente.
    de Carvalho Gomes, Pedro
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Gurov, Dilian
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Huisman, Marieke
    University of Twente.
    Sound Control-Flow Graph Extraction for Java Programs with Exceptions2012In: Software Engineering and Formal Methods: 10th International Conference, SEFM 2012, Thessaloniki, Greece, October 1-5, 2012. Proceedings, Springer Berlin/Heidelberg, 2012, p. 33-47Conference paper (Refereed)
    Abstract [en]

    We present an algorithm to extract control-flow graphs from Java bytecode, considering exceptional flows. We then establish its correctness: the behavior of the extracted graphs is shown to be a sound over-approximation of the behavior of the original programs. Thus, any temporal safety property that holds for the extracted control-flow graph also holds for the original program. This makes the extracted graphs suitable for performing various static analyses, in particular model checking. The extraction proceeds in two phases. First, we translate Java bytecode into BIR, a stack-less intermediate representation. The BIR transformation is developed as a module of Sawja, a novel static analysis framework for Java bytecode. Besides Sawja’s efficiency, the resulting intermediate representation is more compact than the original bytecode and provides an explicit representation of exceptions. These features make BIR a natural starting point for sound control-flow graph extraction. Next, we formally define the transformation from BIR to control-flow graphs, which (among other features) considers the propagation of uncaught exceptions within method calls. We prove the correctness of the two-phase extraction by suitably combining the properties of the two transformations with those of an idealized control-flow graph extraction algorithm, whose correctness has been proved directly. The control-flow graph extraction algorithm is implemented in the ConFlEx tool. A number of test cases show the efficiency and the utility of the implementation.

  • 326.
    Amin, Marian H.
    et al.
    Faculty of Management Technology, German University in Cairo, Cairo, Egypt.
    Mohamed, Ehab K.A
    Faculty of Management Technology, German University in Cairo, Cairo, Egypt.
    Elragal, Ahmed
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Corporate Disclosure via Social Media: A Data Science Approach2020In: Online information review (Print), ISSN 1468-4527, E-ISSN 1468-4535, Vol. 44, no 1, p. 278-298Article in journal (Refereed)
    Abstract [en]

    Purpose – The aim of this paper is to investigate corporate financial disclosure via Twitter among the top listed 350 companies in the UK as well as identify the determinants of the extent of social media usage to disclose financial information.

    Design/methodology/approach – This study applies an unsupervised machine learning technique, namely, Latent Dirichlet Allocation (LDA) topic modeling to identify financial disclosure tweets. Panel, Logistic, and Generalized Linear Model Regressions are also run to identify the determinants of financial disclosure on Twitter focusing mainly on board characteristics.

    Findings – Topic modeling results reveal that companies mainly tweet about 12 topics, including financial disclosure, which has a probability of occurrence of about 7 percent. Several board characteristics are found to be associated with the extent of Twitter usage as a financial disclosure platform, among which are board independence, gender diversity, and board tenure.

    Originality/value – Extensive literature examines disclosure via traditional media and its determinants, yet this paper extends the literature by investigating the relatively new disclosure channel of social media. This study is among the first to utilize machine learning, instead of manual coding techniques, to automatically unveil the tweets’ topics and reveal financial disclosure tweets. It is also among the first to investigate the relationships between several board characteristics and financial disclosure on Twitter; providing a distinction between the roles of executive versus non-executive directors relating to disclosure decisions.

  • 327. Amin, Marian Hany
    et al.
    Mohamed, Ehab Kamel
    Elragal, Ahmed
    Corporate Social Responsibility disclosure via Twitter by top listed UK companies: A Data Science Approach2018Conference paper (Refereed)
  • 328.
    Amin, Marian Hany
    et al.
    The German University in Cairo.
    Mohamed, Ehab Kamel
    Elragal, Ahmed
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Financial Disclosure on Twitter by Top Listed UK Companies: A Data Science Approach2018Conference paper (Refereed)
    Abstract [en]

    Ongoing advancements in technology have dramatically changed the disclosure media that companies adopt. Such disclosure media have evolved from the traditional paper-based ones to the internet as the new platform to disclose information via companies’ designated websites. Currently, however, the new media for disclosures are the social media. The aim of this paper is to investigate corporate social media accounts for financial disclosure, as well as to identify its determinants. The sample of the study comprises the tweets posted on the Twitter accounts belonging to the FTSE 350 constituents. Topic modeling is applied to identify financial disclosure tweets and logistic regression is run to identify the determinants of financial disclosure on Twitter. Results show that companies use Twitter to make corporate disclosures and some board characteristics are found to have a significant relationship with financial disclosure.

  • 329.
    Amin, Marian Hany
    et al.
    German University in Cairo, Cairo, Egypt.
    Mohamed, Ehab Kamel
    Elragal, Ahmed
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Twitter: An emerging media for corporate disclosure2018Conference paper (Refereed)
    Abstract [en]

    Ongoing advancements in technology have dramatically changed the disclosure media that companies adopt. Such disclosure media have evolved from the traditional paper-based ones to the internet as the new platform to disclose information via companies’ designated websites. Currently, however, the new media for disclosures are the social media. The aim of this paper is to investigate corporate social media accounts for financial disclosure, as well as to identify its determinants. The sample of the study comprises the tweets posted on the Twitter accounts belonging to the FTSE 350 constituents. Topic modeling is applied to identify financial disclosure tweets and logistic regression is run to identify the determinants of financial disclosure on Twitter. Results show that companies use Twitter to make corporate disclosures and some board characteristics are found to have a significant relationship with financial disclosure.

  • 330.
    Aminifar, Amir
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Analysis, Design, and Optimization of Embedded Control Systems2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Today, many embedded or cyber-physical systems, e.g., in the automotive domain, comprise several control applications sharing the same platform. It is well known that such resource sharing leads to complex temporal behaviors that degrade the quality of control and, more importantly, may even jeopardize stability in the worst case, if not properly taken into account.

    In this thesis, we consider embedded control or cyber-physical systems, where several control applications share the same processing unit. The focus is on the control-scheduling co-design problem, where the controller and scheduling parameters are jointly optimized. The fundamental difference between control applications and traditional embedded applications motivates the need for novel methodologies for the design and optimization of embedded control systems. This thesis is one more step towards correct design and optimization of embedded control systems.

    Offline and online methodologies for embedded control systems are covered in this thesis. The importance of considering both the expected control performance and stability is discussed and a control-scheduling co-design methodology is proposed to optimize control performance while guaranteeing stability. Orthogonal to this, bandwidth-efficient stabilizing control servers are proposed, which support compositionality, isolation, and resource-efficiency in design and co-design. Finally, we extend the scope of the proposed approach to non-periodic control schemes and address the challenges in sharing the platform with self-triggered controllers. In addition to offline methodologies, a novel online scheduling policy to stabilize control applications is proposed.

    Download full text (pdf)
    fulltext
    Download (pdf)
    omslag
    Download (jpg)
    presentationsbild
  • 331.
    Aminifar, Amir
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Self-Triggered Controllers, Resource Sharing, and Hard Guarantees2016In: 2016 2ND INTERNATIONAL CONFERENCE ON EVENT-BASED CONTROL, COMMUNICATION, AND SIGNAL PROCESSING (EBCCSP), IEEE , 2016Conference paper (Refereed)
    Abstract [en]

    Today, many control applications in embedded and cyber-physical systems are implemented on shared platforms, alongside other hard real-time or safety-critical applications. When the resource is shared among several applications, providing hard guarantees requires identifying the amount of resource needed by each application. This is rather straightforward when the platform is shared among periodic control and periodic real-time applications. In the case of event-triggered and self-triggered controllers, however, the execution patterns and, in turn, the resource usage are not clear. Therefore, a major implementation challenge when the platform is shared with self-triggered controllers is to provide hard and efficient stability and schedulability guarantees for the other applications. In this paper, we identify certain execution patterns for self-triggered controllers, using which we are able to provide hard and efficient stability guarantees for periodic control applications.
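For the straightforward periodic case mentioned above, the resource needed by each application is captured by its utilization U = Σ C_i/T_i, and a classic sufficient schedulability check under rate-monotonic scheduling is the Liu and Layland bound n(2^(1/n) − 1). A minimal sketch of that standard test (the task parameters are invented; the paper's analysis for self-triggered controllers is a different, more involved problem):

```python
def utilization(tasks):
    """tasks: list of (wcet, period) pairs for periodic tasks."""
    return sum(c / t for c, t in tasks)

def rm_schedulable(tasks):
    """Liu & Layland sufficient (not necessary) test under
    rate-monotonic priorities: U <= n(2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

# Invented (WCET, period) pairs, e.g. in milliseconds.
tasks = [(1, 4), (1, 5), (2, 10)]   # U = 0.65, bound ~= 0.78 for n = 3
```

The test is only sufficient: task sets above the bound may still be schedulable and require an exact response-time analysis instead.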

  • 332.
    Amlinger, Anton
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    An Evaluation of Clustering and Classification Algorithms in Life-Logging Devices2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Using life-logging devices and wearables is a growing trend in today’s society. These devices yield vast amounts of information, data that is not directly surveyable or graspable at a glance due to its size. Gathering a qualitative, comprehensible overview of this quantitative information is essential for life-logging services to serve their purpose.

    This thesis provides an overview comparison of CLARANS, DBSCAN and SLINK, representing different branches of clustering algorithm types, as tools for activity detection in geo-spatial data sets. These activities are then classified using a simple model with model parameters learned via Bayesian inference, as a demonstration of a different branch of clustering.

    Results are provided using Silhouettes as the evaluation for geo-spatial clustering and a user study for the end classification. The results are promising as an outline for a framework of classification and activity detection, and shed light on various pitfalls that might be encountered during the implementation of such a service.
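The Silhouette evaluation used here scores each point as s = (b − a)/max(a, b), where a is the mean distance to the point's own cluster and b the smallest mean distance to any other cluster; values near 1 indicate compact, well-separated clusters. A minimal pure-Python sketch on invented 2-D points (not the thesis' geo-spatial data):

```python
from math import dist  # math.dist requires Python 3.8+

def silhouette(clusters):
    """Mean silhouette coefficient over all points, in [-1, 1].
    clusters: list of clusters, each a list of coordinate tuples
    with at least two points."""
    scores = []
    for ci, cluster in enumerate(clusters):
        for p in cluster:
            own = [q for q in cluster if q is not p]
            a = sum(dist(p, q) for q in own) / len(own)          # cohesion
            b = min(sum(dist(p, q) for q in other) / len(other)  # separation
                    for cj, other in enumerate(clusters) if cj != ci)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated clusters of invented geo-like points, standing in
# for two detected "activities".
clusters = [[(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)],
            [(10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]]
```

The same score can compare CLARANS, DBSCAN and SLINK outputs on one data set, since it only needs the resulting cluster assignment.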

    Download full text (pdf)
    fulltext
  • 333.
    Amor, Christian
    et al.
    Univ Politecn Madrid, Sch Aerosp Engn, E-28040 Madrid, Spain..
    Perez, Jose M.
    Univ Politecn Madrid, Sch Aerosp Engn, E-28040 Madrid, Spain..
    Schlatter, Philipp
    KTH, School of Engineering Sciences (SCI), Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW.
    Vinuesa, Ricardo
    KTH, School of Engineering Sciences (SCI), Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW.
    Le Clainche, Soledad
    Univ Politecn Madrid, Sch Aerosp Engn, E-28040 Madrid, Spain..
    Soft Computing Techniques to Analyze the Turbulent Wake of a Wall-Mounted Square Cylinder2020In: 14th International Conference on Soft Computing Models in Industrial and Environmental Applications, SOCO 2019 / [ed] Alvarez, FM Lora, AT Munoz, JAS Quintian, H Corchado, E, Springer, 2020, Vol. 950, p. 577-586Conference paper (Refereed)
    Abstract [en]

    This paper introduces several methods, generally used in fluid dynamics, to provide low-rank approximations. The algorithms behind these methods are mainly based on singular value decomposition (SVD) and dynamic mode decomposition (DMD) techniques and are suitable for analyzing turbulent flows. The application of these methods is illustrated in the analysis of the turbulent wake of a wall-mounted cylinder, a geometry modeling a skyscraper. A brief discussion of the large- and small-scale structures of the flow provides the key ideas for representing the general dynamics of the flow using low-rank approximations. If the flow physics is understood, then it is possible to adapt these techniques, or other strategies, to solve general complex problems at reduced computational cost. The main goal is to introduce these methods as machine learning strategies that could potentially be used in the field of fluid dynamics and that can be extended to any other research field.
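The SVD-based low-rank approximation underlying these methods keeps only the r largest singular values of a snapshot matrix; by the Eckart-Young theorem this is the best rank-r approximation in the least-squares sense. A minimal numpy sketch on an invented rank-2 "snapshot matrix" (two spatial modes modulated in time), not the paper's turbulent-wake data:

```python
import numpy as np

def low_rank(X, r):
    """Best rank-r approximation of X (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Scale the first r left singular vectors by their singular values,
    # then project back onto the first r right singular vectors.
    return U[:, :r] * s[:r] @ Vt[:r, :]

# Invented snapshot matrix of exact rank 2: rows are time instants,
# columns are "spatial" points, as in a simple flow data set.
t = np.linspace(0, 1, 50)
X = (np.outer(np.sin(2 * np.pi * t), np.ones(30))
     + np.outer(np.cos(4 * np.pi * t), np.arange(30)))
```

A rank-2 truncation recovers this matrix essentially exactly, while rank 1 leaves a visible residual; for real turbulence the singular values decay gradually and r trades accuracy against cost.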

  • 334.
    Amorebieta, Josu
    et al.
    Univ Basque Country, UPV EHU, Dept Commun Engn, Bilbao 48013, Spain..
    Pereira, Joao
    RISE Res Inst Sweden, Fiber Opt, S-16440 Stockholm, Sweden..
    Durana, Gaizka
    Univ Basque Country, UPV EHU, Dept Commun Engn, Bilbao 48013, Spain..
    Franciscangelis, Carolina
    RISE Res Inst Sweden, Fiber Opt, S-16440 Stockholm, Sweden..
    Ortega-Gomez, Angel
    Univ Basque Country, UPV EHU, Dept Commun Engn, Bilbao 48013, Spain..
    Zubia, Joseba
    Univ Basque Country, UPV EHU, Dept Commun Engn, Bilbao 48013, Spain..
    Villatoro, Joel
    Univ Basque Country, UPV EHU, Dept Commun Engn, Bilbao 48013, Spain.;Basque Fdn Sci, Ikerbasque, Bilbao 48011, Spain..
    Margulis, Walter
    KTH, School of Engineering Sciences (SCI), Applied Physics, Laser Physics. RISE Res Inst Sweden, Fiber Opt, S-16440 Stockholm.
    Twin-core fiber sensor integrated in laser cavity2022In: Scientific Reports, E-ISSN 2045-2322, Vol. 12, no 1, article id 11797Article in journal (Refereed)
    Abstract [en]

    In this work, we report on a twin-core fiber sensor system that provides improved spectral efficiency, allows for multiplexing and gives a low level of crosstalk. Pieces of the aforementioned strongly coupled multicore fiber are used as sensors in a laser cavity incorporating a pulsed semiconductor optical amplifier (SOA). Each sensor has its unique cavity length and can be addressed individually by electrically matching the periodic gating of the SOA to the sensor's cavity roundtrip time. The interrogator acts as a laser and provides a narrow spectrum with a high signal-to-noise ratio. Furthermore, it allows distinguishing the responses of individual sensors even in the case of overlapping spectra. Potentially, the number of interrogated sensors can be increased significantly, which is an appealing feature for multipoint sensing.
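
    The per-sensor addressing described above relies on matching the SOA gating period to each cavity's roundtrip time, t = 2nL/c for a cavity of length L. A small illustrative calculation; the refractive index and cavity lengths below are assumed values, not figures from the paper:

    ```python
    # Roundtrip time of a fiber laser cavity: t = 2 * n * L / c.
    # The refractive index and cavity lengths are illustrative assumptions,
    # not values taken from the paper.
    C = 299_792_458.0   # speed of light in vacuum, m/s
    N_FIBER = 1.468     # typical effective refractive index of silica fiber

    def roundtrip_time(cavity_length_m: float) -> float:
        """Return the cavity roundtrip time in seconds."""
        return 2.0 * N_FIBER * cavity_length_m / C

    # Two sensors with distinct cavity lengths have distinct roundtrip times,
    # so the SOA gating period can single out either one.
    t_short = roundtrip_time(50.0)  # roughly 490 ns
    t_long = roundtrip_time(75.0)   # roughly 735 ns
    ```

    Because the two periods differ by hundreds of nanoseconds, electrically tuning the gating period to one sensor's roundtrip time leaves the other cavity unable to lase, which is what enables individual addressing.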

  • 335.
    Amoura, Jonas
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Nätverksemulatorer: Nätverksemulering i utbildningssyfte2015Independent thesis Basic level (university diploma), 5 credits / 7,5 HE creditsStudent thesis
    Abstract [en]

    This project deals with network emulators for training purposes, where everything is based on open-source applications. The goal of the project was to evaluate the GNS3 and CORE emulators and answer the question of whether and how they can be used for educational purposes by students and teachers. The study begins by briefly describing the various open-source emulators available, choosing to focus on the following network emulators: IMUNES, Marionnet, Mininet, NetKit, GNS3 and CORE. The evaluation was conducted using a form, with the GNS3 and CORE emulators run on a Linux-based operating system to test all functions and the various applications available within the tools. The results showed that both emulators work well with open-source applications that can emulate router functions, making it possible to emulate network topologies with different routing protocols such as RIP, OSPF and BGP. The evaluation also showed that both emulators are excellent tools for people with minimal programming knowledge, because their user-friendly interfaces make it possible to build complex topologies using drag-and-drop functionality alone. The conclusion of the study is that both emulators work well for educational purposes, for developing network technology, routing protocol and Linux skills among students, and for creating a virtual environment in which students can experiment with their skills. Installing the network emulators GNS3 and CORE and related applications took about 30 minutes per tool; GNS3 required 23 MB and CORE 10 MB of hard disk space when installed without any accessory applications.

    The reason these two tools work well for training purposes is that both emulators have integrated support for a large number of applications and use simple user interfaces to emulate network environments. In addition, the tools are completely free for all students and teachers, so everyone has the same opportunity to access them. The report's recommendation is to use the CORE emulator, precisely because it has so many features integrated and is such a simple tool to use.

  • 336.
    Amponsah, Nana Frimpong
    University West, Department of Engineering Science.
    Securing connected vehicles with the MITRE ATT&CK Framework: A risk based approach2023Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In the rapidly evolving landscape of connected vehicles, the threat of cyberattacks has emerged as a significant concern. This research embarked on a journey to develop a risk-based strategy to protect these vehicles, leveraging the MITRE ATT&CK framework. An exhaustive literature review laid the foundation, revealing gaps in current defense mechanisms and highlighting the urgency of addressing vulnerabilities in connected vehicles. Using the MITRE ATT&CK framework, specific vulnerabilities in connected vehicles were identified. A bespoke risk assessment model was then crafted, which prioritized these vulnerabilities based on their potential impact and exploitability. This prioritization was not just data-driven but was further refined with insights from industry experts. To address the identified vulnerabilities, countermeasures were developed, rooted in the principles of the Layered Security Model. These countermeasures underwent rigorous testing in simulated environments, ensuring their robustness against potential cyber threats. Continuous monitoring mechanisms were also established to ensure the long-term efficacy of these countermeasures. The culmination of this research offers a comprehensive strategy to safeguard connected vehicles against cyber threats. The findings not only contribute significantly to the existing body of knowledge but also provide actionable insights for industry practitioners. The journey, from identifying vulnerabilities to developing and testing countermeasures, underscores the importance of a holistic, risk-based approach in the realm of connected vehicle cybersecurity.

  • 337. Amréus, Lars
    et al.
    Berg, Kristian
    Medeltidens bildvärld - ökad tillgänglighet eller tjuvkatalog?2003In: Kulturmiljövård, ISSN 1100-4800, no 2, p. 36-41Article in journal (Other (popular science, discussion, etc.))
    Abstract [sv]

    Historiska museets nya webbplats.

    Download full text (pdf)
    fulltext
    Download (jpg)
    presentationsbild
  • 338.
    Anders, Söderholm
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Justus, Sörman
    Linköping University, Department of Computer and Information Science, Software and Systems.
    GPU-accelleration of image rendering and sorting algorithms with the OpenCL framework2016Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Today's computer systems often contain several different processing units aside from the CPU. Among these, the GPU is a very common processing unit with immense compute power that is available in almost all computer systems. How do we make use of this processing power that lies within our machines? One answer is the OpenCL framework, which is designed for just this: to open up the possibility of using all the different types of processing units in a computer system. This thesis discusses the advantages and disadvantages of using the integrated GPU available in a basic workstation computer for the computation of image processing and sorting algorithms. These tasks are computationally intensive, and the authors analyze whether an integrated GPU is up to the task of accelerating the processing of these algorithms. The OpenCL framework makes it possible to run one implementation on different processing units; to provide perspective, we benchmark our implementations on both the GPU and the CPU and compare the results. A heterogeneous approach that combines the two above-mentioned processing units is also tested and discussed. The OpenCL framework is analyzed from a development perspective, and the advantages and disadvantages it brings to the development process are presented.

    Download full text (pdf)
    fulltext
  • 339.
    Anderson, Collin
    et al.
    University of Pennsylvania, USA.
    Winter, Philipp
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    -, Roya
    Global Network Interference Detection over the RIPE Atlas Network2014Conference paper (Refereed)
    Abstract [en]

    Existing censorship measurement platforms frequently suffer from poor adoption, insufficient geographic coverage, and scalability problems. In order to outline an analytical framework and data collection needs for future ubiquitous measurement initiatives, we build on top of the existing and widely deployed RIPE Atlas platform. In particular, we propose methods for monitoring the reachability of vital services through an algorithm that balances timeliness, diversity, and cost. We then use Atlas to investigate blocking events in Turkey and Russia. Our measurements identify under-examined forms of interference and provide evidence of cooperation between a well-known blogging platform and government authorities for purposes of blocking hosted content.

    Download full text (pdf)
    foci2014
  • 340.
    Anderson, Thomas
    et al.
    University of Washington.
    Canini, Marco
    KAUST.
    Kim, Jongyul
    KAIST.
    Kostic, Dejan
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Kwon, Youngjin
    KAIST.
    Peter, Simon
    The University of Texas at Austin.
    Reda, Waleed
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Schuh, Henry
    University of Washington.
    Witchel, Emmett
    The University of Texas at Austin.
    Assise: Performance and Availability via Client-local NVM in a Distributed File System2020In: / [ed] USENIX Association, USENIX - The Advanced Computing Systems Association, 2020, p. 1011--1027Conference paper (Refereed)
    Abstract [en]

    The adoption of low latency persistent memory modules (PMMs) upends the long-established model of remote storage for distributed file systems. Instead, by colocating computation with PMM storage, we can provide applications with much higher IO performance, sub-second application failover, and strong consistency. To demonstrate this, we built the Assise distributed file system, based on a persistent, replicated coherence protocol that manages client-local PMM as a linearizable and crash-recoverable cache between applications and slower (and possibly remote) storage. Assise maximizes locality for all file IO by carrying out IO on process-local, socket-local, and client-local PMM whenever possible. Assise minimizes coherence overhead by maintaining consistency at IO operation granularity, rather than at fixed block sizes.

    We compare Assise to Ceph/BlueStore, NFS, and Octopus on a cluster with Intel Optane DC PMMs and SSDs for common cloud applications and benchmarks, such as LevelDB, Postfix, and FileBench. We find that Assise improves write latency up to 22x, throughput up to 56x, fail-over time up to 103x, and scales up to 6x better than its counterparts, while providing stronger consistency semantics.

    Download full text (pdf)
    assise.pdf
  • 341.
    Andersson, Anders
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering. Statens väg- och transportforskningsinstitut (VTI), Trafik och trafikant,TRAF, Fordonsteknik och simulering, FTS, Linköping, Sweden.
    Distributed Moving Base Driving Simulators: Technology, Performance, and Requirements2019Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it in the same way as they would in actual traffic, you get a realistic evaluation of what you want to investigate. Two advantages of a driving simulator are (1) that you can repeat the same situation several times over a short period of time, and (2) that you can study driver reactions during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model using a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation.

    This thesis investigates if and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to be able to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as with a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model have been transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation.

    The results show that the distributed simulators we have developed work well overall with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if one gradually increases the delays, a driver in the distributed simulator will change his/her behavior. The impact of communication latency on a distributed simulator also depends on the simulator application, where different usages of the simulator, i.e., different simulator studies, will have different demands. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior. This leads to the need for methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to help notify test managers when a model behaves strangely or is driven outside of its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (which is an equation-based and object-oriented programming language) for simulating subsystems is also examined. Implementation of the model in Modelica has also been extended with requirements management, and in this work a framework is proposed for automatically evaluating the model in a tool.

    Download full text (pdf)
    Distributed Moving Base Driving Simulators: Technology, Performance, and Requirements
    Download (png)
    presentationsbild
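
    The monitoring aid mentioned in the abstract above, which notifies test managers when a model behaves strangely or is driven outside its validated region, can be pictured as a runtime envelope check. A minimal sketch, assuming per-signal validated intervals; the signal names and ranges are illustrative, not the thesis's actual values:

    ```python
    # Per-signal validated intervals for the vehicle model. The signal names
    # and ranges below are illustrative assumptions, not the thesis's values.
    VALIDATED_REGION = {
        "speed_kmh": (0.0, 120.0),
        "engine_rpm": (600.0, 6000.0),
    }

    def check_sample(sample: dict) -> list:
        """Return the names of signals that fall outside the validated region."""
        violations = []
        for name, (low, high) in VALIDATED_REGION.items():
            value = sample.get(name)
            if value is not None and not (low <= value <= high):
                violations.append(name)
        return violations

    # A sample inside the envelope raises no flags; one outside does, and a
    # real monitor would forward the flag to the simulator operator.
    ok = check_sample({"speed_kmh": 80.0, "engine_rpm": 2500.0})       # []
    flagged = check_sample({"speed_kmh": 80.0, "engine_rpm": 6500.0})  # ["engine_rpm"]
    ```

    In a distributed simulator the same check would run per node on the signals crossing each subsystem interface, so a deviation can be attributed to the subsystem that produced it.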
  • 342.
    Andersson, Anders
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering. Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Extensions for Distributed Moving Base Driving Simulators2017Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Modern vehicles are complex systems. Different design stages for such a complex system include evaluation using models and submodels, hardware-in-the-loop systems and complete vehicles. Once a vehicle is delivered to the market evaluation continues by the public. One kind of tool that can be used during many stages of a vehicle lifecycle is driving simulators.

    The use of driving simulators with a human driver is commonly focused on driver behavior. In a high fidelity moving base driving simulator it is possible to provide realistic and repeatable driving situations using distinctive features such as: a physical model of the driven vehicle, a moving base, a physical cabin interface, and an audio and visual representation of the driving environment. A desired but difficult goal to achieve with a moving base driving simulator is behavioral validity. In other words, "a driver in a moving base driving simulator should have the same driving behavior as he or she would have during the same driving task in a real vehicle."

    In this thesis the focus is on high fidelity moving base driving simulators. The main target is to improve the behavior validity or to maintain behavior validity while adding complexity to the simulator. One main assumption in this thesis is that systems closer to the final product provide better accuracy and are perceived better if properly integrated. Thus, the approach in this thesis is to try to ease incorporation of such systems using combinations of the methods hardware-in-the-loop and distributed simulation. Hardware-in-the-loop is a method where hardware is interfaced into a software controlled environment/simulation. Distributed simulation is a method where parts of a simulation at physically different locations are connected together. For some simulator laboratories distributed simulation is the only feasible option since some hardware cannot be moved in an easy way.

    Results presented in this thesis show that a complete vehicle or hardware-in-the-loop test laboratory can successfully be connected to a moving base driving simulator. Further, it is demonstrated that using a framework for distributed simulation eases communication and integration due to standardized interfaces. One identified potential problem is complexity in interface wrappers when integrating hardware-in-the-loop in a distributed simulation framework. From this aspect, it is important to consider the model design and the intersections between software and hardware models. Another important issue discussed is the increased delay in overhead time when using a framework for distributed simulation.

    Download full text (pdf)
    Extensions for Distributed Moving Base Driving Simulators
    Download (pdf)
    omslag
    Download (jpg)
    presentationsbild
  • 343.
    Andersson, Anders
    et al.
    Statens väg- och transportforskningsinstitut, Fordonsteknik och simulering, FTS.
    Andersson Hultgren, Jonas
    Statens väg- och transportforskningsinstitut, Fordonsteknik och simulering, FTS.
    Leandertz, Rickard
    Hiq Accelerated Concept Evaluation AB, Stockholm, Sweden.
    Johansson, Martin
    Pitch Technologies.
    Eriksson, Steve
    Pitch Technologies.
    Jakobson, Ola
    Volvo Car Corporation.
    A Driving Simulation Platform using Distributed Vehicle Simulators and HLA2015In: Proceedings of the DSC 2015 Europe: Driving Simulation Conference & Exhibition / [ed] Heinrich Bülthoff, Andras Kemeny and Paolo Pretto, 2015, p. 123-130Conference paper (Refereed)
    Abstract [en]

    Modern vehicles are complex systems consisting of an increasingly large multitude of components that operate together. While functional verification of individual components is important, it is also important to test components within a driving environment, both from a functional perspective and from a driver perspective. One proven way of testing is vehicle simulators, and in this work the main goals have been to increase flexibility and scalability by introducing a distributed driving simulator platform.

    As an example, consider a workflow where a developer can go from a desktop simulation to an intermediate driving simulator to a high fidelity driving simulator with Hardware-In-the-Loop systems close to a finished vehicle in an easy way. To accomplish this, a distributed simulation architecture was designed and implemented that divides a driving simulator environment into four major entities with well-defined interfaces, using HLA as the method of communication. This platform was evaluated on two aspects, flexibility/scalability and timing performance. Results show that increased flexibility and scalability was achieved when using a distributed simulation platform. It is also shown that latency was only slightly increased when using HLA.

  • 344.
    Andersson, Anders
    et al.
    Swedish National Road and Transport Research Institute, Traffic and road users, Vehicle technology and simulation.
    Andersson Hultgren, Jonas
    Swedish National Road and Transport Research Institute, Traffic and road users, Vehicle technology and simulation.
    Leandertz, Rickard
    HiQ.
    Johansson, Martin
    Pitch Technologies.
    Eriksson, Steve
    Pitch Technologies.
    Jakobson, Ola
    Volvo Car Corporation.
    A Driving Simulation Platform using Distributed Vehicle Simulators and HLA2015In: Proceedings of the DSC 2015 Europe: Driving Simulation Conference & Exhibition, 2015, p. 123-130Conference paper (Refereed)
    Abstract [en]

    Modern vehicles are complex systems consisting of an increasingly large multitude of components that operate together. While functional verification of individual components is important, it is also important to test components within a driving environment, both from a functional perspective and from a driver perspective. One proven way of testing is vehicle simulators, and in this work the main goals have been to increase flexibility and scalability by introducing a distributed driving simulator platform.

    As an example, consider a workflow where a developer can go from a desktop simulation to an intermediate driving simulator to a high fidelity driving simulator with Hardware-In-the-Loop systems close to a finished vehicle in an easy way. To accomplish this, a distributed simulation architecture was designed and implemented that divides a driving simulator environment into four major entities with well-defined interfaces, using HLA as the method of communication. This platform was evaluated on two aspects, flexibility/scalability and timing performance. Results show that increased flexibility and scalability was achieved when using a distributed simulation platform. It is also shown that latency was only slightly increased when using HLA.

  • 345.
    Andersson, Anders
    et al.
    Swedish National Road and Transport Research Institute, Sweden.
    Buffoni, Lena
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Powertrain Model Assessment for Different Driving Tasks through Requirement Verification2018In: The 9th EUROSIM Congress on Modelica and Simulation, 2018, p. 721-727Conference paper (Refereed)
    Abstract [en]

    For assessing whether a system model is a good candidate for a particular simulation scenario, or for choosing the best system model among multiple design alternatives, it is important to be able to evaluate the suitability of the system model. In this paper we present a methodology based on finite state machine requirements verifying system behaviour in a Modelica environment, where the intended system model usage is within a moving base driving simulator. A use case illustrates the methodology with a Modelica powertrain system model using replaceable components and measured data from a Golf V. The achieved results show the importance of the context of requirements and how users are assisted in finding system model issues.

  • 346.
    Andersson, Anders
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Swedish National Road and Transportation Research Institute.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Models for Distributed Real-Time Simulation in a Vehicle Co-Simulator Setup2013In: Proceedings of the 5th International Workshop on Equation-Based Object-Oriented Modeling Languages and Tools; April 19, University of Nottingham, Nottingham, UK / [ed] Henrik Nilsson, Linköping: Linköping University Electronic Press, 2013, Vol. 84, p. 131-139Conference paper (Refereed)
    Abstract [en]

    A car model in Modelica has been developed to be used in a new setup for distributed real-time simulation where a moving base car simulator is connected with a real car in a chassis dynamometer via a 500m fiber optic communication link. The new co-simulator set-up can be used in a number of configurations where hardware in the loop can be interchanged with software in the loop. The models presented in this paper are the basic blocks chosen for modeling the system in the context of a distributed real-time simulation; estimating parameters for the powertrain model; the choice of numeric solver; and the interaction with the solver for real-time properties.

    Download full text (pdf)
    Models for Distributed Real-Time Simulation in a Vehicle Co-Simulator Setup
  • 347.
    Andersson, Anders
    et al.
    VTI, Swedish National Road and Transport Research Institute, Linköping, Sweden.
    Kharrazi, Sogol
    VTI, Swedish National Road and Transport Research Institute, Linköping, Sweden.
    Vehicle model quality framework for moving base driving simulators, a powertrain model example2018In: International Journal of Vehicle Systems Modelling and Testing, ISSN 1745-6436, E-ISSN 1745-6444, Vol. 13, no 2, p. 93-108Article in journal (Refereed)
    Abstract [en]

    Moving base driving simulators, with an enclosed human driver, are often used to study driver-vehicle interaction or driver behaviour. Reliable results from such a driving simulator study depend strongly on the realism perceived by the driver in the performed driving task. Assuring sufficient fidelity of a vehicle dynamics model during a driving task is currently to a large degree a manual task. The focus here is to automate this process by employing a framework that uses collected driving data to detect model quality for different driving tasks. Using this framework, the credibility of a powertrain model is predicted and assessed. Results show that the chosen powertrain model is accurate enough for a driving scenario on rural roads/motorways, but needs improvement for city driving. This was expected, considering the complexity of the vehicle dynamics model, and it was accurately captured by the proposed framework, which includes real-time information to the simulator operator.

  • 348.
    Andersson, Birger
    et al.
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Bergholtz, Maria
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Edirisuriya, A.
    Ilayperuma, T.
    Jayaweera, P.
    Johannesson, Paul
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Zdravkovic, Jelena
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Enterprise sustainability through the alignment of goal models and business models2008In: CEUR Workshop Proc., 2008, p. 73-87Conference paper (Refereed)
    Abstract [en]

    Business modelling can be used as a starting point for business analysis. The core of a business model is information about resources, events, agents, and their relations. The motivation for a business model can be found in the goals of an enterprise, and those are made explicit in a goal model. This paper discusses the alignment of business models with goal models and proposes a method for constructing business models based on goal models. The method assists in the design of business models that conform to the explicit goals of an enterprise. The main benefits are clear and uniform goal formulations, well-founded business model designs, and increased traceability between models.

  • 349.
    Andersson, Carl
    et al.
    University West, Department of Engineering Science, Division of Mathematics, Computer and Surveying Engineering.
    Bränfeldt, Jonathan
    University West, Department of Engineering Science, Division of Mathematics, Computer and Surveying Engineering.
    Användare, hot och beteende i nätverket2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis contains a security analysis primarily focusing on the users and their behavior in relation to their daily IT usage.

    The thesis was made possible through the cooperation of an international e-commerce company that is a leader in its industry. The company's Swedish office has 35 employees, all of whom use IT-related equipment in their daily tasks. Their interest is to learn more about IT security, and in particular internal security.

    Through a study of network devices and user devices, together with interviews and a survey, the security at layer 2 as well as the users' behaviour and knowledge have been examined. Following this research, results have been extracted regarding possible vulnerabilities, together with recommendations for further work in the area to improve the current situation.

  • 350.
    Andersson, Dan
    KTH, School of Technology and Health (STH), Data- och elektroteknik.
    Implementation av prototyp för inomhuspositionering2013Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Development of technology constantly creates new opportunities, but it can also mean major changes for companies and organizations. Today, phones, tablets, laptops, mobile communications and cloud technology make it possible to work without being bound to a particular time, location or device. This change means that a new office type, which is more flexible and space efficient because it has no fixed workplaces, is becoming more common. A problem with this type of office, known as a flex-office, is that it is not obvious where or whether a colleague is in the office, especially if it is a large office with multiple floors.

    The aim of this work is to develop and implement a Location-Based Service for the company Connecta AB. The service will enable users to use their mobile phone to share their current workplace location in an office environment.

    The result of this work is a Location-Based Service that enables a user to use an Android phone with support for Near Field Communication to share their current workplace position. The cloud-based server solution Windows Azure is used to store indexed workplace positions.

    Download full text (pdf)
    fulltext