  • 151.
    Chattopadhyay, Sudipta
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Roychoudhury, Abhik
    National University of Singapore, Singapore.
    Cache-Related Preemption Delay Analysis for Multilevel Noninclusive Caches, 2014. In: ACM Transactions on Embedded Computing Systems, ISSN 1539-9087, E-ISSN 1558-3465, Vol. 13, no 147. Article in journal (Refereed)
    Abstract [en]

    With the rapid growth of complex hardware features, timing analysis has become an increasingly difficult problem. The key to solving this problem lies in the precise and scalable modeling of performance-enhancing processor features (e.g., cache). Moreover, real-time systems are often multitasking and use preemptive scheduling, with fixed or dynamic priority assignment. For such systems, cache related preemption delay (CRPD) may increase the execution time of a task. Therefore, CRPD may affect the overall schedulability analysis. Existing works propose to bound the value of CRPD in a single-level cache. In this article, we propose a CRPD analysis framework that can be used for a two-level, noninclusive cache hierarchy. In addition, our proposed framework is also applicable in the presence of shared caches. We first show that CRPD analysis faces several new challenges in the presence of a multilevel, noninclusive cache hierarchy. Our proposed framework overcomes all such challenges and we can formally prove the correctness of our framework. We have performed experiments with several subject programs, including an unmanned aerial vehicle (UAV) controller and an in-situ space debris monitoring instrument. Our experimental results suggest that we can provide sound and precise CRPD estimates using our framework.

  • 152.
    Chattopadhyay, Sudipta
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Roychoudhury, Abhik
    National University of Singapore.
    Rosén, Jakob
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Time-Predictable Embedded Software on Multi-Core Platforms: Analysis and Optimization, 2014. In: Foundations and Trends in Electronic Design Automation, ISSN 1551-3939, Vol. 8, no 3-4, 199-356 p. Article in journal (Refereed)
    Abstract [en]

    Multi-core architectures have recently gained popularity due to their high-performance and low-power characteristics. Most modern desktop systems are now equipped with multi-core processors. Despite the widespread adoption of multi-core processors in desktop systems, using such processors in embedded systems still poses several challenges. Embedded systems are often constrained by several extra-functional aspects, such as time. Therefore, providing guarantees for time-predictable execution is one of the key requirements for embedded system designers. Multi-core processors adversely affect time-predictability due to the presence of shared resources, such as shared caches and shared buses. In this contribution, we shall first discuss the challenges imposed by multi-core architectures in designing time-predictable embedded systems. Subsequently, we shall describe, in detail, a comprehensive solution to guarantee time-predictable execution on multi-core platforms. Besides, we shall also discuss different techniques to provide an overview of the state-of-the-art solutions on this topic. Through this work, we aim to provide a solid background on recent trends of research towards achieving time-predictability on multi-cores. We also highlight the limitations of the state of the art and discuss future research opportunities and challenges in accomplishing time-predictable execution on multi-core platforms.

  • 153.
    Chen, Hubie
    et al.
    Universitat Pompeu Fabra, Barcelona, Spain.
    Wrona, Michal
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Guarded Ord-Horn: A Tractable Fragment of Quantified Constraint Satisfaction, 2012. Conference paper (Other academic)
    Abstract [en]

    The first-order theory of dense linear orders without endpoints is well known to be PSPACE-complete. We present polynomial-time tractability results for fragments of this theory which are defined by syntactic restriction; in particular, our fragments can be described using the framework of quantified constraint satisfaction over Ord-Horn clauses.

  • 154.
    Chen, Yi-Ching
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Co-design of Fault-Tolerant Systems with Imperfect Fault Detection, 2014. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In recent decades, transient faults have become a critical issue in modern electronic devices. Therefore, many fault-tolerant techniques have been proposed to increase system reliability, such as active redundancy, which can be implemented in both the space and time dimensions. The main challenge of active redundancy is to introduce the minimal overhead of redundancy and to schedule the tasks. In many previous works, perfect fault detectors are assumed to simplify the problem. However, the induced resource and time overheads of such fault detectors make them impractical to implement. In order to tackle the problem, an alternative approach was proposed based on imperfect fault detectors.

    So far, only a software implementation has been studied for the proposed imperfect fault detection approach. In this thesis, we take hardware acceleration into consideration. A field-programmable gate array (FPGA) is used to accommodate tasks in hardware. In order to utilize the FPGA resources efficiently, the mapping and the selection of fault detectors for each task replica have to be carefully decided. In this work, we present two optimization approaches considering two FPGA technologies, namely statically reconfigurable FPGAs and dynamically reconfigurable FPGAs, respectively. Both approaches are evaluated and compared with the proposed software-only approach by extensive experiments.

  • 155.
    Cichowski, Patrick
    et al.
    FernUniversität in Hagen, Germany.
    Keller, Jörg
    FernUniversität in Hagen, Germany.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Energy-efficient Mapping of Task Collections onto Manycore Processors, 2013. In: Proceedings of MULTIPROG'13 workshop at HiPEAC'13 / [ed] E. Ayguade et al., 2013. Conference paper (Refereed)
    Abstract [en]

    Streaming applications consist of a number of tasks that all run concurrently, and that process data at certain rates. On manycore processors, the tasks of the streaming application must be mapped onto the cores. While load balancing of such applications has been considered, especially in the MPSoC community, we investigate energy-efficient mapping of such task collections onto manycore processors. We first derive rules that guide the mapping process and show that as long as dynamic power consumption dominates static power consumption, the latter can be ignored and the problem reduces to load balancing. When, however, static power consumption becomes a notable fraction of total power consumption, as expected in the coming years, an energy-efficient mapping must take it into account, e.g., by temporarily shutting down cores or by restricting the number of cores. We validate our findings with synthetic and real-world applications on the Intel SCC manycore processor.

  • 156.
    Cichowski, Patrick
    et al.
    FernUniversität in Hagen, Germany.
    Keller, Jörg
    FernUniversität in Hagen, Germany.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Modelling Power Consumption of the Intel SCC, 2012. In: Proceedings of the 6th Many-core Applications Research Community (MARC) Symposium / [ed] Eric Noulard, HAL Archives Ouvertes, 2012. Conference paper (Refereed)
    Abstract [en]

    The Intel SCC manycore processor supports energy-efficient computing by dynamic voltage and frequency scaling of cores on a fine-grained level. In order to enable the use of that feature in application-level energy optimizations, we report on experiments to measure power consumption in different situations. We process those measurements by a least-squares error analysis to derive the parameters of popular models for power consumption which are used on an algorithmic level. Thus, we provide a link between the worlds of hardware and high-level algorithmics.

  • 157.
    Creignou, Nadia
    et al.
    Aix Marseille University, France.
    Egly, Uwe
    Vienna University of Technology, Austria.
    Schmidt, Johannes
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Complexity Classifications for Logic-Based Argumentation, 2014. In: ACM Transactions on Computational Logic, ISSN 1529-3785, E-ISSN 1557-945X, Vol. 15, no 3, 19- p. Article in journal (Refereed)
    Abstract [en]

    We consider logic-based argumentation in which an argument is a pair (Φ, α), where the support Φ is a minimal consistent set of formulae taken from a given knowledge base (usually denoted by Δ) that entails the claim α (a formula). We study the complexity of three central problems in argumentation: the existence of a support Φ ⊆ Δ, the verification of a support, and the relevance problem (given ψ, is there a support Φ such that ψ ∈ Φ?). When arguments are given in the full language of propositional logic, these problems are computationally costly tasks: the verification problem is DP-complete; the others are Σ₂ᵖ-complete. We study these problems in Schaefer's famous framework, where the considered propositional formulae are in generalized conjunctive normal form. This means that formulae are conjunctions of constraints built upon a fixed finite set of Boolean relations Γ (the constraint language). We show that, according to the properties of this language Γ, deciding whether there exists a support for a claim in a given knowledge base is either polynomial, NP-complete, coNP-complete, or Σ₂ᵖ-complete. We present a dichotomous classification, P or DP-complete, for the verification problem and a trichotomous classification for the relevance problem into either polynomial, NP-complete, or Σ₂ᵖ-complete. These last two classifications are obtained by means of algebraic tools.
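
    A small worked illustration (our own, not drawn from the article) may help fix the definitions, in LaTeX notation:

        % Toy knowledge base Delta and one argument (Phi, alpha); illustrative only.
        \[
          \Delta = \{\, p,\ \ p \rightarrow q,\ \ \neg r \,\}, \qquad
          \Phi = \{\, p,\ \ p \rightarrow q \,\}, \qquad \alpha = q .
        \]
        % (Phi, alpha) is an argument: Phi is a consistent subset of Delta, Phi entails q,
        % and Phi is minimal (removing either formula breaks the entailment).
        % The verification problem asks whether a given pair has exactly these properties;
        % the existence problem asks whether some such support Phi exists inside Delta.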

  • 158.
    Creignou, Nadia
    et al.
    Aix Marseille University, France.
    Meier, Arne
    Leibniz University of Hannover, Germany.
    Mueller, Julian-Steffen
    Leibniz University of Hannover, Germany.
    Schmidt, Johannes
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Vollmer, Heribert
    Leibniz University of Hannover, Germany.
    Paradigms for Parameterized Enumeration, 2017. In: Theory of Computing Systems, ISSN 1432-4350, E-ISSN 1433-0490, Vol. 60, no 4, 737-758 p. Article in journal (Refereed)
    Abstract [en]

    The aim of the paper is to examine the computational complexity and algorithmics of enumeration, the task of outputting all solutions of a given problem, from the point of view of parameterized complexity. First, we define formally different notions of efficient enumeration in the context of parameterized complexity: FPT-enumeration and delayFPT. Second, we show how different algorithmic paradigms can be used in order to get parameter-efficient enumeration algorithms in a number of examples. These paradigms use well-known principles from the design of parameterized decision algorithms as well as enumeration techniques, such as kernelization and self-reducibility. The concept of kernelization, in particular, leads to a characterization of fixed-parameter tractable enumeration problems. Furthermore, we study the parameterized complexity of enumerating all models of Boolean formulas having weight at least k, where k is the parameter, in Schaefer's famous framework. We consider propositional formulas that are conjunctions of constraints taken from a fixed finite set Γ. Given such a formula and an integer k, we are interested in enumerating all the models of the formula that have weight at least k. We obtain a dichotomy classification and prove that, according to the properties of the constraint language Γ, either one can enumerate all such models in delayFPT, or no such delayFPT enumeration algorithm exists under some complexity-theoretic assumptions.

  • 159.
    Creignou, Nadia
    et al.
    Aix-Marseille Université, France.
    Meier, Arne
    Leibniz Universität, Hannover, Germany.
    Müller, Julian-Steffen
    Leibniz Universität, Hannover, Germany.
    Schmidt, Johannes
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Vollmer, Heribert
    Leibniz Universität, Hannover, Germany.
    Paradigms for Parameterized Enumeration, 2013. In: Mathematical Foundations of Computer Science 2013 / [ed] Krishnendu Chatterjee, Jirí Sgall, Springer Berlin/Heidelberg, 2013, 290-301 p. Conference paper (Refereed)
    Abstract [en]

    The aim of the paper is to examine the computational complexity and algorithmics of enumeration, the task of outputting all solutions of a given problem, from the point of view of parameterized complexity. First, we define formally different notions of efficient enumeration in the context of parameterized complexity. Second, we show how different algorithmic paradigms can be used in order to get parameter-efficient enumeration algorithms in a number of examples. These paradigms use well-known principles from the design of parameterized decision algorithms as well as enumeration techniques, such as kernelization and self-reducibility. The concept of kernelization, in particular, leads to a characterization of fixed-parameter tractable enumeration problems.

  • 160.
    Crusoe, Jonathan
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Metoder för användardriven gränssnittsprogrammering [Methods for user-driven interface programming], 2014. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    When a user decides to develop an interface for their system, this is done with a software development tool of some kind. We need to determine which development methodology should be used and how we can add more functionality to the system so that it will not become outdated. To tackle this problem we break the work up into two parts. In the first part, we examine which programming methodology is best suited for interface development through a survey that is divided into two parts. In the second part, we look at what solutions exist for adding new functionality to a tool, and we present our own solution.

  • 161.
    Cucurull, Jordi
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Software and Systems.
    Asplund, Mikael
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Nadjm-Tehrani, Simin
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Surviving Attacks in Challenged Networks, 2012. In: IEEE Transactions on Dependable and Secure Computing, ISSN 1545-5971, E-ISSN 1941-0018, Vol. 9, no 6, 917-929 p. Article in journal (Refereed)
    Abstract [en]

    In the event of a disaster, telecommunication infrastructures can be severely damaged or overloaded. Hastily formed networks can provide communication services in an ad hoc manner. These networks are challenging due to the chaotic context, where intermittent connectivity is the norm and the identity and number of participants cannot be assumed. In such environments, malicious actors may try to disrupt the communications to create more chaos for their own benefit. This paper proposes a general security framework for monitoring and reacting to disruptive attacks. It includes a collection of functions to detect anomalies, diagnose them, and perform mitigation. The measures are deployed in each node in a fully distributed fashion, but their collective impact is a significant resilience to attacks, so that the actors can disseminate information under adverse conditions. The approach is evaluated in the context of a simulated disaster area network with a manycast dissemination protocol, Random Walk Gossip, with a store-and-forward mechanism. We adopt a challenging threat model where adversaries may 1) try to drain resources both at the node level (battery life) and the network level (bandwidth), or 2) reduce message dissemination in their vicinity, without spending much of their own energy. The results demonstrate that the approach diminishes the impact of the attacks considerably.

  • 162.
    Cucurull, Jordi
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Nadjm-Tehrani, Simin
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Raciti, Massimiliano
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Modular Anomaly Detection for Smartphone Ad Hoc Communication, 2012. In: Information Security Technology for Applications: 16th Nordic Conference on Secure IT Systems, NordSec 2011, Tallinn, Estonia, October 26-28, 2011, Revised Selected Papers / [ed] Peeter Laud, Springer Berlin/Heidelberg, 2012, Vol. 7161, 65-81 p. Chapter in book (Refereed)
    Abstract [en]

    The capabilities of the modern smartphones make them the obvious platform for novel mobile applications. The open architectures, however, also create new vulnerabilities. Measures for prevention, detection, and reaction need to be explored with the peculiarities that resource-constrained devices impose. Smartphones, in addition to cellular broadband network capabilities, include WiFi interfaces that can even be deployed to set up a mobile ad hoc network (MANET). While intrusion detection in MANETs is typically evaluated with network simulators, we argue that it is important to implement and test the solutions in real devices to evaluate their resource footprint. This paper presents a modular implementation of an anomaly detection and mitigation mechanism on top of a dissemination protocol for intermittently-connected MANETs. The overhead of the security solution is evaluated in a small testbed based on three Android-based handsets and a laptop. The study shows the feasibility of the statistics-based anomaly detection regime, having low CPU usage, little added latency, and acceptable memory footprint.
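
    As a rough, hypothetical illustration of what a statistics-based anomaly detection regime of this kind can look like (not the detector implemented in the chapter; feature, threshold and names are invented), consider:

        # Hedged sketch: learn a statistical profile of "normal" behaviour offline,
        # then flag observations that deviate strongly from it.
        import statistics

        class SimpleAnomalyDetector:
            def __init__(self, normal_samples, k=3.0):
                self.mean = statistics.mean(normal_samples)
                self.std = statistics.stdev(normal_samples)
                self.k = k                      # how many standard deviations count as anomalous

            def is_anomalous(self, value):
                return abs(value - self.mean) > self.k * self.std

        # Profile built from message rates observed during normal dissemination.
        detector = SimpleAnomalyDetector([10, 12, 11, 9, 13, 10, 11])
        print(detector.is_anomalous(11))    # False: within the normal profile
        print(detector.is_anomalous(120))   # True: e.g. a flooding or draining attempt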

  • 163.
    Dag, Antymos
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Autonomous Indoor Navigation System for Mobile Robots, 2016. Independent thesis, Basic level (degree of Bachelor), 10,5 credits / 16 HE credits. Student thesis
    Abstract [en]

    With an increasing need for greater traffic safety, there is an increasing demand for means by which solutions to the traffic safety problem can be studied. The purpose of this thesis is to investigate the feasibility of using an autonomous indoor navigation system as a component in a demonstration system for studying cooperative vehicular scenarios. Our method involves developing and evaluating such a navigation system. Our navigation system uses a pre-existing localization system based on passive RFID, odometry and a particle filter. The localization system is used to estimate the robot pose, which is used to calculate a trajectory to the goal. A control system with a feedback loop is used to control the robot actuators and to drive the robot to the goal.

     

    The results of our evaluation tests show that the system generally fulfills the performance requirements stated for the tests. There is however some uncertainty about the consistency of its performance. Results did not indicate that this was caused by the choice of localization techniques. The conclusion is that an autonomous navigation system using the aforementioned localization techniques is plausible for use in a demonstration system. However, we suggest that the system is further tested and evaluated before it is used with applications where accuracy is prioritized.

  • 164.
    Danylenko, Antonina
    et al.
    Linnaeus University, Växjö.
    Löwe, Welf
    Linnaeus University, Växjö.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Comparing Machine Learning Approaches for Context-Aware Composition, 2011. In: Software Composition / [ed] Sven Apel, Ethan Jackson, Springer, 2011, 18-33 p. Conference paper (Refereed)
    Abstract [en]

    Context-Aware Composition allows optimal variants of algorithms, data structures, and schedules to be selected automatically at runtime using generalized dynamic Dispatch Tables. These tables grow exponentially with the number of significant context attributes. To make Context-Aware Composition scale, we suggest four alternative implementations to Dispatch Tables, all well known in the field of machine learning: Decision Tree, Decision Diagram, Naive Bayes, and Support Vector Machine classifiers. We assess their decision overhead and memory consumption theoretically and practically in a number of experiments on different hardware platforms. Decision Diagrams turn out to be more compact than Dispatch Tables, almost as accurate, and faster in decision making. Using Decision Diagrams in Context-Aware Composition leads to better scalability, i.e., Context-Aware Composition can be applied at more program points and consider more context attributes than before.

  • 165.
    Dash, Assmitra
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Minimizing Test Time through Test Flow Optimization in 3D-SICs, 2013. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    3D stacked ICs (3D-SICs) with multiple dies interconnected by through-silicon vias (TSVs) are considered a technology driver and have proven to have overwhelming advantages over traditional ICs with a single die in a package in terms of performance, power consumption and silicon overhead. However, these “super chips” bring new challenges to the process of IC manufacturing, among which testing 3D-SICs is the major and most complex issue to deal with. In traditional ICs, tests can usually be performed at two stages (test instances), namely a wafer sort and a package test. For 3D-SICs, on the other hand, tests can be performed after each stacking event where a new die is stacked on a partial stack. This expands the set of available test instances. A combination of selected test instances where a test is performed (active test instances) is known as a test flow. Test time is a major contributor to the total test cost, and it changes with the selected test flow. Therefore, choosing a cost-effective test flow which minimizes the test time is essential.

    This thesis focuses on finding an optimal test flow which minimizes the test time for a given 3D-SIC. A mathematical model has been developed to evaluate the test time of any test flow. Then a heuristic has been proposed for finding a near-optimal test flow which minimizes the test time. The performance of this approach in terms of computation time and efficiency has been compared against the minimum test time obtained by exhaustive search. The heuristic gives good results compared to exhaustive search with much lower computation time.

  • 166.
    Dastgeer, Usman
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Performance-aware Component Composition for GPU-based systems, 2014. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis addresses issues associated with efficiently programming modern heterogeneous GPU-based systems, containing multicore CPUs and one or more programmable Graphics Processing Units (GPUs). We use ideas from component-based programming to address programming, performance and portability issues of these heterogeneous systems. Specifically, we present three approaches that all use the idea of having multiple implementations for each computation; performance is achieved/retained either a) by selecting a suitable implementation for each computation on a given platform or b) by dividing the computation work across different implementations running on CPU and GPU devices in parallel.

    In the first approach, we work on a skeleton programming library (SkePU) that provides high-level abstraction while making intelligent  implementation selection decisions underneath either before or during the actual program execution. In the second approach, we develop a composition tool that parses extra information (metadata) from XML files, makes certain decisions online, and, in the end, generates code for making the final decisions at runtime. The third approach is a framework that uses source-code annotations and program analysis to generate code for the runtime library to make the selection decision at runtime. With a generic performance modeling API alongside program analysis capabilities, it supports online tuning as well as complex program transformations.

    These approaches differ in terms of genericity, intrusiveness, capabilities and knowledge about the program source-code; however, they all demonstrate usefulness of component programming techniques for programming GPU-based systems. With experimental evaluation, we demonstrate how all three approaches, although different in their own way, provide good performance on different GPU-based systems for a variety of applications.

  • 167.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Enmyren, Johan
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Auto-tuning SkePU: A multi-backend skeleton programming framework for multi-GPU systems, 2011. In: IWMSE '11 Proceedings of the 4th International Workshop on Multicore Software Engineering, New York, NY, USA: Association for Computing Machinery (ACM), 2011, 25-32 p. Conference paper (Other academic)
    Abstract [en]

    SkePU is a C++ template library that provides a simple and unified interface for specifying data-parallel computations with the help of skeletons on GPUs using CUDA and OpenCL. The interface is also general enough to support other architectures, and SkePU implements both a sequential CPU and a parallel OpenMP backend. It also supports multi-GPU systems. Currently available skeletons in SkePU include map, reduce, mapreduce, map-with-overlap, maparray, and scan. The performance of SkePU generated code is comparable to that of hand-written code, even for more complex applications such as ODE solving.

    In this paper, we discuss initial results from auto-tuning SkePU using an off-line, machine learning approach where we adapt skeletons to a given platform using training data. The prediction mechanism at execution time uses off-line pre-calculated estimates to construct an execution plan for any desired configuration with minimal overhead. The prediction mechanism accurately predicts execution time for repetitive executions and includes a mechanism to predict execution time for user functions of different complexity. The tuning framework covers selection between different backends as well as choosing optimal parameter values for the selected backend. We will discuss our approach and initial results obtained for different skeletons (map, mapreduce, reduce).
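
    For readers unfamiliar with skeleton programming, the following is a minimal, hypothetical sketch of the style of computation such skeletons express (Python used purely for illustration; this is not the SkePU C++ interface, and all names are invented):

        # Hedged sketch of the skeleton idea: the programmer composes data-parallel
        # patterns (map, reduce, ...) and a library/auto-tuner decides how and where
        # (CPU, OpenMP, CUDA, OpenCL) each call actually runs.
        from functools import reduce as fold

        def map_skeleton(f, xs):
            # In a real skeleton library this would dispatch to the backend chosen
            # by the auto-tuner's execution plan; here it is plain sequential code.
            return [f(x) for x in xs]

        def reduce_skeleton(op, xs, init):
            return fold(op, xs, init)

        # A mapreduce-style computation: squared Euclidean norm of a vector.
        v = [1.0, 2.0, 3.0, 4.0]
        norm_sq = reduce_skeleton(lambda a, b: a + b,
                                  map_skeleton(lambda x: x * x, v), 0.0)
        print(norm_sq)   # 30.0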

  • 168.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    A Framework for Performance-aware Composition of Applications for GPU-based Systems, 2013. Conference paper (Refereed)
    Abstract [en]

    User-level components of applications can be made performance-aware by annotating them with performance model and other metadata. We present a component model and a composition framework for the performance-aware composition of applications for modern GPU-based systems from such components, which may expose multiple implementation variants. The framework targets the composition problem in an integrated manner, with particular focus on global performance-aware composition across multiple invocations. We demonstrate several key features of our framework relating to performance-aware composition including implementation selection, both with performance characteristics being known (or learned) beforehand as well as cases when they are learned at runtime. We also demonstrate hybrid execution capabilities of our framework on real applications. Furthermore, as an important step towards global composition, we present a bulk composition technique that can make better composition decisions by considering information about upcoming calls along with data flow information extracted from the source program by static analysis, thus improving over the traditional greedy performance-aware policy that only considers the current call for optimization.

  • 169.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    A performance-portable generic component for 2D convolution computations on GPU-based systems, 2012. In: Proceedings of the Fifth International Workshop on Programmability Issues for Heterogeneous Multicores (MULTIPROG-2012) at the HiPEAC-2012 conference, Paris, Jan. 2012 / [ed] E. Ayguade, B. Gaster, L. Howes, P. Stenström, O. Unsal, 2012. Conference paper (Refereed)
    Abstract [en]

    In this paper, we describe our work on providing a generic yet optimized GPU (CUDA/OpenCL) implementation for the 2D MapOverlap skeleton. We explain our implementation with the help of a 2D convolution application, implemented using the newly developed skeleton. The memory (constant and shared memory) and adaptive tiling optimizations are applied and their performance implications are evaluated on different classes of GPUs. We present two different metrics to calculate the optimal tiling factor dynamically in an automated way which helps in retaining best performance without manual tuning while moving to new GPU architectures. With our approach, we can achieve average speedups by a factor of 3.6, 2.3, and 2.4 over an otherwise optimized (without tiling) implementation on NVIDIA C2050, GTX280 and 8800 GT GPUs respectively. Above all, the performance portability is achieved without requiring any manual changes in the skeleton program or the skeleton implementation.

  • 170.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    A performance-portable generic component for 2D convolution computations on GPU-based systems, 2011. In: Fourth Swedish Workshop on Multi-Core Computing MCC-2011: November 23-25, 2011, Linköping University, Linköping, Sweden / [ed] Christoph Kessler, Linköping: Linköping University, 2011, 39-44 p. Conference paper (Other academic)
    Abstract [en]

    In this paper, we describe our work on providing a generic yet optimized GPU (CUDA/OpenCL) implementation for the 2D MapOverlap skeleton. We explain our implementation with the help of a 2D convolution application, implemented using the newly developed skeleton.

  • 171.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Conditional component composition for GPU-based systems, 2014. In: Proc. Seventh Workshop on Programmability Issues for Multi-Core Computers (MULTIPROG-2014) at HiPEAC-2014, Vienna, Austria, Jan. 2014, Vienna, Austria: HiPEAC NoE, 2014. Conference paper (Refereed)
    Abstract [en]

    User-level components can expose multiple functionally equivalent implementations with different resource requirements and performance characteristics. A composition framework can then choose a suitable implementation for each component invocation guided by an objective function (execution time, energy etc.). In this paper, we describe the idea of conditional composition which enables the component writer to specify constraints on the selectability of a given component implementation based on information about the target system and component call properties. By incorporating such information, more informed and user-guided composition decisions can be made and thus more efficient code be generated, as shown with an example scenario for a GPU-based system.
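
    A minimal sketch of the idea, with invented variant names, conditions and call properties (not the paper's actual metadata format), could look as follows:

        # Hedged sketch of conditional composition: each implementation variant carries
        # a selectability condition over target-system and call properties, and only
        # variants whose condition holds are considered by the performance-aware selector.
        VARIANTS = [
            {"name": "cpu_seq", "selectable": lambda ctx: True},
            {"name": "openmp",  "selectable": lambda ctx: ctx["cores"] > 1},
            {"name": "cuda",    "selectable": lambda ctx: ctx["has_gpu"] and ctx["n"] >= 100_000},
        ]

        def eligible_variants(ctx):
            # Filter variants by their conditions before performance-based selection.
            return [v["name"] for v in VARIANTS if v["selectable"](ctx)]

        # A call context on a system without a GPU: the CUDA variant is never considered.
        print(eligible_variants({"cores": 8, "has_gpu": False, "n": 1_000_000}))
        # -> ['cpu_seq', 'openmp']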

  • 172.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Performance-aware Composition Framework for GPU-based Systems, 2015. In: Journal of Supercomputing, ISSN 0920-8542, E-ISSN 1573-0484, Vol. 71, no 12, 4646-4662 p. Article in journal (Refereed)
    Abstract [en]

    User-level components of applications can be made performance-aware by annotating them with performance model and other metadata. We present a component model and a composition framework for the automatically optimized composition of applications for modern GPU-based systems from such components, which may expose multiple implementation variants. The framework targets the composition problem in an integrated manner, with the ability to do global performance-aware composition across multiple invocations. We demonstrate several key features of our framework relating to performance-aware composition including implementation selection, both with performance characteristics being known (or learned) beforehand as well as cases when they are learned at runtime. We also demonstrate hybrid execution capabilities of our framework on real applications. Furthermore, we present a bulk composition technique that can make better composition decisions by considering information about upcoming calls along with data flow information extracted from the source program by static analysis. The bulk composition improves over the traditional greedy performance aware policy that only considers the current call for optimization.

  • 173.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Smart Containers and Skeleton Programming for GPU-Based Systems, 2016. In: International Journal of Parallel Programming, ISSN 0885-7458, E-ISSN 1573-7640, Vol. 44, no 3, 506-530 p. Article in journal (Refereed)
    Abstract [en]

    In this paper, we discuss the role, design and implementation of smart containers in the SkePU skeleton library for GPU-based systems. These containers provide an interface similar to C++ STL containers but internally perform runtime optimization of data transfers and runtime memory management for their operand data on the different memory units. We discuss how these containers can help in achieving asynchronous execution for skeleton calls while providing implicit synchronization capabilities in a data consistent manner. Furthermore, we discuss the limitations of the original, already optimizing memory management mechanism implemented in SkePU containers, and propose and implement a new mechanism that provides stronger data consistency and improves performance by reducing communication and memory allocations. With several applications, we show that our new mechanism can achieve significantly (up to 33.4 times) better performance than the initial mechanism for page-locked memory on a multi-GPU based system.
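
    The coherence idea behind such containers can be sketched as follows (a deliberately simplified, hypothetical model in Python; the valid-copy tracking and method names do not correspond to SkePU's actual implementation):

        # Hedged sketch: the container tracks which memory units hold a valid copy of
        # its data, transfers lazily only when an operand is actually needed somewhere,
        # and invalidates stale copies on writes (data consistency).
        class SmartVector:
            def __init__(self, host_data):
                self.copies = {"host": list(host_data)}   # memory unit -> data copy
                self.valid = {"host"}                     # units holding a valid copy

            def ensure_on(self, unit):
                # Return a valid copy on `unit`, copying (a stand-in for DMA) only if needed.
                if unit not in self.valid:
                    src = next(iter(self.valid))
                    self.copies[unit] = list(self.copies[src])
                    self.valid.add(unit)
                return self.copies[unit]

            def write_on(self, unit, data):
                # A write on one unit invalidates all other copies.
                self.copies[unit] = data
                self.valid = {unit}

        # A skeleton call computes on the GPU; the host reads the result back lazily.
        v = SmartVector(range(8))
        gpu_data = v.ensure_on("gpu0")                   # first GPU use: one transfer
        v.write_on("gpu0", [x * 2 for x in gpu_data])    # result invalidates the host copy
        print(v.ensure_on("host"))                       # read-back happens only here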

  • 174.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Li, Lu
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Adaptive Implementation Selection in the SkePU Skeleton Programming Library, 2013. In: Advanced Parallel Processing Technologies (APPT-2013), Proceedings / [ed] Chengyung Wu and Albert Cohen, 2013, 170-183 p. Conference paper (Refereed)
    Abstract [en]

    In earlier work, we have developed the SkePU skeleton programming library for modern multicore systems equipped with one or more programmable GPUs. The library internally provides four types of implementations (implementation variants) for each skeleton: serial C++, OpenMP, CUDA and OpenCL targeting either CPU or GPU execution respectively. Deciding which implementation would run faster for a given skeleton call depends upon the computation, problem size(s), system architecture and data locality.

    In this paper, we present our work on automatic selection between these implementation variants by an offline machine learning method which generates a compact decision tree with low training overhead. The proposed selection mechanism is flexible yet high-level allowing a skeleton programmer to control different training choices at a higher abstraction level. We have evaluated our optimization strategy with 9 applications/kernels ported to our skeleton library and achieve on average more than 94% (90%) accuracy with just 0.53% (0.58%) training space exploration on two systems. Moreover, we discuss one application scenario where local optimization considering a single skeleton call can prove sub-optimal, and propose a heuristic for bulk implementation selection considering more than one skeleton call to address such application scenarios.
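
    A hypothetical, much-simplified sketch of this offline-training / runtime-lookup split is given below (scikit-learn is used merely as a stand-in; the features, backends and measurements are invented and do not reproduce the paper's method):

        # Hedged sketch: time each implementation variant offline on sample problem
        # sizes, train a compact decision tree on the winners, and use one cheap tree
        # lookup per skeleton call at runtime to pick a backend.
        from sklearn.tree import DecisionTreeClassifier

        BACKENDS = ["cpu_seq", "openmp", "cuda"]

        # Offline phase: (problem size) -> index of the measured fastest backend.
        sizes = [[1_000], [10_000], [100_000], [1_000_000], [10_000_000]]
        winners = [0, 1, 1, 2, 2]                       # illustrative measurements

        model = DecisionTreeClassifier(max_depth=3).fit(sizes, winners)

        def select_backend(problem_size):
            # Runtime phase: one tree lookup per call.
            return BACKENDS[int(model.predict([[problem_size]])[0])]

        print(select_backend(5_000))          # likely 'openmp' under the toy training data
        print(select_backend(50_000_000))     # likely 'cuda'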

  • 175.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Li, Lu
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    The PEPPHER composition tool: performance-aware composition for GPU-based systems, 2014. In: Computing, ISSN 0010-485X, E-ISSN 1436-5057, Vol. 96, no 12, 1195-1211 p. Article in journal (Refereed)
    Abstract [en]

    The PEPPHER (EU FP7 project) component model defines the notion of component, interface and meta-data for homogeneous and heterogeneous parallel systems. In this paper, we describe and evaluate the PEPPHER composition tool, which explores the application’s components and their implementation variants, generates the necessary low-level code that interacts with the runtime system, and coordinates the native compilation and linking of the various code units to compose the overall application code to optimize performance. We discuss the concept of smart containers and its benefits for reducing dispatch overhead, exploiting implicit parallelism across component invocations and runtime optimization of data transfers. In an experimental evaluation with several applications, we demonstrate that the composition tool provides a high-level programming front-end while effectively utilizing the task-based PEPPHER runtime system (StarPU) underneath for different usage scenarios on GPU-based systems.

  • 176.
    Dastgeer, Usman
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Li, Lu
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    The PEPPHER Composition Tool: Performance-Aware Dynamic Composition of Applications for GPU-Based Systems, 2012. In: High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion, IEEE, 2012, 711-720 p. Conference paper (Refereed)
    Abstract [en]

    The PEPPHER component model defines an environment for annotation of native C/C++ based components for homogeneous and heterogeneous multicore and manycore systems, including GPU and multi-GPU based systems. For the same computational functionality, captured as a component, different sequential and explicitly parallel implementation variants using various types of execution units might be provided, together with metadata such as explicitly exposed tunable parameters. The goal is to compose an application from its components and variants such that, depending on the run-time context, the most suitable implementation variant will be chosen automatically for each invocation. We describe and evaluate the PEPPHER composition tool, which explores the application's components and their implementation variants, generates the necessary low-level code that interacts with the runtime system, and coordinates the native compilation and linking of the various code units to compose the overall application code. With several applications, we demonstrate how the composition tool provides a high-level programming front-end while effectively utilizing the task-based PEPPHER runtime system (StarPU) underneath.

  • 177.
    Delosières, Laurent
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Nadjm-Tehrani, Simin
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    BATMAN Store-and-Forward: the Best of the Two Worlds, 2012. In: Pervasive Computing and Communications Workshops (PERCOM Workshops), IEEE, 2012, 721-727 p. Conference paper (Refereed)
    Abstract [en]

    The need for communication is highest in disaster scenarios when the infrastructure is also adversely affected. A recent protocol for ad hoc communication, the BATMAN protocol, is dependent on minimal infrastructure, in the form of mesh nodes that are used as access points, or nodes acting as an intermediary in a multi-hop connection. While BATMAN works well in a scenario in which there is a multihop path from senders to receivers at all times, it will drop the packets in intermittently-connected networks. Moreover, although implementation on a device is essential as a proof of concept, performing large scale evaluations requires a simulation platform in which variations in the operating environment can be studied. This paper is about adding the store-and-forward mechanism to the routing component in BATMAN nodes, to overcome intermittent connectivity through mobility. We describe an extension of the protocol, SF-BATMAN, that has been implemented in an interoperable manner with BATMAN, i.e. with no added signaling, and no change of basic BATMAN settings. We have implemented SF-BATMAN in a packet level simulator (NS3), and demonstrated its performance in a scenario that consists of two regions of connectivity: a well-connected mesh network and a set of sparser subnetworks. We show that the added capability enhances the performance of BATMAN, through an increase of the delivery ratio by 20% with a lower overhead, while it exhibits a similar latency in comparable network scenarios.
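
    The store-and-forward extension can be sketched, in very simplified form and with hypothetical names, as buffering packets whenever no route exists and flushing the buffer when routes (re)appear; it does not model BATMAN's originator messages or SF-BATMAN's actual implementation:

        # Hedged sketch of store-and-forward on top of a routing layer.
        from collections import deque

        class StoreAndForwardNode:
            def __init__(self):
                self.routes = {}          # destination -> next hop (kept up to date by routing)
                self.buffer = deque()     # packets waiting for connectivity

            def send(self, packet, dest):
                if dest in self.routes:
                    self.transmit(packet, self.routes[dest])
                else:
                    self.buffer.append((packet, dest))   # store instead of dropping

            def on_route_update(self, routes):
                # Mobility brings new routes: flush any buffered packets that now have one.
                self.routes = routes
                still_waiting = deque()
                while self.buffer:
                    packet, dest = self.buffer.popleft()
                    if dest in self.routes:
                        self.transmit(packet, self.routes[dest])
                    else:
                        still_waiting.append((packet, dest))
                self.buffer = still_waiting

            def transmit(self, packet, next_hop):
                print(f"forwarding {packet!r} via {next_hop}")

        node = StoreAndForwardNode()
        node.send("hello", "D")                      # no route yet: buffered, not dropped
        node.on_route_update({"D": "neighbour-3"})   # route appears: buffered packet is sent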

  • 178.
    Delshad, Payman
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Behavior Driven Development in a Large-Scale Application: Evaluation of Usage for Developing IFS Applications, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Nowadays, Agile software development methods are often used in large multisite organizations that develop large-scale applications. Behavior Driven Development (BDD) is a relatively new Agile software development process where the development process starts with acceptance tests written in a natural language. The premise of BDD is to create a common and effective process of communication between different roles in a software project to ensure that every activity can be mapped to the business goal of the application. This thesis work aims to find an effective and efficient BDD process and to evaluate its usage in a large-scale application in a large multisite organization through a series of interviews, a controlled experiment, and an online survey. Furthermore, by means of the aforementioned experiment, the study measures the impact of an experimental usage of BDD on testing quality. To discover an effective and efficient BDD process, two alternatives with automated tests that run on different architectural layers, namely client layer and web service layer, were examined. Based on the defined metrics, the alternative with automated tests that ran directly on the web service layer was chosen as the more efficient process which was compared against the existing Agile-based baseline that used automated client tests. The results show that an efficient BDD process improves the testing quality significantly which can, in turn, result in a better overall software quality.

  • 179.
    Delzanno, Giorgio
    et al.
    Università di Genova.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    A lightweight regular model checking approach for parameterized systems, 2012. In: International Journal on Software Tools for Technology Transfer (STTT), ISSN 1433-2779, E-ISSN 1433-2787, Vol. 14, no 2, 207-222 p. Article in journal (Refereed)
    Abstract [en]

    In recent years, we have designed a lightweight approach to regular model checking, tailored specifically to parameterized systems with global conditions. Our approach combines the strength of regular languages, used for representing infinite sets of configurations, with symbolic model checking and approximations. In this paper, we give a uniform presentation of several variations of a symbolic backward reachability scheme in which different classes of regular expressions are used in place of BDDs. The classification of the proposed methods is based on the precision of the resulting approximated analysis.
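
    The generic backward-reachability scheme referred to above can be sketched as follows (a toy version in which explicit state sets stand in for the regular languages used in the paper; the pre_image helper is a hypothetical placeholder):

        # Hedged sketch of symbolic backward reachability: iterate predecessors of the
        # bad configurations to a fixed point and check whether an initial configuration
        # is reached. In the paper the sets are (approximations of) regular languages.
        def backward_reachability(bad, initial, pre_image):
            reached = set(bad)
            frontier = set(bad)
            while frontier:
                if reached & set(initial):
                    return "unsafe"
                frontier = pre_image(frontier) - reached   # one symbolic pre-image step
                reached |= frontier
            return "safe"

        # Toy system: states 0..3 with transitions i -> i+1; state 3 is bad, 0 is initial.
        pre_image = lambda states: {s - 1 for s in states if s > 0}
        print(backward_reachability({3}, {0}, pre_image))   # 'unsafe': 0 -> 1 -> 2 -> 3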

  • 180.
    Disqah, Arash
    et al.
    Faculty of Engineering and Environment, Northumbria University, Newcastle, UK.
    Maheri, Alireza
    Faculty of Engineering and Environment, Northumbria University, Newcastle, UK.
    Busawon, Krishna
    Faculty of Engineering and Environment, Northumbria University, Newcastle, UK.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Standalone DC Microgrids as Complementarity Dynamical Systems: Modeling and Applications, 2014. In: Control Engineering Practice, ISSN 0967-0661, Vol. 35, no 10, 102-112 p. Article in journal (Refereed)
    Abstract [en]

    It is well known that, due to their bimodal operation and the discontinuous differential states of batteries, standalone microgrids belong to the class of hybrid dynamical systems of non-Filippov type. In this work, however, standalone microgrids are presented as complementarity systems (CSs) of the Filippov type, and this formulation is then used to develop a multivariable nonlinear model predictive control (NMPC)-based load tracking strategy as well as Modelica models for long-term simulation purposes. The developed load tracking strategy is a multi-source maximum power point tracker (MPPT) that also regulates the DC bus voltage at its nominal value with a maximum error of ±2.0% despite substantial demand and supply variations.

  • 181.
    Drabent, Wlodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    A simple correctness proof for magic transformation, 2012. In: Theory and Practice of Logic Programming, ISSN 1471-0684, E-ISSN 1475-3081, Vol. 12, no 6, 929-936 p. Article in journal (Refereed)
    Abstract [en]

    The paper presents a simple and concise proof of correctness of the magic transformation. We believe that it may provide a useful example of formal reasoning about logic programs. The correctness property concerns the declarative semantics. The proof, however, refers to the operational semantics (LD-resolution) of the source programs. Its conciseness is due to applying a suitable proof method.
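
    For orientation, a standard textbook-style instance of the magic transformation (our own example, not one from the paper) is shown below for a one-clause program and a ground query, in LaTeX notation:

        % Source program:  p(X) <- q(X), r(X).      Query:  p(a).
        % Its magic transformation:
        \[
        \begin{aligned}
          & magic\_p(a). \\
          & magic\_q(X) \leftarrow magic\_p(X). \\
          & magic\_r(X) \leftarrow magic\_p(X),\ q(X). \\
          & p(X) \leftarrow magic\_p(X),\ q(X),\ r(X).
        \end{aligned}
        \]
        % The magic predicates restrict bottom-up evaluation to atoms relevant to the
        % query p(a); the correctness property proved in the paper states that the
        % transformed program computes the same answers for the query as the original.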

  • 182.
    Drabent, Wlodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology. Institute of Computer Science, Polish Academy of Sciences.
    Logic + Control: An Example, 2012. In: Technical Communications of the 28th International Conference on Logic Programming (ICLP'12) / [ed] Agostino Dovier and Vítor Santos Costa, Dagstuhl Publishing, 2012, 301-311 p. Conference paper (Other academic)
    Abstract [en]

    We present a Prolog program (the SAT solver of Howe and King) as a logic program with added control. The control consists of a selection rule (delays of Prolog) and pruning of the search space. We construct the logic program together with proofs of its correctness and completeness, with respect to a formal specification. This is augmented by a proof of termination under any selection rule. Correctness and termination are inherited by the Prolog program; the change of selection rule preserves completeness. We prove that completeness is also preserved by one case of pruning; for the other, an informal justification is presented.

    For proving correctness we use a method which should be well known but is often neglected. A contribution of this paper is a method for proving completeness. In particular, we introduce a notion of semi-completeness, for which a local sufficient condition exists.

    We compare the proof methods with declarative diagnosis (algorithmic debugging). We introduce a method of proving that a certain kind of pruning preserves completeness. We argue that the proof methods correspond to natural declarative thinking about programs, and that they can be used, formally or informally, in every-day programming.

  • 183.
    Drabent, Wlodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Logic + control: On program construction and verification, 2017. In: Theory and Practice of Logic Programming, ISSN 1471-0684 (Print), E-ISSN 1475-3081 (Online). Article in journal (Refereed)
    Abstract [en]

    This paper presents an example of formal reasoning about the semantics of a Prolog program of practical importance (the SAT solver of Howe and King). The program is treated as a definite clause logic program with added control. The logic program is constructed by means of stepwise refinement, hand in hand with its correctness and completeness proofs. The proofs are declarative – they do not refer to any operational semantics. Each step of the logic program construction follows a systematic approach to constructing programs which are provably correct and complete. We also prove that correctness and completeness of the logic program is preserved in the final Prolog program. Additionally, we prove termination, occur-check freedom and non-floundering.

    Our example shows how dealing with “logic” and with “control” can be separated. Most of the proofs can be done at the “logic” level, abstracting from any operational semantics.

    The example employs approximate specifications; they are crucial in simplifying reasoning about logic programs. It also shows that the paradigm of semantics-preserving program transformations may not be sufficient. We suggest considering transformations which preserve correctness and completeness with respect to an approximate specification.

  • 184.
    Drabent, Wlodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    On completeness of logic programs, 2015. In: Logic-Based Program Synthesis and Transformation - 24th International Symposium (LOPSTR 2014), 2015, Vol. 8981, pp. 261-278. Conference paper (Refereed)
    Abstract [en]

    Program correctness (in imperative and functional programming) splits in logic programming into correctness and completeness. Completeness means that a program produces all the answers required by its specification. Little work has been devoted to reasoning about completeness. This paper presents a few sufficient conditions for completeness of definite programs. We also study preserving completeness under some cases of pruning of SLD-trees (e.g. due to using the cut). We treat logic programming as a declarative paradigm, abstracting from any operational semantics as far as possible. We argue that the proposed methods are simple enough to be applied, possibly at an informal level, in practical Prolog programming. We point out the importance of approximate specifications.

  • 185.
    Drabent, Wlodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Institute of Computer Science, Polish Academy of Sciences.
    On definite program answers and least Herbrand models, 2016. In: Theory and Practice of Logic Programming, ISSN 1471-0684, E-ISSN 1475-3081, Vol. 16, no. 4, pp. 498-508. Article in journal (Refereed)
    Abstract [en]

    A sufficient and necessary condition is given under which least Herbrand models exactly characterize the answers of definite clause programs.

  • 186.
    Drabent, Wlodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Polish Academy of Sciences, Poland.
    Proving completeness of logic programs with the cut, 2017. In: Formal Aspects of Computing, ISSN 0934-5043, E-ISSN 1433-299X, Vol. 29, no. 1, pp. 155-172. Article in journal (Refereed)
    Abstract [en]

    Completeness of a logic program means that the program produces all the answers required by its specification. The cut is an important construct of the programming language Prolog. It prunes part of the search space, which may result in a loss of completeness. This paper proposes a way of proving completeness of programs with the cut. The semantics of the cut is formalized by describing how SLD-trees are pruned. A sufficient condition for completeness is presented, proved sound, and illustrated by examples.

  • 187.
    Drabent, Włodzimierz
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Institute of Computer Science, Polish Academy of Sciences, Warszawa, Poland.
    Correctness and Completeness of Logic Programs, 2016. In: ACM Transactions on Computational Logic, ISSN 1529-3785, E-ISSN 1557-945X, Vol. 17, no. 3, article 18. Article in journal (Refereed)
    Abstract [en]

    We discuss proving correctness and completeness of definite clause logic programs.  We propose a method for proving completeness, while for proving correctness we employ a method which should be well known but is often neglected.  Also, we show how to prove completeness and correctness in the presence of SLD-tree pruning, and point out that approximate specifications simplify specifications and proofs.

    We compare the proof methods to declarative diagnosis (algorithmic debugging), showing that approximate specifications eliminate a major drawback of the latter. We argue that our proof methods reflect natural declarative thinking about programs, and that they can be used, formally or informally, in everyday programming.

  • 188.
    Durairaj, Selva Ganesh
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Parallelize Automated Tests in a Build and Test Environment, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis investigates possible solutions for reducing the total time spent on testing and the waiting times when running multiple automated test cases in a test framework. The “Automated Test Framework”, developed by Axis Communications AB, is used to write functional tests for both the hardware and the software of a resource; this thesis considers the functional tests that test the software. In the current infrastructure, tests are executed sequentially and resources are allocated using a First In First Out (FIFO) scheduling algorithm. From the user’s point of view, it is inefficient to wait for many hours to run tests that take only a few minutes to execute. The thesis consists of two main parts: (1) identifying a plugin that suits the framework and executes the tests in parallel, which reduces the overall execution time of the tests, and (2) analyzing various scheduling algorithms in order to address the resource allocation problem that arose, due to limited resource availability, when the tests were run in parallel. Distributing multiple tests across several resources and executing them in parallel improves the test strategy and thereby reduces the overall execution times of test suites. Case studies were created to emulate the problematic scenarios in the company, and sample tests were written that reflect the real tests in the framework. Due to the complexity of the current architecture and the limited resources available for running the tests in parallel, a simulator was developed with the identified plugin on a multi-core computer, with each core simulating a resource. Multiple tests were run using the simulator in order to explore, check and assess whether the overall execution time of the tests could be reduced. When the automated tests were run in parallel, resource allocation became a problem, since only a limited number of resources were available. To address this problem, scheduling algorithms were considered. A prototype was developed to mimic the behaviour of a scheduling plugin, and the scheduling algorithms were implemented in the prototype. Sets of values were given as input to the prototype and tested with the scenarios described in the case studies. The results from the prototype are used to analyze the impact of various scheduling algorithms on reducing the waiting times of the tests. The combined use of the simulator and the scheduler prototype helped in understanding how to minimize the total time spent on testing and how to improve the resource allocation process.
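
    As a rough illustration of the trade-off studied here (not the Axis framework or the thesis's actual prototype; the workload, resource count and the shortest-job-first comparison are made up), the following Python sketch assigns queued tests to a fixed pool of resources and reports average waiting times under two orderings:

        import heapq

        def simulate(tests, n_resources):
            """Start tests (name, duration) in list order on a pool of identical
            resources; each test runs on the first resource that becomes free.
            Returns the waiting time of each test, assuming all tests are
            submitted at time 0."""
            free_at = [0.0] * n_resources       # time at which each resource is free
            heapq.heapify(free_at)
            waits = {}
            for name, duration in tests:
                start = heapq.heappop(free_at)  # earliest available resource
                waits[name] = start
                heapq.heappush(free_at, start + duration)
            return waits

        # Hypothetical test suite: two slow tests queued ahead of six fast ones.
        suite = [("long_a", 60), ("long_b", 60)] + [(f"fast_{i}", 2) for i in range(6)]

        fifo = simulate(suite, n_resources=2)                             # queue order (FIFO)
        sjf = simulate(sorted(suite, key=lambda t: t[1]), n_resources=2)  # shortest job first
        print("average wait, FIFO:", sum(fifo.values()) / len(fifo))
        print("average wait, SJF: ", sum(sjf.values()) / len(sjf))

    With these made-up numbers the fast tests wait behind the slow ones under FIFO, while ordering by duration cuts the average waiting time sharply; the thesis evaluates such scheduling alternatives against the real framework.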

  • 189.
    Eilert, Rickard
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Development of a framework for creating cross-platform TV HTML5 applications, 2015. Independent thesis, Basic level (professional degree), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    When developing HTML5 applications for TV platforms, the TV platforms provide, in addition to standard HTML5 functionality, also extra APIs for TV-specific features. These extra APIs differ between TV platforms, and that is a problem when developing an application targeting several platforms. This thesis has examined if it is possible to design a framework which provides the developer with one API that works for many platforms by wrapping their platform-specific code. The answer is yes. With success, platform-specific features including TV remote control input, video, volume, Internet connection status, TV channel streams and EPG data have been harmonised under an API in a JavaScript library. Furthermore, a build system packages the code in the way the platforms expect. The framework eases the development of TV platform HTML5 applications. At the moment, the framework supports the Pace, PC and Samsung Smart TV platforms, but it can be extended with more TV platform back-ends.
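
    A minimal sketch of the wrapping idea (in Python rather than the thesis's JavaScript library, and with entirely made-up class and method names) is an adapter layer: the application codes against one interface, and a back-end per platform hides the platform-specific calls behind it.

        from abc import ABC, abstractmethod

        class TvPlatform(ABC):
            """Common interface the application programs against; one back-end
            per TV platform hides the platform-specific API behind it."""

            @abstractmethod
            def play_video(self, url: str) -> None: ...

            @abstractmethod
            def on_remote_key(self, handler) -> None: ...

        class SamsungBackend(TvPlatform):
            def play_video(self, url: str) -> None:
                print(f"[samsung] starting player for {url}")   # would call the vendor API

            def on_remote_key(self, handler) -> None:
                print("[samsung] registering remote-control listener")

        class PcBackend(TvPlatform):
            def play_video(self, url: str) -> None:
                print(f"[pc] opening {url} in an HTML5 <video> element")

            def on_remote_key(self, handler) -> None:
                print("[pc] mapping keyboard arrows to remote keys")

        def make_backend(platform: str) -> TvPlatform:
            return {"samsung": SamsungBackend, "pc": PcBackend}[platform]()

        app = make_backend("samsung")      # application code stays the same per platform
        app.play_video("http://example.com/stream.m3u8")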

  • 190.
    Einarson, Carl
    Linköping University, Department of Computer and Information Science, Software and Systems.
    An extension of the PPSZ Algorithm to Infinite-Domain Constraint Satisfaction Problems, 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The PPSZ algorithm (Paturi et al., FOCS 1998) is the fastest known algorithm for solving k-SAT when k >= 4. Hertli et al. recently extended the algorithm to solve (d,k)-Clause Satisfaction problems ((d,k)-ClSP), for which it is the fastest known algorithm for all k >= 3 (Hertli et al., CP 2016). We analyze their algorithm and extend it to solve problems over an infinite domain. More specifically, we show how the extended algorithm can solve problems that have an infinite domain but where we can, for each instance of the problem, find a finite subset of the domain with the following properties: if there exists a solution to the problem instance, then there exists a solution using only values from this subset, and the size of this subset is polynomial in the size of the problem instance. We show numerically that our algorithm is the fastest known for problems over bounded disjunction languages for some values of k <= 500, and we look at the branching-time temporal language, which is a bounded disjunction language, to show how to transform a specific problem to (d,k)-ClSP. We also look at Allen's interval algebra but conclude that there is already a faster algorithm for solving this problem.
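
    For readers unfamiliar with this algorithm family, the sketch below shows the core randomized idea in Python for plain k-SAT. It uses a unit-clause check where PPSZ uses a stronger bounded-resolution implication test, so it is closer to the earlier PPZ algorithm, and it says nothing about the infinite-domain extension developed in the thesis; the instance and the number of rounds are made up.

        import random

        def forced_value(var, clauses, assignment):
            """Value forced on var by a clause that is not yet satisfied and whose
            only unassigned literal is over var; None if no such clause exists.
            (PPSZ proper uses a stronger test based on bounded resolution.)"""
            for clause in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                    continue                      # clause already satisfied
                unassigned = [l for l in clause if abs(l) not in assignment]
                if len(unassigned) == 1 and abs(unassigned[0]) == var:
                    return unassigned[0] > 0      # the remaining literal must be true
            return None

        def ppz_round(clauses, n_vars):
            """One round: process variables in random order, fixing forced values
            and guessing the rest; return a satisfying assignment or None."""
            assignment = {}
            for var in random.sample(range(1, n_vars + 1), n_vars):
                value = forced_value(var, clauses, assignment)
                assignment[var] = value if value is not None else random.random() < 0.5
            ok = all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
            return assignment if ok else None

        # Toy 3-SAT instance; positive integers are variables, negative their negations.
        clauses = [[1, 2, -3], [-1, 3, 4], [-2, -4, 3], [1, -3, 4]]
        for _ in range(200):                      # repeating rounds boosts the success probability
            result = ppz_round(clauses, n_vars=4)
            if result:
                print("satisfying assignment:", result)
                break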

  • 191.
    Enblom, Gustav
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eskebaek, Hannes
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Real Time Vehicle Diagnostics Using Head Mounted Displays, 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis evaluates how a head mounted display (HMD) can be used to increase usability compared to existing computer programs that are used during maintenance work on vehicles. Problems identified during a case study in a vehicle workshop are first described. As an attempt to solve some of the identified problems, a prototype application using a HMD was developed. The prototype application aids the user during troubleshooting of systems on the vehicle by guiding the mechanic with textual information and augmented reality (AR). Assessment of the prototype application was done by comparing it to the existing computer program and measuring error rate and time to completion for a predefined task. Usability was also measured using the System Usability Scale. The assessment showed that HMDs can provide higher usability in terms of efficiency and satisfaction. Furthermore, the thesis describes and discusses other possibilities and limitations of using HMDs and AR, identified both from theory and during implementation.

  • 192.
    Englund, Albin
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Suther, Magnus
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Bluetooth Low Energy som trådlös standard för hemautomation [Bluetooth Low Energy as a wireless standard for home automation], 2013. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The public has a great demand for products in the field of home automation. The latest Bluetooth standard, Bluetooth Low Energy, creates new opportunities for interesting products that simplify everyday life. Solutions such as infrared and Wi-Fi do not qualify as an energy-efficient and practical way to offer such products, which Bluetooth Low Energy does. In this report, the standard is discussed in order to account for how it can be used to automate a home.

    For this thesis, a power switch prototype and an iOS application were implemented, which were used to investigate and demonstrate a concept for how the technology can be applied to home automation. The results show that range is the main limitation of the technology. It is also shown how the signal strength may be used as a trigger to control a power switch.
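
    A toy version of such a signal-strength trigger (not the prototype from the thesis; the thresholds and the RSSI trace are made up) can be written as a small hysteresis loop over received signal strength (RSSI) readings:

        def switch_controller(rssi_readings, on_threshold=-60, off_threshold=-75):
            """Turn the power switch on when the Bluetooth signal is strong (the
            user's device is near) and off when it is weak. Two thresholds (in dBm)
            give hysteresis so the switch does not flicker when the signal hovers
            around a single cut-off value."""
            state = False
            for rssi in rssi_readings:
                if not state and rssi >= on_threshold:
                    state = True
                elif state and rssi <= off_threshold:
                    state = False
                yield state

        readings = [-90, -80, -62, -58, -65, -70, -78, -85]   # hypothetical RSSI trace
        print(list(switch_controller(readings)))
        # -> [False, False, False, True, True, True, False, False]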

    This report also describes how the system achieves interoperability by implementing a custom profile.

  • 193. Engstrom, Robert
    et al.
    Färnqvist, Tommy
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Thapper, Johan
    University of Paris Est Marne La Vallee, France.
    An Approximability-related Parameter on Graphs - Properties and Applications, 2015. In: Discrete Mathematics and Theoretical Computer Science, ISSN 1462-7264, Vol. 17, no. 1, pp. 33-66. Article in journal (Refereed)
    Abstract [en]

    We introduce a binary parameter on optimisation problems called separation. The parameter is used to relate the approximation ratios of different optimisation problems; in other words, we can convert approximability (and non-approximability) results for one problem into (non-)approximability results for other problems. Our main application is the problem (weighted) maximum H-colourable subgraph (MAX H-COL), which is a restriction of the general maximum constraint satisfaction problem (MAX CSP) to a single, binary, and symmetric relation. Using known approximation ratios for MAX k-CUT, we obtain general asymptotic approximability results for MAX H-COL for an arbitrary graph H. For several classes of graphs, we provide near-optimal results under the unique games conjecture. We also investigate separation as a graph parameter. In this vein, we study its properties on circular complete graphs. Furthermore, we establish a close connection to work by Samal on cubical colourings of graphs. This connection shows that our parameter is closely related to a special type of chromatic number. We believe that this insight may turn out to be crucial for understanding the behaviour of the parameter, and in the longer term, for understanding the approximability of optimisation problems such as MAX H-COL.

  • 194.
    Engström, Adam
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Strömsparande arkitektur för inbyggnadslinux [A power-saving architecture for embedded Linux], 2014. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The objective of this work was to evaluate and implement a number of energy saving functions for a specific embedded system. The functions were then grouped into a number of energy levels with known properties in terms of functionality, energy consumption, and transition time between the levels.

    The embedded system consisted of an AT91 ARM9 processor, GSM/GPRS modem, display, Ethernet and other peripheral units. Some energy saving methods that were considered were suspend to RAM, suspend to disk, frequency scaling, and methods for saving energy in the modem, Ethernet, USB and display backlight. The functions were grouped into levels and an interface was specified for controlling the energy level.

    It proved possible to obtain known properties within the defined energy levels, even though the partitioning of functions into these levels proved to be sub-optimal in a typical application usage scenario, because it was designed mainly for energy consumption, not usage.

    The final result is a number of energy saving functions grouped into levels, which are controllable via an application interface. Each of the levels has a known energy consumption in both loaded and unloaded mode.
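
    A minimal sketch of such a level interface (the level names, actions and power figures below are invented for illustration; the thesis defines its own levels for the AT91 system) could look as follows:

        # Each level bundles a set of energy-saving actions with known properties.
        LEVELS = {
            "full":    {"actions": [],                                       "approx_mw": 900},
            "reduced": {"actions": ["scale_cpu_frequency", "dim_backlight"], "approx_mw": 500},
            "idle":    {"actions": ["backlight_off", "ethernet_off"],        "approx_mw": 200},
            "sleep":   {"actions": ["suspend_to_ram"],                       "approx_mw": 20},
        }

        def set_energy_level(name: str) -> None:
            """Application-facing call: apply every action belonging to the level."""
            level = LEVELS[name]
            for action in level["actions"]:
                print(f"applying {action}")        # would invoke the real driver hook
            print(f"now in level '{name}' (~{level['approx_mw']} mW)")

        set_energy_level("idle")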

  • 195.
    Eriksson, Björn
    Linköping University, Department of Computer and Information Science, Software and Systems.
    A study of Bitcoin as a currency for email-based micro-transactions, 2016. Independent thesis, Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    Bitcoin is a cryptocurrency that has been the focus of much discussion lately and has attracted a large number of users. It offers many possibilities for cheap transactions and unregulated finances, which have been realized in numerous sites and applications on the web and on mobile phones. One medium that seems to have been neglected in Bitcoin's development is email. This is curious, since Bitcoin by its nature seems to have many properties that would work well with text-based messages. The purpose of this study is to analyze the current papers about Bitcoin to find the current status of email-based Bitcoin services, and to analyze whether email is a suitable medium to be used with Bitcoin. This analysis is done through a systematic literature review of current papers, followed by an examination of past and current Bitcoin companies that have used email as part of their service. In the end, the low security of email and the apparent lack of services that would benefit from an email-based Bitcoin service suggest that such a service would be hard to develop today and would not be very useful to the public.

  • 196.
    Eriksson, Joakim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Representation of asynchronous communication protocols in Scala and Akka, 2013. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis work investigates how to represent protocols for asynchronous communication in the Scala programming language and the Akka actor framework, to be run on the Java Virtual Machine (JVM). Further restrictions from the problem domain - the coexistence of multiple protocol instances sharing the same Java thread - imply that neither an asynchronous call waiting for a response nor anything else can block the underlying Java threads.

    A common way to represent asynchronous communication protocols is to use state machines. This thesis seeks a way to shrink the size and reduce the complexity of the protocol implementations by representing sequences of asynchronous communication calls (i.e. sequences of sent and received messages) as a type of procedure. The idea is to find a way to make the procedures that contain asynchronous calls look like synchronous communication procedures by hiding the asynchronous details. In other words, the resulting procedure code should show what to do and not focus so much on how to overcome the impediment of the asynchronous calls.

    With the help of an asynchronous communication protocol toy example, this report shows how such a protocol can be implemented with a combination of a state machine and a procedure representation in Scala and Akka. The procedure representation hides the asynchronous details by using Scala's support for CPS-transformed delimited continuations. As a sub-problem, this thesis also shows how to safely schedule asynchronous communication timeouts with the help of Scala and Akka within the restrictions of the thesis problem domain.

  • 197.
    Eriksson, Jonas
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Partitioning methodology validation for embedded systems design, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    As modern embedded systems become more sophisticated, the demands on their applications increase significantly. A current trend is to utilize the advances of heterogeneous platforms (i.e. platforms consisting of different computational units, e.g. CPU, FPGA or GPU), where different parts of the application can be distributed among the different computational units as software and hardware implementations. This technology can improve the application characteristics to meet requirements (e.g. execution time, power consumption and design cost), but it leads to a new challenge of finding the best combination of hardware and software implementation (referred to as a system configuration). The decisions whether a part of the application should be implemented in software (e.g. as C code) or hardware (e.g. as VHDL code) affect the entire product life-cycle. This is traditionally done manually by the developers in the early stage of the design phase. However, due to the increasing complexity of the application, the need arises for a systematic process that aids the developer in making these decisions so that the demands are met. Prior to this work, a methodology called MULTIPAR was designed to address this problem. MULTIPAR applies component-/model-based techniques to design the application, i.e. the application is modeled as a number of interconnected components, where some of the components will be implemented as software and the remaining ones as hardware. To perform the partitioning decisions, i.e. determining for each component whether it should be implemented as software or hardware, MULTIPAR proposes a set of formulas to calculate the properties of the entire system based on the properties of each component working in isolation.

    This thesis aims to show to what extent the proposed system formulas are valid. In particular, it focuses on validating the formulas that calculate the system response time, system power consumption, system static memory and system FPGA area. The formulas were validated through an industrial case study, where the system properties for different system configurations were measured and calculated by applying these formulas. The measured and calculated values for the system properties were compared by conducting a statistical analysis. The case study demonstrated that the system properties can be accurately calculated by applying the system formulas.

  • 198.
    Eriksson, Mattias
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Integrated Code Generation for Loops, 2012. In: ACM Transactions on Embedded Computing Systems, ISSN 1539-9087, E-ISSN 1558-3465, Vol. 11, no. 1. Article in journal (Refereed)
    Abstract [en]

    Code generation in a compiler is commonly divided into several phases: instruction selection, scheduling, register allocation, spill code generation, and, in the case of clustered architectures, cluster assignment. These phases are interdependent; for instance, a decision in the instruction selection phase affects how an operation can be scheduled. We examine the effect of this separation of phases on the quality of the generated code. To study this, we have formulated optimal methods for code generation with integer linear programming, first for acyclic code; we then extend this method to modulo scheduling of loops. In our experiments we compare optimal modulo scheduling, where all phases are integrated, to modulo scheduling where instruction selection and cluster assignment are done in a separate phase. The results show that, for an architecture with two clusters, the integrated method finds a better solution than the non-integrated method for 27% of the instances.
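
    As background, the core constraints of modulo scheduling (the standard textbook formulation, not the paper's full integrated ILP model) can be written as follows. With initiation interval II and issue time t_i for operation i, every dependence (i, j) with latency \ell_{ij} and loop-carried distance d_{ij} must satisfy

        t_j \;\ge\; t_i + \ell_{ij} - II \cdot d_{ij},

    and, for every resource r with R_r available units and every slot s = 0, ..., II - 1, the operations using r must not collide modulo II:

        \sum_{i \,:\, \text{$i$ uses } r,\; t_i \equiv s \pmod{II}} 1 \;\le\; R_r.

    In the integrated approach of the paper, instruction selection and cluster assignment are decided within the same model, so they influence which operations and resources these constraints range over.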

  • 199.
    Eriksson, Mattias
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Integrated Offset Assignment, 2011. In: Proceedings of the 9th Workshop on Optimizations for DSP and Embedded Systems (ODES-9) / [ed] George Cai and Tom van der Aa, 2011, pp. 47-54. Conference paper (Refereed)
    Abstract [en]

    One important part of generating code for DSP processors is to make good use of the address generation unit (AGU). In this paper we divide the code generation into three parts: (1) scheduling, (2) address register assignment, and (3) storage layout. The goal is to find out if solving these three subproblems as one big integrated problem gives better results compared to when scheduling or address register assignment is solved separately. We present optimal dynamic programming algorithms for both integrated and non-integrated code generation for DSP processors. In our experiments we find that integration is beneficial when the AGU has 1 or 2 address registers; for the other cases existing heuristics are near optimal. We also find that integrating address register assignment and storage layout gives slightly better results than integrating scheduling and storage layout. That is, address register assignment is more important than scheduling.
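
    A toy version of the cost model behind offset assignment (illustrative only; the paper's optimal dynamic-programming algorithms are not reproduced here, and the access sequence and layouts are made up): with one address register, stepping to the next accessed variable is free when its storage position is within the auto-increment/decrement range of the previous one, and costs an extra address-load instruction otherwise.

        def extra_agu_instructions(access_sequence, layout, step=1):
            """Count explicit address-register loads for one address register:
            moving between consecutive accesses is free when the distance between
            their positions in the storage layout is at most `step`; otherwise an
            address load is needed."""
            position = {var: i for i, var in enumerate(layout)}
            cost = 0
            for prev, curr in zip(access_sequence, access_sequence[1:]):
                if abs(position[curr] - position[prev]) > step:
                    cost += 1
            return cost

        accesses = list("acacbdbd")            # hypothetical variable access sequence
        print(extra_agu_instructions(accesses, layout=list("abcd")))   # naive layout  -> 6
        print(extra_agu_instructions(accesses, layout=list("acbd")))   # better layout -> 0

    Choosing the storage layout (and, in the integrated setting, the schedule that produces the access sequence) to minimize this cost is exactly the kind of interaction the paper studies.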

  • 200.
    Eriksson, Oskar
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Rydkvist, Emil
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    An in-depth analysis of dynamically rendered vector-based maps with WebGL using Mapbox GL JS, 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The regular way of displaying maps in a web browser is to download raster images from a server and lay them side by side to make up a map. If any information on the map is changed, new images have to be downloaded; the change cannot be made on the client. The introduction of WebGL opens up a whole new world of delivering advanced graphics content to the end user in a web browser. Utilizing this technology for displaying maps means only the source data is sent to the web browser, where the map is rendered using the device's GPU. This adds a number of benefits, such as the ability to change the map's appearance on the client, to add new features to the map, and often less data transfer. However, it places higher demands on the client device's hardware, as the device needs to render the map at a high enough frame rate not to appear slow and unresponsive. This thesis investigates a framework for client-side map rendering in a web browser, Mapbox GL JS, with a focus on performance. It shows how map source data can be generated and how the corresponding style rules can be constructed with performance in mind. It provides benchmarking results for different map data sets with different detail intensity, and shows that a device with good GPU performance is needed for an acceptable user experience. It also shows that lowering the amount of rendered detail does not necessarily result in better performance.
