  • 2901.
    Sentilles, Séverine
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Papatheocharous, Efi
    Swedish Institute of Computer Science, Kista, Stockholm, Sweden.
    Ciccozzi, Federico
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Petersen, Kai
    Blekinge Institute of Technology, Sweden.
    A Property Model Ontology, 2016. In: 42nd Euromicro Conference series on Software Engineering and Advanced Applications SEAA 2016, 2016, p. 165-172. Conference paper (Refereed)
    Abstract [en]

    Efficient development of high quality software is tightly coupled to the ability of quickly taking complex decisions based on trustworthy facts. In component-based software engineering, the decisions related to selecting the most suitable component among functionally-equivalent ones are of paramount importance. Despite sharing the same functionality, components differ in terms of their extra-functional properties. Therefore, to make informed selections, it is crucial to evaluate extra-functional properties in a systematic way. To date, many properties and evaluation methods that are not necessarily compatible with each other exist. The property model ontology presented in this paper represents the first step towards providing a systematic way to describe extra-functional properties and their evaluation methods, and thus making them comparable. This is beneficial from two perspectives. First, it aids researchers in identifying comparable property models as a guide for empirical evaluations. Second, practitioners are supported in choosing among alternative evaluation methods for the properties of their interest. The use of the ontology is illustrated by instantiating a subset of property models relevant in the automotive domain.

  • 2902.
    Sentilles, Séverine
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Vulgarakis, Aneta
    Mälardalen University, Department of Computer Science and Electronics.
    Crnkovic, Ivica
    Mälardalen University, Department of Computer Science and Electronics.
    A Model-Based Framework for Designing Embedded Real-Time Systems, 2007. In: Proceedings of the Work-In-Progress (WIP) track of the 19th Euromicro Conference on Real-Time Systems (ECRTS 2007), Pisa, Italy, 2007. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the challenge of designing embedded real-time systems in a uniform view, no matter what their targeted utilisation domain is. Although Component-Based Development is an acknowledged approach for developing non real-time and non embedded systems, it still struggles to emerge in embedded real-time domains. This is mainly due to an inability to have accepted definitions and standards well suited to the highly constrained characteristics (timing requirements, memory size, CPU speed) of such domains. Leaning upon a model-based framework, this paper describes a work in progress which aims at reaching a common definition of what real-time components are, as well as a common structure to specify and design them.

  • 2903.
    Servat, Harald
    et al.
    Barcelona Supercomputing Center.
    Gonzalez, Cecilia
    Barcelona Supercomputing Center.
    Aguilar, Xavier
    Barcelona Supercomputing Center.
    Cabrera, Daniel
    Barcelona Supercomputing Center.
    Jimenez, Daniel
    Barcelona Supercomputing Center.
    Drug Design on the Cell BroadBand Engine, 2007. In: Parallel Architecture and Compilation Techniques: Conference Proceedings, PACT, IEEE Computer Society, 2007, p. 425-425. Conference paper (Refereed)
    Abstract [en]

    We evaluate a well known protein docking application in the bioinformatic field, Fourier Transform Docking (FTDock) (Gabb et al., 1997), on a Blade with two 3.2GHz Cell Broadband Engine (BE) processors (Kahle et al., 2005). FTDock is a geometry complementary approximation of the protein docking problem, and uses 3D FFTs to reduce the complexity of the algorithm. FTDock achieves a significant speedup when the most time-consuming functions are offloaded to SPEs and vectorized. We show the performance impact evolution of offloading and vectorizing two functions of FTDock (CM and SC) on 1 SPU. We show the total execution time of FTDock when CM and SC run in the PPU (bar 1), CM is offloaded (bar 2), CM is also vectorized (bar 3), SC is offloaded (bar 4) and SC is also vectorized (bar 5). Parallelizing functions that are not offloaded, using OpenMP for instance, on the dual-thread PPE helps to increase the PPE pipeline use and system throughput, and the scalability of the application.

  • 2904.
    Servat, Harald
    et al.
    Barcelona Supercomputing Center.
    González-Alvarez, Cecilia
    Barcelona Supercomputing Center.
    Aguilar, Xavier
    Barcelona Supercomputing Center.
    Cabrera-Benitez, Daniel
    Barcelona Supercomputing Center.
    Jiménez-González, Daniel
    Barcelona Supercomputing Center.
    Drug Design Issues on the Cell BE, 2008. In: High Performance Embedded Architectures and Compilers / [ed] Per Stenström and Michel Dubois and Manolis Katevenis and Rajiv Gupta and Theo Ungerer, Springer, 2008, p. 176-190. Conference paper (Refereed)
    Abstract [en]

    Structure alignment prediction between proteins (protein docking) is crucial for drug design, and a challenging problem for bioinformatics, pharmaceutics, and current and future processors because it is a very time-consuming process. Here, we analyze a well known protein docking application in the Bioinformatic field, Fourier Transform Docking (FTDock), on a 3.2GHz Cell Broadband Engine (BE) processor. FTDock is a geometry complementary approximation of the protein docking problem, and the baseline of several protein docking algorithms currently used. In particular, we measure the performance impact of reducing, tuning and overlapping memory accesses, and the efficiency of different parallelization strategies (SIMD, MPI, OpenMP, etc.) in porting that biomedical application to the Cell BE. Results show the potential of the Cell BE processor for drug design applications, but also that there are important memory and computer architecture aspects that should be considered.

  • 2905.
    Sha, Maoxuan
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Xie, Jun
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Xu, Xiao Lin
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Switched multi-hop EDF networks: The influence of offsets on real-time performance, 2011. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In computer science, real-time research is an interesting topic. Nowadays real-time applications are close to us in our daily life. Skype, MSN, satellite communication, automated cars and Ethernet are all things related to the real-time field. Many of our computer systems are also real-time, such as RT-Linux and Windows CE. In other words, we live in a “real-time” world. However, not everyone knows much about its existence. Hence, we chose this thesis in order to take a knowledge journey in the real-time field. For an average reader, we hope to provide some basic knowledge about real-time. For a computer science student, we will try to provide a discussion on switched multi-hop networks with offsets, and the influence of offsets on real-time network performance. We try to prove that offsets provide networks with high predictability and utilization because offsets adjust a packet's sending time. A packet's sending time is the time when a sender/router starts to transmit a data packet. Packets are sent one after the other. Therefore, we need to lower the time interval between one packet and another. Hence, in our network model, network performance is more predictable and effective. There might be some things left to discuss in the future, so we would like to receive any advice and also suggestions for future discussions.

  • 2906.
    Shafiq, ur Réhman
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Sweden..
    Using Vibrotactile Language for Multimodal Human Animals Communication and Interaction, 2014. In: Proceedings of the 2014 Workshops on Advances in Computer Entertainment Conference, ACE '14, Association for Computing Machinery (ACM), 2014, p. 1:1-1:5. Conference paper (Refereed)
    Abstract [en]

    In this work we aim to facilitate computer mediated multimodal communication and interaction between humans and animals based on vibrotactile stimuli. To study and influence the behavior of animals, researchers usually use 2D/3D visual stimuli. However, we use a vibrotactile pattern based language which provides the opportunity to communicate and interact with animals. We have performed an experiment with a vibrotactile based human-animal multimodal communication system to study the effectiveness of vibratory stimuli applied to the animal skin along with audio and visual stimuli. The preliminary results are encouraging and indicate that low-resolution tactual displays are effective in transmitting information.

  • 2907.
    Shah, Ghafoor
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory.
    Arslan, Saad
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory.
    Design of an in-field Embedded Test Controller, 2011. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Electronic systems installed in their operating environments often require regular testing. The nanometer transistor sizes in new IC design technologies make electronic systems more vulnerable to defects. Due to reasons like wear-out or overheating, and the difficulty of accessing systems in remote areas, in-field testing is vital. For in-field testing, embedded test controllers are more effective in terms of maintenance cost than external testers. For in-field testing, fault coverage, high memory requirements, test application time, flexibility and diagnosis are the main challenges.

    In this thesis, an Embedded Test Controller (ETC) is designed and implemented which provides flexible in-field testing and diagnostic capability with high fault coverage. The ETC has relatively low memory requirements for storing deterministic test data as compared to storing complete test vectors. The test patterns used by the ETC are stored separately for each component of the device under test, in system memory. The test patterns for each component are concatenated during test application according to a flexible test command. To address test application time (which corresponds to down time of the system), two different versions of the ETC are designed and implemented. These versions provide a trade-off between test application time and hardware overhead. Hence, a system integrator can select which version to use depending on the cost factors at hand. The ETC can make use of an embedded CPU in the Device Under Test (DUT) for performing tests on the DUT. For DUTs where no embedded CPU is available, there is the additional cost of a test-specific CPU for the ETC. To access the DUT during test application, the IEEE 1149.1 (JTAG) interface is used. The ETC generates a test result that provides information about failing ICs and patterns.

    The designed and implemented versions of the ETC are validated through experiments. An FPGA platform is used for experimental validation of the ETC versions. A set of tools is developed for automating the experimental setup. Performance and hardware cost of the ETC versions are evaluated using the ITC'02 benchmarks.

  • 2908.
    Shahrad, Mohammad
    et al.
    Princeton University.
    Klein, Cristian
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Zheng, Liang
    Princeton University.
    Chiang, Mung
    Princeton University / Purdue University.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Wentzlaf, David
    Princeton University.
    Incentivizing Self-Capping to Increase Cloud Utilization, 2017. In: Proceedings of the 2017 Symposium on Cloud Computing (SOCC '17), Association for Computing Machinery (ACM), 2017, p. 52-65. Conference paper (Refereed)
    Abstract [en]

    Cloud Infrastructure as a Service (IaaS) providers continually seek higher resource utilization to better amortize capital costs. Higher utilization can not only enable higher profit for IaaS providers but also provides a mechanism to raise energy efficiency, thereby creating greener cloud services. Unfortunately, achieving high utilization is difficult, mainly because infrastructure providers need to maintain spare capacity to service demand fluctuations.

    Graceful degradation is a self-adaptation technique originally designed for constructing robust services that survive resource shortages. Previous work has shown that graceful degradation can also be used to improve resource utilization in the cloud by absorbing demand fluctuations and reducing spare capacity. In this work, we build a system and pricing model that enables infrastructure providers to incentivize their tenants to use graceful degradation. By using graceful degradation with an appropriate pricing model, the infrastructure provider can realize higher resource utilization while simultaneously, its tenants can increase their profit. Our proposed solution is based on a hybrid model which guarantees both reserved and peak on-demand capacities over flexible periods. It also includes a global dynamic price pair for capacity which remains uniform during each tenant's Service Level Agreement (SLA) term.

    We evaluate our scheme using simulations based on real-world traces and also implement a prototype using RUBiS on the Xen hypervisor as an end-to-end demonstration. Our analysis shows that the proposed scheme never hurts a tenant's net profit, but can improve it by as much as 93%. Simultaneously, it can also improve the effective utilization of contracts from 42% to as high as 99%.

  • 2909.
    Shahzad, Muhammad Khurram
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Recent issues identified and conceptualized for simulating business scenarios in multi-version data warehouse, 2006. In: International Journal of Soft Computing, ISSN 1816-9503, Vol. 1, no 2, p. 97-102. Article in journal (Refereed)
    Abstract [en]

    Data Warehouses provide an opportunity to analyze businesses in support of precise decision-making, and various tools have been built and used for this purpose. However, due to the static structure of a DW, it is unable to manage changes in external sources; also, simulating various scenarios can increase decision efficiency. For this purpose, a Multi-Version Data Warehouse is proposed, which contains two types of versions called real and alternative. Due to the diversity of simulations, two types of alternative versions have been proposed. Also, for querying these versions, we are extending the functionality of the Synthetic Warehouse Builder. This study presents the challenges we are facing and visualizing during these extensions.

  • 2910.
    Shami, Muhammad Ali
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Dynamically Reconfigurable Resource Array, 2012. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The goals set by the International Technology Roadmap for Semiconductors (ITRS) for the consumer portable category, to be realized by 2020, are a 1000X improvement in performance with only a 40% increase in power budget and no increase in design team size. To meet these goals, the challenges facing the VLSI community are gaps in architecture efficacy, design productivity and battery capacity. As the causes of the gaps in architecture efficacy and battery capacity, this thesis identifies: a) instruction granularity mismatch, b) bit-width granularity mismatch, c) silicon granularity mismatch and d) parallelism mismatch. Field Programmable Gate Array (FPGA) technology can address instruction/bit-width granularity and parallelism mismatch but suffers from silicon granularity mismatch due to high reconfiguration overheads. The ultimate design goal of a system-on-chip is to achieve an ASIC-like performance and FPGA-like flexibility, design time and cost. Coarse Grain Reconfigurable Architectures (CGRAs) are a compromise between ASIC and FPGA since they provide better computational efficiency compared to FPGAs and better engineering efficiency compared to ASIC. However, the current generation of CGRAs lacks many architectural properties that would enable them to replace ASIC and/or FPGA in mainstream industry. To objectively discuss these properties, in the first part of the thesis a classification scheme is proposed that classifies parallel computing machines into 47 classes, and we propose how they can be graded in terms of flexibility. We apply this classification scheme to academic and industrial reconfigurable architectures to compare them for their similarities and differences. We identify an instruction flow spatial computing class to be used for a CGRA fabric called Dynamically Reconfigurable Resource Array (DRRA), presented in the second part of this thesis. The DRRA fabric is a Parallel Distributed Digital Signal Processing (PDDSP) fabric with distributed arithmetic, logic, interconnect and control resources. Problems associated with the distributed control model of DRRA are identified and architectural solutions that can be exploited by the compiler tools are presented. After logical and physical synthesis, DRRA shows a peak performance of 21 GOPS and a peak silicon efficiency of 16.03 GOPS/mm². We further performed a three-level validation of the DRRA fabric. At the first level, we mapped a number of signal and compute intensive algorithms to demonstrate the flexibility of the DRRA fabric. At the second level, we measured the gap between ASIC, DRRA and FPGA. On average DRRA shows 22.87x area, 10.75x power consumption, 852x configuration bits, 959x configuration cycles, 63.94x silicon efficiency, 4.78x computational efficiency, and 6.15E+10x energy-delay product improvements compared to FPGA. Finally, at the third level we present the use of DRRA for a real world example of implementing a 128-, 256-, 512-, 1024-, 2048-point configurable FFT processor. For a 1024 point FFT, in terms of computational efficiency, DRRA outperforms all CGRAs by at least 2x and is worse than ASIC by 3.45x. As regards silicon efficiency, although dedicated processors perform 1.6x better, DRRA is better than all other CGRAs.

  • 2911.
    Shan, Min
    Umeå University, Faculty of Social Sciences, Department of Informatics.
    What matters in the digital shopping mall?: A study of Chinese consumers’ adoption of E-business platforms and vendors, 2012. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    E-business is growing rapidly all over the world and especially in China, which now has the largest C2C market in the world. Most studies of users' experience of E-business either focus on platform usage or platform adoption, or include platform usage and vendors' behavior as variables in general e-retailing models. However, we do not know much about what effect the interplay between E-business platforms and the vendors operating on the platform has on consumers' E-business behavior. In this paper, buyers' behavior in terms of choosing platforms and choosing stores is examined separately, while measurements for influencing factors such as the size of the vendor base and trust in the platform owner are included to capture second order effects. Data was gathered through a questionnaire, published on a professional Chinese survey website for collecting data. Afterwards, SPSS was used for analyzing the data. Similarities and differences between the outcomes for the two research questions were analyzed. The main patterns in the two models are similar, suggesting that the interaction between platform owners and vendors has an impact on buyers as well. Price, which was one of the most important features of E-business, proved to be of minor importance for choosing both E-business platforms and vendors. However, there are some differences between adoption of platforms and vendors: market range is important for platform adoption, while it is not an indicator for consumers to choose a certain vendor. These findings suggest that there are second order effects involved in E-business platforms. Further, they indicate that once an E-business platform has acquired a large enough user base, the owner might consider increasing revenues from vendor fees, as long as they translate to small product price increases rather than a decreased vendor user base.

  • 2912.
    Shao, Botao
    et al.
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Chen, Qiang
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Amin, Yasar
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Sarmiento Mendoza, David
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Liu, Ran
    Fudan University.
    Zheng, Li-Rong
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    An ultra-low-cost RFID tag with 1.67 Gbps data rate by ink-jet printing on paper substrate, 2010. In: 2010 6th IEEE Asian Solid-State Circuits Conference, A-SSCC 2010, 2010, p. 109-112. Conference paper (Other academic)
    Abstract [en]

    A fully metallic ink-jet printed passive chipless RFID tag on paper substrate is presented. The tag consists of an ultra-wide-band antenna and a microstrip transmission line with distributed shunt capacitors as an information coding element, which is reconfigurable by the ink-jet printing process. A tapered microstrip line is employed to overcome the limitations of low conductivity and thin film thickness of ink-jet printed metal tracks. Measurement results show that the tag features robust readability over a reading distance of 80 cm and a high data rate of 1.67 Gb/s.

  • 2913.
    Shao, Botao
    et al.
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Chen, Qiang
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Liu, Ran
    Fudan University.
    Zheng, Li-Rong
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    A reconfigurable chipless RFID tag based on sympathetic oscillation for liquid-bearing applications, 2011. In: 2011 5th IEEE International Conference on RFID, RFID 2011, 2011, p. 170-175. Conference paper (Other academic)
    Abstract [en]

    This paper reports on the development of a 10-bit chipless RFID tag on flexible plastic substrate. This tag is based on sympathetic oscillations of a group of LC circuits with different resonant frequencies. Sophisticated designs, including the placement of the capacitors involved in each LC circuit and various LC combinations, are examined to trade off readability against tag size. Moreover, the antennas for detecting the proposed tags are presented. The measurement results show that the proposed tag possesses remarkable readability for a read range up to 21 cm and, more importantly, it is suited for tagging liquid-bearing containers, which are widely used in the food and medical industries. In addition, this tag is reconfigurable on the circuit level, enabling a potential pathway towards the realization of low cost RFID tags for HF/VHF band applications.

  • 2914.
    Shao, Botao
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems. KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK.
    Chen, Qiang
    KTH, School of Information and Communication Technology (ICT), Electronic Systems. KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK.
    Liu, Ran
    Zheng, Li-Rong
    KTH, School of Information and Communication Technology (ICT), Electronic Systems. KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK.
    Design of fully printable and configurable chipless RFID tag on flexible substrate, 2012. In: Microwave and optical technology letters (Print), ISSN 0895-2477, E-ISSN 1098-2760, Vol. 54, no 1, p. 226-230. Article in journal (Refereed)
    Abstract [en]

    This article presents the design and implementation of a chipless radio frequency identification (RFID) tag on flexible substrate. The tag is designed based on the sympathetic oscillations of multiple LC (inductor–capacitor) circuits that possess distinct resonant frequencies. Information is encoded by controlling the placement of these resonant frequencies. To trade off the readability and size of the tag, optimizations including capacitor placement and different LC combinations are studied. The tag is then realized on a flexible polyimide substrate using a toner-transferring process. The detection system is also constructed and used to measure the proposed tag. The measurement results show that the tag can provide excellent readability over a reading range of more than 20 cm. In addition, this tag is fully printable and configurable, hence making it more feasible and considerably cheaper to use. This tag can provide a meaningful approach toward the realization of ultralow-cost RFID tags attached onto low-value items.

  • 2915.
    Shao, Botao
    et al.
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Chen, Qiang
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Liu, Ran
    Zheng, Li-Rong
    KTH, School of Information and Communication Technology (ICT), Centres, VinnExcellence Center for Intelligence in Paper and Packaging, iPACK. KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Linearly-tapered RFID tag antenna with 40% material reduction for ultra-low-cost applications, 2011. In: 2011 IEEE International Conference on RFID, 2011, p. 45-49. Conference paper (Other academic)
    Abstract [en]

    The development of RFID technology requires higher performance and lower cost tag antennas than ever before. To meet these demands, a linear tapering technique is first proposed in the design of planar tag antennas. With this strategy, the current distribution along the antenna arms is effectively adjusted by varying the antenna line width. Compared with conventional ones, the tapered antennas can reduce the material cost by over 40%, not only for PCB (Printed Circuit Board) processed antennas but also for ink-jet printed dipole and meander line antennas, while still maintaining comparable performance. With an identical volume of conducting material, the tapered antennas can achieve better radiation performance than non-tapered ones in terms of antenna gain and radiation efficiency. The method has been successfully verified by applying it to 869 MHz and 2.45 GHz antennas. The influence of the tapering technique on antenna bandwidth is also investigated.

  • 2916.
    Shaoteng, Liu
    KTH, School of Information and Communication Technology (ICT), Electronics and Embedded Systems.
    New circuit switching techniques in on-chip networks, 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Network on Chip (NoC) is proposed as a promising technology to address the communication challenges in the deep sub-micron era. NoC brings network-based communication into the on-chip environment and tackles problems like long-wire complexity, bandwidth scaling and so on. After more than a decade's evolution and development, there are many NoC architectures and solutions available. Nevertheless, NoCs can be classified into two categories: packet switched NoC and circuit switched NoC. In this thesis, targeting circuit switched NoC, we present our innovations and considerations on circuit switched NoCs in three areas, namely, connection setup method, time division multiplexing (TDM) technology and spatial division multiplexing (SDM) technology.

    The connection setup technique deeply influences the architecture and performance of a circuit switched NoC, since a circuit switched NoC requires connections to be set up before launching data transfers. We propose a novel parallel probe based method for dynamic distributed connection setup. This setup method, on the one hand, searches all the possible minimal paths in parallel. On the other hand, it also has a mechanism to reduce resource occupation during the path search process by reclaiming redundant paths. With this setup method, connections are more likely to be established because of the exploration of the path diversity.

    TDM based NoC constitutes a sub-category of circuit switched NoC. We propose a double time-wheel technique to facilitate a probe based connection setup in TDM NoCs. With this technique, path search algorithms used in connection setup are no longer limited to deterministic routing algorithms. Moreover, the hardware cost can be reduced, since setup requests and data flows can co-exist in one network. Apart from the double time-wheel technique for connection setup, we also propose a highway technique that can enhance the slot utilization during data transfer. This technique can accelerate the transfer of a data flow while maintaining the throughput guarantee and the packet order.

    SDM based NoC constitutes another sub-category of circuit switched NoC. SDM NoC can benefit from high clock frequency and simple synchronization efforts. To better support the dynamic connection setup in SDM NoCs, we design a single cycle allocator for channel allocation inside each router. This allocator can guarantee both strong fairness and maximal matching quality. We also build up a circuit switched NoC, which can support multiple channels and multiple networks, to study different ways of organizing channels and setting up connections. Finally, we make a comparison between circuit switched NoC and packet switched NoC. We show the strengths and weaknesses of each of them by analysis and evaluation.

  • 2917.
    Shaoteng, Liu
    et al.
    KTH, School of Information and Communication Technology (ICT). KTH.
    Zhonghai, Lu
    KTH, School of Information and Communication Technology (ICT), Electronics and Embedded Systems.
    Axel, Jantsch
    TU Wien, Vienna, Austria.
    Highway in TDM NoC, 2015. Conference proceedings (editor) (Refereed)
  • 2918.
    Sharif Mansouri, Shohreh
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Dubrova, Elena
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    An improved hardware implementation of the Grain stream cipher, 2010. In: Proceedings - 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools, DSD 2010, 2010, p. 433-440. Conference paper (Refereed)
    Abstract [en]

    A common approach to protect confidential information is to use a stream cipher which combines plain text bits with a pseudo-random bit sequence. Among the existing stream ciphers, Non-Linear Feedback Shift Register (NLFSR)-based ones provide the best trade-off between cryptographic security and hardware efficiency. In this paper, we show how to further improve the hardware efficiency of the Grain stream cipher. By transforming the NLFSR of Grain from its original Fibonacci configuration to the Galois configuration and by introducing new hardware solutions, we double the throughput of the 80 and 128-bit key 1 bit/cycle architectures of Grain with no area and power penalty.
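
    Illustrative note (not from the record above): part of the throughput gain described in the abstract comes from moving the shift register feedback from the Fibonacci configuration (several tap bits XOR-ed into one new bit, i.e. one deep feedback function) to the Galois configuration (the output bit injected through several small, parallel XORs, i.e. a shorter critical path). The toy sketch below contrasts the two configurations on a tiny 4-bit linear LFSR; the register size, taps and code are purely illustrative assumptions and do not reproduce Grain's actual 80/128-bit non-linear registers.

```python
# Toy 4-bit LFSR in Fibonacci and Galois configurations (illustrative only;
# Grain itself uses much larger, non-linear registers and a different feedback).

def fibonacci_step(state):
    # Fibonacci form: tap bits are XOR-ed together into ONE new bit.
    fb = ((state >> 3) ^ (state >> 2)) & 1      # taps at bits 3 and 2
    return ((state << 1) | fb) & 0xF            # shift left, feedback enters at bit 0

def galois_step(state):
    # Galois form: the bit shifted out is XOR-ed into SEVERAL positions in
    # parallel, giving shallow feedback logic (shorter critical path in hardware).
    lsb = state & 1
    state >>= 1
    return state ^ 0b1100 if lsb else state     # inject output into bits 3 and 2

def period(step, seed):
    state, n = seed, 0
    while True:
        state, n = step(state), n + 1
        if state == seed:
            return n

if __name__ == "__main__":
    # Both configurations realize a maximal-length 4-bit LFSR (period 2**4 - 1 = 15).
    print("Fibonacci period:", period(fibonacci_step, 0b1001))
    print("Galois period:   ", period(galois_step, 0b1001))
```
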

  • 2919.
    Sharif Mansouri, Shohreh
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Dubrova, Elena
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    An Improved Hardware Implementation of the Grain-128a Stream Cipher, 2012. In: Lecture Notes in Computer Science / [ed] Springer-Verlag, 2012, p. 278-292. Conference paper (Refereed)
    Abstract [en]

    We study efficient high-throughput hardware implementations of the Grain-128a family of stream ciphers. To increase the throughput compared to the standard design, we apply five different techniques in combination: isolation of the authentication section, Fibonacci-to-Galois transformation of the feedback shift registers, multi-frequency implementation, simplification of the pre-output functions and internal pipelining. The combined effect of all these techniques enables an average 56% higher keystream generation throughput among all the ciphers, at the expense of an average 8% area penalty, an average 4% power overhead and a 21% slower keystream initialization phase. An alternative combination of techniques allows an average 23% throughput improvement in all phases.

  • 2920.
    Sharif Razavian, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Azizpour, Hossein
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Persistent Evidence of Local Image Properties in Generic ConvNets, 2015. In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer Publishing Company, 2015, p. 249-262. Conference paper (Refereed)
    Abstract [en]

    Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case. Surprisingly, strong spatial information is implicit. This paper addresses this, in particular, exploiting the image representation at the first fully connected layer, i.e. the global image descriptor which has been recently shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidence for this finding in the context of four different tasks: 2d landmark detection, 2d object keypoints prediction, estimation of the RGB values of the input image, and recovery of the semantic label of each pixel. We base our investigation on a simple framework with ridge regression applied commonly across these tasks, and show results which all support our insight. Such spatial information can be used for computing correspondence of landmarks to a good accuracy, but should potentially be useful for improving the training of the convolutional nets for classification purposes.
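
    Illustrative note (not from the record above): the investigation described in the abstract rests on fitting ridge regression from fixed image descriptors to spatial targets. The sketch below only shows the generic closed-form ridge fit on synthetic data; the feature dimension, penalty and data are made-up placeholders, not the paper's ConvNet features or tasks.

```python
import numpy as np

# Closed-form ridge regression W = (X^T X + lambda*I)^-1 X^T Y: a simple linear
# probe mapping fixed descriptors to targets. All sizes and data are synthetic.
rng = np.random.default_rng(0)
n_samples, n_features, n_targets = 500, 128, 10

X = rng.normal(size=(n_samples, n_features))                      # stand-in descriptors
W_true = rng.normal(size=(n_features, n_targets))
Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, n_targets))    # noisy targets

lam = 1.0                                                         # ridge penalty
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

mse = float(np.mean((X @ W_hat - Y) ** 2))
print("training mean squared error:", round(mse, 4))
```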

  • 2921.
    Sharif Razavian, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Baseline for Visual Instance Retrieval with Deep Convolutional Networks, 2015. Conference paper (Refereed)
  • 2922.
    Shen, Meigen
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Zheng, Lirong
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Tjukanoff, E.
    Isoaho, J.
    Tenhunen, Hannu
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Concurrent chip package design for global clock distribution network using standing wave approach, 2005. Conference paper (Refereed)
    Abstract [en]

    As a result of the continuous downscaling of CMOS technology, on-chip frequency for high performance microprocessors will soon reach 10 GHz, according to the International Technology Roadmap for Semiconductors (ITRS). A 10 GHz global clock distribution network using a standing wave approach is analyzed on the chip and package levels. On the chip level, a 10 GHz standing wave oscillator (SWO) for a global clock distribution network, using 0.18 µm IP6M CMOS technology, is designed and analyzed. Simulation results show that skew is well controlled (about 1 ps), while the clock frequency variation is about 20% because power/ground return paths exist in different metal layers. On the package level, we assume that the chip size is 20×20 mm² and flip-chip bonding technology is used. Simulation results show that the skew at random positions of the transmission line (spiral or serpentine shape) is within 10% of τclk when the attenuation is about 1.5 dB. For attenuation from 1.5 dB to 6.7 dB, the peak positions (nλ/2) can be used as clock nodes. For the mesh and plane shape, the skew is controlled within 10% of τclk using the standing wave method.
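
    Illustrative note (not from the record above): the nλ/2 spacing of standing-wave peaks can be put into rough numbers. Assuming a propagation velocity of about half the speed of light (an assumed, typical order of magnitude; the abstract does not state the actual value), a 10 GHz clock gives a wavelength of roughly 15 mm, so usable peaks would repeat about every 7.5 mm across a 20×20 mm² die.

```python
# Back-of-the-envelope numbers for the standing-wave clock idea.
# v is an assumed propagation velocity (~0.5c); the real value depends on the
# interconnect stack and is not given in the abstract.
c = 3.0e8                  # speed of light, m/s
v = 0.5 * c                # assumed propagation velocity
f = 10e9                   # 10 GHz clock
wavelength = v / f         # ~15 mm
print(f"wavelength   : {wavelength * 1e3:.1f} mm")
print(f"peak spacing : {wavelength / 2 * 1e3:.2f} mm (n*lambda/2 clock nodes)")
```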

  • 2923.
    Shen, Wei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Zhang, Tingting
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Gidlund, Mikael
    ABB Corporate Research.
    Distributed Data Gathering Scheduling Protocol for Wireless Sensor Actor and Actuator Networks, 2012. In: Communications (ICC), 2012 IEEE International Conference on, IEEE Communications Society, 2012, p. 7120-7125. Conference paper (Refereed)
    Abstract [en]

    This paper presents a cross-layer distributed scheduling protocol for sensor data gathering transmission in wireless sensor, actor and actuator networks. We propose the parent-dominant decision scheduling with collision free (PDDS-CF) algorithm to adapt to the dynamics of links in a realistic low-power wireless network. In addition, the protocol has a light-weight mechanism to maintain the conflict links. We have evaluated the protocol and its implementation in TinyOS and TelosB hardware. The experiments show that our protocol is robust to topology changes and achieves significant improvements in reducing the traffic load in realistic wireless networks.

  • 2924.
    Shen, Wei
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Zhang, Tingting
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Gidlund, Mikael
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. ABB Corp Res, Vasterås, Sweden.
    Dobslaw, Felix
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
    SAS-TDMA: A Source Aware Scheduling Algorithm for Real-Time Communication in Industrial Wireless Sensor Networks, 2013. In: Wireless networks, ISSN 1022-0038, E-ISSN 1572-8196, Vol. 19, no 6, p. 1155-1170. Article in journal (Refereed)
    Abstract [en]

    Scheduling algorithms play an important role for TDMA-based wireless sensor networks. Existing TDMA scheduling algorithms address a multitude of objectives. However, their adaptation to the dynamics of a realistic wireless sensor network has not been investigated in a satisfactory manner. This is a key issue considering the challenges within industrial applications for wireless sensor networks, given the time-constraints and harsh environments. In response to those challenges, we present SAS-TDMA, a source-aware scheduling algorithm. It is a cross-layer solution which adapts itself to network dynamics. It realizes a tradeoff between scheduling length and its configurational overhead incurred by rapid responses to route changes. We implemented a TDMA stack instead of the default CSMA stack and introduced a cross-layer for scheduling in TOSSIM, the TinyOS simulator. Numerical results show that SAS-TDMA improves the quality of service for the entire network. It achieves significant improvements for realistic dynamic wireless sensor networks when compared to existing scheduling algorithms with the aim to minimize latency for real-time communication.

  • 2925.
    Sheuly, Sharmin Sultana
    Mälardalen University, School of Innovation, Design and Engineering.
    Resource Virtualization for Real-time Industrial Clouds, 2016. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cloud computing is emerging very fast as it has the potential to transform the IT industry by replacing local systems as well as reshaping the design of IT hardware. It helps companies share their infrastructure resources over the internet, ensuring better utilization of them. Because of cloud computing, developers nowadays do not need to deploy expensive hardware or the human resources to maintain it. Such elasticity of resources is new in the IT world. With the help of virtualization, clouds meet different types of customer demand as well as ensure better utilization of resources. There are two dominant types of virtualization technique: (i) hardware level or system level virtualization, and (ii) operating system (OS) level virtualization. In industry, system level virtualization is commonly used. However, it introduces some performance overhead because of its heavyweight nature. OS level virtualization is replacing system level virtualization as it is of a lightweight nature and has lower performance overhead. Nevertheless, more research is necessary to compare these two technologies with respect to performance overhead. In this thesis, a comparison is made between these two technologies to find the one more suitable for a real-time industrial cloud. XEN is chosen to represent system level virtualization, and Docker and OpenVZ OS level virtualization. To compare them, the considered performance criteria are: migration time, downtime, CPU consumption during migration, and execution time. The evaluation showed that the OS level virtualization technique OpenVZ is more suitable for an industrial real-time cloud as it has a better migration utility, shorter downtime and lower CPU consumption during migration.

  • 2926.
    Sheuly, Sharmin Sultana
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bankarusamy, Sudhangathan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Resource allocation in industrial cloud computing using artificial intelligence algorithms, 2015. In: Frontiers in Artificial Intelligence and Applications, Volume 278, 2015, p. 128-136. Conference paper (Refereed)
    Abstract [en]

    Cloud computing has recently drawn much attention due to the benefits that it can provide in terms of high performance and parallel computing. However, many industrial applications require a certain quality of service, which calls for efficient resource management of the cloud infrastructure to make it suitable for industrial applications. In this paper, we focus mainly on the allocation problem for services, usually executed within virtual machines, in the cloud network. To meet the quality of service requirements, we investigate different algorithms that can achieve load balancing, which may require migrating virtual machines from one node/server to another during runtime, considering both CPU and communication resources. Three different allocation algorithms, based on a Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and a best-fit heuristic, are applied in this paper. We evaluate the three algorithms in terms of cost/objective function and calculation time. In addition, we explore how tuning different parameters (including population size, probability of mutation and probability of crossover) can affect the cost/objective function in GA. Based on the evaluation, it is concluded that algorithm performance is dependent on the circumstances, e.g. available resources, number of VMs, etc.
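
    Illustrative note (not from the record above): of the three strategies compared in the paper, the best-fit heuristic is the simplest. The sketch below shows a generic best-fit placement of VMs by CPU demand only, with made-up demands and capacities; it ignores the communication resources, runtime migration and the GA/PSO alternatives that the paper also considers.

```python
# Generic best-fit heuristic: place each VM on the node whose remaining CPU
# capacity fits it most tightly. CPU-only and static, i.e. a simplification of
# the allocation problem described above.

def best_fit(vm_demands, node_capacities):
    """Return {vm_index: node_index or None}; None means the VM did not fit."""
    free = list(node_capacities)
    placement = {}
    for vm, demand in enumerate(vm_demands):
        candidates = [(free[n] - demand, n) for n in range(len(free)) if free[n] >= demand]
        if candidates:
            _, node = min(candidates)      # tightest fit = smallest leftover capacity
            free[node] -= demand
            placement[vm] = node
        else:
            placement[vm] = None
    return placement

if __name__ == "__main__":
    vms = [2.0, 1.5, 3.0, 0.5, 2.5]        # made-up CPU demands (cores)
    nodes = [4.0, 4.0, 3.0]                # made-up node capacities (cores)
    print(best_fit(vms, nodes))            # e.g. {0: 2, 1: 0, 2: 1, 3: 1, 4: 0}
```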

  • 2927.
    Shevtsov, Stepan
    Linnaeus University, Faculty of Technology, Department of Computer Science. KU Leuven, Belgium.
    A Control-based Approach for Self-adaptive Software Systems with Formal Guarantees, 2017. Licentiate thesis, comprehensive summary (Other academic)
  • 2928.
    Shevtsov, Stepan
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Berekmeri, Mihaly
    Grenoble Institute of Technology, France.
    Weyns, Danny
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). KU Leuven, Belgium.
    Maggio, Martina
    Lund University.
    Control-Theoretical Software Adaptation: A Systematic Literature Review, 2018. In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 44, no 8, p. 784-810. Article in journal (Refereed)
    Abstract [en]

    Modern software applications are subject to uncertain operating conditions, such as dynamics in the availability of services and variations of system goals. Consequently, runtime changes cannot be ignored, but often cannot be predicted at design time. Control theory has been identified as a principled way of addressing runtime changes and it has been applied successfully to modify the structure and behavior of software applications. Most of the time, however, the adaptation targeted the resources that the software has available for execution (CPU, storage, etc.) more than the software application itself. This paper investigates the research efforts that have been conducted to make software adaptable by modifying the software rather than the resources allocated to its execution. This paper aims to identify: the focus of research on control-theoretical software adaptation; how software is modeled and what control mechanisms are used to adapt software; what software qualities and controller guarantees are considered. To that end, we performed a systematic literature review in which we extracted data from 42 primary studies selected from 1512 papers that resulted from an automatic search. The results of our investigation show that even though the behavior of software is considered non-linear, research efforts use linear models to represent it, with some success. Also, the control strategies that are most often considered are classic control, mostly in the form of Proportional and Integral controllers, and Model Predictive Control. The paper also discusses sensing and actuating strategies that are prominent for software adaptation and the (often neglected) proof of formal properties. Finally, we distill open challenges for control-theoretical software adaptation.
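
    Illustrative note (not from the record above): the review finds that classic Proportional-Integral (PI) control is the most common strategy in the surveyed work. The sketch below is a generic discrete-time PI loop adjusting a software knob toward a latency setpoint; the gains, setpoint and the simulated "application" are invented for illustration and are not taken from any surveyed study.

```python
# Generic discrete-time PI controller driving a software knob (e.g. a quality
# level) toward a latency setpoint. All numbers and the toy latency model are
# invented; only the control structure (P + I terms) is the point.

def make_pi_controller(kp, ki):
    integral = 0.0
    def control(error):
        nonlocal integral
        integral += error
        return kp * error + ki * integral      # position-form PI output
    return control

def simulated_latency(quality):
    """Toy application model: a higher quality knob costs more latency (ms)."""
    return 20.0 + 15.0 * quality

if __name__ == "__main__":
    setpoint = 50.0                            # target latency in ms
    controller = make_pi_controller(kp=0.02, ki=0.01)
    quality = 1.0
    for step in range(20):
        latency = simulated_latency(quality)
        error = setpoint - latency             # positive error -> room to raise quality
        quality = min(5.0, max(0.0, controller(error)))
        print(f"step {step:2d}: latency {latency:5.1f} ms -> quality {quality:.2f}")
```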

  • 2929.
    Shirinbab, Sogand
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Implications of Virtualization, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Virtualization is a component of cloud computing. Virtualization transforms the traditional inflexible, complex infrastructure of individual servers, storage, and network hardware into a flexible virtual resource pool and increases IT agility, flexibility, and scalability while creating significant cost savings. Additional benefits of virtualization include greater work mobility, increased performance and availability of resources, and automated operations. Many virtualization solutions have been implemented. There are plenty of cloud providers using different virtualization solutions to provide virtual machines (VMs) and containers, respectively. Various virtualization solutions have different performance overheads due to their different implementations of virtualization and supported features. A cloud user should understand the performance overheads of different virtualization solutions and the impact on performance caused by different virtualization features, so that an appropriate virtualization solution can be chosen for the services, to avoid degrading their quality of service (QoS). In this research, we investigate the impacts of different virtualization technologies, such as container-based and hypervisor-based virtualization, as well as various virtualization features such as over-allocation of resources, live migration, scalability, and distributed resource scheduling, on the performance of various applications, for instance a Cassandra NoSQL database and a large telecommunication application. According to our results, hypervisor-based virtualization has many advantages and is more mature compared to the recently introduced container-based virtualization. However, the impact of hypervisor-based virtualization on the performance of the applications is much higher than that of container-based virtualization as well as the non-virtualized solution. The findings of this research should be of benefit to those who plan, design, and implement IT infrastructure.

  • 2930.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Implications of Over-allocation of Virtual CPUs, 2015. In: 2015 International Symposium on Networks, Computers and Communications (ISNCC 2015), IEEE, 2015. Conference paper (Refereed)
    Abstract [en]

    A major advantage of cloud environments is that one can balance the load by migrating virtual machines (VMs) from one server to another. High performance and high resource utilization are also important in a cloud. We have observed that over-allocation of virtual CPUs to VMs (i.e. allocating more vCPUs to VMs than there are CPU cores on the server) when there are many VMs running on one host can reduce performance. However, if we do not use any over-allocation of virtual CPUs we may suffer from poor resource utilization after VM migration. Thus, it is important to identify and quantify performance bottlenecks when running in a virtualized environment. The results of this study will help virtualized environment service providers to decide how many virtual CPUs should be allocated to each VM.

  • 2931.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Scheduling Tasks with Hard Deadlines in Cloud-Based Virtualized Software Systems. Manuscript (preprint) (Other academic)
    Abstract [en]

    There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. In this paper, we describe a technique for calculating a period and an execution time for a VM containing a real-time application with hard deadlines. This result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines. If overhead for switching from one VM to another is ignored, it turns out that (infinitely) short VM periods minimize the utilization that each VM needs to guarantee that all real-time tasks in that VM will meet their deadlines. Having infinitely short VM periods is clearly not realistic, and in order to provide more useful results we have considered a fixed overhead at the beginning of each execution of a VM. Considering this overhead, a set of real-time tasks, the speed of each processor core, and a certain processor utilization of the VM containing the real-time tasks, we present a simulation study and some performance bounds that make it possible to determine if it is possible to schedule the real-time tasks in the VM, and, in that case, for which periods of the VM this is possible.
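
    Illustrative note (not from the record above): the trade-off stated in the abstract (short VM periods serve the tasks more promptly, but the fixed switch-in overhead is then paid more often) can be seen with a crude model in which the VM must deliver a fixed task utilization each period plus the overhead. The accounting below is an invented simplification, not the paper's technique or bounds.

```python
# Crude illustration of the period/overhead trade-off: if a VM must supply
# U_TASK worth of CPU every period P and pays a fixed switch-in overhead o per
# activation, its processor share is (U_TASK*P + o)/P = U_TASK + o/P, which
# grows as P shrinks. The deadline analysis that limits how LARGE P may be is
# the subject of the paper and is not reproduced here.

U_TASK = 0.40        # assumed utilization needed by the real-time tasks
OVERHEAD_MS = 0.5    # assumed switch-in overhead per VM activation

for period_ms in (1, 2, 5, 10, 20, 50):
    budget_ms = U_TASK * period_ms + OVERHEAD_MS
    share = budget_ms / period_ms
    print(f"VM period {period_ms:3d} ms: budget {budget_ms:5.1f} ms per period, "
          f"processor share {share:6.1%}")
```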

  • 2932.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Scheduling Tasks with Hard Deadlines in Cloud-Based Virtualized Software SystemsManuscript (preprint) (Other academic)
    Abstract [en]

    There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. In this paper, we describe a technique for calculating a period and an execution time for a VM containing a real-time application with hard deadlines. This result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines. If the overhead for switching from one VM to another is ignored, it turns out that (infinitely) short VM periods minimize the utilization that each VM needs to guarantee that all real-time tasks in that VM will meet their deadlines. Having infinitely short VM periods is clearly not realistic, and in order to provide more useful results we have considered a fixed overhead at the beginning of each execution of a VM. Considering this overhead, a set of real-time tasks, the speed of each processor core, and a certain processor utilization of the VM containing the real-time tasks, we present a simulation study and some performance bounds that make it possible to determine whether the real-time tasks in the VM can be scheduled and, in that case, for which VM periods this is possible.

  • 2933.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Scheduling Tasks with Hard Deadlines in Virtualized Software SystemsIn: Article in journal (Refereed)
    Abstract [en]

    There is scheduling on two levels in real-time applications executing in a virtualized environment: traditional real-time scheduling of the tasks in the real-time application, and scheduling of different Virtual Machines (VMs) on the hypervisor level. In this paper, we describe a technique for calculating a period and an execution time for a VM containing a real-time application with hard deadlines. This result makes it possible to apply existing real-time scheduling theory when scheduling VMs on the hypervisor level, thus making it possible to guarantee that the real-time tasks in a VM meet their deadlines. If the overhead for switching from one VM to another is ignored, it turns out that (infinitely) short VM periods minimize the utilization that each VM needs to guarantee that all real-time tasks in that VM will meet their deadlines. Having infinitely short VM periods is clearly not realistic, and in order to provide more useful results we have considered a fixed overhead at the beginning of each execution of a VM. Considering this overhead, a set of real-time tasks, the speed of each processor core, and a certain processor utilization of the VM containing the real-time tasks, we present a simulation study and some performance bounds that make it possible to determine whether the real-time tasks in the VM can be scheduled and, in that case, for which VM periods this is possible.

  • 2934.
    Shirinbab, Sogand
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Lundberg, Lars
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Performance Comparison between Horizontal Scaling of Hypervisor and Container Based Virtualization using Cassandra NoSQL Database2018In: Proceeding of the 3rd International Conference on Virtualization Application and Technology, 2018, p. 6Conference paper (Refereed)
    Abstract [en]

    Cloud computing promises customers the on-demand ability to scale in the face of workload variations. There are different ways to accomplish scaling: one is vertical scaling and the other is horizontal scaling. Vertical scaling refers to buying more power (CPU, RAM), that is, a more expensive and more robust server; it is less challenging to implement but quickly becomes very expensive. Horizontal scaling refers to adding more servers, each with less processing power and RAM, which is usually cheaper overall and can scale very well. The majority of cloud providers prefer the horizontal scaling approach, and for them it is very important to know the advantages and disadvantages of both technologies from the perspective of application performance at scale. In this paper, we compare the performance differences caused by scaling of the different virtualization technologies in terms of CPU utilization, latency, and the number of transactions per second. The workload is Apache Cassandra, which is a leading NoSQL distributed database for Big Data platforms. Our results show that running multiple instances of the Cassandra database concurrently affected the performance of read and write operations differently: for both VMware and Docker, the maximum number of read operations was reduced when we ran several instances concurrently, whereas the maximum number of write operations increased when we ran instances concurrently.
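    As a hedged illustration of the metrics compared in this kind of study (aggregate throughput and how close it gets to ideal linear scaling), the short Python sketch below combines per-instance throughput figures; all numbers are invented and do not come from the paper.

        # Illustrative only: aggregate per-instance throughput and compute scaling
        # efficiency relative to a single-instance baseline.  All figures are made up.
        single_instance_tps = 40_000                     # hypothetical baseline
        per_instance_tps = [31_000, 30_500, 29_800]      # hypothetical: 3 concurrent instances

        aggregate_tps = sum(per_instance_tps)
        efficiency = aggregate_tps / (len(per_instance_tps) * single_instance_tps)

        print(f"Aggregate throughput: {aggregate_tps} transactions/s")
        print(f"Efficiency vs. ideal linear scaling: {efficiency:.0%}")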

  • 2935.
    Shokri-Ghadikolaei, Hossein
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Popovski, Petar
    Aalborg University, Denmark.
    Zorzi, Michele
    Design aspects of short range millimeter wave networks: A MAC layer perspective2016In: IEEE Network, ISSN 0890-8044, E-ISSN 1558-156X, Vol. 30, no 3, p. 88-96Article in journal (Refereed)
    Abstract [en]

    Increased density of wireless devices, ever growing demands for extremely high data rate, and spectrum scarcity at microwave bands make the millimeter wave (mmWave) frequencies an important player in future wireless networks. However, mmWave communication systems exhibit severe attenuation, blockage, deafness, and may need microwave networks for coordination and fall-back support. To compensate for high attenuation, mmWave systems exploit highly directional operation, which in turn substantially reduces the interference footprint. The significant differences between mmWave networks and legacy communication technologies challenge the classical design approaches, especially at the medium access control (MAC) layer, which has received comparatively less attention than PHY and propagation issues in the literature so far. In this paper, the MAC layer design aspects of short-range mmWave networks are discussed. In particular, we explain why current mmWave standards fail to fully exploit the potential advantages of short range mmWave technology, and argue for the necessity of new collision-aware hybrid resource allocation frameworks with on-demand control messages, the advantages of a collision notification message, and the potential of multihop communication to provide reliable mmWave connections.

  • 2936.
    Shokri-Ghadikolaei, Hossein
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Gkatzikis, Lazaros
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Beam-searching and Transmission Scheduling in Millimeter Wave Communications2015In: 2015 IEEE International Conference on Communications (ICC), IEEE conference proceedings, 2015, Vol. 2015, p. 1292-1297Conference paper (Refereed)
    Abstract [en]

    Millimeter wave (mmWave) wireless networks rely on narrow beams to support multi-gigabit data rates. Nevertheless, the alignment of transmitter and receiver beams is a time consuming operation, which introduces an alignment-throughput tradeoff. A wider beamwidth reduces the alignment overhead, but leads also to reduced directivity gains. Moreover, existing mmWave standards schedule a single transmission in each time slot, although directional communications facilitate multiple concurrent transmissions. In this paper, a joint consideration of the problems of beamwidth selection and scheduling is proposed to maximize effective network throughput. The resulting optimization problem requires exact knowledge of network topology, which may not be available in practice. Therefore, two standard compliant approximation algorithms are developed, which rely on underestimation and overestimation of interference. The first one aims to maximize the reuse of available spectrum, whereas the second one is a more conservative approach that schedules together only links that cause no interference. Extensive performance analysis provides useful insights on the directionality level and the number of concurrent transmissions that should be pursued. Interestingly, extremely narrow beams are in general not optimal.
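    The alignment-throughput tradeoff mentioned above can be made concrete with a simple slot model; this is an illustration under assumed notation, not the paper's formulation. If a slot of length T spends T_align(θ) on beam alignment and carries data at rate R(θ) for the remainder, the effective throughput is

        \[
          R_{\text{eff}}(\theta) = \Bigl(1 - \frac{T_{\text{align}}(\theta)}{T}\Bigr)\, R(\theta),
        \]

    where a narrower beamwidth θ increases both the data rate R(θ) (higher directivity gain) and the alignment time T_align(θ) (more beam pairs to search), so the product is typically maximized at an intermediate beamwidth rather than at the narrowest one.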

  • 2937.
    Shrestha, Shilu
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Software Modeling in Cyber-Physical Systems2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A Cyber-Physical System (CPS) has a tight integration of computation, networking and physical process. It is a heterogeneous system that combines multiple domains consisting of both hardware and software systems. Cyber subsystems in the CPS implement the control strategy that affects the physical process. Therefore, software systems in the CPS are more complex.

    Visualization of a complex system provides a method of understanding complex systems by accumulating, grouping, and displaying components of systems in such a manner that they may be understood more efficiently just by viewing the model rather than understanding the code. Graphical representation of complex systems provides an intuitive and comprehensive way to understand the system.

    OpenModelica is the open source development environment based on the Modelica modeling and simulation language that consists of several interconnected subsystems. OMEdit is one of the subsystems integrated into OpenModelica. It is a graphical user interface for graphical modeling. It consists of tools that allow the user to create their own shapes and icons for the model.

    This thesis presents a methodology that provides an easy way of understanding the structure and execution of programs written in an imperative language such as C through a graphical Modelica model.

  • 2938.
    Sibomana, Louis
    et al.
    Blekinge Institute of Technology, Sweden.
    Hung, Tran
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. École de Technologie Supérieure, Canada.
    Jepernick, Hans-jurgen
    Blekinge Institute of Technology, Sweden.
    On the outage capacity of an underlay cognitive radio network2016Conference paper (Refereed)
  • 2939.
    Sibomana, Louis
    et al.
    Blekinge Institute of Technology, Sweden.
    Jepernick, Hans-jurgen
    Blekinge Institute of Technology, Sweden.
    Hung, Tran
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Kabiri, Charles
    University of Rwanda, Rwanda.
    A Framework for Packet Delay Analysis of Point-to-Multipoint Underlay Cognitive Radio Networks2017In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 16, no 9, p. 2408-2421Article in journal (Refereed)
    Abstract [en]

    This paper presents a queueing analytical framework for the performance evaluation of the secondary user (SU) packet transmission with service differentiation in a point-to-multipoint underlay cognitive radio network. The transmit power of the SU transmitter is subject to the joint outage constraint imposed by the primary user receivers (PU-Rxs) and the SU maximum transmit power limit. The analysis considers a queueing model for secondary traffic with multiple classes, and different types of arrival and service processes under a non-preemptive priority service discipline. The SU quality of service (QoS) is characterized by a packet timeout threshold and target bit error rate. Given these settings, analytical expressions of the packet timeout probability and average transmission time are derived for opportunistic and multicast scheduling. Moreover, expressions of the average packet waiting time in the queue and the total time in the system for each class of traffic are obtained. Numerical examples are provided to illustrate the secondary network performance with respect to various parameters such as number of PU-Rxs and SU receivers, SU packet arrival process, QoS requirements, and the impact of interference from the primary network to the secondary network.
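    For orientation, the classical mean waiting time of an M/G/1 queue under a non-preemptive priority discipline (Cobham's formula) gives the flavour of the per-class expressions derived in such an analysis; the paper's own expressions additionally capture the wireless service process and the outage-constrained transmit power, so the formula below is only a hedged reference point, not a result from the paper.

        \[
          W_k = \frac{\sum_i \lambda_i \, \mathbb{E}[S_i^2] / 2}
                     {\bigl(1 - \sigma_{k-1}\bigr)\bigl(1 - \sigma_k\bigr)},
          \qquad
          \sigma_k = \sum_{i \le k} \lambda_i \, \mathbb{E}[S_i],
        \]

    where class 1 has the highest priority, \lambda_i is the arrival rate of class i, and S_i is its service time.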

  • 2940.
    Sibomana, Louis
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Tran, Hung
    Tran, Quang Anh
    Impact of secondary user communication on security communication of primary user2015In: Security and Communication Networks, ISSN 1939-0114, E-ISSN 1939-0122, Vol. 8, no 18, p. 4177-4190Article in journal (Refereed)
    Abstract [en]

    The cognitive radio network concept has been considered as a promising solution to improve the spectrum utilization. However, it may be vulnerable to security problems as the primary user (PU) and secondary user (SU) access the same resource. In this paper, we consider a system model where an eavesdropper (EAV) illegally listens to the PU communication in the presence of an SU transmitter (SU-Tx) communicating with an SU receiver (SU-Rx). The SU-Tx transmit power is subject to the peak transmit power constraint of the SU and the outage probability constraint of the PU. Given this context, the effect of the interference from the SU-Tx to the EAV on the primary system security is investigated. In particular, analytical expressions of the probability of existence of non-zero secrecy capacity and the secrecy outage probability of the PU are derived. Moreover, the performance analysis of the secondary network is examined where closed-form expressions of the symbol error probability and achievable rate are presented. Numerical examples are provided to evaluate the impact of the primary system parameters and channel conditions among users on the system performance of secondary and primary networks. Interestingly, our results reveal that the security of the primary network strongly depends on the channel condition of the SU-Tx to the EAV link and the transmit power policy of the SU-Tx. Copyright (C) 2015 John Wiley & Sons, Ltd.
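    The two secrecy metrics analysed above have standard definitions in the physical-layer security literature; in the usual notation (a hedged reminder, not the paper's exact expressions, which also account for the SU-Tx interference and the fading statistics), with \gamma_D the SINR at the legitimate primary receiver and \gamma_E the SINR at the eavesdropper,

        \[
          C_s = \bigl[\log_2(1+\gamma_D) - \log_2(1+\gamma_E)\bigr]^{+},
          \qquad
          \Pr\{C_s > 0\} = \Pr\{\gamma_D > \gamma_E\},
          \qquad
          P_{\text{so}} = \Pr\{C_s < R_s\},
        \]

    where R_s is the target secrecy rate. Interference from the SU-Tx enters through the SINRs, which is why the SU-Tx-to-EAV channel and the SU transmit power policy matter for primary security.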

  • 2941.
    Siddique, Mahmood
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Experimental study of message latencies on FlexRay bus2013Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE creditsStudent thesis
    Abstract [en]

    FlexRay is a real-time communication protocol for automotive networks which was developed by a consortium of over a hundred automotive companies. A communication cycle in FlexRay includes an event-driven segment known as the dynamic segment. We have studied the delays suffered by messages that are transmitted in the dynamic segment of FlexRay. To achieve this goal we used FlexRay nodes connected in a bus topology. We used various messages with the same payload and different transmission intervals. While transmitting, we stamped each frame with a message instance counter, the cycle number and the message ID. On the receiving side, we stamped each received message with the current cycle number in order to obtain the delay. Transmission delays are measured in terms of communication cycles, obtained by taking the difference between the transmitting cycle and the receiving cycle. The experimental results showed that higher-priority messages are delayed by at most one cycle if they are placed in the transmission buffer after their slot has passed, whereas lower-priority messages suffer longer delays, or may not be sent at all, if the higher-priority messages have shorter transmission periods than the lower-priority ones.
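    The cycle-difference measurement described above can be sketched in a few lines of Python; the wrap-around handling assumes the FlexRay cycle counter counts 0-63, which is an assumption of this illustration rather than a detail stated in the thesis.

        # Illustrative sketch of the cycle-based delay measurement described above.
        # Assumes a cycle counter that wraps at 64 (0..63).
        CYCLE_MODULUS = 64

        def delay_in_cycles(tx_cycle: int, rx_cycle: int) -> int:
            """Delay of one message instance, in communication cycles."""
            return (rx_cycle - tx_cycle) % CYCLE_MODULUS

        # Example: a frame stamped in cycle 62 and received in cycle 1 was delayed 3 cycles.
        print(delay_in_cycles(62, 1))   # -> 3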

  • 2942.
    Siddiqui, Afzal
    et al.
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Fleten, Stein-Erik
    How to proceed with competing alternative energy technologies: A real options analysis2010In: Energy Economics, ISSN 0140-9883, E-ISSN 1873-6181, Vol. 32, no 4, p. 817-830Article in journal (Refereed)
    Abstract [en]

    Concerns about CO2 emissions create incentives for the development and deployment of energy technologies that do not use fossil fuels. Indeed, such technologies would provide tangible benefits in terms of avoided fossil-fuel costs, which are likely to increase as restrictions on CO2 emissions are imposed. However, a number of challenges need to be overcome prior to market deployment, and the commercialisation of alternative energy technologies may require a staged approach given price and technical risk. We analyse how a firm may proceed with staged commercialisation and deployment of competing alternative energy technologies. An unconventional new alternative technology is one possibility, where one could undertake cost-reducing production enhancement measures as an intermediate step prior to deployment. By contrast, the firm could choose to deploy a smaller-scale existing renewable energy technology, and, using the real options framework, we compare the two projects to provide managerial implications on how one might proceed.

  • 2943.
    Sidenvall, Adrian
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Truong, Andy
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Boman, Erik
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Szreder, Mikael
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Lind, Oskar
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Eriksson, Simon
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Petersson, Simon
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Visualisering av svarstider mellan mobila klienter och datacenter2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

    This report is a compilation of the group's collected experiences and lessons learned from a project whose purpose was to develop a tool for visualizing response times between mobile clients and data centers. By using continuous prototyping, the group was able to work in a user-oriented way in order to meet the customer's real needs. To achieve this, the Kanban development methodology was used. During the course of the project, the methodology was adapted to better fit the work.

    The project's user tests led to summarized experiences about data visualization. Visualizations that the project group considered clear were not always perceived in the same way by the users. Visualizing several parameters on a world map is considered problematic since the map itself consists only of countries. To visualize several parameters, additional external elements, such as circles or other shapes, must therefore also be used.

  • 2944.
    Sidorova, Yulia
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Garcia, J.
    Bridging from syntactic to statistical methods: Classification with automatically segmented features from sequences2015In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 48, no 11, p. 3749-3756Article in journal (Refereed)
    Abstract [en]

    To integrate the benefits of statistical methods into syntactic pattern recognition, a Bridging Approach is proposed: (i) acquisition of a grammar per recognition class; (ii) comparison of the obtained grammars in order to find substructures of interest represented as sequences of terminal and/or non-terminal symbols and filling the feature vector with their counts; (iii) hierarchical feature selection and hierarchical classification, deducing and accounting for the domain taxonomy. The bridging approach has the benefits of syntactic methods: preserves structural relations and gives insights into the problem. Yet, it does not imply distance calculations and, thus, saves a non-trivial task-dependent design step. Instead it relies on statistical classification from many features. Our experiments concern a difficult problem of chemical toxicity prediction. The code and the data set are open-source. (C) 2015 Elsevier Ltd. All rights reserved.
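    Steps (ii) and (iii) of the Bridging Approach - counting occurrences of selected substructures in each sequence and classifying on the resulting count vectors - can be sketched as below. The substructures here are hand-picked toy fragments and the classifier is an off-the-shelf decision tree; the paper instead derives the substructures by comparing per-class grammars and uses hierarchical feature selection, neither of which this sketch reproduces.

        # Illustrative sketch of the feature-vector and classification stages only.
        from sklearn.tree import DecisionTreeClassifier

        substructures = ["CC", "CO", "NO"]          # hypothetical fragments of interest

        def feature_vector(sequence):
            return [sequence.count(s) for s in substructures]

        train_sequences = ["CCOCC", "NOCNO", "CCCC", "NONO"]   # toy data
        train_labels = [0, 1, 0, 1]

        clf = DecisionTreeClassifier(random_state=0)
        clf.fit([feature_vector(s) for s in train_sequences], train_labels)
        print(clf.predict([feature_vector("CCNOCC")]))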

  • 2945.
    Siek, Jeremy
    et al.
    Rice University, Houston, TX 77005, United States.
    Taha, Walid
    Rice University, Houston, TX 77005, United States.
    A Semantic Analysis of C++ Templates2006In: ECOOP 2006 - Object-Oriented Programming: 20th European Conference, Nantes, France, July 3-7, 2006, Proceedings / [ed] Dave Thomas, Heidelberg: Springer, 2006, p. 304-327Conference paper (Refereed)
    Abstract [en]

    Templates are a powerful but poorly understood feature of the C++ language. Their syntax resembles the parameterized classes of other languages (e.g., of Java). But because C++ supports template specialization, their semantics is quite different from that of parameterized classes. Template specialization provides a Turing-complete sub-language within C++ that executes at compile-time. Programmers put this power to many uses. For example, templates are a popular tool for writing program generators. The C++ Standard defines the semantics of templates using natural language, so it is prone to misinterpretation. The meta-theoretic properties of C++ templates have not been studied, so the semantics of templates has not been systematically checked for errors. In this paper we present the first formal account of C++ templates including some of the more complex aspects, such as template partial specialization. We validate our semantics by proving type safety and verify the proof with the Isabelle proof assistant. Our formalization reveals two interesting issues in the C++ Standard: the first is a problem with member instantiation and the second concerns the generation of unnecessary template specializations.

  • 2946.
    Siek, Jeremy
    et al.
    University of Colorado, Boulder, CO, USA.
    Taha, Walid
    Rice University, Houston, TX, USA.
    Gradual Typing for Functional Languages2007Conference paper (Refereed)
    Abstract [en]

    Static and dynamic type systems have well-known strengths and weaknesses. In previous work we developed a gradual type system for a functional calculus named $\lambda^?_\to$. Gradual typing provides the benefits of both static and dynamic checking in a single language by allowing the programmer to control whether a portion of the program is type checked at compile-time or run-time by adding or removing type annotations on variables. Several object-oriented scripting languages are preparing to add static checking. To support that work this paper develops $\mathbf{Ob}^{?}_{<:}$, a gradual type system for object-based languages, extending the Ob<: calculus of Abadi and Cardelli. Our primary contribution is to show that gradual typing and subtyping are orthogonal and can be combined in a principled fashion. We also develop a small-step semantics, provide a machine-checked proof of type safety, and improve the space efficiency of higher-order casts.
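    Python's optional type annotations give a rough, hedged analogue of the gradual typing idea described above (this is an analogy, not the λ?→ or Ob?<: calculi): annotated code can be checked statically by an external tool such as mypy, while unannotated code is constrained only at run time.

        # Rough analogue of gradual typing using Python's optional annotations.
        def add_static(x: int, y: int) -> int:     # statically checkable portion
            return x + y

        def add_dynamic(x, y):                     # dynamically checked portion
            return x + y

        print(add_static(1, 2))        # 3
        # add_static("a", "b")         # flagged by a static checker such as mypy
        print(add_dynamic("a", "b"))   # "ab": no static constraint, resolved at run time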

  • 2947.
    Siek, Jeremy
    et al.
    University of Colorado, Boulder, CO 80309, United States.
    Taha, Walid
    Rice University, Houston, TX 77005, United States.
    Gradual typing for objects2007In: ECOOP 2007 – Object-Oriented Programming: 21st European Conference, Berlin, Germany, July 30 - August 3, 2007. Proceedings / [ed] Erik Ernst, Berlin: Springer Berlin/Heidelberg, 2007, p. 2-27Conference paper (Refereed)
    Abstract [en]

    Static and dynamic type systems have well-known strengths and weaknesses. In previous work we developed a gradual type system for a functional calculus named λ?→. Gradual typing provides the benefits of both static and dynamic checking in a single language by allowing the programmer to control whether a portion of the program is type checked at compile-time or run-time by adding or removing type annotations on variables. Several object-oriented scripting languages are preparing to add static checking. To support that work this paper develops Ob?<:, a gradual type system for object-based languages, extending the Ob<: calculus of Abadi and Cardelli. Our primary contribution is to show that gradual typing and subtyping are orthogonal and can be combined in a principled fashion. We also develop a small-step semantics, provide a machine-checked proof of type safety, and improve the space efficiency of higher-order casts.

  • 2948.
    Sigurjonsson, Sindri Már Kaldal
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Blockchain Use for Data Provenance in Scientific Workflow2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In scientific workflows, data provenance plays a big part. Through data provenance, the execution of the workflow is documented and information about the data pieces involved is stored. This can be used to reproduce scientific experiments or to prove how the results from the workflow came to be. It is therefore vital that the provenance data stored in the provenance database is always synchronized with its corresponding workflow, to verify that the provenance database has not been tampered with. The blockchain technology has been gaining a lot of attention in recent years since Satoshi Nakamoto released his Bitcoin paper in 2009. The blockchain technology consists of a peer-to-peer network in which an append-only ledger is stored and replicated across the network, and it offers high tamper-resistance through its consensus protocols. In this thesis, the question of whether the blockchain technology is a suitable solution for synchronizing a workflow with its provenance data was explored. A system that generates a workflow, based on a definition written in a Domain Specific Language, was extended to utilize the blockchain technology to synchronize the workflow itself and its results. Furthermore, the InterPlanetary File System was utilized to assist with the versioning of individual executions of the workflow. The InterPlanetary File System provided the functionality of comparing individual workflow executions in more detail and discovering how they differ. The solution was analyzed with respect to the 21 CFR Part 11 regulations imposed by the FDA in order to see how it could assist with fulfilling the requirements of the regulations. Analysis of the system shows that the blockchain extension can be used to verify whether the synchronization between a workflow and its results has been tampered with. Experiments revealed that the size of the workflow did not have a significant effect on the execution time of the extension. Additionally, the proposed solution offers a constant cost in digital currency regardless of the workflow. However, even though the extension shows some promise of assisting with fulfilling the requirements of the 21 CFR Part 11 regulations, analysis revealed that the extension does not fully comply with them due to the complexity of the regulations.
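    A minimal sketch of the synchronization idea follows; this is our illustration, not the thesis's actual extension, which writes to a real blockchain network and versions artefacts in IPFS. The idea is to hash the workflow definition together with its results, append the hash to an append-only chain, and later detect tampering with the provenance database by recomputing the hash.

        # Toy illustration of anchoring workflow provenance in an append-only chain.
        import hashlib, json

        def block_hash(prev_hash: str, payload: dict) -> str:
            data = prev_hash + json.dumps(payload, sort_keys=True)
            return hashlib.sha256(data.encode()).hexdigest()

        chain = ["0" * 64]                                   # genesis entry
        record = {"workflow": "wf-definition-v1",            # hypothetical identifiers
                  "results": "results-file-digest"}
        chain.append(block_hash(chain[-1], record))

        # Later: recompute and compare to detect tampering with stored provenance.
        assert chain[-1] == block_hash(chain[0], record)
        print("provenance record verified")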

  • 2949.
    Siigur, Alexander
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Hjärpe, Johan
    Mälardalen University, School of Innovation, Design and Engineering.
    Superfighters Deluxe: A Game Utilizing Web Technologies2012Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The purpose of this project was to provide a deeper understanding of web services, particularly in the context of an online multi-player game. This paper describes the design and implementation of certain features of the action game Superfighters Deluxe. These features include a dynamic website, a user account system, a multi-player game server browser and a system for automatic software updates. This thesis reviews strengths and weaknesses of web technologies such as ASP.NET and PHP and database technologies such as MySQL and SQL Server. The technologies used in this project were ASP.NET and MySQL. We determined that these technologies are powerful, reliable and flexible tools for creating web applications. The conclusions of this research are supported by the results of internal and public testing of the applications.

  • 2950.
    Silbovitz, Anna
    et al.
    Massachusetts Institute of Technol, USA.
    Lundqvist, Kristina
    Massachusetts Institute of Technol, USA.
    A Hardware Implementation of a Ravenscar-Compliant Run-Time Kernel2003In: AIAA/IEEE Digital Avionics Systems Conference - Proceedings, Volume 1, 2003, p. 3.A.3/1-3.A.3/10Conference paper (Other academic)