301 - 350 of 2979
  • 301.
    Balador, Ali
    et al.
    Univ. Politec. de Valencia, Valencia, Spain.
    Calafate, Carlos T.
    Univ. Politec. de Valencia, Valencia, Spain.
    Cano, Juan-Carlos
    Univ. Politec. de Valencia, Valencia, Spain.
    Manzoni, Pietro
    Univ. Politec. de Valencia, Valencia, Spain.
    Reducing Channel Contention in Vehicular Environments Through an Adaptive Contention Window Solution (2013). In: IFIP Wireless Days WD 2013, 2013. Conference paper (Refereed)
    Abstract [en]

    Intelligent Transportation Systems (ITS) are attracting growing attention in both industry and academia due to advances in wireless communication technologies, and significant demand for a wide variety of applications targeting this kind of environment is expected. Making ITS usable in real vehicular environments requires a well-designed Medium Access Control (MAC) protocol, which is challenging due to the dynamic nature of Vehicular Ad Hoc Networks (VANETs), scalability issues, and the variety of application requirements. Different standardization organizations have selected IEEE 802.11 as the first choice for VANET environments considering its availability, maturity, and cost. The contention window is a critical parameter for handling medium access collisions in the IEEE 802.11 MAC protocol, and it strongly affects communication performance. The impact of adjusting the contention window has been studied in Mobile Ad Hoc Networks (MANETs), but the vehicular communications community has not yet addressed this issue thoroughly. This paper proposes a new contention window control scheme, called DBM-ACW, for VANET environments. Analysis and simulation results using OMNeT++ in a highway scenario show that DBM-ACW provides better overall performance compared with previous proposals, even with high network densities.

  • 302.
    Balador, Ali
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ericsson, Niclas
    Bakhshi, Zeynab
    Communication Middleware Technologies for Industrial Distributed Control Systems: A Literature Review (2017). In: International Conference on Emerging Technologies And Factory Automation ETFA'17, 2017. Conference paper (Refereed)
    Abstract [en]

    Industry 4.0 is the German vision for the future of manufacturing, where smart factories use information and communication technologies to digitise their processes to achieve improved quality, lower costs, and increased efficiency. It is likely to bring a massive change to the way control systems function today. Future distributed control systems are expected to have an increased connectivity to the Internet, in order to capitalize on new offers and research findings related to digitalization, such as cloud, big data, and machine learning. A key technology in the realization of distributed control systems is middleware, which is usually described as a reusable software layer between operating system and distributed applications. Various middleware technologies have been proposed to facilitate communication in industrial control systems and hide the heterogeneity amongst the subsystems, such as OPC UA, DDS, and RT-CORBA. These technologies can significantly simplify the system design and integration of devices despite their heterogeneity. However, each of these technologies has its own characteristics that may work better for particular applications. Selection of the best middleware for a specific application is a critical issue for system designers. In this paper, we conduct a survey on available standard middleware technologies, including OPC UA, DDS, and RT-CORBA, and show new trends for different industrial domains.

  • 303.
    Balasubramanian, S.M.N
    et al.
    Technische Universiteit Eindhoven, Eindhoven, Netherlands.
    Afshar, Sara Zargari
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Gai, Paolo
    Evidence Srl, Pisa, Italy.
    Bril, Reinder J.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Technische Universiteit Eindhoven, Eindhoven, Netherlands.
    A dual shared stack for FSLM in Erika enterprise (2017). In: The 23rd IEEE International Conference on Embedded and Real-Time Computing Systems and Applications - WiP Session RTCSA'17, 2017. Conference paper (Refereed)
    Abstract [en]

    Recently, the flexible spin-lock model (FSLM) has been introduced, unifying spin-based and suspension-based resource sharing protocols for real-time multi-core platforms. Unlike the multiprocessor stack resource policy (MSRP), FSLM doesn’t allow tasks on a core to share a single stack, however. In this paper, we present a hypothesis claiming that for a restricted range of spin-lock priorities, FSLM requires only two stacks. We briefly describe our implementation of a dual stack for FSLM in the Erika Enterprise RTOS as instantiated on an Altera Nios II platform using 4 soft-core processors.

  • 304. Balasubramanian, S.M.N
    et al.
    Afshar, Sara Zargari
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Gai, Paolo
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    J. Bril, Reinder
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Incorporating implementation overheads in the analysis for the flexible spin-lock model (2017). In: 43rd Annual Conference of the IEEE Industrial Electronics Society IECON 2017, 2017. Conference paper (Refereed)
  • 305.
    Balatinac, Ivan
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Radosevic, Iva
    Mälardalen University, School of Innovation, Design and Engineering.
    Architecting for the cloud (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cloud Computing is an emerging computing paradigm developed out of the service-orientation, grid computing, parallel computing, utility computing, autonomic computing, and virtualization paradigms. Both industry and academia have experienced its rapid growth and are exploring full usage of its potential to maintain the services they provide to customers and partners. In this context, a key aspect to investigate is how to architect or design cloud-based applications that meet the various system requirements of customers' needs. In this thesis, we have applied the systematic literature review method to explore the main concerns when architecting for the cloud. We have identified, classified, and extracted existing approaches and solutions for specific concerns based on the existing research articles that focus on planning and providing cloud architecture or design for different concerns and needs. The main contribution of the thesis is a catalogue of architecture solutions for managing specific concerns when architecting for the cloud.

  • 306.
    Baldini, Gianmarco
    et al.
    Institute for the Protection and Security of the Citizen (IPSC), Italy.
    Kounelis, Ioannis
    Institute for the Protection and Security of the Citizen (IPSC), Italy.
    Nai Fovino, Igor
    Institute for the Protection and Security of the Citizen (IPSC), Italy.
    Neisse, Ricardo
    Institute for the Protection and Security of the Citizen (IPSC), Italy.
    A Framework for Privacy Protection and Usage Control of Personal Data in a Smart City Scenario (2013). In: Critical Information Infrastructures Security: 8th International Workshop, CRITIS 2013, Amsterdam, The Netherlands, September 16-18, 2013, Revised Selected Papers, Springer Publishing Company, 2013, p. 212-217. Conference paper (Refereed)
    Abstract [en]

    In this paper we address trust and privacy protection issues related to identity and personal data provided by citizens in a smart city environment. Our proposed solution combines identity management, trust negotiation, and usage control. We demonstrate our solution in a case study of a smart city during a crisis situation.

  • 307.
    Balliu, Musard
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Logics for Information Flow Security: From Specification to Verification (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software is becoming  increasingly  ubiquitous and today we find software running everywhere. There is software driving our favorite  game  application or  inside the web portal we use to read the morning  news, and   when we book a vacation.  Being so commonplace, software has become an easy target to compromise  maliciously or at best to get it wrong. In fact, recent trends and highly-publicized attacks suggest that vulnerable software  is at  the root of many security attacks.     

    Information flow security is the research field that studies methods and techniques to provide strong security guarantees against software security attacks and vulnerabilities. The goal of an information flow analysis is to rigorously check how sensitive information is used by the software application and ensure that this information does not escape the boundaries of the application, unless it is properly granted permission to do so by the security policy at hand. This process can be challenging as it first requires determining what the application's security policy is and then providing a mechanism to enforce that policy against the software application. In this thesis we address the problem of (information flow) policy specification and policy enforcement by leveraging formal methods, in particular logics and language-based analysis and verification techniques.

    The thesis contributes to the state of the art of information flow security in several directions, both theoretical and practical. On the policy specification side, we provide a  framework to reason about  information flow security conditions using the notion of knowledge. This is accompanied  by logics that  can be used  to express the security policies precisely in a syntactical manner. Also, we study the interplay between confidentiality and integrity  to enforce security in  presence of active attacks.  On the verification side, we provide several symbolic algorithms to effectively check whether an application adheres to the associated security policy. To achieve this,  we propose techniques  based on symbolic execution and first-order reasoning (SMT solving) to first extract a model of the target application and then verify it against the policy.  On the practical side, we provide  tool support by automating our techniques and  thereby making it possible  to verify programs written in Java or ARM machine code.  Besides the expected limitations, our case studies show that the tools can be used to  verify the security of several realistic scenarios.

    More specifically, the thesis consists of two parts and six chapters. We start with an introduction giving an overview of the research problems and the results of the thesis. Then we move to the specification part which  relies on knowledge-based reasoning and epistemic logics to specify state-based and trace-based information flow conditions and on the weakest precondition calculus to certify security in  presence of active attacks.  The second part of the thesis addresses the problem of verification  of the security policies introduced in the first part.  We use symbolic execution  and  SMT solving techniques to enable   model checking of the security properties.  In particular, we implement a tool that verifies noninterference  and declassification policies for Java programs. Finally, we conclude with relational verification of low level code, which is also supported by a tool.

  • 308.
    Balliu, Musard
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Dam, Mads
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Guanciale, Roberto
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Automating Information Flow Analysis of Low Level Code (2014). In: Proceedings of CCS’14, November 3–7, 2014, Scottsdale, Arizona, USA, Association for Computing Machinery (ACM), 2014. Conference paper (Refereed)
    Abstract [en]

    Low level code is challenging: It lacks structure, it uses jumps and symbolic addresses, the control flow is often highly optimized, and registers and memory locations may be reused in ways that make typing extremely challenging. Information flow properties create additional complications: They are hyperproperties relating multiple executions, and the possibility of interrupts and concurrency, and use of devices and features like memory-mapped I/O requires a departure from the usual initial-state final-state account of noninterference. In this work we propose a novel approach to relational verification for machine code. Verification goals are expressed as equivalence of traces decorated with observation points. Relational verification conditions are propagated between observation points using symbolic execution, and discharged using first-order reasoning. We have implemented an automated tool that integrates with SMT solvers to automate the verification task. The tool transforms ARMv7 binaries into an intermediate, architecture-independent format using the BAP toolset by means of a verified translator. We demonstrate the capabilities of the tool on a separation kernel system call handler, which mixes hand-written assembly with gcc-optimized output, a UART device driver and a crypto service modular exponentiation routine.
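    As a rough illustration of the core idea above, discharging a two-run (relational) verification condition with an SMT solver, the sketch below uses the Z3 Python bindings on a toy straight-line program; it is not the paper's BAP/HOL4 toolchain, and the example program is invented.

```python
# Minimal sketch (not the paper's toolchain): checking a two-run
# noninterference condition for a toy program with the Z3 SMT solver.
from z3 import BitVec, Solver, sat

def output(pub, sec):
    # Toy "program": the observable output as a function of a public
    # and a secret 32-bit input; the secret is masked out here.
    return pub * 2 + (sec & 0)

pub1, sec1 = BitVec('pub1', 32), BitVec('sec1', 32)
pub2, sec2 = BitVec('pub2', 32), BitVec('sec2', 32)

s = Solver()
s.add(pub1 == pub2)                               # runs agree on public input
s.add(output(pub1, sec1) != output(pub2, sec2))   # but observations differ

if s.check() == sat:
    print("information flow found:", s.model())
else:
    print("no flow from secret to output for this toy program")
```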

  • 309.
    Banaee, Hadi
    et al.
    Örebro University, School of Science and Technology.
    Ahmed, Mobyen Uddin
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    Towards NLG for Physiological Data Monitoring with Body Area Networks (2013). In: 14th European Workshop on Natural Language Generation, 2013, p. 193-197. Conference paper (Refereed)
    Abstract [en]

    This position paper presents an on-going work on a natural language generation framework that is particularly tailored for natural language generation from body area networks. We present an overview of the main challenges when considering this type of sensor devices used for at-home monitoring of health parameters. The paper presents the first steps towards the implementation of a system which collects information from heart rate and respiration using a wearable sensor.

  • 310.
    Bandaru, Vamsi Krishna
    et al.
    Örebro University, School of Science and Technology.
    Balasubramanian, Rajasekaran
    Örebro University, School of Science and Technology.
    OBJECT RECOGNITION USING DIALOGUES AND SEMANTIC ANCHORING (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This report explains in detail the implemented system containing a robot and a sensor network that is deployed in a test apartment in an elderly residence area. The report focuses on the creation and maintenance (anchoring) of the connection between the semantic information present in the dialogue and perceived actual physical objects in the home. Semantic knowledge about concepts and their correlations is retrieved from online resources and ontologies, e.g. WordNet, and sensor information is provided by cameras distributed in the apartment.

  • 311.
    Bankarusamy, Sudhangathan
    Mälardalen University, School of Innovation, Design and Engineering.
    Towards hardware accelerated rectification of high speed stereo image streams (2017). Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis
    Abstract [en]

    The process of combining two views of a scene in order to obtain depth information is called stereo vision; when this is done using a computer it is called computer stereo vision. Stereo vision is used in robotic applications where the depth of an object plays a role. Two cameras mounted on a rig are called a stereo camera system. Such a system captures two views and enables robotic applications to use the depth information to complete tasks. Anomalies are bound to occur in such a stereo rig when the two cameras are not parallel to each other, since mounting the cameras on a rig accurately has physical alignment limitations. Images taken from such a rig have inaccurate depth information and have to be rectified; rectification is therefore a prerequisite to computer stereo vision. One such stereo rig used in this thesis is the GIMME2 stereo camera system. The system has two 10-megapixel cameras with an on-board FPGA, RAM, a processor running the Linux operating system, multiple Ethernet ports and an SD card slot, amongst other features. Stereo rectification on memory-constrained hardware is a challenging task as the process itself requires both images to be stored in memory. The FPGA on the GIMME2 system must be used in order to achieve the best possible speed. Programming a system that does not have a display and is used for a specific purpose is called embedded programming; the purpose of this system is distance estimation, and working with such a system falls within the Embedded Systems programme. This thesis presents a method that brings rectification a step forward for this particular system. The functionality of the algorithm is demonstrated in MATLAB and in VHDL, and is compared to available tools and systems.
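    The thesis targets an FPGA/VHDL implementation; purely as a point of reference, the following sketch shows the standard software rectification step with OpenCV, assuming the calibration parameters (camera matrices, distortion coefficients, and the rotation and translation between the cameras) have already been estimated elsewhere.

```python
# Minimal software sketch of stereo rectification with OpenCV
# (illustrative only; the thesis targets an FPGA/VHDL implementation).
# K1, K2, d1, d2, R, T are assumed to come from a prior calibration step.
import cv2

def rectify_pair(img_l, img_r, K1, d1, K2, d2, R, T):
    h, w = img_l.shape[:2]
    # Compute rectification transforms and projection matrices for both cameras.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)
    # Build per-camera remapping tables and warp the images.
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map2x, map2y, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q  # Q can later be used for depth reprojection
```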

  • 312.
    Bao, Yan
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture (Closed 20120101), Software and Computer Systems, SCS (Closed 20120101).
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture (Closed 20120101), Software and Computer Systems, SCS (Closed 20120101).
    An Implementation of Cache-Coherence for the Nios II™ Soft-core Processor (2009). Conference paper (Refereed)
    Abstract [en]

    Soft-core programmable processors mapped onto field-programmable gate arrays (FPGAs) can be considered equivalents to a microcontroller. They combine central processing units (CPUs), caches, memories, and peripherals on a single chip. Soft-core processors represent an increasingly common embedded software implementation option. Modern FPGA soft-cores are parameterized to support application-specific customization. However, these soft-core processors are designed to be used in uniprocessor systems, not in multiprocessor systems. This project describes an implementation that solves the cache coherency problem in an ALTERA Nios II soft-core multiprocessor system.

  • 313. Bardizbanyan, Alen
    et al.
    Själander, Magnus
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Architecture and Computer Communication.
    Whalley, David
    Larsson-Edefors, Per
    Improving data access efficiency by using context-aware loads and stores (2015). In: Proc. 16th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems, New York: ACM Press, 2015, p. 27-36. Conference paper (Refereed)
    Abstract [en]

    Memory operations have a significant impact on both performance and energy usage even when an access hits in the level-one data cache (L1 DC). Load instructions in particular affect performance as they frequently result in stalls since the register to be loaded is often referenced before the data is available in the pipeline. L1 DC accesses also impact energy usage as they typically require significantly more energy than a register file access. Despite their impact on performance and energy usage, L1 DC accesses on most processors are performed in a general fashion without regard to the context in which the load or store operation is performed. We describe a set of techniques where the compiler enhances load and store instructions so that they can be executed with fewer stalls and/or enable the L1 DC to be accessed in a more energy-efficient manner. We show that using these techniques can simultaneously achieve a 6% gain in performance and a 43% reduction in L1 DC energy usage.

  • 314.
    Barf, Jochen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Development and Implementation of an Image-Processing-Based Horizon Sensor for Sounding Rockets (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 315.
    Barkah, Dani
    et al.
    Volvo Construction Equipment AB, Eskilstuna, Sweden.
    Ermedahl, Andreas
    Mälardalen University, Department of Computer Science and Electronics.
    Gustafsson, Jan
    Mälardalen University, Department of Computer Science and Electronics.
    Lisper, Björn
    Mälardalen University, Department of Computer Science and Electronics.
    Sandberg, Christer
    Mälardalen University, Department of Computer Science and Electronics.
    Evaluation of Automatic Flow Analysis for WCET Calculation on Industrial Real-Time System Code (2008). In: Proceedings - Euromicro Conference on Real-Time Systems, 2008, p. 331-340. Conference paper (Refereed)
    Abstract [en]

    A static Worst-Case Execution Time (WCET) analysis derives upper bounds for the execution times of programs. Such analysis requires information about the possible program flows. The current practice is to provide this information manually, which can be laborious and error-prone. An alternative is to derive this information through an automated flow analysis. In this article, we present a case study where an automatic flow analysis method was tested on industrial real-time system code. The same code was the subject of an earlier WCET case study, where it was analysed using manual annotations for the flow information. The purpose of the current study was to see to what extent the same flow information could be found automatically. The results show that for the most part this is indeed possible, and we could derive comparable WCET estimates using the automatically generated flow information. In addition, valuable insights were gained on what is needed to make flow analysis methods work on real production code.

  • 316.
    Barkland, Lars-Erik
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
    Norder, Kim
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science.
    En studie av Scrum för två personer och utveckling av mobilt gränssnitt [A study of Scrum for two people and development of a mobile interface] (2014). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    A big part of software development is the method that controls the flow of the creative process. Experience has shown how a lack of organisation can have a profoundly negative effect on the development work and, finally, on the product itself. We take the Scrum method in a two-person form and analyze how the small-scale version affects the roles, artefacts and activities associated with the original Scrum method. The result of the work using small-scale Scrum is presented along with the changes that were made during the work. The result and the final discussion show a positive effect of using a structured development method. Lately, the use of small-screen devices and their use of the Internet have increased widely. With this change in user interface come new challenges in designing a Web that handles a wide variety of user devices. In the process of developing a web application, techniques for small-screen interfaces were analyzed, and Responsive Design was found to be the best choice considering the relevant limitations. Design principles from the concept of Responsive Design are analyzed and applied.

  • 317. Barkowsky, M.
    et al.
    Wang, Kun
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Cousseau, R.
    Brunnstrom, K.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Le Callet, P.
    Subjective quality assessment of error concealment strategies for 3DTV in the presence of asymmetric transmission errors (2010). In: Proceedings of the 2010 IEEE 18th International Packet Video Workshop (PV), 2010, p. 193-200. Conference paper (Refereed)
    Abstract [en]

    The transmission of 3DTV sequences over packet based networks may result in degradations of the video quality due to packet loss. In the conventional 2D case, several different strategies are known for extrapolating the missing information and thus concealing the error. In 3D, however, the residual error after concealment of one view might lead to binocular rivalry with the correctly received second view. In this paper, three simple alternatives are presented: frame freezing, a reduced playback speed, and displaying only a single view for both eyes, thus effectively switching to 2D presentation. In a subjective experiment the performance in terms of quality of experience of the three methods is evaluated for different packet loss scenarios. Error-free encoded videos at different bit rates have been included as anchor conditions. The subjective experiment method contains special precautions for measuring the Quality of Experience (QoE) for 3D content and also contains an indicator for visual discomfort. The results indicate that switching to 2D is currently the best choice, but difficulties with visual discomfort should be expected even for this method.

  • 318.
    Barkowsky, Marcus
    et al.
    LUNAM Université, Université de Nantes, IRCCyN UMR CNRS 6597, France .
    Li, Jing
    LUNAM Université, Université de Nantes, IRCCyN UMR CNRS 6597, France .
    Han, Taehwan
    Youn, Sungwook
    Ok, Jiheon
    Lee, Chulhee
    Hedberg, Christer
    Dept. of Netlab, Acreo Swedish ICT AB, Sweden .
    Ananth, Indirajith Vijai
    Dept. of Netlab, Acreo Swedish ICT AB, Sweden .
    Wang, Kun
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Computer and System science. Dept. of Netlab, Acreo Swedish ICT AB, Sweden .
    Brunnström, Kjell
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. Dept. of Netlab, Acreo Swedish ICT AB, Sweden .
    Le Callet, Patrick
    LUNAM Université, Université de Nantes, IRCCyN UMR CNRS 6597, France .
    Towards standardized 3DTV QoE assessment: Cross-lab study on display technology and viewing environment parameters (2013). In: Proceedings of SPIE - The International Society for Optical Engineering, 2013, Art. no. 864809. Conference paper (Refereed)
    Abstract [en]

    Subjective assessment of Quality of Experience in stereoscopic 3D requires new guidelines for the environmental setup as existing standards such as ITU-R BT. 500 may no longer be appropriate. A first step is to perform cross-lab experiments in different viewing conditions on the same video sequences. Three international labs performed Absolute Category Rating studies on a freely available video database containing degradations that are mainly related to video quality degradations. Different conditions have been used in the labs: Passive polarized displays, active shutter displays, differences in viewing distance, the number of parallel viewers, and the voting device. Implicit variations were introduced due to the three different languages in Sweden, South Korea, and France. Although the obtained Mean Opinion Scores are comparable, slight differences occur in function of the video degradations and the viewing distance. An analysis on the statistical differences obtained between the MOS of the video sequences revealed that obtaining an equivalent number of differences may require more observers in some viewing conditions. It was also seen that the alignment of the meaning of the attributes used in Absolute Category Rating in different languages may be beneficial. Statistical analysis was performed showing influence of the viewing distance on votes and MOS results.

  • 319.
    Barriga, L.
    et al.
    KTH, Superseded Departments, Teleinformatics.
    Brorsson, Mats
    Lund university.
    Ayani, Rassul
    KTH, Superseded Departments, Teleinformatics.
    Hybrid Parallel Simulation of Distributed Shared-Memory Architectures (1996). Report (Other academic)
  • 320.
    Barriga, Luis
    et al.
    KTH, Superseded Departments, Teleinformatics.
    Brorsson, Mats
    Lund university.
    Ayani, Rassul
    KTH, Superseded Departments, Teleinformatics.
    A model for parallel simulation of distributed shared memory (1996). Conference paper (Refereed)
    Abstract [en]

    We present an execution model for parallel simulation of a distributed shared memory architecture. The model captures the processor-memory interaction and abstracts the memory subsystem. Using this model we show how parallel, on-line, partially-ordered memory traces can be correctly predicted without interacting with the memory subsystem. We also outline a parallel optimistic memory simulator that uses these traces, finds a global order among all events, and returns correct data and timing to each processor. A first evaluation of the amount of concurrency that our model can extract for an ideal multiprocessor shows that processors may execute relatively long instruction sequences without violating the causality constraints. However parallel simulation efficiency is highly dependent on the memory consistency model and the application characteristics.

  • 321.
    Bartosz, Michalek
    et al.
    Katholieke Universiteit Leuven, Belgium .
    Weyns, Danny
    Katholieke Universiteit Leuven, Belgium .
    Towards a Solution for Change Impact Analysis of Software Product Line Products (2011). In: VARSA Workshop; Software Architecture (WICSA), 2011 9th Working IEEE/IFIP Conference on, IEEE Communications Society, 2011, p. 290-293. Conference paper (Refereed)
    Abstract [en]

    Despite the fact that some practitioners and researchers report success stories of Software Product Line (SPL) adaptation, the evolution of SPLs remains challenging. In our research we study a specific aspect of SPL adaptation, namely the updating of deployed products. Our particular focus is on the correct execution of updates and minimal interruption of services during the updates. The update process has two stages. First, the products affected by the evolution must be identified. We call this stage SPL-wide change impact analysis. In the second stage, each of the affected products has to be updated. In our previous work we have addressed the second stage of the update process. In this paper we report on our early results of the first stage: change impact analysis. We discuss how existing variability models can be employed to support automated identification of the products that require an update. The discussion is illustrated with examples from an educational SPL that we are developing at K.U. Leuven.

  • 322.
    Barua, Shaibal
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Classifying drivers' cognitive load using EEG signals (2017). In: Studies in Health Technology and Informatics, ISSN 0926-9630, E-ISSN 1879-8365, Vol. 237, p. 99-106. Article in journal (Refereed)
    Abstract [en]

    A growing traffic safety issue is the effect of cognitively loading activities on traffic safety and driving performance. Understanding cognitive load is important for monitoring drivers' mental state, since performing cognitively loading secondary tasks while driving, for example talking on the phone, can affect performance in the primary task, i.e. driving. Electroencephalography (EEG) is one of the reliable measures of cognitive load that can detect changes in instantaneous load and the effect of a cognitively loading secondary task. In this driving simulator study, a 1-back task is carried out while the driver performs three different simulated driving scenarios. This paper presents an EEG-based approach to classify a driver's level of cognitive load using Case-Based Reasoning (CBR). The results show that for each individual scenario, as well as using data combined from the different scenarios, the CBR-based system achieved over 70% classification accuracy. © 2017 The authors and IOS Press.

  • 323.
    Barua, Shaibal
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Clustering based Approach for Automated EEG Artifacts Handling (2015). In: Frontiers in Artificial Intelligence and Applications, vol. 278, 2015, p. 7-16. Conference paper (Refereed)
    Abstract [en]

    The electroencephalogram (EEG) measures the neural activity of the central nervous system and is widely used in diagnosing brain activity; it therefore plays a vital role in clinical and Brain-Computer Interface applications. However, analysis of the EEG signal is often complex since the recording is often contaminated with noise or artifacts, such as ocular and muscle artifacts, which could mislead the diagnosis result. Therefore, identifying artifacts in the EEG signal and handling them in a proper way is becoming an important and interesting research area. This paper presents an automated EEG artifact handling approach that combines Independent Component Analysis (ICA) with a 2nd order clustering approach. Here, the 2nd order clustering approach combines the Hierarchical and Gaussian Picture Model clustering algorithms. The effectiveness of the proposed approach has been examined and observed on real EEG recordings. According to the results, the artifacts in the EEG signals are identified and removed successfully, and the cleaned EEG signal appears acceptable under visual inspection.
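    The paper's 2nd order clustering scheme is not reproduced here; the sketch below only illustrates the general ICA-plus-clustering idea with scikit-learn and SciPy, using per-component kurtosis and variance as invented stand-in features for artifact detection.

```python
# Minimal sketch of ICA followed by clustering of the estimated components
# (illustrative only; not the paper's 2nd-order clustering scheme).
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.cluster import AgglomerativeClustering

def remove_artifact_components(eeg, n_components=None):
    """eeg: array of shape (n_samples, n_channels)."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)            # (n_samples, n_components)

    # Simple per-component features; artifact components tend to be
    # heavy-tailed (high kurtosis) and high-variance.
    feats = np.column_stack([kurtosis(sources, axis=0),
                             sources.var(axis=0)])

    # Split components into two groups with hierarchical clustering and
    # treat the group with the higher mean kurtosis as artifacts.
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(feats)
    art = int(np.argmax([feats[labels == k, 0].mean() for k in (0, 1)]))

    cleaned_sources = sources.copy()
    cleaned_sources[:, labels == art] = 0.0     # zero out artifact components
    return ica.inverse_transform(cleaned_sources)
```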

  • 324.
    Barua, Shaibal
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Intelligent Automated EEG Artifacts Handling Using Wavelet Transform, Independent Component Analysis and Hierarchical clustering (2015). Conference paper (Refereed)
    Abstract [en]

    Billions of interconnected neurons are the building blocks of the human brain. For each brain activity these neurons produce electrical signals, or brain waves, that can be obtained by electroencephalogram (EEG) recording. Due to the characteristics of the EEG signal, the recorded signal is often contaminated with undesired physiological signals other than the cerebral signal, referred to as EEG artifacts, such as ocular or muscle artifacts. Therefore, identifying artifacts in the EEG signal and handling them in a proper way is becoming an important research area. This paper presents an automated EEG artifact handling approach that combines the Wavelet transform and Independent Component Analysis (ICA) with a hierarchical clustering method. The effectiveness of the proposed approach has been examined and observed on real EEG recordings. According to the results, the artifacts in the EEG signals are identified and removed successfully, and after artifact handling the EEG signals appear acceptable under visual inspection.

  • 325.
    Barua, Shaibal
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Scalable Framework for Distributed Case-based Reasoning for Big data analytics (2017). In: 4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 2017. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a scalable framework for a distributed case-based reasoning methodology to provide actionable knowledge based on large amounts of historical data. Through its five-module architecture, the framework addresses several challenges, i.e., promptly analysing big data, cross-domain and use-case specific data processing, multi-source case representation, dynamic case management, uncertainty, and checking the plausibility of a solution after adaptation. The architecture supports distributed data analytics and is intended to provide solutions under different conditions, i.e. data size, velocity, variety, etc.

  • 326.
    Barua, Shaibal
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Supervised Machine Learning Algorithms to Diagnose Stress for Vehicle Drivers Based on Physiological Sensor Signals (2015). In: Studies in Health Technology and Informatics, Volume 211: Proceedings of the 12th International Conference on Wearable Micro and Nano Technologies for Personalized Health, 2–4 June 2015, Västerås, Sweden, 2015, Vol. 211, p. 241-248. Conference paper (Refereed)
    Abstract [en]

    Machine learning algorithms play an important role in computer science research. Recent advancements in sensor data collection in the clinical sciences lead to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data is difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work to compare the performance of different machine learning methods to help select the appropriate method for a specific characteristic of the data sets. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks and support vector machines, to diagnose stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning has achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability. In contrast, both the neural network and the support vector machine have achieved less than 80% accuracy using both physiological signals.
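    As an illustration of this kind of comparison (not the paper's actual experiment), the sketch below evaluates three scikit-learn classifiers by cross-validation, with k-nearest-neighbour retrieval as a simplified stand-in for case-based reasoning; the feature matrix X and stress labels y are assumed to be prepared elsewhere.

```python
# Minimal sketch of comparing classifiers on physiological features
# (k-NN used as a simplified stand-in for case-based reasoning retrieval).
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X, y):
    """X: (n_samples, n_features) physiological features, y: stress labels."""
    models = {
        "CBR-like (k-NN)": KNeighborsClassifier(n_neighbors=3),
        "Neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
        "SVM": SVC(kernel="rbf"),
    }
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")
```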

  • 327.
    Barua, Shaibal
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahmed, Mobyen Uddin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ahlström, Christer
    The Swedish National Road and Transport Research Institute (VTI), Sweden.
    AUTOMATED EEG ARTIFACTS HANDLING FOR DRIVER SLEEPINESS MONITORING (2016). In: 2nd International Symposium on Somnolence, Vigilance, and Safety SomnoSafe2016, 2016. Conference paper (Refereed)
  • 328.
    Bashir, Imran
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Visualizing Complex Data Using Timeline (2012). Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This thesis introduces the idea of visualizing complex data using a timeline for problem solving and analyzing a huge database. The database contains information about vehicles, which are continuously sending information about driving behavior, current location, driver activities, etc. Data complexity can be resolved by data visualization, where the user can see this complex data in the abstract form of a timeline visualization. Visualizing complex data using a timeline might help to monitor and track different time-dependent activities. We developed a web application to monitor and track monthly, weekly, and daily activities, which helps in decision making and understanding complex data.

  • 329.
    Bashir, Shariq
    et al.
    Mohammad Ali Jinnah University, Islamabad, Pakistan.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Baig, Rauf
    Al Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia.
    Opinion-based entity ranking using learning to rank (2016). In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 38, no 1, p. 151-163. Article in journal (Refereed)
    Abstract [en]

    As social media and e-commerce on the Internet continue to grow, opinions have become one of the most important sources of information for users to base their future decisions on. Unfortunately, the large quantities of opinions make it difficult for an individual to comprehend and evaluate them all in a reasonable amount of time. The users have to read a large number of opinions of different entities before making any decision. Recently a new retrieval task in information retrieval known as Opinion-Based Entity Ranking (OpER) has emerged. OpER directly ranks relevant entities based on how well opinions on them are matched with a user's preferences that are given in the form of queries. With such a capability, users do not need to read a large number of opinions available for the entities. Previous research on OpER does not take into account the importance and subjectivity of query keywords in individual opinions of an entity. Entity relevance scores are computed primarily on the basis of occurrences of query keyword matches, treating all opinions of an entity as a single field of text. Intuitively, entities that have positive judgments and strong relevance with query keywords should be ranked higher than those entities that have poor relevance and negative judgments. This paper outlines several ranking features and develops an intuitive framework for OpER in which entities are ranked according to how well individual opinions of entities are matched with the user's query keywords. As a useful ranking model may be constructed from many ranking features, we apply a learning-to-rank approach based on genetic programming (GP) to combine features in order to develop an effective retrieval model for the OpER task. The proposed approach is evaluated on two collections and is found to be significantly more effective than the standard OpER approach.

  • 330. Basirat, Ali
    et al.
    Faili, Heshaam
    Bridge the gap between statistical and hand-crafted grammars (2013). In: Computer speech & language (Print), ISSN 0885-2308, E-ISSN 1095-8363, Vol. 27, no 5, p. 1085-1104. Article in journal (Refereed)
    Abstract [en]

    LTAG is a rich formalism for performing NLP tasks such as semantic interpretation, parsing, machine translation and information retrieval. Depending on the specific NLP task, different kinds of LTAGs for a language may be developed. Each of these LTAGs is enriched with some specific features, such as semantic representation and statistical information, that make them suitable to be used in that task. The distribution of these capabilities among the LTAGs makes it difficult to get the benefit from all of them in NLP applications.

    This paper discusses a statistical model to bridge between two kinds of LTAGs for a natural language in order to benefit from the capabilities of both kinds. To do so, an HMM was trained that links an elementary tree sequence of a source LTAG to an elementary tree sequence of a target LTAG. Training was performed using the standard HMM training algorithm called Baum–Welch. To lead the training algorithm to a better solution, the initial state of the HMM was also trained by a novel EM-based semi-supervised bootstrapping algorithm.

    The model was tested on two English LTAGs, XTAG (XTAG-Group, 2001) and MICA's grammar (Bangalore et al., 2009) as the target and source LTAGs, respectively. The empirical results confirm that the model can provide a satisfactory way for linking these LTAGs to share their capabilities together.
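    For readers unfamiliar with the HMM machinery referred to above, the sketch below shows the forward recursion that underlies Baum-Welch training and decoding, on an invented toy model; it is not the paper's LTAG-linking model.

```python
# Minimal sketch of the HMM forward algorithm (the core recursion behind
# Baum-Welch training); states and observations here are just small integers.
import numpy as np

def forward(pi, A, B, obs):
    """pi: (N,) initial probs, A: (N, N) transitions,
    B: (N, M) emission probs, obs: sequence of observation indices.
    Returns the likelihood P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Toy example: 2 hidden states, 3 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(forward(pi, A, B, [0, 1, 2]))
```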

  • 331.
    Basirat, Ali
    et al.
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Languages, Department of Linguistics and Philology.
    Nivre, Joakim
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Languages, Department of Linguistics and Philology.
    Greedy Universal Dependency Parsing with Right Singular Word Vectors (2016). Conference paper (Refereed)
    Abstract [en]

    A set of continuous feature vectors formed by right singular vectors of a transformed co-occurrence matrix are used with the Stanford neural dependency parser to train parsing models for a limited number of languages in the corpus of universal dependencies. We show that the feature vector can help the parser to remain greedy and be as accurate as (or even more accurate than) some other greedy and non-greedy parsers.
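    A minimal sketch of the underlying construction, deriving word vectors from the right singular vectors of a word-context co-occurrence matrix, is given below; the transformation the paper applies to the matrix before the decomposition is omitted, and the toy counts are invented.

```python
# Minimal sketch: word vectors from the right singular vectors of a
# word-context co-occurrence matrix (the paper's matrix transformation
# is omitted here for brevity).
import numpy as np

def right_singular_word_vectors(cooc, dim):
    """cooc: (n_contexts, n_words) co-occurrence counts.
    Returns an (n_words, dim) matrix of word vectors."""
    # Rows of Vt are the right singular vectors, one per word/column.
    _, _, vt = np.linalg.svd(cooc, full_matrices=False)
    return vt[:dim].T

# Toy example: 4 contexts x 5 words.
cooc = np.array([[2, 0, 1, 0, 0],
                 [0, 3, 0, 1, 0],
                 [1, 0, 2, 0, 1],
                 [0, 1, 0, 2, 1]], dtype=float)
vectors = right_singular_word_vectors(cooc, dim=2)
print(vectors.shape)   # (5, 2)
```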

  • 332.
    Baskar, Jayalakshmi
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Lindgren, Helena
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Human-Agent Dialogues on Health Topics - An Evaluation Study (2015). In: Highlights of practical applications of agents, multi-agent systems, and sustainability: The PAAMS Collection, PAAMS 2015, 2015, p. 28-39. Conference paper (Refereed)
    Abstract [en]

    A common conversation between an older adult and a nurse about health-related issues includes topics such as troubles with sleep, reasons for walking around nighttime, pain conditions, etc. This dialogue emerges from the participating human's lines of thinking, their roles, needs and motives, while switching between topics as the dialogue unfolds. This paper presents a dialogue system that enables a human to engage in a dialogue with a software agent to reason about health-related issues in a home environment. The purpose of this work is to conduct a pilot evaluation study of a prototype system for human-agent dialogues, which is built upon a set of semantic models and integrated in a web application designed for older adults. Focus of the study was to receive qualitative results regarding purpose and content of the agent-based dialogue system, and to evaluate a method for the agent to evaluate its behavior based on the human agent's perception of appropriateness of moves. The participants include five therapists and 11 older adults. The results show users' feedback on the purpose of dialogues and the appropriateness of dialogues presented to them during the interaction with the software agent.

  • 333.
    Baucke, Stephan
    et al.
    Ericsson Research, Germany.
    Grinnemo, Karl-Johan
    Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science.
    Ludwig, Reiner
    Ericsson Research, Germany.
    Brunström, Anna
    Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science.
    Wolisz, Adam
    Department of Electrical Engineering, Technical University Berlin, Germany.
    Using Relaxed Timer Backoff to Reduce SCTP Failover Times. Manuscript (Other academic)
  • 334.
    Baumann, Christoph
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Schwarz, Oliver
    RISE SICS.
    Dam, Mads
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Compositional Verification of Security Properties for Embedded Execution Platforms (2017). In: PROOFS 2017: 6th International Workshop on Security Proofs for Embedded Systems / [ed] Ulrich Kühne and Jean-Luc Danger and Sylvain Guilley, 2017, Vol. 49, p. 1-16. Conference paper (Refereed)
    Abstract [en]

    The security of embedded systems can be dramatically improved through the use of formally verified isolation mechanisms such as separation kernels, hypervisors, or microkernels. For trustworthiness, particularly for system level behaviour, the verifications need precise models of the underlying hardware. Such models are hard to attain, highly complex, and proofs of their security properties may not easily apply to similar but different platforms. This may render verification economically infeasible. To address these issues, we propose a compositional top-down approach to embedded system specification and verification, where the system-on-chip is modeled as a network of distributed automata communicating via paired synchronous message passing. Using abstract specifications for each component allows to delay the development of detailed models for cores, devices, etc., while still being able to verify high level security properties like integrity and confidentiality, and soundly refine the result for different instantiations of the abstract components at a later stage. As a case study, we apply this methodology to the verification of information flow security for an industry scale security-oriented hypervisor on the ARMv8-A platform. The hypervisor statically assigns (multiple) cores to each guest system and implements a rudimentary, but usable, inter guest communication discipline. We have completed a pen-and-paper security proof for the hypervisor down to state transition level and report on a partially completed verification of guest mode security in the HOL4 theorem prover.

  • 335.
    Baumgart, Stephan
    et al.
    Volvo Construction Equipment, Eskilstuna, Sweden.
    Fröberg, Joakim
    Mälardalen University, School of Innovation, Design and Engineering, Innovation and Product Realisation.
    Punnekkat, Sasikumar
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. BIT-Pilani KK Birla Goa Campus, India.
    Enhancing Model-Based Engineering of Product Lines by Adding Functional Safety (2015). In: CEUR Workshop Proceedings, vol. 1487, 2015, p. 53-62. Conference paper (Refereed)
    Abstract [en]

    Today's industrial product lines in the automotive and construction equipment domain face the challenge to show functional safety standard compliance and argue for the absence of failures for all derived product variants. The product line approaches are not sufficient to support practitioners to trace safety-related characteristics through development. We aim to provide aid in creating a safety case for a certain configuration in a product line such that overall less effort is necessary for each configuration. In this paper we 1) discuss the impact of functional safety on product line development, 2) propose a model-based approach to capture safety-related characteristics during concept phase for product lines and 3) analyze the usefulness of our proposal.

  • 336.
    Baumgart, Stephan
    et al.
    Volvo Construction Equipment, Eskilstuna, Sweden.
    Fröberg, Joakim
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Punnekkat, Susikumar
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Analyzing Hazards in System-of-Systems: Described in a Quarry Site Automation Context (2017). In: 11th Annual IEEE International Systems Conference SysCon, 2017, p. 544-551. Conference paper (Refereed)
    Abstract [en]

    Methods for analyzing hazards related to individual systems are well studied and established in industry today. When system-of-systems are set up to achieve new emergent behavior, hazards specifically caused by malfunctioning behavior of the complex interactions between the involved systems may not be revealed by just analyzing single system hazards. A structured process is required to reduce the complexity to enable identification of hazards when designing system-of-systems. In this paper we first present how hazards are identified and analyzed using hazard and risk assessment (HARA) methodology by the industry in the context of single systems. We describe systems-of-systems and provide a quarry site automation example from the construction equipment domain. We propose a new structured process for identifying potential hazards in systems-of-systems (HISoS), exemplified in the context of the provided example. Our approach helps to streamline the hazard analysis process in an efficient manner thus helping faster certification of system-of-systems.

  • 337.
    Baumgart, Stephan
    et al.
    Volvo Construction Equipment, Eskilstuna, Sweden.
    Parmeza, Ditmar
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Predicting the Effort for Functional Safety in Product Lines (2015). In: The 41st Euromicro Conference on Software Engineering and Advanced Applications SEAA'15, 2015. Conference paper (Refereed)
  • 338.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Dasari, Dakshina
    Research and Technology Centre, Robert Bosch, India.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    MECHAniSer - A Timing Analysis and Synthesis Tool for Multi-Rate Effect Chains with Job-Level Dependencies (2016). In: 7th International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems WATERS'16, 2016. Conference paper (Refereed)
    Abstract [en]

    Many industrial embedded systems have timing constraints on the data propagation through a chain of independent tasks. These tasks can execute at different periods which leads to under and oversampling of data. In such situations, understanding and validating the temporal correctness of end-to-end delays is not trivial. Many industrial areas further face distributed development where different functionalities are integrated on the same platform after the development process. The large effect of scheduling decisions on the end-to-end delays can lead to expensive redesigns of software parts due to the lack of analysis at early design stages. Job-level dependencies is one solution for this challenge and means of scheduling such systems are available. In this paper we present MECHAniSer, a tool targeting the early analysis of end-to-end delays in multi-rate cause effect chains with specified job-level dependencies. The tool further provides the possibility to synthesize job-level dependencies for a set of cause-effect chains in a way such that all end-to-end requirements are met. The usability and applicability of the tool to industrial problems is demonstrated via a case study.
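    MECHAniSer's own analysis is not reproduced here; as a point of reference, the sketch below computes a classic pessimistic upper bound on the end-to-end latency of a register-based multi-rate chain by summing one period plus one worst-case response time per task, on an invented example chain.

```python
# Rough illustration only: a classic pessimistic upper bound on the
# end-to-end latency of a multi-rate cause-effect chain with register
# communication, summing period + worst-case response time per task.
# (MECHAniSer's analysis with job-level dependencies is far more precise.)

def end_to_end_latency_bound(chain):
    """chain: list of (period, worst_case_response_time) tuples,
    in the order data propagates through the tasks."""
    return sum(period + wcrt for period, wcrt in chain)

# Hypothetical three-task chain: 10 ms -> 20 ms -> 5 ms tasks.
chain = [(10.0, 2.0), (20.0, 6.0), (5.0, 1.0)]
print(end_to_end_latency_bound(chain), "ms")   # 44.0 ms
```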

  • 339.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Dasari, Dakshina
    Robert Bosch GmbH, Renningen, Germany.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Timing Analysis and Synthesis of Mixed Multi-Rate Effect Chains in MECHAniSer2016In: Open Demo Session of Real-Time Systems located at Real Time Systems Symposium (RTSS) RTSS@Work 2016, 2016Conference paper (Refereed)
    Abstract [en]

    The majority of embedded control systems are modeled with several chains of independently triggered tasks, also known as multi-rate effect chains. These chains often have stringent end-to-end timing requirements that must be satisfied before running the system. MECHAniSer is one of the tools that supports end-to-end timing analysis of such chains. In addition, the tool provides the possibility to synthesize job-level dependencies for these chains such that all end-to-end timing requirements are satisfied. In this paper we showcase an extension of MECHAniSer that supports the analysis of mixed chains, which contain a mix of independent and dependent tasks.

  • 340.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Dasari, Dakshina
    Research and Technology Centre, Robert Bosch, India.
    Nicolic, Borislav
    CISTER, INESC-TEC, ISEP, Portugal .
    Åkesson, Benny
    CISTER, INESC-TEC, ISEP, Portugal .
    Nélis, Vincent
    CISTER/INESC-TEC, ISEP, Portugal.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Contention-Free Execution of Automotive Applications on a Clustered Many-Core Platform2016In: 28th Euromicro Conference on Real-Time Systems ECRTS'16, Toulouse, France, 2016, p. 14-24Conference paper (Refereed)
    Abstract [en]

    Next generations of compute-intensive real-time applications in automotive systems will require more powerful computing platforms. One promising power-efficient solution for such applications is to use clustered many-core architectures. However, ensuring that real-time requirements are satisfied in the presence of contention in shared resources, such as memories, remains an open issue. This work presents a novel contention-free execution framework to execute automotive applications on such platforms. Privatization of memory banks together with defined access phases to shared memory resources is the backbone of the framework. An Integer Linear Programming (ILP) formulation is presented to find the optimal time-triggered schedule for the on-core execution as well as for the access to shared memory. Additionally, a heuristic solution is presented that generates the schedule in a fraction of the time required by the ILP. Extensive evaluations show that the proposed heuristic performs only 0.5% away from the optimal solution while outperforming a baseline heuristic by 67%. The applicability of the approach to industrially sized problems is demonstrated in a case study of software for Engine Management Systems.
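
    Neither the ILP nor the heuristic of the paper is reproduced here; the sketch below only illustrates the underlying idea of phased, contention-free execution, in which read and write phases that touch shared memory are serialized globally while compute phases run from private banks. The task tuples, the naive first-come ordering, and the function name are assumptions for illustration.

        def phased_schedule(tasks):
            # Toy greedy, not the paper's ILP/heuristic: tasks are tuples
            # (name, core, read_len, exec_len, write_len). Read and write
            # phases use the shared memory and are serialized globally, so no
            # two memory phases ever overlap (contention-free by construction);
            # execution phases run from core-local banks.
            core_free = {}          # next free instant per core
            mem_free = 0            # next instant the shared memory is free
            schedule = []
            for name, core, read_len, exec_len, write_len in tasks:
                read_start = max(core_free.get(core, 0), mem_free)
                read_end = read_start + read_len
                mem_free = read_end                 # memory busy during the read
                exec_end = read_end + exec_len      # local execution, no shared memory
                write_start = max(exec_end, mem_free)
                write_end = write_start + write_len
                mem_free = write_end                # memory busy during the write
                core_free[core] = write_end
                schedule.append((name, read_start, read_end, exec_end, write_start, write_end))
            return schedule

        # Example: two tasks on different cores; their memory phases never overlap.
        for slot in phased_schedule([("t1", 0, 2, 5, 1), ("t2", 1, 3, 4, 2)]):
            print(slot)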

  • 341.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Liu, Meng
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Adaptive Routing of Real-Time Traffic on a 2D-Mesh Based NoC2015In: The 21st IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, WiP RTCSA-wip'15, 2015Conference paper (Refereed)
  • 342.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Extending Automotive Legacy Systems with Existing End-to-End Timing Constraints2017In: 14th International Conference on Information Technology : New Generations ITNG'17, 2017Conference paper (Refereed)
    Abstract [en]

    Developing automotive software is becoming increasingly challenging due to the continuous increase in its size and complexity. The development challenge is amplified when industrial requirements dictate extensions to the legacy (previously developed) automotive software while requiring that the existing timing requirements are still met. To cope with these challenges, sufficient techniques and tooling to support the modeling and timing analysis of such systems at earlier development phases are needed. Within this context, we focus on the extension of software component chains in the software architectures of automotive legacy systems. Selecting the sampling frequency, i.e., the period, for newly added software components is crucial to meet the timing requirements of the chains. The challenges in selecting periods are identified. It is further shown how to automatically assign periods to software components such that the end-to-end timing requirements are met while the runtime overhead is minimized. An industrial case study is presented that demonstrates the applicability of the proposed solution to industrial problems.
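
    As a hedged illustration of the period-selection trade-off, and not the assignment technique of the paper, the sketch below picks the largest candidate period (lowest runtime overhead) for which a crude sum-of-periods-plus-response-times bound on the extended chain still fits the end-to-end budget; the bound, candidate set, and function name are assumptions.

        def pick_period(chain_periods, chain_wcrts, new_wcrt, candidates, e2e_budget):
            # Illustrative rule of thumb, not the paper's assignment method:
            # bound the end-to-end delay of the extended chain by the sum of all
            # periods plus all worst-case response times, and return the largest
            # candidate period that keeps this bound within the budget.
            base = sum(chain_periods) + sum(chain_wcrts) + new_wcrt
            for period in sorted(candidates, reverse=True):   # largest period = least overhead
                if base + period <= e2e_budget:
                    return period
            return None                                       # no candidate fits

        # Example: a three-component chain extended by one component, 100 ms budget.
        print(pick_period([10, 20, 10], [2, 4, 3], 2, [40, 20, 10, 5], 100))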

  • 343.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Arcticus Systems, Järfälla, Sweden.
    Dasari, Dakshina
    Research and Technology Centre, Robert Bosch, India.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A Generic Framework Facilitating Early Analysis of Data Propagation Delays in Multi-Rate Systems2017In: The 23th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications RTCSA'17, 2017, article id 8046323Conference paper (Refereed)
    Abstract [en]

    A majority of multi-rate real-time systems are constrained by a multitude of timing requirements, in addition to the traditional deadlines on well-studied response times. This means that the timing predictability of these systems depends not only on the schedulability of certain task sets but also on the timely propagation of data through the chains of tasks from sensors to actuators. In the automotive industry, four different timing constraints corresponding to various data propagation delays are commonly specified on the systems. This paper identifies and addresses the sources of pessimism as well as optimism in the calculations for one such delay, namely the reaction delay, in the state-of-the-art analysis that is already implemented in several industrial tools. Furthermore, a generic framework is proposed to compute all four end-to-end data propagation delays, complying with the established delay semantics, in a scheduler- and hardware-agnostic manner. This allows analysis of the system models already at early development phases, where limited system information is present. The paper further introduces mechanisms to generate job-level dependencies, a partial ordering of jobs, which need to be satisfied by any execution platform in order to meet the data propagation timing requirements. The job-level dependencies are first added to all task chains of the system and then reduced to the minimum required set such that the job order is not affected. Moreover, a necessary schedulability test is provided, allowing for varying the number of CPUs. The experimental evaluations demonstrate the tightness in the reaction delay with the proposed framework as compared to the existing state-of-the-art and practice solutions.
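
    To make the notion of a job-level dependency concrete, the sketch below uses one possible representation, a pair ((producer task, producer job), (consumer task, consumer job)), and checks that a given schedule respects a set of such dependencies; the data layout and function name are illustrative assumptions rather than the framework's internal representation.

        def respects_dependencies(schedule, dependencies):
            # schedule: dict mapping (task, job_index) -> (start, finish).
            # dependencies: iterable of ((prod_task, prod_job), (cons_task, cons_job)).
            # A job-level dependency holds when the producer job finishes no
            # later than the consumer job starts, so the consumer reads the
            # producer's output. Illustrative representation only.
            for producer, consumer in dependencies:
                _, producer_finish = schedule[producer]
                consumer_start, _ = schedule[consumer]
                if producer_finish > consumer_start:
                    return False
            return True

        # Example: job 0 of 'sense' must complete before job 1 of 'control' starts.
        sched = {("sense", 0): (0, 2), ("control", 0): (1, 3), ("control", 1): (5, 7)}
        deps = [(("sense", 0), ("control", 1))]
        print(respects_dependencies(sched, deps))   # True, since 2 <= 5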

  • 344.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Dasari, Dakshina
    Research and Technology Centre, Robert Bosch, India.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Scheduling Multi-Rate Real-Time Applications on Clustered Many-Core Architectures with Memory ConstraintsIn: 23rd Asia and South Pacific Design Automation Conference ASP-DAC'18Conference paper (Refereed)
    Abstract [en]

    Access to shared memory is one of the main challenges for many-core processors. One group of scheduling strategies for such platforms focuses on the division of tasks' access to shared memory and code execution. This makes it possible to orchestrate the access to shared local and off-chip memory such that access contention between different compute cores is avoided by design. In this work, an execution framework is introduced that leverages local memory by statically allocating a subset of tasks to cores. This reduces the access times to shared memory, as off-chip memory access is avoided, and in turn improves the schedulability of such systems. A Constraint Programming (CP) formulation is presented that selects the statically allocated tasks and generates the complete system schedule. Evaluations show that the proposed approach yields an up to 21% higher schedulability ratio than related work, and a case study demonstrates its applicability to industrial problems.
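
    The paper's CP model also generates the complete system schedule; the sketch below only illustrates the static-allocation decision with a small model in Google OR-Tools CP-SAT. The choice of OR-Tools, the input data, and the objective of maximizing avoided off-chip access time are assumptions for illustration.

        # OR-Tools CP-SAT is an assumed solver choice; the paper does not
        # prescribe one, and its model also builds the full schedule.
        from ortools.sat.python import cp_model

        def allocate_to_local_memory(sizes, gains, capacity):
            # One boolean per task: place it in core-local memory or not,
            # subject to the local-memory capacity, maximizing the total
            # off-chip access time avoided.
            model = cp_model.CpModel()
            n = len(sizes)
            x = [model.NewBoolVar(f"local_{i}") for i in range(n)]
            model.Add(sum(sizes[i] * x[i] for i in range(n)) <= capacity)
            model.Maximize(sum(gains[i] * x[i] for i in range(n)))
            solver = cp_model.CpSolver()
            status = solver.Solve(model)
            if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
                return [i for i in range(n) if solver.Value(x[i])]
            return []

        # Example: four tasks competing for 10 units of local memory.
        print(allocate_to_local_memory([4, 3, 6, 2], [10, 7, 9, 3], 10))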

  • 345.
    Becker, Matthias
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nicolic, Borislav
    Technische Universität Braunschweig, Germany.
    Dasari, Dakshina
    Robert Bosch GmbH, Renningen, Germany.
    Åkesson, Benny
    CISTER/INESC-TEC, ISEP, Portugal.
    Nélis, Vincent
    CISTER/INESC-TEC, ISEP, Portugal.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Partitioning and Analysis of the Network-on-Chip on a COTS Many-Core Platform2017In: 23rd IEEE Real-Time and Embedded Technology and Applications Symposium RTAS'17, 2017, p. 101-112Conference paper (Refereed)
    Abstract [en]

    Many-core processors can provide the computational power required by future complex embedded systems. However, their adoption is not trivial, since several sources of interference on COTS many-core platforms have adverse effects on the resulting performance. One main source of performance degradation is the contention on the Network-on-Chip (NoC), which is used for communication among the compute cores via the off-chip memory. Available analysis techniques for the traversal time of messages on the NoC do not consider many of the architectural features found on COTS platforms. In this work, we target a state-of-the-art many-core processor, the Kalray MPPA®. A novel partitioning strategy for reducing the contention on the NoC is proposed. Further, we present an analysis technique dedicated to the proposed partitioning strategy, which considers all architectural features of the COTS NoC. Additionally, it is shown how to configure the parameters for flow regulation on the NoC such that the Worst-Case Traversal Time (WCTT) is minimal and buffers never overflow. The benefits of our approach are evaluated through extensive experiments that show that contention is significantly reduced compared to the unconstrained case, while the proposed analysis outperforms a state-of-the-art analysis for the same platform. An industrial case study shows the tightness of the proposed analysis.
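
    The Kalray-specific WCTT analysis and buffer dimensioning are not reproduced here; the sketch below only illustrates flow regulation itself, the mechanism whose parameters the paper configures: a (sigma, rho) token bucket that delays packet injections so a flow never exceeds its burst and rate contract. Names, units, and the FIFO assumption are illustrative.

        def regulate(packets, sigma, rho):
            # Illustrative (sigma, rho) token bucket, not the Kalray analysis:
            # packets is a FIFO list of (arrival_time, size_in_flits) with
            # non-decreasing arrivals; tokens refill at rate rho up to the burst
            # budget sigma, and a packet is injected only once enough tokens
            # for its whole size are available.
            tokens = float(sigma)
            last = 0.0
            injections = []
            for arrival, size in packets:
                t = max(arrival, last)                        # FIFO: wait for the previous packet
                tokens = min(sigma, tokens + (t - last) * rho)
                if tokens < size:                             # wait for the bucket to refill
                    t += (size - tokens) / rho
                    tokens = float(size)
                tokens -= size
                last = t
                injections.append(t)
            return injections

        # Example: a burst of three 4-flit packets at t=0 with sigma=8 flits, rho=1 flit/cycle.
        print(regulate([(0, 4), (0, 4), (0, 4)], sigma=8, rho=1.0))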

  • 346.
    Becker, Tova
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Etiska och moraliska dilemman vid hantering av personlig information.2016Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    What is considered ethically defensible has been debated for thousands of years. More and more perspectives are being taken into account, partly as a consequence of technological development. In recent years, the development of digital media has meant that personal information is valued and handled in new ways. The technology creates new opportunities, but at the same time there is a risk that personal information is misused. This can affect what is considered ethically defensible.

    This thesis concerns how personal information is handled with the help of digital technology. It examines whether IT users are aware that their personal information is collected, disseminated, and used to create a range of new individualized services. It explores whether this handling, and the new possibilities for individualizing services, give rise to dilemmas. Finally, recommendations that emerged during the study are compiled regarding what individuals can do if they want to reduce the risk of their personal information being misused.

    The study begins with a literature review that highlights how IT has affected our society and how personal information is handled given the possibilities of current technology. Among other things, it describes:

    • how companies collect user data in the background without clearly informing users about it
    • that profiles exist on all of us
    • that the information we receive is tailored to us as individuals
    • that personal information is easily accessible and searchable by anyone, including employers

    Ethics and morality should underlie all reasoning intended to determine what is right and wrong. Companies have guidelines for how they should act according to ethical and moral principles. Legislation and regulations exist, but information handling evolves quickly and is global, which makes the situation complex. Technology often develops faster than the regulations.

    The method for collecting empirical data consisted of two parts. Primary data was collected through interviews, and secondary data was gathered from the web and the media to obtain up-to-date material as a basis for the interviews. The interviews were conducted as semi-structured interviews in which computers were used to exemplify the data available about the respondent. The empirical material from the interviews provides a broad description of the subject area, even though the sample group was small. The collected data was analyzed by noting and compiling the answers and by looking for patterns and connections between different themes in the interviews.

    The conclusions show that awareness within the subject area is worryingly low. They also identify three dilemmas related to how technological development handles personal information: fact resistance, violation of personal privacy, and the design of laws, agreements, and norms. Finally, recommendations are compiled for those who want to reduce the risks associated with the handling of personal information.

    The thesis closes by discussing that only when the public begins to react can a discussion arise; only then can a change take place in which regulation of how individuals may be affected and how the technology may be used emerges.

  • 347.
    Beel, Joeran
    et al.
    Docear, Germany ; Konstanz University, Germany.
    Breitinger, Corinna
    Linnaeus University, Faculty of Technology, Department of Media Technology. Docear, Germany.
    Langer, Stefan
    Docear, Germany ; Otto-von-Guericke University, Germany.
    Lommatzsch, Andreas
    Technische Universität Berlin, Germany.
    Gipp, Bela
    Docear, Germany ; Konstanz University, Germany.
    Towards reproducibility in recommender-systems research2016In: User modeling and user-adapted interaction, ISSN 0924-1868, E-ISSN 1573-1391, Vol. 26, no 1, p. 69-101Article in journal (Refereed)
    Abstract [en]

    Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithm’s user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research. © 2016, Springer Science+Business Media Dordrecht.

  • 348.
    Begum, Shahina
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Ahmed, Mobyen Uddin
    Mälardalen University, Department of Computer Science and Electronics.
    Funk, Peter
    Mälardalen University, Department of Computer Science and Electronics.
    Xiong, Ning
    Mälardalen University, Department of Computer Science and Electronics.
    Folke, Mia
    Mälardalen University, Department of Computer Science and Electronics.
    von Schéele, Bo
    Mälardalen University, Department of Computer Science and Electronics.
    A computer-based system for the assessment and diagnosis of individual sensitivity to stress in Psychophysiology2007Conference paper (Refereed)
    Abstract [en]

    Increased exposure to stress may cause serious health problems leading to long-term sick leave if undiagnosed and untreated. The practice amongst clinicians of using a standardized procedure measuring blood pressure, ECG, finger temperature, breathing rate, etc. to make a reliable diagnosis of stress and stress sensitivity is increasing. But even with these measurements it is still difficult to make a diagnosis due to large individual variations. A computer-based system as a second option for the assessment and diagnosis of individual stress levels is valuable in this domain.

    A combined approach based on a calibration phase and case-based reasoning is proposed, exploiting data from finger temperature sensor readings from 24 individuals. In the calibration phase, a standard clinical procedure with six different steps helps to establish a person's stress profile and set up a number of individual parameters. When acquiring a new case, patients are also asked to provide a fuzzy evaluation of how reliable the procedure was in defining the case itself. Such a reliability "level" could be used to further discriminate among similar cases. The system extracts key features from the signal and classifies individual sensitivity to stress. These features are stored in a case library, and similarity measurements are taken to assess the degree of matching and create a ranked list containing the most similar cases retrieved using the nearest-neighbor algorithm.

    A current case (CC) is compared with two other stored cases (C_92 and C_115) in the case library. The global similarity shown by the system between case CC and case C_92 is 67%, and between case CC and case C_115 it is 80%. Case C_115 is therefore ranked higher than case C_92 and is more similar to the current case CC. If necessary, the solution of the best matching case can be revised by the clinician to fit the new patient. The current problem, with its confirmed solution, is then retained as a new case and added to the case library for future use.

    The system allows us to utilize previous experience and at the same time diagnose stress along with a stress sensitivity profile. This information enables the clinician to make a more informed decision about the treatment plan for the patient. Such a system may also be used to actively notify a person of their stress levels, even in the home environment.
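
    As a hedged illustration of the retrieval step described above, and not the authors' actual features or similarity functions, the sketch below ranks stored cases against a current case using a weighted nearest-neighbor global similarity built from simple per-feature local similarities.

        def local_similarity(a, b, feature_range):
            # Per-feature similarity: 1.0 when equal, 0.0 when a full feature
            # range apart (a common simple choice, not necessarily the paper's).
            return max(0.0, 1.0 - abs(a - b) / feature_range)

        def global_similarity(current, stored, weights, ranges):
            # Weighted average of local similarities over all features.
            total = sum(weights.values())
            return sum(weights[f] * local_similarity(current[f], stored[f], ranges[f])
                       for f in weights) / total

        def retrieve(current, case_library, weights, ranges, k=2):
            # Nearest-neighbor retrieval: rank stored cases by global similarity,
            # best first, and report the top k as (case id, similarity in %).
            ranked = sorted(case_library.items(),
                            key=lambda item: global_similarity(current, item[1], weights, ranges),
                            reverse=True)
            return [(name, round(100 * global_similarity(current, case, weights, ranges)))
                    for name, case in ranked[:k]]

        # Example with two hypothetical finger-temperature features.
        weights = {"start_temp": 1.0, "slope": 2.0}
        ranges = {"start_temp": 10.0, "slope": 2.0}
        library = {"C_92": {"start_temp": 30.0, "slope": -0.6},
                   "C_115": {"start_temp": 28.0, "slope": -0.2}}
        print(retrieve({"start_temp": 28.5, "slope": -0.1}, library, weights, ranges))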

  • 349.
    Begum, Shahina
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Ahmed, Mobyen Uddin
    Mälardalen University, Department of Computer Science and Electronics.
    Funk, Peter
    Mälardalen University, Department of Computer Science and Electronics.
    Xiong, Ning
    Mälardalen University, Department of Computer Science and Electronics.
    von Schéele, Bo
    Mälardalen University, Department of Computer Science and Electronics.
    Individualized Stress Diagnosis Using Calibration and Case-Based Reasoning2007In: Proceedings of the 24th annual workshop of the Swedish Artificial Intelligence Society, Borås, Sweden, 2007, p. 59-69Conference paper (Refereed)
    Abstract [en]

    Diagnosing stress is difficult even for experts due to large individual variations. Clinicians today use manual test procedures in which they measure blood pressure, ECG, finger temperature and breathing rate during a number of exercises. An experienced clinician makes a diagnosis based on the different readings shown on a computer screen. There are only very few experts who are able to diagnose and predict stress-related problems. In this paper we propose a combined approach based on a calibration phase and case-based reasoning to provide assistance in diagnosing stress, using data from finger temperature sensor readings. The calibration phase helps to establish a number of individual parameters. The system uses a case-based reasoning approach and also uses feedback on how well the patient succeeded with the different tests to give reliability estimates for similar cases.

  • 350.
    Begum, Shahina
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Ahmed, Mobyen Uddin
    Mälardalen University, Department of Computer Science and Electronics.
    Funk, Peter
    Mälardalen University, Department of Computer Science and Electronics.
    Xiong, Ning
    Mälardalen University, Department of Computer Science and Electronics.
    von Schéele, Bo
    Mälardalen University, Department of Computer Science and Electronics.
    Similarity of Medical Cases in Health Care Using Cosine Similarity and Ontology2007Conference paper (Refereed)
    Abstract [en]

    The increasing use of digital patient records in hospitals saves time and reduces the risk of wrong treatments caused by lack of information. Digital patient records also enable efficient spread and transfer of the experience gained from the diagnosis and treatment of individual patients. Today this is mostly manual (speaking with colleagues) and rarely aided by computerized systems. Most of the content in patient records is semi-structured textual information. In this paper we propose a hybrid textual case-based reasoning system promoting experience reuse based on structured or unstructured patient records, case-based reasoning, and a similarity measurement based on the cosine similarity metric, improved by a domain-specific ontology and the nearest-neighbor method. Not only are new cases learned; hospital staff can also add comments to existing cases, and the approach enables prototypical cases.
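
    As a minimal sketch of the cosine similarity measure mentioned above, leaving out the ontology-based improvement and the retrieval machinery built on top of it, the following compares two free-text case descriptions as term-frequency vectors; the example notes are hypothetical.

        import math
        from collections import Counter

        def cosine_similarity(text_a, text_b):
            # Cosine of the angle between term-frequency vectors of two texts:
            # 1.0 for identical word distributions, 0.0 when no words are shared.
            va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
            dot = sum(va[word] * vb[word] for word in va.keys() & vb.keys())
            norm = math.sqrt(sum(c * c for c in va.values())) * \
                   math.sqrt(sum(c * c for c in vb.values()))
            return dot / norm if norm else 0.0

        # Example: two short hypothetical case notes.
        print(cosine_similarity("patient reports chest pain and stress",
                                "stress related chest pain reported by patient"))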
