Research and development in Artificial Intelligence have surged in recent years, including in the medical field. Despite the new technologies and tools available, medical staff still work under a heavy workload. The goal of this thesis is to analyze the possibilities of a chatbot whose purpose is to assist the medical staff and provide safety for patients by guaranteeing that they are being monitored. Using technologies such as Artificial Intelligence, Natural Language Processing, and Voice over Internet Protocol, the chatbot can communicate with patients. It works as an assistant for the working staff and forwards the information from the calls to the medical staff. With the answers provided from a call, the staff no longer need to ask routine questions every time and can provide help more quickly. The chatbot is administered through a web application where administrators can initiate calls and add patients to the database.
This work was carried out at ABB in Ludvika with the aim of improving the revision process for control and protection. A revision is a change to an approved object that has been marked as complete. Since the object in question may already have been delivered to a customer, a structured process is required to carry out such a change. The current process is complicated and requires a great deal of administrative work, and it relies on a number of programs for storage and databases, such as Lotus Notes, HiDraw Studio, and HiDra32. The department for control and protection is in the process of transitioning to another elCAD system, Engineering Base, which is another reason a new revision-handling process is desired. The work initially consisted of learning the existing process; in this way, bottlenecks could be identified and it was established where in the process the focus should be placed. While the old process was being reviewed, introductions to the new elCAD system were given. The goal of the work was to complete and test a new process, but delays from Aucotec, the developer of Engineering Base, led to a shortage of time, and a proposed process became the final outcome of the thesis.
We present a model library conceived to design and assess critical components of big data frameworks with a control-centric approach. The library adopts the object-oriented paradigm, using the Modelica language. Continuous-time and algorithmic models can be mixed, allowing control code to be represented with high fidelity and the simulation effort to be reduced to the minimum required. We discuss the modelling principles used, describe the library, and show some design examples.
Self-adaptive software applications often include some form of progress rate control. Various frameworks have been proposed to measure progress and provision resources to govern it, hence, in control terms, for sensors and actuators. The same is not true for control laws, however. In this paper we address this part of the overall problem, proposing a standard control structure that can be easily configured and tuned to match a variety of progress control needs. We completely analyse the simplest case, namely a single application under fixed-rate control, and briefly discuss extensions to multiple applications and event-based realisation. Simulation examples are reported to support the proposal.
In this report, the design choices made during the development of a water flow measuring sensor node are described and discussed. The node is ultimately to be deployed in South Sudan to monitor mini-water yards managed by International Aid Services. A design using a hall effect water flow sensor, a microcontroller, and a GSM modem is presented. SMS and HTTP messages of various lengths are sent, and the current signatures they produce are compared to find out which transmission strategy conserves the most energy. It is concluded that for a constant data volume, sending it in as few messages as possible saves energy. It is also found that for short messages, SMS seems to be cheaper in energy than HTTP, while the opposite is true for larger messages. Avoiding actuators altogether has the potential to be beneficial for the battery life of a sensor node.
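As a back-of-the-envelope illustration of why fewer, larger messages win for a constant data volume, consider a toy energy model (hypothetical constants chosen only for illustration, not the report's measurements): each transmission pays a fixed radio wake-up cost plus a per-byte cost.

```python
def transmission_energy(n_messages, total_bytes,
                        overhead_j=2.0, per_byte_j=0.01):
    """Toy model: fixed per-message radio overhead plus payload cost (joules).
    The constants are hypothetical, chosen only for illustration."""
    return n_messages * overhead_j + total_bytes * per_byte_j

# Same 1000-byte data volume, two transmission strategies:
print(transmission_energy(10, 1000))  # ten small messages -> 30.0 J
print(transmission_energy(1, 1000))   # one large message  -> 12.0 J
```

Under any model of this shape, the per-message overhead dominates for many small messages, which matches the report's conclusion.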
Today, when a user requests the rendering of a map view in an application, a server must first render an image from the given geographic data and then send the image to the client's mobile device. This can result in high response times, especially for users in areas with poor coverage. This study evaluates an alternative solution in which rendering instead takes place directly on the client's device. A prototype of a mobile map application with support for local rendering of raw geographic data is developed and evaluated against a constant for acceptable delay when visualizing information. The test results show that the prototype's performance depends on the amount of information to be displayed. At higher zoom levels the prototype delivers satisfactory results, but further measures are required for the lower levels. The main challenges encountered during development are reported, and suggestions for further development are presented.
Collection and analysis of distributed (cloud) computing workloads allows for a deeper understanding of user and system behavior and is necessary for the efficient operation of infrastructures and applications. The availability of such workload data is, however, often limited, as most cloud infrastructures are commercially operated and monitoring data is considered proprietary or falls under GDPR regulations. This work investigates the generation of synthetic workloads using Generative Adversarial Networks and addresses a current need for more data and better tools for workload generation. Resource utilization measurements, such as the utilization rates of Content Delivery Network (CDN) caches, are generated, and a comparative evaluation pipeline using descriptive statistics and time-series analysis is developed to assess the statistical similarity of generated and measured workloads. We use CDN data open-sourced by us in a data generation pipeline, as well as back-end ISP workload data, to demonstrate the multivariate synthesis capability of our approach. The work contributes a method for multivariate time series workload generation that can provide arbitrary amounts of statistically similar data sets based on small subsets of real data. The presented technique shows promising results, in particular for heterogeneous workloads that are not too irregular in temporal behavior.
In this paper a new type of non-uniform quantizer, the semi-uniform quantizer, is introduced. A k-bit semi-uniform quantizer uses the thresholds defined by a (k+1)-bit uniform quantizer and arranges them in such a way that small-amplitude inputs are quantized with small quantization steps and large-amplitude inputs with large quantization steps. The total quantization error power can thereby be reduced and the modulator's dynamic range increased by 1 bit. The condition for a semi-uniform quantizer to achieve better performance than a uniform quantizer is analyzed and verified using a second-order 3-bit sigma-delta modulator prototype chip, fabricated in a 0.35 µm CMOS process. At an oversampling ratio of 32, the modulator achieves 81 dB dynamic range and 63.8 dB peak SNDR with the 3-bit semi-uniform quantizer. With a 3-bit uniform quantizer, the dynamic range is 70 dB and the peak SNDR is 54.1 dB.
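To make the construction concrete, the sketch below compares a 3-bit uniform quantizer with one plausible 3-bit semi-uniform quantizer whose thresholds are drawn from the 4-bit uniform grid, dense near zero and sparse at large amplitudes. The particular threshold selection is our illustrative guess, not necessarily the paper's exact arrangement; for an input concentrated near zero, as inside a sigma-delta loop, the semi-uniform grid should show lower error power.

```python
import numpy as np

delta4 = 2.0 / 2 ** 4                       # 4-bit uniform step on [-1, 1)
# 3-bit uniform thresholds: multiples of the 3-bit step 2/2**3.
unif = np.array([-3, -2, -1, 0, 1, 2, 3]) * (2.0 / 2 ** 3)
# A plausible 3-bit semi-uniform pick from the 4-bit grid:
# spacings of 1, 2 and 3 grid steps moving away from zero.
semi = np.array([-6, -3, -1, 0, 1, 3, 6]) * delta4

def quantize(x, thresholds):
    """Map samples to bins; reconstruct each sample at its bin midpoint."""
    edges = np.concatenate(([-1.0], thresholds, [1.0]))
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[np.digitize(x, thresholds)]

# Input concentrated near zero, as is typical inside a sigma-delta loop.
x = np.clip(np.random.default_rng(0).normal(0.0, 0.2, 100_000), -0.999, 0.999)
for name, th in (("uniform", unif), ("semi-uniform", semi)):
    print(name, "error power:", np.mean((quantize(x, th) - x) ** 2))
```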
Recently proposed Byzantine fault-tolerant (BFT) systems achieve high throughput by processing requests in parallel. However, as their agreement protocols rely on a single leader and expend considerable effort to establish a global total order on all requests before execution, the performance and scalability of such approaches are limited. To address this problem we present SAREK, a parallel ordering framework that partitions the service state to exploit parallelism during both agreement and execution. SAREK utilizes request dependencies, abstracted from application-specific knowledge, to partition the service state. Instead of having one leader at a time for the entire system, it uses one leader per partition and only establishes an order on requests accessing the same partition. SAREK supports operations that span multiple partitions and provides a deterministic mechanism to process them atomically. To address use cases in which there is not enough application-specific knowledge to always determine a priori which partition(s) a request will operate on, SAREK even provides mechanisms to handle mispredictions without requiring rollbacks. Our evaluation of a key-value store shows an increase in throughput by a factor of 2 at half the latency compared to a single-leader implementation.
Recently, renewables equipped with smart inverters have been integrated into distribution networks on a large scale. To address the problem that renewables are too scattered and hard to dispatch uniformly, the virtual power plant (VPP) technique has been developed. In this work, we propose an optimization model to coordinate different system functions and allocate them to various devices in a VPP agent. Peak shaving, congestion management, and frequency and voltage regulation are all considered. This scenario-based model is solved with a scenario selection method: two types of scenarios are carefully selected to form the model, aiming at reducing the computational burden and increasing the strategy's robustness. The model is tested on a VPP agent modified from the 33-bus system. The simulation demonstrates the effectiveness and efficiency of the proposed approach.
With the extensive integration of high-penetration renewable energy resources, more fast-response frequency regulation (FR) providers are required to eliminate the impact of uncertainties from loads and distributed generators (DGs) on system security and stability. As a high-quality FR resource, a community integrated energy station (CIES) can effectively respond to frequency deviations caused by renewable energy generation, helping to solve the frequency problem of the power system. This paper proposes an optimal planning model of a CIES considering FR service. First, a model of the FR service is established to unify the time scales of FR service and economic operation. Then, an optimal planning model of the CIES considering FR service is proposed, with which the revenue from participating in the FR service is obtained under a market mechanism. A flexible electricity pricing model is introduced to flatten the peak tie-line power of the CIES. Case studies are conducted to analyze the annual cost and the revenue of the CIES participating in FR service, and suggest that providing ancillary services can bring potential revenue.
An increasing number of applications across various fields generate transactional or other time-stamped data, all of which constitute time series data. Time series data mining is a popular topic in the data mining field, and it poses challenges for improving the accuracy and efficiency of algorithms. Time series data are dynamic, large-scale, and highly complex, which makes it difficult to discover patterns among them with common methods suitable for static data. BIRCH, a hierarchical clustering method, was proposed to address the problems of large datasets: it minimizes I/O and time costs. A CF tree is generated during its working process, and clusters are produced after the four phases of the whole BIRCH procedure. A drawback of BIRCH is that it is not very scalable. This thesis is devoted to improving the accuracy and efficiency of the BIRCH algorithm. A sliding window BIRCH algorithm is implemented on the basis of the BIRCH algorithm. At the end of the thesis, the accuracy and efficiency of sliding window BIRCH are evaluated, and a performance comparison among SW BIRCH, BIRCH, and K-means is presented using the Silhouette Coefficient and the Calinski-Harabasz index. The preliminary results indicate that SW BIRCH may achieve better performance than BIRCH in some cases.
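As a rough illustration of the sliding-window idea (our sketch using scikit-learn's Birch, not the thesis implementation), BIRCH can be re-applied to each window of a stream and the clustering quality tracked per window with the silhouette score:

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.metrics import silhouette_score

def sliding_window_birch(series, window=300, step=150, n_clusters=3):
    """Cluster each window of a univariate stream and score it."""
    scores = []
    for start in range(0, len(series) - window + 1, step):
        segment = series[start:start + window].reshape(-1, 1)
        labels = Birch(threshold=0.05, n_clusters=n_clusters).fit_predict(segment)
        scores.append((start, silhouette_score(segment, labels)))
    return scores

# Synthetic stream whose level shifts over time (illustrative data).
rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(m, 0.1, 500) for m in (0.0, 1.0, 2.0)])
for start, score in sliding_window_birch(stream):
    print(f"window@{start}: silhouette={score:.2f}")
```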
In pervasive environments, users are situated in rich context and can interact with their surroundings through various services. To improve the user experience in such environments, it is essential to find the services that satisfy user preferences in a given context. Thus the suitability of discovered services depends heavily on how well the context-aware system understands users' current context and preferred activities. In this paper, we propose an unsupervised learning solution for mining user preferences from the user's past context. To cope with the high dimensionality and heterogeneity of context data, we propose a subspace clustering approach that is able to find user preferences identified by different feature sets. The results of our approach are validated by a series of experiments.
In this position paper we present a novel mathematical framework for building metaverses, a potential way to unify reality and virtuality into a cohesive whole universe. We argue that the nature of metaverses is inherently mathematical, and propose that the system of complex numbers could play a key role in constructing them. Specifically, we provide context for our argument and offer a supporting example, the analytic signal, to demonstrate how, for a given real signal, an imaginary counterpart can be constructed with the Hilbert transform and the two unified into a cohesive complex signal that facilitates the analysis of the signal's local dynamic behavior. This framework has significant potential for building a metaverse: by leveraging the power of complex numbers, one can create a unified mathematical system that merges the physical and virtual worlds. We believe that this proposal will inspire further research and development of metaverses and that our framework will contribute to the construction of a metaverse offering unprecedented levels of interactivity and immersion.
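The analytic-signal example is easy to reproduce numerically; a minimal sketch using SciPy's Hilbert-transform routine (our illustration of the construction the paper describes, with an arbitrary test signal):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                    # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
# A 50 Hz carrier with a slowly varying envelope: the given "real" signal.
x = (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 50 * t)

z = hilbert(x)                                 # analytic signal: x + j*H{x}
envelope = np.abs(z)                           # instantaneous amplitude
phase = np.unwrap(np.angle(z))                 # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # ~50 Hz away from the edges
print(inst_freq[100:110].round(1))
```

The complex signal z carries local dynamic behavior (amplitude, phase, frequency) that the real signal alone does not expose directly.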
In this work, we study YouTube traffic characteristics in a medium-sized Swedish residential municipal network with ~2600 mainly FTTH broadband-connected households. YouTube traffic analyses were carried out from the perspective of video clip category and duration, in order to understand their impact on potential local network caching gains. To the best of our knowledge, this is the first systematic analysis of YouTube traffic content from the perspective of video clip category and duration in a residential broadband network. Our results show that the YouTube video clips requested by end users in the studied network were imbalanced with regard to video categories and durations. The dominating video category was Music, both in terms of total traffic share and contribution to the overall potential local network caching gain. In addition, most of the requested video clips were between 2 and 5 min in duration, although video clips longer than 15 min were also popular within certain video categories, e.g., film videos.
In this paper we describe a systematic study of the long-term evolution of residential broadband Internet traffic covering 5 calendar years, from June 2007 to May 2011. The traffic evolution is characterized both in terms of the total traffic volume and the traffic volumes and shares of different application categories (file sharing, video streaming, etc.), with a focus on comparing traffic on a per-IP-user basis and among different broadband subscription groups. The results show that the average daily total traffic generated by each private end user increased by only about 33% during the past 5 years. Further, the results show that P2P file sharing has dominated the network's total traffic, but the daily file-sharing traffic volume per end user remains largely the same. The daily streaming-media traffic volume per end user has increased dramatically, by over 500%, during the studied period, while the daily web-browsing traffic volume per end user has increased by about 300%. Finally, a further investigation among 4 different FTTH broadband subscription groups with 1, 10, 30, and 100 Mbit/s symmetric access speeds shows that the lower the access speed, the more diversified the end-user traffic tends to be.
In this work, the performance of 5 representative cache replacement policies was investigated and compared for caching Internet video-on-demand (VoD) in local access networks. Two measured traces of end-user requests were used in the analyses of two typical VoD services: TV-on-demand and user-generated content, represented by YouTube. The studied policies range from the simple least recently used (LRU) and least frequently used (LFU) algorithms to more advanced ones: LFU with dynamic lifespan (LFU-DL), adaptive replacement cache (ARC), and greedy-dual size frequency (GDSF). Our results show that the ARC policy always outperforms the other policies due to its adaptive nature and its ability to track changes in traffic patterns. On the other hand, the simple LRU policy can also achieve caching performance comparable to that of the more advanced ARC policy, especially for the TV-on-demand service when the potential caching gain is high. In contrast, the simple LFU policy always shows the poorest performance. However, by applying a proper lifespan supplement under the LFU-DL policy, the caching performance can be effectively enhanced to the level achievable with the ARC and LRU policies. Moreover, the GDSF policy does not outperform simple LRU or LFU-DL, especially for YouTube video clips, where the potential caching gain is relatively low. The advantage of GDSF manifested in our analysis is, however, its outstanding cache space usage efficiency among the five studied caching algorithms.
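For reference, the LRU baseline from the comparison can be captured in a few lines; a minimal sketch (ours, not the paper's implementation), keyed by video identifier:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache over video identifiers."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, video_id):
        """Return True on a cache hit; insert (and possibly evict) on a miss."""
        if video_id in self.store:
            self.store.move_to_end(video_id)      # refresh recency on a hit
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)        # evict the least recent item
        self.store[video_id] = True
        return False

cache = LRUCache(capacity=2)
hits = sum(cache.request(v) for v in ["a", "b", "a", "c", "a", "b"])
print(f"hit ratio: {hits / 6:.2f}")
```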
The papers in this special section examine the deployment of big data and artificial intelligence for network technologies. The generation of huge amounts of data, called big data, is creating the need for efficient tools to manage those data. Artificial intelligence (AI) has become a powerful tool for dealing with big data, with recent breakthroughs on multiple fronts in machine learning, including deep learning. Meanwhile, information networks are becoming larger and more complicated, generating a huge amount of runtime statistics such as traffic load and resource usage. The emerging big data and AI technologies bring a range of new requirements, applications, and scenarios to computing networks, such as e-health, Intelligent Transportation Systems (ITS), the Industrial Internet of Things (IIoT), and smart cities. Big-data- and AI-driven network technologies also provide unprecedented potential to discover new features and to characterize user demands and system capabilities in network resource assignment, security and privacy, system architecture, modeling, and applications, all of which need further exploration. The focus of this special section is to address big data and artificial intelligence for network technologies. We appreciate the contributions to this special section and the valuable and extensive efforts of the reviewers. The topics of this special section range from big data and AI algorithms, models, and architectures for networks and systems, to network architecture.
As a major challenge and opportunity for traditional manufacturing, intelligent manufacturing faces the need for sustainable development in the future. Sustainability assessment undoubtedly plays a pivotal role in the future development of intelligent manufacturing. To this end, this paper presents a digital-twin-driven information architecture for sustainability assessment, oriented toward dynamic evolution over the whole life cycle and based on the classic digital twin mapping system. The sustainability assessment method segment of the architecture comprises indicator system building, indicator value determination, indicator importance degree determination, and intelligent manufacturing project assessment. A novel approach for treating the ambiguity of expert judgment in indicator value determination is proposed by introducing trapezoidal fuzzy numbers into the analytic hierarchy process, while the complexity of the influence relationships among the indicators is handled by integrating complex network modeling and PROMETHEE II for indicator importance degree determination. Lastly, a two-stage evidence combination model based on evidence theory is built for assessing intelligent manufacturing projects. The presented digital-twin-driven information architecture and sustainability assessment method are tested and validated in a study of the sustainability assessment of 8 intelligent manufacturing projects of an air conditioning enterprise. The results of the presented method were validated by comparison with the results of the fuzzy and rough extension of the PROMETHEE II, TOPSIS, and VIKOR methods, an indicator importance degree determination method based on entropy, and an indicator value determination method based on exact expert scoring.
We present MeterPU, an easy-to-use, generic and low-overhead abstraction API for taking measurements of various metrics (time, energy) on different hardware components (e.g., CPU, DRAM, GPU), using pluggable platform-specific measurement implementations behind a common interface in C++. We show that with MeterPU, not only can legacy (time) optimization frameworks, such as autotuned skeleton back-end selection, be easily retargeted for energy optimization, but switching between different optimization goals for arbitrary code sections also becomes trivial. We apply MeterPU to implement the first energy-tunable skeleton programming framework, based on the SkePU skeleton programming library.
Current human biomedical research shows that human diseases are closely related to non-coding RNAs, so studying the relationships between diseases and non-coding RNAs is of great significance for human medicine. Existing research has found associations between non-coding RNAs and human diseases through a variety of effective methods, but most of these methods are complex and target a single RNA or disease. Therefore, an effective and simple method to discover associations between non-coding RNAs and human diseases is urgently needed. In this paper, we propose a sparse regularized joint projection model (SRJP) to identify the associations between non-coding RNAs and diseases. First, we extract information through a series of ncRNA similarity matrices and disease similarity matrices and assign average weights to the similarity matrices on both sides. Then we decompose the similarity matrices of the two spaces into low-rank matrices and feed them into SRJP. In SRJP, we innovatively use a projection matrix to combine the ncRNA side and the disease side to identify the associations between ncRNAs and diseases. Finally, the regularization term in SRJP effectively improves the robustness and generalization ability of the model. We test our model on different datasets involving three types of ncRNAs: circRNA, microRNA, and long non-coding RNA. The experimental results show that SRJP has superior ability to identify and predict the associations between ncRNAs and diseases.
With the emergence of cloud computing, computing resources (i.e., networks, servers, storage, applications, and services) are provisioned as metered, on-demand services over networks, and can be rapidly allocated and released with minimal management effort. In the cloud computing paradigm, the virtual machine is one of the most commonly used resource carriers in which business services are encapsulated. Virtual machine placement optimization, i.e., finding optimal placement schemes for virtual machines and reconfiguring them as environments change, has become a challenging issue.
The primary contribution of this licentiate thesis is the development and evaluation of combinatorial optimization approaches to virtual machine placement in cloud environments. We present models for dynamic cloud scheduling via migration of virtual machines in multi-cloud environments, and for virtual machine placement for predictable and time-constrained peak loads in single-cloud environments. The studied problems are encoded in a mathematical modeling language and solved using a linear programming solver. In addition to scientific publications, this work also contributes software tools (in the EU-funded project OPTIMIS) that demonstrate the feasibility and characteristics of the presented approaches.
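To give a flavor of the modeling-language encoding, a toy placement instance can be written as an integer linear program; the sketch below uses Python's PuLP with hypothetical demands, capacities, and costs rather than the thesis's actual models:

```python
import pulp

vm_cpu = {"vm1": 2, "vm2": 4, "vm3": 3}      # CPU demand per VM (hypothetical)
host_cpu = {"h1": 4, "h2": 8}                # CPU capacity per host (hypothetical)
host_cost = {"h1": 1.0, "h2": 1.6}           # cost of keeping a host powered on

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", (vm_cpu, host_cpu), cat="Binary")
active = pulp.LpVariable.dicts("active", host_cpu, cat="Binary")

# Objective: minimize the cost of the active hosts.
prob += pulp.lpSum(host_cost[h] * active[h] for h in host_cpu)
for v in vm_cpu:                              # every VM is placed exactly once
    prob += pulp.lpSum(place[v][h] for h in host_cpu) == 1
for h in host_cpu:                            # capacity counts only if active
    prob += pulp.lpSum(vm_cpu[v] * place[v][h] for v in vm_cpu) <= host_cpu[h] * active[h]

prob.solve()
for v in vm_cpu:
    for h in host_cpu:
        if place[v][h].value() > 0.5:
            print(v, "->", h)
```

Real formulations add migration costs, time windows, and multi-cloud constraints, but the encode-then-solve workflow is the same.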
With the emergence of new materials for high-efficiency organic solar cells (OSCs), understanding and fine-tuning the interface energetics becomes increasingly important. Precise determination of the so-called pinning energies, one of the critical material characteristics for predicting the energy level alignment (ELA) at either electrode/organic or organic/organic interfaces, is urgently needed for the new materials. Here, pinning energies of a wide variety of newly developed donors and non-fullerene acceptors (NFAs) are measured through ultraviolet photoelectron spectroscopy. The positive pinning energies of the studied donors and the negative pinning energies of the NFAs lie in the same energy range of 4.3-4.6 eV, which follows the design rules developed for fullerene-based OSCs. The ELA for metal/organic and inorganic/organic interfaces follows the predicted behavior for all of the materials studied. For organic-organic heterojunctions where both the donor and the NFA feature strong intramolecular charge transfer, the pinning energies often underestimate the experimentally obtained interface vacuum level shift, which has consequences for OSC device performance.
The continuing development of science and technology has made Smart City Construction (SCC) a pressing concern. This work aims to improve the overall performance of the Smart-City-oriented high-dimensional Big Data Management (BDM) platform and promote the far-reaching development of SCC. It optimizes the computation process of the BDM platform through Machine Learning (ML), reduces the dimensionality of the data, and improves computational performance. To this end, this work first introduces the concept of SCC and the application design of the BDM platform. Then, it discusses the design concept of using ML technology to optimize the computational performance of the BDM platform. Finally, the Tensor Train Support Vector Machine (TT-SVM) model is designed based on dimension-reduced data processing. The proposed model can comprehensively optimize the BDM platform, and it is compared with and evaluated against other models. The results show that the accuracy of the model's reduced-dimension classification is more than 95%, and its lowest average processing time is about 1 ms. The model's highest data processing accuracy is about 98%, with an average processing time between 1.0 and 1.5 ms. Compared with traditional models and BDM platforms, the proposed model achieves a breakthrough performance improvement, so it can play an important role in future SCC. This work achieves a great breakthrough in big data processing and improves the application of high-dimensional big data technology by integrating multiple techniques. The findings provide a targeted technical reference for algorithms in BDM platforms and contribute to the construction and improvement of smart cities.
We present an early exploration of in-game advertising for virtual reality games. The study investigates the impacts of interactivity and immersion on consumer learning and game experience. First, we establish a theoretical grounding for understanding interactivity and immersion in virtual gaming environments. Then, we form a research framework and propose hypotheses around the research question. Next, we report the results of the field research, prototype design, and user study. The prototypes run in mobile browsers and are tested on virtual reality goggles with smartphones attached. Based on the results, we discuss the design of interactivity and immersion, the design's impacts on consumer learning and game experience, as well as the correlation between game experience and consumer learning. The main contributions of the work are an original research framework and a set of design considerations that can be utilized to evaluate and improve the effectiveness of in-game advertisements for virtual reality games.
A mobile photo enforcement (MPE) program deploys enforcement resources (equipment and personnel) to roadway locations, using radar and license plate photos to “catch” speed-limit violators. Where and when to deploy MPE resources is a very important part of MPE operations: it helps enforcement agencies pursue their road safety improvement goals and increases the efficiency of program resource use. However, the design of MPE programs has received little attention from researchers.
The allocation of MPE resources is a complex process. This complexity arises from the fact that the allocation is not stationary but requires moving resources from one site to another. Therefore, when MPE program managers allocate operators and equipment to sites, they must simultaneously consider the location of the allocation, its timing, and the availability of resources. Due to this complexity, a tool that can assist MPE program managers in making effective and efficient resource deployment plans becomes necessary.
Nowadays, with the rapid development of the Internet of Things, the application field of wearable sensors has continuously expanded, especially in areas such as remote electronic medical treatment and smart homes. Recognizing human daily activities from the sensed data is one of the challenges. With a variety of data mining techniques, the activities can be automatically recognized, but due to the diversity and complexity of the sensor data, not every data mining technique performs well without systematic analysis and improvement. In this thesis, several data mining techniques were applied to the analysis of a continuous sensing dataset with the objective of recognizing human daily activities. The work studied several data mining techniques and focused on three of them: Decision Tree, Naive Bayes, and neural networks. These techniques were analyzed and compared according to their classification results. The thesis also proposes some improvements to the data mining techniques tailored to the specific dataset. The comparison of the three classification results showed that each classifier has its own limitations and advantages. The proposed idea of combining the Decision Tree model with the neural network model significantly increased the classification accuracy in this experiment.
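One plausible reading of the proposed tree-plus-network combination (our sketch on synthetic data, since the thesis dataset is not reproduced here) is to feed the Decision Tree's class probabilities to a neural network as extra input features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the activity-recognition sensor features.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)

# Append the tree's class probabilities as extra inputs to the network.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(np.hstack([X_tr, tree.predict_proba(X_tr)]), y_tr)

print("tree alone:", tree.score(X_te, y_te))
print("combined  :", mlp.score(np.hstack([X_te, tree.predict_proba(X_te)]), y_te))
```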
Given the complexity and heterogeneity of Cloud computing scenarios, modeling has been widely employed to investigate and analyze the energy consumption of Cloud applications, by abstracting real-world objects and processes that are difficult to observe or understand directly. Naturally, the abstraction sacrifices, and usually does not need, a complete reflection of the reality being modeled. Consequently, current energy consumption models vary in terms of purposes, assumptions, application characteristics, and environmental conditions, with possible overlaps between different research works. It is therefore necessary and valuable to reveal the state of the art of the existing modeling efforts, so as to weave different models together and facilitate comprehending and further investigating application energy consumption in the Cloud domain. By systematically selecting, assessing, and synthesizing 76 relevant studies, we rationalized and organized over 30 energy consumption models with unified notations. To help investigate the existing models and facilitate future modeling work, we deconstructed the runtime execution and deployment environment of Cloud applications, and identified 18 environmental factors and 12 workload factors that influence application energy consumption. In particular, there are complicated trade-offs, and even debates, when dealing with the combined impacts of multiple factors.
With the increasing use of formal verification tools and methods in distributed systems, analysing the execution traces generated by these tools is becoming more challenging. This paper presents a method for the unification of execution traces of industrial automation systems based on the IEC 61499 standard. An execution trace of a system is a sequence of events, where each event represents a change in the state of the system. Execution traces allow developers to safely explore the behavior of control software. They can be obtained in several ways, including monitoring of a real system (or its simulator), or as a counterexample built by a model checker. In this paper we explore the unification of execution traces for the debugging task in FBME, a modular IDE for IEC 61499 applications. We present a formal model of the execution trace representation and demonstrate it on a simple example.
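A minimal sketch of what a unified trace representation could look like (our illustration; the paper's formal model is richer):

```python
from dataclasses import dataclass
from typing import Any, Iterable, List

@dataclass(frozen=True)
class TraceEvent:
    """One state change: when it happened, where it came from, what changed."""
    step: int
    source: str        # e.g. "runtime monitor" or "model checker"
    variable: str
    value: Any

def unify(traces: Iterable[List[TraceEvent]]) -> List[TraceEvent]:
    """Merge traces from different origins into one step-ordered sequence."""
    return sorted((e for t in traces for e in t), key=lambda e: e.step)

monitored = [TraceEvent(0, "runtime monitor", "valve", "OPEN")]
counterex = [TraceEvent(1, "model checker", "pump", "ON")]
for event in unify([monitored, counterex]):
    print(event)
```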
Traffic flow forecasting is a challenging task due to its spatio-temporal nature and the stochastic features underlying complex traffic situations. Currently, Graph Convolutional Network (GCN) methods are among the most successful and promising approaches. However, most GCN methods rely on a static graph structure, which is generally unable to extract the dynamic spatio-temporal relationships of traffic data or to interpret the trip patterns and motivations behind traffic flows. In this paper, we propose a novel Semantics-aware Dynamic Graph Convolutional Network (SDGCN) for traffic flow forecasting. A sparse, state-sharing hidden Markov model is applied to capture the patterns of traffic flows from sparse trajectory data; this way, latent states, as well as the transition matrices that govern the observed trajectories, can be learned. Consequently, we can build dynamic Laplacian matrices adaptively by jointly considering the trip patterns and motivations of traffic flows. Moreover, high-order Laplacian matrices can be obtained by a newly designed forward algorithm of low time complexity. GCN is then employed to exploit spatial features, and a Gated Recurrent Unit (GRU) is applied to exploit temporal features. We conduct extensive experiments on three real-world traffic datasets. Experimental results demonstrate that the prediction accuracy of SDGCN outperforms existing traffic flow forecasting methods. In addition, it provides better explanations of the generated Laplacian matrices, making it suitable for traffic flow forecasting in large cities and providing insight into the causes of various phenomena such as traffic congestion. The code is publicly available at https://github.com/gorgen2020/SDGCN.
In this thesis, two case studies are performed on design problems faced during the design phase of a new Volvo truck. One is the frame packing problem on the CAN bus; the other is the LDC allocation problem. Both solutions aim to meet as many end-to-end latency requirements as possible. Today, solutions are obtained manually, based on designer experience, but the results are still not satisfactory. With the development of artificial intelligence methods, we propose two methods based on genetic algorithms to solve these design problems. In the first case study, on frame packing, a single genetic algorithm process is used to find an optimal solution. In the second case study, on LDC allocation, we propose how to combine two genetic algorithm processes to reach an optimal solution. This thesis shows the feasibility of adopting artificial intelligence concepts in activities of the truck design phase, as demonstrated in both case studies.
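To illustrate the single-GA case study, a minimal genetic algorithm sketch for frame packing follows, with hypothetical signal sizes; the real problem additionally scores end-to-end latency requirements, which this toy fitness omits:

```python
import random

SIGNALS = [8, 16, 4, 4, 32, 8, 16, 2]       # hypothetical signal sizes (bits)
FRAME_SIZE = 64                              # CAN frame payload (bits)

def decode(genome):
    """A genome assigns each signal to a frame id; return load per frame."""
    frames = {}
    for size, frame in zip(SIGNALS, genome):
        frames[frame] = frames.get(frame, 0) + size
    return frames

def fitness(genome):
    """Prefer few frames; heavily penalize frames over the payload limit."""
    frames = decode(genome)
    overflow = sum(max(0, load - FRAME_SIZE) for load in frames.values())
    return -(len(frames) + 10 * overflow)

def evolve(pop_size=50, generations=200, mut_rate=0.2):
    n = len(SIGNALS)
    pop = [[random.randrange(n) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < mut_rate:                # point mutation
                child[random.randrange(n)] = random.randrange(n)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(decode(evolve()))   # frame id -> packed payload size
```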
This paper presents a scalable Watchdog Agent®-based toolbox approach for machine health prognostics. The toolbox consists of modularized embedded algorithms for signal processing and feature extraction, performance assessment, diagnostics, and prognostics, which can be reconfigured for different machinery prognostic applications and are extensible and adaptable to most real-world machine situations. A decision-making technique, a Quality Function Deployment (QFD)-based tool selection method, is applied for the automatic selection of algorithms from the Watchdog Agent® toolbox using multiple criteria. In addition, the architecture for Watchdog Agent®-based real-time remote machinery prognostics and health management, which incorporates remote and embedded predictive maintenance technologies, is presented. An industrial case involving the automatic tool changer of a machine tool illustrates how the Watchdog Agent® toolbox can be used in diverse scenarios.
This study presents a new point set registration method to align 3D range scans. In our method, fuzzy clusters are utilized to represent a scan, and the registration of two given scans is realized by minimizing a fuzzy weighted sum of the distances between their fuzzy cluster centers. This fuzzy cluster-based metric has a broad basin of convergence and is robust to noise. Moreover, the metric provides analytic gradients, allowing standard gradient-based algorithms to be applied for optimization. Based on this metric, outlier issues are addressed. In addition, for the first time in rigid point set registration, a registration quality assessment in the absence of ground truth is provided. Furthermore, given specified rotation and translation spaces, we derive the upper and lower bounds of the fuzzy cluster-based metric and develop a branch-and-bound (BnB)-based optimization scheme, which can globally minimize the metric regardless of the initialization. This optimization scheme is performed in an efficient coarse-to-fine fashion: first, fuzzy clustering is applied to describe each of the two given scans by a small number of fuzzy clusters. Then, a global search, which integrates BnB and gradient-based algorithms, is implemented to achieve a coarse alignment of the two scans. During the global search, the registration quality assessment offers a beneficial stop criterion for detecting whether a good result has been obtained. Afterwards, a relatively large number of points of the two scans are taken directly as the fuzzy cluster centers, and the coarse solution is refined into an exact alignment using gradient-based local convergence. Compared to existing counterparts, this optimization scheme achieves a large improvement in robustness and efficiency by virtue of the fuzzy cluster-based metric and the registration quality assessment. In the experiments, the registration results on several 3D range scan pairs demonstrate the accuracy and effectiveness of the proposed method, as well as its superiority to state-of-the-art registration approaches.
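In notation, one plausible form of the fuzzy cluster-based metric described above (our reading of the abstract; the paper's exact weighting may differ) is the fuzzy weighted sum

```latex
J(\mathbf{R}, \mathbf{t}) \;=\; \sum_{i}\sum_{j} u_{ij}^{\,m}\,
    \bigl\lVert \mathbf{c}_i - \bigl(\mathbf{R}\,\mathbf{c}'_j + \mathbf{t}\bigr) \bigr\rVert^2 ,
```

where \(\mathbf{c}_i\) and \(\mathbf{c}'_j\) are the fuzzy cluster centers of the two scans, \(u_{ij}\) are fuzzy memberships with fuzzifier \(m > 1\), and \((\mathbf{R}, \mathbf{t})\) is the rigid transform being sought. Being smooth in \((\mathbf{R}, \mathbf{t})\), an objective of this form admits the analytic gradients the method exploits.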
Efficient model checking is important in order to make this type of software verification useful for systems with complex structure. If a system is too large or complex, model checking simply does not scale, i.e., verifying the system could take too much time. This is one strong argument for focusing on making model checking faster. Another interesting aim is to make model checking so fast that it can be used to predict scheduling decisions for real-time schedulers at runtime. This of course requires the model checking to complete within the order of milliseconds or even microseconds. The aim is set very high, but the results of this thesis will at least give a hint on whether this seems possible or not. The magic card for (maybe) making this possible is called the Graphics Processing Unit (GPU). This thesis investigates if and how a model checking algorithm can be ported to and executed on a GPU. Modern GPU architectures offer a high degree of processing power, since they are equipped with up to 1000 (NVIDIA GTX 590) or 3000 (NVIDIA Tesla K10) processor cores. The drawback is that they offer poor thread-communication possibilities and memory caches compared to CPUs, which makes it very difficult to port CPU programs to GPUs. The example model (system) used in this thesis represents a real-time task scheduler that can schedule up to three periodic self-suspending tasks. The aim is to verify, i.e., find a feasible schedule for, these tasks, and to do it as fast as possible with the help of the GPU.
Multi-ion radiotherapy has been suggested as a new way to treat cancer, combining the radiological advantages of lighter and heavier ions in a single treatment to improve plan robustness and increase LETd in the target. To succeed, multi-ion radiotherapy requires a treatment planning system capable of computing dose for and optimising multi-ion treatment plans.
In this project, prototypical multi-ion radiotherapy treatment planning support has been implemented in the RayStation treatment planning system. The existing dose engine for helium and carbon ion beams has been extended to support protons, oxygen and neon ions, and support has been added for dose computation and plan optimisation for any combination of these ion species.
The implemented functionality has been evaluated in two phantom cases and a patient case. Multi-ion treatment plans have been shown to outperform carbon ion treatment plans in terms of simultaneously providing plan robustness, uniform RBE-weighted dose and high LETd. In the patient case, the multi-ion plan displayed significant improvements in the ability to "paint" high LETd in the target. Clinical studies are required to determine to what extent this new modality increases treatment quality in practice.
The use of virtual commissioning has increased in the last decade, but there are still challenges before this software code validation method is in widespread use. One extension to virtual commissioning is digital twin technology, which allows for further improved accuracy. The aim of this paper is to review existing standards and approaches to developing virtual commissioning, through a literature review and interviews with experts in the industry. First, the definitions and classifications related to virtual commissioning and digital twins are reviewed, followed by an exploration of the approaches for the development of virtual commissioning and digital twins reported in the literature. Then, in three interviews with experts of varying backgrounds and competencies, the experts' views of the virtual technologies are assessed to provide new insight for the industry. The findings of the literature review and interviews include the apparent need for standardisation in the field and that a sought-after standard, ISO 23247-1, is underway. The key finding of this paper is that the digital twin is a concept with a promising future in combination with other Industry 4.0 technologies. We also outline the challenges and possibilities of virtual commissioning and the digital twin, which could serve as a starting point for further research into standardisation and improvements sprung from the new standard.
Networks connected to the internet are under constant threat of attack. This thesis shows that new techniques utilising already-connected hardware are a viable way to protect against such threats. By equipping network switches with lightweight machine learning models, such as Decision Tree and Random Forest, no additional devices need to be installed on the network. When an attack is detected, the device may notify or take direct action on the network to protect vulnerable systems. By utilising container software on Westermo's devices, a model has been integrated, with its computational resources limited. Such a system, and its building blocks, are what this thesis has researched and implemented. The system has been validated using multiple models with a range of parameters. These models have been trained offline on datasets with pre-recorded attacks. The recordings are converted into flows, decreasing dataset size and increasing information density. These flows contain features corresponding to information about the packets and statistics about the flows. During training, a subset of features was selected using a genetic algorithm, decreasing the time needed to process each packet. After the models have been trained, they are converted to C code, which runs on a network switch. The models are verified online, using a simulated factory launching different attacks on the network. Results show that the hardware is sufficient for smaller models and that the system is capable of detecting certain types of attacks.
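A minimal offline-training sketch in the spirit described (ours, with synthetic placeholder data standing in for the recorded flow features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow features such as duration, packet count,
# byte count and mean inter-arrival time (not the thesis datasets).
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)  # y: 0 benign, 1 attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Few, shallow trees keep the model cheap enough for a network switch.
clf = RandomForestClassifier(n_estimators=20, max_depth=6, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The trained trees are simple threshold cascades, which is what makes the later conversion to C code for on-switch inference practical.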
Railway transportation systems are important for society and involve many challenging and important planning problems. Train services as well as maintenance of a railway network need to be scheduled efficiently, but they have mostly been treated as two separate planning problems. Since these activities are mutually exclusive, they must be coordinated and should ideally be planned together. In this paper we present a mixed integer programming model for solving an integrated railway traffic and network maintenance problem. The aim is to find a long-term tactical plan that optimally schedules train-free windows sufficient for a given volume of regular maintenance together with the desired train traffic. A spatial and temporal aggregation is used for controlling the available network capacity. The properties of the proposed model are analyzed and computational experiments on various synthetic problem instances are reported. Model extensions and possible modifications are discussed, as well as future research directions.
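One building block of such a model (an illustrative aggregate-capacity constraint in our notation, not the paper's exact formulation) makes train paths and maintenance windows mutually exclusive on each link and period:

```latex
\sum_{r \in R} x_{r,\ell,t} \;\le\; C_{\ell}\,\bigl(1 - m_{\ell,t}\bigr)
\qquad \forall\, \ell \in L,\; t \in T,
```

where \(x_{r,\ell,t} \in \{0,1\}\) indicates that train service \(r\) occupies link \(\ell\) in period \(t\), \(m_{\ell,t} \in \{0,1\}\) schedules a train-free maintenance window, and \(C_{\ell}\) is the aggregated capacity of the link.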
Model-Based Engineering (MBE) aims at increasing the effectiveness of engineering by using models as key artifacts in the development process. While empirical studies on the use and the effects of MBE in industry exist, there is only little work targeting the embedded systems domain. We contribute to the body of knowledge with a study on the use and the assessment of MBE in that particular domain. We collected quantitative data from 112 subjects, mostly professionals working with MBE, with the goal to assess the ...
In situations where mixed reality systems are used over larger areas, they must maintain a correct orientation with respect to the real world. A solution for synchronizing the mixed reality and the real world over time is therefore essential for a good user experience. This thesis proposes such a solution, utilizing both a local positioning system named WISPR, based on Ultra Wide Band technology, and an internal positioning system based on Google ARCore feature tracking. A prototype mobile application is presented that uses the positions from these two positioning systems to align the physical environment with a corresponding virtual 3D model. This enables increased environmental awareness by displaying virtual objects at accurately placed locations in the environment that would otherwise be difficult or impossible to observe.
Two transformation algorithms were implemented to align the physical environment with the corresponding virtual 3D model: Singular Value Decomposition and Orthonormal Matrices. The choice of algorithm had minimal effect on both positional accuracy and computational cost. The most significant factor influencing positional accuracy was found to be the quality of the sampled position pairs from the two positioning systems. The parameters used to ensure high quality of the sampled position pairs were the LPS accuracy threshold, sampling frequency, sampling distance, and sample limit. A fine-tuning process for these parameters is presented and resulted in a mean Euclidean distance error of less than 10 cm to a predetermined path in a sub-optimal environment.
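Both implemented algorithms solve the same least-squares alignment problem; as an illustration, the SVD variant can be sketched as follows (a standard Kabsch-style solution, not necessarily the thesis code):

```python
import numpy as np

def rigid_align(P, Q):
    """Rotation R and translation t minimizing sum ||R @ p_i + t - q_i||^2
    for paired position samples P, Q of shape (n, 3)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Sanity check on synthetic position pairs.
rng = np.random.default_rng(0)
P = rng.random((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 0.5]))
```

With noisy samples, the quality of the position pairs directly bounds how well R and t can be recovered, which is consistent with the finding above.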
The aim of this thesis was not only to achieve high positional accuracy but also to make the application usable in environments such as mines, which are prone to worse conditions than those that could be evaluated in the available test environment. The design of the application therefore focuses on robustness and on handling connection losses from either positioning system. The resulting implementation can detect a connection loss, determine whether the loss is destructive enough by quality-checking the transformation, and based on this apply essential recovery actions or identify when such recovery is unnecessary.
Safety-critical systems are required to comply with safety standards as well as security and privacy standards. In order to provide insights into how practitioners apply the standards on safety, security or privacy (Sa/Se/Pr), as well as how they employ Sa/Se/Pr analysis methodologies and software tools to meet such criteria, we conducted a questionnaire-based survey. This paper summarizes our major analysis results of the received responses.
Being pioneers comes with advantages and responsibility. The concept of threat hunting is currently being subsidized by businesses promoting their products. Additionally, there is little or no information regarding the implementation and the effects, which vary depending on the organization. Threat hunting needed an unbiased definition in accordance with employees in IT security. Consequently, the frameworks used when assessing threat hunting had to be objective. This thesis presents a definition of threat hunting, composed using impartial opinions. Furthermore, the thesis provides unique frameworks to assist when implementing and assessing threat hunting at an organization. This thesis has several areas of application: as a knowledge base for threat hunting, as the recommended practice for implementing threat hunting, and as groundwork for a more comprehensive evaluation of threat hunting capabilities. Ultimately, the thesis offers unprecedented, nonpartisan information and recommendations on threat hunting.
In the field of robotics and autonomous vehicles, the use of RGB-D data and LiDAR sensors is popular practice for applications such as SLAM [14], object classification [19] and scene understanding [5]. This thesis explores the problem of semantic segmentation using deep multimodal fusion of LRF and depth data. Two data sets consisting of 1080 and 108 data points from two scenes are created and manually labeled in 2D space, then transferred to 1D using a proposed label transfer method utilizing hierarchical clustering. The data is used to train and validate the suggested method for segmentation using a proposed dual encoder-decoder network based on SalsaNet [1] with gradual fusion in the decoder. Applying the suggested method yielded an improvement in the scenario of an unseen circuit when compared to uni-modal segmentation using depth, RGB, laser, and a naive combination of RGB-D data. Feature extraction in the form of PCA or stacked auto-encoders is suggested as a further improvement for this type of fusion. The source code and data set are made publicly available at https://github.com/Anguse/salsa_fusion.
The increasing availability of easy-to-use end-to-end encrypted messaging applications has made it possible for more people to conduct their conversations privately. Criminals have taken advantage of this, and it has proven to make digital forensic investigations more difficult, as methods of decrypting the data are needed. In this thesis, data from iOS and Windows devices is extracted and analysed, with focus on the application Signal. Although the Signal application is also available on other operating systems, such as Android, they are outside the scope of this thesis. The results of this thesis provide access to data stored in the encrypted application Signal without the need for expensive analysis tools. This is done by developing and publishing the first open-source script for decryption and parsing of the Signal database. The script is available for anyone at https://github.com/decryptSignal/decryptSignal.