Digitala Vetenskapliga Arkivet

  • 1.
    A. Mouris, Boules
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Ghauch, Hadi
    Department of COMELEC, Institut Mines-Telecom, Telecom-ParisTech, Paris, 91120, France.
    Thobaben, Ragnar
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Jonsson, B. Lars G.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electromagnetic Engineering.
    Multi-tone Signal Optimization for Wireless Power Transfer in the Presence of Wireless Communication Links2020In: IEEE Transactions on Wireless Communications, ISSN 1536-1276, E-ISSN 1558-2248, Vol. 19, no 5, p. 3575-3590Article in journal (Refereed)
    Abstract [en]

    In this paper, we study optimization of multi-tone signals for wireless power transfer (WPT) systems. We investigate different non-linear energy harvesting models. Two of them are adopted to optimize the multi-tone signal according to the channel state information available at the transmitter. We show that a second-order polynomial curve-fitting model can be utilized to optimize the multi-tone signal for any RF energy harvester design. We consider both single-antenna and multi-antenna WPT systems. In-band co-existing communication links are also considered in this work by imposing a constraint on the received power at the nearby information receiver to prevent its RF front end from saturation. We emphasize the importance of imposing such a constraint by explaining how inter-modulation products, due to saturation, can cause high interference at the information receiver in the case of multi-tone signals. The multi-tone optimization problem is formulated as a non-convex linearly constrained quadratic program. Two globally optimal solution approaches, using mixed-integer linear programming and finite branch-and-bound techniques, are proposed to solve the problem. The improvement achieved by applying both solution methods to the multi-tone optimization problem is highlighted through simulations and comparisons with other solutions existing in the literature.
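    The second-order polynomial harvester model mentioned above can be illustrated with a small curve-fitting sketch. The measured transfer curve below is invented for illustration and is not taken from the paper:

```python
import numpy as np

# Hypothetical measured transfer curve of an RF energy harvester:
# input RF power (mW) vs. harvested DC power (mW). Illustrative values only.
p_in = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
p_out = np.array([0.0, 0.12, 0.30, 0.55, 0.85, 1.20, 1.60])

# Second-order polynomial curve-fitting model:
#   p_out ≈ a2 * p_in**2 + a1 * p_in + a0
a2, a1, a0 = np.polyfit(p_in, p_out, deg=2)

def harvested_power(p):
    """Predicted DC output for a given RF input power under the fitted model."""
    return a2 * p**2 + a1 * p + a0
```

    Once fitted, such a model gives a tractable (quadratic) objective that a signal optimizer can work with, which is what makes the curve-fitting approach attractive for arbitrary harvester designs.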

  • 2.
    A. Mouris, Boules
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Kolitsidas, Christos
    Ericsson, Systems and Technology-HW Research, Kista, 164 80, Sweden.
    Thobaben, Ragnar
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    A Dual-Polarized Multi-Antenna Structure for Simultaneous Transmission of Wireless Information and Power2019In: 2019 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting, APSURSI 2019 - Proceedings, IEEE, 2019, p. 1805-1806, article id 8889079Conference paper (Refereed)
    Abstract [en]

    In this paper, a dual-polarized multi-antenna structure is designed at 2.45 GHz with the goal of allowing simultaneous transmission of wireless information and power. Differential feeding was used to minimize the mutual coupling due to radiation leakage in addition to a mushroom-type EBG structure for suppressing the surface waves. Simulation results for the proposed structure show a mutual coupling level lower than -40 dB between the information transmitting antenna and the power transmitting antennas for both polarizations. The isolation level between the antennas is improved by at least 22 dB and 14 dB for the E-plane and H-plane coupling, respectively.

  • 3. AAl Abdulsalam, Abdulrahman
    et al.
    Velupillai, Sumithra
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. King's College, London.
    Meystre, Stephane
    UtahBMI at SemEval-2016 Task 12: Extracting Temporal Information from Clinical Text2016In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), Association for Computational Linguistics , 2016, p. 1256-1262Conference paper (Refereed)
    Abstract [en]

    The 2016 Clinical TempEval continued the 2015 shared task on temporal information extraction with a new evaluation test set. Our team, UtahBMI, participated in all subtasks using machine learning approaches with the ClearTK (LIBLINEAR), CRF++ and CRFsuite packages. Our experiments show that CRF-based classifiers yield, in general, higher recall for multi-word spans, while SVM-based classifiers are better at predicting correct attributes of TIMEX3. In addition, we show that an ensemble-based approach for TIMEX3 could yield improved results. Our team achieved competitive results in each subtask, with an F1 of 75.4% for TIMEX3, 89.2% for EVENT, 84.4% for event relations with document time (DocTimeRel), and 51.1% for narrative container (CONTAINS) relations.
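    As a rough illustration of the ensemble idea (the abstract does not specify the combination scheme), a token-level majority vote over aligned BIO label sequences from several taggers might look like this sketch:

```python
from collections import Counter

def majority_vote(predictions):
    """Token-level majority vote over several taggers' BIO label sequences.

    predictions: list of label sequences, one per classifier, all aligned
    to the same tokens. On a tie, fall back to the first classifier's
    label (e.g., the best single system).
    """
    ensemble = []
    for labels in zip(*predictions):
        counts = Counter(labels)
        top, n = counts.most_common(1)[0]
        if list(counts.values()).count(n) > 1:
            top = labels[0]  # tie-break: trust the first classifier
        ensemble.append(top)
    return ensemble
```

    For span extraction tasks like TIMEX3, voting tends to combine the higher recall of CRF-based taggers with the more precise attribute decisions of SVM-based ones.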

  • 4.
    Aarflot, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Implementation of High Current Measurement Technology for Automotive Applications in Programmable Logic2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    At Inmotion Technologies, a special method of measuring phase currents is used in the high power inverters for automotive applications. This method requires a considerable amount of control logic, currently implemented with discrete logic gates distributed over a number of integrated circuits. In this thesis, the feasibility of replacing this with programmable logic hardware in one single package is investigated. The theory behind the current measurement method, as well as the operation of the discrete implementation, is analysed and described. Requirements on a programmable logic device to implement this were identified and a suitable device chosen accordingly. A prototype was developed and tested, interfacing an existing product. Benefits in terms of cost and size are evaluated, as well as required changes to the existing system, and the possibility for improvements brought by such a change is analysed. Since the products in question have high requirements on functional safety, possible impacts in this regard are discussed.

  • 5.
    Aasberg, Freddy
    KTH, School of Electrical Engineering and Computer Science (EECS).
    HypervisorLang: Attack Simulations of the OpenStack Nova Compute Node2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Cloud services are growing in popularity, and global public cloud services were forecast to increase by 17% in 2020 [1]. The popularity of cloud services is due to the improved resource allocation for providers and simplicity of use for the customer. Due to the increasing popularity of cloud services and their increased use by companies, the security assessment of these services is strategically becoming more critical. Assessing the security of a cloud system can be problematic because of its complexity, since the systems are composed of many different technologies. One way of simplifying the security assessment is attack simulation, covering cyberattacks on the investigated system. This thesis makes use of the Meta Attack Language (MAL) to create the Domain-Specific Language (DSL) HypervisorLang, which models the virtualisation layer in an OpenStack Nova setup. The result of this thesis is a proposed DSL, HypervisorLang, which uses attack simulation to model hostile usage of the service and defences to evade it. The hostile usage covers attacks such as denial of service, buffer overflows and out-of-bounds reads, and is sourced from known vulnerabilities. To implement the main components of the Nova module in HypervisorLang, literature studies were performed on the components included in Nova, together with threat modelling. The correctness of HypervisorLang was evaluated by implementing test cases to display the different attack steps included in the model. However, the results also show some limitations of the evaluations, which are proposed for further research.

  • 6.
    Aasberg Pipirs, Freddy
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Svensson, Patrik
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Tenancy Model Selection Guidelines2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Software as a Service (SaaS) is a subset of cloud services where a vendor provides software as a service to customers. The SaaS application is installed on the SaaS provider's servers and is often accessed via the web browser. In the context of SaaS, a customer is called a tenant, which is often an organization accessing the SaaS application, but it could also be a single individual. A SaaS application can be classified into tenancy models. A tenancy model describes how a tenant's data is mapped to the storage on the server side of the SaaS application. Through their research, the authors have drawn the conclusion that there is a lack of guidance for selecting tenancy models. The purpose of this thesis is to provide such guidance. The short-term goal is to create a tenancy selection guide; the long-term goal is to provide researchers and students with research material. This thesis provides a guidance model for the selection of tenancy models, called Tenancy Model Selection Guidelines (TMSG). TMSG was evaluated by interviewing two professionals from the software industry. The criteria used for evaluating TMSG were interviewee credibility, syntactic correctness, semantic correctness, usefulness and model flexibility. In the interviews, both interviewees said that TMSG needed further refinement, but they were nonetheless positive about the achieved result.

  • 7.
    Abad Garcia, Carlos
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Error Injection Study for a SpaceFibre In-Orbit Demonstrator2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The space electronics sector is shifting towards the New-Space paradigm, in which traditional space-qualified and expensive components and payloads are replaced by commercial off-the-shelf (COTS) alternatives. This change in mentality is accompanied by the development of inexpensive cubesats, lowering the entry barrier in terms of cost and enabling an increase in scientific research in space. However, well-established and resourceful spacecraft manufacturers are also adopting this trend, which allows them to become more competitive in the market. Following this trend, Thales Alenia Space is developing R&D activities using COTS components. One example is the SpaceFibre In-Orbit Demonstrator, a digital board integrated in a cubesat payload that aims to test two Intellectual Property blocks implementing the new ECSS standard for high-speed onboard communication. This thesis presents the steps that were taken to integrate the firmware for the demonstrator's Field-Programmable Gate Array (FPGA), which constitutes the main processing and control unit for the board. The activity is centered around the development of a Leon3 System-on-Chip in VHDL used to manage the components on the board and test the SpaceFibre technology. Moreover, it also addresses the main problem of using COTS components in the space environment: their sensitivity to radiation, which, for an FPGA, results in Single-Event Upsets (SEUs) that cause the implementation to malfunction, and a potential failure of the mission if they are not addressed. To accomplish the task, a SEU-emulation methodology based on partial reconfiguration and integrating state-of-the-art techniques is elaborated and applied to test the reliability of the SpaceFibre technology. Finally, results show that the mean time between failures of the SpaceFibre Intellectual Property block using a COTS FPGA is 170 days for Low Earth Orbit (LEO) and 2278 days for Geostationary Orbit (GEO) if configuration memory scrubbing is included in the design, enabling its usage in short LEO missions for data transmission. Moreover, tailored mitigation techniques based on the information gathered from applying the proposed methodology are presented to improve the figures.

  • 8.
    Abbas, Zainab
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Scalable Streaming Graph and Time Series Analysis Using Partitioning and Machine Learning2021Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Recent years have witnessed a massive increase in the amount of data generated by the Internet of Things (IoT) and social media. Processing huge amounts of this data poses non-trivial challenges in terms of the hardware and performance requirements of modern-day applications. The data we are dealing with today is of massive scale and high intensity and comes in various forms. MapReduce was a popular and clever choice for handling big data using a distributed programming model, which made the processing of huge volumes of data possible using clusters of commodity machines. However, MapReduce was not a good fit for performing complex tasks, such as graph processing, iterative programs and machine learning. Modern data processing frameworks, which are popularly used to process complex data and perform complex analysis tasks, overcome the shortcomings of MapReduce. Some of these popular frameworks include Apache Spark for batch and stream processing, Apache Flink for stream processing and TensorFlow for machine learning.

    In this thesis, we deal with complex analytics on data modeled as time series, graphs and streams. Time series are commonly used to represent temporal data generated by IoT sensors. Analysing and forecasting time series, i.e. extracting useful characteristics and statistics of the data and predicting future values, is useful for many fields, including neuro-physiology, economics, environmental studies and transportation. Another useful data representation we work with is graphs. Graphs are complex data structures used to represent relational data in the form of vertices and edges. Graphs are present in various application domains, such as recommendation systems, road traffic analytics, web analysis and social media analysis. Due to the increasing size of graph data, a single machine is often not sufficient to process the complete graph. Therefore, the computation, as well as the data, must be distributed. Graph partitioning, the process of dividing graphs into subgraphs, is an essential step in distributed processing of large-scale graphs because it enables parallel and distributed processing.

    The majority of data generated from IoT and social media originates as a continuous stream, such as series of events from a social media network, time series generated from sensors, financial transactions, etc. The stream processing paradigm refers to the processing of data streams that are continuous and possibly unbounded. Combining both graphs and streams leads to an interesting and rather challenging domain of streaming graph analytics. Graph streams refer to data that is modelled as a stream of edges or vertices, with adjacency lists representing relations between entities of continuously evolving data generated by a single or multiple data sources. Streaming graph analytics is an emerging research field with great potential due to its capabilities of processing large graph streams with limited amounts of memory and low latency.

    In this dissertation, we present graph partitioning techniques for scalable streaming graph and time series analysis. First, we present and evaluate the use of data partitioning to enable data parallelism in order to address the challenge of scale in large spatial time series forecasting. We propose a graph partitioning technique for large-scale spatial time series forecasting, with road traffic as a use case. Our experimental results on traffic density prediction for a real-world sensor dataset using Long Short-Term Memory Neural Networks show that the partitioning-based models take 12x lower training time when run in parallel compared to the unpartitioned model of the entire road infrastructure. Furthermore, the partitioning-based models have 2x lower prediction error (RMSE) compared to the entire road model. Second, we showcase the practical usefulness of streaming graph analytics for large spatial time series analysis with the real-world task of traffic jam detection and reduction. We propose to apply streaming graph analytics by performing useful analytics on traffic data streams at scale with high throughput and low latency. Third, we study, evaluate, and compare the existing state-of-the-art streaming graph partitioning algorithms. We propose a uniform analysis framework, built using Apache Flink, to evaluate and compare partitioning features and characteristics of streaming graph partitioning methods. Finally, we present GCNSplit, a novel ML-driven streaming graph partitioning solution that uses a small and constant in-memory state (bounded state) to partition (possibly unbounded) graph streams. Our results demonstrate that GCNSplit provides high-throughput partitioning and can leverage data parallelism to sustain input rates of 100K edges/s. GCNSplit exhibits a partitioning quality, in terms of graph cuts and load balance, that matches that of the state-of-the-art HDRF (High Degree Replicated First) algorithm while storing three orders of magnitude smaller partitioning state.

  • 9.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Al-Shishtawy, Ahmad
    RISE SICS, Stockholm, Sweden.
    Girdzijauskas, Sarunas
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. RISE SICS, Stockholm, Sweden..
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks2018Conference paper (Refereed)
    Abstract [en]

    Short-term traffic prediction allows Intelligent Transport Systems to proactively respond to events before they happen. With the rapid increase in the amount, quality, and detail of traffic data, new techniques are required that can exploit the information in the data in order to provide better results while being able to scale and cope with increasing amounts of data and growing cities. We propose and compare three models for short-term road traffic density prediction based on Long Short-Term Memory (LSTM) neural networks. We have trained the models using real traffic data collected by the Motorway Control System in Stockholm, which monitors highways and collects flow and speed data per lane every minute from radar sensors. In order to deal with the challenge of scale and to improve prediction accuracy, we propose to partition the road network into road stretches and junctions, and to model each of the partitions with one or more LSTM neural networks. Our evaluation results show that partitioning of roads improves the prediction accuracy by reducing the root mean square error by a factor of 5. We show that we can reduce the complexity of the LSTM network by limiting the number of input sensors, on average to 35% of the original number, without compromising the prediction accuracy.

  • 10.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Ivarsson, Jón Reginbald
    KTH.
    Al-Shishtawy, A.
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Scaling Deep Learning Models for Large Spatial Time-Series Forecasting2019In: Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 1587-1594Conference paper (Refereed)
    Abstract [en]

    Neural networks are used for different machine learning tasks, such as spatial time-series forecasting. Accurate modelling of a large and complex system requires large datasets to train a deep neural network, which causes a challenge of scale, as training the network and serving the model are computationally and memory intensive. One example of a complex system that produces a large number of spatial time-series is a large road sensor infrastructure deployed for traffic monitoring. The goal of this work is twofold: 1) to model a large amount of spatial time-series from road sensors; 2) to address the scalability problem in a real-life task of large-scale road traffic prediction, which is an important part of an Intelligent Transportation System. We propose a partitioning technique to tackle the scalability problem that enables parallelism in both training and prediction: 1) We represent the sensor system as a directed weighted graph based on the road structure, which reflects dependencies between sensor readings, and is weighted by sensor readings and inter-sensor distances; 2) We propose an algorithm to automatically partition the graph taking into account dependencies between spatial time-series from sensors; 3) We use the generated sensor graph partitions to train a prediction model per partition. Our experimental results on traffic density prediction using Long Short-Term Memory (LSTM) Neural Networks show that the partitioning-based models take 2x less training time if run sequentially and 12x less if run in parallel, and 20x less prediction time, compared to the unpartitioned model of the entire road infrastructure. The partitioning-based models take 100x less total sequential training time compared to single-sensor models, i.e., one model per sensor. Furthermore, the partitioning-based models have 2x less prediction error (RMSE) compared to both the single-sensor models and the entire road model.

  • 11.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Kalavri, Vasiliki
    Systems Group, ETH, Zurich, Switzerland.
    Carbone, Paris
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Streaming Graph Partitioning: An Experimental Study2018In: Proceedings of the VLDB Endowment, E-ISSN 2150-8097, Vol. 11, no 11, p. 1590-1603Article in journal (Refereed)
    Abstract [en]

    Graph partitioning is an essential yet challenging task for massive graph analysis in distributed computing. Common graph partitioning methods scan the complete graph to obtain structural characteristics offline, before partitioning. However, the emerging need for low-latency, continuous graph analysis led to the development of online partitioning methods. Online methods ingest edges or vertices as a stream, making partitioning decisions on the fly based on partial knowledge of the graph. Prior studies have compared offline graph partitioning techniques across different systems. Yet, little effort has been put into investigating the characteristics of online graph partitioning strategies.

    In this work, we describe and categorize online graph partitioning techniques based on their assumptions, objectives and costs. Furthermore, we employ an experimental comparison across different applications and datasets, using a unified distributed runtime based on Apache Flink. Our experimental results showcase that model-dependent online partitioning techniques, such as low-cut algorithms, offer better performance for communication-intensive applications such as bulk synchronous iterative algorithms, albeit at higher partitioning costs. Otherwise, model-agnostic techniques trade off data locality for lower partitioning costs and balanced workloads, which is beneficial when executing data-parallel single-pass graph algorithms.
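    The flavor of online partitioning discussed above can be sketched with a simple greedy streaming edge partitioner. This is an illustrative heuristic under assumed rules (prefer partitions already holding an endpoint, tie-break by load), not any specific algorithm from the study:

```python
from collections import defaultdict

def greedy_edge_partition(edge_stream, k):
    """Assign each edge of a stream to one of k partitions on the fly.

    Greedy heuristic: prefer partitions that already hold a copy of
    either endpoint (to reduce vertex replication), breaking ties by
    current load (to keep partitions balanced). Decisions are made
    with partial knowledge only, as in online partitioning.
    """
    load = [0] * k
    replicas = defaultdict(set)  # vertex -> partitions holding a copy
    assignment = []
    for u, v in edge_stream:
        candidates = (replicas[u] | replicas[v]) or set(range(k))
        p = min(candidates, key=lambda i: load[i])
        load[p] += 1
        replicas[u].add(p)
        replicas[v].add(p)
        assignment.append(p)
    return assignment, load
```

    The locality/balance tension is visible even in this sketch: favoring partitions with existing replicas lowers the cut but can skew the load, which is exactly the trade-off the experimental study measures.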

  • 12.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Sigurdsson, Thorsteinn Thorri
    KTH.
    Al-Shishtawy, Ahmad
    RISE Res Inst Sweden, Stockholm, Sweden..
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Evaluation of the Use of Streaming Graph Processing Algorithms for Road Congestion Detection2018In: 2018 IEEE INT CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, UBIQUITOUS COMPUTING & COMMUNICATIONS, BIG DATA & CLOUD COMPUTING, SOCIAL COMPUTING & NETWORKING, SUSTAINABLE COMPUTING & COMMUNICATIONS / [ed] Chen, JJ Yang, LT, IEEE COMPUTER SOC , 2018, p. 1017-1025Conference paper (Refereed)
    Abstract [en]

    Real-time road congestion detection allows improving traffic safety and route planning. In this work, we propose to use streaming graph processing algorithms for road congestion detection and evaluate their accuracy and performance. We represent road infrastructure sensors in the form of a directed weighted graph and adapt the Connected Components algorithm and some existing graph processing algorithms, originally used for community detection in social network graphs, to the task of road congestion detection. In our approach, we detect Connected Components or communities of sensors with similarly weighted edges that reflect different states in the traffic, e.g., free flow or congested state, in regions covered by detected sensor groups. We have adapted and implemented the Connected Components and community detection algorithms for detecting groups in the weighted sensor graphs in both batch and streaming manner. We evaluate our approach by building and processing the road infrastructure sensor graph for Stockholm's highways using real-world data from the Motorway Control System operated by the Swedish traffic authority. Our results indicate that the Connected Components and DenGraph community detection algorithms can detect congestion with accuracy up to approximately 94% for Connected Components and up to approximately 88% for DenGraph. The Louvain Modularity algorithm for community detection fails to detect congestion regions for the sparsely connected graphs, representing roads, that we have considered in this study. The Hierarchical Clustering algorithm using speed and density readings is able to detect congestion, but without details such as shockwaves.
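    The Connected Components idea described above can be sketched with a union-find over sensor pairs whose readings both indicate congestion. The speed threshold and sensor data here are illustrative assumptions, not values from the paper:

```python
def congested_components(readings, edges, speed_threshold=30.0):
    """Group adjacent congested sensors into connected components.

    readings: dict sensor -> average speed (e.g., km/h); edges: pairs of
    neighbouring sensors in the road graph. Sensors below the speed
    threshold are considered congested; each returned component is a
    candidate congestion region.
    """
    congested = {s for s, v in readings.items() if v < speed_threshold}
    parent = {s: s for s in congested}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Union sensors joined by an edge when both endpoints are congested.
    for u, v in edges:
        if u in congested and v in congested:
            parent[find(u)] = find(v)

    groups = {}
    for s in congested:
        groups.setdefault(find(s), set()).add(s)
    return list(groups.values())
```

    A free-flowing sensor breaks the chain, so each component corresponds to a contiguous stretch of slow traffic, which is what makes the approach usable for region-level congestion detection.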

  • 13.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Sottovia, Paolo
    Huawei Munich Research Centre, Munich, Germany.
    Hassan, Mohamad Al Hajj
    Huawei Munich Research Centre, Munich, Germany.
    Foroni, Daniele
    Huawei Munich Research Centre, Munich, Germany.
    Bortoli, Stefano
    Huawei Munich Research Centre, Munich, Germany.
    Real-time Traffic Jam Detection and Congestion Reduction Using Streaming Graph Analytics2020In: 2020 IEEE International Conference on Big Data (Big Data), Institute of Electrical and Electronics Engineers (IEEE) , 2020, p. 3109-3118Conference paper (Refereed)
    Abstract [en]

    Traffic congestion is a problem in day-to-day life, especially in big cities. Various traffic control infrastructure systems have been deployed to monitor and improve the flow of traffic across cities. Real-time congestion detection can serve many useful purposes, including sending warnings to drivers approaching the congested area and daily route planning. Most existing congestion detection solutions combine historical data with continuous sensor readings and rely on data collected from multiple sensors deployed on the road, measuring the speed of vehicles. In contrast, our work presents a framework that works in a pure streaming setting, where historical data is not available before processing. The traffic data streams, possibly unbounded, arrive in real time. Moreover, the data used in our case is collected only from sensors placed at the intersections of the road network. Therefore, we investigate creating a real-time congestion detection and reduction solution that works on traffic streams without any prior knowledge. The goal of our work is 1) to detect traffic jams in real-time, and 2) to reduce the congestion in the traffic jam areas. In this work, we present a real-time traffic jam detection and congestion reduction framework: 1) We propose a directed weighted graph representation of the traffic infrastructure network for capturing dependencies between sensor data to measure traffic congestion; 2) We present online traffic jam detection and congestion reduction techniques built on a modern stream processing system, i.e., Apache Flink; 3) We develop dynamic traffic light policies for controlling traffic in congested areas to reduce the travel time of vehicles. Our experimental results indicate that we are able to detect traffic jams in real-time and deploy new traffic light policies which result in 27% less travel time at best and 8% less travel time on average compared to the travel time with default traffic light policies. Our scalability results show that our system is able to handle high-intensity streaming data with high throughput and low latency.

  • 14.
    Abdallah Hussein Mohammed, Ahmed
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Analyzing common structures in Enterprise Architecture modeling notations2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Over the past few decades, the field of Enterprise Architecture has attracted researchers, and many Enterprise Architecture modeling frameworks have been proposed. However, in order to support different needs, the different frameworks offer many different element types that can be used to create an Enterprise Architecture. This abundance of elements can make it difficult for the end-user to differentiate between the usages of the various elements and to identify which elements they actually need. Therefore, this research analyzes existing Enterprise Architecture modeling frameworks and extracts the common properties that exist in the different Enterprise Architecture modeling notations. In this study, we performed a Systematic Literature Review that aims at finding the most commonly used Enterprise Architecture modeling frameworks in the Enterprise Architecture literature. Additionally, the elements defined in these frameworks are used to create a taxonomy based on the similarities between the different Enterprise Architecture frameworks. Our results showed that TOGAF, ArchiMate, DoDAF, and IAF are the most used modeling frameworks. Also, we managed to identify the common elements that are available in the different Enterprise Architecture frameworks mentioned above and represent them in a multilevel model. The findings of this study can make it easier for the end-user to pick the appropriate elements for their use cases, as they highlight the core elements of Enterprise Architecture modeling. Additionally, we showed how our model can be extended to support the needs of different domains. This thesis also forms the foundation for the development of an Enterprise Architecture modeling framework that can be customized and extended so that only the relevant elements are presented to the end-user.

  • 15.
    Abdalmoaty, Mohamed
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH Royal Institute of Technology.
    Identification of Stochastic Nonlinear Dynamical Models Using Estimating Functions (2019). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Data-driven modeling of stochastic nonlinear systems is recognized as a very challenging problem, even when reduced to a parameter estimation problem. A main difficulty is the intractability of the likelihood function, which renders favored estimation methods, such as the maximum likelihood method, analytically intractable. During the last decade, several numerical methods have been developed to approximately solve the maximum likelihood problem. A class of algorithms that attracted considerable attention is based on sequential Monte Carlo algorithms (also known as particle filters/smoothers) and particle Markov chain Monte Carlo algorithms. These algorithms were able to obtain impressive results on several challenging benchmark problems; however, their application is so far limited to cases where fundamental limitations, such as the sample impoverishment and path degeneracy problems, can be avoided.

    This thesis introduces relatively simple alternative parameter estimation methods that may be used for fairly general stochastic nonlinear dynamical models. They are based on one-step-ahead predictors that are linear in the observed outputs and do not require computation of the likelihood function. Therefore, the resulting estimators are relatively easy to compute and may be highly competitive in this regard: they are in fact defined by analytically tractable objective functions in several relevant cases. In cases where the predictors are analytically intractable due to the complexity of the model, it is possible to resort to plain Monte Carlo approximations. Under certain assumptions on the data and some conditions on the model, the convergence and consistency of the estimators can be established. Several numerical simulation examples and a recent real-data benchmark problem demonstrate the good performance of the proposed method, in several cases that are considered challenging, with a considerable reduction in computational time in comparison with state-of-the-art sequential Monte Carlo implementations of the ML estimator.

    Moreover, we provide some insight into the asymptotic properties of the proposed methods. We show that the accuracy of the estimators depends on the model parameterization and the shape of the unknown distribution of the outputs (via the third and fourth moments). In particular, it is shown that when the model is non-Gaussian, a prediction error method based on the Gaussian assumption is not necessarily more accurate than one based on an optimally weighted parameter-independent quadratic norm. Therefore, it is generally not obvious which method should be used. This result comes in contrast to a current belief in some of the literature on the subject. 

    Furthermore, we introduce the estimating functions approach, which was mainly developed in the statistics literature, as a generalization of the maximum likelihood and prediction error methods. We show how it may be used to systematically define optimal estimators, within a predefined class, using only a partial specification of the probabilistic model. Unless the model is Gaussian, this leads to estimators that are asymptotically uniformly more accurate than linear prediction error methods when quadratic criteria are used. Convergence and consistency are established under standard regularity and identifiability assumptions akin to those of prediction error methods.

    Finally, we consider the problem of closed-loop identification when the system is stochastic and nonlinear. A couple of scenarios given by the assumptions on the disturbances, the measurement noise and the knowledge of the feedback mechanism are considered. They include a challenging case where the feedback mechanism is completely unknown to the user. Our methods can be regarded as generalizations of some classical closed-loop identification approaches for the linear time-invariant case. We provide an asymptotic analysis of the methods, and demonstrate their properties in a simulation example.
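
To make the flavor of these linearly-parameterized prediction error methods concrete, here is a small self-contained sketch; the toy model, the numbers and the crude grid search are our own illustrative assumptions, not the thesis's examples. For x_{t+1} = theta*x_t + w_t with y_t = x_t^2 + e_t, the output mean E[y_t; theta] = sig_w^2 / (1 - theta^2) is an analytically tractable (trivially output-linear) predictor:

```python
import random

random.seed(0)
theta_true, sig_w, sig_e = 0.7, 1.0, 0.1

# Simulate the toy stochastic nonlinear model:
#   x_{t+1} = theta * x_t + w_t,   y_t = x_t^2 + e_t
x, ys = 0.0, []
for _ in range(5000):
    x = theta_true * x + random.gauss(0.0, sig_w)
    ys.append(x * x + random.gauss(0.0, sig_e))

# Analytically tractable predictor of the output: its stationary mean.
# No likelihood evaluation or particle filtering is needed.
def pem_cost(theta):
    m = sig_w ** 2 / (1.0 - theta ** 2)
    return sum((y - m) ** 2 for y in ys)

# A crude grid search stands in for a proper numerical optimizer.
theta_hat = min((k / 200.0 for k in range(1, 200)), key=pem_cost)
```

Minimizing the squared prediction errors against this predictor recovers theta close to 0.7 without ever forming the intractable likelihood, which is the essence of the approach summarized above.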

  • 16.
    Abdalmoaty, Mohamed R.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Application of a Linear PEM Estimator to a Stochastic Wiener-Hammerstein Benchmark Problem (2018). In: IFAC-PapersOnLine, E-ISSN 2405-8963, Vol. 51, no. 15, p. 784-789. Article in journal (Refereed).
    Abstract [en]

    The estimation problem of stochastic Wiener-Hammerstein models is recognized to be challenging, mainly due to the analytical intractability of the likelihood function. In this contribution, we apply a computationally attractive prediction error method estimator to a real-data stochastic Wiener-Hammerstein benchmark problem. The estimator is defined using a deterministic predictor that is nonlinear in the input. The prediction error method results in tractable expressions, and Monte Carlo approximations are not necessary. This allows us to tackle several issues considered challenging from the perspective of the current mainstream approach. Under mild conditions, the estimator can be shown to be consistent and asymptotically normal. The results of the method applied to the benchmark data are presented and discussed.

  • 17.
    Abdalmoaty, Mohamed R.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Rojas, Cristian R.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Identification of a Class of Nonlinear Dynamical Networks (2018). In: IFAC-PapersOnLine, E-ISSN 2405-8963, Vol. 51, no. 15, p. 868-873. Article in journal (Refereed).
    Abstract [en]

    Identification of dynamic networks has attracted considerable interest recently. So far the main focus has been on linear time-invariant networks. Meanwhile, most real-life systems exhibit nonlinear behaviors; consider, for example, two stochastic linear time-invariant systems connected in series, each of which has a nonlinearity at its output. The estimation problem in this case is recognized to be challenging, due to the analytical intractability of both the likelihood function and the optimal one-step ahead predictors of the measured nodes. In this contribution, we introduce a relatively simple prediction error method that may be used for the estimation of nonlinear dynamical networks. The estimator is defined using a deterministic predictor that is nonlinear in the known signals. The estimation problem can be defined using closed-form analytical expressions in several non-trivial cases, and Monte Carlo approximations are not necessarily required. We show that this is the case for some block-oriented networks with no feedback loops and where all the nonlinear modules are polynomials. Consequently, the proposed method can be applied in situations considered challenging by current approaches. The performance of the estimation method is illustrated on a numerical simulation example.

  • 18.
    Abdalmoaty, Mohamed
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Eriksson, Oscar
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Bereza-Jarocinski, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Identification of Non-Linear Differential-Algebraic Equation Models with Process Disturbances (2021). In: Proceedings of the 60th IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Refereed).
    Abstract [en]

    Differential-algebraic equations (DAEs) arise naturally as a result of equation-based object-oriented modeling. In many cases, these models contain unknown parameters that have to be estimated using experimental data. However, often the system is subject to unknown disturbances which, if not taken into account in the estimation, can severely affect the model's accuracy. For non-linear state-space models, particle filter methods have been developed to tackle this issue. Unfortunately, applying such methods to non-linear DAEs requires a transformation into a state-space form, which is particularly difficult to obtain for models with process disturbances. In this paper, we propose a simulation-based prediction error method that can be used for non-linear DAEs where disturbances are modeled as continuous-time stochastic processes. To the authors' best knowledge, there are no general methods successfully dealing with parameter estimation for this type of model. One of the challenges in particle filtering methods is random variations in the minimized cost function due to the nature of the algorithm. In our approach, a similar phenomenon occurs, and we explicitly consider how to sample the underlying continuous process to mitigate this problem. The method is illustrated numerically on a pendulum example. The results suggest that the method is able to deliver consistent estimates.

  • 19.
    Abdalmoaty, Mohamed
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Consistent Estimators of Stochastic MIMO Wiener Models based on Suboptimal Predictors (2018). Conference paper (Refereed).
  • 20.
    Abdalmoaty, Mohamed
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Identification of Stochastic Nonlinear Models Using Optimal Estimating Functions (2020). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 119, article id 109055. Article in journal (Refereed).
    Abstract [en]

    The first part of the paper examines the asymptotic properties of linear prediction error method estimators, which were recently suggested for the identification of nonlinear stochastic dynamical models. It is shown that their accuracy depends not only on the shape of the unknown distribution of the data, but also on how the model is parameterized. Therefore, it is not obvious in general which linear prediction error method should be preferred. In the second part, the estimating functions approach is introduced and used to construct estimators that are asymptotically optimal with respect to a specific class of estimators. These estimators rely on partial probabilistic parametric models, and therefore require neither the computation of the likelihood function nor any marginalization integrals. The convergence and consistency of the proposed estimators are established under standard regularity and identifiability assumptions akin to those of prediction error methods. The paper is concluded by several numerical simulation examples.

  • 21.
    Abdalmoaty, Mohamed
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Linear Prediction Error Methods for Stochastic Nonlinear Models (2019). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 105, p. 49-63. Article in journal (Refereed).
    Abstract [en]

    The estimation problem for stochastic parametric nonlinear dynamical models is recognized to be challenging. The main difficulty is the intractability of the likelihood function and the optimal one-step ahead predictor. In this paper, we present relatively simple prediction error methods based on non-stationary predictors that are linear in the outputs. They can be seen as extensions of the linear identification methods for the case where the hypothesized model is stochastic and nonlinear. The resulting estimators are defined by analytically tractable objective functions in several common cases. It is shown that, under certain identifiability and standard regularity conditions, the estimators are consistent and asymptotically normal. We discuss the relationship between the suggested estimators and those based on second-order equivalent models as well as the maximum likelihood method. The paper is concluded with a numerical simulation example as well as a real-data benchmark problem.

  • 22.
    Abdalmoaty, Mohamed
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Wahlberg, Bo
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    The Gaussian MLE versus the Optimally weighted LSE (2020). In: IEEE Signal Processing Magazine (Print), ISSN 1053-5888, E-ISSN 1558-0792, Vol. 37, no. 6, p. 195-199. Article in journal (Refereed).
    Abstract [en]

    In this note, we derive and compare the asymptotic covariance matrices of two parametric estimators: the Gaussian Maximum Likelihood Estimator (MLE), and the optimally weighted Least-Squares Estimator (LSE). We assume a general model parameterization where the model's mean and variance are jointly parameterized, and consider Gaussian and non-Gaussian data distributions.

  • 23.
    Abdelgalil, Mohammed Saqr
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lopez-Pernas, Sonsoles
    Idiographic Learning Analytics: A single student (N=1) approach using psychological networks (2021). Conference paper (Refereed).
    Abstract [en]

    Recent findings in the field of learning analytics have brought to our attention that conclusions drawn from cross-sectional group-level data may not capture the dynamic processes that unfold within each individual learner. In this light, idiographic methods have started to gain ground in many fields as a possible solution for examining students' behavior at the individual level, using several data points from each learner to create person-specific insights. In this study, we introduce such novel methods to the learning analytics field by exploring the potential gains from zooming in on the fine-grained dynamics of a single student. Specifically, we make use of Gaussian Graphical Models (an emerging trend in network science) to analyze a single student's dispositions and devise insights specific to him/her. The results of our study revealed that the student under examination may need to learn better self-regulation techniques regarding reflection and planning.
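
For readers unfamiliar with the machinery used above: in a Gaussian Graphical Model, the network's edges are partial correlations read directly off the precision (inverse covariance) matrix, and a zero entry means conditional independence (no edge). A minimal sketch with made-up numbers; the matrix and disposition labels are illustrative, not the study's data:

```python
import math

# Hypothetical precision (inverse covariance) matrix over three
# self-reported dispositions, e.g. planning, reflection, motivation.
K = [[ 2.0, -0.8,  0.0],
     [-0.8,  2.5, -0.5],
     [ 0.0, -0.5,  1.5]]

def partial_corr(K, i, j):
    # Partial correlation between variables i and j given all others:
    # the (signed, normalized) off-diagonal entry of the precision matrix.
    return -K[i][j] / math.sqrt(K[i][i] * K[j][j])

edge_01 = partial_corr(K, 0, 1)  # positive conditional association
edge_02 = partial_corr(K, 0, 2)  # zero entry -> no edge in the network
```

Each nonzero partial correlation becomes a weighted edge in the person-specific network; a time series of one student's responses is enough to estimate such a matrix, which is what makes the N=1 approach workable.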

  • 24.
    Abdelgalil, Mohammed Saqr
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Univ Eastern Finland, Sch Comp, Joensuu Campus,Yliopistokatu 2,POB 111, FI-80100 Joensuu, Finland..
    Lopez-Pernas, Sonsoles
    Univ Politecn Madrid, ETSI Telecomunicac, Dept Ingn Sistemas Telemat, Avda Complutense 30, Madrid 28040, Spain..
    The longitudinal trajectories of online engagement over a full program (2021). In: Computers and Education, ISSN 0360-1315, E-ISSN 1873-782X, Vol. 175, article id 104325. Article in journal (Refereed).
    Abstract [en]

    Student engagement has a trajectory (a timeline) that unfolds over time and can be shaped by different factors including learners' motivation, school conditions, and the nature of learning tasks. Such factors may result in either a stable, declining or fluctuating engagement trajectory. While research on online engagement is abundant, most authors have examined student engagement in a single course or two. Little research has been devoted to studying online longitudinal engagement, i.e., the evolution of student engagement over a full educational program. This learning analytics study examines the engagement states (sequences, successions, stability, and transitions) of 106 students in 1396 course enrollments over a full program. All data of students enrolled in the academic year 2014-2015, and their subsequent data in 2015-2016, 2016-2017, and 2017-2018 (15 courses) were collected. The engagement states were clustered using Hidden Markov Models (HMM) to uncover the hidden engagement trajectories which resulted in a mostly-engaged (33% of students), an intermediate (39.6%), and a troubled (27.4%) trajectory. The mostly-engaged trajectory was stable with infrequent changes, scored the highest, and was less likely to drop out. The troubled trajectory showed early disengagement, frequent dropouts and scored the lowest grades. The results of our study show how to identify early program disengagement (activities within the third decile) and when students may drop out (first year and early second year).
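
The Hidden Markov Model clustering mentioned above rests on computing the probability of an observed activity sequence under hidden engagement states. A minimal forward-algorithm sketch; the two states, the transition/emission numbers and the active/inactive coding are our own illustrative assumptions, not the study's fitted model:

```python
# Two hidden engagement states (0 = engaged, 1 = disengaged) emitting
# per-course activity observations (0 = active, 1 = inactive).
# All probabilities are illustrative, not fitted to the study's data.
init  = [0.5, 0.5]
trans = [[0.9, 0.1],   # engaged students tend to stay engaged
         [0.3, 0.7]]   # disengaged students sometimes recover
emit  = [[0.8, 0.2],   # engaged -> mostly active periods
         [0.2, 0.8]]   # disengaged -> mostly inactive periods

def likelihood(obs):
    # Forward algorithm: P(obs) under the HMM, summing over hidden paths.
    alpha = [init[s] * emit[s][obs[0]] for s in (0, 1)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in (0, 1))
                 for s in (0, 1)]
    return sum(alpha)

steady  = likelihood([0, 0, 0, 0])  # a consistently active trajectory
erratic = likelihood([0, 1, 0, 1])  # a fluctuating trajectory
```

Fitting such a model to each student's enrollment sequence and grouping students by their most likely hidden-state paths is, in outline, how stable, intermediate and troubled trajectories can be separated.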

  • 25.
    Abdelgalil, Mohammed Saqr
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Lopez-Pernas, Sonsoles
    Univ Politecn Madrid, ETSI Telecomunicac, Dept Ingn Sistemas Telemat, Madrid, Spain..
    Toward self big data (2021). In: International Journal of Health Sciences (IJHS), ISSN 1658-3639, Vol. 15, no. 5, p. 1-2. Article in journal (Refereed).
  • 26.
    Abdelgalil, Mohammed Saqr
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. University of Eastern Finland, Joensuu, Finland.
    López-Pernas, S.
    Idiographic learning analytics: A single student (N=1) approach using psychological networks (2021). In: CEUR Workshop Proceedings, CEUR-WS, 2021, p. 16-22. Conference paper (Refereed).
    Abstract [en]

    Recent findings in the field of learning analytics have brought to our attention that conclusions drawn from cross-sectional group-level data may not capture the dynamic processes that unfold within each individual learner. In this light, idiographic methods have started to gain ground in many fields as a possible solution for examining students' behavior at the individual level, using several data points from each learner to create person-specific insights. In this study, we introduce such novel methods to the learning analytics field by exploring the potential gains from zooming in on the fine-grained dynamics of a single student. Specifically, we make use of Gaussian Graphical Models (an emerging trend in network science) to analyze a single student's dispositions and devise insights specific to him/her. The results of our study revealed that the student under examination may need to learn better self-regulation techniques regarding reflection and planning.

  • 27. Abdelhakim, A.
    et al.
    Blaabjerg, F.
    Nee, Hans-Peter
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Single-Stage Boost Modular Multilevel Converter (BMMC) for Energy Storage Interface (2020). In: 2020 22nd European Conference on Power Electronics and Applications, EPE 2020 ECCE Europe, Institute of Electrical and Electronics Engineers (IEEE), 2020, article id 9215788. Conference paper (Refereed).
    Abstract [en]

    Single-stage DC-AC power converters are gaining attention due to their simpler structure compared to the equivalent two-stage solution. In this paper, a single-stage DC-AC converter is proposed for interfacing a low-voltage (LV) DC source with a higher-voltage AC load or grid, where this converter has a modular structure with multilevel operation. The proposed converter, called the boost modular multilevel converter (BMMC), incorporates the boosting capability within the inversion operation. It is mainly dedicated to interfacing LV energy storage systems, such as fuel cells and batteries, and it allows the use of LV MOSFETs (≪ 300 V), in order to utilize their low on-state resistance, along with LV electrolytic capacitors. The converter is introduced and analysed in this paper, and simulation results using PLECS for a 10 kW three-phase BMMC are presented in order to verify its functionality.

  • 28.
    AbdElKhalek, Y. M.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Awad, M. I.
    Abd El Munim, H. E.
    Maged, S. A.
    Trajectory-based fast ball detection and tracking for an autonomous industrial robot system (2021). In: International Journal of Intelligent Systems Technologies and Applications, ISSN 1740-8865, E-ISSN 1740-8873, Vol. 20, no. 2, p. 126-145. Article in journal (Refereed).
    Abstract [en]

    Autonomising industrial robots is the main goal of this paper; imagine humanoid robots that have several degrees-of-freedom (DOF) mechanisms as their arms. What if the humanoid's arms could be programmed to be responsive to their surrounding environment, without any hard-coding? This paper presents the idea of an autonomous system that observes the surrounding environment and acts on its observations. The application here is that of rebuffing an object thrown towards a robotic arm's workspace, mimicking the idea of a highly dynamic, responsive robot arm. The paper presents a trajectory-generation framework for rebuffing incoming flying objects, which bases its assumptions on inputs acquired through image processing and object detection. After extensive testing, the proposed framework managed to fulfil the real-time system requirements for this application, with an 80% successful rebuffing rate.
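
The core of such a framework, predicting where a detected flying object will cross the arm's workspace plane, can be sketched from two vision detections under ballistic motion. The detection coordinates, the 1.0 m plane position and the variable names below are our own illustrative assumptions, not the paper's implementation:

```python
g = 9.81  # gravitational acceleration, m/s^2

# Two hypothetical vision detections (time s, horizontal x m, height z m).
(t0, x0, z0), (t1, x1, z1) = (0.00, 0.0, 1.0), (0.05, 0.25, 1.08)

dt01 = t1 - t0
vx = (x1 - x0) / dt01                    # horizontal velocity (constant)
vz = (z1 - z0) / dt01 - 0.5 * g * dt01   # vertical velocity at t1,
                                         # correcting the finite difference
                                         # for gravity over the interval

# Predict when and at what height the ball reaches the plane x = 1.0 m.
t_hit = t1 + (1.0 - x1) / vx
dt = t_hit - t1
z_hit = z1 + vz * dt - 0.5 * g * dt ** 2
```

Feeding (t_hit, z_hit) to an inverse-kinematics solver would then give the arm configuration to command, which is the kind of pipeline the abstract describes at a high level.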

  • 29.
    Abdelmassih, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Container Orchestration in Security Demanding Environments at the Swedish Police Authority (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The adoption of containers and container orchestration in cloud computing is motivated by many aspects, from technical and organizational to economic gains. In this climate, even security demanding organizations are interested in such technologies but need reassurance that their requirements can be satisfied. The purpose of this thesis was to investigate how separation of applications could be achieved with Docker and Kubernetes such that it may satisfy the demands of the Swedish Police Authority.

    The investigation consisted of a literature study of research papers and official documentation as well as a technical study of iterative creation of Kubernetes clusters with various changes. A model was defined to represent the requirements for the ideal separation. In addition, a system was introduced to classify the separation requirements of the applications.

    The result of this thesis consists of three architectural proposals for achieving segmentation of Kubernetes cluster networking, two proposed systems to realize the segmentation, and one strategy for providing host-based separation between containers. Each proposal was evaluated and discussed with regard to suitability and risks for the Authority and parties with similar demands. The thesis concludes that a versatile application isolation can be achieved in Docker and Kubernetes. Therefore, the technologies can provide a sufficient degree of separation to be used in security demanding environments.

  • 30.
    Abdelmotteleb, Ibtihal
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Designing Electricity Distribution Network Charges for an Efficient Integration of Distributed Energy Resources and Customer Response (2018). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    A significant transformation has been gradually taking place within the energy sector, mainly as a result of energy policies targeting environmental objectives. Consequently, the penetration of Distributed Energy Resources (DERs) has been escalating, including self-generation, demand side management, storage, and electrical vehicles. Although the integration of DERs may create technical challenges in the operation of distribution networks, it may also provide opportunities to more efficiently manage the network and defer network reinforcements. These opportunities and challenges impose the necessity of redesigning distribution network charges to incentivize efficient customer response.

    This PhD thesis focuses on the design of distribution network charges that send correct economic signals and trigger optimal responses within the context of active customers. First, a cost-reflective network charge is proposed that consists of a forward-looking locational component based on the network’s utilization level, which transmits the long-term incremental cost of network upgrades. Then, a residual cost component that recovers the remaining part of the regulated network revenues is proposed. The objective of the proposed network charge is to increase the system’s efficiency by incentivizing efficient short- and long-term customer reactions while ensuring network cost recovery. The thesis presents an optimization model that simulates customers’ response to the proposed network charge in comparison to other traditional network charge designs. The model considers the operational and DER investment decisions that customers take rationally to minimize their total costs.

    Secondly, an evaluation methodology based on the Analytical Hierarchy Process technique is proposed in order to assess and compare different designs of network charges with respect to four attributes: network cost recovery, deferral of network costs, efficient customer response and recognition of side-effects on customers.

    Finally, a framework for Local Flexibility Mechanisms (LFM) is presented, complementing the proposed cost-reflective network charge. It aims to provide distribution-level coordination to mitigate unintended customer responses to network charges, by allowing customers to reveal their preferences and offer their flexibility services. It consists of a short-term LFM that utilizes customers’ flexibility in day-to-day network operation, and a long-term LFM that procures customers’ long-term flexibility to replace partially or fully network investments in network planning.

  • 31.
    Abdelnour, Jerome
    et al.
    NECOTIS Dept. of Electrical and Computer Engineering, Sherbrooke University, Canada.
    Rouat, Jean
    NECOTIS Dept. of Electrical and Computer Engineering, Sherbrooke University, Canada.
    Salvi, Giampiero
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. Department of Electronic Systems, Norwegian University of Science and Technology, Norway.
    NAAQA: A Neural Architecture for Acoustic Question Answering (2022). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, p. 1-12. Article in journal (Refereed).
  • 32.
    Abdihakim, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Characterizing Feature Influence and Predicting Video Popularity on YouTube (2021). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    YouTube is an online video sharing platform where users can distribute and consume video and other types of content. The rapid technological advancement, along with the proliferation of technological gadgets, has led to the phenomenon of viral videos, where videos and content garner hundreds of thousands, if not millions, of views in a short span of time. This thesis looked at the reasons for this virality, specifically as it pertains to videos on YouTube. This was done by building a predictor model using two different approaches and extracting the important features that cause video popularity. The thesis further observed how these features impact video popularity via partial dependence plots. The knn model outperformed the logistic regression model. The thesis showed, among other things, that YouTube channel and title were the most important features, followed by comment count, age, and video category. Much research has been done on popularity prediction, but less on deriving important features and evaluating their impact on popularity. Further research has to be conducted on feature influence, which is paramount to comprehending why content goes viral.

  • 33.
    Abdinur Iusuf, Joakim
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Nordling, Edvin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Easing the transition from block-based programming in education: Comparing two ways of transitioning from block-based to text-based programming and an alternative way to solve the transition problem (2023). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Many learners find the transition from block-based programming to text-based programming difficult. Consequently, research has investigated how block-based languages support learners in making the transition to text-based programming, categorizing that support into one-way transition, dual-modality and hybrid environments. This research investigates how one-way transition environments compare to dual-modality environments with regard to learning a text-based language, and how the two modalities differ with regard to the motivational factors satisfaction, enjoyment and easiness. The results show that dual-modality environments could be a better alternative than one-way transition environments when learners make the transition from block-based to text-based programming. The results also show that solving a problem in a dual-modality environment could be easier than in a one-way transition environment, which could potentially mean that learners experience more motivation when making the transition in a dual-modality environment. This study also investigated whether there is an alternative to one-way transition, dual-modality and hybrid environments for helping learners transition from block-based to text-based programming, and what a learning activity in this alternative could look like. It found that Blockly Games is such an alternative, and describes a learning activity built in Blockly Games. Future research should aim at gaining a deeper understanding of the differences between one-way transition, dual-modality and hybrid environments, and investigate whether the approach taken by Blockly Games is a better alternative.

    Download full text (pdf)
    fulltext
  • 34.
    Abdirahman Adami, Adnan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Actors Cooperation Analysis: A Techno-economic Study on Smart City Paradigm2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Modern cities must overcome complex challenges to achieve socio-economic development and to improve quality of life as the urban population rapidly increases. The concept of smart cities is a response to these challenges, and IoT and 5G are said to be the key enabling technologies for the development of a smart city. Deploying such technologies, however, may be expensive and requires the involvement of multiple actors. Hence, a lack of cooperation and coordination in planning, financing, deploying and managing the city's operational networks makes it even more difficult to overcome such challenges. Further, waste management companies and parking services operators in a city face expensive operating costs and service inefficiency due to low utilization of IoT-based solutions. This paper identifies and analyzes smart city ecosystems, value networks, actors, actors' roles, and business models in order to illustrate business relationships and identify business opportunities in the development of smart and sustainable cities through cooperation and collaboration among the involved actors. The target actors on which this study focuses are Mobile Network Operators, Parking Services Operators, and Waste Management Companies, with smart parking and smart waste collection as use cases. Results show several cooperative business scenarios that can lead to successful business relationships and opportunities.

    Download full text (pdf)
    fulltext
  • 35.
    Abdollahi, Meisam
    et al.
    Iran Univ Sci & Technol, Tehran, Iran..
    Baharloo, Mohammad
    Inst Res Fundamental Sci IPM, Tehran, Iran..
    Shokouhinia, Fateme
    Amirkabir Univ Technol, Tehran, Iran..
    Ebrahimi, Masoumeh
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    RAP-NoC: Reliability Assessment of Photonic Network-on-Chips, A simulator2021In: Proceedings of the 8th ACM international conference on nanoscale computing and communication (ACM NANOCOM 2021), Association for Computing Machinery (ACM) , 2021Conference paper (Refereed)
    Abstract [en]

    Nowadays, the optical network-on-chip is accepted as a promising alternative to traditional electrical interconnects due to lower transmission delay and power consumption as well as considerably higher data bandwidth. However, silicon photonics struggles with particular challenges that threaten the reliability of the data transmission process. The most important of these are temperature fluctuation, process variation, aging, crosstalk noise, and insertion loss. Although several attempts have been made to investigate the effect of these issues on the reliability of optical networks-on-chip, none of them modeled the reliability of a photonic network-on-chip with a system-level approach based on basic element failure rates. In this paper, an analytical model-based simulator, called Reliability Assessment of Photonic Network-on-Chips (RAP-NoC), is proposed to evaluate the reliability of different 2D optical network-on-chip architectures and data traffic. The experimental results show that, in general, the Mesh topology is more reliable than the Torus of the same size. Increasing the reliability of a Microring Resonator (MR) has a more significant impact on the reliability of an optical router than on that of the network.
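The element-failure-rate idea above can be illustrated with a small sketch, assuming exponentially distributed element lifetimes and a series composition along a lightpath. All failure rates and the toy path below are our own illustrative assumptions, not values or models from the paper:

```python
import math

# Hypothetical per-element failure rates (failures per hour); the real
# RAP-NoC simulator derives element reliability from device-level models.
FAILURE_RATE = {
    "microring_resonator": 2e-7,
    "waveguide_crossing": 5e-8,
    "photodetector": 1e-7,
}

def element_reliability(kind: str, hours: float) -> float:
    """Exponential-lifetime reliability R(t) = exp(-lambda * t)."""
    return math.exp(-FAILURE_RATE[kind] * hours)

def path_reliability(elements: list[str], hours: float) -> float:
    """A lightpath works only if every element on it works (series system),
    so its reliability is the product of the element reliabilities."""
    r = 1.0
    for kind in elements:
        r *= element_reliability(kind, hours)
    return r

# A toy router-to-router path: 4 MRs and 2 waveguide crossings at 10,000 h.
path = ["microring_resonator"] * 4 + ["waveguide_crossing"] * 2
print(round(path_reliability(path, 10_000), 4))
```

The series product is why MR reliability dominates router reliability in such models: the element with the largest accumulated failure rate along the path controls the product.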

  • 36.
    Abdul Khader, Shahbaz
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Data-Driven Methods for Contact-Rich Manipulation: Control Stability and Data-Efficiency2021Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Autonomous robots are expected to make a greater presence in the homes and workplaces of human beings. Unlike their industrial counterparts, autonomous robots have to deal with a great deal of uncertainty and lack of structure in their environment. A remarkable aspect of performing manipulation in such a scenario is the possibility of physical contact between the robot and the environment. Therefore, not unlike human manipulation, robotic manipulation has to manage contacts, both expected and unexpected, that are often characterized by complex interaction dynamics.

    Skill learning has emerged as a promising approach for robots to acquire rich motion generation capabilities. In skill learning, data-driven methods are used to learn reactive control policies that map states to actions. Such an approach is appealing because a sufficiently expressive policy can almost instantaneously generate appropriate control actions without the need for computationally expensive search operations. Although reinforcement learning (RL) is a natural framework for skill learning, its practical application is limited for a number of reasons. Arguably, the two main reasons are the lack of guaranteed control stability and poor data-efficiency. While control stability is necessary for ensuring safety and predictability, data-efficiency is required for achieving realistic training times. In this thesis, solutions are sought for these two issues in the context of contact-rich manipulation.

    First, this thesis addresses the problem of control stability. Despite unknown interaction dynamics during contact, skill learning with stability guarantee is formulated as a model-free RL problem. The thesis proposes multiple solutions for parameterizing stability-aware policies. Some policy parameterizations are partly or almost wholly deep neural networks. This is followed by policy search solutions that preserve stability during random exploration, if required. In one case, a novel evolution strategies-based policy search method is introduced. It is shown, with the help of real robot experiments, that Lyapunov stability is both possible and beneficial for RL-based skill learning.

    Second, this thesis addresses the issue of data-efficiency. Although data-efficiency is targeted by formulating skill learning as a model-based RL problem, only the model learning part is addressed. In addition to benefiting from the data-efficiency and uncertainty representation of the Gaussian process, this thesis further investigates the benefits of adopting the structure of hybrid automata for learning forward dynamics models. The method also includes an algorithm for predicting long-term trajectory distributions that can represent discontinuities and multiple modes. The proposed method is shown to be more data-efficient than some state-of-the-art methods. 

    Download full text (pdf)
    fulltext
  • 37.
    Abdul Khader, Shahbaz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Future Labs, CH-5405 Baden, Switzerland..
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Falco, Pietro
    ABB Corp Res, S-72178 Västerås, Sweden..
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Data-Efficient Model Learning and Prediction for Contact-Rich Manipulation Tasks2020In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no 3, p. 4321-4328Article in journal (Refereed)
    Abstract [en]

    In this letter, we investigate learning forward dynamics models and multi-step prediction of state variables (long-term prediction) for contact-rich manipulation. The problems are formulated in the context of model-based reinforcement learning (MBRL). We focus on two aspects, discontinuous dynamics and data-efficiency, both of which are important in the identified scope and pose significant challenges to state-of-the-art methods. We contribute to closing this gap by proposing a method that explicitly adopts a specific hybrid structure for the model while leveraging the uncertainty representation and data-efficiency of Gaussian processes. Our experiments on an illustrative moving-block task and a 7-DOF robot demonstrate a clear advantage over popular baselines in low-data regimes.
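As a rough illustration of the Gaussian-process ingredient (plain GP regression on a made-up 1-D dynamics function, not the authors' hybrid model), a one-step forward-dynamics predictor with posterior mean and variance can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between row-wise input sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Toy 1-D "dynamics": next state as a function of (state, action).
def f(x, u):
    return 0.9 * x + 0.2 * np.sin(3 * x) + 0.1 * u

X = rng.uniform(-1, 1, size=(30, 2))              # (state, action) pairs
y = f(X[:, 0], X[:, 1]) + 0.01 * rng.standard_normal(30)

noise = 1e-4                                      # observation-noise variance
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)                     # precomputed K^-1 y

def predict(xq):
    """GP posterior mean and variance of the next state at query xq."""
    ks = rbf(xq[None, :], X)                      # cross-covariances, (1, N)
    mean = (ks @ alpha).item()
    var = (rbf(xq[None, :], xq[None, :]) - ks @ np.linalg.solve(K, ks.T)).item()
    return mean, var

m, v = predict(np.array([0.3, 0.0]))
```

The posterior variance is what makes GPs attractive for MBRL: multi-step rollouts can propagate this uncertainty instead of trusting a point prediction.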

  • 38.
    Abdul Khader, Shahbaz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Falco, Pietro
    ABB Corporate Research, Vasteras, 72178, Sweden.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corporate Research, Vasteras, 72178, Sweden.
    Learning deep energy shaping policies for stability-guaranteed manipulation2021In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 6, no 4, p. 8583-8590Article in journal (Refereed)
    Abstract [en]

    Deep reinforcement learning (DRL) has been successfully used to solve various robotic manipulation tasks. However, most of the existing works do not address the issue of control stability. This is in sharp contrast to the control theory community where the well-established norm is to prove stability whenever a control law is synthesized. What makes traditional stability analysis difficult for DRL are the uninterpretable nature of the neural network policies and unknown system dynamics. In this work, stability is obtained by deriving an interpretable deep policy structure based on the energy shaping control of Lagrangian systems. Then, stability during physical interaction with an unknown environment is established based on passivity. The result is a stability guaranteeing DRL in a model-free framework that is general enough for contact-rich manipulation tasks. With an experiment on a peg-in-hole task, we demonstrate, to the best of our knowledge, the first DRL with stability guarantee on a real robotic manipulator.

  • 39.
    Abdul Khader, Shahbaz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Falco, Pietro
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Learning Deep Neural Policies with Stability GuaranteesManuscript (preprint) (Other academic)
    Abstract [en]

    Deep reinforcement learning (DRL) has been successfully used to solve various robotic manipulation tasks. However, most of the existing works do not address the issue of control stability. This is in sharp contrast to the control theory community where the well-established norm is to prove stability whenever a control law is synthesized. What makes traditional stability analysis difficult for DRL are the uninterpretable nature of the neural network policies and unknown system dynamics. In this work, unconditional stability is obtained by deriving an interpretable deep policy structure based on the energy shaping control of Lagrangian systems. Then, stability during physical interaction with an unknown environment is established based on passivity. The result is a stability guaranteeing DRL in a model-free framework that is general enough for contact-rich manipulation tasks. With an experiment on a peg-in-hole task, we demonstrate, to the best of our knowledge, the first DRL with stability guarantee on a real robotic manipulator.

    Download full text (pdf)
    fulltext
  • 40.
    Abdul Khader, Shahbaz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corp Res, Västerås, Sweden..
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Falco, Pietro
    ABB Corp Res, Västerås, Sweden..
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Learning Stable Normalizing-Flow Control for Robotic Manipulation2021In: 2021 IEEE International Conference On Robotics And Automation (ICRA 2021), Institute of Electrical and Electronics Engineers (IEEE) , 2021, p. 1644-1650Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning (RL) of robotic manipulation skills, despite its impressive successes, stands to benefit from incorporating domain knowledge from control theory. One of the most important properties of interest is control stability. Ideally, one would like to achieve stability guarantees while staying within the framework of state-of-the-art deep RL algorithms. Such a solution does not exist in general, especially one that scales to complex manipulation tasks. We contribute towards closing this gap by introducing a normalizing-flow control structure that can be deployed in any state-of-the-art deep RL algorithm. While stable exploration is not guaranteed, our method is designed to ultimately produce deterministic controllers with provable stability. In addition to demonstrating our method on challenging contact-rich manipulation tasks, we also show that it is possible to achieve considerable exploration efficiency (reduced state-space coverage and actuation effort) without losing learning efficiency.

  • 41.
    Abdul Khader, Shahbaz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pietro, Falco
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Learning Stable Normalizing-Flow Control for Robotic ManipulationManuscript (preprint) (Other academic)
    Abstract [en]

    Reinforcement Learning (RL) of robotic manipulation skills, despite its impressive successes, stands to benefit from incorporating domain knowledge from control theory. One of the most important properties of interest is control stability. Ideally, one would like to achieve stability guarantees while staying within the framework of state-of-the-art deep RL algorithms. Such a solution does not exist in general, especially one that scales to complex manipulation tasks. We contribute towards closing this gap by introducing a normalizing-flow control structure that can be deployed in any state-of-the-art deep RL algorithm. While stable exploration is not guaranteed, our method is designed to ultimately produce deterministic controllers with provable stability. In addition to demonstrating our method on challenging contact-rich manipulation tasks, we also show that it is possible to achieve considerable exploration efficiency (reduced state-space coverage and actuation effort) without losing learning efficiency.

    Download full text (pdf)
    fulltext
  • 42. Abe, Kenshi
    et al.
    Ariu, Kaito
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Sakamoto, Mitsuki
    Iwasaki, Atsushi
    A Slingshot Approach to Learning in Monotone GamesManuscript (preprint) (Other academic)
    Abstract [en]

    In this paper, we address the problem of computing equilibria in monotone games. The traditional Follow the Regularized Leader algorithms fail to converge to an equilibrium even in two-player zero-sum games. Although optimistic versions of these algorithms have been proposed with last-iterate convergence guarantees, they require noiseless gradient feedback. To overcome this limitation, we present a novel framework that achieves last-iterate convergence even in the presence of noise. Our key idea involves perturbing or regularizing the payoffs or utilities of the games. This perturbation serves to pull the current strategy to an anchored strategy, which we refer to as a slingshot strategy. First, we establish the convergence rates of our framework to a stationary point near an equilibrium, regardless of the presence or absence of noise. Next, we introduce an approach to periodically update the slingshot strategy with the current strategy. We interpret this approach as a proximal point method and demonstrate its last-iterate convergence. Our framework is comprehensive, incorporating existing payoff-regularized algorithms and enabling the development of new algorithms with last-iterate convergence properties. Finally, we show that our algorithms, based on this framework, empirically exhibit faster convergence.
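The perturb-toward-an-anchor idea with periodic anchor updates can be sketched for a 2x2 zero-sum game (matching pennies). The log-ratio pull term and all hyperparameters below are our own illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

# Matching pennies: player 1 maximizes x^T A y, player 2 maximizes -x^T A y.
# The unique Nash equilibrium is uniform play for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])

def slingshot_mwu(steps=5000, anchor_every=500, eta=0.1, mu=0.5):
    """Multiplicative weights on payoffs perturbed toward an anchor
    ("slingshot") strategy; the anchor is periodically reset to the
    current strategy, mimicking a proximal-point outer loop."""
    x = np.array([0.8, 0.2])      # player 1 strategy (off-equilibrium start)
    y = np.array([0.3, 0.7])      # player 2 strategy
    ax, ay = x.copy(), y.copy()   # slingshot (anchor) strategies
    for t in range(1, steps + 1):
        # Perturbed payoff: game gradient plus a pull toward the anchor.
        gx = A @ y + mu * (np.log(ax) - np.log(x))
        gy = -A.T @ x + mu * (np.log(ay) - np.log(y))
        x = x * np.exp(eta * gx)
        x /= x.sum()
        y = y * np.exp(eta * gy)
        y /= y.sum()
        if t % anchor_every == 0:  # proximal-point step: re-anchor
            ax, ay = x.copy(), y.copy()
    return x, y

x, y = slingshot_mwu()
```

Without the pull term, plain multiplicative weights cycles around the equilibrium in this game; the anchor perturbation makes each inner phase contract to a stationary point, and re-anchoring walks that point toward the Nash equilibrium.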

  • 43.
    Abe, Kenshi
    et al.
    CyberAgent, Inc..
    Ariu, Kaito
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). CyberAgent, Inc..
    Sakamoto, Mitsuki
    Toyoshima, Kentaro
    University of Electro-Communications.
    Iwasaki, Atsushi
    Last-Iterate Convergence with Full and Noisy Feedback in Two-Player Zero-Sum Games2023In: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, MLResearchPress , 2023, Vol. 206, p. 7999-8028Conference paper (Refereed)
    Abstract [en]

    This paper proposes Mutation-Driven Multiplicative Weights Update (M2WU) for learning an equilibrium in two-player zero-sum normal-form games and proves that it exhibits the last-iterate convergence property in both full and noisy feedback settings. In the former, players observe their exact gradient vectors of the utility functions. In the latter, they only observe the noisy gradient vectors. Even the celebrated Multiplicative Weights Update (MWU) and Optimistic MWU (OMWU) algorithms may not converge to a Nash equilibrium with noisy feedback. On the contrary, M2WU exhibits the last-iterate convergence to a stationary point near a Nash equilibrium in both feedback settings. We then prove that it converges to an exact Nash equilibrium by iteratively adapting the mutation term. We empirically confirm that M2WU outperforms MWU and OMWU in exploitability and convergence rates.

  • 44.
    Abedi, Amin
    et al.
    UNIGE, Inst Environm Sci, Geneva, Switzerland.;UNIGE, Comp Sci Dept, Geneva, Switzerland..
    Hesamzadeh, Mohammad Reza
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Romerio, Franco
    UNIGE, Inst Environm Sci, Geneva, Switzerland.;UNIGE, Geneva Sch Econ & Management, Geneva, Switzerland..
    Adaptive robust vulnerability analysis of power systems under uncertainty: A multilevel OPF-based optimization approach2022In: International Journal of Electrical Power & Energy Systems, ISSN 0142-0615, E-ISSN 1879-3517, Vol. 134, article id 107432Article in journal (Refereed)
    Abstract [en]

    With the growing level of uncertainty in today's power systems, vulnerability analysis of a power system with uncertain parameters becomes a must. This paper proposes a two-stage adaptive robust optimization (ARO) model for the vulnerability analysis of power systems. The main goal is to immunize the solutions against all possible realizations of the modeled uncertainty. In doing so, the uncertainties are described by predetermined intervals around the expected values of the uncertain parameters. In our model, there is a set of first-stage decisions made before the uncertainty is revealed (attacker decisions) and a set of second-stage decisions made after the realization of uncertainties (defender decisions). This setup is formulated as a mixed-integer trilevel nonlinear program (MITNLP). We then recast the proposed trilevel program as a single-level mixed-integer linear program (MILP) by applying the strong duality theorem (SDT) and appropriate linearization approaches. Efficient off-the-shelf solvers can guarantee the global optimum of the final MILP model. We also prove a lemma that makes our model much easier to solve. Results on the IEEE RTS and a modified version of Iran's power system show the performance of our model in assessing power system vulnerability under uncertainty.

  • 45.
    Abedi, Amin
    et al.
    Institute for Environmental Sciences, University of Geneva, Switzerland.
    Hesamzadeh, Mohammad Reza
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Romerio, Franco
    Institute for Environmental Sciences, University of Geneva, Switzerland.
    An ACOPF-based bilevel optimization approach for vulnerability assessment of a power system2021In: International Journal of Electrical Power & Energy Systems, ISSN 0142-0615, E-ISSN 1879-3517, Vol. 125, article id 106455Article in journal (Refereed)
    Abstract [en]

    This paper examines the effects of reactive power dispatch, losses, and voltage profile on the results of the interdiction model to analyze the vulnerability of the power system. First, an attacker-defender Stackelberg game is introduced. The introduced game is modeled as a bilevel optimization problem where the attacker is modeled in the upper level and the defender is modeled in the lower level. The AC optimal power flow (ACOPF) is proposed as the defender's tool in the lower-level problem to mitigate the attack consequences. Our proposed ACOPF-based mathematical framework is inherently a mixed-integer bilevel nonlinear program (MIBNLP) that is NP-hard and computationally challenging. This paper linearizes and then transforms it into a one-level mixed-integer linear program (MILP) using the duality theory and some proposed linearization techniques. The proposed MILP model can be solved to the global optimum using state-of-the-art solvers such as Cplex. Numerical results on two IEEE systems and Iran's 400-kV transmission network demonstrate the performance of the proposed MILP for vulnerability assessment. We have also compared our MILP model with the DCOPF-based approach proposed in the relevant literature. The comparative results show that the reported damage measured in terms of load shedding for the DCOPF-based approach is always lower than or equal to that for the ACOPF-based approach and these models report a different set of critical lines, especially in more stressed and larger power systems. Also, the effectiveness and feasibility of the proposed MILP model for power-system vulnerability analysis are discussed and highlighted. 

  • 46. Abedifar, V.
    et al.
    Furdek, Marija
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Muhammad, Ajmal
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Eshghi, M.
    Wosinska, Lena
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Routing, modulation format, spectrum and core allocation in SDM networks based on programmable filterless nodes2018In: Optics InfoBase Conference Papers, Optics Info Base, Optical Society of America, 2018Conference paper (Refereed)
    Abstract [en]

    An RMSCA approach based on binary particle swarm optimization is proposed for programmable filterless SDM networks, aimed at minimizing core and spectrum usage. The approach achieves near-optimal resource consumption.
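A generic binary PSO with the common sigmoid transfer function can be sketched on a toy resource-selection fitness. The capacities, demand, and penalty below are invented for illustration and are unrelated to the actual RMSCA formulation:

```python
import math
import random

random.seed(1)

# Toy fitness: pick cores (bits) whose combined capacity covers the
# demand while using as few cores as possible.
CAPACITY = [4, 3, 5, 2, 6, 1]
DEMAND = 9

def fitness(bits):
    cap = sum(c for b, c in zip(bits, CAPACITY) if b)
    used = sum(bits)
    return used if cap >= DEMAND else used + 100  # infeasibility penalty

def binary_pso(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    n = len(CAPACITY)
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best positions
    gbest = min(pos, key=fitness)[:]              # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer maps velocity to a bit-setting probability.
                pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

best = binary_pso()
```

In an RMSCA setting the bit vector would instead encode routing/core/spectrum choices and the fitness would score spectrum and core usage, but the velocity-and-sigmoid update is the same.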

  • 47. Abedifar, Vahid
    et al.
    Furdek, Marija
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Muhammad, Ajmal
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Eshghi, Mohammad
    Wosinska, Lena
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Routing, Modulation and Spectrum Assignment in Programmable Networks based on Optical White Boxes2018In: Journal of Optical Communications and Networking, ISSN 1943-0620, E-ISSN 1943-0639, Vol. 10, no 9, p. 723-735Article in journal (Refereed)
    Abstract [en]

    Elastic optical networks (EONs) can help overcome the flexibility challenges imposed by emerging heterogeneous and bandwidth-intensive applications. Among the different solutions for flexible optical nodes, optical white box switches implemented by architecture on demand (AoD) have the capability to dynamically adapt their architecture and module configuration to the switching and processing requirements of the network traffic. Such adaptability allows for unprecedented flexibility in balancing the number of required nodal components in the network, spectral resource usage, and length of the established paths. To investigate these trade-offs and achieve cost-efficient network operation, we formulate the routing, modulation, and spectrum assignment (RMSA) problem in AoD-based EONs and propose three RMSA strategies aimed at optimizing a particular combination of these performance indicators. The strategies rely on a newly proposed internal node configuration matrix that models the structure of optical white box nodes in the network, thus facilitating hardware-aware routing of connection demands. The proposed strategies are evaluated in terms of the number of required modules and the related cost, spectral resource usage, and average path length. Extensive simulation results show that the proposed RMSA strategies can achieve remarkable cost savings by requiring fewer switching modules than the benchmarking approaches, at a favorable trade-off with spectrum usage and path length.

  • 48.
    Abedin, Ahmad
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Germanium layer transfer and device fabrication for monolithic 3D integration2021Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    It has been proposed that monolithic three-dimensional (M3D) integration can overcome the limitations on further improving circuit performance and expanding functionality. The emergence of the internet of things (IoT) is driving the semiconductor industry toward the fabrication of higher-performance circuits with diverse functionality. On the one hand, device scaling is reaching critical dimensions, which makes further downscaling technologically difficult and economically challenging; on the other hand, the field of electronics is no longer limited to circuits meant for data processing. Sensors, processors, actuators, memories, and even power storage units need to be efficiently integrated into a single chip to make IoT work. M3D integration, by stacking different device layers on top of each other, can potentially improve circuit performance by shortening the wiring length and reducing the interconnect delay. Using multiple tiers for device fabrication makes it possible to integrate different materials with superior physical properties, offering the advantage of fabricating higher-performance devices with multiple functionalities on a single chip. However, high-quality layer transfer and the processing temperature budget are the major challenges in M3D integration. This thesis is an in-depth exploration of the application of germanium (Ge) in monolithic 3D integration.

    Ge has been recognized as one of the most promising materials to replace silicon (Si) as the channel material for p-type field-effect transistors (pFETs) because of its high hole mobility. Ge pFETs can be fabricated at substantially lower temperatures than Si devices, which makes them good candidates for M3D integration. However, the fabrication of high-quality Ge-on-insulator (GOI) layers with superior thickness homogeneity, low residual doping, and a sufficiently good interface with the buried oxide (BOX) has been challenging.

    This thesis used low-temperature wafer bonding and etch-back techniques to fabricate the GOI substrate for M3D applications. For this purpose, a unique stack of epitaxial layers was designed and fabricated. The layer stack contains a Ge strain-relaxed buffer (SRB) layer, a SiGe layer to be used as an etch stop, and a top Ge layer to be transferred to the handling wafer. The wafers were bonded at room temperature, and the sacrificial wafer was removed through multiple etching steps, leaving 20 nm of Ge on the insulator with excellent thickness homogeneity over the wafer. Ge pFET devices were fabricated on the GOI substrates and electrically characterized to evaluate the layer quality. Finally, the epitaxial growth of highly doped SiGe and sub-nm Si cap layers was investigated as an alternative for improved-performance Ge pFETs.

    The Ge buffer layer was developed through a two-step deposition technique, resulting in a defect density of 10^7 cm^-3 and a surface roughness of 0.5 nm. The fully strained Si0.5Ge0.5 film with high crystal quality was epitaxially grown at temperatures below 450°C. The layer was sandwiched between the Ge buffer and the top 20 nm Ge layer to be used as an etch stop in the etch-back process. A highly selective etching method was developed to remove the 3 μm Ge buffer and the 10 nm SiGe film without damaging the 20 nm transferred Ge layer.

    The Ge pFETs were fabricated at temperatures below 600°C so that they could be compatible with M3D integration. The back interface of the devices depleted at V_BG = 0 V, which confirmed a small density of fixed charges at the Ge/BOX interface along with a low level of residual doping in the Ge channel. The Ge pFETs, with 70% yield over the whole wafer, showed 60% higher carrier mobility than Si reference devices.

    Low-temperature epitaxial growth of a Si passivation layer on Ge was also developed in this thesis. For electrical evaluation of the passivation layer, metal-oxide-semiconductor (MOS) capacitors were fabricated and characterized. The capacitors showed an interface trap density of 3×10^11 eV^-1 cm^-2 and hysteresis as low as 3 mV at an oxide field of 4 MV/cm, corresponding to an oxide trap density of 1.5×10^10 cm^-2. The results indicate that this Si passivation layer substantially improves the gate dielectric by reducing the subthreshold slope of Ge devices while increasing their reliability. An in-situ doped SiGe layer with a dopant concentration of 2.5×10^19 cm^-3 and a resistivity of 3.5 mΩ·cm was selectively grown on Ge to improve junction formation.

    The methods developed in this thesis are suitable for large-scale M3D integration of Ge pFET devices on the Si platform. The unique Ge layer-transfer and etch-back techniques resulted in GOI substrates with high thickness homogeneity, low residual doping, and a sufficiently good Ge/BOX interface. The process temperatures for Ge transfer and pFET fabrication are kept within the M3D budget. Integration of the Si cap for gate dielectric formation and of SiGe layers in the source/drain region may increase device performance and reliability.

    Download full text (pdf)
    fulltext
  • 49.
    Abedin, Ahmad
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Garidis, Konstantinos
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Asadollahi, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Hellström, Per-Erik
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Östling, Mikael
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Growth of epitaxial SiGe alloys as etch-stop layers in germanium-on-insulator fabrication
    Manuscript (preprint) (Other academic)
    Abstract [en]

    In this study, the application of epitaxially grown SixGe1-x films as etch-stop layers in a germanium-on-insulator substrate fabrication flow is investigated. Layers with Ge contents from 15% to 70% were epitaxially grown on Si (1 0 0) using silane and germane. It was found that the Ge content in the films is independent of the growth temperature for fixed partial-pressure ratios. At low growth temperatures, the activation energy is found to be 1.8 eV, which points to a hydrogen-desorption-limited growth-rate mechanism. At growth temperatures of less than 500℃, the surface roughness is <1 nm. This surface roughness does not change when the films are grown on Ge substrates. Finally, a fully strained Si0.5Ge0.5 film was grown on a Ge strain-relaxed buffer at 450℃. This layer demonstrates an etch selectivity of >400:1 towards Ge in diluted SC-1. This result enables the integration of the Si0.5Ge0.5 film as an etch-stop layer for single-crystalline germanium-on-insulator substrate fabrication.
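    The quoted activation energy is the kind of value extracted from an Arrhenius fit of growth rate versus inverse temperature, Ea = -k_B·d(ln rate)/d(1/T). A minimal sketch of that extraction, using hypothetical growth rates constructed to be consistent with the reported 1.8 eV (the abstract gives no raw data):

```python
import math

# Arrhenius model: rate = A * exp(-Ea / (k_B * T)).
# Ea follows from the slope of ln(rate) vs 1/T.
K_B = 8.617e-5   # Boltzmann constant, eV/K

# HYPOTHETICAL growth rates at 400 C and 450 C, generated from the
# reported Ea = 1.8 eV with an arbitrary prefactor (nm/min).
T1, T2 = 673.0, 723.0
A = 1.0e12
r1 = A * math.exp(-1.8 / (K_B * T1))
r2 = A * math.exp(-1.8 / (K_B * T2))

# Two-point Arrhenius fit recovers the activation energy.
Ea = -K_B * (math.log(r2) - math.log(r1)) / (1.0 / T2 - 1.0 / T1)
print(round(Ea, 2))  # 1.8
```

    A real extraction would fit ln(rate) over several temperatures by least squares; hydrogen-desorption-limited growth of SiGe is commonly identified by activation energies near this range in the low-temperature regime.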

    Download full text (pdf)
    fulltext
  • 50.
    Abedin, Ahmad
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Integrated devices and circuits.
    Zurauskaite, Laura
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Integrated devices and circuits.
    Asadollahi, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Integrated devices and circuits. KTH.
    GOI fabrication for Monolithic 3D integration
    In: Article in journal (Other academic)