In wireless local area networks, a station can often associate with more than one access point. Therefore, a relevant question is which access point from a list of candidates is the "best" one to select. In IEEE 802.11, the user simply associates with the access point with the strongest received signal strength. However, this may result in a significant load imbalance between access points, as some accommodate a large number of stations while others are lightly loaded or even idle. Moreover, the multi-rate flexibility provided by several IEEE 802.11 variants can cause low bit rate stations to negatively affect high bit rate ones and consequently degrade the overall network throughput. This paper investigates the various aspects of "best" access point selection for IEEE 802.11 systems. In detail, we first derive a decision metric on which the selection can be based. Using this metric, we propose two new selection mechanisms which are decentralized in the sense that the decision is performed by each station, given appropriate status information about each access point. In fact, only a few bytes of status information have to be added to the beacon and probe response frames, which does not impose significant overhead. In addition, we show that our mechanism improves station quality of service and utilizes network resources better than the conventional one implemented in today's IEEE 802.11 devices.
This paper presents and evaluates an Inter-Access Point Coordination protocol for dynamic channel selection in IEEE 802.11 WLANs. It addresses an open issue in the implementation of many distributed and centralized dynamic channel selection policies that have been proposed to mitigate interference problems in WLANs. The presented protocol provides services to a wide range of policies that require different levels of coordination among APs by enabling them to actively communicate and exchange information. An Intra-Cell protocol that enables interaction between the AP and its accommodated stations to handle channel switching within the same cell is also presented.
In wireless local area networks, a station can often associate with more than one access point (AP). Therefore, a relevant question is which AP from a list of candidates is the 'best' one to select. In IEEE 802.11, the user simply associates with the AP with the strongest received signal strength. However, this may result in a significant load imbalance between APs. Moreover, the multi-rate flexibility provided by several IEEE 802.11 variants can cause low bit rate stations to negatively affect high bit rate ones and consequently degrade the overall network throughput. This paper investigates the various aspects of 'best' AP selection for IEEE 802.11 systems. In detail, we first derive a new decision metric which can be used for AP selection. Using this metric, we propose two new selection mechanisms which are decentralised in the sense that the decision is performed by each station, given appropriate status information about each AP. In fact, only a few bytes of status information have to be added to the Beacon and Probe Response frames, which does not impose significant overhead. We show that our mechanism improves the mean quality of service of all stations and utilises network resources better than the conventional one implemented in today's IEEE 802.11 devices. Also, the schemes are appealing in terms of stability and maintain their performance improvement even for denser or lighter network configurations.
Resource-constrained Edge Devices (EDs), e.g., IoT sensors and microcontroller units, are expected to make intelligent decisions using Deep Learning (DL) inference at the edge of the network. Toward this end, developing tinyML models - DL models with reduced computation and memory storage requirements that can be embedded on these devices - is an area of active research. However, tinyML models have lower inference accuracy. On a different front, DNN partitioning and inference offloading techniques have been studied for distributed DL inference between EDs and Edge Servers (ESs). In this paper, we explore Hierarchical Inference (HI), a novel approach proposed in [19] for performing distributed DL inference at the edge. Under HI, for each data sample, an ED first uses a local algorithm (e.g., a tinyML model) for inference. Only if the inference provided by the local algorithm is incorrect, or if the application requires further assistance from large DL models on the edge or cloud, does the ED offload the data sample. At the outset, HI seems infeasible, as the ED, in general, cannot know whether the local inference is sufficient. Nevertheless, we establish the feasibility of implementing HI for image classification applications. We demonstrate its benefits using quantitative analysis and show that HI provides a better trade-off between offloading cost, throughput, and inference accuracy compared to alternative approaches.
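The HI decision rule can be sketched minimally as a confidence-threshold test on the local (tinyML) model's output. The softmax-confidence criterion and the threshold value here are illustrative assumptions for a sketch, not the exact rule from the paper:

```python
import math

def softmax(logits):
    """Convert raw model outputs into class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def hi_decision(local_logits, conf_threshold=0.8):
    """Hierarchical Inference sketch: accept the local (tinyML)
    prediction when its top-class confidence clears the threshold,
    otherwise offload the sample to the larger edge/cloud model."""
    probs = softmax(local_logits)
    top_class = max(range(len(probs)), key=lambda i: probs[i])
    if probs[top_class] >= conf_threshold:
        return ("local", top_class)
    return ("offload", None)
```

A confident local output (one dominant logit) is kept on the device; a flat output, where the tinyML model is likely wrong, triggers offloading.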
Since its first release in the late 1990s, Wi-Fi has been updated to keep up with evolving user needs. Recently, Wi-Fi and other radio access technologies have been pushed to their limits when serving Augmented Reality (AR) applications, which require high throughput, low latency, and high reliability to ensure a high-quality user experience. The 802.11be amendment - which will be marketed as Wi-Fi 7 - introduces several features that aim to enhance its capabilities to support challenging applications like AR. One of the main features introduced in this amendment is Multi-Link Operation (MLO), which allows nodes to transmit and receive over multiple links concurrently. When using MLO, traffic is distributed among links using an implementation-specific traffic-to-link allocation policy. This paper evaluates the performance of MLO, under different policies, in serving AR applications compared to Single-Link (SL) operation. Experimental simulations were conducted using an event-based Wi-Fi simulator. Our results show the general superiority of MLO when serving AR applications: MLO achieves lower latency and serves a higher number of AR users than SL with the same frequency resources. In addition, increasing the number of links can improve the performance of MLO. Regarding traffic-to-link allocation policies, we found that some policies are more susceptible to channel blocking, which can result in performance degradation.
This demo showcases a typical industrial automation scenario of a robot picking and placing work pieces from a moving conveyor belt. It involves sensory data inputs to a Programmable Logic Controller (PLC), and instructions from the PLC to a robot for the pick and place operation. The scenario requires communication from sensors to the PLC and from the PLC to a robot with ultra-low latency and extremely high reliability. While none of today's wireless standards is capable of satisfying these stringent communication demands, our early prototype implementation of some of the design features of the future 5G standard enables industrial control using wireless communication. Our demo will show the live performance characteristics of the 5G design features for low latency and high reliability.
Ultra-reliable and low-latency communication is the enabler for many new use cases, including wireless industrial automation. Fulfilling the varying requirements of these use cases demands a flexible radio design, which calls for a holistic approach. Therefore, this paper presents the radio access concepts affecting communication reliability and latency, and comprehensively evaluates link- and system-level considerations through simulations. In particular, we describe the choice of suitable modulation and coding schemes, and discuss the impact of different numerologies and waveform candidates. We also point out the key principles of radio frame design for reducing end-to-end latency. The presented concepts are then used to evaluate system-level performance in an industrial scenario. It is shown that, by an appropriate design of the 5G radio interface, the low latency and high reliability required by industrial applications and many other use cases can be achieved.
Resource assignment problems occur in a vast variety of applications, ranging from scheduling and image recognition to communication networks. Often these problems can be modeled as a maximum weight matching problem in (bipartite) graphs or generalizations thereof, for which efficient and practical algorithms are known. Although in some applications an assignment of the resources may be needed only once, in many others the assignment has to be computed repeatedly for different scenarios. In that case it is often essential that the assignments can be computed very fast. Moreover, implementing different assignments in different scenarios may come with a certain cost for the reconfiguration of the system. In this paper, we consider the problem of determining optimal assignments sequentially over a given time horizon, where consecutive assignments are coupled by constraints that control the cost of reconfiguration. We develop fast approximation and online algorithms for this problem with provable approximation guarantees and competitive ratios. Moreover, we present an extensive computational study of the applicability of our model and our algorithms in the context of orthogonal frequency division multiple access (OFDMA) wireless networks, finding a significant improvement in the total bandwidth of the system using our algorithms. For this application (the downlink of an OFDMA wireless cell), the run time of matching algorithms is extremely important, with an acceptable range of only a few milliseconds. For the considered realistic instances, our algorithms perform extremely well: the solution quality is, on average, within a factor of 0.8-0.9 of optimal off-line solutions, and the running times are at most 5 ms per phase even in the worst case. Thus, our algorithms are well suited for application in OFDMA systems.
In communication networks, resource assignment problems appear in several different settings. These problems are often modeled as a maximum weight matching problem in bipartite graphs, for which efficient matching algorithms are well known. In several applications, the corresponding matching problem has to be solved many times in a row, as the underlying system operates in a time-slotted fashion and the edge weights change over time. However, changing the assignments can come with a certain cost for reconfiguration that depends on the number of changed edges between subsequent assignments. In order to control the cost of reconfiguration, we propose the k-constrained bipartite matching problem, which seeks an optimal matching that realizes at most k changes from a previous matching. We provide fast approximation algorithms with provable guarantees for this problem. Furthermore, to cope with the sequential nature of assignment problems, we introduce an online variant of the k-constrained matching problem and derive online algorithms that are based on our approximation algorithms for the k-constrained bipartite matching problem. Finally, we establish the applicability of our model and our algorithms in the context of OFDMA wireless networks, finding a significant performance improvement for the proposed algorithms.
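To illustrate the flavor of k-constrained matching, here is a minimal greedy heuristic: starting from the previous matching, apply at most k single-edge swaps, each time picking the swap with the largest weight gain. This is only an illustrative sketch of the problem setting, not one of the approximation algorithms from the paper, and it only considers swaps to currently unmatched left nodes:

```python
def k_constrained_greedy(weights, prev, k):
    """Greedy sketch of k-constrained bipartite matching.

    weights[l][r] is the edge weight between left node l and right node r;
    prev maps each right node to its currently matched left node.
    At most k edges are changed relative to prev."""
    match = dict(prev)
    changes = 0
    n_left = len(weights)
    while changes < k:
        best_gain, best_swap = 0.0, None
        matched_left = set(match.values())
        for r in match:
            cur = weights[match[r]][r]
            for l in range(n_left):
                if l in matched_left:
                    continue  # only swap in currently unmatched left nodes
                gain = weights[l][r] - cur
                if gain > best_gain:
                    best_gain, best_swap = gain, (r, l)
        if best_swap is None:
            break  # no improving swap left
        r, l = best_swap
        match[r] = l
        changes += 1
    return match
```

With k = 0 the previous assignment is kept unchanged; larger k trades reconfiguration cost against matching weight, which is exactly the tension the k-constrained formulation captures.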
The increase in cost and usage of RF spectrum has made it increasingly necessary to monitor its usage and protect it from unauthorized use. A number of prior studies have designed algorithms to localize unauthorized transmitters using crowdsourced sensors. To reduce the cost of crowdsourcing, these studies select the most relevant sensors a priori to localize such transmitters. In this work, we instead argue for online selection to localize such transmitters. Online selection can lead to more accurate localization using a limited number of sensors, compared to selecting sensors a priori, albeit at the cost of higher latency. To account for the trade-off between accuracy and latency, we add a constraint on the number of selection rounds. For the case where the number of rounds is equal to the number of selected sensors, we propose a heuristic based on Thompson Sampling and show, using trace-driven simulation, that it provides 23% better accuracy compared to a number of baseline algorithms. For a restricted number of rounds, we show that using a conventional parallel version of the modified Thompson Sampling, which selects an equal number of sensors in each round, results in a substantial reduction in accuracy. To address this, we propose a strategy of selecting a decreasing number of sensors in subsequent rounds of the modified Parallel Thompson Sampling. Our evaluation shows that the proposed heuristic leads to only a 3% reduction in accuracy, in contrast to 22% using the modified Parallel Thompson Sampling, when selecting 50 sensors in 20 rounds.
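The idea of spending more of the sensor budget in early rounds (broad exploration) and less in later rounds (refinement) can be sketched with a simple schedule generator. The geometric decay and its ratio are hypothetical choices for illustration; the paper's actual round allocation may differ. The sketch assumes the budget is at least the number of rounds:

```python
def decreasing_schedule(total, rounds, ratio=0.7):
    """Allocate a geometrically decreasing number of sensor selections
    per round, summing exactly to `total` (assumes total >= rounds).
    Early rounds get more sensors; later rounds get fewer."""
    raw = [ratio ** i for i in range(rounds)]
    scale = total / sum(raw)
    counts = [max(1, int(x * scale)) for x in raw]
    i = 0
    while sum(counts) < total:   # hand out leftover budget, earliest rounds first
        counts[i] += 1
        i = (i + 1) % rounds
    while sum(counts) > total:   # trim overshoot from the latest rounds
        for j in range(rounds - 1, -1, -1):
            if counts[j] > 1:
                counts[j] -= 1
                break
    return counts
```

For a budget of 50 sensors over 20 rounds this front-loads selections, in contrast to the equal-per-round allocation (here, 2-3 sensors every round) of a conventional parallel scheme.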
It is well known that applying dynamic resource allocation to down-link transmissions of OFDMA systems provides a significant performance increase by taking advantage of diversity effects. In order to quantify the maximum possible gain achievable by applying dynamic mechanisms, several optimization problems have been suggested for studying a dynamic system's optimal behaviour. However, so far these optimization approaches do not take sophisticated system requirements into account, such as different packet-arrival processes, buffering constraints, scheduling policies, as well as QoS requirements per service class. In this paper we present a new optimization model that is based on a packet-centric system view and includes the system requirements mentioned above. We compare the performance results of the conventional and the new approach and study the impact of dynamic power allocation in both cases.
Recently, a lot of research effort has been spent on cross-layer system design. It has been shown that cross-layer mechanisms (i.e., policies) potentially provide significant performance gains for various systems. In this article we review several aspects of cross-layer system optimization for wireless OFDM systems. We discuss basic optimization models and present selected heuristic approaches realizing cross-layer policies by means of dynamic resource allocation. Two specific areas are treated separately: models and dynamic approaches for single transmitter/receiver pairs (i.e., a point-to-point communication scenario), as well as models and approaches for point-to-multipoint communication scenarios (e.g., the downlink of a wireless cell). This article provides basic knowledge for investigating future OFDM cross-layer optimization issues.
Soft frequency reuse (SFR) is a common technique for co-channel interference (CCI) mitigation in cellular OFDMA networks. The performance of such networks significantly depends on the configuration of the power profiles that implement the soft frequency reuse patterns. In this paper, we investigate the performance of static soft frequency reuse by comparing it against the optimal case, in which a central entity optimally distributes power among the users of the network. It is shown that there is a significant performance gap between both approaches, which needs to be filled by adaptive SFR mechanisms. Moreover, we show that the achievable gain of static SFR is small in a system that is able to optimally decide on terminal/sub-carrier assignments.
OFDM systems are known to overcome the impairments of the wireless channel by splitting the given system bandwidth into parallel sub-carriers, on which data symbols can be transmitted simultaneously. This enables the possibility of enhancing the system's performance by deploying adaptive (dynamic) mechanisms, namely power and modulation adaptation and dynamic sub-carrier assignments. In multi-user communication systems (OFDM-FDMA), these mechanisms can be used to achieve a level of system fairness ensuring that each terminal receives at least an environment-specific minimum amount of data per down-link phase. However, multiple previous investigations have doubted that dynamic power adaptation provides enough performance gain to justify its application in such systems, as it increases the computational load significantly. In this study we discuss the performance gains of the different approaches and show that in specific communication scenarios, enabling a dynamic power distribution provides a significant performance increase compared to dynamic schemes without power adaptation.
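As a concrete illustration of dynamic power adaptation across sub-carriers, here is the classic water-filling allocation, a standard textbook scheme rather than the specific algorithm evaluated in the paper. Given per-sub-carrier gain-to-noise ratios, power p_i = max(0, mu - 1/g_i), with the water level mu found by bisection so that the powers sum to the budget:

```python
def waterfilling(gains, p_total, eps=1e-9):
    """Water-filling power allocation across sub-carriers.

    gains: per-sub-carrier gain-to-noise ratios g_i (> 0)
    p_total: total power budget
    Returns the power per sub-carrier, p_i = max(0, mu - 1/g_i)."""
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    while hi - lo > eps:
        mu = (lo + hi) / 2  # candidate water level
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > p_total:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Stronger sub-carriers receive more power, and very weak ones may be switched off entirely; a scheme without power adaptation would instead spread the budget uniformly.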
Soft frequency reuse is a strong tool for co-channel interference mitigation in cellular OFDMA/LTE networks. The performance of such networks significantly depends on the configuration of the power masks that implement the soft frequency reuse patterns. In this paper, we investigate the performance of different power mask configurations against the optimal case, in which a central entity optimally distributes power and resource blocks among the users of the network. It is shown that large differences exist between the performance of different mask types and the optimal case, in both the overall cell throughput and the cell-edge user performance.
Since Age of Information (AoI) was proposed as a metric that quantifies the freshness of information updates in a communication system, there has been a constant effort in understanding and optimizing different statistics of the AoI process for classical queueing systems. Beyond classical queueing systems, systems with no queue, or with a unit-capacity queue storing only the latest packet, have more recently been gaining importance, as storing and transmitting older packets does not reduce AoI at the receiver. Following this line of research, we study the distribution of AoI for the GI/GI/1/1 and GI/GI/1/2* systems under non-preemptive scheduling. For any single-source-single-server queueing system, we derive, using sample path analysis, a fundamental result that characterizes the AoI violation probability, and use it to obtain closed-form expressions for D/GI/1/1, M/GI/1/1, as well as systems that use a zero-wait policy. Further, when exact results are not tractable, we present a simple methodology for obtaining upper bounds on the violation probability for both GI/GI/1/1 and GI/GI/1/2* systems. An interesting feature of the proposed upper bounds is that, if the departure rate is given, they overestimate the violation probability by at most a value that decreases with the arrival rate. Thus, given the departure rate and for a fixed average service time, the bounds are tighter at higher utilization.
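The AoI violation probability studied here can be approximated numerically for a concrete special case. The following Monte Carlo sketch (an assumed illustration, not the paper's sample-path analysis) estimates P(AoI > x) for a non-preemptive M/M/1/1 system, where a packet arriving while the server is busy is discarded:

```python
import random

def aoi_violation_mm11(lam, mu, x, n=100_000, seed=1):
    """Monte Carlo estimate of P(AoI > x) for non-preemptive M/M/1/1:
    Poisson(lam) arrivals, Exp(mu) service, no queue (busy -> discard)."""
    random.seed(seed)
    t = 0.0
    busy_until = 0.0
    deliveries = []          # (delivery_time, generation_time)
    while t < n / lam:
        t += random.expovariate(lam)      # next arrival
        if t >= busy_until:               # server idle: accept the packet
            busy_until = t + random.expovariate(mu)
            deliveries.append((busy_until, t))
    # Between deliveries i and i+1, AoI at time u equals u - gen_i;
    # integrate the time during which AoI exceeds x.
    over, total = 0.0, 0.0
    for (d0, g0), (d1, _g1) in zip(deliveries, deliveries[1:]):
        total += d1 - d0
        start = max(d0, g0 + x)           # first instant with AoI > x
        if start < d1:
            over += d1 - start
    return over / total
```

Such a simulator is a handy sanity check for the closed-form expressions and upper bounds: the estimated violation probability decreases in x and can be compared against an analytical bound point by point.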
Age of Information (AoI) has proven to be a useful metric in networked systems where timely information updates are of importance. In the literature, minimizing "average age" has received considerable attention. However, various applications pose stricter age requirements on the updates, which demand knowledge of the AoI distribution. Furthermore, the analysis of the AoI distribution in a multi-hop setting, which is important for the study of Wireless Networked Control Systems (WNCS), has not been addressed before. Toward this end, we study the distribution of AoI in a WNCS with two hops and formulate the problem of minimizing the tail of the AoI distribution with respect to the frequency of generating information updates, i.e., the sampling rate of monitoring a process, under the first-come-first-serve (FCFS) queuing discipline. We argue that computing an exact expression for the AoI distribution may not always be feasible; therefore, we opt for computing upper bounds on the tail of the AoI distribution. Using these upper bounds, we formulate Upper Bound Minimization Problems (UBMPs), namely the Chernoff-UBMP and the alpha-relaxed Upper Bound Minimization Problem (alpha-UBMP), where alpha > 1 is an approximation factor, and solve them to obtain "good" heuristic rate solutions for minimizing the tail. We demonstrate the efficacy of our approach by solving the proposed UBMPs for three service distributions: geometric, exponential, and Erlang. Simulation results show that the rate solutions obtained are near-optimal for minimizing the tail of the AoI distribution for the considered distributions.
In this article, we investigate the transient behavior of a sequence of packets/bits traversing a multi-hop wireless network under static routing. Our work is motivated by novel applications from the domains of process automation, Machine-Type Communication (MTC) and cyber-physical systems, where short messages are communicated and statistical guarantees need to be provided on a per-message level. In order to optimize such a network, apart from understanding the stationary system dynamics, an understanding of the short-term dynamics (i.e., transient behavior) is also required. To this end, we derive novel Wireless Transient Bounds (WTB) for end-to-end delay and backlog in a multi-hop wireless network using a stochastic network calculus approach. We start by analyzing a single end-to-end path, i.e., a line topology, and then show how the obtained results can be applied to a mesh network with static routing using a concept called 'leftover service'. WTB depends on the initial backlog at each node as well as the instantaneous channel states. We numerically compare WTB with the Kernel-Based Transient Bound (KBTB), which can be obtained by adapting an existing stationary bound, as well as with the simulated end-to-end delay of the investigated network. While KBTB and stationary bounds are not able to capture the short-term system dynamics well, WTB provides a relatively tight upper bound and has a decay rate that closely matches the simulation. WTB achieves this with only a slight increase in computational complexity, by a factor of O(T + N), where T is the duration of the arriving sequence and N is the number of hops in the network. We believe that the presented analysis and bounds are necessary tools for future work on transient network optimization for many important emerging applications, e.g., massive MTC, critical MTC, edge computing and autonomous vehicles.
There is a growing interest in analysing the freshness of data in networked systems. Age of Information (AoI) has emerged as a relevant metric to quantify this freshness at a receiver, and minimizing this metric for different system models has received significant research attention. However, a fundamental question remains: what is the minimum achievable AoI in any single-server-single-source queuing system for a given service-time distribution? We address this question for the average peak AoI (PAoI) statistic by considering a generate-at-will source model, service preemptions, and request delays. Our main result is the characterization of the minimum achievable average PAoI, and we show that it is achieved by a fixed-threshold policy among the set of all causal policies. We use the characterization to provide a necessary and sufficient condition for preemptions to be beneficial for a given service-time distribution. Our numerical results, obtained using well-known distributions, demonstrate that the heavier the tail of a distribution, the higher the performance gains of using preemptions.
There is a growing interest in analysing the freshness of data in networked systems. Age of Information (AoI) has emerged as a popular metric to quantify this freshness at a given destination, and there has been a significant research effort in optimizing this metric in communication and networking systems under different settings. In contrast to previous works, we are interested in a fundamental question: what is the minimum achievable AoI in any single-server-single-source queuing system for a given service-time distribution? To address this question, we study the problem of optimizing AoI under service preemptions. Our main result is the characterization of the minimum achievable average peak AoI (PAoI). We obtain this result by showing that a fixed-threshold policy is optimal in the set of all randomized-threshold causal policies. We use the characterization to provide necessary and sufficient conditions on the service-time distributions under which preemptions are beneficial.
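The effect of a fixed-threshold preemption policy can be illustrated by simulation. The sketch below (an assumed illustration with a generate-at-will, zero-wait source, not the paper's analytical characterization) estimates the average peak AoI when service attempts longer than a threshold tau are preempted and restarted with a fresh sample:

```python
import random

def mean_paoi_threshold(service_sampler, tau, n=50_000, seed=0):
    """Average peak AoI under a fixed-threshold preemption policy with a
    generate-at-will source: if the current service exceeds tau, it is
    preempted after tau and a fresh sample is started immediately.
    Peak AoI at a delivery = age of the previously delivered sample."""
    random.seed(seed)
    peaks = []
    prev_age = 0.0               # system time of the last delivered sample
    for _ in range(n):
        elapsed = 0.0            # inter-delivery time
        while True:
            s = service_sampler()
            if s <= tau:
                elapsed += s     # successful (non-preempted) service
                break
            elapsed += tau       # preempt after tau, resample
        peaks.append(prev_age + elapsed)
        prev_age = s             # fresh sample: age at delivery = service time
    return sum(peaks) / n
```

For heavy-tailed service times (e.g., Pareto), a finite threshold cuts off the long attempts and lowers the average peak AoI relative to never preempting, matching the qualitative message of the result.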
The joint design of control and communication scheduling in a Networked Control System (NCS) is known to be a hard problem. Several research works have successfully designed optimal sampling and/or control strategies under simplified communication models, where transmission delays/times are negligible or fixed. However, considering sophisticated communication models with random transmission times results in highly coupled and difficult-to-solve optimal design problems, due to the parameter inter-dependencies between the estimation/control and communication layers. To tackle this problem, in this work, we investigate the applicability of Age-of-Information (AoI) for solving control/estimation problems in an NCS under i.i.d. transmission times. Our motivation for this investigation stems from the following facts: 1) recent results indicate that AoI can be tackled under relatively sophisticated communication models, and 2) a lower AoI in an NCS may result in a lower estimation/control cost. We study the joint optimization of sampling and scheduling for a single-loop stochastic LTI networked system with the objective of minimizing the time-average squared norm of the estimation error. We first show that, under mild assumptions on the information structure, the optimal control policy can be designed independently of the sampling and scheduling policies. We then derive a key result that minimizing the estimation error is equivalent to minimizing a non-negative and non-decreasing function of AoI. The parameters of this function include the LTI matrix and the covariance of the exogenous noise in the LTI system. Noting that the formulated problem is a stochastic combinatorial optimization problem and is hard to solve, we resort to heuristic algorithms by extending existing algorithms in the AoI literature. We also identify a class of LTI system dynamics for which minimizing the estimation error is equivalent to minimizing the expected AoI.
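For open-loop prediction of a discrete-time LTI system x_{t+1} = A x_t + w_t with noise covariance W, a commonly used form of such an AoI cost is g(a) = trace(sum_{k=0}^{a-1} A^k W (A^k)^T), the accumulated prediction-error covariance over a steps of staleness. The sketch below computes this form; it is a standard expression assumed here for illustration, and the paper's exact function may differ:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def error_cost(A, W, aoi):
    """Estimation-error cost as a non-decreasing function of AoI
    (hypothetical form): g(a) = trace(sum_{k=0}^{a-1} A^k W (A^k)^T)
    for the LTI system x_{t+1} = A x_t + w_t with Cov(w) = W."""
    n = len(A)
    Ak = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    acc = [[0.0] * n for _ in range(n)]
    for _ in range(aoi):
        acc = mat_add(acc, mat_mul(mat_mul(Ak, W), transpose(Ak)))
        Ak = mat_mul(A, Ak)
    return sum(acc[i][i] for i in range(n))
```

The function is non-negative and non-decreasing in the AoI, and for unstable dynamics (eigenvalues of A outside the unit circle) it grows rapidly, which is why low AoI matters most for such loops.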
We consider a finite-state Discrete-Time Markov Chain (DTMC) source that can be sampled for detecting the events when the DTMC transits to a new state. Our goal is to study the trade-off between sampling frequency and staleness in detecting the events. We argue that, for the problem at hand, using Age of Information (AoI) for quantifying the staleness of a sample is conservative and therefore introduce an age penalty for this purpose. We study two optimization problems: minimize the average age penalty subject to an average sampling frequency constraint, and minimize the average sampling frequency subject to an average age penalty constraint; both are Constrained Markov Decision Problems. We solve them using a linear programming approach and compute Markov policies that are optimal among all causal policies. Our numerical results demonstrate that the computed Markov policies not only outperform optimal periodic sampling policies, but also achieve sampling frequencies close to or lower than that of an optimal clairvoyant (non-causal) sampling policy, if a small age penalty is allowed.
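The linear programming approach for such constrained MDPs typically uses the standard occupancy-measure formulation; the following is a generic sketch of that LP, not necessarily the exact program used in the paper. With $\rho(x,a)$ the stationary state-action occupation measure, transition kernel $P$, age-penalty cost $c$, sampling cost $d$, and budget $C$:

```latex
\begin{align}
\min_{\rho \ge 0} \quad & \sum_{x,a} \rho(x,a)\, c(x,a) \\
\text{s.t.} \quad & \sum_{a} \rho(x',a) = \sum_{x,a} \rho(x,a)\, P(x' \mid x,a) \quad \forall x', \\
& \sum_{x,a} \rho(x,a) = 1, \qquad \sum_{x,a} \rho(x,a)\, d(x,a) \le C .
\end{align}
```

An optimal (possibly randomized) Markov policy is then recovered by sampling action $a$ in state $x$ with probability $\rho(x,a) / \sum_{a'} \rho(x,a')$.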
We consider a finite-state Discrete-Time Markov Chain (DTMC) source that can be sampled for detecting the events when the DTMC transits to a new state. Our goal is to study the trade-off between sampling frequency and staleness in detecting the events. We argue that, for the problem at hand, using Age of Information (AoI) for quantifying the staleness of a sample is conservative and therefore study another freshness metric, the age penalty, which is defined as the time elapsed since the first transition out of the most recently observed state. We study two optimization problems: minimize the average age penalty subject to an average sampling frequency constraint, and minimize the average sampling frequency subject to an average age penalty constraint; both are Constrained Markov Decision Problems. We solve them using the Lagrangian MDP approach, where we also provide structural results that reduce the search space. Our numerical results demonstrate that the computed Markov policies not only outperform optimal periodic sampling policies, but also achieve sampling frequencies close to or lower than that of an optimal clairvoyant (non-causal) sampling policy, if a small age penalty is allowed.
The wireless control of modular multilevel converter (MMC) submodules was recently proposed. The success of the control depends on specialized control methods suitable for wireless communication and a properly designed wireless communication network in the MMC valve hall, while aiming for low latency and high reliability. The wireless communication in the hall can be affected by the electromagnetic interference (EMI) of MMC submodules and by voltage and current transients. In this article, firstly, a wireless communication network based on 5G New Radio is designed for an example full-scale MMC valve hall. After that, the radiated EMI characteristics of MMC submodules with different voltage and current ratings and of two dc circuit breakers are measured, and the effects of EMI on wireless communication in the multi-GHz frequency band are tested. The interference from the components is confined below 500 MHz, and wireless communication with a 5825 MHz center frequency is not affected by the interference.
The modular multilevel converter is one of the most preferred converters for high-power conversion applications. Wireless control of the submodules can contribute to its evolution by lowering the material and labor costs of cabling and by increasing the availability of the converter. However, wireless control leads to many challenges for the control and modulation of the converter, as well as for proper low-latency, high-reliability communication. This paper investigates the tolerable asynchronism between the phase-shifted carriers used in modulation from a wireless control point of view, and proposes a control method along with a communication protocol for wireless control. The functionality of the proposed method is validated by computer simulations in steady state.
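The phase-shifted carriers in question can be sketched as triangular waves, one per submodule, each offset by 1/N of a carrier period; a phase-error term models the asynchronism that wireless control introduces. This is a generic textbook model of phase-shifted-carrier modulation for illustration, not the paper's specific implementation:

```python
import math

def psc_carrier(i, n_sm, t, f_c, phase_error=0.0):
    """Triangular carrier (in [0, 1]) for submodule i of n_sm under
    phase-shifted-carrier modulation: each carrier is shifted by
    2*pi/n_sm; phase_error (rad) models carrier asynchronism."""
    theta = 2 * math.pi * (f_c * t + i / n_sm) + phase_error
    frac = (theta / (2 * math.pi)) % 1.0
    # triangular wave: rises on the first half-period, falls on the second
    return 2 * frac if frac < 0.5 else 2 * (1 - frac)
```

Comparing each carrier against a common reference yields the per-submodule switching signals; sweeping `phase_error` in such a model is one way to probe how much carrier asynchronism the modulation tolerates.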
The central control of an MMC becomes demanding in computation power and communication bandwidth as the number of submodules increases. Distributed control methods can overcome these bottlenecks. In this paper, a simple distributed control method, together with synchronization of the modulation carriers in the submodules, is presented. The proposal is implemented on a lab-scale MMC with asynchronous serial communication over a star network between the central and local controllers. It is shown that the proposed control method works satisfactorily in steady state. The method can be applied as is to MMCs with any number of submodules per arm.
The wireless control of modular multilevel converter (MMC) submodules might offer advantages for MMCs with a high number of submodules. However, the control system should tolerate the stochastic nature of wireless communication and continue operation flawlessly or, at least, avoid overcurrents, overvoltages, and component failures. Previously proposed control methods made it possible to control the submodules wirelessly under consecutive communication errors of up to hundreds of control cycles. The submodule control method in this paper enables the MMC to safely ride through communication errors that last longer, even when the MMC experiences significant electrical disturbances during the errors. The submodules operate autonomously by implementing a replica of the central controller in each submodule and driving the replicas based on local variables and the previously received data. Simulation and experimental results verify the proposed control method.
Wireless control of modular multilevel converter (MMC) submodules has recently been proposed, with potential cost and availability advantages for the converter station. In this paper, the wireless control of MMC submodules under ac-side faults is investigated. The central controller of the MMC is equipped to handle unbalanced grid conditions. Local current controllers in the submodules operate autonomously in case of loss of wireless communication during the fault. A set of simulations with single line-to-ground, line-to-line, and three-phase-to-ground faults reveals that the MMC rides through the faults in all cases, both under the expected communication conditions and when communication is lost before or after the fault instant.
Wireless control of modular multilevel converter (MMC) submodules offers benefits from several points of view, such as lower converter cost and shorter installation time. In return for these advantages, the stochastic performance of wireless communication networks necessitates an advanced converter control system immune to the losses and delays of the wirelessly transmitted data. This paper proposes an advancement to the distributed control of MMCs for use in wireless submodule control. Using the proposed method, the operation of the MMC continues smoothly and uninterruptedly during wireless communication errors. The previously proposed wireless submodule control concept relies on implementing the modulation and individual submodule-capacitor-voltage control in the submodules using insertion indices transmitted from a central controller. This paper takes that concept as a basis and introduces the autonomous synthesis of the indices in the submodules during communication errors. The new approach allows the MMC to continue its operation when one, some, or all submodules suffer from communication errors for a limited time. The proposal is validated experimentally on a laboratory-scale MMC.
Wireless control of modular multilevel converter (MMC) submodules offers several potential benefits, such as decreased converter costs and easier converter installation. However, wireless control comes with several challenging engineering requirements. The control methods used with wired communication networks are not directly applicable to wireless control due to the latency and reliability differences between wired and wireless networks. This article reviews the existing control architectures of MMCs and proposes a control and communication method for wireless submodule control. In addition, a synchronization method for pulse-width modulation carriers suitable for wireless control is proposed. The imperfections of wireless communication, such as higher latency and packet losses compared to wired communication, are analyzed with respect to the operation of MMCs. The latency is fixed by a proper controller and wireless network design. The converter is rendered immune to packet losses by decreasing the closed-loop control bandwidth. The functionality of the proposal is verified, for the first time, experimentally on a laboratory-scale MMC using a simple wireless network. It is shown that wireless control of MMC submodules with the proposed approach can perform comparably to wired control.
Various industrial control applications have stringent end-to-end latency requirements on the order of a few milliseconds. Software-defined networking (SDN) is a promising solution for meeting these stringent requirements under varying traffic patterns, as it enables the flexible management of flows across the network. Thus, SDN makes it possible to ensure that traffic flows use congestion-free paths, reducing the end-to-end delay to the forwarding and processing delays at the SDN nodes. However, accommodating new flows at runtime is challenging in such a setting, as it may require the migration of existing flows without interrupting ongoing traffic. In this paper, we consider the problem of dynamic flow migration and propose a polynomial-time algorithm that finds a solution whenever direct flow migration is feasible. We furthermore propose an algorithm for computing both direct and indirect flow migrations and prove its correctness. Numerical results obtained on a FatTree network topology show that flow migration is typically necessary for networks with a moderate number of flows, while direct flow migration is feasible in around 60% of the cases.
Spurred by recent industrial trends, such as factory automation or phase synchronization in the smart grid, there has lately been significant interest in wireless industrial networks. In contrast to traditional applications, the focus is on carrying out communication at very short latencies together with high reliabilities. Meeting such extreme requirements with wireless networks is challenging. A potential candidate for such a network is a token-passing protocol, as it allows latencies to be bounded. However, it lacks mechanisms to cope with the dynamics of wireless channels. In this paper, we present EchoRing, a novel wireless token-passing protocol. Cooperative communication and improved fault tolerance allow this decentralized protocol to support industrial applications over wireless networks. Based on experimental results, we demonstrate the suitability of EchoRing for the demands of industrial applications. EchoRing outperforms other schemes by several orders of magnitude in terms of reliability for latencies of and below 10 ms.
Given the rising demand for wireless solutions in the area of machine-to-machine communication, we present the novel EchoRing protocol. It is designed to serve the communication needs of industrial applications while being optimized specifically for the wireless channel. Directly transferring known principles of tethered communication to the wireless domain is likely to yield degraded performance. Additional techniques are needed to make these principles master the challenges of wireless channel dynamics. On the other hand, the majority of currently existing wireless communication standards are developed to allow mobility on the last hop of a transmission path that originates in the Internet or a local home network. Hence, their focus is on supporting the best-effort paradigm of the data streams. In industrial environments, however, this best-effort paradigm is replaced by the need to steadily achieve very high reliabilities at very short deadlines.
In this demonstration, we will show how industrial applications can be interconnected wirelessly despite the drawbacks of the wireless channel. The experimental setup makes it possible to compare different medium access control protocols under varying conditions.
Recently, the wireless networking community has become increasingly interested in novel protocol designs for safety-critical applications. These new applications come with unprecedented latency and reliability constraints, which pose many open challenges. A particularly important one relates to the question of how to develop such systems. Traditionally, the development of wireless systems has mainly relied on simulations to identify viable architectures. However, in this case the drawbacks of simulations, in particular increasing run-times, rule out their application. Instead, in this paper we propose to use probabilistic model checking, a formal model-based verification technique, to evaluate different system variants during the design phase. Apart from allowing evaluations, and therefore design iterations, within much shorter periods, probabilistic model checking provides bounds on the reliability of the considered design choices. We demonstrate these salient features on the novel EchoRing protocol, a token-based system designed for safety-critical industrial applications. Several mechanisms for dealing with a token loss are modeled and evaluated through probabilistic model checking, showing its potential as a suitable evaluation tool for such novel wireless protocols. In particular, we show by probabilistic model checking that wireless token-passing systems can benefit tremendously from the considered fault-tolerant methods. The obtained performance guarantees for the different mechanisms even provide reasonable bounds for experimental results obtained from a real-world implementation.
Emerging machine-to-machine communication scenarios are envisioned to deal with more stringent quality-of-service demands. This relates mainly to outage and latency requirements, which, for example for safety-critical messages, are quite different from those of traditional applications. On the other hand, it is widely accepted that machine-to-machine communication systems need to be energy-efficient because of the widespread use of battery-powered devices, but also due to their huge deployment numbers. In this paper, we address these issues with respect to multi-hop transmissions. Specifically, we deal with minimizing the energy consumed for transmitting a packet under end-to-end outage and latency requirements. We account for the cases in which the system can utilize solely average channel state information, or can in addition obtain and profit from instantaneous channel state information. The developed solution is based on convex optimization. It is shown numerically that, despite accounting for the energy consumption of acquiring instantaneous channel state information, conveying a packet with instantaneous channel state information can be up to 100 times more energy efficient than with average channel state information, especially as the outage and latency requirements become tough.
This paper proposes an efficient solution to the open problem of network planning for large-scale WLAN deployments. WLAN performance is governed by the CSMA/CA protocol, whose dynamic effects are difficult to capture. Accurate performance evaluation therefore depends on simulations and takes time. A detailed analysis of dozens of candidate designs with varying AP positions and channel assignments during network planning is thus infeasible. In our solution, we first identify a few good candidate designs using a multi-criteria optimization model, which features notions of cell overlap and station throughput. These candidate designs are taken from the corresponding Pareto frontier. In the second step, we evaluate the performance of the candidate designs by means of simulations. We apply our method to a realistic, large-scale planning scenario for an indoor office environment. The detailed simulations reveal important characteristics of the candidate designs that are not captured by the optimization model. The resulting performance differs significantly across the candidate designs. Hence, this approach combines the benefits of mathematical optimization and simulations while avoiding their individual drawbacks.
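The first step, extracting non-dominated candidate designs, can be illustrated with a minimal Pareto-frontier filter. The two objectives (cell overlap to be minimized, station throughput to be maximized) follow the abstract, but the function name and representation are simplifying assumptions:

```python
def pareto_frontier(designs):
    """Keep the designs not dominated by any other design.
    Each design is a pair (cell_overlap, station_throughput);
    lower overlap and higher throughput are assumed better."""
    frontier = []
    for i, (ov_i, th_i) in enumerate(designs):
        dominated = any(
            ov_j <= ov_i and th_j >= th_i and (ov_j < ov_i or th_j > th_i)
            for j, (ov_j, th_j) in enumerate(designs) if j != i)
        if not dominated:
            frontier.append((ov_i, th_i))
    return frontier
```

Only the surviving designs would then be handed to the time-consuming simulation step.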
Feature-based authentication schemes that verify wireless transmitter identities based on physical-layer features allow for fast and efficient authentication with minimal overhead. Hence, they are interesting to consider for safety-critical applications where low latency and high reliability are required. However, as erroneous authentication decisions will introduce delays, we propose to study the impact of feature-based schemes on the system-level performance. In this paper, we therefore study the queuing performance of a line-of-sight wireless link that employs a feature-based authentication scheme based on the complex channel gain. Using stochastic network calculus, we provide bounds on the delay performance, which are validated by numerical simulations. The results show that the delay and authentication performance are highly dependent on the SNR and Rice factor. However, under good channel conditions, a missed-detection rate of 10^-8 can be achieved without introducing excessive delays in the system.
We study the detection and delay performance impacts of a feature-based physical layer authentication (PLA) protocol in mission-critical machine-type communication (MTC) networks. The PLA protocol uses generalized likelihood-ratio testing based on the line-of-sight (LOS), single-input multiple-output channel-state information in order to mitigate impersonation attempts from an adversary node. We study the detection performance, develop a queueing model that captures the delay impacts of erroneous decisions in the PLA (i.e., false alarms and missed detections), and model three different adversary strategies: data injection, disassociation, and Sybil attacks. Our main contribution is the derivation of analytical delay performance bounds that allow us to quantify the delay introduced by PLA, which can potentially degrade the performance in mission-critical MTC networks. For the delay analysis, we utilize tools from stochastic network calculus. Our results show that with a sufficient number of receive antennas (approx. 4-8) and sufficiently strong LOS components from legitimate devices, PLA is a viable option for securing mission-critical MTC systems, despite the low latency requirements associated with the corresponding use cases. Furthermore, we find that PLA can be very effective in detecting the considered attacks, and in particular, it can significantly reduce the delay impacts of disassociation and Sybil attacks.
We study a multi-user up-link scenario where an attacker tries to impersonate the legitimate transmitters. We present a new framework for deriving a posteriori attack probabilities from the channel observations at the access point, which enables fast intrusion detection and authentication at the physical layer and can be exploited to reduce the security overhead by offloading higher-layer authentication schemes. This is highly relevant for delay-sensitive applications that are targeted in 5G, where the security overhead may limit the real-time performance. We take a factor-graph approach that can easily be extended to take into account other features, channel models, and radio access schemes. While related works only consider single-link scenarios, the multi-user approach in this paper allows us to exploit the cross-channel correlation of the large-scale fading parameters, which is due to the propagation environment, for improving the detection performance. As numerical results show, our approach provides significant performance gains, especially for slowly changing channels with high correlation.
Physical layer authentication (PLA) has recently been discussed in the context of URLLC due to its low complexity and low overhead. Nevertheless, these schemes also introduce additional sources of error through missed detections and false alarms. The trade-offs of these characteristics are strongly dependent on the deployment scenario as well as the processing architecture. Thus, considering a feature-based PLA scheme utilizing channel-state information at multiple distributed radio-heads, we study these trade-offs analytically. We model and analyze different scenarios of centralized and decentralized decision-making and decoding, as well as the impacts of a single-antenna attacker launching a Sybil attack. Based on stochastic network calculus, we provide worst-case performance bounds on the system-level delay for the considered distributed scenarios under a Sybil attack. Results show that the arrival-rate capacity for a given latency deadline is increased for the distributed scenarios. For a clustered sensor deployment, we find that the distributed approach provides 23% higher capacity when compared to the centralized scenario.
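The difference between centralized and decentralized decision-making can be sketched as a simple decision-fusion rule over the local alarms of the radio-heads. This is an illustrative toy model, not the paper's scheme; the function name and rules are assumptions:

```python
def decentralized_decision(local_alarms, rule="majority"):
    """Decision-fusion sketch for distributed radio-heads.
    local_alarms: list of booleans, True = this radio-head rejects the frame.
    rule: "or" rejects if any radio-head alarms; "majority" needs most of them."""
    votes = sum(local_alarms)
    if rule == "or":
        return votes >= 1
    return votes > len(local_alarms) / 2
```

The choice of rule trades false alarms against missed detections, which is exactly the trade-off whose delay impact the stochastic-network-calculus analysis quantifies.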
This paper proposes a new approach for physical layer authentication where transmissions are authenticated based on the single-input/multiple-output channel states observed at multiple distributed antenna arrays. The receiver operating characteristics (ROC) are derived in terms of closed-form expressions for the false alarm and missed detection probabilities in order to evaluate the effectiveness compared to single-array authentication. To this end, we study the worst-case missed detection probability based on the optimal attacker position. Finally, we apply our previously developed queueing analytical tools, based on stochastic network calculus, to assess the delay performance impacts of the physical layer authentication scheme in a mission-critical communication scenario. Our results show that the distributed approach significantly outperforms single-array authentication in terms of worst-case missed detection probability and that this can help mitigate the delay performance impacts of authentication false alarms.
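For a scalar-Gaussian toy model of such a feature-based test, ROC points can be computed in closed form. The model and all names here are assumptions for illustration, not the paper's SIMO formulation:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def roc_point(threshold, sigma, attacker_offset):
    """Sketch of a feature-based authentication test on a scalar feature:
    accept if |observed - reference| <= threshold, with observation noise
    std sigma and the attacker's feature displaced by attacker_offset.
    Returns (false_alarm, missed_detection) probabilities."""
    # Legitimate node: observed - reference ~ N(0, sigma^2)
    p_fa = 2 * q_func(threshold / sigma)
    # Attacker: observed - reference ~ N(attacker_offset, sigma^2)
    p_md = (q_func((attacker_offset - threshold) / sigma)
            - q_func((attacker_offset + threshold) / sigma))
    return p_fa, p_md
```

Sweeping `threshold` traces out the ROC: a larger threshold lowers the false alarm probability but raises the missed detection probability, mirroring the trade-off analyzed in the paper.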
In this paper, we extend our previous work on user assignment (UA) in Cloud-RAN, where we proposed an algorithm for user assignment. We first highlight an inherent fairness issue in that UA scheme: some users in the system will never get served. To improve fairness, we propose that the UA scheme be preceded by a user scheduling step, which selects at any time the users that should be considered by the UA algorithm for scheduling in the next time slot. Two user scheduling approaches have been studied. The first scheme improves the minimum throughput (MT) by selecting at any time the users with the lowest throughput. The second scheme is based on round-robin (RR) scheduling, where the set of potentially scheduled users for the next slot is formed by excluding all users already served in that round. In both cases, the subset of users actually served is determined by the UA algorithm. We evaluate their fairness and sum-rate performance via extensive simulations. While one might have expected a tradeoff between sum-rate performance and fairness, our results show that MT improves both metrics compared to the original UA algorithm (without fairness) for some choices of parameter values. This implies that both fairness and aggregate system performance can be improved by a careful choice of the number of assigned and served users.
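The two candidate-selection rules described above can be sketched as follows. The function names and throughput bookkeeping are illustrative assumptions; the actual serving decision would still be made by the UA algorithm:

```python
def mt_candidates(throughputs, n_select):
    """Minimum-throughput (MT) sketch: propose the n_select users with the
    lowest accumulated throughput as candidates for the UA step.
    throughputs: dict mapping user id -> accumulated throughput."""
    ranked = sorted(throughputs, key=throughputs.get)
    return ranked[:n_select]

def rr_candidates(all_users, served_this_round, n_select):
    """Round-robin (RR) sketch: exclude users already served in this round;
    when everyone has been served, a new round starts with all users."""
    pool = [u for u in all_users if u not in served_this_round]
    if not pool:  # round complete, reset the pool
        pool = list(all_users)
    return pool[:n_select]
```

MT directly targets the starved users, which is why it can improve both fairness and sum-rate in the reported results.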
In this paper, we investigate the problem of mitigating interference between so-called antenna domains of a cloud radio access network (C-RAN). In contrast to previous work, we turn to an approach utilizing primarily the optimal assignment of users to central processors in a C-RAN deployment. We formulate this user assignment problem as an integer optimization problem and propose an iterative algorithm for obtaining a solution. Motivated by the lack of optimality guarantees on such solutions, we opt to find lower bounds on the problem and the resulting interference leakage in the network. We thus derive the corresponding Dantzig-Wolfe decomposition, formulate the dual problem, and show that the former offers a tighter bound than the latter. We highlight the fact that the bounds in question consist of linear problems with an exponential number of variables and adapt the column generation method for solving them. In addition to shedding light on the tightness of the bounds in question, our numerical results show significant sum-rate gains over several comparison schemes. Moreover, the proposed scheme delivers performance similar to weighted minimum mean-squared error (MMSE) at significantly lower complexity (around 10 times lower).
We study here the problem of Antenna Domain Formation (ADF) in cloud RAN systems, whereby multiple remote radio-heads (RRHs) are each to be assigned to a set of antenna domains (ADs), such that the total interference between the ADs is minimized. We formulate the corresponding optimization problem by introducing the concept of interference coupling coefficients among pairs of radio-heads. We then propose a low-overhead algorithm that allows the problem to be solved in a distributed fashion among the aggregation nodes (ANs), and establish basic convergence results. Moreover, we propose a simple relaxation of the problem, enabling us to characterize its maximum performance. We follow a layered coordination structure: after the ADs are formed, radio-heads are clustered to perform coordinated beamforming using the well-known Weighted-MMSE algorithm. Finally, our simulations show that the proposed ADF mechanism significantly increases the sum-rate of the system compared to random assignment of radio-heads.
It is well known that channel-dependent OFDMA resource assignment algorithms provide a significant performance improvement compared to static (i.e. channel-unaware) approaches. Such dynamic algorithms constantly adapt resource assignments to current channel states according to some objective function. Due to these dynamics, it is difficult to predict the resulting performance for such schemes given a certain scenario (characterized by the number of terminals in the cell and their average channel gains). Hence, previous work on admission control for OFDMA systems neglects the performance improvement from channel-dependent resource assignments and bases analysis on the average channel gains instead. In this paper we provide for the first time an analytical framework for admission control in OFDMA systems applying channel-dependent resource assignments. The framework is based on fundamental transformations of the channel gains caused by the channel-dependent assignment algorithms. We provide closed-form expressions for these transformations and derive from them probability functions for the rate achieved per terminal and frame. These functions can then be used for admission control as demonstrated in this paper for Voice-over-IP streams in IEEE 802.16e systems.
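A minimal example of a channel-dependent assignment rule is given below. It uses a plain max-SNR rule, which is an assumed stand-in for the objective functions discussed in the paper, and the names are illustrative:

```python
def max_gain_assignment(gains):
    """Channel-dependent assignment sketch.
    gains[t][s] is the instantaneous channel gain of terminal t on
    subcarrier s; each subcarrier is assigned to the terminal with the
    largest gain on it. Returns a list: subcarrier index -> terminal index."""
    n_sub = len(gains[0])
    return [max(range(len(gains)), key=lambda t: gains[t][s])
            for s in range(n_sub)]
```

Because each terminal then transmits on subcarriers where its gain is above average, the effective per-terminal gain distribution is transformed relative to the static case, which is exactly the transformation the paper's admission-control framework characterizes in closed form.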
The concept of the effective service capacity is an analytical framework for evaluating the QoS-constrained queuing performance of communication systems. Recently, it has been applied to the analysis of different wireless systems, such as point-to-point and multi-user systems. In contrast to previous work, we consider slot-based systems where a scheduler determines the packet size to be transmitted at the beginning of each slot. For this, the scheduler can utilize outdated channel state information. Based on a threshold error model, we derive the effective service capacity for different scheduling strategies the scheduler might apply. We show that even slightly outdated channel state information leads to a significant loss in capacity compared to an ideal system with perfect channel state information available at the transmitter. This loss depends on the risk level the scheduler is willing to take, represented by an SNR margin. We show that for any QoS target and average link state there exists an optimal SNR margin improving the maximum sustainable rate. Typically, this SNR margin is around 3 dB, but it is sensitive to the QoS target and average link quality. Finally, we show that adapting to the instantaneous channel state only pays off if the correlation between the channel estimate and the channel state is relatively high (with a coefficient above 0.9).
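The trade-off behind the SNR margin can be illustrated with a small Monte-Carlo sketch under a threshold error model: backing off the rate by the margin reduces the payload per slot but also the chance that the actual channel cannot sustain it. The correlation model, parameters, and names are simplifying assumptions, not the paper's analytical derivation:

```python
import math
import random

def sustained_rate(margin_db, rho, snr_avg_db, n_trials=20000, seed=1):
    """Average rate (bit/s/Hz) delivered per slot when the scheduler rates
    the slot from an outdated SNR estimate backed off by margin_db.
    Under the threshold error model, the slot fails (rate 0) if the actual
    SNR falls below the backed-off target. rho is the estimate/actual
    correlation of the underlying Rayleigh-power fading. Illustrative only."""
    random.seed(seed)
    snr_avg = 10 ** (snr_avg_db / 10)
    total = 0.0
    for _ in range(n_trials):
        g = random.expovariate(1.0)            # actual normalized power
        h = random.expovariate(1.0)            # independent component
        e = rho * g + (1 - rho) * h            # crude correlated estimate
        est, act = snr_avg * e, snr_avg * g
        target = est / 10 ** (margin_db / 10)  # back off by the SNR margin
        if act >= target:                      # threshold error model
            total += math.log2(1 + target)
    return total / n_trials
```

Sweeping `margin_db` for a given `rho` exposes the interior optimum reported in the paper: too small a margin loses slots to outages, too large a margin wastes rate.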