This paper presents the design and implementation of the software for a run-time assurance infrastructure in the E-care@home system. An experimental evaluation is conducted to verify that the run-time assurance infrastructure is functioning correctly, and to enable detecting performance degradation in experimental IoT network deployments within the context of E-care@home. © 2018, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
Exploiting multiple radio channels for communication has long been known as a practical way to mitigate interference in wireless settings. In Wireless Sensor Networks, however, multi-channel solutions have not reached their full potential: the MAC layers included in TinyOS or the Contiki OS, for example, are mostly single-channel. The literature offers a number of interesting solutions, but experimental results were often too few to build confidence. We propose a practical extension of low-power listening, MiCMAC, that performs channel hopping, operates in a distributed way, and is independent of upper layers of the protocol stack. The above properties make it easy to deploy in a variety of scenarios, without any extra configuration/scheduling/channel selection hassle. We implement our solution in Contiki and evaluate it in a 97-node testbed while running a complete, out-of-the-box low-power IPv6 communication stack (UDP/RPL/6LoWPAN). Our experimental results demonstrate increased resilience to emulated WiFi interference (e.g., data yield kept above 90% when ContikiMAC drops in the 40% range). In noiseless environments, MiCMAC keeps the overhead low in comparison to ContikiMAC, achieving performance as high as 99% data yield along with sub-percent duty cycle and sub-second latency for a 1-minute inter-packet interval data collection.
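The core mechanism of a distributed channel-hopping MAC of the kind described above can be sketched as follows. This is an illustrative toy, not the actual MiCMAC implementation: each node derives a deterministic pseudo-random hop sequence from its own identity, so a sender can predict which channel a receiver will wake up on. The hop count, mixing constants, and function names are all assumptions for illustration.

```python
# Illustrative sketch of distributed channel hopping for low-power
# listening (hypothetical constants; not the actual MiCMAC code).

IEEE802154_CHANNELS = list(range(11, 27))  # 16 channels in the 2.4 GHz band

def hop_sequence(node_id: int, n_channels: int = 4, seed: int = 0x5A):
    """Deterministic pseudo-random hopping sequence derived from a node id.
    Any neighbour that knows node_id can recompute the same sequence."""
    channels = IEEE802154_CHANNELS[:n_channels]
    state = (node_id * 2654435761 + seed) & 0xFFFFFFFF
    # Permute the channel subset using a per-node mixing key.
    return sorted(channels, key=lambda c: (state ^ (c * 40503)) & 0xFFFF)

def channel_at(node_id: int, slot: int, n_channels: int = 4) -> int:
    """Channel a node listens on during a given wake-up slot."""
    seq = hop_sequence(node_id, n_channels)
    return seq[slot % len(seq)]
```

Because the sequence is a permutation of the channel subset, a sender that has lost synchronization can still reach the receiver by strobing on one channel for a full hop period.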
Medium access control for wireless sensor networks has been a very active research area for the past couple of years. The sensor networks literature presents an alphabet soup of medium access control protocols, with almost all of the works focusing only on energy efficiency. There is much more innovative work to be done at the MAC layer, but current efforts are not addressing the hard unsolved problems. The majority of the works appearing in the literature are "least publishable incremental improvements" over the popular S-MAC [1] protocol. In this paper we present research directions for future medium access control research. We identify some open issues and discuss possible solutions.
Smart home environments have a significant potential to provide for long-term monitoring of users with special needs in order to promote the possibility to age at home. Such environments are typically equipped with a number of heterogeneous sensors that monitor both health and environmental parameters. This paper presents a framework called E-care@home, consisting of an IoT infrastructure, which provides information with an unambiguous, shared meaning across IoT devices, end-users, relatives, health and care professionals and organizations. We focus on integrating measurements gathered from heterogeneous sources by using ontologies in order to enable semantic interpretation of events and context awareness. Activities are deduced using an incremental answer set solver for stream reasoning. The paper demonstrates the proposed framework using an instantiation of a smart environment that is able to perform context recognition based on the activities and the events occurring in the home.
Energy is one of the most important resources in wireless sensor networks. We use an idealized mathematical model to study the impact of routing on energy consumption. Our results are very general and, within the assumptions listed in Section 2, apply to arbitrary topologies, routings and radio energy models. We find bounds on the minimal and maximal energy routings will consume, and use them to bound the lifetime of the network. The bounds are sharp, and can be achieved in many situations of interest. We illustrate the theory with some examples.
Energy is one of the most important resources in wireless sensor networks. We use an idealized mathematical model to study the energy consumption under all possible routings. Our results are very general and, within the assumptions listed in Section 2, apply to arbitrary topologies, routings and radio energy models. We find bounds on the minimal and maximal energy routings will consume, and use them to bound the lifetime of the network. The bounds are sharp, and we show that they are achievable in many situations of interest. We give some examples, and apply the theory to the problem of covering a given square region with the most efficient member of a family of increasingly more dense square-lattice sensor networks. Finally, we use simulations to test these results in a more realistic scenario, where packet loss can occur.
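The lifetime-bounding idea in the two abstracts above can be illustrated numerically: a routing induces a per-node energy load, and the network lifetime is bounded by the battery capacity divided by the load of the busiest node. All topology and energy figures below are hypothetical placeholders, not the paper's model parameters.

```python
# Toy illustration: per-node energy induced by a routing, and the
# resulting lifetime bound. Energy costs and routes are hypothetical.

def per_node_energy(routes, e_tx=2.0, e_rx=1.0):
    """routes: list of node-id paths (source ... sink), one per packet.
    Each hop charges e_tx to the transmitter and e_rx to the receiver."""
    load = {}
    for path in routes:
        for a, b in zip(path, path[1:]):
            load[a] = load.get(a, 0.0) + e_tx
            load[b] = load.get(b, 0.0) + e_rx
    return load

def lifetime_rounds(routes, battery=100.0, e_tx=2.0, e_rx=1.0):
    """Rounds until the busiest node drains its battery, assuming the
    same routing is repeated every round."""
    load = per_node_energy(routes, e_tx, e_rx)
    return battery / max(load.values())
```

With two sources routing through a shared relay, the relay pays both receive and transmit costs for every packet, so it dominates the bound.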
The Internet of Things (IoT) consists of resource-constrained devices (e.g., sensors and actuators) which form low power and lossy networks to connect to the Internet. With billions of devices deployed in various environments, IoT is one of the main building blocks of the future Internet of Services (IoS). Limited power, processing, storage and radio dictate extremely efficient usage of these resources to achieve high reliability and availability in IoS. Denial of Service (DoS) and Distributed DoS (DDoS) attacks aim to misuse the resources and cause interruptions, delays, losses and degrade the offered services in IoT. DoS attacks are clearly threats to the availability and reliability of IoT, and thus of IoS. For highly reliable and available IoS, such attacks have to be prevented, detected or mitigated autonomously. In this study, we present a comprehensive investigation of Internet of Things security for reliable Internet of Services. We review the characteristics of IoT environments, cryptography-based security mechanisms and D/DoS attacks targeting IoT networks. In addition to these, we extensively analyze the intrusion detection and mitigation mechanisms proposed for IoT and evaluate them from various points of view. Lastly, we consider and discuss the open issues yet to be researched for more reliable and available IoT and IoS. © The Author(s) 2018.
We present the experimental evaluation of different security mechanisms applied to persistent state in intermittent computing. Whenever executions become intermittent because of energy scarcity, systems employ persistent state on non-volatile memories (NVMs) to ensure forward progress of applications. Persistent state spans operating system and network stack, as well as applications. While a device is off recharging energy buffers, persistent state on NVMs may be subject to security threats such as stealing sensitive information or tampering with configuration data, which may ultimately corrupt the device state and render the system unusable. Based on modern platforms of the Cortex M* series, we experimentally investigate the impact on typical intermittent computing workloads of different means to protect persistent state, including software and hardware implementations of staple encryption algorithms and the use of ARM TrustZone protection mechanisms. Our results indicate that i) software implementations bear a significant overhead in energy and time, sometimes harming forward progress, but also retaining the advantage of modularity and easier updates; ii) hardware implementations offer much lower overhead compared to their software counterparts, but require a deeper understanding of their internals to gauge their applicability in given application scenarios; and iii) TrustZone shows almost negligible overhead, yet it requires a different memory management and is only effective as long as attackers cannot directly access the NVMs.
This work explores high data rate microwave communication through fat tissue in order to address the wide bandwidth requirements of intra-body area networks. We have designed and carried out experiments on an IEEE 802.15.4-based WBAN prototype by measuring the performance of the fat tissue channel in terms of data packet reception with respect to tissue length and transmission power. This paper proposes and demonstrates a high data rate communication channel through fat tissue using phantom and ex-vivo environments. Here, we achieve a data packet reception of approximately 96% in both environments. The results also show that the received signal strength drops by ~1 dB per 10 mm in phantom and ~2 dB per 10 mm in ex-vivo. The phantom and ex-vivo experiments validated our approach for high data rate communication through fat tissue for intra-body network applications. The proposed method opens up new opportunities for further research in fat channel communication. This study will contribute to the successful development of high bandwidth wireless intra-body networks that support high data rate implanted, ingested, injected, or worn devices.
This paper presents numerical modeling and experimental validation of the signal path loss at the 5.8 GHz Industrial, Scientific, and Medical (ISM) band, performed in the context of fat-intrabody communication (fat-IBC), a novel intrabody communication platform that uses the body-omnipresent fat tissue as the key wave-guiding medium. This work extends our previous studies at 2.0 and 2.4 GHz by characterizing the performance of fat-IBC in another useful frequency range. In addition, this paper includes studies of both static and dynamic human body movements. To provide a more comprehensive characterization of the communication performance at this frequency, this work focuses on investigating the path loss for different configurations of fat tissue thickness, antenna polarizations, and locations in the fat channel. We bring more realism to the experimental validation by using excised tissues from porcine cadavers, as their fat and muscle tissues have electromagnetic characteristics closer to those of human tissue than current state-of-the-art artificial phantom models. Moreover, for favorable signal excitation and reception in the fat-IBC model, we used topology-optimized waveguide probes. These probes provide an almost flat response in the frequency range from 3.2 to 7.1 GHz, which is wider than that of previous probes and improves the evaluation of the performance of the fat-IBC model. We also discuss various aspects of real-world scenarios by examining different models, particularly homogeneous multilayered skin, fat, and muscle tissue. To study the effect of dynamic body movements, we examine the impact of misalignment, both in space and in wave polarization, between implanted nodes. We show in particular that the use of fat-IBC techniques can be extended up in frequency to a broadband channel at 5.8 GHz.
The potential offered by intra-body communication (IBC) over the past few years has resulted in a spike of interest in the topic, specifically for medical applications. Fat-IBC is a novel alternative technique that utilizes fat tissue as a communication channel. This work aimed to characterize this transmission medium and its performance in varying blood-vessel systems at 2.45 GHz, particularly in the context of IBC and medical applications. It incorporated three-dimensional (3D) electromagnetic simulations and laboratory investigations that implemented models of blood vessels of varying orientations, sizes, and positions. These investigations were undertaken using ex-vivo porcine tissues and three blood-vessel system configurations. The configurations represent extreme cases of real-life scenarios that sufficiently elucidate their principal influence on the transmission. The blood-vessel models consisted of ex-vivo muscle tissues and copper rods. The results showed that blood vessels crossing the channel vertically contributed 5.1 dB and 17.1 dB signal losses for muscle and copper rods, respectively, which is the worst-case scenario in the context of a fat channel with perturbance. In contrast, blood vessels aligned longitudinally in the channel have less effect and yielded 4.5 dB and 4.2 dB signal losses for muscle and copper rods, respectively. Meanwhile, blood vessels crossing the channel horizontally displayed 3.4 dB and 1.9 dB signal losses for muscle and copper rods, respectively, which were the smallest losses among the configurations. The laboratory investigations were in agreement with the simulations. Thus, this work substantiated the variability of fat-IBC signal transmission in the context of varying blood-vessel configurations.
The reliability of intra-body wireless communication systems is very important in medical applications to ensure the data transmission between implanted devices. In this paper, we present newly developed measurements to investigate the effect of blood vessels on data packet reception through the fat tissue. We use an IEEE 802.15.4-based WBAN prototype to measure the packet reception rate (PRR) through a tissue-equivalent phantom model. The blood vessels are modelled using copper rods. We measure the PRR at 2.45 GHz for several power levels. The results reveal that the presence of blood vessels aligned with the fat channel has only a minor influence on the PRR when measured over the -25 dBm to 0 dBm power range and for different blood-vessel positions. Our investigations show 97% successful PRR through a 10 cm fat channel in the presence of the blood vessels.
In this paper, we investigate the use of fat tissue as a communication channel between in-body, implanted devices at R-band frequencies (1.7-2.6 GHz). The proposed fat channel is based on an anatomical model of the human body. We propose a novel probe that is optimized to efficiently radiate the R-band frequencies into the fat tissue. We use our probe to evaluate the path loss of the fat channel by studying the channel transmission coefficient over the R-band frequencies. We conduct extensive simulation studies and validate our results by experimentation on phantom and ex-vivo porcine tissue, with good agreement between simulations and experiments. We demonstrate a performance comparison between the fat channel and similar waveguide structures. Our characterization of the fat channel reveals propagation path loss of ∼0.7 dB and ∼1.9 dB per cm for phantom and ex-vivo porcine tissue, respectively. These results demonstrate that fat tissue can be used as a communication channel for high data rate intra-body networks.
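A quick link-budget calculation shows what the reported per-centimetre path-loss figures (∼0.7 dB/cm in phantom, ∼1.9 dB/cm in ex-vivo porcine tissue) imply for channel length. The transmit power and receiver sensitivity below are hypothetical values chosen for illustration, not numbers from the paper.

```python
# Back-of-the-envelope link budget for the fat channel, using the
# per-cm path-loss figures reported in the abstract. Transmit power
# and sensitivity are hypothetical.

PATH_LOSS_DB_PER_CM = {"phantom": 0.7, "ex_vivo": 1.9}

def received_power_dbm(p_tx_dbm: float, distance_cm: float, medium: str) -> float:
    """Received power under a linear dB/cm attenuation model."""
    return p_tx_dbm - PATH_LOSS_DB_PER_CM[medium] * distance_cm

def max_range_cm(p_tx_dbm: float, sensitivity_dbm: float, medium: str) -> float:
    """Largest channel length that keeps the received power above the
    receiver sensitivity, under the same linear model."""
    return (p_tx_dbm - sensitivity_dbm) / PATH_LOSS_DB_PER_CM[medium]
```

Under these assumptions, a 0 dBm transmitter and a -85 dBm receiver would span tens of centimetres even in the lossier ex-vivo case.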
In this paper, we present an approach for communication through human body tissue in the R-band frequency range. This study examines the ranges of microwave frequencies suitable for intra-body communication. The human body tissues are characterized with respect to their transmission properties using simulation modeling and phantom measurements. The variations in signal coupling with respect to different tissue thicknesses are studied. The simulation and phantom measurement results show that electromagnetic communication in the fat layer is viable with attenuation of approximately 2 dB per 20 mm.
This study aims to investigate the reliability of intra-body microwave propagation through the fat tissue in the presence of blood vessels. Here, we consider three types of blood vessels with different sizes. We investigate the impact of the number of blood vessels and their alignment on the transmission of microwave signals through the fat channel. In our study, we employ two probes that act as a transmitter and a receiver. The probes are designed to operate in the Industrial, Scientific, and Medical radio band (2.45 GHz). For a channel length of 100 mm, our results indicate that the presence of the blood vessels may increase the channel path loss by ∼1.5 dB and ∼4.5 dB when the vessels are aligned with and orthogonal to the fat channel, respectively.
Recently, human fat tissue has been proposed as a microwave channel for intra-body sensor applications. In this work, we assess how disturbances can prevent reliable microwave propagation through the fat channel. Perturbants of different sizes are considered. The simulation and experimental results show that efficient communication through the fat channel is possible even in the presence of perturbants such as embedded muscle layers and blood vessels. We show that the communication channel is not affected by perturbants smaller than a 15 mm cube.
In recent studies, it has been found that fat tissue can be used as a microwave communication channel. In this article, the effect of thickness inhomogeneities in fat tissue on the performance of in-body microwave communication at 2.45 GHz is investigated using phantom models. We considered two models, namely concave and convex geometrical fat distributions, to account for the thickness inhomogeneities. The thickness of the fat tissue is varied from 5 mm to 45 mm, and the gap between the transmitter/receiver and the start and end of the concavity/convexity is varied from 0 mm to 25 mm over a length of 100 mm, to study the behavior of the microwave propagation. Phantoms of different geometries, concave and convex, are used in this work to validate the numerical studies. It was noticed that the convex model exhibited higher signal coupling by an amount of 1 dB (simulation) and 2 dB (measurement) compared to the concave model. From the study, it was observed that the signal transmission improves up to a fat thickness of 30 mm and reaches a plateau when the thickness is increased further.
Many Wireless Sensor Networks (WSNs) are used to collect and process confidential information. Confidentiality must be ensured at all times and, for example, solutions for confidential communication, processing or storage are required. To date, the research community has addressed mainly the issue of confidential communication. Efficient solutions for cryptographically secured communication and associated key exchange in WSNs exist. Many WSN applications, however, rely heavily on available on-node storage space and therefore it is essential to ensure the confidentiality of stored data as well. In this paper we present Codo, a confidential data storage solution which balances platform, performance and security requirements. We implement Codo for the Contiki WSN operating system and evaluate its performance.
The future Internet of Things (IoT) may be based on the existing and established Internet Protocol (IP). Many IoT application scenarios will handle sensitive data. However, as security requirements for storage and communication are addressed separately, work such as key management or cryptographic processing is duplicated. In this paper we present a framework that allows us to combine secure storage and secure communication in the IP-based IoT. We show how data can be stored securely such that it can be delivered securely upon request without further cryptographic processing. Our prototype implementation shows that combined secure storage and communication can reduce the security-related processing on nodes by up to 71% and energy consumption by up to 32.1%.
Comprehensive security mechanisms are required for a successful implementation of the Internet of Things (IoT). Existing solutions focus mainly on securing the communication links between Internet hosts and IoT devices. However, as most IoT devices nowadays provide vast amounts of flash storage space, it is as well required to consider storage security within a comprehensive security framework. Instead of developing independent security solutions for storage and communication, we propose Fusion, a framework that provides coalesced confidential storage and communication. Fusion uses existing secure communication protocols for the IoT such as Internet protocol security (IPsec) and datagram transport layer security (DTLS) and re-uses the defined communication security mechanisms within the storage component. Thus, trusted mechanisms developed for communication security are extended into the storage space. Notably, this mechanism allows us to transmit requested data directly from the file system without decrypting read data blocks and then re-encrypting these for transmission. Thus, Fusion provides benefits in terms of processing speed and energy efficiency, which are important aspects for resource-constrained IoT devices. This paper describes the Fusion architecture and its instantiation for IPsec-based and DTLS-based systems. We describe Fusion's implementation and evaluate its storage overheads, communication performance, and energy consumption.
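The "serve stored ciphertext directly" idea behind combined secure storage and communication can be sketched in a few lines. This is a didactic toy: the SHA-256 counter-mode keystream below is a stand-in construction, not the IPsec/DTLS machinery Fusion re-uses, and none of the function names come from the paper.

```python
# Toy illustration of encrypt-once storage: data is encrypted at write
# time, and the stored ciphertext is handed to the transport as-is,
# avoiding the decrypt/re-encrypt cycle on the node. The keystream
# construction is didactic, not a vetted cipher.

import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """SHA-256 in counter mode as a toy keystream generator."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def store(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt once at write time; the flash holds only ciphertext."""
    return xor(plaintext, keystream(key, nonce, len(plaintext)))

def serve(stored_ciphertext: bytes) -> bytes:
    """Transmission path: the stored block goes out unchanged, with no
    decryption and re-encryption on the constrained node."""
    return stored_ciphertext
```

The receiver, holding the same key and nonce, recovers the plaintext; the node itself never touches the plaintext after the initial write.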
We study the effect of interference on localization algorithms through the study of the interference effect on signal features that are used for localization. Particularly, the effect of interference on packet-based Received Signal Strength Indicator (RSSI), reported by IEEE 802.11 and IEEE 802.15.4 technologies, and on Time of Flight (ToF), reported by IEEE 802.15.4 technology, is studied using both theoretical discussions and experimental verifications. As for the RSSI values, using an information theoretic formulation, we distinguish three operational regimes and we show that the RSSI values, in dBm, remain unchanged in the noise-limited regime, increase almost linearly with interference power in dBm in the interference-limited regime and cannot be obtained due to packet loss in the collision regime. The maximum observable RSSI variation is dependent on the transmission rate and Signal to Noise Ratio (SNR). We also show that ToF is, interestingly, decreased under interference, an effect caused by the symbol synchronization procedure at the receiver. After providing the experimental results, we discuss how localization algorithms are affected by interference.
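The noise-limited and interference-limited RSSI regimes described above follow from modelling RSSI as total in-band power. The numerical sketch below uses hypothetical power levels to show that, in dBm, RSSI stays flat while noise dominates and tracks the interference power once interference dominates.

```python
# Numerical illustration of the RSSI regimes: RSSI modelled as the total
# received power (signal + interference + noise). Power values are
# hypothetical.

import math

def dbm_to_mw(p_dbm: float) -> float:
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw: float) -> float:
    return 10 * math.log10(p_mw)

def rssi_dbm(signal_dbm: float, interference_dbm: float,
             noise_dbm: float = -95.0) -> float:
    """RSSI as total in-band power, summed in linear units."""
    total = (dbm_to_mw(signal_dbm)
             + dbm_to_mw(interference_dbm)
             + dbm_to_mw(noise_dbm))
    return mw_to_dbm(total)
```

With a -60 dBm signal, interference at -120 dBm leaves the RSSI essentially unchanged (noise-limited), while interference at -40 dBm pulls the RSSI up to roughly the interference power itself (interference-limited).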
Radio interference may lead to packet losses, thus negatively affecting the performance of sensornet applications. In this paper, we experimentally assess the impact of external interference on state-of-the-art sensornet MAC protocols. Our experiments illustrate that specific features of existing protocols, e.g., hand-shaking schemes preceding the actual data transmission, play a critical role in this setting. We leverage these results by identifying mechanisms to improve the robustness of existing MAC protocols under interference. These mechanisms include the use of multiple hand-shaking attempts coupled with packet trains and suitable congestion backoff schemes to better tolerate interference. We embed these mechanisms within an existing X-MAC implementation and show that they considerably improve the packet delivery rate while keeping the power consumption at a moderate level.
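The benefit of the "multiple hand-shaking attempts" mechanism mentioned above can be quantified with a simple independence assumption: if one strobe/ACK exchange succeeds with probability p under interference, n independent attempts succeed with probability 1 - (1 - p)^n. The numbers are illustrative, not measurements from the paper.

```python
# Quick calculation behind retried handshakes under interference,
# assuming independent attempts (an idealization).

import math

def delivery_probability(p_single: float, attempts: int) -> float:
    """Probability that at least one of `attempts` exchanges succeeds."""
    return 1.0 - (1.0 - p_single) ** attempts

def attempts_needed(p_single: float, target: float) -> int:
    """Minimum number of attempts to reach a target delivery probability."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))
```

Even with a 50% single-attempt success rate, seven attempts suffice for 99% delivery under this model, which is why packet trains plus backoff can keep the delivery rate high at moderate energy cost.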
Temperature is known to have a significant effect on the performance of radio transceivers: the higher the temperature, the lower the quality of links. Analysing this effect is particularly important in sensor networks because several applications are exposed to harsh environmental conditions. Daily or hourly changes in temperature can dramatically reduce the throughput, increase the delay, or even lead to network partitions. A few studies have quantified the impact of temperature on low-power wireless links, but only for a limited temperature range and on a single radio transceiver. Building on top of these preliminary observations, we design a low-cost experimental infrastructure to vary the on-board temperature of sensor nodes in a repeatable fashion, and we study systematically the impact of temperature on various sensornet platforms. We show that temperature affects transmitting and receiving nodes differently, and that all platforms follow a similar trend that can be captured in a simple first-order model. This work represents an initial stepping stone aimed at predicting the performance of a network considering the particular temperature profile of a given environment.
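A first-order model of the kind the abstract alludes to can be written as a linear correction to a reference measurement. The coefficient and reference values below are hypothetical placeholders, not the fitted parameters from the paper.

```python
# Sketch of a first-order temperature model for link quality:
# signal strength degrades roughly linearly as on-board temperature
# rises. alpha_db_per_c and the reference values are hypothetical.

def rssi_at_temperature(rssi_ref_dbm: float, temp_c: float,
                        temp_ref_c: float = 25.0,
                        alpha_db_per_c: float = 0.1) -> float:
    """Predicted RSSI at temp_c, given a reference measurement taken
    at temp_ref_c and a per-degree attenuation coefficient."""
    return rssi_ref_dbm - alpha_db_per_c * (temp_c - temp_ref_c)
```

Such a model lets a deployment planner translate a site's temperature profile into an expected link-margin reduction before installing nodes.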
Wireless low-power transceivers used in sensor networks typically operate in unlicensed frequency bands that are subject to external radio interference caused by devices transmitting at much higher power. Communication protocols should therefore be designed to be robust against such interference. A critical building block of many protocols at all layers is agreement on a piece of information among a set of nodes. At the MAC layer, nodes may need to agree on a new time slot or frequency channel; at the application layer, nodes may need to agree on handing over a leader role from one node to another. Message loss caused by interference may break agreement in two different ways: none of the nodes uses the new information (time slot, channel, leader) and all stick with the previous assignment, or, even worse, some nodes use the new information and some do not. This may lead to reduced performance or failures. In this paper, we investigate the problem of agreement under external radio interference and point out the limitations of traditional message-based approaches. We propose JAG, a novel protocol that uses jamming instead of message transmissions to make sure that two neighbouring nodes agree, and show that it outperforms message-based approaches in terms of agreement probability, energy consumption, and time-to-completion. We further show that JAG can be used to obtain performance guarantees and meet the requirements of applications with real-time constraints.
New Internet of Things (IoT) technologies such as Long Range (LoRa) are emerging which enable power-efficient wireless communication over very long distances. Devices typically communicate directly to a sink node, which removes the need of constructing and maintaining a complex multi-hop network. Given the fact that a wide area is covered and that all devices communicate directly to a few sink nodes, a large number of nodes have to share the communication medium. LoRa provides for this reason a range of communication options (centre frequency, spreading factor, bandwidth, coding rates) from which a transmitter can choose. Many combination settings are orthogonal and provide simultaneous collision-free communications. Nevertheless, there is a limit regarding the number of transmitters a LoRa system can support. In this paper we investigate the capacity limits of LoRa networks. Using experiments we develop models describing LoRa communication behaviour. We use these models to parameterise a LoRa simulation to study scalability. Our experiments show that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments. LoRa networks can scale quite well, however, if they use dynamic communication parameter selection and/or multiple sinks.
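The capacity reasoning above hinges on frame time-on-air: the longer each frame occupies the channel, the fewer transmitters a cell supports at a given reporting rate. The standard Semtech SX127x time-on-air formula captures how the spreading factor, bandwidth, and coding rate trade off; the payload size and settings below are illustrative, not the paper's experimental configuration.

```python
# LoRa time-on-air (Semtech SX127x formula). Longer airtime per frame
# means fewer transmitters per cell at a given duty cycle.

import math

def lora_airtime_s(payload_bytes: int, sf: int = 7, bw_hz: int = 125_000,
                   cr: int = 1, preamble_syms: int = 8, crc: bool = True,
                   implicit_header: bool = False,
                   low_dr_optimize: bool = False) -> float:
    """Time on air in seconds. cr=1 means coding rate 4/5 ... cr=4 means 4/8."""
    t_sym = (2 ** sf) / bw_hz                      # symbol duration
    de = 1 if low_dr_optimize else 0
    ih = 1 if implicit_header else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25) * t_sym + n_payload * t_sym
```

For a 20-byte payload at SF7/125 kHz the frame lasts about 57 ms, while SF12 stretches the same payload to over a second (low-data-rate optimization should be enabled at SF11/SF12 with 125 kHz in real deployments), which is exactly why parameter selection matters for scalability.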