IoT systems are increasingly composed of flexible, programmable, virtualised, and arbitrarily chained IoT elements and services using portable code. Moreover, they might be sliced, i.e. allow multiple logical IoT systems (network + application) to run on top of a shared physical network and compute infrastructure. However, designing and implementing security mechanisms in particular for such IoT systems is challenging since a) promising technologies are still maturing, and b) the relationships among the many requirements, technologies and components are difficult to model a priori.
The aim of the paper is to define design cues for the security architecture and mechanisms of future, virtualised, arbitrarily chained, and eventually sliced IoT systems. Our focus lies on the authorisation and authentication of users and hosts and on code integrity in these virtualised systems. The design cues are derived from the design and implementation of a secure virtual environment for distributed and collaborative AI system engineering using so-called AI pipelines. The pipelines apply chained virtual elements and services and facilitate the slicing of the system. The virtual environment is denoted for short as the virtual premise (VP). The use case of the VP for AI design provides insight into the complex interactions in the architecture, leading us to believe that the VP concept can be generalised to the IoT systems mentioned above. In addition, the use case permits us to derive, implement, and test solutions. This paper describes the flexible architecture of the VP and the design and implementation of access and execution control in virtual and containerised environments.
The use of data is essential for the capabilities of Data-driven Artificial Intelligence (AI), Deep Learning and Big Data analysis techniques. This data usage, however, intrinsically raises concerns about data privacy. In addition, supporting collaborative development of AI applications across organisations has become a major need in AI system design. Digital Rights Management (DRM) is required to protect intellectual property in such collaboration. As a consequence of DRM, privacy threats and privacy-enforcing mechanisms will interact with each other.
This paper describes the privacy and DRM requirements in collaborative AI system design using AI pipelines. It describes the relationships between DRM and privacy and outlines the threats against these non-functional features. Finally, the paper provides a first security architecture to protect against the threats on DRM and privacy in collaborative AI design using AI pipelines.
The use of data is essential for the capabilities of Data-driven Artificial Intelligence (AI), Deep Learning and Big Data analysis techniques. The use of data, however, intrinsically raises the concern of data privacy, in particular for the individuals that provide the data. Hence, data privacy is considered one of the main non-functional features of the Next Generation Internet. This paper describes the privacy challenges and requirements for collaborative AI application development. We investigate the constraints of using digital rights management for supporting collaboration to address the privacy requirements in the regulation.
Cloudified architectures facilitate resource access and sharing independent of physical locations. They permit high availability of resources at low operational costs. These advantages, however, do not come for free. End users might fear that they lose control over the location of their data and, thus, over their autonomy in deciding to whom the data is communicated. Thus, strong privacy and trust concerns arise for end users. In this work we review and investigate privacy and trust requirements for Cloud systems in general and for a cloud-based marketplace (CMP) for AI in particular. We investigate whether and how the current privacy and trust dimensions can be applied to Clouds and to the design of a CMP. We also propose the concept of a "virtual premise" for enabling "Privacy-by-Design" [1] in Clouds. The idea of a "virtual premise" might not be a universal solution for every privacy requirement. However, we expect that it provides flexibility in designing privacy in Clouds and thus leads to higher trust.
The processing of the huge amounts of information from the Internet of Things (IoT) has become challenging. Artificial Intelligence (AI) techniques have been developed to handle this task efficiently. However, they require annotated data sets for training, while manual preprocessing of the data sets is costly. The H2020 project “Bonseyes” has suggested a “Market Place for AI”, where the stakeholders can engage trustfully in business around AI resources and data sets. For the sake of generality, the Market Place permits trading of resources that have high privacy requirements (e.g. data sets containing patient medical information) as well as ones with low requirements (e.g. fuel consumption of cars). In this abstract we review trust and privacy definitions and provide a first requirement analysis for them with regard to Cloud-based Market Places (CMPs). The comparison of definitions and requirements allows for the identification of the research gap that will be addressed by the main author's PhD project.
Long Term Evolution (LTE) represents an emerging and promising technology for providing broadband, mobile Internet access. Because of the limited available spectrum resources, high spectrum efficiency technologies such as channel-aware scheduling need to be explored. In this work, we evaluate the performance of three scheduling algorithms proposed for LTE downlink transmission. The evaluation takes place in mixed traffic scenarios and aims at exploring strengths and weaknesses of the proposed algorithms. Simulation results illustrate the importance of real-time traffic awareness by schedulers when a specified level of quality of service is required. The research shows that a lack of prioritisation of multimedia traffic will lead to severe degradation of video and VoIP services even at a relatively low network load.
Internet video sharing services have been gaining importance and increasing their share of the multimedia market. In order to compete effectively and provide a level of quality comparable to broadcast television, Internet video should fulfil stringent quality of service (QoS) constraints. However, as Internet video is based on packet transmission, it is influenced by delays, transmission errors, data losses and bandwidth limitations, which can have a devastating influence on the perceived quality of the multimedia content. There are many works which describe the impact of network impairments on Internet video. Nevertheless, little is known about how network conditions influence the video streamed by currently popular services such as YouTube, where video is transmitted over reliable TCP/HTTP protocols. Therefore, using a network simulator, we conducted an experimental evaluation of HTTP-based video transmission, analysing how the network impairments mentioned above influence the streamed video. The experiments were validated against a network emulator supplied with real network traces. As a result of this work, we can state that the buffering strategies implemented by a video player are in many cases able to mitigate unfavourable network conditions, which allows the streamed video to play smoothly. The results may serve Internet Service Providers so that they can tune their network characteristics in order to match the demand from HTTP video.
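The mitigating effect of client-side buffering can be illustrated with a simple playout model. The following is a minimal sketch with assumed, illustrative parameters (download trace, playback bitrate, initial buffering threshold); it is not the player logic examined in the study.

```python
# Minimal playout-buffer sketch: a player downloads at a varying network rate,
# starts playback once an initial buffer threshold is reached, and stalls
# (re-buffers) whenever the buffer runs empty. All numbers are illustrative.

def simulate_playout(download_kbps, video_kbps=800, start_buffer_s=2.0, step_s=1.0):
    buffer_s = 0.0          # seconds of video currently buffered
    playing = False
    stalls = 0
    for rate in download_kbps:                     # one entry per time step
        buffer_s += (rate / video_kbps) * step_s   # seconds of video downloaded
        if not playing and buffer_s >= start_buffer_s:
            playing = True                         # initial buffering finished
        if playing:
            buffer_s -= step_s                     # video consumed by playback
            if buffer_s < 0:
                buffer_s = 0.0
                playing = False                    # buffer underrun -> stall
                stalls += 1
    return stalls

# A short throughput drop in the middle of the trace is absorbed by the buffer:
trace = [1200] * 10 + [200] * 3 + [1200] * 10      # kbit/s per second
print("stalls:", simulate_playout(trace))          # prints 0 for this trace
```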
In this paper we present a novel framework supporting distributed network management using a self-organizing peer-to-peer overlay network. The overlay consists of several Distributed Network Agents which can perform distributed tests and distributed monitoring for fault and performance management. In that way, the concept is able to overcome disadvantages that come along with a central management unit, like lack of scalability and reliability. So far, little attention has been paid to the quality of service experienced by the end user. Our self-organizing management overlay provides a reliable and scalable basis for distributed tests that incorporate the end user. The use of distributed, self-organizing software will also reduce capital and operational expenditures of the operator since fewer entities have to be installed and operated.
N/A
The Simple Network Management Protocol, SNMP, is the most widespread standard for Internet management. As SNMP stacks are available on most equipment, this protocol has to be considered when it comes to performance management, traffic engineering and network control. However, especially when using the predominant version 1, SNMPv1, special care has to be taken to avoid erroneous results when calculating bit rates. In this work, we evaluate six off-the-shelf network components. We demonstrate that bit rate measurements can be completely misleading if the sample intervals that are used are either too large or too small. We present solutions and work-arounds for these problems. The devices are evaluated with regard to their updating and response behavior.
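As a rough illustration of the pitfall, the sketch below derives a bit rate from two readings of a 32-bit octet counter (e.g. ifInOctets). The values and interval are assumed; a real measurement would additionally have to respect the device's internal counter-update period, which is exactly what makes overly small or large sample intervals misleading.

```python
# Bit-rate estimation from two SNMP octet-counter samples.
# 32-bit counters (e.g. ifInOctets) wrap around at 2**32, so a naive
# difference can become negative; we correct for a single wrap here.
# If the sample interval is so large that the counter wraps more than once,
# or so small that it undercuts the device's internal update period,
# the computed rate is misleading -- the effect studied in the paper.

COUNTER_MOD = 2 ** 32   # SNMPv1/v2 Counter32

def bit_rate(octets_t1, octets_t2, interval_s):
    """Average bit rate between two counter readings taken interval_s apart."""
    delta = (octets_t2 - octets_t1) % COUNTER_MOD   # handles one wrap-around
    return delta * 8 / interval_s                   # octets -> bits per second

# Example: the counter wrapped once between the two polls (illustrative values).
print(bit_rate(4_294_960_000, 10_000, interval_s=30))   # ~4.6 kbit/s
```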
The demand for large-scale diverse datasets is rapidly increasing due to the advancements in AI services impacting day-to-day life. However, gathering such massive datasets still remains a critical challenge in the AI service engineering pipeline, especially in the computer vision domain where labeled data is scarce. Rather than isolated data collection, crowdsourcing techniques have shown promising potential to achieve the data collection task in a time- and cost-efficient manner. In the existing crowdsourcing marketplaces, the crowd works to fulfill consumer-defined requirements, where in the end the consumer gains the data ownership and the crowd is compensated with task-based payment. On the contrary, this work proposes a blockchain-based decentralized marketplace named Vision Sovereignty Data Marketplace (ViSDM), in which the crowd works to fulfill global requirements and holds data ownership, the consumers pay a certain data price to perform a computing task (model training/testing), and the data price is distributed among the crowd in a one-to-many manner through smart contracts, thus allowing the crowd to gain profit from each consumer transaction occurring on their data. The marketplace is implemented as multiple smart contracts and is evaluated based on blockchain transaction gas fees for the stakeholder interactions and by running scenario-based simulations. Furthermore, discussions address the challenges involved in maintaining data quality and the future milestones towards deployment.
The size and diversity of the training datasets directly influence the decision-making process of AI models. Therefore, there is an immense need for massive and diverse datasets to enhance the deployment process of AI applications. Crowdsourcing marketplaces provide a fast and reliable alternative to the laborious data collection process. However, the existing crowdsourcing marketplaces are either centralized or do not fully provide data sovereignty. By contrast, this work proposes a decentralized crowdsourcing platform, realized through a prototypical implementation with active involvement of business entities, that grants users sovereignty over their collected data, named the Vision-Sovereignty Data Marketplace (ViSDM). This work contributes to the data marketplaces landscape by introducing (i) a liquid democracy-based voting system to negotiate prices between a buyer and multiple data owners, and (ii) an automated AI-based per-sample value calculation function to evaluate the data and distribute profit among the data owners.
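One possible reading of the one-to-many profit distribution is sketched below in plain Python (not the actual smart-contract code); the per-sample value scores and owner names are hypothetical placeholders.

```python
# Sketch of one-to-many profit distribution: a consumer pays a data price for
# a computing task, and the payment is split among data owners in proportion
# to the (AI-estimated) value of the samples they contributed.
# Value scores and owner names are purely illustrative.

def distribute_payment(data_price, sample_values):
    """sample_values: mapping owner -> summed per-sample value of their data."""
    total_value = sum(sample_values.values())
    return {owner: data_price * value / total_value
            for owner, value in sample_values.items()}

# Example: three owners, with scores as produced by some valuation function.
payout = distribute_payment(100.0, {"owner_a": 5.0, "owner_b": 3.0, "owner_c": 2.0})
print(payout)   # {'owner_a': 50.0, 'owner_b': 30.0, 'owner_c': 20.0}
```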
Based on the need for distributed end-to-end quality management for next-generation mobile Internet services, this paper presents a ready-to-deploy quality assessment concept for the impact of the network on the performance of mobile services. We consider the Throughput Utility Function (TUF) as a special case of the Network Utility Function (NUF). These functions combine the observed network utility at the inlet and the outlet of a mobile network. NUF and TUF capture the damping effect of the network onto user-perceived quality from an end-to-end perspective. As opposed to sometimes hard-to-evaluate QoS parameters such as delay and loss, the NUF is highly intuitive due to its mapping to a simple value between 0 and 100%, which reflects user perception. We demonstrate the capabilities of the proposed TUF by measurements of application-perceived throughput conducted in mobile networks, i.e. GPRS and UMTS.
Based on the need for distributed end-to-end quality management for next generation Internet services, this paper presents a ready-to-deploy quality assessment concept for the impact of the network on the service performance. The proposed Network Utility Function (NUF) combines the observed network utility at the inlet and the outlet. Thus, it captures the damping effect of the network onto user-perceived quality from an end-to-end perspective. As opposed to incomprehensible QoS parameters such as delay and loss, the NUF is highly intuitive due to its mapping to a simple value between 0 and 1. We demonstrate the capabilities of the proposed concept for a special NUF, the Throughput Utility Function (TUF) by realistic simulation.
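One plausible way to read the "combination of inlet and outlet utility" described above is as a ratio of the utility observed behind the network to the utility offered in front of it; the following is a sketch of that idea with generic symbols, not the exact definition from the papers.

```latex
% Sketch of the damping idea behind the (Throughput) Utility Function:
% the end-to-end utility relates what the network delivers at the outlet
% to what was offered at the inlet, yielding a value in [0, 1]
% (equivalently 0..100 %). R denotes application-perceived throughput.
U_{\mathrm{net}} \;=\; \frac{U_{\mathrm{outlet}}}{U_{\mathrm{inlet}}},
\qquad
U_{\mathrm{TUF}} \;=\; \frac{R_{\mathrm{outlet}}}{R_{\mathrm{inlet}}} \;\in\; [0,1].
```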
Network performance management is facing the challenge of provisioning advanced services with stringent delay and throughput requirements. For this reason, shortages of network capacity implying delay or loss, so-called bottlenecks, have to be identified and classified. The latter tasks imply the need for tractable analytical performance models. We identify the stochastic fluid flow model, which is based on bit rates and their statistics, as a possible candidate for describing the qualitative behaviour of bottlenecks. In this work, we show how total and individual bit rate statistics at the output of a bottleneck are calculated via the stochastic fluid flow model. From this, we deduce some general behaviours and classification criteria for bottlenecks.
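In the fluid-flow view, a bottleneck of capacity C simply caps the instantaneous rate while its buffer absorbs the excess; a minimal sketch of this relation, using generic symbols rather than the paper's notation, is given below.

```latex
% Fluid-flow sketch of a bottleneck of capacity C with buffer content X(t):
% the buffer grows at the excess input rate, and the output rate is capped at C.
\frac{\mathrm{d}X(t)}{\mathrm{d}t} = r_{\mathrm{in}}(t) - C \quad \text{while } X(t) > 0,
\qquad
r_{\mathrm{out}}(t) =
\begin{cases}
\min\bigl(r_{\mathrm{in}}(t),\, C\bigr), & X(t) = 0,\\[2pt]
C, & X(t) > 0.
\end{cases}
```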
To be able to satisfy their users, interactive applications like video conferences require a certain Quality-of-Service from heterogeneous networks. This paper proposes the use of throughput histograms as Quality-of-Service indicator. These histograms are built from local, unsynchronized, passive measurements of packet streams from the viewpoint of an application. They can easily be exchanged between sender and receiver, and their comparison provides information about severity and type of a potential bottleneck. We demonstrate the usefulness of these indicators for evaluating the transport quality perceived by a video conferencing application and its users in the presence of a bottleneck.
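A minimal sketch of the indicator, with assumed interval length and packet format, could look as follows; the comparison metric (a simple histogram distance) is illustrative, not the one used in the paper.

```python
# Build a throughput histogram from locally observed packets: group packet
# sizes into fixed time intervals, compute the per-interval throughput, and
# count how often each throughput bin occurs. Comparing the sender-side and
# receiver-side histograms hints at the severity and type of a bottleneck.

from collections import Counter

def throughput_histogram(packets, interval_s=0.1, bin_kbps=100):
    """packets: iterable of (timestamp_s, size_bytes) observed at one endpoint."""
    per_interval = Counter()
    for ts, size in packets:
        per_interval[int(ts / interval_s)] += size * 8      # bits per interval
    hist = Counter()
    for bits in per_interval.values():
        kbps = bits / interval_s / 1000
        hist[int(kbps / bin_kbps)] += 1                      # throughput bin
    return hist

def histogram_distance(h_send, h_recv):
    """Illustrative comparison: total variation distance between histograms."""
    bins = set(h_send) | set(h_recv)
    n_s, n_r = sum(h_send.values()), sum(h_recv.values())
    return 0.5 * sum(abs(h_send[b] / n_s - h_recv[b] / n_r) for b in bins)
```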
We are pleased to welcome you to the 3rd symposium dedicated to PhD students which is taking place during the 8th IEEE NFV-SDN conference. The NFV-SDN Doctoral Symposium is committed to fostering collaboration amongst PhD students and experts from all communities researching and working in the areas of Network Function Virtualization (NFV) and Software Defined Networks (SDN). It offers a unique opportunity for PhD students to present their latest research results, discuss new research ideas, and to gather valuable expert feedback on their work from experienced researchers from both industry and academia. Moreover, it is a place for mentoring and to get in touch with student peers working in the same field.
The fifth generation (5G) mobile network brings significant new capacity and opportunity to network operators while also creating new challenges and additional pressure to build and operate networks differently. The transformation to 5G mobile networks creates the opportunity to virtualize significant portions of the radio access (RAN) and network core, allowing operators to better compete with over-the-top and hyperscaler offerings. This book covers the business and technical areas of virtualization that enable the transformation and innovation that today’s operators are seeking. It identifies forward-looking gaps where the technology continues to develop, specifically packet acceleration and timing requirements, which today are still not fully virtualized. The book shows you the operational and support considerations, development and lifecycle management, business implications, and vendor-team dynamics involved in deploying a virtualized network. Packed with key concepts of virtualization that solve a broad array of problems, this is an essential reference for those entering this technical domain, those that are going to build and operate these networks, and those that are seeking to learn more about the telecom network. It illustrates why you just can’t do it all in the cloud today.
The 2022 IEEE NFV-SDN conference has committed itself to being an accelerator of the continuous exchange of the latest ideas, developments and results between all ecosystem partners from academia and industry. The conference highly encourages discussion of new approaches as well as dedicated work on missing aspects for the improvement of NFV and SDN enabling architectures, algorithms, frameworks and the operation of virtualized network functions, including container-based functions, and infrastructures. In addition to the latest NFV and SDN concepts and mechanisms, the 2022 IEEE NFV-SDN conference will focus on performance results as well as enhancements related to the use of machine learning in NFV and SDN networks, and improvements to data plane programmability.
Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
Voice-over-IP (VoIP) telephony is becoming more and more popular in the wired Internet because of easy-to-use applications with high sound quality like Skype. UMTS operators promise to offer large data rates which should also make VoIP possible in a mobile environment. However, the success of those applications strongly depends on the user-perceived voice quality. In this paper, we therefore analyze the achievable and the actual quality of IP-based telephony calls using Skype. This is done by performing measurements in both a real UMTS network and a test environment. The latter is used to emulate rate control mechanisms and changing system conditions of UMTS networks. The results show whether Skype over UMTS is able to keep pace with existing mobile telephony systems and how it reacts to different network characteristics. The investigated performance measures comprise the Perceptual Evaluation of Speech Quality (PESQ) to evaluate the voice quality, and the packet loss, the inter-packet delay, and the throughput to capture network-based factors. In this context, the concept of the Network Utility Function (NUF) is applied to describe the impact of the network on the voice quality as perceived by the end user.
Given the growing importance of quantitative relationships between user-perceived Quality of Experience (QoE) and network Quality of Service (QoS), this paper investigates the IQX hypothesis for two voice codecs, iLBC and G.711. This hypothesis expresses QoE as an exponential function of QoS degradation. The experiments are carried out in a controlled environment using the softphone SJPhone, the network emulator NIST Net, and a tool calculating the PESQ (Perceptual Evaluation of Speech Quality) from sent and received audio files. The IQX hypothesis is confirmed exactly for disturbances perceived at the application level, packet loss and packet reordering, which clearly correlate with the main sensitivities of the used softphone to packet-level disturbances such as loss, jitter and reordering. So, besides providing a unified relationship between QoE and QoS, the IQX also proves capable of identifying the QoS parameters of relevance for QoE degradations. The study also points out interesting tracks for future work in terms of QoS degradations and related QoE evaluations.
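The exponential relationship underlying the IQX hypothesis can be written with fitted parameters (the concrete values of alpha, beta and gamma depend on the codec and the impairment type); the second relation captures the intuition that the sensitivity of QoE decreases with the QoE level itself.

```latex
% IQX hypothesis: QoE decays exponentially with the QoS impairment x
% (e.g. packet loss or reordering ratio); alpha, beta, gamma are fitted
% per codec and impairment type.
\mathit{QoE}(x) \;=\; \alpha \, e^{-\beta x} + \gamma ,
\qquad
\frac{\partial \mathit{QoE}}{\partial x} \;=\; -\beta \bigl(\mathit{QoE} - \gamma\bigr).
```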
Virtual Network Embedding will be one of the key concepts of the Future Internet. For an ISP it is important to know how many additional Virtual Networks (VNs) of a specific application (e.g. web, streaming, P2P, and VoIP) are mappable into the current resource substrate with a certain probability. In this work we calculate this probability with our embedding algorithm which enables us to consider side effects based on remapping of VNs (e.g. due to reduced link delay). Our results show that minimal extra resources can significantly increase embedding probability of additional VNs.
The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project will be focused on using artificial intelligence in low power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It will bring about orders of magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality problem for organizations who lack access to Data and Models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness, technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. Bonseyes platform capabilities are aimed at being aligned with the European FI-PPP activities and take advantage of its flagship project FIWARE. This paper provides a description of the project motivation, goals and preliminary work.
Different types of software components and data have to be combined to solve an artificial intelligence challenge. An emerging marketplace for these components will allow for their exchange and distribution. To facilitate and boost the collaboration on the marketplace a solution for finding compatible artifacts is needed. We propose a concept to define compatibility on such a marketplace and suggest appropriate scenarios on how users can interact with it to support the different types of required compatibility. We also propose an initial architecture that derives from and implements the compatibility principles and makes the scenarios feasible. We matured our concept in focus group workshops and interviews with potential marketplace users from industry and academia. The results demonstrate the applicability of the concept in a real-world scenario.
This paper takes an exploratory look at control plane signaling in a mobile cellular core network. In contrast to most contributions in this field, our focus does not lie on the wireless or user-oriented parts of the network, but on signaling in the core network. In an investigation of core network data we take a look at statistics related to GTP tunnels and their signaling. Based on the results thereof, we propose a definition of load at the GGSN and create an initial load queuing model. We find signs of user devices putting burden on the core network through their behavior.
Modern computing devices, including smartphones, laptops, and tablet computers, are equipped with a number of sensors and data sources, ranging from accelerometers to 3G radio information. But different platforms (e.g. Android or iOS) use different interfaces for sensor access. There is no common way to access this data. Privacy is another issue hardly tackled on any platform, other than in a crude allow-or-deny way.
Today's virtualization technologies are manifold and a comparison is hardly achievable. Thus, a metric independent of the virtualization technology is required to compare different systems. The metric is measurable in a passive way, hence no artificial traffic has to be generated and the virtualization system does not need to be modified. It evaluates the throughput of events on different time slices. This methodology, in contrast to existing jitter evaluations, enables the identification of critical timescales in the virtualization system. In this demonstration a proof-of-concept for a performance metric for NFV elements on multiple timescales is presented. In a reduced environment consisting of a single virtual router host, the influences of hardware resource sharing and other impact factors (e.g. CPU, memory or disk load) are made visible. The demonstration gives an example of a performance decrease on smaller timescales, which cannot be identified by a common throughput measurement over time. Thus, the presented metric enables the identification of critical system conditions and can be used to optimize the scheduling of NFV, to compare different virtualization technologies, or to grade the performance for specific applications.
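The core of the metric, evaluating event throughput over several time-slice lengths, can be sketched as follows; the window sizes and event format are assumed, not taken from the demonstration.

```python
# Multi-timescale throughput sketch: the same event trace (timestamps with
# sizes) is aggregated over several window lengths. A long window may show a
# healthy average rate while a short window reveals intervals in which the
# virtualised element forwarded nothing at all.

from collections import defaultdict

def per_window_rates(events, window_s):
    """events: iterable of (timestamp_s, size_bytes); returns bit/s per window."""
    buckets = defaultdict(int)
    last = 0
    for ts, size in events:
        idx = int(ts / window_s)
        buckets[idx] += size * 8
        last = max(last, idx)
    # include empty windows up to the last observed one, so idle gaps show up
    return [buckets[i] / window_s for i in range(last + 1)]

def multi_timescale_report(events, windows=(1.0, 0.1, 0.01)):
    events = list(events)
    report = {}
    for w in windows:
        rates = per_window_rates(events, w)
        report[w] = {"min_bps": min(rates), "mean_bps": sum(rates) / len(rates)}
    return report

# The minimum rate on small windows exposes short interruptions that the
# average over large windows hides.
```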
During the last decade, we have witnessed a rapid development of extended reality (XR) technologies such as augmented reality (AR) and virtual reality (VR). Further, there have been tremendous advancements in artificial intelligence (AI) and machine learning (ML). These two trends will have a significant impact on future digital societies. The vision of an immersive, ubiquitous, and intelligent virtual space opens up new opportunities for creating an enhanced digital world in which the users are at the center of the development process, so-called intelligent realities (IRs). The “Human-Centered Intelligent Realities” (HINTS) profile project will develop concepts, principles, methods, algorithms, and tools for human-centered IRs, thus leading the way for future immersive, user-aware, and intelligent interactive digital environments. The HINTS project is centered around an ecosystem combining XR and communication paradigms to form novel intelligent digital systems. HINTS will provide users with new ways to understand, collaborate with, and control digital systems. These novel ways will be based on visual and data-driven platforms which enable tangible, immersive cognitive interactions within real and virtual realities, thus exploiting digital systems in a more efficient, effective, engaging, and resource-aware manner. Moreover, the systems will be equipped with cognitive features based on AI and ML, which allow users to engage with digital realities and data in novel forms. This paper describes the HINTS profile project and its initial results.
Renewable energy sources were introduced as an alternative to fossil fuel sources to make electricity generation cleaner. However, today's renewable energy markets face a number of limitations, such as inflexible pricing models and inaccurate consumption information. These limitations can be addressed with a decentralized marketplace architecture. Such architecture requires a mechanism to guarantee that all marketplace operations are executed according to predefined rules and regulations. One of the ways to establish such a mechanism is blockchain technology. This work defines a decentralized blockchain-based peer-to-peer (P2P) energy marketplace which addresses actors' privacy and the performance of consensus mechanisms. The defined marketplace utilizes private permissioned Ethereum-based blockchain client Hyperledger Besu (HB) and its smart contracts to automate the P2P trade settlement process. Also, to make the marketplace compliant with energy trade regulations, it includes the regulator actor, which manages the issue and consumption of guarantees of origin and certifies the renewable energy sources used to generate traded electricity. Finally, the proposed marketplace incorporates privacy-preserving features, allowing it to generate private transactions and store them within a designated group of actors. Performance evaluation results of HB-based marketplace with three main consensus mechanisms for private networks, i.e., Clique, IBFT 2.0, and QBFT, demonstrate a lower throughput than another popular private permissioned blockchain platform Hyperledger Fabric (HF). However, the lower throughput is a side effect of the Byzantine Fault Tolerant characteristics of HB's consensus mechanisms, i.e., IBFT 2.0 and QBFT, which provide increased security compared to HF's Crash Fault Tolerant consensus RAFT.
This work defines a decentralized blockchain-based peer-to-peer (P2P) energy marketplace which addresses actors' privacy and the performance of consensus mechanisms. The defined marketplace utilizes the private permissioned Ethereum-based blockchain client Hyperledger Besu (HB) and its smart contracts to automate the P2P trade settlement process. Also, to make the marketplace compliant with energy trade regulations, it includes the regulator actor, which manages the issue and generation of guarantees of origin and certifies the renewable energy sources used to generate traded electricity. Finally, the proposed marketplace incorporates privacy-preserving features, allowing it to generate private transactions and store them within a designated group of actors. Performance evaluation results of the HB-based marketplace with three main consensus mechanisms for private networks, i.e., Clique, IBFT 2.0, and QBFT, demonstrate a lower throughput than another popular private permissioned blockchain platform, Hyperledger Fabric (HF). However, the lower throughput is a side effect of the Byzantine Fault Tolerant characteristics of HB's consensus mechanisms, i.e., IBFT 2.0 and QBFT, which provide increased security compared to HF's Crash Fault Tolerant consensus RAFT.
Renewable energy sources are becoming increasingly important as a substitute for fossil energy production. However, distributed renewable energy production faces several challenges regarding trading and management, such as inflexible pricing models and inaccurate green consumption information. A decentralized peer-to-peer (P2P) electricity marketplace may address these challenges. It enables prosumers to market their self-produced electricity. However, such a marketplace needs to guarantee that the transactions follow market rules and government regulations, cannot be manipulated, and are consistent with the generated electricity. One of the ways to provide these guarantees is to leverage blockchain technology.
This work describes a decentralized blockchain-based P2P energy marketplace addressing privacy, trust, and governance issues. It uses a private permissioned blockchain Hyperledger Fabric (HF) and its smart contracts to perform energy trading settlements. The suggested P2P marketplace includes a particular regulator actor acting as a governmental representative overseeing marketplace operations. In this way, the suggested P2P marketplace can address the governance issues needed in electricity marketplaces. Further, the proposed marketplace ensures actors’ data privacy by employing HF’s private data collections while preserving the integrity and auditability of all operations. We present an in-depth performance evaluation and provide insights into the security and privacy challenges emerging from such a marketplace. The results demonstrate that partial centralization by the applied regulator does not limit the P2P energy trade settlement execution. Blockchain technology allows for automated marketplace operations enabling better incentives for prosumer electricity production. Finally, the suggested marketplace preserves the user’s privacy when P2P energy trade settlements are conducted.
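The settlement flow can be illustrated in plain Python (the actual implementation uses Hyperledger Fabric smart contracts; the matching rule, names and regulator check below are simplified placeholders).

```python
# Simplified P2P trade settlement sketch (plain Python, not HF chaincode):
# offers from prosumers and bids from consumers are matched, each settlement
# is appended to a ledger-like log, and a regulator check stands in for the
# guarantee-of-origin / regulatory validation performed in the marketplace.

def regulator_approves(offer):
    # Placeholder: in the marketplace, the regulator certifies that the
    # offered energy comes from a registered renewable source.
    return offer.get("certified_renewable", False)

def settle(offers, bids, ledger):
    """offers/bids: lists of dicts with 'actor', 'kwh', 'price'; ledger: list."""
    for offer in sorted(offers, key=lambda o: o["price"]):   # cheapest first
        for bid in bids:
            if bid["kwh"] <= 0 or offer["kwh"] <= 0:
                continue
            if bid["price"] >= offer["price"] and regulator_approves(offer):
                amount = min(offer["kwh"], bid["kwh"])
                offer["kwh"] -= amount
                bid["kwh"] -= amount
                ledger.append({"seller": offer["actor"], "buyer": bid["actor"],
                               "kwh": amount, "price": offer["price"]})
    return ledger
```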
The energy distribution infrastructure is a vital part of any modern society. Thus, renewable energy sources are becoming increasingly important as a substitute for energy produced with fossil fuels. However, renewable energy production faces several challenges in the energy market and its management, such as inflexible pricing models and inaccurate green consumption information. A decentralized electricity marketplace may address these challenges. However, such a platform must guarantee that the transactions follow the market rules and regulations, cannot be manipulated, and are consistent with the energy generated. One of the ways to provide these guarantees is to leverage blockchain technology. Our previous studies demonstrate that the current energy trade regulations result in partial marketplace centralization around a governmental authority. The governmental authority, i.e., the regulator, oversees marketplace operations and requires energy providers to share private data about electricity generation and energy trade settlement. This study proposes amendments to the D2018/2001 legislation and the governmental regulator actor to improve marketplace flexibility and data privacy. Further, we propose a new blockchain-based P2P energy marketplace model with increased flexibility and scalability while addressing actors' privacy and trust requirements. The marketplace utilizes the private permissioned blockchain Hyperledger Fabric (HF) due to its privacy-preserving and trust-enabling capabilities. This study provides a comparison of HF with its Ethereum-based competitor Hyperledger Besu (HB). Further, based on the identified advantages and limitations, we discuss the rationale for the choice of HF. We utilize HF's smart contracts to enable P2P energy trade settlement orchestration and management. Based on previous studies, we propose an improvement to HF security by utilizing a Byzantine Fault Tolerant (BFT) consensus mechanism, which is protected against malicious system actors. The results demonstrate that while protecting the blockchain network from malicious system actors, the BFT mechanism shows a similar throughput to the RAFT Crash Fault Tolerant consensus in the context of the P2P energy marketplace. Finally, BFT consensus enables legislation enhancements, resulting in increased flexibility and data privacy in the energy trade marketplace.
When connected to a network, applications and devices are exposed to constant security risks. This puts pressure on hardware and software vendors to test, even more thoroughly than before, how secure applications and devices are before they are released to customers.
We have worked towards defining and developing a framework for automated security testbeds. Testbeds comprise both the ability to build on-demand virtual isolated networks that emulate corporate networks, as well as the ability to automate security breach scenarios, which accelerates the testing process. In order to accomplish both features of the testbed, we have based the framework on well-established cloud and orchestration technologies, e.g., OpenStack and Ansible. Although many of these technologies are powerful, they are also complex, leading to a steep learning curve for new users. Thus, one of the main goals of the developed framework is to hide the underlying complexities through a template approach and a simplified user interface that shortens the initial training time.
In this paper, we present the full stack of technologies that were used for constructing the testbed framework. The framework allows us to create entire virtual networks and to manipulate network devices started in it, via comprehensive yet simple interfaces. Also, we describe a specific testbed solution, developed as a part of the Test Arena Blekinge project.
Service Chains have developed into an important concept in service provisioning in today’s and future Clouds. Cloud systems, e.g., Amazon Web Services (AWS), permit the implementation and deployment of new applications, services and service chains rapidly and flexibly. They employ the idea of Infrastructure as Code (IaC), which is the process of managing and provisioning computing infrastructure and its configuration through machine-processable definition files.
In this paper, we first detail future service chains with particular focus on Network Function Virtualization (NFV) and machine learning in AI. Afterwards, we analyze and summarize the capabilities of today's IaC tools for orchestrating Cloud infrastructures and service chains. We compare the functionality of the five major IaC tools: Puppet, Chef, SaltStack, Ansible, and Terraform. In addition, we demonstrate how to analyze the functional capabilities of one of the tools. Finally, we give an outlook on future research issues on using IaC tools across multiple operators, data center domains, and different stakeholders that collaborate on service chains.
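To make the IaC idea concrete, a declarative definition of a small service chain and an idempotent "apply" step can be sketched as follows; the format, image names and resource fields are invented for illustration, and real tools such as Ansible or Terraform use their own definition-file formats.

```python
# Infrastructure-as-Code sketch: the desired service chain is described as
# data, and an apply step converges the running state towards it. Re-running
# apply on an unchanged definition changes nothing (idempotence), which is
# the property the surveyed IaC tools are built around.

desired_chain = [
    {"name": "ingress-lb",    "image": "haproxy:2.8",  "replicas": 2},
    {"name": "inference-svc", "image": "model:latest", "replicas": 3},
    {"name": "results-store", "image": "redis:7",      "replicas": 1},
]

def apply(desired, running):
    """running: mapping name -> currently deployed definition (mutated in place)."""
    for service in desired:
        current = running.get(service["name"])
        if current != service:
            print(f"(re)deploying {service['name']}")
            running[service["name"]] = dict(service)
    for name in list(running):
        if not any(s["name"] == name for s in desired):
            print(f"removing {name}")
            del running[name]

state = {}
apply(desired_chain, state)   # first run deploys everything
apply(desired_chain, state)   # second run is a no-op
```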
In this paper, we investigate how to design the security architecture of a Platform-as-a-Service (PaaS) solution, denoted as the Secure Virtual Premise (SVP), for collaborative and distributed AI engineering using AI artifacts and Machine Learning (ML) pipelines. Artifacts are re-usable software objects which are a) tradeable in marketplaces, b) implemented by containers, c) offer AI functions as microservices, and d) can form service chains, denoted as AI pipelines. Collaborative engineering is facilitated by the trading and (re-)use of artifacts, thus accelerating AI application design.
The security architecture of the SVP is built around the security needs of collaborative AI engineering and uses a proxy concept for microservices. The proxy shields the AI artifact and pipelines from outside adversaries as well as from misbehaving users, thus building trust among the collaborating parties. We identify the security needs of collaborative AI engineering, derive the security challenges, outline the SVP’s architecture, and describe its security capabilities and its implementation, which is currently in use with several AI developer communities. Furthermore, we evaluate the SVP’s Technology Readiness Level (TRL) with regard to collaborative AI engineering and data security.
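A minimal sketch of the proxy idea could look as follows; the token table, policy and upstream call are hypothetical placeholders illustrating the concept, not the SVP implementation.

```python
# Sketch of the proxy concept: every request to an AI artifact passes through
# a proxy that authenticates the caller and authorises the requested action
# before forwarding it to the containerised microservice. Token format,
# policy, and the upstream call are hypothetical placeholders.

VALID_TOKENS = {"token-alice": "alice", "token-bob": "bob"}
POLICY = {"alice": {"infer", "inspect"}, "bob": {"infer"}}

class Rejected(Exception):
    pass

def proxy(request, call_upstream):
    """request: dict with 'token', 'action', 'payload'; call_upstream: callable."""
    user = VALID_TOKENS.get(request.get("token"))
    if user is None:
        raise Rejected("authentication failed")          # outside adversary
    if request["action"] not in POLICY.get(user, set()):
        raise Rejected("action not permitted for user")  # misbehaving user
    # Only authenticated, authorised requests reach the shielded artifact.
    return call_upstream(request["action"], request["payload"])

# Example upstream stub standing in for the containerised AI microservice.
result = proxy({"token": "token-alice", "action": "infer", "payload": [1, 2, 3]},
               lambda action, payload: {"action": action, "result": sum(payload)})
```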
Digital marketplaces were created recently to accelerate the delivery of applications and services to customers. Their appealing feature is to activate and dynamize the demand, supply, and development of digital goods, applications, or services. By being an intermediary between producer and consumer, the primary business model for a marketplace is to charge the producer with a commission on the amount paid by the consumer. However, most of the time, the commission is dictated by the marketplace facilitator itself and creates an imbalance in value distribution, where producer and consumer sides suffer monetarily. In order to eliminate the need for a centralized entity between the producer and consumer, a blockchain-based decentralized digital marketplace concept was introduced. It provides marketplace actors with the tools to perform business transactions in a trusted manner and without the need for an intermediary. In this work, we provide a survey on Telecommunication Services Marketplaces (TSMs) which employ blockchain technology as the main trust enabling entity in order to avoid any intermediaries. We provide an overview of scientific and industrial proposals on the blockchain-based online digital marketplaces at large, and TSMs in particular. We consider in this study the notion of telecommunication services as any service enabling the capability for information transfer and, increasingly, information processing provided to a group of users by a telecommunications system. We discuss the main standardization activities around the concepts of TSMs and provide particular use-cases for the TSM business transactions such as SLA settlement. Also, we provide insights into the main foundational services provided by the TSM, as well as a survey of the scientific and industrial proposals for such services. Finally, a prospect for future developments is given.
Peer-to-peer file sharing applications have evolved into one of the major traffic sources in the Internet. In particular, the eDonkey file sharing system and its derivatives are causing high amounts of traffic volume in today's networks. The eDonkey system is typically used for exchanging very large files like audio/video CDs or even DVD images. In this report we provide a measurement-based traffic profile of the eDonkey service. Furthermore, we discuss how this type of service increases the "mice and elephants" phenomenon in the Internet traffic characteristics.
The rapid adoption of networks that are based on "cloudification" and Network Function Virtualisation (NFV) comes from the anticipated high cost savings of up to 70% in their build and operation. The high savings are founded in the use of general standard servers, instead of single-purpose hardware, and in efficient resource sharing through virtualisation concepts. In this paper, we discuss the capabilities of resource description of "on-board" tools, i.e. using standard Linux commands, to enable OPEX savings. We put a focus on monitoring resources on small time-scales and on the variation observed on such scales. We introduce a QoE-based comparative concept that relates guest and host views on "utilisation" and "load" for the analysis of the variations. We describe the order of variations in "utilisation" and "load" by measurement and by graphical analysis of the measurements. We do these evaluations for different host operating systems and monitoring tools.
Cloud Networking (CN) and related concepts offer appealing novelties to Cloud Computing (CC) customers. They can do one-stop shopping for network-enhanced cloud services. In addition, the costs of such services might be low due to multiple customers sharing the infrastructures. Moreover, telecommunication network operators are adopting CN in their Network Functions Virtualisation (NFV) framework for reducing costs and increasing the flexibility of their networks. The technical appeal of CN comes from the tight integration of CC and smart networks. The economical attractiveness results from avoiding dedicated hardware, sharing of resources, and simplified resource management (RM) as seen by the users respectively by the applications. The vision of cheap and integrated CN services is obviously attractive, but it is also evident that it will require more complex RM procedures for efficiently balancing the usage of all resources. In this contribution, we suggest an initial architecture for integrated and practical RM in CN and NFV systems. The RM concept aims at locating and analysing performance bottlenecks, efficiency problems, and eventually discovering unused resources. The suggested architecture is based on a layered view of the system. Moreover, we detail difficulties in practical resource usage monitoring which, in turn, define requirements for an RM architecture. The requirement analysis is based on measurements in a CN infrastructure.
In this paper we will investigate why and how Network Virtualization (NV) can overcome the shortfalls of the current system and how it paves the way for the Future Internet. Therefore, we will first discuss some major deficiencies and achievements of today’s Internet. Afterwards, we identify three major building blocks of NV: a) the use of application-specific routing overlays, b) the safe consolidation of resources by OS virtualization on a generic infrastructure, and c) the exploitation of the network diversity for performance enhancements and for new business models, such as the provisioning of intermediate nodes or path oracles. Subsequently, we discuss an implementation scheme for network virtualization or routing overlays based on one-hop source routers (OSRs). The capabilities of the combination of NV and OSRs are demonstrated by a concurrent multipath transmission (CMP) mechanism (also known as stripping) for obtaining high throughput transmission pipes. The suggested stripping mechanism constitutes a first instance of a refinement of the concept of NV, the idea of transport system virtualization.
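The stripping idea, distributing one transport stream over several overlay paths in proportion to their capacities, can be sketched as follows; the paths, capacities and the weighted scheduling rule are illustrative, not the mechanism evaluated in the paper.

```python
# Concurrent multipath transmission ("stripping") sketch: packets of one
# logical stream are scheduled onto several one-hop overlay paths in
# proportion to the paths' capacities, so the aggregate pipe approaches the
# sum of the individual capacities. Paths and capacities are illustrative.

def weighted_schedule(num_packets, path_capacity_mbps):
    """Assign packet indices to paths using a credit-based weighted scheme."""
    total = sum(path_capacity_mbps.values())
    credit = {path: 0.0 for path in path_capacity_mbps}
    assignment = {path: [] for path in path_capacity_mbps}
    for pkt in range(num_packets):
        for path, cap in path_capacity_mbps.items():
            credit[path] += cap / total          # earn credit per round
        best = max(credit, key=credit.get)       # send on the path with most credit
        credit[best] -= 1.0
        assignment[best].append(pkt)
    return assignment

paths = {"path_a": 6.0, "path_b": 3.0, "path_c": 1.0}
shares = {p: len(v) for p, v in weighted_schedule(1000, paths).items()}
print(shares)   # roughly 600 / 300 / 100 packets
```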
The explosive growth of the Internet has fundamentally changed the global society. The emergence of concepts like service-oriented architecture (SOA), Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), Network as a Service (NaaS) and Cloud Computing in general has catalyzed the migration from the information-oriented Internet into an Internet of Services (IoS). This has opened up virtually unbounded possibilities for the creation of new and innovative services that facilitate business processes and improve the quality of life. However, this also calls for new approaches to ensuring quality and reliability of these services. The goal of this book chapter is to first analyze the state-of-the-art in the area of autonomous control for a reliable IoS and then to identify the main research challenges within it. A general background and high-level description of the current state of knowledge is presented. Then, for each of the three subareas, namely the autonomous management and real-time control, methods and tools for monitoring and service prediction, and smart pricing and competition in multi-domain systems, a brief general introduction and background are presented, and a list of key research challenges is formulated.
On behalf of the Organizing Committee, it is a pleasure to welcome you to the IEEE International Conference on Fog and Mobile Edge Computing (FMEC’2024) in Malmö, Sweden, September 2-5, 2024. FMEC’2024 aims at providing a continuing, sustained and global forum for disseminating the latest scientific research and industry results using large-scale data analytics over edge/fog deployment architectures. Its scope covers Edge Intelligence (EI), its applications, and current and emerging technologies. Centered around edge AI-enabled services, FMEC’24 covers various research on the edge-to-cloud continuum, edge security, data privacy, communication efficiency, and edge application resilience.