Battery storage is emerging as a key component of intelligent green electricity systems. The battery is monetized through market participation, which usually involves bidding. Bidding is a multi-objective optimization problem, involving targets such as maximizing market compensation and minimizing both penalties for failing to provide the service and the cost of battery aging. In this article, battery participation in primary frequency reserve markets is investigated, and reinforcement learning (RL) is applied for the optimization. In previous research, only simplified formulations of battery aging have been used in the reinforcement learning formulation, so it is unclear how the optimizer would perform with a real battery. In this article, a physics-based battery aging model is used to assess the aging. The contribution of this article is a methodology involving a realistic battery simulation to assess the performance of the trained RL agent with respect to battery aging, in order to inform the selection of the weighting of the aging term in the RL reward formula. The RL agent performs day-ahead bidding on the Finnish Frequency Containment Reserves for Normal Operation market, with the objective of maximizing market compensation while minimizing market penalties and aging costs.
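As an illustrative sketch only (not the article's exact formulation), a per-step reward combining the three stated objectives might take the following shape, where `w_aging` is the aging-term weight whose selection the proposed methodology is meant to inform; the function name and arguments are hypothetical:

```python
def reward(compensation, penalty, aging_cost, w_aging=0.5):
    """Hypothetical single-step RL reward for battery bidding.

    compensation: market income earned in the step
    penalty:      market penalty for failing to deliver the service
    aging_cost:   battery degradation cost estimated for the step
    w_aging:      weighting of the aging term (the tuning knob the
                  article's simulation methodology helps select)
    """
    return compensation - penalty - w_aging * aging_cost
```

Raising `w_aging` makes the agent trade market income for battery longevity, which is exactly the trade-off the realistic aging simulation is used to evaluate.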
Battery storage systems are an essential element of the emerging smart grid. Compared to other distributed intelligent energy resources, batteries have the advantage of being able to react rapidly to events such as renewable generation fluctuations or grid disturbances, yet there is a lack of research on ways to exploit this ability profitably. Any solution needs to consider rapid electrical phenomena as well as the much slower dynamics of the relevant electricity markets. Reinforcement learning is a branch of artificial intelligence that has shown promise in optimizing complex problems involving uncertainty. This article applies reinforcement learning to the problem of trading batteries. The problem involves two timescales, both of which are important for profitability. Firstly, trading the battery capacity must occur on the timescale of the chosen electricity markets. Secondly, the real-time operation of the battery must ensure that no financial penalties are incurred from failing to meet the technical specification. The trading-related decisions must be made under uncertainties, such as unknown future market prices and unpredictable power grid disturbances. In this article, a simulation model of a battery system is proposed as the environment in which to train a reinforcement learning agent to make such decisions. The system is demonstrated with an application of the battery to the Finnish primary frequency reserve markets.
The popularity of network virtualization has recently regained considerable momentum because of the emergence of OpenFlow technology. OpenFlow essentially decouples the data plane from the control plane and promotes hardware programmability; consequently, it facilitates the implementation of network virtualization. This study aims to provide an overview of different approaches to creating a virtual network using OpenFlow technology. The paper also presents the OpenFlow components in order to compare conventional network architecture with OpenFlow network architecture, particularly in terms of virtualization. A thematic OpenFlow network virtualization taxonomy is devised to categorize network virtualization approaches. Several testbeds that support OpenFlow network virtualization are discussed with case studies to show the capabilities of OpenFlow virtualization. Moreover, the advantages of popular OpenFlow controllers that are designed to enhance network virtualization are compared and analyzed. Finally, we present key research challenges that mainly focus on security, scalability, reliability, isolation, and monitoring in the OpenFlow virtual environment. Numerous potential directions to tackle the problems related to OpenFlow network virtualization are likewise discussed.
The use of medical images has been continuously increasing, which makes manual investigation of every image a difficult task. This study focuses on classifying brain magnetic resonance images (MRIs) as normal, where a brain tumor is absent, or as abnormal, where a brain tumor is present. A hybrid intelligent system for automatic brain tumor detection and MRI classification is proposed. This system assists radiologists in interpreting MRIs, improves brain tumor diagnostic accuracy, and directs the focus toward the abnormal images only. The proposed computer-aided diagnosis (CAD) system consists of five steps: MRI preprocessing to remove background noise, image segmentation by combining Otsu binarization and K-means clustering, feature extraction using the discrete wavelet transform (DWT), dimensionality reduction of the features by applying principal component analysis (PCA), and classification of the reduced features with a kernel support vector machine (KSVM). The performance evaluation of the proposed system measured a maximum classification accuracy of 100% on an available MRI database. The processing time for all steps was recorded as 1.23 seconds. The obtained results demonstrate the superiority of the proposed system.
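As an illustration of one stage of this pipeline, Otsu binarization selects the grayscale threshold that maximizes the between-class variance of the image histogram. A minimal pure-Python sketch (the function name and the histogram representation are our own; the abstract does not specify an implementation):

```python
def otsu_threshold(hist):
    """Return the threshold maximizing between-class variance (Otsu's method).

    hist: list where hist[i] is the number of pixels with intensity i.
    Pixels with intensity <= returned threshold form one class (background),
    the rest form the other (foreground).
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = 0.0      # background pixel count so far
    sum_bg = 0.0    # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg               # background mean
        m_fg = (sum_all - sum_bg) / w_fg   # foreground mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram, the returned threshold falls in the valley between the two peaks, which is what makes the method suitable for separating tumor regions from background in the segmentation step.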
Computer-aided diagnosis (CAD) systems have become very important in the medical diagnosis of brain tumors, improving diagnostic accuracy and reducing the required time. In this paper, a two-stage CAD system has been developed for the automatic detection and classification of brain tumors from magnetic resonance images (MRIs). In the first stage, the system classifies brain MRIs into normal and abnormal images. In the second stage, the tumor type in the abnormal MRIs is classified as benign (noncancerous) or malignant (cancerous). The proposed CAD combines the following computational methods: image segmentation by K-means clustering, feature extraction using the discrete wavelet transform (DWT), and feature reduction by applying principal component analysis (PCA). The two-stage classification is conducted using a support vector machine (SVM). Performance evaluation of the proposed CAD achieved promising results on a non-standard MRI database.
The successful early diagnosis of brain tumors plays a major role in improving the treatment outcomes and thus improving patient survival. Manually evaluating the numerous magnetic resonance imaging (MRI) images produced routinely in the clinic is a difficult process. Thus, there is a crucial need for computer-aided methods with better accuracy for early tumor diagnosis. Computer-aided brain tumor diagnosis from MRI images consists of tumor detection, segmentation, and classification processes. Over the past few years, many studies have focused on traditional or classical machine learning techniques for brain tumor diagnosis. Recently, interest has developed in using deep learning techniques for diagnosing brain tumors with better accuracy and robustness. This study presents a comprehensive review of traditional machine learning techniques and evolving deep learning techniques for brain tumor diagnosis. This review paper identifies the key achievements reflected in the performance measurement metrics of the applied algorithms in the three diagnosis processes. In addition, this study discusses the key findings and draws attention to the lessons learned as a roadmap for future research.
In order to facilitate data-driven solutions for the early detection of atrial fibrillation (AF), the 2017 CinC conference challenge was devoted to automatic AF classification based on short ECG recordings. The proposed solutions concentrated on maximizing the classifiers' F1 score, whereas the complexity of the classifiers was not considered. However, we argue that complexity must also be addressed, as it places restrictions on the applicability of inexpensive devices for AF monitoring outside hospitals. Therefore, this study investigates the feasibility of complexity reduction by analyzing one of the solutions presented for the challenge.
Wireless Sensor Networks (WSNs) have developed to the point where they can be used in agriculture to enable optimal irrigation scheduling. Since widely available methods to support effective agricultural practice under different weather conditions are lacking, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP-based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The fundamental performance characteristics of this network are emulated on Tmote Sky motes for 6LoWPAN over an IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator. The simulation results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. These results will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of sensor power consumption.
Wireless Sensor Networks (WSNs) make a remarkable contribution to real-time decision making by sensing and acting on their surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, for example irrigation scheduling based on moisture-level data sensed by the sensors. Since WSNs are deployed in constrained environments, sensor lifetime is crucial for the normal operation of the networks. In this regard, the routing protocol is a prime factor in prolonging sensor lifetime. This research focuses on the performance analysis of several clustering-based routing protocols in order to select the best one. Four algorithms are considered, namely Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP), and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in Matlab using the mathematical models of these algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CHs) per round, throughput, and average residual energy per node. The experimental results illustrate that TEEN provides a greater stable region and longer lifetime than the others, while SEP ensures higher throughput.
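For context, LEACH's cluster-head election uses a rotating probabilistic threshold: in round r, a node that has not recently served as cluster head becomes one if a uniform random draw falls below T(n) = p / (1 - p * (r mod 1/p)), where p is the desired fraction of cluster heads. A small sketch of this standard formulation (the function names are our own):

```python
import random

def leach_threshold(p, r):
    """Standard LEACH election threshold T(n) for round r.

    Applies to nodes that have not served as cluster head in the
    current epoch of 1/p rounds; the threshold rises toward 1 as the
    epoch progresses, so every node eventually takes a turn.
    """
    return p / (1 - p * (r % (1 / p)))

def elects_as_head(p, r, rng=random.random):
    """A node volunteers as cluster head if its draw falls below T(n)."""
    return rng() < leach_threshold(p, r)
```

This rotation is what spreads the energy-hungry cluster-head role across nodes, which is why the abstract's metrics (dead nodes per round, cluster heads per round, residual energy) are natural yardsticks for comparing such protocols.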
Because of the increased popularity and rapid expansion of the Internet and the Internet of Things, networks are growing rapidly in every corner of society. As a result, huge amounts of data travel across computer networks, exposing data integrity, confidentiality, and reliability to attack. Network security is therefore a pressing issue for preserving the integrity of systems and data. Traditional safeguards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalies with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating characteristic curve, accuracy, F-score, and model build time, this paper illustrates the performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute-evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
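The comparison metrics named above (apart from model build time and AUC) can all be derived from a binary confusion matrix, treating anomalous traffic as the positive class. A small illustrative helper, not tied to any particular WEKA model:

```python
def classifier_metrics(tp, fp, fn, tn):
    """Derive the study's comparison metrics from a binary confusion
    matrix (anomaly = positive class). Assumes no denominator is zero."""
    precision = tp / (tp + fp)                 # of flagged traffic, how much was truly anomalous
    recall = tp / (tp + fn)                    # of true anomalies, how many were caught
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_score = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f_score": f_score}
```

For intrusion detection, recall is often weighted more heavily than accuracy, since a missed anomaly (false negative) is usually costlier than a false alarm.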
In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all the decisions are made. The proposed control architecture overcomes problems such as the lack of flexibility and re-configurability found in most conventional systems. This is achieved by employing a method based on the new IEC 61499 function block standard, which is intended for distributed control systems. This paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into the area of home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.
This book provides a comprehensive overview of computational intelligence methods for semantic knowledge management. Contrary to popular belief, the methods for semantic management of information were created several decades ago, long before the birth of the Internet. In fact, it was back in 1945 when Vannevar Bush introduced the idea for the first protohypertext: the MEMEX (MEMory + indEX) machine. In the years that followed, Bush's idea influenced the development of early hypertext systems until, in the 1980s, Tim Berners-Lee developed the idea of the World Wide Web (WWW) as it is known today. From then on, there was an exponential growth in research and industrial activities related to the semantic management of information and its exploitation in different application domains, such as healthcare, e-learning and energy management.
However, semantic methods are not yet able to address some of the problems that naturally characterize knowledge management, such as the vagueness and uncertainty of information. This book reveals how computational intelligence methodologies, due to their natural inclination to deal with imprecision and partial truth, are opening up promising new scenarios for designing innovative semantic knowledge management architectures.
Despite evidence of the rising popularity of video on the web (or VOW), little is known about how users access video. However, such a characterization can greatly benefit the design of multimedia systems such as web video proxies and VOW servers. Hence, this paper presents an analysis of trace data obtained from an ongoing VOW experiment at Luleå University of Technology, Sweden. This experiment is unique in that video material is distributed over a high-bandwidth network, allowing users to make access decisions without the network being a major factor. Our analysis revealed a number of interesting discoveries regarding user VOW access. For example, accesses display high temporal locality: several requests for the same video title often occur within a short time span. Accesses also exhibited spatial locality of reference, whereby a small number of machines accounted for a large number of overall requests. Another finding was a browsing pattern in which users preview the initial portion of a video to find out if they are interested; if they like it, they continue watching, otherwise they halt it. This pattern suggests that caching the first several minutes of video data should prove effective. Lastly, the analysis shows that, contrary to previous studies, the ranking of video titles by popularity did not fit a Zipfian distribution.
Online social media has completely transformed how we communicate with each other. While online discussion platforms are available in the form of applications and websites, an emergent outcome of this transformation is the phenomenon of 'opinion leaders'. A number of previous studies have been presented to identify opinion leaders in online discussion networks. In particular, Feng (2016 Comput. Hum. Behav. 54, 43–53. (doi:10.1016/j.chb.2015.07.052)) has identified five different types of central users besides outlining their communication patterns in an online communication network. However, the presented work focuses on a limited time span. The question remains as to whether similar communication patterns exist that will stand the test of time over longer periods. Here, we present a critical analysis of the Feng framework both for short-term as well as for longer periods. Additionally, for validation, we take another case study presented by Udanor et al. (2016 Program 50, 481–507. (doi:10.1108/PROG-02-2016-0011)) to further understand these dynamics. Results indicate that not all Feng-based central users may be identifiable in the longer term. Conversation starters and influencers were noted as opinion leaders in the network. These users play an important role as information sources in long-term discussions, whereas network builders and active engagers help connect otherwise sparse communities. Furthermore, we discuss the changing positions of opinion leaders and their power to keep isolates interested in an online discussion network.
Glaucoma detection is an important research area in intelligent systems and plays an important role in the medical field. Glaucoma can give rise to irreversible blindness when proper diagnosis is lacking. Doctors need to perform many tests to diagnose this threatening disease, which requires a lot of time and expense. Moreover, at the early stage of glaucoma, affected people may not experience any vision loss. To reduce the time and cost of detecting glaucoma, we have built a model based on the CNN-based Inception V3 architecture. We used a total of 6072 images, of which 2336 were glaucomatous and 3736 were normal fundus images. We used 5460 images for training our model and 612 images for testing, and obtained an accuracy of 0.8529 and an AUC of 0.9387. For comparison, we used the DenseNet121 and ResNet50 architectures, which achieved accuracies of 0.8153 and 0.7761, respectively.
Researchers are increasingly exploring educational games in immersive virtual reality (IVR) environments to facilitate students' learning experiences. Mainly, the effect of IVR on learning outcomes has been the focus. However, far too little attention has been paid to the influence of game elements and IVR features on learners' perceived cognition. This study examined the relationship between game elements (challenge, goal clarity, and feedback) as a pedagogical approach, features of IVR technology (immersion and interaction), and learners' perceived cognition (reflective thinking and comprehension). An experiment was conducted with 49 undergraduate students who played an IVR game-based application (iThinkSmart) containing mini games developed to facilitate learners' computational thinking competency. The study employed partial least squares structural equation modelling to investigate the effect of educational game elements and learning contents on learners' cognition. Findings show that goal clarity is the main predictor of learners' reflective thinking and comprehension in an educational game-based IVR application. It was also confirmed that immersion and interaction experience impact learners' comprehension. Notably, adequate learning content, in terms of the organisation and relevance of the content contained in an IVR game-based application, significantly moderates learners' reflective thinking and comprehension. The findings of this study have implications for educators and developers of IVR game-based interventions to facilitate learning in the higher education context. In particular, the implications of this study touch on the aspect of learners' cognitive factors that aim to produce 21st-century problem-solving skills through critical thinking.
Understanding the principles of computational thinking (CT), e.g., problem abstraction, decomposition, and recursion, is vital for computer science (CS) students. Unfortunately, these concepts can be difficult for novice students to understand. One way students can develop CT skills is to involve them in the design of an application to teach CT. This study focuses on co-designing mini games to support teaching and learning CT principles and concepts in an online environment. Online co-design (OCD) of mini games enhances students’ understanding of problem-solving through a rigorous process of designing contextual educational games to aid their own learning. Given the current COVID-19 pandemic, where face-to-face co-designing between researchers and stakeholders could be difficult, OCD is a suitable option. CS students in a Nigerian higher education institution were recruited to co-design mini games with researchers. Mixed research methods comprising qualitative and quantitative strategies were employed in this study. Findings show that the participants gained relevant knowledge, for example, how to (i) create game scenarios and game elements related to CT, (ii) connect contextual storyline to mini games, (iii) collaborate in a group to create contextual low-fidelity mini game prototypes, and (iv) peer review each other’s mini game concepts. In addition, students were motivated toward designing educational mini games in their future studies. This study also demonstrates how to conduct OCD with students, presents lesson learned, and provides recommendations based on the authors’ experience.
Computational thinking (CT) has become an essential skill nowadays. For young students, CT competency is required to prepare them for future jobs. This competency can facilitate students' understanding of programming knowledge, which has been a challenge for many novices pursuing a computer science degree. This study focuses on designing and implementing a virtual reality (VR) game-based application (iThinkSmart) to support CT knowledge. The study followed the design science research methodology to design, implement, and evaluate the first prototype of the VR application. An initial evaluation of the prototype was conducted with 47 computer science students from a Nigerian university who voluntarily participated in an experimental process. To determine what works and what needs to be improved in the iThinkSmart VR game-based application, two groups were randomly formed, consisting of the experimental (n = 21) and the control (n = 26) groups respectively. Our findings suggest that VR increases motivation and therefore increases students' CT skills, contributing to knowledge regarding the affordances of VR in education and particularly providing evidence on the use of visualization of CT concepts to facilitate programming education. Furthermore, the study revealed that immersion, interaction, and engagement in a VR educational application can promote students' CT competency in higher education institutions (HEIs). In addition, it was shown that students who played the iThinkSmart VR game-based application gained higher cognitive benefits and increased interest in and a more positive attitude toward learning CT concepts. Although further investigation is required in order to gain more insights into students' learning process, this study made significant contributions in positioning CT in the HEI context and provides empirical evidence regarding the use of educational VR mini games to support students' learning achievements.
This paper presents iThinkSmart, an immersive virtual reality-based application to facilitate the learning of computational thinking (CT) concepts. The tool was developed to supplement the traditional teaching and learning of CT by integrating three virtual mini games, namely, River Crossing, Tower of Hanoi, and Mount Patti treasure hunt, to foster immersion, interaction, engagement, and personalization for an enhanced learning experience. iThinkSmart mini games can be played on a smartphone with a Google Cardboard and hand controller. This first prototype of the game assesses players' competency in CT and renders feedback based on learning progress.
This study examines the research landscape of smart learning environments by conducting a comprehensive bibliometric analysis of the field over the years. The study focused on the research trends, scholar’s productivity, and thematic focus of scientific publications in the field of smart learning environments. A total of 1081 data consisting of peer-reviewed articles were retrieved from the Scopus database. A bibliometric approach was applied to analyse the data for a comprehensive overview of the trend, thematic focus, and scientific production in the field of smart learning environments. The result from this bibliometric analysis indicates that the first paper on smart learning environments was published in 2002; implying the beginning of the field. Among other sources, “Computers & Education,” “Smart Learning Environments,” and “Computers in Human Behaviour” are the most relevant outlets publishing articles associated with smart learning environments. The work of Kinshuk et al., published in 2016, stands out as the most cited work among the analysed documents. The United States has the highest number of scientific productions and remained the most relevant country in the smart learning environment field. Besides, the results also showed names of prolific scholars and most relevant institutions in the field. Keywords such as “learning analytics,” “adaptive learning,” “personalized learning,” “blockchain,” and “deep learning” remain the trending keywords. Furthermore, thematic analysis shows that “digital storytelling” and its associated components such as “virtual reality,” “critical thinking,” and “serious games” are the emerging themes of the smart learning environments but need to be further developed to establish more ties with “smart learning”. The study provides useful contribution to the field by clearly presenting a comprehensive overview and research hotspots, thematic focus, and future direction of the field. 
These findings can guide scholars, especially young ones in the field of smart learning environments, in defining their research focus and identifying which aspects of smart learning can be explored.
This study investigated the role of virtual reality (VR) in computer science (CS) education over the last 10 years by conducting a bibliometric and content analysis of articles related to the use of VR in CS education. A total of 971 articles published in peer-reviewed journals and conferences were collected from Web of Science and Scopus databases to conduct the bibliometric analysis. Furthermore, content analysis was conducted on 39 articles that met the inclusion criteria. This study demonstrates that VR research for CS education was faring well around 2011 but witnessed low production output between the years 2013 and 2016. However, scholars have increased their contribution in this field recently, starting from the year 2017. This study also revealed prolific scholars contributing to the field. It provides insightful information regarding research hotspots in VR that have emerged recently, which can be further explored to enhance CS education. In addition, the quantitative method remains the most preferred research method, while the questionnaire was the most used data collection technique. Moreover, descriptive analysis was primarily used in studies on VR in CS education. The study concludes that even though scholars are leveraging VR to advance CS education, more effort needs to be made by stakeholders across countries and institutions. In addition, a more rigorous methodological approach needs to be employed in future studies to provide more evidence-based research output. Our future study would investigate the pedagogy, content, and context of studies on VR in CS education.
Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges), but many graphs on the Web - e.g. microblogging Web sites, trust networks or the Web graph itself - are often directed. Few community detection algorithms deal with directed graphs, and we lack their experimental comparison. In this paper we evaluated several community detection algorithms in terms of accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) is explicitly designed to manage directed graphs, while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs: on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by other algorithms, while Label Propagation showed outstanding scalability on both artificial and real graphs. The Infomap algorithm showcased the best trade-off between accuracy and computational performance and can therefore be considered a promising tool for Web Data Analytics purposes.
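As a sketch of the first group's approach, label propagation repeatedly assigns each node the most frequent label among its neighbours until labels stabilize; nodes sharing a final label form a community. A minimal pure-Python version (for a directed graph one would typically take the union of in- and out-neighbours; the details below are our own simplification, not the evaluated implementation):

```python
import random

def label_propagation(adj, seed=0, max_iter=100):
    """Minimal label propagation community detection.

    adj: dict mapping each node to a list of its neighbours.
    Returns a dict node -> community label; nodes with equal labels
    belong to the same detected community.
    """
    rng = random.Random(seed)
    labels = {n: n for n in adj}        # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)              # asynchronous updates in random order
        changed = False
        for n in nodes:
            if not adj[n]:
                continue                # isolated node keeps its own label
            counts = {}
            for m in adj[n]:
                counts[labels[m]] = counts.get(labels[m], 0) + 1
            best = max(counts.values())
            choice = rng.choice(sorted(l for l, c in counts.items() if c == best))
            if choice != labels[n]:
                labels[n] = choice
                changed = True
        if not changed:                 # converged: no label moved in a full pass
            break
    return labels
```

The near-linear cost per pass (each edge is touched once) is the source of the scalability the paper observes for Label Propagation.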
Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Explicitly, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include, but are not limited to, storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of the current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to have a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles.
The explosive growth in the number of devices connected to the Internet of Things (IoT) and the exponential increase in data consumption only reflect how the growth of big data perfectly overlaps with that of IoT. The management of big data in a continuously expanding network gives rise to non-trivial concerns regarding data collection efficiency, data processing, analytics, and security. To address these concerns, researchers have examined the challenges associated with the successful deployment of IoT. Despite the large number of studies on big data, analytics, and IoT, the convergence of these areas creates several opportunities for flourishing big data and analytics for IoT systems. In this paper, we explore the recent advances in big data analytics for IoT systems as well as the key requirements for managing big data and for enabling analytics in an IoT environment. We taxonomized the literature based on important parameters. We identify the opportunities resulting from the convergence of big data, analytics, and IoT as well as discuss the role of big data analytics in IoT applications. Finally, several open challenges are presented as future research directions.
Early prediction of whether a product will go on backorder is necessary for the optimal management of inventory: it can reduce losses in sales, establish a good relationship between supplier and customer, and maximize revenues. In this study, we have investigated the performance and effectiveness of tree-based machine learning algorithms in predicting the backorder of a product. The research methodology consists of data preprocessing, feature selection using a statistical hypothesis test, imbalanced learning using the random undersampling method, and the performance evaluation and comparison of four tree-based machine learning algorithms, namely decision tree, random forest, adaptive boosting, and gradient boosting, in terms of accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve, and area under the precision-recall curve. The three main findings of this study are: (1) the random forest model without feature selection and with the random undersampling method achieved the highest performance on all performance metrics; (2) feature selection did not contribute to the performance enhancement of the tree-based classifiers; and (3) the random undersampling method significantly improves the performance of tree-based classifiers in product backorder prediction.
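For illustration, the random undersampling step balances the training set by sampling the majority class down to the minority-class size before the tree-based classifiers are fitted. A minimal sketch (the function name and data layout are our own):

```python
import random

def random_undersample(X, y, seed=0):
    """Balance a binary dataset (labels 0/1) by sampling the majority
    class down to the minority-class size, as in the study's
    imbalanced-learning step."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]
```

Backorders are rare events, so without this step a classifier can reach high accuracy by always predicting "no backorder"; undersampling forces the trees to learn the minority class, which is why it drives the recall and precision-recall results reported above.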
Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, utilizing hospital resources effectively, and minimizing the mortality rate in the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. The experts' knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed in this paper. Several experiments are performed using data from 485 COVID-19 patients to evaluate the effectiveness of the proposed system. Experimental results show that, after optimization, the conjunctive belief rule-based system achieved an accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while for the disjunctive belief rule base they are 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, our proposed method shows superior performance to four traditional machine learning algorithms: LR, SVM, DT, and ANN. All these results validate the effectiveness of our proposed method, which will help hospital authorities identify severe and non-severe COVID-19 patients and adopt optimal treatment plans in pandemic situations.
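The constraint handling at the heart of this tuning step can be illustrated with a toy example. The paper's modified algorithm and actual BRB objective are not reproduced here; this sketch only shows SciPy's stock differential evolution recovering an invented belief distribution under the BRB-style constraints that belief degrees are non-negative and sum to one:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Invented "true" belief distribution for a single toy rule.
target = np.array([0.7, 0.2, 0.1])

def objective(x):
    beliefs = x / x.sum()              # enforce the sum-to-one constraint
    return np.sum((beliefs - target) ** 2)

result = differential_evolution(
    objective,
    bounds=[(1e-6, 1.0)] * 3,          # non-negativity via box bounds
    seed=0,
    tol=1e-10,
)
tuned = result.x / result.x.sum()      # normalized, feasible belief degrees
```

Normalizing inside the objective turns the equality constraint into an unconstrained search over positive vectors, a common trick when the optimizer itself only supports box bounds.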
Tomato leaves can be infected by various viral and fungal diseases that drastically reduce tomato production and incur great economic loss. Therefore, tomato leaf disease detection and identification are crucial for meeting the global demand for tomatoes for a large population. This paper proposes a machine learning-based technique to identify diseases on tomato leaves and classify them into three disease classes (Septoria, Yellow Leaf Curl, and Late Blight) and one healthy class. The proposed method extracts radiomics-based features from tomato leaf images and identifies the disease with a gradient boosting classifier. The dataset used in this study consists of 4,000 tomato leaf images collected from the Plant Village dataset. The experimental results demonstrate the effectiveness and applicability of our proposed method for tomato leaf disease detection and classification.
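The feature-then-classifier pipeline can be sketched as follows. The first-order statistics and the random arrays standing in for leaf images below are illustrative only (real radiomics feature sets are far richer), and scikit-learn's gradient boosting implementation is assumed:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

def radiomics_features(img):
    """Toy first-order statistics; real radiomics uses many more features."""
    return [img.mean(), img.std(), np.median(img), img.max() - img.min()]

# Two synthetic "classes": dark-ish vs bright-ish 16x16 images.
imgs = [rng.uniform(0.0, 0.5, (16, 16)) for _ in range(50)] + \
       [rng.uniform(0.5, 1.0, (16, 16)) for _ in range(50)]
labels = [0] * 50 + [1] * 50

# Extract a feature vector per image, then fit the boosted classifier.
X = np.array([radiomics_features(im) for im in imgs])
clf = GradientBoostingClassifier(random_state=0).fit(X, labels)
acc = clf.score(X, labels)
```

The key design point is that the classifier never sees pixels, only a compact hand-crafted feature vector per image.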
Artificial intelligence has achieved notable advances across many applications, and the field has recently turned to developing novel methods for explaining machine learning models. Deep neural networks deliver the best accuracy in domains such as text categorization, image classification, and speech recognition, but since they are black-box models, they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection became a challenging research problem, as misinformation endangers the lives of many online users. Therefore, transparency and explainability in COVID-19 fake news classification are necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model in which BiLSTM assures classification accuracy and LIME ensures transparency and explainability. In this integrated model, since LIME behaves similarly to the original model and explains its predictions, the proposed model becomes comprehensible. The explainability of the model is measured using Kendall's tau correlation coefficient. We also employ several machine learning models and compare their performances. Finally, because the model takes an integrated strategy, we analyze and compare its computational overhead with that of the other methods.
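The explainability measurement rests on Kendall's tau, which scores the agreement between two rankings, such as the feature-importance ordering produced by an explainer versus a reference ordering. A minimal illustration with invented rankings, using SciPy:

```python
from scipy.stats import kendalltau

# Importance ranks assigned to five tokens by two methods (invented values).
reference_rank = [1, 2, 3, 4, 5]
explainer_rank = [1, 3, 2, 4, 5]   # the explainer swaps tokens 2 and 3

tau, p_value = kendalltau(reference_rank, explainer_rank)
```

With one swapped pair out of ten, nine pairs are concordant and one is discordant, giving tau = (9 - 1)/10 = 0.8; a tau near 1 indicates the explainer's ranking closely tracks the reference.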
The widespread use of facial expression recognition across computer science has heightened researchers' interest in this topic. Computer vision coupled with deep learning offers a way to solve several real-world problems; for instance, in robotics, analyzing information from visual content is a requirement for establishing and strengthening communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in computer vision. In our previous work, we delivered a facial expression recognition system that can classify an image into the seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This work extends that research with a real-time facial expression recognition system that can recognize ten facial expressions from video streaming data: the previous seven plus three additional expressions, mockery, think, and wink. After training, the proposed model attains high validation accuracy on a combined facial expression dataset, and its real-time validation is also promising.
The novel coronavirus-induced disease COVID-19 is currently the biggest threat to human health, and owing to the transmissibility of the virus via its carriers, it is spreading rapidly in almost every corner of the globe. Bringing this outbreak under control requires the unification of medical and IT experts. In this research, we propose a single framework that integrates data-driven and knowledge-driven approaches to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction, along with eight other significant risk factors associated with COVID-19 patients, is analyzed by a knowledge-driven belief rule-based expert system, which produces a survival probability for that particular patient. The reliability of the proposed integrated system has been tested using real patient data and compared with expert opinion, and the performance of the system is found to be promising.
Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands extra accuracy and computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. The mobility of the CNs and MS minimizes the overall energy consumption of the nodes. We perform simulations to investigate the performance of the proposed scheme and compare it to pre-existing techniques in terms of network lifetime, throughput, path loss, transmission loss, and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss, and scalability.
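The intuition behind courier-node relaying can be illustrated with the standard first-order radio energy model (this is not the paper's simulator; the distances and per-bit constants below are textbook-style placeholder values). Because transmit energy grows with the square of distance, two short hops via a nearby courier node, plus one reception, can cost far less than a single long direct transmission:

```python
# First-order radio model constants (typical textbook values, not the paper's).
E_ELEC = 50e-9      # J/bit, electronics energy for transmit/receive circuitry
EPS_AMP = 100e-12   # J/bit/m^2, transmit amplifier energy

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d (free-space d^2 loss)."""
    return bits * (E_ELEC + EPS_AMP * d ** 2)

def rx_energy(bits):
    """Energy to receive `bits`."""
    return bits * E_ELEC

bits = 4000
direct = tx_energy(bits, 200)                                       # sensor -> sink, 200 m
relayed = tx_energy(bits, 50) + rx_energy(bits) + tx_energy(bits, 50)  # two 50 m hops via a courier node
```

With these numbers the relayed path consumes only a small fraction of the direct transmission's energy, which is the effect the mobile sink and courier nodes exploit.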
It is my pleasure to welcome you to the sixth Demonstration Session at the IEEE Conference on Local Computer Networks (LCN) 2014. We solicited demonstrations on all topics covered by the main conference as well as by the workshops held in conjunction with it. Technical demonstrations were strongly encouraged to show innovative and original research. The main purpose of the demo session is to provide demonstrations that validate important research issues and/or show innovative prototypes.
A smart grid can be considered a complex network in which each node represents a generation unit or a consumer and links represent transmission lines. One way to study complex systems is agent-based modeling, a paradigm that represents a complex system as autonomous agents interacting with each other. A number of studies in the smart grid domain have made use of agent-based modeling; however, to the best of our knowledge, none has focused on the specification aspect of the model. Model specification is important not only for understanding but also for replication of the model. To fill this gap, this study focuses on specification methods for smart grid modeling. We adopt two specification methods, namely Overview, Design Concepts, and Details (ODD) and Descriptive Agent-Based Modeling. Using these methods, we provide tutorials and guidelines for developing smart grid models, from conceptual modeling to a validated agent-based model through simulation. The specification study is exemplified through a case study from the smart grid domain, in which we consider a large network where different consumers and power generation units are connected in different configurations, and communication takes place between consumers and generating units for energy transmission and data routing. We demonstrate how to effectively model a complex system such as a smart grid using specification methods, and we analyze the two specification approaches both qualitatively and quantitatively. Extensive experiments demonstrate that Descriptive Agent-Based Modeling is a more useful approach than the ODD method for modeling as well as for replicating smart grid models.
Emojis are small icons or images used to express sentiments or feelings in text messages, and they are extensively used on social media platforms such as Facebook, Twitter, and Instagram. In this paper we consider hand-drawn emojis, i.e., emojis drawn on a digital platform or simply on paper with a pen, and classify them into 8 classes, enabling users to employ hand-drawn emojis on any social media platform without confusion. We built a local dataset of 500 images per class, for a total of 4,000 images of hand-drawn emojis, and present a system that recognises and classifies the emojis into 8 classes with a convolutional neural network model. The model favorably recognises and classifies the hand-drawn emojis with an accuracy of 97%. Pre-trained CNN models (VGG16, VGG19, ResNet50, MobileNetV2, InceptionV3, and Xception) are also trained on the dataset to compare accuracy and check whether they outperform the proposed model. In addition, machine learning models such as SVM, Random Forest, AdaBoost, Decision Tree, and XGBoost are implemented on the dataset.
Mosquitoes are responsible for the greatest number of human deaths every year throughout the world, and Bangladesh suffers heavily from this problem. Dengue, malaria, chikungunya, Zika, yellow fever, and other diseases are caused by dangerous mosquito bites. The three main types of mosquitoes found in Bangladesh are Aedes, Anopheles, and Culex, and identifying them is crucial for taking the necessary steps to eliminate them in an area. Hence, a convolutional neural network (CNN) model is developed to classify mosquitoes from their images. We prepared a local dataset consisting of 442 images collected from various sources. Running the proposed CNN model on the collected dataset achieves an accuracy of 70%; after augmenting the dataset to 3,600 images, the accuracy increases to 93%. We also compare the CNN method with VGG-16, Random Forest, XGBoost, and SVM; our proposed CNN method outperforms these methods in terms of mosquito classification accuracy. This research thus forms an example of humanitarian technology, where data science supports mosquito classification and thereby the treatment of various mosquito-borne diseases.
One of the most vital parts of medical image analysis is the classification of brain tumors. Because tumors can develop into cancer, accurate brain tumor classification can save lives, and CNN (convolutional neural network)-based techniques for classifying brain tumors are therefore frequently employed. However, CNNs require vast amounts of training data to perform well, which is where transfer learning enters the picture. We present a 4-class transfer learning approach for categorizing glioma, meningioma, and pituitary tumors (the three most prevalent types of brain tumors) and non-tumor images. Our method uses a pre-trained InceptionResNetV1 network to extract features from brain MRI images, which are then classified with a softmax classifier. The proposed approach outperforms prior techniques with a mean classification accuracy of 93.95%. For evaluation we use a Kaggle dataset, with precision, recall, and F-score among the key performance metrics employed in this study.
Optimization problems like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard optimal minimization problem. Selection, crossover, and mutation are the three main operators of a GA, and the algorithm is usually employed to find the minimum total distance needed to visit all the nodes in a TSP. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed operator selects two crossover points and creates new offspring by performing a cost comparison. Computational results and a comparison with well-developed existing crossover operators are also presented; the new crossover operator is found to produce better results than the other crossover operators.
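The paper's exact operator is not reproduced here, but its two ingredients, two-point selection and a cost comparison, can be sketched with a standard order crossover (OX) that generates both candidate offspring and keeps the cheaper tour:

```python
def tour_cost(tour, dist):
    """Total length of a closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, a, b):
    """Copy p1[a:b] into the child, fill the rest from p2 in order (OX)."""
    n = len(p1)
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    j = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[j]
            j += 1
    return child

def cost_comparing_crossover(p1, p2, dist, a, b):
    """Produce both offspring orderings and keep the cheaper tour."""
    c1 = order_crossover(p1, p2, a, b)
    c2 = order_crossover(p2, p1, a, b)
    return min(c1, c2, key=lambda t: tour_cost(t, dist))

# Example: 5 cities with a made-up symmetric distance matrix.
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
child = cost_comparing_crossover([0, 1, 2, 3, 4], [3, 4, 0, 2, 1], dist, 1, 3)
```

The OX fill step guarantees each child remains a valid permutation of the cities, which ordinary two-point crossover would not.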
The main challenge for any mobile robot is to detect and avoid obstacles and potholes. This paper presents the development and implementation of a novel mobile robot. An Arduino Uno is used as the processing unit of the robot, and a Sharp distance measurement sensor and ultrasonic sensors take inputs from the environment. The robot trains a neural network based on a feedforward backpropagation algorithm to detect and avoid obstacles and potholes, using a truth table for that purpose. Our experimental results show that the developed system can reliably detect and avoid obstacles and potholes and navigate its environment.
An earthquake is a tremor felt on the surface of the earth, created by the movement of the major pieces of its outer shell. Many attempts have been made to forecast earthquakes with some success, but these models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good choice for building the model because of its memory-keeping ability. Using the Keras tuner, the best model was selected from candidate models composed of combinations of various LSTM architectures and dense layers. The selected model uses seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes in the following month. An attention mechanism was added to the LSTM architecture to improve the model's earthquake occurrence prediction accuracy, which reached 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, achieving a root mean square error of 1.25.
This article describes the architecture and service enablers developed in the NIMO project. Furthermore, it identifies future challenges and knowledge gaps in upcoming ICT service development for public sector units, empowering citizens with enhanced tools for interaction and participation. We foresee crowdsourced applications in which citizens contribute dynamic, timely, and geographically distributed information.
An Internet-of-Things (IoT)-Belief Rule Base (BRB) hybrid system is introduced to assess autism spectrum disorder (ASD). This smart system can automatically collect sign and symptom data from various autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight, and degree of belief, and classifies children with autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with fuzzy- and expert-based systems, and the proposed system outperforms the state-of-the-art fuzzy system and expert system.
Service-Oriented Architecture (SOA) offers a flexible paradigm for information flow among collaborating organizations. As information moves beyond an organization's boundary, various security concerns arise, such as confidentiality, integrity, and authenticity, that need to be addressed; moreover, verifying the correctness of the communication protocol is also important. This paper focuses on the formal verification of the xDAuth protocol, one of the prominent protocols for identity management in cross-domain scenarios. We model the information flow of the xDAuth protocol using High Level Petri Nets (HLPN) to understand its behavior in a distributed environment, analyze the rules of information flow using the Z language, and use the Z3 SMT solver for verification of the model. Our formal analysis and verification results reveal that the protocol fulfills its intended purpose, providing security for the defined protocol-specific properties (e.g., secure secret key authentication and the Chinese wall security policy) as well as secrecy-specific properties (e.g., confidentiality, integrity, and authenticity).
Recently, the convergence of Blockchain and IoT has attracted attention in many domains including, but not limited to, healthcare, supply chain, agriculture, and telecommunication. Both Blockchain and IoT are sophisticated technologies whose feasibility and performance in large-scale environments are difficult to evaluate; consequently, a trustworthy Blockchain-based IoT simulator presents an alternative to costly and complicated real implementations. Our preliminary analysis finds that, so far, there has been no satisfactory simulator for creating and assessing Blockchain-based IoT applications, which is the principal impetus for our effort. This study therefore gathers expert opinions on the development of a simulation environment for Blockchain-based IoT applications through two investigations. First, a questionnaire is created to determine whether the development of such a simulator would be of substantial use. Second, interviews are conducted to obtain participants' opinions on the most pressing challenges they encounter with Blockchain-based IoT applications. The outcome is a conceptual architecture for simulating Blockchain-based IoT applications, which we evaluate using two research methods: a questionnaire and a focus group with experts. All in all, we find that the proposed architecture is generally well received due to its comprehensive range of key features and capabilities for Blockchain-based IoT purposes.
The fifth-generation (5G) wireless network is expected to have dense deployments of cells in order to provide efficient Internet and cellular connections. The cloud radio access network (C-RAN) has emerged as one of the 5G solutions to steer the network architecture and control resources beyond the legacy radio access technologies: it decouples the traffic management operations from the radio access technologies, leading to a new combination of virtualized network core and fronthaul architecture. In this paper, we first investigate the power consumption impact of aggressive deployments of low-power neighborhood femtocell networks (NFNs) under the umbrella of a coordinated multipoint (CoMP) macrocell. We show that the power savings obtained from employing a low-power NFN start to decline once the density of deployed femtocells exceeds a certain threshold. The analysis considers two CoMP sites, at the cell edge and in the intra-cell area. Second, to restore power efficiency and network stability, a C-RAN model is proposed that restructures the NFN into clusters to ease the energy burden in evolving 5G systems; tailored to the traffic load, selected clusters are switched off to save power when they operate with low traffic loads.
Cloud monitoring involves dynamically tracking the Quality of Service (QoS) parameters of virtualized resources (e.g., VMs, storage, network, and appliances), the physical resources they share, the applications running on them, and the data hosted on them. Configuring applications and resources in a cloud computing environment is quite challenging given the large number of heterogeneous cloud resources. Further, at any given point in time, the cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) may need to change to meet application QoS requirements under uncertainties (resource failure, resource overload, workload spikes, etc.). Hence, cloud monitoring tools can assist cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting for service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools, and we discuss how these dimensions and issues are handled by current academic research as well as by commercial monitoring tools.
Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment, and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications, hosted on single or multiple cloud provider platforms, have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). This paper proposes, develops, and validates CLAMBS (Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service) for efficient QoS monitoring and benchmarking of cloud applications hosted in multi-cloud environments. The major highlight of CLAMBS is its capability to monitor and benchmark individual application components, such as databases and web servers, distributed across cloud layers (*-aaS) and spread among multiple cloud providers. We validate CLAMBS using a prototype implementation and extensive experimentation, and show that it efficiently monitors and benchmarks application components on multi-cloud platforms including Amazon EC2 and Microsoft Azure.
The service delivery model of cloud computing acts as a key enabler for big data analytics applications, enhancing productivity and efficiency while reducing costs. The ever-increasing flood of data generated by smartphones and sensors such as RFID readers and traffic cams requires innovative provisioning and QoS monitoring approaches to continuously support big data analytics. To provide essential information for effective and efficient QoS monitoring of big data analytics applications, in this paper we propose and develop CLAMS (Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework). The proposed framework: (a) performs multi-cloud monitoring, and (b) addresses the issue of cross-layer monitoring of applications. We implement and demonstrate CLAMS functions on real-world multi-cloud platforms such as Amazon and Azure.
Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, and data processing frameworks) platforms. Application services hosted on single or multiple cloud provider platforms have diverse characteristics that require extensive monitoring mechanisms to aid in controlling run-time quality of service (e.g., access latency and the number of requests served per second). To provide essential real-time information for effective and efficient cloud application quality of service (QoS) monitoring, in this paper we propose, develop, and validate CLAMS (Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework). The proposed framework is capable of: (a) performing QoS monitoring of application components (e.g., database, web server, and application server) that may be deployed across multiple cloud platforms (e.g., Amazon and Azure); and (b) giving visibility into the QoS of individual application components, something not supported by current monitoring services and techniques. We conduct experiments on real-world multi-cloud platforms such as Amazon and Azure to empirically evaluate our framework, and the results validate that CLAMS efficiently monitors applications running across multiple clouds.