  • 1.
    Abdul Waheed, Malik
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Cheng, Xin
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Kjeldsberg, Per Gunnar
    NTNU.
    Generalized Architecture for a Real-time Computation of an Image Component Features on a FPGA. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper describes a generalized architecture for real-time component labeling and computation of image component features. Computing image component features in real time is one of the most important tasks for modern machine vision systems. Embedded machine vision systems demand robust performance, power efficiency and minimal area utilization. The presented architecture can easily be extended with additional modules for parallel computation of arbitrary image component features. Hardware modules for component labeling and feature calculation run in parallel. This modularization makes the architecture suitable for design automation. Our architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 24.20 mW at 86 frames per second on a Xilinx Spartan-6 FPGA.
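
    The abstract summarises the architecture only at a high level. As a rough software analogue of computing component features alongside labeling (not the paper's FPGA design; the union-find bookkeeping and the chosen features are illustrative), a minimal sketch in Python:

        # Software analogue of component labeling with per-component feature
        # accumulation (area, bounding box); the streamed FPGA architecture in
        # the paper is not reproduced here.
        def label_and_measure(img):
            """img: 2D list of 0/1 values; 4-connectivity."""
            h, w = len(img), len(img[0])
            parent = {}                                  # union-find parents

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a

            labels = [[0] * w for _ in range(h)]
            next_label = 1
            for y in range(h):
                for x in range(w):
                    if not img[y][x]:
                        continue
                    up = labels[y - 1][x] if y else 0
                    left = labels[y][x - 1] if x else 0
                    if not (up or left):
                        parent[next_label] = next_label
                        labels[y][x] = next_label
                        next_label += 1
                    else:
                        labels[y][x] = up or left
                        if up and left and up != left:
                            parent[find(left)] = find(up)   # merge equivalent labels

            feats = {}                                   # root label -> (area, bbox)
            for y in range(h):
                for x in range(w):
                    if labels[y][x]:
                        r = find(labels[y][x])
                        area, (x0, y0, x1, y1) = feats.get(r, (0, (x, y, x, y)))
                        feats[r] = (area + 1, (min(x0, x), min(y0, y),
                                               max(x1, x), max(y1, y)))
            return feats

        print(label_and_measure([[1, 1, 0, 1],
                                 [0, 1, 0, 1],
                                 [0, 0, 0, 1]]))         # two components, area 3 each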

  • 2.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Model, placement optimization and verification of a sky surveillance visual sensor network. 2013. In: International Journal of Space-Based and Situated Computing (IJSSC), ISSN 2044-4893, E-ISSN 2044-4907, Vol. 3, no 3, p. 125-135. Article in journal (Refereed)
    Abstract [en]

    A visual sensor network (VSN) is a distributed system of a large number of camera nodes, which generates two-dimensional data. This paper presents a model of a VSN to track large birds, such as the golden eagle, in the sky. The model optimises the placement of camera nodes in the VSN. A camera node is modelled as a function of lens focal length and camera sensor. The VSN provides full coverage between two altitude limits. The model can be used to minimise the number of sensor nodes for any given camera sensor, by exploring the focal lengths that fulfil both the full coverage and the minimum object size requirements. For the case of large bird surveillance, 100% coverage is achieved for relevant altitudes using 20 camera nodes per km² for the investigated camera sensors. A real VSN is designed and measurements of VSN parameters are performed. The results obtained verify the VSN model.
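
    The abstract states the model only in outline. The sketch below is one plausible pinhole-camera reading of it (not the paper's equations), with every numeric parameter invented purely to exercise the formulas:

        # Illustrative pinhole-camera reading of the placement model: footprint
        # width at altitude h is sensor_width * h / focal_length, and the focal
        # length is set by the object-resolution requirement at the top altitude.
        def min_focal_length(h_max_m, pixel_pitch_mm, object_size_m, min_pixels):
            # an object of size object_size_m at h_max must span at least min_pixels
            return min_pixels * pixel_pitch_mm * h_max_m / object_size_m

        def nodes_per_km2(h_min_m, h_max_m, sensor_width_mm, pixel_pitch_mm,
                          object_size_m, min_pixels):
            f = min_focal_length(h_max_m, pixel_pitch_mm, object_size_m, min_pixels)
            footprint_m = sensor_width_mm * h_min_m / f    # narrowest footprint
            return (1000.0 / footprint_m) ** 2             # square grid of nodes

        # hypothetical sensor and bird parameters, for illustration only
        print(nodes_per_km2(h_min_m=300, h_max_m=800, sensor_width_mm=6.6,
                            pixel_pitch_mm=0.005, object_size_m=1.5, min_pixels=4))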

  • 3.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Solution space exploration of volumetric surveillance using a general taxonomy. 2013. In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Daniel J. Henry, 2013, Art. no. 871317. Conference paper (Refereed)
    Abstract [en]

    Visual surveillance systems provide real-time monitoring of events or the environment. The availability of low-cost sensors and processors has increased the number of possible applications for these kinds of systems. However, designing an optimized visual surveillance system for a given application is a challenging task, which often becomes a unique design task for each system. Moreover, choosing components for a given surveillance application from a wide spectrum of available alternatives is not straightforward. In this paper, we propose to use a general surveillance taxonomy as a base to structure the analysis and development of surveillance systems. We demonstrate the proposed taxonomy by designing a volumetric surveillance system for monitoring the movement of eagles in wind parks, aiming to avoid their collision with wind turbines. The analysis of the problem is performed based on the taxonomy, and behavioural and implementation models are identified to formulate the solution space for the problem. Moreover, we show that there is a need for generalized volumetric optimization methods for camera deployment.

  • 4.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Cost Optimization of a Sky Surveillance Visual Sensor Network. 2012. In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, Art. no. 84370U. Conference paper (Refereed)
    Abstract [en]

    A Visual Sensor Network (VSN) is a network of spatially distributed cameras. The primary difference between a VSN and other types of sensor networks is the nature and volume of the information. A VSN generally consists of cameras, communication, storage and a central computer, where image data from multiple cameras is processed and fused. In this paper, we use optimization techniques to reduce the cost, as derived by a model of a VSN, to track large birds, such as the golden eagle, in the sky. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges. The sub-ranges are monitored by individual VSNs, with VSN1 monitoring the lowest range, VSN2 the next higher one, and so on, such that a minimum cost is used to monitor a given area. The VSNs may use similar or different types of cameras but different optical components, thus forming a heterogeneous network. We have calculated the cost required to cover a given area both by considering the altitude range as a single element and by dividing it into sub-ranges. Covering a given area and altitude range with a single VSN requires 694 camera nodes, whereas dividing this range into sub-ranges of altitudes requires only 96 nodes, an 86% reduction in cost.
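
    A back-of-the-envelope illustration of why splitting the altitude range helps, using the same pinhole reading as the sketch under item 2 (all numbers invented): a single VSN must pick its focal length for resolution at the top of the whole range, which shrinks its footprint at the bottom, whereas stacked sub-VSNs each cover a shorter range.

        # Single altitude range versus stacked sub-ranges; parameters are
        # placeholders, not the paper's values.
        SENSOR_W, PIXEL, OBJ, MIN_PX = 6.6, 0.005, 1.5, 4   # mm, mm, m, pixels

        def nodes(h_min, h_max):
            f = MIN_PX * PIXEL * h_max / OBJ                # focal length (mm)
            footprint = SENSOR_W * h_min / f                # coverage width at h_min (m)
            return (1000.0 / footprint) ** 2                # nodes per km^2

        single = nodes(100, 1000)
        split = sum(nodes(lo, hi) for lo, hi in [(100, 215), (215, 464), (464, 1000)])
        print(round(single), round(split))                  # splitting needs far fewer nodes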

  • 5.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Modeling and Verification of a Heterogeneous Sky Surveillance Visual Sensor Network. 2013. In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. id. 490489. Article in journal (Refereed)
    Abstract [en]

    A visual sensor network (VSN) is a distributed system of a large number of camera nodes and has useful applications in many areas. The primary difference between a VSN and an ordinary scalar sensor network is the nature and volume of the information. In contrast to scalar sensor networks, a VSN generates two-dimensional data in the form of images. In this paper, we design a heterogeneous VSN to reduce the implementation cost required for the surveillance of a given area between two altitude limits. The VSN is designed by combining three sub-VSNs, which results in a heterogeneous VSN. Measurements are performed to verify full coverage and minimum achieved object image resolution at the lower and higher altitudes, respectively, for each sub-VSN. Verification of the sub-VSNs also verifies the full coverage of the heterogeneous VSN between the given altitude limits. Results show that the heterogeneous VSN is very effective in decreasing the implementation cost required for the coverage of a given area. More than a 70% decrease in cost is achieved by using a heterogeneous VSN to cover a given area, in comparison to a homogeneous VSN. © 2013 Naeem Ahmad et al.

  • 6.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Oelmann, Bengt
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Model and placement optimization of a sky surveillance visual sensor network. 2011. In: Proceedings - 2011 International Conference on Broadband and Wireless Computing, Communication and Applications, BWCCA 2011, IEEE Computer Society, 2011, p. 357-362. Conference paper (Refereed)
    Abstract [en]

    Visual Sensor Networks (VSNs) are networks which generate two-dimensional data. The major difference between a VSN and an ordinary sensor network is the large amount of data. In a VSN, a large number of camera nodes form a distributed system which can be deployed in many potential applications. In this paper we present a model of the physical parameters of a visual sensor network to track large birds, such as the golden eagle, in the sky. The developed model is used to optimize the placement of the camera nodes in the VSN. A camera node is modeled as a function of its field of view, which is derived from the combination of the lens focal length and the camera sensor. From the field of view and resolution of the sensor, a model for full coverage between two altitude limits has been developed. We show that the model can be used to minimize the number of sensor nodes for any given camera sensor, by exploring the focal lengths that both give full coverage and meet the minimum object size requirement. For the case of large bird surveillance we achieve 100% coverage for relevant altitudes using 20 camera nodes per km² for the investigated camera sensors.

  • 7.
    Anwar, Qaiser
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Intelligence Partitioning as a Method for Architectural Exploration of Wireless Sensor Node. 2016. In: Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), 2016, IEEE Press, 2016, p. 935-940, article id 7881473. Conference paper (Refereed)
    Abstract [en]

    Embedded systems with integrated sensing, processing and wireless communication are driving future connectivity concepts such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT). Because of resource limitations, a number of challenges remain, such as achieving low latency and low energy consumption, before these concepts can be realized to their full potential. To address and understand these challenges, we have developed and employed an intelligence partitioning method which generates different implementation alternatives by distributing the processing load across multiple nodes. The task-to-node mapping has exponential complexity, which is hard to compute for a large-scale system. Our method therefore provides recommendations on how to handle and minimize this complexity for a large system. Experiments on a use case show that the proposed method is able to identify unfavourable architecture solutions in which forward and backward communication paths exist in the task-to-node mapping. These solutions can be discarded, thus limiting the space for architectural exploration of a sensor node.
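
    A toy view of the mapping space such a method has to prune (task names, node names and the pruning rule below are illustrative, not taken from the paper): enumerate assignments of a task pipeline to ordered nodes and drop mappings whose data would have to travel backwards.

        # Enumerate task-to-node mappings for a pipeline and prune those with
        # backward communication (a later task placed on an earlier node).
        from itertools import product

        tasks = ["capture", "filter", "segment", "classify"]   # hypothetical pipeline
        nodes = ["sensor", "gateway", "server"]                 # ordered by data flow

        candidates = list(product(range(len(nodes)), repeat=len(tasks)))
        monotone = [m for m in candidates
                    if all(m[i] <= m[i + 1] for i in range(len(m) - 1))]

        print(len(candidates), "candidate mappings")               # 3**4 = 81
        print(len(monotone), "left after pruning backward paths")  # 15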

  • 8.
    Brunnström, Kjell
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology. RISE Acreo AB.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. HIAB AB.
    Pettersson, Magnus
    HIAB AB.
    Johanson, Mathias
    Alkit Communications AB, Mölndal.
    Quality Of Experience For A Virtual Reality Simulator. 2018. In: IS and T International Symposium on Electronic Imaging Science and Technology 2018, 2018. Conference paper (Refereed)
    Abstract [en]

    In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also whether there are any discomfort related symptoms experienced during task execution. The QoE test has been designed to capture both the general subjective experience of using the simulator and to study task completion rate. Moreover, a specific focus has been to study the effects of latency on the subjective experience, with regards both to delays in the crane control interface as well as lag in the visual scene rendering in the head mounted display (HMD). Two larger formal subjective studies have been performed: one with the VR-system as it is and one where we have added controlled delay to the display update and to the joystick signals. The baseline study shows that most people are more or less happy with the VR-system and that it does not have strong effects on any symptoms as listed in the SSQ. In the delay study we found significant effects on Comfort Quality and Immersion Quality for higher Display delay (30 ms), but very small impact of joystick delay. Furthermore, the Display delay had strong influence on the symptoms in the SSQ, as well as causing test subjects to decide not to continue with the complete experiments, and this was also found to be connected to the longer Display delays (≥ 20 ms).

  • 9.
    Gustafsson, O.
    et al.
    Division of Electronics Systems, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Wanhammar, L.
    Division of Electronics Systems, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden.
    Generalized overlapping digit patterns for multi-dimensional sub-expression sharing. 2010. In: 1st International Conference on Green Circuits and Systems, ICGCS 2010, IEEE conference proceedings, 2010, p. 65-68. Conference paper (Refereed)
    Abstract [en]

    Sub-expression sharing is a technique that can be applied to reduce the complexity of linear time-invariant non-recursive computations by identifying common patterns. It has recently been proposed that it is possible to improve the performance of single and multiple constant multiplication by identifying overlapping digit patterns. In this work we extend the concept of overlapping digit patterns to arbitrary shift dimensions, such as shift in time (FIR filters). © 2010 IEEE.
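
    For readers unfamiliar with the underlying idea, a tiny numeric illustration of sub-expression sharing in constant multiplication (the constants are arbitrary and unrelated to the paper): 45x and 23x can both reuse the partial product 5x = (x << 2) + x.

        # Several constant multiplications realised with shifts and adds,
        # reusing a shared partial product instead of general multipliers.
        def mcm(x):
            p5 = (x << 2) + x      # shared sub-expression: 5x
            p3 = (x << 1) + x      # 3x
            y45 = (p5 << 3) + p5   # 45x = 8*(5x) + 5x
            y23 = (p5 << 2) + p3   # 23x = 4*(5x) + 3x
            return y45, y23

        assert mcm(7) == (45 * 7, 23 * 7)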

  • 10.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Analysis of Vision systems and Taxonomy Formulation: An abstract model for generalization. 2011. Report (Refereed)
    Abstract [en]

    Vision systems are increasingly used in many applications, including optical character recognition, mechanical inspection, automotive safety, surveillance and traffic monitoring. The current trend in vision systems is to propose solutions for specific problems, as each application has different requirements and constraints. There is no generalized model or benchmark, to the best of our knowledge, which can be used for providing generic solutions for different classes of vision systems. Providing a generic model for vision systems is a challenging task due to the number of influencing factors. However, common characteristics can be identified in order to propose an abstract model. The majority of vision applications focus on the detection, analysis and recognition of objects. These tasks are reduced to vision functions which can be used to characterize the vision systems. In this report, we have analysed different types of vision systems, both wired and wireless, individual vision systems as well as vision nodes in a Wireless Vision Sensor Network (WVSN). This analysis leads to the development of a system taxonomy, in which vision functions are considered as characteristics of the systems. The taxonomy is evaluated by using a quantitative parameter, which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 50 percent of the systems. The proposed taxonomy will assist designers in classifying their systems and enable researchers to compare their results with a similar class of systems. Moreover, it will help designers and researchers to propose generic architectures for different classes of vision systems.

  • 11.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Energy Efficient and Programmable Architecture for Wireless Vision Sensor Node. 2013. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Wireless Vision Sensor Networks (WVSNs) are an emerging field which has attracted a number of potential applications because of low per-node cost, ease of deployment, scalability and low-power stand-alone solutions. A WVSN consists of a number of wireless Vision Sensor Nodes (VSNs). A VSN has limited resources such as its embedded processing platform, power supply, wireless radio and memory. With these limited resources, a VSN is expected to perform complex vision tasks for a long duration of time without battery replacement or recharging. Currently, the reduction of processing and communication energy consumption has been a major challenge for battery-operated VSNs. Another challenge is to propose generic solutions for a VSN so as to make these solutions suitable for a number of applications.

    To meet these challenges, this thesis focuses on an energy efficient and programmable VSN architecture for machine vision systems which can classify objects based on binary data. In order to facilitate generic solutions, a taxonomy has been developed together with a complexity model which can be used for system classification and comparison without the need for actual implementation. The proposed VSN architecture is based on task partitioning between a VSN and a server, as well as task partitioning locally on the node between software and hardware platforms. In relation to task partitioning, the effect on processing and communication energy consumption, design complexity and lifetime has been investigated.

    The investigation shows that the strategy in which front-end tasks up to segmentation, accompanied by bi-level coding, are implemented on a Field Programmable Gate Array (FPGA) with small sleep power offers a generalized, low-complexity and energy-efficient VSN architecture. The implementation of the data-intensive front-end tasks on the hardware-reconfigurable platform reduces the processing energy. However, there is scope for reducing the communication energy related to the output data. This thesis therefore also explores data reduction techniques, including image coding, region of interest coding and change coding, which reduce the output data significantly.

    As proof of concept, the VSN architecture, together with task partitioning, bi-level video coding, duty cycling and a low-complexity background subtraction technique, has been implemented on real hardware, and the functionality has been verified for four applications: a particle detection system, remote meter reading, bird detection and people counting. The results based on measured energy values show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to currently published VSNs. The lifetime based on measured energy values shows that, for a sample period of 5 minutes, the VSN can achieve a 3.2 year lifetime with a battery of 37.44 kJ energy. In addition, the proposed VSN offers a generic architecture with smaller design complexity on a hardware-reconfigurable platform and offers easy adaptation to a number of applications as compared to published systems.
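
    The quoted lifetime can be sanity-checked from the battery energy and sample period given in the abstract (back-of-the-envelope arithmetic only, not taken from the thesis):

        # Energy budget implied by the quoted figures: a 37.44 kJ battery,
        # one duty cycle every 5 minutes, 3.2 years of operation.
        battery_J = 37.44e3
        cycles = 3.2 * 365 * 24 * 60 / 5        # duty cycles over the lifetime
        print(round(cycles))                    # ~336,000 cycles
        print(round(battery_J / cycles, 3))     # ~0.111 J available per cycle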

  • 12.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Investigation of Architectures for Wireless Visual Sensor Nodes. 2011. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Wireless visual sensor networks are an emerging field which has proved useful in many applications, including industrial control and monitoring, surveillance, environmental monitoring, personal care and the virtual world. Traditional imaging systems used a wired link, a centralized network, high processing capabilities, unlimited storage and a fixed power source. In many applications, the wired solution results in high installation and maintenance costs. However, a wireless solution is the preferred choice as it offers lower maintenance and infrastructure costs and greater scalability.

    The technological developments in image sensors, wireless communication and processing platforms have paved the way for smart camera networks, usually referred to as Wireless Visual Sensor Networks (WVSNs). WVSNs consist of a number of Visual Sensor Nodes (VSNs) deployed over a large geographical area. The smart cameras can perform complex vision tasks using limited resources such as batteries or alternative energy sources, embedded platforms, a wireless link and a small memory. Current research in WVSNs is focused on reducing the energy consumption of the node so as to maximise the life of the VSN. To meet this challenge, different software and hardware solutions are presented in the literature for the implementation of VSNs.

    The focus in this thesis is on the exploration of energy efficient reconfigurable architectures for VSNs by partitioning vision tasks over software, hardware platforms and locality. For any application, some of the vision tasks can be performed on the sensor node, after which data is sent over the wireless link to the server where the remaining vision tasks are performed. Similarly, at the VSN, vision tasks can be partitioned between software and hardware platforms. In the thesis, all possible strategies are explored by partitioning vision tasks between the sensor node and the server. The energy consumption of the sensor node is evaluated for the different strategies on a software platform. It is observed that performing some of the vision tasks on the sensor node and sending compressed images to the server, where the remaining vision tasks are performed, results in lower energy consumption.

    In order to achieve better performance and low power consumption, Field Programmable Gate Arrays (FPGAs) are introduced for the implementation of the sensor node. The strategies with reasonable design times and costs are implemented on a hardware-software platform. Based on the implementation of the VSN on the FPGA together with a micro-controller, the lifetime of the VSN is predicted using the measured energy values of the platforms for the different processing strategies. The implementation results support our analysis that a VSN with such characteristics will result in a longer lifetime.

  • 13.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding. 2013. In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, ISSN 2156-3357, Vol. 3, no 2, p. 198-209, article id 6508941. Article in journal (Refereed)
    Abstract [en]

    Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing and wireless bandwidth. The processing and communication energy requirements of individual VSNs have been a challenge because of limited energy availability. To meet this challenge, we have proposed and implemented a programmable and energy efficient VSN architecture which has lower energy requirements and a reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware implemented VSN and a server. The initial data dominated tasks are implemented on the VSN while the control dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to a VSN without the bi-level video coding. The proposed VSN offers an energy efficient, generic architecture with smaller design complexity on a hardware reconfigurable platform and offers easy adaptation for a number of applications as compared to published systems.

  • 14.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Low Complexity Background Subtraction for Wireless Vision Sensor Node. 2013. In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013, 2013, p. 681-688. Conference paper (Refereed)
    Abstract [en]

    Wireless vision sensor nodes have limited resources such as energy, memory, wireless bandwidth and processing. It therefore becomes necessary to investigate lightweight vision tasks. To highlight the foreground objects, many machine vision applications depend on the background subtraction technique. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory. This raises issues such as complexity on the hardware platform, energy requirements and latency. This work presents a low complexity background subtraction technique for a hardware implemented VSN. The proposed technique utilizes existing image scaling techniques for scaling down the image. The downscaled image is stored in the memory of the microcontroller, which is already present for transmission. For the subtraction operation, the background pixels are generated in real time through up-scaling. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear and bicubic. The results show that a system with lightweight scaling techniques, i.e. nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object as compared to a system which uses the full original background image. The proposed approach reduces the cost, the design/implementation complexity and the memory requirement by a factor of up to 64.
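
    A minimal numpy sketch of the idea (the scale factor, averaging-based downscaling and threshold are illustrative choices, not the paper's exact parameters): only an 8x downscaled background is stored, and background pixels are regenerated by nearest-neighbour up-scaling at subtraction time.

        # Background subtraction against a downscaled background regenerated on
        # the fly by nearest-neighbour up-scaling.
        import numpy as np

        SCALE, THRESH = 8, 25                       # illustrative parameters

        def downscale(img):
            h, w = img.shape
            img = img[:h - h % SCALE, :w - w % SCALE]
            return img.reshape(h // SCALE, SCALE, w // SCALE, SCALE).mean(axis=(1, 3))

        def foreground(frame, bg_small):
            ys = np.arange(frame.shape[0]) // SCALE
            xs = np.arange(frame.shape[1]) // SCALE
            bg = bg_small[np.ix_(ys, xs)]           # nearest-neighbour up-scaling
            return np.abs(frame.astype(np.int16) - bg) > THRESH

        bg_small = downscale(np.zeros((480, 640), dtype=np.uint8))
        frame = np.zeros((480, 640), dtype=np.uint8)
        frame[100:120, 200:240] = 200               # synthetic bright object
        print(foreground(frame, bg_small).sum())    # 800 foreground pixels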

  • 15.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Benkrid, Khaled
    School of Engineering at the University of Edinburgh,UK.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Analysis and Characterization of Embedded Vision Systems for Taxonomy Formulation. 2013. In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Nasser Kehtarnavaz, Matthias F. Carlsohn, USA: SPIE - International Society for Optical Engineering, 2013, Art. no. 86560J. Conference paper (Refereed)
    Abstract [en]

    The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark which aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the wide range of use cases, environmental factors and available technologies. However, common characteristics can be identified to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize the vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions as well as their combination characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated by using a quantitative parameter which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of the systems. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of systems.

  • 16.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Complexity Analysis of Vision Functions for implementation of Wireless Smart Cameras using System Taxonomy. 2012. In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, Art. no. 84370C. Conference paper (Refereed)
    Abstract [en]

    There are a number of challenges caused by the large amount of data and limited resources such as memory, processing capability, energy consumption and bandwidth when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which can predict the resource requirements for the development and comparison of vision solutions in wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of wireless smart cameras have common functions. In this paper, we have investigated the arithmetic complexity and memory requirements of vision functions by using the system taxonomy and have proposed an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems with it and shown that the complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions. Moreover, it will assist researchers and designers in predicting the resource requirements for different classes of vision systems quickly and with little effort.
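
    A sketch of how a taxonomy-based complexity model of this kind could be exercised (the chosen functions, per-pixel operation counts and line-buffer figures below are placeholders, not the model's actual parameters):

        # Placeholder complexity model: each vision function contributes
        # operations per pixel and buffered lines; totals scale with resolution
        # and frame rate.
        FUNCS = {                                   # ops/pixel, buffered lines
            "background_subtraction": (2, 1),
            "segmentation":           (1, 0),
            "morphology_3x3":         (9, 2),
            "labelling":              (4, 1),
        }

        def estimate(width, height, fps, pipeline):
            ops = sum(FUNCS[f][0] for f in pipeline) * width * height * fps
            mem = sum(FUNCS[f][1] for f in pipeline) * width   # pixels buffered
            return ops, mem

        ops, mem = estimate(640, 480, 25, list(FUNCS))
        print(f"{ops / 1e6:.0f} Mops/s, {mem} pixels of line buffer")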

  • 17.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Waheed, Malik A.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Architecture of Wireless Visual Sensor Node with Region of Interest Coding. 2012. In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012, IEEE conference proceedings, 2012, Art. no. 6474029. Conference paper (Refereed)
    Abstract [en]

    The challenges involved in designing a wireless Vision Sensor Node include the reduction in processing and communication energy consumption, in order to maximize its lifetime. This work presents an architecture for a wireless Vision Sensor Node which consumes low processing and communication energy. The processing energy consumption is reduced by processing lightweight vision tasks on the VSN and by partitioning the vision tasks between the wireless Vision Sensor Node and the server. The communication energy consumption is reduced with Region Of Interest coding together with a suitable bi-level compression scheme. A number of different processing strategies are investigated to realize a wireless Vision Sensor Node with a low energy consumption. The investigation shows that the wireless Vision Sensor Node, using Region Of Interest coding and the CCITT Group 4 compression technique, consumes 43 percent lower processing and communication energy as compared to the wireless Vision Sensor Node implemented without Region Of Interest coding. The proposed wireless Vision Sensor Node can achieve a lifetime of 5.4 years, with a sample period of 5 minutes, by using 4 AA batteries.
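
    As a rough illustration of where the Region Of Interest saving comes from (the mask is synthetic and the actual CCITT Group 4 encoding step is omitted): only the bounding box of the foreground mask, plus its offset, needs to be passed to the bi-level encoder.

        # Crop a binary foreground mask to its bounding box before bi-level
        # encoding; only the ROI and its offset are compressed and transmitted.
        import numpy as np

        mask = np.zeros((480, 640), dtype=bool)
        mask[300:340, 100:180] = True               # synthetic foreground blob

        ys, xs = np.nonzero(mask)
        roi = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

        print(mask.size, "pixels in the full-frame mask")          # 307200
        print(roi.size, "pixels handed to the bi-level encoder")   # 3200
        print("ROI offset:", (int(ys.min()), int(xs.min())))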

  • 18.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Waheed, Malik A.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Complexity Analysis of Vision Functions for Comparison of Wireless Smart Cameras. 2014. In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. no. 710685. Article in journal (Refereed)
    Abstract [en]

    There are a number of challenges caused by the large amount of data and limited resources such as memory, processing capability, energy consumption, and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which facilitates the complexity estimation and comparison of wireless smart camera systems in order to develop efficient generic solutions. To develop such a tool, we have presented, in this paper, a complexity model based on a system taxonomy. In this model, we have investigated the arithmetic complexity and memory requirements of vision functions with the help of the system taxonomy. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with the system taxonomy, is used for the complexity estimation of vision functions and for a comparison of vision systems. After comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and in proposing efficient generic solutions for the same class of problems with reduced design and development costs.

  • 19.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Implementation of wireless Vision Sensor Node for Characterization of Particles in Fluids. 2012. In: IEEE Transactions on Circuits and Systems for Video Technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 22, no 11, p. 1634-1643. Article in journal (Refereed)
    Abstract [en]

    Wireless Vision Sensor Networks (WVSNs) have a number of wireless Vision Sensor Nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth, and limited memory and processing capabilities. In order to meet these challenges, our research is focused on the exploration of energy efficient reconfigurable architectures for the VSN. In this work, the design and research challenges associated with the implementation of a VSN on different computational platforms, such as a micro-controller, an FPGA and a server, are explored. In relation to this, the effect on the energy consumption and the design complexity at the node, when functionality is moved from one platform to another, is analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture in which the compressed images are transmitted after the pixel-based operations realizes a WVSN system with low energy consumption. Moreover, the complex post-processing tasks are moved to a server, where the constraints are less severe.

  • 20.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Architecture Exploration Based on Tasks Partitioning Between Hardware, Software and Locality for a Wireless Vision Sensor Node. 2012. In: International Journal of Distributed Systems and Technologies, ISSN 1947-3532, E-ISSN 1947-3540, Vol. 3, no 2, p. 58-71. Article in journal (Refereed)
    Abstract [en]

    Wireless Vision Sensor Networks (WVSNs) are an emerging field; a WVSN consists of a number of Visual Sensor Nodes (VSNs). Compared to traditional sensor networks, WVSNs operate on two-dimensional data, which requires high bandwidth and results in high energy consumption. In order to minimize the energy consumption, the focus is on finding energy efficient and programmable architectures for the VSN by partitioning the vision tasks among hardware (FPGA), software (micro-controller) and locality (sensor node or server). The energy consumption, cost and design time of different processing strategies are analyzed for the implementation of the VSN. Moreover, the processing energy and communication energy consumption of the VSN are investigated in order to maximize the lifetime. Results show that introducing a reconfigurable platform, such as an FPGA with small static power consumption, and transmitting compressed images after the pixel-based tasks from the VSN results in a longer battery lifetime for the VSN.

  • 21.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Gustafsson, Oscar
    Linköping University.
    On the number representation in sub-expression sharing. 2010. In: International Conference on Signals and Electronic Systems, ICSES'10 - Conference Proceedings 2010, IEEE conference proceedings, 2010, p. 17-20. Conference paper (Refereed)
    Abstract [en]

    The core of many DSP tasks is the multiplication of one data sample with several constants, e.g. in digital filtering, image processing, the DCT and the DFT. Modern portable equipment such as cellular phones and MP3 players contains DSP circuits that involve a large number of multiplications of one variable with several constants (MCM), which leads to large area, delay and energy consumption in hardware. The multiplication operation can be realized using additions/subtractions and shifts, without general multipliers. Different number representations are used in MCM algorithms and there are different views on which representation is preferable. Some authors consider the Canonic Signed Digit (CSD) representation to be better for sub-expression sharing. We have compared the results of the CSD and binary representations using our generalized MCM algorithm on random matrices and conclude that the binary representation is better than CSD when a system has multiple inputs and multiple outputs.
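
    For reference, a small sketch of the two representations being compared (the paper's MCM algorithm itself is not reproduced): converting a constant to Canonic Signed Digit form and counting its non-zero digits against plain binary.

        # Canonic Signed Digit (CSD): digits in {-1, 0, 1} with no two adjacent
        # non-zeros; fewer non-zero digits generally means fewer adders.
        def csd(n):
            digits = []                     # least-significant digit first
            while n:
                if n & 1:
                    d = 2 - (n & 3)         # +1 if n % 4 == 1, -1 if n % 4 == 3
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n >>= 1
            return digits

        for c in (23, 45, 119):
            d = csd(c)
            assert sum(v << i for i, v in enumerate(d)) == c
            print(c, bin(c).count("1"), "binary non-zeros,",
                  sum(v != 0 for v in d), "CSD non-zeros")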

  • 22.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Exploration of Target Architecture for a Wireless Camera Based Sensor Node. 2010. In: 28th Norchip Conference, NORCHIP 2010, IEEE conference proceedings, 2010, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities. In order to meet these challenges, different approaches have been proposed. Research in wireless vision sensor networks has focused on two different approaches: the first sends all data to the central base station without local processing, while the second conducts all processing locally at the sensor node and transmits only the final results. Our research is focused on partitioning the vision processing tasks between the sensor node and the central base station. In this paper, we add the exploration dimension of performing some of the vision tasks, such as image capture, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA while handling communication on a micro-controller. The remaining vision processing tasks, i.e. morphology, labeling, bubble removal and classification, are processed on the central base station. Our results show that the introduction of an FPGA for some of the vision tasks results in a longer lifetime for the visual sensor node while the architecture remains programmable.

  • 23.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Kardeby, Victor
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
    Munir, Huma
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    STC-CAM1, IR-visual based smart camera system. 2015. In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2015, p. 195-196. Conference paper (Refereed)
    Abstract [en]

    Safety-critical applications require robust and real-time surveillance. For such applications, a vision sensor alone can give false positive results because of poor lighting conditions, occlusion, or different weather conditions. In this work, a visual sensor is complemented by an infrared thermal sensor, which makes the system more resilient in unfavorable situations. In the proposed camera architecture, initial data intensive tasks are performed locally on the sensor node and then compressed data is transmitted to a client device where the remaining vision tasks are performed. The proposed camera architecture is demonstrated as a proof-of-concept; it offers a generic architecture with better surveillance while performing only low-complexity computations on the resource-constrained devices.

  • 24.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Demo: SRAM FPGA based Wireless Smart Camera: SENTIOF-CAM. 2014. In: Proceedings of the International Conference on Distributed Smart Cameras, 2014, article id a41. Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Network applications with large data requirements are attracting the use of high performance embedded platforms, i.e. Field Programmable Gate Arrays (FPGAs), for in-node sensor processing. However, the design complexity and the high configuration and static energies of SRAM FPGAs impose challenges for duty cycled applications. In this demo, we demonstrate the functionality of an SRAM FPGA based wireless vision sensor node called SENTIOF-CAM. The demonstration shows that, by using intelligent techniques, a low energy and low complexity SRAM FPGA based wireless vision sensor node can be realized for duty cycled applications.

  • 25.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Energy Driven Selection and Hardware Implementation of Bi-Level Image Compression. 2014. In: Proceedings of the International Conference on Distributed Smart Cameras, ACM Press, 2014, article id a32. Conference paper (Refereed)
    Abstract [en]

    Wireless Vision Sensor Nodes have limited resources and are expected to achieve a long lifetime from the limited energy available. A wireless Vision Sensor Node (VSN) typically consumes more energy in communication than in processing. The communication energy can be reduced by reducing the amount of transmitted data with the help of a suitable compression scheme. This work investigates bi-level compression schemes, including G4, G3, JBIG2, Rectangular, GZIP, GZIP_pack and JPEG-LS, on a hardware platform. The investigation results show that the GZIP_pack, G4 and JBIG2 schemes are suitable for a hardware implemented VSN. JBIG2 offers up to a 43 percent reduction in overall energy consumption as compared to G4 and GZIP_pack for complex images. However, JBIG2 has higher resource requirements and implementation complexity. The difference in overall energy consumption is smaller for smooth images. Depending on the application requirements, the exclusion of a header can reduce the energy consumption by approximately 1 to 33 percent.
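
    A schematic version of the selection criterion (the compressed sizes and energy figures below are invented placeholders; only the selection logic is illustrated): the total energy per frame is the compression energy plus a transmit energy proportional to the compressed size, and the scheme with the smallest total is chosen.

        # Energy-driven choice among bi-level compression schemes.
        E_TX_PER_BYTE = 2.0e-6                      # J/byte for the radio (assumed)

        schemes = {                                 # compressed bytes, compression J
            "G4":        (9000, 0.8e-3),
            "JBIG2":     (7000, 1.6e-3),
            "GZIP_pack": (11000, 0.5e-3),
        }

        def total_energy(size_bytes, e_compress):
            return e_compress + size_bytes * E_TX_PER_BYTE

        for name, (size, e_c) in schemes.items():
            print(f"{name:10s} {total_energy(size, e_c) * 1e3:.2f} mJ/frame")
        print("selected:", min(schemes, key=lambda s: total_energy(*schemes[s])))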

  • 26.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Munir, Huma
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Low complexity FPGA based background subtraction technique for thermal imagery. 2015. In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2015, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    Embedded smart camera systems are gaining popularity for a number of real world surveillance applications. However, there are still challenges, such as variation in illumination, shadows, occlusion and weather conditions, when employing vision algorithms in outdoor environments. For safety-critical surveillance applications, the visual sensors can be complemented with beyond-visual-range sensors. This in turn requires analysis, development and modification of existing imaging techniques. In this work, a low complexity background modelling and subtraction technique is proposed for thermal imagery. The proposed technique has been implemented on Field Programmable Gate Arrays (FPGAs) after in-depth analysis of different sets of images characterized by poor signal-to-noise ratio challenges, e.g. motion of high-frequency background objects, temperature variation and camera jitter. The proposed technique dynamically updates the background at the pixel level and requires storage of only a single frame, as opposed to existing techniques. A comparison of this approach with two other approaches shows that it performs better in different environmental conditions. The proposed technique has been modelled in Register Transfer Logic (RTL), and implementation on recent FPGAs shows that the design requires less than 1 percent of the logic and 47 percent of the block RAMs, and consumes 91 mW on an Artix-7 100T FPGA.
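
    The abstract does not spell out the update rule itself; a common single-frame-storage choice is a per-pixel exponential running average, sketched here with assumed parameters:

        # Per-pixel background maintenance needing only one stored frame: an
        # exponential running average (learning rate and threshold are
        # assumptions, not the values used in the paper).
        import numpy as np

        ALPHA, THRESH = 0.05, 20

        def update(frame, background):
            fg = np.abs(frame.astype(np.float32) - background) > THRESH
            background += ALPHA * (frame - background) * ~fg   # adapt background pixels only
            return fg

        bg = np.full((240, 320), 30.0, dtype=np.float32)       # initial thermal background
        frame = np.full((240, 320), 31, dtype=np.uint8)
        frame[50:70, 80:120] = 90                              # warm object
        print(update(frame, bg).sum(), "foreground pixels")    # 800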

  • 27.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Rinner, Bernhard
    Alpen-Adria-Universität, Institute of Networked and Embedded Systems, Lakeside Park B02b, Klagenfurt, Austria .
    Zand, Sajjad Zandi
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Exploration of preprocessing architectures for field-programmable gate array-based thermal-visual smart camera. 2016. In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 25, no 4, article id 041006. Article in journal (Refereed)
    Abstract [en]

    Embedded smart cameras are gaining in popularity for a number of real-time outdoor surveillance applications. However, challenges remain, such as computational latency, variation in illumination, and occlusion. To address these challenges, multimodal systems integrating multiple imagers can be utilized; the trade-off is more stringent processing and communication requirements for embedded platforms. To meet these requirements, we investigated two low-complexity, high-performance preprocessing architectures for a multi-imager node on a field-programmable gate array (FPGA). In the proposed architectures, the majority of the tasks are performed on the thermal images because of their lower spatial resolution. Analysis with different sets of images shows that a system with the proposed architectures offers better detection performance and can reduce output data by a factor of 1.7 to 99 compared with full-size images. The proposed architectures achieve a frame rate of 53 fps, logic utilization of 2.1% to 4.1%, memory consumption of 987 down to 148 KB and power consumption in the range of 141 to 163 mW on an Artix-7 FPGA. We conclude that the proposed architectures offer reduced design complexity and lower processing and communication requirements while retaining the configurability of the system.

  • 28.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Shahzad, Khurram
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Oelmann, Bengt
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Energy Efficient SRAM FPGA based Wireless Vision Sensor Node: SENTIOF-CAM. 2014. In: IEEE Transactions on Circuits and Systems for Video Technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 24, no 12, p. 2132-2143. Article in journal (Refereed)
    Abstract [en]

    Many Wireless Vision Sensor Network (WVSN) applications are characterized by low duty cycling. An individual wireless Vision Sensor Node (VSN) in a WVSN is required to operate with limited resources, i.e. processing, memory and wireless bandwidth, on a limited energy budget. For such resource-constrained VSNs, this paper presents a low-complexity, energy-efficient and programmable VSN architecture based on a design matrix that includes partitioning of the processing load between the node and a server, low-complexity background subtraction, bi-level video coding and duty cycling. The task partitioning and the proposed background subtraction reduce the processing energy and design complexity for a hardware-implemented VSN; the bi-level video coding reduces the communication energy, whereas the duty cycling conserves energy for lifetime maximization. The proposed VSN, referred to as SENTIOF-CAM, has been implemented on a customized single board, which includes an SRAM FPGA, a microcontroller, a radio transceiver and a flash memory. The energy values are measured for different states and the results are compared with existing solutions. The comparison shows that the proposed solution can offer up to a 69 times energy reduction. The lifetime based on the measured energy values shows that, for a sample period of 5 minutes, a lifetime of 3.2 years can be achieved with a battery of 37.44 kJ. In addition, the proposed solution offers a generic architecture with low design complexity on a hardware-reconfigurable platform and is easily adapted to a number of applications.
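
    The reported lifetime follows from simple duty-cycle arithmetic: battery energy divided by the energy drained per sample period. The sketch below reproduces that calculation; the per-cycle active energy and sleep power are assumed example values chosen only to land near the reported 3.2 years, not measurements from the paper.

    BATTERY_ENERGY_J = 37.44e3   # battery capacity quoted in the abstract
    SAMPLE_PERIOD_S = 5 * 60     # one capture/process/transmit cycle every 5 minutes

    E_ACTIVE_PER_CYCLE_J = 0.08  # assumed energy for one active burst
    P_SLEEP_W = 1e-4             # assumed sleep power between bursts

    # Energy drained per period: active burst plus sleeping for the rest of it.
    energy_per_period = E_ACTIVE_PER_CYCLE_J + P_SLEEP_W * SAMPLE_PERIOD_S

    periods = BATTERY_ENERGY_J / energy_per_period
    lifetime_years = periods * SAMPLE_PERIOD_S / (3600 * 24 * 365)
    print(f"Estimated lifetime: {lifetime_years:.1f} years")   # ~3.2 years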

  • 29.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Wang, Xu
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Pre-processing Architecture for IR-Visual Smart Camera Based on Post-Processing Constraints. 2016. Conference paper (Refereed)
    Abstract [en]

    In embedded vision systems, the efficiency of the pre-processing architecture has a ripple effect on post-processing functions such as feature extraction, classification and recognition. In this work, we investigated a pre-processing architecture for a smart camera system integrating thermal and visual sensors, considering the constraints of post-processing. Exploiting the locality of the system, pre-processing is performed on the camera node using an FPGA and post-processing on the client device using a microprocessor platform, the NVIDIA Tegra. The study shows that, for outdoor people-surveillance applications with complex backgrounds and varying lighting conditions, the pre-processing architecture that transmits thermal binary Region-of-Interest (ROI) images offers better classification accuracy and lower complexity compared to alternative approaches.

  • 30.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Binary video codec for data reduction in wireless visual sensor networks. 2013. In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Kehtarnavaz, N; Carlsohn, MF, SPIE - International Society for Optical Engineering, 2013, art. no. 86560L. Conference paper (Refereed)
    Abstract [en]

    A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium or airport monitoring for security. In outdoor applications the energy budget is limited to the batteries, and frequent battery replacement is usually not desirable, so both the processing and the communication energy consumption of a VSN need to be optimized for the network to remain functional for a long duration. The images captured by a VSN contain a large amount of data and require efficient computational resources for processing and a wide communication bandwidth for transmitting the results. Image processing algorithms must therefore be computationally simple while providing a high compression rate. For some WVSN applications, the captured images can be segmented into bi-level images, so bi-level image coding methods can efficiently reduce the information in the segmented images; however, their compression rate is limited by the underlying compression algorithm. Hence there is a need for algorithms that are computationally simpler and compress better than plain bi-level image coding. Change coding is one such algorithm: it is computationally simple (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame further reduces the information in the change frame. However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e. image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the three results. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding. © COPYRIGHT SPIE.
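
    A minimal sketch of the select-the-smallest idea follows, with zlib standing in for the bi-level coders (G4/JBIG2) used in the papers and ROI coding omitted for brevity; the frame content is made up for illustration.

    import zlib
    import numpy as np

    def pack_bits(mask):
        # Pack a bi-level image (0/1 per pixel) into bytes, 8 pixels per byte.
        return np.packbits(mask.astype(np.uint8)).tobytes()

    def encode_bvc(curr, prev):
        # Encode with (1) image coding and (2) change coding (XOR against the
        # previous frame), then keep whichever bit stream is smallest.
        image_stream = zlib.compress(pack_bits(curr), 9)
        change_stream = zlib.compress(pack_bits(curr ^ prev), 9)
        candidates = {"image": image_stream, "change": change_stream}
        mode = min(candidates, key=lambda k: len(candidates[k]))
        return mode, candidates[mode]

    prev = np.zeros((480, 640), dtype=np.uint8)
    curr = prev.copy()
    curr[100:150, 200:260] = 1              # one small object entered the scene
    mode, stream = encode_bvc(curr, prev)
    print(mode, len(stream), "bytes")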

  • 31.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Detecting and Coding Region of Interests in Bi-Level Images for Data Reduction in Wireless Visual Sensor Network. 2012. In: Wireless and Mobile Computing, Networking and Communications (WiMob), 2012 IEEE 8th International Conference on, IEEE conference proceedings, 2012, p. 705-712. Conference paper (Refereed)
    Abstract [en]

    A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. The VSNs acquire images of the area of interest, perform some local processing on these images and transmit the results using an embedded wireless transceiver. The energy consumed in transmitting the results wirelessly is correlated with the amount of information being transmitted. The images acquired by the VSNs contain a large amount of data due to the many kinds of redundancy in the images. Suitable bi-level image compression standards can efficiently reduce the information in the images and are thus effective in reducing the communication energy consumption in a WVSN, but their compression capability is limited by the underlying compression algorithm. Further data reduction can be achieved by detecting Regions of Interest (ROIs) in the bi-level images and then coding these ROIs using a bi-level image compression method. We explored the compression performance of the lossless ROI detection and coding method for various kinds of changes, such as different shapes, locations and numbers of objects in a continuous set of frames. CCITT Group 4, JBIG2 and Gzip are used for coding the detected ROIs. We conclude that CCITT Group 4 is the better choice for coding ROIs in bi-level images because of its comparatively good compression performance and lower computational complexity. This paper is intended to be a resource for researchers interested in reducing the amount of data in bi-level images for energy-constrained WVSNs.
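
    The ROI step can be pictured as labelling the connected components of the bi-level change frame and keeping only their bounding boxes for coding. The small sketch below uses scipy's connected-component labelling as an illustrative software stand-in for the ROI detector; the input frame is synthetic.

    import numpy as np
    from scipy import ndimage

    def detect_rois(change_frame):
        # Label connected components in the bi-level change frame and return
        # each component's bounding box plus the cropped patch. Only these
        # patches (with their coordinates) would be handed to a bi-level
        # coder such as CCITT Group 4, instead of the full frame.
        labels, count = ndimage.label(change_frame)
        rois = []
        for slc in ndimage.find_objects(labels):
            if slc is not None:
                rois.append((slc, change_frame[slc].copy()))
        return rois

    frame = np.zeros((480, 640), dtype=np.uint8)
    frame[10:40, 50:90] = 1                 # one changed region (synthetic)
    for slc, patch in detect_rois(frame):
        print(slc, patch.shape)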

  • 32.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Performance analysis of bi-level image compression methods for machine vision embedded applications. Manuscript (preprint) (Other academic)
  • 33.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    The effect of packets relaying on the implementation issues of the visual sensor node. 2013. In: Electronics and Electrical Engineering, ISSN 1392-1215, Vol. 19, no 10, p. 155-161. Article in journal (Refereed)
    Abstract [en]

    Wireless Visual Sensor Networks (WVSNs) are used for the monitoring of large and inaccessible areas. WVSNs are feasible today due to advances in many fields of electronics such as CMOS cameras, low-power computing platforms, distributed computing and radio transceivers. The energy budget in a WVSN is limited because of the wireless nature of the applications and the small physical size of the Visual Sensor Node (VSN). A WVSN covers a large area in which not every node can transmit its results directly to the server, and receiving and forwarding other nodes' packets consumes a large portion of a VSN's energy budget. This paper explores the effect of packet relaying in a multihop WVSN on the implementation issues of the VSN. It also explores the effect of node density in the multihop WVSN on the energy consumption, which in turn has an impact on the lifetime of the VSN. Results show that the network topology does not affect the software implementation of the VSN because of the relatively high execution time of the image processing tasks on the microcontroller. For a hardware implementation, network topology and node density do affect the architecture of the VSN, because communication energy consumption is dominant (owing to the low execution time on FPGAs).
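
    The effect of relaying on a node's energy budget can be written out directly: a node near the sink pays its own processing and transmission cost plus a receive-and-forward cost per relayed byte. The figures in the sketch below are assumed, illustrative values rather than measurements from the paper.

    E_PROC_J = 0.010        # assumed local processing energy per own frame
    E_TX_PER_B_J = 2e-7     # assumed transmit energy per byte
    E_RX_PER_B_J = 1.5e-7   # assumed receive energy per byte

    def node_energy(own_bytes, relayed_bytes):
        # Energy per sample period: own work plus receive+forward of relayed traffic.
        own = E_PROC_J + own_bytes * E_TX_PER_B_J
        relayed = relayed_bytes * (E_RX_PER_B_J + E_TX_PER_B_J)
        return own + relayed

    # A node close to the sink relaying the traffic of ten downstream nodes:
    print(node_energy(own_bytes=2000, relayed_bytes=10 * 2000), "J per period")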

  • 34.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Analysis of change coding for data reduction in wireless visual sensor network. Manuscript (preprint) (Other academic)
  • 35.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Bi-Level Video Codec for Machine Vision Embedded Applications. 2013. In: Elektronika ir Elektrotechnika, ISSN 1392-1215, Vol. 19, no 8, p. 93-96. Article in journal (Refereed)
    Abstract [en]

    Wireless Visual Sensor Networks (WVSNs) are feasible today due to advances in many fields of electronics such as Complementary Metal Oxide Semiconductor (CMOS) cameras, low-power electronics, distributed computing and radio transceivers. The energy budget in a WVSN is limited due to the small form factor of the Visual Sensor Nodes (VSNs) and the wireless nature of the application. The images captured by a VSN contain a large amount of data, which leads to high communication energy consumption. Hence there is a need for algorithms that are computationally simple and provide a high compression ratio. Change coding and Region of Interest (ROI) coding are options for data reduction at the VSN, but for a higher number of objects in the images, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Bi-Level Video Codec (BVC) for several representative machine vision applications. We propose to implement image coding, change coding and ROI coding at the VSN and to select the smallest bit stream among the three. Results show that, for such applications, the compression performance of the BVC is always better than that of change coding and ROI coding.

  • 36.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Efficient Data Reduction Techniques for Remote Applications of a Wireless Visual Sensor Network. 2013. In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 10, art. no. 240. Article in journal (Refereed)
    Abstract [en]

    A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. After acquiring an image of the area of interest, the VSN performs local processing on it and transmits the result using an embedded wireless transceiver. Wireless data transmission consumes a great deal of energy, and the energy consumption depends mainly on the amount of information being transmitted. The image captured by the VSN contains a large amount of data. For certain applications, segmentation can be performed on the captured images, and the amount of information in the segmented images can be reduced by applying efficient bi-level image compression methods, thereby reducing the communication energy consumption of each VSN. However, the data reduction capability of bi-level image compression standards is fixed and limited by the underlying compression algorithm. For applications exhibiting few changes between adjacent frames, change coding can be applied for further data reduction. Detecting and compressing only the Regions of Interest (ROIs) in the change frame is another possibility for further data reduction. In a communication system where both the sender and the receiver know the employed compression standard, there is a further possibility for data reduction by not including the header information in the sender's compressed bit stream. This paper summarizes different information reduction techniques, namely image coding, change coding and ROI coding. The main contribution is the investigation of the combined effect of all these coding methods and their use in a few representative real-life applications. This paper is intended to be a resource for researchers interested in techniques for information reduction in energy-constrained embedded applications.

  • 37.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks. 2012. In: Proc. SPIE 8437, 84370M (2012), 2012. Conference paper (Refereed)
    Abstract [en]

    Wireless Visual Sensor Networks (WVSNs) are an emerging field that combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to traditional wireless sensor networks, which operate on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible, and the energy budget is limited to batteries because of the wireless nature of the application. Due to the limited energy, the processing at the Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires a high communication bandwidth. Data compression methods reduce the data efficiently and are therefore effective in reducing the communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is on determining the compression algorithms that can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods under different sets of constraints in a WVSN.

  • 38.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Exploration of tasks partitioning between hardware software and locality for a wireless camera based vision sensor node. 2011. In: Proceedings - 6th International Symposium on Parallel Computing in Electrical Engineering, PARELEC 2011, IEEE conference proceedings, 2011, p. 127-132. Conference paper (Refereed)
    Abstract [en]

    In this paper we explore different possibilities for partitioning the tasks between hardware, software and locality for the implementation of a vision sensor node used in a wireless vision sensor network. Wireless vision sensor networks are an emerging field combining image sensors, on-board computation and communication links. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires higher processing power and communication bandwidth. Research within the field has typically relied on one of two assumptions: either sending raw data to the central base station without local processing, or conducting all processing locally at the sensor node and transmitting only the final results. Our work focuses on determining an optimal point of hardware/software partitioning, as well as the partitioning between local and central processing, based on minimum energy consumption for the vision processing operations. The lifetime of the vision sensor node is predicted by evaluating the energy requirements of an embedded platform combining an FPGA and a microcontroller for the implementation of the vision sensor node. Our results show that sending compressed images after the pixel-based tasks results in a longer battery lifetime with a reasonable hardware cost for the vision sensor node. © 2011 IEEE.
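
    The partitioning decision boils down to comparing, for each possible cut-off point in the pipeline, the local processing energy plus the energy to transmit whatever data remains at that point. The sketch below runs that comparison; every task energy and data size in it is an assumed, illustrative value, chosen only so that the compressed-image cut-off happens to win, in line with the paper's conclusion.

    E_TX_PER_BYTE = 2e-7   # assumed radio cost in J/byte

    # (cut-off point, assumed local processing energy in J, bytes left to send)
    pipeline = [
        ("raw image", 0.000, 307_200),        # 640x480, 8 bits per pixel
        ("segmented image", 0.004, 38_400),   # bi-level, packed 8 pixels/byte
        ("compressed image", 0.006, 4_000),   # compressed bi-level image
        ("features only", 0.009, 200),        # labels and feature descriptors
    ]

    def total_energy(proc_energy, tx_bytes):
        return proc_energy + tx_bytes * E_TX_PER_BYTE

    for name, e_proc, n_bytes in pipeline:
        print(f"{name:17s} {total_energy(e_proc, n_bytes) * 1e3:7.2f} mJ")

    best = min(pipeline, key=lambda t: total_energy(t[1], t[2]))
    print("lowest-energy cut-off:", best[0])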

  • 39.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Exploration of Local and Central Processing for a Wireless Camera Based Sensor Node. 2010. In: International Conference on Signals and Electronic Systems, ICSES'10 - Conference Proceedings 2010, article number 5595231, IEEE conference proceedings, 2010, p. 147-150. Conference paper (Refereed)
    Abstract [en]

    Wireless vision sensor networks are an emerging field combining image sensors, on-board computation and communication links. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires both higher processing power and higher communication bandwidth. Research within the field has typically relied on one of two assumptions: either sending data to the central base station without local processing, or conducting all processing locally at the sensor node and transmitting only the final results. In this paper we focus on determining an optimal point for intelligence partitioning between the sensor node and the central base station, and on exploring compression methods. The lifetime of the visual sensor node is predicted by evaluating the energy consumption for different levels of intelligence partitioning at the sensor node. Our results show that sending compressed images after segmentation results in a longer lifetime for the sensor node.

  • 40.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Design exploration of a multi-camera dome for sky monitoring. 2016. In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2016, Vol. 12-15-September-2016, p. 14-18, article id 2967419. Conference paper (Refereed)
    Abstract [en]

    Sky monitoring has many applications, but also many challenges to be addressed before it can be realized, among them cost, energy consumption and complex deployment. One way to address these challenges is to compose a camera dome by grouping cameras that together monitor a half-sphere of the sky. In this paper, we present a model for design exploration that investigates how the characteristics of camera chips and objective lenses affect the overall cost of a node of a camera dome. The investigation shows that accepting more cameras in a single node can reduce the total cost of the system. We conclude that, with a suitable design and camera placement technique, a cost-effective solution can be obtained for massive open-area monitoring such as sky monitoring.
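
    A toy version of such a cost model is shown below: the number of cameras needed to tile a hemisphere is approximated by the ratio of solid angles, and node cost is the camera count times the per-camera chip and lens cost plus a fixed board cost. All field-of-view and cost figures are made-up example values, and the solid-angle approximation is deliberately crude.

    import math

    def cameras_for_hemisphere(hfov_deg, vfov_deg):
        # Approximate camera count as hemisphere solid angle (2*pi sr) divided
        # by a rough per-camera solid angle; ignores the overlap a real
        # deployment would need.
        hemisphere_sr = 2 * math.pi
        cam_sr = math.radians(hfov_deg) * math.radians(vfov_deg)
        return math.ceil(hemisphere_sr / cam_sr)

    def node_cost(hfov_deg, vfov_deg, chip_cost, lens_cost, board_cost):
        n = cameras_for_hemisphere(hfov_deg, vfov_deg)
        return n, board_cost + n * (chip_cost + lens_cost)

    # Few wide-angle cameras vs. many narrow, cheaper ones:
    print(node_cost(90, 70, chip_cost=15, lens_cost=25, board_cost=100))
    print(node_cost(40, 30, chip_cost=8, lens_cost=6, board_cost=100))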

  • 41.
    Malik, Abdul Waheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Hardware Architecture for Real-time Computation of Image Component Feature Descriptors on a FPGA. 2014. In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, art. no. 815378. Article in journal (Refereed)
    Abstract [en]

    This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency as well as minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 13 mW at 86 frames per second.
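
    As a software reference for the kind of per-component descriptors the hardware accumulates during labeling (area, bounding box, centroid), the sketch below uses scipy's connected-component labelling; it is only an illustrative stand-in for the parallel FPGA pipeline described in the paper.

    import numpy as np
    from scipy import ndimage

    def component_features(binary):
        # Label connected components, then compute area, bounding box and
        # centroid for each one. On the FPGA these feature accumulators are
        # updated in parallel with the labeling pass.
        labels, count = ndimage.label(binary)
        features = []
        for i, slc in enumerate(ndimage.find_objects(labels), start=1):
            mask = labels[slc] == i
            cy, cx = ndimage.center_of_mass(mask)
            features.append({
                "label": i,
                "area": int(mask.sum()),
                "bbox": (slc[0].start, slc[1].start, slc[0].stop, slc[1].stop),
                "centroid": (slc[0].start + cy, slc[1].start + cx),
            })
        return features

    img = np.zeros((480, 640), dtype=np.uint8)
    img[10:30, 10:50] = 1
    print(component_features(img))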

  • 42.
    Malik, Abdul Waheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Meng, Xiaozhou
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Real-Time machine vision system using FPGA and soft-core processor. 2012. In: Proceedings of SPIE - The International Society for Optical Engineering, SPIE - International Society for Optical Engineering, 2012, art. no. 84370Z. Conference paper (Refereed)
    Abstract [en]

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field-programmable gate arrays (FPGAs). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at a rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption of the designed system. The proposed FPGA-based machine vision system has a high frame rate, low latency and a power consumption much lower than that of commercially available smart camera solutions. © 2012 SPIE.
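
    The distance/angle step offloaded to the soft-core processor can be illustrated with elementary pinhole-camera geometry: two detected reference points with a known physical separation give the range by similar triangles, and their midpoint gives a bearing relative to the optical axis. The focal length, pixel pitch and marker separation below are assumed example values, not the parameters used in the paper.

    import math

    FOCAL_MM = 4.0              # assumed lens focal length
    PIXEL_PITCH_MM = 0.006      # assumed sensor pixel pitch (6 um)
    REF_SEPARATION_MM = 200.0   # assumed real-world distance between the markers

    def distance_and_angle(u1, v1, u2, v2, cx=320.0):
        # Pixel distance between the two detected reference points.
        d_px = math.hypot(u2 - u1, v2 - v1)
        # Similar triangles: separation / distance = image size / focal length.
        distance_mm = REF_SEPARATION_MM * FOCAL_MM / (d_px * PIXEL_PITCH_MM)
        # Horizontal bearing of the markers' midpoint relative to the optical axis.
        mid_u = (u1 + u2) / 2.0
        angle_deg = math.degrees(math.atan2((mid_u - cx) * PIXEL_PITCH_MM, FOCAL_MM))
        return distance_mm, angle_deg

    print(distance_and_angle(300, 240, 360, 240))   # roughly 2.2 m at ~0.9 degrees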

  • 43.
    Shallari, Irida
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Anwar, Qaiser
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Background Modelling, Analysis and Implementation for Thermographic Images. 2017. In: Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA 2017), IEEE, 2017. Conference paper (Refereed)
    Abstract [en]

    Background subtraction is one of the fundamental steps in the image-processing pipeline for distinguishing foreground from background. Most methods have been investigated with respect to visual images, where the challenges differ from those of thermal images. Thermal sensors are invariant to light changes and raise fewer privacy concerns. We propose the use of a low-pass IIR filter for background modelling in thermographic imagery because of its better performance compared to algorithms such as Mixture of Gaussians and k-nearest neighbour, while reducing the memory requirements for implementation in embedded architectures. Based on the analysis of four different image datasets, both indoor and outdoor, with and without people present, the learning rate of the filter is set to 3×10⁻³ Hz, and the proposed model is implemented on an Artix-7 FPGA.
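
    The filter itself is a one-line recursion per pixel, B[n] = (1 - α)·B[n-1] + α·I[n], needing only the previous background estimate in memory. The sketch below spells it out; note that α here is a per-frame blending weight whose relation to the reported 3×10⁻³ Hz cut-off depends on the frame rate, so the value used is purely illustrative.

    import numpy as np

    def iir_background(frames, alpha=0.003):
        # First-order low-pass IIR background model:
        #     B[n] = (1 - alpha) * B[n-1] + alpha * I[n]
        # Only the running background estimate is kept between frames.
        frames = iter(frames)
        background = next(frames).astype(np.float32)
        for frame in frames:
            background = (1.0 - alpha) * background + alpha * frame.astype(np.float32)
        return background

    # Usage with a short synthetic thermal sequence (values in degrees C):
    seq = [np.full((240, 320), 30.0 + 0.01 * t, dtype=np.float32) for t in range(100)]
    bg = iir_background(seq, alpha=0.003)
    foreground_mask = np.abs(seq[-1] - bg) > 2.0   # simple threshold on the residual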

  • 44.
    Shallari, Irida
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. HIAB AB.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Evaluating Pre-Processing Pipelines for Thermal-Visual Smart Camera. 2017. In: Proceedings of the 11th International Conference on Distributed Smart Cameras, ACM Digital Library, 2017, Vol. F132201, p. 95-100. Conference paper (Refereed)
    Abstract [en]

    Smart camera systems integrating multi-modal image sensors provide better spectral sensitivity and hence better pass/fail decisions. In a given vision system, the pre-processing tasks have a ripple effect on the output data and on the pass/fail decisions of high-level tasks such as feature extraction, classification and recognition. In this work, we investigated four pre-processing pipelines and evaluated their effect on classification accuracy and output transmission data. The pipelines processed four types of images: thermal grayscale, thermal binary, visual and visual binary. The results show that the pre-processing pipeline that transmits visual compressed Region of Interest (ROI) images offers 13 to 64 percent better classification accuracy compared to thermal grayscale, thermal binary and visual binary. They also show that visual raw and visual compressed ROI with a suitable quantization matrix offer similar classification accuracy, while visual compressed ROI reduces the communication data by up to 99 percent compared to visual ROI.
