Energy Efficient and Programmable Architecture for Wireless Vision Sensor Node
Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. ORCID iD: 0000-0003-1923-3843
2013 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) are an emerging field which has attracted a number of potential applications because of the small per-node cost, ease of deployment, scalability and low power stand-alone operation. A WVSN consists of a number of wireless Vision Sensor Nodes (VSNs). A VSN has limited resources, such as its embedded processing platform, power supply, wireless radio and memory. In the presence of these limited resources, a VSN is expected to perform complex vision tasks for a long duration without battery replacement or recharging. Currently, reducing the processing and communication energy consumption is a major challenge for battery operated VSNs. Another challenge is to propose generic solutions for a VSN so as to make them suitable for a number of applications.

To meet these challenges, this thesis focuses on an energy efficient and programmable VSN architecture for machine vision systems which classify objects based on binary data. In order to facilitate generic solutions, a taxonomy has been developed together with a complexity model, which can be used for the classification and comparison of systems without the need for actual implementation. The proposed VSN architecture is based on task partitioning between a VSN and a server, as well as task partitioning locally on the node between software and hardware platforms. In relation to task partitioning, the effect on processing and communication energy consumption, design complexity and lifetime has been investigated.

The investigation shows that the strategy in which the front-end tasks up to segmentation, accompanied by bi-level coding, are implemented on a Field Programmable Gate Array (FPGA) with a small sleep power offers a generalized, low complexity and energy efficient VSN architecture. The implementation of the data intensive front-end tasks on a hardware reconfigurable platform reduces the processing energy. However, there remains scope for reducing the communication energy related to the output data. This thesis therefore also explores data reduction techniques, including image coding, region of interest coding and change coding, which reduce the output data significantly.
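As a hedged illustration of the change-coding idea mentioned above, the sketch below (not the thesis's actual codec; the frame data and the run-length step are hypothetical) transmits only the pixels that differ between consecutive binary frames:

```python
# Minimal sketch of change coding on binary frames (illustrative only;
# the thesis's actual bi-level codec is not reproduced here).

def change_mask(prev_frame, curr_frame):
    """Binary mask marking the pixels that changed between two frames."""
    return [[p ^ c for p, c in zip(pr, cr)]
            for pr, cr in zip(prev_frame, curr_frame)]

def run_length_encode(bits):
    """Run-length encode a flat bit sequence as (value, count) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(v, n) for v, n in runs]

prev = [[0, 0, 0, 0], [0, 1, 1, 0]]   # hypothetical segmented frames
curr = [[0, 0, 0, 0], [0, 1, 0, 0]]
mask = change_mask(prev, curr)
flat = [b for row in mask for b in row]
print(run_length_encode(flat))        # [(0, 6), (1, 1), (0, 1)]
```

Since consecutive frames in a static-camera scene mostly agree, the mask is dominated by long zero-runs, which is what makes the coded output small.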

For proof of concept, the VSN architecture, together with task partitioning, bi-level video coding, duty cycling and a low complexity background subtraction technique, has been implemented on real hardware, and its functionality has been verified for four applications: particle detection, remote meter reading, bird detection and people counting. The results, based on measured energy values, show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to currently published VSNs. Based on the measured energy values, for a sample period of 5 minutes the VSN can achieve a lifetime of 3.2 years with a battery of 37.44 kJ energy. In addition, the proposed VSN offers a generic architecture with a smaller design complexity on a hardware reconfigurable platform and is easily adapted to a number of applications as compared to published systems.
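The quoted lifetime can be sanity-checked with back-of-envelope arithmetic using only the figures above; the per-sample energy and average power are derived here and are not values reported in the thesis:

```python
# Back-of-envelope check of the lifetime figures quoted in the abstract.
# The ~0.11 J per-sample energy is derived, not stated in the thesis.

battery_j = 37.44e3           # battery energy, joules
sample_period_s = 5 * 60      # one capture every 5 minutes
lifetime_years = 3.2

seconds_per_year = 365.25 * 24 * 3600
samples = lifetime_years * seconds_per_year / sample_period_s
energy_per_sample = battery_j / samples
avg_power_mw = battery_j / (lifetime_years * seconds_per_year) * 1e3

print(f"{energy_per_sample:.3f} J per sample")   # 0.111 J per sample
print(f"{avg_power_mw:.3f} mW average")          # 0.371 mW average
```

A sub-milliwatt average power budget is what makes multi-year operation on a small battery plausible, and it depends directly on the duty cycling and sleep power discussed above.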

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2013. 115 p.
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 167
Keyword [en]
Wireless Vision Sensor Node, Smart camera, Wireless Vision Sensor Networks, Architecture, Video coding.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-20179
Local ID: STC
ISBN: 978-91-87557-12-5 (print)
OAI: oai:DiVA.org:miun-20179
DiVA: diva2:663257
Public defence
2013-10-22, M108, Holmgatan 10, SE-85170 Sundsvall, 10:03 (English)
Opponent
Supervisors
Funder
Knowledge Foundation
Available from: 2013-11-11 Created: 2013-11-11 Last updated: 2016-10-20 Bibliographically approved
List of papers
1. Analysis and Characterization of Embedded Vision Systems for Taxonomy Formulation
2013 (English) In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Nasser Kehtarnavaz, Matthias F. Carlsohn, USA: SPIE - International Society for Optical Engineering, 2013, Art. no. 86560J. Conference paper, Published paper (Refereed)
Abstract [en]

The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark which aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the wide range of use cases, environmental factors, and available technologies. However, common characteristics can be identified in order to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combinations, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid in order to predict and avoid critical machinery failure. The proposed taxonomy is evaluated using a quantitative parameter, which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of them. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of systems.

Place, publisher, year, edition, pages
USA: SPIE - International Society for Optical Engineering, 2013
Series
Proceedings of SPIE, ISSN 0277-786X ; 8656
Keyword
System taxonomy, Smart cameras, Embedded vision systems, Wireless vision sensor networks
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-16035 (URN), 10.1117/12.2000584 (DOI), 000333051900018 (), 2-s2.0-84875855354 (Scopus ID), STC (Local ID), 978-0-8194-9429-0 (ISBN), STC (Archive number), STC (OAI)
Conference
Real-Time Image and Video Processing 2013; Burlingame, CA; United States; 6 February 2013 through 7 February 2013; Code 96385
Available from: 2013-02-05 Created: 2012-03-30 Last updated: 2016-10-20 Bibliographically approved
2. Complexity Analysis of Vision Functions for implementation of Wireless Smart Cameras using System Taxonomy
2012 (English) In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, Art. no. 84370C. Conference paper, Published paper (Refereed)
Abstract [en]

There are a number of challenges caused by the large amount of data and the limited resources, such as memory, processing capability, energy and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which is able to predict the resource requirements for the development and comparison of vision solutions in wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of wireless smart cameras have common functions. In this paper, we have investigated the arithmetic complexity and memory requirements of vision functions by using this system taxonomy, and we propose an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems and showed that the complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions. Moreover, it will assist researchers and designers in predicting the resource requirements for different classes of vision systems in a reduced time and with little effort.

Place, publisher, year, edition, pages
Belgium: SPIE - International Society for Optical Engineering, 2012
Series
Proceedings of SPIE, ISSN 0277-786X ; 8437
Keyword
wireless smart camera, complexity analysis, system taxonomy, comparison, resource requirements
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-16036 (URN), 10.1117/12.923797 (DOI), 000305693900010 (), 2-s2.0-84861946720 (Scopus ID), STC (Local ID), 978-0-8194-9129-9 (ISBN), STC (Archive number), STC (OAI)
Conference
Real-Time Image and Video Processing 2012; Brussels; 19 April 2012 through 19 April 2012; Code 90041
Available from: 2012-03-30 Created: 2012-03-30 Last updated: 2016-10-20 Bibliographically approved
3. Architecture Exploration Based on Tasks Partitioning Between Hardware, Software and Locality for a Wireless Vision Sensor Node
2012 (English) In: International Journal of Distributed Systems and Technologies, ISSN 1947-3532, E-ISSN 1947-3540, Vol. 3, no 2, 58-71 p. Article in journal (Refereed) Published
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) are an emerging field; a WVSN consists of a number of Vision Sensor Nodes (VSNs). Compared to traditional sensor networks, WVSNs operate on two dimensional data, which requires high bandwidth and results in high energy consumption. In order to minimize the energy consumption, the focus is on finding energy efficient and programmable architectures for the VSN by partitioning the vision tasks among hardware (FPGA), software (micro-controller) and locality (sensor node or server). The energy consumption, cost and design time of the different processing strategies are analyzed for the implementation of the VSN. Moreover, the processing energy and communication energy consumption of the VSN are investigated in order to maximize its lifetime. Results show that introducing a reconfigurable platform, such as an FPGA with a small static power consumption, and transmitting compressed images after the pixel based tasks results in a longer battery lifetime for the VSN.

Place, publisher, year, edition, pages
USA: IGI Global, 2012
Keyword
Wireless Vision Sensor Networks; Vision Sensor Node; Hardware/Software Partitioning; Reconfigurable Architecture; Image Processing.
National Category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-14940 (URN), 10.4018/jdst.2012040104 (DOI), 2-s2.0-84880522514 (Scopus ID)
Projects
On particle detection
Available from: 2012-01-04 Created: 2011-11-27 Last updated: 2017-12-08 Bibliographically approved
4. Implementation of wireless Vision Sensor Node for Characterization of Particles in Fluids
2012 (English) In: IEEE Transactions on Circuits and Systems for Video Technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 22, no 11, 1634-1643 p. Article in journal (Refereed) Published
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) have a number of wireless Vision Sensor Nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth and limited memory and processing capability. In order to meet these challenges, our research focuses on the exploration of energy efficient reconfigurable architectures for the VSN. In this work, the design and research challenges associated with the implementation of the VSN on different computational platforms, such as a micro-controller, an FPGA and a server, are explored. In relation to this, the effects on the energy consumption and the design complexity at the node, when functionality is moved from one platform to another, are analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture in which compressed images are transmitted after the pixel based operations realizes a WVSN system with low energy consumption. Moreover, the complex post-processing tasks are moved to a server, which relaxes the constraints on the node.

Keyword
Reconfigurable architecture, Image processing, Wireless vision sensor networks, Wireless vision sensor node.
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-14389 (URN), 10.1109/TCSVT.2012.2202189 (DOI), 000313971700010 (), 2-s2.0-84875631744 (Scopus ID), STC (Local ID), STC (Archive number), STC (OAI)
Available from: 2011-08-24 Created: 2011-08-24 Last updated: 2017-12-08 Bibliographically approved
5. Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding
2013 (English) In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, ISSN 2156-3357, Vol. 3, no 2, 198-209 p., 6508941. Article in journal (Refereed) Published
Abstract [en]

Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of an individual VSN have been a challenge because of the limited energy available. To meet this challenge, we have proposed and implemented a programmable and energy efficient VSN architecture which has lower energy requirements and a reduced design complexity. In the proposed system, the vision tasks are partitioned between the hardware implemented VSN and a server. The initial data dominated tasks are implemented on the VSN, while the control dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications, and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to a VSN without the bi-level video coding. The proposed VSN offers an energy efficient, generic architecture with a smaller design complexity on a hardware reconfigurable platform and offers easy adaptation to a number of applications as compared to published systems.

Place, publisher, year, edition, pages
IEEE Press, 2013
Keyword
Architecture, smart camera, video coding, wireless vision sensor networks (WVSNs), wireless vision sensor node (VSN)
National Category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-19193 (URN), 10.1109/JETCAS.2013.2256816 (DOI), 000337789200009 (), 2-s2.0-84879076204 (Scopus ID), STC (Local ID), STC (Archive number), STC (OAI)
Available from: 2013-06-12 Created: 2013-06-12 Last updated: 2016-10-20 Bibliographically approved
6. Low Complexity Background Subtraction for Wireless Vision Sensor Node
2013 (English) In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013, 2013, 681-688 p. Conference paper, Published paper (Refereed)
Abstract [en]

Wireless vision sensor nodes have limited resources, such as energy, memory, wireless bandwidth and processing capability. It therefore becomes necessary to investigate lightweight vision tasks. To highlight foreground objects, many machine vision applications depend on background subtraction. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory. This raises issues of complexity on the hardware platform, energy requirements and latency. This work presents a low complexity background subtraction technique for a hardware implemented VSN. The proposed technique utilizes existing image scaling techniques for scaling down the background image. The downscaled image is stored in the memory of the microcontroller, which is already present for transmission. For the subtraction operation, the background pixels are regenerated in real time through upscaling. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with the lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object as compared to a system which uses the full original background image. The proposed approach reduces the cost, the design/implementation complexity and the memory requirement by a factor of up to 64.
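A minimal sketch of the scaled-background idea, assuming nearest-neighbour scaling and hypothetical image data (the paper also evaluates averaging, bilinear and bicubic scaling, which are not reproduced here):

```python
# Sketch of background subtraction against a downscaled background.
# Nearest-neighbour scaling with factor 8 gives the 64x (8*8) memory
# reduction mentioned above. All image data here is hypothetical.

def downscale_nn(img, factor):
    """Keep every factor-th pixel in both dimensions."""
    return [row[::factor] for row in img[::factor]]

def background_pixel(small_bg, r, c, factor):
    """Regenerate one background pixel on the fly (nearest-neighbour upscale)."""
    return small_bg[r // factor][c // factor]

def subtract(frame, small_bg, factor, threshold):
    """Binary foreground mask: 1 where the frame deviates from the background."""
    return [[1 if abs(frame[r][c] - background_pixel(small_bg, r, c, factor)) > threshold
             else 0
             for c in range(len(frame[0]))]
            for r in range(len(frame))]

bg = [[10] * 8 for _ in range(8)]     # flat 8x8 background
small = downscale_nn(bg, 8)           # stored: 1 pixel instead of 64
frame = [row[:] for row in bg]
frame[3][4] = 200                     # a bright foreground object
mask = subtract(frame, small, 8, threshold=50)
print(sum(sum(row) for row in mask))  # 1
```

The key point is that only the downscaled background is ever stored; full-resolution background values exist only transiently, one pixel at a time, during the subtraction.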

Keyword
wireless vision sensor node, background subtraction, Smart camera, low complexity.
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-19204 (URN), 10.1109/DSD.2013.77 (DOI), 2-s2.0-84890108886 (Scopus ID), STC (Local ID), 978-076955074-9 (ISBN), STC (Archive number), STC (OAI)
Conference
16th Euromicro Conference On Digital System Design; 4-6 Sep 2013; Santander, Spain
Available from: 2013-06-12 Created: 2013-06-12 Last updated: 2016-10-20 Bibliographically approved
7. Architecture of Wireless Visual Sensor Node with Region of Interest Coding
2012 (English) In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012, IEEE conference proceedings, 2012, Art. no. 6474029. Conference paper, Published paper (Refereed)
Abstract [en]

The challenges involved in designing a wireless Vision Sensor Node include the reduction of the processing and communication energy consumption, in order to maximize its lifetime. This work presents an architecture for a wireless Vision Sensor Node which consumes low processing and communication energy. The processing energy consumption is reduced by processing lightweight vision tasks on the VSN and by partitioning the vision tasks between the wireless Vision Sensor Node and the server. The communication energy consumption is reduced with Region Of Interest coding together with a suitable bi-level compression scheme. A number of different processing strategies are investigated in order to realize a wireless Vision Sensor Node with a low energy consumption. The investigation shows that the wireless Vision Sensor Node, using Region Of Interest coding and the CCITT Group 4 compression technique, consumes 43 percent less processing and communication energy as compared to the wireless Vision Sensor Node implemented without Region Of Interest coding. The proposed wireless Vision Sensor Node can achieve a lifetime of 5.4 years, with a sample period of 5 minutes, using 4 AA batteries.
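The Region Of Interest step can be sketched as extracting the bounding box of the foreground pixels so that only that region reaches the bi-level encoder. This is an illustrative reconstruction with hypothetical data; the actual VSN pipeline and the CCITT Group 4 encoder are not reproduced:

```python
# Sketch: bounding-box ROI extraction from a binary image, so that only
# the ROI (plus its coordinates) is passed to the bi-level encoder.
# The binary image here is hypothetical.

def roi_bounding_box(binary):
    """Return (top, left, bottom, right) of the foreground pixels, or None."""
    coords = [(r, c) for r, row in enumerate(binary)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

def crop(binary, box):
    """Cut the ROI out of the full binary image."""
    top, left, bottom, right = box
    return [row[left:right + 1] for row in binary[top:bottom + 1]]

img = [[0] * 6 for _ in range(6)]
img[2][2] = img[2][3] = img[3][2] = 1          # small foreground object
box = roi_bounding_box(img)
print(box)                                      # (2, 2, 3, 3)
roi = crop(img, box)
print(len(roi) * len(roi[0]), "pixels instead of", 36)  # 4 pixels instead of 36
```

Transmitting only the cropped region plus four coordinates, rather than the full frame, is what yields the communication-energy saving described above.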

Place, publisher, year, edition, pages
IEEE conference proceedings, 2012
Keyword
architecture, wireless vision sensor node, Region of interest coding, Smart camera, wireless visual sensor networks, wireless multimedia sensor networks.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-18021 (URN), 10.1109/NESEA.2012.6474029 (DOI), 000319471300019 (), 2-s2.0-84875603760 (Scopus ID), STC (Local ID), 978-146734723-5 (ISBN), STC (Archive number), STC (OAI)
Conference
2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012; Liverpool; United Kingdom; 13 December 2012 through 14 December 2012; Category number CFP12NEE-ART; Code 96291
Available from: 2012-12-19 Created: 2012-12-19 Last updated: 2016-10-20 Bibliographically approved

Open Access in DiVA

Phd_thesis_imran (3594 kB), 703 downloads
File information
File name: FULLTEXT01.pdf, File size: 3594 kB, Checksum SHA-512:
a07c4cb125e84ade0a9570e7557b3bc85a86478d48c5618f81aa8d973843c52f4e0749345a39b1aea3144fc2003b25a95acb383cf8f5d90429d7d053a95eec3c
Type: fulltext, Mimetype: application/pdf

Search in DiVA

By author/editor
Imran, Muhammad
By organisation
Department of Electronics Design
Electrical Engineering, Electronic Engineering, Information Engineering
