Energy Efficient and Programmable Architecture for Wireless Vision Sensor Node
Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. ORCID iD: 0000-0003-1923-3843
2013 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) are an emerging field which has attracted a number of potential applications because of low per-node cost, ease of deployment, scalability and low-power stand-alone solutions. WVSNs consist of a number of wireless Vision Sensor Nodes (VSNs). A VSN has limited resources, such as its embedded processing platform, power supply, wireless radio and memory. In the presence of these limited resources, a VSN is expected to perform complex vision tasks for a long duration of time without battery replacement or recharging. Currently, the reduction of processing and communication energy consumption is a major challenge for battery-operated VSNs. Another challenge is to propose generic solutions for a VSN so as to make them suitable for a number of applications.

To meet these challenges, this thesis focuses on an energy-efficient and programmable VSN architecture for machine vision systems which can classify objects based on binary data. In order to facilitate generic solutions, a taxonomy has been developed together with a complexity model, which can be used for the classification and comparison of systems without the need for actual implementation. The proposed VSN architecture is based on task partitioning between a VSN and a server, as well as task partitioning locally on the node between software and hardware platforms. In relation to task partitioning, the effects on processing and communication energy consumption, design complexity and lifetime have been investigated.

The investigation shows that the strategy in which front-end tasks up to segmentation, accompanied by bi-level coding, are implemented on a Field Programmable Gate Array (FPGA) with small sleep power offers a generalized, low-complexity and energy-efficient VSN architecture. The implementation of data-intensive front-end tasks on a hardware-reconfigurable platform reduces the processing energy. However, there is still scope for reducing the communication energy related to the output data. This thesis therefore also explores data reduction techniques, including image coding, region of interest coding and change coding, which reduce the output data significantly.
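The coding schemes named above are not defined in this record. As a purely illustrative sketch, change coding over bi-level frames can be pictured as transmitting only the pixels that differ between consecutive frames, which compresses well when the scene is mostly static. The XOR-plus-run-length formulation below is an assumption made for illustration, not the scheme used in the thesis.

```python
import numpy as np

def change_code(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Pixel-wise change mask between two bi-level (0/1) frames."""
    return np.bitwise_xor(prev_frame, curr_frame)

def run_length_encode(bits: np.ndarray) -> list[tuple[int, int]]:
    """Very simple run-length encoding of a flattened bit array: (value, run length)."""
    flat = bits.ravel()
    runs, start = [], 0
    for i in range(1, flat.size + 1):
        if i == flat.size or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

# A mostly static scene produces a sparse change mask, hence a handful of long runs
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3:5, 3:5] = 1                      # a small object appears between frames
mask = change_code(prev, curr)
print(len(run_length_encode(mask)), "runs instead of", mask.size, "pixels")
```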

For proof of concept, the VSN architecture, together with task partitioning, bi-level video coding, duty cycling and a low-complexity background subtraction technique, has been implemented on real hardware, and its functionality has been verified for four applications: a particle detection system, remote meter reading, bird detection and people counting. The results, based on measured energy values, show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 compared to currently published VSNs. The lifetime based on measured energy values shows that, for a sample period of 5 minutes, the VSN can achieve a lifetime of 3.2 years with a battery of 37.44 kJ energy. In addition, the proposed VSN offers a generic architecture with lower design complexity on a hardware-reconfigurable platform and is easy to adapt to a number of applications compared to published systems.
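As a rough cross-check of the reported figures, the implied average power draw can be derived from the 37.44 kJ battery budget and the 3.2-year lifetime quoted above; the per-sample energy at a 5-minute sample period then follows. This is a back-of-the-envelope sketch using only the numbers in the abstract; the 365.25-day year is an assumption.

```python
# Back-of-the-envelope check of the lifetime figures quoted in the abstract.
battery_energy_j = 37.44e3                 # 37.44 kJ battery budget (from the abstract)
lifetime_s = 3.2 * 365.25 * 24 * 3600      # 3.2 years, assuming 365.25-day years
sample_period_s = 5 * 60                   # one sample every 5 minutes

avg_power_w = battery_energy_j / lifetime_s          # ~0.37 mW average draw
energy_per_sample_j = avg_power_w * sample_period_s  # ~0.11 J per 5-minute cycle

print(f"average power  : {avg_power_w * 1e3:.2f} mW")
print(f"energy / sample: {energy_per_sample_j:.3f} J")
```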

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2013, p. 115
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 167
Keywords [en]
Wireless Vision Sensor Node, Smart camera, Wireless Vision Sensor Networks, Architecture, Video coding.
National subject category
Electrical Engineering and Electronics
Identifiers
URN: urn:nbn:se:miun:diva-20179; Local ID: STC; ISBN: 978-91-87557-12-5 (print); OAI: oai:DiVA.org:miun-20179; DiVA id: diva2:663257
Public defence
2013-10-22, M108, Holmgatan 10, SE-851 70 Sundsvall, 10:03 (English)
Opponent
Supervisors
Research funder
Knowledge Foundation (KK-stiftelsen). Available from: 2013-11-11. Created: 2013-11-11. Last updated: 2016-10-20. Bibliographically approved
List of papers
1. Analysis and Characterization of Embedded Vision Systems for Taxonomy Formulation
2013 (English). In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Nasser Kehtarnavaz, Matthias F. Carlsohn, USA: SPIE - International Society for Optical Engineering, 2013, Art. no. 86560J. Conference paper, published paper (refereed)
Abstract [en]

The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark which aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the wide range of use cases, environmental factors, and available technologies. However, common characteristics can be identified in order to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize the vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combination, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated by using a quantitative parameter, which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of the systems. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of systems.

Place, publisher, year, edition, pages
USA: SPIE - International Society for Optical Engineering, 2013
Series
Proceedings of SPIE, ISSN 0277-786X ; 8656
Keywords
System taxonomy, Smart cameras, Embedded vision systems, Wireless vision sensor networks
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:miun:diva-16035 (URN); 10.1117/12.2000584 (DOI); 000333051900018; 2-s2.0-84875855354 (Scopus ID); STC (Local ID); 978-0-8194-9429-0 (ISBN); STC (Archive number); STC (OAI)
Conference
Real-Time Image and Video Processing 2013; Burlingame, CA; United States; 6 February 2013 through 7 February 2013; Code 96385
Available from: 2013-02-05. Created: 2012-03-30. Last updated: 2016-10-20. Bibliographically approved
2. Complexity Analysis of Vision Functions for implementation of Wireless Smart Cameras using System Taxonomy
2012 (English). In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, Art. no. 84370C. Conference paper, published paper (refereed)
Abstract [en]

There are a number of challenges caused by the large amount of data and the limited resources, such as memory, processing capability, energy consumption and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a need for a tool which can predict the resource requirements for the development and comparison of vision solutions in wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of wireless smart cameras have common functions. In this paper, we have investigated the arithmetic complexity and memory requirements of vision functions by using the system taxonomy, and we have proposed an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems with it and have shown that the complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions. Moreover, it will assist researchers and designers in predicting the resource requirements for different classes of vision systems in a reduced time and with little effort.
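As an illustration of what an abstract complexity model of this kind can look like, the sketch below tallies per-pixel arithmetic operations and line-buffer memory over a chain of taxonomy-level vision functions. The function names, the per-function costs in FUNCTION_COSTS and the frame parameters are hypothetical placeholders chosen for the example; they are not values from the paper.

```python
# Hypothetical per-pixel operation counts and memory (in image rows) for a few
# taxonomy-level vision functions; the numbers are illustrative only.
FUNCTION_COSTS = {
    "background_subtraction": {"ops_per_pixel": 2, "buffer_rows": 0},
    "segmentation":           {"ops_per_pixel": 1, "buffer_rows": 0},
    "morphology_3x3":         {"ops_per_pixel": 9, "buffer_rows": 2},
    "bi_level_coding":        {"ops_per_pixel": 1, "buffer_rows": 1},
}

def estimate(chain, width, height, fps):
    """Rough arithmetic complexity (ops/s) and line-buffer memory (bytes) for a function chain."""
    pixels_per_frame = width * height
    ops_per_s = sum(FUNCTION_COSTS[f]["ops_per_pixel"] for f in chain) * pixels_per_frame * fps
    memory_bytes = sum(FUNCTION_COSTS[f]["buffer_rows"] for f in chain) * width  # 1 byte/pixel
    return ops_per_s, memory_bytes

ops, mem = estimate(["background_subtraction", "segmentation", "morphology_3x3",
                     "bi_level_coding"], width=640, height=480, fps=1)
print(f"{ops / 1e6:.2f} Mops/s, {mem} bytes of line buffers")
```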

Place, publisher, year, edition, pages
Belgium: SPIE - International Society for Optical Engineering, 2012
Series
Proceedings of SPIE, ISSN 0277-786X ; 8437
Keywords
wireless smart camera, complexity analysis, system taxonomy, comparison, resource requirements
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:miun:diva-16036 (URN); 10.1117/12.923797 (DOI); 000305693900010; 2-s2.0-84861946720 (Scopus ID); STC (Local ID); 978-0-8194-9129-9 (ISBN); STC (Archive number); STC (OAI)
Conference
Real-Time Image and Video Processing 2012; Brussels; 19 April 2012 through 19 April 2012; Code 90041
Available from: 2012-03-30. Created: 2012-03-30. Last updated: 2016-10-20. Bibliographically approved
3. Architecture Exploration Based on Tasks Partitioning Between Hardware, Software and Locality for a Wireless Vision Sensor Node
2012 (English). In: International Journal of Distributed Systems and Technologies, ISSN 1947-3532, E-ISSN 1947-3540, Vol. 3, no. 2, p. 58-71. Article in journal (refereed). Published
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) are an emerging field consisting of a number of Visual Sensor Nodes (VSNs). Compared to traditional sensor networks, WVSNs operate on two-dimensional data, which requires high bandwidth and leads to high energy consumption. In order to minimize the energy consumption, the focus is on finding energy-efficient and programmable architectures for the VSN by partitioning the vision tasks among hardware (FPGA), software (microcontroller) and locality (sensor node or server). The energy consumption, cost and design time of different processing strategies are analyzed for the implementation of the VSN. Moreover, the processing energy and communication energy consumption of the VSN are investigated in order to maximize the lifetime. Results show that introducing a reconfigurable platform, such as an FPGA with small static power consumption, and transmitting compressed images from the VSN after the pixel-based tasks result in a longer battery lifetime for the VSN.
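The partitioning trade-off described in this abstract can be made concrete with a simple per-frame energy model: the node pays processing energy for every task it runs locally, plus radio energy proportional to the amount of data left to transmit after the last local task. The task list, energies and data volumes in the sketch below are invented placeholders used only to show the bookkeeping; the article itself reports measured values.

```python
# Illustrative energy bookkeeping for different partition points on the node.
# All numbers are placeholders; the article reports measured values instead.
TASKS = [  # (name, processing energy on node [J], output data after this task [bytes])
    ("capture",         0.020, 307200),   # raw grayscale frame
    ("pre_processing",  0.015, 307200),
    ("segmentation",    0.010,  38400),   # bi-level image, 1 bit/pixel
    ("bi_level_coding", 0.005,   4000),   # compressed bi-level image
]
TX_ENERGY_PER_BYTE = 2e-6  # J/byte over the radio (placeholder)

def node_energy(partition_after: int) -> float:
    """Energy per frame if TASKS[0..partition_after] run on the node and the rest on the server."""
    processing = sum(e for _, e, _ in TASKS[: partition_after + 1])
    tx_bytes = TASKS[partition_after][2]          # transmit the output of the last local task
    return processing + tx_bytes * TX_ENERGY_PER_BYTE

for i, (name, _, _) in enumerate(TASKS):
    print(f"partition after {name:<15}: {node_energy(i) * 1e3:7.2f} mJ/frame")
```

With these placeholder numbers the per-frame cost drops sharply once the node transmits the compressed bi-level output instead of raw pixels, which is the qualitative trend the abstract describes.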

Place, publisher, year, edition, pages
IGI Global, USA, 2012
Keywords
Wireless Vision Sensor Networks; Vision Sensor Node; Hardware/Software Partitioning; Reconfigurable Architecture; Image Processing.
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-14940 (URN); 10.4018/jdst.2012040104 (DOI); 2-s2.0-84880522514 (Scopus ID)
Project
Onparticle detection
Available from: 2012-01-04. Created: 2011-11-27. Last updated: 2017-12-08. Bibliographically approved
4. Implementation of Wireless Vision Sensor Node for Characterization of Particles in Fluids
2012 (English). In: IEEE Transactions on Circuits and Systems for Video Technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 22, no. 11, p. 1634-1643. Article in journal (refereed). Published
Abstract [en]

Wireless Vision Sensor Networks (WVSNs) have a number of wireless Vision Sensor Nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth, and limited memory and processing capabilities. In order to meet these challenges, our research focuses on the exploration of energy-efficient reconfigurable architectures for the VSN. In this work, the design and research challenges associated with the implementation of the VSN on different computational platforms, such as a microcontroller, an FPGA and a server, are explored. In relation to this, the effects on the energy consumption and the design complexity at the node, when the functionality is moved from one platform to another, are analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture in which the compressed images are transmitted after the pixel-based operations realizes a WVSN system with low energy consumption. Moreover, the complex post-processing tasks are moved to a server, which has fewer constraints.

Keywords
Reconfigurable architecture, Image processing, Wireless vision sensor networks, Wireless vision sensor node.
National subject category
Other Electrical Engineering and Electronics; Electrical Engineering and Electronics
Identifiers
urn:nbn:se:miun:diva-14389 (URN); 10.1109/TCSVT.2012.2202189 (DOI); 000313971700010; 2-s2.0-84875631744 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2011-08-24. Created: 2011-08-24. Last updated: 2017-12-08. Bibliographically approved
5. Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding
2013 (English). In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, ISSN 2156-3357, Vol. 3, no. 2, p. 198-209, article id 6508941. Article in journal (refereed). Published
Abstract [en]

Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of an individual VSN have been a challenge because of the limited energy availability. To meet this challenge, we have proposed and implemented a programmable and energy efficient VSN architecture which has lower energy requirements and a reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware implemented VSN and a server. The initial data dominated tasks are implemented on the VSN, while the control dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications, and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to a VSN without the bi-level video coding. The proposed VSN offers an energy efficient, generic architecture with smaller design complexity on a hardware reconfigurable platform and offers easy adaptation for a number of applications as compared to published systems.

Place, publisher, year, edition, pages
IEEE Press, 2013
Keywords
Architecture, smart camera, video coding, wireless vision sensor networks (WVSNs), wireless vision sensor node (VSN)
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:miun:diva-19193 (URN); 10.1109/JETCAS.2013.2256816 (DOI); 000337789200009; 2-s2.0-84879076204 (Scopus ID); STC (Local ID); STC (Archive number); STC (OAI)
Available from: 2013-06-12. Created: 2013-06-12. Last updated: 2016-10-20. Bibliographically approved
6. Low Complexity Background Subtraction for Wireless Vision Sensor Node
2013 (English). In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013, 2013, p. 681-688. Conference paper, published paper (refereed)
Abstract [en]

Wireless vision sensor nodes have limited resources in terms of energy, memory, wireless bandwidth and processing. It therefore becomes necessary to investigate lightweight vision tasks. To highlight the foreground objects, many machine vision applications depend on the background subtraction technique. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory. This raises issues such as complexity on the hardware platform, energy requirements and latency. This work presents a low-complexity background subtraction technique for a hardware implemented VSN. The proposed technique utilizes existing image scaling techniques for scaling down the image. The downscaled image is stored in the memory of the microcontroller, which is already present for transmission. For the subtraction operation, the background pixels are regenerated in real time through upscaling. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with the lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object compared to a system which uses the full original background image. The proposed approach reduces the cost, the design/implementation complexity and the memory requirement by a factor of up to 64.
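The memory saving quoted above follows directly from keeping only a downscaled background: with a scale factor of 8, each 8 x 8 block collapses to one stored pixel, i.e. a 64x reduction. The sketch below illustrates this idea with nearest-neighbour down/upscaling, which is one of the four techniques listed; the SCALE value is from the abstract, but the threshold, image size and test pattern are illustrative assumptions rather than parameters from the paper.

```python
import numpy as np

SCALE = 8  # scale factor of 8 -> 8*8 = 64x less background memory

def downscale_nn(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour downscaling: keep one pixel per SCALE x SCALE block."""
    return img[::SCALE, ::SCALE].copy()

def upscaled_background_pixel(bg_small: np.ndarray, row: int, col: int) -> int:
    """Regenerate a background pixel on the fly by nearest-neighbour upscaling."""
    return int(bg_small[row // SCALE, col // SCALE])

def subtract(frame: np.ndarray, bg_small: np.ndarray, threshold: int = 20) -> np.ndarray:
    """Bi-level foreground mask computed against the upscaled background."""
    mask = np.zeros(frame.shape, dtype=np.uint8)
    for r in range(frame.shape[0]):
        for c in range(frame.shape[1]):
            if abs(int(frame[r, c]) - upscaled_background_pixel(bg_small, r, c)) > threshold:
                mask[r, c] = 1
    return mask

background = np.full((64, 64), 100, dtype=np.uint8)
bg_small = downscale_nn(background)            # 8x8 stored instead of 64x64 -> 64x smaller
frame = background.copy()
frame[16:24, 16:24] = 200                      # a bright object enters the scene
print(subtract(frame, bg_small).sum(), "foreground pixels detected")
```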

Keywords
wireless vision sensor node, background subtraction, Smart camera, low complexity.
National subject category
Other Electrical Engineering and Electronics
Identifiers
urn:nbn:se:miun:diva-19204 (URN); 10.1109/DSD.2013.77 (DOI); 2-s2.0-84890108886 (Scopus ID); STC (Local ID); 978-076955074-9 (ISBN); STC (Archive number); STC (OAI)
Conference
16th Euromicro Conference on Digital System Design; 4-6 Sep 2013; Santander, Spain
Available from: 2013-06-12. Created: 2013-06-12. Last updated: 2016-10-20. Bibliographically approved
7. Architecture of Wireless Visual Sensor Node with Region of Interest Coding
2012 (English). In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012, IEEE conference proceedings, 2012, Art. no. 6474029. Conference paper, published paper (refereed)
Abstract [en]

The challenges involved in designing a wireless Vision Sensor Node include the reduction in processing and communication energy consumption, in order to maximize its lifetime. This work presents an architecture for a wireless Vision Sensor Node which consumes low processing and communication energy. The processing energy consumption is reduced by processing lightweight vision tasks on the VSN and by partitioning the vision tasks between the wireless Vision Sensor Node and the server. The communication energy consumption is reduced with Region Of Interest coding together with a suitable bi-level compression scheme. A number of different processing strategies are investigated to realize a wireless Vision Sensor Node with a low energy consumption. The investigation shows that the wireless Vision Sensor Node, using Region Of Interest coding and the CCITT Group 4 compression technique, consumes 43 percent lower processing and communication energy as compared to the wireless Vision Sensor Node implemented without Region Of Interest coding. The proposed wireless Vision Sensor Node can achieve a lifetime of 5.4 years, with a sample period of 5 minutes, by using 4 AA batteries.
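For orientation, the 5.4-year lifetime on 4 AA batteries quoted above can be translated into an implied average power budget. The per-cell energy assumed below (about 2600 mAh at 1.5 V, a typical alkaline figure) is our assumption and not a value taken from the paper, so the result is only a rough order-of-magnitude sketch.

```python
# Implied average power budget for the quoted 5.4-year lifetime on 4 AA cells.
# The per-cell energy is an assumed typical alkaline value, not from the paper.
cell_energy_j = 2.6 * 1.5 * 3600       # ~2600 mAh at 1.5 V  ->  ~14 kJ per cell (assumption)
battery_energy_j = 4 * cell_energy_j   # ~56 kJ for 4 AA cells
lifetime_s = 5.4 * 365.25 * 24 * 3600  # 5.4 years

avg_power_w = battery_energy_j / lifetime_s
energy_per_sample_j = avg_power_w * 5 * 60   # 5-minute sample period
print(f"~{avg_power_w * 1e3:.2f} mW average, ~{energy_per_sample_j:.2f} J per 5-minute sample")
```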

Place, publisher, year, edition, pages
IEEE conference proceedings, 2012
Keywords
architecture, wireless vision sensor node, Region of interest coding, Smart camera, wireless visual sensor networks, wireless multimedia sensor networks.
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:miun:diva-18021 (URN); 10.1109/NESEA.2012.6474029 (DOI); 000319471300019; 2-s2.0-84875603760 (Scopus ID); STC (Local ID); 978-146734723-5 (ISBN); STC (Archive number); STC (OAI)
Conference
2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012; Liverpool; United Kingdom; 13 December 2012 through 14 December 2012; Category number CFP12NEE-ART; Code 96291
Available from: 2012-12-19. Created: 2012-12-19. Last updated: 2016-10-20. Bibliographically approved

Open Access in DiVA

Phd_thesis_imran (3594 kB), 1113 downloads
File information
File name: FULLTEXT01.pdf; File size: 3594 kB; Checksum: SHA-512
a07c4cb125e84ade0a9570e7557b3bc85a86478d48c5618f81aa8d973843c52f4e0749345a39b1aea3144fc2003b25a95acb383cf8f5d90429d7d053a95eec3c
Type: fulltext; Mimetype: application/pdf
