Investigation of intelligence partitioning in wireless visual sensor networks
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
ORCID iD: 0000-0002-6484-9260
2011 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

A wireless visual sensor network is an emerging type of network formed by deploying many visual sensor nodes in the field, where each node contains an image sensor, an on-board processor, memory and a wireless transceiver. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless visual sensor networks operate on two-dimensional data, which requires higher processing power and communication bandwidth. Research in the field has focused on two extremes: either sending raw data to the central base station without local processing, or conducting all processing locally at the visual sensor node and transmitting only the final results.

This research work focuses on determining an optimal point of hardware/software partitioning at the visual sensor node, as well as on partitioning tasks between local and central processing, based on the minimum energy consumption of the vision processing tasks. Different possibilities for partitioning the vision processing tasks between hardware, software and locality have been explored for the implementation of the visual sensor node used in wireless visual sensor networks. The effect of packet relaying and node density on the energy consumption and implementation of an individual node in a multi-hop wireless visual sensor network has also been explored.

The lifetime of the visual sensor node is predicted by evaluating the energy requirement of an embedded platform that combines a Field Programmable Gate Array (FPGA) and a micro-controller for the implementation of the visual sensor node, while also taking into account the energy required for receiving and forwarding the packets of other nodes in the multi-hop network.

Advancements in FPGAs have motivated their choice as the vision processing platform for the visual sensor node, owing to their reduced time-to-market, low Non-Recurring Engineering (NRE) cost and programmability compared to ASICs. The other part of the node architecture is the SENTIO32 platform, which performs the vision processing in the software implementation of the visual sensor node and communicates the results to the central base station in the hardware implementation (using the RF transceiver embedded in SENTIO32).
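The lifetime prediction described above can be illustrated with a short sketch. The battery capacity, per-frame energies, frame rate and relay load below are hypothetical placeholders rather than values from the thesis; the structure only shows how local processing energy and the energy spent receiving/forwarding other nodes' packets in a multi-hop network combine into a node-lifetime estimate.

```python
# Hypothetical sketch of the lifetime estimate described above; all numbers
# are illustrative placeholders, not measurements from the thesis.

BATTERY_J = 4.0 * 3600 * 3.0  # e.g. a 4 Ah battery at 3 V, expressed in joules (assumed)

def node_lifetime_days(e_proc_j, e_tx_j, e_relay_rxtx_j, relayed_packets_per_hour,
                       frames_per_hour=60.0):
    """Estimate node lifetime from per-frame energy plus multi-hop relay overhead."""
    # Energy the node spends on one of its own frames: local vision
    # processing plus transmitting its own result packet.
    own_frame_j = e_proc_j + e_tx_j
    # Energy spent receiving and forwarding packets of other nodes in the
    # same interval; this grows with node density and hop count.
    relay_j = relayed_packets_per_hour * e_relay_rxtx_j
    per_hour_j = frames_per_hour * own_frame_j + relay_j
    return BATTERY_J / per_hour_j / 24.0

# Example: FPGA-assisted processing with a modest relay load (all assumed values).
print(round(node_lifetime_days(e_proc_j=0.05, e_tx_j=0.20,
                               e_relay_rxtx_j=0.25,
                               relayed_packets_per_hour=120), 1))
```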

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2011. 108 p.
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 65
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-14445
Local ID: STC
ISBN: 978-91-86694-44-9 (print)
OAI: oai:DiVA.org:miun-14445
DiVA: diva2:438932
Supervisors
Available from: 2011-09-06 Created: 2011-09-05 Last updated: 2016-10-19. Bibliographically approved.
List of papers
1. Exploration of Local and Central Processing for a Wireless Camera Based Sensor Node
2010 (English). In: International Conference on Signals and Electronic Systems, ICSES'10 - Conference Proceeding 2010, Article number 5595231, IEEE conference proceedings, 2010, 147-150 p. Conference paper, Published paper (Refereed)
Abstract [en]

A wireless vision sensor network is an emerging type of network that combines an image sensor, on-board computation and a communication link in each node. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires both higher processing power and higher communication bandwidth. Research in the field has been based on two different assumptions: either sending data to the central base station without local processing, or conducting all processing locally at the sensor node and transmitting only the final results. In this paper we focus on determining an optimal point for intelligence partitioning between the sensor node and the central base station, and on exploring compression methods. The lifetime of the visual sensor node is predicted by evaluating the energy consumption for different levels of intelligence partitioning at the sensor node. Our results show that sending compressed images after segmentation results in a longer lifetime for the sensor node.
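As a rough illustration of the trade-off this paper evaluates, the sketch below compares per-frame energy for two partitioning choices: transmitting the raw frame, versus segmenting and compressing locally before transmission. The energy figures, frame size and compression ratio are assumed for illustration and are not results from the paper.

```python
# Hypothetical per-frame energy comparison for two partitioning points;
# all numbers are placeholders, not results from the paper.

E_TX_PER_BYTE_J = 2e-6       # radio energy per transmitted byte (assumed)
RAW_IMAGE_BYTES = 640 * 480  # one 8-bit grayscale frame (assumed resolution)

def energy_send_raw():
    # No local processing: the whole frame goes over the radio.
    return RAW_IMAGE_BYTES * E_TX_PER_BYTE_J

def energy_segment_and_compress(e_segmentation_j=0.03, e_compression_j=0.02,
                                compression_ratio=20.0):
    # Pay for local segmentation and compression, then send a much smaller payload.
    payload_bytes = RAW_IMAGE_BYTES / compression_ratio
    return e_segmentation_j + e_compression_j + payload_bytes * E_TX_PER_BYTE_J

print(f"raw: {energy_send_raw():.3f} J/frame, "
      f"segmented+compressed: {energy_segment_and_compress():.3f} J/frame")
```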

Place, publisher, year, edition, pages
IEEE conference proceedings, 2010
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-12602 (URN)
000299392000035 ()
2-s2.0-78649273063 (Scopus ID)
978-839047434-2 (ISBN)
978-1-4244-5307-8 (ISBN)
Conference
International Conference on Signals and Electronic Systems, ICSES'10; Gliwice; 7 September 2010 through 10 September 2010; Category number CFP1057D-ART; Code 82386
Available from: 2010-12-13 Created: 2010-12-13 Last updated: 2013-11-11. Bibliographically approved.
2. Exploration of tasks partitioning between hardware software and locality for a wireless camera based vision sensor node
2011 (English). In: Proceedings - 6th International Symposium on Parallel Computing in Electrical Engineering, PARELEC 2011, IEEE conference proceedings, 2011, 127-132 p. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we explore different possibilities for partitioning the tasks between hardware, software and locality for the implementation of a vision sensor node used in a wireless vision sensor network. A wireless vision sensor network is an emerging type of network that combines an image sensor, on-board computation and a communication link in each node. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires higher processing power and communication bandwidth. Research in the field has been based on two different assumptions: either sending raw data to the central base station without local processing, or conducting all processing locally at the sensor node and transmitting only the final results. Our work focuses on determining an optimal point of hardware/software partitioning, as well as partitioning between local and central processing, based on the minimum energy consumption of the vision processing operations. The lifetime of the vision sensor node is predicted by evaluating the energy requirement of an embedded platform that combines an FPGA and a micro-controller for the implementation of the vision sensor node. Our results show that sending compressed images after the pixel-based tasks results in a longer battery lifetime with a reasonable hardware cost for the vision sensor node. © 2011 IEEE.
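A minimal sketch of the kind of exploration described above: choose the cut point in a vision pipeline that minimises per-frame energy, where tasks before the cut run on the node and the intermediate result is transmitted to the base station. The task names loosely follow the abstract, but the per-task energies and data sizes are invented for illustration only.

```python
# Hypothetical exploration of the local/central cut point; energies (J/frame)
# and intermediate data sizes (bytes) are invented for illustration only.

E_TX_PER_BYTE_J = 2e-6

# (task, local energy per frame in J, size of the task's output in bytes)
PIPELINE = [
    ("capture",        0.010, 307_200),
    ("segmentation",   0.030,  38_400),
    ("compression",    0.020,   2_000),
    ("labeling",       0.040,     400),
    ("classification", 0.050,      40),
]

def best_cut():
    """Return (cut, energy): tasks [0:cut] run locally, the rest centrally."""
    best = None
    for cut in range(1, len(PIPELINE) + 1):
        local_j = sum(e for _, e, _ in PIPELINE[:cut])
        # Ship the output of the last locally executed task to the base station.
        tx_j = PIPELINE[cut - 1][2] * E_TX_PER_BYTE_J
        total = local_j + tx_j
        if best is None or total < best[1]:
            best = (cut, total)
    return best

cut, energy = best_cut()
print("run locally:", [name for name, _, _ in PIPELINE[:cut]],
      "energy:", round(energy, 4), "J/frame")
```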

Place, publisher, year, edition, pages
IEEE conference proceedings, 2011
Keyword
Hardware/Software Partitioning, Image Processing, Reconfigurable Architecture, Vision Sensor Node, Wireless Vision Sensor Networks, Battery life time, Communication bandwidth, Compressed images, Embedded platforms, Energy requirements, Hardware cost, Hardware/software partitioning, Local processing, Minimum energy, Optimal points, Partitioning, Processing power, Vision processing, Vision sensors, Wireless cameras, Work Focus, Computer hardware, Electrical engineering, Energy utilization, Engineering research, Field programmable gate arrays (FPGA), Parallel architectures, Sensors, Telecommunication equipment, Telecommunication systems, Wireless networks, Sensor nodes
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-14195 (URN)
10.1109/PARELEC.2011.21 (DOI)
2-s2.0-79958725347 (Scopus ID)
STC (Local ID)
9780769543970 (ISBN)
STC (Archive number)
STC (OAI)
Conference
6th International Symposium on Parallel Computing in Electrical Engineering, PARELEC 2011; Luton; 4 April 2011 through 5 April 2011; Category number E4397; Code 85105
Available from: 2011-07-19 Created: 2011-07-19 Last updated: 2016-10-19. Bibliographically approved.
3. Exploration of Target Architecture for a Wireless Camera Based Sensor Node
2010 (English). In: 28th Norchip Conference, NORCHIP 2010, IEEE conference proceedings, 2010, 1-4 p. Conference paper, Published paper (Refereed)
Abstract [en]

The challenges associated with wireless vision sensor networks are the need for low energy consumption, limited bandwidth and limited processing capabilities, and different approaches have been proposed to meet them. Research in wireless vision sensor networks has focused on two different assumptions: the first is sending all data to the central base station without local processing, while the second is conducting all processing locally at the sensor node and transmitting only the final results. Our research focuses on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we add a further exploration dimension by performing some of the vision tasks, such as image capture, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA, while the communication is handled by a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble removal and classification, are processed at the central base station. Our results show that introducing an FPGA for some of the visual tasks results in a longer lifetime for the visual sensor node while the architecture remains programmable.
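To make the comparison concrete, the sketch below tallies per-frame energy for a software-only node (all front-end tasks on the microcontroller) against the FPGA-assisted node described above. The per-task energy figures and the radio cost are placeholders, not measurements from the paper.

```python
# Hypothetical comparison of the two node architectures discussed above;
# the per-task energy figures (J/frame) are placeholders, not measurements.

# Front-end tasks run on the node; the remaining tasks run at the base station.
NODE_TASKS = ["capture", "background_subtraction", "segmentation", "tiff_g4_compression"]

E_SW_J = dict(zip(NODE_TASKS, (0.02, 0.12, 0.10, 0.08)))  # microcontroller only (assumed)
E_HW_J = dict(zip(NODE_TASKS, (0.02, 0.01, 0.01, 0.01)))  # offloaded to the FPGA (assumed)
E_RADIO_J = 0.03  # transmitting the compressed result over the radio (assumed, same for both)

sw_total = sum(E_SW_J.values()) + E_RADIO_J
hw_total = sum(E_HW_J.values()) + E_RADIO_J
print(f"software node: {sw_total:.2f} J/frame, FPGA-assisted node: {hw_total:.2f} J/frame")
```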

Place, publisher, year, edition, pages
IEEE conference proceedings, 2010
National Category
Computer and Information Science
Identifiers
urn:nbn:se:miun:diva-12503 (URN)
10.1109/NORCHIP.2010.5669490 (DOI)
2-s2.0-78751533214 (Scopus ID)
978-142448973-2 (ISBN)
Conference
28th Norchip Conference, NORCHIP 2010; Tampere; 15 November 2010 through 16 November 2010; Category number CFP10828-ART; Code 83479
Available from: 2010-12-13 Created: 2010-12-09 Last updated: 2013-11-11. Bibliographically approved.

Open Access in DiVA

fulltext (1460 kB)