Deep learning based technique for enhanced sonar imaging
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL; Saab, SE-581 88 Linköping, Sweden. ORCID iD: 0000-0002-0552-567X
Saab, SE-581 88 Linköping, Sweden; Department of Electrical and Information Technology, Lund University.
Saab, SE-581 88 Linköping, Sweden; Department of Electrical and Information Technology, Lund University.
2019 (English). Conference paper, Published paper (Other academic)
Abstract [en]

Several beamforming techniques can be used to enhance the resolution of sonar images. They fall into two classes: data-independent beamformers, such as the delay-and-sum beamformer, and data-dependent methods known as adaptive beamformers. Adaptive beamformers can often achieve higher resolution but are more sensitive to errors. In synthetic aperture sonar (SAS), signals from several consecutive pings are added coherently, achieving the same effect as a longer physical array; in general, a longer array gives higher image resolution. SAS processing, however, typically requires high navigation accuracy and physical array overlap between pings. This restriction on the displacement between pings limits the area coverage rate of the vehicle carrying the SAS. In this paper we investigate the possibility of enhancing sonar images from single-ping measurements. We do so using a state-of-the-art technique from image-to-image translation, the conditional generative adversarial network (cGAN) Pix2Pix. The cGAN learns a mapping from an input image to an output image, as well as a loss function to train the mapping. We test the concept by training a cGAN on simulated data, going from a short array (low resolution) to a longer array (high resolution). The method is evaluated on measured SAS data collected by Saab with the experimental platform Sapphires in the freshwater Lake Vättern.
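The data-independent delay-and-sum beamformer mentioned in the abstract can be illustrated with a minimal sketch. This is not from the paper: all parameters (array geometry, frequency, sound speed) are hypothetical, and delays are quantized to whole samples for simplicity. A narrowband plane wave is simulated on a short uniform line array, and steering the beamformer over candidate angles produces a power peak at the wave's true arrival direction:

```python
import numpy as np

def delay_and_sum(element_signals, element_positions, angles, fs, c=1500.0):
    """Delay-and-sum beamformer for a uniform line array.

    element_signals: (n_elements, n_samples) time series per hydrophone.
    element_positions: (n_elements,) element positions along the array in metres.
    angles: candidate steering angles in radians (0 = broadside).
    fs: sampling rate in Hz; c: sound speed in water, ~1500 m/s.
    Returns the beam power for each steering angle.
    """
    n_elements, n_samples = element_signals.shape
    powers = []
    for theta in angles:
        # Per-element delay that aligns a plane wave arriving from theta.
        delays = element_positions * np.sin(theta) / c
        shifts = np.round(delays * fs).astype(int)  # quantize to whole samples
        summed = np.zeros(n_samples)
        for sig, s in zip(element_signals, shifts):
            # Advance each channel by its delay, then sum coherently.
            summed += np.roll(sig, -s)
        powers.append(np.mean(summed ** 2))
    return np.array(powers)

# Hypothetical scenario: a 50 kHz tone arriving 20 degrees off broadside.
fs, c, f0 = 500_000.0, 1500.0, 50_000.0
pos = np.arange(16) * 0.015            # 16 elements at half-wavelength spacing
t = np.arange(1024) / fs
true_theta = np.deg2rad(20.0)
sigs = np.array(
    [np.sin(2 * np.pi * f0 * (t - p * np.sin(true_theta) / c)) for p in pos]
)

angles = np.deg2rad(np.linspace(-60, 60, 121))
power = delay_and_sum(sigs, pos, angles, fs, c)
best = np.rad2deg(angles[np.argmax(power)])  # peaks near 20 degrees
```

A longer array (more elements in `pos`) narrows the main lobe of `power`, which is the resolution gain that SAS obtains synthetically by coherently combining consecutive pings.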

Place, publisher, year, edition, pages
2019. p. 1021-1028
Series
Underwater Acoustics Conference and Exhibition, E-ISSN 2408-0195
Keywords [en]
Sonar Imaging, Synthetic Aperture Sonar, Generative Adversarial Networks, Image Enhancement
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-263730
OAI: oai:DiVA.org:kth-263730
DiVA, id: diva2:1369284
Conference
5th Underwater Acoustics Conference & Exhibition (UACE), Hersonissos, Crete, Greece, 30 Jun - 5 Jul 2019
Projects
SMaRC
Note

QC 20191112

Available from: 2019-11-11. Created: 2019-11-11. Last updated: 2019-11-12. Bibliographically approved.

Open Access in DiVA

fulltext (979 kB), 7 downloads
File information
File name: FULLTEXT01.pdf
File size: 979 kB
Checksum (SHA-512): d28c1e63e5ff3f5811747f17764adea7e9c9c244f4670aedd6c3a79dafd387067dfe43cdef17e679318047e80caeb3c9ff8e7dbf299d48dc648980cc2d0763e3
Type: fulltext
Mimetype: application/pdf


By author/editor: Rixon Fuchs, Louise
By organisation: Robotics, Perception and Learning, RPL
