Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data
Linköping University, Department of Electrical Engineering, Computer Vision.
2019 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Semantic segmentation is a key approach to comprehensive image data analysis. It can be applied to analyze 2D images, videos, and even point clouds that contain 3D data points. On the first two problems, CNNs have achieved remarkable progress, but on point cloud segmentation the results are less satisfactory due to challenges such as limited memory resources and the difficulty of annotating 3D points. A research study carried out by the Computer Vision Laboratory at Linköping University aimed to ease the semantic segmentation of 3D point clouds. The idea is that by first projecting the 3D data points to 2D space and then focusing only on the analysis of the resulting 2D images, we can reduce the overall workload of the segmentation process and exploit existing, well-developed 2D semantic segmentation techniques. To improve the performance of CNNs for 2D semantic segmentation, the study used input data derived from different modalities. However, how different modalities can be optimally fused is still an open question. Building on that study, this thesis aims to improve the multistream framework architecture. More concretely, we investigate how different singlestream architectures affect the multistream framework for a given fusion method, and how different fusion methods contribute to the overall performance of a given multistream framework. Our proposed fusion architecture outperformed all the investigated traditional fusion methods. Together with the best singlestream candidate and a few additional training techniques, our final multistream framework obtained a relative gain of 7.3% mIoU over the baseline on the Semantic3D point cloud test set, improving its ranking from 12th to 5th place on the benchmark leaderboard.
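To make the multistream idea concrete, below is a minimal PyTorch-style sketch of a two-stream segmentation network that fuses features from two projected 2D modalities by channel concatenation (a simple traditional fusion baseline of the kind the thesis compares against). The modality names, channel sizes, class count, and fusion-by-concatenation design are illustrative assumptions, not the thesis's proposed fusion architecture.

# Minimal sketch of a two-stream fusion network for 2D semantic
# segmentation of projected point-cloud data. Not the thesis's proposed
# architecture; it only illustrates the general multistream idea:
# one encoder per modality, features fused before a shared head.
# Modalities, channel sizes, and class count are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoStreamSegNet(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        # One singlestream encoder per modality, e.g. projected RGB and a
        # projected geometry map (depth/normals); 3 channels each here.
        self.rgb_encoder = conv_block(3, 64)
        self.geom_encoder = conv_block(3, 64)
        # Fusion by channel concatenation followed by a 1x1 convolution
        # (a simple "traditional" fusion baseline).
        self.fuse = nn.Conv2d(128, 64, kernel_size=1)
        # Per-pixel classification head producing class logits.
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, rgb, geom):
        f_rgb = self.rgb_encoder(rgb)
        f_geom = self.geom_encoder(geom)
        fused = self.fuse(torch.cat([f_rgb, f_geom], dim=1))
        return self.head(fused)  # (N, num_classes, H, W) logits


if __name__ == "__main__":
    model = TwoStreamSegNet(num_classes=8)
    rgb = torch.randn(1, 3, 128, 128)
    geom = torch.randn(1, 3, 128, 128)
    print(model(rgb, geom).shape)  # torch.Size([1, 8, 128, 128])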

Place, publisher, year, edition, pages
2019, p. 70
Keywords [en]
deep learning, multimodal fusion, multimodality, semantic segmentation, point cloud segmentation
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-157705; ISRN: LiTH-ISY-EX--19/5190--SE; OAI: oai:DiVA.org:liu-157705; DiVA id: diva2:1327473
Subject / course
Computer Engineering
Presentation
2019-02-28, Linköping, 13:00 (English)
Supervisor
Examiner
Available from: 2019-06-19. Created: 2019-06-19. Last updated: 2019-06-19. Bibliographically checked.

Open Access in DiVA

fulltext (10484 kB), 73 downloads
File information
File: FULLTEXT01.pdf; File size: 10484 kB; Checksum: SHA-512
3965175ce2113914acf5dbd20ce7f9e64174e613f35e1c963e221f980929bdd7f4eebcad3f10901aee5e46409d670fdd34b270f5027e2f90076d6c85089c53ec
Type: fulltext; MIME type: application/pdf
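The SHA-512 checksum listed above can be used to verify a downloaded copy of the full text. A minimal Python sketch follows, assuming the file has been saved locally as FULLTEXT01.pdf (the local path is an assumption; the expected digest is the one given in this record).

# Verify a downloaded copy of the full text against the SHA-512 checksum
# listed in the record. The local filename is an assumption.
import hashlib

EXPECTED_SHA512 = (
    "3965175ce2113914acf5dbd20ce7f9e64174e613f35e1c963e221f980929bdd7"
    "f4eebcad3f10901aee5e46409d670fdd34b270f5027e2f90076d6c85089c53ec"
)


def sha512_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-512 hex digest of a file, reading it in chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    digest = sha512_of_file("FULLTEXT01.pdf")  # assumed local filename
    print("match" if digest == EXPECTED_SHA512 else "mismatch")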

Author
He, Linbo
Total: 73 downloads
The number of downloads is the sum of all downloads of all full texts. It may, for example, include earlier versions that are no longer available.

Total: 230 hits