1 - 4 of 4
  • 1.
    Li, Yongwei
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Olsson, Roger
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    An analysis of demosaicing for plenoptic capture based on ray optics (2018). In: Proceedings of 3DTV Conference 2018, 2018, article id 8478476. Conference paper (Refereed)
    Abstract [en]

    The plenoptic camera is gaining more and more attention as it captures the 4D light field of a scene with a single shot and enables a wide range of post-processing applications. However, the preprocessing steps for captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes which are developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras by ray-tracing techniques and ray phase space analysis. The goal of this work is to demonstrate guidelines and principles for demosaicing the plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and the desired demosaicing scheme is depth-dependent.

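What follows is a minimal flatland (1D sensor) ray-optics sketch in the spirit of the analysis in record 1, not the authors' model: it back-projects Bayer-coded pixels of an idealized plenoptic 1.0 camera into object space and prints how the interleaving of color samples changes with scene depth. All camera parameters are illustrative assumptions.

```python
# Minimal flatland model of a "plenoptic 1.0" camera: main lens focused on the
# microlens array (MLA), sensor one microlens focal length behind the MLA.
# All numerical values below are illustrative assumptions.
import numpy as np

F = 50.0      # main-lens focal length (mm), assumed
d_i = 60.0    # main lens to MLA distance (mm), assumed
p = 0.5       # microlens pitch (mm), assumed
f_ml = 0.5    # microlens focal length (mm), assumed
n_px = 5      # pixels behind each microlens, assumed
n_ml = 7      # number of microlenses, assumed

# Object plane conjugate to the MLA (thin-lens equation): 1/d_o + 1/d_i = 1/F.
d_o = 1.0 / (1.0 / F - 1.0 / d_i)

def object_space_sample(ml_index, px_index, z):
    """Back-project one sensor pixel to its transverse position x at depth z.

    In a plenoptic 1.0 camera each microlens images the main-lens aperture,
    so the pixel offset under a microlens encodes an aperture coordinate.
    """
    c_j = ml_index * p                   # microlens center on the image plane
    u = px_index * (p / n_px)            # pixel offset under that microlens
    a = -u * d_i / f_ml                  # conjugate point on the main-lens aperture
    x_conj = -c_j * d_o / d_i            # object conjugate of the microlens center
    # Ray through the lens point (0, a) and the conjugate point (d_o, x_conj),
    # evaluated at object depth z:
    return a + (x_conj - a) * z / d_o

# Bayer-like 1D color coding: alternate R and G with the absolute pixel index.
colors = "RG"
for z in (d_o, 0.5 * d_o):               # in-focus plane vs. a closer plane
    samples = []
    for j in range(-(n_ml // 2), n_ml // 2 + 1):
        for k in range(-(n_px // 2), n_px // 2 + 1):
            abs_px = j * n_px + k
            samples.append((object_space_sample(j, k, z), colors[abs_px % 2]))
    samples.sort()                        # order the samples along the scene plane
    print(f"z = {z:6.1f} mm: " + "".join(c for _, c in samples))
```

At the conjugate plane all samples under one microlens collapse onto the same point, while at other depths the color samples interleave differently, which is the depth dependence the abstract points out.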
  • 2.
    Li, Yongwei
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Scrofani, Gabriele
    Department of Optics, University of Valencia, Burjassot, Spain.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Martinez-Corral, M.
    Department of Optics, University of Valencia, Burjassot, Spain.
    Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture (2018). In: 2018 26th European Signal Processing Conference (EUSIPCO), IEEE conference proceedings, 2018, p. 206-210, article id 8553336. Conference paper (Refereed)
    Abstract [en]

    With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches which rely on the rich color and texture information across the sub-aperture views, our approach is based on depth from focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.

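Below is a hedged sketch of the shift-and-sum, depth-from-focus scheme outlined in record 2, not the authors' implementation: shifted sub-aperture images are superimposed on a chosen central view for each candidate depth, and an area-based cost picks the best-focused layer per pixel. The shift values, window size, and cost function are assumptions; the occlusion handling by merging several central views with voting is omitted.

```python
# A sketch of depth from focus on orthographic sub-aperture images:
# build a focal stack by shift-and-sum, then pick the best-focused depth
# per pixel with an area-based cost. Inputs and parameters are assumptions.
import numpy as np

def _box_filter(img, win):
    """Mean filter via an integral image (keeps the sketch free of SciPy)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))       # zero row/column for clean differences
    H, W = img.shape
    return (c[win:win + H, win:win + W] - c[:H, win:win + W]
            - c[win:win + H, :W] + c[:H, :W]) / (win * win)

def focal_stack_depth(views, shifts, centre=None, win=7):
    """Per-pixel depth-layer index for one chosen central sub-aperture view.

    views:  (U, V, H, W) monochromatic sub-aperture images.
    shifts: one per-view shift step per candidate depth layer (in a real
            system these follow from the micro-lens array geometry).
    centre: (u, v) index of the central view; defaults to the middle one.
    """
    U, V, H, W = views.shape
    uc, vc = centre if centre is not None else (U // 2, V // 2)
    best_cost = np.full((H, W), np.inf)
    best_idx = np.zeros((H, W), dtype=np.int32)

    for d, s in enumerate(shifts):
        # Superimpose every view on the central one with a depth-dependent shift.
        refocused = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy, dx = int(round(s * (u - uc))), int(round(s * (v - vc)))
                refocused += np.roll(views[u, v], (dy, dx), axis=(0, 1))
        refocused /= U * V
        # Area-based cost: windowed mean absolute difference to the central view
        # (a simple stand-in for the matching cost used in the paper).
        cost = _box_filter(np.abs(refocused - views[uc, vc]), win)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_idx[better] = d
    return best_idx
```

A dense depth map follows by mapping each layer index back to a depth value; repeating this for several central views and voting over the resulting maps would correspond to the occlusion-handling step described in the abstract.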
  • 3.
    Li, Yongwei
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Sjöström, Mårten
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Depth-Assisted Demosaicing for Light Field Data in Layered Object Space (2019). In: 2019 IEEE International Conference on Image Processing (ICIP), IEEE, 2019, p. 3746-3750, article id 8803441. Conference paper (Refereed)
    Abstract [en]

    Light field technology, which emerged as a solution to the increasing demands of visually immersive experience, has shown its extraordinary potential for scene content representation and reconstruction. Unlike conventional photography that maps the 3D scenery onto a 2D plane by a projective transformation, light field preserves both the spatial and angular information, enabling further processing steps such as computational refocusing and image-based rendering. However, there are still gaps that have been barely studied, such as the light field demosaicing process. In this paper, we propose a depth-assisted demosaicing method for light field data. First, we exploit the sampling geometry of the light field data with respect to the scene content using the ray-tracing technique and develop a sampling model of light field capture. Then we carry out the demosaicing process in a layered object space with object-space sampling adjacencies rather than pixel placement. Finally, we compare our results with state-of-the-art approaches and discuss the potential research directions of the proposed sampling model to show the significance of our approach.

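The sketch below illustrates, under strong simplifications, the object-space idea of record 3: missing color channels are interpolated from raw samples that are adjacent in object space within the same depth layer, instead of from neighboring sensor pixels. The object-space coordinates and layer labels are assumed inputs here; in the paper they come from the ray-traced sampling model of the capture.

```python
# A simplified, hypothetical "object-space adjacency" demosaicer: each missing
# color channel is filled from raw samples of that channel that land nearby in
# object space and belong to the same depth layer. Brute-force and O(N^2),
# for illustration only; all inputs are assumed to be precomputed.
import numpy as np

def layered_object_space_demosaic(raw, bayer, xy_obj, layer, k=4):
    """raw:    (N,) raw sensor samples (flattened).
       bayer:  (N,) integer channel index of each sample (0=R, 1=G, 2=B).
       xy_obj: (N, 2) object-space positions of the back-projected samples.
       layer:  (N,) integer depth-layer label of each sample.
       Returns an (N, 3) array with the missing channels filled in."""
    N = raw.shape[0]
    out = np.zeros((N, 3))
    out[np.arange(N), bayer] = raw          # keep each measured channel as-is
    for i in range(N):
        for c in range(3):
            if c == bayer[i]:
                continue
            # Candidate donors: measured in channel c, same depth layer.
            mask = (bayer == c) & (layer == layer[i])
            if not mask.any():              # fall back to any sample of channel c
                mask = bayer == c
            d = np.linalg.norm(xy_obj[mask] - xy_obj[i], axis=1)
            idx = np.argsort(d)[:k]         # k nearest object-space neighbors
            w = 1.0 / (d[idx] + 1e-6)       # inverse-distance weights
            out[i, c] = np.sum(w * raw[mask][idx]) / np.sum(w)
    return out
```

In practice a spatial index (for example a k-d tree per layer) would replace the brute-force neighbor search.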
  • 4.
    Wang, Chunpeng
    Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; Dalian University of Technology, Dalian, China.
    Wang, Xingyuan
    Dalian Maritime University, Dalian, China; Dalian University of Technology, Dalian, China.
    Li, Yongwei
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Systems and Technology.
    Xia, Zhiqiu
    Dalian University of Technology, Dalian, China.
    Zhang, Chuan
    Dalian University of Technology, Dalian, China.
    Quaternion polar harmonic Fourier moments for color images (2018). In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 450, p. 141-156. Article in journal (Refereed)
    Abstract [en]

    This paper proposes quaternion polar harmonic Fourier moments (QPHFM) for color image processing and analyzes the properties of QPHFM. After extending Chebyshev-Fourier moments (CHFM) to quaternion Chebyshev-Fourier moments (QCHFM), comparison experiments, including image reconstruction and color image object recognition, on the performance of QPHFM and quaternion Zernike moments (QZM), quaternion pseudo-Zernike moments (QPZM), quaternion orthogonal Fourier-Mellin moments (QOFMM), QCHFM, and quaternion radial harmonic Fourier moments (QRHFM) are carried out. Experimental results show QPHFM can achieve an ideal performance in image reconstruction and invariant object recognition in noise-free and noisy conditions. In addition, this paper discusses the importance of phase information of quaternion orthogonal moments in image reconstruction.

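As a rough illustration of the machinery behind record 4, the sketch below computes a quaternion-valued moment of an RGB image over the unit disk: pixels are encoded as pure quaternions iR + jG + kB and multiplied by exp(-mu*m*theta) with mu = (i + j + k)/sqrt(3), a common construction for quaternion orthogonal moments. The exact PHFM radial kernel and normalization constants are not reproduced here; the radial basis is a caller-supplied placeholder that should be replaced with the definition from the paper.

```python
# A rough sketch of a quaternion color moment on the unit disk. The radial
# basis is a placeholder argument; normalization constants and the exact PHFM
# kernel from the paper are intentionally not reproduced here.
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (..., 4) arrays [w, x, y, z]."""
    w1, x1, y1, z1 = np.moveaxis(a, -1, 0)
    w2, x2, y2, z2 = np.moveaxis(b, -1, 0)
    return np.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2], axis=-1)

def quaternion_moment(rgb, n, m, radial_fn):
    """Discrete quaternion moment of order (n, m) over the inscribed unit disk."""
    H, W, _ = rgb.shape
    ys, xs = np.mgrid[0:H, 0:W]
    x = (2 * xs - W + 1) / W                # map pixel centers into [-1, 1]
    y = (2 * ys - H + 1) / H
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    disk = r <= 1.0                         # integrate over the inscribed disk only

    # Color image as a field of pure quaternions: f = iR + jG + kB.
    f = np.zeros((H, W, 4))
    f[..., 1:] = rgb

    # exp(-mu*m*theta) = cos(m*theta) - mu*sin(m*theta), with mu = (i+j+k)/sqrt(3).
    mu = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3.0)
    e = (np.cos(m * theta)[..., None] * np.array([1.0, 0.0, 0.0, 0.0])
         - np.sin(m * theta)[..., None] * mu)

    # Cartesian pixel sum, so the polar Jacobian r is already accounted for.
    integrand = radial_fn(n, r)[..., None] * qmul(f, e)
    area = 4.0 / (H * W)                    # pixel area in normalized coordinates
    return area * integrand[disk].sum(axis=0)

# Placeholder radial basis for the demo (NOT the PHFM kernel from the paper):
demo_radial = lambda n, r: np.cos(np.pi * n * r ** 2)

rgb = np.random.rand(64, 64, 3)             # synthetic color image
print(quaternion_moment(rgb, n=2, m=3, radial_fn=demo_radial))
```

The returned 4-vector is the quaternion-valued moment [w, x, y, z]; feature sets for reconstruction or recognition experiments would stack many such moments over a range of orders (n, m).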