Computational Photography: High Dynamic Range and Light Fields
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
2020 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The introduction and recent advances of computational photography have revolutionized the imaging industry. Computational photography combines imaging techniques at the intersection of fields such as optics, computer vision, and computer graphics. These methods enhance the capabilities of traditional digital photography by applying computational techniques both during and after the capture process. This thesis targets two major subjects in this field: High Dynamic Range (HDR) image reconstruction and Light Field (LF) compressive capture, compression, and real-time rendering.

The first part of the thesis focuses on HDR images, which concurrently contain detailed information from the very dark shadows to the brightest areas of a scene. One of the main contributions presented in this thesis is a unified reconstruction algorithm for spatially varying exposures in a single image. The method is based on a camera noise model, and it simultaneously resamples, reconstructs, denoises, and demosaics the image while extending its dynamic range. Furthermore, the HDR reconstruction algorithm is extended to adapt to the local features of the image, as well as to the noise statistics, in order to preserve high-frequency edges during reconstruction.
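To make the role of the noise model concrete, the minimal sketch below (not the thesis' actual algorithm) estimates one HDR output pixel from raw samples with spatially varying gain by weighting each sample with the inverse of its variance under a simple affine camera noise model; the noise parameters, the saturation level, and the omission of the local polynomial and demosaicing steps are all simplifying assumptions.

```python
import numpy as np

def estimate_radiance(samples, gains, exposure, sigma_read=3.0, sigma_adc=0.5, sat_level=4095):
    """Inverse-variance weighted radiance estimate for a single output pixel.

    samples  -- bias-subtracted raw digital values in a small neighbourhood
    gains    -- per-sample sensor gain (e.g. alternating ISO rows)
    exposure -- exposure time in seconds
    The affine noise model and its parameters are illustrative assumptions,
    not the calibrated camera model used in the thesis.
    """
    samples = np.asarray(samples, dtype=np.float64)
    gains = np.asarray(gains, dtype=np.float64)

    # Per-sample radiant power estimate: digital value / (gain * exposure time).
    radiance = samples / (gains * exposure)

    # Per-sample variance from a simple camera noise model: photon shot noise
    # (signal dependent) plus readout and quantisation noise, propagated
    # through the same linear transform as the signal.
    var_digital = gains * np.maximum(samples, 0.0) + (gains * sigma_read) ** 2 + sigma_adc ** 2
    var_radiance = var_digital / (gains * exposure) ** 2

    # Saturated samples carry no information about the true radiance.
    weights = np.where(samples < sat_level, 1.0 / var_radiance, 0.0)
    if weights.sum() == 0.0:
        return float(radiance.max())   # fully saturated neighbourhood
    return float(np.sum(weights * radiance) / np.sum(weights))

# Example: four raw samples around an output pixel, two gain settings;
# the first high-gain sample is saturated and therefore ignored.
print(estimate_radiance([4095, 3150, 405, 398], [8.0, 8.0, 1.0, 1.0], exposure=0.01))
```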

In the second part of this thesis, the research focus shifts to the acquisition, encoding, reconstruction, and rendering of light field images and videos in a real-time setting. Unlike traditional integral photography, a light field captures information about the dynamic environment from all angles, at all points in space, across all spectral wavelengths, and over time. This thesis employs sparse representations to provide an end-to-end solution to the problem of encoding, real-time reconstruction, and rendering of high-dimensional light field video data sets. These solutions are applied to various types of data sets, such as light fields captured with multi-camera systems or hand-held cameras equipped with micro-lens arrays, as well as spherical light fields. Finally, the sparse representation of light fields is utilized to develop a single-sensor light field video camera equipped with a color-coded mask. A new compressive sensing model is presented that is suitable for dynamic scenes with temporal coherency and is capable of reconstructing high-resolution light field videos.
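As a rough illustration of the sparse-representation idea underlying this part of the thesis, the sketch below encodes a vectorized light field patch as a sparse combination of dictionary atoms using plain orthogonal matching pursuit and decodes it with a single matrix product; the random dictionary, the patch size, and the sparsity level are placeholders for the learned ensembles and solvers actually used.

```python
import numpy as np

def omp_encode(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit: find a sparse vector s with at most
    n_nonzero entries such that D @ s approximates the vectorized patch x."""
    residual = x.copy()
    support = []
    s = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the current support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        s[:] = 0.0
        s[support] = coeffs
        residual = x - D @ s
    return s

def decode(D, s):
    """Reconstruction is a single matrix-vector product, which is what makes
    real-time playback feasible once the coefficients are available."""
    return D @ s

# Toy usage: a random unit-norm dictionary stands in for a learned one.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = D[:, [3, 40, 100]] @ np.array([1.0, -0.5, 2.0])   # a patch that is exactly 3-sparse
s = omp_encode(D, x, n_nonzero=3)
print("reconstruction error:", np.linalg.norm(decode(D, s) - x))
```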

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2020, p. 122
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2046
National Category
Media Engineering
Identifiers
URN: urn:nbn:se:liu:diva-163693
DOI: 10.3384/diss.diva-163693
ISBN: 9789179299057 (print)
OAI: oai:DiVA.org:liu-163693
DiVA, id: diva2:1394089
Public defence
2020-02-28, Domteatern, Visualiseringscenter C, Kungsgatan 54, 60233, Norrköping, 09:15 (English)
Available from: 2020-02-18 Created: 2020-02-18 Last updated: 2020-03-18. Bibliographically approved
List of papers
1. HDR reconstruction for alternating gain (ISO) sensor readout
2014 (English). In: Eurographics 2014 short papers, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per-pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images from such input data. Our method estimates the radiant power at each output pixel using a sampling operation that performs color interpolation, re-sampling, noise reduction, and HDR reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach, we show example HDR images reconstructed from raw sensor data captured using off-the-shelf consumer hardware that allows two different gain settings for different rows in the same image. To analyse the accuracy of the algorithm, we also use synthetic images from camera simulation software.
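A hypothetical toy version of such a reconstruction filter is sketched below: for a small raw window whose rows alternate between two gain settings, each sample is weighted by a Gaussian spatial term and a much simplified noise-variance term, and saturated samples are discarded. The paper's filter additionally performs colour interpolation through a local polynomial model, which is omitted here, and all constants are illustrative.

```python
import numpy as np

def dual_gain_filter(window, row_gains, sigma_s=1.0, sat_level=4095):
    """Estimate the radiant power at the centre of a small raw window in which
    alternate rows were read out with different gains. Spatial closeness and a
    crude shot-noise variance proxy both enter the per-sample weight."""
    h, w = window.shape
    cy, cx = h // 2, w // 2
    y, x = np.mgrid[0:h, 0:w]
    spatial = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma_s ** 2))

    gains = row_gains[:, None] * np.ones((1, w))        # per-pixel gain map
    radiance = window / gains                           # relative radiant power per sample
    variance = np.maximum(window, 1.0) / gains          # crude shot-noise variance proxy

    weights = spatial / variance
    weights[window >= sat_level] = 0.0                  # saturated samples carry no information
    return float(np.sum(weights * radiance) / np.sum(weights))

# Example: 5x5 raw window synthesised from a constant scene radiance, with rows
# alternating between gain 1 and gain 8 (the high-gain rows clip and are ignored).
truth = 600.0
row_gains = np.array([1.0, 8.0, 1.0, 8.0, 1.0])
window = np.minimum(truth * row_gains[:, None] * np.ones((1, 5)), 4095.0)
print(dual_gain_filter(window, row_gains))              # close to 600.0
```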

Keywords
HDR, image reconstruction, dual-ISO, image processing
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-104922 (URN)
Conference
Eurographics, Strasbourg, France, April 7-11, 2014
Projects
VPS
Available from: 2014-03-03 Created: 2014-03-03 Last updated: 2020-02-18. Bibliographically approved
2. Adaptive dualISO HDR-reconstruction
2015 (English). In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Article in journal (Refereed) Published
Abstract [en]

With the development of modern image sensors enabling flexible image acquisition, single-shot HDR imaging is becoming increasingly popular. In this work, we capture single-shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single-shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light on spatially varying ND-filters, as is common in previous works. The main technical contribution of this work is an extension of previous HDR reconstruction approaches for single-shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure, we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter carefully de-noises the noisy image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used input images from raw sensor data captured with a commercial off-the-shelf camera. To further analyse our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.
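The adaptive kernel selection can be pictured with the one-dimensional, intersection-of-confidence-intervals style toy below (a simplification of the statistically informed criterion described above, with made-up constants): the window around a pixel is enlarged as long as successive estimates stay within the deviation predicted by the noise level, and growth stops once the smooth model no longer fits, for example at an edge.

```python
import numpy as np

def adaptive_kernel_estimate(signal, center, radii=(1, 2, 4, 8), gamma=2.0, sigma=5.0):
    """Return a denoised value at `center`, using the largest window radius for
    which the local estimate is still consistent with the noise level sigma."""
    lower, upper = -np.inf, np.inf
    best = float(signal[center])
    for r in radii:
        lo, hi = max(0, center - r), min(len(signal), center + r + 1)
        window = signal[lo:hi]
        estimate = float(window.mean())
        std = sigma / np.sqrt(window.size)          # expected deviation of the window mean
        lower = max(lower, estimate - gamma * std)
        upper = min(upper, estimate + gamma * std)
        if lower > upper:                           # confidence intervals no longer intersect:
            break                                   # the smooth model broke down (an edge)
        best = estimate
    return best

# Noisy step edge: large kernels are selected on the flat part (strong denoising),
# small kernels near the edge (detail preserved).
rng = np.random.default_rng(2)
signal = np.concatenate([np.full(32, 10.0), np.full(32, 200.0)]) + rng.normal(0.0, 5.0, 64)
print(adaptive_kernel_estimate(signal, center=10), adaptive_kernel_estimate(signal, center=30))
```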

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Keywords
HDR reconstruction; Single-shot HDR imaging; DualISO; Statistical image filtering
National Category
Computer Sciences; Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-122587 (URN)
10.1186/s13640-015-0095-0 (DOI)
000366324500001 ()
Note

Funding agencies: Swedish Foundation for Strategic Research (SSF) [IIS11-0081]; Linköping University Center for Industrial Information Technology (CENIIT); Swedish Research Council through the Linnaeus Environment CADICS

Available from: 2015-11-10 Created: 2015-11-10 Last updated: 2020-02-18. Bibliographically approved
3. A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos
2019 (English). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no. 3, p. 1-18, article id 23. Article in journal (Refereed) Published
Abstract [en]

In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
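A highly simplified, flattened sketch of the ensemble idea follows: every patch is coded against each member of a dictionary ensemble at a fixed sparsity, and the member with the lowest reconstruction error is selected, so the stored code is a dictionary index plus a few coefficients. The real MDE/AMDE members are multidimensional (tensor-valued) and trained, whereas the dictionaries, the crude coding routine, and all sizes below are placeholders.

```python
import numpy as np

def sparse_code(D, x, k):
    """Crude fixed-sparsity coding: keep the k atoms most correlated with x and
    least-squares fit their coefficients (a stand-in for a proper sparse solver)."""
    support = np.argsort(np.abs(D.T @ x))[-k:]
    coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
    s = np.zeros(D.shape[1])
    s[support] = coeffs
    return s

def encode_with_ensemble(dictionaries, x, k=4):
    """Try every ensemble member and keep the one that represents the patch x
    with the lowest error at sparsity k; store (member index, coefficients)."""
    best = None
    for idx, D in enumerate(dictionaries):
        s = sparse_code(D, x, k)
        err = np.linalg.norm(x - D @ s)
        if best is None or err < best[0]:
            best = (err, idx, s)
    return best[1], best[2]

# Toy ensemble of random unit-norm dictionaries standing in for trained members.
rng = np.random.default_rng(3)
dictionaries = [rng.standard_normal((64, 128)) for _ in range(4)]
dictionaries = [D / np.linalg.norm(D, axis=0) for D in dictionaries]
x = rng.standard_normal(64)                 # a vectorized light field patch
idx, s = encode_with_ensemble(dictionaries, x)
print("selected member", idx, "with", np.count_nonzero(s), "nonzero coefficients")
```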

Place, publisher, year, edition, pages
ACM Digital Library, 2019
Keywords
Light field video compression, compressed sensing, dictionary learning, light field photography
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-158026 (URN)
10.1145/3269980 (DOI)
000495415600005 ()
Available from: 2019-06-24 Created: 2019-06-24 Last updated: 2020-02-18. Bibliographically approved
4. Light Field Video Compression and Real Time Rendering
2019 (English). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, p. 265-276. Article in journal (Refereed) Published
Abstract [en]

Light field imaging is rapidly becoming an established method for generating flexible, image-based descriptions of scene appearance. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post-capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real-time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU, where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline, we present several technical contributions, including a denoising scheme that enhances the sparsity in the data set and thereby enables higher compression ratios, and a novel pruning strategy that reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video data sets and discuss the medical application of documenting heart surgery.
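The playback side can be pictured with the following sketch, in which NumPy stands in for the GPU: the stream carries only a few coefficient indices and values per patch, reconstruction of all patches is a single dense matrix product with the dictionary, and because colours and auxiliary data are jointly coded, the decoded vector is simply split into the two parts. All dimensions and the random dictionary are illustrative, not the pipeline's actual layout.

```python
import numpy as np

def decode_frame(D, coeff_indices, coeff_values, n_color):
    """Rebuild a batch of patches from their sparse codes.

    D             -- dictionary with jointly coded colour + auxiliary rows
    coeff_indices -- per patch, the indices of its nonzero coefficients
    coeff_values  -- per patch, the corresponding coefficient values
    n_color       -- how many leading rows of a decoded patch are pixel colours
    """
    n_patches = len(coeff_indices)
    S = np.zeros((D.shape[1], n_patches))
    for p in range(n_patches):
        S[coeff_indices[p], p] = coeff_values[p]   # scatter the sparse code
    X = D @ S                                      # all patches decoded at once
    return X[:n_color, :], X[n_color:, :]          # (colours, auxiliary data)

# Toy stream: 10 patches, 3 nonzero coefficients each; 48 colour + 16 auxiliary entries.
rng = np.random.default_rng(4)
D = rng.standard_normal((64, 256))
indices = [rng.choice(256, size=3, replace=False) for _ in range(10)]
values = [rng.standard_normal(3) for _ in range(10)]
colors, auxiliary = decode_frame(D, indices, values, n_color=48)
print(colors.shape, auxiliary.shape)               # (48, 10) (16, 10)
```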

Place, publisher, year, edition, pages
John Wiley & Sons, 2019
Keywords
Computational photography, Light Fields, Light Fields Compression, Light Field Video
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-162100 (URN)
10.1111/cgf.13835 (DOI)
000496351100025 ()
Conference
Pacific Graphics 2019
Note

Funding agencies: Children's heart clinic at Skåne University Hospital (Barnhjärtcentrum); the strategic research environment ELLIIT; Swedish Science Council [201505180]; Vinnova [2017-03728]; Visual Sweden Platform for Augmented Intelligence

Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2020-02-18

Open Access in DiVA

Fulltext: FULLTEXT01.pdf (35452 kB, application/pdf)
Checksum SHA-512: 82857efaa704afe72e45f428eb3298d1bfd0dc4c8724f37299977b62b24ecda331ce5dde5392dd9a4d1c0152bd13bb4be845dc8164110a2516118ecdbf5b1797

By author/editor
Hajisharif, Saghi
By organisation
Media and Information Technology, Faculty of Science & Engineering
