  • 1.
    Baravdish, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    GPU Accelerated Sparse Representation of Light Fields (2019). In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019, Vol. 4, p. 177-182. Conference paper (Refereed)
    Abstract [en]

    We present a method for GPU-accelerated compression of light fields based on a dictionary learning framework. The large amount of data produced when capturing light fields makes compression a challenge, so we accelerate the encoding routine using GPGPU computations. We compress the data by projecting each data point onto a set of trained multi-dimensional dictionaries and seeking the sparsest representation with the least error. This is done by parallelizing the tensor-matrix products on the GPU, together with a greedy algorithm optimized for GPU computation. The data is encoded segment-wise in parallel for faster computation while maintaining quality. The results show an order of magnitude faster encoding time compared to previous results in the field. We conclude that further speed improvements are possible, bringing the method close to interactive compression speeds.
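
    As a rough illustration of the sparse projection step described in the abstract, the following NumPy sketch implements a basic orthogonal matching pursuit over a single 2D dictionary. It is a minimal sketch, not the paper's multi-dimensional dictionary ensemble or its GPU kernels, and all names are illustrative.

    ```python
    import numpy as np

    def omp(D, x, sparsity):
        """Greedy pursuit: approximate x as a sparse combination of columns
        (atoms) of the dictionary D, assumed to have unit-norm columns."""
        residual, support = x.copy(), []
        coeffs = np.zeros(0)
        for _ in range(sparsity):
            # Pick the atom most correlated with the current residual.
            k = int(np.argmax(np.abs(D.T @ residual)))
            if k not in support:
                support.append(k)
            # Least-squares refit of the coefficients on the chosen support.
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        return support, coeffs

    # Toy usage: recover an exactly 2-sparse signal.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)
    x = 1.5 * D[:, 3] - 0.7 * D[:, 17]
    support, coeffs = omp(D, x, sparsity=2)
    ```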

  • 2.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    BriefMatch: Dense binary feature matching for real-time optical flow estimation (2017). In: Proceedings of the Scandinavian Conference on Image Analysis (SCIA17) / [ed] Puneet Sharma, Filippo Maria Bianchi, Springer, 2017, Vol. 10269, p. 221-233. Conference paper (Refereed)
    Abstract [en]

    Research in optical flow estimation has to a large extent focused on achieving the best possible quality with no regard to running time. Nevertheless, in a number of important applications the speed is crucial. To address this problem we present BriefMatch, a real-time optical flow method that is suitable for live applications. The method combines binary features with the search strategy from PatchMatch in order to efficiently find a dense correspondence field between images. We show that the BRIEF descriptor provides better (less outlier-prone) candidates in shorter time, when compared to direct pixel comparisons and the Census transform. This allows us to achieve high quality results from a simple filtering of the initially matched candidates. Currently, BriefMatch has the fastest running time on the Middlebury benchmark, while placing highest of all methods that run in under 0.5 seconds.
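
    For intuition about the binary matching at the core of BriefMatch, here is a hedged sketch of a BRIEF-style descriptor and its Hamming-distance cost; the patch size and sampling pattern below are placeholder choices, not those of the paper.

    ```python
    import numpy as np

    def brief_descriptor(patch, pairs):
        """BRIEF-style binary descriptor: one bit per intensity comparison
        between two pre-chosen pixel positions inside the patch."""
        return np.array([patch[y1, x1] < patch[y2, x2]
                         for (y1, x1), (y2, x2) in pairs], dtype=np.uint8)

    def hamming(d1, d2):
        """Number of differing bits: the cost used to rank match candidates."""
        return int(np.count_nonzero(d1 != d2))

    # Toy usage: 32 random comparison pairs on two random 8x8 patches.
    rng = np.random.default_rng(1)
    pairs = [((rng.integers(8), rng.integers(8)),
              (rng.integers(8), rng.integers(8))) for _ in range(32)]
    p1, p2 = rng.random((8, 8)), rng.random((8, 8))
    cost = hamming(brief_descriptor(p1, pairs), brief_descriptor(p2, pairs))
    ```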

  • 3.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering.
    Denes, Gyorgy
    University of Cambridge, England.
    Mantiuk, Rafal K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    HDR image reconstruction from a single exposure using deep CNNs (2017). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 36, no 6, article id 178. Article in journal (Refereed)
    Abstract [en]

    Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.
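
    The reconstruction described above keeps well-exposed pixels from the input and relies on the network only where the sensor saturates. A simplified sketch of such a blend in the linear domain follows; the ramp mask and threshold are assumptions for illustration, not the paper's exact blending.

    ```python
    import numpy as np

    def blend_hdr(ldr_linear, cnn_prediction, threshold=0.95):
        """Blend a linearized LDR input with CNN-predicted HDR values:
        keep the input where it is well exposed, fade towards the
        prediction as pixels approach saturation."""
        # Mask ramps from 0 (trustworthy input) to 1 (saturated input).
        alpha = np.clip((ldr_linear.max(axis=-1, keepdims=True) - threshold)
                        / (1.0 - threshold), 0.0, 1.0)
        return (1.0 - alpha) * ldr_linear + alpha * cnn_prediction
    ```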

  • 4.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A versatile material reflectance measurement system for use in production (2011). In: Proceedings of SIGRAD 2011: Evaluations of Graphics and Visualization — Efficiency, Usefulness, Accessibility, Usability, November 17–18, 2011, KTH, Stockholm, Sweden, Linköping University Electronic Press, 2011, p. 69-76. Conference paper (Refereed)
    Abstract [en]

    In this paper we present our bidirectional reflectance distribution capturing pipeline. It includes a purpose-built gonioreflectometer for reflectance measurements, as well as extensive software for operation, data visualization and parameter fitting of analytic models. Our focus is on the flexible user interface, aimed at material appearance creation for computer graphics, and targeted at both production and research use.

    Key challenges have been in providing user friendly and effective software that functions in a production environment, abstracting the details of the calculations involved in the reflectance capturing and fitting. We show how a combination of well-tuned tools can make complex processes such as reflectance calibration, measurement and fitting highly automated in a fast and easy work-flow, from material scanning to model parameters optimized for use in rendering. At the same time, the developed software provides a modifiable interface for detailed control. The importance of good reflectance visualizations is also demonstrated: the software plotting tools are able to show vital details of a reflectance distribution, giving valuable insight into a material's properties and a model's accuracy of fit to measured data, on both a local and global level.

  • 5.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, R. K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A comparative review of tone-mapping algorithms for high dynamic range video (2017). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 2, p. 565-592. Article in journal (Refereed)
    Abstract [en]

    Tone-mapping constitutes a key component within the field of high dynamic range (HDR) imaging. Its importance is manifested in the vast number of tone-mapping methods that can be found in the literature, which are the result of active development in the area for more than two decades. Although these can accommodate most requirements for display of HDR images, new challenges arose with the advent of HDR video, calling for additional considerations in the design of tone-mapping operators (TMOs). Today, a range of TMOs exist that do support video material. We are now reaching a point where most camera-captured HDR videos can be prepared in high quality without visible artifacts, within the constraints of a standard display device. In this report, we set out to summarize and categorize the research in tone-mapping as of today, distilling the most important trends and characteristics of the tone reproduction pipeline. While this gives a wide overview of the area, we then specifically focus on tone-mapping of HDR video and the problems this medium entails. First, we formulate the major challenges a video TMO needs to address. Then, we provide a description and categorization of each of the existing video TMOs. Finally, by constructing a set of quantitative measures, we evaluate the performance of a number of the operators, in order to give an indication of which can be expected to produce the fewest artifacts. This serves as a comprehensive reference, categorization and comparative assessment of the state-of-the-art in tone-mapping for HDR video.

  • 6.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A high dynamic range video codec optimized by large-scale testing (2016). In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 1379-1383. Conference paper (Refereed)
    Abstract [en]

    While a number of existing high bit-depth video compression methods can potentially encode high dynamic range (HDR) video, few of them provide this capability. In this paper, we investigate techniques for adapting HDR video for this purpose. In a large-scale test on 33 HDR video sequences, we compare 2 video codecs, 4 luminance encoding techniques (transfer functions) and 3 color encoding methods, measuring quality in terms of two objective metrics, PU-MSSIM and HDR-VDP-2. From the results we design an open source HDR video encoder, optimized for the best compression performance given the techniques examined.
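
    For reference, one widely used luminance encoding of the kind compared in the paper is the SMPTE ST 2084 (PQ) transfer function. The sketch below is the standard PQ curve, not necessarily the variant the paper's encoder selects.

    ```python
    import numpy as np

    def pq_encode(luminance_cd_m2):
        """SMPTE ST 2084 (PQ): maps absolute luminance in cd/m^2 (up to
        10000) to a perceptually uniform code value in [0, 1]."""
        m1, m2 = 0.1593017578125, 78.84375
        c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
        y = np.clip(np.asarray(luminance_cd_m2) / 10000.0, 0.0, 1.0)
        return ((c1 + c2 * y**m1) / (1.0 + c3 * y**m1)) ** m2
    ```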

  • 7.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. IRYSTEC, Canada.
    Mantiuk, Rafal K.
    University of Cambridge, England; IRYSTEC, Canada.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. IRYSTEC, Canada.
    Real-time noise-aware tone-mapping and its use in luminance retargeting (2016). In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 894-898. Conference paper (Refereed)
    Abstract [en]

    With the aid of tone-mapping operators, high dynamic range images can be mapped for reproduction on standard displays. However, under severe restrictions in display dynamic range and peak luminance, limitations of the human visual system have a significant impact on the visual appearance. In this paper, we use components from real-time noise-aware tone-mapping to complement an existing method for perceptual matching of image appearance under different luminance levels. The refined luminance retargeting method improves subjective quality on a display with large limitations in dynamic range, as suggested by our subjective evaluation.

  • 8.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal
    University of Cambridge.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Real-time noise-aware tone mapping (2015). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 34, no 6, p. 198:1-198:15, article id 198. Article in journal (Refereed)
    Abstract [en]

    Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features.
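
    To give a feel for what a local display-adaptive tone curve can look like, the sketch below allocates the slopes of a piecewise-linear curve from a log-luminance histogram. This is a crude stand-in for the paper's contrast-distortion-minimizing solve; the bin count and names are assumptions.

    ```python
    import numpy as np

    def tone_curve(log_lum, n_bins=32, out_range=1.0):
        """Content-adaptive tone curve: histogram the log-luminance, give
        well-populated bins more of the output range, then map the input
        by piecewise-linear interpolation between the curve nodes."""
        hist, edges = np.histogram(log_lum, bins=n_bins)
        p = hist / hist.sum()
        # Slope proportional to bin probability (histogram-equalization
        # flavored), standing in for the optimized slopes in the paper.
        node_values = np.concatenate([[0.0], np.cumsum(p)]) * out_range
        return np.interp(log_lum, edges, node_values)
    ```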

  • 9.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Mantiuk, Rafal
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Single-frame Regularization for Temporally Stable CNNs (2019). In: IEEE Conference on Computer Vision and Pattern Recognition, 2019, p. 11176-11185. Conference paper (Refereed)
    Abstract [en]

    Convolutional neural networks (CNNs) can model complicated non-linear relations between images. However, they are notoriously sensitive to small changes in the input. Most CNNs trained to describe image-to-image mappings generate temporally unstable results when applied to video sequences, leading to flickering artifacts and other inconsistencies over time. In order to use CNNs for video material, previous methods have relied on estimating dense frame-to-frame motion information (optical flow) in the training and/or the inference phase, or by exploring recurrent learning structures. We take a different approach to the problem, posing temporal stability as a regularization of the cost function. The regularization is formulated to account for different types of motion that can occur between frames, so that temporally stable CNNs can be trained without the need for video material or expensive motion estimation. The training can be performed as a fine-tuning operation, without architectural modifications of the CNN. Our evaluation shows that the training strategy leads to large improvements in temporal smoothness. Moreover, for small datasets the regularization can help in boosting the generalization performance to a much larger extent than what is possible with naive augmentation strategies.
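
    A minimal PyTorch-style sketch of the idea: temporal stability is posed as a penalty on a simulated inter-frame motion, here a random integer translation standing in for the paper's richer motion models. All names are illustrative.

    ```python
    import torch

    def temporally_regularized_loss(model, x, target, task_loss, lam=0.1):
        """Single-frame regularization: penalize the difference between the
        output for a slightly shifted input and the equally shifted output
        of the original input, so the network commutes with small motion."""
        dy, dx = torch.randint(-2, 3, (2,)).tolist()  # simulated motion
        shifted_x = torch.roll(x, shifts=(dy, dx), dims=(-2, -1))
        y = model(x)
        stability = torch.mean(
            (model(shifted_x) - torch.roll(y, shifts=(dy, dx), dims=(-2, -1))) ** 2)
        return task_loss(y, target) + lam * stability
    ```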

  • 10.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal
    University of Cambridge, UK.
    Evaluation of tone mapping operators for HDR video (2016). In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st ed., p. 185-206. Chapter in book (Other academic)
    Abstract [en]

    Tone mapping of HDR-video is a challenging filtering problem. It is highly important to develop a framework for evaluation and comparison of tone mapping operators. This chapter gives an overview of different approaches for how evaluation of tone mapping operators can be conducted, including experimental setups, choice of input data, choice of tone mapping operators, and the importance of parameter tweaking for fair comparisons. This chapter also gives examples of previous evaluations with a focus on the results from the most recent evaluation conducted by Eilertsen et al. [reference]. This results in a classification of the currently most commonly used tone mapping operators and an overview of their performance and possible artifacts.

  • 11.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, United Kingdom.
    Mantiuk, Rafal
    Bangor University, United Kingdom.
    Perceptually based parameter adjustments for video processing operations (2014). In: ACM SIGGRAPH Talks 2014, ACM Press, 2014. Conference paper (Refereed)
    Abstract [en]

    Extensive post processing plays a central role in modern video production pipelines. A problem in this context is that many filters and processing operators are very sensitive to parameter settings and that the filter responses in most cases are highly non-linear. Since there is no general solution for performing perceptual calibration of image and video operators automatically, it is often necessary to manually tweak multiple parameters. This is an iterative process which requires instant visual feedback of the result in both the spatial and temporal domains. Due to large filter kernels, computational complexity, high frame rates, and image resolution it is, however, often very time consuming to iteratively re-process and tweak long video sequences. We present a new method for rapidly finding the perceptual minima in high-dimensional parameter spaces of general video operators. The key idea of our algorithm is that the characteristics of an operator can be accurately described by interpolating between a small set of pre-computed parameter settings. By computing a perceptual linearization of the parameter space of a video operator, the user can explore this interpolated space to find the best set of parameters in a robust way. Since many operators are dependent on two or more parameters, we formulate this as a general optimization problem where we let the objective function be determined by the user's image assessments. To demonstrate the usefulness of our approach we show a set of use cases (see the supplementary material) where our algorithm is applied to computationally expensive video operations.

  • 12.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, UK.
    Mantiuk, Rafal
    Bangor University, UK.
    Survey and Evaluation of Tone Mapping Operators for HDR-video (2013). In: SIGGRAPH 2013 Talks, ACM Press, 2013. Conference paper (Other academic)
    Abstract [en]

    This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing of variations in the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have now made it possible to capture HDR-video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing. Our survey is meant to summarize the state-of-the-art in video tone mapping and, as exemplified in Figure 1 (right), analyze differences in the operators' responses to temporal variations. In contrast to other studies, we evaluate TMO performance according to each operator's actual intent, such as producing the image that best resembles the real world scene, that subjectively looks best to the viewer, or that fulfills a certain artistic requirement. The unique strength of this work is that we use real high quality HDR video sequences, see Figure 1 (left), as opposed to synthetic images or footage generated from still HDR images.

  • 13.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, Wales .
    Mantiuk, Rafal K.
    Bangor University, Wales .
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Evaluation of Tone Mapping Operators for HDR-Video (2013). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 32, no 7, p. 275-284. Article in journal (Refereed)
    Abstract [en]

    Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

  • 14.
    Emadi, Mohammad
    et al.
    Qualcomm Technol Inc, CA 95110 USA.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence (2018). In: Circuits, Systems, and Signal Processing, ISSN 0278-081X, E-ISSN 1531-5878, Vol. 37, no 4, p. 1562-1574. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, a lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise and an upper bound for the mean square error are derived. Compared to previous work, the new bound takes into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a much closer correlation to empirical results of OMP.
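
    Mutual coherence, the dictionary metric the bound is stated in, is simply the largest absolute inner product between distinct normalized atoms. A direct NumPy computation:

    ```python
    import numpy as np

    def mutual_coherence(D):
        """Largest absolute correlation between two distinct atoms
        (columns of D), after normalizing each atom to unit length."""
        Dn = D / np.linalg.norm(D, axis=0)
        G = np.abs(Dn.T @ Dn)     # Gram matrix of normalized atoms
        np.fill_diagonal(G, 0.0)  # ignore self-correlations
        return float(G.max())

    rng = np.random.default_rng(3)
    mu = mutual_coherence(rng.standard_normal((64, 128)))
    ```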

  • 15.
    Emadi, Mohammad
    et al.
    Qualcomm Technol Inc, CA USA.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    OMP-based DOA estimation performance analysis (2018). In: Digital Signal Processing (Print), ISSN 1051-2004, E-ISSN 1095-4333, Vol. 79, p. 57-65. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a new performance guarantee for Orthogonal Matching Pursuit (OMP) in the context of the Direction Of Arrival (DOA) estimation problem. For the first time, the effect of parameters such as the sensor array configuration, as well as the signal to noise ratio and dynamic range of the sources, is thoroughly analyzed. In particular, we formulate a lower bound for the probability of detection and an upper bound for the estimation error. The proposed performance guarantee is further developed to include the estimation error as a user-defined parameter for the probability of detection. Numerical results show acceptable correlation between theoretical and empirical simulations.

  • 16.
    Gardner, Andrew
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Depends: Workflow Management Software for Visual Effects Production (2014). Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an open source, multi-platform, workflow management application named Depends, designed to clarify and enhance the workflow of artists in a visual effects environment. Depends organizes processes into a directed acyclic graph, enabling artists to quickly identify appropriate changes, make modifications, and improve the look of their work. Recovering information about past revisions of an element is made simple, as the provenance of data is a core focus of a Depends workflow. Sharing work is also facilitated by the clear and consistent structure of Depends. We demonstrate the flexibility of Depends by presenting a number of scenarios where its style of workflow management has been essential to the creation of high-quality results.

  • 17.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time image based lighting with streaming HDR-lightprobe sequences (2012). In: Proceedings of SIGRAD 2012 / [ed] Andreas Kerren, Stefan Seipel, Linköping, Sweden, 2012. Conference paper (Other academic)
    Abstract [en]

    We present a framework for shading of virtual objects using high dynamic range (HDR) light probe sequences in real-time. Such images (light probes) are captured using a high resolution HDR camera. In each frame of the HDR video, an optimized CUDA kernel is used to project incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer the radiance calculation reduces to a low order dot product between lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve the effects of consistent illumination in real-time with flexibility to respond to dynamic changes in the real environment.
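
    Under precomputed radiance transfer, the per-frame shading step described above reduces to a low-order dot product between SH coefficient vectors. The sketch below assumes 3 SH bands and illustrative array shapes.

    ```python
    import numpy as np

    def prt_shade(light_sh, transfer_sh):
        """Precomputed radiance transfer: outgoing radiance is the dot
        product of the lighting's SH coefficients (projected per video
        frame) with precomputed per-vertex transfer coefficients."""
        return transfer_sh @ light_sh  # one radiance value per vertex

    rng = np.random.default_rng(4)
    light_sh = rng.standard_normal(9)             # 3 bands = 9 coefficients
    transfer_sh = rng.standard_normal((1000, 9))  # computed offline
    radiance = prt_shade(light_sh, transfer_sh)
    ```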

  • 18.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Adaptive dualISO HDR-reconstruction (2015). In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Article in journal (Refereed)
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using spatially varying ND-filters, commonly used in previous works. The main technical contribution in this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate based on the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter carefully de-noises the image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used raw sensor data captured with a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.

  • 19.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Tran, Kiet
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Light Field Video Compression and Real Time Rendering (2019). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 8, p. 265-276. Article in journal (Refereed)
    Abstract [en]

    Light field imaging is rapidly becoming an established method for generating flexible image-based descriptions of scene appearance. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post-capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real-time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline we present several technical contributions including a denoising scheme enhancing the sparsity in the dataset which enables higher compression ratios, and a novel pruning strategy which reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.

  • 20.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    HDR reconstruction for alternating gain (ISO) sensor readout (2014). In: Eurographics 2014 Short Papers, 2014. Conference paper (Refereed)
    Abstract [en]

    Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images based on such input data. Our method estimates the radiant power at each output pixel using a sampling operation which performs color interpolation, re-sampling, noise reduction and HDR-reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach we show example HDR-images reconstructed from raw sensor data captured using off-the-shelf consumer hardware which allows two different gain settings for different rows in the same image. To analyse the accuracy of the algorithm, we also use synthetic images from camera simulation software.
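
    The variance-based weighting can be illustrated by a simplified inverse-variance fusion of two gain readouts of the same irradiance. The shot-plus-read noise model below is a generic placeholder rather than the paper's calibrated sensor model, and the spatial resampling step is omitted.

    ```python
    import numpy as np

    def fuse_dual_gain(raw_lo, raw_hi, gain_lo, gain_hi, read_noise=2.0):
        """Fuse two readouts taken at different gains, weighting each
        radiance estimate by the inverse of its noise variance."""
        est_lo, est_hi = raw_lo / gain_lo, raw_hi / gain_hi
        # Shot noise (proportional to signal) plus read noise, propagated
        # through the division by gain.
        var_lo = (np.maximum(raw_lo, 0) + read_noise**2) / gain_lo**2
        var_hi = (np.maximum(raw_hi, 0) + read_noise**2) / gain_hi**2
        w_lo, w_hi = 1.0 / var_lo, 1.0 / var_hi
        return (w_lo * est_lo + w_hi * est_hi) / (w_lo + w_hi)
    ```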

  • 21.
    Jones, Andrew
    et al.
    USC Institute Creat Technology, CA 90094 USA.
    Nagano, Koki
    USC Institute Creat Technology, CA 90094 USA.
    Busch, Jay
    USC Institute Creat Technology, CA 90094 USA.
    Yu, Xueming
    USC Institute Creat Technology, CA 90094 USA.
    Peng, Hsuan-Yueh
    USC Institute Creat Technology, CA 90094 USA.
    Barreto, Joseph
    USC Institute Creat Technology, CA 90094 USA.
    Alexander, Oleg
    USC Institute Creat Technology, CA 90094 USA.
    Bolas, Mark
    USC Institute Creat Technology, CA 90094 USA.
    Debevec, Paul
    USC Institute Creat Technology, CA 90094 USA.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Time-Offset Conversations on a Life-Sized Automultiscopic Projector Array (2016). In: Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2016), IEEE, 2016, p. 927-935. Conference paper (Refereed)
    Abstract [en]

    We present a system for creating and displaying interactive life-sized 3D digital humans based on pre-recorded interviews. We use 30 cameras and an extensive list of questions to record a large set of video responses. Users access videos through a natural conversation interface that mimics face-to-face interaction. Recordings of answers, listening and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. The interview subjects are rendered using flowed light fields and shown life-size on a special rear-projection screen with an array of 216 video projectors. The display allows multiple users to see different 3D perspectives of the subject in proper relation to their viewpoints, without the need for stereo glasses. The display is effective for interactive conversations since it provides 3D cues such as eye gaze and spatial hand gestures.

  • 22.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Lab, ISTI-CNR, Italy.
    Gardner, Andrew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Photorealistic rendering of mixed reality scenes (2015). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 2, p. 643-665. Article in journal (Refereed)
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

  • 23.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Dahlin, Johan
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kok, Manon
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Schön, Thomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology. Uppsala Universitet.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time video based lighting using GPU raytracing (2014). In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), IEEE Signal Processing Society, 2014. Conference paper (Refereed)
    Abstract [en]

    The recent introduction of HDR video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU rendering system built on the NVIDIA OptiX framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons.
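
    One of the strategies compared above, importance sampling an environment map by pixel luminance, can be sketched by inverting the discrete CDF over pixels. Solid-angle weighting over the sphere is omitted for brevity, and the names are illustrative.

    ```python
    import numpy as np

    def sample_env_map(luminance, n_samples, rng):
        """Draw pixel coordinates of an environment map with probability
        proportional to pixel luminance (discrete CDF inversion)."""
        p = luminance.ravel() / luminance.sum()
        cdf = np.cumsum(p)
        idx = np.searchsorted(cdf, rng.random(n_samples))
        idx = np.minimum(idx, p.size - 1)   # guard against round-off
        rows, cols = np.unravel_index(idx, luminance.shape)
        return rows, cols, p[idx]           # pdf needed by the estimator

    rng = np.random.default_rng(5)
    rows, cols, pdf = sample_env_map(rng.random((64, 128)), 256, rng)
    ```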

  • 24.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR AG.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unified HDR reconstruction from raw CFA data (2013). In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler, IEEE, 2013, p. 1-9. Conference paper (Refereed)
    Abstract [en]

    HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.
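
    The local polynomial likelihood fit at the heart of the reconstruction can be illustrated in 1D: a weighted least-squares polynomial fit around the reconstruction point, with weights combining spatial proximity and per-sample inverse noise variance. This is a sketch with assumed names, not the full CFA-aware reconstruction.

    ```python
    import numpy as np

    def local_poly_estimate(positions, samples, variances, x0, h=1.5, order=1):
        """Estimate the signal at x0 by fitting a polynomial to nearby
        samples, each weighted by spatial proximity (Gaussian window of
        scale h) and by its inverse noise variance."""
        d = (positions - x0) / h
        w = np.exp(-0.5 * d**2) / variances       # spatial * noise weights
        A = np.vander(positions - x0, order + 1)  # polynomial design matrix
        Aw = A * w[:, None]
        coeffs = np.linalg.solve(A.T @ Aw, Aw.T @ samples)
        return coeffs[-1]  # constant term = fitted value at x0
    ```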

  • 25.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    AG Spheron VR, Germany.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A unified framework for multi-sensor HDR video reconstruction (2014). In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no 2, p. 203-215. Article in journal (Refereed)
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

  • 26.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time HDR video reconstruction for multi-sensor systems (2012). In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65. Conference paper (Refereed)
    Abstract [en]

    HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video speed performance for an experimental HDR video platform consisting of four 2336x1756 pixel high quality CCD sensors imaging the scene through a common optical system. ND-filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.

  • 27.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ljung, Patric
    Siemens.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering (2012). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 3, p. 447-462. Article in journal (Refereed)
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.

  • 28.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Schön, Thomas B.
    Division of Systems and Control, Department of Information Technology, Uppsala University.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Pseudo-Marginal Metropolis Light Transport (2015). In: SA '15: SIGGRAPH Asia 2015 Technical Briefs, ACM, 2015, p. 13:1-13:4. Conference paper (Other academic)
    Abstract [en]

    Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering scenes with heterogeneous participating media using Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray-marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise, for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times compared to previous techniques. Our method is robust, and can easily be implemented in a modern renderer.
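
    The pseudo-marginal construction itself is compact: a Metropolis-Hastings chain stays exact when the target density in the acceptance ratio is replaced by a positive unbiased estimate, provided the current state's estimate is carried along rather than recomputed. A generic sketch follows (a symmetric proposal is assumed; this is not the renderer itself).

    ```python
    import numpy as np

    def pseudo_marginal_mh(estimate_density, propose, x0, n_steps, rng):
        """Metropolis-Hastings where the intractable target density is
        replaced by a positive unbiased estimator. Crucially, the noisy
        estimate for the current state is reused between iterations."""
        x, z = x0, estimate_density(x0, rng)
        chain = [x0]
        for _ in range(n_steps):
            x_new = propose(x, rng)
            z_new = estimate_density(x_new, rng)  # fresh noisy estimate
            if rng.random() < min(1.0, z_new / z):
                x, z = x_new, z_new
            chain.append(x)
        return np.array(chain)
    ```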

  • 29.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Moeller, Torsten
    Simon Fraser University, Vancouver.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Estimation and Modeling of Actual Numerical Errors in Volume Rendering (2010). In: Computer Graphics Forum, ISSN 0167-7055, Vol. 29, no 3, p. 893-902. Article in journal (Refereed)
    Abstract [en]

    In this paper we study the comprehensive effects on volume rendered images due to numerical errors caused by the use of finite precision for data representation and processing. To estimate actual error behavior we conduct a thorough study using a volume renderer implemented with arbitrary floating-point precision. Based on the experimental data we then model the impact of floating-point pipeline precision, sampling frequency and fixed-point input data quantization on the fidelity of rendered images. We introduce three models: an average model, which does not adapt to different data or varying transfer functions, as well as two adaptive models that take the intricacies of a new data set and transfer function into account by adapting themselves given a few rendered images. We also test and validate our models on new data that was not used during model building.

  • 30.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    ABC - BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces2013In: Eurographics 24th Symposium on Rendering: Posters, 2013Conference paper (Other academic)
    Abstract [en]

    Glossy surface reflectance is hard to model accurately using traditional parametric BRDF models. An alternative is provided by data-driven reflectance models; however, these models offer less user control and generally result in lower efficiency. In our work we propose two new lightweight parametric BRDF models for accurate modeling of glossy surface reflectance, one inspired by Rayleigh-Rice theory for optically smooth surfaces and one inspired by microfacet theory. We base our models on a thorough study of the scattering behavior of measured reflectance data from the MERL database. The study focuses on two key aspects of BRDF models: parameterization and scatter distribution. We propose a new scattering distribution for glossy BRDFs inspired by the ABC model for surface statistics of optically smooth surfaces. Based on the study we consider two parameterizations, one based on microfacet theory using the halfway vector and one inspired by the parameterization of the Rayleigh-Rice BRDF model, using the projected deviation vector. To enable efficient rendering we also show how the new models can be approximately sampled for importance sampling of the scattering integral.

  • 31.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces2012In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no 1Article in journal (Refereed)
    Abstract [en]

    This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.
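
    To make the scatter distribution concrete, a hedged sketch: one common ABC-style curve, A/(1 + Bx)^C, fitted to a (synthetic stand-in) measured 1D scattering lobe by nonlinear least squares. The functional form, its argument and the fitting setup are assumptions chosen to mirror the ABC surface-statistics model, not the article's exact equations.

        import numpy as np
        from scipy.optimize import curve_fit

        def abc_lobe(x, A, B, C):
            # Amplitude A, width B and falloff exponent C give a sharp
            # specular peak with a slowly decaying tail; x would be a
            # function of the half vector or projected deviation vector,
            # depending on the chosen parameterization.
            return A / (1.0 + B * x) ** C

        x = np.linspace(0.0, 1.0, 200)
        rng = np.random.default_rng(2)
        measured = abc_lobe(x, 10.0, 400.0, 0.8) \
                   + 0.01 * rng.standard_normal(200)
        (A, B, C), _ = curve_fit(abc_lobe, x, measured,
                                 p0=(1.0, 100.0, 1.0), maxfev=20000)
        print(A, B, C)    # should land near (10, 400, 0.8)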

  • 32.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    HDR Light Probe Sequence Resampling for Realtime Incident Light Field Rendering2009In: Proceedings - SCCG 2009: 25th Spring Conference on Computer Graphics / [ed] Helwig Hauser, New York, USA: ACM New York, 2009, p. 43-50Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for resampling a sequence of high dynamic range light probe images into a representation of Incident Light Field (ILF) illumination which enables realtime rendering. The light probe sequences are captured at varying positions in a real world environment using a high dynamic range video camera pointed at a mirror sphere. The sequences are then resampled to a set of radiance maps in a regular three-dimensional grid before projection onto spherical harmonics. The capture locations and the number of samples in the original data make the sequences inconvenient for direct use in rendering, so resampling is necessary to produce an efficient data structure. Each light probe represents a large set of incident radiance samples from different directions around the capture location. Under the assumption that the spatial volume in which the capture was performed has no internal occlusion, the radiance samples are projected through the volume along their corresponding directions in order to build a new set of radiance maps at selected locations, in this case a three-dimensional grid. The resampled data is projected onto a spherical harmonic basis to allow for realtime lighting of synthetic objects inside the incident light field.
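
    A minimal sketch of the final projection step, assuming an equiangular radiance map and SciPy's complex spherical harmonics (renderers typically use a real SH basis; the grid and band limit here are illustrative):

        import numpy as np
        from scipy.special import sph_harm

        def project_to_sh(radiance, l_max):
            # radiance: (n_theta, n_phi) incident-radiance samples over the
            # sphere. Numerically integrates against each basis function
            # with the sin(theta) area weight to get the SH coefficients.
            n_t, n_p = radiance.shape
            theta = (np.arange(n_t) + 0.5) * np.pi / n_t          # polar
            phi = (np.arange(n_p) + 0.5) * 2.0 * np.pi / n_p      # azimuth
            d_omega = (np.pi / n_t) * (2.0 * np.pi / n_p) * np.sin(theta)[:, None]
            T, P = np.meshgrid(theta, phi, indexing="ij")
            coeffs = {}
            for l in range(l_max + 1):
                for m in range(-l, l + 1):
                    Y = sph_harm(m, l, P, T)  # SciPy order: (m, l, azimuth, polar)
                    coeffs[(l, m)] = np.sum(radiance * np.conj(Y) * d_omega)
            return coeffs

        env = np.random.default_rng(3).uniform(size=(64, 128))  # toy radiance map
        sh = project_to_sh(env, l_max=2)                        # 9 coefficients
        print(sh[(0, 0)])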

  • 33.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Emadi, Mohammad
    Qualcomm Technologies Inc., San Jose, CA, USA.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Afshari, Ehsan
    Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA.
    On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence2017In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650Article in journal (Refereed)
    Abstract [en]

    In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.
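
    For reference, a compact OMP implementation of the kind such guarantees apply to; the dictionary size, sparsity level and noise level below are illustrative.

        import numpy as np

        def omp(D, y, k):
            # Orthogonal Matching Pursuit: greedily pick the k atoms
            # (unit-norm columns of D) most correlated with the residual,
            # re-solving a least-squares fit on the support each round.
            residual = y.copy()
            support = []
            for _ in range(k):
                corr = np.abs(D.T @ residual)
                corr[support] = 0.0            # never re-pick an atom
                support.append(int(np.argmax(corr)))
                Ds = D[:, support]
                coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
                residual = y - Ds @ coef
            x = np.zeros(D.shape[1])
            x[support] = coef
            return x, sorted(support)

        rng = np.random.default_rng(4)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
        x_true = np.zeros(256)
        x_true[[10, 50, 200]] = [1.0, -0.5, 2.0]   # 3-sparse signal
        y = D @ x_true + 0.01 * rng.standard_normal(64)
        x_hat, sup = omp(D, y, k=3)
        print(sup)                                 # support recovery check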

  • 34.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos2019In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no 3, p. 1-18, article id 23Article in journal (Refereed)
    Abstract [en]

    In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
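
    A sketch of the ensemble-selection step this implies: code a data point with every member and keep the dictionary that meets an error budget with the fewest coefficients. Hard thresholding in random orthonormal bases stands in for the trained MDE members; names, sizes and the budget are illustrative assumptions.

        import numpy as np

        def sparsest_fit(point, dictionaries, tau):
            # Try every dictionary; greedily add the largest-magnitude
            # coefficients until the reconstruction error drops below tau,
            # then keep the member that needed the fewest coefficients.
            best_idx, best_c = None, None
            for i, D in enumerate(dictionaries):   # D orthonormal here
                c = D.T @ point
                kept = np.zeros_like(c)
                for j in np.argsort(-np.abs(c)):
                    kept[j] = c[j]
                    if np.linalg.norm(point - D @ kept) <= tau:
                        break
                if best_c is None or np.count_nonzero(kept) < np.count_nonzero(best_c):
                    best_idx, best_c = i, kept
            return best_idx, best_c

        rng = np.random.default_rng(11)
        ens = [np.linalg.qr(rng.standard_normal((64, 64)))[0] for _ in range(4)]
        point = ens[2][:, :3] @ np.array([3.0, -2.0, 1.0])  # 3-sparse in member 2
        idx, c = sparsest_fit(point, ens, tau=1e-6)
        print(idx, np.count_nonzero(c))                     # -> 2 3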

  • 35.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Compressive Image Reconstruction in Reduced Union of Subspaces2015In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44Article in journal (Refereed)
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
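
    The 2D-to-1D conversion rests on the standard identity vec(D1 S D2^T) = (D2 ⊗ D1) vec(S), with column-major vectorization; the snippet below verifies it numerically, which is what lets any 1D sparse solver operate on vectorized patches. Sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        D1 = rng.standard_normal((8, 16))   # dictionary on patch rows
        D2 = rng.standard_normal((8, 16))   # dictionary on patch columns
        S = np.zeros((16, 16))
        S[2, 5], S[9, 1] = 1.0, -0.3        # sparse 2D coefficient block

        X = D1 @ S @ D2.T                   # 2D synthesis model of a patch

        # Equivalent 1D form: a single Kronecker-product dictionary acting
        # on the column-stacked (order="F") coefficient vector.
        lhs = X.reshape(-1, order="F")
        rhs = np.kron(D2, D1) @ S.reshape(-1, order="F")
        print(np.allclose(lhs, rhs))        # True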

  • 36.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination2011In: Proceedings of SGRAD 2011 / [ed] Thomas Larsson, Lars Kjelldahl, Kai-Mikael Jää-Aro, Royal Institute of Technology, Stockholm, 2011, p. 27-34Conference paper (Refereed)
    Abstract [en]

    We present a framework for generating, compressing and rendering Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered into low-frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of the data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA-encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method to fast, high-quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high-quality images describing the full global illumination in a scene.
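
    A minimal CPCA sketch under the scheme described (cluster the per-point radiance rows, then run a separate PCA in each cluster); the clustering criterion and all sizes are simplified assumptions, not the paper's implementation.

        import numpy as np
        from sklearn.cluster import KMeans

        def cpca_encode(slf, n_clusters, n_components):
            # slf: (points, directions) matrix; each row stores one surface
            # point's radiance over all sampled directions. Cluster rows,
            # then keep a few principal components per cluster.
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(slf)
            model = []
            for c in range(n_clusters):
                block = slf[labels == c]
                mean = block.mean(axis=0)
                _, _, Vt = np.linalg.svd(block - mean, full_matrices=False)
                basis = Vt[:n_components]            # principal directions
                coeffs = (block - mean) @ basis.T    # per-point projections
                model.append((mean, basis, coeffs))
            return labels, model

        slf = np.random.default_rng(6).standard_normal((1000, 128))
        labels, model = cpca_encode(slf, n_clusters=8, n_components=4)
        mean, basis, coeffs = model[0]
        block_recon = coeffs @ basis + mean          # decode one cluster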

  • 37.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning based compression for real-time rendering of surface light fields2013In: Siggraph 2013 Posters, ACM Press, 2013Conference paper (Other academic)
    Abstract [en]

    Photo-realistic image synthesis in real-time is a key challenge in computer graphics. A number of techniques where the light transport in a scene is pre-computed, compressed and used for real-time image synthesis have been proposed. In this work, we extend this idea and present a technique where the radiance distribution in a scene, including arbitrarily complex materials and light sources, is pre-computed using photo-realistic rendering techniques and stored as surface light fields (SLF) at each surface. An SLF describes the full appearance of each surface in a scene as a 4D function over the spatial and angular domains. An SLF is a complex data set with a large memory footprint often in the order of several GB per object in the scene. The key contribution in this work is a novel approach for compression of surface light fields that enables real-time rendering of complex scenes. Our learning-based compression technique is based on exemplar orthogonal bases (EOB), and trains a compact dictionary of full-rank orthogonal basis pairs with sparse coefficients. Our results outperform the widely used CPCA method in terms of storage cost, visual quality and rendering speed. Compared to PRT techniques for real-time global illumination, our approach is limited to static scenes but can represent high frequency materials and any type of light source in a unified framework.

  • 38.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes2013In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013Conference paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning-based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, which is not possible in real time using previous methods.
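
    A hedged sketch of the sparse projection that orthogonal basis pairs make cheap: project the data block onto the pair, hard-threshold to the k largest coefficients, and reconstruct with two matrix products and no iterative solver. The thresholding rule and sizes are assumptions, and random orthonormal matrices stand in for the trained basis pairs.

        import numpy as np

        def eob_project(X, U, V, k):
            # (U, V) is one orthogonal basis pair; orthogonality makes both
            # analysis and synthesis a pair of matrix products.
            S = U.T @ X @ V                           # coefficient block
            thresh = np.sort(np.abs(S), axis=None)[-k]
            S[np.abs(S) < thresh] = 0.0               # keep ~k coefficients
            return U @ S @ V.T                        # reconstruction

        rng = np.random.default_rng(9)
        U, _ = np.linalg.qr(rng.standard_normal((16, 16)))
        V, _ = np.linalg.qr(rng.standard_normal((16, 16)))
        X = rng.standard_normal((16, 16))
        X_hat = eob_project(X, U, V, k=32)            # 32 of 256 coefficients
        print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))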

  • 39.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    ON NONLOCAL IMAGE COMPLETION USING AN ENSEMBLE OF DICTIONARIES2016In: 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), IEEE, 2016, p. 2519-2523Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.

  • 40.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Guillemot, Christine
    INRIA, France.
    Multi-Shot Single Sensor Light Field Camera Using a Color Coded Mask2018In: 2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), IEEE COMPUTER SOC, 2018, p. 226-230Conference paper (Refereed)
    Abstract [en]

    We present a compressed sensing framework for reconstructing the full light field of a scene captured using a single-sensor consumer camera. To achieve this, we use a color coded mask in front of the camera sensor. To further enhance the reconstruction quality, we propose to utilize multiple shots, obtained by moving the mask or the sensor randomly between exposures. The compressed sensing framework relies on a dictionary trained on a light field data set. Numerical simulations show significant improvements in reconstruction quality over a similar coded aperture system for light field capture.
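
    A toy version of the measurement model only, assuming one random per-view code per shot as a stand-in for physically shifting the color-coded mask; the dictionary-based recovery (solving y = Φx for x sparse in the trained dictionary) is omitted.

        import numpy as np

        rng = np.random.default_rng(7)
        n_views, h, w = 9, 16, 16                 # toy 3x3 light field
        lf = rng.uniform(size=(n_views, h, w))    # unknown light field

        def shoot(lf, mask):
            # Single-sensor coded capture: each angular view is modulated
            # by its mask pattern and all views sum on the sensor.
            return sum(m * v for m, v in zip(mask, lf))

        shots, masks = [], []
        for _ in range(3):                        # multiple shots: each uses
            mask = rng.uniform(size=(n_views, h, w))  # a re-randomized code
            masks.append(mask)
            shots.append(shoot(lf, mask))
        print(np.shape(shots))                    # (3, 16, 16) measurements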

  • 41.
    Staadt, Oliver
    et al.
    Univ Rostock, Germany.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Fjeld, Morten
    Chalmers Univ Technol, Sweden.
    Fratarcangeli, Marco
    Chalmers Univ Technol, Sweden.
    Sjolie, Daniel
    Univ Gothenburg, Sweden.
    Foreword to the Special Section on the 23rd ACM symposium on virtual reality software and technology 20172018In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 77, p. A3-A4Article in journal (Other academic)
    Abstract [en]

    n/a

  • 42.
    Tongbuasirilai, Tanaboon
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kurt, Murat
    International Computer Institute, Ege University, Izmir, Turkey.
    Compact and intuitive data-driven BRDF models2019In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, p. 1-18Article in journal (Refereed)
    Abstract [en]

    Measured materials are rapidly becoming a core component in the photo-realistic image synthesis pipeline. The reason is that data-driven models can easily capture the underlying, fine details that represent the visual appearance of materials, which can be difficult or even impossible to model by hand. There are, however, a number of key challenges that need to be solved in order to enable efficient capture, representation and interaction with real materials. This paper presents two new data-driven BRDF models specifically designed for 1D separability. The proposed 3D and 2D BRDF representations can be factored into three or two 1D factors, respectively, while accurately representing the underlying BRDF data with only small approximation error. We evaluate the models using different parameterizations with different characteristics and show that both the BRDF data itself and the resulting renderings yield more accurate results in terms of both numerical errors and visual results compared to previous approaches. To demonstrate the benefit of the proposed factored models, we present a new Monte Carlo importance sampling scheme and give examples of how they can be used for efficient BRDF capture and intuitive editing of measured materials.
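
    To illustrate the separability idea only (not the paper's parameterizations or error metrics): the best 1D-separable approximation of a tabulated 2D BRDF slice is its rank-1 factorization, read directly off the SVD.

        import numpy as np

        rng = np.random.default_rng(8)
        # Toy near-separable 2D slice: outer product of two 1D curves
        # plus a little noise.
        f = np.exp(-np.linspace(0.0, 5.0, 90))
        g = 1.0 + 0.1 * np.cos(np.linspace(0.0, np.pi, 90))
        slice_2d = np.outer(f, g) + 0.001 * rng.uniform(size=(90, 90))

        U, s, Vt = np.linalg.svd(slice_2d)
        f1 = U[:, 0] * np.sqrt(s[0])     # first 1D factor
        f2 = Vt[0] * np.sqrt(s[0])       # second 1D factor
        approx = np.outer(f1, f2)        # separable reconstruction
        print(np.linalg.norm(slice_2d - approx) / np.linalg.norm(slice_2d))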

  • 43.
    Tongbuasirilai, Tanaboon
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kurt, Murat
    Uluslararası Bilgisayar Enstitüsü, Ege Üniversitesi, Turkey.
    Efficient BRDF Sampling Using Projected Deviation Vector Parameterization2017In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 153-158Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach for efficient sampling of isotropic Bidirectional Reflectance Distribution Functions (BRDFs). Our approach builds upon a new parameterization, the Projected Deviation Vector parameterization, in which isotropic BRDFs can be described by two 1D functions. We show that BRDFs can be efficiently and accurately measured in this space using simple mechanical measurement setups. To demonstrate the utility of our approach, we perform a thorough numerical evaluation and show that the BRDFs reconstructed from measurements along the two 1D bases produce rendering results that are visually comparable to the reference BRDF measurements which are densely sampled over the 4D domain described by the standard hemispherical parameterization.

  • 44.
    Tsirikoglou, Apostolia
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ekeberg, Simon
    Swiss International AB, Sweden.
    Vikström, Johan
    Swiss International AB, Sweden.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    S(wi)SS: A flexible and robust sub-surface scattering shader2014In: Proceedings of SIGRAD 2014 / [ed] Morten Fjeld, 2014Conference paper (Refereed)
    Abstract [en]

    S(wi)SS is a new, flexible, artist-friendly multi-layered sub-surface scattering shader that accurately simulates sub-surface scattering for a large range of translucent materials. It is a physically motivated multi-layered approach where the sub-surface scattering effect is generated using one to three layers. It enables seamless mixing of the classical dipole, the better dipole and the quantized diffusion reflectance models in the sub-surface scattering layers, and additionally provides the scattering from front and back illumination, as well as all the BSDF components, in separate render channels, enabling the artist to either use them in a physically accurate way or tweak them independently during compositing to produce the desired result. To demonstrate the usefulness of our approach, we show a set of high quality rendering results from different user scenarios.

  • 45.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Incident Light Fields2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Image based lighting, (IBL), is a computer graphics technique for creating photorealistic renderings of synthetic objects such that they can be placed into real world scenes. IBL has been widely recognized and is today used in commercial production pipelines. However, the current techniques only use illumination captured at a single point in space. This means that traditional IBL cannot capture or recreate effects such as cast shadows, shafts of light or other important spatial variations in the illumination. Such lighting effects are, in many cases, artistically created or are there to emphasize certain features, and are therefore a very important part of the visual appearance of a scene.

    This thesis and the included papers present methods that extend IBL to allow for capture and rendering with spatially varying illumination. This is accomplished by measuring the light field incident onto a region in space, called an Incident Light Field, (ILF), and using it as illumination in renderings. This requires the illumination to be captured at a large number of points in space instead of just one. The complexity of the capture methods and rendering algorithms is then significantly increased.

    The technique for measuring spatially varying illumination in real scenes is based on capture of High Dynamic Range, (HDR), image sequences. For efficient measurement, the image capture is performed at video frame rates. The captured illumination information in the image sequences is processed such that it can be used in computer graphics rendering. By extracting high intensity regions from the captured data and representing them separately, this thesis also describes a technique for increasing rendering efficiency and methods for editing the captured illumination, for example artificially moving individual light sources or turning them on and off.

  • 46.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Laboratory at ISTI-CNR, Italy.
    Mantiuk, Rafal
    Computer Laboratory, University of Cambridge, UK.
    Eilertsen, Gabriel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    The HDR-video pipeline: From capture and image reconstruction to compression and tone mapping2016Conference paper (Other academic)
    Abstract [en]

    High dynamic range (HDR) video technology has gone through remarkable developments over the last few years; HDR-video cameras are being commercialized, new algorithms for color grading and tone mapping specifically designed for HDR-video have recently been proposed, and the first open source compression algorithms for HDR-video are becoming available. HDR-video represents a paradigm shift in imaging and computer graphics, which has generated and will continue to generate a range of both new research challenges and applications. This intermediate-level tutorial will give an in-depth overview of the full HDR-video pipeline and present several examples of state-of-the-art algorithms and technology in HDR-video capture, tone mapping, compression and specific applications in computer graphics.

  • 47.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology.
    An optical system for single image environment maps2007In: SIGGRAPH '07 ACM SIGGRAPH 2007 posters, ACM Press, 2007Conference paper (Refereed)
    Abstract [en]

    We present an optical setup for capturing a full 360° environment map in a single image snapshot. The setup, which can be used with any camera device, consists of a curved mirror swept around a negative lens, and is suitable for capturing environment maps and light probes. The setup achieves good sampling density and uniformity for all directions in the environment.

  • 48.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR, Germany.
    Kaiser, Gunnar
    SpheronVR, Germany.
    Next Generation Image Based Lighting using HDR Video2011In: Proceeding SIGGRAPH '11 ACM SIGGRAPH 2011 Talks, ACM Special Interest Group on Computer Science Education, 2011, p. article no 60-Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recently developed system pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

    1.) Capture - The scene capture is based on a 4MPixel global shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1], and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high intensity regions in the scene.

    3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view dependent textures are then processed and compactly stored in an adaptive data structure.

    4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to that of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

    We believe that the rapid development of high quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences and for making use of the abundance of radiance data that is going to be available.

  • 49.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ollila, Mark
    Integrated Vision Products AB, Sweden.
    Johannesson, Mattias
    Integrated Vision Products AB, Sweden.
    A Real Time Light Probe2004In: The 25th Eurographics Annual Conference 2004 Short papers and Interactive Applications, Grenoble, France, 2004Conference paper (Refereed)
    Abstract [en]

    We present a novel system capable of capturing high dynamic range (HDR) Light Probes at video speed. Each Light Probe frame is built from an individual full set of exposures, all of which are captured within the frame time. The exposures are processed and assembled into a mantissa-exponent representation image within the camera unit before output, and then streamed to a standard PC. As an example, the system is capable of capturing Light Probe Images with a resolution of 512x512 pixels using a set of 10 exposures covering 15 f-stops at a frame rate of up to 25 final HDR frames per second. The system is built around commercial special-purpose camera hardware with on-chip programmable image processing logic and tightly integrated frame buffer memory, and the algorithm is implemented as custom downloadable microcode software.
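
    For intuition, a generic HDR-merge sketch: the camera's actual on-chip mantissa-exponent assembly is custom microcode, so the linear response, hat weighting and rejection thresholds below are assumptions, not the system's algorithm.

        import numpy as np

        def assemble_hdr(ldr_frames, exposure_times):
            # Merge a bracketed exposure set into one radiance image:
            # scale each frame by 1/exposure_time and average with a hat
            # weight that discounts near-black and near-saturated pixels.
            frames = np.asarray(ldr_frames, dtype=np.float64)   # in [0, 1]
            t = np.asarray(exposure_times, dtype=np.float64)[:, None, None]
            w = 1.0 - np.abs(2.0 * frames - 1.0)                # hat weights
            w[(frames < 0.005) | (frames > 0.995)] = 0.0        # reject extremes
            return (w * frames / t).sum(0) / np.maximum(w.sum(0), 1e-9)

        # 10 exposures about 1.5 f-stops apart, covering ~15 stops as above
        times = [2.0 ** (-1.5 * i) for i in range(10)]
        rng = np.random.default_rng(10)
        radiance = 10.0 ** rng.uniform(-2.0, 2.0, (4, 4))       # toy scene
        frames = [np.clip(radiance * t, 0.0, 1.0) for t in times]
        hdr = assemble_hdr(frames, times)
        print(np.max(np.abs(hdr - radiance) / radiance))        # small error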

  • 50.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Free Form Incident Light Fields2008In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 27, no 4, p. 1293-1301Article in journal (Refereed)
    Abstract [en]

    This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range, HDR, video camera system with position tracking. Light samples are rearranged into 4-D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
