Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems (Realistic3D). ORCID iD: 0000-0002-4967-3033
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems (Realistic3D). ORCID iD: 0000-0003-3751-6089
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems (Realistic3D)
2016 (English). In: 3DTV-Conference, IEEE Computer Society, 2016, article id 7548887. Conference paper, published paper (refereed).
Abstract [en]

Camera calibration methods are commonly evaluated on cumulative reprojection error metrics, on disparate one-dimensional datasets. To evaluate calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods has been evaluated on the camera parameters that affect view projection the most. As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion reach intrinsic and extrinsic parameter estimation accuracy equal to that of a standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
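As an aside on the metric mentioned above: reprojection error is typically computed by projecting known 3D points through the estimated camera model and measuring the distance to their observed image positions. A minimal pure-Python sketch, using a simplified single-focal-length pinhole model and synthetic points (not the paper's dataset or code; all names and numbers here are illustrative):

```python
import math

def project(point, f, cx, cy):
    """Project a 3D camera-frame point with a simple pinhole model."""
    X, Y, Z = point
    return (f * X / Z + cx, f * Y / Z + cy)

def rms_reprojection_error(points_3d, observed_2d, f, cx, cy):
    """RMS distance between projected and observed image points."""
    sq_sum = 0.0
    for p3, (u_obs, v_obs) in zip(points_3d, observed_2d):
        u, v = project(p3, f, cx, cy)
        sq_sum += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return math.sqrt(sq_sum / len(points_3d))

# Synthetic check: points "observed" exactly where an f=800 model predicts,
# then evaluated with a slightly wrong focal length estimate (f=790).
pts = [(0.1, 0.2, 2.0), (-0.3, 0.1, 3.0), (0.0, -0.2, 2.5)]
obs = [project(p, 800.0, 320.0, 240.0) for p in pts]
err = rms_reprojection_error(pts, obs, 790.0, 320.0, 240.0)
print(round(err, 2))  # about 1.0 px of RMS error from the 10 px focal bias
```

A ground-truth assessment like the paper's differs from this metric in that it compares the estimated parameters themselves (position, direction) against known values, rather than only the image-space residual.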

Place, publisher, year, edition, pages
IEEE Computer Society, 2016. Article id 7548887.
Keywords [en]
Camera calibration, multi-view image dataset, 2D camera array, self-calibration, calibration assessment
National Category
Signal Processing; Media and Communication Technology
Identifiers
URN: urn:nbn:se:miun:diva-27960
DOI: 10.1109/3DTV.2016.7548887
ISI: 000390840500006
Scopus ID: 2-s2.0-84987849952
Local ID: STC
ISBN: 978-1-5090-3313-3 (print)
OAI: oai:DiVA.org:miun-27960
DiVA id: diva2:938875
Conference
2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2016; Hamburg, Germany; 4 July 2016 through 6 July 2016. Category number CFP1655B-ART; Code 123582.
Funder
Knowledge Foundation, 20140200
Available from: 2016-06-17. Created: 2016-06-16. Last updated: 2018-05-15. Bibliographically approved.
In thesis
1. Multi-Camera Light Field Capture: Synchronization, Calibration, Depth Uncertainty, and System Design
2018 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

The digital camera is the technological counterpart to the human eye, enabling the observation and recording of events in the natural world. Since modern life increasingly depends on digital systems, cameras and especially multiple-camera systems are being widely used in applications that affect our society, ranging from multimedia production and surveillance to self-driving robot localization. The rising interest in multi-camera systems is mirrored by the rising activity in Light Field research, where multi-camera systems are used to capture Light Fields - the angular and spatial information about light rays within a 3D space. 

The purpose of this work is to gain a more comprehensive understanding of how cameras collaborate and produce consistent data as a multi-camera system, and to build a multi-camera Light Field evaluation system. This work addresses three problems related to the process of multi-camera capture: first, whether multi-camera calibration methods can reliably estimate the true camera parameters; second, what are the consequences of synchronization errors in a multi-camera system; and third, how to ensure data consistency in a multi-camera system that records data with synchronization errors. Furthermore, this work addresses the problem of designing a flexible multi-camera system that can serve as a Light Field capture testbed.

The first problem is solved by conducting a comparative assessment of widely available multi-camera calibration methods. A special dataset is recorded, giving known constraints on camera ground-truth parameters to use as reference for calibration estimates. The second problem is addressed by introducing a depth uncertainty model that links the pinhole camera model and synchronization error to the geometric error in the 3D projections of recorded data. The third problem is solved for the color-and-depth multi-camera scenario, by using a proposed estimation of the depth camera synchronization error and correction of the recorded depth maps via tensor-based interpolation. The problem of designing a Light Field capture testbed is addressed empirically, by constructing and presenting a multi-camera system based on off-the-shelf hardware and a modular software framework.
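The coupling between synchronization error and geometric error described above can be illustrated with a generic stereo back-of-the-envelope calculation. This is only an illustrative stand-in for the thesis's depth uncertainty model, using standard stereo depth-error propagation and made-up numbers: a point moving laterally at speed v during a synchronization offset dt shifts by v*dt between the two exposures, which perturbs the measured disparity and hence the triangulated depth Z = f*B/d.

```python
def depth_error_from_sync(Z, f_px, baseline, speed, dt):
    """Approximate depth error for a stereo pair with sync offset dt.

    A point moving laterally at `speed` (m/s) shifts by speed*dt metres
    between the two exposures. Projected at depth Z, this adds a disparity
    error of f_px * speed * dt / Z pixels; by the usual stereo error
    propagation, the induced depth error is roughly Z**2 / (f_px * baseline)
    times that disparity error.
    """
    disparity_error_px = f_px * speed * dt / Z
    return (Z ** 2 / (f_px * baseline)) * disparity_error_px

# Illustrative numbers: 1000 px focal length, 0.1 m baseline, a point at
# 2 m depth moving at 1 m/s, cameras offset by 5 ms.
dZ = depth_error_from_sync(Z=2.0, f_px=1000.0, baseline=0.1, speed=1.0, dt=0.005)
print(round(dZ, 3))  # 0.1 m of depth error from a 5 ms offset
```

Note that the terms cancel to dZ = Z * speed * dt / baseline, which makes the qualitative finding plausible: geometries that effectively enlarge the baseline-to-depth ratio (such as converged arrays) reduce the depth error caused by a given synchronization offset.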

The calibration assessment reveals that target-based and certain target-less calibration methods are relatively similar at estimating the true camera parameters. The results imply that for general-purpose multi-camera systems, target-less calibration is an acceptable choice. For high-accuracy scenarios, even commonly used target-based calibration approaches are insufficiently accurate. The proposed depth uncertainty model is used to show that converged multi-camera arrays are less sensitive to synchronization errors. The mean depth uncertainty of a camera system correlates to the rendered result in depth-based reprojection, as long as the camera calibration matrices are accurate. The proposed depthmap synchronization method is used to produce a consistent, synchronized color-and-depth dataset for unsynchronized recordings without altering the depthmap properties. Therefore, the method serves as a compatibility layer between unsynchronized multi-camera systems and applications that require synchronized color-and-depth data. Finally, the presented multi-camera system demonstrates a flexible, de-centralized framework where data processing is possible in the camera, in the cloud, and on the data consumer's side. The multi-camera system is able to act as a Light Field capture testbed and as a component in Light Field communication systems, because of the general-purpose computing and network connectivity support for each sensor, small sensor size, flexible mounts, hardware and software synchronization, and a segmented software framework. 
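The depth-map synchronization idea can be sketched with simple per-pixel linear interpolation between two depth frames. The thesis's method uses tensor-based interpolation driven by an estimated synchronization offset; the following is only a minimal illustrative stand-in with hypothetical timestamps:

```python
def interpolate_depth(depth_t0, depth_t1, t0, t1, t_color):
    """Synthesize a depth map aligned to a color frame at time t_color
    by per-pixel linear interpolation between frames at t0 and t1."""
    alpha = (t_color - t0) / (t1 - t0)
    return [
        [(1.0 - alpha) * d0 + alpha * d1 for d0, d1 in zip(row0, row1)]
        for row0, row1 in zip(depth_t0, depth_t1)
    ]

# Two 2x2 depth frames (metres) 40 ms apart; the color frame arrives
# 10 ms after the first depth frame, so alpha = 0.25.
d0 = [[2.0, 2.0], [3.0, 3.0]]
d1 = [[2.4, 2.0], [3.0, 2.6]]
aligned = interpolate_depth(d0, d1, t0=0.000, t1=0.040, t_color=0.010)
print(aligned)  # moving pixels shift a quarter of the way toward d1
```

The appeal of such a correction layer, as the summary above notes, is that downstream applications expecting synchronized color-and-depth input need not know that the underlying cameras were unsynchronized.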

Place, publisher, year, edition, pages
Sundsvall, Sweden: Mid Sweden University, 2018. p. 64
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 139
Keywords
Light field, Camera systems, Multiview, Synchronization, Camera calibration
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:miun:diva-33622
ISBN: 978-91-88527-56-1
Presentation
2018-06-15, L111, Holmgatan 10, Sundsvall, 13:00 (English)
Funder
Knowledge Foundation, 20140200
Note

At the time of the defence the following paper was unpublished: paper 3 manuscript.

Available from: 2018-05-16. Created: 2018-05-15. Last updated: 2018-05-16. Bibliographically approved.

Open Access in DiVA

AssessmentOfMultiCameraCalibrationAlgorithms (496 kB), 284 downloads
File information
File name: FULLTEXT01.pdf. File size: 496 kB. Checksum (SHA-512):
dd6f332d28354fad49222ff2bb76d99af69cd01ad741de07aa572d039788aa31474afb15dcc8af80dd6ca56faf416f9980a7855ec052813ac6f505ae962b7b98
Type: fulltext. Mimetype: application/pdf.

By author/editor
Dima, Elijs; Sjöström, Mårten; Olsson, Roger