Reconstruction of real-world scenes from a set of multiple images is a topic in Computer Vision and 3D Computer Graphics with many interesting applications. Space Carving is a powerful algorithm for shape reconstruction from arbitrary viewpoints, but it is computationally expensive and therefore cannot be used in applications such as 3D video, CSCW, or interactive 3D model creation. Attempts have been made to achieve real-time frame rates using PC cluster systems; while these provide sufficient performance, they are also expensive and less flexible. Approaches that use GPU hardware acceleration on single workstations achieve interactive frame rates for novel-view synthesis, but do not provide an explicit volumetric representation of the whole scene. The proposed approach presents a GPU hardware-accelerated framework for obtaining the volumetric photo hull of a dynamic 3D scene as seen from multiple calibrated cameras. High performance is achieved by first applying a shape-from-silhouette technique to obtain a tight initial volume for Space Carving; several further speed-up techniques are presented to increase efficiency. Since the entire processing is done on a single PC, the framework can be applied to mobile setups, enabling a wide range of further applications. The approach is implemented on the programmable vertex and fragment processors of current graphics hardware and compared to highly optimized CPU implementations. It is shown that the new approach can outperform the latter by more than one order of magnitude.
Note: the downloadable introduction was written specifically for this offer; its contents are only a subset of the thesis's actual introductory chapter.
Table of Contents
- 1 Introduction
- 1.1 Application
- 1.2 Classification
- 1.3 Performance
- 1.4 Contribution
- 1.5 Overview
- 2 Related Work
- 2.1 Shape from Silhouette
- 2.1.1 Image Segmentation
- 2.1.2 Foundations
- 2.1.3 Performance of View-Independent Reconstruction
- 2.1.3.1 CPU
- 2.1.3.2 GPU Acceleration
- 2.1.4 Performance of View-Dependent Reconstruction
- 2.1.4.1 CPU
- 2.1.4.2 GPU Acceleration
- 2.1.5 Conclusion
- 2.2 Shape from Photo-Consistency
- 2.2.1 Foundations
- 2.2.2 Performance of View-Independent Reconstruction
- 2.2.2.1 CPU
- 2.2.2.2 GPU Acceleration
- 2.2.3 Performance of View-Dependent Reconstruction
- 2.2.3.1 CPU
- 2.2.3.2 GPU Acceleration
- 2.2.4 Conclusion
- 3 Fundamentals
- 3.1 Camera Geometry
- 3.1.1 Pinhole Camera Model
- 3.1.2 Camera Parameters
- 3.1.2.1 Intrinsic Parameters
- 3.1.2.2 Extrinsic Parameters
- 3.1.2.3 Radial Lens Distortion
- 3.1.3 Camera Calibration
- 3.2 Light and Color
- 3.2.1 Light in Space
- 3.2.2 Light at a Surface
- 3.2.3 Occlusion and Shadows
- 3.2.4 Light at a Camera
- 3.2.5 Color
- 3.2.6 Color Representation
- 3.2.7 CCD Camera Color Imaging
- 3.3 3D Reconstruction from Multiple Views
- 3.3.1 Visual Hull Reconstruction by Shape from Silhouette
- 3.3.2 Photo Hull Reconstruction by Shape from Photo-Consistency
- 4 Basic Algorithm
- 4.1 Data
- 4.2 Reconstruction
- 5 Advanced Algorithm
- 5.1 Overview
- 5.2 Texture Processing
- 5.3 Destination Cameras
- 5.4 Reconstruction
- 5.5 Postprocessing
- 6 Experiments
- 6.1 System Setup
- 6.2 Implementation
- 6.3 Datasets
- 6.4 Performance
- 6.5 Quality
- 7 Discussion and Enhancements
- 7.1 Summary
- 7.2 Limitations
- 7.3 Future Work
- 7.4 Annotation
Objectives and Key Themes
The objective of this thesis is to develop a real-time, GPU-accelerated framework for 3D scene reconstruction from multiple calibrated cameras. The approach focuses on creating an explicit, view-independent volumetric model of the scene, a significant improvement over existing real-time methods that typically produce only partial, view-dependent representations. The framework combines shape-from-silhouette and shape-from-photo-consistency techniques for efficient and high-quality results.
- Real-time 3D scene reconstruction using a single PC.
- GPU acceleration for high-performance processing.
- Combination of shape-from-silhouette and shape-from-photo-consistency algorithms.
- Development of efficient techniques for handling visibility and occlusion.
- Evaluation of performance and reconstruction quality.
Chapter Summaries
Chapter 1: Introduction introduces the context of real-time 3D scene reconstruction, its applications (3D video, CSCW, interactive modeling), and related techniques. It highlights the contribution of the proposed approach, which combines real-time processing with the generation of a complete volumetric model.
Chapter 2: Related Work reviews existing methods for dynamic scene reconstruction using shape-from-silhouette and shape-from-photo-consistency, comparing their performance and hardware acceleration strategies (CPU vs. GPU). It analyzes the trade-offs between speed and quality.
Chapter 3: Fundamentals provides the theoretical background, covering camera geometry, light and color, and the principles of shape-from-silhouette and shape-from-photo-consistency, including the concepts of visual hull and photo hull.
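To make the camera-geometry background concrete, the following is a minimal sketch of the standard pinhole projection that such reconstruction pipelines rely on; the matrix names K, R and t follow the usual intrinsic/extrinsic convention and the numeric values are illustrative, not taken from the thesis.

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole camera.

    K : 3x3 intrinsic matrix (focal lengths, principal point)
    R : 3x3 rotation, t : 3-vector translation (extrinsic parameters)
    Radial lens distortion is ignored in this sketch.
    """
    X_cam = R @ X_world + t   # world -> camera coordinates
    x = K @ X_cam             # camera -> homogeneous image coordinates
    return x[:2] / x[2]       # perspective division -> pixel (u, v)

# Example: a camera 2 m away from the origin along its optical axis
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(project_point(np.array([0.1, 0.0, 0.0]), K, R, t))  # -> [360. 240.]
```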
Chapter 4: Basic Algorithm presents a general, hardware-independent algorithm for scene reconstruction, outlining the data requirements (camera parameters and image data) and the sequential steps for approximating the scene using shape-from-silhouette and shape-from-photo-consistency. It introduces an implicit visibility computation to improve efficiency.
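As an illustration of the two-stage idea summarized above, here is a minimal CPU-only sketch: voxels are first tested against all silhouettes (visual hull), and the surviving voxels are then checked for photo-consistency via a simple colour-variance threshold. The camera representation (dicts with K, R, t), the helper names, and the variance criterion are illustrative assumptions, not the thesis's actual implementation, which additionally handles visibility and occlusion.

```python
import numpy as np

def project(v, cam):
    """Pinhole projection of voxel centre v (3-vector) to integer pixel coords."""
    x = cam["K"] @ (cam["R"] @ v + cam["t"])
    return int(round(x[0] / x[2])), int(round(x[1] / x[2]))

def in_silhouette(pixel, mask):
    """True if the pixel lies inside the binary foreground mask."""
    u, v = pixel
    return 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1] and bool(mask[v, u])

def visual_hull(voxels, cameras, masks):
    """Shape from silhouette: keep voxels that project into every silhouette."""
    return [v for v in voxels
            if all(in_silhouette(project(v, c), m) for c, m in zip(cameras, masks))]

def photo_consistent(v, cameras, images, threshold=100.0):
    """Naive photo-consistency test: low colour variance across all views."""
    colours = []
    for c, img in zip(cameras, images):
        u, vv = project(v, c)
        if 0 <= vv < img.shape[0] and 0 <= u < img.shape[1]:
            colours.append(np.asarray(img[vv, u], dtype=float))
    return len(colours) >= 2 and np.var(np.stack(colours), axis=0).sum() < threshold

def reconstruct(voxels, cameras, images, masks):
    """Visual hull as a tight initial volume, then photo-consistency carving."""
    return [v for v in visual_hull(voxels, cameras, masks)
            if photo_consistent(v, cameras, images)]
```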
Chapter 5: Advanced Algorithm details the GPU implementation of the reconstruction framework, describing texture processing, the use of destination cameras for ray casting, and post-processing steps. It explains techniques like interleaved sampling and early ray carving to improve performance.
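The early-ray-carving idea mentioned in this chapter can be pictured as terminating each viewing ray at the first voxel that survives the consistency test. The following CPU-side sketch only mirrors that control flow on a hypothetical occupancy grid; it stands in for, and is not, the actual GPU fragment-program logic of the framework.

```python
import numpy as np

def first_hit(origin, direction, occupancy, voxel_size=1.0, max_steps=512):
    """March a ray through a binary occupancy volume and stop at the first
    occupied voxel (early ray termination). Returns the voxel index or None.

    occupancy : 3D boolean array representing the (already carved) volume.
    """
    d = direction / np.linalg.norm(direction)
    p = origin.astype(float)
    for _ in range(max_steps):
        idx = tuple((p // voxel_size).astype(int))
        if all(0 <= idx[k] < occupancy.shape[k] for k in range(3)):
            if occupancy[idx]:
                return idx          # earliest occupied voxel along the ray
        p += d * voxel_size         # advance one voxel length along the ray
    return None                     # ray left the volume without a hit
```

Interleaved sampling would, in the same spirit, distribute neighbouring rays over offset sample positions so that the per-ray cost drops while the combined result still covers the volume densely.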
Chapter 6: Experiments presents the experimental setup, implementation details, and results of performance and quality evaluations using different datasets and system configurations (CPU vs. GPU).
Keywords
Real-time 3D reconstruction, Space Carving, Graphics hardware acceleration, Shape from Silhouette, Shape from Photo-Consistency, Visual Hull, Photo Hull, GPU programming, Multi-view stereo, Volumetric modeling, 3D video, Computer Supported Cooperative Work (CSCW).
- Cite this work
- Christian Nitschke (Author), 2006, A Framework for Real-time 3D Reconstruction by Space Carving using Graphics Hardware, München, GRIN Verlag, https://www.grin.com/document/186283