8. Stereo processing examples
This chapter showcases examples of processing data sets acquired with specific instruments. For a general introduction, see the tutorial (Section 3).
Structure-from-Motion examples are in Section 9 (orbital images with no rig) and Section 10 (using a rig and robot images).
- 8.1. Guidelines for selecting stereo pairs
- 8.2. Mars Reconnaissance Orbiter HiRISE
- 8.3. Mars Reconnaissance Orbiter CTX
- 8.4. Mars Global Surveyor MOC-NA
- 8.5. Mars Exploration Rovers
- 8.6. K10
- 8.7. Lunar Reconnaissance Orbiter (LRO) NAC
- 8.8. Apollo 15 Metric Camera images
- 8.9. Mars Express High Resolution Stereo Camera (HRSC)
- 8.10. Cassini ISS NAC
- 8.11. Community Sensor Model
- 8.11.1. The USGS CSM Frame sensor
- 8.11.2. The USGS CSM linescan sensor
- 8.11.3. CSM Pushframe sensor
- 8.11.4. The USGS CSM SAR sensor for LRO Mini-RF
- 8.11.5. CSM cameras for MSL
- 8.11.5.1. Illustration
- 8.11.5.2. Fetch the images and metadata from PDS
- 8.11.5.3. Download the SPICE data
- 8.11.5.4. Set up ALE
- 8.11.5.5. Creation of CSM MSL cameras
- 8.11.5.6. Simple stereo example
- 8.11.5.7. Multi-day stereo
- 8.11.5.8. Mapprojection
- 8.11.5.9. MSL Mast cameras
- 8.11.5.10. Low-resolution MSL Nav cam images
- 8.11.6. CSM model state
- 8.11.7. CSM state embedded in ISIS cubes
- 8.12. Dawn (FC) Framing Camera
- 8.13. Kaguya Terrain Camera
- 8.14. LRO Mini-RF using ISIS camera models
- 8.15. Using PBS and SLURM
- 8.16. ASTER
- 8.17. DigitalGlobe
- 8.18. RPC camera models
- 8.19. PeruSat-1
- 8.20. Pleiades
- 8.21. SPOT5
- 8.22. SkySat Stereo and Video data
- 8.22.1. Stereo data
- 8.22.2. Video data
- 8.22.3. The input data
- 8.22.4. Initial camera models and a reference DEM
- 8.22.5. Bundle adjustment
- 8.22.6. Creating terrain models
- 8.22.7. Mosaicking and alignment
- 8.22.8. Alignment of cameras
- 8.22.9. Mapprojection
- 8.22.10. When things fail
- 8.22.11. Structure from motion
- 8.22.12. RPC models
- 8.22.13. Bundle adjustment using reference terrain
- 8.22.14. Floating the camera intrinsics
- 8.23. Declassified satellite images: KH-4B
- 8.23.1. Fetching the data
- 8.23.2. Resizing the images
- 8.23.3. Stitching the images
- 8.23.4. Fetching a ground truth DEM
- 8.23.5. Creating camera files
- 8.23.6. Bundle adjustment and stereo
- 8.23.7. Validation of cameras
- 8.23.8. Running stereo
- 8.23.9. DEM generation and alignment
- 8.23.10. Floating the intrinsics
- 8.23.11. Modeling the camera models as pinhole cameras with RPC distortion
- 8.24. Declassified satellite images: KH-7
- 8.25. Declassified satellite images: KH-9
- 8.26. Chang’e 3 landing camera
- 8.27. Shallow-water bathymetry
- 8.27.1. Software considerations
- 8.27.2. Physics considerations
- 8.27.3. Computation of the water-land threshold
- 8.27.4. Creation of masks based on the threshold
- 8.27.5. Determination of the water surface
- 8.27.6. Stereo with bathymetry correction
- 8.27.7. Performing sanity checks on a bathy run
- 8.27.8. Bundle adjustment and alignment
- 8.27.9. Validation of alignment
- 8.27.10. Bathymetry with changing water level
- 8.27.11. How to reuse most of a run
- 8.27.12. Bathymetry correction with mapprojected images
- 8.27.13. Using Digital Globe PAN images
- 8.27.14. Using non-Digital Globe images
- 8.27.15. Effect of bathymetry correction on the output DEM
- 8.28. Umbra SAR
9. SfM examples
10. SfM examples with a rig
These examples show how to solve for camera poses using Structure-from-Motion (SfM) and then create textured meshes.
The images are acquired with a rig mounted on a robot on the ISS (Section 10.1, Section 10.2) and with the MSL Curiosity rover (Section 10.3).
Somewhat related examples, which use neither a rig nor the above workflow, are in Section 9 (images acquired in orbit by a satellite, producing a DEM) and Section 8.5 (a basic and rather old two-image example for the MER rovers). See also Section 8.11.5 for an example using CSM cameras for the MSL rover, without employing SfM.
- 10.1. A 3-sensor rig example
- 10.2. Mapping the ISS using 2 rigs with 3 cameras each
- 10.2.1. Illustration
- 10.2.2. Overview
- 10.2.3. Data acquisition strategy
- 10.2.4. Challenges
- 10.2.5. Data processing strategy
- 10.2.6. Installing the software
- 10.2.7. Data preparation
- 10.2.8. A first small run
- 10.2.9. Results
- 10.2.10. Scaling up the problem
- 10.2.11. Fine-tuning
- 10.2.12. Surgery with maps
- 10.2.13. Sample rig configuration
- 10.3. MSL navcam example
11. Shape-from-Shading
- 11.1. Overview of SfS
- 11.2. Examples
- 11.3. Limitations
- 11.4. Mathematical model
- 11.5. How to get images
- 11.6. ISIS vs CSM models
- 11.7. SfS at 1 meter/pixel using a single image
- 11.8. Albedo modeling with one or more images
- 11.9. SfS with multiple images in the presence of shadows
- 11.10. Large-scale SfS
- 11.10.1. Challenges
- 11.10.2. The initial terrain
- 11.10.3. Terrain bounds
- 11.10.4. Image selection and sorting by illumination
- 11.10.5. Bundle adjustment
- 11.10.6. Validation of bundle adjustment
- 11.10.7. Alignment to the ground
- 11.10.8. Registration refinement
- 11.10.9. Validation of registration
- 11.10.10. When registration fails
- 11.10.11. Running SfS in parallel
- 11.10.12. Inspection and further iterations
- 11.10.13. Comparison with initial terrain
- 11.10.14. Handling issues in the SfS result
- 11.10.15. Blending the SfS result with the initial terrain
- 11.10.16. Creation of mask of SfS pixels
- 11.10.17. SfS height uncertainty map
- 11.10.18. Solving for jitter
- 11.11. Insights for getting the most of SfS
- 11.12. Shape-from-Shading for Earth
- 11.13. Shape-from-Shading with CTX images