3D Structure From Visual Motion
This is the description page for the PhD course on 3D Structure from Visual Motion: Novel Techniques in Computer Vision and Autonomous Robots/Vehicles. The course can also be taken by Computer Engineering students in the Laurea Magistrale track.
Course Aim & Organization
Simultaneously estimating the unknown motion of a camera (or of the vehicle the camera is mounted on) while reconstructing the 3D structure of the observed world is a challenging task that has been studied extensively in the recent literature. The PhD course on 3D Structure from Visual Motion: Novel Techniques in Computer Vision and Autonomous Robots/Vehicles presents modern techniques for this simultaneous estimation problem, with applications in fields such as 3D reconstruction, autonomous robot navigation, aerial/field surveying, and unmanned vehicle maneuvering.
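As a rough illustration of what the two-view version of this problem looks like in practice, the sketch below uses Python with OpenCV to estimate the essential matrix from point correspondences, recover the relative camera motion, and triangulate a sparse 3D structure. The intrinsics, the synthetic scene, and the ground-truth motion are made-up placeholders, not material from the course.

```python
import numpy as np
import cv2

# Hypothetical calibrated camera; the intrinsics are placeholder values.
K = np.array([[700., 0., 320.],
              [0., 700., 240.],
              [0., 0., 1.]])

# Synthetic scene: random 3D points in front of the first camera.
X = np.random.uniform([-1, -1, 4], [1, 1, 8], (100, 3))

# Assumed ground-truth motion of the second camera (small rotation + translation).
R_gt, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))
t_gt = np.array([[0.5], [0.0], [0.0]])

def project(X, R, t):
    """Project 3D points with the pinhole model x = K [R | t] X."""
    x = (K @ (R @ X.T + t)).T
    return x[:, :2] / x[:, 2:3]

# Matched image points between the two views (in a real system these would
# come from a feature tracker, not from projecting a known scene).
pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_gt, t_gt)

# Estimate the essential matrix with RANSAC and recover the relative pose.
E, inl = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, inl = cv2.recoverPose(E, pts1, pts2, K, mask=inl)

# Triangulate the correspondences to obtain the sparse 3D structure.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
X_rec = (Xh[:3] / Xh[3]).T  # convert from homogeneous coordinates
```

Note that the recovered translation, and therefore the reconstruction, is only defined up to an unknown global scale; this is why monocular visual odometry needs additional information (stereo, wheel odometry, objects of known size) to fix the scale.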
Teachers
Although formally assigned to just one teacher (myself), the course is also taught by (in order of appearance)
With the possible addition of some special guests.
Course Schedule
Please consider this schedule as tentative ...
Below you will find the detailed schedule for the course and the rooms booked for it; the lecturer for each specific topic is indicated in brackets.
- 3D Vision Basics (xx/03/2012 xx:15-xx:15 in room xxx)
- Course introduction (M. Matteucci)
- Projection model and projection matrix (V. Caglioti)
- xx/03/2012 xx:15-xx:15 in room xxx
- Fundamental and Essential matrices (V. Caglioti)
- Correspondence analysis: tracking and RANSAC (D. Migliore) (see the toy RANSAC sketch after this schedule)
- xx/03/2012 xx:15-xx:15 in room xxx
- Motion extraction and 3D reconstruction (V. Caglioti)
- xx/03/2012 xx:15-xx:15 in room xxx
- Stereo and Omnidirectional odometry (D. Migliore)
- Uncalibrated visual odometry (V. Caglioti)
- Omnidirectional odometry (V. Caglioti)
- xx/03/2012 xx:15-xx:15 in room xxx
- Optical flow (M. Marcon)
- Combined estimation of 3D structure and camera egomotion (M. Marcon)
- xx/03/2012 xx:15-xx:15 in room xxx
- Bayesian Filtering and SLAM (M. Matteucci)
- xx/03/2012 xx:15-xx:15 in room xxx
- MonoSLAM, PTAM and FrameSLAM (M. Matteucci, D.G. Sorrenti)
- xx/03/2012 xx:15-xx:15 in room xxx
- 3D without 3D: plenoptic methods, lumigraph, albedo, non-Lambertian surfaces (M. Marcon)
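As a toy illustration of the RANSAC principle referenced in the correspondence-analysis lecture above, the sketch below fits a 2D line to data corrupted by gross outliers; the function name, data, and thresholds are illustrative assumptions, not course material. The same hypothesize-and-verify loop is what makes fundamental/essential matrix estimation robust to bad feature matches.

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a 2D line y = a*x + b to points containing many outliers.

    RANSAC principle: repeatedly fit a model to a minimal random sample
    and keep the hypothesis supported by the largest set of inliers.
    """
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: two points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue  # degenerate sample, skip it
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Consensus set: points whose residual is below the threshold.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Synthetic data: most points lie on y = 2x + 1, the first 30 are outliers.
rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 100)
y = 2 * x + 1 + rng.normal(0, 0.01, 100)
y[:30] = rng.uniform(-5, 5, 30)
model, inliers = ransac_line(np.column_stack([x, y]))
print("estimated (a, b):", model, "number of inliers:", inliers.sum())
```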
Course Material & References
The following is some suggested material to accompany the course lectures.
Slides and lecture notes
- Camera geometry, single view, and two view geometry material
- Two view geometry and visual odometry material
- Correspondence analysis and RANSAC
- Optical flow tracking and egomotion estimation
- Structure from Motion
- Bayesian Filtering, Kalman Filtering, and SLAM
- Simultaneous Localization and Mapping a.k.a. SLAM!
- Monocular SLAM
- Stereo and Omnidirectional SLAM
- Panoramic Visual Odometry
- Parallel Tracking and Mapping
- 3D without 3D
Suggested Bibliography
- R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision, Cambridge University Press, March 2004.
- S. Thrun, W. Burgard, D. Fox. Probabilistic Robotics, MIT Press, September 2005.
- Papers you might find useful to deepen your study:
- Simultaneous Localization and Mapping (SLAM): Part I The Essential Algorithms by H. Durrant-Whyte and T. Bailey [1]
- Unified Inverse Depth Parametrization for Monocular SLAM by J.M.M. Montiel, Javier Civera, and Andrew J. Davison [2]
- Parallel Tracking and Mapping for Small AR Workspaces by Georg Klein and David Murray [3]
- FrameSLAM: From Bundle Adjustment to Real-Time Visual Mapping by Kurt Konolige and Motilal Agrawal [4]