3D Structure From Visual Motion

Recent news you should be aware of ...
 * The schedule of 2014 lectures is out!
 * A new version of the course is scheduled for the year 2013/2014


This is the description page for the PhD course on 3D Structure from Visual Motion: Novel Techniques in Computer Vision and Autonomous Robots/Vehicles. The course can also be taken by Computer Engineering students in the Laurea Magistrale track.


Course Aim & Organization

Simultaneously estimating the unknown motion of a camera (or of the vehicle carrying it) while reconstructing the 3D structure of the observed world is a challenging task that has been studied in depth in the recent literature. This PhD course presents modern techniques for this joint estimation problem, to be applied in fields such as 3D reconstruction, autonomous robot navigation, aerial/field surveying, and unmanned vehicle maneuvering.
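To make the problem concrete, the sketch below runs a minimal two-view version of this pipeline in Python with OpenCV: it matches features between two images, estimates the essential matrix, recovers the relative camera motion, and triangulates a sparse 3D structure. This is an illustrative sketch under assumed choices (ORB features, a known calibration matrix K, default RANSAC parameters), not material taken from the course itself.

 # Minimal two-view structure from motion with OpenCV (illustrative sketch).
 import cv2
 import numpy as np

 def two_view_reconstruction(img1, img2, K):
     """Estimate camera egomotion and sparse 3D structure from two images.

     K is the 3x3 camera calibration (intrinsics) matrix, assumed known.
     """
     # 1. Feature extraction and matching (ORB + brute-force Hamming).
     orb = cv2.ORB_create(2000)
     kp1, des1 = orb.detectAndCompute(img1, None)
     kp2, des2 = orb.detectAndCompute(img2, None)
     matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
     pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
     pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

     # 2. Essential matrix, with RANSAC rejecting outlier matches.
     E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                    prob=0.999, threshold=1.0)

     # 3. Relative rotation R and translation t (translation is up to scale).
     _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

     # 4. Triangulate the inlier correspondences into 3D points.
     P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
     P2 = K @ np.hstack([R, t])
     inliers = mask.ravel().astype(bool)
     pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
     pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean
     return R, t, pts3d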

Teachers

Although formally registered under just one teacher (myself), the course is also taught by (in order of appearance)

Course Schedule

The course schedule for this year's edition consists of 3-hour lectures, from 14:30 to 17:30 (times may change according to participants' needs), on the following days: 12/05/2014, 14/05/2014, 16/05/2014, 19/05/2014, 21/05/2014, 23/05/2014, 26/05/2014, 28/05/2014.

The following is a tentative syllabus for the course.

  • 3D Vision Basics
    • Course introduction
    • Feature extraction, matching and tracking
    • Projection model and projection matrix
    • Fundamental and Essential matrices
  • Structure from Motion and Visual Odometry
    • Optical flow
    • Combined estimation of 3D structure and camera egomotion
    • Motion extraction and 3D reconstruction
  • Unconventional Visual Odometry
    • Uncalibrated visual odometry
    • Omnidirectional odometry
  • Simultaneous Localization and Mapping
    • From Bayesian Filtering to SLAM
    • EKF-based SLAM (a minimal EKF sketch follows this syllabus)
  • Visual SLAM
    • EKF-based Monocular SLAM
    • Stereo and Omnidirectional visual SLAM
    • Why filters? PTAM and FrameSLAM
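
To give a flavor of the Bayesian filtering machinery behind the SLAM lectures, here is a bare-bones EKF predict/update pair in Python/NumPy. The motion model f, observation model h, and their Jacobians F_jac and H_jac are hypothetical placeholders for whatever concrete models a SLAM system would use; this is a generic sketch, not the formulation taught in the course.

 # Generic EKF predict/update step (illustrative sketch; f, h, F_jac, H_jac
 # are hypothetical placeholders for a concrete motion/observation model).
 import numpy as np

 def ekf_predict(x, P, u, f, F_jac, Q):
     """Propagate the state mean x and covariance P through motion model f."""
     x_pred = f(x, u)                 # nonlinear motion model
     F = F_jac(x, u)                  # Jacobian of f at (x, u)
     P_pred = F @ P @ F.T + Q         # linearized covariance propagation
     return x_pred, P_pred

 def ekf_update(x, P, z, h, H_jac, R):
     """Correct the prediction with measurement z via observation model h."""
     H = H_jac(x)                     # Jacobian of h at x
     y = z - h(x)                     # innovation
     S = H @ P @ H.T + R              # innovation covariance
     K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
     x_new = x + K @ y
     P_new = (np.eye(len(x)) - K @ H) @ P
     return x_new, P_new

In EKF-based SLAM the state x stacks the camera (or robot) pose together with the mapped landmarks, so the covariance P grows with every new landmark; this scaling problem is part of the motivation behind the "Why filters?" discussion of PTAM and FrameSLAM above.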

Course Material & References

Some suggested material to accompany the course lectures.

Slides and lecture notes

Suggested Bibliography

  • R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision, Cambridge University Press, March 2004.
  • S. Thrun, W. Burgard, D. Fox. Probabilistic Robotics, MIT Press, September 2005.
  • Papers you might find useful to deepen your study:
    • H. Durrant-Whyte, T. Bailey. Simultaneous Localization and Mapping (SLAM): Part I The Essential Algorithms. [1]
    • J.M.M. Montiel, J. Civera, A.J. Davison. Unified Inverse Depth Parametrization for Monocular SLAM. [2]
    • G. Klein, D. Murray. Parallel Tracking and Mapping for Small AR Workspaces. [3]
    • K. Konolige, M. Agrawal. FrameSLAM: From Bundle Adjustment to Real-time Visual Mapping. [4]

Libraries and Demos

TBC

Course Evaluation

The course evaluation will be based on a project, which may also be completed in groups of two. For PhD students, this project could (and ideally should) be related to their research interests.