Deep Inertial Prediction (DIPr)
DIPr ("dipper") is a novel deep-learning method for short-term, IMU-only tracking on VR headsets, intended as fallback tracking when visual-inertial odometry or SLAM fails.
The initial release of DIPr includes data and code.
More details can be found at:
DIPr can dramatically improve short-term IMU-only tracking over pure IMU integration.
Watch our announcement video here!
In addition to the code/dataset release, we present the DIPr Challenge, with $5000 in prizes for the best solutions.
Deadline: April 30, 2022. More details
Modern SLAM (simultaneous localization and mapping) implementations can track a headset well enough that applications render the VR world in response to head movements realistically and with low latency.
SLAM algorithms achieve state-of-the-art tracking using one or more cameras and an IMU (Inertial Measurement Unit). Sometimes camera tracking fails due to poor scene conditions: motion blur, a low-light corner, overexposure from outdoor light through a window, or a featureless wall. When camera tracking fails, a VR system may switch to an IMU-only tracking mode. Unfortunately, the estimated trajectory (especially its translation) quickly deviates from the actual trajectory due to errors in IMU integration and in the initial conditions. To the user, it looks like they are flying away in some direction in the VR world.
Fortunately, IMU fallback does not happen often and usually lasts only a few seconds; once the poor scene conditions pass, modern SLAM can re-localize against the previously tracked world and resume robust tracking. Nonetheless, the user experience during the fallback is not perfect.
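The drift described above is easy to reproduce. The following sketch (not the DIPr code; the bias value is an illustrative assumption) double-integrates accelerometer readings for a stationary headset with a small constant bias, showing how position error grows roughly quadratically with time:

```python
import numpy as np

def dead_reckon(accel, dt):
    """Naive dead reckoning: integrate acceleration twice (zero initial velocity)."""
    vel = np.cumsum(accel * dt, axis=0)
    pos = np.cumsum(vel * dt, axis=0)
    return pos

dt = 0.001                           # 1 kHz IMU, a typical headset rate
t = np.arange(0.0, 3.0, dt)          # 3 seconds of IMU-only fallback
true_accel = np.zeros((t.size, 3))   # the headset is actually stationary
bias = np.array([0.05, 0.0, 0.0])    # assumed 0.05 m/s^2 accelerometer bias
measured = true_accel + bias

pos = dead_reckon(measured, dt)
# Error after time T is roughly 0.5 * bias * T^2, about 0.22 m after 3 s:
# easily enough to feel like "flying away" in VR.
print(pos[-1])
```

In practice the initial velocity and gravity-direction errors at the moment of fallback add further terms, so real drift is usually worse than this bias-only model.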
The goal of DIPr is to improve short-term IMU fallback tracking for a VR system by leveraging deep learning. The learned models can be seen as an additional source of probabilistic information, and as additional priors and constraints, because they may recognize and classify certain motions (footsteps, typical VR game movements) and regress their kinematic properties.
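One minimal way to read "additional source of probabilistic information" is as an extra measurement to fuse with raw integration. The sketch below (a hypothetical illustration, not the DIPr method; all numbers are assumptions) combines an IMU-integrated displacement with a network-regressed displacement by inverse-variance weighting, the simplest probabilistic fusion:

```python
def fuse(disp_imu, var_imu, disp_net, var_net):
    """Inverse-variance weighted fusion of two scalar displacement estimates."""
    w = var_net / (var_imu + var_net)   # weight given to the IMU estimate
    return w * disp_imu + (1.0 - w) * disp_net

# Hypothetical numbers: over one window, IMU integration drifted to 0.30 m
# (high variance), while a learned model trained on typical head and footstep
# motion predicts 0.05 m with lower variance. The fused estimate is pulled
# strongly toward the learned prior.
fused = fuse(disp_imu=0.30, var_imu=0.04, disp_net=0.05, var_net=0.01)
print(fused)
```

A full system would apply this per-axis and feed the fused displacement back into the state estimator, but even this scalar form shows how a learned regressor can bound integration drift.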