
Challenge


Overview

The DIPR challenge is now closed, but we have kept the Evaluation and FAQ sections of this page available for informational purposes. Please email us at dipr@arcturus.industries for access to the dataset, but please do not send any contest submissions. The DIPR challenge focused on improving short-term IMU fallback tracking for VR systems. The first phase of this challenge used a synthetic dataset created by Arcturus Industries, and we released sample code.

Evaluation

This contest will be evaluated by running your code on a private dataset after submission. We plan to update a public leaderboard on a regular basis.

For this challenge, we will evaluate submissions using multiple tracking-loss simulations on the test sequences. The simulation and the associated evaluation are performed as follows:

  1. We select several intervals on each test dataset where the IMU fallback will be initialized and used. The start points and tracking-loss durations are fixed (saved in HDF5 files); they may be chosen arbitrarily or based on real-life tracking loss.
  2. Your IMU fallback predictions on each segment will be aggregated to compute the Velocity Mean Absolute Error (VMAE) per dataset.
  3. We will then average your VMAE across all intervals and all datasets.

We use velocity-based metrics to reflect what matters in SLAM for VR: a good velocity estimate is crucial to prevent motion sickness and flyaway. Moreover, the L1 norm is less sensitive to the large errors that sometimes occur.
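To make the scoring concrete, here is a minimal sketch of how a VMAE-style score could be computed with numpy. The function names, array shapes, and aggregation details are illustrative assumptions, not the official scoring code:

```python
import numpy as np

def segment_vmae(pred_vel, gt_vel):
    """Mean absolute velocity error over one tracking-loss segment.

    pred_vel, gt_vel: arrays of shape (N, 3) holding predicted and
    ground-truth velocities sampled at the same timestamps.
    """
    return float(np.mean(np.abs(pred_vel - gt_vel)))

def challenge_score(segments_per_dataset):
    """Average VMAE across all intervals and all datasets.

    segments_per_dataset: list of datasets; each dataset is a list of
    (pred_vel, gt_vel) pairs, one per simulated tracking-loss interval.
    """
    per_dataset = [
        np.mean([segment_vmae(p, g) for p, g in segments])
        for segments in segments_per_dataset
    ]
    return float(np.mean(per_dataset))
```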

Recommendations

We recommend using the provided code as a starting point, and we require you to send your submissions in the form of code modifications to a third party. Using the same method as us to fuse deep learning with IMU data (an EKF) is not mandatory. You may also decide to change the CNN prediction type, or propose your own or novel methods (even non-deep-learning ones).
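The provided baseline fuses CNN predictions with IMU data through an EKF; the real implementation lives in the repository. As a rough, self-contained illustration of the general idea only (not the provided code, and with all names and noise values chosen arbitrarily), a linear Kalman filter over a 3-D velocity state that blends integrated acceleration with occasional learned velocity measurements could look like this:

```python
import numpy as np

def fuse_velocity(accels, dt, learned_vel, q=1e-3, r=1e-2):
    """Toy Kalman filter over a 3-D velocity state.

    accels:      (N, 3) world-frame accelerations with gravity removed, m/s^2
    dt:          IMU sample period, s
    learned_vel: dict {sample_index: (3,) velocity "measurement" from a CNN}
    q, r:        process / measurement noise variances (scalars for brevity)
    """
    v = np.zeros(3)           # velocity estimate
    P = np.eye(3) * 1e-4      # estimate covariance
    Q, R = np.eye(3) * q, np.eye(3) * r
    history = []
    for i, a in enumerate(accels):
        # Predict: propagate the velocity with the IMU acceleration.
        v = v + np.asarray(a) * dt
        P = P + Q
        # Update: fuse a learned velocity measurement when one is available.
        if i in learned_vel:
            z = np.asarray(learned_vel[i])
            S = P + R                    # innovation covariance (H = identity)
            K = P @ np.linalg.inv(S)     # Kalman gain
            v = v + K @ (z - v)
            P = (np.eye(3) - K) @ P
        history.append(v.copy())
    return np.array(history)
```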

FAQs

“How can I perform perfect IMU integration, recovering the GT velocity signal?”

We have sample code that performs IMU integration in our repository, but it adds noise to the ideal IMU data to make the problem more realistic. You may comment out the noise-adding code to get integration that matches the GT.

Let's look first at the Evaluator.run_segment() function. To get pure IMU integration, you need to:

  1. Disable noise: comment out the calls self.noise_model.add_imu_noise(data_imu.data[:, 1:]) and self.noise_model.add_noise_to_init_state(start_state).
  2. Disable CNN measurements by passing skip_updates=True to the ImuFallback class constructor.

You can see in the function that a numpy array named imu_segment with IMU data is prepared, basically by taking data from our dataset class. The IMU segment array has shape (N_samples, 7), and each row contains [timestamp, gyro_x, gyro_y, gyro_z, acc_x, acc_y, acc_z], i.e. a single IMU measurement at that timestamp. As you see next, the rows of the IMU segment array are fed to ImuFallback.on_new_imu() one by one, and each call performs a single IMU integration step.

Inside this function there is some logic for accumulating IMU history, interpolating IMU samples, and executing the EKF prediction step that propagates the EKF state using the IMU measurement that was just passed in; the EKF update step is what we disabled earlier. To see the actual IMU integration code, look at the lambda that is passed to self.ekf.predict - that is the self.prediction_fn function. Take a look inside to find the integration code, as well as some covariance propagation logic required by the EKF. Basically, self.prediction_fn returns the state and the covariance matrix after integration. We used trapezoidal double integration of the accelerometer for more accuracy (compared to rectangular integration).
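For intuition, here is a minimal, self-contained sketch of trapezoidal double integration over such an (N_samples, 7) segment. It keeps the orientation fixed and ignores bias handling and covariance propagation, all of which the real prediction function performs, so treat it as an illustration of the integration scheme rather than the repository code:

```python
import numpy as np

def integrate_imu_segment(imu_segment, v0, p0, R_wb,
                          gravity=np.array([0.0, 0.0, -9.81])):
    """Trapezoidal double integration of accelerometer data (sketch).

    imu_segment: (N, 7) array whose rows are
                 [timestamp, gyro_x, gyro_y, gyro_z, acc_x, acc_y, acc_z]
    v0, p0:      initial world-frame velocity and position
    R_wb:        body-to-world rotation matrix, assumed constant here
                 (the real code also integrates the gyro to track orientation)
    """
    vs, ps = [np.asarray(v0, float)], [np.asarray(p0, float)]
    # World-frame linear acceleration at the first sample.
    a_prev = R_wb @ imu_segment[0, 4:7] + gravity
    for k in range(1, len(imu_segment)):
        dt = imu_segment[k, 0] - imu_segment[k - 1, 0]
        a_curr = R_wb @ imu_segment[k, 4:7] + gravity
        # Trapezoidal rule: average consecutive accelerations instead of
        # holding one sample constant (rectangular integration).
        v_new = vs[-1] + 0.5 * (a_prev + a_curr) * dt
        p_new = ps[-1] + 0.5 * (vs[-1] + v_new) * dt
        vs.append(v_new)
        ps.append(p_new)
        a_prev = a_curr
    return np.array(vs), np.array(ps)
```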

We hope this code excursion is useful and that the code is easy to understand! If you run the evaluate.py script with the noise and CNN measurements disabled as described above, it will generate plots that match the ground-truth velocity.