

The DIPr challenge is focused on improving short-term IMU fallback tracking for VR systems. The first phase of this challenge uses a synthetic dataset created by Arcturus Industries and we have released sample code to help you get started and to demonstrate how submitted solutions will be evaluated.

$5000 in prizes

The prize for the winning team is $2500, with $1500 for 2nd place and $1000 for 3rd place. Prizes will be paid in USD or USDC. Note that an entry must beat the solution provided by Arcturus Industries in order to win.

If you don’t win the first stage, we plan to release additional data with more realistic noise/error models, with further opportunities to win with the best submission. If you participated in the earlier stage, you will be eligible for a larger reward if you win the second challenge.

To get started, sign up at https://dipr.ai/sign-up and check out the instructions at https://github.com/arcturus-industries/dipr

Deadline: April 30, 2022


  • In addition to training your own models, you can also modify or even replace the EKF. Submit a modified imu_fallback.py and cnn_backend.py (if needed), along with any other supporting files your solution requires. Note: modifications to other existing files will be dropped.
  • You can use all data and code we provide in the V1 release, but no additional recorded data (data augmentation is welcome).
  • Maximum of 2 submissions per week.
  • You need to beat our provided solution in order to win a prize.
  • Only your best submission will be used for scoring.
  • Do not share any details about your approach with Arcturus. If you wish to do so, let us know first.


This contest will be evaluated by running your code on a private dataset after submission. We plan to update a public leaderboard on a regular basis.

For this challenge we will evaluate submissions using multiple tracking loss simulations on test sequences. This simulation and the associated evaluation are performed as follows:

  1. We selected several intervals in each test dataset where the IMU fallback will be initialized and run. The start points and tracking-loss durations are fixed (saved in HDF5 files); they may be chosen arbitrarily or based on real-life tracking loss.
  2. Your IMU fallback predictions on each segment will be aggregated to compute the Velocity Mean Absolute Error (VMAE) per dataset.
  3. We then average your VMAE across all intervals and all datasets.

We use velocity-based metrics to reflect what matters most in SLAM for VR: a good velocity estimate is crucial to prevent motion sickness and flyaway. Moreover, the L1 norm is less sensitive to the occasional large error.
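As a concrete illustration, the scoring described above can be sketched in a few lines of NumPy. The names segment_vmae and challenge_score are hypothetical helpers, not the actual implementation in evaluate.py, which may differ in detail:

```python
import numpy as np

def segment_vmae(pred_vel, gt_vel):
    """Velocity Mean Absolute Error (L1) over one tracking-loss segment.

    pred_vel, gt_vel: arrays of shape (N, 3) holding per-sample
    predicted and ground-truth velocities for the segment.
    """
    return float(np.mean(np.abs(pred_vel - gt_vel)))

def challenge_score(segments):
    """Average VMAE across all (pred, gt) segment pairs.

    segments: iterable of (pred_vel, gt_vel) pairs collected from
    all tracking-loss intervals of all test datasets.
    """
    return float(np.mean([segment_vmae(p, g) for p, g in segments]))
```

Because the metric averages an L1 error rather than squaring it, a single bad interval pulls the score up linearly instead of quadratically.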

How to Submit Your Solution

  • Use the DIPr dataset (obtained after signing up: https://dipr.ai/sign-up) to train your Machine Learning model (CNN, GBT, etc.)
  • Modify the code sample provided on GitHub (DIPr GitHub page) if you need to or want to.
  • Make sure you can still run evaluate.py — it’s what we will run to evaluate your network, but we will use held-back evaluation data.

Do not submit your solution directly to Arcturus Industries. A third party will evaluate submissions and won’t share submission content with Arcturus Industries. Send your submissions to: submissions@dulojc.io (Updated March 8 2022)

  • The evaluator will run evaluate.py on evaluation data and add you to the leaderboard, typically within 24-48 hours. If your model is large or you used a custom setup, it may take longer.
  • If you use additional deep learning frameworks or other libraries that require installation, you will need to provide a Dockerfile to set up the environment your code needs to run.
  • Your code must run in 32 GB of RAM or less and must use CPU inference. It is okay if inference is slow.


We recommend using the provided code as a starting point, and we require you to send your submission as code modifications to the third party. Using the same method we do to fuse deep learning with IMU data (an EKF) is not mandatory. You may also change the CNN prediction type, or propose your own method or a novel one (even non-deep-learning).


“How can I perform perfect IMU integration, recovering the GT velocity signal?”

Our repository includes sample code that performs IMU integration, but it adds noise to the ideal IMU data to make the problem more realistic. You can comment out the noise-adding code to get integration that matches the ground truth (GT).

Let's look first at the Evaluator.run_segment() function. To get pure IMU integration, you need to:

  1. Disable noise by commenting out the lines self.noise_model.add_imu_noise(data_imu.data[:, 1:]) and self.noise_model.add_noise_to_init_state(start_state)
  2. Disable CNN measurements by passing skip_updates=True to the ImuFallback class constructor

In that function you can see that a numpy array named imu_segment with IMU data is prepared, basically by taking data from our dataset class. The IMU segment array has shape (N_samples, 7), and each row contains [timestamp, gyro_x, gyro_y, gyro_z, acc_x, acc_y, acc_z], i.e. a single IMU measurement at that timestamp.

As you see next, the rows of the IMU segment array are fed to ImuFallback.on_new_imu() one by one, and each call performs a single IMU integration step. Inside this function there is some logic for accumulating IMU history, IMU interpolation, and executing the EKF prediction step that propagates the EKF state using the new IMU measurement that was just passed in. (The EKF update step is what we disabled earlier.)

To find the actual IMU integration code, look at the lambda that is passed to self.ekf.predict: that is the self.prediction_fn function. Take a look inside to learn the integration code, as well as some covariance propagation logic required for the EKF. Basically, self.prediction_fn returns the state after integration and the covariance matrix after integration. We used trapezoidal accelerometer double integration for better accuracy (compared to rectangular integration).
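To make the trapezoidal double-integration idea concrete, here is a minimal, self-contained sketch. It assumes the accelerometer samples are already rotated into the world frame and gravity-compensated, and it omits the orientation and covariance propagation that the real prediction_fn performs; integrate_segment is a hypothetical helper, not code from the repository:

```python
import numpy as np

def integrate_segment(imu_segment, v0, p0):
    """Trapezoidal double integration of accelerometer samples.

    imu_segment: (N, 7) array of [t, gx, gy, gz, ax, ay, az] rows, with
    accel assumed world-frame and gravity-compensated (the real EKF code
    also propagates orientation and covariance, which is omitted here).
    v0, p0: initial velocity and position, each shape (3,).
    Returns the final (velocity, position).
    """
    t = imu_segment[:, 0]
    acc = imu_segment[:, 4:7]
    v, p = v0.astype(float).copy(), p0.astype(float).copy()
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        a_mid = 0.5 * (acc[k - 1] + acc[k])  # trapezoidal: average endpoints
        v_new = v + a_mid * dt               # velocity update
        p = p + 0.5 * (v + v_new) * dt       # trapezoidal again for position
        v = v_new
    return v, p
```

Averaging the endpoint accelerations (and then the endpoint velocities for position) is what distinguishes trapezoidal from rectangular integration; for piecewise-linear signals it is exact.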

We hope this code excursion is useful and the code is easy to understand! If you run the evaluate.py script with the noise commented out and CNN measurements disabled, it will generate plots that match the ground-truth velocity.