What about IMU integration to increase localization accuracy? #4913
-
Thanks for posting an interesting idea and sharing the papers.
First, the data I usually handle doesn't have this issue. However, the scale factor sometimes shifts, which can be a problem, and I often solve it by recalibrating the scale factor as needed. For IMU integration, I believe we need to address the following issues (although, since I am not familiar with the latest methods, these concerns might already be addressed):
Personally, the main problem with vehicle velocity is that drift due to scale-factor shifts can prevent accurate localization inside tunnels. If this method can solve that issue, I would be very interested in looking into it further.
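To put rough, purely illustrative numbers on the tunnel case: a 1 % velocity scale-factor error at 20 m/s accumulates about 0.2 m of longitudinal error per second, so a 100-second tunnel transit with no NDT correction leaves an error on the order of 20 m.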
-
Here is a rough specification of what the new module could look like.
Inputs:
Note that the EKF pose could also make sense, but the correction requires a 3D pose and the current EKF implementation is 2D only.
Outputs:
The module behaviour would be split between two states: INIT and RUNNING.
In INIT state:
In RUNNING state:
Note: this design assumes the last received NDT pose timestamp is always older than the last IMU measurement timestamp, which is true in practice. The following items could be checked while in RUNNING state to detect/prevent divergence:
If divergence is detected, the system would switch back to the INIT state and would thus re-initialize automatically the next time the vehicle is stopped.
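To make the INIT/RUNNING switching concrete, here is a minimal sketch of the behaviour described above. Every name, check and threshold in it is hypothetical (none of this is an existing Autoware API); it is only meant to illustrate the state machine.

```cpp
// Illustrative sketch of the proposed INIT/RUNNING logic (hypothetical names).
#include <cmath>
#include <optional>

enum class State { INIT, RUNNING };

struct ImuSample { double stamp; double ax, ay, az, wx, wy, wz; };
struct Pose3d    { double stamp; double x, y, z; /* + orientation */ };

class ImuIntegrationModule
{
public:
  void on_imu(const ImuSample & imu)
  {
    if (state_ == State::INIT) {
      // Initialize only when the vehicle is stopped and an NDT pose is
      // available: a static vehicle gives a clean reference for initialization.
      if (vehicle_stopped_ && last_ndt_pose_) {
        integrated_pose_ = *last_ndt_pose_;
        state_ = State::RUNNING;
      }
      return;
    }

    // RUNNING: dead-reckon from the last corrected state. This relies on the
    // assumption that the last NDT pose is always older than the IMU sample.
    integrate(imu);

    if (diverged()) {
      // Fall back to INIT; the module re-initializes automatically the next
      // time the vehicle is stopped.
      state_ = State::INIT;
    }
  }

  void on_ndt_pose(const Pose3d & pose) { last_ndt_pose_ = pose; }
  void on_velocity(double v) { vehicle_stopped_ = std::abs(v) < 0.01; }

private:
  void integrate(const ImuSample &) { /* strapdown integration, bias removal */ }

  bool diverged() const
  {
    // Example check only: distance between integrated pose and last NDT pose.
    if (!last_ndt_pose_) return false;
    const double dx = integrated_pose_.x - last_ndt_pose_->x;
    const double dy = integrated_pose_.y - last_ndt_pose_->y;
    const double dz = integrated_pose_.z - last_ndt_pose_->z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) > 1.0;  // 1 m, arbitrary
  }

  State state_{State::INIT};
  bool vehicle_stopped_{false};
  std::optional<Pose3d> last_ndt_pose_;
  Pose3d integrated_pose_{};
};
```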
-
In the current design, ego localization is estimated by an EKF, which takes three different inputs:
In practice, I have found that the vehicle odometer data is often unreliable:
These design choices are not a problem if you consider that this data is meant to be displayed on the vehicle dashboard, but for a self-driving vehicle that needs to locate itself accurately, this is definitely lacking.
I don't mean the odometer velocity report should not be used in the EKF, but having only this as a velocity measurement is not sufficient.
Then, while IMU sensors can measure both linear acceleration and angular velocity, only the latter is used in Autoware (and only the angular velocity around the z axis). Estimating the vehicle state from IMU measurements is not straightforward because of biases and gravity, but it is not impossible. In fact, this is what many SLAM algorithms have been doing for a while, with fairly good results.
For example, the paper "A Contracting Hierarchical Observer for Pose-Inertial Fusion" describes the state observer used by the DLIO algorithm. This observer can estimate position, velocity and orientation by integrating IMU measurements, and is reported to converge exponentially towards the true state (position, velocity, orientation and IMU biases) as long as position and orientation errors can be measured (e.g. the difference between the EKF pose and the IMU-integrated pose). To stick with this example, this is what DLIO manages to do with this observer:
https://github.com/vectr-ucla/direct_lidar_inertial_odometry/blob/master/doc/gif/aggressive.gif
Based on my own experiments, this observer can estimate the vehicle state quite reliably. For example, the estimated position drift is typically < 1 cm between two LiDAR scans, even with very noisy IMU data (uneven ground).
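Just to illustrate the general idea (this is not the observer from the paper): a heavily simplified sketch of pose-inertial fusion, where bias- and gravity-compensated IMU measurements are integrated and the state is pulled back towards an externally provided pose with simple proportional feedback. All names, gains and the correction law below are illustrative only.

```cpp
// Simplified illustration of pose-inertial fusion: strapdown integration of
// bias/gravity-compensated IMU data, plus proportional feedback towards an
// external pose (e.g. EKF/NDT output). NOT the contracting hierarchical
// observer from the paper; gains and structure are made up for the example.
#include <Eigen/Dense>

struct ObserverState
{
  Eigen::Vector3d p{0, 0, 0};        // position (world frame)
  Eigen::Vector3d v{0, 0, 0};        // velocity (world frame)
  Eigen::Quaterniond q{1, 0, 0, 0};  // orientation (body -> world)
  Eigen::Vector3d ba{0, 0, 0};       // accelerometer bias
  Eigen::Vector3d bg{0, 0, 0};       // gyroscope bias
};

class SimplePoseInertialObserver
{
public:
  // Propagate with one IMU sample: body-frame accel a_m, gyro w_m, step dt [s].
  void propagate(const Eigen::Vector3d & a_m, const Eigen::Vector3d & w_m, double dt)
  {
    const Eigen::Vector3d g(0, 0, -9.81);
    const Eigen::Vector3d w = w_m - s_.bg;               // remove gyro bias
    const Eigen::Vector3d a = s_.q * (a_m - s_.ba) + g;  // world-frame acceleration

    // First-order strapdown integration: orientation, then position, then velocity.
    const double angle = w.norm() * dt;
    Eigen::Quaterniond dq = Eigen::Quaterniond::Identity();
    if (angle > 1e-12) {
      dq = Eigen::Quaterniond(Eigen::AngleAxisd(angle, w.normalized()));
    }
    s_.q = (s_.q * dq).normalized();
    s_.p += s_.v * dt + 0.5 * a * dt * dt;
    s_.v += a * dt;
  }

  // Correct with an external reference pose (e.g. the fused EKF pose).
  void correct(const Eigen::Vector3d & p_ref, const Eigen::Quaterniond & q_ref)
  {
    // Proportional pull towards the reference. A real observer (such as the
    // one used by DLIO) would also update velocity and the IMU biases from
    // the position/orientation errors.
    s_.p += kp_ * (p_ref - s_.p);
    s_.q = s_.q.slerp(kq_, q_ref).normalized();
  }

  const ObserverState & state() const { return s_; }

private:
  ObserverState s_;
  double kp_{0.1};  // arbitrary gains for the sketch
  double kq_{0.1};
};
```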
So I am thinking, could IMU integration be used to improve localization accuracy?
For example, IMU measurements would be integrated in a new module implementing its own state observer. The output of this state observer would be fed to the EKF as a Pose+Twist measurement (so position, orientation and velocity), instead of (or in addition to) the current IMU turn rate. Finally, the output poses from the EKF would be sent back to the state observer as the true pose, to correct the accelerometer and gyroscope biases.
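As a very rough sketch of what the interface of such a module could look like as a ROS 2 node (topic names are placeholders, not actual Autoware topics, and the observer itself is left out):

```cpp
// Rough interface sketch of the proposed module. Topic names are placeholders;
// the internal state observer is omitted and only the I/O wiring is shown.
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/imu.hpp>
#include <nav_msgs/msg/odometry.hpp>
#include <geometry_msgs/msg/pose_with_covariance_stamped.hpp>
#include <geometry_msgs/msg/twist_with_covariance_stamped.hpp>

class ImuStateObserverNode : public rclcpp::Node
{
public:
  ImuStateObserverNode() : Node("imu_state_observer")
  {
    // IMU input: integrated by the internal state observer.
    imu_sub_ = create_subscription<sensor_msgs::msg::Imu>(
      "~/input/imu", rclcpp::SensorDataQoS(),
      [this](sensor_msgs::msg::Imu::ConstSharedPtr msg) { on_imu(msg); });

    // EKF output fed back as the "true" pose to correct the IMU biases.
    ekf_sub_ = create_subscription<nav_msgs::msg::Odometry>(
      "~/input/ekf_odom", 10,
      [this](nav_msgs::msg::Odometry::ConstSharedPtr msg) { on_ekf(msg); });

    // Observer output: pose + twist measurement for the EKF to fuse.
    pose_pub_ = create_publisher<geometry_msgs::msg::PoseWithCovarianceStamped>(
      "~/output/pose_with_covariance", 10);
    twist_pub_ = create_publisher<geometry_msgs::msg::TwistWithCovarianceStamped>(
      "~/output/twist_with_covariance", 10);
  }

private:
  void on_imu(sensor_msgs::msg::Imu::ConstSharedPtr msg)
  {
    // 1. Propagate the state observer with this IMU sample.
    // 2. Publish the integrated pose and twist so the EKF can fuse them.
    (void)msg;
    pose_pub_->publish(geometry_msgs::msg::PoseWithCovarianceStamped{});
    twist_pub_->publish(geometry_msgs::msg::TwistWithCovarianceStamped{});
  }

  void on_ekf(nav_msgs::msg::Odometry::ConstSharedPtr msg)
  {
    // Use the fused EKF pose as the correction input of the observer
    // (updating the accelerometer/gyroscope bias estimates).
    (void)msg;
  }

  rclcpp::Subscription<sensor_msgs::msg::Imu>::SharedPtr imu_sub_;
  rclcpp::Subscription<nav_msgs::msg::Odometry>::SharedPtr ekf_sub_;
  rclcpp::Publisher<geometry_msgs::msg::PoseWithCovarianceStamped>::SharedPtr pose_pub_;
  rclcpp::Publisher<geometry_msgs::msg::TwistWithCovarianceStamped>::SharedPtr twist_pub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ImuStateObserverNode>());
  rclcpp::shutdown();
  return 0;
}
```

The covariances published with the pose and twist could then be tuned, or derived from the observer itself, so that the EKF weights this new input appropriately against NDT and the odometer.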
What do you think?