Bug description
Working with a synthetic map created from a floor plan, I noticed that when the mapped walls around the robot are thick, the likelihood model accumulates uncertainty in both position and orientation extremely fast.
For an extreme example, see the case below, where the robot delocalized immediately upon entering a passage with large areas marked as obstacles around it. The particle cloud shows that a large share of the position belief is spread laterally to the robot, even inside the walls themselves.
This behavior is shared by both `beluga_amcl` and `nav2_amcl`, and it's due to the "see through walls" nature of the standard implementation of the likelihood model. The beam model works just fine on the same map, since ray tracing is impervious to wall thickness.
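The see-through effect can be illustrated with a toy distance-field computation. This is only a sketch, not Beluga's actual code; the grid, resolution, and `sigma` value (loosely modeled on AMCL's `sigma_hit` default of 0.2) are all assumptions for illustration:

```python
# Sketch: why the likelihood field model "sees through" thick walls.
# It scores a beam endpoint by its distance to the NEAREST occupied cell,
# with no ray tracing, so an endpoint buried deep inside a thick wall
# scores exactly as well as one on the wall surface.
import numpy as np
from scipy.ndimage import distance_transform_edt

RES = 0.05  # cell size in meters (assumed)
grid = np.zeros((10, 10), dtype=bool)   # True = occupied
grid[:, 4:8] = True                     # a 4-cell-thick wall

# Distance (in meters) from every cell to the nearest occupied cell.
dist = distance_transform_edt(~grid) * RES

sigma = 0.2  # measurement noise std dev (assumed, as in typical AMCL defaults)
likelihood = np.exp(-0.5 * (dist / sigma) ** 2)

# An endpoint on the wall surface and one three cells deep inside the wall
# are indistinguishable to this model: both have distance 0.
print(likelihood[5, 4], likelihood[5, 7])  # -> 1.0 1.0
```

Every cell inside the wall is a perfect match, so particles drifting laterally into the wall are never penalized, which matches the particle cloud spreading into the walls described above.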
While it can be argued that this is a feature of the model and maps should therefore not be created this way, there are also arguments for addressing it:

- It's not obvious to the user why a "perfect" map such as this would not work, unless they know how the sensor model is implemented.
- The same effect can result from people editing capture-generated maps.
- Capture-generated maps can also have multi-cell-wide walls (see examples below). While this is unlikely to cause delocalization, it can add variance to the solution of the same order of magnitude as the base MCL localization error.
- It can make the system less reliable in particular places. For example, a long narrow passage with thick walls will have higher orientation variance than one with thin walls, possibly tripping the localization/planning loop: unable to plan a path out of the passage, the robot is left stuck in it.
A possible solution comes from the observation that it's reasonable, and still conservative, to assume that any cell with no free-space cell among its 8 immediate neighbors cannot be the result of a direct measurement during mapping. As far as the sensor models are concerned:

- For the beam model, the cell is unreachable.
- For the likelihood model, the cell artificially reduces the distance-to-closest-obstacle metric around it, effectively widening the sensor measurement distribution and causing the issues mentioned above.
Notice that a cell surrounded only by occupied and unknown cells (e.g. the external wall cells in the Magazino dataset example below) also artificially skews the likelihood model metric without providing information: the lack of an adjacent free cell indicates that this occupied cell was not measured directly, so it provides no real information to either model.
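The proposed rule could be sketched as a map pre-processing pass. The helper below is hypothetical (not part of Beluga or Nav2), and the occupancy values follow the usual ROS occupancy-grid convention as an assumption:

```python
# Sketch of the proposed rule (hypothetical helper, not part of Beluga):
# an occupied cell with no free cell among its 8 neighbors is treated as
# "not directly measured" and excluded from the obstacle set the sensor
# models see.
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 100  # ROS occupancy-grid convention (assumed)

def directly_measured_obstacles(grid: np.ndarray) -> np.ndarray:
    """Return a mask of occupied cells that have at least one free 8-neighbor."""
    free = grid == FREE
    occupied = grid == OCCUPIED
    # Dilate the free mask by one cell in all 8 directions.
    padded = np.pad(free, 1, constant_values=False)
    near_free = np.zeros_like(free)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            near_free |= padded[1 + dr : padded.shape[0] - 1 + dr,
                                1 + dc : padded.shape[1] - 1 + dc]
    return occupied & near_free

# A 3-cell-thick wall next to free space: only the surface layer survives.
grid = np.full((5, 5), FREE)
grid[:, 2:5] = OCCUPIED
mask = directly_measured_obstacles(grid)
print(mask[:, 2].all(), mask[:, 3].any())  # -> True False
```

Feeding only the surviving surface cells into the beam model's ray tracing and into the likelihood field's distance transform would leave thin-walled maps unchanged while removing the interior cells that cause the widening described above.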
> A possible solution comes from the thinking that it's reasonable and still conservative to assume that any cell for which none of its 8 immediate neighbors is a free-space cell cannot be the result of a direct measurement during mapping
That is, assuming no variance in the occupancy distribution either. Mapping is imperfect, but I agree with you that it's not modeled anyway (even if the likelihood field model in its current form could cope with that to some extent, precisely because it widens the sensor distribution).
Elaborating on your idea, we could also compute distances to the nearest free cell. Then we could introduce a notion of penetration depth at the obstacle interface into our likelihood field. Exponential decay is typical in optics and electromagnetism (edit: couldn't find the reference, I was half remembering something I read at some point about evanescent field solutions).
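That penetration-depth idea could be sketched as follows. Everything here is an assumption for illustration (the decay length, resolution, and `sigma` are made-up parameters, and this is not an implementation in either codebase): the depth of each occupied cell is its distance to the nearest free cell, and the field is attenuated exponentially with that depth:

```python
# Sketch of the penetration-depth idea: attenuate the likelihood field
# inside obstacles with an exponential decay in the depth measured from
# the nearest free cell, loosely analogous to an evanescent field.
import numpy as np
from scipy.ndimage import distance_transform_edt

RES = 0.05           # cell size in meters (assumed)
DECAY_LENGTH = 0.05  # decay constant lambda, in meters (assumed)

grid = np.zeros((10, 10), dtype=bool)  # True = occupied
grid[:, 4:9] = True                    # a 5-cell-thick wall

# Standard likelihood field: distance to the nearest occupied cell.
dist_to_obstacle = distance_transform_edt(~grid) * RES
sigma = 0.2  # measurement noise std dev (assumed)
field = np.exp(-0.5 * (dist_to_obstacle / sigma) ** 2)

# Penetration depth: distance from each occupied cell to the nearest free cell
# (zero everywhere outside obstacles).
depth = distance_transform_edt(grid) * RES

# Attenuate the field inside obstacles so deep-interior cells stop
# attracting particles.
field *= np.exp(-depth / DECAY_LENGTH)

print(field[5, 4] > field[5, 5] > field[5, 6])  # deeper cells score lower -> True
```

With the plain likelihood field, every cell inside the wall scores 1.0; with the decay, only the surface layer keeps a high score, which recovers behavior closer to the beam model without ray tracing.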
Platform (please complete the following information):
Beluga version: 3144527

How to reproduce
List steps to reproduce the issue:
```shell
ros2 launch gonbuki_bringup bringup_launch.py map_name:=hq4_ideal localization_package:=beluga_amcl
```
Additional context
Capture-generated map examples; in all cases the cell size is 0.05 m.
Cartographer Magazino dataset map:
Map generated online (no tuning) using Slam-toolbox: