The interface between Perception and Planning lacks sufficient information #5034
yukkysaito started this conversation in Design

Replies: 1 comment
I think this is a symptom of the traditional sense/plan/act paradigm and is inherent in such architectures. Do you have any idea why the subsumption architecture, with its tighter multi-level coupling between perceiving and acting as advocated by R. Brooks, is not more popular in contemporary autonomous driving?
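For readers unfamiliar with the reference, here is a minimal sketch of the layering the comment is pointing at, in the common priority-arbitration simplification of subsumption: each behavior layer maps sensing directly to actuation, and the highest-priority layer that fires suppresses the ones below it. Every type, name, and threshold here is an illustrative assumption, not Autoware code; the point of the sketch is the coupling, since each layer consumes raw sensing directly instead of going through one abstract interface between perception and planning.

```cpp
// Minimal sketch of subsumption-style arbitration (illustrative only; these
// are not Autoware types). Each layer maps sensing directly to actuation.
#include <optional>
#include <vector>

struct SensorFrame { double nearest_obstacle_m = 1e9; /* lanes, goal, ... */ };
struct Command { double steer = 0.0; double accel = 0.0; };

// One behavior layer: no shared world model, no central plan.
struct Layer {
  virtual ~Layer() = default;
  virtual std::optional<Command> react(const SensorFrame & f) = 0;
};

// Reflex layer: brake when something is close; it suppresses everything below.
struct AvoidCollision : Layer {
  std::optional<Command> react(const SensorFrame & f) override {
    if (f.nearest_obstacle_m < 5.0) return Command{0.0, -3.0};
    return std::nullopt;
  }
};

// Nominal driving layer, only consulted when no higher-priority layer fires.
struct FollowLane : Layer {
  std::optional<Command> react(const SensorFrame &) override {
    return Command{0.0, 1.0};  // placeholder: gentle cruising
  }
};

// Priority arbitration: first layer in the list that returns a command wins.
Command arbitrate(const std::vector<Layer *> & layers, const SensorFrame & f) {
  for (Layer * layer : layers) {
    if (auto cmd = layer->react(f)) return *cmd;
  }
  return Command{};  // safe default: coast
}
```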
Background
https://github.com/orgs/autowarefoundation/discussions/5032
Issues
The interface between Perception and Planning lacks sufficient information. For moving objects, we currently respond by sending predicted paths and object classes, but this is not enough. We also need to encode cues such as pedestrian gestures, signals from traffic controllers, and flashing lights (e.g., hazard lamps), and convey them to Planning so that it can plan appropriately. The current interface cannot carry such information, and fully encoding real-world information is challenging.
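To make the gap concrete, here is a hedged sketch of how the object interface could be extended. The field and enum names are hypothetical, not the actual autoware_perception_msgs definitions; the first block approximates what the current interface already carries.

```cpp
// Hypothetical extension of the Perception -> Planning object interface.
// Names are assumptions for illustration, not real Autoware message fields.
#include <cstdint>
#include <string>
#include <vector>

struct PredictedPath { /* timestamped poses with confidence */ };

struct PredictedObject {
  // --- roughly what is conveyed today ---
  uint8_t classification = 0;               // CAR, PEDESTRIAN, UNKNOWN, ...
  std::vector<PredictedPath> predicted_paths;

  // --- semantic cues the discussion says cannot be expressed today ---
  enum class Gesture { NONE, WAVING_THROUGH, REQUESTING_STOP } gesture = Gesture::NONE;
  enum class ControllerSignal { NONE, PROCEED, SLOW_DOWN, STOP } controller_signal = ControllerSignal::NONE;
  bool hazard_lights_flashing = false;      // e.g., a stopped car about to pull out

  // Enumerations can never cover the open world, so an escape hatch such as a
  // free-form tag (or a learned embedding vector) may also be needed.
  std::string free_form_attribute;
};
```

Even this extension only enumerates cues known in advance, which is exactly the limitation described above.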
Additionally, interpreting unknown objects is difficult. Even among plants, some should be avoided while others can be ignored: we want to avoid hitting a solid tree branch lying on the road, but we can usually ignore vegetation growing at the roadside.
Identifying objects with semantic segmentation alone is insufficient; what is needed is higher-level interpretation and judgment, such as estimating what would happen if the object were hit or whether an emergency brake is warranted. With the current interface, which abstracts information into class categories and object shapes, Planning has no way to distinguish one kind of plant from another.
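A minimal sketch of the kind of consequence-aware judgment described here, which class labels and shapes alone cannot support: two detections with similar geometry get different planning decisions based on what would happen on impact. All attribute names and thresholds are illustrative assumptions.

```cpp
// Consequence-aware judgment for unknown objects (illustrative sketch).
enum class PlanningAction { IGNORE, AVOID, EMERGENCY_STOP };

struct UnknownObject {
  double rigidity_estimate = 0.0;  // 0.0 = soft foliage ... 1.0 = solid debris
  double estimated_mass_kg = 0.0;
  bool on_drivable_area = false;
};

PlanningAction judge(const UnknownObject & obj, double ego_speed_mps) {
  if (!obj.on_drivable_area) return PlanningAction::IGNORE;
  // "What happens if we hit it?" -- a crude severity proxy.
  const double severity = obj.rigidity_estimate * obj.estimated_mass_kg * ego_speed_mps;
  if (severity < 1.0) return PlanningAction::IGNORE;        // grass overhanging the lane
  if (ego_speed_mps < 8.0) return PlanningAction::AVOID;    // fallen branch, time to steer
  return PlanningAction::EMERGENCY_STOP;
}
```

The point is not these particular heuristics but that the inputs to such a judgment (rigidity, mass, consequence of impact) have no place in an interface limited to classes and shapes.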
We are looking for architecture proposals to solve this problem.