New Message Type for Instance Segmentation Results #5047
-
@YoshiRi @kminoda Do you have any comments about the message type?
-
@StepTurtle

Comment and suggestions: Regarding the message type, in the proposed approach a mask of the same size as the image would be sent as a topic for each class or object, which could lead to very inefficient communication. To minimize the number of image topics, consider publishing a pair of an image and configuration (config) data: for instance, assign a scalar value to each class based on the image intensity. As for the correspondence between these scalar values and the classes, you could use a message field or a separate latched topic. For the config part, you could also flow it in a different format, such as a string on a separate latched topic, while still achieving the desired functionality.
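As a rough illustration of this idea, here is a minimal pure-Python sketch that merges per-instance boolean masks into a single ID image and keeps the ID-to-class correspondence in a separate mapping. Function names and the encoding convention (0 = background, i+1 = instance i) are my own assumptions, not an existing Autoware or ROS API:

```python
def encode_instances(masks, classes, width, height):
    """Merge per-instance boolean masks into one scalar ID image.

    masks: list of flat boolean lists (row-major, len == width*height)
    classes: list of class names, one per mask
    Returns (id_image, id_to_class), where id_image holds 0 for background
    and i+1 for instance i; id_to_class would travel as config data
    (e.g. a string on a separate latched topic).
    """
    id_image = [0] * (width * height)
    id_to_class = {}
    for i, (mask, cls) in enumerate(zip(masks, classes)):
        instance_id = i + 1  # 0 is reserved for background
        id_to_class[instance_id] = cls
        for px in range(width * height):
            if mask[px]:
                id_image[px] = instance_id  # later instances overwrite overlaps
    return id_image, id_to_class

def decode_instance(id_image, instance_id):
    """Recover one instance's boolean mask from the shared ID image."""
    return [px == instance_id for px in id_image]
```

This sends one image per frame regardless of the number of objects, at the cost of losing overlapping-mask information where two instances claim the same pixel.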
Concerns
Supplement: Proposed improvement

If the Image topic's encoding allowed specifying boolean values, it could significantly mitigate the data-size issue. However, based on my review of the ROS source code, boolean masks do not appear to be supported. In that case, consider creating a specialized type capable of storing boolean mask images and including it in the message definition.
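To make the data-size point concrete: a `mono8` image spends a full byte per pixel, while one bit per pixel is enough for a boolean mask, so a packed-bits field in a specialized message type would shrink the payload roughly 8x before any transport compression. A minimal pure-Python sketch of the packing (illustrative only; ROS itself offers no such encoding):

```python
def pack_mask(mask):
    """Pack a flat boolean mask into bytes, 8 pixels per byte (MSB first)."""
    out = bytearray()
    for i in range(0, len(mask), 8):
        b = 0
        for j, bit in enumerate(mask[i:i + 8]):
            if bit:
                b |= 0x80 >> j  # set bit j of this byte, MSB first
        out.append(b)
    return bytes(out)

def unpack_mask(data, n_pixels):
    """Inverse of pack_mask: recover the first n_pixels booleans."""
    return [bool(data[i // 8] & (0x80 >> (i % 8))) for i in range(n_pixels)]
```

A specialized message would then carry the packed bytes plus `width` and `height` fields so subscribers can unpack unambiguously.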
-
Hi,
We want to add a new 2D instance segmentation model to the Autoware perception pipeline, and I have a question related to this topic: we couldn't decide on the message type.
Check the following PRs and issue to see the current work:
In the current Autoware design, we use the following message types for 2D detection:

- For bounding-box detection only: `tier4_perception_msgs::msg::DetectedObjectWithFeature` (under the `tier4_autoware_msgs` repository)
- For bounding-box and semantic segmentation results: `tier4_perception_msgs::msg::DetectedObjectWithFeature` (under the `tier4_autoware_msgs` repository) + `sensor_msgs::msg::Image`
For semantic segmentation models, we have a single image representing all objects in the scene, so we can publish the segmentation mask as a single image. However, for instance segmentation models, we have a separate segmentation mask for each detected object, and we need to determine how to store this mask information.
Possible Approaches:
We can publish the instance information in two different ways.
1) A `sensor_msgs::msg::Image` for each detected object

We can add a new message type, similar to `tier4_perception_msgs::msg::DetectedObjectWithFeature`, that contains a `sensor_msgs::msg::Image` for each detected object.

Current `tier4_perception_msgs::msg::DetectedObjectWithFeature`:

```
autoware_perception_msgs/DetectedObject object
Feature feature
```

Possible approach: `tier4_perception_msgs::msg::DetectedObjectWithMask`, i.e. the same fields + a `sensor_msgs/Image` mask.
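For a rough sense of the cost of this approach, here is a back-of-envelope calculation assuming full-resolution `mono8` masks (the scenario the first reply worries about); the 1920x1080 resolution and 20-object count are illustrative assumptions, not values taken from Autoware:

```python
def per_frame_mask_bytes(width, height, n_objects, bytes_per_pixel=1):
    """Total mask payload per frame if each object carries a full-size image."""
    return width * height * bytes_per_pixel * n_objects

# 20 objects at 1920x1080 mono8: ~41.5 MB of mask data per frame,
# versus ~2.1 MB for a single shared ID image (the first reply's suggestion).
per_object_total = per_frame_mask_bytes(1920, 1080, 20)
single_id_image = per_frame_mask_bytes(1920, 1080, 1)
```

Cropping each mask to its object's ROI instead of the full frame would reduce this considerably, at the cost of also carrying the ROI offset.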
2) A Border array
We can add new fields to `tier4_perception_msgs::msg::DetectedObjectWithFeature` which contain the borders of the detected object.

Current `tier4_perception_msgs::msg::DetectedObjectWithFeature`:

```
autoware_perception_msgs/DetectedObject object
Feature feature
```

Possible approach: `tier4_perception_msgs::msg::DetectedObjectWithBorder`, i.e. the same fields + a `Border[] borders` array, together with a new `tier4_perception_msgs::msg::Border` message.

What do you think about `tier4_perception_msgs::msg::DetectedObjectWithBorder` and `tier4_perception_msgs::msg::Border`, and about the `boundary_status` field? Because sometimes we might encounter objects with gaps inside them.

If you have any opinions, or another approach similar to or completely different from the information I shared above, I would be glad to hear them.
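To illustrate why objects with gaps need something like `boundary_status` (or an agreed fill convention), here is a minimal pure-Python sketch that rebuilds a boolean mask from border polygons using the even-odd rule, so an inner border carves a hole out of the outer one. All names and conventions here are hypothetical, not part of the proposed messages:

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(borders, width, height):
    """Rebuild a flat boolean mask from a list of border polygons.

    A pixel belongs to the object if its center lies inside an odd number
    of borders, so an inner 'hole' border removes pixels again -- this is
    the situation a boundary_status flag (outer vs. inner boundary) would
    make explicit instead of relying on the even-odd convention.
    """
    mask = []
    for y in range(height):
        for x in range(width):
            crossings = sum(point_in_polygon(x + 0.5, y + 0.5, b) for b in borders)
            mask.append(crossings % 2 == 1)
    return mask
```

With an explicit status flag, subscribers could also choose to ignore holes cheaply (e.g. for coarse occupancy checks) without re-deriving which polygon is which.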