Some questions on basing Tutorial 9 on a real-world implementation #605
Replies: 6 comments 6 replies
-
So there is no need for a common measurement space; you just need models that enable measurements to work in a common state/track space. For example, with a sensor providing x, y, z:

```python
xyz_model = LinearGaussian(ndim_state=6, mapping=[0, 2, 4], noise_covar=np.diag([2, 1, 3]))
detection = Detection([x, y, z], timestamp=time, measurement_model=xyz_model)
```

or a sensor providing only x and z:

```python
xz_model = LinearGaussian(ndim_state=6, mapping=[0, 4], noise_covar=np.diag([2, 3]))
detection = Detection([x, z], timestamp=time, measurement_model=xz_model)
```

or, for polar measurements:

```python
ebr_model = CartesianToElevationBearingRange(
    ndim_state=ndim_state,
    mapping=position_mapping,
    noise_covar=noise_covar,
    translation_offset=position,
    rotation_offset=orientation)
detection = Detection([theta, phi, r], timestamp=time, measurement_model=ebr_model)

eb_model = CartesianToElevationBearing(
    ndim_state=ndim_state,
    mapping=position_mapping,
    noise_covar=noise_covar,
    translation_offset=position,
    rotation_offset=orientation)
detection = Detection([theta, phi], timestamp=time, measurement_model=eb_model)
```

Joint tracking and classification is slightly more complex at the moment, as there is work in progress in #576 to combine that (but we welcome testers 😉). In the interim, you could add colour, for example, as metadata:

```python
detection = Detection([theta, phi, r], timestamp=time,
                      metadata={'color': color}, measurement_model=ebr_model)
```

and exploit that in data association:

```python
from stonesoup.gater.filtered import FilteredDetectionsGater

hypothesiser = FilteredDetectionsGater(
    hypothesiser_being_wrapped,
    metadata_filter="color"
)
```

This would avoid associating detections whose "color" metadata differs from the track's.
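As a rough plain-Python illustration of what the metadata gate achieves (this mimics the behaviour described above; it is not the actual `FilteredDetectionsGater` implementation, and the detection dicts are made up for the sketch):

```python
# Hypothetical detections carrying a 'color' metadata key, as in the
# Detection(..., metadata={'color': color}) example above.
detections = [
    {"state": [1.0, 2.0], "metadata": {"color": "red"}},
    {"state": [1.1, 2.1], "metadata": {"color": "blue"}},
    {"state": [0.9, 1.9], "metadata": {"color": "red"}},
]

def filter_by_metadata(detections, key, value):
    # Keep only detections whose metadata matches the track's value, so a
    # wrapped hypothesiser never sees mismatching candidates.
    return [d for d in detections if d["metadata"].get(key) == value]

track_color = "red"
candidates = filter_by_metadata(detections, "color", track_color)
print(len(candidates))  # 2
```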
Yes, the translation offset should be the sensor's location, and the rotation offset the sensor's orientation. As a sensor is attached to a platform, these may be dynamic, and offset themselves from the platform. You'll see in this example that this sensor uses the sensor's position and orientation (Stone-Soup/stonesoup/sensor/passive.py, lines 39 to 44 in 071abc8).

Thanks. You may also want to look at the Multi-Sensor Moving Platform Simulation Example, as this is fusing two different types of sensor.
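For intuition on what those offsets do, here is a minimal numpy sketch (not Stone Soup code) of mapping a sensor-frame Cartesian measurement into the global frame, assuming the orientation reduces to a simple yaw rotation about z:

```python
import numpy as np

def sensor_to_global(local_xyz, sensor_position, yaw):
    # Rotate a sensor-frame Cartesian measurement by the sensor's yaw
    # (rotation about z) and translate by its position -- the role played
    # by rotation_offset / translation_offset in the measurement models.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ local_xyz + sensor_position

# Sensor at (100, 50, 10) rotated 90 degrees: local +x points along global +y
meas = sensor_to_global(np.array([5.0, 0.0, 0.0]),
                        np.array([100.0, 50.0, 10.0]),
                        np.pi / 2)
print(np.round(meas, 6))  # [100.  55.  10.]
```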
-
OK, thanks.
-
OK, thanks.
In other words, how does one use/build a detection state that integrates continuous and categorical variables, not in the context of estimation, but in the context of association?
-
Wow! Thank you very much! If it is OK, I have just one last thing that can nicely summarize the issue.

P.S. The "Sensor Platform Simulation" and "Multi-Sensor Moving Platform Simulation" examples could help, but since they are based on simulators it is hard for me to stream real data into them, unlike the tutorials, where I found it easy since the simulated ground truths are simple.

Thanks again,
-
Thank you very much!
-
Hi. Now, say I have two sources which each provide only the theta and phi of a detection. Since there are two of them, triangulation could be used to estimate the range. Thanks.
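In case it helps, here is a minimal numpy sketch of that triangulation idea (a plain least-squares ray intersection, not a Stone Soup component), assuming known sensor positions and exact elevation/bearing measurements:

```python
import numpy as np

def unit_dir(elevation, bearing):
    # Direction vector from elevation (theta) and bearing/azimuth (phi),
    # using x = cos(el)cos(az), y = cos(el)sin(az), z = sin(el).
    return np.array([np.cos(elevation) * np.cos(bearing),
                     np.cos(elevation) * np.sin(bearing),
                     np.sin(elevation)])

def triangulate(sensor_positions, directions):
    # Least-squares intersection of 3-D rays: minimise the summed squared
    # perpendicular distance from the point to each ray.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(sensor_positions, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two sensors observing the same target at (10, 5, 2)
target = np.array([10.0, 5.0, 2.0])
sensors = [np.array([0.0, 0.0, 0.0]), np.array([20.0, 0.0, 0.0])]
dirs = []
for p in sensors:
    delta = target - p
    az = np.arctan2(delta[1], delta[0])
    el = np.arctan2(delta[2], np.linalg.norm(delta[:2]))
    dirs.append(unit_dir(el, az))

est = triangulate(sensors, dirs)
print(np.round(est, 6))  # close to [10.  5.  2.]
```

With noisy bearings the same least-squares solve still gives the closest point to both rays, which could seed a track with an initial range estimate.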
-
Hi.
I am trying to implement a system that is based on Tutorial 9 but has some more features:

A. Besides sensors that supply measurements in x, y, z, there are sensors that supply relative measurements in polar coordinates (theta, phi and r), and I have the sensors' self-locations in x, y, z.

B. The sensors also supply "categorical" features of the detected objects, such as color. We can assume that each measurement is of [x, y, z, color] or [theta, phi, r, color], each with the sensor's self-location.

C. Every measurement might lack some features, e.g. the measurement at time 100 lacks the y position, and the measurement at time 110 lacks the color.

My questions are:

I guess this format should be constructed with stonesoup.models.measurement.nonlinear.CombinedReversibleGaussianMeasurementModel wrapping the sequence:

- stonesoup.models.measurement.linear.LinearGaussian for x, y, z
- stonesoup.models.measurement.nonlinear.CartesianToElevationBearingRange for theta, phi and r
- stonesoup.models.measurement.categorical.CategoricalMeasurementModel for the color

Am I right?
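To show what I mean about missing features, here is a plain-numpy sketch of the selection matrix that a linear model's `mapping` effectively applies (the helper name is my own, not a Stone Soup function); a measurement missing a component would just use a shorter mapping:

```python
import numpy as np

def linear_measurement_matrix(ndim_state, mapping):
    # Build the H matrix implied by a `mapping`: one row per measured
    # dimension, each picking out one index of the state vector.
    H = np.zeros((len(mapping), ndim_state))
    for row, state_index in enumerate(mapping):
        H[row, state_index] = 1.0
    return H

# Full state [x, vx, y, vy, z, vz]
state = np.array([10.0, 1.0, 5.0, 0.0, 2.0, -0.5])

H_xyz = linear_measurement_matrix(6, [0, 2, 4])   # measures x, y, z
H_xz = linear_measurement_matrix(6, [0, 4])       # y missing: x, z only

print(H_xyz @ state)  # [10.  5.  2.]
print(H_xz @ state)   # [10.  2.]
```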
The mapping property of the model (https://stonesoup.readthedocs.io/en/v0.1b8/stonesoup.models.measurement.html#stonesoup.models.measurement.nonlinear.CartesianToBearingRange.mapping) is a 2-element vector, whose first (i.e. mapping[0]) and second (i.e. mapping[1]) elements…
Regards,
Adiel.