Optimizing Get Frame Time for LiDAR L515 #12313
-
Hi @littlefred117. The L515 does not have the Gain option that the 400 Series cameras have. Instead, it has Receiver Gain, which can be controlled with Python code like this:
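The code sample appears to have been lost from this comment; below is a minimal sketch of what it likely looked like, assuming the standard pyrealsense2 API. The value 18 is only an illustration, and the whole block is guarded so it degrades to a no-op when pyrealsense2 or a camera is not available:

```python
# Sketch: setting the L515-only receiver_gain option (value 18 is illustrative).
configured = False
try:
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()
    if depth_sensor.supports(rs.option.receiver_gain):
        # Query the supported range and clamp the requested value into it.
        rng = depth_sensor.get_option_range(rs.option.receiver_gain)
        value = max(rng.min, min(rng.max, 18))
        depth_sensor.set_option(rs.option.receiver_gain, value)
        configured = True
    pipeline.stop()
except (ImportError, RuntimeError):
    pass  # no pyrealsense2 install or no L515 attached
```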
The L515 does not have depth exposure controls, though it does have exposure control for its RGB sensor. The L515's depth has a short exposure time of less than 100 ns per depth point. In the absence of depth exposure controls, the laser_power and receiver_gain options are the primary means of adjusting how well the camera performs in a particular environment or when scanning particular types of objects. For example, the Short Range visual preset configuration works well in a lot of situations by lowering both the laser_power and receiver_gain values. Your script instruction for laser_power is correct.

The official L515 user guide, available at the link below as a downloadable PDF document, has details about the effect that each L515 preset has. Please see pages 9 and 10 of the guide.

https://support.intelrealsense.com/hc/en-us/article_attachments/360081846174
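As an aside, a minimal sketch of applying the Short Range preset programmatically, assuming pyrealsense2's l500_visual_preset enum (guarded so it is a no-op when no camera is attached):

```python
preset_applied = False
try:
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()
    # Short Range lowers both laser_power and receiver_gain in one step.
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.l500_visual_preset.short_range))
    pipeline.stop()
    preset_applied = True
except (ImportError, RuntimeError):
    pass  # no pyrealsense2 install or no L515 attached
```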
-
Hi @littlefred117. If you are using the L515 camera model then rs.option.receiver_gain should be supported; it is an L515-only option. Have you defined depth_sensor in your script, please?
-
You could test whether disabling the decimation post-processing filter improves performance. Post-processing filters are computed on the host computer rather than the camera hardware, so each filter imposes a processing burden on the CPU.
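To make the suggestion concrete, a hedged sketch of where the decimation filter sits in a capture loop (pyrealsense2 assumed; the filter_magnitude value 2 is only an example, and the block is guarded so it runs without a camera):

```python
filtered_ok = False
try:
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    pipeline.start()
    decimation = rs.decimation_filter()
    decimation.set_option(rs.option.filter_magnitude, 2)  # 2x downsampling

    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # process() runs on the host CPU; skip this call to measure raw frame time.
    filtered = decimation.process(depth)
    filtered_ok = filtered is not None
    pipeline.stop()
except (ImportError, RuntimeError):
    pass  # no pyrealsense2 install or no camera attached
```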
-
Yes, I mean the decimate instructions in the code block.
-
Hi,
So I am trying to optimize the time it takes for the LiDAR L515 to get the depth frames. In my current code, it takes around 1.92 seconds to get the depth frames (how long it takes for the first "done" message to be printed):
```python
import pyrealsense2 as rs
import numpy as np

# Initialize the RealSense pipeline
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 320, 240, rs.format.z16, 30)

# Start the pipeline with the given configuration
pipeline.start(config)

framecount = 0
accumulated_points = None
try:
    while framecount < 2:
        # Wait for a frame
        frames = pipeline.wait_for_frames()
        framecount += 1
        print("done")
finally:
    # Stop the pipeline
    pipeline.stop()

print("finished")
```
However, I read in this datasheet, Intel_RealSense_LiDAR_L515_Datasheet_Rev003 (1).pdf, on page 6 that with "a short exposure time of <100ns per depth point, even rapidly moving objects can be captured with minimal motion blur". I am considering adding something like this to my code in order to shorten the time:
```python
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.exposure, 100)  # Set the exposure time in microseconds
depth_sensor.set_option(rs.option.gain, 16)  # Set the gain value
depth_sensor.set_option(rs.option.laser_power, 16)  # Set the laser power
```
However, I am getting issues with adding these options, so I was wondering if anyone knew how to properly implement this in order to optimize time. Thanks