
Providing Video file as sensor and with start time of the video #796

Closed
divdaisymuffin opened this issue Nov 9, 2021 · 13 comments

@divdaisymuffin

Hi @nnshah1 and @xwu2git

The current video simulation uses the system time as the start time of the video, but our goal is to use the actual start time of the video. In other words, the analytics index data should contain the video's start timestamp rather than the current system time, similar to the recording index.

We have already tried replacing the system time in mqtt2db.py and upload.py by passing the start time as an input to the YAML file and reading it as an environment variable.
But we are facing issues with lag in bounding box creation; they are not well synchronized.

Is there a way to take the start time directly via VAServing, or is there another option you can suggest?

@nnshah1

nnshah1 commented Nov 9, 2021

@divdaisymuffin Can you share the changes you made in mqtt2db.py?

@nnshah1

nnshah1 commented Nov 10, 2021

If you adjust the real_base here to be the video_base:

```python
real_base = r["real_base"] if "real_base" in r else 0
```

And change this:

```python
"time": str(int(int(os.path.basename(filename).split('_')[-2])/1000000)),
```

to:

```python
"time": str(int(int(os.path.basename(filename).split('_')[-1])+video_base/1000000)),
```

I believe that should adjust the timestamps to be based on the video_base as opposed to the current time.

This can be done in vaserving directly - but would have the same basic effect - is this the adjustment you already have made? If so can you give an example of the lag in bounding boxes in the visualization?
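The adjustment above can be sketched as a small helper (a hypothetical sketch; the name `video_base` and the nanosecond units are assumptions based on the snippets in this thread, not the actual codebase):

```python
def absolute_event_time_ms(video_base_ns, relative_timestamp_ns):
    """Convert a stream-relative timestamp to an absolute time in ms.

    video_base_ns: start of the recorded clip, ns since the epoch (assumed).
    relative_timestamp_ns: offset of the event within the stream, in ns.
    """
    return int((video_base_ns + relative_timestamp_ns) / 1_000_000)

# Example: a clip starting 2021-11-09 00:00:00 UTC, with an event 5 s into it.
video_base = 1636416000 * 1_000_000_000
print(absolute_event_time_ms(video_base, 5 * 1_000_000_000))  # 1636416005000
```

The point is that the base is swapped, but the division back down to milliseconds stays the same as in the existing code.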

@divdaisymuffin
Author

Hi @nnshah1

The changes we made in mqtt2db.py for syncing with the video time are as follows. startDate and botStartTime are taken as input in an index:

```python
if ("time" not in r) and ("real_base" in r) and ("timestamp" in r):
    real_base = r["real_base"] if "real_base" in r else 0

    if sensorType == 2:
        if startDate != None:
            print("Bot deployed Video File as a Sensor", flush=True)
            print(f"sensorDetails ---> sensorName {sensor_name} sensorType---> {sensorType} startDate---> {startDate}", flush=True)
            r["time"] = int(startDate) + int(time.time() * 1000) - botStartTime  # for simulation/video file only
            print("from runva timesTamp ", r["time"], flush=True)
    else:
        print("Bot deployed in live sensor", flush=True)
        print(f"sensorDetails ---> sensorName {sensor_name} sensorType---> {sensorType} startDate---> {startDate}", flush=True)
        r["time"] = int((real_base + r["timestamp"]) / 1000000)  # for live streaming only
        print("r_time", r["time"], r["timestamp"], real_base, time.time())
```

For the recording time, we made changes in upload.py:

```python
if sensorType == 2:
    mp4file = mp4path + "/" + str(timestamp) + ".mp4"

    # perform a straight copy to fix negative timestamp for chrome
    list(run(["/usr/local/bin/ffmpeg", "-f", "mp4", "-i", path, "-c", "copy", mp4file]))

    sinfo = probe(mp4file)
    print("Sinfo = ", sinfo, flush=True)
    videoFileDuration = int(sinfo["duration"] * 1000)
    timesTamp = timestamp + videoFileDuration

    mp4file = mp4path + "/" + str(timesTamp) + ".mp4"
    list(run(["/usr/local/bin/ffmpeg", "-f", "mp4", "-i", path, "-c", "copy", mp4file]))
    sinfo = probe(mp4file)
    print("Sinfo2 = ", sinfo, flush=True)

    sinfo.update({
        "sensor": sensor,
        "office": {
            "lat": office[0],
            "lon": office[1],
        },
        "kpiId": kpis_id,
        "botConfigId": botConfig_id,
        "botName": algorithm,
        "time": timesTamp,
        "path": mp4file[len(self._storage)+1:]
    })

else:
    mp4file = mp4path + "/" + str(timestamp) + ".mp4"

    # perform a straight copy to fix negative timestamp for chrome
    list(run(["/usr/local/bin/ffmpeg", "-f", "mp4", "-i", path, "-c", "copy", mp4file]))

    sinfo = probe(mp4file)
    print("Sinfo = ", sinfo, flush=True)

    sinfo.update({
        "sensor": sensor,
        "office": {
            "lat": office[0],
            "lon": office[1],
        },
        "kpiId": kpis_id,
        "botConfigId": botConfig_id,
        "botName": algorithm,
        "time": timestamp,
        "path": mp4file[len(self._storage)+1:]
    })
```

@divdaisymuffin
Author

@nnshah1, as you mentioned video_base: where will we get video_base from? We are getting real_base in the r object from pipeline.json, but I can't find any video_base there.
And how will video_base know the start time of the video?

@nnshah1

nnshah1 commented Nov 10, 2021

> @nnshah1, as you mentioned video_base: where will we get video_base from? We are getting real_base in the r object from pipeline.json, but I can't find any video_base there. And how will video_base know the start time of the video?

video_base would have to be passed in and used instead of real_base. It should be the start time of the recorded clip.

@nnshah1

nnshah1 commented Nov 10, 2021

> `int(startDate) + int(time.time() * 1000) - botStartTime`

try something like:

```python
r["time"] = int(startDate) + r["timestamp"]
```

The timestamp will give the relative position of the event within the stream, and then adding in the base gives you the absolute time.
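The before/after of this suggestion can be sketched like so (a hedged sketch; the names startDate, botStartTime, and r come from the snippets above, and the units are an assumption: startDate in milliseconds since the epoch, r["timestamp"] in nanoseconds):

```python
# Before: wall-clock based, so it drifts from the actual frame position:
#   r["time"] = int(startDate) + int(time.time() * 1000) - botStartTime

# After: the event's relative position in the stream plus the clip's start:
def event_time_ms(start_date_ms, timestamp_ns):
    # convert the nanosecond stream offset to ms before adding the ms base
    return int(start_date_ms + timestamp_ns / 1_000_000)

# Example: an event 2.5 s into a clip that starts at 1636416000000 ms:
print(event_time_ms(1636416000000, 2_500_000_000))  # 1636416002500
```

The wall-clock version ties the event time to when the bot happened to process the frame; the relative-offset version ties it to where the frame actually sits in the video.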

@nnshah1

nnshah1 commented Nov 10, 2021

I would first try modifying only mqtt2db and rec2db to update the timestamps and metadata being published, and leave upload alone.

@divdaisymuffin
Author

@nnshah1 Thanks for the suggestion; we will try this and get back to you.

@divdaisymuffin
Author

@nnshah1 we have tried the suggestion and made changes accordingly in mqtt2db.py and rec2db.py.
But when we do this, we do not get the actual timestamp of the video; instead we get a timestamp in the very far future, around 2096.
I also have a query: what actually is the real_base that we add to the timestamp? And why is the timestamp we are getting in the r object a very old one, around 1945?

@nnshah1

nnshah1 commented Nov 16, 2021

The timestamp is the number of nanoseconds since the stream started recording; it is a relative number, not an absolute one.

The absolute time is obtained by adding the base (which should be in nanoseconds since the epoch) to the timestamp. For recordings that start at a different base time, that should give the result you want, i.e. using a different base captured in nanoseconds since the epoch would give you the correct timestamp.

I would double-check that you are adding in the base as nanoseconds and then dividing back down to seconds before interpreting the timestamp.
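The unit pitfall being described can be demonstrated with illustrative values (not taken from the actual deployment): mixing a millisecond base with a nanosecond offset yields far-future dates like the "2096" reported above.

```python
import datetime

base_ms = 1636416000000           # clip start: 2021-11-09 00:00:00 UTC, in ms
timestamp_ns = 3_600_000_000_000  # event one hour into the stream, in ns

# Wrong: adding ns to ms and reading the sum as ms lands far in the future
wrong_ms = base_ms + timestamp_ns
wrong = datetime.datetime.fromtimestamp(wrong_ms / 1000, datetime.timezone.utc)
print(wrong.year)  # a far-future year (22nd century)

# Right: promote the base to ns, add, then divide back down to ms
right_ms = (base_ms * 1_000_000 + timestamp_ns) // 1_000_000
right = datetime.datetime.fromtimestamp(right_ms / 1000, datetime.timezone.utc)
print(right)  # 2021-11-09 01:00:00+00:00
```

A base in the wrong unit shifts the result by orders of magnitude, which is consistent with seeing years like 2096 (base too small relative to the offset) or 1945 (offset interpreted without the base).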

@divdaisymuffin
Author

@nnshah1 your suggestion helped me a lot in creating recordings with the given video start timestamp. I took the second value after splitting the filename, added it to my startTime, and converted it to nanoseconds; this change was made in rec2db.py, and I got the exact time in my recordings.
But I am still not able to get the correct time in mqtt2db.py by doing startTime + r["timestamp"], so I am stuck on the analytics index time logic.
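The filename-based extraction described here might look like the following (a hypothetical sketch; the filename layout `<stream>_<start_ms>_<index>.mp4` and the millisecond unit are assumptions, with the `[-2]` field index borrowed from the upload.py snippet earlier in the thread):

```python
import os

def start_time_ns(filename):
    """Extract the clip start time from the recording filename and promote
    it to nanoseconds (assumed layout: <stream>_<start_ms>_<index>.mp4)."""
    start_ms = int(os.path.basename(filename).split('_')[-2])
    return start_ms * 1_000_000  # milliseconds -> nanoseconds

print(start_time_ns("/recordings/cam1_1636416000000_000.mp4"))
# 1636416000000000000
```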

@vidyasiv

@divdaisymuffin, you had mentioned the lag in bounding boxes in the web visualization.
I looked into adding a watermark and comparing the results between master and the changes related to customizing the start time.

Pipeline with watermark:

```json
"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[object_detection_2020R2][1][network]}\" model-proc=\"{models[object_detection_2020R2][1][proc]}\" name=\"detection\" ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! gvawatermark ! videoconvert ! x264enc ! splitmuxsink max-size-time=60500000000 name=\"splitmuxsink\"",
```

The dark blue boxes are drawn by gvawatermark and the cyan boxes are drawn by analytics.js.
I noticed that the inference interval is 6 by default; when watermarking/detection happens, the bounding boxes line up almost perfectly. The issue is that for the frames in between, the detection information from the previous inference appears to be carried over.
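As a side note on the carried-over boxes: gvadetect exposes an inference-interval property, so a sketch of the relevant fragment with per-frame detection would be as below (running detection on every frame raises compute cost; that tradeoff, and whether it fully removes the lag, are assumptions on my part):

```
... ! gvadetect inference-interval=1 model="{models[object_detection_2020R2][1][network]}" model-proc="{models[object_detection_2020R2][1][proc]}" name="detection" ! ...
```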

Screenshots with the customized start time changes (images 2001_inference6_frame_duration_180_ 049 and 050) and with master (images master_inference6_frame_duration_180_ 271 and 272) are attached.

This issue is independent of the changes related to customizing the start time and should probably be filed as a separate GitHub issue.
Please confirm whether you are able to customize the start time as demonstrated in the sample in the fork.

@nnshah1

nnshah1 commented Jan 4, 2022

Closing the issue, as the ability to set the starting time has been demonstrated. The issue with lagging watermarks has to do with the inference interval and is not directly related to the customized start time. Let us know if there is an issue with customizing the start time.

@nnshah1 nnshah1 closed this as completed Jan 4, 2022