Discovery not always working #66
Hi, thank you for your message, the feedback is greatly appreciated. There are a number of factors in discovery that can affect the reliability of the process.

In general terms, discovery works by the host computer transmitting a UDP packet on the subnet to a reserved address and port that signify a broadcast packet and a response destination. All devices on the subnet are able to receive the packet, and those that are so programmed may respond back to the host. UDP packets are not guaranteed to be received over the network, in contrast to TCP packets, which do have guaranteed delivery. A network that is very busy will lose a larger percentage of UDP packets than a network with little traffic. Additionally, individual devices may not be well equipped to handle UDP packets, as is often the case with IP cameras. Many cameras do not implement the ONVIF standard properly, have a weak network implementation, or are underpowered in terms of processing ability. Often the web interface GUI uses a proprietary communication protocol from the camera manufacturer that performs better than the camera's ONVIF implementation.

Some ONVIF applications attempt to overcome this issue by continuously transmitting broadcast packets the entire time the application is open. My own opinion is that this is not a good strategy, mostly because it sends spurious broadcasts that might further pollute an already saturated network, but I'm sure you could find counter arguments to that position.

Another strategy would be to isolate the cameras on a separate subnet. This requires the host to be equipped with multiple ethernet interfaces. If the host has a wired as well as a wireless interface, the cameras could be attached to a router connected to the host's wired interface while the host communicates with the main router over the wireless interface. The onvif-gui program is able to accommodate this configuration.

It may also be possible to cache the camera configuration values so that the discovery process is not needed to find them. There are some drawbacks to this approach, the biggest being that cameras configured with DHCP will change addresses and not be found. Some cameras depend on clock synchronization for authentication, and these will eventually drop out from clock drift or daylight savings time changes. One hack could be to just leave the program open when not using it; the settings are kept for as long as the program is open, and the camera streaming could be turned off in the meantime. Properly caching the camera configurations would require quite a bit of work, as the data would have to be serialized to disk and then restored, with some sort of option to compare it to data that might be found on discovery. Additionally, the GUI would have to reconcile possibly outdated data saved on disk with data found at discovery, so there would be a significant amount of development and testing to properly implement such a feature. If there had been a significant amount of interest in the program, I might consider doing something like that, but the interest is just not there.

You could try an open source program, https://www.ispyconnect.com/, maybe you'll have better luck with that.

Best Regards, Stephen
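For reference, here is a minimal sketch of the kind of WS-Discovery probe described above: a SOAP Probe sent over UDP to the standard ONVIF multicast address 239.255.255.250 on port 3702, followed by a short wait for replies. This is an illustrative example, not the libonvif implementation; the timeout value is arbitrary.

```python
# Minimal WS-Discovery probe sketch, assuming the standard ONVIF multicast
# address and port. Illustrative only, not how libonvif does it internally.
import socket
import uuid

PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body>
    <d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe>
  </e:Body>
</e:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)                                     # arbitrary wait for slow cameras
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
sock.sendto(PROBE.encode(), ("239.255.255.250", 3702))   # WS-Discovery multicast group

try:
    while True:
        data, addr = sock.recvfrom(65535)   # each reply is a ProbeMatch from one device
        print(addr[0], len(data), "bytes")
except socket.timeout:
    pass                                    # silence just means no (more) replies arrived
finally:
    sock.close()
```

Because the probe and the replies are plain UDP, any packet lost on a busy network simply never shows up, which is why repeated discovery attempts can give different results.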
Hi Stephen,
I am the same, but also an ONVIF noob :)
Hi Stuart, thank you for reaching out. @leechpool-github has a good point that the program should be able to find the cameras without the full discovery process. I have recently become aware that this is indeed the case, thanks to help from this issue thread. I will be adding some code to find the cameras, based on previous connections, when they don't pop up on discovery. There will also be an input for a user-supplied IP address and ONVIF port. There is a new release coming soon; these changes will be included along with several other new features, including multi-stream.
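For anyone curious what caching discovered cameras could look like, here is a rough sketch that serializes a dictionary of camera settings to JSON and merges it with fresh discovery results. The file location and field names are hypothetical, not what onvif-gui actually uses.

```python
# Sketch of caching camera settings between runs. The cache path and the
# per-camera fields below are made-up examples, not onvif-gui's real format.
import json
from pathlib import Path

CACHE = Path.home() / ".cache" / "onvif-gui-cameras.json"   # hypothetical location

def save_cameras(cameras: dict) -> None:
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_text(json.dumps(cameras, indent=2))

def load_cameras() -> dict:
    if CACHE.exists():
        return json.loads(CACHE.read_text())
    return {}

# Merge cached entries with whatever discovery found this time,
# letting fresh discovery results win over stale cache entries.
cached = load_cameras()
discovered = {"front_door": {"ip": "192.168.1.50", "onvif_port": 80}}
save_cameras({**cached, **discovered})
```

As noted above, the hard part is not the serialization but reconciling stale entries (DHCP address changes, clock drift) with what discovery actually reports.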
I have been using SmartPSS as I noob my way through ONVIF; it's really good but closed source. When the new release is out, please post here so I get a notification. Many thanks.
Thank you for the inquiry. The video and audio streams are already demuxed for you and can be accessed through Python modules. There is a file, sample.py, that shows how to access the audio data for processing. Each sample of the audio data is passed into the Python module as a numpy array. The sample program is very simple; it just gives the option to mute either channel of a two-channel stream. I haven't done much with the audio. I'm thinking just now that most cameras are single channel, so you would have to figure that in. Looking at the code, F is the frame and is the same as the FFmpeg AVFrame. You slice the numpy array to get the channels.
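As a hedged illustration of the channel slicing mentioned above, the sketch below zeroes one channel of a two-channel frame. It assumes the audio arrives as a 2-D numpy array shaped (channels, samples); the actual layout handed to sample.py may differ (e.g. interleaved), so check the shape before relying on this.

```python
# Illustrative sketch of muting one channel of a two-channel audio frame.
# Assumes a (channels, samples) layout, which may not match the real frame.
import numpy as np

def mute_channel(samples: np.ndarray, channel: int) -> np.ndarray:
    out = samples.copy()
    out[channel, :] = 0          # zero every sample in the selected channel
    return out

# Fake 16-bit stereo frame standing in for the data passed from the demuxer.
frame = np.random.randint(-32768, 32767, size=(2, 1024), dtype=np.int16)
left_only = mute_channel(frame, 1)   # silence the right channel, keep the left
```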
If you are looking for a good cheap camera, I recommend the Amcrest brand. I have also had good results with TRENDnet. Most Dahua cameras are pretty good, as are Hikvision. Axis can be a little quirky and expensive, but the quality is very good. Please let me know if you are able to get YOLO working on the video; I have had zero feedback on that feature.
I will have a look at YOLO in the next couple of days and report back.
That's fantastic news,
@sr99622 Hi Stephen, I haven't looked at your version of YOLOv8, but I have been playing with https://github.com/ultralytics/ultralytics on a $66 Opi5 https://www.aliexpress.com/item/1005004941808071.html

Also, as Frigate does, running YOLO or any detection model on every frame likely isn't needed; low-load motion detection can trigger single-frame object detection.

I have been looking around at various NVRs and they always seem to have a viewer, as if someone will actually be sat 24/7 watching the AI detect and tell them what they are viewing. One thing that has left me scratching my head is that the target stream and the objects of interest have a massive effect on which model should be used on a given piece of hardware, yet I haven't found a single NVR that tunes its AI to the received streams. If you have even basic experience running these models, surely it is known how important model selection is to a stream's parameters if you want accuracy?! The model and the input parameters have a massive effect on accuracy, and classify, detect, pose and track each have very distinct FPS requirements, so embedding a single model is extremely restrictive for functional use.

YOLOv8n, 320 input, on an RK3588(S): ~133 FPS, 37.3 mAP (mean Average Precision, quoted at 640, not 320)
YOLOv8n, 640 input, on an RK3588(S): ~33.61 FPS, 37.3 mAP (640)
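A minimal sketch of the motion-gated detection idea mentioned above: a cheap background subtractor watches the substream and only invokes the heavy detector when enough pixels change. The stream URL, threshold, and detector call are placeholders, not code from onvif-gui or Frigate.

```python
# Sketch of gating an object detector behind cheap motion detection.
# The RTSP URL, pixel threshold, and detector are hypothetical stand-ins.
import cv2

cap = cv2.VideoCapture("rtsp://camera.local/substream")   # hypothetical substream URL
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
MOTION_PIXELS = 500                                        # tune per stream resolution

def run_detector(frame):
    # Placeholder: e.g. an Ultralytics YOLOv8 model call would go here.
    pass

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                 # foreground mask from background subtraction
    if cv2.countNonZero(mask) > MOTION_PIXELS:  # only wake the heavy model on motion
        run_detector(frame)

cap.release()
```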
Hi Stuart, thank you for the feedback. I had not heard of Frigate before, so that was good information. I like what they are doing with the lower-powered devices. I'm thinking I should add access to the camera substreams, like they do, for detection. I have been mostly focused on higher-powered applications with a GPU, so it is interesting to see things from a different perspective. I agree with you about tuning detections and model sizes; that's why onvif-gui has a selection of models that allow the user to adjust their input sizes. The YOLOv8 model in onvif-gui is the Ultralytics version.
Yeah, I guess I am confused by the repo name libonvif, as much of this seems outside that scope.

The YOLO COCO metrics don't give an awful lot of info apart from a large object being >96² pixels, so an object detector running on the common 640-pixel substream can likely provide info on objects up to 96x bigger in the main stream? I am not that keen on Frigate, as it sent me on a quest: depending on the subject and stream parameters, a pluggable network is needed rather than a single application. If you look at https://docs.ultralytics.com/models/yolov8/#performance-metrics, the mAP likely means you're going to run YOLOv8s to YOLOv8l, and if the thought path is needing real-time frame-based detect and classify, I don't really see the need to run on big and expensive GPUs.

I am thinking centroid or MOSSE type trackers, because of their speed, on the substream: not only detect movement but keep track of individual objects and also provide a historical track from which future position can be estimated. I fired from the hip without even looking at the models, but I still think YOLOv8 needs the bigger models if used, and it might only have a few frames to deal with as an object is tracked across the screen. That is not me disagreeing, but purely brainstorming what I have been head-scratching to achieve, which is very different for each combination of incoming stream and objects of interest.

I wish I could program in C/C++. Occasionally I have hacked a few libs and examples together, and Arm v8.2+ has mat/mul vector instructions; if you know how to optimise vectors, or maybe just use the OpenCV libs, the trackers could likely be extremely fast, maybe even run on an A55 RK3566, but I have not gone that far. They certainly could run on an RK3588, or at least I think they can, judging from some of the example FPS being quoted :) Also the Mali-G610 GPU is extremely efficient and could likely add a few more frames. The NPU is something I am not sure about; it's 3-core, but I have a feeling it delivers less than its published ratings.

I don't even know if YOLOv8 is the one, but it's a great introduction, and with such fast advances I'm wondering what models will turn up with BNNs and frameworks such as https://github.com/larq/compute-engine (check the benches in https://docs.larq.dev/zoo/), which Nvidia also supports, for I guess blistering speed. https://arxiv.org/pdf/2011.09398.pdf
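To illustrate the centroid-tracker idea, here is a toy sketch that assigns IDs by matching each new detection to the nearest previous centroid. It is purely illustrative; real trackers (MOSSE, ByteTrack, etc.) handle occlusion, track loss, and appearance far better.

```python
# Toy centroid tracker: keep an ID per object by matching each new detection
# to the nearest previously seen centroid. Thresholds are made-up examples.
import numpy as np

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}                # id -> (x, y) centroid from the last frame
        self.max_distance = max_distance

    def update(self, centroids):
        assigned = {}
        unclaimed = dict(self.objects)   # tracks not yet matched this frame
        for cx, cy in centroids:
            # Greedily claim the nearest existing track within range.
            best_id, best_d = None, self.max_distance
            for oid, (ox, oy) in unclaimed.items():
                d = np.hypot(cx - ox, cy - oy)
                if d < best_d:
                    best_id, best_d = oid, d
            if best_id is None:          # no track close enough: start a new one
                best_id = self.next_id
                self.next_id += 1
            else:
                unclaimed.pop(best_id)
            assigned[best_id] = (cx, cy)
        self.objects = assigned          # unmatched tracks are dropped immediately
        return assigned

tracker = CentroidTracker()
print(tracker.update([(100, 120), (300, 310)]))   # frame 1: ids 0 and 1 created
print(tracker.update([(104, 118), (296, 315)]))   # frame 2: the same ids follow the objects
```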
More good links. I like what AlexxIT is doing; that is something that could be useful down the road to get camera output to external viewers. I was not aware of larq, looks like something that is new. It's hard to keep up with developments as they come so quickly. I tend to stick with established technology, as integrating these things can be a challenge and each of them has its own quirks and gotchas. YOLOv8 has some nice properties and is well supported and reasonably easy to implement. This makes it a good candidate for integration.

Regarding trackers, I spent a lot of time working with different ones, and I'm coming to the opinion that tracking as a whole is not ready for prime time. The best tracker IMO is ByteTrack, and that really only works well with YOLOX. Even ByteTrack has some big issues and is not well supported. Most of the others don't work well at all, including DeepSORT.

I still like a good GUI. It helps to visualize what is going on and can help simplify the setup even if the ultimate goal is complete automation. When working with a large number of parameters, the GUI can help visualize the effects of parameter changes on the quality of the detection. I find that the mAP quotes don't really reflect what happens in the real world, and I have to run the models on different scenarios to get a feel for how well they operate.
Recently acquired a Vikylin VK383-LIUF (see attached spec sheet), which is evidently a re-branded Hikvision device that Vikylin OEMs. I expected to be able to inspect/configure it via onvif-util, based on the following statement from onvif-util.1 (V1.4.6):
But alas, the camera is not even discovered. The discovery failure is not due to network traffic load, nor is it a basic IP connectivity issue: the camera's HTTP server (i.e. its configuration web page) is accessible, and I've probed the camera. Any ideas on why this is and/or how it might be addressed? Happy to provide whatever additional info might be of interest.

Also attached is the spec sheet for the device. On p. 3 ("API") it lists three ONVIF profiles (G, S, T) to which it is presumably conformant. However, it's worth noting that Vikylin is not an ONVIF member, and hence technically not 'permitted' to claim ONVIF conformance per se. (I verified the above by contacting onvif.org directly and asking whether re-branders of conforming OEM products must themselves be ONVIF members in order to claim conformance for the re-branded device. The answer was an unambiguous "yes".) Not that this has anything to do with the technical issue at hand, just mentioning it for completeness.

LATER NOTE: Problem solved, my error: multicast discovery was disabled. Sorry for the noise.
Hello, thank you so much for reaching out. I can appreciate the confusion around ONVIF compliance. OEMs that rebrand Hikvision or Dahua cameras most often install their own software on the device. They usually follow the general concepts of the manufacturer's software, with some tweaks that may or may not improve the device's performance.

Discovery used to be a big sticking point, with a lot of cameras using non-standard discovery packets, although most nowadays stick to the standard. The problem was that there was a gray area where discovery packets would work on some cameras and not others, which was very confusing. At one time, I made an effort to collect different types of discovery messages, and if you go to line 3018 in onvif.c, you can see an option to select an alternate discovery message, either 1 or 2. The code was fixed on 1 since most cameras started complying with the standard, but you could try re-compiling with option 2 to see if that works any better. The downside is that a lot of cameras will only respond to message type 1.

It looks like from your LATER NOTE that things have started working; I hope this is the case. Best Regards, Stephen
Hi,
I am using this app on Ubuntu 22.04. Since the Wayland issue was fixed, I've been using this app daily to view / PTZ my cameras. I've found that discovery mostly just works and finds all my cameras, but occasionally it will find none, or only a subset of the nine I currently have. Even ones on my wired network don't get discovered 100% of the time. I always immediately check that I can ping them and access their web interface GUI, so I know the cameras are there and reachable (I use fixed IPs for all cameras). I've noticed that when a number of cameras are missing, pressing "discover" again will often find them, but often enough to be an issue, repeated attempts to discover will fail. I've tried leaving the app open and just waiting... then, after a random period of time, the cameras get found again.
Any ideas? Is this a characteristic of the ONVIF discovery process, or perhaps something wrong with my network?
I've seen requests from others to be able to manually enter camera details, and responses indicating that this is not the intended use case, etc. Accepting this, I wondered whether an option to save and import the discovered cameras might be considered as a way around the discovery issues I am getting. I don't necessarily expect the app to be modified to help with my particular situation, and I would be prepared to attempt to modify the code of my install to "help myself", but I am new to Python, so any pointers would be appreciated.
There don't seem to be many Linux apps for viewing and moving ONVIF cameras that a relatively unskilled Linux enthusiast like me can just pick up and use. I really appreciate the effort that went into creating and sharing this app!
:)