
I cannot download official models for inference #1870

Open
SchumacherVonHeinz opened this issue Jul 1, 2024 · 2 comments

@SchumacherVonHeinz

According to the guide, I found code for object detection in only about 10 lines, like this:

# inference.py in jetson-inference
import jetson.utils
import jetson.inference

input = jetson.utils.videoSource("csi://0")       # CSI camera
output = jetson.utils.videoOutput("display://0")  # on-screen display
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

while output.IsStreaming():
    img = input.Capture()
    detections = net.Detect(img)
    output.Render(img)
    output.SetStatus("Performance {:.0f}FPS".format(net.GetNetworkFPS()))

but after I run "python inference.py", the terminal shows the following:

[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] could not find engine cache .1.1.8502.GPU.FP16.engine
[TRT] cache file invalid, profiling network model on device GPU

error: model file 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff' was not found.
if loading a built-in model, maybe it wasn't downloaded before.

    Run the Model Downloader tool again and select it for download:

       $ cd <jetson-inference>/tools
       $ ./download-models.sh

[TRT] detectNet -- failed to initialize.
Traceback (most recent call last):
File "inference.py", line 5, in
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
Exception: jetson.inference -- detectNet failed to load network
(I only pasted the error part)

I also tried imageNet with AlexNet, but it failed in almost the same way. What should I do to fix this error? Thank you for your attention! :)

@dusty-nv (Owner) commented Jul 1, 2024 via email

Here is the model mirror page: https://github.com/dusty-nv/jetson-inference/releases/tag/model-mirror-190618
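
If the downloader script keeps failing, a manual fallback is to pull the archive straight from that mirror release and unpack it into jetson-inference's data/networks directory. The sketch below is only an illustration: it assumes the release asset is named SSD-Mobilenet-v2.tar.gz (check the release assets for the exact file name) and that the repo was cloned to ~/jetson-inference.

# download_ssd_mobilenet.py -- manual fallback for download-models.sh
# Assumed: asset name on the mirror release and the data/networks path; adjust as needed.
import os
import tarfile
import urllib.request

MIRROR = "https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618"
ASSET = "SSD-Mobilenet-v2.tar.gz"                                      # assumed asset name
NETWORKS_DIR = os.path.expanduser("~/jetson-inference/data/networks")  # assumed repo location

os.makedirs(NETWORKS_DIR, exist_ok=True)
archive = os.path.join(NETWORKS_DIR, ASSET)

# Fetch the archive from the mirror release
urllib.request.urlretrieve(f"{MIRROR}/{ASSET}", archive)

# Unpack it; this should produce networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
with tarfile.open(archive) as tar:
    tar.extractall(NETWORKS_DIR)

print("extracted to", NETWORKS_DIR)

After that, loading detectNet("ssd-mobilenet-v2", ...) should find networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff and rebuild the TensorRT engine on first run.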

@SchumacherVonHeinz (Author)

OMG! Thank you for your fast reply! I'll look at it right now!
