why the image url is wrong #11355
@omaiyiwa it seems that you are passing image URLs as sources when using the prediction script.
It is classify's `predict.py` file.
@omaiyiwa, I apologize for my previous response. I didn't recognize that you are passing a URL and that the error is caused by an incorrect path. It appears that YOLOv5's dataloader is treating the URL as a local file path, which is why the path gets mangled on Windows. As a workaround in your current configuration, you can download the image and save it locally, then pass the path to the saved local image to the `--source` argument.
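A minimal sketch of that workaround, assuming the `requests` package is available:

```python
import requests
from pathlib import Path

url = "https://ultralytics.com/images/zidane.jpg"
filename = url.split("/")[-1]  # keep the same file name as in the URL

# download the image and write it to the current directory
response = requests.get(url, timeout=30)
response.raise_for_status()
Path(filename).write_bytes(response.content)

# the saved local path can then be passed to classify/predict.py as --source
print(f"Saved to {filename}")
```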
This code downloads the image from the specified URL and saves the file with the same name as provided in the URL in the current directory. After that, you can pass the saved file path to the `--source` argument. Please let me know if you have any further questions.
So what I did initially, passing the image URL from the S3 bucket, was wrong. Is there any other way to deploy the model to Amazon SageMaker?
@omaiyiwa yes, you can deploy your YOLOv5 model on Amazon SageMaker for inference. You can do this by creating a SageMaker endpoint for your model. This will allow you to send image data to the endpoint, where the model will make inferences and return the results. To do this, you will need to follow these general steps: package your trained weights into a `model.tar.gz` archive, upload the archive to S3, write an `inference.py` script that implements `model_fn`, `input_fn`, and `predict_fn`, create a `PyTorchModel` from the artifact, and deploy it to an endpoint.
Here is a high-level example of how you can deploy your model to SageMaker:
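For instance, a minimal sketch using the SageMaker Python SDK; the bucket path, role ARN, framework version, and instance type are placeholders to adapt to your own setup:

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder execution role ARN

# model.tar.gz should contain your trained weights (e.g. best.pt)
model = PyTorchModel(
    model_data=f"s3://{session.default_bucket()}/yolov5-model/model.tar.gz",
    role=role,
    entry_point="inference.py",   # script implementing model_fn / input_fn / predict_fn
    framework_version="1.12.1",   # choose a version supported by the SageMaker PyTorch containers
    py_version="py38",
)

# create the endpoint; the instance type is just an example
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```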
You can find more details on how to deploy a model on SageMaker in the AWS SageMaker documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-models.html. Please let me know if you have any further questions.
Thank you very much for your help. My code is as follows:

```python
DUMMY_IAM_ROLE = role

def main():
    ...

if __name__ == "__main__":
    main()
```
@omaiyiwa, to answer your questions:
Additionally, I noticed a few issues in your script. Here is an updated version of your code that addresses these issues:
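A sketch of what the corrected script could look like; the role ARN, S3 key, framework version, and instance type are assumptions:

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

# placeholder for your SageMaker execution role ARN
DUMMY_IAM_ROLE = "arn:aws:iam::123456789012:role/MySageMakerRole"

def main():
    session = sagemaker.Session()

    # build the S3 URI with an f-string so default_bucket() is actually evaluated
    model_location = f"s3://{session.default_bucket()}/yolov5-model/model.tar.gz"

    model = PyTorchModel(
        model_data=model_location,
        role=DUMMY_IAM_ROLE,
        entry_point="inference.py",
        framework_version="1.12.1",
        py_version="py38",
    )
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
    print(f"Endpoint name: {predictor.endpoint_name}")

if __name__ == "__main__":
    main()
```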
Note that in this example code, the S3 path, IAM role, framework version, and instance type are placeholders that you should replace with the values from your own setup.
model_location = "s3://session.default_bucket()/yolov5-model/best.pt" The error is: |
@omaiyiwa, it looks like the `model_location` string is not being built correctly: `session.default_bucket()` is inside the quotes, so it is never evaluated and the literal text ends up in the S3 URI. Make sure you provide the correct S3 URI of the uploaded model artifact. For example, if you upload your model to the default SageMaker bucket using the following code:
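A sketch of such an upload; the local file name and key prefix are assumptions:

```python
import sagemaker

session = sagemaker.Session()

# upload the local weights file to the default SageMaker bucket
model_artifact = session.upload_data(
    path="best.pt",                   # local path to your trained weights (assumed)
    bucket=session.default_bucket(),
    key_prefix="yolov5-model",
)
print(model_artifact)  # prints the resulting s3:// URI
```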
Then you should set `model_location` to the URI returned by `upload_data`, or build it with an f-string such as `model_location = f"s3://{session.default_bucket()}/yolov5-model/best.pt"`, rather than embedding `session.default_bucket()` inside a plain string literal.
Regarding the error you're seeing, it looks like SageMaker cannot find a valid model artifact at the path you provided. If the above solutions don't work, I suggest printing out the contents of `model_location` and listing the objects in your S3 bucket to confirm that the file really is where you expect it to be.
@omaiyiwa, apologies for the confusion. It appears that you uploaded the raw `best.pt` weights file directly, but SageMaker expects the model artifact to be packaged as a `model.tar.gz` archive. To package the weights into such an archive, you can do the following:
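For instance, a minimal sketch using Python's `tarfile` module, assuming the weights file is named `best.pt`:

```python
import tarfile

# package the trained weights into model.tar.gz, the format SageMaker expects for model_data
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("best.pt")
```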
Make sure to replace `best.pt` with the path to your own trained weights file. Once you have the packaged `model.tar.gz`, upload it to your S3 bucket:
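For example, reusing the same key prefix as before (a sketch):

```python
import sagemaker

session = sagemaker.Session()

# upload the packaged archive to the default SageMaker bucket
model_data = session.upload_data(
    path="model.tar.gz",
    bucket=session.default_bucket(),
    key_prefix="yolov5-model",
)
print(model_data)
```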
Then you can update the `model_data` (or `model_location`) argument of your `PyTorchModel` to point to the uploaded `model.tar.gz` instead of the raw `best.pt` file.
Hope this helps!
I would like to express my gratitude again.

1. The error is in this paragraph, at `predictions = predictor.predict(img)`:

```python
# Make a prediction on a single image
url = "https://ultralytics.com/images/zidane.jpg"
predictions = predictor.predict(img)
```

2. The error is like this:

3. The `inference.py` is like this:

```python
def model_fn(model_dir):
    ...

def input_fn(serialized_input_data, content_type):
    ...

def predict_fn(input_data, model):
    ...
```
@omaiyiwa, the error message suggests that the endpoint is running out of memory or timing out while handling the request. You can try increasing the instance size to a larger instance type with more memory and compute. However, there could also be an issue with the inference script. From your `inference.py` it is hard to tell where the request gets stuck, so add some logging to narrow it down. For example, add a print statement right before the prediction call in `predict_fn` to confirm that the request actually reaches the model. Also, try running a local prediction by directly calling the `model_fn`, `input_fn`, and `predict_fn` functions outside of SageMaker to verify that the script itself works; a rough sketch of that is shown below. Additionally, if you have large image files, you should split them into smaller sizes to process on your endpoint, as processing large files can lead to memory issues. Let me know if this helps!
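A rough sketch of such a local check; the payload format and directory layout are assumptions, since the bodies of your handler functions are not shown:

```python
import json
import inference  # your inference.py, placed in the current directory

# load the model from a local copy of the extracted model.tar.gz contents
model = inference.model_fn("model")  # "model" is an assumed local directory containing best.pt

# build a payload in whatever format your input_fn expects; JSON with an image URL is assumed here
payload = json.dumps({"url": "https://ultralytics.com/images/zidane.jpg"})
data = inference.input_fn(payload, "application/json")

# run the prediction locally and inspect the output
print(inference.predict_fn(data, model))
```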
I verified that the error occurs at the prediction call.
The picture I am predicting on is https://ultralytics.com/images/zidane.jpg; it should not be too large.
@omaiyiwa since you have tried increasing the instance size and the problem still persists, it could also be due to other factors such as network latency or the size of the image file itself. Here are a few suggestions that may help:

- Increase the client-side read timeout when invoking the endpoint, so slow responses are not cut off prematurely.
- Check the CloudWatch logs for the endpoint to see whether the container starts correctly and how long each request actually takes.
- Add logging to your `inference.py` so you can see how far each request gets before it stalls.
- Test the handler functions locally first to rule out problems in the script itself.
Let me know if any of these suggestions help!
I segmented the code. After creating the endpoint, I use this code to make the request, and it also times out:

```python
import numpy as np
import botocore

config = botocore.config.Config(read_timeout=80)
url = "https://ultralytics.com/images/zidane.jpg"
response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ...
)
print(response)
```
@omaiyiwa, since you are still experiencing timeouts even with the client-side request using an increased `read_timeout`, the problem is most likely on the endpoint side rather than in your client code. Check the CloudWatch logs for the endpoint to confirm that the container starts successfully and that `model_fn` finishes loading the model, and add logging inside your handler functions to see how far each request gets before it stalls.
Let me know if you have any other questions or if any of these suggestions help. |
@omaiyiwa it seems like your model is not being loaded from the location SageMaker provides inside the container. In your `inference.py`, the model should be loaded from the `model_dir` argument that SageMaker passes to `model_fn`, which points to the directory where your `model.tar.gz` is extracted (typically `/opt/ml/model`).
However, in your error message, the path that is actually being used does not match that location.
Based on the error message, it looks like the path to the model file is only a partial path, so the file cannot be found. To fix this issue, you may need to modify your code where you set the model path. If you are using SageMaker to deploy your model, you can access the path to your extracted model artifacts through the `model_dir` argument of `model_fn` (or the `SM_MODEL_DIR` environment variable).
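A minimal sketch of such a `model_fn`, assuming the archive contains `best.pt` at its top level and that loading the weights through `torch.hub` is acceptable for your use case:

```python
import os
import torch

def model_fn(model_dir):
    # model_dir is where SageMaker extracts model.tar.gz, typically /opt/ml/model
    weights_path = os.path.join(model_dir, "best.pt")
    # load the custom YOLOv5 weights through torch.hub
    model = torch.hub.load("ultralytics/yolov5", "custom", path=weights_path)
    model.eval()
    return model
```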
This assumes that you are using the SageMaker PyTorch serving container, which calls `model_fn(model_dir)` for you when the endpoint starts. Let me know if this helps!
I think it is an instance problem. Although I have specified an instance, the environment on the instance is not configured. How should I configure it, or can I reuse my local configuration?
@omaiyiwa If the endpoint is hosted on a SageMaker EC2 instance and you believe that the issue is related to the instance not being configured properly, you can try setting up a dedicated environment on the instance and verifying the model there first. Here are the general steps to follow:

1. Create a new conda environment on the instance (for example with `conda create -n yolo python=3.8`).
2. Activate it and install the YOLOv5 requirements (`pip install -r requirements.txt`) plus `ipykernel`.
3. Register the environment as a Jupyter kernel with `python -m ipykernel install --user --name <env-name>`.
This will create a new kernel, named after the environment you registered, that you can select when running your notebook on the instance. You can then load the model and run a prediction there to check that everything works.
If your model works locally, then the issue may be with the instance configuration, such as the instance size or the network configuration. You can try to further troubleshoot and optimize the instance environment based on your findings. Let me know if this helps!
The error line is:
@omaiyiwa it looks like the error is occurring at the line where you're trying to run inference using the model. To resolve this issue, you may need to ensure that the environment on the SageMaker instance has all the necessary dependencies and configurations required to run the model and perform inference. Here are a few steps you can take to troubleshoot and fix the environment issues:

- Check the CloudWatch logs for the endpoint to get the full traceback rather than just the timeout on the client side.
- Ship a `requirements.txt` alongside your `inference.py` entry point so the serving container installs the YOLOv5 dependencies when the endpoint starts (see the sketch below).
- Make sure the `framework_version` and `py_version` you pass to `PyTorchModel` match the versions your weights were trained with.
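A sketch of how the dependencies could be bundled, assuming your `inference.py` and a `requirements.txt` live in a local `code/` directory:

```python
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data="s3://<your-bucket>/yolov5-model/model.tar.gz",  # placeholder S3 path
    role="<your-sagemaker-execution-role-arn>",                 # placeholder role ARN
    entry_point="inference.py",
    source_dir="code",            # code/ contains inference.py and requirements.txt
    framework_version="1.12.1",
    py_version="py38",
)
```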
By addressing these aspects, you should be able to diagnose and resolve the environment-related issues on the SageMaker instance. Let me know if you have any further questions or if you need additional assistance.
Search before asking
YOLOv5 Component
Detection
Bug
Hello, I use `predict.py` in `classify`. I want to run prediction on the URL of an image, but the source shows ..\https:\stopscooterpic.s3.eu-central-1.amazonaws.com\2023-04-14\1af2dfdf767d4f5fb670437a89c84f3e202304104085012.jpg.
The complete configuration is classify\predict2: weights=..\runs\train-cls\exp4\weights\best.pt, source=..\https:\stopscooterpic.s3.eu-central-1.amazonaws.com\2023-04-14\1af2dfdf767d4f5fb670437a89c84f3e202304104085012.jpg, data=..\data\coco128.yaml, imgsz=[640, 640], device=0, view_img=False, save_Fugment=False, nosave, visualize=False, update=False, project=..\runs\predict-cls, name=exp, exist_ok=True, half=False, dnn=False, vid_stride=1,
I deleted the leading ..\, making the source become https:\stopscooterpic.s3.eu-central-1.amazonaws.com\2023-04-14\1af2dfdf767d4f5fb670437a89c84f3e202304104085012.jpg, but it is judged as False in is_url. At first I thought it was an S3 bucket problem, but I changed to https://ultralytics.com/images/zidane.jpg and it is the same.
The error is OSError: [WinError 123] The file name, directory name, or volume label syntax is incorrect. : 'https:\ultralytics.com\images\zidane.jpg'
Environment
YOLOv5 Python-3.8.0 torch-1.12.1+cu116 CUDA:0 (NVIDIA GeForce RTX 3090 Ti, 24563MiB)
Windows 10
Minimal Reproducible Example
File "D:/yolov5-master/classify/predict2.py", line 110, in run
dataset = LoadImages(source, img_size=imgsz, transforms=classify_transforms(imgsz[0]), vid_stride=vid_stride)
File "D:\yolov5-master\utils\dataloaders.py", line 246, in init
p = str(Path(p).resolve())
File "E:\anaconda\envs\yolo\lib\pathlib.py", line 1159, in resolve
s = self._flavour.resolve(self, strict=strict)
File "E:\anaconda\envs\yolo\lib\pathlib.py", line 202, in resolve
s = self._ext_to_normal(_getfinalpathname(s))
OSError: [WinError 123] 文件名、目录名或卷标语法不正确。: 'https:\ultralytics.com\images\zidane.jpg'
Additional
No response
Are you willing to submit a PR?