
Simplified Inference #1153

Merged
merged 1 commit into master on Oct 15, 2020

Conversation

glenn-jocher
Member

@glenn-jocher glenn-jocher commented Oct 15, 2020

Replacement for #1045

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

This PR introduces adjustments to confidence thresholds and the NMS (Non-Maximum Suppression) process, alongside code clean-up and modularization.

📊 Key Changes

  • Changed the default object confidence threshold from 0.4 to 0.25 and the IoU threshold for NMS from 0.5 to 0.45 in detect.py.
  • Removed an unused NMS import and unnecessary model evaluation call in hubconf.py.
  • Added autoShape class in common.py for a robust model wrapper to support various input forms and removed redundant imports.
  • Integrated autoShape functionality within yolo.py and allowed the addition/removal of NMS from YOLO models.
  • Removed outdated code and imports in various files such as sotabench.py and datasets.py.

🎯 Purpose & Impact

  • Lower confidence and NMS thresholds lead to potentially more detections at the cost of increased false positives.
  • The introduction of autoShape enables the model to handle different input types more seamlessly, enhancing user-friendliness and the potential for integration into different pipelines without requiring pre-formatting by the user.
  • Code modularization and cleanup improve the maintainability of the code base, making it easier to develop and deploy.
  • These changes seek to improve both detection performance and user experience when working with YOLO models from the Ultralytics repository.
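
The two thresholds play different roles: the confidence threshold pre-filters weak candidates, while the IoU threshold decides when two surviving boxes overlap enough for the lower-scoring one to be suppressed. As a rough illustration of how they interact, here is a minimal NumPy sketch (not the repository's actual `non_max_suppression`, which is class-aware and torch-based):

```python
import numpy as np

def iou(box, boxes):
    # IoU of one xyxy box against an (N, 4) array of xyxy boxes
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    keep_mask = scores >= conf_thres            # drop low-confidence boxes first
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]              # highest confidence first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # suppress remaining boxes that overlap box i too much
        order = order[1:][iou(boxes[i], boxes[order[1:]]) <= iou_thres]
    return boxes[keep], scores[keep]
```

Lowering `conf_thres` admits more candidates into NMS, and lowering `iou_thres` suppresses overlapping boxes more aggressively; the new 0.25/0.45 defaults trade a few extra false positives for better recall.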

@glenn-jocher glenn-jocher changed the title initial commit Simplified Inference Oct 15, 2020
@glenn-jocher
Member Author

This PR allows for using YOLOv5 independently of this repo, and for automatic handling of input formats. Now you can pass in PIL images, numpy arrays, OpenCV images, or torch tensors. You can pass in a batch as a list, or as a batched torch input. Options are:

    def forward(self, x, size=640, augment=False, profile=False):
        # supports inference from various sources. For height=720, width=1280, RGB images example inputs are:
        #   opencv:     x = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(720,1280,3)
        #   PIL:        x = Image.open('image.jpg')  # HWC x(720,1280,3)
        #   numpy:      x = np.zeros((720,1280,3))  # HWC
        #   torch:      x = torch.zeros(16,3,720,1280)  # BCHW
        #   multiple:   x = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images
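
Roughly, an autoShape-style wrapper coerces all of these input forms into one batched BCHW array before running the model. A simplified NumPy sketch of that coercion step (illustrative only: the real wrapper letterboxes with proper interpolated resizing, this version just crops and pads):

```python
import numpy as np

def to_batch(x, size=640):
    """Coerce a single HWC image, a list of HWC images, or a BCHW batch
    into one float32 BCHW array (illustrative sketch only)."""
    if isinstance(x, np.ndarray) and x.ndim == 4:     # already a BCHW batch
        return x.astype(np.float32)
    imgs = x if isinstance(x, list) else [x]          # wrap a single image
    batch = []
    for im in imgs:
        im = np.asarray(im)                           # PIL images convert here too
        h, w = im.shape[:2]
        canvas = np.zeros((size, size, 3), np.float32)
        # crude crop-and-pad; the real wrapper letterboxes with interpolation
        canvas[:h, :w] = im[:size, :size] / 255.0
        batch.append(canvas.transpose(2, 0, 1))       # HWC -> CHW
    return np.stack(batch)                            # B, C, H, W
```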

Input

import cv2
import numpy as np
from PIL import Image, ImageDraw

from models.experimental import attempt_load

# Model
# model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model = attempt_load('yolov5s.pt').autoshape()  # add autoshape wrapper

# Images
img1 = Image.open('inference/images/zidane.jpg')  # PIL
img2 = cv2.imread('inference/images/zidane.jpg')[:, :, ::-1]  # opencv (BGR to RGB)
img3 = np.zeros((640, 1280, 3))  # numpy
imgs = [img1, img2, img3]

# Inference
prediction = model(imgs, size=640)  # includes NMS

# Plot
for i, img in enumerate(imgs):
    print('\nImage %g/%g: %s ' % (i + 1, len(imgs), np.asarray(img).shape), end='')  # np.asarray handles PIL inputs too
    img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img  # from np
    if prediction[i] is not None:  # is not None
        for *box, conf, cls in prediction[i]:  # [xy1, xy2], confidence, class
            print('%s %.2f, ' % (model.names[int(cls)], conf), end='')  # label
            ImageDraw.Draw(img).rectangle(box, width=3)  # plot
    img.save('results%g.jpg' % i)  # save

Output

Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5s.pt to yolov5s.pt...
100% 14.5M/14.5M [00:00<00:00, 64.5MB/s]

Fusing layers... 
Adding autoShape... 

Image 1/3: (720, 1280, 3) person 0.87, person 0.80, tie 0.78, tie 0.28, 
Image 2/3: (720, 1280, 3) person 0.87, person 0.80, tie 0.78, tie 0.28, 
Image 3/3: (640, 1280, 3) 

[image: results0.jpg]

@glenn-jocher glenn-jocher merged commit 3b57cb5 into master Oct 15, 2020
@glenn-jocher glenn-jocher deleted the simple_inference2 branch October 15, 2020 18:10
@glenn-jocher
Member Author

glenn-jocher commented Oct 15, 2020

Torch Hub Example

This repo is NOT required

import cv2
import numpy as np
import torch
from PIL import Image, ImageDraw

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True).fuse().eval()
model = model.autoshape()  # add autoshape wrapper IMPORTANT

# Image
torch.hub.download_url_to_file('https://raw.githubusercontent.com/ultralytics/yolov5/master/inference/images/zidane.jpg', 'zidane.jpg')
img1 = Image.open('zidane.jpg')  # PIL
img2 = cv2.imread('zidane.jpg')[:, :, ::-1]  # opencv (BGR to RGB)
img3 = np.zeros((640, 1280, 3))  # numpy
imgs = [img1, img2, img3]  # batched inference

# Inference
prediction = model(imgs, size=640)  # includes NMS

# Plot
for i, img in enumerate(imgs):
    print('\nImage %g/%g: %s ' % (i + 1, len(imgs), np.asarray(img).shape), end='')  # np.asarray handles PIL inputs too
    img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img  # from np
    if prediction[i] is not None:  # is not None
        for *box, conf, cls in prediction[i]:  # [xy1, xy2], confidence, class
            print('class %g %.2f, ' % (cls, conf), end='')  # label
            ImageDraw.Draw(img).rectangle(box, width=3)  # plot
    img.save('results%g.jpg' % i)  # save

[image: results0.jpg]

@dagap

dagap commented Nov 30, 2020

@glenn-jocher

When I run this, I get the following error:

if prediction[i] is not None:  # is not None
    TypeError: 'Detections' object is not subscriptable

@glenn-jocher
Member Author

glenn-jocher commented Nov 30, 2020

@dagap I'll inline the latest example here in case anyone else lands on this thread.

Official page: https://pytorch.org/hub/ultralytics_yolov5/

Load From PyTorch Hub (NEW FORMAT)

To load YOLOv5 from PyTorch Hub for inference with PIL, OpenCV, Numpy or PyTorch inputs:

import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().autoshape()  # for PIL/cv2/np inputs and NMS

# Images
for f in ['zidane.jpg', 'bus.jpg']:  # download 2 images
    print(f'Downloading {f}...')
    torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/' + f, f)
img1 = Image.open('zidane.jpg')  # PIL image
img2 = cv2.imread('bus.jpg')[:, :, ::-1]  # OpenCV image (BGR to RGB)
imgs = [img1, img2]  # batched list of images

# Inference
results = model(imgs, size=640)  # includes NMS

# Results
results.print()  # print results to screen
results.show()  # display results
results.save()  # save as results1.jpg, results2.jpg... etc.

# Data
print('\n', results.xyxy[0])  # print img1 predictions
#          x1 (pixels)  y1 (pixels)  x2 (pixels)  y2 (pixels)   confidence        class
# tensor([[7.47613e+02, 4.01168e+01, 1.14978e+03, 7.12016e+02, 8.71210e-01, 0.00000e+00],
#         [1.17464e+02, 1.96875e+02, 1.00145e+03, 7.11802e+02, 8.08795e-01, 0.00000e+00],
#         [4.23969e+02, 4.30401e+02, 5.16833e+02, 7.20000e+02, 7.77376e-01, 2.70000e+01],
#         [9.81310e+02, 3.10712e+02, 1.03111e+03, 4.19273e+02, 2.86850e-01, 2.70000e+01]])
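
The `xyxy` tensor is just an N×6 array of (x1, y1, x2, y2, confidence, class), so ordinary indexing works on it. A sketch using the values printed above (NumPy used here for illustration; the same boolean indexing applies to the torch tensor):

```python
import numpy as np

# the detections printed above, as rows of x1, y1, x2, y2, conf, cls
det = np.array([[747.613, 40.1168, 1149.78, 712.016, 0.871210, 0.0],
                [117.464, 196.875, 1001.45, 711.802, 0.808795, 0.0],
                [423.969, 430.401, 516.833, 720.000, 0.777376, 27.0],
                [981.310, 310.712, 1031.11, 419.273, 0.286850, 27.0]])

persons = det[det[:, 5] == 0]        # class 0 is 'person' in COCO
confident = det[det[:, 4] >= 0.5]    # keep detections above 0.5 confidence
widths = det[:, 2] - det[:, 0]       # box widths in pixels
```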

burglarhobbit pushed a commit to burglarhobbit/yolov5 that referenced this pull request Jan 1, 2021
KMint1819 pushed a commit to KMint1819/yolov5 that referenced this pull request May 12, 2021
@pk-1196
Copy link

pk-1196 commented May 26, 2021

@glenn-jocher Hi, I have trained a YOLOv5 model starting from yolov5s.pt and saved the weights as best.pt.
How can I now use best.pt on a stacked (batched) image to measure inference speed? What changes are needed in the code above so that my input is of shape (2, 3, 640, 640)?
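
For reference, building a (2, 3, 640, 640) batch from two preprocessed images can be sketched as follows (NumPy shown; `img_a`/`img_b` are placeholder arrays, and with torch you would wrap the result in `torch.from_numpy(...).float()` before calling the model loaded from best.pt — though the autoshaped model also accepts a plain list of images):

```python
import numpy as np

# two already-preprocessed 640x640 RGB images (HWC); placeholders for real frames
img_a = np.zeros((640, 640, 3), np.uint8)
img_b = np.zeros((640, 640, 3), np.uint8)

# HWC -> CHW, scale to 0-1, then stack along a new leading batch dimension
batch = np.stack([im.transpose(2, 0, 1) / 255.0 for im in (img_a, img_b)])
print(batch.shape)  # (2, 3, 640, 640)
```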

@glenn-jocher
Member Author

glenn-jocher commented May 27, 2021

@ZhangChen0212

> (quoting glenn-jocher's PyTorch Hub example from earlier in this thread)

Hi there! When I run this code, I get an error:

Traceback (most recent call last):
  File "/home/chen/Desktop/test.py", line 6, in <module>
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().autoshape()  # for PIL/cv2/np inputs and NMS
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 947, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'AutoShape' object has no attribute 'fuse'

BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022

Successfully merging this pull request may close these issues.

Load YOLOv5 from PyTorch Hub ⭐