03. Predict with pre-trained YOLO models¶
This article shows how to play with pre-trained YOLO models with only a few lines of code.
First let’s import some necessary libraries:
from gluoncv import model_zoo, data, utils
from matplotlib import pyplot as plt
Load a pretrained model¶
Let’s get a YOLOv3 model trained on the Pascal VOC
dataset with Darknet53 as the base model. By specifying
pretrained=True, it will automatically download the model from the model
zoo if necessary. For more pretrained models, please refer to
the GluonCV Model Zoo.
Pre-process an image¶
Next we download an image and pre-process it with preset data transforms. Here we specify that we resize the short edge of the image to 512 px. You can feed an arbitrarily sized image. One constraint for YOLO is that the input height and width must be divisible by 32.
You can provide a list of image file names to
gluoncv.data.transforms.presets.yolo.load_test() if you
want to load multiple images together.
This function returns two results. The first is an NDArray with shape (batch_size, RGB_channels, height, width); it can be fed into the model directly. The second contains the images in numpy format, which is convenient for plotting. Since we only loaded a single image, the first dimension of x is 1.
Downloading dog.jpg from https://raw.githubusercontent.com/zhreshold/mxnet-ssd/master/data/demo/dog.jpg...
Shape of pre-processed image: (1, 3, 512, 683)
Inference and display¶
The forward function will return the predicted class IDs, the confidence scores, and all detected bounding boxes. Their shapes are (batch_size, num_bboxes, 1), (batch_size, num_bboxes, 1), and (batch_size, num_bboxes, 4), respectively.
We can use
gluoncv.utils.viz.plot_bbox() to visualize the
results. We slice the results for the first image and feed them into plot_bbox:
Total running time of the script: (0 minutes 2.164 seconds)