02. Predict with pre-trained Faster RCNN models

This article shows how to play with a pre-trained Faster RCNN model.

First let’s import some necessary libraries:

from matplotlib import pyplot as plt
import gluoncv
from gluoncv import model_zoo, data, utils

Load a pretrained model

Let’s get a Faster RCNN model trained on the Pascal VOC dataset with a ResNet-50 backbone. By specifying pretrained=True, the model will be downloaded from the model zoo automatically if necessary. For more pretrained models, please refer to the Model Zoo.

The returned model is a HybridBlock gluoncv.model_zoo.FasterRCNN with a default context of cpu(0).

net = model_zoo.get_model('faster_rcnn_resnet50_v1b_voc', pretrained=True)

Out:

Downloading /root/.mxnet/models/faster_rcnn_resnet50_v1b_voc-447328d8.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/faster_rcnn_resnet50_v1b_voc-447328d8.zip...

121888KB [00:02, 56088.17KB/s]
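
As noted above, the returned network lives on the cpu(0) context by default. If a GPU is available, it can be moved there with the standard Gluon API. This is a minimal sketch, assuming MXNet was built with CUDA support and a GPU is present; it is optional, and everything in this tutorial also runs on CPU:

import mxnet as mx

# Move the pretrained parameters from the default cpu(0) context to the first GPU.
net.collect_params().reset_ctx(mx.gpu(0))

# Note: any input tensor must then be on the same context before the forward pass,
# e.g. x = x.as_in_context(mx.gpu(0)) for the batch prepared in the next section.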

Pre-process an image

Next we download an image and pre-process it with preset data transforms. The default behavior is to resize the short edge of the image to 600px, but you can feed in an image of arbitrary size.

You can provide a list of image file names, such as [im_fname1, im_fname2, ...], to gluoncv.data.transforms.presets.rcnn.load_test() if you want to load multiple images together; a short sketch of this usage follows the download output below.

This function returns two results. The first is an NDArray with shape (batch_size, RGB_channels, height, width), which can be fed into the model directly. The second contains the images in numpy format, which are easy to plot. Since we only loaded a single image, the first dimension of x is 1.

Please be aware that orig_img is resized so that its short edge is 600px.

im_fname = utils.download('https://github.com/dmlc/web-data/blob/master/' +
                          'gluoncv/detection/biking.jpg?raw=true',
                          path='biking.jpg')
x, orig_img = data.transforms.presets.rcnn.load_test(im_fname)

Out:

Downloading biking.jpg from https://github.com/dmlc/web-data/blob/master/gluoncv/detection/biking.jpg?raw=true...

100%|##########| 244/244 [00:00<00:00, 19327.49KB/s]
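
As mentioned above, load_test() also accepts a list of file names, in which case it returns lists rather than single arrays. A minimal sketch of that usage, where the second file name 'street.jpg' is purely hypothetical:

# Hypothetical example: loading two images at once.
im_fnames = ['biking.jpg', 'street.jpg']
xs, orig_imgs = data.transforms.presets.rcnn.load_test(im_fnames)
# Each image keeps its own (1, 3, H, W) tensor because the resized shapes can differ.
print(len(xs), xs[0].shape)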

Inference and display

The Faster RCNN model returns predicted class IDs, confidence scores, and bounding box coordinates. Their shapes are (batch_size, num_bboxes, 1), (batch_size, num_bboxes, 1), and (batch_size, num_bboxes, 4), respectively.

We can use gluoncv.utils.viz.plot_bbox() to visualize the results: we slice the predictions for the first image and feed them into plot_bbox, as sketched below.
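
A minimal sketch of this step, assuming the network's forward pass returns class IDs, scores, and bounding boxes in that order:

box_ids, scores, bboxes = net(x)
# Plot the detections for the first (and only) image in the batch.
ax = utils.viz.plot_bbox(orig_img, bboxes[0], scores[0], box_ids[0], class_names=net.classes)
plt.show()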

[Output figure: Faster RCNN detection results plotted on the test image]

Total running time of the script: (0 minutes 5.818 seconds)

Gallery generated by Sphinx-Gallery