{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Dive Deep into Training TSN mdoels on UCF101\n\nThis is a video action recognition tutorial using Gluon CV toolkit, a step-by-step example.\nThe readers should have basic knowledge of deep learning and should be familiar with Gluon API.\nNew users may first go through `A 60-minute Gluon Crash Course
Feel free to skip this tutorial; the training script is self-contained and ready to launch.\n\n :download:`Download Full Python Script: train_recognizer.py<../../../scripts/action-recognition/train_recognizer.py>`\n\n Example training commands::\n\n # Fine-tune a pretrained VGG16 model without using a temporal segment network.\n python train_recognizer.py --model vgg16_ucf101 --num-classes 101 --num-gpus 8 --lr-mode step --lr 0.001 --lr-decay 0.1 --lr-decay-epoch 30,60,80 --num-epochs 80\n\n # Fine-tune a pretrained VGG16 model using a temporal segment network.\n python train_recognizer.py --model vgg16_ucf101 --num-classes 101 --num-gpus 8 --num-segments 3 --lr-mode step --lr 0.001 --lr-decay 0.1 --lr-decay-epoch 30,60,80 --num-epochs 80\n\n For more training command options, please run ``python train_recognizer.py -h``.\n Please check out the `model_zoo <../model_zoo/index.html#action_recognition>`_ for the training commands used to reproduce the pretrained models.
To finish the tutorial quickly, we train for only 3 epochs, with 100 iterations per epoch.\n For your own experiments, we recommend setting ``epochs=80`` for the full UCF101 dataset.
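\n\nThe temporal segment network option above (``--num-segments 3``) rests on a simple idea: split a video into several equal temporal segments and sample one frame (or short snippet) from each, so the network sees the whole duration cheaply. Below is a minimal, illustrative sketch of that sampling scheme; it is not the GluonCV implementation, and the function name and details are assumptions::\n\n import random\n\n def sample_segment_indices(num_frames, num_segments, deterministic=True):\n """Pick one frame index per temporal segment (TSN-style sampling sketch).\n\n The video is split into ``num_segments`` equal chunks; one frame is\n taken from each chunk (the center frame when deterministic, else a\n random one, as done during training for augmentation).\n """\n seg_len = num_frames / num_segments\n indices = []\n for s in range(num_segments):\n start = int(seg_len * s)\n end = int(seg_len * (s + 1))\n if deterministic:\n indices.append((start + end - 1) // 2)\n else:\n indices.append(random.randrange(start, max(start + 1, end)))\n return indices\n\n # For a 90-frame clip and 3 segments, the center frames are [14, 44, 74].\n print(sample_segment_indices(90, 3))\n\nThe sampled frames are then classified individually and their predictions aggregated, which is why the TSN variant needs no extra parameters over the base VGG16 model.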