© 2018 IEEE. Convolutional Neural Networks (CNNs) are the state of the art for many computer vision problems, including object detection. However, reducing the computational complexity of a CNN is a key prerequisite to deploying state-of-the-art deep learning networks in many low-power embedded real-time robotic applications. Pruning has been shown to be an effective method for reducing the computational complexity of a CNN while maintaining accuracy. In the literature, accuracy lost through pruning is recovered with extended fine-tuning of the pruned network at the end of the pruning procedure, but no further pruning is conducted after this extended fine-tuning. In this work we modify the pruning procedure to incorporate extended fine-tuning at intervals during the procedure, maintaining network accuracy while pruning further than would otherwise be possible. We evaluate this procedure on a small-scale custom object detection dataset and on the more challenging standard PASCAL VOC dataset. On the former, the new procedure achieves a 19.6× reduction in FLOPS for a drop of only 0.4% mean Average Precision (mAP), while on the latter it achieves only a 1.8× reduction in FLOPS for a drop of 0.8% mAP. The results indicate differing levels of parameter redundancy in the initial networks.
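The interleaved prune-and-fine-tune loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes magnitude-based pruning as the saliency criterion and takes the fine-tuning step as a caller-supplied callback; the names `magnitude_prune`, `sparsity_schedule`, and `finetune_interval` are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights.
    (Hypothetical criterion for illustration; the paper's actual saliency
    measure may differ.)"""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def prune_with_interleaved_finetuning(weights, sparsity_schedule,
                                      finetune_interval, fine_tune):
    """Prune toward each target sparsity in turn; every `finetune_interval`
    pruning steps, run an extended fine-tuning pass (supplied by the caller)
    before pruning further, rather than fine-tuning only once at the end."""
    for step, sparsity in enumerate(sparsity_schedule, start=1):
        weights = magnitude_prune(weights, sparsity)
        if step % finetune_interval == 0:
            weights = fine_tune(weights)  # extended fine-tuning at intervals
    return weights
```

In a real setting `fine_tune` would resume training of the pruned network on the task data; here it is left abstract so the control flow of the modified procedure stands out.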