Predefined Model Architectures
Individual layers can be used to build any custom model; however, several common architectures are predefined and do not need to be built from scratch.
The model_name parameter defines the name of the predefined network. Currently supported networks include:
- A classic image classification network (Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).)
- A classic image classification network (Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.)
- The convolutional architecture used by the authors of YOLOv2, the object detection system implemented in DPP. (Redmon, J., & Farhadi, A. (2017). YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7263-7271).)
- The convolutional architecture used in the Count-ception paper, meant for object counting via redundant counting. (Paul Cohen, J., Boucher, G., Glastonbury, C. A., Lo, H. Z., & Bengio, Y. (2017). Count-ception: Counting by fully convolutional redundant counting. In Proceedings of the IEEE International Conference on Computer Vision (pp. 18-26).)
- A tiny convolutional network with three low-capacity convolutional layers, three pooling layers, and a single small fully connected layer with 64 units, for simple problems that require a small memory footprint.
- A slightly higher-capacity feature extractor with five convolutional layers, using batch normalization between each block of convolutional layers, and the same 64-unit fully connected layer as xsmall to avoid overfitting on plant phenotyping datasets.
- Uses the full vgg-16 feature extractor, but with batch normalization instead of dropout, and a slightly larger 256-unit fully connected layer.
- Uses the full vgg-16 feature extractor, but with batch normalization instead of dropout, and two fully connected layers with 512 and 384 units respectively.
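To make the structure of the tiny network above concrete, the following is a rough shape trace of its layer stack. This is an illustrative sketch, not DPP code: the filter counts and the 2x2 pooling size are assumptions, since the text only specifies three low-capacity convolutional layers, three pooling layers, and a 64-unit fully connected layer.

```python
def xsmall_shape_trace(height, width, channels=3, filters=(16, 32, 32)):
    """Trace tensor shapes through the assumed layer stack of the tiny network.

    filters gives the number of output channels of each convolutional layer;
    these counts are placeholders, not values taken from DPP.
    """
    shapes = [("input", height, width, channels)]
    h, w, c = height, width, channels
    for i, f in enumerate(filters):
        c = f                     # a 'same'-padded convolution keeps h and w
        shapes.append((f"conv{i + 1}", h, w, c))
        h, w = h // 2, w // 2     # assumed 2x2 pooling halves h and w
        shapes.append((f"pool{i + 1}", h, w, c))
    shapes.append(("flatten", h * w * c))
    shapes.append(("fc", 64))     # the single small 64-unit fully connected layer
    return shapes
```

For a 32x32 RGB input, this traces three conv/pool blocks down to a 4x4 feature map before the 64-unit fully connected layer.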