CNN Features Off-the-Shelf: An Astounding Baseline for Recognition
2014 (English). In: Proceedings of CVPR 2014, 2014. Conference paper (refereed).
Recent results indicate that generic descriptors extracted from convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine-grained recognition, attribute detection, and image retrieval, applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistently superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval, it consistently outperforms low-memory-footprint methods, except on the sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in the case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques, e.g., jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
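The retrieval setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the random `gallery` matrix stands in for real 4096-dimensional OverFeat activations (which would be extracted from a network layer in practice), and the query is a hypothetical perturbed copy of one gallery image. Only the L2-normalization and L2-distance ranking reflect the method the paper reports.

```python
import numpy as np

# Placeholder features: random vectors standing in for 4096-d OverFeat
# activations. In the actual pipeline these come from a layer of the net.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((100, 4096))              # "database" images
query = gallery[7] + 0.01 * rng.standard_normal(4096)   # noisy copy of image 7

def l2_normalize(x):
    """Scale feature vectors to unit L2 norm before comparison."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

gallery_n = l2_normalize(gallery)
query_n = l2_normalize(query)

# Retrieval: rank all gallery images by L2 distance to the query.
dists = np.linalg.norm(gallery_n - query_n, axis=1)
ranking = np.argsort(dists)
print(ranking[0])  # index of the nearest neighbour
```

For the classification tasks, the same fixed features would instead be fed to a linear SVM; the paper's point is that this simple pipeline on top of off-the-shelf features already matches or beats highly tuned systems.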
Identifiers: URN: urn:nbn:se:kth:diva-149178; DOI: 10.1109/CVPRW.2014.131; ISI: 000349552300079; Scopus ID: 2-s2.0-84908537903; OAI: oai:DiVA.org:kth-149178; DiVA: diva2:738235
Computer Vision and Pattern Recognition (CVPR) 2014, DeepVision workshop, June 28, 2014, Columbus, Ohio.
Best Paper Runner-up Award.
QC 20140825. Created: 2014-08-16. Last updated: 2016-09-08. Bibliographically approved.