
Caffe con Troll: Shallow Ideas to Speed Up Deep Learning.

Author information:
1. Stanford University.
2. Stanford University; University of Wisconsin-Madison.

Abstract

We present Caffe con Troll (CcT), a fully compatible end-to-end version of the popular framework Caffe with rebuilt internals. We built CcT to examine the performance characteristics of training and deploying general-purpose convolutional neural networks across different hardware architectures. We find that, by employing standard batching optimizations for CPU training, we achieve a 4.5× throughput improvement over Caffe on popular networks like CaffeNet. Moreover, with these improvements, the end-to-end training time for CNNs is directly proportional to the FLOPS delivered by the CPU, which enables us to efficiently train CNNs on hybrid CPU-GPU systems.
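The "standard batching optimizations" the abstract refers to are typically lowering-based: convolution is recast as one large matrix multiply (im2col), so the patches of an entire batch of images feed a single BLAS-friendly GEMM rather than many small per-image loops. The following is a minimal pure-Python sketch of that idea; the function names and list-of-lists data layout are illustrative assumptions, not CcT's actual API.

```python
# Hypothetical sketch of im2col lowering: convolution as a batched matrix
# product. All names here are illustrative, not taken from CcT's code.

def im2col(image, k):
    """Unroll every k x k patch of a 2-D image (list of lists) into a row."""
    h, w = len(image), len(image[0])
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append([image[i + di][j + dj]
                         for di in range(k) for dj in range(k)])
    return rows

def conv_as_gemm(images, kernel):
    """Convolve a whole batch of images via one stacked matrix product.

    Stacking the unrolled patches of all images into a single operand is
    what lets one large GEMM call replace many small convolutions."""
    k = len(kernel)
    flat_kernel = [kernel[di][dj] for di in range(k) for dj in range(k)]
    patches = [row for img in images for row in im2col(img, k)]
    return [sum(a * b for a, b in zip(row, flat_kernel)) for row in patches]
```

In a real implementation the stacked patch matrix would be handed to an optimized BLAS GEMM (e.g. OpenBLAS or MKL), which is where the CPU throughput gains come from.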
