A benchmarking suite comparing the performance of public convolutional neural network implementations across multiple deep learning frameworks.
ConvNet Benchmarks is a benchmarking suite that compares the performance of various convolutional neural network implementations across different deep learning frameworks. It measures forward and backward pass times for popular CNN architectures such as AlexNet, Overfeat, and GoogLeNet, providing standardized performance data that helps researchers and developers evaluate framework efficiency.
Deep learning researchers, computer vision engineers, and framework developers who need to compare the computational performance of different CNN implementations for model training and inference.
It provides transparent, hardware-standardized benchmarks that reveal real performance differences between frameworks—helping users avoid misleading conclusions about framework speed and enabling data-driven framework selection.
Easy benchmarking of all publicly accessible implementations of convnets
Benchmarks CNN implementations across TensorFlow, Caffe, Torch, Chainer, and Nervana Neon, giving a broad, apples-to-apples view of performance differences in its standardized results tables.
All tests run on identical hardware (Intel i7 + NVIDIA Titan X) with Ubuntu 14.04, ensuring fair and reproducible comparisons for the era.
Reports separate times for forward and backward passes, plus layer-wise analysis for spatial convolutions, offering granular insights into computational bottlenecks.
The README includes explicit notes clarifying cuDNN bindings and implementation details, preventing misinterpretation of framework speeds.
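The suite's core measurement is wall-clock time per forward and backward pass, averaged over repeated runs after a warm-up. A minimal sketch of that timing methodology is below; the matrix multiply stands in for a real convolution layer, and the function and variable names (`time_op`, `fwd_ms`, `bwd_ms`) are illustrative, not taken from the project's code.

```python
import time
import numpy as np

def time_op(fn, warmup=3, iters=10):
    """Run fn untimed a few times (warm-up), then return mean wall time in ms."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1000.0

# Toy "layer": a dense matmul stands in for a spatial convolution.
x = np.random.rand(128, 1024).astype(np.float32)   # batch of inputs
w = np.random.rand(1024, 1024).astype(np.float32)  # layer weights
grad = np.random.rand(128, 1024).astype(np.float32)  # upstream gradient

fwd_ms = time_op(lambda: x @ w)                       # forward pass
bwd_ms = time_op(lambda: (grad @ w.T, x.T @ grad))    # grads w.r.t. input and weights

print(f"forward: {fwd_ms:.2f} ms  backward: {bwd_ms:.2f} ms")
```

Reporting forward and backward times separately, as the suite does, matters because backward passes typically cost roughly twice the forward pass and can bottleneck differently per framework.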
Last updated in 2015, the suite lacks benchmarks for modern frameworks such as PyTorch and for newer CNN architectures, limiting its current relevance.
Only tests older models (e.g., AlexNet, GoogLeNet v1) from the early 2010s, not covering architectures essential for today's computer vision tasks.
Benchmarks are tied to the NVIDIA Titan X, a long-discontinued GPU, so the results do not transfer to current architectures such as Ampere or Hopper.
The project has seen no commits since 2015, with no support for newer software versions and no bug fixes, so adapting it to a modern stack would require significant effort.