Open-Awesome

ncnn

License: NOASSERTION · C++ · Updated 2026-01-13

A high-performance neural network inference framework optimized for mobile platforms, enabling efficient AI deployment on edge devices.

GitHub
23.1k stars · 4.4k forks · 0 contributors

What is ncnn?

ncnn is a high-performance neural network inference framework optimized for mobile and embedded platforms. It enables efficient deployment of deep learning models on edge devices, solving the problem of running AI applications directly on smartphones and other resource-constrained hardware. The framework is designed for speed and minimal resource consumption while maintaining compatibility with popular model formats.
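The basic workflow described above can be sketched as follows. This is a minimal, hedged example of ncnn's C++ API: the file names "model.param"/"model.bin" and the blob names "data"/"output" are placeholders that depend on your own model, and building it requires linking against the ncnn library.

```cpp
// Minimal ncnn inference sketch. File names and blob names are
// placeholders for your own converted model.
#include "net.h"  // ncnn

int main()
{
    ncnn::Net net;

    // ncnn uses a two-file model format: a text graph definition
    // (.param) and a binary weights blob (.bin).
    if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0)
        return -1;

    // Wrap input data in an ncnn::Mat (here a blank 224x224 RGB tensor).
    ncnn::Mat in(224, 224, 3);
    in.fill(0.f);

    // An Extractor runs the graph lazily, computing only what the
    // requested output blob needs.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);

    ncnn::Mat out;
    ex.extract("output", out);
    return 0;
}
```

In a real application the input Mat would be filled from camera or image pixels rather than zeros, and the output Mat holds the raw network scores to post-process.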

Target Audience

Mobile app developers, embedded systems engineers, and AI researchers who need to deploy neural networks on Android, iOS, or other edge devices. It's particularly valuable for teams building computer vision, face detection, or object recognition applications for mobile platforms.

Value Proposition

Developers choose ncnn for its exceptional performance on mobile CPUs, lack of third-party dependencies, and comprehensive platform support. Its unique selling point is being faster than other known open-source frameworks on mobile phone CPUs while maintaining full cross-platform compatibility and GPU acceleration via Vulkan.

Overview

ncnn is a high-performance neural network inference framework optimized for the mobile platform.

Use Cases

Best For

  • Deploying computer vision models on Android and iOS apps
  • Running real-time object detection on mobile devices
  • Building AI-powered features for mobile applications like photo editing
  • Edge AI inference on embedded systems and single-board computers
  • Optimizing neural network performance for ARM-based processors
  • Developing cross-platform AI applications with GPU acceleration
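For the computer-vision use cases above, input images typically need resizing and normalization before inference. A hedged sketch using ncnn's Mat helpers follows; the 224x224 target size and the mean/norm constants are illustrative placeholders, and should match whatever your model was trained with.

```cpp
// Typical ncnn image preprocessing: raw BGR byte buffer ->
// resized, normalized float tensor. Constants are placeholders.
#include "mat.h"  // ncnn

ncnn::Mat preprocess(const unsigned char* bgr, int w, int h)
{
    // Convert BGR pixels to RGB and resize to the network input
    // size in one step.
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        bgr, ncnn::Mat::PIXEL_BGR2RGB, w, h, 224, 224);

    // Per-channel normalization: (x - mean) * norm.
    const float mean_vals[3] = {123.675f, 116.28f, 103.53f};
    const float norm_vals[3] = {1 / 58.395f, 1 / 57.12f, 1 / 57.375f};
    in.substract_mean_normalize(mean_vals, norm_vals);  // ncnn's actual method spelling
    return in;
}
```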

Not Ideal For

  • Teams building server-side inference pipelines with large-scale models and high throughput requirements
  • Developers seeking rapid prototyping with high-level Python APIs and extensive pre-built model zoos
  • Projects targeting GPU acceleration exclusively on non-Vulkan platforms, such as legacy CUDA or Metal-only environments
  • Organizations that require extensive English documentation and English-language community support channels

Pros & Cons

Pros

Mobile CPU Speed Demon

Uses ARM NEON assembly-level optimizations to achieve faster inference than other known open-source frameworks on mobile phone CPUs, as claimed in the project README.

Dependency-Free Design

Pure C++ implementation with no third-party libraries simplifies cross-platform deployment and reduces binary size, ideal for mobile apps where dependencies are minimized.

Vulkan GPU Boost

Supports the low-overhead Vulkan API for efficient GPU acceleration across mobile and desktop platforms, enabling performance gains on compatible devices.
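Turning on the Vulkan path is a one-line option in ncnn. A minimal sketch, assuming placeholder model file names; the flag must be set before the model is loaded, and ncnn falls back to the CPU path when no suitable Vulkan device is available.

```cpp
// Opting into Vulkan GPU compute in ncnn.
#include "net.h"  // ncnn

void load_with_vulkan(ncnn::Net& net)
{
    // Must be set before load_param/load_model so GPU-side
    // resources are created during model loading.
    net.opt.use_vulkan_compute = true;

    net.load_param("model.param");  // placeholder file names
    net.load_model("model.bin");
}
```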

Memory Efficiency

Implements zero-copy loading and sophisticated data structures to minimize memory footprint, crucial for resource-constrained edge devices like smartphones and embedded systems.

Cons

Complex Initial Setup

Building and integrating ncnn requires navigating detailed platform-specific instructions across numerous OSes and architectures, which can be daunting for newcomers despite the comprehensive wiki.

Vulkan-Only GPU Path

GPU acceleration is tied solely to Vulkan, limiting options on devices without Vulkan support or where other APIs like Apple's Metal or NVIDIA's CUDA are preferred or more mature.

Language Barrier

Key support channels like QQ groups and some documentation are primarily in Chinese, which may hinder accessibility and troubleshooting for non-Chinese speaking developers.


Quick Stats

Stars: 23,140
Forks: 4,417
Contributors: 0
Open issues: 1,071
Last commit: 2 days ago
Created: 2017

Tags

#vulkan #ios #simd #deep-learning #android #model-deployment #c-plus-plus #inference #cross-platform #artificial-intelligence #edge-computing #arm-neon #computer-vision

Built With

C
C++

Included in

C/C++ (70.6k) · Vulkan (3.7k)
Auto-fetched 1 day ago

Related Projects

OpenCV

Open Source Computer Vision Library

Stars: 87,213
Forks: 56,544
Last commit: 2 days ago

FAISS

A library for efficient similarity search and clustering of dense vectors.

Stars: 39,808
Forks: 4,347
Last commit: 1 day ago

XGBoost

Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow

Stars: 28,301
Forks: 8,864
Last commit: 2 days ago

Darknet

Convolutional Neural Networks

Stars: 26,450
Forks: 21,131
Last commit: 2 years ago
Community-curated · Updated weekly · 100% open source

Found a gem we're missing?

Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.

Submit a project · Star on GitHub