Open-Awesome

finn

BSD-3-Clause · Python · v0.10.1

A dataflow compiler for quantized neural network inference on FPGAs, generating highly efficient custom accelerators.

Visit Website · GitHub
980 stars · 295 forks · 0 contributors

What is finn?

FINN is a dataflow compiler framework for quantized neural network inference on FPGAs. It generates highly efficient, customized dataflow-style architectures to accelerate neural network inference, achieving high throughput and low latency. The framework is open-source and experimental, developed by AMD Research & Advanced Development to explore neural network implementations across software/hardware stacks.
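To make "quantized neural network inference" concrete, here is a minimal sketch (plain Python, not FINN's API) of symmetric uniform quantization, the kind of low-precision integer arithmetic that FINN-generated accelerators exploit on FPGAs. The function names and bit width are illustrative assumptions.

```python
# Illustrative sketch, NOT FINN's API: symmetric uniform quantization,
# mapping float weights to small signed integers for low-precision inference.

def quantize(values, bits):
    """Map floats to signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit signed
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.05, -0.77]
q, scale = quantize(weights, bits=4)
print(q)                      # -> [7, -3, 0, -6], all in [-8, 7]
print(dequantize(q, scale))   # approximate reconstruction of weights
```

The quantized values fit in 4 bits each, which is why FPGA implementations can trade multipliers for much cheaper low-precision logic.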

Target Audience

Researchers and engineers working on FPGA-based deep learning acceleration, particularly those focused on quantized neural networks and custom hardware architectures. It's also suitable for developers exploring high-performance inference solutions with low latency requirements.

Value Proposition

Developers choose FINN for its ability to generate highly efficient, dataflow-style FPGA accelerators tailored to specific quantized neural networks, offering superior performance and flexibility compared to generic inference frameworks. Its open-source nature enables deep customization and research across the hardware/software stack.

Overview

Dataflow compiler for QNN inference on FPGAs

Use Cases

Best For

  • Accelerating quantized neural network inference on FPGAs
  • Generating custom dataflow architectures for neural networks
  • Researching neural network implementations across hardware/software stacks
  • Achieving high-throughput and low-latency inference on FPGAs
  • Exploring FPGA-based deep learning acceleration with open-source tools
  • Building specialized accelerators for quantized models

Not Ideal For

  • Projects requiring floating-point or non-quantized neural network inference
  • Teams seeking a production-ready, stable framework without experimental risks
  • Developers without FPGA hardware access or expertise in FPGA toolchains
  • Applications needing quick deployment with minimal setup and dependency management

Pros & Cons

Pros

Quantized Network Optimization

Specifically targets quantized neural networks, improving FPGA resource efficiency and performance for low-precision inference.

Custom Dataflow Architectures

Generates dataflow-style architectures tailored to each network, achieving high throughput and low latency through specialized hardware pipelines.
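The dataflow idea can be sketched in plain Python (this is an analogy, not FINN itself): each layer becomes its own streaming stage that consumes results from the stage before it, just as FINN instantiates a dedicated hardware pipeline per layer. The stage functions below are hypothetical stand-ins for real layers.

```python
# Illustrative analogy, not FINN code: dataflow-style pipeline where each
# "layer" is a streaming stage, mirroring FINN's per-layer hardware pipelines.

def stage(fn, upstream):
    """Wrap a layer function as a streaming pipeline stage."""
    for item in upstream:
        yield fn(item)

inputs = iter([-2, 1, 3])
relu = stage(lambda x: max(x, 0), inputs)   # stage 1: hypothetical ReLU layer
scaled = stage(lambda x: x * 3, relu)       # stage 2: hypothetical scaling layer
print(list(scaled))                         # -> [0, 3, 9]
```

In hardware, all stages run concurrently on different inputs, which is where the high throughput and low latency come from.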

Open-Source Flexibility

Fully open-source, enabling deep customization and research across software/hardware abstraction layers for advanced users and academia.

Docker-Based Reproducibility

Uses Docker for compilation to manage complex dependencies, ensuring reproducible builds and easier setup in controlled environments.
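An illustrative entry into that Docker environment might look like the following; this follows FINN's documented `run-docker.sh` workflow, but the variable names and tool version shown are assumptions that may differ between releases, so check the project's Getting Started docs.

```shell
# Environment-setup sketch based on FINN's documented Docker workflow.
# Paths and versions below are assumptions -- adjust for your installation.
export FINN_XILINX_PATH=/opt/Xilinx     # assumed Xilinx tools install path
export FINN_XILINX_VERSION=2024.1       # assumed tool version

git clone https://github.com/Xilinx/finn
cd finn
bash ./run-docker.sh                    # builds and enters the FINN container
```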

Cons

Experimental Nature

Labeled as experimental, so it lacks the stability, regular updates, and production support of mature frameworks like TensorRT or Vitis AI.

Docker-Only Execution

Only supports Docker-based execution, which adds container overhead and limits flexibility for bare-metal or non-containerized deployments.

Complex Setup Requirements

Requires FPGA development tools and hardware access, making initial setup challenging and time-consuming for those unfamiliar with FPGA workflows.

Quick Stats

Stars: 980
Forks: 295
Contributors: 0
Open issues: 68
Last commit: 2 days ago
Created: 2018

Tags

#fpga #compiler #neural-network #neural-network-inference #deep-learning #quantization #hardware-acceleration #low-latency #dataflow #high-throughput

Built With

Docker

Links & Resources

Website

Included in

Robotic Tooling (3.8k)

Related Projects

gym

A toolkit for developing and comparing reinforcement learning algorithms.

Stars: 37,177 · Forks: 8,704 · Last commit: 1 month ago
fastai

The fastai deep learning library

Stars: 27,990 · Forks: 7,661 · Last commit: 5 days ago
mlflow

The open source AI engineering platform for agents, LLMs, and ML models. MLflow enables teams of all sizes to debug, evaluate, monitor, and optimize production-quality AI applications while controlling costs and managing access to models and data.

Stars: 25,689 · Forks: 5,673 · Last commit: 1 day ago
MNN

MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.

Stars: 15,080 · Forks: 2,299 · Last commit: 5 days ago