A header-only, Vulkan-based library providing a CUDA Runtime API interface for GPU-accelerated applications.
VUDA is a header-only C++ library that implements the CUDA Runtime API interface using Vulkan as the backend. It allows developers to write GPU-accelerated applications with CUDA-like syntax while running on any hardware that supports Vulkan, including non-NVIDIA GPUs. This solves the problem of CUDA code being locked to NVIDIA hardware by providing a cross-platform alternative.
VUDA targets C++ developers working on GPU-accelerated applications who want to keep CUDA compatibility while gaining cross-platform support, or who need to run CUDA-like code on non-NVIDIA hardware.
Developers choose VUDA because it provides the familiar CUDA programming model while enabling true hardware portability through Vulkan. Unlike vendor-locked solutions, it allows the same codebase to run across different GPU vendors and operating systems with minimal changes.
Implements the CUDA Runtime API on top of Vulkan, enabling existing CUDA code to run with minimal changes on non-NVIDIA hardware such as AMD and Intel GPUs.
Easy to add to C++ projects: a single #include of vuda.hpp or vuda_runtime.hpp is all the setup required, with no complex build dependencies beyond a Vulkan SDK.
Uses near-identical function calls to CUDA (e.g. cudaSetDevice, cudaMalloc, cudaMemcpy), reducing the learning curve for developers already skilled in CUDA programming.
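To illustrate the CUDA-like style, here is a minimal vector-add host program sketched after the kind of example in VUDA's documentation. It cannot run without the VUDA header and a Vulkan-capable device; `add.spv` is an assumed precompiled SPIR-V compute shader, and the `vuda::launchKernel` call stands in for CUDA's `<<<...>>>` launch syntax:

```cpp
#include <vuda_runtime.hpp>

int main()
{
    // Assign a Vulkan device to this thread (mirrors cudaSetDevice).
    cudaSetDevice(0);

    const int N = 5000;
    int a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = -i; b[i] = i * i; }

    // Allocate device memory with the familiar CUDA calls.
    int *dev_a, *dev_b, *dev_c;
    cudaMalloc((void**)&dev_a, N * sizeof(int));
    cudaMalloc((void**)&dev_b, N * sizeof(int));
    cudaMalloc((void**)&dev_c, N * sizeof(int));

    // Copy input arrays to the device.
    cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Launch the SPIR-V kernel; this replaces CUDA's <<<blocks, threads>>> syntax.
    const int blocks = 128, threads = 128, stream_id = 0;
    vuda::launchKernel("add.spv", "main", stream_id, blocks, threads,
                       dev_a, dev_b, dev_c, N);

    // Copy the result back and release device memory.
    cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);
    return 0;
}
```

Apart from the launch call, the host code is line-for-line what one would write against the CUDA Runtime API.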
Leverages Vulkan's cross-platform support, allowing GPU acceleration on any operating system and hardware where a Vulkan driver is available.
Has known deviations from the CUDA specification, documented in the project wiki, so some advanced CUDA features may be unsupported or behave differently.
Adds an abstraction layer over Vulkan, which can introduce overhead and reduce performance compared to native CUDA on NVIDIA hardware, especially for compute-intensive workloads.
Requires kernels to be written as compute shaders and compiled to SPIR-V separately, unlike CUDA's single-source nvcc toolchain, adding extra steps to kernel compilation and launch.
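Concretely, the device-side kernel lives in its own shader file rather than alongside the host code. A sketch of what a vector-add kernel might look like as Vulkan GLSL is shown below; the buffer-binding and push-constant layout here is an assumption for illustration, not VUDA's documented convention, which its wiki specifies:

```glsl
#version 450
layout(local_size_x = 128) in;

// Storage buffers corresponding to the kernel's pointer arguments.
layout(std430, binding = 0) buffer bufA { int a[]; };
layout(std430, binding = 1) buffer bufB { int b[]; };
layout(std430, binding = 2) buffer bufC { int c[]; };

// Scalar argument passed as a push constant (assumed layout).
layout(push_constant) uniform Args { int N; };

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i < uint(N))
        c[i] = a[i] + b[i];
}
```

A file like this would then be compiled to SPIR-V with a standalone tool, e.g. `glslangValidator -V add.comp -o add.spv`, before the host program can launch it.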
Similar projects:
- ncnn: a high-performance neural network inference framework optimized for the mobile platform.
- bgfx: a cross-platform, graphics-API-agnostic, "Bring Your Own Engine/Framework" style rendering library.
- GLFW: a multi-platform library for OpenGL, OpenGL ES, Vulkan, window and input.
- MoltenVK: a Vulkan Portability implementation. It layers a subset of the high-performance, industry-standard Vulkan graphics and compute API over Apple's Metal graphics framework, enabling Vulkan applications to run on macOS, iOS and tvOS.