SDKs for adding private, on-device AI features like LLM chat, speech-to-text, and text-to-speech to mobile and web apps.
RunAnywhere is a production-ready toolkit providing multi-platform SDKs for integrating on-device AI capabilities into applications. Text generation, speech-to-text, and text-to-speech run locally with no cloud dependency, giving apps privacy, offline functionality, and low latency, and removing cloud costs and data-privacy concerns by keeping AI features entirely on the user's device.
Mobile and web developers building applications that require private, offline AI features, such as voice assistants, chatbots, or AI-powered tools, on iOS, Android, React Native, Flutter, and the web. It also suits developers who prioritize data privacy and low-latency AI interactions.
Developers choose RunAnywhere because it offers a unified, multi-platform solution for on-device AI that eliminates cloud costs, latency, and privacy risks. Its unique selling point is the fully offline voice AI pipeline that combines speech-to-text, LLM processing, and text-to-speech, along with support for various AI models and vision capabilities on supported platforms.
Production-ready toolkit to run AI locally
Provides stable SDKs for Swift and Kotlin, plus beta SDKs for Web, React Native, and Flutter, enabling consistent on-device AI across iOS, Android, and cross-platform frameworks, as shown in the platform support table.
Ensures all AI inference runs locally with no data sent to the cloud, addressing privacy concerns and enabling offline functionality, as emphasized in the philosophy and feature descriptions.
Combines speech-to-text, LLM processing, and text-to-speech into a fully offline voice assistant, demonstrated in the Voice AI GIF and supported across all SDKs except where noted.
Supports downloading and loading various GGUF and ONNX models (e.g., Llama, Whisper) with progress tracking, allowing customization based on device constraints, as detailed in the Supported Models section.
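The progress-tracked model download described above can be sketched generically. This is a minimal illustration of chunked downloading with a progress callback; the names (`download_model`, `ProgressFn`) are hypothetical stand-ins, not RunAnywhere's actual API.

```python
# Hypothetical sketch of a progress-tracked model download.
# All names here are illustrative, not RunAnywhere's API.
from typing import Callable, Iterable

ProgressFn = Callable[[int, int], None]  # (bytes_done, bytes_total)

def download_model(chunks: Iterable[bytes], total_bytes: int,
                   on_progress: ProgressFn) -> bytes:
    """Accumulate a model file chunk by chunk, reporting progress."""
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
        on_progress(len(buf), total_bytes)
    return bytes(buf)

# Usage with an in-memory stand-in for a GGUF download stream:
events = []
data = download_model([b"ab", b"cd", b"e"], 5,
                      lambda done, total: events.append(done / total))
# events == [0.4, 0.8, 1.0]
```

A real SDK would stream from a model registry over HTTP and write to disk, but the callback shape (bytes done vs. total) is the common pattern for surfacing download progress in a UI.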
Key capabilities like vision language models are only available on iOS and Web, not on Android, React Native, or Flutter, limiting cross-platform use cases, as indicated in the Features table.
Web, React Native, and Flutter SDKs are labeled beta, so they may see breaking changes, ship incomplete features (e.g., structured output is missing in some), and carry less mature documentation, making them riskier for production use.
Requires at least 2 GB of RAM, with 4 GB+ recommended for larger models, which excludes low-end devices; model downloads also increase app size, as noted in the Requirements section.
Involves multiple steps, such as initializing the SDK, registering backends (e.g., LlamaCPP), and managing model downloads separately, adding overhead compared to the simpler integration of cloud-based APIs.
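The multi-step workflow this refers to can be sketched in outline. Every class and method name below (`OnDeviceSDK`, `register_backend`, `load_model`, `chat`) is a hypothetical stand-in for the steps described, not RunAnywhere's actual API; the point is that each step is explicit rather than hidden behind a single cloud API call.

```python
# Illustrative workflow sketch only: names are hypothetical stand-ins
# for the steps described (init, backend registration, model load),
# not RunAnywhere's actual API.
class LlamaCppBackend:
    name = "llamacpp"
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"  # stand-in for on-device inference

class OnDeviceSDK:
    def __init__(self):
        self.backends = {}
        self.model_loaded = False
    def register_backend(self, backend) -> None:
        self.backends[backend.name] = backend
    def load_model(self, model_id: str) -> None:
        # A real SDK would download the GGUF/ONNX file here first.
        self.model_loaded = True
    def chat(self, prompt: str, backend: str = "llamacpp") -> str:
        assert self.model_loaded, "call load_model() before chat()"
        return self.backends[backend].generate(prompt)

# Each step is explicit, which is the integration overhead noted above:
sdk = OnDeviceSDK()
sdk.register_backend(LlamaCppBackend())
sdk.load_model("llama-3.2-1b-q4")
reply = sdk.chat("hello")
```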