A self-hosted personal AI assistant that connects to your existing messaging apps and devices.
A platform to run, manage, and serve open-source large language models (LLMs) locally or on your own infrastructure.
A C/C++ library for efficient, cross-platform LLM inference with extensive hardware support and quantization.
Run large language models (LLMs) privately on everyday desktops and laptops, with no cloud API calls and no GPU required.
A local, open-source implementation of OpenAI's Code Interpreter that lets LLMs run code on your computer through a natural language interface.
A web UI and optimization library for running and fine-tuning open-source AI models locally with 2x faster training and 70% less VRAM.
A Python library for parsing diverse document formats into structured data, optimized for integration with generative AI applications.
A production-ready API for building private, context-aware AI applications that query documents using local LLMs with no data leaving your environment.
An open-source AI engine that runs LLMs, vision, voice, and image/video models on any hardware with drop-in OpenAI API compatibility.
An open-source, extensible AI agent that runs locally for code, workflows, and automation with support for 15+ LLM providers.
An open-source AI memory tool that captures your screen and audio locally, enabling search and automation agents based on your computer activity.
A desktop app for offline audio/video transcription, using OpenAI Whisper with privacy-focused local processing.
A smart model router for OpenClaw that cuts LLM costs up to 70% by routing requests to the cheapest capable model.
A native macOS voice-to-text app that transcribes speech to text instantly with 100% offline processing.
A lightweight, single-binary Rust inference server providing 100% OpenAI-API compatible endpoints for local GGUF models.
A local-first, ML-powered desktop application for translating manga, built in Rust with automated text detection, OCR, inpainting, and LLM translation.
A self-learning vector database with graph intelligence, local AI, and PostgreSQL integration, built for real-time adaptation.
A C#/.NET library for efficient local inference of LLaMA and other large language models, based on llama.cpp.
A CLI and companion app that spins up a complete, pre-wired local LLM stack with hundreds of services using a single command.
A Neovim plugin for generating and editing text using local LLMs like Llama and Mistral via Ollama.
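Several of the servers above advertise drop-in OpenAI API compatibility, which means any client that speaks the OpenAI chat-completions wire format can point at them. A minimal sketch, assuming a local server listening on `localhost:8080` (the host, port, and model name are placeholders, not taken from any specific project above):

```python
import json
import urllib.request

# Chat-completion request in the OpenAI wire format; "local-model" and the
# endpoint URL are placeholders for whatever local server and model you run.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print(reply)
except OSError as exc:
    # No local server running; the request shape is the point of the sketch.
    print(f"no local server: {exc}")
```

Because the request and response shapes match the hosted OpenAI API, swapping between a cloud model and a local one is usually just a base-URL change.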
Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.