A self-hosted, extensible AI platform with a user-friendly web interface supporting Ollama, OpenAI API, and offline RAG.
An in-memory data structure store used as a cache, database, message broker, and vector query engine for real-time applications.
An open-source framework for building LLM-powered applications with data ingestion, indexing, and retrieval capabilities.
A high-performance, cloud-native vector database built for scalable approximate nearest neighbor (ANN) search.
An opinionated RAG framework for integrating generative AI into applications, supporting any LLM, vector store, and file type.
A high-performance vector database and search engine written in Rust, designed for AI applications with filtering and payload support.
Open-source vector database and embedding store for building AI applications with semantic search.
A customizable AI chatbot agent that ingests PDF documents, stores embeddings in a vector database, and answers user queries using LangChain and LangGraph.
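The retrieval step at the heart of a pipeline like this (embed the query, rank stored chunks by similarity, pass the top matches to an LLM as context) can be sketched without any framework. A minimal illustration using brute-force cosine similarity over toy vectors; the hard-coded embeddings are stand-ins for what an embedding model would produce, and none of this reflects LangChain's or LangGraph's actual APIs:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, store, k=2):
    # store: list of (chunk_text, embedding) pairs.
    # A real vector database replaces this linear scan with ANN indexes.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy "embeddings": in a real pipeline these come from an embedding model.
store = [
    ("Redis is an in-memory store.", [1.0, 0.0, 0.1]),
    ("Qdrant is written in Rust.",   [0.0, 1.0, 0.2]),
    ("Milvus does ANN search.",      [0.1, 0.2, 1.0]),
]
context = retrieve([0.9, 0.1, 0.0], store, k=1)
print(context)  # the single nearest chunk by cosine similarity
```

The vector databases elsewhere in this catalog exist precisely because this linear scan does not scale; they trade exactness for approximate-nearest-neighbor indexes.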
An open-source, cloud-native vector database that combines semantic search with structured filtering for AI applications.
A single-file memory layer for AI agents, replacing complex RAG pipelines with serverless, instant retrieval and long-term memory.
An all-in-one AI framework for semantic search, LLM orchestration, and language model workflows built around an embeddings database.
An open-source Java library that simplifies integrating LLMs into Java applications through a unified API and comprehensive toolbox.
An open-source embedded retrieval library for multimodal AI, offering fast vector search, SQL, and full-text search.
An open-source enterprise data warehouse built in Rust for AI agents, analytics, vector search, and full-text search.
Deep Lake is a multimodal data lake and vector store optimized for AI, enabling scalable data management, retrieval, and training for LLM and deep learning applications.
A semantic cache library for LLM queries that can cut API costs by up to 10x and speed up responses by up to 100x.
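The idea behind a semantic cache is simple: instead of keying on the exact query string, key on the query's embedding, and reuse a cached answer whenever a new query is semantically close enough to a previous one. A minimal sketch under stated assumptions: the `embed` function here is a toy character-frequency stand-in, and the `llm` callable is a placeholder for the expensive API call:

```python
import math

def embed(text):
    # Stand-in embedding: a character-frequency vector. A real semantic
    # cache would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, llm, threshold=0.95):
        self.llm = llm            # fallback: the expensive model call
        self.threshold = threshold
        self.entries = []         # (embedding, answer) pairs

    def ask(self, query):
        q = embed(query)
        # Reuse a cached answer if some stored query is close enough.
        for vec, answer in self.entries:
            if cosine(q, vec) >= self.threshold:
                return answer, True   # cache hit: no model call
        answer = self.llm(query)
        self.entries.append((q, answer))
        return answer, False          # cache miss: model was called

cache = SemanticCache(llm=lambda q: f"answer({q})", threshold=0.95)
print(cache.ask("What is Redis?"))  # miss: calls the model
print(cache.ask("what is redis"))   # hit: same letters, similarity 1.0
```

The threshold is the whole trade-off: too low and unrelated queries get stale answers, too high and near-duplicates still pay for a model call.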
A Rust library for building scalable, modular, and ergonomic LLM-powered applications.
An ultra-performant data transformation framework for AI, with incremental processing and data lineage built-in.
Open-source framework for building full-stack AI-powered applications with unified APIs across JavaScript, Go, and Python.
An open-source chatbot console for creating and managing unlimited custom chatbots powered by GPT and open-source LLMs.
An AI-native database built for LLM applications, offering fast hybrid search across vectors, tensors, and full text.
An open-source graph-vector database built in Rust that unifies application data, vectors, and graphs for AI applications.
A transactional, relational-graph-vector database that uses Datalog for queries, designed as a "hippocampus" for AI.
A self-learning vector database with graph intelligence, local AI, and PostgreSQL integration, built for real-time adaptation.
A free course teaching how to design, train, and deploy a production-ready real-time financial advisor LLM system using RAG and LLMOps.
A tool that gives ChatGPT long-term memory: upload custom knowledge base files (PDF, TXT, EPUB) and ask questions via a React frontend.
An open-source platform for AI engineering with OpenTelemetry-native LLM observability, GPU monitoring, guardrails, evaluations, and prompt management.
Open-source persistent memory backend for AI agent pipelines with REST API, knowledge graph, and autonomous consolidation.
Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.