A CLI and library for evaluating, red-teaming, and comparing LLM prompts, agents, and RAG pipelines using simple declarative configs.
An open-source platform for debugging, evaluating, and monitoring LLM applications, RAG systems, and agentic workflows.
A framework and open-source registry for evaluating large language models (LLMs) and the systems built on them.
An open-source continuous testing platform with an AI assistant, integrating test management, API testing, and team collaboration.
An open-source Python framework to evaluate, test, and monitor ML and LLM systems with 100+ built-in metrics.
Automatically generates table-driven Go test boilerplate from source code, with optional AI-powered test case generation; the table-driven idiom is sketched after this list.
A Node.js end-to-end testing framework with AI-powered features, unified API for multiple browsers, and scenario-driven BDD-style tests.
A Node.js end-to-end testing framework with AI-powered features, unified API for multiple backends, and synchronous test writing.
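For readers unfamiliar with the idiom mentioned above, here is a minimal sketch of the table-driven test shape such a generator typically emits. The `Add` function and the test cases are hypothetical illustrations, not taken from any listed project:

```go
package calc

import "testing"

// Add is a hypothetical function under test.
func Add(a, b int) int { return a + b }

// TestAdd shows the table-driven shape: a slice of named cases
// iterated with t.Run, so each case reports pass/fail separately.
func TestAdd(t *testing.T) {
	tests := []struct {
		name string
		a, b int
		want int
	}{
		{name: "both zero", a: 0, b: 0, want: 0},
		{name: "positive operands", a: 2, b: 3, want: 5},
		{name: "mixed signs", a: -1, b: 1, want: 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Add(tt.a, tt.b); got != tt.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tt.a, tt.b, got, tt.want)
			}
		})
	}
}
```

A generator produces the struct, loop, and `t.Run` scaffolding from a function's signature, leaving the case values (or, with AI assistance, suggested values) for the developer to fill in.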
Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.