An open-source framework that enables AI agents to learn from experience through a persistent learning loop, improving performance over time.
Agentic Context Engine (ACE) is an open-source framework that lets AI agents learn from their execution traces and improve autonomously over time. It addresses the problem of agents repeating mistakes by implementing a persistent learning loop in which strategies are extracted from both successes and failures. As a result, agents become more consistent and efficient without fine-tuning or manual intervention.
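The loop can be pictured with a minimal sketch. Everything here is illustrative rather than ACE's actual API: the `Skillbook` class, the `reflect` helper, and the episode format are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Skillbook:
    """Illustrative store of learned strategies (not ACE's real API)."""
    strategies: list[str] = field(default_factory=list)

    def add(self, strategy: str) -> None:
        # Persist each distinct strategy across episodes.
        if strategy not in self.strategies:
            self.strategies.append(strategy)

def reflect(task: str, trace: str, succeeded: bool) -> str:
    """Stand-in for the LLM-driven reflection step: turn an
    execution trace into a reusable strategy string."""
    outcome = "do" if succeeded else "avoid"
    return f"For tasks like '{task}', {outcome}: {trace}"

def learning_loop(skillbook: Skillbook, episodes) -> Skillbook:
    # Each episode is (task, trace, succeeded); strategies persist,
    # so later runs benefit from earlier successes and failures.
    for task, trace, succeeded in episodes:
        skillbook.add(reflect(task, trace, succeeded))
    return skillbook

book = learning_loop(Skillbook(), [
    ("login", "clicked submit before filling the form", False),
    ("login", "fill all fields, then submit", True),
])
print(book.strategies)  # two learned strategies, one per episode
```

The key property the sketch captures is persistence: the skillbook outlives any single task run, which is what distinguishes this from a stateless prompt.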
Developers and researchers building AI agents, autonomous systems, or LLM-powered applications who need agents that improve with experience. It's particularly valuable for teams working on browser automation, code translation, or multi-step reasoning tasks.
ACE provides a unique self-improvement mechanism for agents through its recursive reflector and Skillbook, which are not found in standard agent frameworks. Developers choose it because it demonstrably improves agent performance (e.g., doubling consistency on benchmarks) with minimal setup and low learning costs, while supporting numerous LLM providers and integration options.
🧠 Make your agents learn from experience. Now available as a hosted solution at kayba.ai
Maintains a collection of learned strategies that evolve with each task, demonstrably doubling agent consistency on benchmarks like Tau2.
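One way such a collection could evolve per task is sketched below. The scoring scheme is a hypothetical illustration of strategy reinforcement, not ACE's actual mechanism:

```python
class StrategyStore:
    """Illustrative per-task strategy store; the +1/-1 scoring is assumed."""

    def __init__(self) -> None:
        self._scores: dict[str, int] = {}

    def record(self, strategy: str, helped: bool) -> None:
        # Reinforce strategies that helped; penalize ones that did not.
        self._scores[strategy] = self._scores.get(strategy, 0) + (1 if helped else -1)

    def top(self, k: int = 3) -> list[str]:
        # Surface the most useful strategies for the next run.
        return sorted(self._scores, key=self._scores.get, reverse=True)[:k]

store = StrategyStore()
store.record("retry with backoff", True)
store.record("retry with backoff", True)
store.record("guess selectors", False)
print(store.top(1))  # ['retry with backoff']
```

Feeding only the top-ranked strategies back into the agent's context is what lets consistency improve run over run without the prompt growing unboundedly.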
Uses Python code executed in a sandbox to programmatically analyze traces, enabling deep pattern extraction beyond simple LLM summarization.
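Such trace analysis might look roughly like the following. This is a simplified stand-in: in ACE the analysis code is generated by an LLM and run in a sandbox, and the `(action, success)` trace schema here is invented for the example.

```python
from collections import Counter

# Hypothetical execution trace: (action, success) pairs.
trace = [
    ("click_login", False),
    ("fill_form", True),
    ("click_login", True),
    ("click_login", False),
]

# Programmatic pattern extraction: per-action failure rates,
# exactly the kind of statistic a one-shot LLM summary can get wrong.
attempts = Counter(action for action, _ in trace)
failures = Counter(action for action, ok in trace if not ok)
failure_rate = {action: failures[action] / attempts[action] for action in attempts}

flaky = [action for action, rate in failure_rate.items() if rate >= 0.5]
print(flaky)  # ['click_login']
```

Because the analysis is ordinary executed code, it stays exact over arbitrarily long traces, which is the advantage the feature claims over pure LLM summarization.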
Integrates with 100+ models via PydanticAI and LiteLLM, including major providers like OpenAI and Anthropic, offering flexibility in model selection.
Shows tangible results such as 49% token reduction in browser automation and low learning costs for complex tasks like code translation.
Requires manual API key configuration and model selection; even the interactive setup can involve multiple steps, unlike more streamlined frameworks.
The reflection process adds additional LLM calls, increasing operational costs and latency, which can be prohibitive for high-volume or budget-sensitive applications.
Best suited for tasks with repetitive patterns; less effective for one-off or highly dynamic scenarios where learned strategies provide minimal benefit.
As a newer framework, it lacks the extensive community, documentation depth, and third-party integrations of established alternatives like LangChain.