An open-source LLM function calling framework for building scalable, low-latency AI agents with geo-distributed edge infrastructure.
YoMo is an open-source LLM function calling framework for building scalable, ultra-fast AI agents. It lets developers write type-safe AI functions that communicate over the QUIC protocol for low latency, with a geo-distributed edge architecture that brings AI inference closer to end users.
Developers and teams building AI agents and applications that require low-latency, scalable, and secure AI function calling, particularly those needing geo-distributed deployment for global user bases.
YoMo offers markedly faster AI agent communication through its QUIC-based transport, type-safe development in TypeScript and Go, and a geo-distributed architecture that reduces latency by running AI inference closer to users.
🦖 Serverless AI Agent Framework with Geo-distributed Edge AI Infra.
Uses the QUIC protocol for communication between AI agents and servers, cutting connection-setup latency and avoiding TCP head-of-line blocking compared with traditional HTTP over TCP.
Supports type-safe function calling in TypeScript and Go, catching malformed arguments before runtime and enabling IDE auto-completion, as shown in the weather example with typed argument definitions.
Encrypts every data packet with TLS 1.3 by default (QUIC mandates it), securing AI communications without additional configuration.
Runs on geo-distributed edge infrastructure close to users worldwide, reducing round-trip times and improving responsiveness.
Only supports TypeScript and Go, excluding popular AI languages like Python, which may force teams to rewrite or adapt their existing codebases.
Geo-distributed architecture requires setting up and managing edge servers, increasing operational overhead and cost compared to centralized cloud solutions.
As a newer framework, it lacks the extensive libraries, community support, and third-party integrations available in more mature alternatives like LangChain.