A Go framework for building distributed microservices with built-in RPC, service discovery, data modeling, and AI agent integration via MCP.
Go Micro is a framework for building distributed systems and microservices in Go. It provides essential abstractions like RPC communication, service discovery, load balancing, and data persistence, allowing developers to create scalable and maintainable service architectures. The framework also integrates with the Model Context Protocol (MCP) to automatically expose services as tools for AI agents.
Go developers and teams building distributed systems, microservices architectures, or applications that require seamless service communication, discovery, and persistence. It's also suitable for projects exploring AI agent integration via MCP.
Developers choose Go Micro for its comprehensive out-of-the-box feature set, a pluggable architecture that avoids vendor lock-in, and AI agent integration that turns services into callable tools. The framework hides much of the complexity of distributed systems while keeping components interchangeable.
A Go microservices framework
Every distributed system abstraction is defined as a Go interface, allowing runtime-agnostic integration with any underlying technology, as emphasized in the README's philosophy of 'sane defaults with a pluggable architecture.'
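This interface-first design can be illustrated with a simplified sketch. The `Registry` interface and its in-memory implementation below are illustrative stand-ins, not go-micro's actual types: the point is that service code depends only on the interface, so the backend (multicast DNS, Consul, etc.) can be swapped at construction time.

```go
package main

import "fmt"

// Registry is a simplified sketch of a pluggable service-discovery
// abstraction; go-micro's real interface carries more methods and options.
type Registry interface {
	Register(name, addr string) error
	Lookup(name string) ([]string, error)
}

// memRegistry is an in-memory implementation, standing in for a
// real backend such as multicast DNS.
type memRegistry struct {
	services map[string][]string
}

func newMemRegistry() *memRegistry {
	return &memRegistry{services: make(map[string][]string)}
}

func (r *memRegistry) Register(name, addr string) error {
	r.services[name] = append(r.services[name], addr)
	return nil
}

func (r *memRegistry) Lookup(name string) ([]string, error) {
	addrs, ok := r.services[name]
	if !ok {
		return nil, fmt.Errorf("service %q not found", name)
	}
	return addrs, nil
}

// startService accepts any Registry, so the discovery backend can be
// replaced without touching service code.
func startService(name, addr string, reg Registry) error {
	return reg.Register(name, addr)
}

func main() {
	reg := newMemRegistry()
	if err := startService("greeter", "localhost:8080", reg); err != nil {
		panic(err)
	}
	addrs, _ := reg.Lookup("greeter")
	fmt.Println(addrs) // [localhost:8080]
}
```

Swapping the backend means providing another `Registry` implementation; callers of `startService` never change.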
Automatically exposes services as AI-callable tools via the Model Context Protocol (MCP), with an agent playground and tools registry available instantly, enabling agent-first workflows without extra setup.
Includes out-of-the-box service discovery with multicast DNS, client-side load balancing, synchronous RPC, and asynchronous PubSub messaging, reducing the need for external dependencies in microservices development.
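Client-side load balancing means the client itself picks among the nodes discovery returns, rather than routing through a central proxy. A minimal round-robin selector (illustrative only; go-micro's actual selector API differs) might look like:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin cycles through a fixed node list. In a real setup the
// node list would come from the service registry.
type roundRobin struct {
	nodes []string
	next  uint64
}

// Pick returns the next node; atomic increment keeps it safe for
// concurrent callers.
func (r *roundRobin) Pick() string {
	n := atomic.AddUint64(&r.next, 1)
	return r.nodes[(n-1)%uint64(len(r.nodes))]
}

func main() {
	sel := &roundRobin{nodes: []string{"10.0.0.1:8080", "10.0.0.2:8080"}}
	for i := 0; i < 4; i++ {
		fmt.Println(sel.Pick()) // alternates between the two nodes
	}
}
```

Because selection happens in the client, adding or removing nodes only requires refreshing the list from the registry, with no extra infrastructure.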
Provides a typed data persistence layer with CRUD operations, queries, and support for backends like SQLite and Postgres, accessible via service.Model() for integrated data handling alongside RPC.
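The shape of a typed CRUD layer can be sketched with Go generics. This in-memory `Store` is a deliberately simplified illustration, not go-micro's actual `Model` API or its SQLite/Postgres backends:

```go
package main

import (
	"errors"
	"fmt"
)

// User is a sample record type.
type User struct {
	ID   string
	Name string
}

// Store is an illustrative typed CRUD layer backed by a map; a real
// backend would persist to SQLite, Postgres, etc.
type Store[T any] struct {
	rows map[string]T
}

func NewStore[T any]() *Store[T] {
	return &Store[T]{rows: make(map[string]T)}
}

func (s *Store[T]) Create(id string, v T) { s.rows[id] = v }

func (s *Store[T]) Read(id string) (T, error) {
	v, ok := s.rows[id]
	if !ok {
		var zero T
		return zero, errors.New("not found")
	}
	return v, nil
}

func (s *Store[T]) Update(id string, v T) error {
	if _, ok := s.rows[id]; !ok {
		return errors.New("not found")
	}
	s.rows[id] = v
	return nil
}

func (s *Store[T]) Delete(id string) { delete(s.rows, id) }

func main() {
	users := NewStore[User]()
	users.Create("1", User{ID: "1", Name: "Ada"})
	u, _ := users.Read("1")
	fmt.Println(u.Name) // Ada
}
```

The value of colocating this with RPC is that a handler can read and write records through the same service object it serves requests from.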
Allows running multiple services in a single process as a modular monolith, with isolated state per service, enabling gradual splitting into independent deployments when scaling needs arise.
Deployment targets Linux servers managed with systemd over SSH, with no built-in support for Docker or Kubernetes, which may not suit the cloud-native or containerized environments many teams prefer.
The framework's rich feature set, including service discovery and load balancing, introduces abstraction layers that can be overkill for simple microservices or projects that don't need distributed systems capabilities.
The project documents its reflection usage in a 'Reflection Usage & Philosophy' document, indicating reliance on reflection for some operations, which may cost performance in high-throughput scenarios compared to more static approaches.
While pluggable, the ecosystem for plugins (e.g., alternative service discovery backends) is less extensive than in more established frameworks, potentially limiting integration options without custom development.