A leaderless, distributed SQLite replication system with MySQL wire compatibility for edge and read-heavy scenarios.
Marmot is a distributed SQLite replication system that provides a MySQL wire-compatible interface, allowing applications to use SQLite as a replicated database. It solves the problem of scaling SQLite for read-heavy edge deployments by enabling leaderless, multi-node clusters with automatic data synchronization and conflict resolution.
Developers and organizations needing a lightweight, distributed database for edge computing, multi-region WordPress deployments, or read-heavy scenarios where low-latency local reads are critical.
Marmot offers a unique combination of MySQL compatibility, leaderless architecture, and direct SQLite file access, reducing operational overhead compared to traditional active-active MySQL setups or leader-based SQLite replication solutions.
Uses a gossip-based protocol that allows any node to accept writes, eliminating single points of failure and manual failover, as highlighted in the architecture comparison with rqlite/dqlite.
Provides full MySQL protocol support, enabling existing clients, ORMs, and applications like WordPress to work without modification; compatibility has been demonstrated for constructs such as NOW() and ON DUPLICATE KEY UPDATE.
Clients can read the local SQLite file directly for sub-millisecond latency reads, which is ideal for edge computing scenarios where performance is critical, as mentioned in the edge deployment patterns.
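Because each node keeps an ordinary SQLite file on disk, a co-located process can bypass the wire protocol and query that file with any SQLite client. A minimal self-contained sketch in Python (the file name `app.db` and the `posts` table are placeholders, not Marmot defaults; here we create the file ourselves to stand in for the replicated database):

```python
import sqlite3

# Stand-in for the locally replicated SQLite file ("app.db" is hypothetical).
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT OR REPLACE INTO posts VALUES (1, 'hello')")
conn.commit()
conn.close()

# Direct, read-only access to the file: no network round trip, so reads
# stay at local-disk latency. "mode=ro" avoids contending with the writer.
ro = sqlite3.connect("file:app.db?mode=ro", uri=True)
rows = ro.execute("SELECT id, title FROM posts").fetchall()
print(rows)  # → [(1, 'hello')]
ro.close()
```

Opening the file read-only is the safe pattern here, since the replication process owns the write path.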
Offers configurable write consistency (ONE, QUORUM, ALL) allowing trade-offs between latency and durability, with QUORUM as the default balanced option for distributed transactions.
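The three levels trade acknowledgement count against latency: ONE confirms after a single node, QUORUM after a strict majority, ALL after every node. The acknowledgement arithmetic can be sketched as follows (an illustration of the consistency-level semantics, not Marmot's actual implementation):

```python
def required_acks(consistency: str, cluster_size: int) -> int:
    """Acks needed before a write is confirmed, per consistency level.

    Illustrative only; Marmot's internal accounting may differ.
    """
    if consistency == "ONE":
        return 1                        # fastest, least durable
    if consistency == "QUORUM":
        return cluster_size // 2 + 1    # strict majority (the default)
    if consistency == "ALL":
        return cluster_size             # slowest, most durable
    raise ValueError(f"unknown consistency level: {consistency}")

print(required_acks("QUORUM", 5))  # → 3
```

QUORUM keeps working through minority node failures, which is why it is the balanced default.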
Supports Debezium-compatible Change Data Capture for publishing events to Kafka or NATS, enabling real-time data pipelines and integrations without custom code.
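Debezium-compatible events follow an envelope convention with `before`/`after` row images and an `op` code. A consumer sketch, assuming that envelope shape (the exact payload Marmot emits, and the `posts` table, are assumptions here):

```python
import json

# A Debezium-style change event as it might arrive from Kafka or NATS.
# Field names follow Debezium's envelope convention (op/before/after/source);
# treat the precise shape as an assumption, not Marmot's documented format.
raw = json.dumps({
    "op": "u",  # c = create, u = update, d = delete
    "before": {"id": 1, "title": "hello"},
    "after": {"id": 1, "title": "hello, edge"},
    "source": {"table": "posts"},
})

event = json.loads(raw)
if event["op"] == "u":
    # Diff the row images to find which columns actually changed.
    changed = {
        col: (event["before"].get(col), new)
        for col, new in event["after"].items()
        if event["before"].get(col) != new
    }
    print(changed)  # → {'title': ('hello', 'hello, edge')}
```

Because the envelope is Debezium-compatible, existing sink connectors and stream processors can consume these events without custom adapters.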
Marmot is eventually consistent with Last-Write-Wins conflict resolution, which may not satisfy applications that require strict serializable transactions across all nodes, as noted in the limitations section.
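Last-Write-Wins means that when two nodes accept conflicting writes to the same row, the version with the newer timestamp survives and the other is silently discarded. A toy illustration of why this is weaker than serializability (the tie-breaking rule shown is an assumption, not Marmot's documented behavior):

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: str
    ts: int        # logical or wall-clock timestamp of the write
    node_id: str   # tie-breaker for equal timestamps (assumed rule)

def lww_merge(a: Version, b: Version) -> Version:
    """Keep the version with the newer timestamp; break ties by node id."""
    return max(a, b, key=lambda v: (v.ts, v.node_id))

# Two nodes concurrently write the same row:
n1 = Version("draft saved", ts=100, node_id="node-a")
n2 = Version("draft deleted", ts=101, node_id="node-b")
winner = lww_merge(n1, n2)
print(winner.value)  # → draft deleted  (node-a's write is lost without error)
```

The losing write disappears without any error surfacing to the client, which is exactly the behavior strict serializable applications cannot tolerate.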
The system replicates all tables within a database; selective replication of specific tables is not supported, which can be inefficient for databases with mixed or large unused tables.
While DDL operations are protected by cluster-wide locks with a 30-second lease, concurrent schema changes on the same database from multiple nodes should be avoided, requiring careful coordination in multi-node setups.
Anti-entropy and garbage collection settings need careful tuning for large clusters or high write throughput, with validated rules like gc_min_retention_hours >= delta_sync_threshold_seconds, adding complexity.
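The retention rule quoted above (garbage collection must keep changes at least as long as the delta-sync window, after converting hours to seconds) can be checked mechanically before deploying a config. A hedged sketch, with parameter names taken from the rule as quoted and semantics assumed:

```python
def validate_gc_settings(gc_min_retention_hours: float,
                         delta_sync_threshold_seconds: float) -> None:
    """Reject configs where GC could delete change records that a lagging
    node still needs for delta sync (rule as quoted; semantics assumed)."""
    retention_seconds = gc_min_retention_hours * 3600
    if retention_seconds < delta_sync_threshold_seconds:
        raise ValueError(
            "gc_min_retention_hours must cover delta_sync_threshold_seconds: "
            f"{retention_seconds:.0f}s < {delta_sync_threshold_seconds:.0f}s"
        )

# 1 hour of retention comfortably covers a 600-second delta-sync window.
validate_gc_settings(gc_min_retention_hours=1, delta_sync_threshold_seconds=600)
```

Running a check like this in CI catches the misconfiguration before a lagging node is forced into a full resync.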