An ETS-based key/value cache for Elixir with row-level isolated writes, TTL support, and modification callbacks.
ConCache is an Elixir library that provides a concurrent, ETS-based key/value cache with row-level isolation for writes and built-in TTL support. It solves the problem of safely managing shared, mutable state across multiple processes in Elixir applications by extending ETS with synchronization, expiration, and callback features.
Elixir developers building applications that require shared, mutable state or caching layers, such as web servers, real-time systems, or distributed services where concurrent access to cached data must be handled safely.
Developers choose ConCache over raw ETS because it adds essential production-ready features like isolated writes to prevent race conditions, flexible TTL management, and telemetry integration, all while maintaining ETS's performance and simplicity.
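A minimal usage sketch, assuming a cache registered under the illustrative name `:my_cache` (the name and TTL values are examples, not defaults):

```elixir
# Start a ConCache process under a supervisor; ttl_check_interval and
# global_ttl configure expiry checking and the default item lifetime.
children = [
  {ConCache,
   name: :my_cache,
   ttl_check_interval: :timer.seconds(1),
   global_ttl: :timer.minutes(5)}
]

Supervisor.start_link(children, strategy: :one_for_one)

# Basic key/value operations.
ConCache.put(:my_cache, :greeting, "hello")
ConCache.get(:my_cache, :greeting)
```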
Ensures synchronized writes per key to prevent race conditions, allowing safe concurrent updates while other keys remain accessible, via the update/3 function.
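A sketch of such an isolated update (the cache name `:my_cache` and key `:counter` are illustrative). The passed function runs while the per-key lock is held, so concurrent increments cannot interleave:

```elixir
# update/3 acquires the row lock for :counter, applies the function to the
# current value, and stores the result returned inside {:ok, new_value}.
ConCache.update(:my_cache, :counter, fn old ->
  {:ok, (old || 0) + 1}
end)
```

Writes to other keys proceed in parallel; only updates to the same key serialize.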
Supports global and per-item expiry with efficient checks that avoid brute-force table scans, and includes touch-on-read options for renewed item lifetimes.
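A sketch of per-item expiry, assuming the same illustrative `:my_cache`. Wrapping a value in `%ConCache.Item{}` overrides the cache-wide `global_ttl`, and `touch/2` renews an item's lifetime on demand:

```elixir
# This item expires after 30 seconds regardless of the cache's global_ttl.
ConCache.put(:my_cache, :session, %ConCache.Item{
  value: "abc123",
  ttl: :timer.seconds(30)
})

# Reset the item's remaining TTL without rewriting its value.
ConCache.touch(:my_cache, :session)
```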
Emits hit and miss events out-of-the-box, enabling easy monitoring and metric collection without custom instrumentation.
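A sketch of consuming those events with the `:telemetry` library; the event names follow ConCache's documented hit/miss events, but the handler id and the metadata fields inspected here are assumptions:

```elixir
# Attach one handler to both hit and miss events emitted by ConCache.
:telemetry.attach_many(
  "concache-stats-logger",
  [[:con_cache, :stats, :hit], [:con_cache, :stats, :miss]],
  fn event_name, _measurements, metadata, _config ->
    # metadata is assumed to identify the emitting cache.
    IO.inspect({event_name, metadata})
  end,
  nil
)
```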
Provides low-latency access by bypassing locks for direct ETS manipulation, useful when isolation isn't required, via the dirty_* functions.
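A sketch of lock-free access (`:my_cache` is illustrative, and `render_page/0` is a hypothetical helper standing in for any expensive computation):

```elixir
# dirty_put/3 writes straight to ETS without acquiring the row lock.
ConCache.dirty_put(:my_cache, :page, "<html>...</html>")

# dirty_get_or_store/3 reads, and on a miss runs the function and stores
# the result, all without per-key synchronization.
ConCache.dirty_get_or_store(:my_cache, :page, fn -> render_page() end)
```

Because no lock is taken, two processes missing on the same key may both run the store function; use the non-dirty variants when that matters.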
Built on ETS, it cannot synchronize cache across multiple nodes, limiting scalability in distributed Elixir deployments.
For ETS bag and duplicate bag types, key functions like update and get_or_store are not supported, reducing flexibility for certain data structures.
Row-level locks can cause acquisition timeouts (default 5 seconds) and add latency, especially in high-concurrency scenarios, as mentioned in the locking section.
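Under stated assumptions, the timeout can be tuned at startup with the `acquire_lock_timeout` option (milliseconds); the values below are illustrative:

```elixir
# Raise the lock-acquisition timeout from the 5-second default to 10 seconds
# for workloads with long-running update functions.
{ConCache,
 name: :my_cache,
 ttl_check_interval: false,
 acquire_lock_timeout: :timer.seconds(10)}
```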