A high-performance Redis-backed cache client for Ruby, fully compliant with ActiveSupport caching.
Readthis is a Redis-backed cache client for Ruby that serves as a drop-in replacement for ActiveSupport-compliant caches. It provides high-performance caching with compression, connection fault tolerance, and flexible serialization, giving Ruby applications a fast and reliable caching layer.
Ruby developers, particularly those using Rails or other frameworks that rely on ActiveSupport caching, who need a performant and simple Redis-based caching solution.
Developers choose Readthis for its emphasis on performance, simplicity, and ActiveSupport compliance; it offers a streamlined alternative to the built-in cache stores, with support for advanced features like compression and custom serialization.
:newspaper: Pooled active support compliant caching with redis
Fully compatible with ActiveSupport caching APIs, allowing seamless integration into Rails apps without code changes, as it mimics standard cache stores.
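As a sketch of that drop-in usage, a Rails app can point its cache store at Readthis in an environment file. The option names follow the Readthis README; the URL, namespace, and expiry values here are illustrative:

```ruby
# config/environments/production.rb -- illustrative values
config.cache_store = :readthis_store, {
  expires_in: 2.weeks.to_i,            # default TTL for all entries
  namespace: 'cache',                  # key prefix in Redis
  redis: { url: ENV.fetch('REDIS_URL'), driver: :hiredis }
}
```

Outside Rails, the store can be instantiated directly with `Readthis::Cache.new` and used through the familiar `fetch`/`read`/`write` methods.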
Supports the hiredis driver and efficient serialization options, with documented benchmarks showing speed advantages over alternatives like Memcached.
Automatically compresses large values to reduce memory usage and offers optional connection fault tolerance to handle Redis outages, keeping apps running.
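Both behaviors are opt-in. A minimal configuration sketch, assuming the options documented in the Readthis README (the threshold value is illustrative):

```ruby
require 'readthis'

# Compress values larger than 8 KB before writing them to Redis
cache = Readthis::Cache.new(
  compress: true,
  compression_threshold: 8 * 1024
)

# Treat Redis connection errors as cache misses instead of raising,
# so a Redis outage degrades performance rather than breaking the app
Readthis.fault_tolerant = true
```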
Allows multiple serializers including Marshal, JSON, and Passthrough, with support for custom serializers like Oj, though configuration requires careful ordering.
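For example, registering Oj as a custom serializer looks roughly like this (per the gem's documented serializer API; Oj must be available in the bundle):

```ruby
require 'oj'
require 'readthis'

# Register Oj alongside the built-in Marshal, JSON, and Passthrough serializers.
# Order matters: a serializer's position is recorded with each cached entry,
# so new serializers must always be appended, never inserted or reordered.
Readthis.serializers << Oj

# Freeze the list to guard against accidental changes at runtime
Readthis.serializers.freeze!

# Use Oj for this cache instance
cache = Readthis::Cache.new(marshal: Oj)
```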
Does not support some standard ActiveSupport cache methods, such as cleanup, mute, and silence!, which can break workflows that rely on them for maintenance or logging.
Explicitly maintained for Rails versions prior to 5.2 with no new features added, making it a legacy choice that lacks updates for modern Rails ecosystems.
Serializers must be added in a specific order and frozen to avoid runtime errors, adding setup overhead and potential for misconfiguration in dynamic environments.
Enabling TTL refresh on reads adds write operations to all cache hits, introducing latency overhead that may negate performance benefits for frequently accessed keys.
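The trade-off is explicit in the configuration: enabling `refresh` (assuming the option as documented in the README) means every cache hit also re-sets the key's TTL, which costs an extra write per read:

```ruby
require 'readthis'

# Every read of a key resets its expiration back to the full hour,
# keeping hot keys alive at the cost of an extra write on each hit
cache = Readthis::Cache.new(
  expires_in: 3600,   # seconds; 1 hour
  refresh: true
)
```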