A fast, thread-safe in-memory cache for Go designed to handle massive entry counts with minimal garbage collection overhead.
Fastcache is a specialized in-memory caching library for the Go programming language. It solves the problem of efficiently caching a massive number of items in memory without causing significant garbage collection pauses, which is a common challenge in high-performance Go applications. It achieves this through a unique architecture that minimizes pointer counts and memory fragmentation.
Fastcache is aimed at Go developers building high-throughput, memory-sensitive applications such as databases (e.g., VictoriaMetrics), real-time analytics systems, or web services that require fast, thread-safe access to large in-memory datasets.
Developers choose Fastcache for its superior speed and significantly lower memory overhead compared to other Go cache libraries like BigCache. Its design philosophy favors simplicity and raw performance, providing a lean, extensible foundation without the bloat of advanced features that can impact speed.
Fast, thread-safe in-memory cache for a large number of entries in Go. Minimizes GC overhead.
Benchmarks in the README show fastcache outperforming BigCache, standard Go maps, and sync.Map in multi-core scenarios, sustaining a higher number of operations per second.
Its bucket-based architecture with off-heap 64KB chunks drastically reduces pointer count, preventing garbage collection pauses in large caches, as emphasized in the design philosophy.
Thread-safe design allows multiple goroutines to read and write simultaneously on a single cache instance, scaling efficiently on multi-core CPUs.
The API is designed for zero-allocation patterns, making it straightforward to use while maximizing performance, as highlighted in the documentation.
Cache state can be saved to and loaded from files using SaveToFile and LoadFromFile methods, enabling durability across application restarts.
Fastcache lacks automatic time-based eviction; entries are only removed when the cache size limit is reached, requiring manual implementation for TTL, as admitted in the FAQ.
Keys and values must be byte slices, forcing developers to marshal other data types, which adds overhead and complexity, as stated in the limitations.
It omits features like thundering herd protection and eviction callbacks, as the project prioritizes simplicity and speed over functionality, per the FAQ.
For entries over 64KB, a separate API (Cache.SetBig) must be used, which can complicate code and reduce consistency, as noted in the limitations.