A lightweight, zero-dependency task scheduler and rate limiter for Node.js and browsers, with Redis-based clustering support.
Bottleneck is a lightweight, zero-dependency library for rate limiting and scheduling asynchronous tasks in Node.js and browser environments. It solves the problem of controlling the flow of jobs—such as API calls or database operations—to prevent exceeding rate limits, manage resource contention, and ensure application stability under load. It provides fine-grained control over concurrency, timing, and prioritization with support for distributed clustering via Redis.
Developers building Node.js applications or browser-based tools that interact with rate-limited APIs, manage concurrent background jobs, or need to throttle operations to avoid overwhelming external services or internal resources.
Developers choose Bottleneck for its simplicity, reliability, and comprehensive feature set—including clustering, priority queues, and batching—without adding heavy dependencies. Its battle-tested design and Redis-backed distributed rate limiting make it a production-ready solution for scalable applications.
Supports clustering across multiple Node.js instances using Redis for atomic operations, enabling scalable and reliable rate control in distributed environments, as detailed in the Clustering section.
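A configuration sketch for clustered mode (the Redis host, port, and limiter `id` below are placeholders for your own deployment; the matching `ioredis` or `redis` client package must be installed):

```javascript
const Bottleneck = require("bottleneck");

// Limiters created with the same id against the same Redis instance
// enforce one shared limit across all Node.js processes.
const limiter = new Bottleneck({
  id: "outbound-api",            // key namespace shared across instances
  datastore: "ioredis",          // or "redis"
  clientOptions: { host: "127.0.0.1", port: 6379 },  // placeholder connection
  maxConcurrent: 5,
  minTime: 100,
});

// In clustered mode, always handle connection errors explicitly.
limiter.on("error", (err) => console.error("limiter error", err));

// On shutdown, close the Redis connections:
// await limiter.disconnect();
```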
Offers a wide range of options, including minTime, maxConcurrent, reservoir refresh intervals, and priority queues, allowing precise tuning for use cases ranging from API throttling to burst management.
Includes batching, event-driven monitoring, and job lifecycle tracking, providing tools for efficient operation handling and debugging without adding external dependencies.
Enabling distributed rate limiting requires Redis configuration and careful management of client options, which can be a barrier for teams without existing Redis infrastructure or DevOps support.
Retries are implemented through listeners on the 'failed' event, so developers must write their own logic for retry delays and error recovery, which is easy to get wrong if edge cases are not handled carefully.
In clustered mode, job priorities and queue order are maintained per limiter, not globally, leading to potential execution inconsistencies across the cluster, as noted in the Clustering considerations.