A fast, reliable Redis-based distributed message queue and batch processing system for Node.js, Python, Elixir, and PHP.
BullMQ is a Redis-based distributed message queue and batch processing library that lets developers manage background jobs, scheduled tasks, and complex workflows across multiple programming languages. It solves a common problem: building scalable, reliable applications that need asynchronous processing and fault-tolerant job execution.
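The producer/worker pattern behind this can be sketched with a minimal in-memory queue. This is illustrative only: `MiniQueue` is a made-up class, not BullMQ's API — in BullMQ, jobs are persisted in Redis via `Queue.add` and consumed by a separate `Worker` process with configurable retries.

```typescript
// Minimal in-memory illustration of the producer/worker job-queue pattern.
// BullMQ persists jobs in Redis and runs workers in separate processes;
// this sketch only models the flow of jobs from producer to worker.
type Job<T> = { id: number; name: string; data: T; attempts: number };

class MiniQueue<T> {
  private jobs: Job<T>[] = [];
  private nextId = 1;

  add(name: string, data: T): Job<T> {
    const job = { id: this.nextId++, name, data, attempts: 0 };
    this.jobs.push(job);
    return job;
  }

  // Drain the queue with a processor, retrying failed jobs up to maxAttempts.
  process(processor: (job: Job<T>) => void, maxAttempts = 3): number[] {
    const completed: number[] = [];
    while (this.jobs.length > 0) {
      const job = this.jobs.shift()!;
      try {
        job.attempts++;
        processor(job);
        completed.push(job.id);
      } catch {
        if (job.attempts < maxAttempts) this.jobs.push(job); // retry later
      }
    }
    return completed;
  }
}

const queue = new MiniQueue<{ to: string }>();
queue.add("sendEmail", { to: "user@example.com" });
queue.add("sendEmail", { to: "admin@example.com" });

let flaky = true;
const completed = queue.process((job) => {
  if (job.id === 2 && flaky) {
    flaky = false;
    throw new Error("transient failure"); // retried on the next pass
  }
});
// completed → [1, 2]: job 2 succeeds on its second attempt
```

The retry-on-failure loop is the essence of fault-tolerant job execution: a transient error re-queues the job instead of losing it.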
Developers and engineers building Node.js, Python, Elixir, or PHP applications that need robust background job processing, task scheduling, or distributed message queuing capabilities.
Developers choose BullMQ for its atomicity, reliability, and extensive feature set—including parent-child job dependencies, rate limiting, and deduplication—backed by Redis for high performance and persistence.
Ensures rock-solid reliability and atomic operations by leveraging Redis as the backend, preventing data corruption in distributed systems.
Supports complex workflows with parent/child relationships, allowing hierarchical task orchestration for intricate processing pipelines.
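In BullMQ these hierarchies are created through the `FlowProducer`, where a parent job runs only after all of its children have completed. The ordering rule can be sketched in memory (`runFlow` is an illustrative helper under that assumption, not library code):

```typescript
// Sketch of parent/child job dependencies: a parent becomes ready only
// after every one of its children has completed (the idea behind BullMQ flows).
type FlowJob = { name: string; children?: FlowJob[] };

// Completes children depth-first, then the parent, recording the order.
function runFlow(job: FlowJob, order: string[] = []): string[] {
  for (const child of job.children ?? []) runFlow(child, order);
  order.push(job.name); // parent runs only after every child has finished
  return order;
}

const order = runFlow({
  name: "assemble-report",
  children: [
    { name: "fetch-sales" },
    { name: "fetch-inventory", children: [{ name: "warm-cache" }] },
  ],
});
// order → ["fetch-sales", "warm-cache", "fetch-inventory", "assemble-report"]
```

Nesting flows this way lets a pipeline fan out into independent subtasks while guaranteeing the aggregating parent never starts early.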
Offers configurable rate limiters and priorities to control job execution frequency and order, preventing system overload in high-throughput scenarios.
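BullMQ configures these through a job's `priority` option (lower numbers mean higher priority) and the worker's `limiter: { max, duration }` option. The combined effect can be sketched as a priority sort split into rate-limit windows (`schedule` is an illustrative helper, not library code):

```typescript
// Sketch of priority scheduling under a fixed-size rate-limit window:
// jobs are ordered by priority, then admitted at most `maxPerWindow` at a time.
type PJob = { name: string; priority: number }; // lower number = higher priority

function schedule(jobs: PJob[], maxPerWindow: number): string[][] {
  // Sort by priority (BullMQ treats 1 as the highest priority).
  const ordered = [...jobs].sort((a, b) => a.priority - b.priority);
  const windows: string[][] = [];
  for (let i = 0; i < ordered.length; i += maxPerWindow) {
    // Each window admits at most `maxPerWindow` jobs; the rest wait.
    windows.push(ordered.slice(i, i + maxPerWindow).map((j) => j.name));
  }
  return windows;
}

const windows = schedule(
  [
    { name: "bulk-export", priority: 10 },
    { name: "password-reset", priority: 1 },
    { name: "newsletter", priority: 5 },
  ],
  2, // at most 2 jobs per rate-limit window
);
// windows → [["password-reset", "newsletter"], ["bulk-export"]]
```

The urgent `password-reset` job jumps the queue, while the rate limit defers the bulk work to a later window instead of overloading downstream services.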
Provides global events for monitoring the job lifecycle (with observables available in BullMQ Pro), enabling proactive debugging and event-driven notifications.
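The event-driven pattern behind this, which BullMQ exposes through its `QueueEvents` class and worker events such as `completed` and `failed`, can be sketched with Node's built-in `EventEmitter`. `LifecycleQueue` is an illustrative class, not the library's API:

```typescript
import { EventEmitter } from "node:events";

// Sketch of job lifecycle events: each state transition is emitted so
// listeners can log, alert, or trigger follow-up work (the pattern behind
// BullMQ's QueueEvents). Illustrative only.
class LifecycleQueue extends EventEmitter {
  run(jobId: string, work: () => void): void {
    this.emit("active", { jobId });
    try {
      work();
      this.emit("completed", { jobId });
    } catch (err) {
      this.emit("failed", { jobId, reason: (err as Error).message });
    }
  }
}

const log: string[] = [];
const q = new LifecycleQueue();
q.on("completed", (p: { jobId: string }) => log.push(`completed:${p.jobId}`));
q.on("failed", (p: { jobId: string; reason: string }) =>
  log.push(`failed:${p.jobId}:${p.reason}`),
);

q.run("job-1", () => {});
q.run("job-2", () => {
  throw new Error("boom");
});
// log → ["completed:job-1", "failed:job-2:boom"]
```

Because listeners are decoupled from the worker, the same events can feed dashboards, alerting, and retry logic without touching processing code.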
Tightly coupled with Redis, requiring additional infrastructure setup and management, which can be a barrier for teams preferring other backends or avoiding Redis costs.
Critical features such as observables and group rate limiting are available only in BullMQ Pro, pushing users toward paid plans for enterprise-grade needs.
Configuring and scaling workers across servers with Redis clusters adds operational overhead compared to simpler, single-server queue solutions.