Bull-Ish Your Queues With BullMQ


Ever faced a situation where you desperately needed a queue, but traditional options were off the table? My team ran into this exact problem. After a lot of experimentation, we discovered a unique solution using Redis and BullMQ. This talk will share our journey – from the frustration of limited queueing options to building a highly scalable, lightning-fast system. We'll show you how we overcame the obstacles and achieved incredible results with Redis and BullMQ.

This talk has been presented at Node Congress 2025. Check out the latest edition of this JavaScript conference.

FAQ

Luca Del Puppo is a senior software developer at NearForm, a JavaScript and TypeScript enthusiast, and an author of tech posts for TechP. He also enjoys running, hiking, and taking care of his neighbors' cats.

The main topic of Luca Del Puppo's talk was queues and BullMQ, focusing on queue systems in software development.

The team needed a queue system that could handle multiple queues to decouple or decentralize tasks but were initially provided with only one AWS SQS queue.

The team considered using PostgreSQL with the pg-boss npm package, or ElastiCache (Redis) with the BullMQ npm library, for their queue system.

The team chose BullMQ over pg-boss because pg-boss would increase the database load, and their database was already under pressure at times.

Key features of BullMQ include creating queues, handling data with workers, job creation, flow creation, parallelism, and concurrency. It also supports OpenTelemetry and provides a UI dashboard.

The cons of using BullMQ include its dependency on Redis (if Redis shuts down, data can be lost) and the need to learn a new library.

BullMQ can be integrated with OpenTelemetry by setting up OpenTelemetry in the Node.js environment and using the bullmq-otel package to track telemetry.

Using Redis with BullMQ allows for easy scaling: Redis can scale itself if set up correctly, and BullMQ supports running multiple instances or setting different concurrency levels for workers.

BullMQ is an npm library that implements a fast and robust queue system on top of Redis, suitable for building microservices architectures and handling queues efficiently.

Luca Del Puppo
23 min
17 Apr, 2025

Comments

  • Va Da: Ah, queue in Node.js, nice
Video Summary and Transcription
Hello, and welcome to this talk about queues and BullMQ. Today I want to tell you a story of a team working on a client project that needed to handle a simple queue. They approached the platform team for help, but were only given one queue, which wasn't enough. The team had to find a solution using the tools they already had: PostgreSQL and ElastiCache. The first possible solution was pg-boss. If you don't know pg-boss, it is a simple npm package that you can install in your Node application to create a queue system on top of PostgreSQL, exposing an API to create and handle queues inside your application. The only solution left was ElastiCache and Redis, and in this particular scenario, what shines is BullMQ. BullMQ is a simple npm library that implements a fast and robust queue system on top of Redis. It can be used to decouple your code and create a microservices architecture. You can create a flow that depends on different queues and builds its result from each queue's result. Out of the box, BullMQ provides parallelism and concurrency. Additionally, you can set up OpenTelemetry and a UI dashboard for queue monitoring. To create a BullMQ queue, provide a name and a connection to Redis. The code includes a run method that continuously pushes new temperatures into the queue. The consumer uses a worker to handle data and execute queries. Close the connection when closing the Node.js process. Run the publisher to push data and the consumer to receive data. Create multiple publishers or consumers as needed. The second demo shows how to set up OpenTelemetry in your Node.js environment to track what happens inside your queues, using the bullmq-otel package and a BullMQOtel instance. The last demo demonstrates the use of Fastify with BullMQ to create a UI for managing queues.
The last demo showcases a Fastify application serving the UI. Fastify is a simple framework for handling Node.js APIs. Set up Fastify using the Fastify adapter exposed by Bull Board, load the queues into it, and expose the /ui route to serve the dashboard. The UI lets you interact with the queues and view completed items. BullMQ is a straightforward solution for handling queues and can be easily scaled using Redis. It covers most cases where you need a queue and is well documented. However, it relies on Redis, which means data loss if Redis goes down. Additionally, you'll need to learn another library. Thank you for listening!

1. Introduction to Queues

Short description:

Hello, and welcome to this talk about queues and BullMQ. Today I want to tell you a story of a team working on a client project that needed to handle a simple queue. They approached the platform team for help, but were only given one queue, which wasn't enough. The team had to find a solution using the tools they already had, such as PostgreSQL and ElastiCache. The first possible solution was pg-boss.

Hello, and welcome to this talk about queues and BullMQ. First of all, let's start with who I am. I'm Luca Del Puppo, a senior software developer at NearForm and a JavaScript and TypeScript lover. In my free time, I try to run my YouTube channel, though without much success in the last year, and I also love to write tech posts for TechP. I love running and hiking in my beautiful Dolomites, and I love taking care of my neighbors' cats.

By the way, let's start with the topic. Today I want to tell you a story about a team, one of the best teams I've ever seen in my life. This team was working on a client project, and at some point they needed to handle a simple queue, so they had to understand how to handle a queue system in their project. They weren't the owners of the infrastructure, so they went to the platform team to ask for help getting some sort of queue system in their environment. The platform team started to think about the real possibilities; the product ran on AWS, so they decided to give the team an AWS SQS queue, but with one simple problem: just one queue.

And it wasn't enough for the team. Yes, they had asked for a queue, but not just one queue: they needed a queue system that could handle different queues inside the project and help them decouple or decentralize some work. So this is basically the result: the team was there trying to achieve their goal of getting a queue system, but the platform team said, yep, I can give you a queue, but just one. So for some days, the team's nights were a bit of a nightmare.

Every partner thought their teammate was thinking about, I don't know, something else, but in reality every teammate was thinking about how to handle this queue system. So at some point, they started to think about what they already had in the project. There were already two different tools they could use to create a sort of queue system: one was PostgreSQL for the database, and the other was ElastiCache, the Redis implementation inside AWS. They already had these two tools, so they didn't ask the platform team for anything else; the only thing they had to understand was how to handle a queue system on top of one of these two tools. The first possible solution was pg-boss.

2. Using pg-boss and BullMQ

Short description:

If you don't know pg-boss, it is a simple npm package that you can install in your Node application to create a sort of queue system on top of PostgreSQL, exposing an API to create and handle queues inside your application. The only solution left was ElastiCache and Redis, and in this particular scenario, what shines is BullMQ. BullMQ is a simple npm library that implements a fast and robust queue system on top of Redis. It can be used to decouple your code and create a microservices architecture.

If you don't know pg-boss, it is a simple npm package that you can install in your Node application to create a sort of queue system on top of PostgreSQL, exposing an API to create and handle queues inside your application. This tool requires its own DB schema: it's good practice to keep the queue tables in a dedicated schema rather than the common one, to avoid mixing real data with the queue tables. And this means that if you use pg-boss, you increase the load on the database.
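As a rough sketch of what the pg-boss approach would have looked like (the connection string, queue name, and payload here are placeholders, not from the talk):

```javascript
// Sketch of the pg-boss approach, assuming a reachable PostgreSQL
// instance; connection string and queue name are placeholders.
const PgBoss = require('pg-boss');

async function main() {
  const boss = new PgBoss('postgres://user:pass@localhost:5432/app');
  await boss.start(); // creates the pgboss schema and its queue tables

  // On recent pg-boss versions the queue may need to be created first,
  // e.g. with boss.createQueue('email-queue').

  // Producer side: enqueue a job.
  await boss.send('email-queue', { to: 'user@example.com' });

  // Consumer side: handle jobs from the same queue. Depending on the
  // pg-boss version the handler receives one job or a batch of jobs.
  await boss.work('email-queue', async (jobs) => {
    for (const job of [].concat(jobs)) {
      console.log('sending email to', job.data.to);
    }
  });
}

main().catch(console.error);
```

Everything here lives in PostgreSQL tables, which is exactly why the team rejected it: every enqueue and dequeue is extra load on an already busy database.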

And one of the team's problems was that the database was already under pressure at certain moments of the day, so increasing the database load again could become another nightmare for them. So they decided to avoid this solution. The only solution left was ElastiCache and Redis, and in this particular scenario, what shines is BullMQ. BullMQ is a simple npm library that implements a fast and robust queue system on top of Redis. You can use it to decouple your code, or to build a microservices architecture around your application, and the API is really, really straightforward.

Just a quick note: there are two Bull implementations. Bull, without the MQ, is the older one. It's in maintenance mode, and it works very well with current versions of Redis without any problem, but it doesn't use Redis streams. BullMQ is the new, faster one; it's built on Redis streams and, if I remember correctly, works with Redis 6 and above, and it is the current implementation. So if you have an older version of Redis you have to use Bull; otherwise you can use BullMQ. If you want to migrate from one to the other, it's not simple because the API has changed, but it's feasible without any problem. By the way, let's go through the features exposed by BullMQ. BullMQ obviously exposes queues: you can create queues, add elements to them, and do everything you'd expect from a queue. Then, using a worker, you can handle the data inside the queues and dequeue all your data. You can also create jobs using BullMQ and Redis.

3. Creating Flows and Monitoring

Short description:

You can create a flow by depending on different queues and creating the result based on each queue's result. Out of the box, BullMQ provides parallelism and concurrency. Additionally, you can have OpenTelemetry set up and a UI dashboard for queue monitoring. Now let's jump to the code and explore an example.

And the cherry on top: you can also create flows. A flow is a sort of job that depends on different queues, building its result from the result of each queue. Let me explain better what I mean. Imagine you need to create a quotation for a product. The product is built from different pieces, and to get the quotation for each piece you need to call a different API. What you can do is create an object that represents your product, and for each piece have a specific queue that handles that piece's quotation. When all the pieces of your product have been quoted, the last queue, the product queue, gathers the results of the other queues, creates your final quotation, and saves the data, or whatever you want. You can find more about flows in the BullMQ docs.
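The quotation scenario above could be sketched with BullMQ's FlowProducer roughly like this (queue names and piece data are made up for illustration; a local Redis is assumed):

```javascript
// Sketch of the product-quotation flow, assuming a local Redis;
// queue names and piece data are placeholders for illustration.
const { FlowProducer } = require('bullmq');

const connection = { host: 'localhost', port: 6379 };
const flowProducer = new FlowProducer({ connection });

async function quoteProduct() {
  // The parent job runs only after all of its children complete;
  // its processor can read the children's results via
  // job.getChildrenValues().
  await flowProducer.add({
    name: 'quote-product',
    queueName: 'product-quotes',
    children: [
      { name: 'quote-piece', data: { piece: 'frame' }, queueName: 'piece-quotes' },
      { name: 'quote-piece', data: { piece: 'wheel' }, queueName: 'piece-quotes' },
    ],
  });
}

quoteProduct().catch(console.error);
```

A worker on the `piece-quotes` queue would call the per-piece API, and a worker on `product-quotes` would assemble the final quotation from the children's return values.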

Then, what do you get out of the box? Parallelism and concurrency. Obviously, if you want to work with queues, you need both, and both are already implemented. Concurrency is disabled by default; to enable it, just set the concurrency option to the number of jobs you want processed at once. Basically, inside your process you can have multiple workers sharing its memory, and you can create as many as you want; the only limitation is the machine where your application is running. Then, if you want parallelism, you can run your application multiple times, giving you multiple instances of your application with workers that process jobs in parallel across those instances.
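The concurrency setting described above is just a worker option; a minimal sketch (queue name and connection are placeholders):

```javascript
// Concurrency sketch: one worker process handling up to four jobs
// at a time. Queue name and connection details are placeholders.
const { Worker } = require('bullmq');

const worker = new Worker(
  'temperature',
  async (job) => {
    // process job.data here
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 4, // up to 4 jobs in flight inside this one process
  }
);
// For parallelism, run this same process on several machines or
// containers: every instance pulls jobs from the same Redis queue.
```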

Then, two other important pluses. The first: out of the box, you can have OpenTelemetry already set up; it's just a matter of configuring OpenTelemetry with a simple package. And if you want a UI dashboard of what happens inside your queues, you can use a dashboard already created by a package and expose it using Fastify, Express, or whatever you want. But let's jump to the code to show what happens with BullMQ. So, let's move to the code and let me change the branch. Let's go to the first example. This is a simple monorepo, nothing special, with two packages: the publisher and the consumer. The publisher is a simple Node process that imports the queue, a random UUID, the logger, and setTimeout.

4. Creating a BullMQ Queue and Worker

Short description:

To create a BullMQ queue, provide a name and a connection string to Redis. The code includes a run method that continuously pushes new temperatures into the queue. The consumer uses a worker to handle data and execute queries. Close the connection when closing the Node.js process. Run the publisher to push data and the consumer to receive data. Create multiple publishers or consumers as needed.

Now we can create a BullMQ queue using a simple new Queue from BullMQ. The first argument is the name of your queue, and the second part is the connection to Redis, the connection string. In our scenario, this is a sort of fake temperature tracker with a sensor ID. PushTemperature is the method used to push data into the queue: temperature is the name of your event, and the message contains the sensor ID, the value, and the detectedAt timestamp.

What happens in the code, basically, is a run method that, using a while-true loop, pushes a new temperature every 10 milliseconds. And when you want to close the process, the system also closes the connection to Redis. In the consumer, you have a worker; the worker is another class exposed by BullMQ. You can have pino and the DB connection to save the data into the database, nothing special. And here you have the same queue name: the name used here must match the name used when creating the queue.

And this is the worker. The worker basically uses a promise to handle the data. In this scenario, it gets the data and inserts the result into the temperature table with the sensor ID, temperature, and detectedAt, then executes the query, and so on. As you can see here, the concurrency is 1, but you can use whatever number you prefer to have multiple workers inside the same process. And as we already saw in the publisher, we need to close the connection when someone closes the Node.js process.
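The demo repository itself isn't shown on this page, but a minimal sketch of the publisher/consumer pair described above might look like this (the queue name `temperature`, the payload fields, and the 10 ms interval come from the talk; the Redis address is a placeholder, and the DB insert is replaced by a log line):

```javascript
// Minimal publisher/consumer pair, assuming a local Redis.
const { Queue, Worker } = require('bullmq');
const { randomUUID } = require('node:crypto');

const connection = { host: 'localhost', port: 6379 };

// Publisher: push a fake temperature reading every 10 ms.
const queue = new Queue('temperature', { connection });

async function runPublisher() {
  const sensorId = randomUUID();
  while (true) {
    await queue.add('temperature', {
      sensorId,
      value: 15 + Math.random() * 10,
      detectedAt: new Date().toISOString(),
    });
    await new Promise((resolve) => setTimeout(resolve, 10));
  }
}

// Consumer: a worker with the same queue name handles each job.
const worker = new Worker(
  'temperature',
  async (job) => {
    const { sensorId, value, detectedAt } = job.data;
    console.log(sensorId, value, detectedAt); // or insert into the DB
  },
  { connection, concurrency: 1 }
);

// Close the Redis connections when the process shuts down.
process.on('SIGINT', async () => {
  await queue.close();
  await worker.close();
  process.exit(0);
});

runPublisher().catch(console.error);
```

In the talk's monorepo the publisher and consumer are separate packages; they only need to agree on the queue name and the Redis connection.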

So what we can do is run the publisher, and as you can see, it starts to push data into the queue. Then we can also run the consumer, and it starts to receive the data. If you want, you can create many publishers, or drop one and start another consumer, and so on.

5. Setting up OpenTelemetry and Using Fastify

Short description:

The second demo shows how to set up OpenTelemetry in your Node.js environment to track what happens inside your queues. Use the bullmq-otel package and a BullMQOtel instance to track telemetry for your queues. The last demo demonstrates the use of Fastify with BullMQ to create a UI for managing queues.

Pretty simple. Okay, let me close this and jump to the second demo.

So the second demo is pretty simple; it just shows you how to set up OpenTelemetry. If you want to track what happens inside your queues with OpenTelemetry, you need to set it up in your Node.js environment. In this case, I have already set up Jaeger to track everything.

To set up OpenTelemetry in your code, you just need to use bullmq-otel and pass a new BullMQOtel instance that tracks the telemetry of your queue. This applies to both the publisher and the consumer. When you run the publisher and the consumer, you can view the tracking information in the Jaeger UI.
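A minimal sketch of that wiring, assuming the Node OpenTelemetry SDK (with an exporter such as Jaeger) is already initialized elsewhere in the process; the app and queue names are placeholders:

```javascript
// OpenTelemetry sketch: pass a BullMQOtel instance via the `telemetry`
// option on both the Queue (producer spans) and the Worker (consumer
// spans). Assumes the OpenTelemetry Node SDK is configured separately.
const { Queue, Worker } = require('bullmq');
const { BullMQOtel } = require('bullmq-otel');

const connection = { host: 'localhost', port: 6379 };
const telemetry = new BullMQOtel('temperature-app');

const queue = new Queue('temperature', { connection, telemetry });

const worker = new Worker(
  'temperature',
  async (job) => {
    // each job handled here shows up as a span in the Jaeger UI
  },
  { connection, telemetry }
);
```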

Now, let's move on to the last demo. In this demo, we will explore the UI and see what happens. I will enable the publisher again. This package includes a Fastify application for the UI. Fastify is a simple framework for handling Node.js APIs. Here is the configuration to set up Fastify using the Fastify adapter exposed by Bull Board. We need to load all the queues into this adapter and expose the /ui route to serve the UI from the server.

6. Exploring OpenTelemetry and Fastify Integration

Short description:

You can do whatever you want with OpenTelemetry. The last demo showcases a Fastify application serving the UI. Fastify is a simple framework for handling Node.js APIs. Set up Fastify using the Fastify adapter exposed by Bull Board. Expose the /ui route to serve the UI from the server. The UI allows you to interact with the queues and view completed items. BullMQ is a straightforward solution for handling queues and can be easily scaled using Redis.

And if I refresh again, I can now see the tracking for the producer, so the time needed to handle the push inside the publisher, and the time needed to handle the job inside the consumer. And as you can see, you can do whatever you want with your OpenTelemetry data.

Now, let's jump to the last demo. The last demo is pretty simple: we go to the UI and see what happens. Okay, I enable the publisher again. This package has another beautiful Fastify application that serves the UI. If you don't know Fastify, it's a simple framework for handling APIs in Node.

And here is the configuration for setting up Fastify. The Fastify adapter is an adapter exposed by Bull Board to integrate the dashboard with Fastify. We need to load all the queues into this adapter, as you can see here, and then expose the /ui route to serve the UI from the server. If I run npm run start UI and go back to the browser, this is the UI. You can click on the active jobs and the children, go to a specific queue, and see the completed items. As you can see, the number keeps increasing because the publisher and the consumer are still running. You can also see failed jobs, prioritized jobs, and so on.
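The Fastify setup described here can be sketched with the @bull-board packages roughly as follows (the queue name and port are placeholders; a local Redis is assumed):

```javascript
// Bull Board + Fastify sketch: expose a dashboard for a BullMQ queue
// under /ui. Queue name, Redis address, and port are placeholders.
const Fastify = require('fastify');
const { Queue } = require('bullmq');
const { createBullBoard } = require('@bull-board/api');
const { BullMQAdapter } = require('@bull-board/api/bullMQAdapter');
const { FastifyAdapter } = require('@bull-board/fastify');

const queue = new Queue('temperature', {
  connection: { host: 'localhost', port: 6379 },
});

// Load the queues into the Bull Board server adapter.
const serverAdapter = new FastifyAdapter();
createBullBoard({
  queues: [new BullMQAdapter(queue)],
  serverAdapter,
});
serverAdapter.setBasePath('/ui');

// Register the dashboard plugin under the /ui prefix and start Fastify.
const app = Fastify();
app.register(serverAdapter.registerPlugin(), { prefix: '/ui' });
app.listen({ port: 3000 });
```

Opening http://localhost:3000/ui would then show the dashboard with the active, completed, failed, and prioritized jobs per queue.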

And with that, we've completed the demos, so I can jump back to the slides to wrap everything up. Conclusion: what you learned today is that BullMQ is an easy solution for handling a queue system without too much effort, and it's perfect if you cannot have other queue services in your system. I don't want to say it's the best solution ever, but it's a good possible solution. The API is really straightforward. It's also easy to scale: if you set up Redis properly, it scales by itself, and your jobs can scale by creating new instances or configuring different concurrency for your workers.

7. Conclusion and Contact Information

Short description:

For most cases, using a queue is necessary and well-documented. However, it relies on Redis, which means data loss if Redis goes down. Additionally, you'll need to learn another library. Here are the QR codes for the presentation and demo, as well as my contact information. Thank you for listening!

It covers the majority of cases where you need a queue. And it's very well documented: if you need something, you can jump into the documentation and find everything you need.

Then, the cons. Unfortunately, it's based on Redis, so if Redis shuts down, you lose the data; you need to build a system that is resilient to this issue. And it's another library to learn.

Okay, I've completed my talk. This is the QR code for the slides of the presentation, this is the QR code for the demo I showed you, and these are my contacts. If you want to chat with me, feel free to reach out. Thank you again, I hope you enjoyed this talk. Bye-bye.