I got tired of spinning up Redis just to run background jobs. So I built bunqueue – a job queue that uses SQLite instead.
## The Problem
Every time I start a new project:
```bash
# Here we go again...
docker run -d redis
npm install bullmq ioredis
```
For small to medium projects, Redis is overkill. It means:
- Another container to manage
- Another service to monitor
- Another thing that can fail at 3 AM
## The Solution
```bash
bun add bunqueue
```
That’s it. No Redis. No Docker. Just SQLite.
```typescript
import { Queue, Worker } from 'bunqueue/client';

// Create a queue
const queue = new Queue('emails');

// Add a job
await queue.add('welcome', {
  to: 'user@example.com'
});

// Process jobs
new Worker('emails', async (job) => {
  await sendEmail(job.data);
  return { sent: true };
});
```
## Why SQLite?
Bun has native SQLite support (bun:sqlite) that’s incredibly fast. With WAL mode enabled:
- 500,000+ ops/sec for queue operations
- 32x faster than BullMQ for common patterns
- Zero network latency – it’s just a file
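To make the "just a file" point concrete, here is a hypothetical sketch of the kind of schema and atomic claim query a SQLite-backed queue can use. This is illustrative only, not bunqueue's actual internals; `UPDATE ... RETURNING` needs SQLite 3.35+ and `unixepoch()` needs 3.38+.

```sql
-- WAL mode: readers no longer block the single writer
PRAGMA journal_mode = WAL;

CREATE TABLE IF NOT EXISTS jobs (
  id       INTEGER PRIMARY KEY AUTOINCREMENT,
  queue    TEXT    NOT NULL,
  name     TEXT    NOT NULL,
  data     TEXT    NOT NULL,               -- JSON payload
  state    TEXT    NOT NULL DEFAULT 'waiting',
  priority INTEGER NOT NULL DEFAULT 0,
  run_at   INTEGER NOT NULL DEFAULT (unixepoch())
);

-- Claim the next due job in one atomic statement:
-- no network round-trip, no distributed locking
UPDATE jobs
SET state = 'active'
WHERE id = (
  SELECT id FROM jobs
  WHERE queue = 'emails' AND state = 'waiting' AND run_at <= unixepoch()
  ORDER BY priority DESC, id ASC
  LIMIT 1
)
RETURNING id, name, data;
```

Because the whole claim is a single statement against a local file, there is no round-trip to amortize, which is where the latency win comes from.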
## Features You Actually Need
bunqueue isn’t a toy. It has everything you need for production:
| Feature | Status |
|---|---|
| Retries with backoff | ✅ |
| Job priorities | ✅ |
| Delayed jobs | ✅ |
| Cron scheduling | ✅ |
| Dead Letter Queue | ✅ |
| Stall detection | ✅ |
| Progress tracking | ✅ |
| S3 backups | ✅ |
| Sandboxed workers | ✅ |
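"Retries with backoff" is worth unpacking: on failure, the job is rescheduled after a delay, either fixed or growing per attempt. Here is a minimal, self-contained sketch of how such a retry schedule can be computed — an illustration of the concept, not bunqueue's verbatim code (the `Backoff` type and `nextRunDelay` are hypothetical names):

```typescript
// Illustrative sketch: how long to wait before re-running a failed job.
// A fixed backoff waits the same delay every time; an exponential
// backoff doubles the delay on each successive attempt.
type Backoff =
  | { type: 'fixed'; delay: number }
  | { type: 'exponential'; delay: number };

function nextRunDelay(backoff: Backoff, attemptsMade: number): number {
  switch (backoff.type) {
    case 'fixed':
      return backoff.delay;
    case 'exponential':
      // attempt 1 -> delay, attempt 2 -> 2*delay, attempt 3 -> 4*delay, ...
      return backoff.delay * 2 ** (attemptsMade - 1);
  }
}

console.log(nextRunDelay({ type: 'fixed', delay: 5000 }, 3));       // 5000
console.log(nextRunDelay({ type: 'exponential', delay: 1000 }, 3)); // 4000
```

Delayed jobs and cron scheduling fall out of the same mechanism: both just set a future "run at" time instead of "now".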
## Real Example: Email Queue
```typescript
import { Queue, Worker } from 'bunqueue/client';

interface EmailJob {
  to: string;
  subject: string;
  template: string;
}

const emailQueue = new Queue<EmailJob>('emails');

// Add with retry options
await emailQueue.add('welcome', {
  to: 'user@example.com',
  subject: 'Welcome!',
  template: 'welcome'
}, {
  attempts: 3,   // Retry 3 times
  backoff: 5000, // Wait 5s between retries
  priority: 10,  // Higher = processed first
});

// Worker with concurrency
const worker = new Worker<EmailJob>('emails', async (job) => {
  await job.updateProgress(10, 'Loading template');
  const html = await renderTemplate(job.data.template);

  await job.updateProgress(50, 'Sending');
  await sendEmail({
    to: job.data.to,
    subject: job.data.subject,
    html
  });

  await job.updateProgress(100, 'Done');
  return { sent: true };
}, {
  concurrency: 5 // Process 5 emails in parallel
});

worker.on('completed', (job, result) => {
  console.log(`✅ Email sent to ${job.data.to}`);
});

worker.on('failed', (job, error) => {
  console.error(`❌ Failed: ${job.data.to}`, error.message);
});
```
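A note on `priority: 10` above: higher numbers are processed first, and jobs with equal priority stay in insertion (FIFO) order. That ordering rule can be sketched in plain TypeScript — a hypothetical illustration, not bunqueue's internal code:

```typescript
// Hypothetical sketch of queue ordering: higher priority first,
// then insertion order (FIFO) as the tiebreaker.
interface PendingJob {
  id: number;       // monotonically increasing = insertion order
  priority: number; // higher = processed first
}

function inProcessingOrder(jobs: PendingJob[]): PendingJob[] {
  return [...jobs].sort((a, b) => b.priority - a.priority || a.id - b.id);
}

const order = inProcessingOrder([
  { id: 1, priority: 0 },
  { id: 2, priority: 10 },
  { id: 3, priority: 10 },
]);
console.log(order.map((j) => j.id)); // [2, 3, 1]
```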
## CPU-Intensive? Use Sandboxed Workers
For heavy tasks that might crash or leak memory:
```typescript
import { SandboxedWorker } from 'bunqueue/client';

const worker = new SandboxedWorker('video-processing', {
  processor: './video-processor.ts',
  concurrency: 4,
  timeout: 300000, // 5 min timeout
  maxMemory: 512,  // MB per worker
  maxRestarts: 10, // Auto-restart on crash
});

worker.start();
```
Each job runs in an isolated Bun subprocess. If it crashes, only that worker restarts.
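The `maxRestarts` idea boils down to a supervisor with a restart budget: rerun the crashed worker until the budget is exhausted, then give up. A hypothetical, simplified sketch of that pattern (synchronous for brevity; not bunqueue's source):

```typescript
// Hypothetical supervisor sketch: rerun a task that may crash,
// up to a fixed restart budget (the maxRestarts idea).
function supervise<T>(run: () => T, maxRestarts: number): T {
  let restarts = 0;
  for (;;) {
    try {
      return run();
    } catch (err) {
      if (restarts >= maxRestarts) throw err; // budget exhausted: surface the error
      restarts++; // restart only this worker; siblings are unaffected
    }
  }
}

// Demo: a task that crashes twice, then succeeds on the third run.
let calls = 0;
const result = supervise(() => {
  if (++calls < 3) throw new Error('crash (simulated)');
  return 'done';
}, 10);
console.log(result, calls); // done 3
```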
## BullMQ Migration
Already using BullMQ? The API is compatible:
```diff
- import { Queue, Worker } from 'bullmq';
+ import { Queue, Worker } from 'bunqueue/client';

  // Your code stays the same
  const queue = new Queue('my-queue');
```
## When to Use bunqueue
✅ Use bunqueue when:
- Single server or small cluster
- You want simplicity
- You don’t want to manage Redis
- You need fast local queues
❌ Stick with BullMQ/Redis when:
- Multi-region distributed systems
- You already have Redis infrastructure
- You need pub/sub beyond job queues
## Get Started
```bash
bun add bunqueue
```
```typescript
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('tasks');
await queue.add('hello', { message: 'world' });

new Worker('tasks', async (job) => {
  console.log(job.data.message);
});
```
Built with Bun. Because sometimes the simple solution is the right one.
