Promise.all() looks innocent.
It is one of those JavaScript features most developers learn early and start using everywhere.
Fetch multiple APIs? Use Promise.all().
Insert multiple rows? Use Promise.all().
Upload multiple files? Use Promise.all().
Call GitHub API for many blobs? Use Promise.all().
At first, it feels clean.
```typescript
await Promise.all(items.map(item => processItem(item)));
```
One line. Beautiful. Fast. Modern.
But here is the uncomfortable truth:
Promise.all() is not a performance strategy. It is a concurrency trigger.
And if you use it blindly, it can quietly break your system.
The Problem Is Not Promise.all() Itself
Promise.all() is not bad.
It is useful, powerful, and perfectly fine when the number of promises is small and controlled.
The problem starts when developers use it like this:
```typescript
await Promise.all(users.map(user => sendEmail(user)));
```
Looks harmless.
But what if users.length is:
- 10
- 100
- 1,000
- 50,000
Now the same line behaves very differently.
That one line can suddenly create:
- 1,000 API requests
- 1,000 database queries
- 1,000 file operations
- 1,000 network connections
- 1,000 memory allocations
All at once.
That is where systems start to fail.
Promise.all() Does Not Mean “Run Safely in Parallel”
This is the biggest misunderstanding.
A lot of developers think:
“Promise.all() runs things in parallel.”
Not exactly.
JavaScript does not magically create safe parallel execution for your workload.
What actually happens is:
- all async operations are started immediately
- JavaScript keeps references to all promises
- the runtime waits until all resolve
- if one rejects, Promise.all() rejects immediately
That means Promise.all() does not control load.
It does not limit concurrency.
It does not respect API rate limits.
It does not care about your database pool.
It does not protect your server memory.
It simply starts everything.
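You can verify this with a tiny experiment. All of the work begins the moment the promises are created by .map(), not when Promise.all() is awaited. A minimal sketch; the delay helper and timings are illustrative:

```typescript
const delay = (ms: number) =>
  new Promise<void>(resolve => setTimeout(resolve, ms));

async function task(id: number): Promise<number> {
  console.log(`task ${id} started`); // fires immediately for every task
  await delay(1000);
  return id;
}

// All three "started" logs appear right here, on this line...
const promises = [1, 2, 3].map(task);

// ...because Promise.all() only waits. It does not schedule anything.
await Promise.all(promises);
```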
The Hidden Cost of Starting Everything at Once
Let’s say you are processing 5,000 records.
```typescript
await Promise.all(
  records.map(record => processRecord(record))
);
```
This may look efficient.
But internally, you might be doing:
```typescript
async function processRecord(record: RecordItem) {
  // One database read per record
  const user = await db.user.findUnique({
    where: { id: record.userId }
  });

  // One external API call per record
  const response = await externalApi.send(user);

  // One database write per record
  await db.auditLog.create({
    data: response
  });
}
```
Now multiply that by 5,000.
Suddenly you are not just running 5,000 promises.
You may be creating:
- 5,000 DB reads
- 5,000 external API calls
- 5,000 audit log writes
- thousands of objects in memory
- thousands of open sockets
This is not optimization.
This is a traffic accident.
1. Concurrency Explosions
A concurrency explosion happens when your code starts more async work than the system can safely handle.
The dangerous part is that the code often looks clean.
```typescript
await Promise.all(files.map(uploadFile));
```
But if there are 2,000 files, you just started 2,000 uploads.
A better question is not:
“Can JavaScript run this?”
The better question is:
“Can every system behind this operation handle it?”
Because your code may depend on:
- database pool limits
- third-party API limits
- CPU availability
- memory capacity
- disk I/O
- network stability
- cloud provider throttling
Promise.all() ignores all of that.
2. Rate Limiting Problems
Third-party APIs usually do not like sudden request spikes.
If you call an API 500 times at once, you may hit:
- 429 Too Many Requests
- temporary bans
- request throttling
- degraded responses
- random timeouts
Example:
```typescript
await Promise.all(
  users.map(user => paymentProvider.createCustomer(user))
);
```
This may work in development with 5 users.
Then fail badly in production with 5,000 users.
The code did not change.
The data size did.
That is why Promise.all() bugs often appear only under real traffic.
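One common safeguard is to cap how many requests are in flight at any moment. A minimal sketch using the p-limit package, reusing the paymentProvider.createCustomer call from the example above (the limit of 10 is an assumption; match it to your provider's documented rate limit):

```typescript
import pLimit from "p-limit";

// At most 10 createCustomer calls run concurrently;
// the rest queue up inside the limiter.
const limit = pLimit(10);

const customers = await Promise.all(
  users.map(user => limit(() => paymentProvider.createCustomer(user)))
);
```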
3. Memory Spikes
Promise.all() keeps track of all promises and their results.
If each result is small, that is fine.
But if each operation returns a large object, file buffer, API response, or parsed payload, memory usage can spike quickly.
Example:
```typescript
const results = await Promise.all(
  largeFiles.map(file => readAndParseFile(file))
);
```
If each parsed file is 20 MB and you process 100 files, you may suddenly hold gigabytes of data in memory.
That can lead to:
- slow garbage collection
- process crashes
- container restarts
- server instability
- out-of-memory errors
Sometimes the safer solution is to process data in batches or streams instead of loading everything together.
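For example, a plain sequential loop holds only one parsed payload in memory at a time. A sketch reusing the hypothetical readAndParseFile from above (saveResult is an assumed sink, such as a database write or file output):

```typescript
for (const file of largeFiles) {
  // Only one parsed payload is alive at any point in this loop.
  const parsed = await readAndParseFile(file);
  await saveResult(parsed); // persist, then let the GC reclaim it
}
```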
4. Database Connection Exhaustion
This one is extremely common.
Most databases use connection pools.
For example, your app may have a pool limit of 10, 20, or 50 connections.
Now imagine doing this:
```typescript
await Promise.all(
  orders.map(order =>
    db.order.update({
      where: { id: order.id },
      data: order
    })
  )
);
```
If orders.length is 1,000, your code tries to schedule 1,000 DB operations immediately.
But your database pool may only support 20 active connections.
The rest wait.
Then things start piling up.
You may see:
- connection timeout
- slow queries
- locked rows
- deadlocks
- pool exhaustion
- failed transactions
Again, Promise.all() is not aware of your database limit.
It will happily overload it.
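A simple safeguard is to keep concurrency at or below the pool size. The same p-limit approach works here too (the limit of 20 assumes a 20-connection pool; use your actual pool size):

```typescript
import pLimit from "p-limit";

// Never exceed the connection pool: at most 20 updates in flight.
const limit = pLimit(20);

await Promise.all(
  orders.map(order =>
    limit(() =>
      db.order.update({
        where: { id: order.id },
        data: order,
      })
    )
  )
);
```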
5. API Throttling and Socket Errors
I faced a real version of this while working with GitHub APIs.
Creating blobs through the GitHub API looked simple at first.
The tempting version was:
```typescript
await Promise.all(
  files.map(file => github.createBlob(file))
);
```
Clean, yes. Safe, no.
With many files, this could trigger network instability, throttling, or random failures such as:
- socket hang up
- write EPIPE
- ECONNRESET
- request timeout
- temporary GitHub API failures
The fix was not to “make Promise.all better.”
The fix was to stop launching everything at once.
Instead, the safer approach was:
- limit concurrency
- process files in small batches
- retry only retryable failures
- fail fast for validation errors
- avoid retrying 400 Bad Request
- use exponential backoff
- reduce concurrent blob creation
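Put together, the safer version looked roughly like this. This is a sketch rather than the exact production code; the batch size is illustrative, and retryWithBackoff is the kind of helper sketched in section 6 below:

```typescript
const BATCH_SIZE = 5;

for (let i = 0; i < files.length; i += BATCH_SIZE) {
  const batch = files.slice(i, i + BATCH_SIZE);

  // Only a handful of blob creations run at once,
  // and only transient failures are retried.
  await Promise.all(
    batch.map(file => retryWithBackoff(() => github.createBlob(file)))
  );
}
```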
That changed the mindset completely.
The goal was no longer:
“How do I make this as parallel as possible?”
The goal became:
“How do I make this fast enough without destroying reliability?”
That is the real engineering question.
6. Retries Can Make the Problem Worse
Retries sound like a solution.
But careless retries can multiply the damage.
Imagine this:
```typescript
await Promise.all(
  requests.map(request =>
    retry(() => callApi(request))
  )
);
```
If 500 requests fail due to rate limiting, and each one retries 3 times, you may have created 1,500 more requests.
Now your retry logic is attacking the same system that already asked you to slow down.
Retries need discipline.
Good retry logic should consider:
- which errors are retryable
- how many times to retry
- how long to wait
- whether to use exponential backoff
- whether to add jitter
- whether the API returned 429
- whether the operation is idempotent
Not every error deserves a retry.
```typescript
if (error.status === 400) {
  throw error; // a 400 will never succeed on retry, so fail fast
}
```
Retrying bad requests only wastes time and increases load.
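Here is a minimal sketch of what disciplined retry logic can look like. The error shape (error.status) is an assumption; adapt the retryable check to whatever your HTTP client actually throws:

```typescript
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      // Retry only transient failures: rate limits and server errors.
      const retryable =
        error?.status === 429 || error?.status >= 500;

      if (!retryable || attempt >= maxRetries) {
        throw error; // fail fast on 4xx, and give up after the last retry
      }

      // Exponential backoff plus jitter, so retries do not stampede
      // the same system that just asked us to slow down.
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```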
7. Partial Failures Are Harder Than They Look
Another issue with Promise.all() is failure behavior.
If one promise rejects, Promise.all() rejects.
```typescript
await Promise.all([
  uploadFile(file1),
  uploadFile(file2),
  uploadFile(file3),
]);
```
If file2 fails, the entire Promise.all() rejects.
But what about file1 and file3?
They may have already completed.
Now your system is in a partial success state.
This matters when you are doing things like:
- payment operations
- database writes
- file uploads
- email sending
- inventory updates
- GitHub commits
- external integrations
You need to ask:
“If 7 out of 10 operations succeed, what should happen?”
Should you rollback?
Should you retry only failed items?
Should you show partial success?
Should you store failed records for later?
Promise.all() does not answer those questions.
Your architecture has to.
Promise.allSettled() Is Better for Partial Results
When you care about every result, use Promise.allSettled().
```typescript
const results = await Promise.allSettled(
  files.map(file => uploadFile(file))
);
```
Then separate success and failure.
```typescript
const successful = results.filter(
  result => result.status === "fulfilled"
);

const failed = results.filter(
  result => result.status === "rejected"
);
```
This gives you a complete picture.
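Because Promise.allSettled() preserves input order, you can also map rejected results back to their inputs and retry only those (a small sketch):

```typescript
// Index i in results corresponds to files[i],
// so failed inputs can be collected for a later, slower retry.
const failedFiles = files.filter(
  (_, i) => results[i].status === "rejected"
);
```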
But remember:
Promise.allSettled() solves visibility. It does not solve concurrency.
It still starts everything at once.
The Better Pattern: Limit Concurrency
Instead of launching everything together, process with a concurrency limit.
```typescript
async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  handler: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];

  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const batchResults = await Promise.all(
      batch.map(item => handler(item))
    );
    results.push(...batchResults);
  }

  return results;
}
```
Usage:
```typescript
const results = await processInBatches(
  files,
  5,
  uploadFile
);
```
Now instead of 1,000 uploads at once, you run 5 at a time.
That is slower than unlimited concurrency.
But it is much safer.
And in production, safe usually wins.
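Batching has one drawback: each batch waits for its slowest item before the next batch starts. A worker-pool variant keeps every concurrency slot busy the whole time. A minimal sketch:

```typescript
async function mapWithConcurrency<T, R>(
  items: T[],
  concurrency: number,
  handler: (item: T) => Promise<R>
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0;

  // Each worker pulls the next unclaimed index as soon as it is free,
  // so exactly `concurrency` handlers are in flight at any moment.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const index = next++;
      results[index] = await handler(items[index]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, worker)
  );

  return results;
}
```

Libraries such as p-limit implement the same idea, so you rarely have to hand-roll it.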
Queue Patterns Are Even Better for Heavy Workloads
For large or long-running workloads, batching may still not be enough.
A queue-based approach is often better.
A queue gives you:
- controlled concurrency
- retry policies
- delayed jobs
- observability
- pause/resume behavior
- backpressure
- better recovery
Instead of:
```typescript
await Promise.all(items.map(processItem));
```
You can push jobs into a queue:
```typescript
for (const item of items) {
  await queue.add("process-item", item);
}
```
Then workers process them with controlled concurrency (this example follows the style of a queue library such as BullMQ):
```typescript
const worker = new Worker(
  "process-item",
  async job => {
    await processItem(job.data);
  },
  {
    concurrency: 5,
  }
);
```
That is a much healthier model for serious workloads.
When Promise.all() Is Actually Fine
This article is not saying never use Promise.all().
It is perfectly fine when:
- the number of items is small
- the workload is predictable
- there are no strict rate limits
- memory usage is low
- failures should fail the whole operation
- all operations are independent
Example:
```typescript
const [user, settings, permissions] = await Promise.all([
  getUser(userId),
  getSettings(userId),
  getPermissions(userId),
]);
```
This is a good use case.
The danger starts when the number of promises is dynamic and unbounded.
A Simple Rule I Follow Now
Before using Promise.all(), I ask:
“How many promises can this create in production?”
If the answer is:
- 3
- 5
- 10
Fine.
If the answer is:
- unknown
- hundreds
- thousands
- depends on user input
- depends on database size
Then I stop and redesign.
Because at that point, I do not need raw Promise.all().
I need one of these:
- batching
- concurrency limiting
- queue processing
- streaming
- pagination
- retry strategy
- backpressure handling
- partial failure tracking
The Real Lesson
Promise.all() is not dangerous because it is broken.
It is dangerous because it is too easy.
It makes risky concurrency look elegant.
It hides operational complexity behind a beautiful one-liner.
But production systems do not care how clean your code looks.
They care about:
- load
- limits
- memory
- failure modes
- retries
- recovery
- reliability
The best engineers do not just ask:
“Can I run these together?”
They ask:
“How much concurrency can this system safely handle?”
That is the difference between writing async code and designing resilient systems.
Final Thought
Promise.all() is a tool.
A very useful one.
But it should not be your default answer for every async workload.
Sometimes the fastest code is the code that finishes first in development.
But the best code is the code that survives production.
Use Promise.all() when the work is small and controlled.
Use queues, batches, and concurrency limits when the system matters.
Because Promise.all() is not parallelism.
It is pressure.
And if you do not control that pressure, your system eventually will.
About the Author
I’m Amrish Khan — a full-stack engineer focused on building fast, privacy-conscious, developer-first applications.
I’m currently exploring the future of:
- local-first developer tooling
- browser-native processing
- AI-efficient workflows
- offline-capable applications
- privacy-focused architectures
I’m also building Aruvix — a growing ecosystem of local-first developer tools designed to process data directly in the browser without unnecessary uploads.
You can follow my work and thoughts here:
- Portfolio: amrishkhan.dev
- LinkedIn: linkedin.com/in/amrishkhan