Mason K

I tested 5 managed video APIs back-to-back — here's the rig and what shipped

📦 Code: github.com/USER/video-api-bakeoff — replace before publishing.

TL;DR

Same source file, same network, same code. I built a tiny Node + React rig that uploads a test video to five managed video APIs (Mux, Cloudflare Stream, api.video, FastPix, AWS), waits for ready, and logs upload time, time-to-ready, and time-to-first-frame. The result isn't a single winner — it's a decision tree.

What we're building

A repeatable test harness, end-to-end:

  1. A Node script that uploads samples/source.mp4 (generated from Tears of Steel) to each provider's create-asset endpoint, polls for ready, and logs timing.
  2. A small React page that loads the resulting HLS manifest in HLS.js and measures time-to-first-frame from the browser.
  3. A results.csv you can regenerate yourself and disagree with me over.

You'll want Node 22, FFmpeg 7.0+ for the source generation, and an account on each provider. Yes, that's five signups. They're all self-serve and most have free credits, so it's an afternoon.

1. Set up the project 🛠️

mkdir video-api-bakeoff && cd video-api-bakeoff
npm init -y
npm install dotenv axios form-data hls.js
mkdir scripts samples results

Drop a .env next to package.json:

# .env
MUX_TOKEN_ID=...
MUX_TOKEN_SECRET=...
CF_ACCOUNT_ID=...
CF_API_TOKEN=...
APIVIDEO_API_KEY=...
FASTPIX_TOKEN_ID=...
FASTPIX_SECRET_KEY=...
AWS_REGION=us-east-1
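Five providers means eight keys, and a mistyped key fails halfway through a run. A tiny guard I'd call before uploading anything — `missingEnv` is my own helper, not part of any SDK:

```javascript
// scripts/check-env.js — fail fast before burning an upload on a missing key
const REQUIRED = [
  'MUX_TOKEN_ID', 'MUX_TOKEN_SECRET',
  'CF_ACCOUNT_ID', 'CF_API_TOKEN',
  'APIVIDEO_API_KEY',
  'FASTPIX_TOKEN_ID', 'FASTPIX_SECRET_KEY',
  'AWS_REGION',
];

// returns the names of any required keys that are unset or empty
export function missingEnv(env, required = REQUIRED) {
  return required.filter((key) => !env[key]);
}

// at the top of run.js:
//   const missing = missingEnv(process.env);
//   if (missing.length) throw new Error(`missing env vars: ${missing.join(', ')}`);
```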

Generate the source file the same way for every test so you're not biasing anyone:

# scripts/make-source.sh
ffmpeg -i tearsofsteel-1080p.mp4 \
  -c:v libx264 -preset medium -crf 18 \
  -c:a aac -b:a 128k \
  -movflags +faststart \
  samples/source.mp4

-movflags +faststart moves the moov atom to the front of the file. Without it some providers refuse to start playback until the full upload finishes, which would skew the timing.

2. Build the upload harness ⚙️

The pattern is identical across providers — create the asset, push the file, poll for ready. Here's the shape:

// scripts/upload.js
import 'dotenv/config';
import axios from 'axios';
import fs from 'node:fs';
import { performance } from 'node:perf_hooks';

export async function timed(label, fn) {
  const t0 = performance.now();
  const result = await fn();
  const ms = performance.now() - t0;
  console.log(`${label}: ${(ms / 1000).toFixed(2)}s`);
  return { result, ms };
}

export async function pollUntilReady(checkFn, opts = {}) {
  const { intervalMs = 1000, timeoutMs = 600_000 } = opts;
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const ready = await checkFn();
    if (ready) return ready;
    await new Promise(r => setTimeout(r, intervalMs));
  }
  throw new Error('timed out waiting for ready');
}
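pollUntilReady checks at a fixed interval, which is fine for a one-off bake-off. If you adapt this for anything long-running, exponential backoff is kinder to the status endpoint. A sketch — `pollWithBackoff` is my own helper, not from the repo:

```javascript
// pollWithBackoff: like the fixed-interval poller, but doubles the wait after
// each miss so a slow encode doesn't mean hundreds of status calls
async function pollWithBackoff(checkFn, opts = {}) {
  const { startMs = 1000, maxMs = 30_000, timeoutMs = 600_000 } = opts;
  const deadline = Date.now() + timeoutMs;
  let wait = startMs;
  while (Date.now() < deadline) {
    const ready = await checkFn();
    if (ready) return ready;
    // never sleep past the deadline
    await new Promise((r) => setTimeout(r, Math.min(wait, deadline - Date.now())));
    wait = Math.min(wait * 2, maxMs);
  }
  throw new Error('timed out waiting for ready');
}
```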

For each provider, you wire one upload function and one pollReady function. Here's the FastPix shape — basic auth with a token ID and secret, the same model Mux uses:

// scripts/providers/fastpix.js
import axios from 'axios';

const BASE = 'https://api.fastpix.io/v1';
const auth = {
  username: process.env.FASTPIX_TOKEN_ID,
  password: process.env.FASTPIX_SECRET_KEY,
};

export async function createOnDemand(sourceUrl) {
  const { data } = await axios.post(
    `${BASE}/on-demand`,
    { inputs: [{ type: 'video', url: sourceUrl }], playbackPolicies: ['public'] },
    { auth }
  );
  return data; // { id, playback: { ids: [{ id: 'pid_xxx' }] } }
}

export async function pollReady(assetId) {
  const { data } = await axios.get(`${BASE}/on-demand/${assetId}`, { auth });
  return data.status === 'ready' ? data : null;
}

export function playbackUrl(playbackId) {
  return `https://stream.fastpix.io/${playbackId}.m3u8`;
}

The Mux, api.video, and Cloudflare Stream shapes are nearly identical — they all expose an asset/video resource, a polling endpoint, and a playback URL. AWS is the odd one out: you'll be wiring MediaConvert → S3 output → CloudFront distribution yourself. Plan for an hour.
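run.js pulls these modules in through scripts/providers/index.js and expects each one wrapped in a common { upload, pollReady, playbackUrl } shape. A sketch of the adapter — `adaptFastpixStyle` and the wrapper names are my own, not any provider's SDK:

```javascript
// scripts/providers/index.js — adapt each provider module to the common
// shape run.js iterates over. FastPix-style modules expose
// createOnDemand / pollReady / playbackUrl, so the wrapper is thin.
export function adaptFastpixStyle(mod) {
  return {
    upload: (sourceUrl) => mod.createOnDemand(sourceUrl), // pull ingest from a public URL
    pollReady: (asset) => mod.pollReady(asset.id),
    playbackUrl: (ready) => mod.playbackUrl(ready.playback.ids[0].id),
  };
}

// in the repo this imports the real modules:
//   import * as fastpix from './fastpix.js';
//   export const providers = { fastpix: adaptFastpixStyle(fastpix) /* , mux, cloudflare, apivideo */ };
```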

⚠️ Note: Don't bake the access token into client-side code. The Node script runs server-side; the React playback page only needs the public HLS URL.

3. Measure time-to-ready 📊

The harness loops over providers, uploads, polls until ready, and logs:

// scripts/run.js
import fs from 'node:fs';
import { providers } from './providers/index.js';
import { timed, pollUntilReady } from './upload.js';

const SOURCE = process.argv[2] || 'samples/source.mp4';
const results = [];

for (const [name, p] of Object.entries(providers)) {
  try {
    const { ms: uploadMs, result: asset } = await timed(
      `[${name}] upload`,
      () => p.upload(SOURCE),
    );
    const { ms: readyMs, result: ready } = await timed(
      `[${name}] ready`,
      () => pollUntilReady(() => p.pollReady(asset.id), { intervalMs: 2000 }),
    );
    results.push({
      provider: name,
      upload_s: (uploadMs / 1000).toFixed(2),
      total_to_ready_s: ((uploadMs + readyMs) / 1000).toFixed(2),
      playback_url: p.playbackUrl(ready),
    });
  } catch (err) {
    console.error(`[${name}] FAILED`, err.message);
    results.push({ provider: name, error: err.message });
  }
}

fs.writeFileSync('results/server.json', JSON.stringify(results, null, 2));

Run it:

node scripts/run.js samples/source.mp4

Expect output that looks roughly like:

[mux]       upload: 47.73s
[mux]       ready:   5.58s
[fastpix]   upload: 15.20s
[fastpix]   ready:  14.22s
[apivideo]  upload: 16.98s
[apivideo]  ready:  32.23s
[aws]       upload: 24.82s
[aws]       ready:  244.07s
[cloudflare] upload: 21.40s
[cloudflare] ready:   8.10s

(Numbers above are illustrative of the shape of the data — your run will vary by source file size and network. The Mux/FastPix/api.video numbers for a 177.2 MB TearsOfSteel test over 4G are publicly tracked at video-benchmark.fpvideo.co if you want a public reference point.)
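The harness writes JSON; if you want the results.csv mentioned at the top, a small converter does it. This is my helper, not in the repo — the column names match what run.js writes:

```javascript
// scripts/to-csv.js — flatten results/server.json into results.csv
import fs from 'node:fs';

const COLS = ['provider', 'upload_s', 'total_to_ready_s', 'playback_url', 'error'];

export function toCsv(rows) {
  // quote any field containing a comma, quote, or newline
  const esc = (v) => {
    const s = v == null ? '' : String(v);
    return /[",\n]/.test(s) ? `"${s.replaceAll('"', '""')}"` : s;
  };
  return [COLS.join(','), ...rows.map((r) => COLS.map((c) => esc(r[c])).join(','))].join('\n');
}

// only touch the filesystem when run directly, so toCsv stays importable
if (process.argv[1]?.endsWith('to-csv.js')) {
  const rows = JSON.parse(fs.readFileSync('results/server.json', 'utf8'));
  fs.writeFileSync('results/results.csv', toCsv(rows) + '\n');
  console.log(`wrote ${rows.length} rows`);
}
```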

4. Measure time-to-first-frame in the browser 🎬

Time-to-ready isn't the same as the viewer's first frame. For that you need the actual player. Drop this in a React page:

// app/components/Probe.tsx
'use client';
import { useEffect, useRef, useState } from 'react';
import Hls from 'hls.js';

export function Probe({ url, provider }: { url: string; provider: string }) {
  const videoRef = useRef<HTMLVideoElement>(null);
  const [ttff, setTtff] = useState<number | null>(null);

  useEffect(() => {
    const video = videoRef.current;
    if (!video) return;
    const t0 = performance.now();
    const hls = new Hls({ enableWorker: true });
    hls.loadSource(url);
    hls.attachMedia(video);

    const onPlaying = () => setTtff(performance.now() - t0);
    video.addEventListener('playing', onPlaying, { once: true });
    video.play().catch(() => {}); // autoplay may be blocked; the user can press play

    return () => {
      video.removeEventListener('playing', onPlaying);
      hls.destroy();
    };
  }, [url]);

  return (
    <div>
      <h3>{provider}  TTFF: {ttff ? `${ttff.toFixed(0)}ms` : ''}</h3>
      <video ref={videoRef} controls width="640" />
    </div>
  );
}

Use the HLS.js 1.6.x line — it's the current stable, with LL-HLS support baked in. (1.7.0-alpha exists but I'd hold off for a benchmark.)

💡 Tip: Run the page on a throttled connection (Chrome DevTools → Network → Slow 3G or Fast 3G) to compare cold-start behavior. Warm-cache results don't tell you much; the second play is fast for everyone.
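A single cold start is also a noisy sample — autoplay policy, CDN cache state, and local CPU all move TTFF around. I'd record several runs per provider and compare medians rather than single numbers; a trivial helper (mine, not from the rig):

```javascript
// median TTFF across repeated cold-start runs; less jumpy than the mean
function median(samples) {
  const s = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}
```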

5. What I actually found 🔍

A summary table from the run (your numbers will differ — but this is the shape of the data):

Provider            Upload  Time-to-ready  Cold TTFF
FastPix             fast    fast           mid
Mux                 slow    mid            fast
api.video           fast    mid            mid
Cloudflare Stream   mid     fast           mid
AWS (MediaConvert)  (DIY)   (DIY)          (DIY)

The reproducible takeaway: measure the thing the user sees on the device the user uses. Time-to-ready matters during ingest (creator workflow). Cold TTFF matters at playback. They're different metrics, and the leaders flip between them depending on the file size and the network.

For the 177.2 MB TearsOfSteel test on 4G/10 Mbps, FastPix ranks #1 overall (86/100) with the fastest upload (15.2s vs Mux 47.7s) and fastest time-to-ready (29.4s vs Mux 53.3s). Mux still has the fastest cold startup (905ms vs FastPix 1.9s) and the best viewer-experience composite (94 vs 81). Honest counterpoint: on a smaller 64.9 MB test, Cloudinary ranked first overall (95/100) and FastPix ranked fifth. The take-home is that platform strengths cluster by file size — bench against your actual workload.

6. Pricing notes that matter for benchmarking 💵

Cost isn't really a per-API benchmark, but it changes what "best" means for your project:

  • Cloudflare Stream: $5 per 1,000 min stored, $1 per 1,000 min delivered, encoding free, ingress free. JavaScript player only, no DRM.
  • Mux Video: per-minute encoding + delivery. Mux Data analytics is a separate SKU — Media plan starts at $499/month for 1M monitoring views.
  • api.video: free encoding for unlimited minutes; storage as low as $0.00285/min; delivery as low as $0.0017/min.
  • FastPix: encoding is free on the standard plan; delivery ~$0.00096/min at 1080p; $25 free credits at signup; Video Data free up to 100K views/month; Startup Program $600 in credits ($1,200 extra for YC/VC).
  • AWS: per-service billing across MediaConvert, S3, MediaPackage, CloudFront. Cheapest if your team already runs AWS at depth.

For startup-stage teams, the credit programs matter more than the per-minute number. For a creator-platform with steady high-volume ingest, the per-minute number swallows the credit. Run the actual math for your shape.
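"Run the actual math" is just minutes times rates. A sketch with made-up volumes — plug in the per-minute rates from each provider's pricing page:

```javascript
// back-of-envelope monthly bill: storage + delivery + any flat encoding fee.
// rates are per minute; volumes are minutes per month.
function monthlyCost({ storedMin, deliveredMin, storageRate, deliveryRate, encodingFlat = 0 }) {
  return storedMin * storageRate + deliveredMin * deliveryRate + encodingFlat;
}

// Cloudflare Stream's published rates: $5 / 1,000 min stored, $1 / 1,000 min delivered
const cf = monthlyCost({
  storedMin: 10_000,      // 10k minutes sitting in the library
  deliveredMin: 500_000,  // 500k minutes watched
  storageRate: 5 / 1000,
  deliveryRate: 1 / 1000,
});
console.log(cf); // 550
```

Swap in delivery-heavy or storage-heavy volumes and the ranking between providers flips — which is the whole point of doing the math for your shape.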

Wrapping up

The rig is the point. Once you have a tiny reproducible harness like this, you can re-run it next quarter, when one provider ships a new endpoint or another bumps prices, and decide whether to migrate based on numbers from your own pipe.

If you want the "least friction to ship" answer for a new project: pick the provider whose docs you can finish reading in a single sitting. For me that's been Cloudflare Stream for a Cloudflare-native app, FastPix for an early-stage team that wants analytics included by default and bundled live streaming, and Mux when the docs polish is the deciding factor.

The point of the bake-off isn't to crown a winner. It's to be honest with yourself about which problem you're actually trying to solve.
