Freqblog

Posted on • Edited on • Originally published at freqblog.com

Spotify's audio_features API died in 2024. Here's what to use in 2026.

If you maintain anything that ever called GET /v1/audio-features/{id}, you already know how this story starts. On November 27, 2024, Spotify quietly killed audio_features, audio_analysis, recommendations, related artists, and featured playlists in a single developer-blog post. New apps got 403 the same day. Eighteen months later there's still no official replacement, and the February 2026 changes made the rest of the Web API harder to use, not easier.

This post is the honest version of "what now?" — the real options, where they fall short, and a Python migration example.

What actually died

| Endpoint | What it gave you | Status (May 2026) |
| --- | --- | --- |
| `/audio-features/{id}` | BPM, key, mode, danceability, energy, valence, acousticness, instrumentalness, liveness, loudness, speechiness, time_signature | 403 for new apps |
| `/audio-analysis/{id}` | Per-bar / per-beat / per-segment timing | 403 for new apps |
| `/recommendations` | Genre-seed and feature-seed recommendations | 403 for new apps |
| `/artists/{id}/related-artists` | Up to 20 related artists | 403 for new apps |
| `/browse/featured-playlists` | Editorial curation lists | 403 for new apps |
| `/me/top/{tracks\|artists}` | User listening history | |

Apps that already had a quota extension in flight on Nov 27 2024 are still live. Everyone else gets 403. There's no waitlist, no path forward, and no public statement that this will change.

Why Spotify isn't bringing it back

The audio_features endpoint was a thin wrapper around analysis Spotify acquired when they bought The Echo Nest in 2014, plus features derived from the AcousticBrainz dataset (which itself shut down in 2022 for similar "we don't want to host this anymore" reasons). Returning a dozen numbers per track across a 100M-track catalog costs Spotify infrastructure for zero strategic value.

The recommendations endpoint was worse for them — letting any third-party build a Spotify-quality recommender by spamming seed_genres=house&target_energy=0.8 undermined the whole "premium algorithm" pitch.

Stop asking "will Spotify bring this back?" The right question is "what's the smallest dependency I can rebuild this on so I'm never one PR-merge away from being broken again?"

The realistic options

Five categories, ranked by how quickly you can ship the migration:

1. Apple Music API

Apple exposes tempo, key, timeSignature, contentRating, plus mood/genre tags. Free for dev accounts, $99/year to ship. Missing: danceability, energy, valence, acousticness, instrumentalness, liveness — six of the twelve Spotify fields gone. Best if you only need BPM + key from a major catalog and don't mind paying Apple.

2. Build your own with Essentia

Essentia is the open-source MIR toolkit Spotify itself used to derive most of the deprecated values. MusicExtractor takes a 30-second audio clip and returns BPM, key, danceability, average loudness, dynamic complexity, tuning frequency, onset rate. Compute cost only — but you need the audio. Spotify never let you download it; iTunes 30-second previews work but rate-limit hard (~25 RPM). Plan for a multi-week backfill on any catalog over 10k tracks.
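Once Essentia has run, the remaining work is renaming its output into the field names your code already expects. A minimal sketch, assuming the MusicExtractor descriptor names below (`rhythm.bpm`, `tonal.key_edma.key`, etc.) match your Essentia version — check `pool.descriptorNames()` against your install:

```python
def pool_to_spotify(pool: dict) -> dict:
    """Map a flat Essentia MusicExtractor pool to Spotify-style field names.

    Note the scales differ: Essentia's danceability runs roughly 0..3,
    not Spotify's 0..1, so rescale before comparing against old thresholds.
    """
    return {
        "tempo": pool["rhythm.bpm"],
        "key": pool["tonal.key_edma.key"],           # e.g. "C#"
        "mode": pool["tonal.key_edma.scale"],        # "major" / "minor"
        "danceability": pool["rhythm.danceability"],
        "loudness": pool["lowlevel.average_loudness"],
    }

# Real usage (requires `pip install essentia` and a local 30s clip):
#   import essentia.standard as es
#   pool, _frames = es.MusicExtractor()("clip_30s.mp3")
#   features = pool_to_spotify({k: pool[k] for k in pool.descriptorNames()})
```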

3. AcousticBrainz public dump (free, frozen)

The MusicBrainz folks did exactly what Spotify won't: published the entire AcousticBrainz dataset (7.5M tracks, 11 high-level features and ~120 low-level descriptors per track) as a one-time public dump in July 2022 before shutting the live service down. Catch: frozen in time — nothing released after July 2022 has values. Coverage on tracks with a MusicBrainz ID is ~60% of recent commercial releases; without an MBID, nothing. Useful as a baseline layer.
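Reading the dump is plain JSON handling. A hedged sketch of pulling one classifier result from a high-level document, assuming the `highlevel.<feature>.{value, probability}` layout of the published dump (verify against a real file from the archive before relying on it):

```python
def highlevel_danceability(doc: dict):
    """Return (label, confidence) from an AcousticBrainz high-level doc,
    or None when the track has no classifier output."""
    node = doc.get("highlevel", {}).get("danceability")
    if node is None:
        return None
    # "value" is the class label; "probability" is the classifier confidence
    return node["value"], node["probability"]

sample = {"highlevel": {"danceability": {"value": "danceable", "probability": 0.93}}}
```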

4. Musicae API

Built specifically as a Spotify shim — same field names, same value ranges, similar ergonomics. The closest drop-in if minimising migration diff matters more than anything else.

5. FreqBlog Music API (full disclosure: ours)

Different design choice: a catalog first, not a wrap-an-id service. Pass a track-name + artist string, get back BPM, key, Camelot, energy, loudness, danceability, valence, mood, time signature, ISRC, MBID, genre. From £0.17 per 1,000 requests, free tier with no card. We backfill missing tracks via a queue (returns 202 Accepted; the next call has the data). Doesn't cover per-segment analysis or speechiness/instrumentalness.

Migration walkthrough: Spotify → FreqBlog

Most apps that depended on audio_features were doing one of two things:

  1. Look up known tracks — user pastes a Spotify URL, app shows BPM/key/energy
  2. Filter by feature ranges — "give me upbeat tracks above 120 BPM in a major key"

Pattern 1: Single-track lookup

Before:

```python
import requests

def get_features(spotify_id, token):
    r = requests.get(
        f"https://api.spotify.com/v1/audio-features/{spotify_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    r.raise_for_status()
    return r.json()

# >>> get_features("0VjIjW4GlUZAMYd2vXMi3b", token)
# {"danceability": 0.514, "energy": 0.730, "key": 1, "tempo": 171.005, ...}
```

After:

```python
import requests

def get_features(track_name, artist, api_key):
    r = requests.get(
        "https://api.freqblog.com/lookup",
        params={"track": track_name, "artist": artist},
        headers={"X-Api-Key": api_key},
    )
    r.raise_for_status()
    return r.json()

# >>> get_features("Blinding Lights", "The Weeknd", api_key)
# {"audio_features": {"bpm": 171.0, "key": "C#-Minor", "camelot": "12A",
#                     "energy": 0.91, "danceability": 0.85, ...}}
```

The shape of the request changes (name + artist instead of a Spotify ID) because we're a catalog, not a Spotify wrapper. The response shape is similar enough that a five-line adapter handles it.
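Here's one way that adapter could look, reshaping the FreqBlog response (field names taken from the example response above) into the flat dict the old Spotify endpoint returned, so downstream code that reads `features["tempo"]` keeps working:

```python
def to_spotify_shape(freq_response: dict) -> dict:
    """Flatten a FreqBlog /lookup response into the old audio-features shape."""
    af = freq_response["audio_features"]
    return {
        "tempo": af["bpm"],
        "energy": af["energy"],
        "danceability": af["danceability"],
        "key": af["key"],  # note: a string like "C#-Minor", not Spotify's 0-11 int
    }
```

If your code branched on Spotify's integer key values, normalise on the string form (or map back) once at this boundary rather than throughout the codebase.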

Field mapping

For the twelve Spotify fields, here's how they map to the replacement landscape:

| Spotify field | Apple Music | Essentia (build-your-own) | FreqBlog |
| --- | --- | --- | --- |
| tempo | tempo | bpm | bpm |
| key | key | key.key | key + camelot + open_key |
| mode | — (in key) | key.scale | mode |
| time_signature | timeSignature | — | time_signature |
| danceability | — | danceability | danceability |
| energy | — | derived from RMS | energy |
| valence | — | via SVM model | valence (where available) |
| loudness | — | average_loudness | loudness |
| acousticness | — | via SVM model | — |
| instrumentalness | — | via SVM model | — |
| liveness | — | via SVM model | — |
| speechiness | — | via SVM model | — |

Some fields don't exist anywhere except Spotify's frozen-in-time models (speechiness, instrumentalness). Be honest with users about which you actually need vs which were nice-to-haves.
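If you're carrying Spotify's integer key (0 = C … 11 = B) and mode (1 = major, 0 = minor) through a migration, converting them to Camelot notation is a small pure function. This is standard Camelot-wheel arithmetic, not any particular API's behaviour:

```python
def spotify_key_to_camelot(key: int, mode: int) -> str:
    """Convert Spotify's pitch-class key (0=C..11=B) and mode to Camelot.

    Adjacent Camelot numbers sit a perfect fifth (7 semitones) apart, so the
    wheel position is a multiply-by-7 mod 12; the offsets align C major with
    8B and A minor with 8A.
    """
    if not 0 <= key <= 11:
        raise ValueError("key must be 0-11 (Spotify used -1 for 'not detected')")
    offset = 7 if mode == 1 else 4
    number = (7 * key + offset) % 12 + 1
    return f"{number}{'B' if mode == 1 else 'A'}"

# C# minor (key=1, mode=0) -> "12A", matching the Blinding Lights example above
```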

Pattern 2: Feature-range filtering

If you were doing the recommendations-with-target-features dance:

```text
# Spotify (deprecated)
GET /v1/recommendations?seed_genres=house&target_energy=0.8&min_tempo=120

# FreqBlog
GET /charts?genre=house&min_energy=0.8&min_bpm=120&n=20
```
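In Python this is an ordinary query-string request, and you can sanity-check the URL you'll send without touching the network by preparing it first (endpoint and parameter names as shown above, placeholder API key):

```python
import requests

# Build the /charts request without sending it, to inspect the final URL.
req = requests.Request(
    "GET",
    "https://api.freqblog.com/charts",
    params={"genre": "house", "min_energy": 0.8, "min_bpm": 120, "n": 20},
    headers={"X-Api-Key": "YOUR_KEY"},
).prepare()

print(req.url)
# https://api.freqblog.com/charts?genre=house&min_energy=0.8&min_bpm=120&n=20
# When it looks right, send it with: requests.Session().send(req)
```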

Same shape, different host. We also expose a similarity-based recommender (/similar?seed_track_id=...) that doesn't require seed-genre tuning — pass a track you like, get back the 10 acoustically nearest neighbours by cosine similarity over an 18-feature embedding.
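If you'd rather own that part too, nearest-neighbour-by-cosine is only a few lines once you have feature vectors for your catalog. A generic sketch of the technique (not FreqBlog's internal implementation; the vectors and scaling are up to you):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def most_similar(seed_id, catalog, k=10):
    """Rank catalog tracks by similarity to the seed.

    catalog: dict of track_id -> feature vector, e.g. [bpm / 200, energy, ...].
    Scale features to comparable ranges first, or BPM will dominate.
    """
    seed = catalog[seed_id]
    ranked = sorted(
        (t for t in catalog if t != seed_id),
        key=lambda t: cosine(seed, catalog[t]),
        reverse=True,
    )
    return ranked[:k]
```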

What no replacement gives you

Be realistic about the gaps. None of the alternatives — including ours — replicate Spotify's old offering one-for-one:

  • Per-segment analysis with timestamps. Replicating /audio-analysis requires you to run a beat-tracker on the audio yourself. librosa.beat.beat_track or Essentia's RhythmExtractor2013 get you most of the way.
  • Speechiness, instrumentalness, liveness. Spotify trained these on internal labelled data nobody else has. Open-source Essentia models exist but Essentia's own docs flag them as deprecated due to data-quality concerns — reproducing them locally reproduces a known-noisy system.
  • Coverage of obscure tracks. Spotify had every track in their catalog. Every alternative has a coverage gap on long-tail releases. Plan for a "no data yet, queue for analysis" path in your UX.

How to choose

  • Already on Apple Music? Use their API, accept the field reduction.
  • Need a 1:1 Spotify shim? Musicae is closest by design.
  • Have audio files already? Build with Essentia, pay only compute.
  • Need a queryable catalog by name+artist with BPM, key, Camelot, similarity, charts? Try FreqBlog — free tier, no card.
  • Building research/non-commercial work? AcousticBrainz dump is free and large enough.

Originally published at freqblog.com. If you found this useful, the FreqBlog Music API has a free tier — no card required.

Got questions about a specific migration scenario? Drop them in the comments and I'll do my best.
