Arlej
I Built a Desktop App That Reads Your HRV & Sleep Data and Generates a Unique Therapeutic Soundscape Every Day

I’ve been wearing an Amazfit GTR4 every night for a few months, tracking the usual stuff — HRV, resting heart rate, deep sleep %, REM %, stress score, and total sleep time (hours/minutes). Every morning I’d open the app, look at the five or six numbers, think “huh, not great today,” and then do absolutely nothing with them.

I tried every sleep music app out there. Binaural beats, nature sounds, “deep sleep frequencies.” They all do the same thing: you pick a mode and they play a loop. Same audio, every day. But my body is all over the place. Some mornings I’m at 44 ms HRV, 3h51m of sleep, stressed to 23. Other days I’m at 67 ms and feel great. The same audio can’t possibly be optimal for both states.

So I built NEUROVA. It’s a Windows desktop app that takes your actual daily health metrics and builds a completely new, physically‑modelled therapeutic soundscape from scratch. No samples. No DAW. Just code. You can pick how long you want it — from 30 minutes up to 12 hours.

How it works
The app takes six numbers and a date: HRV, resting HR, deep sleep %, REM %, stress score, and total sleep. It figures out how far each one is from your own historical normal (not some population average), then uses those “deficit” signals to drive every creative decision.
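As a sketch of the “distance from your own normal” idea: one straightforward way to compute it (my assumption for illustration, not necessarily NEUROVA’s exact formula) is a per‑metric z‑score against the user’s own history:

```python
from statistics import mean, stdev

def deficits(today: dict, history: list[dict]) -> dict:
    """Z-score each metric against this user's own history.
    Metric names here are illustrative, not the app's schema."""
    out = {}
    for key, value in today.items():
        past = [day[key] for day in history]
        mu, sigma = mean(past), stdev(past)
        out[key] = (value - mu) / sigma if sigma else 0.0
    return out

history = [
    {"hrv": 60, "stress": 30},
    {"hrv": 65, "stress": 25},
    {"hrv": 55, "stress": 35},
]
# A bad day (HRV 44, stress 23) scored against a baseline around HRV 60:
print(deficits({"hrv": 44, "stress": 23}, history))
```

A strongly negative HRV z‑score then reads as a recovery deficit, which can drive the downstream sound decisions.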

It scores 20 possible sound categories — piano, guitar, strings, rain, ocean, fire, heartbeat, Tibetan bowls, Gregorian chant, breath guide, etc. — against your fingerprint. A date‑based rotation ensures that even identical biometrics two days in a row get a different palette.
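A hedged sketch of how scoring plus date rotation might combine (category names come from the post; the scoring scale and hash‑based daily jitter are my own illustration, not the app’s actual algorithm):

```python
import hashlib

CATEGORIES = ["piano", "guitar", "strings", "rain", "ocean", "fire",
              "heartbeat", "bowls", "chant", "breath"]

def palette(scores: dict, date: str, k: int = 4) -> list[str]:
    """Rank categories by biometric fitness score, tie-broken by a
    date-derived hash so identical biometrics on different days
    still yield a different palette."""
    def rotation(name: str) -> int:
        digest = hashlib.sha256(f"{date}:{name}".encode()).digest()
        return digest[0]  # 0..255 of daily jitter per category

    ranked = sorted(
        CATEGORIES,
        key=lambda c: scores.get(c, 0) * 256 + rotation(c),
        reverse=True,
    )
    return ranked[:k]
```

Because the jitter is smaller than one scoring step, the rotation only reorders near‑ties; a category your biometrics strongly call for still wins every day.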

Then the Story Engine takes over. It converts those biometric signals into a five‑act dramatic arc with actual tension and volume curves, climax shapes, and a closing character. Each melodic instrument (piano, guitar, strings, bowls) gets its own independent register journey and presence timeline — they all move differently, like characters in a play.

Phrase generation uses a Markov‑weighted motif library across complexity tiers, from single‑note meditative phrases to expressive jazz‑like runs. The engine can even modulate between D minor and F major pentatonic when recovery is high.
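A toy version of Markov‑weighted phrase generation over the D minor pentatonic scale mentioned above (the transition weights here are invented for illustration — stepwise motion is simply favoured; the app’s motif library is surely richer):

```python
import random

D_MINOR_PENT = [62, 65, 67, 69, 72]  # MIDI: D4 F4 G4 A4 C5

def next_note(current_idx: int, rng: random.Random) -> int:
    """Markov step: weight each scale degree by closeness to the
    current one, so small melodic intervals dominate."""
    weights = [1.0 / (1 + abs(i - current_idx))
               for i in range(len(D_MINOR_PENT))]
    return rng.choices(range(len(D_MINOR_PENT)), weights=weights)[0]

def phrase(length: int, seed: int) -> list[int]:
    """Generate a reproducible phrase from a seed."""
    rng = random.Random(seed)
    idx = rng.randrange(len(D_MINOR_PENT))
    out = []
    for _ in range(length):
        out.append(D_MINOR_PENT[idx])
        idx = next_note(idx, rng)
    return out
```

Swapping in an F major pentatonic note set when recovery is high would be the modulation the post describes.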

All sounds are physically modelled in real time:

Guitar: Karplus‑Strong delay line

Piano: inharmonic partials with stretch tuning and dual‑string detuning

Rain: dual‑band bandpass from pink noise

Ocean: three‑layer model (surge, wash, spray), wave period from resting HR

Fire: brown noise bed + Poisson‑timed crackles + occasional log thuds

Tibetan bowls: measured real‑bowl ratios with long resonant tails

Heartbeat: sub‑bass lub‑dub timed to your HR
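Of the models listed above, Karplus‑Strong is the most approachable. Here is a minimal textbook sketch — note that the app’s version additionally handles stability and aliasing across all frequencies, which this one does not:

```python
from collections import deque
import random

def karplus_strong(freq: float, sr: int = 44100, dur: float = 1.0,
                   damping: float = 0.996, seed: int = 0) -> list[float]:
    """Plucked string: a noise burst recirculated through a delay line
    with a damped two-tap average (a lowpass) in the feedback path."""
    rng = random.Random(seed)
    n = int(sr / freq)  # delay-line length sets the pitch (rounded)
    buf = deque(rng.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(sr * dur)):
        s = buf.popleft()
        out.append(s)
        # Averaging lowpasses the loop: high partials decay faster,
        # just as they do on a real string.
        buf.append(damping * 0.5 * (s + buf[0]))
    return out
```

The rounding of the delay length is exactly where the tuning trouble starts: at high frequencies `n` is small, so the pitch error grows, which is why production implementations add fractional‑delay interpolation.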

I also wrote a custom convolution reverb from scratch — partitioned OLA‑FFT, procedural room impulse responses using image‑source early reflections and an RT60‑matched decay tail. The acoustic space itself is unique per session.
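For readers unfamiliar with FFT convolution, below is a simplified overlap‑add convolver. It is not the zero‑latency partitioned design described above (that also splits the impulse response into partitions of growing size); this sketch keeps the IR whole just to show the block‑FFT mechanics:

```python
import numpy as np

def ola_fft_convolve(signal: np.ndarray, ir: np.ndarray,
                     block: int = 256) -> np.ndarray:
    """Overlap-add FFT convolution: process the dry signal in
    fixed-size blocks, convolve each with the impulse response in
    the frequency domain, and sum the overlapping tails."""
    # FFT size must hold a full block plus the IR tail (linear conv).
    nfft = 1 << (block + len(ir) - 1).bit_length()
    H = np.fft.rfft(ir, nfft)
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        y = np.fft.irfft(np.fft.rfft(chunk, nfft) * H, nfft)
        end = min(start + nfft, len(out))
        out[start:end] += y[:end - start]  # overlap-add the tail
    return out
```

The partitioned variant trades this simplicity for latency: the first IR partition is convolved directly (zero delay) while larger partitions run through progressively bigger FFTs.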

Everything is saved with a seed (biometrics + date) so you can replay the exact session.
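Seed‑based replay can be sketched like this (the field names and hashing scheme are my guesses for illustration, not the app’s actual format):

```python
import hashlib

def session_seed(metrics: dict, date: str) -> int:
    """Derive a deterministic 64-bit seed from the day's metrics and
    date, so seeding every RNG from it replays the session exactly."""
    canonical = date + "|" + "|".join(
        f"{k}={metrics[k]}" for k in sorted(metrics))
    digest = hashlib.sha256(canonical.encode()).digest()
    return int.from_bytes(digest[:8], "big")
```

Sorting the keys matters: dict iteration order must never leak into the seed, or the same biometrics could produce a different session after a code change.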

You can export the generated soundscape.

What was hard
Karplus‑Strong sounds easy on paper. Getting it stable across all frequencies without aliasing was painful. The piano inharmonicity — 7 partials, each with different stretch‑tuning factors — almost made me give up. And writing a zero‑latency partitioned FFT convolver is not something I’d recommend unless you really hate yourself.

The biggest surprise was how much the music style selection changes the experience. I added styles (Ambient, Classical, Jazz, Rock, Electronic, Metal…) because I got bored of everything sounding like a meditation app. Same biometric input, same therapeutic intent — but the Classical version feels like a piano‑led recovery session and the Metal version feels like a rhythmic, driven pulse. Both work, but they feel completely different.

My own data
I’ve been using NEUROVA every day. My HRV has slowly risen, stress dropped, and nights with zero sleep disappeared. Recently I did a weird experiment: I generated a session from my worst biometric day (HRV 44, 3h 51m sleep, stress 23), then deliberately waited two days before listening. The next‑morning data was… not what I expected. I’ll publish that follow‑up soon.

See the app in action

Want to try it? No download needed.
The app isn’t publicly packaged yet, but I’m generating free personalised sessions for anyone who wants one. Just drop your numbers in the comments or on the YouTube video:

- HRV (ms)
- Resting HR (bpm)
- Deep sleep %
- REM %
- Stress score (0‑100)
- Total sleep (hours & minutes)
- Date of measurement
- How long you want it (e.g. 45 min, 2 hours, up to 12 hours)
- Preferred music style (or Auto)

I’ll take your biometric fingerprint, turn it into a soundscape, and publish it on the channel. No cost, no app download, no account.
