I shipped an international dating app with real-time message translation in 50+ languages — here's the stack

A while ago, I watched a friend trying to date someone in Tokyo. He spoke no Japanese. She spoke no French. They were doing the DeepL copy-paste dance every night, switching tabs, breaking the flow. He told me "I don't even know if she's funny anymore — the translation kills the timing."

I'm a solo dev. I figured: real-time translation inside the chat shouldn't be a "premium" gimmick — it should be the product. So I built Lovenly (lovenly.online) — an international dating site where the messaging layer is the translation layer.

This post is the honest stack write-up. The app is brand new in production, so there are no big traffic numbers to wave around — just the technical choices and what bit me.


The stack

| Layer | Choice | Why |
| --- | --- | --- |
| Framework | Next.js 16 (App Router, RSC) | Server components let me query Supabase server-side without an API layer. Streaming for /messages. |
| DB + Auth + Storage + Realtime | Supabase (Postgres) | One vendor for the boring parts. RLS policies are non-negotiable for a dating app. |
| Styling | Tailwind v4 | Fast. No CSS-in-JS overhead in RSC. |
| Translation | Anthropic Claude API (Sonnet) | Better idiomatic translations than Google/DeepL for short, casual messages with slang and emoji — especially FR↔JA, EN↔TH, RU↔ZH. |
| Payments | Stripe | Subscriptions + tax handled. |
| Hosting | Vercel (Hobby for now) | Fine until traffic explodes. |
| Push | Web Push + VAPID + service worker | Native push on iOS PWA, Android Chrome, and desktop without paying for OneSignal. |

The whole thing is strict TypeScript, with tsc --noEmit running on a commit hook.


Architectural decisions that mattered

1. A translation cache table, keyed by (source_lang, target_lang, body_hash)

Translation costs are real. A single match between FR and JA can rack up hundreds of messages, so I never translate the same message twice — even across different match conversations. A bio_translations table caches every translation forever. The cache earns more for bios than for chat (bios get re-viewed by many people; chat messages are mostly unique), but the marginal cost of the table is nothing.
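A minimal sketch of that cache key, assuming SHA-256 over whitespace-normalized text. The function name and the normalization choices are mine for illustration, not the production code:

```typescript
import { createHash } from "node:crypto";

// Deterministic cache key for a translation row. Collapsing runs of
// whitespace means "Bonjour  toi" and " Bonjour toi " share one cache
// entry; case is preserved because it can change the translation.
export function translationCacheKey(
  sourceLang: string,
  targetLang: string,
  body: string
): { source_lang: string; target_lang: string; body_hash: string } {
  const normalized = body.trim().replace(/\s+/g, " ");
  const body_hash = createHash("sha256")
    .update(normalized, "utf8")
    .digest("hex");
  return { source_lang: sourceLang, target_lang: targetLang, body_hash };
}
```

With a unique index on (source_lang, target_lang, body_hash), a lookup before every API call makes repeat translations free.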

2. Live translation is server-rendered, not streamed token-by-token

I tried streaming. It looked great in dev. In production, the perceived latency was worse: users saw words appearing piecemeal, which broke the "this person sent a message" mental model. Final answer: send → server translates fully → render once. It feels faster even though the total time is the same.
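The send path under that decision can be sketched like this. The Map stands in for the Postgres cache table and the injected translator for the Claude call; both are hypothetical stand-ins, not the real app code:

```typescript
type Translate = (body: string, from: string, to: string) => Promise<string>;

// Translate fully, render once: the server awaits the *complete*
// translation (with a cache in front) before anything reaches the client.
export async function translateOnce(
  cache: Map<string, string>,
  translate: Translate,
  body: string,
  from: string,
  to: string
): Promise<string> {
  const key = `${from}:${to}:${body}`;
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // cache hit: no API call at all
  const full = await translate(body, from, to); // no token streaming
  cache.set(key, full);
  return full; // the client renders this in one shot
}
```

The same shape drops straight into a Next.js server action, which is where the real translation call lives.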

3. Onboarding is brutal on purpose

Five steps: account → preferences → photos (min 1) → bio → publish. Email confirmation plus at least one moderated photo is required before any like or message can be sent or received. This kills a chunk of signups at the first photo step — and that's intentional. It also kills bots almost entirely.
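That gate boils down to a tiny predicate. This is an illustrative sketch with made-up field names; in the real app the same rule would also be enforced server-side, not just in the client:

```typescript
// Hypothetical shape of the gating state; not the actual schema.
interface ProfileGate {
  emailConfirmed: boolean;
  moderatedPhotoCount: number; // photos that passed moderation
}

// A profile may like or message only once both conditions hold.
export function canInteract(p: ProfileGate): boolean {
  return p.emailConfirmed && p.moderatedPhotoCount >= 1;
}
```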

For beta, I have an is_seed_bot flag with ~25 demo profiles per country to make the discovery feed not empty. They auto-decline matches, never message, never appear in "online now". Demo only, clearly marked in the schema.

4. Anti-scam pipeline lives in the api_send_message Postgres RPC

Before a row hits messages:

  • URLhaus check on every link
  • Regex scan for IBAN, BTC/ETH/USDT wallets, credit card numbers → blocked
  • Phone / email / Telegram / WhatsApp → flagged for client-side confirmation
  • ~50 scam keywords (crypto, "wallet", "investment", "send me money") → flagged
  • Anthropic Claude toxicity assessment, score 0–10. ≥8 blocks. ≥5 flags.
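The regex tier above can be sketched like this. These patterns are rough approximations for illustration (the keyword list is a small subset), not the production rules:

```typescript
// Patterns that block outright: payment rails and crypto wallets.
const BLOCK_PATTERNS: RegExp[] = [
  /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/,      // IBAN-like
  /\b(bc1|[13])[a-zA-Z0-9]{25,39}\b/,      // BTC-address-like
  /\b0x[a-fA-F0-9]{40}\b/,                 // ETH/USDT (ERC-20) address
  /\b(?:\d[ -]?){13,16}\b/,                // credit-card-like digit run
];

// Patterns that flag for confirmation / human review.
const FLAG_PATTERNS: RegExp[] = [
  /[\w.+-]+@[\w-]+\.[\w.]+/,               // email address
  /\b\+?\d{7,15}\b/,                       // phone-number-like
  /\b(telegram|whatsapp|t\.me)\b/i,        // off-platform contact
  /\b(crypto|wallet|investment|send me money)\b/i, // scam keywords (subset)
];

export function scanMessage(body: string): "block" | "flag" | "ok" {
  if (BLOCK_PATTERNS.some((re) => re.test(body))) return "block";
  if (FLAG_PATTERNS.some((re) => re.test(body))) return "flag";
  return "ok";
}
```

In the app the equivalent checks run inside the api_send_message RPC, so a blocked row never reaches the table at all.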

Flagged messages auto-create a row in reports, so moderators see them in /admin/signalements immediately. I shipped that one this week — it was the missing link between the AI flag and the human review queue.

5. RLS is not "set it and forget it"

Every column-restricted table has a separate WITH CHECK policy. The biggest gotcha I hit: auth.uid() returns NULL inside the SQL context for some legacy HS256 tokens, even when supabase.auth.getUser() returns a valid user. For sensitive writes, the workaround is a server-side service-role client — but always re-verify photo.profile_id === user.id in TS before the write. Defense in depth.
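The TS-side ownership re-check can be as small as this. Types and names are illustrative stand-ins for the real Supabase query results:

```typescript
interface Photo {
  id: string;
  profile_id: string; // owner of the photo row
}

// Service-role clients bypass RLS entirely, so this app-level check is
// the last line of defense against cross-profile writes.
export function assertOwnsPhoto(photo: Photo, userId: string): void {
  if (photo.profile_id !== userId) {
    throw new Error("forbidden: photo does not belong to the caller");
  }
}
```

Call it right before any service-role mutation, after fetching the row with the user's own (RLS-bound) client.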


Things I tried that didn't work

  • CDN-edge translation (Lingva, LibreTranslate): quality is fine for DE↔EN. For VI↔FR or KO↔FR it's unusable. Sticking with Claude.
  • Swipe-only UI: I built a Tinder-style swipe deck. Kept it as an opt-in mode. International dating users want to read profiles, not flick faces — so grid is the default.
  • Auto-translate on display by default: people who already speak both languages found it patronizing. Now it's opt-in per conversation with a one-click toggle.

What I'm doing right now

I'm running a launch promo: the first 100 sign-ups get 3 months of Lovenly Plus free (~€12 value). Hard cap, no extension. The counter is live on the landing page.

Link: lovenly.online/first

If you're building something with i18n or messaging and you want to compare notes on Postgres RLS edge cases, Claude API caching strategies, or how I handle PII detection in chat — comments below. I read every one.


What's next

  • Voice messages with transcription + translation (Whisper → Claude → text + audio). Half done.
  • Match suggestions ranked by language compatibility (currently country + intent + age). Need to factor in shared language pairs.
  • AdSense Management API integration for the revenue dashboard widget (currently traffic-based estimation).

If you read this far, thank you. I shipped this thing alone, in production, with real users. It's terrifying. If you're doing something similar, drop a link in the comments — I'd love to see it.

Cheers,
Gabriel — lovenly.online
