TL;DR
I built Pokor, a real-time planning poker tool for distributed engineering teams, because every existing one either capped participants, stories, or sessions, or gave no clear answer about where the data lived. The core is free and unlimited. Below is what's actually interesting under the hood: the realtime layer, the deploy flow, the Docker setup, and a few integration stories (one painful, one cursed).
Why I built it
Every planning poker tool I tried came with a catch. The free ones capped participants, stories, or sessions right when sprint planning got busiest. The ones without caps gave no real answer about where the data lived or how it might be used later. So I decided to build my own, on my own terms.
What it does
The core is genuinely unlimited and free: sessions, participants, stories, no auto-deletion after N days.
A few features I shipped first because they were the ones I missed most:
- Real-time voting with reveal, re-vote, and confidence indicators
- Async sessions so people can vote on their own time, not just live
- Story import from GitHub Issues, Jira, Notion, and CSV
- Slack and Discord bots with a /pokor slash command to start sessions straight from chat
The paid Organizer plan unlocks two-way sync with GitHub, Jira, and Notion. Most teams should never need to pay.
The stack, and why these specific pieces earned their spot
I tried to keep the stack boring. The interesting part isn't the names on the list, it's where each piece actually pulled its weight.
Laravel + Vue (Inertia, with SSR). Inertia lets the backend own routing and auth while the frontend stays a real Vue SPA. No separate API layer, no duplicated DTOs, but I can still drop into a normal SFC for the planning room. Server-side rendering runs as a sidecar Node process from the same Docker image, so the first paint of every public page is real HTML, not a hydration spinner. Good for first-load speed, good for the bots that decide whether to index the site. I also use Wayfinder to generate TypeScript route helpers from the PHP route definitions, so the moment I rename a route on the backend, every frontend caller breaks at build time instead of in production.
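A sketch of what a page load looks like under this setup (controller, model, and prop names here are illustrative, not Pokor's actual code):

```php
<?php

namespace App\Http\Controllers;

use App\Models\Session;
use Illuminate\Support\Facades\Gate;
use Inertia\Inertia;
use Inertia\Response;

// Hypothetical controller: the shape of an Inertia page load. The backend
// resolves auth and data, then hands props straight to a Vue page
// component; there's no JSON API layer or duplicated DTO in between.
class PlanningRoomController extends Controller
{
    public function show(Session $session): Response
    {
        Gate::authorize('view', $session); // policies run server-side, as usual

        // Props are serialized once and hydrated into the Vue SPA (or
        // rendered to real HTML first by the SSR sidecar on a cold load).
        return Inertia::render('Room/Show', [
            'session'      => $session->load('stories', 'participants'),
            'currentStory' => $session->currentStory,
        ]);
    }
}
```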
FrankenPHP + Laravel Octane. The app stays in memory between requests. For a tool where someone clicks "Reveal" and expects every other browser to update now, the cold-start latency of classic PHP-FPM is the difference between "real-time" and "real-timeish". Octane keeps the hot paths fast, and the WebSocket auth endpoint (which fires for every channel subscription) doesn't bottleneck.
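There isn't much code behind this one; the load-bearing bit is a single config value (trimmed from the stock config/octane.php, with the driver set to FrankenPHP):

```php
<?php

// config/octane.php (trimmed). With the FrankenPHP driver, the framework
// boots once and stays resident, so a request costs routing plus your
// code, not a full bootstrap. Started with: php artisan octane:start

use Laravel\Octane\Octane;

return [
    'server' => env('OCTANE_SERVER', 'frankenphp'),

    // Services pre-resolved into the long-lived container on boot, so
    // even the first request after a deploy hits a warm application.
    'warm' => [
        ...Octane::defaultServicesToWarm(),
    ],
];
```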
Laravel Reverb for WebSockets. First-party server, Pusher-compatible protocol. I evaluated managed Pusher and Soketi and landed on Reverb because it speaks the same protocol as Pusher (so swapping later is a single env change), it runs in the same Docker network, and it means there's exactly zero data leaving my infrastructure during a vote. For a planning poker tool, where the entire product is "what people typed and what they voted", that mattered to me.
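This is close to Laravel's stock broadcasting config, and it's the whole reason the swap stays cheap - the Reverb and Pusher connections sit side by side, and switching is an env change, not a code change:

```php
<?php

// config/broadcasting.php (trimmed). Because Reverb speaks the Pusher
// protocol, every broadcast(...) call in the app is oblivious to which
// connection is active; BROADCAST_CONNECTION picks the backend.

return [
    'default' => env('BROADCAST_CONNECTION', 'reverb'),

    'connections' => [
        'reverb' => [
            'driver' => 'reverb',
            'key' => env('REVERB_APP_KEY'),
            'secret' => env('REVERB_APP_SECRET'),
            'app_id' => env('REVERB_APP_ID'),
            'options' => [
                'host' => env('REVERB_HOST'),       // same Docker network,
                'port' => env('REVERB_PORT', 443),  // nothing leaves the box
                'scheme' => env('REVERB_SCHEME', 'https'),
            ],
        ],

        'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY'),
            'secret' => env('PUSHER_APP_SECRET'),
            'app_id' => env('PUSHER_APP_ID'),
            'options' => [
                'cluster' => env('PUSHER_APP_CLUSTER'),
            ],
        ],
    ],
];
```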
PostgreSQL. Boring on purpose. The interesting query patterns are around story ordering and reveal history, and Postgres' window functions handle both cleanly without me reaching for application-side sorting.
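For flavor, the kind of query this enables - the table and column names here are illustrative, not Pokor's actual schema:

```php
<?php

// Hypothetical reveal-history query: for each story in a session, every
// reveal plus how its average compared to the previous one, numbered in
// order - one pass in Postgres, no application-side sorting.

use Illuminate\Support\Facades\DB;

$reveals = DB::select(
    <<<'SQL'
    SELECT story_id,
           average,
           revealed_at,
           LAG(average) OVER w AS previous_average,
           ROW_NUMBER() OVER w AS reveal_number
    FROM reveals
    WHERE session_id = ?
    WINDOW w AS (PARTITION BY story_id ORDER BY revealed_at)
    ORDER BY story_id, revealed_at
    SQL,
    [$sessionId]
);
```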
Redis, doing two jobs that are both load-bearing:
- Laravel queue, cache, and broadcasting backplane (the standard role).
- Join-request approval queue. When an organizer enables "require approval to join", guest join requests don't go into Postgres at all. They live in Redis with a short TTL and a per-session pending set. Approve or deny fires a broadcast event and the request evaporates. No schema migrations, no cleanup cron, no rows lingering after the session ends. This is the kind of state Redis is genuinely best at, and using Postgres for it would have been worse, not just slower. A sketch of the pattern follows this list.
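The whole pattern fits in a few lines (key names, TTL, and the event class are illustrative, not the production values):

```php
<?php

// Sketch of the ephemeral join-request pattern. The request body lives
// under its own key with a TTL, a per-session set tracks what's pending,
// and expiry is the cleanup - nothing ever touches Postgres.

use Illuminate\Support\Facades\Redis;

const JOIN_TTL = 300; // seconds; expired requests simply vanish

function createJoinRequest(string $sessionId, string $requestId, array $guest): void
{
    Redis::setex("join:{$sessionId}:{$requestId}", JOIN_TTL, json_encode($guest));
    Redis::sadd("join:{$sessionId}:pending", $requestId);
    Redis::expire("join:{$sessionId}:pending", JOIN_TTL);
}

function resolveJoinRequest(string $sessionId, string $requestId, bool $approved): void
{
    Redis::del("join:{$sessionId}:{$requestId}");
    Redis::srem("join:{$sessionId}:pending", $requestId);

    // Hypothetical event class; the broadcast tells the organizer's and
    // guest's browsers the request was approved or denied.
    broadcast(new \App\Events\JoinRequestResolved($sessionId, $requestId, $approved));
}
```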
The session page is pumped full of WebSockets
The planning room is the heart of the app, and almost every interaction on it is a broadcast. There are a lot of distinct event types flowing through a single Reverb presence channel per session. Some of them:
- vote cast (with a flag that triggers auto-reveal once everyone is in)
- cards revealed (carries every vote plus the computed average and rationales)
- voting started and stopped
- story changed
- participant joined, left, kicked, or renamed
- role changed (promote or demote)
- join request created and resolved (organizer-only)
- emoji thrown (yes, you can throw emoji at people)
- estimate synced back to GitHub, Jira, or Notion
- session mode and session settings changed
On the frontend a single composable subscribes to the presence channel, listens for each event by name, and patches a Pinia store, so every component in the planning room reacts without prop-drilling or polling. The "reveal" itself is wrapped in a row-level Postgres lock so two organizers can't both hit Reveal at the same millisecond and broadcast two different averages.
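A sketch of that lock-guarded reveal (model and event names are mine, not necessarily Pokor's exact ones):

```php
<?php

// lockForUpdate() issues SELECT ... FOR UPDATE, a row-level Postgres
// lock: a second organizer's concurrent Reveal blocks on the row, then
// sees is_revealed = true and becomes a no-op instead of broadcasting a
// second, possibly different, average.

use App\Events\CardsRevealed; // hypothetical event class
use App\Models\Story;         // hypothetical model
use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($storyId) {
    $story = Story::whereKey($storyId)->lockForUpdate()->first();

    if ($story->is_revealed) {
        return; // someone else won the race; nothing to broadcast
    }

    $average = $story->votes()->avg('value');
    $story->update(['is_revealed' => true, 'average' => $average]);

    broadcast(new CardsRevealed($story)); // carries votes, average, rationales
});
```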
The deploy flow: blue-green with surgical cutover
The thing I'm most quietly proud of is the deploy. Pokor runs two FrankenPHP Octane containers in production, blue and green, behind Caddy. On every deploy:
- The deploy script detects the active slot.
- It builds a new Docker image with code, vendor, and built frontend assets baked in. The release tag is stamped with the slot letter and a timestamp so I can see at a glance which slot is live.
- Composer install, migrations, and config caching run inside the standby container.
- Standby gets health-checked.
- The cutover itself is a single call into Caddy's admin API, flipping the upstream from the old slot to the new one (sketched below). In-flight requests get a short grace window to drain.
- The old container stops. Queue, scheduler, and Reverb workers restart against the new image.
- The new release tag is recorded so rollback is one command, which just runs the same flow in reverse against a previous image.
Zero downtime in practice. No managed deploy service, no third-party orchestrator. Just Caddy's admin API and a bash script doing one obvious thing well.
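The real script is bash and curl, but the cutover call is small enough to sketch here in PHP. The JSON config path is hypothetical - it depends entirely on how your Caddyfile compiles, so inspect GET /config/ on your own instance before copying anything:

```php
<?php

// Sketch of the blue-green flip: PATCH the reverse-proxy upstream in
// Caddy's running config via its admin API (default port 2019). Caddy
// applies the change atomically; old connections get to drain.

use Illuminate\Support\Facades\Http;

$newUpstream = 'pokor_green:8000'; // the freshly health-checked slot

// The path below ("srv0", route/handler indices) is illustrative; it
// must match how your own Caddyfile compiled to JSON.
Http::patch(
    'http://localhost:2019/config/apps/http/servers/srv0/routes/0/handle/0/upstreams',
    [['dial' => $newUpstream]]
);
```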
Docker: move-to-another-server-in-minutes
Everything Pokor needs lives in one compose file: Caddy, the two app containers, Reverb, the queue workers, the scheduler, Postgres, Redis, the error-tracking agent. Postgres and Redis are bound to localhost only, embedded inside the stack rather than exposed. There's nothing managed, nothing external, no "click this in the AWS console first" step.
What this gives me, concretely, is that if I ever had to migrate to another server, it's copying the data volumes, copying the env files, and bringing the stack up. No DNS to repoint, no propagation to wait on - the Cloudflare tunnel (more on that below) follows the container, not the IP. Minutes, not a weekend project. That portability was not an accident, it was the constraint I designed around from day one.
Cloudflare doing the heavy lifting at the edge
Cloudflare sits in front of everything, and it earns its keep in three ways:
- TLS and WAF. Caddy speaks plain HTTP internally and trusts Cloudflare's IP ranges. All TLS terminates at the edge, and the obvious bot traffic gets dropped before it ever reaches my containers.
- Turnstile (Cloudflare's invisible challenge / captcha) on the public forms - contact, newsletter signup, anywhere abuse would otherwise leak in. Every submission carries a Turnstile token that I verify server-side via a reusable Laravel validation rule (sketched after this list). The "is this a human" question is one I'd rather not answer myself, and Turnstile is invisible most of the time, which is exactly what I want from a captcha.
- cloudflared tunnels instead of public ingress. The server has no open ports facing the internet at all. A cloudflared daemon inside the stack dials out to Cloudflare and serves traffic back through that tunnel. This is also what makes server migration so painless: the tunnel is bound to my Cloudflare account, not to the server's IP, so when I move hosts there is no DNS to change, nothing to propagate, nothing to repoint. The new server comes up, the tunnel reconnects, traffic follows.
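Here's the Turnstile rule in miniature (the class name and config key are mine; the siteverify endpoint and its parameters are Cloudflare's real contract):

```php
<?php

namespace App\Rules;

use Closure;
use Illuminate\Contracts\Validation\ValidationRule;
use Illuminate\Support\Facades\Http;

// Verifies the token the Turnstile widget injects into the form against
// Cloudflare's siteverify endpoint. Dropped into any public form's rules.
class TurnstileToken implements ValidationRule
{
    public function validate(string $attribute, mixed $value, Closure $fail): void
    {
        $response = Http::asForm()->post(
            'https://challenges.cloudflare.com/turnstile/v0/siteverify',
            [
                'secret'   => config('services.turnstile.secret'), // my config key
                'response' => $value,
                'remoteip' => request()->ip(),
            ]
        );

        if (! $response->json('success')) {
            $fail('The anti-spam check failed. Please try again.');
        }
    }
}
```

Any public form then just validates the cf-turnstile-response field the widget posts with ['required', new TurnstileToken], and abuse handling stays Cloudflare's problem.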
The integration stories
GitHub Issues: easy, mostly. GitHub's App plus OAuth model is well-trodden. The two-step "install the app, then authorize the user" dance is annoying once and then it just works. Write-back is either a comment or a label prefix.
Notion: also easy. OAuth, discover databases, let the user pick which property is the estimate field, write back. The Notion API is well-designed and the docs match reality.
Slack and Discord: a slash command that returns an ephemeral message with a pre-signed "Join as Organizer" button. Both providers ended up looking nearly identical on the implementation side, which was a pleasant surprise.
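Sketched from the Slack side - the response shape is Slack's real Block Kit contract, while the route, session creation, and URL are illustrative. Discord's interaction envelope differs, but the handler ends up nearly identical:

```php
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// The slash-command endpoint: Slack POSTs here, and whatever JSON comes
// back is rendered in the channel. "ephemeral" means only the invoking
// user sees it.
Route::post('/integrations/slack/command', function (Request $request) {
    // ... verify Slack's signing secret, create the session,
    //     mint a pre-signed organizer link ...
    $joinUrl = 'https://pokor.dev/join/...'; // illustrative placeholder

    return response()->json([
        'response_type' => 'ephemeral',
        'text' => 'Your planning session is ready.',
        'blocks' => [[
            'type' => 'actions',
            'elements' => [[
                'type'  => 'button',
                'text'  => ['type' => 'plain_text', 'text' => 'Join as Organizer'],
                'url'   => $joinUrl,
                'style' => 'primary',
            ]],
        ]],
    ]);
});
```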
Jira / Atlassian: pain in the ass. I'll be honest, I spent more time on Atlassian than on the next three integrations combined. The OAuth flow itself looks fine on paper. Then I started discovering the real problems:
- A user can have access to multiple Jira sites, so after auth I have to fetch the accessible resources, let them pick one, and persist that choice.
- Story Points doesn't have a stable field ID. It's a custom field. The ID is different in every Jira instance. There is no "give me the story points field" endpoint.
- Even if I find the right field, the connected user might not have permission to write to it on every issue type.
What I ended up with is a small heuristic ranking system that scores every custom numeric field on each project, weighted by known canonical IDs, by exact name matches like "Story Points", and by anything that smells like points or estimates. Then a second pass actually checks that the chosen field is writable, not just readable. Then a "test connection" button so the user can see it work before they save settings. It took me a while to get there, but it works now, and it works on every Jira tenant I've thrown at it.
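To make that concrete, here's the ranking idea in miniature - the weights, helper shape, and canonical IDs are illustrative, not the exact production values:

```php
<?php

// Score one Jira field definition as a "story points" candidate. Input
// is a field object from Jira's field list for a project; higher wins.

function scoreField(array $field): int
{
    // Only numeric custom fields are candidates at all.
    if (($field['schema']['type'] ?? null) !== 'number') {
        return 0;
    }

    $score = 0;
    $name  = strtolower($field['name'] ?? '');

    // IDs that commonly map to story points on Jira Cloud (varies by
    // tenant, which is the whole problem) outrank everything else.
    if (in_array($field['id'], ['customfield_10016', 'customfield_10026'], true)) {
        $score += 100;
    }

    // Exact canonical names beat fuzzy matches.
    if (in_array($name, ['story points', 'story point estimate'], true)) {
        $score += 50;
    }

    // Anything that smells like points or estimates still gets a nudge.
    if (preg_match('/point|estimate/', $name)) {
        $score += 10;
    }

    return $score;
}

// Pick the best-scoring field, then the second pass confirms it's
// actually writable for this user and issue type (Jira's editmeta)
// before it's saved as the sync target.
$best = collect($fields)->sortByDesc(fn ($f) => scoreField($f))->first();
```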
Microsoft Teams: I genuinely planned to add this. Slack and Discord were done, Teams was the obvious next one. I opened the Azure portal to register an app and... how do I put this politely. The Azure portal is a fractal of nested blades, each one promising to take me to "Active Directory" but actually taking me to a different shape of Active Directory. I clicked around for a few hours, found three different things that looked like the right "App registrations" page, none of which were, and eventually closed the tab. Teams support is officially on the roadmap for "when I have the emotional bandwidth to fight Azure again". If you're a Teams user and you want this, let me know and I'll put on a pot of coffee.
What's next
Right now I'm focused on getting first users and feedback. If you try it with your team and something feels off (broken flow, missing integration, copy that confused you) I'd genuinely love to hear about it.
👉 pokor.dev
Thanks for reading.