I had a Python app that generated game sprites using Gemini. It worked great. The problem was it only ran on my machine. No web UI, no shareable link, nothing I could hand to someone else or demo without screen sharing.
The Build with MeDo hackathon gave me a reason to fix that. Here is how it went.
Prompt 1: Write the requirements first
Before touching any code I asked MeDo to produce a requirements document. I described what I wanted: a chat panel for talking to the AI agent, a style references gallery, a jobs board, a settings panel for the API URL. MeDo wrote out a full spec with acceptance criteria, edge cases, and an API contract table.
That step mattered more than I expected. It forced me to think clearly about the API before the frontend existed, which meant fewer mismatches to fix later.
Prompt 2: Build it
Once the requirements looked right I said "build this application from the spec above." MeDo generated the full four-panel React frontend in one pass. Dark theme, sidebar navigation, all four panels wired up to the API endpoints from the requirements doc.
That was it for the initial build. Two prompts.
What I built on the backend
The AI pipeline already existed. I just needed a server layer in front of it. I wrote a FastAPI bridge that exposes the pipeline over HTTP: a chat endpoint, a polling endpoint for streaming agent events back to the browser, static file serving for the generated sprites, and a session management layer. Around 550 lines of Python. ngrok tunnels it to the public internet so the MeDo app can reach it.
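The framework-independent core of that bridge is a session store with a per-session event queue: the pipeline pushes agent events in, and the polling endpoint drains whatever has accumulated since the last poll. Here is a minimal sketch of that pattern; the class and event names are illustrative, not the actual implementation.

```python
import queue
import uuid

class SessionStore:
    """Hypothetical sketch: one event queue per chat session."""

    def __init__(self):
        self._sessions = {}

    def create(self):
        # Mint a session id the browser can hold onto between polls.
        session_id = uuid.uuid4().hex
        self._sessions[session_id] = queue.Queue()
        return session_id

    def push_event(self, session_id, event):
        # Called by the AI pipeline as agent events arrive.
        self._sessions[session_id].put(event)

    def drain(self, session_id, limit=50):
        # Called by the polling endpoint: return everything queued
        # since the last poll, without blocking.
        events = []
        q = self._sessions[session_id]
        while not q.empty() and len(events) < limit:
            events.append(q.get_nowait())
        return events

store = SessionStore()
sid = store.create()
store.push_event(sid, {"type": "agent_response", "text": "Generating sprite..."})
store.push_event(sid, {"type": "image", "url": "/static/potion.png"})
print(store.drain(sid))  # both events, in arrival order
```

In the real app a FastAPI route would wrap `drain` and the browser would hit it on an interval; the queue keeps the pipeline and the HTTP layer decoupled.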
The follow-up prompts
A few things needed fixing after the first build. The style references gallery was crashing because image URLs were absolute Windows paths, not web URLs. The chat responses were not showing up because the WebSocket was using ws:// instead of wss:// over the HTTPS tunnel. The base URL in settings had a trailing slash that was producing double-slash paths like //config.
Each fix was one prompt. None of them required a redesign, just targeted corrections. By prompt 8 or 9 the app was working end to end.
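All three fixes boil down to small string transformations on the server side. A hedged sketch of what each correction looks like, with assumed paths and prefixes rather than the app's actual ones:

```python
from pathlib import PureWindowsPath

def to_web_url(local_path, static_prefix="/static/"):
    # Fix 1: serve generated images by filename under the static mount
    # instead of leaking absolute Windows paths to the browser.
    return static_prefix + PureWindowsPath(local_path).name

def normalize_base_url(base_url):
    # Fix 2: strip the trailing slash so joined paths don't become
    # double-slash routes like //config.
    return base_url.rstrip("/")

def ws_url(base_url):
    # Fix 3: an HTTPS tunnel needs wss://, not ws://.
    if base_url.startswith("https://"):
        return "wss://" + base_url[len("https://"):]
    return "ws://" + base_url[len("http://"):]

print(to_web_url(r"C:\sprites\potion.png"))        # /static/potion.png
print(normalize_base_url("https://demo.ngrok.io/")) # https://demo.ngrok.io
print(ws_url("https://demo.ngrok.io"))              # wss://demo.ngrok.io
```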
What the final app actually does
You open the app, type a description in the chat panel like "a glowing health potion for a dark fantasy RPG," and the agent generates the sprite. Images appear in the chat in real time as they come back from Gemini. The style references gallery shows every saved art style with image previews so you can keep your assets visually consistent across a whole project.
What surprised me
The chat panel was the part I expected to be painful. It renders multiple event types live: agent responses, tool calls, thinking blocks, error messages, generated images. MeDo got the structure right on the first pass. I have built streaming UIs manually before and it usually takes a full day to get the edge cases right. MeDo had a working version in one prompt.
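The real rendering code is React, but the structure MeDo landed on is essentially a dispatch table over event types, with unknown types ignored rather than treated as fatal. A Python sketch of that shape, with hypothetical event and field names:

```python
def render_event(event):
    # Map each event type to a renderer; the names here are assumed,
    # not the app's actual schema.
    renderers = {
        "agent_response": lambda e: e["text"],
        "tool_call": lambda e: f"[tool] {e['name']}",
        "thinking": lambda e: f"(thinking) {e['text']}",
        "error": lambda e: f"!! {e['message']}",
        "image": lambda e: f"<img src={e['url']}>",
    }
    renderer = renderers.get(event["type"])
    if renderer is None:
        return ""  # unknown event types are skipped, not crashes
    return renderer(event)

print(render_event({"type": "agent_response", "text": "Here is your sprite."}))
print(render_event({"type": "image", "url": "/static/potion.png"}))
```

Keeping unknown types non-fatal is what makes a streaming UI like this robust: the backend can add event kinds without breaking older frontends.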
The stack
- MeDo frontend: React app built from a requirements doc
- FastAPI backend: wraps the existing Gemini pipeline
- Gemini on Vertex AI: image generation and agent reasoning
- ngrok: tunnels the local server to the web
If you have a working local tool and you want a web interface for it, MeDo is genuinely fast. The hard part is describing what you want clearly. The building part takes care of itself.