M Saad Ahmad

Day 98 of #100DaysOfCode — DevCollab: Finishing the Frontend and Testing Against the Live Backend

Today was the last day of writing code. The Next.js frontend got its final pieces, then the entire app was pointed at the live Railway backend and tested end-to-end with real data flowing between two deployed services. Tomorrow is purely deployment. Today was making sure there's something worth deploying.


Finishing the Frontend

The last remaining pages and components were finished today. Nothing major; the core of the app has been working for several days. What was left was the edit profile form, the sent requests page, and a few component polish items that had been deferred. Getting these done today meant tomorrow's deployment wouldn't be interrupted by code that still needed writing.

The rule I applied when deciding what to finish and what to cut was simple: if the app's core loop works without it, cut it. The core loop is register, post a project, another user applies, and the owner accepts. Everything that supports that loop stayed. Anything decorative or supplementary that wasn't built yet got cut.


Pointing at the Live Backend

The single most important change today was updating .env.local to point NEXT_PUBLIC_API_URL at the Railway URL instead of localhost:8000. One line changed, and suddenly the Next.js app running on my machine was talking to a real PostgreSQL database on Railway's servers instead of a local SQLite file.
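For context, here is roughly what that wiring looks like on the frontend. This is a minimal sketch rather than the exact DevCollab code: the file path, the use of axios, and the fallback URL are assumptions. The point is that NEXT_PUBLIC_API_URL is read in one place, so switching environments never touches application code.

```typescript
// lib/api.ts (hypothetical path) -- the single place the API base URL is read.
// Changing NEXT_PUBLIC_API_URL in .env.local (or later in Vercel's dashboard)
// repoints every request without any code changes.
import axios from "axios";

export const api = axios.create({
  // e.g. NEXT_PUBLIC_API_URL=https://<your-app>.up.railway.app
  baseURL: process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:8000",
  headers: { "Content-Type": "application/json" },
});
```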

This felt different immediately. The first registration on the live backend created a real user in a real production database. The project I created from the frontend showed up in the Django admin panel on Railway. The data is real now.


What the End-to-End Test Revealed

Running the full user flow with the live backend connected revealed four things that didn't show up during local testing.

The first was a CORS issue. The Railway backend has CORS_ALLOWED_ORIGINS configured with http://localhost:3000 for development, yet when the Next.js app on localhost:3000 made a request to the Railway URL, the browser still blocked it. My first guess was that the origin was missing from the list, but checking confirmed http://localhost:3000 was already there. The actual problem was an HTTPS vs HTTP mismatch: Railway serves over HTTPS, but the CORS configuration still used http://. Changing it to https:// fixed the blocked requests.

The second was an avatar URL issue. Locally, avatars uploaded to the Django backend were served at localhost:8000/media/avatars/filename.jpg. In production, the MEDIA_URL was the same relative path, but Django on Railway doesn't automatically serve media files; WhiteNoise only handles static files. Uploaded avatars returned 404. For now, the avatar upload is disabled in the frontend with a note that it requires cloud storage. The profile works perfectly without it; it just shows the placeholder avatar. This is an acceptable trade-off for a portfolio project.
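With the upload disabled, the profile component just needs a graceful fallback. A minimal sketch of that fallback, with hypothetical component, prop, and file names:

```typescript
// components/Avatar.tsx (hypothetical) -- shows a bundled placeholder while
// media uploads stay disabled on the Railway backend.
type AvatarProps = { avatarUrl?: string | null; username: string };

export function Avatar({ avatarUrl, username }: AvatarProps) {
  // Against the live backend, avatarUrl stays null until cloud storage is wired up.
  const src = avatarUrl ?? "/placeholder-avatar.png";
  return <img src={src} alt={`${username}'s avatar`} width={40} height={40} />;
}
```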

The third was a field name mismatch. The collaboration request serializer returns the requester's profile data nested under requester_data, but one component in the frontend was looking for requester without the _data suffix. This worked locally because I had old test data in the local database that happened to have the right shape. Against the live backend with fresh data, the field was missing, and the component showed nothing. A one-line fix in the component.
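The fix was only about reading the right key. A sketch of the corrected access, assuming the serializer's nested shape (the type and helper names here are hypothetical):

```typescript
// types/collaboration.ts (hypothetical) -- the requester's profile is nested
// under requester_data, not requester.
interface CollaborationRequest {
  id: number;
  status: string;
  requester_data: {
    username: string;
    avatar?: string | null;
  };
}

// Before: request.requester?.username  -> undefined against live data
// After:  request.requester_data.username
function requesterName(request: CollaborationRequest): string {
  return request.requester_data.username;
}
```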

The fourth was token refresh timing. The access tokens are set to expire after 30 minutes. During local testing, I never stayed on the app long enough to hit that expiry. The first time I left the live-connected app open for 35 minutes and came back, every API call returned 401 until the refresh interceptor kicked in. The interceptor worked correctly; it refreshed the token and retried the request, but there was a brief flash of an error toast before the retry succeeded. This is a known limitation of the current interceptor implementation and is acceptable for now.
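For reference, the refresh-and-retry flow looks roughly like this. It is a sketch, not the exact interceptor in the project: the token storage, endpoint path, and SimpleJWT-style refresh payload are assumptions. The toast flash described above comes from an error handler firing before this retry resolves.

```typescript
// lib/api.ts (hypothetical) -- retry a failed request once after refreshing the token.
import axios, { AxiosError } from "axios";

const api = axios.create({ baseURL: process.env.NEXT_PUBLIC_API_URL });

api.interceptors.response.use(
  (response) => response,
  async (error: AxiosError) => {
    const original = error.config;
    if (error.response?.status === 401 && original && !(original as any)._retried) {
      (original as any)._retried = true; // guard against infinite refresh loops
      const refresh = localStorage.getItem("refresh_token");
      const { data } = await axios.post(
        `${process.env.NEXT_PUBLIC_API_URL}/api/token/refresh/`,
        { refresh }
      );
      localStorage.setItem("access_token", data.access);
      original.headers.Authorization = `Bearer ${data.access}`;
      return api(original); // replay the original request with the new token
    }
    return Promise.reject(error);
  }
);
```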


Running the Production Build

Before calling the day done, I ran npm run build locally. This is the same build Vercel runs during deployment, so if it passes locally, the Vercel deployment is very likely to succeed without surprises.

The first build attempt failed with three errors. Two were unused imports that Next.js's production build is stricter about than the development server. The third was a missing key prop on a list render that development mode warned about but didn't block. All three were quick fixes. The second build attempt passed cleanly.
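For illustration, the key-prop fix looked something like the sketch below. The component and field names are hypothetical; the point is that dev mode only warns about a missing key, while this project's production build treated it as an error.

```typescript
// app/projects/ProjectList.tsx (hypothetical) -- the kind of key-prop fix required.
type Project = { id: number; title: string };

export function ProjectList({ projects }: { projects: Project[] }) {
  return (
    <ul>
      {projects.map((project) => (
        // key={project.id} was the missing piece the production build flagged
        <li key={project.id}>{project.title}</li>
      ))}
    </ul>
  );
}
```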

Running the production build locally before deploying is a habit worth forming on every project. Vercel's build logs are readable, but debugging them remotely is slower than catching the errors on your own machine first.


Preparing the Environment Variables for Vercel

Vercel needs to know the environment variables the Next.js app requires. These are the same variables as in .env.local, but they need to be entered in Vercel's dashboard rather than read from a file. Going through .env.local and listing every variable that starts with NEXT_PUBLIC_ confirmed there's only one: NEXT_PUBLIC_API_URL.

For tomorrow, that value will be the Railway URL that Railway assigned to the project. The variable is already used in the code wherever the API base URL is needed, so changing it from a localhost address to the live URL requires zero code changes.


Where Things Stand After Day 98

The frontend is complete. Every planned page exists, every core user flow works, and the production build passes. The app has been tested end-to-end against the live backend, and four issues were found and resolved. The environment variable needed for Vercel deployment is documented and ready.

Thanks for reading. Feel free to share your thoughts!
