
Pato for Cloudinary

Posted on • Edited on • Originally published at cloudinary.com

Generative AI Outfits in React: Cloudinary generativeReplace, Background Replace, and Node Upload

FashionistaAI is a React + Vite + Node sample: upload a photo, then use Cloudinary GenAI (generativeReplace, generativeBackgroundReplace, generativeRecolor, generativeRestore) to build four styled looks, with HTTP 423 retry logic while derivatives finish.

Upload a picture → get four styled looks (elegant, streetwear, sporty, business casual). In this walkthrough we’ll build FashionistaAI with Cloudinary GenAI, React (Vite) on the frontend, and a tiny Node.js/Express backend for secure uploads.

Repo: Cloudinary-FashionistaAI

Product note: GenAI features must be enabled for your Cloudinary product account and may depend on your plan—check the console before you run the flows below.


What you’ll build

  • A React app that:

    • uploads an image to your Node backend
    • asks Cloudinary GenAI to swap tops/bottoms (generativeReplace)
    • replaces the background (generativeBackgroundReplace)
    • lets you recolor top or bottom on click (generativeRecolor)
  • A Node.js server that securely uploads files to Cloudinary with the official SDK.

  • Client-side 423 handling: a small preload loop that retries when a derived URL returns HTTP 423 (still processing).


Demo (what it looks like)

The background adapts to the look; each tile is a different style:

  • Elegant
  • Streetwear
  • Sporty
  • Business casual

Fashionista app UI


Prerequisites

  • Node 18+ and npm
  • A free Cloudinary account
    • GenAI features may need to be enabled depending on your plan.
  • Basic React/TypeScript familiarity (optional but helpful)

1) Set up Cloudinary

  1. Create an account or log in, then open Settings → Product Environments.
  2. Confirm your Cloud name (keep it consistent across tools).
  3. Go to Settings → Product Environments → API Keys and click Generate New API Key. Save the Cloud name, API key, and API secret (the secret stays on the server).

2) Bootstrap the React app (Vite)

# Create a Vite + React + TS app
npm create vite@latest fashionistaai -- --template react-ts
cd fashionistaai

# Frontend deps
npm i axios @cloudinary/react @cloudinary/url-gen

# Dev tooling
npm i -D @vitejs/plugin-react

# Backend deps (we'll use one package.json for both)
npm i express cors cloudinary multer streamifier dotenv

# Nice-to-have dev deps
npm i -D nodemon concurrently

3) Configure Vite dev proxy (frontend → backend)

Create/replace vite.config.js:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000,
    proxy: {
      '/api': {
        target: 'http://localhost:8000',
        changeOrigin: true,
        secure: false,
      },
    },
  },
})

This forwards any /api/* calls to the Express server on port 8000.


4) Environment variables

Create .env in the project root:

# Server (Node) reads these:
CLOUDINARY_CLOUD_NAME=YOUR_CLOUD_NAME
CLOUDINARY_API_KEY=YOUR_API_KEY
CLOUDINARY_API_SECRET=YOUR_API_SECRET

# Frontend (Vite) reads those prefixed with VITE_
VITE_CLOUDINARY_CLOUD_NAME=YOUR_CLOUD_NAME

Never expose CLOUDINARY_API_SECRET on the frontend. That’s why we’re using a server.
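Since a typo in .env would otherwise only surface later as a confusing 500, a small pure helper (hypothetical, not from the repo) can warn about missing credentials at server boot:

```javascript
// Pure helper: list the required env vars that are absent from an env object.
function missingEnv(env, keys) {
  return keys.filter((key) => !env[key])
}

const required = [
  'CLOUDINARY_CLOUD_NAME',
  'CLOUDINARY_API_KEY',
  'CLOUDINARY_API_SECRET',
]

const missing = missingEnv(process.env, required)
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(', ')}`)
}
```

Because the check is a pure function, it's trivial to test and can be promoted to a hard `throw` in production.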


5) Node/Express backend (server.js)

Create server.js in the project root. You can find the complete server file here.

Below: the main pieces of server.js.

Add to server.js (top of file) — wire up your Cloudinary credentials from .env:

import 'dotenv/config'
import { v2 as cloudinary } from 'cloudinary'

cloudinary.config({
  secure: true,
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
})

This connects the server to your Cloudinary account.

We use multer with memory storage so the file never touches disk. The Node SDK's upload_stream accepts a stream, and streamifier turns the in-memory buffer into one. The config below also caps uploads at 10 MB and uses a fileFilter to accept only PNG, JPG, and WEBP.

const storage = multer.memoryStorage()
const upload = multer({
  storage,
  limits: { fileSize: 10 * 1024 * 1024 }, // 10MB
  fileFilter: (_req, file, cb) => {
    const ok = /image\/(png|jpe?g|webp)/i.test(file.mimetype)
    cb(ok ? null : new Error('Only PNG/JPG/WEBP images are allowed'), ok)
  },
})
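The mimetype pattern in that fileFilter can be sanity-checked in isolation:

```javascript
// The same pattern used in the Multer fileFilter above, checked in isolation.
// Note it is unanchored: a substring match is enough here, so anchoring it
// (/^image\/(png|jpe?g|webp)$/i) would be slightly stricter.
const allowed = /image\/(png|jpe?g|webp)/i

console.log(allowed.test('image/png'))  // true
console.log(allowed.test('image/jpeg')) // true
console.log(allowed.test('image/jpg'))  // true
console.log(allowed.test('image/gif'))  // false
```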
app.post('/api/generate', upload.single('image'), (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'No image provided' })

  const uploadStream = cloudinary.uploader.upload_stream(
    { resource_type: 'image' },
    (error, result) => {
      if (error) {
        console.error('Cloudinary error:', error)
        return res.status(500).json({ error: error.message })
      }
      res.json(result)
    }
  )

  // Pipe the in-memory buffer to Cloudinary as a stream
  streamifier.createReadStream(req.file.buffer).pipe(uploadStream)
})

The SDK accepts a file path or a stream; since Multer kept the bytes in memory, we pipe them into upload_stream, and the callback returns the upload JSON (including public_id) that the React app needs.

package.json scripts

Open package.json and add these scripts:

{
  "type": "module",
  "scripts": {
    "dev": "vite",
    "server": "nodemon server.js",
    "start:both": "concurrently -k \"npm:server\" \"npm:dev\""
  }
}

Now you can run both servers with:

npm run start:both

(Or use two terminals: npm run server and npm run dev.)


6) React UI (src/App.tsx)

The UI is a drop-in, TypeScript-friendly App component: it separates the raw file state from the Cloudinary image state, tightens the types, and reads the cloud name from env. Full App.tsx in the repo.

Creating the clothing styles

type StyleKey = 'top' | 'bottom'
type StyleConfig = {
  top: string
  bottom: string
  background: string
  type: string
}

const STYLES: StyleConfig[] = [
  { top: 'suit jacket for upper body', bottom: 'suit pants for lower body', background: 'office', type: 'business casual' },
  { top: 'sport tshirt for upper body', bottom: 'sport shorts for lower body', background: 'gym', type: 'sporty' },
  { top: 'streetwear shirt for upper body', bottom: 'streetwear pants for lower body', background: 'street', type: 'streetwear' },
  { top: 'elegant tuxedo for upper body', bottom: 'elegant tuxedo pants for lower body', background: 'gala', type: 'elegant' },
]

StyleKey is top vs. bottom for recolor; StyleConfig is one full look (clothing + background + label). STYLES is the “wardrobe” that becomes each card (Business casual, Sporty, Streetwear, Elegant).

Submitting to the backend and getting the base image

  async function handleSubmit() {
    setError(null)
    setLooks([])
    setLoadingStatus([])
    if (!file) return

    try {
      setLoading(true)
      const data = new FormData()
      data.append('image', file)

      const resp = await axios.post('/api/generate', data, {
        headers: { 'Content-Type': 'multipart/form-data' },
      })

      const publicId = resp.data.public_id as string
      const base = cld.image(publicId).resize(fill().width(508).height(508))
      setBaseImg(base)
      createLooks(publicId)
    } catch (err: any) {
      console.error(err)
      setError(err?.message ?? 'Upload failed')
    } finally {
      setLoading(false)
    }
  }

POST /api/generate uploads the file, returns a public_id, the UI builds a base CloudinaryImage at 508×508, then createLooks(publicId) runs the generative stack.
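For intuition, the delivery URL that base image resolves to looks roughly like this (a sketch with a hypothetical helper; the real toURL() output may order qualifiers differently or append a format/version segment):

```javascript
// Hypothetical helper sketching the delivery URL produced by
// cld.image(publicId).resize(fill().width(508).height(508)).
function baseImageUrl(cloudName, publicId) {
  return `https://res.cloudinary.com/${cloudName}/image/upload/c_fill,w_508,h_508/${publicId}`
}

console.log(baseImageUrl('demo', 'sample'))
// https://res.cloudinary.com/demo/image/upload/c_fill,w_508,h_508/sample
```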

Preloading derived images (poll until ready)

  function preload(img: CloudinaryImage, index: number, attempts = 0) {
    const url = img.toURL()
    const tag = new Image()
    tag.onload = () =>
      setLoadingStatus(prev => {
        const copy = [...prev]
        copy[index] = false
        return copy
      })
    tag.onerror = async () => {
      // 423 means "still deriving" on Cloudinary
      try {
        const r = await fetch(url, { method: 'HEAD' })
        if (r.status === 423 && attempts < 6) {
          setTimeout(() => preload(img, index, attempts + 1), 2000 * (attempts + 1))
          return
        }
      } catch {}
      setError('Error loading image. Please try again.')
      setLoadingStatus(prev => {
        const copy = [...prev]
        copy[index] = false
        return copy
      })
    }
    tag.src = url
  }
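The retry schedule above is linear backoff, 2000 ms × (attempt + 1), for up to 6 attempts; spelled out, the worst case is about 42 seconds before the error surfaces:

```javascript
// The retry schedule preload uses: 2000 ms * (attempt + 1), up to 6 attempts.
const MAX_ATTEMPTS = 6
const delays = Array.from({ length: MAX_ATTEMPTS }, (_, attempt) => 2000 * (attempt + 1))
const totalMs = delays.reduce((sum, d) => sum + d, 0)

console.log(delays)  // [ 2000, 4000, 6000, 8000, 10000, 12000 ]
console.log(totalMs) // 42000 -> worst case ~42 s before giving up
```

Tune MAX_ATTEMPTS and the multiplier to your plan's typical generation time.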

preload loads a derived URL in a hidden Image; onload clears that tile’s spinner. On error, a HEAD request checks the status: Cloudinary returns HTTP 423 while the derivative is still being generated, so we back off and retry; otherwise we surface an error. That’s the “423 + retry” pattern from the summary above.

Creating the different looks (generative effects)

  function createLooks(publicId: string) {
    const imgs = STYLES.map(style => {
      const i = cld.image(publicId)
      i.effect(generativeReplace().from('shirt').to(style.top))
      i.effect(generativeReplace().from('pants').to(style.bottom))
      i.effect(generativeBackgroundReplace().prompt(style.background)) // prompt steers the scene per style
      i.effect(generativeRestore())
      i.resize(fill().width(500).height(500))
      return i
    })
    setLooks(imgs)
    setLoadingStatus(imgs.map(() => true))
    imgs.forEach((img, idx) => preload(img, idx))
  }

For each style, generativeReplace().from('shirt').to(style.top) and .from('pants').to(style.bottom) swap the garments using prompts like “suit jacket for upper body”; then background replace, restore, and resize complete the card before preload runs for each tile.
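Under the hood, each chained effect becomes a qualifier in the delivery URL. A sketch of the one generativeReplace() maps to, assuming Cloudinary's documented e_gen_replace syntax (prompt text is percent-encoded):

```javascript
// Sketch of the URL qualifier generativeReplace() produces, assuming the
// documented e_gen_replace:from_...;to_... syntax.
function genReplaceQualifier(from, to) {
  return `e_gen_replace:from_${encodeURIComponent(from)};to_${encodeURIComponent(to)}`
}

console.log(genReplaceQualifier('shirt', 'suit jacket for upper body'))
// e_gen_replace:from_shirt;to_suit%20jacket%20for%20upper%20body
```

Seeing the raw qualifier is handy when debugging a 423 or a bad prompt: you can paste the generated URL into a browser and inspect the response directly.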

Recolor modal logic

  function openRecolorModal(index: number) {
    setSelectedLookIndex(index)
    setOpenModal(true)
  }

  function applyRecolor() {
    const clone = [...looks]
    const img = clone[selectedLookIndex]
    if (!img) return
    setLoadingStatus(prev => {
      const copy = [...prev]
      copy[selectedLookIndex] = true
      return copy
    })
    setOpenModal(false)
    // Recolor only the chosen item for the chosen look
    img.effect(generativeRecolor(STYLES[selectedLookIndex][selectedItem], color))
    setLooks(clone)
    preload(img, selectedLookIndex)
  }

Recolor layers generativeRecolor on an already generated tile, then preload waits for the new derivative.

Add styling from the repo CSS or your own.


7) How it works (quick tour)

  • Upload: The file is sent to POST /api/generate. The server uses cloudinary.uploader.upload_stream to store it and returns the public_id.
  • Transform:

    • generativeReplace().from('shirt').to(style.top)
    • generativeReplace().from('pants').to(style.bottom)
    • generativeBackgroundReplace() (optionally prompt it to steer the scene)
    • generativeRestore() for quality
  • Recolor: On a generated tile, open a modal and apply generativeRecolor(<item>, <hex>).

  • 423 handling: When the first request for a derived image hits Cloudinary while it’s still being generated, you might see HTTP 423. The preload helper retries with backoff; for heavy use, consider preparing eager transformations on upload.
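If you go the eager route, the upload options would look something like this (an assumed sketch, not from the repo, using the upload API's eager and eager_async parameters):

```javascript
// Sketch: ask Cloudinary to pre-generate a derivative at upload time, so the
// first delivery request is less likely to hit a 423.
const uploadOptions = {
  resource_type: 'image',
  eager: [{ width: 508, height: 508, crop: 'fill' }],
  eager_async: true, // generate in the background instead of blocking the upload response
}

console.log(Object.keys(uploadOptions)) // [ 'resource_type', 'eager', 'eager_async' ]
```

You would pass this object as the first argument to cloudinary.uploader.upload_stream in the server route.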


8) Testing locally

# Install (already done if you followed along)
npm i

# Run both servers
npm run start:both
# Frontend: http://localhost:3000
# Backend:  http://localhost:8000

Production notes (optional but recommended)

  • Secrets: Keep CLOUDINARY_API_SECRET server-side only; use environment vars on your host.
  • Upload presets: Lock down transformations and content rules with a Cloudinary upload preset.
  • Limits: Add rate limiting to your API if you open it to the public.
  • Validation: Keep the Multer fileFilter and limits in place; consider scanning/validating uploads.
  • Caching/CDN: Cloudinary URLs are CDN-backed; reusing the same public_id improves cache hits.
  • Accessibility: Provide helpful alt text for generated images (the example includes captions).

Wrap-up

FashionistaAI shows how a small React app plus Cloudinary GenAI can turn one upload into four on-brand looks, with background swaps and recolor. Fork it, tweak the prompts, and ship your own AI-powered try-on flow.

If you build something with this, drop a link—DEV readers will want to see it!
