DEV Community

monkeymore studio

I Turned Photos Into ASCII Art Without a Single Server Call—Here's How

Remember when ASCII art was just something you pasted into IRC channels? I always thought it was a neat party trick until I tried building an image-to-ASCII converter that runs entirely in the browser. Turns out, mapping pixels to characters involves more subtle decisions than you'd expect—and doing it without a backend changes everything about the architecture.

This post breaks down how our free online ASCII art generator works under the hood. No servers, no uploads, no privacy headaches. Just your browser, a canvas element, and a carefully chosen string of characters.

Why Keep It in the Browser?

You could absolutely build an image-to-ASCII converter that ships images to a server, processes them with ImageMagick or Python PIL, and sends back the result. Plenty of tools do exactly that. But we went the other direction for a few reasons that matter more than you'd think.

Your Images Never Leave Your Device

When you drag a photo into our tool, it stays on your machine. For designers working with client assets, developers screenshotting proprietary code, or anyone who'd rather not upload personal photos to a random server, that's a big deal. The browser handles everything locally through the Canvas API.

Instant Feedback

Because there's no network round-trip, tweaking parameters feels immediate. Slide the scale factor down to pack more detail in, bump up saturation for more vibrant colors, or swap the character set entirely—the preview updates in real time without a loading spinner in sight.

It Works Offline

Once the page loads, you don't even need an internet connection. The entire rendering engine is a few kilobytes of TypeScript. I find this genuinely useful when I'm on a plane or in a spotty-coffee-shop situation and want to generate some quick ASCII art for a README or a presentation.

The Architecture at a Glance

Here's how the pieces fit together from the moment you drop an image to when you download the result:

The whole pipeline lives in three core files: the generator UI (GeneratorClient.tsx), the ASCII engine (lib/ascify.ts), and a tiny Cloudflare Worker (worker/index.ts) that only handles locale routing for our multilingual landing page. The actual conversion? Zero server involvement.

The Core Data Structures

Before diving into the algorithm, let's look at the options that control everything:

```typescript
export interface AscifyOptions {
  chars: string;        // The character set used for mapping brightness
  charSize: number;     // Font size in pixels
  scaleFactor: number;  // How much to downscale the image (0.01 - 0.2)
  charWidth: number;    // Horizontal spacing per character
  charHeight: number;   // Vertical spacing per character
  autoColor: boolean;   // Use original pixel colors or clamp to maxR/G/B
  maxR: number;         // Red ceiling when autoColor is false
  maxG: number;         // Green ceiling when autoColor is false
  maxB: number;         // Blue ceiling when autoColor is false
  background: string;   // Background color of output canvas
  saturation: number;   // Post-processing saturation multiplier
  brightness: number;   // Post-processing brightness multiplier
  fontFamily: string;   // Font used for rendering
}
```

And the defaults that ship out of the box:

```typescript
export const DEFAULT_OPTIONS: AscifyOptions = {
  chars: `$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\\|()1{}[]?-_+~<>i!lI;:,"^\`'. `,
  charSize: 15,
  scaleFactor: 0.09,
  charWidth: 10,
  charHeight: 18,
  autoColor: true,
  maxR: 255,
  maxG: 255,
  maxB: 255,
  background: "#000000",
  saturation: 1,
  brightness: 1,
  fontFamily: "monospace",
};
```

Notice something about that chars string? It goes from visually dense characters ($, @, B, %) all the way down to barely-there marks: a comma, a quote, a caret, a backtick, an apostrophe, a period, and finally a plain space. This ordering is critical because it lets us map a pixel's brightness directly to a character's visual "weight."

The Algorithm: From Pixel to Character

The ascify function is where the magic happens. Here's the step-by-step breakdown.

Step 1: Prepare the Character Lookup

```typescript
const chars = opts.chars.split("").reverse();
const charLength = chars.length;
const interval = charLength / 256;
```

We reverse the character string so that, against the default black background, the densest characters land on the brightest pixels: after the reversal, index 0 of the array is the space, which gets assigned to the darkest pixels and lets the black background show through. The interval tells us how many characters each brightness level covers. With 70 characters and 256 possible brightness values, each character represents roughly 3.7 brightness steps.

The getChar helper does the actual lookup:

```typescript
function getChar(h: number, charArray: string[], interval: number): string {
  const index = Math.floor(h * interval);
  return charArray[Math.min(index, charArray.length - 1)];
}
```
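To see the mapping in action, here's a self-contained sketch of the same lookup, using a short illustrative character set rather than the production 70-character default:

```typescript
// Shorter illustrative set, ordered dense -> sparse like the default.
const DEMO_CHARS = "@#*+-. ";

// Reversed so index 0 (darkest) gets the sparsest glyph, matching the
// engine's black-background convention.
const chars = DEMO_CHARS.split("").reverse();
const interval = chars.length / 256;

function getChar(h: number, charArray: string[], interval: number): string {
  const index = Math.floor(h * interval);
  return charArray[Math.min(index, charArray.length - 1)];
}

console.log(getChar(0, chars, interval));    // darkest pixel  -> " "
console.log(getChar(128, chars, interval));  // midtone pixel  -> "+"
console.log(getChar(255, chars, interval));  // brightest pixel -> "@"
```

With 7 characters, the interval is 7/256, so each glyph covers a band of about 37 brightness values.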

Step 2: Scale the Image Down

ASCII art works because you're replacing thousands of pixels with a handful of characters. If you tried to map every pixel 1:1, you'd get an impossibly large text block. So we scale down aggressively:

```typescript
const srcWidth = source instanceof HTMLImageElement
  ? source.naturalWidth
  : source.width;
const srcHeight = source instanceof HTMLImageElement
  ? source.naturalHeight
  : source.height;

const scaledW = Math.max(1, Math.floor(opts.scaleFactor * srcWidth));
const scaledH = Math.max(
  1,
  Math.floor(opts.scaleFactor * srcHeight * (opts.charWidth / opts.charHeight))
);
```

That charWidth / charHeight ratio compensates for the fact that monospace characters are typically taller than they are wide. Without this correction, your ASCII art ends up stretched vertically, as if the image had been squeezed from the sides.
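To make the correction concrete, here's a quick worked example with made-up source dimensions (1000×800) and the default scaleFactor, charWidth, and charHeight:

```typescript
// Hypothetical 1000x800 source with the default options.
const scaleFactor = 0.09, charWidth = 10, charHeight = 18;
const srcWidth = 1000, srcHeight = 800;

// 90 columns of characters.
const scaledW = Math.max(1, Math.floor(scaleFactor * srcWidth));

// Only 40 rows: the 10/18 ratio shrinks the row count so that
// 40 rows * 18px tall roughly matches 90 cols * 10px wide in aspect.
const scaledH = Math.max(
  1,
  Math.floor(scaleFactor * srcHeight * (charWidth / charHeight))
);

// Without the correction we'd get 72 rows, and the output canvas
// (900x1296) would be much taller than the source's 5:4 aspect ratio.
console.log(scaledW, scaledH); // 90 40
```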

We draw the source image onto a temporary canvas at this scaled resolution, then read back the raw pixel data:

```typescript
const srcCanvas = document.createElement("canvas");
srcCanvas.width = scaledW;
srcCanvas.height = scaledH;
const srcCtx = srcCanvas.getContext("2d", { willReadFrequently: true })!;
srcCtx.drawImage(source, 0, 0, scaledW, scaledH);
const imageData = srcCtx.getImageData(0, 0, scaledW, scaledH);
const pixels = imageData.data; // Uint8ClampedArray, RGBA per pixel
```

The willReadFrequently: true hint is worth calling out. It tells the browser to optimize for repeated getImageData calls, which matters when you're doing live previews and regenerating the art on every slider adjustment.

Step 3: Create the Output Canvas

While the source canvas is small, the output canvas expands back up because each character occupies charWidth × charHeight pixels:

```typescript
const outCanvas = document.createElement("canvas");
outCanvas.width = scaledW * opts.charWidth;
outCanvas.height = scaledH * opts.charHeight;
const ctx = outCanvas.getContext("2d")!;

ctx.fillStyle = opts.background;
ctx.fillRect(0, 0, outCanvas.width, outCanvas.height);
ctx.font = `${opts.charSize}px ${opts.fontFamily}`;
ctx.textBaseline = "top";
```

Step 4: The Main Pixel Loop

This is the heart of the algorithm. For every pixel in our scaled-down image:

```typescript
let asciiText = ""; // accumulates the plain-text output alongside the canvas

for (let y = 0; y < scaledH; y++) {
  for (let x = 0; x < scaledW; x++) {
    const idx = (y * scaledW + x) * 4;
    let r = pixels[idx];
    let g = pixels[idx + 1];
    let b = pixels[idx + 2];

    // Optional: clamp RGB to user-defined ceilings
    if (!opts.autoColor) {
      if (r >= opts.maxR) r = opts.maxR;
      if (g >= opts.maxG) g = opts.maxG;
      if (b >= opts.maxB) b = opts.maxB;
    }

    // Compute brightness: simple average of channels
    const h = Math.floor(r / 3 + g / 3 + b / 3);
    const char = getChar(h, chars, interval);

    // Draw the character in the original pixel color
    ctx.fillStyle = `rgb(${r},${g},${b})`;
    ctx.fillText(char, x * opts.charWidth, y * opts.charHeight);
    asciiText += char;
  }
  asciiText += "\n";
}
```

The brightness formula r/3 + g/3 + b/3 is intentionally simple. It's essentially an unweighted average of the RGB channels. We could use perceived luminance formulas like 0.299*R + 0.587*G + 0.114*B, but the straightforward average works well enough for ASCII art and keeps the code easy to follow.
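If you're curious how far the two formulas can disagree, this small comparison (illustrative, not part of the engine) shows the gap on pure green versus pure blue:

```typescript
// Two ways to collapse RGB into a single 0-255 brightness value.
function averageBrightness(r: number, g: number, b: number): number {
  return Math.floor(r / 3 + g / 3 + b / 3); // the engine's simple average
}

function perceivedLuminance(r: number, g: number, b: number): number {
  return Math.floor(0.299 * r + 0.587 * g + 0.114 * b); // Rec. 601 weights
}

// Pure green looks far brighter to the eye than pure blue,
// but the unweighted average treats them identically.
console.log(averageBrightness(0, 255, 0), perceivedLuminance(0, 255, 0)); // 85 149
console.log(averageBrightness(0, 0, 255), perceivedLuminance(0, 0, 255)); // 85 29
```

For photographic input the difference mostly shows up in heavily saturated regions; for typical images the simple average is close enough.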

Each character gets drawn in the original pixel color, which is why the output looks like a colorful mosaic made of text rather than plain monochrome ASCII.

Step 5: Post-Processing Filters

If you've cranked up the saturation or brightness sliders, we apply those as a final pass using the Canvas API's filter pipeline:

```typescript
if (opts.saturation !== 1 || opts.brightness !== 1) {
  const enhanced = document.createElement("canvas");
  enhanced.width = outCanvas.width;
  enhanced.height = outCanvas.height;
  const ectx = enhanced.getContext("2d")!;
  ectx.filter = `saturate(${opts.saturation * 100}%) brightness(${opts.brightness * 100}%)`;
  ectx.drawImage(outCanvas, 0, 0);
  return { canvas: enhanced, asciiText };
}
```

This is essentially replicating PIL's ImageEnhance in the browser. We render the already-drawn canvas through a filter layer and capture the result. It's a neat trick that avoids manipulating individual pixels a second time.
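The filter string itself is plain CSS filter syntax, with the option multipliers converted to percentages. A tiny sketch of that conversion (buildFilter is a hypothetical helper, not a function from the codebase):

```typescript
// Convert multiplier-style options (1 = unchanged) into the
// percentage syntax that the canvas filter property expects.
function buildFilter(saturation: number, brightness: number): string {
  return `saturate(${saturation * 100}%) brightness(${brightness * 100}%)`;
}

console.log(buildFilter(1.5, 0.5)); // "saturate(150%) brightness(50%)"
```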

The Character Set Is Half the Battle

Here's something the GitHub Copilot CLI team discovered with their animated ASCII banner: the characters you choose dramatically affect how the final image reads. Their terminal animation needed semantic color roles and careful contrast testing across different terminal themes.

For a browser-based image converter, the constraints are different but equally important. Our default character string is deliberately long (70 characters) to give fine-grained brightness gradations:

```text
$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\|()1{}[]?-_+~<>i!lI;:,"^`'. 
```

But you can swap it for anything. Want pure block characters for a denser look? Use █▓▒░. Want something more readable? Try fewer characters like @%#*+=-:.. The tool doesn't care—it just maps whatever you give it across the 0-255 brightness range.
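Here's a small standalone sketch (ramp is a hypothetical helper, not in the engine) that runs a synthetic brightness ramp through two alternative character sets so you can compare their texture:

```typescript
// Map a synthetic 0-255 brightness ramp through a character set,
// using the same reverse-and-index scheme as the engine.
function ramp(charsStr: string): string {
  const chars = charsStr.split("").reverse();
  const interval = chars.length / 256;
  let out = "";
  for (let h = 0; h < 256; h += 32) {
    const index = Math.floor(h * interval);
    out += chars[Math.min(index, chars.length - 1)];
  }
  return out;
}

console.log(ramp("█▓▒░ "));       // block characters: "  ░░▒▓▓█"
console.log(ramp("@%#*+=-:. "));  // shorter readable set: " .:-+*#%"
```

The block set trades gradation steps for solid coverage, while the longer sets give smoother tonal transitions.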

Browser vs. Terminal: Different ASCII Worlds

Reading about GitHub's Copilot CLI animation made me appreciate how different browser-based ASCII art is from terminal-based work. In a terminal, you're fighting ANSI escape codes, cursor flicker, screen readers, and inconsistent color rendering across emulators. The Copilot team spent over 6,000 lines of TypeScript just handling terminal quirks.

In the browser, we get luxuries terminals can't offer:

  • True RGB color on every character, no ANSI approximations needed
  • A real compositor that handles canvas redraws without flicker
  • No cursor ghosting because we're painting to a bitmap, not streaming stdout
  • Live parameter tuning with instant visual feedback

The tradeoff is that our output is an image or a text file, not a living animation inside a terminal window. Different constraints, different solutions.

Try It Yourself

If you want to see how your photos look rendered in characters, give our ASCII generator a spin. Drag in an image, tweak the settings, and watch your browser turn pixels into text in real time. No uploads, no accounts, no waiting—just a neat little algorithm doing its thing right in your tab.

Top comments (1)

HARD IN SOFT OUT

Why not add a silent-film ASCII video mode? Grab frames via WebCodecs from a webcam or file and output real-time ASCII video, still with zero server calls. It'd be a viral, private, hypnotic little tool.

I love that this stays completely server‑less. Instant privacy, no weird data uploads—perfect for today’s paranoia.

Real issue that jumps to mind: toss a 20MP phone photo at it, and that canvas processing might freeze the tab for seconds. That's the exact moment a user closes the tool. Have you tried shoveling the pixel sampling into a Web Worker? In my own browser-based image projects, that single trick saved the UX.