137Foundry

How to Set Up Jest for AI-Assisted Unit Test Generation in JavaScript

AI coding assistants generate better unit tests when the testing environment is properly configured. A project with Jest set up correctly, with examples of the testing style you want, and with TypeScript types available produces significantly higher-quality AI-generated tests than a project where the AI is inferring everything from scratch.

This guide walks through configuring Jest for an AI-assisted workflow: the setup steps, the configuration choices that affect generated test quality, and the prompt patterns that work well once the environment is in place.

Photo by Daniil Komov on Pexels

Step 1: Install Jest and Configure It

For a new project, install Jest and the necessary tooling:

npm install --save-dev jest @types/jest
# For TypeScript projects (ts-jest expects typescript to be installed):
npm install --save-dev ts-jest typescript

For TypeScript projects, configure ts-jest in jest.config.ts:

import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/__tests__/**/*.test.ts', '**/*.spec.ts'],
  collectCoverageFrom: [
    'src/**/*.ts',
    '!src/**/*.d.ts',
    '!src/**/index.ts',
  ],
  coverageThreshold: {
    global: {
      branches: 70,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;

The coverage thresholds matter for AI-assisted workflows: if you're using generated tests to hit coverage targets, a branch threshold stops a suite of happy-path tests from passing the gate while leaving conditionals untested.

For JavaScript without TypeScript, use Babel:

npm install --save-dev babel-jest @babel/core @babel/preset-env

And a minimal babel.config.js:

module.exports = {
  presets: [
    ['@babel/preset-env', { targets: { node: 'current' } }],
  ],
};

Step 2: Write Two or Three Example Tests First

Before using AI generation, write two or three tests manually for a simple function. These serve as style examples for AI prompts.

Your example tests should establish:

  • Naming convention (describe block naming, test description patterns)
  • Assertion style (which Jest matchers you prefer)
  • Mock setup pattern (how you structure jest.mock() calls and beforeEach setup)
  • Error testing pattern (expect(() => fn()).toThrow(ErrorType))

Example:

import { calculateDiscount } from '../discount';

describe('calculateDiscount', () => {
  describe('valid inputs', () => {
    it('applies percentage discount correctly', () => {
      expect(calculateDiscount(100, 0.1)).toBe(90);
    });

    it('returns full price when discount is zero', () => {
      expect(calculateDiscount(100, 0)).toBe(100);
    });
  });

  describe('edge cases', () => {
    it('throws RangeError when discount exceeds 1', () => {
      expect(() => calculateDiscount(100, 1.5)).toThrow(RangeError);
    });

    it('returns zero when price is zero', () => {
      expect(calculateDiscount(0, 0.1)).toBe(0);
    });
  });
});

This three-minute investment in manual examples produces substantially better AI-generated tests because the AI matches your conventions rather than generating its own.

Step 3: Configure Coverage Reporting

Add coverage scripts to package.json:

{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage",
    "test:coverage:report": "jest --coverage --coverageReporters=html"
  }
}

Run npm run test:coverage after adding AI-generated tests to verify branch coverage, not just statement coverage. AI generation tends to cluster on statement coverage (executed lines) while missing branch coverage (both sides of conditionals). The coverage report will show exactly which branches aren't tested.
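To see the statement-vs-branch gap concretely, here is a minimal sketch using a hypothetical normalizeScore function, where a single happy-path test reaches 100% statement coverage but only half of the branches:

```javascript
// Hypothetical function: one statement, two branches.
function normalizeScore(raw) {
  return raw > 100 ? 100 : raw;
}

// This single test executes every statement (100% statement coverage)
// but only the false side of the ternary (50% branch coverage):
console.assert(normalizeScore(50) === 50);

// Adding the other branch is what closes the gap the coverage report flags:
console.assert(normalizeScore(150) === 100);
```

Running jest --coverage with only the first assertion in place would show the 100% / 50% split in the report's Branch column; the second assertion closes it.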

Step 4: Structure Your AI Prompts

With Jest configured and examples in hand, the effective prompt pattern includes four parts:

  1. The function you want tested (paste the full implementation)
  2. A reference to your example test file ("match the style in the example below")
  3. An explicit list of test categories ("include: happy path, null inputs, empty array inputs, and TypeError for invalid types")
  4. The function's dependencies (if it imports other modules)

An example prompt:

"Write Jest unit tests for the processUserData function below. Match the test style from the example file. Cover: successful processing of a valid user object, missing required fields (should throw ValidationError), empty string in required field, and null user input. The function imports validateUser from ./validators - mock that module."

Explicit test category requests are more reliable than open-ended prompts. "Write comprehensive tests" produces a different test suite than "Write happy path, null input, missing field, and error propagation tests."

Step 5: Review Generated Tests Systematically

For each AI-generated test file, apply this review checklist:

  • [ ] Does each test name describe the behavior being tested (not just "returns value")?
  • [ ] Does the test actually fail if you break the function it's testing? (Mutation check)
  • [ ] Are mocks replacing external dependencies at the right boundary?
  • [ ] Are error assertions specific enough (type + message where applicable)?
  • [ ] Do async tests use proper await syntax?
  • [ ] Are there tests for both sides of every conditional?
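The async item on the checklist is the one generated tests most often get wrong. A sketch with a hypothetical fetchUser shows the two paths that each need an await (in Jest, these become `await expect(fetchUser(1)).resolves.toEqual(...)` and `await expect(fetchUser(null)).rejects.toThrow(TypeError)`):

```javascript
// Hypothetical async function under test.
async function fetchUser(id) {
  if (id == null) throw new TypeError('id is required');
  return { id, name: 'example' };
}

(async () => {
  // Resolved path: without await, the assertion never runs and the
  // test passes vacuously.
  const user = await fetchUser(1);
  console.assert(user.id === 1);

  // Rejected path: without await, the rejection escapes the test entirely.
  let rejected = false;
  try {
    await fetchUser(null);
  } catch (err) {
    rejected = err instanceof TypeError;
  }
  console.assert(rejected);
})();
```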

The mutation check is the most important step. Change one line of the function (flip a comparison, remove a null check, change a return value) and run the test. If the test still passes, it's not verifying the behavior you changed.
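A sketch of the mutation check against the calculateDiscount example (the implementation here is an assumed one, consistent with the earlier tests):

```javascript
// Assumed implementation matching the example tests above.
function calculateDiscount(price, discount) {
  if (discount < 0 || discount > 1) throw new RangeError('discount out of range');
  return price * (1 - discount);
}

// Mutated copy: the comparison flipped from `> 1` to `>= 1`.
function calculateDiscountMutated(price, discount) {
  if (discount < 0 || discount >= 1) throw new RangeError('discount out of range');
  return price * (1 - discount);
}

// A test pinned to the boundary value catches the mutation:
console.assert(calculateDiscount(100, 1) === 0); // passes on the original

let caught = false;
try {
  calculateDiscountMutated(100, 1); // the same input now throws
} catch (err) {
  caught = true;
}
console.assert(caught); // so a test asserting the return value fails on the mutant
```

If you flip a comparison like this and every generated test still passes, the suite is exercising the line without verifying it.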

Common Jest + AI Configuration Issues

Missing moduleNameMapper for path aliases. If your project uses TypeScript path aliases (@/components/), add moduleNameMapper to jest.config.ts so AI-generated imports resolve correctly.
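A sketch of the mapping, assuming a tsconfig.json paths entry of "@/*": ["src/*"]:

```typescript
// jest.config.ts — moduleNameMapper entry mirroring the tsconfig alias,
// so imports like `@/components/Button` resolve in tests:
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
  },
};

export default config;
```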

Wrong testEnvironment for browser APIs. Functions using DOM APIs need testEnvironment: 'jsdom'. AI-generated tests for browser code may fail in the default node environment.

Inadequate mock cleanup. AI-generated tests sometimes miss jest.clearAllMocks() in afterEach, causing test pollution between test cases. Add it to your global setup.
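A minimal sketch: a jest.setup.js registered through the setupFilesAfterEnv option, so every test file gets the cleanup without AI-generated files having to remember it:

```javascript
// jest.setup.js — loaded in every test file via
// setupFilesAfterEnv: ['<rootDir>/jest.setup.js'] in jest.config.
afterEach(() => {
  // Reset call history on all mocks so one test's interactions
  // can't satisfy another test's assertions.
  jest.clearAllMocks();
});
```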

For the comprehensive workflow on AI test generation beyond Jest configuration, see how to generate unit tests with AI coding assistants. The 137Foundry AI automation services team works with AI-assisted development workflows, including test generation, as part of broader code quality engagements.

Jest documentation and TypeScript documentation are the reference sources for configuration options - useful when AI-generated configuration has edge cases that need adjustment for your specific project.

Automating Test Quality Checks in CI

Once Jest is configured and AI-generated tests are committed, enforce quality through CI rather than relying on individual review.

A minimal GitHub Actions workflow that runs tests and coverage:

name: Test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run test:coverage

If branch coverage falls below the threshold configured in jest.config.ts, the CI step fails and the PR is blocked. This creates a hard gate that catches when AI-generated tests are accepted without adequate coverage review.

For mutation testing in CI, Stryker Mutator can be run as a scheduled job (weekly or before release cuts) rather than on every PR, since mutation testing is slower than regular test runs. Its reports identify which tests are passing but not catching mutations - the most actionable quality signal for AI-generated test suites.
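A minimal Stryker configuration sketch for a Jest project, e.g. in stryker.conf.json (option names per Stryker's JavaScript docs; verify against your installed version):

```json
{
  "testRunner": "jest",
  "mutate": ["src/**/*.ts"],
  "reporters": ["clear-text", "html"]
}
```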

Vitest, an alternative to Jest, offers faster test execution on Vite-based projects and supports equivalent coverage configuration. If you're setting up a new project rather than configuring an existing one, Vitest is worth evaluating alongside Jest. The prompting workflow described in this guide applies to both; the configuration syntax differs in minor ways documented in Vitest's official docs.

OpenAI and other AI providers continue to improve test generation quality with each model update - the setup described here is forward-compatible because it's based on providing explicit context rather than relying on model inference. Better models with the same prompt structure produce better tests without requiring workflow changes.
