ClawGear
35 ChatGPT Prompts for QA Engineers: Test Cases, Bug Reports, and Automation Scripts That Actually Work

Quality assurance is the job where you're never done. There's always another edge case to find, another regression to catch, another sprint's worth of test cases to write before the team ships.

ChatGPT doesn't replace QA judgment. It can't tell you whether a bug is critical in your business context. But it can write the first draft of a 50-case test plan in 3 minutes, generate edge cases you missed, and help you write bug reports clear enough that developers actually fix the right thing.

These 35 prompts are built for working QA engineers — manual testers, automation engineers, and QA leads. Use the fill-in-the-blank [BRACKETS] to customize for your actual system.


Section 1: Test Case Generation

Writing test cases from scratch for every feature is the fastest way to fall behind on coverage. These prompts scaffold the work.

Generate [NUMBER] test cases for [FEATURE NAME] in [SYSTEM/APPLICATION]. 
Cover: happy path, edge cases, negative tests, and boundary conditions. 
Format each test case with: ID, description, preconditions, steps, expected result.

Feature description: "[DESCRIBE WHAT THE FEATURE DOES]"
User roles involved: [LIST ROLES]
Inputs involved: [LIST KEY INPUTS/FIELDS]
I'm testing a [FORM/API ENDPOINT/WORKFLOW] that accepts [INPUT TYPE — 
e.g., user registration, file upload, payment amount]. 
Generate test cases covering: valid inputs, invalid inputs, empty/null values, 
boundary values, and SQL injection/XSS attempts.

Field specifications: [PASTE THE FIELD RULES/VALIDATION REQUIREMENTS]
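For numeric fields, the boundary portion of that prompt usually resolves to values at, just inside, and just outside each limit. A minimal sketch of that pattern, assuming a hypothetical rule that payment amounts must be between 1 and 10,000 inclusive:

```python
def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Classic boundary-value analysis for an inclusive numeric range:
    the values at, just inside, and just outside each boundary."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# Hypothetical field rule: payment amount must be 1..10_000 inclusive.
cases = boundary_values(1, 10_000)
print(cases)  # [0, 1, 2, 9999, 10000, 10001]
```

The two out-of-range values (0 and 10001) are your negative tests; the rest confirm the limits are implemented as inclusive, not exclusive.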
Create a risk-based test case priority matrix for [FEATURE/MODULE]. 
Identify the highest-risk scenarios and explain why they deserve priority 
coverage based on: business impact, technical complexity, and historical defect areas.

Feature context: [DESCRIPTION]
Known risks: [LIST ANY YOU'RE AWARE OF]
Previous defects in this area: [MENTION IF ANY]
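A common way to make the resulting matrix concrete is a score of impact times likelihood, each on a 1-5 scale. The scenarios and ratings below are invented for illustration; swap in your own feature areas:

```python
# Illustrative risk-based prioritization: score = impact x likelihood,
# both rated 1-5. Scenario names and ratings are hypothetical.
def risk_score(impact: int, likelihood: int) -> int:
    return impact * likelihood

scenarios = [
    ("checkout payment capture", 5, 4),
    ("profile avatar upload",    2, 3),
    ("order history pagination", 3, 2),
]
ranked = sorted(scenarios, key=lambda s: risk_score(s[1], s[2]), reverse=True)
for name, impact, likelihood in ranked:
    print(f"{risk_score(impact, likelihood):>2}  {name}")
```

The top of the ranked list is where your deepest coverage belongs; the bottom is a candidate for lighter smoke checks.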
Write test cases for the integration between [SYSTEM A] and [SYSTEM B]. 
Cover: successful data flow, timeout handling, authentication failures, 
malformed payload handling, and partial failure scenarios.

Integration type: [REST API/MESSAGE QUEUE/DATABASE SYNC/etc.]
Data being exchanged: [DESCRIPTION]
SLA requirements: [RESPONSE TIME/DATA ACCURACY REQUIREMENTS]
Generate exploratory testing charters for [FEATURE/MODULE]. 
Each charter should define: mission (what to explore), areas (what to focus on), 
time box, and exit criteria.

Feature: [NAME]
Risk areas I've identified: [LIST]

Section 2: Bug Reports and Defect Documentation

A well-written bug report is fixed faster. A poorly written one gets marked "cannot reproduce."

Write a professional bug report for this defect. Use this format: 
Summary, Environment, Steps to Reproduce, Expected Result, Actual Result, 
Severity, Priority, Attachments notes.

What I observed: [DESCRIBE THE BUG IN YOUR OWN WORDS]
Steps I took: [LIST WHAT YOU DID]
Environment: [BROWSER/OS/VERSION/DEVICE]
How often it reproduces: [ALWAYS/SOMETIMES/ONCE]
Rewrite this bug report to be clearer and more reproducible for the 
development team. Remove ambiguity, make steps atomic, and sharpen 
the expected vs actual distinction:

Original report: [PASTE YOUR DRAFT]
Help me classify this defect. Based on the description, suggest: 
severity level (Critical/High/Medium/Low), defect type (functional/UI/performance/security/data), 
root cause category, and which component likely owns it.

Defect description: [DESCRIBE WHAT'S BROKEN]
System area: [MODULE/COMPONENT]
User impact: [WHO IS AFFECTED AND HOW]
Write a regression test scenario to validate that the fix for this bug 
doesn't break related functionality:

Bug that was fixed: [DESCRIPTION]
Fix applied: [WHAT WAS CHANGED]
Related features at risk: [LIST]
Create a defect trend summary report for [SPRINT/RELEASE] covering: 
total defects by severity, top defect categories, which components had 
the most defects, and 3 quality observations worth raising in retrospective.

Raw defect data: [PASTE YOUR NUMBERS OR SUMMARY]
Sprint/release: [NAME OR NUMBER]

Section 3: Test Planning and Strategy

QA without a strategy is just random clicking. These prompts help you plan systematically.

Create a test strategy document for [PROJECT/RELEASE NAME]. Cover: 
test scope, out-of-scope items, test types (functional, integration, performance, 
security, UAT), entry/exit criteria, risk assumptions, and environments needed.

Project description: [BRIEF DESCRIPTION]
Team size: [QA TEAM SIZE]
Timeline: [SPRINT LENGTH/RELEASE DATE]
Tech stack: [FRONTEND/BACKEND/DATABASE]
Write a test plan for [FEATURE OR MODULE]. Include: test objectives, 
resources required, test schedule, test cases summary, defect management approach, 
and sign-off criteria.

Feature summary: [DESCRIPTION]
Acceptance criteria from product: [PASTE IF AVAILABLE]
Dependencies: [WHAT MUST BE READY FIRST]
Design a smoke test suite for [APPLICATION/MODULE] that can be run 
in under 15 minutes after each deployment. Include the 10 most critical 
scenarios that indicate the build is stable enough for full testing.

Application type: [WEB APP/API/MOBILE/DESKTOP]
Core flows: [LIST THE 5 MOST IMPORTANT USER WORKFLOWS]
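Whatever scenarios the prompt returns, a smoke suite needs a harness that fails fast and respects the time budget. A minimal, framework-neutral sketch (the two checks are stand-ins for real flows like login or checkout):

```python
import time

# Minimal smoke-suite harness: each check is a callable returning True/False.
# The suite stops at the first failure and enforces an overall time budget.
def run_smoke_suite(checks, budget_seconds=900):
    start = time.monotonic()
    results = {}
    for name, check in checks:
        results[name] = check()
        if not results[name]:
            break  # a failed smoke check means the build isn't worth full testing
    elapsed = time.monotonic() - start
    passed = all(results.values()) and elapsed <= budget_seconds
    return passed, results

ok, results = run_smoke_suite([
    ("homepage loads", lambda: True),   # stand-in for a real HTTP/UI check
    ("login works",    lambda: True),
])
print(ok, results)
```

Failing fast matters here: if the homepage doesn't load, running nine more checks only delays the "build is broken" signal.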
Create a UAT test script for [FEATURE] that a non-technical business user 
can execute. Use plain language, no QA jargon, include screenshot hints 
where useful, and provide clear pass/fail criteria for each step.

Feature: [NAME AND BRIEF DESCRIPTION]
Business user profile: [WHO WILL RUN THE UAT]
Acceptance criteria: [PASTE FROM USER STORY]
Identify the 10 highest-risk areas for regression testing in [APPLICATION] 
given this set of changes in the upcoming release:

Changes: [LIST THE FEATURES/FIXES IN THIS RELEASE]
Application architecture: [MONOLITH/MICROSERVICES/etc.]
Areas historically prone to regression: [LIST IF KNOWN]

Section 4: API Testing

API testing is where QA catches bugs before they become UI problems. These prompts help cover the API layer thoroughly.

Generate Postman-style test cases (request + assertions) for this API endpoint:

Endpoint: [METHOD + PATH — e.g., POST /api/users/register]
Request body schema: [PASTE JSON SCHEMA OR EXAMPLE]
Authentication: [NONE/API KEY/JWT/OAUTH]
Expected success response: [STATUS CODE + RESPONSE BODY EXAMPLE]

Cover: happy path, missing required fields, invalid data types, 
unauthorized access, duplicate records if applicable.
Write test assertions for this API response in [TEST FRAMEWORK — 
Postman/Jest/pytest/RestAssured]. Validate: status code, response time, 
response schema, specific field values, and error message format on failure.

Endpoint: [NAME/PATH]
Expected response: [PASTE EXAMPLE JSON]
SLA: [MAX RESPONSE TIME IN MS]
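The assertion logic the prompt produces tends to look the same regardless of framework. Here is a framework-neutral sketch that checks a response already captured as plain data, so it works with any HTTP client; the field names and 500 ms SLA are hypothetical:

```python
# Framework-neutral API assertion helper. It operates on captured response
# data (status, parsed body, elapsed time), not on a live HTTP call.
# Field names and the default SLA are hypothetical examples.
def assert_api_response(status, body, elapsed_ms, *, expected_status=200,
                        required_fields=(), max_elapsed_ms=500):
    assert status == expected_status, f"expected {expected_status}, got {status}"
    assert elapsed_ms <= max_elapsed_ms, f"too slow: {elapsed_ms}ms > {max_elapsed_ms}ms"
    for field in required_fields:
        assert field in body, f"missing required field: {field}"

# Example: a captured response from a hypothetical GET /api/users/42
assert_api_response(200, {"id": 42, "email": "a@b.c"}, 120,
                    required_fields=("id", "email"))
print("all assertions passed")
```

Keeping assertions separate from the HTTP call also makes them reusable across Postman-exported data, recorded fixtures, and live runs.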
Design a test scenario for [API WORKFLOW] that chains multiple endpoints. 
The scenario should: authenticate, create a resource, read it back, 
update it, verify the update, and delete it. Flag what to assert at each step.

API endpoints involved: [LIST THEM]
Resource being managed: [DESCRIPTION]
Authentication method: [DESCRIBE]
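The chained scenario above can be sketched independently of HTTP: any client object exposing create/read/update/delete methods can be driven through the same flow. The in-memory client below is a stand-in so the flow runs without a server; swap in a thin wrapper around your real API:

```python
# Chained CRUD scenario decoupled from transport. InMemoryClient is a
# hypothetical stand-in; a real run would wrap your API endpoints instead.
class InMemoryClient:
    def __init__(self):
        self._store, self._next_id = {}, 1
    def create(self, data):
        rid, self._next_id = self._next_id, self._next_id + 1
        self._store[rid] = dict(data)
        return rid
    def read(self, rid):
        return self._store.get(rid)
    def update(self, rid, data):
        self._store[rid].update(data)
    def delete(self, rid):
        del self._store[rid]

def crud_scenario(client):
    rid = client.create({"name": "widget"})
    assert client.read(rid) == {"name": "widget"}   # read-back matches create
    client.update(rid, {"name": "widget v2"})
    assert client.read(rid)["name"] == "widget v2"  # update is visible
    client.delete(rid)
    assert client.read(rid) is None                 # resource gone after delete
    return rid

print(crud_scenario(InMemoryClient()))  # -> 1
```

The asserts at each step are the point: every stage verifies the previous stage's side effect before moving on, so a failure pinpoints which link in the chain broke.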
What contract testing should I implement between [SERVICE A] and [SERVICE B]? 
Describe the consumer-driven contract approach, what interactions to test, 
and how to set up the test structure.

Services: [BRIEF DESCRIPTION OF EACH]
Data exchanged: [DESCRIPTION]
Team ownership: [WHO OWNS EACH SERVICE]
Create an API load test script outline in [K6/JMeter/Locust] for [ENDPOINT]. 
Include: ramp-up pattern, target concurrent users, think time, assertions 
to check under load, and pass/fail SLA thresholds.

Endpoint: [NAME]
Expected load: [CONCURRENT USERS/PEAK]
SLA: [RESPONSE TIME TARGET AT LOAD]

Section 5: Automation Framework and Scripting

Writing automation from scratch is slow. Use AI to scaffold the boilerplate faster.

Write a [Playwright/Cypress/Selenium in Python/Java] test script for this 
user flow. The test should: log in, navigate to [PAGE], perform [ACTION], 
and assert [EXPECTED OUTCOME].

Application URL: [URL]
Login credentials approach: [ENV VARS/CONFIG FILE]
Element selectors (if known): [IDs or data-testid values]
Flow steps: [DESCRIBE STEP BY STEP]
Create a reusable page object model (POM) class for [PAGE NAME] in 
[LANGUAGE/FRAMEWORK]. Include: element locators, action methods 
(click, fill, navigate), and assertion helpers.

Page description: [WHAT THIS PAGE DOES]
Key elements: [LIST INPUT FIELDS, BUTTONS, MODALS ON THIS PAGE]
Framework: [PLAYWRIGHT/SELENIUM/CYPRESS]
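A useful trick when reviewing what the prompt generates: keep the page object framework-agnostic by injecting the driver. This sketch uses hypothetical data-testid selectors and a recording stand-in driver; a real Playwright or Selenium page would plug in the same way:

```python
# Page-object sketch with the driver injected, so the class itself is
# framework-agnostic. Selectors are hypothetical data-testid values.
class LoginPage:
    USERNAME = "[data-testid=username]"
    PASSWORD = "[data-testid=password]"
    SUBMIT   = "[data-testid=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

class RecordingDriver:
    """Stand-in driver that records interactions instead of touching a browser."""
    def __init__(self): self.calls = []
    def fill(self, sel, value): self.calls.append(("fill", sel, value))
    def click(self, sel): self.calls.append(("click", sel))

driver = RecordingDriver()
LoginPage(driver).login("qa_user", "s3cret")
print(driver.calls)
```

Injecting the driver also makes the page object itself unit-testable, which catches selector typos before a full browser run.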
Write a parameterized test that runs [TEST SCENARIO] with these data sets. 
Use [PYTEST/JEST/TESTNG] data-driven approach.

Test scenario: [DESCRIBE WHAT IT TESTS]
Data sets to run: [LIST THE VARIATIONS — e.g., different user roles, 
different input values, different environments]
Framework: [NAME]
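The shape of the output is the same across frameworks: a table of (input, expected) pairs fed through one test body. This plain-Python sketch shows the pattern with a hypothetical username rule; in pytest the same table would feed @pytest.mark.parametrize:

```python
# Data-driven test sketch with a hypothetical validation rule:
# usernames must be 3-20 alphanumeric characters.
def is_valid_username(name: str) -> bool:
    return 3 <= len(name) <= 20 and name.isalnum()

CASES = [
    ("alice",      True),
    ("ab",         False),  # too short
    ("x" * 21,     False),  # too long
    ("bad name!",  False),  # illegal characters
]

def run_data_driven(cases):
    # Collect failures instead of stopping at the first one, so a single run
    # reports every data set that disagrees with its expectation.
    return [(inp, exp) for inp, exp in cases if is_valid_username(inp) != exp]

print(run_data_driven(CASES))  # [] means every data set passed
```

Collecting all failures per run (rather than stopping at the first) is the main advantage data-driven frameworks give you for free.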
Review this test automation code and identify: flaky test risks, 
hardcoded values that should be config-driven, missing assertions, 
and opportunities to improve readability.

Code: [PASTE YOUR TEST CODE]
Framework: [NAME]
Create a GitHub Actions workflow that runs our [FRAMEWORK] test suite 
on [TRIGGER — PR open, merge to main, schedule]. Include: 
environment setup, test execution, reporting step, and failure notification.

Test framework: [NAME]
Language: [PYTHON/JS/JAVA]
Notification method: [SLACK/EMAIL/GITHUB COMMENT]
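For orientation, here is roughly the skeleton such a prompt produces for a Python/pytest suite triggered on pull requests. Treat it as a sketch: the Python version, install step, and report path are placeholders to adapt to your repo, and the failure-notification step is omitted because it depends on your Slack/email tooling:

```yaml
# Sketch only — adjust versions, paths, and the test command to your repo.
name: test-suite
on:
  pull_request:
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest --junitxml=report.xml
      - uses: actions/upload-artifact@v4
        if: always()          # keep the report even when tests fail
        with:
          name: test-report
          path: report.xml
```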

Section 6: Performance and Security Testing Basics

Every QA engineer should have baseline performance and security test skills.

Design a performance test plan for [FEATURE/ENDPOINT/FLOW] using 
[TOOL — k6, JMeter, Gatling]. Define: load profile (concurrent users, 
ramp-up, duration), metrics to capture, and pass/fail thresholds.

Expected load: [PEAK USERS OR TPS]
SLA requirements: [RESPONSE TIME, ERROR RATE, THROUGHPUT]
Test environment: [STAGING/PRODUCTION-LIKE]
Generate an OWASP Top 10 security test checklist tailored to 
[APPLICATION TYPE — web app, REST API, mobile backend]. 
For each risk, describe a specific test scenario I can run manually 
or with [BURP SUITE/OWASP ZAP].
Write test cases for [AUTHENTICATION/AUTHORIZATION] security scenarios. 
Cover: brute force protection, session management, token expiry, 
privilege escalation attempts, and IDOR vulnerabilities.

Auth mechanism: [JWT/SESSION COOKIE/OAUTH]
User roles: [LIST ROLES AND PERMISSIONS]
I want to test [APPLICATION] for SQL injection vulnerabilities manually 
before we set up automated scanning. Give me 10 payloads to test in 
[INPUT FIELDS — search box, login form, filter params] and what to look 
for in the response.
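To show the shape of the answer, here are a few of the classic probe strings alongside a naive heuristic for spotting leaked database errors in responses. This is a first-pass manual aid, not a scanner, and the error markers are only examples of text common drivers leak; absence of an error does not mean absence of injection:

```python
# Classic SQL injection probe strings (well-known textbook examples) and a
# naive response heuristic. A match is a signal worth investigating, not proof.
PAYLOADS = [
    "' OR '1'='1",
    "'; --",
    "\" OR \"1\"=\"1",
    "' UNION SELECT NULL--",
    "1; DROP TABLE users--",
]

# Example fragments that database drivers commonly leak into error pages.
ERROR_MARKERS = ("sql syntax", "sqlite error", "odbc", "ora-", "mysql")

def looks_suspicious(response_text: str) -> bool:
    """True if the response leaks database error text."""
    lowered = response_text.lower()
    return any(marker in lowered for marker in ERROR_MARKERS)

print(looks_suspicious("You have an error in your SQL syntax"))  # True
print(looks_suspicious("Invalid username or password"))          # False
```

Only run payloads like these against systems you are authorized to test, and prefer a disposable test environment over staging data you care about.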
Create a test checklist for validating that [FEATURE] complies with 
[GDPR/CCPA/HIPAA] data privacy requirements. Focus on: data minimization, 
consent flows, data deletion, and access controls.

Feature handling personal data: [DESCRIBE WHAT DATA IT COLLECTS/PROCESSES]
Regulation: [WHICH COMPLIANCE STANDARD]

Section 7: QA Process, Reporting, and Leadership

Senior QA engineers spend as much time on process as on testing.

Write a QA release readiness report for [RELEASE NAME/VERSION]. 
Format it for executive stakeholders. Cover: test coverage summary, 
open defect count by severity, go/no-go recommendation, and known risks 
being accepted.

Test results summary: [PASTE YOUR NUMBERS]
Open critical/high bugs: [LIST OR COUNT]
Your recommendation: [GO / NO-GO / CONDITIONAL GO]
Create a presentation outline for a QA retrospective on [SPRINT/RELEASE]. 
Include: what went well (testing effectiveness), what didn't (coverage gaps, 
late defects, process breakdowns), and 3 action items for next cycle.

Retrospective data: [PASTE YOUR NOTES OR METRICS]
Team size: [QA TEAM]
Key incidents this cycle: [ANY ESCAPED DEFECTS OR CRITICAL BUGS]
Write a test coverage analysis that maps our current test cases to the 
acceptance criteria in these user stories. Identify gaps.

User stories: [PASTE THEM]
Existing test cases: [PASTE TEST IDS AND DESCRIPTIONS]
I'm onboarding a new QA engineer to [PROJECT]. Create a 2-week onboarding 
plan that covers: application overview, test environment setup, existing 
test suite walkthrough, first solo testing assignment, and success criteria 
for the first month.

Project type: [WEB/API/MOBILE/etc.]
Their experience level: [JUNIOR/MID/SENIOR]
Write a quality metrics dashboard proposal for our QA team. 
Define 8 metrics we should track, explain why each matters, 
and suggest visualization type (trend line, bar chart, threshold gauge).

Team context: [AGILE/WATERFALL, TEAM SIZE, PRODUCT TYPE]
Current pain points: [WHAT ARE WE BAD AT MEASURING]

The Complete QA Prompt Library

These 35 prompts cover the most time-intensive QA writing tasks. For the full library — including API contract testing templates, automation framework comparisons, and security testing playbooks — grab the complete pack below.

→ 35 ChatGPT Prompts for QA Engineers

Organized by testing phase. Ready to use in any framework or tech stack.

Use LAUNCH30 for 30% off — limited uses remaining.
