From Swagger to Tests: Building an AI-Powered API Test Generator with Python

Working as a QA with APIs can be… well, kind of a nightmare sometimes. APIs are always changing, endpoints get added, status codes get updated, and keeping your tests in sync feels like chasing a moving target.

If you only look at your task board, it’s easy to lose track of what actually changed and what still needs testing.

In the projects I worked on, we had Swagger available for the API. And I thought: wait a minute… why not use AI and Swagger to save time generating tests?

And that’s how this little project started. In this post, I’ll walk you through how I did it, the challenges I faced, and some cool things you can do next.

The Idea

The goal was simple: take the Swagger spec and extract all the useful info, like:

  • HTTP methods
  • Expected status codes
  • Query parameters
  • Request bodies

…and then generate both positive and negative test scenarios automatically.

For example, for a simple GET /users/{id} endpoint, I wanted the output to look like this:

GET /users/{id}
✔ Scenario: Retrieve a user with a valid ID
✔ Scenario: Validate 404 for user not found
✘ Scenario: Missing ID parameter
✘ Scenario: Invalid format for ID

To make this work nicely, I used AI to create the scenarios based on the endpoint’s Swagger specification, following a template I defined.
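To make that extraction step concrete, here's a minimal sketch of pulling methods, parameters, and status codes out of a Swagger/OpenAPI dict. The toy spec and the `summarize_endpoints` helper are illustrative only, not the project's actual parser:

```python
def summarize_endpoints(spec):
    """Flatten a Swagger 'paths' object into (method, path, params, status codes) rows."""
    rows = []
    for path, methods in spec["paths"].items():
        for method, details in methods.items():
            params = [p["name"] for p in details.get("parameters", [])]
            codes = sorted(details.get("responses", {}))
            rows.append((method.upper(), path, params, codes))
    return rows


# Toy spec for illustration only
spec = {
    "paths": {
        "/users/{id}": {
            "get": {
                "parameters": [{"name": "id", "in": "path", "required": True}],
                "responses": {"200": {}, "404": {}},
            }
        }
    }
}

print(summarize_endpoints(spec))
# [('GET', '/users/{id}', ['id'], ['200', '404'])]
```

Those rows are exactly the raw material the AI prompt needs.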

About the project

Stack

  • Python – quick to write, easy to parse data and glue things together
  • Rich / Typer (CLI UX) – because a pretty CLI makes life better
  • Gemini AI – super simple Python integration for AI prompts
  • dotenv – to keep the AI keys safe

Project Structure

api-test-generator/
├── README.md                    # Project documentation
├── requirements.txt             # Python dependencies
├── main.py                      # Entry point
│
├── output/                      # Generated tests
│   ├── get_Books.txt
│   └── post_Books.txt
│
├── functions/                   # Main project modules
│   ├── navigation.py            # CLI navigation
│   ├── read_swagger.py          # Reads Swagger files and URLs
│   └── test_generator.py        # Generates tests and saves them to files
│
└── assets/                      # Theme and example spec
    ├── swaggerexample.json
    └── theme.py

How it works

┌──────────────────────────────┐
│          User / QA           │
│   (CLI Interaction - Rich)   │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│        CLI Interface         │
│   (Typer + Rich Menu)        │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│   Swagger/OpenAPI Loader     │
│  - URL, Manual, or Local JSON│
│  - Validation & Parsing      │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│   API Specification Parser   │
│  - Endpoints                 │
│  - Methods                   │
│  - Parameters                │
│  - Responses / Status Codes  │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│        Gemini AI API         │
│   (Test Case Generation)     │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│     Output Generator         │
│  - Text file export (.txt)   │
│  - Structured scenarios      │
└──────────────────────────────┘

So basically: the user interacts with the CLI → loads the Swagger spec → parses it → builds a prompt → sends it to the AI → the AI returns tests → saves them to a file.
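The glue between those stages is roughly this shape. The names here are hypothetical; the real project splits this logic across navigation.py, read_swagger.py, and test_generator.py:

```python
def run_pipeline(swagger_data, generate, export):
    """Walk every path/method pair, ask the model for tests, and export them.

    `generate` and `export` are injected so the pipeline can be exercised
    without hitting the Gemini API or the filesystem.
    """
    results = []
    for path, methods in swagger_data["paths"].items():
        for method, details in methods.items():
            prompt = f"Generate tests for {method.upper()} {path}: {details}"
            results.append(export(generate(prompt), method, path))
    return results


# Demo with stand-ins for the AI call and the file export:
spec = {"paths": {"/books": {"get": {}, "post": {}}}}
calls = run_pipeline(spec, generate=lambda p: "tests", export=lambda t, m, p: (m, p))
print(calls)  # [('get', '/books'), ('post', '/books')]
```

Injecting the AI and export steps as plain callables also makes the pipeline trivially unit-testable.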

Code Highlights

The test generator

The core idea here was: extract as much info as possible from Swagger so the AI could generate meaningful tests.

Here’s the main function I wrote:

def test_generator(path, method, swagger_data):
    print(f"Generating tests for {method.upper()} {path}...")
    details = swagger_data["paths"][path][method]

    # Prefer the first Swagger tag as a friendly name; fall back to the path.
    tags = details.get("tags") or []
    endpoint_name = tags[0] if tags else path

    request_body = details.get("requestBody", "")
    parameters = details.get("parameters", "")

    # Note the trailing spaces and punctuation: without them the f-string
    # segments run together and confuse the model.
    prompt = (f"Generate positive and negative tests for this endpoint: {path} "
              f"for the method {method.upper()}, "
              f"considering the following specifications: "
              f"Name of the endpoint: {endpoint_name}. "
              f"Request body: {request_body}. "
              f"Query parameters: {parameters}. "
              f"Return the tests following this template: {theme.PROMPT_TEMPLATE}")

    test_scenario = ai_connection(prompt)
    print("Exporting tests to file...")
    export_to_file(test_scenario, method, endpoint_name)

Connecting to Gemini AI

Connecting to the AI is simple: create a client, set the model, and pass the prompt:

import os

from dotenv import load_dotenv
from google import genai


def ai_connection(prompt):
    load_dotenv()  # pulls GOOGLE_API_KEY from a local .env file
    api_key = os.getenv("GOOGLE_API_KEY")
    client = genai.Client(api_key=api_key)

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
    )
    return response.text
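One thing worth wrapping around that call: AI endpoints occasionally rate-limit or time out. A small retry helper (my suggestion, not part of the project) keeps the CLI from dying mid-run:

```python
import time


def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(), retrying on any exception; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)


# Usage, wrapping the call from above:
# text = with_retries(lambda: ai_connection(prompt))
```

In production you'd catch the SDK's specific error types rather than a bare `Exception`, but this is enough for a CLI tool.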

And voilà. The AI returns something like:

POST /api/v1/Books
✔ Scenario: Successfully create a new book with all valid fields
✔ Scenario: Successfully create a new book with only mandatory fields
✔ Scenario: Successfully create a new book using 'text/json; v=1.0' content type

✘ Scenario: Fail to create book due to missing 'title' field  
✘ Scenario: Fail to create book due to missing 'author' field  
✘ Scenario: Fail to create book due to missing 'isbn' field  
✘ Scenario: Fail to create book with an 'isbn' that already exists (conflict)  
✘ Scenario: Fail to create book due to invalid 'isbn' format (e.g., too short, non-numeric where expected)  
✘ Scenario: Fail to create book due to 'publication_year' being a string instead of an integer  
✘ Scenario: Fail to create book due to empty request body  
✘ Scenario: Fail to create book due to malformed JSON in request body  
✘ Scenario: Fail to create book with an empty 'title' string  
✘ Scenario: Fail to create book with an empty 'author' string  

Challenges & Lessons Learned

Honestly, the hardest part was cleaning up Swagger data and building prompts that make sense for the AI.
Another challenge was designing a workflow that actually works in a CLI without feeling clunky.
But in the end, it was super fun, and I learned a lot about AI-assisted testing.

What’s Next

While building this, I started dreaming about all the things I could do next:

  • Automatically generate Postman collections from these tests
  • Integrate with test management tools like Zephyr or Xray
  • Make it a service that monitors Swagger and updates tests whenever endpoints change

The possibilities are endless.

Conclusion

This project really showed me that AI + OpenAPI = massive time saver.

Instead of manually writing dozens of tests for every endpoint, I now have an automated system that generates both positive and negative scenarios in minutes.

Next steps? Think bigger: integrate it with CI/CD pipelines, plug it into test management tools, or even make it monitor APIs in real-time. Smarter, faster, and way less painful API testingβ€”sounds like a win to me.

If you want to check out the full project, explore the code, or try it yourself, it’s all on my GitHub: API Test Generator.

Dive in, experiment, and see how much time you can save!
