This article was originally published on AI Study Room.
# Performance Testing Tools: k6 vs Locust vs JMeter

## Introduction
Performance testing is not optional for production services. Without it, you discover scaling bottlenecks during traffic spikes or product launches. The three leading open-source tools in this space (k6, Locust, and JMeter) each approach load testing from a different angle. This article compares their scripting models, distributed testing capabilities, CI integration, and reporting to help you choose the right tool for your use case.
## k6
k6 is a modern load testing tool written in Go and scripted in JavaScript, designed for developer workflows and CI integration:
```javascript
// k6 test script
import http from 'k6/http';
import { check, sleep, group } from 'k6';
import { Rate, Trend, Counter } from 'k6/metrics';

// Custom metrics
const errorRate = new Rate('errors');
const paymentLatency = new Trend('payment_latency');
const successCount = new Counter('successful_payments');

// Test configuration
export const options = {
  stages: [
    { duration: '2m', target: 50 },  // Ramp up to 50 users
    { duration: '5m', target: 100 }, // Ramp up to 100 users
    { duration: '2m', target: 200 }, // Spike to 200
    { duration: '2m', target: 0 },   // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'],
    errors: ['rate<0.05'],
    payment_latency: ['p(99)<2000'],
  },
};

export default function () {
  group('Payment Flow', () => {
    const payload = JSON.stringify({
      amount: 49.99,
      currency: 'USD',
      token: 'tok_test_123',
    });
    const params = {
      headers: { 'Content-Type': 'application/json' },
      tags: { endpoint: 'charge' },
    };
    const response = http.post(
      'https://api.example.com/v1/charges',
      payload,
      params
    );
    check(response, {
      'status is 200': (r) => r.status === 200,
      'response time < 300ms': (r) => r.timings.duration < 300,
      'has transaction id': (r) => r.json('id') !== undefined,
    });
    paymentLatency.add(response.timings.duration);
    errorRate.add(response.status !== 200);
    if (response.status === 200) successCount.add(1);
    sleep(1);
  });
}
```
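Thresholds such as `p(95)<500` gate the run on latency percentiles. As a rough illustration of what a p95 check means, here is a standalone sketch using the nearest-rank method (k6's own summary may use a different interpolation):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical request durations in milliseconds
durations_ms = [120, 180, 210, 250, 300, 320, 410, 480, 520, 900]

p95 = percentile(durations_ms, 95)  # the one slow outlier dominates the tail
passed = p95 < 500                  # mirrors the threshold 'p(95)<500'
```

Note how a single 900 ms outlier fails the p95 gate even though the median is healthy; that is exactly why percentile thresholds catch tail-latency regressions that averages hide.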
### CI Integration
```yaml
# .github/workflows/performance-test.yml
name: Performance Test
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1-5' # Weekdays at 6 AM
jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run k6 test
        uses: grafana/k6-action@v0.3.0
        with:
          filename: tests/performance/payment-flow.js
          flags: --out json=results.json
      # k6 exits with a non-zero code when any threshold fails,
      # so this step fails the job automatically; no separate check is needed.
      - name: Upload results
        if: always() # keep results even when thresholds fail the run
        uses: actions/upload-artifact@v4
        with:
          name: k6-results
          path: results.json
```
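Beyond pass/fail, the `--out json=results.json` stream can be post-processed in CI, for example to track trends over time. Each line is a JSON object; a minimal sketch of pulling request durations out of that stream (the field layout shown is an assumption based on k6's NDJSON point format):

```python
import json

# Sample lines in the shape of k6's NDJSON output (--out json=...); format hedged
sample = """\
{"type":"Point","metric":"http_req_duration","data":{"value":120.5,"tags":{"endpoint":"charge"}}}
{"type":"Point","metric":"http_req_duration","data":{"value":640.2,"tags":{"endpoint":"charge"}}}
{"type":"Point","metric":"http_req_failed","data":{"value":0,"tags":{}}}
"""

def durations(lines, metric="http_req_duration"):
    """Collect values for one metric from a k6 NDJSON stream, skipping other records."""
    out = []
    for line in lines.splitlines():
        point = json.loads(line)
        if point.get("type") == "Point" and point.get("metric") == metric:
            out.append(point["data"]["value"])
    return out

values = durations(sample)
worst = max(values)
```

From here it is a short step to computing percentiles per endpoint tag and posting them to a dashboard or PR comment.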
## Locust
Locust uses Python for test scenarios, making it ideal for teams already in the Python ecosystem:
```python
# locustfile.py
import uuid

from locust import HttpUser, task, between

class PaymentUser(HttpUser):
    wait_time = between(0.5, 2.5)

    def on_start(self):
        """Login before starting tasks"""
        # HttpUser has no built-in id; use a random suffix per simulated user
        username = f"test_user_{uuid.uuid4().hex[:8]}"
        response = self.client.post("/auth/login", json={
            "username": username,
            "password": "test_password",
        })
        self.token = response.json().get("token")
        self.client.headers.update({
            "Authorization": f"Bearer {self.token}"
        })

    @task(3)
    def create_payment(self):
        """Create a payment - weight 3"""
        payload = {
            "amount": 99.99,
            "currency": "USD",
            "description": "Load test payment",
        }
        with self.client.post(
            "/api/v1/charges",
            json=payload,
            catch_response=True,
            name="/api/v1/charges [POST]",
        ) as response:
            if response.status_code != 201:
                response.failure(f"Unexpected status: {response.status_code}")
            elif response.elapsed.total_seconds() > 2.0:
                response.failure("Request took too long")

    @task(1)
    def get_balance(self):
        """Check balance - weight 1"""
        self.client.get("/api/v1/balance", name="/api/v1/balance [GET]")
```
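Locust picks each user's next task in proportion to its weight, so with weights 3 and 1 roughly 75% of iterations hit `create_payment`. A quick standalone sketch of that weighted selection (plain random sampling that mirrors the observable behavior, not Locust's actual internals):

```python
import random

# Task names and weights mirroring the locustfile above
tasks = [("create_payment", 3), ("get_balance", 1)]

# Expand each task by its weight, then sample uniformly from the expanded list
expanded = [name for name, weight in tasks for _ in range(weight)]

random.seed(42)  # fixed seed so the simulation is repeatable
counts = {name: 0 for name, _ in tasks}
for _ in range(10_000):
    counts[random.choice(expanded)] += 1

share = counts["create_payment"] / 10_000  # expected to land near 0.75
```

This is worth keeping in mind when sizing a test: the weights shape the traffic mix, while the user count and `wait_time` shape the overall request rate.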