
丁久

Posted on • Originally published at dingjiu1989-hue.github.io

Microservices Communication Patterns

This article was originally published on AI Study Room. For the full version with working code examples and related articles, visit the original post.


Microservices must communicate with each other to fulfill user requests. Choosing the right communication pattern is one of the most consequential architectural decisions. This guide covers the major patterns with their trade-offs and implementation strategies.

Synchronous vs Asynchronous Communication

The foundational decision is whether services communicate synchronously (blocking, request-response) or asynchronously (event-driven, fire-and-forget).

Synchronous patterns are simpler to implement and debug. A service sends a request and waits for a response. These work well for read operations and workflows that need immediate confirmation.

Asynchronous patterns decouple services and improve resilience. A service emits an event without knowing or caring which other services consume it. These suit high-volume, loosely coupled systems.

Pattern 1: HTTP/REST

The simplest approach: services expose RESTful HTTP endpoints.

```python
import requests

# Service A calls Service B via REST
def get_user_orders(user_id):
    response = requests.get(
        f"http://order-service/api/users/{user_id}/orders",
        timeout=5
    )
    response.raise_for_status()
    return response.json()
```

Pros: Simple, language-agnostic, well-understood, easy to debug.

Cons: Coupling (caller must know the callee's URL), latency (blocking), cascading failures (if order-service is down, this call fails).

Use when: The operation must return immediately, the services have a clear caller-callee relationship, throughput requirements are moderate.
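To soften the cascading-failure risk mentioned above, callers often wrap synchronous requests in retries with exponential backoff. A minimal sketch (the helper name and delay values are illustrative, not from the article):

```python
import time

def call_with_retry(fn, retries=3, base_delay=0.5):
    """Call fn(); on failure, retry with exponential backoff.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage: orders = call_with_retry(lambda: get_user_orders("123"))
```

In production this is usually delegated to a library or service mesh, which also adds circuit breaking so a struggling callee is not hammered with retries.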

Pattern 2: gRPC

gRPC uses Protocol Buffers for efficient binary serialization:

```protobuf
service OrderService {
  rpc GetUserOrders (GetUserOrdersRequest) returns (GetUserOrdersResponse);
  rpc StreamOrderUpdates (StreamRequest) returns (stream OrderUpdate);
}

message GetUserOrdersRequest {
  string user_id = 1;
}

message GetUserOrdersResponse {
  repeated Order orders = 1;
}
```

Client code:

```python
import grpc

# Stubs generated from the proto definition above
from order_service_pb2 import GetUserOrdersRequest
from order_service_pb2_grpc import OrderServiceStub

async def fetch_orders(user_id):
    async with grpc.aio.insecure_channel("order-service:50051") as channel:
        stub = OrderServiceStub(channel)
        # Unary RPCs take a request message, not keyword arguments
        response = await stub.GetUserOrders(GetUserOrdersRequest(user_id=user_id))
        return response.orders
```

Pros: Fast (binary protocol), strongly typed (code generation), supports streaming, built-in load balancing.

Cons: More complex setup, tooling less mature than REST, difficult to inspect traffic.

Use when: High throughput is required, services are within the same cluster, you need streaming capabilities.

Pattern 3: Message Queues

Use a message broker for asynchronous communication:

The Order Service publishes an event:

```python
import json

import pika

def publish_order_created(order):
    connection = pika.BlockingConnection(
        pika.ConnectionParameters('rabbitmq')
    )
    channel = connection.channel()
    channel.exchange_declare(exchange='orders', exchange_type='topic')
    channel.basic_publish(
        exchange='orders',
        routing_key='order.created',
        body=json.dumps(order)
    )
    connection.close()
```

The Notification Service consumes the event:

```python
def on_order_created(ch, method, properties, body):
    order = json.loads(body)
    send_email(order['user_email'], f"Order {order['id']} confirmed")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.queue_declare(queue='order_created')
channel.queue_bind(queue='order_created', exchange='orders', routing_key='order.created')
channel.basic_consume(queue='order_created', on_message_callback=on_order_created)
channel.start_consuming()
```

Pros: Decoupling (services never call each other directly), buffering (queues handle load spikes), resilience (consumer failures don't affect producers).

Cons: Eventual consistency, harder to debug (tracing across queues), operational complexity (managing RabbitMQ, Kafka, or similar).

Use when: Services are fully independent, you need to handle traffic spikes, multiple services react to the same event.
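Because most brokers deliver messages at least once, consumers should also be idempotent: a redelivered event must not send two confirmation emails. A hedged sketch of deduplication by message ID (the in-memory set stands in for a persistent store in real deployments):

```python
processed_ids = set()

def handle_once(message_id, body, handler):
    """Invoke handler(body) only the first time message_id is seen."""
    if message_id in processed_ids:
        return False  # duplicate delivery, skip
    handler(body)
    processed_ids.add(message_id)
    return True
```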

Pattern 4: Event Sourcing and CQRS

Event sourcing stores state changes as an append-only event log. CQRS separates read and write models:

An event-sourced aggregate:

```python
class OrderAggregate:
    def __init__(self, order_id):
        self.order_id = order_id
        self.changes = []  # uncommitted events, appended in order

    def create_order(self, user_id, items):
        self.changes.append({
            'type': 'OrderCreated',
            'data': {'order_id': self.order_id, 'user_id': user_id, 'items': items}
        })

    def mark_shipped(self, tracking_id):
        self.changes.append({
            'type': 'OrderShipped',
            'data': {'order_id': self.order_id, 'tracking_id': tracking_id}
        })
```

Pros: Complete audit trail, temporal queries (state at any point), natural fit for event-driven systems.

Cons: Complex to implement, event store requires careful schema management, read model must be eventually consistent.

Use when: Audit requirements are strict, you need full event history, complex workflows benefit from event replay.
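The aggregate shown earlier only records events; current state is recovered by replaying them. A minimal sketch of that replay, assuming the event shapes above (`apply_event` and `rehydrate` are illustrative helpers, not code from the article):

```python
def apply_event(state, event):
    """Fold a single event into the read-model state."""
    if event['type'] == 'OrderCreated':
        return {**state, 'status': 'created', 'items': event['data']['items']}
    if event['type'] == 'OrderShipped':
        return {**state, 'status': 'shipped',
                'tracking_id': event['data']['tracking_id']}
    return state  # unknown event types are ignored

def rehydrate(order_id, events):
    """Rebuild an order's current state from its event log."""
    state = {'order_id': order_id, 'status': 'new'}
    for event in events:
        state = apply_event(state, event)
    return state
```

The same replay mechanism powers temporal queries: replaying only events up to a given timestamp yields the state as of that moment.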

Pattern 5: Saga Pattern

Sagas manage distributed transactions across services. Two approaches:

Choreography-based saga: Each service publishes events that trigger the next step:

OrderCreated → PaymentService:processPayment → PaymentProcessed → InventoryService:reserveStock → StockReserved

If a step fails, compensating events roll back previous steps:

PaymentFailed → OrderService:
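A compensating handler in the choreography style can be sketched like this (the handler and the `publish` callback are illustrative assumptions, not code from the article):

```python
def on_payment_failed(event, publish):
    """React to PaymentFailed by emitting a compensating event
    that cancels the order created earlier in the saga."""
    publish('order.cancelled', {
        'order_id': event['data']['order_id'],
        'reason': 'payment_failed',
    })
```

Each service owns only its local step and its compensation; no central coordinator holds the whole workflow.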


Read the full article on AI Study Room for complete code examples, comparison tables, and related resources.

Found this useful? Check out more developer guides and tool comparisons on AI Study Room.
