Bijaya Prasad Kuikel
Kafka with Go Part 1 — Understanding Async Systems, Distributed Architecture, and Your First Kafka Publisher/Subscriber

Introduction

Modern web applications are no longer just simple static websites. Years ago, many websites were mostly:

  • static pages
  • blogs
  • content websites
  • simple request-response applications

But modern software systems are very different. Today’s applications do things like:

  • video uploads
  • realtime notifications
  • analytics processing
  • AI inference
  • payment processing
  • image optimization
  • email delivery
  • realtime chat
  • activity tracking
  • stream processing

Modern backend systems are now closer to continuously running software than to a simple "serve HTML and return a response" cycle.
And this creates a very important architectural problem.


The Problem With Long Running Tasks

Suppose a user uploads a video. Your backend now needs to:

  • save the file
  • generate thumbnails
  • compress the video
  • notify followers
  • update analytics
  • scan for moderation

Some of these tasks may take:

  • several seconds
  • minutes
  • sometimes even longer

Now imagine if the user had to wait for ALL of this before receiving a response.

Client
   |
   v
Backend
   |
   +--> Compress Video
   +--> Generate Thumbnail
   +--> Send Notifications
   +--> Update Analytics
   |
   v
Response Returned

This creates a terrible user experience.

The application becomes:

  • slow
  • blocked
  • harder to scale

Why Async Processing Exists

This is exactly why asynchronous systems exist.

Instead of making users wait:

Client -> Backend -> Everything Happens Here

modern systems usually do this:

Client -> Backend -> Queue -> Worker

The backend quickly stores a job into a queue.

Then background workers process the heavy tasks separately.

Now:

  • API becomes fast
  • users get immediate response
  • heavy processing happens in background

This is one of the most important concepts in backend engineering.


Real World Examples

This architecture exists almost everywhere.

Sending Emails

User Signup
    |
    v
Queue Email Job
    |
    v
Email Worker Sends Email

Image Processing

User Uploads Image
    |
    v
Queue Resize Job
    |
    v
Image Worker Processes File

Payment Notifications

Order Created
    |
    v
Queue Notification Job
    |
    v
Notification Worker

What Is a Queue?

A queue is simply:

A middle layer between producers and workers.

Instead of directly doing heavy work:

Backend -> Heavy Task

we place the task into a queue:

Backend -> Queue -> Worker

This gives us:

  • asynchronous processing
  • better performance
  • better scalability
  • loose coupling

Messaging Systems

To implement queues and async systems, we use messaging systems.

| Technology | Common Usage |
| --- | --- |
| Redis Pub/Sub | Lightweight realtime messaging |
| RabbitMQ | Traditional queues |
| Apache Kafka | Distributed event streaming |
| NATS | Lightweight distributed systems |
| Google Pub/Sub | Managed cloud messaging |

All of them solve similar problems differently.


Distributed Systems

As systems grow, architecture evolves. Initially, a single application may work perfectly fine.

This is called a:

Monolith architecture.

Monolith Architecture

+----------------------+
|      Monolith        |
|----------------------|
| Auth                 |
| Orders               |
| Payments             |
| Notifications        |
+----------------------+

Everything lives inside one application. This is actually completely normal. Most successful applications start this way.

But as systems grow:

  • traffic increases
  • teams grow
  • deployments become difficult
  • scaling becomes harder
  • failures affect entire application

Eventually systems evolve into:

Microservices architecture.


Microservices

Instead of one giant application, each service becomes independent.

+---------+
| Auth    |
+---------+

+---------+
| Orders  |
+---------+

+------------+
| Payments   |
+------------+

+----------------+
| Notifications  |
+----------------+

Benefits:

  • isolated deployments
  • independent scaling
  • smaller codebases
  • better team ownership

But now another important problem appears.


How Do Services Communicate?

Suppose:

  • Order service creates order
  • Payment service charges customer
  • Notification service sends email

How should they communicate?

Most beginners first think:

Order Service ---> HTTP ---> Payment Service

This works initially.

But distributed systems become difficult quickly.


Problems With Direct HTTP Communication

Imagine:

Order Service ---> Payment Service

What if:

  • payment service is down?
  • network becomes slow?
  • retries create duplicate requests?
  • traffic spikes suddenly?

Now systems become tightly coupled. One service failure can affect everything. And remember the problem we discussed earlier:

long-running tasks should not block users

That same problem exists here too.

Sync vs. Async Comparison

| Feature | Direct HTTP (Synchronous) | Kafka (Asynchronous) |
| --- | --- | --- |
| Coupling | Tight: services must know each other's IP/URL. | Loose: services only need to know the topic name. |
| Availability | If the target service is down, the request fails. | If the target is down, Kafka stores the data until it returns. |
| User Experience | User waits for every sub-task to finish. | User gets a "Success" as soon as data hits Kafka. |
| Resilience | Hard to handle retries and traffic spikes. | Built-in: Kafka acts as a buffer during high traffic. |

Suppose:

  • sending email becomes slow
  • payment provider becomes delayed
  • analytics system becomes overloaded

Should users wait? Of course not.

So even microservices need:

  • asynchronous communication
  • buffering
  • scalable messaging systems

Event-Driven Communication

Instead of directly calling services, now Order service simply publishes an event:

Order Service -- order_created --> Message Broker ---> Payment Service

It does NOT care:

  • who consumes it
  • how many consumers exist
  • whether consumers are temporarily offline

This creates:

  • loose coupling
  • better scalability
  • better fault tolerance

This Is Where Kafka Comes In

Apache Kafka became extremely popular because it solves these problems at very large scale.

Kafka is heavily used in:

  • analytics systems
  • payment pipelines
  • notification systems
  • activity tracking
  • realtime monitoring
  • distributed systems
  • stream processing

Companies use Kafka because it handles:

  • huge traffic
  • distributed systems
  • realtime event streaming
  • scalable consumers
  • durable event storage

Kafka Is Not Just a Queue

This is important. Kafka can absolutely work like a queue. But Kafka is much more than that.

Kafka is fundamentally:

A distributed event streaming platform.

Meaning:

  • events can be stored
  • replayed later
  • consumed by multiple services
  • processed at massive scale

This becomes extremely powerful in modern architectures.


Kafka in One Simple Sentence

Kafka is basically:

Producer ---> Kafka ---> Consumer

Producer sends events. Consumers receive events.

Kafka stores events safely in between.


Basic Kafka Architecture

+------------+
| Producer   |
+------------+
       |
       v
+----------------+
| Kafka Topic    |
+----------------+
       |
       v
+------------+
| Consumer   |
+------------+

Important Kafka Terms

| Term | Meaning |
| --- | --- |
| Producer | Sends messages |
| Consumer | Reads messages |
| Topic | Message category |
| Broker | Kafka server |
| Event | Actual data/message |

So now we will implement a very simple producer and consumer in Go.

Which Go Package Are We Using?

We will use:

github.com/segmentio/kafka-go

In the past, Sarama was the go-to choice, but it can be quite "heavy" for beginners. We are using kafka-go because:

  • Idiomatic Go: It feels like writing standard Go code.

  • Context Support: Built-in support for context.Context for clean shutdowns.

  • Simpler API: No complex interfaces to implement for a basic consumer.


Kafka Setup (Modern KRaft Mode)

Older Kafka versions required Zookeeper.

Modern Kafka supports:

KRaft mode

which removes Zookeeper completely.

We will use the modern setup.


Install Kafka (Mac)

Using Homebrew:

brew install kafka

Start Kafka:

kafka-server-start /opt/homebrew/etc/kafka/kraft/server.properties

Create Kafka Topic

Open another terminal:

kafka-topics \
  --create \
  --topic orders \
  --bootstrap-server localhost:9092

Verify:

kafka-topics \
  --list \
  --bootstrap-server localhost:9092

You should see:

orders

Running Kafka with Docker

Instead of installing Kafka directly on your machine, we can also run it using Docker.

This is often easier because:

  • setup is cleaner
  • no local Java/Kafka installation needed
  • easier to reset environments
  • works consistently across machines

1. Create a docker-compose.yml

In your project root, create a file named:

docker-compose.yml

Add the following content:

services:
  kafka:
    image: apache/kafka:4.2.0
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      # Configure Kafka to run in KRaft mode
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: controller,broker
      KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER

2. Start Kafka

Run the following command:

docker-compose up -d

This will:

  • pull the Kafka image
  • start Kafka container
  • expose Kafka on port 9092

Verify container is running:

docker ps

You should see a container named:

kafka

3. Create the Kafka Topic

Even with Docker, we still need to create our Kafka topic.

Run:

docker exec -it kafka /opt/kafka/bin/kafka-topics.sh \
  --create \
  --topic orders \
  --bootstrap-server localhost:9092

4. Verify Topic Creation

Run:

docker exec -it kafka /opt/kafka/bin/kafka-topics.sh \
  --list \
  --bootstrap-server localhost:9092

You should see:

orders

Kafka is now ready for our Go producer and consumer.


Create Go Project

mkdir go-kafka-tutorial
cd go-kafka-tutorial

Initialize Go module:

go mod init github.com/<your-github-username>/go-kafka-tutorial

Install kafka-go:

go get github.com/segmentio/kafka-go

Project Structure

go-kafka-tutorial/
├── producer/
│   └── main.go
├── consumer/
│   └── main.go
├── go.mod
└── go.sum

Writing Our First Producer

Create:

producer/main.go

Code:

package main

import (
    "context"
    "log"

    "github.com/segmentio/kafka-go"
)

func main() {
    // 1. Initialize the writer
    writer := &kafka.Writer{
        Addr:     kafka.TCP("localhost:9092"),
        Topic:    "orders",
        Balancer: &kafka.LeastBytes{},
    }

    // 2. Ensure the writer is closed at the end
    defer writer.Close()

    // 3. Write a message
    err := writer.WriteMessages(context.Background(),
        kafka.Message{
            Value: []byte("new order created!"),
        },
    )

    if err != nil {
        log.Fatal("could not write message:", err)
    }

    log.Println("message sent successfully to Kafka!")
}

Writing Our First Consumer

Create:

consumer/main.go

Code:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/segmentio/kafka-go"
)

func main() {
    // 1. Initialize the reader
    reader := kafka.NewReader(kafka.ReaderConfig{
        Brokers:  []string{"localhost:9092"},
        GroupID:  "order-group",
        Topic:    "orders",
        MinBytes: 10e3, // 10KB
        MaxBytes: 10e6, // 10MB
    })

    defer reader.Close()

    fmt.Println("Consumer started... waiting for messages")

    // 2. Loop to read messages
    for {
        m, err := reader.ReadMessage(context.Background())
        if err != nil {
            log.Fatal("could not read message:", err)
        }

        fmt.Printf("Received message: %s\n", string(m.Value))
    }
}

Running the Application

Start consumer first:

go run consumer/main.go

Now in another terminal:

go run producer/main.go

Consumer output:

Received message: new order created!

You just built your first Kafka publisher/subscriber system using Go.

Conclusion

In this part we learned:

  • why async systems exist
  • long running task problems
  • queues and background workers
  • distributed systems basics
  • monolith vs microservices
  • service communication problems
  • messaging systems
  • Kafka fundamentals
  • creating producer and consumer using Go

Most importantly, you now understand:

WHY Kafka exists.

That foundation matters much more than memorizing APIs.
