Business intelligence is undergoing a transformation. Instead of relying on SQL queries, Python scripts, and manual dashboards, organizations are moving toward conversational analytics — systems that understand natural language and automatically discover, analyze, and visualize enterprise data.
This blog explores how to build a Multi‑Agent Generative BI Platform using AWS Bedrock, the Strands Agents SDK, and the Model Context Protocol (MCP). We’ll also see how Standard Operating Procedures (SOPs) bring structure and reliability to agent behavior.
## The Challenge: Bridging Business and Data
In most enterprises, business users depend on technical teams to answer data questions. A simple query like “What are last quarter’s top‑performing products?” can take days to resolve because it requires SQL expertise, data manipulation, and visualization skills.
This delay creates friction and inconsistency — different analysts produce different results, and insights often arrive too late to act on. The goal is to eliminate this dependency by enabling direct, natural‑language interaction with data.
## The Use Case: Oil & Gas Analytics (and Beyond)
Our prototype focuses on oil & gas production analytics, where structured datasets (well logs, sensor readings, geological surveys) are abundant.
Typical questions include:
- “Which wells are underperforming?”
- “Show production trends for the last six months.”
- “Forecast next quarter’s output.”
But the architecture is domain‑agnostic.
Swap the MCP connectors and SOPs, and it works for:
- Retail → “same‑store sales,” “inventory turnover.”
- Healthcare → HIPAA‑compliant data access via FHIR.
- Finance → “risk‑adjusted returns,” “customer lifetime value.”
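Swapping domains amounts to pointing the platform at a different MCP connector and SOP set, without touching the agent code. A minimal sketch of such a domain registry (all server names and file paths here are hypothetical, for illustration only):

```python
# Hypothetical registry mapping a business domain to its MCP connector
# and the SOP files its agents must follow. Changing domains means
# selecting a different entry; the agent logic itself is unchanged.
DOMAIN_REGISTRY = {
    "oil_gas": {
        "mcp_server": "athena-mcp",    # structured well/sensor data
        "sops": ["sops/data_steward.md", "sops/visualizer.md"],
    },
    "retail": {
        "mcp_server": "redshift-mcp",  # sales and inventory tables
        "sops": ["sops/retail_steward.md", "sops/visualizer.md"],
    },
    "healthcare": {
        "mcp_server": "fhir-mcp",      # HIPAA-compliant FHIR access
        "sops": ["sops/phi_steward.md", "sops/visualizer.md"],
    },
}

def connectors_for(domain: str) -> dict:
    """Look up the MCP connector and SOP files for a domain."""
    return DOMAIN_REGISTRY[domain]
```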
## The Solution: Flattened Multi‑Agent Architecture
Traditional multi‑agent systems use deep delegation chains — agents calling sub‑agents recursively. This increases latency and token cost.
Our design uses a flattened hierarchy:
| Role | Responsibility |
|---|---|
| Supervisor Agent | Routes queries, enforces SOPs |
| Data Steward Agent | Handles data discovery via MCP |
| Data Visualizer Agent | Generates charts and uploads to S3 |
Each agent operates independently but follows strict SOPs for predictable behavior.
## Why SOPs Matter
LLMs are probabilistic — their outputs can vary. SOPs introduce determinism and auditability.
Each agent has a markdown file defining its rules using RFC 2119 keywords:
```markdown
# Visualization SOP

## Constraints

- MUST use matplotlib or seaborn for chart generation
- MUST sort data before visualization
- MUST upload chart to S3 bucket: {s3_bucket}
- MUST return the S3 URI in the response
```
SOPs act as contracts between developers and agents — version‑controlled, reviewable, and debuggable.
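Because SOPs are plain markdown, they can be parsed and linted like code. A minimal sketch, assuming the RFC 2119 bullet convention shown above, that extracts the MUST-level constraints an agent has to obey:

```python
import re

def extract_constraints(sop_text: str) -> list[str]:
    """Pull RFC 2119 MUST-level rules out of an SOP markdown document.

    Each constraint is a bullet line beginning with MUST or MUST NOT;
    these are the non-negotiable rules injected into the agent's prompt.
    """
    constraints = []
    for line in sop_text.splitlines():
        match = re.match(r"^\s*-\s*(MUST(?: NOT)?\b.*)", line)
        if match:
            constraints.append(match.group(1).strip())
    return constraints

sop = """# Visualization SOP
## Constraints
- MUST sort data before visualization
- MUST return the S3 URI in the response
- SHOULD prefer bar charts for rankings
"""
print(extract_constraints(sop))
# ['MUST sort data before visualization', 'MUST return the S3 URI in the response']
```

Note that SHOULD-level guidance is deliberately excluded here; only the hard constraints are enforced.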
## Architecture Overview
At a high level, the system components interact as follows:

1. The Business User asks a question.
2. The Supervisor Agent interprets intent and delegates tasks.
3. The Data Steward Agent queries data via MCP.
4. The Data Visualizer Agent generates charts using Bedrock’s Code Interpreter.
5. AWS Bedrock executes code securely and uploads results to S3.
## Implementation in Python
Here’s a simplified Python workflow using the Strands SDK and AWS Bedrock:
```python
from strands import Agent, Supervisor
from strands.sop import load_sop
from bedrock import BedrockClient
import boto3

# Initialize the AWS Bedrock client
bedrock = BedrockClient(region="us-east-1")

# Load SOPs from version-controlled markdown files
data_steward_sop = load_sop("sops/data_steward.md")
visualizer_sop = load_sop("sops/visualizer.md")

# Define specialized agents
class DataStewardAgent(Agent):
    def query(self, question: str):
        # Example: query via MCP (Athena or SQLite MCP server).
        # Stubbed with static data here for illustration.
        return {
            "well_id": ["W1", "W2", "W3"],
            "production": [1200, 950, 870],
        }

class DataVisualizerAgent(Agent):
    def visualize(self, data, chart_type="bar"):
        # Build the plotting script executed by Bedrock's Code Interpreter
        code = f"""
import matplotlib.pyplot as plt

wells = {data['well_id']}
production = {data['production']}
plt.bar(wells, production)
plt.title("Top Wells by Oil Production")
plt.savefig("/tmp/chart.png")
"""
        bedrock.execute_code(code)
        # Upload the rendered chart and return its S3 URI, as the SOP requires
        s3 = boto3.client("s3")
        s3.upload_file("/tmp/chart.png", "my-bi-bucket", "charts/wells.png")
        return "s3://my-bi-bucket/charts/wells.png"

# Supervisor orchestrates agents in a flat hierarchy:
# workers never delegate further, so call depth stays constant
class SupervisorAgent(Supervisor):
    def handle_query(self, query: str):
        data = DataStewardAgent(sop=data_steward_sop).query(query)
        chart_uri = DataVisualizerAgent(sop=visualizer_sop).visualize(data)
        return chart_uri

# Example run
supervisor = SupervisorAgent()
result = supervisor.handle_query("Show me the top 10 wells by oil production")
print("Chart available at:", result)
```
## AWS Bedrock + Strands Framework Synergy
AWS Bedrock provides the secure foundation for LLM execution, guardrails, and code interpretation.
The Strands Framework orchestrates multiple agents with SOP‑driven reliability.
Together, they enable:
- Secure LLM execution
- Structured agent collaboration
- Domain‑agnostic extensibility
- Conversational BI at scale
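Under the hood, each agent turn ultimately resolves to a Bedrock model invocation. As a sketch of the request shape for Bedrock’s Converse API (the model ID and prompts below are illustrative), the payload built here would be passed to `boto3.client("bedrock-runtime").converse(**request)` given valid AWS credentials and model access:

```python
def build_converse_request(
    prompt: str,
    system_prompt: str,
    model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative
) -> dict:
    """Assemble keyword arguments for a single-turn Bedrock Converse call.

    The system prompt is where SOP constraints get injected, so every
    invocation carries the agent's governance rules with it.
    """
    return {
        "modelId": model_id,
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.0},
    }

request = build_converse_request(
    prompt="Which wells are underperforming?",
    system_prompt="MUST answer only from MCP-provided data.",
)
```

Temperature is pinned to 0.0 here for the same reason SOPs exist: to keep agent behavior as repeatable as the model allows.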
## Key Benefits
| Feature | Impact |
|---|---|
| Natural‑language interface | Removes technical barriers for business users |
| SOP‑driven governance | Predictable, auditable agent behavior |
| Security by design | MCP blocks unsafe SQL; Bedrock Guardrails filter content |
| Extensible architecture | Add new agents by defining SOPs and delegation tools |
| Rapid insights | End‑to‑end query‑to‑visualization in under 60 seconds |
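The “MCP blocks unsafe SQL” guarantee can be sketched as a read-only validation gate in front of the query tool. This is a simplified illustration; a production MCP server would enforce this at the connector level, combined with database-side permissions:

```python
import re

# Statement types a read-only BI agent must never issue
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT)\b",
    re.IGNORECASE,
)

def is_safe_sql(sql: str) -> bool:
    """Allow only a single read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject stacked statements
        return False
    if not stripped.upper().startswith("SELECT"):
        return False
    return not FORBIDDEN.search(stripped)

print(is_safe_sql("SELECT well_id, production FROM wells LIMIT 10"))  # True
print(is_safe_sql("DROP TABLE wells"))                                # False
```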
## Conclusion
The combination of AWS Bedrock and Strands Framework represents a new paradigm for enterprise analytics — one where business users interact directly with data through natural language, and multi‑agent systems deliver consistent, auditable results.
This architecture is a blueprint for domain‑agnostic, conversational BI that scales across industries.