DEV Community

# llm

Posts

- What is an LLM? (And Should You Be Scared of It?) · 2 comments · 5 min read
- Agents That Learn on the Job Are the Only Agents That Survive · 2 min read
- I built a causal memory layer for AI agents after the Replit incident – open source, MIT · 1 min read
- Customer Support Memory Agent · 3 min read
- How I Built an On-Call Agent That Never Forgets a Past Incident · 5 min read
- TERSE Tool Catalog (TTC): Cut Tool Catalog Token Usage by 66.6% in Your AI Agents · 2 comments · 4 min read
- Building Your Own "Google Maps for Codebases": A Guide to Codebase Q&A with LLMs · 6 min read
- Why our support agent gets better the angrier the customer gets · 5 min read
- Designing Agentic AI: From Simple Prompts to Autonomous Loops · 4 min read
- I routed 60 MCP tools through a single proxy — here's what I learned about token waste and security · 2 comments · 4 min read
- Audio AI Agent Pipeline · 5 min read
- How I Built a Production AI Agent for $5/month Using Open Source + OpenRouter · 4 min read
- How to Run GLM 4.7 Flash Locally with Ollama — 30B Quality at 3B Speed · 3 min read
- Let AI Be the Architect, Not the Operator · 5 min read
- Build Your Own AI Code Assistant: LocalLLM + Python Automation · 4 min read