DEV Community

# localai

Posts

đź‘‹ Sign in for the ability to sort posts by relevant, latest, or top.
- We Didn't Migrate from n8n to Python Because n8n Failed (5 min read)
- 3 Korean Innovations for Local AI Agent Inference (3 min read)
- How I Set Up Ollama With n8n and Brought My AI API Costs to Zero (7 min read)
- Qwen 3 vs Llama 3: Configuring Local LLMs for Actual Performance (5 min read)
- Your local AI made a deal. Now prove it. (9 min read)
- Local AI Node on Xiaomi 12 Pro Needs Cooling, Battery Discipline (7 min read)
- Local AI Coding Revolution: Why Open Source Models Are Winning Developer Adoption (4 min read)
- Local AI with Ollama + Claude Code: An Honest Review from a Dev Team That Actually Uses It (7 min read)
- How to Run a Crypto AI Agent on Low-End Hardware in 2026 (No GPU Required) (5 min read)
- Best Mac Mini for Running Local LLMs and OpenClaw: Complete Pricing & Buying Guide (2026) (11 min read)
- LLM Model Names Decoded: A Developer's Guide to Parameters, Quantization & Formats (18 min read)
- Gemma 4 on Apple Silicon: 85 tok/s with a pip install (4 min read)
- Local AI in 2026: Ollama Benchmarks, $0 Inference, and the End of Per-Token Pricing (6 min read)
- How to Audit Your Stack for Offline AI Readiness (6 min read)
- Local LLM Inference in 2026: The Complete Guide to Tools, Hardware & Open-Weight Models (17 min read, 2 comments)
đź‘‹ Sign in for the ability to sort posts by relevant, latest, or top.