DEV Community

# llm

Posts

GPT-5.4 vs DeepSeek V4 vs GLM-4.7: How to choose the right model without testing each one

Comments
6 min read
RAG Series (4): Document Processing — From Raw Files to High-Quality Chunks

Comments
7 min read
I built react-native-llm-meter, LLM cost tracking for Expo apps

Comments
3 min read
RAG Series (1): Why LLMs Need External Memory

Comments
7 min read
One Open Source Project a Day (No. 54): Warp - The AI-Native Rust Terminal

Comments
4 min read
DeepSeek-V4: What a Million-Token Context Actually Changes

Comments 1
3 min read
Agent Skills Are Just Header Files (And Virtual Memory, And Unix Pipes)

Comments
5 min read
Hello World of RAG - Day 1

Comments
6 min read
AI gives you advice. But is it good advice?

Comments
2 min read
Tenacious-Bench: Building a Sales Domain Evaluation Benchmark When No Dataset Exists

Comments
3 min read
Stop Your RAG Pipeline From Hallucinating: A 15-Line Fix
Comments
9 min read
PFlash Boosts llama.cpp Prefill; Ollama Sees Major Speed Gains; Llama 3.2 on Android

Comments
3 min read
AI is your Copilot, not to replace humans

Comments
3 min read
Your AI, Your Rules: Running a Local LLM with GPU Acceleration on Proxmox

Comments
10 min read
Sentinel-Proxy AI Firewall Demo

Comments
1 min read