This article walks through how I built an AI article pipeline for rel8.pl. Instead of one chat and one giant prompt, I run a pipeline of agents: RESEARCH, WRITER, SEO, and FRONTEND, coordinated by MASTER. Each stage reads and writes files, so the process is repeatable, measurable, and easy to fix. In practice that turned 3–5 hours of manual work into 15–30 minutes of my time.
Problem: manual content does not scale
In most small companies the flow looks like this:
- someone picks a topic
- someone researches
- someone writes
- someone handles SEO
- someone publishes on the site
It works, but it is slow and depends on one person’s availability. I used to do the same: one chat window, one big prompt, lots of manual fixes, manual copy-paste to the site.
The biggest issue was not model quality — it was missing role separation. When one AI tries to do everything at once, you get mediocre output and no real control over the process.
Solution: a content system instead of one prompt
I split work into specialised roles and wired them into a pipeline:
- Data collection (Firecrawl)
- Research and synthesis (RESEARCH agent)
- Article writing (WRITER agent)
- SEO packaging (SEO agent)
- On-site publishing (FRONTEND agent in Next.js)
Day to day I use Claude for generation and Warp to drive the pipeline from the terminal. For how we think about terminal AI tools, see our Polish article Warp vs Claude Code.
Architecture: files instead of chat memory
The key rule: agents do not “remember” previous sessions, so the filesystem is the process memory.
It works like “Memento”: without notes you start from zero. My notes are directories and input/output files per stage:
- `data/raw/` — research output
- `data/articles/` — finished article
- `data/seo/` — SEO metadata and publishing checklist
- `/blog/[slug]` — final render on the frontend
That gives three things: full traceability, resume-from-any-step, and surgical edits without breaking the whole flow.
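The directory convention above can be written down as a typed contract. The stage names and paths come from this article; the helper itself is a hypothetical illustration, not code from the project:

```typescript
import * as path from "node:path";

// Each stage writes exactly one output file; the next stage reads it.
// Paths follow the data/raw -> data/articles -> data/seo convention.
type Stage = "research" | "writer" | "seo";

const STAGE_OUTPUT: Record<Stage, (slug: string) => string> = {
  research: (slug) => path.join("data", "raw", `${slug}.md`),
  writer: (slug) => path.join("data", "articles", `${slug}.md`),
  seo: (slug) => path.join("data", "seo", `${slug}.json`),
};
```

Because every stage's output has a predictable path, "resume from any step" is just checking which of these files already exist.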
Agents: who owns what
RESEARCH (Firecrawl + source cleanup)
The RESEARCH agent pulls sources, strips navigation noise, and merges findings into one working document. Output is only a file in data/raw/.
For a deeper dive on data collection, see the Polish article Firecrawl — web scraping for AI.
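The article does not show the cleanup step, so as an illustration only: a minimal navigation filter of this kind might drop link-only lines and common widget text before merging pages into one research document. The heuristics here are my assumption, not the project's actual rules:

```typescript
// Hypothetical cleanup pass: drop lines that look like site navigation
// (link-only lines, cookie banners, login/share widgets) before merging
// scraped pages into the working document in data/raw/.
const NOISE = /^(\[.*\]\(.*\)|menu|share|accept cookies?|log ?in)$/i;

export function stripNav(markdown: string): string {
  return markdown
    .split("\n")
    .filter((line) => !NOISE.test(line.trim()))
    .join("\n");
}
```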
WRITER (article for a specific audience)
WRITER reads only the research file and writes the article for SMB owners. It does not re-research, does not fetch new data, and does not improvise beyond the brief.
That keeps quality repeatable and comparable across topics.
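The "one input file, one fixed brief" constraint can be sketched as a prompt builder. The function name and prompt wording are hypothetical; only the role boundary (research file in, article prompt out) is from the article:

```typescript
import * as fs from "node:fs";

// Hypothetical WRITER input: the agent sees only the research file plus a
// fixed brief -- no web access, no memory of earlier chat sessions.
export function buildWriterPrompt(researchFile: string): string {
  const research = fs.readFileSync(researchFile, "utf8");
  return [
    "You are WRITER. Audience: SMB owners.",
    "Use ONLY the research below. Do not invent sources.",
    "---",
    research,
  ].join("\n");
}
```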
SEO (visibility without rewriting the whole piece)
The SEO agent does not rewrite the article from scratch. It owns the search layer: meta title, meta description, keywords, heading structure, and a publishing checklist.
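The field names below mirror what the article lists (meta title, meta description, keywords, checklist); the 60/160-character limits are conventional SEO rules of thumb I am assuming, not values from the project:

```typescript
// Shape of the SEO agent's output file in data/seo/ (field names from the
// article; length limits are common guidelines, assumed here for illustration).
interface SeoPackage {
  metaTitle: string;
  metaDescription: string;
  keywords: string[];
  checklist: string[];
}

export function validateSeo(seo: SeoPackage): string[] {
  const problems: string[] = [];
  if (seo.metaTitle.length > 60) problems.push("meta title over 60 chars");
  if (seo.metaDescription.length > 160) problems.push("meta description over 160 chars");
  if (seo.keywords.length === 0) problems.push("no keywords");
  return problems;
}
```

A validator like this keeps the SEO layer checkable without ever touching the article body.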
FRONTEND (render and publish)
FRONTEND takes finished Markdown and renders it under /blog/[slug] in Next.js. No CMS, no database — content lives in the repo with Git history.
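With content in the repo, listing blog slugs is just listing files. A minimal sketch, assuming the Next.js App Router and articles stored as `data/articles/<slug>.md` (the helper name is mine):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Articles live in the repo, so the list of published slugs is the list
// of Markdown files in data/articles/.
export function listSlugs(dir = path.join("data", "articles")): string[] {
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".md"))
    .map((f) => f.replace(/\.md$/, ""));
}

// In app/blog/[slug]/page.tsx this could feed generateStaticParams():
//   export function generateStaticParams() {
//     return listSlugs().map((slug) => ({ slug }));
//   }
```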
MASTER (coordination)
MASTER does not author content. It checks which files exist and runs only the missing steps. If research already exists, it skips RESEARCH and starts from WRITER.
In practice I launch a topic with one command and only intervene where a real business decision is needed.
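The skip logic described above reduces to a file-existence check per stage. Stage semantics are from the article; the runner itself is a hypothetical sketch:

```typescript
import * as fs from "node:fs";

type PipelineStage = { name: string; output: string; run: () => void };

// MASTER never authors content: it runs only stages whose output file is
// missing, so an existing research file means RESEARCH is skipped and the
// pipeline starts from WRITER.
export function runPipeline(stages: PipelineStage[]): string[] {
  const executed: string[] = [];
  for (const stage of stages) {
    if (fs.existsSync(stage.output)) continue; // already done, resume here
    stage.run();
    executed.push(stage.name);
  }
  return executed;
}
```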
How it runs on rel8.pl
A typical cycle:
- Pick topic and sources
- Run the pipeline (`npx tsx scripts/generateContent.ts <url> [limit]`)
- Review WRITER output and apply quick edits
- Verify SEO and publish via Git/Next.js
A new article appears after push and deploy — no manual CMS clicking.
This pairs well with the basics of SMB AI adoption in AI in a small company — how to start.
Results: how much time the pipeline actually saves
The biggest win is not “prettier prose” — it is shorter cycle time and predictable work.
Time comparison: manual process 3–5 hours vs AI pipeline 15–30 minutes from brief to publish
Roughly:
- before: 3–5 hours per article
- now: 15–30 minutes of my time
- at four articles per month: 10–15 hours reclaimed
- quality: repeatable, because every topic follows the same path
Side effect: regular publishing improves organic traffic and inbound leads.
What you can implement today
You do not need a huge budget or a complex stack. Start minimal:
- Document agent roles in `AGENTS.md` (inputs, outputs, explicit non-goals)
- Define process folders (`data/raw`, `data/articles`, `data/seo`)
- Run one full cycle on a single topic and tighten the instructions
- Only then add MASTER orchestration on top
What you can gain
Treat the numbers below as directional benchmarks; outcomes depend on industry, input quality, and how you operationalise the workflow.
If you publish irregularly today, the biggest wins are reclaimed time and a predictable content cadence.
In practice:
- less operational work per article
- faster experiments with topics and keywords
- better odds of steady organic growth
- less chaos because each stage has an owner
What we are improving next
Near-term roadmap for the pipeline:
- tighter automation around publishing (less manual Git ceremony)
- better reporting on topic quality and performance
- integrations with marketing and sales workflows
This is not a demo project — it is a working content operating system you can roll out incrementally and scale with the company.