March 4, 2026
How I Automated My Next.js Blog with OpenClaw (Step-by-step Tutorial)
How I wired OpenClaw to drop one new MDX blog post into my Next.js GitHub repo every day, plus keyword backlog + scheduled research — without becoming an SEO person.
Quick version: I set up a simple OpenClaw workflow where my Next.js blog lives as MDX files in GitHub, and the agent handles the boring repeat work.
- writes/publishes on schedule
- refreshes keyword ideas during the week
- sends me a short update (title + keywords + PR link)
I run OpenClaw in the cloud on Clawbase, but this exact workflow also works if you self-host OpenClaw on your own machine or another platform.
The exact “chat instructions” that defined the system (a bit embarrassing, not gonna lie)
This is basically what I told my assistant to build:
“I’m in Shanghai (China timezone). Starting tomorrow, make sure you have an article in the repo every day, and auto-merge it at 8pm Beijing time. Also write a short summary here about the keywords the article tries to rank for, and send me an update every day at 8pm Beijing time as well.”
Then I added:
“I prefer a planned editorial calendar, so it’s more focused. Unless something urgent/trending appears, choose the next post from the backlog. Update the backlog every few days.”
That’s really just how it started, no technical stuff at all.
What you’ll build (high level)
Same idea, broken down:
- publishes one blog post daily
- does deeper keyword research + competitor analysis a few times each week
- keeps an eye on urgent/trending topics so you don’t miss important moments
In your GitHub repo
You’ll have:
- `content/blog/YYYY-MM-DD-some-slug.mdx` ← blog posts
- `content/blog/BACKLOG.csv` ← your editorial calendar / keyword backlog
In OpenClaw
You’ll set up 3 scheduled jobs:
- Daily publisher (writes 1 new MDX file + opens a PR)
- Backlog refresh (updates BACKLOG.csv 2×/week)
- Competitor/SERP gap check (weekly)
You can start with just job #1 and add the others later.
Step 0 — Decide your “blog format” (don’t overthink it)
We’re doing the cheapest + simplest option:
- blog posts are files
- we commit them to the repo
- Vercel deploys your site off `main`
Frontmatter schema (the exact one I used)
---
title: "Welcome to the Blog"
date: "2026-03-04"
excerpt: "Your blog is now powered by Markdown and MDX."
published: true
cluster: "Integrations"
---
Notes:
- `cluster` is optional, but I like it because later you can filter posts by type (Setup vs Integrations vs Use-cases etc.)
- slug is just the filename (super easy)
Step 1 — Create the folder in your repo
In your Next.js repo, create:
content/
blog/
That’s it.
You don’t need a CMS. You don’t need a database.
(Your Next.js app still needs code to render the blog, but that part is separate. If you don’t have the blog scaffold yet, you can just ask OpenClaw to generate it for you. If you already have the structure, this is optional — in my setup, I only wanted OpenClaw to push MDX files.)
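If you do ask OpenClaw to generate that scaffold, the core of it is just splitting the frontmatter out of each MDX file. Here’s a stdlib-only sketch of that step — in a real app you’d probably reach for a library like gray-matter; `parseFrontmatter` is my own hypothetical helper, not part of any framework:

```typescript
// Minimal frontmatter parser for MDX files shaped like the schema above.
type Frontmatter = Record<string, string | boolean>;

function parseFrontmatter(src: string): { data: Frontmatter; content: string } {
  // Match a leading "---\n...\n---" block.
  const match = src.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { data: {}, content: src };

  const data: Frontmatter = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    // Strip surrounding quotes; coerce "true"/"false" to booleans.
    const raw = line.slice(idx + 1).trim().replace(/^"|"$/g, "");
    data[key] = raw === "true" ? true : raw === "false" ? false : raw;
  }
  return { data, content: src.slice(match[0].length) };
}
```

Everything after the closing `---` is the MDX body you hand to your renderer.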
Step 2 — Generate the keyword backlog (editorial calendar)
This was the big improvement.
Instead of doing keyword research every day from scratch, I keep a simple CSV in the repo.
Important: OpenClaw can create this for you
In my setup, I didn’t hand-write this file. I told OpenClaw what I wanted, and it generated content/blog/BACKLOG.csv and committed it to GitHub for me.
So you have two choices:
- Option A (manual): you create the file yourself
- Option B (recommended): have OpenClaw bootstrap it (then keep it updated on schedule)
Either way, the end result is the same: your repo contains a strict CSV at:
content/blog/BACKLOG.csv
Use a strict CSV header like this:
keyword,cluster,intent,secondary_keywords,page_type,priority,status,notes,evidence_urls
And example rows:
openclaw install,Setup,Informational,"how to install openclaw; openclaw docker",Blog post,10,todo,"High intent for new users",https://example.com
openclaw slack integration,Integrations,Informational/commercial,"connect openclaw to slack; slack ai assistant",Blog post,8,todo,"Repeat the Telegram pattern",https://example.com
Super important CSV rule
Don’t put section headers like # SETUP inside the CSV.
Keep it strict CSV so automation can parse it.
If you want grouping:
- just filter/sort by the `cluster` column.
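The strict-CSV rule exists because whatever script eventually reads the file has to split rows on commas. Here’s a sketch of the kind of parser the automation ends up running — the helper names are my own, but the header is the exact one above:

```typescript
// Strict BACKLOG.csv parser: fails loudly on comment lines or rows
// with the wrong field count, and respects quoted commas.
const HEADER =
  "keyword,cluster,intent,secondary_keywords,page_type,priority,status,notes,evidence_urls";

function splitCsvLine(line: string): string[] {
  const fields: string[] = [];
  let cur = "";
  let inQuotes = false;
  for (const ch of line) {
    if (ch === '"') {
      inQuotes = !inQuotes; // naive: no escaped "" handling
    } else if (ch === "," && !inQuotes) {
      fields.push(cur);
      cur = "";
    } else {
      cur += ch;
    }
  }
  fields.push(cur);
  return fields;
}

function parseBacklog(csv: string): Record<string, string>[] {
  const lines = csv.trim().split("\n");
  if (lines[0] !== HEADER) throw new Error("unexpected CSV header");
  const cols = HEADER.split(",");
  return lines.slice(1).map((line, n) => {
    if (line.startsWith("#")) throw new Error(`comment line at row ${n + 2}`);
    const fields = splitCsvLine(line);
    if (fields.length !== cols.length)
      throw new Error(`row ${n + 2} has ${fields.length} fields, expected ${cols.length}`);
    return Object.fromEntries(cols.map((c, i) => [c, fields[i]]));
  });
}
```

A `# SETUP` line makes this throw instead of silently corrupting the backlog, which is exactly what you want.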
Step 3 — Enable web search (optional, but makes it way better)
If you want the backlog refresh + competitor check to be real (not guessing), give OpenClaw a web search provider.
I used Brave Search.
- Enable OpenClaw’s `web_search` tool
- Store the Brave API key in config/env (not in chat)
This lets the agent attach SERP evidence URLs right into BACKLOG.csv.
Cost note (at the time I set this up): Brave API is usage-based, around $5 per 1,000 searches, and includes $5 monthly credits.
For personal use, that usually means you can run this basically for free.
Step 4 — Connect GitHub so OpenClaw can publish
OpenClaw needs to be able to:
- clone the repo
- create a branch
- commit 1 MDX file
- push
- open a PR
Do this with a fine-grained GitHub token
Create a fine-grained PAT scoped to ONLY your repo.
Quick setup:
- Go to Fine-grained PATs
- Click Generate new token
- Set:
- Resource owner: your account/org that owns the repo
- Repository access: Only select repositories -> select your product repo
- Set minimum permissions:
- Contents: Read and write
- Pull requests: Read and write (only if you want PRs opened)
- (Optional) Workflows: Read and write (only if you want the agent to edit GitHub Actions later)
Save it as GITHUB_TOKEN in your OpenClaw gateway env/config.
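The exact config format depends on how you run OpenClaw; as a plain env file it could look something like this (the `BRAVE_API_KEY` name is my own choice carrying over from Step 3, not an official variable name):

```
GITHUB_TOKEN=github_pat_xxxxxxxxxxxxxxxx
BRAVE_API_KEY=xxxxxxxxxxxxxxxx
```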
Step 5 — Automated daily publishing job (daily cron)
This is fully automated (cron-triggered).
Once set, this daily loop runs on its own:
- read `content/blog/BACKLOG.csv`
- pick the next row: highest `priority` with `status=todo`
- write a new post as `content/blog/YYYY-MM-DD-<kebab-slug>.mdx`
- open a PR
- send me an update at publish time
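The “pick the next row” step is simple enough to pin down precisely. A TypeScript sketch of the selection and naming logic — the helper names are hypothetical, but the columns match the backlog schema from Step 2:

```typescript
// Selection + naming logic for the daily publish job.
type Row = { keyword: string; priority: string; status: string };

// Highest priority among rows still marked todo.
function pickNext(rows: Row[]): Row | undefined {
  return rows
    .filter((r) => r.status === "todo")
    .sort((a, b) => Number(b.priority) - Number(a.priority))[0];
}

// "openclaw install" -> "openclaw-install"
function slugify(keyword: string): string {
  return keyword
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

function postPath(date: string, keyword: string): string {
  return `content/blog/${date}-${slugify(keyword)}.mdx`;
}
```

After publishing, the job would flip that row’s `status` from `todo` so the same keyword isn’t picked twice.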
Content rules I used (simple, but keeps quality decent)
- use simple words
- step-by-step
- include at least one table
- include code snippets if it makes sense
- add 3–6 citations as plain URLs
The daily update message
At 8pm Beijing, I want an update like:
- title
- slug/filename
- cluster
- primary keyword + secondary keywords
- PR link
Step 6 — Automated keyword refresh (2×/week) + competitor deep dive (weekly cron)
This is also automated with scheduled jobs.
This is how I keep it “planned and focused” instead of random:
- Daily: publish from backlog
- 2×/week: refresh backlog (add 10–20 good ideas, prune duplicates)
- Weekly: competitor/SERP gap check (what ranks, what angles are missing)
This way I don’t live in SEO tools.
Step 7 — My OpenClaw SEO agent schedule
I’m in Shanghai, so everything is Asia/Shanghai.
- Daily publish: 20:00 (8pm Beijing time)
- Backlog refresh: 2×/week (every 3–4 days)
- Competitor deep dive: 1×/week
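If your scheduler takes standard five-field cron expressions with a timezone setting, the three jobs could look like this — the specific weekdays are my own picks, not anything OpenClaw mandates:

```
# timezone: Asia/Shanghai
0 20 * * *     daily publish (8pm)
0 9  * * 1,4   backlog refresh (Mon + Thu)
0 9  * * 0     competitor deep dive (Sunday)
```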
Troubleshooting (stuff that actually happens)
If you hit any issue, honestly the fastest move is usually:
- paste the exact error into OpenClaw
- ask it to fix it directly in your repo
Since it already has write access to your codebase, let it do the boring fix work.
“GitHub clone asks for username/password and fails”
That means the environment can’t do interactive auth.
Use a token-based HTTPS URL with `GITHUB_TOKEN` instead, e.g. `https://<your-username>:${GITHUB_TOKEN}@github.com/<owner>/<repo>.git`.
“My BACKLOG.csv isn’t parsing”
Common causes:
- comment lines like `# SETUP`
- blank lines
- unescaped commas
Fix:
- keep it strict CSV
- put commas inside quoted strings only
“OpenClaw keeps changing my repo code”
Don’t allow that. Constrain your job:
- daily job should commit only the new MDX file
- refresh jobs should commit only BACKLOG.csv
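One way to enforce this is a small pre-PR guard: diff the branch and abort if anything outside the allowed paths changed. A sketch — the function and constants are hypothetical, and `changedFiles` would come from something like `git diff --name-only`:

```typescript
// Returns the changed files that are NOT covered by the allowed patterns.
function violations(changedFiles: string[], allowed: RegExp[]): string[] {
  return changedFiles.filter((f) => !allowed.some((re) => re.test(f)));
}

// Daily job: only one new dated MDX post is allowed.
const DAILY_JOB_ALLOWED = [/^content\/blog\/\d{4}-\d{2}-\d{2}-[a-z0-9-]+\.mdx$/];
// Refresh jobs: only the backlog file is allowed.
const REFRESH_JOB_ALLOWED = [/^content\/blog\/BACKLOG\.csv$/];
```

If `violations(...)` comes back non-empty, the job should stop and report instead of opening the PR.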
TL;DR
If you want a cheap, automated way to publish blog content:
- store posts as `.mdx` in your repo
- keep an editorial backlog in `BACKLOG.csv`
- schedule a daily job to publish the next item
- schedule keyword refresh a couple times a week
If you're an indie maker or small team without an SEO/marketing hire, this helps you keep shipping content without adding another full-time job to your plate.
FAQ — Do I need a new OpenClaw sub-agent for each product?
Short answer: usually no.
If you just want to run the same blog workflow for another product, keep one OpenClaw instance and split things like this:
- one backlog file per product (like `BACKLOG_product2.csv`)
- separate cron job IDs/schedules
- optional separate repo per product
That setup is usually the simplest.
Create a separate agent profile only if you want hard separation:
- different voice/tone for each product
- different default models/tools
- different credentials, repos, or safety rules
- separate memory/workspaces
Sub-agents are still great for parallel one-off tasks.
But for daily publishing, cron jobs are the better fit.
Simple cost framing
If one Clawbase OpenClaw instance is $29/month and you use it for multiple products:
(You can check current plans on the Clawbase pricing page.)
- 2 products -> about $14.50/mo each
- 3 products -> about $9.67/mo each
So with one instance, you can automate one post/day and keep the per-product cost pretty low as you scale.