Summary
This skill enables an agent to orchestrate an end-to-end AI content production workflow for simple informational articles, from SERP analysis and content-brief creation through outline approval, drafting, and performance monitoring. Invoke it when you need to produce feature comparisons, how-to guides, listicles, or content updates at scale, and you have the domain expertise to validate quality upfront.
SKILL.MD
Execute AI-assisted content production for informational articles
When to Activate
You need to produce informational content on topics where:
- The topic is simple and informational (not opinion, research, or rapidly changing subjects)
- You have enough domain knowledge to evaluate output quality
- The content follows a repeatable editorial process
- Speed matters more than novel insights
- Examples: feature comparisons, how-to guides, listicles, updates to existing content
Do NOT use this skill for: original research, opinion pieces, breaking news, complex narratives, or topics outside your knowledge domain.
Core Knowledge
The fundamental principle
Human creative processes can be distilled into specific, manageable steps that LLMs can follow. Success comes from front-loading human judgment at the briefing stage, not from editing AI drafts after generation.
Why this matters: Editing completed drafts is harder and less enjoyable than guiding structure upfront. Better to invest effort in detailed briefs and outlines than in substantial rewrites.
The 90/10 rule
90% of success comes from topic selection. This process works well for simple informational content where you have passing familiarity with the subject. It does not replace skilled writers—it's a tool FOR skilled writers.
Process decomposition approach
Break your existing editorial workflow into discrete stages, each documented separately:
- Topic selection and briefing
- Outlining
- Structural editing
- Drafting
- Product/brand integration
- Line editing
- Internal linking
- Metadata creation
- Platform-specific formatting
Each stage should include specific guidelines and examples in Markdown format. Source these from:
- Existing team documentation
- Writing guides (condensed by LLM)
- Editorial courses or training materials
- High-quality published examples from your archive
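The stage decomposition above can be scaffolded mechanically. A minimal sketch, assuming a `docs/` directory and the stage names listed here (both are arbitrary choices, not requirements):

```python
# Sketch: scaffold one Markdown file per editorial stage so each can hold
# its own guidelines and examples. Stage names follow the list above.
from pathlib import Path

STAGES = [
    "01-topic-selection-and-briefing",
    "02-outlining",
    "03-structural-editing",
    "04-drafting",
    "05-product-brand-integration",
    "06-line-editing",
    "07-internal-linking",
    "08-metadata-creation",
    "09-platform-formatting",
]

def scaffold(root: Path) -> list[Path]:
    """Create a stub Markdown file for each stage and return the paths."""
    root.mkdir(parents=True, exist_ok=True)
    paths = []
    for stage in STAGES:
        path = root / f"{stage}.md"
        if not path.exists():
            heading = stage.split("-", 1)[1].replace("-", " ").title()
            path.write_text(f"# {heading}\n\n## Guidelines\n\n## Examples\n")
        paths.append(path)
    return paths
```

Numbering the filenames preserves the sequential order that the custom instructions later rely on.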
Project setup requirements
Use ChatGPT Projects (or equivalent) to:
- Upload process documentation as reference files
- Set custom instructions that apply to all conversations
- Define target audience clearly
- Include instruction to always consult documentation and follow sequential workflow
- Request "in-the-trenches experience" roleplay (generates first-person anecdotes and practical examples)
Critical instruction elements:
- Always reference project files for guidance
- Follow stages in sequential order
- Describe target audience specifically (their skepticism, needs, sophistication level)
- Roleplay as experienced practitioner who's solved real problems
- Set quality bar explicitly (e.g., "write as if your boss will judge this")
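Put together, a condensed custom-instructions block covering these elements might look like the following. The audience description and experience claim are placeholders to adapt, not prescriptions:

```
Always consult the attached process files before responding, and follow
the stages in sequential order: brief -> outline -> structural edit -> draft.

Audience: junior IT administrators; skeptical of vendor claims,
comfortable with CLI basics, short on time.

Roleplay as a practitioner with ten years of hands-on experience who has
solved these problems in production; draw on first-person anecdotes and
practical examples.

Quality bar: write as if your boss will judge this article.
```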
Research integration
For topics requiring current information or validation:
- Use deep research features while working on other tasks
- Review synopsis for key findings
- Incorporate AI-generated summary into content brief
- Example use case: checking if companies support a protocol, verifying recent statements
The content brief structure
Front-load all human judgment here. Include:
- Target keyword - with on-page optimization directions
- Working title - to ensure correct search intent match
- Key points to include - personal anecdotes, research findings, unique angles you want covered
- Subtopics to cover - extracted from SERP analysis of top-ranking competitors
- Product/brand mentions - specific or unusual use cases the LLM might not suggest
Why subtopics matter: Content optimization tools analyze top-ranking pages and extract topics your article must cover to be competitive. This ensures topical completeness without manual SERP review.
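One way to make sure no brief element is forgotten is to treat the brief as a structured object. A sketch, with field names mirroring the five elements above:

```python
# Sketch: the content brief as a checklist-enforcing structure, so all
# human judgment is captured before the first prompt is sent.
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    target_keyword: str            # plus on-page optimization directions
    working_title: str             # confirms search-intent match
    key_points: list[str]          # anecdotes, findings, unique angles
    subtopics: list[str]           # extracted from SERP analysis
    brand_mentions: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Ready only when the judgment-heavy fields are all filled."""
        return bool(self.target_keyword and self.working_title
                    and self.key_points and self.subtopics)
```

Brand mentions default to empty because not every article needs them; the other four fields are mandatory.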
The editing approach
Use Canvas or equivalent inline commenting:
- Read generated outline first, provide structural feedback
- Request changes before drafting (easier than editing drafts)
- Switch to inline comments for draft review
- Leave specific, actionable feedback
Common feedback patterns to use:
- "Too vague, be more specific and cut weasel words"
- "Include a real example to illustrate this point"
- "Correct this wrong idea [explain correct version]"
- "Trim/expand this idea"
- "Simplify this and make it beginner-friendly"
Why this works: the LLM responds instantly, so even an imperfect response can be quickly nudged toward quality, as long as you know what "good" looks like.
What to handle manually
Images: Screenshots, custom graphics, graphs from real data cannot be reliably generated. Add these yourself after content generation, but have LLM suggest placement locations.
Internal link verification: LLMs hallucinate URLs. Either verify all links manually or provide a curated list of actual URLs with descriptions for LLM to choose from.
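The curated-list option can be enforced mechanically: accept only suggested links that appear in your list. A sketch, with placeholder URLs:

```python
# Sketch: instead of trusting model-generated URLs, accept only links
# drawn from a curated list of real pages. URLs below are placeholders.
CURATED_LINKS = {
    "https://example.com/nas-raid-guide": "Explainer on RAID levels",
    "https://example.com/backup-basics": "Intro to 3-2-1 backups",
}

def verify_links(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split LLM-suggested URLs into (approved, rejected)."""
    approved = [u for u in suggested if u in CURATED_LINKS]
    rejected = [u for u in suggested if u not in CURATED_LINKS]
    return approved, rejected
```

Anything in the rejected list is either a hallucination or a page outside your curated set; both need a human decision.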
Performance monitoring approach
Track AI-generated content separately from human content:
- Group AI articles in portfolios/segments
- Monitor keyword rankings, backlinks, estimated traffic
- Check traffic sources (organic, social, email, AI assistants)
- Compare on-page metrics (bounce rate, time on page) to human baseline
Why separation matters: Validates that AI content performs comparably to human content and helps identify quality issues early.
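The baseline comparison can be a simple threshold check. A sketch using time on page; the metric name and the 20% tolerance are illustrative choices, not recommendations from the source:

```python
# Sketch: flag AI articles whose engagement falls meaningfully below the
# human-content baseline. Metric and tolerance are illustrative.
def flag_underperformers(ai_pages: dict[str, dict[str, float]],
                         human_baseline: dict[str, float],
                         tolerance: float = 0.20) -> list[str]:
    """Return URLs whose time-on-page trails the baseline by > tolerance."""
    floor = human_baseline["time_on_page"] * (1 - tolerance)
    return [url for url, metrics in ai_pages.items()
            if metrics["time_on_page"] < floor]
```

The same shape works for bounce rate or estimated traffic; what matters is that AI pages are evaluated against the human baseline, not in isolation.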
Constraints / Hard Rules
- You must have domain knowledge to evaluate output quality—never generate content on topics you can't assess
- Never skip the content brief—all human judgment must be front-loaded
- Read and approve everything before publishing—speed is not an excuse for shoddy content
- Follow stages sequentially—outline before draft, structure before line edits
- Do not use for: opinion pieces, original research, new/changing topics, complex narratives
- Manually verify all internal links—LLMs hallucinate URLs
- Add images manually—generative AI cannot reliably create screenshots or data visualizations
Workflow
Setup (one-time)
- Document your editorial process as discrete stages in Markdown files
- Include guidelines and examples for each stage
- Create LLM project and upload process documentation
- Write custom instructions covering:
- Instruction to always consult files and follow sequential workflow
- Target audience description
- Quality bar and roleplay request
- Persuasive writing frameworks (ethos, pathos, logos)
Per article
1. Analyze SERP for target keyword using content optimization tool
- Extract subtopics from top-ranking pages
- Note content gaps or opportunities
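The subtopic-extraction step above can be approximated by counting how often a heading recurs across the top-ranking pages and keeping the common ones. Real optimization tools use NLP; the exact-match counting here is a deliberate simplification:

```python
# Sketch: treat the (normalized) headings of top-ranking pages as
# candidate subtopics, keeping those that appear on at least half.
from collections import Counter

def common_subtopics(pages: list[list[str]],
                     min_share: float = 0.5) -> list[str]:
    """pages: one list of normalized headings per top-ranking page."""
    counts = Counter(h for headings in pages for h in set(headings))
    threshold = len(pages) * min_share
    return [h for h, n in counts.items() if n >= threshold]
```

Headings that fall below the threshold are still worth a glance: a subtopic only one competitor covers may be a content gap rather than noise.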
2. Create content brief including:
- Target keyword + optimization directions
- Working title
- Key points (anecdotes, unique angles, research findings)
- Subtopics to cover (from SERP analysis)
- Specific product/brand mentions
3. Optional: deep research
- For topics requiring validation or current info
- Run research request while doing other work
- Review synopsis and incorporate findings into brief
4. Generate outline
- Submit content brief to LLM
- Request bullet-point outline following documented format
- Review structure, flow, section logic
- Request structural changes before drafting
5. Generate draft
- Once outline approved, request full draft
- Switch to inline commenting interface
- Read entire draft, leaving specific comments as you go
- Request revisions using common feedback patterns
- Iterate until publish-ready (structure already approved, so focus on clarity and examples)
6. Generate supporting elements
- Request metadata (title tag, meta description)
- Request platform-specific formatting (shortcodes, markup)
- Request 10 internal link suggestions with context
- Request image placement suggestions
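Requested metadata is worth a mechanical sanity check before publishing. A sketch; the 60- and 160-character limits reflect common SERP display guidance, not a hard specification:

```python
# Sketch: validate generated metadata lengths before publishing.
def check_metadata(title_tag: str, meta_description: str) -> list[str]:
    """Return a list of problems; empty means the metadata looks fine."""
    problems = []
    if not title_tag:
        problems.append("title tag is empty")
    elif len(title_tag) > 60:
        problems.append(f"title tag is {len(title_tag)} chars (aim for <= 60)")
    if not meta_description:
        problems.append("meta description is empty")
    elif len(meta_description) > 160:
        problems.append(
            f"meta description is {len(meta_description)} chars (aim for <= 160)")
    return problems
```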
7. Manual finalization
- Verify and fix all internal links
- Add images (screenshots, graphics, charts)
- Final quality review
- Publish
8. Monitor performance
- Track in separate portfolio/segment from human content
- Review rankings, traffic, backlinks
- Compare metrics to human baseline
- Identify quality issues or improvement opportunities
Output Contract
When executing this skill, you produce:
1. Process documentation (one-time setup):
- Markdown files for each editorial stage
- Custom instructions for LLM project
- Target audience definition
2. Per article:
- SERP analysis with competitive subtopics
- Detailed content brief (keyword, title, key points, subtopics, product mentions)
- Approved bullet-point outline
- Publication-ready article draft
- Metadata (title tag, meta description)
- Platform-specific formatting
- Internal link suggestions (manually verified)
- Image placement suggestions
- Performance tracking dashboard showing AI content metrics
Quality bar: AI-generated articles should perform comparably to human-written content in rankings, traffic, and engagement metrics. If they don't, the process needs refinement.