Summary
The agent evaluates whether specific AI content use cases are acceptable across six distinct risk categories (Google penalty, channel degradation, hallucination, legal, mediocrity, and first-mover opportunity cost), then produces a tailored risk assessment with mitigation requirements and a go/no-go recommendation based on the organization's risk tolerance and context.
SKILL.MD
Assess AI content implementation risk
When to Activate
You're planning to use generative AI for content creation and need to evaluate whether specific use cases are acceptable for your organization. Load this before implementing AI content workflows, when stakeholders raise concerns about AI content risks, or when deciding between AI-assisted and fully human content production.
Core Knowledge
Six Risk Categories
1. Google Penalty Risk (Low to Medium)
- Google's stance evolved from "AI content = spam" (April 2022) to "helpful content matters more than creation method" (February 2023)
- Key quote: "Automation has long been used in publishing to create useful content. AI can assist with and generate useful content in exciting new ways"
- Detection is difficult and getting harder—more dollars flow into generation than detection
- Human editing creates a blurred line that Google likely cannot distinguish
- Mitigation: Focus on helpful, problem-solving content regardless of creation method
2. Channel Degradation Risk (Medium, Speculative)
- AI may degrade SEO returns through a surge in keyword competition, a flood of content that erodes reader trust, and search engines integrating chat interfaces (fewer clicks through to websites)
- This is the "search singularity"—when marginal cost of content approaches zero, supply overwhelms demand
- Implication: Have contingency plans if SEO becomes less effective. Where would you reallocate budget?
3. Hallucination Risk (High, but Controllable)
- AI will generate false information: fabricated quotes, nonexistent data, plausible-sounding nonsense
- This is intrinsic to how LLMs work—they predict next words, not verify facts
- No internal fact-checking mechanism exists in GPT models
- Training data itself contains errors
- Mitigation: Always put a human in the loop to verify factual claims, quotes, data, and logical coherence
4. Legal Risk (Medium, Evolving)
- Three specific regulatory concerns from legal analysis:
- Personal data leakage: Models trained on public data may expose PII, triggering FTC/state enforcement (especially CA, NY, IL)
- Biased content: Implicit bias from training data can trigger regulatory action
- Lack of copyright protection: The U.S. Copyright Office requires disclosure of AI use and a "human author" for copyright protection
- Safest use cases: Background research, brainstorming, idea generation
- Mitigation: Implement human-review scheme for all published AI content
5. Mediocrity Risk (High)
- AI content quality isn't the problem—GPT-4 writes better than most humans
- The real risk: "functional but forgettable" content that's accurate, articulate, actionable... and useless for business goals
- Publishing volume can replace strategic uniqueness
- Content becomes "soulless imitation"—looks like content marketing but lacks effectiveness
- Great content requires: strategic cohesion, effective distribution, lasting impression beyond basic reader expectations
- Per VC Tomasz Tunguz: "For many use cases, uniqueness won't matter. Product documentation, evergreen content for SEO, canned responses for email." But in other cases, uniqueness is everything
- Mitigation: AI must serve strategy, not replace it. Ensure each piece connects to larger goals
6. First-Mover Advantage Risk (Medium)
- Opportunity cost of waiting while competitors build moat of rankings and backlinks
- Technology adoption is slower than people think—the market won't saturate overnight
- The certain risk: Failure to experiment and learn
- AI will enter every marketing strategy (too good, too cheap to ignore), but application varies by company
Constraints / Hard Rules
- Never publish AI content without human review for factual accuracy, bias, and personal data
- Always disclose AI use in copyright registration applications
- Do not use AI as a substitute for content strategy
- Do not chase volume at the expense of strategic differentiation when uniqueness matters for your use case
Workflow
1. Identify the specific AI use case you're evaluating (e.g., first drafts, research, programmatic SEO, content repurposing, ABM personalization)
2. Score each risk category for this use case:
   - Google penalty risk: Does the content aim to be helpful and solve problems?
   - Channel risk: How dependent is success on SEO specifically?
   - Hallucination risk: How much factual verification is required? Can you implement human review?
   - Legal risk: Does content involve personal data, protected characteristics, or need copyright protection?
   - Mediocrity risk: Does this use case require uniqueness, or is functional adequacy sufficient?
   - First-mover risk: What's the opportunity cost of delay?
3. Assess your specific context variables:
   - Risk tolerance (company, industry, legal constraints)
   - Audience expectations (will they detect/care about AI use?)
   - Resources (can you afford a human review layer?)
   - Goals (volume vs. differentiation priority)
   - Personal beliefs (ethical stance on AI content)
4. Determine mitigation requirements:
   - What level of human review is needed?
   - What strategic guardrails prevent mediocrity?
   - What contingency exists if the channel degrades?
5. Make the implementation decision: proceed, proceed with modifications, or reject the use case
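The scoring-and-decision loop above can be sketched in code. This is a minimal illustration, not part of the source skill: the numeric weights, field names, and the decision rule (an unmitigated High risk blocks the use case; otherwise the worst remaining score is compared against the organization's tolerance) are all assumptions chosen to make the workflow concrete.

```python
from dataclasses import dataclass, field

# Assumed mapping of qualitative risk levels to numeric weights.
LEVELS = {"low": 1, "medium": 2, "high": 3}

# The six risk categories from Core Knowledge.
RISK_CATEGORIES = [
    "google_penalty", "channel_degradation", "hallucination",
    "legal", "mediocrity", "first_mover",
]

@dataclass
class RiskAssessment:
    use_case: str
    scores: dict                                   # category -> "low" | "medium" | "high"
    mitigations: dict = field(default_factory=dict)  # category -> mitigation note

    def recommend(self, risk_tolerance: str = "medium") -> str:
        """Return a go/no-go recommendation for this use case.

        Decision rule (an assumption, not from the source): any 'high'
        risk without a listed mitigation rejects the use case; otherwise
        the worst remaining score is compared against the org's tolerance.
        """
        unmitigated_high = [
            c for c in RISK_CATEGORIES
            if self.scores.get(c) == "high" and c not in self.mitigations
        ]
        if unmitigated_high:
            return "reject: unmitigated high risks: " + ", ".join(unmitigated_high)
        # Unscored categories default to "low".
        worst = max(LEVELS[self.scores.get(c, "low")] for c in RISK_CATEGORIES)
        if worst > LEVELS[risk_tolerance]:
            return "proceed with modifications"
        return "proceed"
```

For example, a "first drafts" use case scored High on hallucination and mediocrity, with human fact-checking as the only mitigation, would come back as `reject: unmitigated high risks: mediocrity` at any tolerance, prompting the strategic guardrails called for in step 4.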
Output Contract
Produce a risk assessment document containing:
- Use case description: Specific AI application being evaluated
- Risk scores: A Low/Medium/High rating for each of the six categories, with brief justification
- Context factors: Your company's specific variables affecting risk tolerance
- Mitigation plan: Required safeguards (especially human review process)
- Recommendation: Clear go/no-go decision with conditions
- Contingency notes: Plans if channel-level risks materialize (especially SEO degradation)
The assessment should enable a clear decision: whether benefits outweigh risks for this specific company, audience, and use case.
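The contract above can be expressed as a serializable skeleton so assessments are machine-checkable and consistent across use cases. The field names and every example value below are illustrative assumptions, not defined by the source.

```python
import json

# Assumed skeleton for the risk assessment document; every value here
# is a made-up example, not guidance from the source.
assessment = {
    "use_case": "Programmatic SEO pages from a product database",
    "risk_scores": {
        "google_penalty":      {"level": "low",    "justification": "pages answer real queries"},
        "channel_degradation": {"level": "medium", "justification": "traffic is SEO-dependent"},
        "hallucination":       {"level": "medium", "justification": "facts come from structured data"},
        "legal":               {"level": "low",    "justification": "no personal data involved"},
        "mediocrity":          {"level": "medium", "justification": "functional adequacy suffices"},
        "first_mover":         {"level": "medium", "justification": "competitors already shipping"},
    },
    "context_factors": ["medium risk tolerance", "audience unlikely to notice AI use"],
    "mitigation_plan": ["human review of every page before publish"],
    "recommendation": {"decision": "proceed", "conditions": ["human review process in place"]},
    "contingency_notes": ["reallocate budget to email and community if SEO returns fall"],
}

# Serialize for storage alongside the content workflow.
document = json.dumps(assessment, indent=2)
```

Keeping the six category keys fixed makes it easy to verify that no risk dimension was skipped before the go/no-go decision is recorded.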
Source: The 6 Risks of AI Content