growth-engine

Installation

$ npx skills add ericosiu/ai-marketing-skills --skill growth-engine

Summary

This skill enables an agent to design A/B and multivariate marketing experiments, log results, run statistical significance tests, and automatically populate a reusable playbook of winning tactics. Use it to systematize experimentation across content, email, ads, SEO, and other channels with statistically rigorous decision logic.

SKILL.MD

Growth Engine

Preamble (runs on skill start)

# Version check (silent if up to date)
python3 telemetry/version_check.py 2>/dev/null || true

# Telemetry opt-in (first run only, then remembers your choice)
python3 telemetry/telemetry_init.py 2>/dev/null || true

Privacy: This skill logs usage locally to ~/.ai-marketing-skills/analytics/. Remote telemetry is opt-in only. No code, file paths, or repo content is ever collected. See telemetry/README.md.


Autonomous growth experimentation framework based on Karpathy's autoresearch pattern applied to marketing. Creates experiments with hypotheses, logs data points, runs statistical analysis (bootstrap CI + Mann-Whitney U), auto-promotes winners to a living playbook, and suggests next experiments. Supports batch mode (up to 10 variants simultaneously).
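The bootstrap confidence interval mentioned above can be sketched roughly as follows. This is a minimal stdlib-only illustration, not the actual implementation in experiment-engine.py (which uses numpy/scipy); the function name and sample data are hypothetical.

```python
import random

def bootstrap_lift_ci(control, variant, iterations=1000, alpha=0.05, seed=42):
    """Bootstrap a percentile CI for the % lift of variant over control.

    Hypothetical sketch: resample each arm with replacement, compute the
    lift of the resampled means, and take the (alpha/2, 1 - alpha/2)
    percentiles of the resulting distribution.
    """
    rng = random.Random(seed)
    lifts = []
    for _ in range(iterations):
        c = [rng.choice(control) for _ in control]
        v = [rng.choice(variant) for _ in variant]
        c_mean = sum(c) / len(c)
        v_mean = sum(v) / len(v)
        if c_mean == 0:
            continue  # avoid division by zero on degenerate resamples
        lifts.append((v_mean - c_mean) / c_mean * 100)
    lifts.sort()
    lo = lifts[int(len(lifts) * (alpha / 2))]
    hi = lifts[int(len(lifts) * (1 - alpha / 2)) - 1]
    return lo, hi

# Example: conversion rates per data point for two variants (made-up numbers)
lo, hi = bootstrap_lift_ci([2.0, 2.5, 1.8, 2.2, 2.1], [3.0, 2.9, 3.2, 2.7, 3.1])
```

If the whole interval sits above zero, the variant is plausibly better; the real engine pairs this with a Mann-Whitney U test before promoting a winner.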

Usage

Use this skill when:

  • Creating or managing A/B or multivariate experiments for any marketing channel
  • Logging experiment data points after content is published or campaigns run
  • Scoring experiments to determine statistical winners
  • Checking the playbook for proven best practices before creating new content
  • Generating weekly scorecards across all channels
  • Monitoring campaign pacing and health

Do NOT use for:

  • One-off content creation (use the playbook output as input, but don't run the engine)
  • Non-experiment analytics or reporting
  • Campaign setup in external platforms (this tracks experiments, not campaign config)

Commands

Create an experiment

python3 experiment-engine.py create \
  --agent <agent_name> \
  --hypothesis "What you expect to happen" \
  --variable "<variable_name>" \
  --variants '["variant_a", "variant_b"]' \
  --metric "<primary_metric>" \
  --cycle-hours 24

Add --batch-mode for 3-10 variant tests. Add --min-samples N to override auto-detection.
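Minimum-sample auto-detection presumably follows the volume tiers documented under Optional Tuning. A hypothetical sketch (the function name is illustrative; the defaults mirror HIGH_VOLUME_AGENTS / LOW_VOLUME_AGENTS):

```python
import os

def min_samples_for(agent: str) -> int:
    """Pick a per-variant sample floor based on the agent's expected volume.

    Hypothetical sketch: high-volume agents need 10 samples per variant,
    low-volume agents need 30, per the documented defaults.
    --min-samples N on the CLI overrides this.
    """
    high = os.environ.get("HIGH_VOLUME_AGENTS", "content,email").split(",")
    low = os.environ.get("LOW_VOLUME_AGENTS", "seo,linkedin,blog").split(",")
    if agent in high:
        return 10
    if agent in low:
        return 30
    return 30  # assumption: unknown agents get the conservative floor

print(min_samples_for("email"), min_samples_for("seo"))
```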

Log a data point

python3 experiment-engine.py log \
  --agent <agent_name> \
  --experiment-id <EXP-ID> \
  --variant "<variant_name>" \
  --metrics '{"metric_name": value}'
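The --metrics argument is a JSON object mapping metric names to numeric values. If you build it programmatically rather than typing it inline, json.dumps produces the right shape (the metric names below are hypothetical examples):

```python
import json

# Hypothetical metrics for one logged data point; keys are metric names,
# values are numbers, matching the --metrics '{"metric_name": value}' shape.
metrics = {"open_rate": 0.42, "click_rate": 0.07}
payload = json.dumps(metrics)
print(payload)
```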

Score an experiment

python3 experiment-engine.py score --agent <agent_name> --experiment-id <EXP-ID>

Statuses: running → trending → keep (winner) or discard (loser)

Winners are auto-promoted to the playbook. Promotion requires p < 0.05 AND a lift of at least 15%.
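The status logic can be read as a small decision function. This is a sketch using the threshold names from Optional Tuning; the actual script's internals may differ, and the discard branch (significantly negative lift) is an assumption:

```python
def score_status(p_value: float, lift_pct: float,
                 p_winner: float = 0.05, p_trend: float = 0.10,
                 lift_win: float = 15.0) -> str:
    """Map a p-value and % lift onto the skill's experiment statuses.

    Hypothetical sketch: 'keep' needs both significance (p < P_WINNER)
    and a practical lift (>= LIFT_WIN); a significant negative lift is
    'discard'; near-significant results are 'trending'; everything else
    stays 'running' while samples accrue.
    """
    if p_value < p_winner and lift_pct >= lift_win:
        return "keep"
    if p_value < p_winner and lift_pct < 0:
        return "discard"
    if p_value < p_trend:
        return "trending"
    return "running"

print(score_status(0.01, 20.0))  # keep
```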

List experiments

python3 experiment-engine.py list --agent <agent_name> [--status running|trending|keep|discard]

Check the playbook

python3 experiment-engine.py playbook --agent <agent_name>

Always check the playbook before creating new content to apply proven best practices.

Suggest next experiments

python3 experiment-engine.py suggest --agent <agent_name>

Generate weekly scorecard

python3 autogrowth-weekly-scorecard.py [--weeks N] [--output file.md]

Check campaign pacing

python3 pacing-alert.py [--json]

Exit code 0 = on pace, 1 = alerts present.
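The exit-code contract makes pacing-alert.py easy to wire into cron or CI. A sketch of how the script's main path might honor it (function and message format are hypothetical):

```python
def pacing_exit_code(alerts: list[str]) -> int:
    """Return 0 when every campaign is on pace, 1 when any alert fired.

    Hypothetical sketch of the documented exit-code contract; the real
    script builds its alert list from the pacing API responses.
    """
    for a in alerts:
        print(f"ALERT: {a}")
    return 0 if not alerts else 1

# e.g. in the script's entry point: sys.exit(pacing_exit_code(alerts))
```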

Workflow

  1. Before creating content: playbook → apply proven rules
  2. When publishing: log → record which variant was used and its metrics
  3. Periodically: score → check if experiments have reached statistical significance
  4. Weekly: autogrowth-weekly-scorecard.py → review all channels
  5. After completing experiments: suggest → pick the next variable to test

Configuration

Required Environment Variables

  • GROWTH_ENGINE_DATA_DIR: Data directory (default: ./data/experiments)
  • GROWTH_ENGINE_AGENTS: Comma-separated agent names (default: content,email,linkedin,seo,blog)
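Since both variables have documented defaults, the scripts presumably read them with fallbacks, along these lines (a sketch; only the variable names and defaults come from the table above):

```python
import os

# Resolve configuration with the documented defaults when unset.
data_dir = os.environ.get("GROWTH_ENGINE_DATA_DIR", "./data/experiments")
agents = os.environ.get(
    "GROWTH_ENGINE_AGENTS", "content,email,linkedin,seo,blog"
).split(",")
print(data_dir, agents)
```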

Optional Tuning

  • HIGH_VOLUME_AGENTS (default: content,email): Agents needing only 10 samples/variant
  • LOW_VOLUME_AGENTS (default: seo,linkedin,blog): Agents needing 30 samples/variant
  • P_WINNER (default: 0.05): p-value threshold for winner
  • P_TREND (default: 0.10): p-value threshold for trending
  • LIFT_WIN (default: 15.0): Minimum % lift for keep decision
  • BOOTSTRAP_ITERATIONS (default: 1000): Bootstrap resamples for CI
  • BATCH_MODE_MAX_VARIANTS (default: 10): Max variants in batch mode

Pacing Alert Variables

  • PIPELINE_API_URL: Pipeline/CRM API endpoint
  • PIPELINE_AUTH_TOKEN: Bearer token for pipeline API
  • RECRUITING_API_URL: Recruiting API endpoint
  • RECRUITING_AUTH_TOKEN: Bearer token for recruiting API
  • EMAIL_API_URL: Email platform API base URL
  • EMAIL_AUTH_TOKEN: Bearer token for email platform
  • OUTBOUND_CAMPAIGNS: JSON mapping, e.g. {"name": "campaign-id"}
  • RECRUITING_CAMPAIGNS: JSON mapping, e.g. {"name": "campaign-id"}
  • DAILY_LEAD_TARGET: Leads/day target (default: 10)
  • WEEKLY_CANDIDATE_TARGET: Candidates/week target (default: 400)

Dependencies

pip install numpy scipy