Tracking Outcomes
Record published content in the learning loop, snapshot tracking data, compute changes between snapshots, and monitor rankings + LLM citations over time.
After publishing content, feed the learning loop so the system can measure what worked and calibrate scoring for future recommendations. The outcome infrastructure is live — this guide walks through the three layers: real-time tracking reads, snapshot-based diffs over time, and outcome event reporting for the learning loop.
The three layers
| Layer | Purpose | Endpoints |
|---|---|---|
| 1. Live reads | Query current rankings, citations, AIO — no history | GET /track/rankings, GET /track/citations, GET /track/aio, GET /track/report |
| 2. Snapshots + diffs | Capture point-in-time state, compute deltas between any two snapshots | POST /track/brands/{id}/snapshot, GET /track/brands/{id}/changes?since= |
| 3. Outcome events | Record recommendation lifecycle events (accepted, published, observed) for learning-loop calibration | POST /track/outcomes/report |
All three work independently — you can mix and match. The recommended pattern after publishing: record an outcome event, capture a baseline snapshot, then diff against it at 7/30/90 days.
Step 1 — record the outcome event
When content is published (or rejected, or observed), log it via the outcome event endpoint. This is the input to the learning loop calibration.
```shell
curl -s -X POST \
  -H "X-API-Key: $BM_API_KEY" \
  -H "Content-Type: application/json" \
  "https://api.boringmarketing.com/track/outcomes/report" \
  -d '{
    "brand_id": "'$BRAND_ID'",
    "recommendation_id": "opp-uuid",
    "brief_id": "brief-uuid",
    "event_type": "completed",
    "notes": "Published at https://yourdomain.com/article"
  }'
```
Supported event_type values:
| Event | When to report it |
|---|---|
| accepted | You picked the recommendation from the queue |
| rejected | You declined the recommendation (include rejection_reason) |
| completed | Content published |
| observed_7d | 7-day milestone observation |
| observed_30d | 30-day milestone observation |
| observed_90d | 90-day milestone observation |
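As an illustration, a rejection event body might look like the following. The brand_id and recommendation_id fields mirror the curl example above; rejection_reason is the field the table calls out, and its value here is made up:

```json
{
  "brand_id": "brand-uuid",
  "recommendation_id": "opp-uuid",
  "event_type": "rejected",
  "rejection_reason": "Topic already covered by an existing page"
}
```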
This endpoint is feature-flagged behind OE_OUTCOME_LINKAGE_ENABLED=true and returns 503 when the flag is off. If you get an unexpected 503, check with your admin.
Step 2 — capture a baseline snapshot
Snapshots are stored point-in-time captures of a brand's rankings and citations. They are used later to compute deltas.
```shell
curl -s -X POST \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/brands/$BRAND_ID/snapshot"
```
Response:
```json
{
  "brand_id": "brand-uuid",
  "snapshot_id": "snap-uuid",
  "created_at": "2026-04-07T10:00:00Z",
  "rankings_count": 127,
  "citations": {
    "chat_gpt": 24,
    "google": 10
  }
}
```
Each snapshot captures up to 200 ranking keywords + per-platform citation counts. Snapshots are cheap — capture one before publishing any new content.
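In a script you will typically want the snapshot's id and created_at for the later diff. A minimal sketch, assuming jq is installed; the sample response above stands in for the live call here:

```shell
# Sample response from POST /track/brands/{id}/snapshot. In practice:
#   response=$(curl -s -X POST -H "X-API-Key: $BM_API_KEY" "...snapshot")
response='{"brand_id":"brand-uuid","snapshot_id":"snap-uuid","created_at":"2026-04-07T10:00:00Z","rankings_count":127,"citations":{"chat_gpt":24,"google":10}}'

# Keep the baseline timestamp: it becomes the ?since= value in step 3
snapshot_id=$(echo "$response" | jq -r '.snapshot_id')
baseline_ts=$(echo "$response" | jq -r '.created_at')
echo "baseline $snapshot_id captured at $baseline_ts"
```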
Step 3 — compute changes against an earlier snapshot
At 7 days (or any interval), compute the delta between the latest snapshot and an earlier one:
```shell
# Capture another snapshot first
curl -s -X POST \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/brands/$BRAND_ID/snapshot"

# Then compute changes vs the original baseline
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/brands/$BRAND_ID/changes?since=2026-04-07T10:00:00Z"
```
The since parameter takes any ISO 8601 datetime. The endpoint finds the most recent snapshot at or before that time and diffs it against the latest snapshot. Returns structured deltas: ranking improvements, ranking declines, citation gains, and citation losses.
Milestone tracking pattern
The canonical pattern for 7/30/90-day observations:
- When you publish: call POST /track/outcomes/report with event_type: "completed", and POST /track/brands/{id}/snapshot to capture a baseline.
- 7 days later: capture a new snapshot, call GET /track/brands/{id}/changes?since=<publish_timestamp>, then call POST /track/outcomes/report with event_type: "observed_7d" and the observed deltas in notes.
- Repeat at 30 and 90 days.
You can schedule the observations with a cron job or your agent's scheduler; the API does not fire them automatically.
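Since you schedule the observations yourself, you need the milestone due dates. A small sketch (GNU date assumed) deriving them from the publish timestamp recorded with the "completed" event:

```shell
# Derive the 7/30/90-day observation due dates from the publish
# timestamp. Arithmetic is done on epoch seconds to stay portable
# across date's relative-date parsing quirks. All times are UTC.
publish_ts="2026-04-07T10:00:00Z"
publish_epoch=$(date -u -d "$publish_ts" +%s)
for days in 7 30 90; do
  due=$(date -u -d "@$((publish_epoch + days * 86400))" +%Y-%m-%dT%H:%M:%SZ)
  echo "observed_${days}d due at $due"
done
# prints: observed_7d due at 2026-04-14T10:00:00Z, and so on
```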
Live reads (no history)
If you just want a current snapshot without saving it:
Google rankings
```shell
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/rankings?brand_id=$BRAND_ID&limit=100"
```
LLM citations
```shell
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/citations?brand_id=$BRAND_ID"
```
AI Overview presence (per topic family hub keyword)
```shell
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/aio?brand_id=$BRAND_ID"
```
Full 30-day report
```shell
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/track/report?brand_id=$BRAND_ID"
```
The learning loop
Outcome events feed the leverage scoring calibration. The multiplier table starts as heuristics (trust_gap: 1.20x, eeat_gap: 1.25x, content_gap: 1.10x, technical_gap: 1.30x) and gets tuned from real results as outcome observations accumulate.
After enough brands have tracked outcomes:
- The system learns which signal types produce the highest ROI
- Format recommendations improve (hub-and-spokes vs single article vs comparison)
- Priority scoring becomes more accurate for your industry
This is the compounding advantage of reporting every outcome event consistently.
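To make the multiplier mechanics concrete, a hypothetical illustration: the base score and the way the multiplier is applied are assumptions for illustration only, not the documented scoring formula, and the real calibration happens server-side.

```shell
# Hypothetical: apply the technical_gap heuristic multiplier (1.30x
# from the table above) to an assumed base priority score of 60.
base_score=60
multiplier=1.30
adjusted=$(awk -v b="$base_score" -v m="$multiplier" 'BEGIN { printf "%.1f", b * m }')
echo "technical_gap adjusted priority: $adjusted"   # prints 78.0
```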
A legacy POST /brand/{brand_id}/outcome?content_url=...&keyword=... endpoint exists for backward compatibility with older integrations. New integrations should use POST /track/outcomes/report instead — it accepts more structured event data and is the endpoint tied to the learning-loop infrastructure.