# Polling Runs
How to poll async pipeline runs, read stage progress, handle failures, and avoid duplicate runs.
All pipeline endpoints (`/analyze`, `/discover`, `/enrich/brand`, `/enrich/technical`, `/enrich/llm`, `/synthesize`, `/score`, `/brief`, `/execute`) return `202 Accepted` with a `run_id`. The work happens in the background. You poll for status.
## The pattern

Trigger a pipeline endpoint, capture the `run_id` from the `202 Accepted` response, then poll `GET /brands/{brand_id}/runs/{run_id}` until the run reaches `done` or `failed`.
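The trigger-then-poll pattern can be sketched in Python. This is a minimal sketch using only the endpoints and fields described on this page; `trigger_and_poll` and `is_terminal` are illustrative names, and real code should add timeouts and error handling:

```python
import json
import time
import urllib.request

BASE = "https://api.boringmarketing.com"

def is_terminal(status):
    """A run is finished once it reports done or failed."""
    return status in ("done", "failed")

def trigger_and_poll(brand_id, api_key, endpoint="analyze", interval=10):
    """Trigger a pipeline run, then poll it until it reaches a terminal
    status. Sketch only -- no timeout or 409 handling here."""
    headers = {"X-API-Key": api_key}
    req = urllib.request.Request(
        f"{BASE}/brands/{brand_id}/{endpoint}", method="POST", headers=headers)
    with urllib.request.urlopen(req) as resp:
        run_id = json.load(resp)["run_id"]  # 202 Accepted carries the run_id
    while True:
        req = urllib.request.Request(
            f"{BASE}/brands/{brand_id}/runs/{run_id}", headers=headers)
        with urllib.request.urlopen(req) as resp:
            run = json.load(resp)
        if is_terminal(run["status"]):
            return run
        time.sleep(interval)
```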
## Run lifecycle

| Status | Meaning |
|---|---|
| `queued` | Run accepted, waiting to start |
| `running` | Pipeline actively processing |
| `done` | All stages completed successfully |
| `failed` | An error occurred — check the `error` field |
## Reading stage progress

Stage progress is exposed under `meta.current_stage` on the run object, not at the top level:
```json
{
  "id": "run-uuid",
  "brand_id": "brand-uuid",
  "kind": "full",
  "status": "running",
  "error": null,
  "started_at": "2026-04-07T10:00:00Z",
  "finished_at": null,
  "meta": {
    "current_stage": "evidence_collection",
    "stage_detail": "analyzing SERPs for family 3 of 12",
    "stage_started_at": "2026-04-07T10:06:42Z"
  }
}
```
Stages progress through these six values in order:
- `brand_enrichment` — crawling brand site + extracting identity
- `competitor_discovery` — finding competitors via Tavily research
- `family_discovery` — building keyword universe (Haiku-validated 10-step pipeline)
- `evidence_collection` — SERP + DataForSEO evidence per family (synthesis happens inside this stage)
- `leverage_scoring` — cross-referencing brand gaps with opportunities
- `queue_building` — assembling the ranked action queue
Only `/analyze` emits all six stages. `/discover` does not emit stage updates at all — its run will have an empty `meta.current_stage` throughout. Single-purpose endpoints (`/enrich/brand`, `/score`, `/brief`, `/execute`) emit only their own stage.
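Because the six stages run in a fixed order, a rough progress fraction for a full `/analyze` run can be derived from `current_stage`. A sketch — `progress_fraction` is a hypothetical helper, not part of the API:

```python
# The six stages of a full /analyze run, in order.
STAGES = [
    "brand_enrichment",
    "competitor_discovery",
    "family_discovery",
    "evidence_collection",
    "leverage_scoring",
    "queue_building",
]

def progress_fraction(run):
    """Rough completion estimate for a full /analyze run.
    Returns 0.0 for queued/unstarted runs and 1.0 for done."""
    if run.get("status") == "done":
        return 1.0
    stage = (run.get("meta") or {}).get("current_stage")
    if stage not in STAGES:
        return 0.0  # e.g. /discover runs never set current_stage
    return STAGES.index(stage) / len(STAGES)
```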
## Polling implementation
```bash
# Poll every 10 seconds until done
while true; do
  RESP=$(curl -s \
    -H "X-API-Key: $BM_API_KEY" \
    "https://api.boringmarketing.com/brands/$BRAND_ID/runs/$RUN_ID")
  STATUS=$(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin)['status'])")
  # meta can be null, so guard with `or {}` before reading stage fields
  STAGE=$(echo "$RESP" | python3 -c "import sys,json; print((json.load(sys.stdin).get('meta') or {}).get('current_stage','?'))")
  DETAIL=$(echo "$RESP" | python3 -c "import sys,json; print((json.load(sys.stdin).get('meta') or {}).get('stage_detail',''))")
  echo "[$STATUS] $STAGE — $DETAIL"
  [ "$STATUS" = "done" ] || [ "$STATUS" = "failed" ] && break
  sleep 10
done
```

Recommended intervals: 10-15 seconds for full pipeline runs; 5 seconds for single-stage runs (`brief`, `execute`, `enrich/*`).
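The same loop can be written as a reusable Python function with an overall deadline. A sketch, not part of any SDK — `fetch_run`, `sleep`, and `clock` are injectable so the loop can be exercised without hitting the API:

```python
import time

def poll_until_done(fetch_run, interval=10, timeout=1800,
                    sleep=time.sleep, clock=time.monotonic):
    """Poll fetch_run() until the run reaches done/failed or the timeout
    expires. fetch_run is any callable returning the run JSON as a dict.
    Interval guidance from this page: 10-15 s for full pipeline runs,
    5 s for single-stage runs."""
    deadline = clock() + timeout
    while True:
        run = fetch_run()
        if run["status"] in ("done", "failed"):
            return run
        if clock() >= deadline:
            raise TimeoutError(f"run still {run['status']} after {timeout}s")
        sleep(interval)
```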
## Handling duplicate runs

If you trigger a pipeline while one is already running, the API returns `409 Conflict`:

```json
{
  "detail": "A run is already in progress for this brand"
}
```
The response includes an `X-Running-Run-Id` header with the existing run's ID. Poll that run instead of starting a new one.
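In Python's standard library, a 409 surfaces as a `urllib.error.HTTPError`, whose `headers` carry the `X-Running-Run-Id` value. A sketch — `start_or_reuse_run` and `run_id_from_conflict` are illustrative names, not SDK functions:

```python
import json
import urllib.error
import urllib.request

def run_id_from_conflict(err):
    """Pull the existing run's id out of a 409 response's headers."""
    return err.headers["X-Running-Run-Id"]

def start_or_reuse_run(url, api_key):
    """POST to a pipeline endpoint; on 409 Conflict, reuse the id of the
    run already in progress instead of failing. Sketch only."""
    req = urllib.request.Request(url, method="POST",
                                 headers={"X-API-Key": api_key})
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["run_id"]      # 202 Accepted
    except urllib.error.HTTPError as err:
        if err.code == 409:
            return run_id_from_conflict(err)      # poll this run instead
        raise
```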
```bash
# Check existing runs before starting a new one
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/brands/$BRAND_ID/runs"
```
## Cancelling stuck runs

If a run appears stuck (no `stage_started_at` progress for more than 15 minutes), mark it as failed and re-trigger:

```bash
curl -s -X POST \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/brands/$BRAND_ID/runs/$RUN_ID/cancel"
```
## Handling failures

When a run fails, the `error` field contains the reason:
```json
{
  "id": "run-uuid",
  "brand_id": "brand-uuid",
  "kind": "full",
  "status": "failed",
  "error": "DataForSEO rate limit exceeded — retry in 60 seconds",
  "started_at": "2026-04-07T10:00:00Z",
  "finished_at": "2026-04-07T10:12:00Z",
  "meta": {
    "current_stage": "evidence_collection"
  }
}
```
Retry by triggering the same pipeline endpoint again. Pipeline data is append-only and idempotent — re-running does not create duplicate data.
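Because retrying is safe, a failed run can be re-triggered with simple backoff. A sketch with injectable `trigger` and `poll` callables (illustrative names, not SDK functions): `trigger()` starts a run and `poll()` waits for it and returns the final run dict:

```python
import time

def retrigger_with_backoff(trigger, poll, attempts=3, base_delay=60,
                           sleep=time.sleep):
    """Re-run a failed pipeline up to `attempts` times. Safe because
    pipeline data is append-only and idempotent: re-running does not
    create duplicate data."""
    for attempt in range(attempts):
        trigger()
        run = poll()
        if run["status"] == "done":
            return run
        sleep(base_delay * (2 ** attempt))  # back off: 60s, 120s, 240s, ...
    raise RuntimeError(
        f"run still failing after {attempts} attempts: {run.get('error')}")
```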
## Listing all runs

View run history for a brand:
```bash
curl -s \
  -H "X-API-Key: $BM_API_KEY" \
  "https://api.boringmarketing.com/brands/$BRAND_ID/runs"
```

```json
{
  "brand_id": "brand-uuid",
  "total": 3,
  "runs": [
    { "id": "run-1", "kind": "full", "status": "done", "started_at": "...", "finished_at": "..." },
    { "id": "run-2", "kind": "discovery", "status": "done", "started_at": "...", "finished_at": "..." },
    { "id": "run-3", "kind": "scoring", "status": "running", "started_at": "...", "finished_at": null }
  ]
}
```
The `kind` field tells you what type of run it was: `full`, `brand_enrichment`, `technical_enrichment`, `llm_enrichment`, `discovery`, `scoring`, `competitor_discovery`, `synthesize`, `brief`, or `execution`.
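A small helper can pick the most recent run of a given `kind` out of the listing. A sketch — `latest_run` is an illustrative name, and it assumes ISO-8601 `started_at` values, which sort correctly as plain strings:

```python
def latest_run(listing, kind=None):
    """Return the most recently started run from a GET .../runs response,
    optionally filtered by kind; None if no run matches."""
    runs = listing.get("runs", [])
    if kind is not None:
        runs = [r for r in runs if r.get("kind") == kind]
    # ISO-8601 timestamps sort chronologically as strings
    return max(runs, key=lambda r: r.get("started_at") or "", default=None)
```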