Arial docs
Arial is product analytics for coding agents. An agent signs itself up with one HTTP call, starts writing events immediately, and hands a claim link to the human so they can take ownership later.
This page is a user manual for getting up and running — signup, handoff, events, reads. Keep it open while you wire things up.
Set up a project
If you have an existing project (Node.js or otherwise) and want a one-shot bootstrap — workspace + SDK install + config file + env wiring — use arial init:
```sh
cd your-project
npx @arial-ai/cli init
```

That single command:
- Signs up a fresh workspace (or adopts an existing one if you've already run `arial signup` / `arial login`).
- Writes `arial.json` to the project root with your `workspaceId` (non-secret, safe to commit).
- Detects your package manager (pnpm / npm / yarn / bun via lockfile) and runs `<pm> add @arial-ai/sdk`.
- Appends `ARIAL_WRITE_KEY` to `.env.local` and ensures `.env.local` is gitignored.
- Prints the claim URL so you can hand it to the human owner.
Re-running `arial init` is safe: existing `arial.json` files are left alone unless you pass `--force`, and `.env.local` entries are appended (never overwritten). Pass `--workspace <slug>` to adopt an existing workspace, `--new` to force a fresh signup, or `--no-install` to skip the SDK install.
After init, run `arial agent-stanza >> AGENTS.md` to teach any agent in the repo how to use Arial.
Sign up
If you only need credentials — no project scaffolding, no SDK install — use `arial signup` directly. This is what `arial init` calls under the hood:
```sh
npx @arial-ai/cli signup --name "my-project"
```

Or install globally for faster subsequent calls:
```sh
npm install -g @arial-ai/cli
arial signup --name "my-project"
```

`--name` is optional. Add `--json` for a machine-readable response envelope. By default the CLI saves the returned agent key to `~/.config/arial/config.json` (or `$XDG_CONFIG_HOME/arial/config.json`; override with `ARIAL_CONFIG_DIR`) so all subsequent `arial` commands authenticate automatically (pass `--no-save` to skip this).
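The save-path precedence described above can be sketched as a pure path resolution. Assumption (not confirmed by this doc): `ARIAL_CONFIG_DIR` points directly at the directory that holds `config.json`, replacing the `arial` segment entirely.

```typescript
// Resolve where the CLI would save credentials, under the assumed precedence:
// ARIAL_CONFIG_DIR, then $XDG_CONFIG_HOME/arial, then ~/.config/arial.
function arialConfigPath(
  env: Record<string, string | undefined>,
  home: string,
): string {
  const dir =
    env.ARIAL_CONFIG_DIR ?? `${env.XDG_CONFIG_HOME ?? `${home}/.config`}/arial`
  return `${dir}/config.json`
}
```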
The result is returned once and never again:
```json
{
  "workspace": {
    "id": "wsp_...",
    "slug": "w-...",
    "name": "...",
    "avatarUrl": null,
    "createdAt": "...",
    "updatedAt": "..."
  },
  "agentKey": "agk_...",
  "writeKey": "wk_...",
  "claimPassphrase": "six-words-joined-by-dashes",
  "claimUrl": "https://arial.sh/claim#six-words-joined-by-dashes",
  "activeUntil": "ISO-8601 timestamp, 7 days from signup"
}
```

- `agentKey` — control-plane credential. Broad privileges (workspace reads, queries, configuration). Keep it on servers and CLIs — never in a browser or mobile bundle. When auto-saved by the CLI it lives at `~/.config/arial/config.json` by default; otherwise write it to your `.env` or secret store. Use as `Authorization: Bearer agk_...` on control-plane API calls (`https://api.arial.sh`). If you lose it, sign up a new workspace.
- `writeKey` — ingest credential. Scoped to POSTing events only. Safe to embed in browser bundles, mobile apps, any untrusted client. Use with the `@arial-ai/sdk` package (below) or as `Authorization: Bearer wk_...` against `https://events.arial.sh/v1/events`.
- `claimPassphrase` and `claimUrl` — hand these to the human (see below).
- `activeUntil` — the workspace is writable and claimable until this timestamp (7 days). After that it deactivates.
Hand off to the human
Send them the claim URL and the passphrase. Something like:

```
I've set up an Arial workspace so I can set up analytics and
include real user behaviour back into our work. Claim it for free here:
<claimUrl>

Or paste this at https://arial.sh/claim:
<claimPassphrase>

The link expires <activeUntil> (7 days).
```

Claiming is fast: the human signs in with Google, becomes the workspace owner, and sees dashboards and reports. Nothing changes for the agent — your key keeps working.
If nobody claims within 7 days the workspace deactivates. Sign up a new one and ask your human earlier.
Send events
Arial ships an opinionated, fixed event taxonomy. The same names mean the same things across every workspace, which is what makes cross-customer benchmarks and the agent's strategy model work. Read this section before you instrument anything.
SDK (recommended)
The fastest path is `@arial-ai/sdk` — it batches, retries with backoff, validates against the taxonomy client-side, fills in `context.*` for you, and works in Node 20+, Bun, Deno, browsers, Cloudflare Workers, and Vercel Edge.
```sh
npm install @arial-ai/sdk
```

```ts
import { createArial } from "@arial-ai/sdk"

const arial = createArial({
  writeKey: "wk_...", // from signup — safe to embed in browser/mobile
  workspaceId: "wsp_...", // from signup
  onError: (err) => console.warn("[arial]", err.code, err.message),
})

arial.identify("user_42", { plan: "pro" })
arial.track("user.signed_in", { method: "google" })
arial.page({ path: "/dashboard", title: "Dashboard" })

// Before a serverless function exits or on app teardown:
await arial.shutdown()
```

Full reference: https://www.npmjs.com/package/@arial-ai/sdk.
The rule
Use canonical events only. The full catalogue is at `/docs/taxonomy.json` (machine-readable) or `/docs/taxonomy.txt` (plain-text) — consult it before naming any event. The taxonomy covers the full B2B SaaS lifecycle; if an action isn't in the catalogue, don't track it yet.
Events are validated at ingest against the canonical schemas. Non-canonical event names are rejected with a reason.
Hard limits
- Never put PII in event properties. No email, name, billing contact, IP. User traits live in the identity store, attached via `identify()`. Canonical events carry zero PII by design.
- Canonical events declare allowed sources (client / server). Ingest derives source from `context.platform` (`server` maps to server; browser/mobile/app platforms map to client). Server-only events (`subscription.*`, `checkout.completed`, `trial.*`, `user.invite*`, `account.member_*`) must come from your backend — emitting them from a client platform is rejected. The full source map is in the catalogue.
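The platform-to-source rule can be sketched as a pre-flight check. The patterns below cover only the server-only families quoted in this doc — the authoritative map is the taxonomy catalogue:

```typescript
type Platform = "web" | "ios" | "android" | "server"

// Server-only event families quoted above — illustrative, not exhaustive.
const SERVER_ONLY = [
  /^subscription\./,
  /^checkout\.completed$/,
  /^trial\./,
  /^user\.invite/,
  /^account\.member_/,
]

// Ingest derives source from context.platform: "server" → server, else client.
function deriveSource(platform: Platform): "client" | "server" {
  return platform === "server" ? "server" : "client"
}

// False when a server-only event is about to be emitted from a client platform.
function canEmit(event: string, platform: Platform): boolean {
  const serverOnly = SERVER_ONLY.some((re) => re.test(event))
  return !serverOnly || deriveSource(platform) === "server"
}
```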
Where to look
- Catalogue, JSON: https://arial.sh/docs/taxonomy.json
- Catalogue, plain text: https://arial.sh/docs/taxonomy.txt
- Both are derived from the canonical schemas; `version` is the source of truth.
Raw HTTP (any language, any runtime)
The SDK is the easy path; the endpoint below is stable and documented so anything that speaks HTTP — Python, Go, Rust, curl, a locked-down edge runtime — can integrate directly.
```
POST https://events.arial.sh/v1/events
Authorization: Bearer wk_...
Content-Type: application/json
```

Request body — a batch of 1 to 500 event envelopes wrapped in `{ "events": [...] }`:
```json
{
  "events": [
    {
      "event": "user.signed_in",
      "properties": {
        "method": "google"
      },
      "context": {
        "arial_event_id": "<uuid>",
        "arial_timestamp": "<iso8601>",
        "arial_taxonomy_version": "2.0.0",
        "arial_sdk_version": "<string>",
        "arial_workspace_id": "<must match the workspace your key is bound to>",
        "user_id": "<string|null>",
        "anonymous_id": "<string>",
        "session_id": "<string>",
        "account_id": "<string|null>",
        "platform": "web|ios|android|server"
      }
    }
  ]
}
```

Every envelope's `context.arial_workspace_id` must equal the workspace bound to the write key in the `Authorization` header — mismatches are rejected per-envelope without failing the rest of the batch.
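For a hand-rolled client, envelope construction and batching reduce to two small helpers. A sketch mirroring the envelope above — the `arial_sdk_version` label is a hypothetical placeholder, and the 500-envelope cap comes from the batch limit stated earlier:

```typescript
import { randomUUID } from "node:crypto"

interface EventContext {
  arial_event_id: string
  arial_timestamp: string
  arial_taxonomy_version: string
  arial_sdk_version: string
  arial_workspace_id: string
  user_id: string | null
  anonymous_id: string
  session_id: string
  account_id: string | null
  platform: "web" | "ios" | "android" | "server"
}

interface Envelope {
  event: string
  properties: Record<string, unknown>
  context: EventContext
}

// Build one server-side envelope; workspaceId must match the write key's workspace.
function buildEnvelope(
  workspaceId: string,
  event: string,
  properties: Record<string, unknown>,
  ids: { userId?: string; anonymousId: string; sessionId: string },
): Envelope {
  return {
    event,
    properties,
    context: {
      arial_event_id: randomUUID(),
      arial_timestamp: new Date().toISOString(),
      arial_taxonomy_version: "2.0.0",
      arial_sdk_version: "custom/0.0.1", // hypothetical label for a hand-rolled client
      arial_workspace_id: workspaceId,
      user_id: ids.userId ?? null,
      anonymous_id: ids.anonymousId,
      session_id: ids.sessionId,
      account_id: null,
      platform: "server",
    },
  }
}

// Split into request bodies of at most 500 envelopes each.
function toBatches(events: Envelope[], size = 500): { events: Envelope[] }[] {
  const out: { events: Envelope[] }[] = []
  for (let i = 0; i < events.length; i += size) {
    out.push({ events: events.slice(i, i + size) })
  }
  return out
}
```

Each element of `toBatches(...)` is one POST body for `https://events.arial.sh/v1/events`.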
Response — 202 Accepted — reports per-event results. Invalid envelopes are rejected with a reason; valid events in the same batch are still persisted:
```json
{
  "accepted": 1,
  "rejected": 1,
  "errors": [
    {
      "index": 1,
      "event": "app.launched",
      "reason": "Unknown canonical event name 'app.launched'."
    }
  ]
}
```

- `401` — missing or invalid `Authorization` header (no key, wrong format, revoked key).
- `400` — body isn't valid JSON, or the batch envelope fails schema validation.
Example:
```sh
curl -s -X POST https://events.arial.sh/v1/events \
  -H 'authorization: Bearer wk_...' \
  -H 'content-type: application/json' \
  -d '{"events":[{"event":"user.signed_in","properties":{"method":"google"},"context":{ ... }}]}'
```

`POST /v1/identify` takes the same `Authorization: Bearer wk_...` header and a `{ "user_id", "traits" }` body; it currently returns `202` without persisting traits (server-side identity store is a follow-up).
Read analytics
Query the events you've sent against a strict, Arial-authored catalogue of metrics, funnels, and reports. Every response is agent-first: alongside the raw series, each payload carries a natural-language interpretation, a suggested_next list of concrete follow-up URLs, and cross-references to related entries in the catalogue.
Control plane: https://api.arial.sh. Authenticate with Authorization: Bearer agk_....
Two ways to read
- CLI (recommended for agents and operators) — `arial metrics list`, `arial metrics get <id>`, `arial funnels list`, `arial funnels get <id>`, `arial events list`. Same structured JSON envelope as every other CLI command, with `--json` for machine-readable output.
- Direct HTTP — `curl` or any language. Endpoints below.
Catalogue (list what you can query)
```
GET /v1/metrics — every metric definition
GET /v1/funnels — every funnel definition
GET /v1/reports — every report definition
```

Each entry carries `id`, `name`, `description`, `unit`, allowed granularities, allowed segments, and `related` cross-references. The catalogue is workspace-agnostic — identical for everyone — so an agent can memoise it.
```sh
curl -s https://api.arial.sh/v1/metrics \
  -H 'authorization: Bearer agk_...'
```

Metric detail (compute)
```
GET /v1/metrics/:id
```

Query-string parameters (all optional):

- `from` — inclusive start date, `YYYY-MM-DD` (UTC). Defaults to 30 days before `to`.
- `to` — exclusive end date, `YYYY-MM-DD` (UTC). A value of `2026-04-20` covers data up to but not including that day — pass tomorrow's date to include today. If omitted, defaults to end-of-today UTC so today's events are included by default.
- `granularity` — `day` | `week` | `month`. Defaults to the metric's canonical granularity.
- `segment_by` — one of the metric's declared segments (e.g. `platform`, `lifecycle_stage`, `auth_method`).
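Because `to` is exclusive, a range that should include today has to end tomorrow. A small helper (UTC dates derived from ISO strings):

```typescript
// Build a from/to pair whose exclusive `to` still covers today's events.
function inclusiveTodayRange(now: Date, days: number): { from: string; to: string } {
  const DAY = 86_400_000
  const to = new Date(now.getTime() + DAY) // tomorrow → window includes today
  const from = new Date(to.getTime() - days * DAY)
  const fmt = (d: Date) => d.toISOString().slice(0, 10) // YYYY-MM-DD (UTC)
  return { from: fmt(from), to: fmt(to) }
}
```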
Example:
```sh
curl -s 'https://api.arial.sh/v1/metrics/dau?from=2026-03-01&to=2026-04-01&granularity=day' \
  -H 'authorization: Bearer agk_...'
```

Response (scalar shape):
```json
{
  "kind": "scalar",
  "metric": "dau",
  "name": "Daily Active Users",
  "description": "...",
  "unit": "users",
  "from": "2026-03-01",
  "to": "2026-04-01",
  "granularity": "day",
  "segment_by": null,
  "series": [{ "date": "2026-03-01", "value": 142 }, ...],
  "summary": { "mean": 158, "min": 120, "max": 193, "trend_pct": 12.4 },
  "segments": null,
  "interpretation": "DAU averaged 158 over the last 31 days, trending +12.4%.",
  "suggested_next": [
    { "description": "Break down by platform", "url": "/v1/metrics/dau?segment_by=platform" }
  ],
  "related": { "metrics": ["wau", "mau"], "funnels": [], "reports": ["core_product_health"] },
  "computed_at": "2026-04-20T12:00:00.000Z"
}
```

`summary.trend_pct` compares the later half of the series against the earlier half. It's directional orientation, not a statistical test — re-interpret the series yourself if precision matters.
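A sketch of that later-half-vs-earlier-half comparison. The exact split and rounding Arial uses aren't published; this assumes a floor split at the midpoint and a mean-vs-mean percentage change:

```typescript
// Percentage change of the later half's mean over the earlier half's mean.
function trendPct(values: number[]): number {
  const mid = Math.floor(values.length / 2)
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length
  const earlier = mean(values.slice(0, mid))
  const later = mean(values.slice(mid))
  return ((later - earlier) / earlier) * 100
}
```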
Invalid `granularity` or `segment_by` returns `400 VALIDATION_ERROR` with `details.allowed` listing the valid values and a `details.suggestion` URL you can retry.
Funnel detail (compute)
```
GET /v1/funnels/:id
```

Query-string parameters (all optional):

- `from` — inclusive start date, `YYYY-MM-DD` (UTC). Defaults to 84 days before `to` (weekly default).
- `to` — exclusive end date, `YYYY-MM-DD` (UTC). Defaults to end-of-today UTC.
- `granularity` — `day` | `week` | `month`. Defaults to `week`.
Funnel segmentation is not yet supported — `segment_by` is accepted by the schema for forward-compat but returns `400 VALIDATION_ERROR` today. Segment the underlying metrics instead.
Example:
```sh
curl -s 'https://api.arial.sh/v1/funnels/signup_to_activation?from=2026-01-17&to=2026-04-17' \
  -H 'authorization: Bearer agk_...'
```

Response is dual-shaped. `steps` is the aggregate view across the whole window — one entry per declared step, in order, with per-step user counts and conversion ratios. `series` is the same data broken down into per-bucket cohorts so an agent can spot trends without a second request:
```json
{
  "funnel": "signup_to_activation",
  "name": "Signup → Activation",
  "window_days": 7,
  "from": "2026-01-17",
  "to": "2026-04-17",
  "granularity": "week",
  "segment_by": null,
  "steps": [
    { "event": "user.signed_up", "label": "Signed up", "users": 1200, "conversion_from_anchor": 1, "conversion_from_previous": null, "drop_off_from_previous": 0 },
    { "event": "onboarding.completed", "label": "Finished onboarding", "users": 960, "conversion_from_anchor": 0.8, "conversion_from_previous": 0.8, "drop_off_from_previous": 240 },
    { "event": "activation.reached", "label": "Activated", "users": 540, "conversion_from_anchor": 0.45, "conversion_from_previous": 0.56, "drop_off_from_previous": 420 }
  ],
  "series": [
    { "date": "2026-01-19", "step_counts": [100, 80, 45], "status": "ready", "window_closes_at": null },
    { "date": "2026-04-13", "step_counts": [85, 60, 20], "status": "pending", "window_closes_at": "2026-04-27T00:00:00.000Z" }
  ],
  "interpretation": "Signup → Activation — 1,200 users at \"Signed up\" → 45.0% reached \"Activated\" in the last 12 weeks. Largest drop-off: 420 users between \"Finished onboarding\" and \"Activated\".",
  "suggested_next": [ ... ],
  "related": { "metrics": [...], "funnels": [...], "reports": [...] },
  "computed_at": "2026-04-20T12:00:00.000Z"
}
```

Each bucket carries a `status` of `ready` or `pending`. A bucket is pending while its observation window is still open — the funnel's `window_days` past the bucket end — so later steps are a lower bound, not a final reading. `window_closes_at` tells you when to re-query.
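The `steps` ratios are recomputable from the raw per-step user counts — e.g. the 1200 → 960 → 540 example above:

```typescript
interface Step {
  users: number
  conversion_from_anchor: number
  conversion_from_previous: number | null
  drop_off_from_previous: number
}

// Derive the aggregate funnel-step ratios from ordered per-step user counts.
function funnelSteps(counts: number[]): Step[] {
  return counts.map((users, i) => ({
    users,
    conversion_from_anchor: users / counts[0],
    conversion_from_previous: i === 0 ? null : users / counts[i - 1],
    drop_off_from_previous: i === 0 ? 0 : counts[i - 1] - users,
  }))
}
```

(The payload above shows `conversion_from_previous` rounded to two decimals — 540/960 is 0.5625.)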
Cohort windows and pending buckets
Cohort-shaped surfaces (activation, retention, funnels) watch for follow-up events beyond each bucket's end. Until that observation window closes, the bucket is censored: more events may still land and move the number. Every per-point payload therefore carries:
- `status` — `ready` (window fully elapsed, value is final) or `pending` (window still open, value is a lower bound).
- `window_closes_at` — ISO-8601 UTC timestamp at which a pending bucket will become final. `null` when the point is already `ready`.
The surrounding interpretation string also notes how many recent buckets are still pending, so a human or agent reading the prose sees the caveat without inspecting every point.
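A re-query schedule falls out of those two fields: the earliest `window_closes_at` among pending points is the next time the data can change. ISO-8601 UTC strings compare correctly as plain strings, so no date parsing is needed:

```typescript
interface Point {
  status: "ready" | "pending"
  window_closes_at: string | null
}

// Earliest close time among pending buckets, or null if everything is final.
function nextRequeryAt(series: Point[]): string | null {
  const pending = series
    .filter((p) => p.status === "pending" && p.window_closes_at !== null)
    .map((p) => p.window_closes_at!)
  return pending.length > 0 ? pending.sort()[0] : null
}
```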
What's live, what's coming
Live in v1: metric detail for DAU/WAU/MAU, new sign-ups, sessions per user, error rate, support contact rate, feature adoption rate, onboarding completion rate, activation rate, time-to-activation, and w1/w4/w12 retention (cohort metrics anchor on each user's first `user.signed_up` event; cohorts whose observation window hasn't fully elapsed are reported as partial data). Funnel detail (`GET /v1/funnels/:id`) is live for the `signup_to_activation` and `onboarding` funnels.

Segmenting by `auth_method` works across every metric that declares it — DAU/WAU/MAU pick up each user's sign-up method via a join, so the same segment behaves consistently whether the metric is counted at the event level (`new_signups`) or at the user level.

Coming next: the `GET /v1/reports/:id` detail endpoint and funnel segmentation. Reports list in the catalogue today but the detail endpoint is not yet wired — calls return `VALIDATION_ERROR` pointing at what is computable.
Experiments
Experiments are how Arial turns a PR's outcome from a guess into a measurement. The agent registers an experiment when it opens a PR; the analytics SDK assigns each actor a sticky variant; a daily readout joins exposures with the canonical target metric and recommends ship, revert, continue, or "still underpowered."
In M0 the loop is two arms (control + treatment), 50/50 fixed allocation, derived conversions (no `experiment.converted` event — the readout joins on whatever canonical target metric you declared), and explicit conclusions (the agent decides; the system never auto-ships or auto-reverts).
Register
```
POST /v1/experiments
```

Required fields: `key` (slug, unique per workspace; this is the same string `arial.variant(key, …)` will receive), `hypothesis` (one sentence), `targetMetric` (must be a canonical event name — non-canonical strings are rejected), `mde` (pre-registered minimum detectable effect, absolute, 0..1).

Optional: `alpha` (default 0.05), `conversionWindowHours` (default 168 = 7 days), `prUrl`, `decisionDeadline`.
Example:
```sh
curl -s -X POST https://api.arial.sh/v1/experiments \
  -H 'authorization: Bearer agk_...' \
  -H 'content-type: application/json' \
  -d '{
    "key": "checkout-cta-copy",
    "hypothesis": "Imperative copy lifts checkout completion",
    "targetMetric": "checkout.completed",
    "mde": 0.05,
    "prUrl": "https://github.com/acme/app/pull/42"
  }'
```

The experiment is created in `running` status. The flag-eval service (`flags.arial.sh`) propagates it to the analytics SDK on its next config poll (~60s). See [Flag config](#flag-config) for what the SDK polls under the hood.
Wrap your code
In your app, with `@arial-ai/sdk`:

```ts
const variant = arial.variant("checkout-cta-copy", { fallback: "control" })

if (variant === "treatment") {
  // new code path
} else {
  // existing code path
}
```

The SDK auto-emits `experiment.exposed` on evaluation and injects `context.experiment_assignments` into every subsequent event so attribution is a filter, not a join.
Read
```
GET /v1/experiments — list every experiment in the workspace
GET /v1/experiments/:key — one experiment + its latest readout inline
GET /v1/experiments/:key/readout — latest readout only (poll-friendly)
GET /v1/experiments/:key/snapshot — point-in-time stats without changing the experiment
```

The readout payload carries per-variant exposure and conversion counts, control vs treatment rates, absolute and relative lift, a 95% Newcombe-Wilson CI, p-value (pooled-variance two-proportion z-test), `significant`, and a recommendation of `ship`, `revert`, `continue_running`, or `underpowered`. Recommendations are advisory — concluding is always explicit.
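The core of that readout is reproducible from per-variant counts. A sketch of the pooled-variance two-proportion z statistic with an Abramowitz–Stegun normal-CDF approximation for the two-sided p-value — the Newcombe-Wilson CI is omitted, and none of this is Arial's actual implementation:

```typescript
// Standard normal CDF via the Abramowitz–Stegun polynomial approximation
// (accurate to ~1e-7 for x >= 0).
function phi(x: number): number {
  const t = 1 / (1 + 0.2316419 * x)
  const d = Math.exp((-x * x) / 2) / Math.sqrt(2 * Math.PI)
  const poly =
    t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))))
  return 1 - d * poly
}

// Pooled-variance two-proportion z-test: treatment (B) vs control (A).
function twoProportionZ(convA: number, nA: number, convB: number, nB: number) {
  const pA = convA / nA
  const pB = convB / nB
  const pooled = (convA + convB) / (nA + nB)
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB))
  const z = (pB - pA) / se
  const p = 2 * (1 - phi(Math.abs(z))) // two-sided
  return { z, p, liftAbs: pB - pA, liftRel: (pB - pA) / pA }
}
```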
The snapshot payload has the same core stats plus `source`, `isFinal`, `observedLeader`, and `confidenceLabel`. For running experiments, snapshots compute from live events at request time and are never persisted. For concluded experiments, snapshots prefer the latest persisted readout so the final view matches the official result.
Conclude
```
POST /v1/experiments/:key/conclude
```

Body: `{ "decision": "ship" | "revert", "note"?: "..." }`. One-way: a concluded experiment cannot be flipped — open a new experiment with a different `key` to revisit.
Surfaces
- CLI: `arial experiment list`, `arial experiment get <key>`, `arial experiment create --key <slug> --target <event> --mde 0.05 --hypothesis "..."`, `arial experiment readout <key>`, `arial experiment snapshot <key>`, `arial experiment conclude <key> --decision ship|revert`.
- SDK: `client.experiments.list/get/create/conclude/readout/snapshot()`.
- Web UI: workspace dashboard → Experiments.
Coming next
- M1: manual ramp (`arial experiment ramp <key> --to 10`) and custom variant weights.
- M2: per-experiment ramp schedules with auto-advance, optional guardrail metric with freeze-on-regression, and a GitHub app that turns the PR body into the literal source of truth + posts readouts as PR comments.
Full design rationale: ADR 0009 (experiments as the attribution primitive) and ADR 0010 (M0 implementation).
Events tail
Verification surface, not a query API. After firing events, hit this to confirm the payload landed with the shape you expected — independent of whether any metric has started reporting.
```
GET /v1/events?limit=50
```

`limit` is optional (default 50, max 200). Ordered by ingest time (`received_at`) descending, scoped to your workspace.
```json
{
  "events": [
    {
      "id": "evt_...",
      "event": "user.signed_up",
      "received_at": "2026-04-19T12:00:01.000Z",
      "timestamp": "2026-04-19T12:00:00.000Z",
      "user_id": "user_1",
      "anonymous_id": "anon_...",
      "session_id": "sess_...",
      "account_id": null,
      "source": "server",
      "properties": { "method": "google" }
    }
  ]
}
```

`received_at` is server receive time; `timestamp` is client-claimed event time — they differ when the SDK backfills or clocks drift. Use `received_at` to answer "did my fire just land?"
Surfaces: `arial events list [--limit N]` (CLI), `client.events.list({ limit? })` (SDK).
Flag config
The analytics SDK keeps `arial.variant()` fast and deterministic by polling a small, public endpoint for your workspace's running experiments. You almost never talk to this endpoint directly — `@arial-ai/sdk` does it for you — but the surface is documented so you can implement your own SDK or debug in a pinch.
```
GET https://flags.arial.sh/v1/flags
```

Authentication: `Authorization: Bearer wk_...` (the write key, same one you use for event ingest). The agent key is rejected — the flags service is write-key-only so it's safe to reach from browsers and mobile apps.

Caching: the response includes a strong `ETag` header. Send `If-None-Match: "<etag>"` and you'll get `304 Not Modified` with no body. The SDK polls every ~60s, so the common case is a near-empty round trip.
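That caching behaviour reduces to a small state transition on the client (a sketch of what a hand-rolled poller would do; the SDK handles this for you):

```typescript
interface FlagsCache {
  etag: string | null
  body: unknown
}

// 200 → adopt the new body and ETag; 304 or transient 5xx → keep serving
// the last cached body until the next successful poll.
function applyPoll(cache: FlagsCache, status: number, etag: string | null, body: unknown): FlagsCache {
  return status === 200 ? { etag, body } : cache
}
```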
Response:
```json
{
  "workspaceId": "wsp_...",
  "etag": "sha256:...",
  "experiments": [
    {
      "key": "checkout-cta-copy",
      "assignmentUnit": "account_or_anonymous",
      "variants": [
        { "variantId": "control", "weight": 50 },
        { "variantId": "treatment", "weight": 50 }
      ]
    }
  ]
}
```

Only running experiments appear. Concluded ones are dropped so a stale SDK never re-emits `experiment.exposed` for a decided experiment. Weights are integers; the SDK normalises and hashes the actor id (`account_id` when present, `anonymous_id` otherwise) to pick a deterministic variant.
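A sticky-assignment sketch. Arial doesn't document its hash, so FNV-1a below is a stand-in that preserves the property that matters: the same actor id for the same experiment key always lands on the same variant, in proportion to the weights:

```typescript
interface Variant {
  variantId: string
  weight: number
}

// FNV-1a 32-bit hash — an assumed stand-in for the SDK's real hash.
function fnv1a(s: string): number {
  let h = 0x811c9dc5
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 0x01000193)
  }
  return h >>> 0
}

// Map the actor id into a weight bucket; walk variants until it lands.
function assign(actorId: string, key: string, variants: Variant[]): string {
  const total = variants.reduce((a, v) => a + v.weight, 0)
  const bucket = fnv1a(`${key}:${actorId}`) % total
  let acc = 0
  for (const v of variants) {
    acc += v.weight
    if (bucket < acc) return v.variantId
  }
  return variants[variants.length - 1].variantId
}
```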
Errors follow the standard envelope. A 401 means the write key is invalid or revoked; a 5xx is transient — the SDK keeps serving the last cached body until the next poll succeeds.
Reference
Two services, two base URLs:
- Control plane: `https://api.arial.sh` — workspace management, reads, configuration. Authenticate with the agent key (`agk_...`).
- Ingest: `https://events.arial.sh` — event writes only. Authenticate with the write key (`wk_...`).
All versioned endpoints live under /v1/*.
Auth headers:
- `Authorization: Bearer agk_...` — agent key, for `api.arial.sh`. Broad privileges; keep server-side.
- `Authorization: Bearer wk_...` — write key, for `events.arial.sh`. Ingest-only; safe in browser/mobile.
- `Cookie: arial_token=<jwt>` — user session (web flow, control plane only).
`POST /v1/workspaces/signup` and `GET /v1/schema` are unauthenticated by design — signup mints the first credential, and the schema endpoint is the discovery surface an agent can reach before it has one. Everything else under `/v1/*` returns 401 without a recognised credential.

`GET /v1/schema` returns a machine-readable self-description of the control plane: every endpoint, its summary, its auth mode, and links to docs, taxonomy, and the ingest and flags services. One URL, the whole surface — useful when an agent lands on Arial through a search result and needs to orient without reading this page.
Error envelope:
```json
{
  "error": {
    "code": "NOT_FOUND",
    "message": "...",
    "details": null
  }
}
```

Common codes: `400 VALIDATION_ERROR`, `401 UNAUTHORIZED`, `404 NOT_FOUND`, `409 CONFLICT`, `500 INTERNAL`.
Versioning: breaking changes get a new URL prefix (`/v2/*`). Within a version, response changes are additive only.
Help
Questions, feedback, security reports: hello@arial.sh.