
# Arial

## Growth stack for your coding agent.

Analytics and experiment tools that your agent can use. With Arial, your agent builds the feature, ships an experiment, observes real data and tells you what worked. The full loop.

Run this in your project to set up Arial (or ask your agent):

```shell
npx @arial-ai/cli init
```

## Close the loop.

Instrument → Read → Ship → Measure

Most analytics stops at the dashboard. You read a chart, decide something, then context-switch back into your editor to brief your agent on the change. The handoff is where momentum dies.

Arial keeps the loop inside the agent. With Arial, your agent can instrument tracking, read analytics data, ship experiments, and see what worked. Every shipped change becomes a labelled outcome, so your agent has the tools to help you grow your product.


## Know what worked, not just what changed.

A metric going up after a deploy doesn't mean the deploy did it. Arial lets your agent run experiments that test what actually moves the needle. Any change can be an experiment.

Your agent picks the metric, runs the test long enough to call it, and writes the verdict into the PR. You ship more changes and second-guess fewer of them.
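The mechanics behind "any change can be an experiment" can be sketched generically. This is not Arial's API — just a minimal illustration of deterministic variant assignment, where hashing the user and experiment IDs together buckets each user into a stable variant with no stored state (the function and variant names are hypothetical):

```typescript
import { createHash } from "node:crypto";

// Deterministically assign a user to a variant: the same user in the
// same experiment always lands in the same bucket, with no database.
function assignVariant(
  experiment: string,
  userId: string,
  variants: string[] = ["control", "treatment"],
): string {
  const digest = createHash("sha256")
    .update(`${experiment}:${userId}`)
    .digest();
  // Interpret the first 4 bytes of the hash as an unsigned integer,
  // then map it onto a variant index.
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket];
}
```

Because assignment is a pure function of the IDs, every service that evaluates the experiment agrees on who sees what, and re-deploys don't reshuffle users mid-test.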


## Event taxonomy.

Agents work best with structure. Without a fixed schema, one session names an event `user_signup`, the next names it `signup_completed`, and your analytics drifts into a mess.

Arial gives your agent a canonical schema and a typed API to work against — same names, same shapes, every session. You can rely on Arial to keep your event tracking neat and consistent.
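As an illustration of what a typed event API buys you — the event names, payload shapes, and `track` function below are hypothetical, not Arial's actual schema — the compiler itself can enforce "same names, same shapes, every session":

```typescript
// Hypothetical canonical taxonomy: every event name and its payload
// shape is declared once, in one place.
type EventSchema = {
  signup_completed: { plan: "free" | "pro"; referrer?: string };
  checkout_started: { cartValue: number; currency: string };
};

// `track` only accepts event names declared in the schema, and the
// payload must match that event's shape — a typo like `user_signup`
// fails at compile time instead of drifting into the data.
function track<E extends keyof EventSchema>(
  event: E,
  payload: EventSchema[E],
): { event: E; payload: EventSchema[E] } {
  // A real client would enqueue the event for delivery; this sketch
  // just returns the validated record.
  return { event, payload };
}
```

With this shape, an agent writing instrumentation gets autocomplete for the canonical names, and any session that invents a new spelling simply doesn't compile.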


## We're in early access.

Join the cohort, put Arial to work on your product, and tell us what's missing. Your feedback shapes what ships next.


## Drop it in.

Run `npx @arial-ai/cli init` in your project — or ask your agent to do it for you. Your workspace is live in under a minute, and you can claim it from your inbox whenever you're ready.

For agents: `/llms.txt` · `/llms-full.txt` · `/docs` · `/docs/taxonomy.json`