Integration
ARDA was built from the ground up for AI agents to use as their primary tool for scientific discovery. It is not a human tool retrofitted with an API—it is an AI-native engine where every operation, from ingestion through ledger-backed claims, was designed for programmatic invocation. You supply the questions, datasets, and orchestration—including your own agents. ARDA supplies the discovery computation and governed outputs your workflows consume. Human researchers benefit equally from the same architecture.
This page is for developers and platform teams connecting ARDA over REST, Python, the Model Context Protocol, or the shell.

Access surfaces
The same core is reachable over HTTP, Python, MCP, and the CLI. Pick the surface that matches your stack; semantics and ledger behavior stay consistent across all of them.
REST API
Integrate any stack with a versioned HTTP surface. Runs, sessions, campaigns, artifacts, and claims are exposed as resources with predictable URLs and methods, rather than as opaque chat transcripts.
The OpenAPI specification documents request and response shapes for each operation. Authentication uses API keys scoped to organization and project. Synchronous reads combine with long-running jobs, and optional callbacks cover the cases where your policies require human or system approval before state is mutated.
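As a concrete sketch of the resource-plus-jobs pattern, the snippet below submits a run and polls it until it settles. The endpoint paths, field names, and injected transport are illustrative assumptions, not the documented API; consult the OpenAPI specification for the real shapes.

```python
from typing import Callable

BASE = "https://arda.example.com/v1"  # hypothetical base URL

def submit_run(post: Callable[[str, dict], dict], project: str, payload: dict) -> str:
    # Runs are a project-scoped resource collection; POST creates one
    # and returns its id for later reads. (Assumed path and fields.)
    return post(f"{BASE}/projects/{project}/runs", payload)["id"]

def poll_run(get: Callable[[str], dict], project: str, run_id: str) -> dict:
    # Long-running jobs stay readable at a stable URL until they settle.
    while True:
        run = get(f"{BASE}/projects/{project}/runs/{run_id}")
        if run["status"] in ("completed", "failed"):
            return run
```

The injected `post`/`get` callables stand in for your HTTP client of choice; in production they would attach the project-scoped API key.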
Python SDK
Typed client libraries mirror the API for notebooks, batch jobs, and services. Install with pip and use the same models the server validates, so local code and remote behavior stay aligned.
ARDAClient supports synchronous usage; AsyncARDAClient fits asyncio pipelines. Request and response types follow the OpenAPI definitions. Helpers for retries, timeouts, and structured errors suit batch pipelines and continuous integration.
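The retry-helper pattern can be pictured with a small stand-alone sketch. The `TransientError` type, backoff constants, and function shape below are assumptions chosen to illustrate the pattern, not the shipped SDK helpers.

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable SDK error (timeouts, rate limits, and similar)."""

def with_retries(fn, attempts=3, base_delay=0.01):
    # Retry transient failures with exponential backoff; anything else,
    # or the final failure, propagates to the caller as-is.
    for i in range(attempts):
        try:
            return fn()
        except TransientError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)
```

In a batch pipeline, wrapping each client call this way keeps CI runs resilient to transient faults without masking real errors.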
MCP
Connect through the standard Model Context Protocol so agent hosts can list tools, invoke discovery operations, and read structured results without custom glue for every integration.
Compatible hosts discover ARDA via well-known endpoints. Tool definitions include parameters, return shapes, and whether an operation reads or writes. Session-scoped credentials and project policies apply the same rules as direct API access.
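Because tool definitions carry a read/write flag, a host can assemble a least-privilege tool list before any invocation. The sketch below assumes a simple `readOnly` boolean on each definition; the actual manifest field names may differ.

```python
# Assumed example tool definitions, shaped for illustration only.
tools = [
    {"name": "get_run_status", "readOnly": True},
    {"name": "submit_run", "readOnly": False},
    {"name": "list_claims", "readOnly": True},
]

def read_only_tools(tool_defs):
    # Expose only non-mutating tools to an agent in a read-only session.
    return [t["name"] for t in tool_defs if t.get("readOnly")]
```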
CLI
Operators and automation use a terminal-first workflow with the same surface area as the API: submit jobs, inspect status, fetch artifacts, and script promotion steps.
Exit codes and machine-readable output support CI/CD. Configuration profiles separate development, staging, and production. Long-running discovery jobs can be followed via streaming or polling, consistent with the REST job model.
Agent sessions

Agent sessions are persistent containers for work driven by your agents or operators. Each session moves through lifecycle stages—planning, ready, running, and completed—so status is visible to people and automation. Task queues carry units of work with heartbeats; stalled or partial work can be detected and resumed according to your policies.
Lineage links sessions to datasets, runs, and claims so context does not depend on a transient chat buffer alone. State supports full CRUD through the API: external orchestrators can snapshot, fork, or archive sessions. After restarts or handoffs, sessions can recover profiling summaries, pending approvals, and in-flight campaigns while preserving continuity from one discovery pass to the next.
Autonomy policies
Policies attach at project and campaign scope. They define experiment approval gates, budget ceilings, safety boundaries, and what automated workflows may read versus change.
Sensitive steps—starting interventions, pulling external data, or promoting claims to a publish tier—can require explicit approval. Gates are versioned policy objects with attribution, so audits record who or what authorized each transition.
Limits on compute time, API usage, parallel runs, and downstream spend can apply at run, campaign, and organization scope. When a ceiling is reached, execution stops in a fail-closed manner with partial ledger entries that explain consumption up to the limit.
You can declare allowed state spaces, forbidden interventions, and environmental invariants. Proposed actions are checked against these rules before execution so automated workflows stay inside the envelope you defined.
Policies separate read-only inspection from mutating operations. Your agents can be limited to status and artifact queries, or allowed to submit runs and update session state, according to role and environment.
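Put together, a pre-execution check against such a policy might look like the sketch below. The policy fields and action names are invented for illustration; real policy objects are versioned and attributed as described above.

```python
# Hypothetical policy shape combining the boundaries described above.
policy = {
    "forbidden_interventions": {"delete_dataset"},       # safety boundary
    "requires_approval": {"promote_claim", "start_intervention"},
    "budget_cpu_hours": 100,                             # hard ceiling
}

def action_allowed(action, spent_cpu_hours, approved=False):
    # Fail closed: forbidden actions, exhausted budgets, and unapproved
    # gated actions are all rejected before execution.
    if action in policy["forbidden_interventions"]:
        return False
    if spent_cpu_hours >= policy["budget_cpu_hours"]:
        return False
    if action in policy["requires_approval"] and not approved:
        return False
    return True
```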
Auto-discovery
ARDA serves /.well-known/ai-plugin.json and /.well-known/mcp.json. Compatible hosts load these documents to learn base URLs, authentication expectations, and capability summaries without copying static configuration into every client.
Capability manifests describe available tools and resources: names, parameters, return shapes, and whether an operation reads state or changes it. When the host follows these standards, agents can discover ARDA without manual wiring. Upstream callers can determine what they are permitted to invoke before calling the API, which supports security review and least-privilege integration.
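The discovery flow amounts to fetching a well-known document and reading capabilities from it. The /.well-known paths are served by ARDA as described above; the document fields in this sketch are assumptions, so treat them as a shape, not a schema.

```python
import json

# A hypothetical /.well-known/mcp.json body, inlined here so the sketch
# runs without a network call.
manifest = json.loads("""{
  "base_url": "https://arda.example.com",
  "auth": {"type": "api_key"},
  "capabilities": ["runs", "sessions", "campaigns", "claims"]
}""")

def can_invoke(capability):
    # A host can check what it may call before touching the API.
    return capability in manifest["capabilities"]
```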
Campaigns

A campaign groups multiple discovery runs under one plan. Phases state intent and entry criteria: broad exploratory search, focused causal interrogation, consolidation under stricter validation, or handoff to external validation. Each phase reads ledger entries from earlier phases so hypotheses evolve instead of restarting from empty context.
Cross-run knowledge transfer carries forward promoted claims, null results, and partial models as inputs to later runs. Adaptive execution adjusts scheduling and resource allocation within policy when validation fails early, a mode underperforms on the current profile, or a new claim suggests a narrower follow-up. Those decisions are logged with metadata so reviewers can see why the plan changed.
Milestones with explicit criteria tied to governance tiers and budgets.
Ledger-linked context so later runs build on earlier evidence.
Replanning inside policy when data or validation points to a different next step.
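Milestone-gated phase advancement can be sketched as below. The phase names, criteria predicates, and state fields are illustrative assumptions rather than ARDA's campaign schema.

```python
# Hypothetical campaign plan: each phase states its entry criteria.
phases = [
    {"name": "exploratory", "entry": lambda s: True},
    {"name": "causal", "entry": lambda s: s["promoted_claims"] >= 3},
    {"name": "consolidation", "entry": lambda s: s["validation_pass_rate"] >= 0.8},
]

def next_phase(current, state):
    # Advance only when the next phase's entry criteria hold for the
    # ledger-derived state; otherwise stay in place and replan.
    names = [p["name"] for p in phases]
    i = names.index(current)
    if i + 1 < len(phases) and phases[i + 1]["entry"](state):
        return names[i + 1]
    return current
```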
Integration outcomes
The same APIs and policies support builders, scientists, and compliance-heavy deployments—without forking your integration model.
Next steps
Use the documentation for endpoint reference, SDK usage, MCP tool listings, and policy examples. Open an account when you are ready to run your first session or campaign.