Command-Line Interface
The arda CLI gives you full access to discovery orchestration, governance, and artifact management from any terminal — optimized for operators, HPC clusters, and CI/CD pipelines where a browser is the wrong interface.
Every platform operation — projects, datasets, campaigns, runs, claims, ledger, artifacts — maps to a focused subcommand with consistent flags and structured output.
Human-readable tables for exploration, JSON for jq pipelines, CSV for spreadsheets, NDJSON for streaming ingestion. Field names are stable across CLI versions.
Meaningful exit codes, idempotent operations, and API key auth make the CLI safe to embed in Makefiles, SLURM scripts, Airflow DAGs, and GitHub Actions.
Governance policies apply to CLI sessions the same way they apply to API and MCP calls. Promotions require rationale. The ledger records everything.
The arda binary is available through pip, Homebrew, or as a standalone download. No runtime dependencies — a single binary covers macOS, Linux, and Windows.
```shell
# pip (includes Python SDK + CLI entry point)
pip install arda-sdk

# Homebrew
brew install vareon/tap/arda

# Standalone binary
curl -fsSL https://get.arda.vareon.com | sh

# Verify installation
arda version
```
```
$ arda version
arda 1.4.0
API version: 2026-03-01
Base URL:    https://api.arda.vareon.com
```

Two authentication modes — interactive login for human operators and API keys for automation. Both produce the same authorization context and are subject to the same governance policies.
Opens a browser-based auth flow. Tokens are stored in the system keychain (macOS/Linux) or credential manager (Windows) and refreshed automatically. You authenticate once; the CLI handles renewal.
Set ARDA_API_KEY in your environment. Keys can be scoped to a project and rotated without downtime by configuring both old and new keys during the rotation window.
```
# Interactive login (opens browser)
$ arda auth login
✓ Authenticated as faruk@vareon.com (Vareon Research)

# Check current session
$ arda auth status
User:     faruk@vareon.com
Tenant:   Vareon Research
Token:    valid (expires in 23h 41m)
Keychain: macOS Keychain

# Rotate API key
$ arda auth rotate-key
Old key: arda_key_proj01_a8f2... (valid for 24h)
New key: arda_key_proj01_k9m1... (active now)

# Logout and clear stored credentials
$ arda auth logout
✓ Session cleared from keychain
```
Commands are organized into families that mirror the platform's resource hierarchy. Each family groups related operations under a common noun.
Manage authentication — login, logout, check status, and rotate API keys.
arda auth login
arda auth logout
arda auth status
arda auth rotate-key [--project <id>]
Manage projects: create, list, inspect, archive. Projects are the top-level container for datasets, campaigns, and governance policies.
arda projects list [--status active|archived] [--format table|json]
arda projects create --name "Project Name" [--description "..."]
arda projects inspect <project-id>
arda projects archive <project-id>
```
$ arda projects list
ID         NAME                STATUS    DATASETS  CAMPAIGNS  CREATED
proj_a1b2  Pendulum Dynamics   active    3         2          2026-02-15
proj_c3d4  Fluid Turbulence    active    7         4          2026-01-08
proj_e5f6  Climate Feedbacks   archived  12        6          2025-11-20
```
Upload, profile, list, and inspect datasets. Supports CSV, Parquet, and HDF5 formats. The profile command runs statistical analysis and recommends discovery modes.
arda datasets upload --project <id> --file data.csv --name "Dataset Name"
arda datasets profile <dataset-id>
arda datasets list --project <id>
arda datasets inspect <dataset-id>
```
$ arda datasets profile ds_29xK4m
Dataset: pendulum_timeseries (ds_29xK4m)
Rows: 15,000   Columns: 8

COLUMN  TYPE     NULLS  MIN     MAX     MEAN    STD
time    float64  0      0.000   30.000  15.000  8.660
theta   float64  0      -1.571  1.571   0.003   0.782
omega   float64  0      -5.124  5.089   -0.012  2.341
...

Quality score: 0.94
Recommended modes: symbolic, neuro_symbolic
```
Campaigns group related discovery runs under shared governance and budget constraints. Create with budget allocations, list with status filtering, and close when research is complete.
arda campaigns create --project <id> --name "Campaign Name" [--budget-hours 100]
arda campaigns list --project <id> [--status open|closed]
arda campaigns inspect <campaign-id>
arda campaigns close <campaign-id>
```
$ arda campaigns inspect camp_xyz
Campaign: Initial Sweep (camp_xyz)
Project:  proj_a1b2
Status:   open
Budget:   40/100 GPU-hours used
Runs:     12 completed, 2 running
Claims:   47 explore, 8 validate, 1 publish
Created:  2026-03-01T09:00:00Z
```
Submit discovery runs across all four modes, monitor pipeline stages, wait for completion, and cancel running jobs. Each mode accepts mode-specific parameters.
arda runs submit --campaign <id> --dataset <id> --mode symbolic [--max-complexity 8]
arda runs submit --campaign <id> --dataset <id> --mode neural [--epochs 500] [--hidden-dim 128]
arda runs submit --campaign <id> --dataset <id> --mode neuro_symbolic [--library-size 20]
arda runs submit --campaign <id> --dataset <id> --mode cde [--basis spline] [--knots 15]
arda runs status <run-id>
arda runs wait <run-id> [--timeout 3600]
arda runs cancel <run-id>
arda runs list --campaign <id> [--status running|completed|failed]
```
$ arda runs submit --campaign camp_xyz --dataset ds_29xK4m \
    --mode symbolic --max-complexity 8
✓ Run submitted: run_7fGh2p (symbolic)

$ arda runs status run_7fGh2p
Run:    run_7fGh2p
Mode:   symbolic
Status: running

STAGE                STATUS       PROGRESS  DURATION
ingestion            ✓ completed  100%      4s
feature_engineering  ✓ completed  100%      12s
symbolic_regression  ◉ running    62%       2m 18s
validation           ○ pending    —         —
claim_extraction     ○ pending    —         —

ETA: ~2m 25s remaining

$ arda runs wait run_7fGh2p
Waiting for run_7fGh2p... ████████████████████ 100%
✓ Completed in 4m 53s. 12 claims produced.
```

List, inspect, promote, and export claims. Filter by type (equation, causal_graph, conservation_law, invariant), tier, and producing run. Promotions require a rationale recorded in the Evidence Ledger.
arda claims list --project <id> [--type equation] [--tier explore] [--run <id>]
arda claims inspect <claim-id>
arda claims promote <claim-id> --tier validate --rationale "..."
arda claims export --project <id> [--format json|csv] [--tier validate]
```
$ arda claims list --project proj_a1b2 --type equation --tier explore
ID        TYPE      EXPRESSION                 FITNESS  COMPLEXITY  TIER
clm_Ax92  equation  d²θ/dt² = -(g/L)·sin(θ)    0.9987   5           explore
clm_Bx41  equation  θ̈ = -9.81·θ               0.9812   3           explore
clm_Cx73  equation  θ̈ = -g·sin(θ)/L + β·θ̇    0.9801   7           explore

$ arda claims promote clm_Ax92 --tier validate \
    --rationale "Exact pendulum equation recovered, physically interpretable"
✓ clm_Ax92 promoted: explore → validate
  Ledger entry: le_881f
```

Query the Evidence Ledger for governance events, verify entry integrity with content hashes, and export audit trails. Every promotion, negative control, and policy event is recorded immutably.
arda ledger query --project <id> [--event-type promotion] [--since 2026-01-01]
arda ledger verify <entry-id>
arda ledger export --project <id> --format json > audit.json
```
$ arda ledger query --project proj_a1b2 --event-type promotion
ENTRY    EVENT      RESOURCE  ACTOR             TIMESTAMP             HASH
le_881f  promotion  clm_Ax92  faruk@vareon.com  2026-03-28T10:12:00Z  sha256:9f86d0...
le_772e  promotion  clm_Dx55  agent:mcp-04a     2026-03-27T14:30:00Z  sha256:a3f1c2...
le_663d  promotion  clm_Ex88  faruk@vareon.com  2026-03-26T09:45:00Z  sha256:b7e2d4...

$ arda ledger verify le_881f
✓ Entry le_881f integrity verified (sha256:9f86d0...)
```
List, download, and stream artifacts produced by discovery runs — model checkpoints, Pareto front visualizations, export bundles, and intermediate results.
arda artifacts list --run <run-id> [--type checkpoint|visualization|export]
arda artifacts download <artifact-id> [--output ./local-path]
arda artifacts stream <artifact-id>
```
$ arda artifacts list --run run_7fGh2p
ID        TYPE           NAME                SIZE     HASH
art_kL3m  visualization  pareto_front.png    180 KB   sha256:3c7a2b...
art_mN4p  checkpoint     model_final.pt      12.4 MB  sha256:8d9e1f...
art_oP5q  export         results_bundle.zip  2.1 MB   sha256:f4a5b6...

$ arda artifacts download art_kL3m --output ./figures/
✓ Downloaded pareto_front.png → ./figures/pareto_front.png
```
Every command supports the --format flag. Field names in structured formats are stable across CLI versions — renames are backward-compatible for at least one major version.
- `table`: Default. Human-readable, column-aligned. Streams rows as they arrive.
- `json`: Single JSON object containing the full result. Ideal for jq processing.
- `csv`: RFC 4180 CSV with header row. Import directly into spreadsheets or pandas.
- `ndjson`: One JSON object per line. Streamable — downstream tools start processing before results finish.
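Because NDJSON arrives one object per line, ordinary line-oriented tools can act on each record before the listing finishes. A minimal sketch of that pattern, using a printed sample in place of live `arda claims list --format ndjson` output (the field names mirror the claim examples above; the records themselves are illustrative):

```shell
# Stand-in for: arda claims list --project proj_a1b2 --format ndjson
stream_claims() {
  printf '%s\n' \
    '{"id":"clm_Ax92","type":"equation","tier":"explore","fitness":0.9987}' \
    '{"id":"clm_Bx41","type":"equation","tier":"explore","fitness":0.9812}' \
    '{"id":"clm_Dx55","type":"causal_graph","tier":"validate","fitness":0.9610}'
}

# Each matching record is emitted as soon as its line arrives;
# nothing waits for the full listing to complete.
stream_claims | grep '"tier":"validate"'
```

In a real pipeline the same filter sits directly after the `arda` invocation, so downstream steps start while results are still streaming.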

The CLI is designed for embedding in automated pipelines. Below are production-ready patterns for common environments.
```bash
#!/bin/bash
set -euo pipefail

PROJECT_ID="proj_a1b2"
CAMPAIGN_ID="camp_xyz"

# Upload fresh dataset
DS_ID=$(arda datasets upload \
  --project "$PROJECT_ID" \
  --file ./data/experiment_batch_42.csv \
  --name "Batch 42" \
  --format json | jq -r '.dataset_id')

# Profile and verify quality
QUALITY=$(arda datasets profile "$DS_ID" --format json | jq -r '.quality.score')
if (( $(echo "$QUALITY < 0.8" | bc -l) )); then
  echo "Quality score $QUALITY below threshold" >&2
  exit 1
fi

# Submit symbolic discovery
RUN_ID=$(arda runs submit \
  --campaign "$CAMPAIGN_ID" \
  --dataset "$DS_ID" \
  --mode symbolic \
  --max-complexity 8 \
  --format json | jq -r '.run_id')

# Wait for completion (non-zero exit if run fails)
arda runs wait "$RUN_ID" --timeout 3600

# Export claims
arda claims list --project "$PROJECT_ID" --run "$RUN_ID" \
  --format json > "claims-$RUN_ID.json"

# Export audit trail
arda ledger export --project "$PROJECT_ID" \
  --format json > "ledger-$PROJECT_ID.json"

echo "Pipeline complete. Claims: claims-$RUN_ID.json"
```
```bash
#!/bin/bash
#SBATCH --job-name=arda-neural
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00
#SBATCH --output=arda-%j.log

module load python/3.11
export ARDA_API_KEY="$ARDA_API_KEY"

RUN_ID=$(arda runs submit \
  --campaign camp_hpc_01 \
  --dataset ds_large_sim \
  --mode neural \
  --epochs 1000 \
  --hidden-dim 256 \
  --format json | jq -r '.run_id')

arda runs wait "$RUN_ID" --timeout 14400

arda claims list --run "$RUN_ID" --format csv > "$SLURM_SUBMIT_DIR/claims.csv"

arda artifacts download "$(arda artifacts list --run "$RUN_ID" \
  --type checkpoint --format json | jq -r '.artifacts[0].artifact_id')" \
  --output "$SLURM_SUBMIT_DIR/model.pt"
```
```makefile
PROJECT  := proj_a1b2
CAMPAIGN := camp_xyz
DATASET  := ds_29xK4m
MODES    := symbolic neural neuro_symbolic cde

.PHONY: profile discover-all claims audit

profile:
	arda datasets profile $(DATASET) --format json > profile.json

discover-all: profile
	@for mode in $(MODES); do \
		echo "Submitting $$mode..."; \
		arda runs submit --campaign $(CAMPAIGN) \
			--dataset $(DATASET) --mode $$mode \
			--format json | jq -r '.run_id' >> run_ids.txt; \
	done
	@while read -r rid; do \
		arda runs wait "$$rid"; \
	done < run_ids.txt

claims: discover-all
	arda claims list --project $(PROJECT) \
		--format csv > claims.csv

audit: claims
	arda ledger export --project $(PROJECT) \
		--format json > audit.json
```
```python
from airflow.decorators import dag
from airflow.operators.bash import BashOperator
from datetime import datetime

@dag(schedule="@weekly", start_date=datetime(2026, 1, 1),
     catchup=False, tags=["arda"])
def arda_weekly_discovery():
    upload = BashOperator(
        task_id="upload_dataset",
        bash_command="""
            arda datasets upload --project {{ var.value.arda_project }} \
                --file /data/weekly/{{ ds }}.csv \
                --name "Weekly {{ ds }}" \
                --format json | jq -r '.dataset_id'
        """,
        env={"ARDA_API_KEY": "{{ var.value.arda_api_key }}"},
    )
    submit = BashOperator(
        task_id="submit_run",
        bash_command="""
            arda runs submit \
                --campaign {{ var.value.arda_campaign }} \
                --dataset {{ ti.xcom_pull(task_ids='upload_dataset') }} \
                --mode symbolic --max-complexity 10 \
                --format json | jq -r '.run_id'
        """,
        env={"ARDA_API_KEY": "{{ var.value.arda_api_key }}"},
    )
    wait = BashOperator(
        task_id="wait_for_run",
        bash_command="""
            arda runs wait {{ ti.xcom_pull(task_ids='submit_run') }} \
                --timeout 7200
        """,
        env={"ARDA_API_KEY": "{{ var.value.arda_api_key }}"},
    )
    upload >> submit >> wait

arda_weekly_discovery()
```

Configuration is read from three sources in order of precedence: command-line flags → environment variables → config file. Set defaults in the file, override in CI with env vars, and override further with flags for one-off commands.
Default location: ~/.arda/config.yaml. Override with --config flag.
```yaml
base_url: https://api.arda.vareon.com
default_project: proj_a1b2
default_format: table
timeout: 300
verbose: false
```
All config values map to ARDA_-prefixed env vars.
| Variable | Description |
|---|---|
| ARDA_BASE_URL | API base URL |
| ARDA_API_KEY | Authentication key |
| ARDA_DEFAULT_PROJECT | Default project ID |
| ARDA_DEFAULT_FORMAT | Output format |
| ARDA_TIMEOUT | Request timeout (seconds) |
| ARDA_VERBOSE | Enable verbose output |
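The precedence chain can be sketched in plain shell for a single setting, here the timeout: a flag value wins if present, otherwise the environment variable, otherwise the config-file value or built-in default. The `resolve_timeout` helper below is illustrative, not part of the CLI:

```shell
# Hypothetical illustration of flag > env > config precedence.
resolve_timeout() {
  local flag_value="$1"     # from --timeout, may be empty
  local config_value="$2"   # from ~/.arda/config.yaml, may be empty
  if [ -n "$flag_value" ]; then
    echo "$flag_value"                   # 1. command-line flag
  elif [ -n "${ARDA_TIMEOUT:-}" ]; then
    echo "$ARDA_TIMEOUT"                 # 2. environment variable
  else
    echo "${config_value:-300}"          # 3. config file (or built-in 300)
  fi
}

ARDA_TIMEOUT=600
resolve_timeout "" 300    # env beats config: prints 600
resolve_timeout 60 300    # flag beats env:   prints 60
```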
Exit codes are stable and documented. Scripts can branch on exit codes without parsing error messages.
| Code | Meaning | Description |
|---|---|---|
| 0 | Success | Command completed successfully. |
| 1 | General error | Unhandled error — check stderr for details. |
| 2 | Auth failure | Missing, expired, or invalid credentials. |
| 3 | Policy violation | Governance policy blocked the operation (tier ceiling, budget, allowlist). |
| 4 | Not found | Referenced resource (project, dataset, run, claim) does not exist. |
| 5 | Timeout | Operation exceeded the configured timeout (see --timeout flag). |
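In practice this means a script can distinguish a governance block from a transient failure without parsing stderr. A minimal sketch (the `handle_arda_exit` helper is illustrative; the invocation is guarded so the snippet also runs where arda is not installed):

```shell
# Map documented arda exit codes to script-level decisions.
handle_arda_exit() {
  case "$1" in
    0) echo "ok" ;;
    2) echo "auth: re-run 'arda auth login' or check ARDA_API_KEY" ;;
    3) echo "policy: blocked by governance, not retryable" ;;
    5) echo "timeout: run may still be in progress server-side" ;;
    *) echo "error: check stderr / --verbose output" ;;
  esac
}

# Example invocation, guarded so the sketch is runnable anywhere.
if command -v arda >/dev/null 2>&1; then
  arda claims promote clm_Ax92 --tier validate --rationale "..."
  handle_arda_exit "$?"
fi
```

A code-3 result, for instance, should skip retries entirely, while a code-5 result usually warrants polling `arda runs status` instead of failing the job.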
Tab completion for commands, flags, and resource IDs. Completions are generated from the CLI binary — no external files to keep in sync.
```shell
# Add to ~/.bashrc
eval "$(arda completion bash)"
```
```shell
# Add to ~/.zshrc
eval "$(arda completion zsh)"
```
```shell
# Save to completions dir
arda completion fish > ~/.config/fish/completions/arda.fish
```
Common issues and their fixes. Run any command with --verbose for detailed request/response logging.
No valid credentials found. Run arda auth login or set ARDA_API_KEY. Check arda auth status to verify token validity.
Your session's autonomy policy does not allow promotion to the requested tier. Contact a project admin to adjust the policy, or promote to a lower tier.
Cannot reach the API. Verify ARDA_BASE_URL or the base_url in ~/.arda/config.yaml. Run arda version --verbose to see the configured URL.
The CLI accepts CSV, Parquet, and HDF5. Ensure the file extension matches the actual format. For headerless CSVs, add --no-header to the upload command.
The arda runs wait command exceeded its timeout. Increase with --timeout flag or set ARDA_TIMEOUT. The run continues server-side — check with arda runs status <run-id>.
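One way to recover in a script is to re-poll rather than fail the whole job. A sketch of a retry wrapper, assuming `arda runs wait` exits with code 5 on timeout as documented in the exit-code table (`poll_until_done` is illustrative, not part of the CLI):

```shell
# Retry wrapper: exit code 5 means "still running", so re-poll
# instead of failing the surrounding job.
poll_until_done() {
  local run_id="$1" attempts="$2" i=1 rc
  while [ "$i" -le "$attempts" ]; do
    rc=0
    arda runs wait "$run_id" --timeout 600 || rc=$?
    if [ "$rc" -ne 5 ]; then
      return "$rc"            # completed, failed, or non-timeout error
    fi
    echo "attempt $i timed out; run still active" >&2
    i=$((i + 1))
  done
  return 5                    # still unfinished after all attempts
}
```

Call it as `poll_until_done run_7fGh2p 3` where a bare `arda runs wait` would otherwise abort the script on a slow run.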
Shell completions fetch resource IDs from the API. If the API is unreachable or slow, completions time out. Set ARDA_COMPLETION_TIMEOUT=2 to reduce wait time or disable dynamic completions with ARDA_COMPLETION_STATIC=1.
Need agent-native integration? Use the MCP server. Need programmatic access from Python? Use the SDK.