ACI Documentation
Use these docs when an AI system has to keep changing after launch. Start with ACI Inference for shared services, ACI Personal Agents for device-local memory, or ACI Edge Runtime for bounded local adaptation, then pick the API, SDK, CLI, or MCP surface that fits the workflow.

Quick links
Developer Surfaces
Start here when you are wiring ACI into a service, device, workflow, or operator tool.
Coverage
External behavior, supported deployment patterns, and the published product boundaries used across the site.
The docs explain how ACI keeps models useful after launch with plasticity, stability against catastrophic forgetting, and exact unlearning.
The main control surfaces are explicit operations such as bind, adapt, constrain, infer, unbind, rollback, and consolidation.
Documentation is organized around ACI Inference, ACI Personal Agents, ACI Edge Runtime, and the optional Safety & Policy add-on.
The docs start with workload shapes that have clear evaluation targets, then expand only where stronger evidence exists.
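The explicit operations named above (bind, adapt, constrain, infer, unbind, rollback, consolidation) can be pictured as a lifecycle. The sketch below is an in-memory stand-in, not the shipped SDK: the class name, method signatures, and return values are all assumptions; only the operation names come from the docs.

```python
# Hypothetical stand-in for the ACI control surface. Operation names match
# the docs; everything else here is illustrative.
from dataclasses import dataclass, field


@dataclass
class SketchClient:
    # Per-tenant adapter state: tenant id -> list of applied updates.
    adapters: dict = field(default_factory=dict)
    # Snapshots taken before each adapt, enabling rollback.
    history: dict = field(default_factory=dict)
    rules: list = field(default_factory=list)

    def bind(self, tenant: str) -> None:
        self.adapters.setdefault(tenant, [])
        self.history.setdefault(tenant, [])

    def adapt(self, tenant: str, update: str) -> None:
        # Snapshot current state so rollback can undo this update exactly.
        self.history[tenant].append(list(self.adapters[tenant]))
        self.adapters[tenant].append(update)

    def constrain(self, rule) -> None:
        # A rule is a predicate over the request; failing it means denial.
        self.rules.append(rule)

    def infer(self, tenant: str, prompt: str) -> str:
        if any(not rule(prompt) for rule in self.rules):
            return "denied"
        return f"base+{len(self.adapters.get(tenant, []))} updates: {prompt}"

    def rollback(self, tenant: str) -> None:
        # Restore the snapshot taken before the most recent adapt.
        if self.history[tenant]:
            self.adapters[tenant] = self.history[tenant].pop()

    def unbind(self, tenant: str) -> None:
        # Exact removal: all tenant-specific state is deleted.
        self.adapters.pop(tenant, None)
        self.history.pop(tenant, None)
```

The point of the sketch is the pairing: every adapt has a matching rollback, and unbind deletes tenant state entirely rather than masking it, which is what makes per-tenant deletion and rollback tractable without retraining the backbone.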
Product Reference
These are the public ACI offerings and the default patterns recommended for evaluating them first.
ACI Inference
Use when one shared AI service needs customer-specific updates, rollback, and deletion without cloning the whole model or retraining the backbone.
Recommended starting profile: shared-service tenant updates first, with memory added only after a measured recall lift.
ACI Personal Agents
Use when memory, personalization, snapshot, restore, and erase must stay on the user's own device.
Recommended starting profile: local controller with memory and local persistence on.
ACI Edge Runtime
Use when robots and embedded systems need bounded local adaptation under strict latency, memory, power, and packaging constraints.
Recommended starting profile: compact edge mode first, then enable safety enforcement for control or robotics.
Safety & Policy add-on
Add hard denial, route restriction, or signed evidence only when the deployment boundary requires them.
Recommended starting profile: language-facing equality rules or control-region shields, depending on the surface.
Install and Deployment
The obtain path and runtime shape are product-specific: service deployment for Inference, local bundle embedding for Personal Agents, native artifact delivery for Edge Runtime, and attach-layer enablement for policy.
ACI Inference
Obtain
Managed service engagement or licensed private `aci-engine` wheel plus service container assets.
Run
Run as the shared-service API with PostgreSQL-backed state, health probes, and tenant or operator APIs.
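Since the shared-service deployment exposes health probes, routing should gate on them. The sketch below assumes probe names (`postgres`, `engine`, `tenant_api`) and an `"ok"` status convention for illustration; the published probe schema may differ.

```python
# Hypothetical readiness gate for the shared-service deployment.
# Probe names and status values are assumptions, not the real schema.
def ready(probes: dict) -> bool:
    """Route traffic only when every required subsystem reports healthy."""
    required = ("postgres", "engine", "tenant_api")
    return all(probes.get(name) == "ok" for name in required)


# Example: the engine is still warming up, so the service is not routable.
print(ready({"postgres": "ok", "engine": "warming", "tenant_api": "ok"}))  # False
```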
ACI Personal Agents
Obtain
Licensed `aci-engine` wheel embedded into the local application or agent bundle.
Run
Run through the Personal Agents SDK with local persistence on the device or workstation.
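The on-device guarantees for Personal Agents (memory, snapshot, restore, erase staying local) can be sketched as a small store. This class is a hypothetical illustration of the semantics, not the Personal Agents SDK surface; all names are assumptions.

```python
# Hypothetical on-device memory store illustrating snapshot/restore/erase
# semantics. Everything stays in local process state; nothing leaves the
# device. Not the real SDK API.
import copy


class LocalMemory:
    def __init__(self):
        self._facts: dict = {}
        self._snapshots: list = []

    def remember(self, key: str, value: str) -> None:
        self._facts[key] = value

    def snapshot(self) -> int:
        # Returns an index that restore() can later rewind to.
        self._snapshots.append(copy.deepcopy(self._facts))
        return len(self._snapshots) - 1

    def restore(self, index: int) -> None:
        self._facts = copy.deepcopy(self._snapshots[index])

    def erase(self) -> None:
        # Full local wipe: current facts and all snapshots are deleted.
        self._facts.clear()
        self._snapshots.clear()
```

The design choice worth noting: erase removes snapshots as well as live state, so a wipe cannot be undone by restoring an old snapshot.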
ACI Edge Runtime
Obtain
Licensed `aci-engine` wheel for build tooling plus generated native runtime and embedder package artifacts.
Run
Build and ship the compiled runtime inside the robot, industrial app, or embedded product.
Safety & Policy add-on
Obtain
Ships with the same wheel and runtime artifacts as the host surface rather than as a separate installer.
Run
Attach through `constrain`, proof flows, certificates, or native rule hooks on the existing host deployment.
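A hard-denial rule attached through a constrain-style hook can be sketched as below. The factory function, rule shape, and `"allow"`/`"deny"` decision values are assumptions for illustration; only the idea of attaching rules via `constrain` comes from the docs.

```python
# Sketch of a hard-denial rule of the kind attached via `constrain`.
# Names and decision values are hypothetical, not the real policy API.
def make_denier(blocked_terms):
    def rule(request: str) -> str:
        # Hard denial: a matching request never reaches the model.
        if any(term in request.lower() for term in blocked_terms):
            return "deny"
        return "allow"
    return rule


guard = make_denier({"export weights", "raw pii"})
print(guard("summarize this ticket"))   # allow
print(guard("Export weights to disk"))  # deny
```

Because the rule runs before inference, denial is unconditional rather than a soft preference the model could override.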
Limits
The published evidence supports specific product claims. It is not a blanket guarantee for every workload.
Common References
Overview of how ACI handles post-launch model change across shared services, personal devices, and edge systems.
Use cases and operating problems that map to ACI Inference, Personal Agents, Edge Runtime, and the optional policy layer.
Industry pages covering shared services, personal devices, robotics, edge systems, and regulated operations.
Public benchmark page for shared-service inference with accuracy parity, tenant isolation, rollback via unbind, and tenant delete.
Plain-language explanation of stability, plasticity, and editability, along with how to interpret the currently stated benchmark results.