Local Setup Guide
Install the Complete AI Assistant stack with one verified command and confirm the local model is functioning before you move on.
One-command installation
Run `curl -sSL install.completeassistant.ai | bash` to pull containers, configure environment variables, and download the compact local language model.
- Preflight checks confirm disk space, ports, and CPU/GPU availability.
- Installer never transmits local data; outbound network is limited to package downloads.
- Artifacts are signed and checksummed so you can verify integrity offline.
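If you'd rather not pipe a remote script straight into `bash`, you can download it first and check it against the published checksum before running it. This is a minimal sketch: `<published-sha256>` is a placeholder, since this guide doesn't state where the checksum is published, so take the value from the signed release artifacts.

```bash
# Fetch the installer to a file instead of piping it straight into bash.
curl -sSL -o install.sh install.completeassistant.ai

# Verify against the published SHA-256 value, then run the installer only if it matches.
# <published-sha256> is a placeholder; copy the value from the signed release artifacts.
echo "<published-sha256>  install.sh" | sha256sum --check && bash install.sh
```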
Verification steps
When setup completes, the installer automatically runs the smoke-test harness to confirm that the local model returns a non-echo response and that tool catalogs are reachable.
- `mcp doctor` validates background services, database migrations, and policy configuration; you can re-run it yourself at any time, as sketched after this list.
- Evaluation harness confirms tool allowlists and policy modes are loaded.
- All results stream into the episodic log for future audits.
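The sketch below relies only on `mcp doctor` from this guide; the output filename and the assumption that `mcp doctor` exits non-zero when a check fails are illustrative.

```bash
# Re-run the service/migration/policy checks and keep a copy of the output
# next to the install log. The doctor-rerun.log filename is arbitrary.
set -o pipefail            # fail the pipeline if mcp doctor fails, not just tee
mkdir -p ~/.mcp/logs
if mcp doctor | tee ~/.mcp/logs/doctor-rerun.log; then
  echo "mcp doctor passed"
else
  # Assumes mcp doctor exits non-zero when a check fails.
  echo "mcp doctor reported failures; see ~/.mcp/logs/doctor-rerun.log" >&2
fi
```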
Troubleshooting
If verification fails, inspect the detailed log at `~/.mcp/logs/install.log` and review our GPU, SELinux, and WSL2 remediation steps.
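A quick first pass with standard tools often narrows the failure down before you reach for the remediation steps; the search patterns below are generic starting points, not strings the installer is guaranteed to emit.

```bash
# Show the end of the install log, then pull out likely failure lines.
tail -n 50 ~/.mcp/logs/install.log
grep -inE 'error|fail|denied|timeout' ~/.mcp/logs/install.log | tail -n 20
```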
- Override the default model choice with `MCP_MODEL=medium` when you have more local compute.
- Use `--offline` to skip remote checksums when running in isolated networks; a re-run combining both overrides is sketched after this list.
- Contact support through `/contact?context=setup` and include the install log hash (see the hashing example below).
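Both overrides can be combined when you re-run the installer. Whether the script accepts `--offline` as an argument passed via `bash -s --` is an assumption here; adjust the form if the installer expects it differently.

```bash
# Re-run the installer with a larger local model and without remote checksum fetches.
# Exporting MCP_MODEL makes it visible to the bash process that runs the script;
# passing --offline through `bash -s --` assumes the script reads CLI arguments.
export MCP_MODEL=medium
curl -sSL install.completeassistant.ai | bash -s -- --offline
```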
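To produce the hash, point a checksum tool at the install log. SHA-256 via `sha256sum` is an assumption, since this guide doesn't name the algorithm support expects.

```bash
# Hash the install log to include with a /contact?context=setup request.
# SHA-256 is an assumption; use whatever algorithm support asks for.
sha256sum ~/.mcp/logs/install.log
```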