Documentation Index
Fetch the complete documentation index at: https://docs.aperium.apps.hillspire.com/llms.txt
Use this file to discover all available pages before exploring further.
This section applies only if you’re running Aperium on your own infrastructure. We also offer a fully hosted version of Aperium that we operate for you. If that’s the option you want, you can skip the deployment docs entirely and reach out to us to get set up.
Requirements
Runtimes and services Aperium depends on.
Google Cloud
Deploy the prod-style topology on GKE Autopilot.
On-prem Kubernetes
Run inside your network boundary with in-cluster MCP services and a local LLM.
Environment variables
Every backend and frontend env var, grouped by purpose.
Picking a path
- Google Cloud. The supported cloud production deployment, provisioned and rolled out with Terraform, ArgoCD, and Helm against a GKE Autopilot cluster.
- On-prem Kubernetes. A Kubernetes cluster you own, running Aperium with each enabled connector exposed as an in-cluster HTTP MCP service and the primary LLM served locally inside your network boundary. See On-prem overview for the requirements contract.
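The on-prem bullet above relies on the local model being served behind the standard OpenAI-compatible HTTP API. As a rough sketch of what that contract means in practice (the in-cluster service name, port, and model name below are made-up placeholders, not values from these docs), a chat-completion call against such a server looks like:

```python
import json
import urllib.request

# Hypothetical in-cluster endpoint for the local model server; the real
# service name and port are deployment-specific.
BASE_URL = "http://llm-server.aperium.svc.cluster.local:8000"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Any server that speaks this wire format (vLLM, llama.cpp's server, and similar) satisfies the shape shown here, which is why the on-prem path can require "an OpenAI-compatible model server" rather than a specific vendor.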
GCP vs. on-prem at a glance
The two paths share the same in-cluster application shape: Aperium frontend, backend, document worker, optional scheduler and cleanup jobs, in-cluster MCP services, and the supporting data services. They differ in what backs the platform:

| Layer | Google Cloud | On-prem Kubernetes |
|---|---|---|
| Cluster | GKE Autopilot | Your own Kubernetes cluster |
| Database | Cloud SQL | Managed or operator-owned PostgreSQL |
| Secrets | Secret Manager + External Secrets | Vault, External Secrets, Sealed Secrets, or approved Kubernetes Secret flow |
| File storage | GCS bucket | RWX PVC or supported object-store replacement |
| Ingress / WAF | GKE Gateway + Cloud Armor | Ingress controller, internal LB, WAF, firewall policy |
| Image registry | Artifact Registry | Private registry mirrored inside your network boundary |
| Identity | Workload Identity | Kubernetes service accounts plus your IAM/RBAC |
| LLM | Cloud-hosted providers | Local OpenAI-compatible model server inside your network boundary |
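To make the file-storage row of the table concrete, here is a minimal sketch of how a backend could select its storage layer at startup from the environment. The variable names (`STORAGE_GCS_BUCKET`, `STORAGE_PVC_PATH`) and the default mount path are illustrative assumptions, not Aperium's documented configuration:

```python
from dataclasses import dataclass

@dataclass
class StorageConfig:
    kind: str      # "gcs" (bucket) or "pvc" (RWX volume)
    location: str  # bucket name or mount path

def resolve_storage(env: dict) -> StorageConfig:
    """Pick a file-storage backend from environment variables.

    STORAGE_GCS_BUCKET and STORAGE_PVC_PATH are placeholder names used
    only for this sketch.
    """
    if bucket := env.get("STORAGE_GCS_BUCKET"):
        # GCP path: object storage in a GCS bucket.
        return StorageConfig(kind="gcs", location=bucket)
    # On-prem path: an RWX PVC mounted into every pod that touches files.
    return StorageConfig(kind="pvc", location=env.get("STORAGE_PVC_PATH", "/data"))

print(resolve_storage({"STORAGE_GCS_BUCKET": "aperium-files"}).kind)  # gcs
print(resolve_storage({}).location)  # /data
```

The point of the sketch is that only this seam changes between the two columns; the application shape above it stays the same.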