Many organizations already run their own internal BigQuery MCP server with custom guardrails, query routing, or data-mesh metadata baked in. If that’s you, register that server through Aperium’s Custom integrations flow instead of using this built-in connector. The rest of this page covers the built-in BigQuery connector that ships with Aperium.
What you’ll need
- A Google Cloud project that contains the BigQuery datasets you want Aperium to read.
- An admin account in that project that can enable APIs and grant IAM roles.
- A decision about which connection method to use (see below).
Choose a connection method
Aperium supports two ways to authenticate to BigQuery. The Connection Method dropdown in the admin form switches between them.

Application Default Credentials (ADC)
Aperium uses the identity of the environment it’s running in (for example a GKE Workload Identity service account, a Compute Engine service account, or a GOOGLE_APPLICATION_CREDENTIALS env var). No service account JSON key needs to be uploaded. Best for self-hosted Aperium deployments running inside Google Cloud where you can attach an identity to the workload directly.

GCP Service Account JSON
An admin generates a service account key in Google Cloud and pastes the JSON into Aperium. Aperium stores the key encrypted against the tenant. Best when Aperium runs outside Google Cloud, when you want explicit per-tenant credential isolation, or when attaching an environment identity isn’t an option.
Setup
Enable the BigQuery API
In the Google Cloud Console, select the project that owns the datasets you want Aperium to read. Open APIs & Services and enable the BigQuery API if it isn’t already on. From the command line:
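```bash
# Enable the BigQuery API in the project that owns your datasets
# (replace PROJECT_ID with your own project ID).
gcloud services enable bigquery.googleapis.com --project=PROJECT_ID
```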
Grant the required IAM roles
Aperium’s BigQuery identity (whether ADC or a service account) needs two IAM roles:
- BigQuery Data Viewer (roles/bigquery.dataViewer) — list and read tables.
- BigQuery Job User (roles/bigquery.jobUser) — submit query jobs.
If you’re using the service account method, this is also the point to create the service account and download its JSON key. Keep the JSON key safe; you’ll paste it into Aperium in the last step. Treat it like a secret.
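If you prefer the command line, the role grants look roughly like this (a sketch; substitute your own project ID and the email of the service account or workload identity Aperium will use):

```bash
# Grant read access to table data and metadata.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/bigquery.dataViewer"

# Allow the identity to submit query jobs.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/bigquery.jobUser"
```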
Plan your query scopes
A “scope” is one entry in an allowlist of datasets Aperium is allowed to query. Every BigQuery tool call must reference a scope by name. Plan one scope per dataset you want exposed.
Each scope needs four fields:
- `name` — A short identifier you’ll see in tool calls (for example `sales`, `revops`).
- `project` — The GCP project that owns the dataset.
- `dataset` — The BigQuery dataset name.
- `location` — The dataset’s BigQuery location (for example `US`, `EU`, `asia-southeast1`). Defaults to `US` if omitted, but it’s safer to be explicit.
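As a sketch, a Query Scopes value exposing two datasets might look like this (the project and dataset names are placeholders, not values the connector requires):

```json
[
  { "name": "sales",  "project": "acme-analytics", "dataset": "sales_mart",       "location": "US" },
  { "name": "revops", "project": "acme-analytics", "dataset": "revops_reporting", "location": "EU" }
]
```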
Open the BigQuery setup form in Aperium
Open Aperium and go to either the admin onboarding flow (first sign-in) or the Admin Console’s MCP Servers tab (any time after). Open the Connect Aperium to BigQuery form. Set the Connection Method to whichever option you chose above, then fill in the rest of the form.
Fill in the form (Application Default Credentials)
With Connection Method set to Application Default Credentials:
- GCP Project ID. The project where Aperium should run query jobs (for example `acme-analytics`).
- Query Scopes (JSON). The JSON array you planned in step 3.
- Max Bytes Billed (optional). A per-query budget cap in bytes. Leave blank to use the server default (5 GB).
- Query Timeout (seconds, optional). Leave blank to use the server default (30 seconds, max 600).
- Max Result Rows (optional). Leave blank to use the server default (500 rows).

Fill in the form (GCP Service Account)
With Connection Method set to GCP Service Account:
- GCP Project ID, Query Scopes, Max Bytes Billed, Query Timeout, Max Result Rows. Same as the ADC method.
- Service Account JSON. Paste the entire JSON key you downloaded in step 2 (the file should start with `{"type": "service_account", ...}`).

What agents can do
Once configured, agents can call four tools:
- List scopes. Get the allowlist of datasets they’re allowed to query.
- List tables. List tables inside a chosen scope.
- Describe table. Inspect a table’s schema (columns, types, descriptions).
- Run SQL. Execute a read-only SQL query against a chosen scope, capped by Max Bytes Billed, Query Timeout, and Max Result Rows.
Write statements (INSERT, UPDATE, DELETE, MERGE, CREATE, DROP) are blocked at the SQL parser layer, so agents can’t mutate data through this connector even if a scope’s IAM role would otherwise allow it.
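As an illustration, an agent might pair the `sales` scope with a bounded, read-only query like the one below; the table and column names are made up, and exactly how table references resolve against the scope’s dataset is up to the connector:

```sql
-- Hypothetical revenue rollup against a table in the "sales" scope.
SELECT region, SUM(amount) AS total_revenue
FROM orders
WHERE order_date >= DATE '2024-01-01'
GROUP BY region
ORDER BY total_revenue DESC
LIMIT 20;
```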
Notes
- Editing scopes. To add or remove datasets from the allowlist later, open the Admin Console’s MCP Servers tab and click the pencil icon next to BigQuery to update the Query Scopes JSON.
- Multiple projects. If your scopes span multiple GCP projects, the GCP Project ID field is the project Aperium uses to bill query jobs. Make sure the BigQuery identity has the BigQuery Job User role in that project.
- Key rotation. If you used the service account method, rotate the JSON key periodically and re-paste the new key into the same form.