The nao init command sets up your context repository with all necessary files and structure.

Run nao init:

```bash
nao init
```
If nao_config.yaml already exists, nao init runs an update flow and lets you adjust existing configuration values instead of starting from scratch.

The command will guide you through an interactive setup:

1. Project Name
```
What is the name of your project?
> my-analytics-agent
```
2. Database Connection (Optional)
```
Do you want to connect a database? [y/N]
> y

Select your database type:
  1. Snowflake
  2. BigQuery
  3. Databricks
  4. PostgreSQL
  5. Redshift
  6. MySQL
```
If you select yes, you'll be prompted for connection details specific to your database type.

3. Repository Context (Optional)
```
Do you want to add a repository to your agent context? [y/N]
> y

Repository URL:
> https://github.com/your-org/dbt-project

Path within repo (optional):
> models/
```
4. LLM API Key (Optional)
```
Do you want to add an LLM key? [y/N]
> y

Select your LLM provider:
  1. OpenAI
  2. Anthropic
  3. Mistral
  4. Gemini
  5. OpenRouter
  6. Ollama
  7. AWS Bedrock
```
If you use the Ollama provider, you can skip adding an LLM key, since Ollama does not require one.

5. Slack Integration (Optional)
```
Do you want to setup a Slack connection? [y/N]
> y
```
You can skip any optional step and configure it later by editing nao_config.yaml.
What Gets Created

After running nao init, you'll have a folder with the architecture of your context:
MCP servers are configured via the agent/mcps/mcp.json file, while skills are defined as markdown files in the agent/skills/ folder. Both are part of your project context and are discovered automatically by the agent.
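For illustration, an agent/mcps/mcp.json could look like the sketch below. The server name, command, and package are hypothetical examples, and the exact schema depends on your nao version's MCP support; this follows the common mcpServers configuration shape used by MCP clients:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/data"]
    }
  }
}
```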
The nao_config.yaml file is the central configuration for your analytics agent.
You can always edit it and re-launch a sync with this configuration.

Basic Structure
```yaml
project_name: my-analytics-agent

# Database Connections
databases:
  - name: bigquery-prod
    type: bigquery
    project_id: my-project
    dataset_id: analytics
    # Option 1: Use credentials_path for local files
    credentials_path: /path/to/credentials.json
    # Option 2: Use credentials_json for environment variables (recommended for cloud deployments)
    # credentials_json: {{ env('GCP_SERVICE_ACCOUNT_KEY_JSON') }}
    accessors:
      - columns
      - preview
      - description
      # Optional (requires llm config below):
      # - ai_summary
      # - profiling
    profiling:
      refresh_policy: <policy>
      interval_days: <days>
      include: []
      exclude: []
    sso: false
    location: US

# Repository Integrations
repos:
  - name: dbt
    url: https://github.com/your-org/dbt-project.git
    branch: main

# LLM Configuration
llm:
  provider: anthropic
  api_key: {{ env('ANTHROPIC_API_KEY') }}
  # Optional model used by database ai_summary templates (when ai_summary accessor is enabled)
  annotation_model: claude-3-7-sonnet-latest
  # Optional for Bedrock provider
  aws_region: us-east-1

# Local Ollama example (no API key required), as an alternative to the llm block above:
# llm:
#   provider: ollama

# Slack Integration (optional)
slack:
  bot_token: {{ env('SLACK_BOT_TOKEN') }}
  signing_secret: {{ env('SLACK_SIGNING_SECRET') }}
  post_message_url: https://slack.com/api/chat.postMessage

# Notion Integration (optional)
notion:
  api_key: {{ env('NOTION_API_KEY') }}
  pages:
    - 0123456789abcdef0123456789abcdef
    - fedcba9876543210fedcba9876543210
```
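The `{{ env('...') }}` placeholders are resolved from environment variables when the configuration is loaded. As a rough illustration of how that substitution could work (this is a sketch, not nao's actual implementation):

```python
import os
import re

def render_env_templates(text: str) -> str:
    """Replace {{ env('VAR') }} placeholders with values from os.environ."""
    pattern = re.compile(r"\{\{\s*env\(\s*'([^']+)'\s*\)\s*\}\}")
    # Missing variables render as empty strings in this sketch.
    return pattern.sub(lambda m: os.environ.get(m.group(1), ""), text)

os.environ["ANTHROPIC_API_KEY"] = "sk-test"
print(render_env_templates("api_key: {{ env('ANTHROPIC_API_KEY') }}"))
# prints: api_key: sk-test
```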
Environment Variables
Never commit sensitive credentials to Git! Always use environment variables for secrets.
Store sensitive values in environment variables (examples for different providers and warehouses):
```
# .env file (add to .gitignore)

# OpenAI / Anthropic / Azure OpenAI
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...

# AWS Bedrock (choose one auth method)
# Option 1: bearer token
AWS_BEARER_TOKEN_BEDROCK=...
# Option 2: IAM credentials
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1

# Warehouse credentials
SNOWFLAKE_USER=my_user
SNOWFLAKE_PASSWORD=my_password

# Slack
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...
```
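If your environment does not load .env files automatically, a minimal loader can be sketched as below (illustrative only; in practice you would typically rely on a library such as python-dotenv or your deployment platform's secret injection):

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blanks and '#' comments skipped."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Variables already set in the environment take precedence.
            os.environ.setdefault(key.strip(), value.strip())
```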
Use credentials_json for cloud deployments (Cloud Run, GitHub Actions, etc.) where you store the full JSON content in an environment variable or secret manager. Use credentials_path for local development with credential files.
When using credentials_json, the environment variable must contain the entire JSON content of your service account key file, not just the path.
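The difference between the two options can be sketched as follows. `load_service_account_key` is a hypothetical helper for illustration, not part of nao:

```python
import json
import os

def load_service_account_key(credentials_path=None, credentials_json=None) -> dict:
    """Return the parsed service-account key, preferring inline JSON content."""
    if credentials_json:
        # credentials_json holds the entire key file content,
        # e.g. from an environment variable or secret manager.
        return json.loads(credentials_json)
    if credentials_path:
        # credentials_path points to a key file on local disk.
        with open(credentials_path) as f:
            return json.load(f)
    raise ValueError("set credentials_json or credentials_path")
```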
Next Steps

- Context Synchronization: Learn how to sync and update your agent's context
- Context Principles: Learn how to optimize your context for reliability, speed, and cost