Configuration & Providers
The AI stack relies on a consistent split between public, server, and runtime configuration files:
- `dapp/app/config/ai.public.ts` exposes safe toggles (chat path and whether the UI must attach a JWT); see the sketch below.
- `dapp/app/config/public.env.ts` contains broader `NEXT_PUBLIC` values, including the state-worker flag and active chain.
- `dapp/app/config/ai.server.ts` handles provider secrets, model lists, image pipeline settings, crypto quota limits, and backdoor reset secrets.
- `dapp/app/config/server.env.ts` validates sensitive infrastructure values (Durable Object URL/API key, JWT signing keys).
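For orientation, here is a minimal sketch of what the public side of that split could export. The export name, the `chatPath` default, and the overall shape are assumptions; only the file path and the `NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT` key come from the configuration described above.

```ts
// dapp/app/config/ai.public.ts: hypothetical shape of the public AI config.
// Only NEXT_PUBLIC_* values belong here, since Next.js inlines them into the
// client bundle; nothing secret may ever live in this file.
export const aiPublicConfig = {
  // Path the chat UI posts to (assumed default).
  chatPath: "/api/chat",
  // Whether the client must attach a JWT bearer token to chat requests.
  requiresJwt: process.env.NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT === "true",
} as const;
```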
Provider Selection
- Chat Models — `AI_PROVIDER` selects `openai` or `lmstudio`. Each provider defines up to three models (`AI_OPENAI_MODEL_*` or `AI_LOCAL_MODEL_*`) that can be accessed by `modelIndex` (see the sketch after this list).
- Vision + Images — `AI_OPENAI_VISION_MODEL` / `AI_LOCAL_VISION_MODEL` supply the multimodal companion, while `AI_IMAGE_PROVIDER` switches between OpenAI, Replicate, or HuggingFace for the `generate_image_with_alt` workflow.
- Temperature & Limits — `AI_TEMPERATURE`, `AI_CHAT_MAX_OUTPUT_TOKENS`, and `AI_CHAT_MAX_DURATION` flow directly into `providerRegistry` and `handleChatRequest`.
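A hedged sketch of how selection by `modelIndex` could work. The `resolveChatModel` helper, the numeric suffix convention, and the fallback behavior are illustrative assumptions, not the actual `providerRegistry` implementation:

```ts
// Hypothetical model resolution: pick a chat model by index for the active
// provider. AI_PROVIDER, AI_OPENAI_MODEL_*, and AI_LOCAL_MODEL_* are the
// variables read by ai.server.ts.
type ChatProvider = "openai" | "lmstudio";

function resolveChatModel(modelIndex: 0 | 1 | 2): string {
  const provider = (process.env.AI_PROVIDER ?? "openai") as ChatProvider;
  const prefix = provider === "openai" ? "AI_OPENAI_MODEL" : "AI_LOCAL_MODEL";
  // Each provider defines up to three models; fall back to the first slot
  // (assumed behavior) when the requested index is not configured.
  return (
    process.env[`${prefix}_${modelIndex + 1}`] ??
    process.env[`${prefix}_1`] ??
    "gpt-4o-mini" // placeholder default, not from the source
  );
}
```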
JWT gating is enforced twice. Setting `NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT` makes the client attach a bearer token, and the server-side `aiServerConfig.requiresJwt` ensures both `/api/chat` and `/api/mcp` reject unauthenticated calls.
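The server half of that double check might look like the guard below; the helper name, import alias, and error shape are assumptions, while `aiServerConfig.requiresJwt` and the gated routes come from the source:

```ts
// Hypothetical guard shared by /api/chat and /api/mcp: reject the request
// before any streaming starts when JWT gating is enabled server-side.
import { aiServerConfig } from "@/app/config/ai.server"; // assumed import alias

export function assertAuthorized(req: Request): Response | null {
  if (!aiServerConfig.requiresJwt) return null; // gating disabled
  const auth = req.headers.get("authorization") ?? "";
  if (!auth.startsWith("Bearer ")) {
    return new Response("Unauthorized", { status: 401 });
  }
  // Verifying the token signature against the JWT signing keys from
  // server.env.ts would happen here; omitted in this sketch.
  return null;
}
```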
Feature Flags & Secrets
| Key | Location | Purpose |
|---|---|---|
| `NEXT_PUBLIC_ENABLE_STATE_WORKER` | `public.env.ts` | Turns on Durable Object backed quotas; requires `STATE_WORKER_URL`/`STATE_WORKER_API_KEY` in `server.env.ts`. |
| `AI_CRYPTO_QUOTA_*` | `ai.server.ts` | Configures global/per-user ETH spend limits for the send-crypto tools (`dapp/app/lib/quotas/crypto-quota.ts`). |
| `AI_QUOTA_RESET_SECRET` | `ai.server.ts` | Enables `/api/quota-reset` so admins can wipe token or crypto windows. |
| `AI_PRIVATE_KEY` | `ai.server.ts` | Required for `send_crypto_to_signed_in_user`, the agent sender, and Key NFT management tools. |
| `OPENAI_API_KEY`/`AI_BASE_URL` | `ai.server.ts` | Supply credentials for OpenAI or point to a local LM Studio endpoint. |
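To make the `AI_CRYPTO_QUOTA_*` row concrete, here is an illustrative spend check in the spirit of `dapp/app/lib/quotas/crypto-quota.ts`. The function, the window shape, and the `_GLOBAL_ETH`/`_PER_USER_ETH` suffixes are assumptions; check `ai.server.ts` for the real keys:

```ts
// Hypothetical crypto-quota check: refuse a send when it would exceed the
// global or per-user ETH window configured via AI_CRYPTO_QUOTA_* variables.
interface SpendWindow {
  globalSpentEth: number;
  perUserSpentEth: Map<string, number>;
}

function canSend(window: SpendWindow, userId: string, amountEth: number): boolean {
  // Suffixes below are illustrative, not the documented variable names.
  const globalLimit = Number(process.env.AI_CRYPTO_QUOTA_GLOBAL_ETH ?? "0");
  const perUserLimit = Number(process.env.AI_CRYPTO_QUOTA_PER_USER_ETH ?? "0");
  const userSpent = window.perUserSpentEth.get(userId) ?? 0;
  return (
    window.globalSpentEth + amountEth <= globalLimit &&
    userSpent + amountEth <= perUserLimit
  );
}
```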
Pinecone & Semantics
`dapp/app/config/pinecone.config.ts` parses `PINECONE_INDEX_*` environment variables, exposes helper methods for namespace validation, and is consumed by both the MCP tool (`pinecone-search.ts`) and the agent rap workflow. The seeding scripts under `dapp/pinecone` rely on `PINECONE_API_KEY` plus index/namespace lists.
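A rough sketch of that parsing pattern; the exported object, the validator name, and the exact `PINECONE_INDEX_*` suffix handling are assumptions:

```ts
// Hypothetical shape of pinecone.config.ts: collect PINECONE_INDEX_* values
// and expose a namespace validator for the MCP tool and agent workflow.
const indexEntries = Object.entries(process.env)
  .filter(([key]) => key.startsWith("PINECONE_INDEX_"))
  .map(([key, value]) => [key.replace("PINECONE_INDEX_", "").toLowerCase(), value ?? ""]);

export const pineconeConfig = {
  indexes: Object.fromEntries(indexEntries) as Record<string, string>,
  // Reject namespaces that were never declared in the environment.
  isValidNamespace(ns: string): boolean {
    return ns in this.indexes;
  },
};
```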
Operational Tips
- Keep public and server env validation errors actionable: both files log issues once per boot via their run-once guards.
- Because the chat route reads JSON before opening the SSE stream, incorrect payloads fail fast without tying up a connection.
- When switching to LM Studio, remember to set `AI_BASE_URL` to the server's `/v1` endpoint; `providerRegistry` normalizes trailing slashes but expects the OpenAI-compatible path. A sketch of that normalization follows.
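Illustrative only; the real `providerRegistry` logic may differ:

```ts
// Hypothetical trailing-slash normalization for AI_BASE_URL. The registry
// expects the OpenAI-compatible /v1 path to already be present.
function normalizeBaseUrl(raw: string): string {
  const url = raw.replace(/\/+$/, ""); // strip trailing slashes
  if (!url.endsWith("/v1")) {
    console.warn(`AI_BASE_URL should end with /v1, e.g. http://localhost:1234/v1 (got "${raw}")`);
  }
  return url;
}
```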