
Configuration & Providers

The AI stack relies on a consistent split between public, server, and runtime configuration files:

  • dapp/app/config/ai.public.ts exposes safe toggles (chat path and whether the UI must attach a JWT).
  • dapp/app/config/public.env.ts contains broader NEXT_PUBLIC values, including the state-worker flag and active chain.
  • dapp/app/config/ai.server.ts handles provider secrets, model lists, image pipeline settings, crypto quota limits, and backdoor reset secrets.
  • dapp/app/config/server.env.ts validates sensitive infrastructure values (Durable Object URL/API key, JWT signing keys).
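To illustrate the split, here is a minimal sketch of what the public half of the config might look like. This is an assumption about the shape, not the actual RitoSwap source; NEXT_PUBLIC_AI_CHAT_PATH and the loader name are hypothetical, while NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT is the flag described below.

```typescript
// Hypothetical sketch of ai.public.ts: only safe, NEXT_PUBLIC_* values
// are exposed here; secrets live exclusively in ai.server.ts.
interface AiPublicConfig {
  chatPath: string;      // route the chat UI posts to
  requiresJwt: boolean;  // whether the client must attach a bearer token
}

function loadAiPublicConfig(
  env: Record<string, string | undefined>,
): AiPublicConfig {
  return {
    chatPath: env.NEXT_PUBLIC_AI_CHAT_PATH ?? "/api/chat",
    requiresJwt: env.NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT === "true",
  };
}
```

Keeping the public file free of secrets means it can be safely bundled into client-side code.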

Provider Selection

  • Chat Models: AI_PROVIDER selects openai or lmstudio. Each provider defines up to three models (AI_OPENAI_MODEL_* or AI_LOCAL_MODEL_*) that can be accessed by modelIndex.
  • Vision + Images: AI_OPENAI_VISION_MODEL / AI_LOCAL_VISION_MODEL supply the multimodal companion, while AI_IMAGE_PROVIDER switches between OpenAI, Replicate, or HuggingFace for the generate_image_with_alt workflow.
  • Temperature & Limits: AI_TEMPERATURE, AI_CHAT_MAX_OUTPUT_TOKENS, and AI_CHAT_MAX_DURATION flow directly into providerRegistry and handleChatRequest.
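The modelIndex lookup described above could be sketched as follows. The env var names come from the list; the 1-based suffix convention and the function itself are assumptions for illustration.

```typescript
// Hypothetical sketch of resolving a chat model from AI_PROVIDER + modelIndex.
type Provider = "openai" | "lmstudio";

function resolveChatModel(
  env: Record<string, string | undefined>,
  modelIndex: 0 | 1 | 2,
): { provider: Provider; model: string } {
  const provider: Provider =
    env.AI_PROVIDER === "lmstudio" ? "lmstudio" : "openai";
  // Assumed convention: AI_OPENAI_MODEL_1..3 / AI_LOCAL_MODEL_1..3.
  const prefix = provider === "openai" ? "AI_OPENAI_MODEL_" : "AI_LOCAL_MODEL_";
  const model = env[`${prefix}${modelIndex + 1}`];
  if (!model) {
    throw new Error(`No model configured at index ${modelIndex} for ${provider}`);
  }
  return { provider, model };
}
```

Failing loudly on a missing model keeps misconfiguration visible at request time rather than silently falling back.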
⚠️ JWT gating is enforced twice: setting NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT makes the client attach a bearer token, and the server-side aiServerConfig.requiresJwt ensures both /api/chat and /api/mcp reject unauthenticated calls.
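The server-side half of that gate might look like the sketch below. assertAuthorized and the injected verifyJwt callback are stand-ins, not the actual route code.

```typescript
// Hypothetical server-side guard shared by /api/chat and /api/mcp.
// verifyJwt is a placeholder for whatever signature check the routes use.
function assertAuthorized(
  requiresJwt: boolean,
  authorization: string | undefined,
  verifyJwt: (token: string) => boolean,
): void {
  if (!requiresJwt) return; // gating disabled: allow the request through
  const token = authorization?.startsWith("Bearer ")
    ? authorization.slice("Bearer ".length)
    : undefined;
  if (!token || !verifyJwt(token)) {
    throw new Error("401: missing or invalid bearer token");
  }
}
```

Because the check runs server-side regardless of what the UI does, stripping the client flag alone cannot bypass authentication.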

Feature Flags & Secrets

| Key | Location | Purpose |
| --- | --- | --- |
| NEXT_PUBLIC_ENABLE_STATE_WORKER | public.env.ts | Turns on Durable Object-backed quotas; requires STATE_WORKER_URL/STATE_WORKER_API_KEY in server.env.ts. |
| AI_CRYPTO_QUOTA_* | ai.server.ts | Configures global and per-user ETH spend limits for the send-crypto tools (dapp/app/lib/quotas/crypto-quota.ts). |
| AI_QUOTA_RESET_SECRET | ai.server.ts | Enables /api/quota-reset so admins can wipe token or crypto windows. |
| AI_PRIVATE_KEY | ai.server.ts | Required for send_crypto_to_signed_in_user, the agent sender, and Key NFT management tools. |
| OPENAI_API_KEY / AI_BASE_URL | ai.server.ts | Supply credentials for OpenAI or point to a local LM Studio endpoint. |
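As a sketch of how the AI_CRYPTO_QUOTA_* limits might gate a spend, consider the check below. The field names and function are assumptions; the real logic lives in dapp/app/lib/quotas/crypto-quota.ts.

```typescript
// Hypothetical dual-window spend check: a transfer must fit within both
// the global and the per-user ETH limits for the current window.
interface CryptoQuota {
  globalLimitEth: number;  // assumed mapping of an AI_CRYPTO_QUOTA_* value
  perUserLimitEth: number; // assumed mapping of an AI_CRYPTO_QUOTA_* value
}

function canSpend(
  quota: CryptoQuota,
  globalSpentEth: number,
  userSpentEth: number,
  amountEth: number,
): boolean {
  return (
    globalSpentEth + amountEth <= quota.globalLimitEth &&
    userSpentEth + amountEth <= quota.perUserLimitEth
  );
}
```

Checking both windows means one user cannot exhaust the global allowance, and the global cap still bounds total exposure.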

Pinecone & Semantics

dapp/app/config/pinecone.config.ts parses PINECONE_INDEX_* environment variables, exposes helper methods for namespace validation, and is consumed by both the MCP tool (pinecone-search.ts) and the agent rap workflow. The seeding scripts under dapp/pinecone rely on PINECONE_API_KEY plus index/namespace lists.
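A minimal sketch of that parsing and namespace validation is shown below. The "index:ns1,ns2" value convention, the function names, and the map shape are all assumptions for illustration, not pinecone.config.ts's actual API.

```typescript
// Hypothetical parser: collect PINECONE_INDEX_* variables into an
// index-name → allowed-namespaces map (assumed "index:ns1,ns2" format).
function parsePineconeIndexes(
  env: Record<string, string | undefined>,
): Map<string, string[]> {
  const indexes = new Map<string, string[]>();
  for (const [key, value] of Object.entries(env)) {
    if (!key.startsWith("PINECONE_INDEX_") || !value) continue;
    const [indexName, namespaces = ""] = value.split(":");
    indexes.set(indexName, namespaces.split(",").filter(Boolean));
  }
  return indexes;
}

// Helper in the spirit of the namespace validation the config exposes.
function isValidNamespace(
  map: Map<string, string[]>,
  index: string,
  ns: string,
): boolean {
  return map.get(index)?.includes(ns) ?? false;
}
```

Validating namespaces up front lets the MCP tool reject a bad search target before making a network call to Pinecone.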

Operational Tips

  • Keep public and server env validation errors actionable: both files log issues once per boot via their run-once guards.
  • Because the chat route reads JSON before opening the SSE stream, incorrect payloads fail fast without tying up a connection.
  • When switching to LM Studio, remember to set AI_BASE_URL to the server's /v1 endpoint; providerRegistry normalizes trailing slashes but expects the OpenAI-compatible path.
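The trailing-slash normalization mentioned in the last tip amounts to something like the one-liner below; the helper name is an assumption, not providerRegistry's actual API.

```typescript
// Strip any trailing slashes so "http://localhost:1234/v1/" and
// "http://localhost:1234/v1" resolve to the same base URL.
function normalizeBaseUrl(baseUrl: string): string {
  return baseUrl.replace(/\/+$/, "");
}
```

Note that normalization only cleans up slashes: if AI_BASE_URL omits the /v1 path entirely, OpenAI-compatible requests will still hit the wrong route.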
