API Overview

The AI system exposes three HTTP endpoints from the Next.js app:

  1. POST /api/chat — primary chat ingress, handled by dapp/app/api/chat/route.ts with handleChatRequest.
  2. POST /api/mcp — JSON-RPC bridge to the MCP server (dapp/app/api/mcp/route.ts).
  3. POST /api/quota-reset — admin-only endpoint for clearing token/crypto windows (dapp/app/api/quota-reset/route.ts).

/api/chat

  • Expects a body with { messages, metadata?, model?, modelIndex? }.
  • Responds with an SSE stream containing text-* events plus tool lifecycle markers.
  • Enforces Authorization: Bearer tokens when NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT is true; missing or invalid tokens receive a 401 JSON response before any SSE stream is opened.
  • Declares maxDuration = 30, so requests time out after 30 seconds by default.
💡 Because the handler reads the JSON body before opening the stream, malformed payloads fail fast and never hold open a socket.
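
As a concrete illustration, a minimal client for this contract might look like the sketch below. Everything beyond the documented body shape and Authorization header is an assumption.

```ts
// Minimal sketch of a client calling /api/chat and reading the SSE stream.
// Anything beyond the documented body shape and headers is an assumption.
async function streamChat(
  messages: { role: string; content: string }[],
  jwt?: string
): Promise<void> {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Only required when NEXT_PUBLIC_AI_CHAT_REQUIRES_JWT is true.
      ...(jwt ? { Authorization: `Bearer ${jwt}` } : {}),
    },
    body: JSON.stringify({ messages }),
  });
  // Auth failures return 401 JSON before any SSE stream is opened.
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk carries SSE frames: text-* deltas plus tool lifecycle markers.
    console.log(decoder.decode(value, { stream: true }));
  }
}
```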

/api/mcp

1. Validate & Gate

Parses the JSON-RPC payload, verifies JWTs if required, logs the requested method and params, and rejects non-POST methods.

2. Dispatch

Passes the request and parsed body to mcpServer.handleRequest, which re-runs per-tool JWT gating and executes the matching tool handler.

3. Return JSON

Success responses contain { result } while errors follow the JSON-RPC { error: { code, message } } shape. HTTP 500s are reserved for dispatcher failures.

This route mirrors the chat route’s auth logic so tools cannot be called directly without the same credentials the chat surface would use.
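
For illustration, a tool invocation through the bridge could look like the sketch below. The method name follows the standard MCP tools/call convention, but the specific tool name and arguments are hypothetical.

```ts
// Sketch of a JSON-RPC 2.0 call to /api/mcp. The tool name and arguments are
// hypothetical; the envelope and error shape follow the route's contract above.
async function callMcpTool(jwt?: string): Promise<unknown> {
  const res = await fetch('/api/mcp', {
    method: 'POST', // non-POST requests are rejected by the gate
    headers: {
      'Content-Type': 'application/json',
      ...(jwt ? { Authorization: `Bearer ${jwt}` } : {}), // same credentials the chat surface uses
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'tools/call',
      params: { name: 'example-tool', arguments: {} },
    }),
  });

  const payload = await res.json();
  if (payload.error) {
    // Tool-level failures come back as { error: { code, message } };
    // HTTP 500 is reserved for dispatcher failures.
    throw new Error(`JSON-RPC ${payload.error.code}: ${payload.error.message}`);
  }
  return payload.result;
}
```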

/api/quota-reset

  • Uses constant-time secret comparisons to avoid timing attacks.
  • Supports secrets via headers, bearer tokens, body, or query string.
  • Refuses to run if aiServerConfig.quotaReset.enabled is false or the state service is inactive.
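
The constant-time comparison can be sketched with Node's crypto.timingSafeEqual; the route's actual helper may differ, but the technique is the same.

```ts
import { timingSafeEqual } from 'node:crypto';

// Illustrative constant-time secret check. timingSafeEqual compares every byte
// regardless of where the first mismatch occurs, so response time does not
// reveal how much of the secret an attacker guessed correctly.
function secretsMatch(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch; checking lengths first leaks
  // only the secret's length, never its contents.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```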

Together, these routes form the minimal HTTP surface that powers the chat experience. They are designed to be consumed by the dapp frontend but are documented here for completeness.

Data Structures

While the API uses standard JSON, it relies on specific schemas for type safety and validation:

  • DTOs: The API expects typed Data Transfer Objects (e.g., UiMessage, ChatMetadata) for requests.
  • Tool Schemas: Tools are defined using Raw JSON Schema to ensure the LLM generates valid arguments (e.g., strict enums for chain names).
  • Stream Protocol: The response uses a custom Server-Sent Events (SSE) protocol that interleaves text deltas with custom events like tool-input-start and tool-output-available.
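
To make those shapes concrete, the sketch below shows how the request DTOs and a tool schema might look in TypeScript. Every field name beyond those already documented on this page is an assumption.

```ts
// Rough sketch of the /api/chat request DTOs. Fields beyond the documented
// { messages, metadata?, model?, modelIndex? } shape are assumptions.
interface UiMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

interface ChatMetadata {
  [key: string]: unknown; // opaque client metadata forwarded to the handler
}

interface ChatRequestBody {
  messages: UiMessage[];
  metadata?: ChatMetadata;
  model?: string;
  modelIndex?: number;
}

// Raw JSON Schema for a hypothetical tool argument. A strict enum keeps the
// LLM from inventing unsupported chain names; the list here is illustrative.
const chainArgSchema = {
  type: 'object',
  properties: {
    chain: { type: 'string', enum: ['ethereum', 'polygon', 'optimism'] },
  },
  required: ['chain'],
  additionalProperties: false,
} as const;
```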

Deployment Notes

  • Providers — Switching to LM Studio simply requires setting AI_PROVIDER=lmstudio and AI_BASE_URL (pointing at your /v1 endpoint). No code changes are necessary because providerRegistry abstracts the API.
  • State worker — Set NEXT_PUBLIC_ENABLE_STATE_WORKER=true in the public env, then supply STATE_WORKER_URL and STATE_WORKER_API_KEY in server.env.ts. The client logs whether the worker is active during boot.
  • Edge compatibility — All AI routes declare runtime = 'nodejs' to guarantee access to the required Node APIs (crypto, streaming response helpers). Keep this in mind when tweaking Next.js deployment targets.
  • SSE buffering — Vercel/CDN caches must disable response buffering for SSE. sse-stream.ts already sets Cache-Control: no-transform and X-Accel-Buffering: no.
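
The SSE bullet translates to response headers like the following sketch; the two buffering-related headers are the ones sse-stream.ts sets, while the surrounding Response wiring is illustrative.

```ts
// Illustrative SSE response construction. Cache-Control: no-transform and
// X-Accel-Buffering: no are the headers sse-stream.ts sets; the rest of this
// wrapper is an assumption for the sake of a runnable example.
function sseResponse(stream: ReadableStream<Uint8Array>): Response {
  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-transform', // stop CDNs from transforming/buffering the body
      'X-Accel-Buffering': 'no',       // tell nginx-style proxies not to buffer
    },
  });
}
```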

When running locally, remember to seed Pinecone, provide dummy JWT secrets, and disable quotas if you are not running the Cloudflare worker. The docs in this section assume the full production wiring is in place.
