Rate Limiting Library
The Rate Limiting Library protects RitoSwap’s API endpoints from abuse using Redis-backed sliding window rate limiting. This internal library provides granular control over request rates for different endpoints while maintaining a smooth user experience through clear feedback and retry mechanisms.
Overview
RitoSwap’s decentralized nature means users can interact without traditional accounts, making IP-based rate limiting essential for preventing abuse. The library implements a dual-layer approach with both endpoint-specific and global limits to ensure fair resource allocation.
Why Rate Limiting Matters for RitoSwap
The token gate system involves several resource-intensive operations:
- Cryptographic signature verification
- On-chain NFT ownership checks
- Content delivery from Cloudflare R2
- Database operations for tracking token usage
Without rate limiting, these operations could be exploited to overwhelm the system or farm nonces for analysis.
The library consists of two files:
- rateLimit.server.ts: server-side rate limiting logic
- rateLimit.client.ts: client-side configuration check
Server Implementation
The server-side component (rateLimit.server.ts) implements the core rate limiting logic with pre-configured limiters for each endpoint type.
Redis Initialization
The rate limiting library initializes its Redis client conditionally based on configuration:
// rateLimit.server.ts
import { Redis } from '@upstash/redis'

// Initialize Redis client when enabled
const redis = isRateLimitEnabled()
  ? new Redis({
      url: process.env.KV_REST_API_URL!,
      token: process.env.KV_REST_API_TOKEN!,
    })
  : null
This Redis instance is shared across all rate limiters for efficient connection management.
Configuration Detection
Similar to the SIWE library, rate limiting checks for proper Redis configuration:
export const isRateLimitEnabled = (): boolean => {
  const flagEnabled = process.env.NEXT_PUBLIC_ACTIVATE_REDIS === 'true'
  const hasApi = !!process.env.KV_REST_API_URL &&
    process.env.KV_REST_API_URL !== 'false'
  const hasKey = !!process.env.KV_REST_API_TOKEN &&
    process.env.KV_REST_API_TOKEN !== 'false'
  return flagEnabled && hasApi && hasKey
}
Pre-Configured Rate Limiters
The library defines five specialized rate limiters, each calibrated for its specific use case:
import { Ratelimit } from '@upstash/ratelimit'

export const rateLimiters = {
  // Nonce generation - allows multiple auth attempts
  nonce: redis ? new Ratelimit({
    redis,
    limiter: Ratelimit.slidingWindow(10, '1 m'),
    prefix: 'rl:nonce:',
  }) : null,

  // Gate access - moderate limit for signature verification
  gateAccess: redis ? new Ratelimit({
    redis,
    limiter: Ratelimit.slidingWindow(5, '1 m'),
    prefix: 'rl:gate-access:',
  }) : null,

  // Token verification - strictest limit for submission endpoint
  verifyTokenGate: redis ? new Ratelimit({
    redis,
    limiter: Ratelimit.slidingWindow(3, '1 m'),
    prefix: 'rl:verify-token:',
  }) : null,

  // Status polling - higher limit for checking token status
  tokenStatus: redis ? new Ratelimit({
    redis,
    limiter: Ratelimit.slidingWindow(60, '1 m'),
    prefix: 'rl:token-status:',
  }) : null,

  // Global protection - overall request limit per hour
  global: redis ? new Ratelimit({
    redis,
    limiter: Ratelimit.slidingWindow(100, '1 h'),
    prefix: 'rl:global:',
  }) : null,
}
Each limiter serves a specific purpose:
- nonce: 10/min - Allows retries for wallet connection issues
- gateAccess: 5/min - Prevents brute force attempts on gate access
- verifyTokenGate: 3/min - Strict limit on actual submissions
- tokenStatus: 60/min - Permits frequent polling for UI updates
- global: 100/hr - Overall protection against abuse
Environment-Aware Client Identification
Security Critical: The library implements environment-aware IP detection to prevent header spoofing attacks that could bypass rate limits.
The getIdentifier function extracts client identifiers securely based on the deployment environment:
/**
 * Extract client identifier (IP) with environment-aware security.
 *
 * In Vercel production, we trust the x-forwarded-for header because Vercel
 * overwrites it with the true client IP. In all other environments, we fall
 * back to the socket IP to prevent header spoofing attacks.
 */
export function getIdentifier(req: NextRequest): string {
  // Detect if we're running on Vercel production
  const isVercelProd = process.env.VERCEL_ENV === 'production'

  if (isVercelProd) {
    // In Vercel production, trust the x-forwarded-for header
    // Vercel guarantees this contains the real client IP
    const forwarded = req.headers.get('x-forwarded-for')
    if (forwarded) {
      // Take the first IP in the chain (the original client)
      return forwarded.split(',')[0].trim()
    }
  }

  // In all other environments (local dev, non-Vercel hosts), we cannot
  // trust HTTP headers as they can be spoofed. NextRequest doesn't expose
  // the socket directly, but the ip property Next.js sets from the
  // connection is safer than headers here.
  // Note: in local development this may return ::1 (IPv6 localhost) or
  // 127.0.0.1, which is expected behavior.
  const ip = (req as any).ip
  if (ip) {
    return ip
  }

  // If we can't determine the IP reliably, return a unique identifier
  // This prevents all unknown requests from sharing the same rate limit
  return `unknown-${Date.now()}-${Math.random()}`
}
Security Considerations
The environment-aware approach prevents a critical vulnerability:
1. In Vercel Production: Vercel's infrastructure overwrites the x-forwarded-for header with the true client IP before it reaches your application. This makes it safe to trust.
2. In Other Environments: HTTP headers can be easily spoofed by attackers. For example:
   # An attacker could bypass rate limits by spoofing headers
   curl -H "X-Forwarded-For: 1.2.3.4" https://your-api.com/endpoint
   curl -H "X-Forwarded-For: 5.6.7.8" https://your-api.com/endpoint
3. The Solution: By only trusting headers in Vercel production and falling back to the socket IP elsewhere, we ensure rate limits cannot be bypassed through header manipulation.
The Main Rate Limiting Function
The checkRateLimitWithNonce function is the library's core, implementing a unique optimization:
export async function checkRateLimitWithNonce(
  req: NextRequest,
  limiterType: keyof typeof rateLimiters,
  includeGlobal = true
): Promise<{
  success: boolean
  limit?: number
  remaining?: number
  reset?: number
  nonce?: string
}> {
  if (!isRateLimitEnabled()) {
    return { success: true }
  }

  const identifier = getIdentifier(req)
  const limiter = rateLimiters[limiterType]
  if (!limiter) {
    return { success: true }
  }

  // Check specific limiter
  const { success, limit, remaining, reset } = await limiter.limit(identifier)
  if (!success) {
    return { success: false, limit, remaining, reset }
  }

  // Check global limiter (except for tokenStatus)
  if (includeGlobal && limiterType !== 'tokenStatus' && rateLimiters.global) {
    const globalResult = await rateLimiters.global.limit(identifier)
    if (!globalResult.success) {
      return {
        success: false,
        limit: globalResult.limit,
        remaining: globalResult.remaining,
        reset: globalResult.reset,
      }
    }
  }

  // Unique feature: retrieve pre-stored nonce if available
  let nonce: string | undefined
  if (process.env.NEXT_PUBLIC_ACTIVATE_REDIS === 'true' &&
      redis &&
      typeof (redis as any).get === 'function') {
    const nonceKey = `nonce:${identifier}`
    nonce = ((await (redis as any).get(nonceKey)) as string | null) ?? undefined
  }

  return { success: true, limit, remaining, reset, nonce }
}
Key features:
- Graceful degradation when rate limiting is disabled
- Dual-layer limiting (specific + global)
- Special handling for tokenStatus (no global limit)
- Opportunistic nonce retrieval to reduce Redis operations
- Environment-aware client identification for security
Client Implementation
The client-side component (rateLimit.client.ts) is minimal, providing only a configuration check:
export const isRateLimitEnabled = () => {
  return process.env.NEXT_PUBLIC_ACTIVATE_REDIS === 'true'
}
This allows frontend components to adapt their behavior when rate limiting is active.
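For instance, a component could tune how aggressively it polls depending on whether limits are enforced. A minimal sketch (pollIntervalMs and both interval values are illustrative, not from the RitoSwap codebase):

```typescript
// Choose a polling interval based on whether rate limiting is active.
// With the tokenStatus limit of 60/min, 5s polling (12 req/min) leaves
// plenty of headroom; without limits, a faster refresh is harmless.
function pollIntervalMs(rateLimitEnabled: boolean): number {
  return rateLimitEnabled ? 5_000 : 2_000
}
```

A hook could call `isRateLimitEnabled()` once and feed the result into this function when scheduling its interval.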
Integration in RitoSwap
Nonce Endpoint
The /api/nonce endpoint demonstrates basic rate limiting integration:
export async function GET(req: NextRequest) {
  // Apply rate limiting
  const rateLimitResult = await checkRateLimitWithNonce(req, 'nonce', true)
  if (!rateLimitResult.success) {
    const retryAfter = rateLimitResult.reset
      ? Math.ceil((rateLimitResult.reset - Date.now()) / 1000)
      : 60
    return NextResponse.json(
      {
        error: 'Too many requests',
        limit: rateLimitResult.limit,
        remaining: rateLimitResult.remaining,
        retryAfter
      },
      {
        status: 429,
        headers: {
          'X-RateLimit-Limit': String(rateLimitResult.limit),
          'X-RateLimit-Remaining': String(rateLimitResult.remaining),
          'X-RateLimit-Reset': String(rateLimitResult.reset),
          'Retry-After': String(retryAfter)
        }
      }
    )
  }

  // Use pre-fetched nonce if available
  let nonce = rateLimitResult.nonce
  if (!nonce) {
    // Derive the same identifier the rate limiter used
    const identifier = getIdentifier(req)
    nonce = await generateNonce(identifier)
  }
  return NextResponse.json({ nonce })
}
The nonce endpoint showcases the library’s optimization by potentially avoiding a second Redis call for nonce generation.
Gate Access Endpoint
The /api/gate-access endpoint uses stricter limiting:
const rateLimitResult = await checkRateLimitWithNonce(req, 'gateAccess', true)
if (!rateLimitResult.success) {
  return rateLimitResponse(rateLimitResult) // Standardized 429 response
}
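The rateLimitResponse helper itself is not shown in this document. A sketch of the header-building logic such a helper would wrap (buildRateLimitHeaders is a hypothetical name) mirrors the inline 429 handling used by the other endpoints:

```typescript
interface RateLimitResult {
  success: boolean
  limit?: number
  remaining?: number
  reset?: number
}

// Hypothetical helper: derive the Retry-After value and standard
// rate limit headers from a failed check. The `now` parameter is
// injectable for testing; it defaults to the current time.
function buildRateLimitHeaders(result: RateLimitResult, now = Date.now()) {
  const retryAfter = result.reset
    ? Math.ceil((result.reset - now) / 1000)
    : 60
  return {
    retryAfter,
    headers: {
      'X-RateLimit-Limit': String(result.limit ?? 0),
      'X-RateLimit-Remaining': String(result.remaining ?? 0),
      'X-RateLimit-Reset': String(result.reset ?? 0),
      'Retry-After': String(retryAfter),
    },
  }
}
```

A real rateLimitResponse would presumably pass this object to NextResponse.json with status 429.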
Token Verification Endpoint with Timestamp Validation
The /api/verify-token-gate endpoint demonstrates comprehensive security with both rate limiting and timestamp validation:
export async function POST(request: NextRequest) {
  // Apply rate limiting first
  const rateLimitResult = await checkRateLimitWithNonce(request, 'verifyTokenGate')
  if (!rateLimitResult.success) {
    const retryAfter = rateLimitResult.reset
      ? Math.ceil((rateLimitResult.reset - Date.now()) / 1000)
      : 60
    return NextResponse.json(
      {
        error: 'Too many requests',
        limit: rateLimitResult.limit,
        remaining: rateLimitResult.remaining,
        retryAfter
      },
      { status: 429 }
    )
  }

  const body = await request.json()

  // Validate timestamp is present and is a number
  if (typeof body.timestamp !== 'number') {
    return NextResponse.json(
      { error: 'Missing or invalid timestamp field' },
      { status: 400 }
    )
  }

  // Validate timestamp is not in the future (30 second tolerance for clock skew)
  if (body.timestamp > Date.now() + 30000) {
    return NextResponse.json(
      { error: 'Invalid timestamp - cannot be in the future' },
      { status: 400 }
    )
  }

  // Validate timestamp is not too old (5 minutes)
  if (Date.now() - body.timestamp > 5 * 60 * 1000) {
    return NextResponse.json(
      { error: 'Signature expired' },
      { status: 400 }
    )
  }

  // Continue with signature verification and processing...
}
Timestamp Validation: The verify-token-gate endpoint enforces strict timestamp validation to prevent replay attacks. Signatures must be used within 5 minutes of creation and cannot be from the future.
Token Status Endpoint
The /api/token-status/[tokenId] endpoint allows frequent polling for UI updates:
// In route.ts
const rateLimitResult = await checkRateLimitWithNonce(request, 'tokenStatus', false)
if (!rateLimitResult.success) {
  const retryAfter = rateLimitResult.reset
    ? Math.ceil((rateLimitResult.reset - Date.now()) / 1000)
    : 60
  return NextResponse.json(
    {
      error: 'Too many requests',
      limit: rateLimitResult.limit,
      remaining: rateLimitResult.remaining,
      retryAfter
    },
    {
      status: 429,
      headers: {
        'X-RateLimit-Limit': String(rateLimitResult.limit || 60),
        'X-RateLimit-Remaining': String(rateLimitResult.remaining || 0),
        'Retry-After': String(retryAfter)
      }
    }
  )
}
Frontend Polling Example
Components can poll the token status endpoint to keep the UI synchronized:
// Frontend polling example for token status
import { useEffect } from 'react'

async function pollTokenStatus(tokenId: number): Promise<TokenStatusResponse> {
  const res = await fetch(`/api/token-status/${tokenId}`)
  if (res.status === 429) {
    const data = await res.json()
    showRateLimitModal({
      limit: data.limit,
      remaining: data.remaining,
      retryAfter: data.retryAfter
    })
    throw new Error('Rate limited')
  }
  if (!res.ok) {
    throw new Error('Status check failed')
  }
  return res.json()
}

// Example polling implementation with rate limit awareness
function useTokenStatusPolling(tokenId: number | null) {
  useEffect(() => {
    if (!tokenId) return
    const interval = setInterval(async () => {
      try {
        const status = await pollTokenStatus(tokenId)
        // Update UI with status
      } catch (error) {
        console.error('Polling error:', error)
      }
    }, 5000) // Poll every 5 seconds
    return () => clearInterval(interval)
  }, [tokenId])
}
Client-Side Handling
Components handle rate limiting gracefully with user feedback:
// In GateModal.tsx
if (nonceResponse.status === 429) {
  const data = await nonceResponse.json()
  showRateLimitModal({
    limit: data.limit,
    remaining: data.remaining,
    retryAfter: data.retryAfter
  })
  setIsSigning(false)
  return
}

// In GatePageWrapper.tsx
if (res.status === 429) {
  const data = await res.json()
  showRateLimitModal({
    limit: data.limit,
    remaining: data.remaining,
    retryAfter: data.retryAfter
  })
  // Re-enable button
  if (submitButton) {
    submitButton.disabled = false
    submitButton.textContent = 'Sign & Submit'
    submitButton.classList.remove('processing')
  }
  return
}
The showRateLimitModal function provides clear feedback about when users can retry.
Rate Limit Response Format
All rate-limited endpoints return consistent response structures:
Success Response
{
  "success": true,
  "limit": 10,
  "remaining": 9,
  "reset": 1704067200000,
  "nonce": "abc12345"  // Only for nonce endpoint
}
Rate Limited Response (429)
{
  "error": "Too many requests",
  "limit": 10,
  "remaining": 0,
  "retryAfter": 45
}
Response Headers:
- X-RateLimit-Limit: Maximum requests allowed
- X-RateLimit-Remaining: Requests remaining in current window
- X-RateLimit-Reset: Unix timestamp when the window resets
- Retry-After: Seconds until the client can retry
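Clients can read these headers directly off any response. A small parsing helper (hypothetical, not part of the library) that tolerates missing headers might look like:

```typescript
interface ParsedRateLimit {
  limit: number | null
  remaining: number | null
  reset: number | null
  retryAfter: number | null
}

// Parse the standard rate limit headers from a Response-like object.
// Absent headers yield null rather than NaN.
function parseRateLimitHeaders(
  headers: { get(name: string): string | null }
): ParsedRateLimit {
  const num = (name: string) => {
    const v = headers.get(name)
    return v === null ? null : Number(v)
  }
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    reset: num('X-RateLimit-Reset'),
    retryAfter: num('Retry-After'),
  }
}
```

The structural `{ get(...) }` parameter matches both the Fetch API's Headers object and simple test stubs.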
Testing
The library includes comprehensive tests in rateLimit.server.test.ts:
Configuration Tests
describe('isRateLimitEnabled', () => {
  it('returns true when all env vars are set', async () => {
    process.env.NEXT_PUBLIC_ACTIVATE_REDIS = 'true'
    process.env.KV_REST_API_URL = 'https://test-api.upstash.io'
    process.env.KV_REST_API_TOKEN = 'test-key'

    const { isRateLimitEnabled } = await import('../rateLimit.server')
    expect(isRateLimitEnabled()).toBe(true)
  })

  it('returns false when API URL is "false"', async () => {
    process.env.KV_REST_API_URL = 'false'

    const { isRateLimitEnabled } = await import('../rateLimit.server')
    expect(isRateLimitEnabled()).toBe(false)
  })
})
Environment-Aware IP Detection Tests
describe('getIdentifier', () => {
  it('trusts x-forwarded-for in Vercel production', () => {
    process.env.VERCEL_ENV = 'production'
    const req = {
      headers: {
        get: (name: string) =>
          name === 'x-forwarded-for' ? '192.168.1.1, 10.0.0.1' : null
      }
    }
    expect(getIdentifier(req as any)).toBe('192.168.1.1')
  })

  it('ignores x-forwarded-for in non-Vercel environments', () => {
    process.env.VERCEL_ENV = 'development'
    const req = {
      headers: {
        get: (name: string) =>
          name === 'x-forwarded-for' ? '192.168.1.1' : null
      },
      ip: '127.0.0.1'
    }
    expect(getIdentifier(req as any)).toBe('127.0.0.1')
  })
})
Rate Limiting Behavior Tests
it('returns failure when rate limit exceeded', async () => {
  const mockLimit = vi.fn().mockResolvedValue({
    success: false,
    limit: 10,
    remaining: 0,
    reset: Date.now() + 60000
  })
  vi.mocked(Ratelimit).mockImplementation(() => ({
    limit: mockLimit
  }) as any)

  const result = await checkRateLimitWithNonce(req, 'nonce')
  expect(result.success).toBe(false)
  expect(result.remaining).toBe(0)
})
Nonce Integration Tests
it('checks rate limit and returns success with nonce', async () => {
  const mockGet = vi.fn().mockResolvedValue('test-nonce')
  vi.mocked(Redis).mockImplementation(() => ({
    get: mockGet
  }) as any)

  const result = await checkRateLimitWithNonce(req, 'nonce')
  expect(result.success).toBe(true)
  expect(result.nonce).toBe('test-nonce')
  expect(mockGet).toHaveBeenCalledWith('nonce:192.168.1.1')
})
Global Limiter Tests
it('skips global rate limit for tokenStatus', async () => {
  const mockLimit = vi.fn().mockResolvedValue({ success: true })
  vi.mocked(Ratelimit).mockImplementation(() => ({
    limit: mockLimit
  }) as any)

  await checkRateLimitWithNonce(req, 'tokenStatus', true)

  // Only the tokenStatus limiter should be consulted, not the global one
  expect(mockLimit).toHaveBeenCalledTimes(1)
})
Configuration
Required environment variables:
| Variable | Purpose | Example |
| --- | --- | --- |
| NEXT_PUBLIC_ACTIVATE_REDIS | Enable rate limiting | true |
| KV_REST_API_URL | Upstash Redis URL | https://xxx.upstash.io |
| KV_REST_API_TOKEN | Upstash Redis token | AcXXXXX... |
| VERCEL_ENV | Vercel environment (auto-set) | production |
API Reference
Server Functions
| Function | Signature | Returns | Notes |
| --- | --- | --- | --- |
| isRateLimitEnabled | (): boolean | boolean | Checks if rate limiting is active by validating Redis environment variables |
| getIdentifier | (req: NextRequest): string | IP address or unique identifier | Environment-aware extraction of client identifier with anti-spoofing protection |
| checkRateLimitWithNonce | (req: NextRequest, limiterType: keyof typeof rateLimiters, includeGlobal?: boolean): Promise<RateLimitResult> | Promise<RateLimitResult> | Main function that checks rate limits and optionally retrieves stored nonce |
Client Functions
| Function | Signature | Returns | Notes |
| --- | --- | --- | --- |
| isRateLimitEnabled | (): boolean | boolean | Client-side check for rate limiting activation |
Pre-configured Rate Limiters
| Limiter | Limit | Window | Redis Prefix | Purpose |
| --- | --- | --- | --- | --- |
| nonce | 10 requests | 1 minute | rl:nonce: | SIWE nonce generation for auth flows |
| gateAccess | 5 requests | 1 minute | rl:gate-access: | Token gate signature verification |
| verifyTokenGate | 3 requests | 1 minute | rl:verify-token: | Message submission verification |
| tokenStatus | 60 requests | 1 minute | rl:token-status: | Polling for token status updates |
| global | 100 requests | 1 hour | rl:global: | Overall protection (skipped for tokenStatus) |
Types and Interfaces
interface RateLimitResult {
  success: boolean    // Whether request is allowed
  limit?: number      // Maximum requests in window
  remaining?: number  // Requests left in current window
  reset?: number      // Unix timestamp of window reset
  nonce?: string      // Pre-fetched nonce if available
}

interface TokenStatusResponse {
  count: number          // 0 or 1 indicating existence
  exists: boolean        // Whether token exists on-chain
  used: boolean          // Whether token has been used
  usedBy: string | null  // Address that used the token
  usedAt: Date | null    // When token was used
}

type RateLimiterType = 'nonce' | 'gateAccess' | 'verifyTokenGate' | 'tokenStatus' | 'global'

interface RateLimitResponse {
  error: string       // "Too many requests"
  limit: number       // Same as RateLimitResult
  remaining: number   // Same as RateLimitResult
  retryAfter: number  // Seconds until retry allowed
}
Rate Limit Headers
| Header | Description | Example |
| --- | --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in window | 10 |
| X-RateLimit-Remaining | Requests remaining in current window | 7 |
| X-RateLimit-Reset | Unix timestamp when window resets | 1704067200000 |
| Retry-After | Seconds until client can retry (429 only) | 45 |
Sliding Window Algorithm
The library uses Upstash’s sliding window implementation, which provides several advantages:
- Smooth rate limiting - Requests don’t all reset at once
- Fair distribution - Prevents burst usage at window boundaries
- Accurate counting - Tracks exact request times
Example behavior with the 10-per-minute nonce limit:
- The user makes 10 requests in the first 30 seconds, exhausting the limit
- At 45 seconds, the earliest request (from 0:00) still falls within the trailing 60 seconds, so the user remains limited
- At 61 seconds, the earliest request expires, allowing one more
- This continues, maintaining a rolling 60-second window
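The rolling-window behavior described above can be illustrated with a simplified in-memory counter that stores exact request timestamps. This is a conceptual sketch only; Upstash's production algorithm approximates the window with weighted counts across Redis buckets rather than storing every timestamp:

```typescript
// Simplified sliding-window limiter: a request is allowed only if
// fewer than `limit` requests fall inside the trailing window.
class SlidingWindowCounter {
  private hits: number[] = []
  constructor(private limit: number, private windowMs: number) {}

  allow(nowMs: number): boolean {
    // Drop hits that have aged out of the trailing window
    this.hits = this.hits.filter((t) => nowMs - t < this.windowMs)
    if (this.hits.length >= this.limit) return false
    this.hits.push(nowMs)
    return true
  }
}
```

With a limit of 10 per 60 seconds, ten requests in the first 30 seconds exhaust the limit, a request at 45 seconds is denied, and a request at 61 seconds succeeds because the earliest hit has expired.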
Security Best Practices
Preventing Header Spoofing
The environment-aware IP detection prevents a critical vulnerability where attackers could bypass rate limits:
# Attack attempt that would work with naive header trust
for i in {1..100}; do
curl -H "X-Forwarded-For: 192.168.1.$i" https://api.example.com/endpoint
done
# This attack is prevented by our environment-aware approach
Timestamp Validation
The verify-token-gate endpoint implements comprehensive timestamp validation:
- Type Checking: Ensures timestamp is a number, not a string or other type
- Future Prevention: Rejects timestamps from the future (with 30-second tolerance)
- Expiry Enforcement: Signatures expire after 5 minutes to prevent replay attacks
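The three checks can be factored into a standalone validator. A sketch (validateTimestamp is a hypothetical helper; the route performs these checks inline):

```typescript
type TimestampCheck =
  | { ok: true }
  | { ok: false; error: string }

// Validate a submission timestamp: must be a number, not in the
// future (30s clock-skew tolerance), and at most 5 minutes old.
// `now` is injectable for testing and defaults to the current time.
function validateTimestamp(timestamp: unknown, now = Date.now()): TimestampCheck {
  if (typeof timestamp !== 'number') {
    return { ok: false, error: 'Missing or invalid timestamp field' }
  }
  if (timestamp > now + 30_000) {
    return { ok: false, error: 'Invalid timestamp - cannot be in the future' }
  }
  if (now - timestamp > 5 * 60 * 1000) {
    return { ok: false, error: 'Signature expired' }
  }
  return { ok: true }
}
```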
Rate Limit Testing
When testing rate limits, be aware of environment differences:
// Local development test
async function testLocalRateLimit() {
  // Will use socket IP (127.0.0.1 or ::1)
  for (let i = 0; i < 12; i++) {
    const res = await fetch('http://localhost:3000/api/nonce')
    console.log(`Request ${i + 1}:`, res.status)
  }
}

// Cannot spoof in local dev
async function testSpoofingPrevention() {
  const res = await fetch('http://localhost:3000/api/nonce', {
    headers: {
      'X-Forwarded-For': '1.2.3.4' // This will be ignored
    }
  })
  // Will still use socket IP for rate limiting
}
Troubleshooting
Rate Limiting Not Working
If rate limiting appears to be bypassed:
1. Check Redis connection:
   curl -X GET https://your-instance.upstash.io/ping \
     -H "Authorization: Bearer your-token"
2. Verify environment variables are loaded:
   console.log('Rate limiting enabled:', process.env.NEXT_PUBLIC_ACTIVATE_REDIS)
   console.log('Redis URL:', process.env.KV_REST_API_URL ? 'Set' : 'Not set')
   console.log('Vercel env:', process.env.VERCEL_ENV)
3. Check the identifier extraction is working:
   console.log('Client identifier:', getIdentifier(req))
   console.log('Is Vercel prod:', process.env.VERCEL_ENV === 'production')
Incorrect Rate Limit Counts
If limits seem wrong:
- Remember there are two layers (specific + global)
- Check which limiter is triggering (look at the limit value)
- Global limit (100/hr) may trigger before specific limits
- In development, each request from the same machine shares the same limit
Testing in Different Environments
// Quick test script that works in any environment
async function testRateLimit() {
  console.log('Testing in environment:', process.env.NODE_ENV)
  console.log('Vercel env:', process.env.VERCEL_ENV)

  for (let i = 0; i < 12; i++) {
    const res = await fetch('/api/nonce')
    const identifier = res.headers.get('X-Debug-Identifier') // If you add this for debugging
    console.log(`Request ${i + 1}:`, res.status, 'ID:', identifier)
    if (res.status === 429) {
      const data = await res.json()
      console.log('Rate limited! Retry after:', data.retryAfter)
      break
    }
  }
}
Best Practices
Choosing Appropriate Limits
When configuring rate limits, consider:
- Operation cost - More expensive operations need stricter limits
- User experience - Allow enough for legitimate use cases
- Retry patterns - Account for wallet connection failures
- Mobile users - May need multiple attempts due to app switching
- Security requirements - Balance usability with protection
Error Handling
Always handle rate limit responses gracefully:
try {
  const response = await fetch('/api/endpoint')
  if (response.status === 429) {
    const { retryAfter } = await response.json()
    // Show user-friendly message
    // Disable UI elements
    // Set timeout for retry
    return
  }
  // Handle success
} catch (error) {
  // Handle network errors
}
Monitoring
Track rate limit metrics for optimization:
- Which endpoints hit limits most often
- Average remaining capacity
- Peak usage times
- Unique users affected
- Geographic distribution of requests
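A minimal in-memory tracker for the first two metrics might look like the following sketch (RateLimitMetrics is hypothetical; production monitoring would forward these events to an observability backend instead of keeping them in process memory):

```typescript
// Track 429 occurrences and average remaining capacity per endpoint.
class RateLimitMetrics {
  private stats = new Map<
    string,
    { limited: number; totalRemaining: number; samples: number }
  >()

  record(endpoint: string, limited: boolean, remaining: number) {
    const m = this.stats.get(endpoint) ?? { limited: 0, totalRemaining: 0, samples: 0 }
    if (limited) m.limited++
    m.totalRemaining += remaining
    m.samples++
    this.stats.set(endpoint, m)
  }

  limitedCount(endpoint: string): number {
    return this.stats.get(endpoint)?.limited ?? 0
  }

  averageRemaining(endpoint: string): number {
    const m = this.stats.get(endpoint)
    return m && m.samples > 0 ? m.totalRemaining / m.samples : 0
  }
}
```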
Deployment Considerations
- Vercel Production: Headers are trustworthy, rate limiting works per real user IP
- Vercel Preview: Uses socket IP, may group preview deployments together
- Local Development: All requests from localhost share the same limit
- Self-Hosted: Must use socket IP to prevent header spoofing
Summary
The Rate Limiting Library provides essential protection for RitoSwap’s API endpoints while maintaining security against spoofing attacks and replay attempts. Through environment-aware IP detection, comprehensive timestamp validation, and carefully calibrated limits, it ensures fair resource allocation without compromising the user experience. The library’s integration with RitoSwap’s token gate system demonstrates how security-first design can protect resource-intensive operations while remaining transparent to legitimate users.
UI Integration
Rate Limit Modal Component
While the rate limiting library handles the server-side logic, the RateLimitModal component provides the critical user-facing feedback when limits are reached. This component creates a seamless experience by displaying clear, non-intrusive notifications that automatically dismiss, keeping users informed without disrupting their workflow.
Component Architecture
The RateLimitModal uses a singleton pattern to ensure only one rate limit notification appears at a time, preventing UI clutter when multiple requests hit limits simultaneously.
The component is organized across three files:
- RateLimitModal.tsx: component logic and lifecycle
- RateLimitModal.module.css: scoped styles and animations
- index.tsx: provider and the showRateLimitModal export
Why a Singleton Pattern?
Consider what happens when a user rapidly clicks a button that triggers API calls. Without singleton management, each rate-limited response could spawn its own modal, creating a confusing stack of notifications. The singleton pattern solves this by:
- Canceling any existing modal when a new one needs to appear
- Ensuring clean transitions between notifications
- Maintaining a single source of truth for the current rate limit state
- Preventing memory leaks from multiple timer instances
Component Implementation
The RateLimitModal component manages its own lifecycle with automatic dismissal after 3 seconds:
import { useEffect, useRef, useState } from 'react'

export default function RateLimitModal({
  isVisible,
  limit,
  remaining,
  retryAfter,
  onClose
}: RateLimitModalProps) {
  const [show, setShow] = useState(false)
  const [fadeOut, setFadeOut] = useState(false)
  const timerRef = useRef<NodeJS.Timeout | null>(null)

  useEffect(() => {
    if (isVisible) {
      // Cancel any existing timer to prevent conflicts
      if (timerRef.current) {
        clearTimeout(timerRef.current)
      }
      setShow(true)
      setFadeOut(false)

      // Auto-hide after 3 seconds for non-intrusive UX
      timerRef.current = setTimeout(() => {
        setFadeOut(true) // Trigger fade animation
        setTimeout(() => {
          setShow(false) // Remove from DOM after animation
          onClose?.()
        }, 300)
      }, 3000)
    }

    return () => {
      if (timerRef.current) {
        clearTimeout(timerRef.current)
      }
    }
  }, [isVisible, onClose])

  // Component renders the modal with dynamic content based on rate limit state
}
Key implementation details:
- Two-phase hiding: First triggers fade animation, then removes from DOM
- Timer cleanup: Prevents memory leaks and conflicting timers
- Conditional rendering: Only mounts when needed for performance
Provider Setup
The RateLimitModalProvider manages the singleton instance and must wrap your application:
// In your app's provider setup (e.g., providers.tsx)
import { RateLimitModalProvider } from '@/components/utilities/wallet/rateLimitModal'
export function Providers({ children }: { children: ReactNode }) {
  return (
    <WagmiProvider config={config}>
      <QueryClientProvider client={queryClient}>
        <WalletConnectProvider>
          <RateLimitModalProvider>
            {children}
          </RateLimitModalProvider>
        </WalletConnectProvider>
      </QueryClientProvider>
    </WagmiProvider>
  )
}
The provider creates a global modal instance that can be triggered from anywhere in your application.
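The index.tsx wiring is not shown, but a common way to implement a global trigger like showRateLimitModal is a module-level callback that the provider registers on mount. A sketch under that assumption (registerModalHandler and the boolean return value are illustrative, not the library's actual API):

```typescript
// Hypothetical wiring: the provider registers a handler when it
// mounts, and showRateLimitModal forwards props to it from anywhere.
type ModalProps = { limit?: number; remaining?: number; retryAfter?: number }

let currentHandler: ((props: ModalProps) => void) | null = null

export function registerModalHandler(
  handler: ((props: ModalProps) => void) | null
) {
  currentHandler = handler
}

export function showRateLimitModal(props: ModalProps): boolean {
  if (!currentHandler) return false // provider not mounted yet
  currentHandler(props)
  return true
}
```

Because there is exactly one registered handler, repeated calls replace the current notification rather than stacking new ones, which is the singleton behavior described above.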
Usage with Rate Limiting
The modal integrates seamlessly with rate limit responses. Here’s the complete flow:
// Import the show function
import { showRateLimitModal } from '@/components/utilities/wallet/rateLimitModal'
// In your API call handling
async function handleApiCall() {
  try {
    const response = await fetch('/api/some-endpoint')
    if (response.status === 429) {
      const data = await response.json()

      // Trigger the modal with rate limit details
      showRateLimitModal({
        limit: data.limit,
        remaining: data.remaining,
        retryAfter: data.retryAfter,
        onClose: () => {
          console.log('User acknowledged rate limit')
        }
      })

      // Handle UI state (e.g., re-enable buttons)
      return
    }
    // Handle successful response
  } catch (error) {
    // Handle network errors
  }
}
Modal States and Messages
The component adapts its message based on the rate limit state:
// When completely rate limited (remaining === 0)
"Rate limit reached. Please try again in 45 seconds."
// When approaching limit (remaining > 0)
"You have 2 requests remaining."
// When retry time unknown
"Rate limit reached. Please try again later."
This progressive messaging helps users understand their current limits and plan their actions accordingly.
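This selection logic can be expressed as a small pure function (a sketch; rateLimitMessage is a hypothetical name, and the actual component renders these strings inline):

```typescript
// Choose the user-facing message from the rate limit state.
// Undefined values model headers/fields the server did not provide.
function rateLimitMessage(remaining?: number, retryAfter?: number): string {
  if (remaining !== undefined && remaining > 0) {
    return `You have ${remaining} requests remaining.`
  }
  if (retryAfter !== undefined) {
    return `Rate limit reached. Please try again in ${retryAfter} seconds.`
  }
  return 'Rate limit reached. Please try again later.'
}
```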
Styling and Animation
The modal uses CSS modules for scoped styling with smooth animations:
/* Fade in animation for smooth appearance */
@keyframes modalFadeIn {
from {
opacity: 0;
transform: translate(-50%, -50%) scale(0.95);
}
to {
opacity: 1;
transform: translate(-50%, -50%) scale(1);
}
}
/* Fade out animation for graceful exit */
@keyframes modalFadeOut {
from {
opacity: 1;
transform: translate(-50%, -50%) scale(1);
}
to {
opacity: 0;
transform: translate(-50%, -50%) scale(0.95);
}
}
The animations provide visual feedback without being jarring, maintaining the professional feel of the application.
Complete Integration Example
Here’s how the RateLimitModal works with the rate limiting library in a real component:
// In GateModal.tsx
const handleSign = async () => {
  setIsSigning(true)
  try {
    // Step 1: Get nonce (rate limited endpoint)
    const nonceResponse = await fetch('/api/nonce')
    if (nonceResponse.status === 429) {
      const data = await nonceResponse.json()

      // Show rate limit modal with server-provided details
      showRateLimitModal({
        limit: data.limit,
        remaining: data.remaining,
        retryAfter: data.retryAfter
      })
      setIsSigning(false)
      return
    }

    // Step 2: Continue with signing flow
    const { nonce } = await nonceResponse.json()
    // ... rest of the signing logic
  } catch (error) {
    console.error('Sign error:', error)
    setIsSigning(false)
  }
}
Accessibility Considerations
The modal implements several accessibility features:
- High contrast: Uses white text on primary background for readability
- Z-index management: Ensures modal appears above all content (z-index: 10000)
- Mobile responsive: Adjusts sizing and spacing for smaller screens
- Clear messaging: Uses simple, direct language for rate limit feedback
API Reference
Component Props
| Prop | Type | Required | Description |
| --- | --- | --- | --- |
| isVisible | boolean | Yes | Controls modal visibility |
| limit | number | No | Maximum requests allowed in the window |
| remaining | number | No | Requests remaining in current window |
| retryAfter | number | No | Seconds until retry is allowed |
| onClose | () => void | No | Callback when modal closes (auto or manual) |
Exported Functions
| Function | Signature | Description |
| --- | --- | --- |
| showRateLimitModal | (props: Omit<RateLimitModalProps, 'isVisible'>) => void | Shows the rate limit modal with specified props |
Provider Component
| Component | Props | Description |
| --- | --- | --- |
| RateLimitModalProvider | { children: ReactNode } | Wrapper that manages the modal singleton instance |
Testing the Modal
To test the rate limit modal during development:
// Manual trigger for testing
import { showRateLimitModal } from '@/components/utilities/wallet/rateLimitModal'

// Test different states
function testRateLimitModal() {
  // Test "no remaining requests" state
  showRateLimitModal({
    limit: 10,
    remaining: 0,
    retryAfter: 45
  })

  // Wait 4 seconds, then test "some remaining" state
  setTimeout(() => {
    showRateLimitModal({
      limit: 10,
      remaining: 3
    })
  }, 4000)
}
Best Practices
When implementing rate limit feedback in your components:
- Always provide retry information when available from the server
- Re-enable UI elements after showing the modal to prevent stuck states
- Log rate limit events for monitoring and optimization
- Consider the user journey - place checks at natural interaction points
The RateLimitModal component completes the rate limiting system by providing clear, user-friendly feedback that helps users understand and work within system limits without frustration.