Rate Limits
Understand pxlpeak API rate limits and best practices for handling them.
Overview
Rate limits protect the API from abuse and ensure fair usage across all customers. Understanding and handling rate limits properly is essential for building robust integrations.
Rate Limit Tiers
| Plan | Requests/Second | Requests/Day | Concurrent | Burst |
|------|-----------------|--------------|------------|-------|
| Free | 10 | 10,000 | 5 | 20 |
| Starter | 50 | 100,000 | 10 | 100 |
| Professional | 200 | 1,000,000 | 50 | 500 |
| Enterprise | 1,000 | Unlimited | 200 | 2,000 |
Note: Limits apply per API key. The Burst column is the maximum short spike of requests allowed above the sustained per-second rate. Contact sales for custom limits.
Endpoint-Specific Limits
Some endpoints have additional restrictions:
| Endpoint | Limit | Window |
|----------|-------|--------|
| POST /v1/events | 1,000 | 1 second |
| POST /v1/events/batch | 100 | 1 second |
| GET /v1/analytics/* | 60 | 1 minute |
| POST /v1/exports | 10 | 1 hour |
| POST /v1/reports | 30 | 1 hour |
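Because these per-endpoint windows differ from the account-wide limits, it can help to keep a client-side map of them. A minimal sketch follows; the paths and limits come from the table above, but the `ENDPOINT_LIMITS` map and `limitFor` helper are illustrative, not part of any pxlpeak SDK:

```typescript
// Illustrative client-side map of the per-endpoint limits listed above,
// expressed as { limit, windowSeconds } for use in a local throttle.
const ENDPOINT_LIMITS: Record<string, { limit: number; windowSeconds: number }> = {
  'POST /v1/events': { limit: 1000, windowSeconds: 1 },
  'POST /v1/events/batch': { limit: 100, windowSeconds: 1 },
  'GET /v1/analytics/*': { limit: 60, windowSeconds: 60 },
  'POST /v1/exports': { limit: 10, windowSeconds: 3600 },
  'POST /v1/reports': { limit: 30, windowSeconds: 3600 }
};

// Hypothetical helper: look up the limit for a method + path,
// matching trailing-wildcard patterns like 'GET /v1/analytics/*'.
function limitFor(method: string, path: string) {
  const key = `${method} ${path}`;
  if (ENDPOINT_LIMITS[key]) return ENDPOINT_LIMITS[key];
  const wildcard = Object.entries(ENDPOINT_LIMITS).find(
    ([pattern]) => pattern.endsWith('*') && key.startsWith(pattern.slice(0, -1))
  );
  return wildcard ? wildcard[1] : null;
}
```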
Rate Limit Headers
Every API response includes rate limit information in its headers:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 147
X-RateLimit-Reset: 1704985800
X-RateLimit-Policy: 200;w=1
```

On 429 responses, a `Retry-After` header is also included.

| Header | Description |
|--------|-------------|
| X-RateLimit-Limit | Maximum requests allowed in window |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when window resets |
| X-RateLimit-Policy | Rate limit policy (requests; window in seconds) |
| Retry-After | Seconds to wait before retrying (only on 429) |
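These headers can be read off any response before deciding whether to slow down. A small sketch, assuming only the header names documented above (the `readRateLimit` helper itself is illustrative):

```typescript
// Parse the pxlpeak rate-limit headers from a fetch Response.
// Missing headers yield NaN rather than a misleading 0.
function readRateLimit(response: Response) {
  const limit = Number(response.headers.get('X-RateLimit-Limit') ?? NaN);
  const remaining = Number(response.headers.get('X-RateLimit-Remaining') ?? NaN);
  const reset = Number(response.headers.get('X-RateLimit-Reset') ?? NaN); // Unix seconds
  const secondsToReset = Math.max(0, reset - Math.floor(Date.now() / 1000));
  return { limit, remaining, secondsToReset };
}

// Example: warn when less than 10% of the window remains.
// const { limit, remaining } = readRateLimit(response);
// if (remaining / limit < 0.1) console.warn('Approaching rate limit');
```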
Handling Rate Limits
429 Response
When rate limited, the API returns:

```json
{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Please retry after 2 seconds.",
    "retry_after": 2,
    "limit": 200,
    "remaining": 0,
    "reset_at": "2026-01-12T14:30:02Z"
  }
}
```

Basic Retry Logic

```typescript
interface RetryConfig {
maxRetries: number;
baseDelay: number;
maxDelay: number;
}
// Retry on 429, honoring Retry-After when present and falling back to
// exponential backoff; other failures are also retried with backoff.
async function fetchWithRetry<T>(
  url: string,
  options: RequestInit,
  config: RetryConfig = { maxRetries: 3, baseDelay: 1000, maxDelay: 30000 }
): Promise<T> {
let lastError: Error | null = null;
for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
try {
const response = await fetch(url, options);
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
const delay = retryAfter
? parseInt(retryAfter, 10) * 1000
: Math.min(config.baseDelay * Math.pow(2, attempt), config.maxDelay);
console.log(`Rate limited. Retrying in ${delay}ms (attempt ${attempt + 1})`);
await sleep(delay);
continue;
}
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${await response.text()}`);
}
return response.json();
} catch (error) {
lastError = error as Error;
if (attempt < config.maxRetries) {
const delay = Math.min(
config.baseDelay * Math.pow(2, attempt),
config.maxDelay
);
await sleep(delay);
}
}
}
throw lastError || new Error('Max retries exceeded');
}
function sleep(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms));
}
```

Advanced Rate Limiter Class

```typescript
interface RateLimitState {
remaining: number;
reset: number;
limit: number;
}
class RateLimitedClient {
private apiKey: string;
private baseUrl = 'https://api.pxlpeak.com/v1';
private state: RateLimitState = { remaining: Infinity, reset: 0, limit: 200 };
private queue: Array<() => void> = [];
private processing = false;
constructor(apiKey: string) {
this.apiKey = apiKey;
}
  // Track the server-reported window so requests can be throttled proactively.
  private updateRateLimitState(headers: Headers) {
const remaining = headers.get('X-RateLimit-Remaining');
const reset = headers.get('X-RateLimit-Reset');
const limit = headers.get('X-RateLimit-Limit');
if (remaining) this.state.remaining = parseInt(remaining, 10);
if (reset) this.state.reset = parseInt(reset, 10);
if (limit) this.state.limit = parseInt(limit, 10);
}
private async waitForCapacity(): Promise<void> {
if (this.state.remaining > 0) {
return;
}
const now = Math.floor(Date.now() / 1000);
const waitTime = Math.max(0, (this.state.reset - now) * 1000);
if (waitTime > 0) {
console.log(`Rate limit reached. Waiting ${waitTime}ms for reset.`);
await sleep(waitTime + 100); // Add small buffer
this.state.remaining = this.state.limit;
}
}
  async request<T>(
    endpoint: string,
    options: RequestInit = {},
    attempt = 0
  ): Promise<T> {
await this.waitForCapacity();
const response = await fetch(`${this.baseUrl}${endpoint}`, {
...options,
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
...options.headers
}
});
this.updateRateLimitState(response.headers);
    if (response.status === 429) {
      // Bound retries so a persistent 429 cannot recurse forever.
      if (attempt >= 3) {
        throw new Error('rate_limited: retry budget exhausted');
      }
      const retryAfter = parseInt(
        response.headers.get('Retry-After') || '1',
        10
      );
      await sleep(retryAfter * 1000);
      return this.request(endpoint, options, attempt + 1);
    }
if (!response.ok) {
const error = await response.json();
throw new Error(error.error?.message || `HTTP ${response.status}`);
}
return response.json();
}
// Batch requests with rate limiting
async batchRequest<T>(
requests: Array<{ endpoint: string; options?: RequestInit }>
): Promise<T[]> {
const results: T[] = [];
for (const req of requests) {
const result = await this.request<T>(req.endpoint, req.options);
results.push(result);
}
return results;
}
// Get current rate limit status
getRateLimitStatus(): RateLimitState {
return { ...this.state };
}
}
// Usage
const client = new RateLimitedClient(process.env.PXLPEAK_API_KEY!);
const sites = await client.request('/sites');
console.log('Rate limit status:', client.getRateLimitStatus());
```

Token Bucket Implementation
For high-throughput applications, implement a token bucket:

```typescript
class TokenBucket {
private tokens: number;
private lastRefill: number;
private readonly capacity: number;
private readonly refillRate: number; // tokens per second
constructor(capacity: number, refillRate: number) {
this.capacity = capacity;
this.refillRate = refillRate;
this.tokens = capacity;
this.lastRefill = Date.now();
}
private refill() {
const now = Date.now();
const elapsed = (now - this.lastRefill) / 1000;
const tokensToAdd = elapsed * this.refillRate;
this.tokens = Math.min(this.capacity, this.tokens + tokensToAdd);
this.lastRefill = now;
}
async acquire(tokens = 1): Promise<void> {
this.refill();
if (this.tokens >= tokens) {
this.tokens -= tokens;
return;
}
// Calculate wait time
const tokensNeeded = tokens - this.tokens;
const waitTime = (tokensNeeded / this.refillRate) * 1000;
await sleep(waitTime);
this.refill();
this.tokens -= tokens;
}
getAvailableTokens(): number {
this.refill();
return Math.floor(this.tokens);
}
}
// Usage with API client
class ThrottledApiClient {
private bucket: TokenBucket;
private apiKey: string;
constructor(apiKey: string, requestsPerSecond = 50) {
this.apiKey = apiKey;
this.bucket = new TokenBucket(requestsPerSecond * 2, requestsPerSecond);
}
async request<T>(endpoint: string, options?: RequestInit): Promise<T> {
await this.bucket.acquire();
const response = await fetch(`https://api.pxlpeak.com/v1${endpoint}`, {
...options,
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
...options?.headers
}
});
return response.json();
}
}
```

Batch Operations
Reduce API calls by using batch endpoints:
Single vs Batch Comparison

```typescript
// Bad: individual requests (one API call per event)
async function trackEventsBad(events: Event[]) {
  for (const event of events) {
    await fetch('https://api.pxlpeak.com/v1/events', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.PXLPEAK_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(event)
    });
  }
}
// Good: batch request (a single API call)
async function trackEventsGood(events: Event[]) {
  await fetch('https://api.pxlpeak.com/v1/events/batch', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.PXLPEAK_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      site_id: 'site_xxx',
      events: events
    })
  });
}
```

Batch Size Limits
| Endpoint | Max Batch Size |
|----------|----------------|
| /v1/events/batch | 1,000 events |
| /v1/users/batch | 100 users |
| /v1/segments/batch-evaluate | 10,000 user IDs |
Chunking Large Batches

```typescript
function chunk<T>(array: T[], size: number): T[][] {
const chunks: T[][] = [];
for (let i = 0; i < array.length; i += size) {
chunks.push(array.slice(i, i + size));
}
return chunks;
}
async function trackLargeEventBatch(
client: RateLimitedClient,
events: Event[],
batchSize = 1000
) {
const chunks = chunk(events, batchSize);
const results = [];
for (const eventChunk of chunks) {
const result = await client.request('/events/batch', {
method: 'POST',
body: JSON.stringify({
site_id: 'site_xxx',
events: eventChunk
})
});
results.push(result);
}
return results;
}
```

Caching Strategies
Reduce API calls with intelligent caching:
Response Caching

```typescript
import { LRUCache } from 'lru-cache';
class CachedApiClient {
private client: RateLimitedClient;
private cache: LRUCache<string, { data: any; expires: number }>;
constructor(apiKey: string) {
this.client = new RateLimitedClient(apiKey);
this.cache = new LRUCache({
max: 1000,
ttl: 5 * 60 * 1000 // 5 minutes default
});
}
async request<T>(
endpoint: string,
options: RequestInit & { cacheTtl?: number } = {}
): Promise<T> {
const { cacheTtl = 300000, ...fetchOptions } = options;
// Only cache GET requests
if (!fetchOptions.method || fetchOptions.method === 'GET') {
const cached = this.cache.get(endpoint);
if (cached && cached.expires > Date.now()) {
return cached.data as T;
}
}
const data = await this.client.request<T>(endpoint, fetchOptions);
// Cache successful GET responses
if (!fetchOptions.method || fetchOptions.method === 'GET') {
this.cache.set(endpoint, {
data,
expires: Date.now() + cacheTtl
});
}
return data;
}
invalidateCache(pattern?: string) {
if (pattern) {
for (const key of this.cache.keys()) {
if (key.includes(pattern)) {
this.cache.delete(key);
}
}
} else {
this.cache.clear();
}
}
}
// Usage
const cached = new CachedApiClient(process.env.PXLPEAK_API_KEY!);
// First call hits API
const sites = await cached.request('/sites');
// Second call returns cached data
const sitesAgain = await cached.request('/sites');
// Custom TTL
const analytics = await cached.request('/sites/xxx/analytics/traffic', {
cacheTtl: 60000 // 1 minute for real-time data
});
```

Cache Headers
Respect API cache headers:

```typescript
interface CacheableResponse<T> {
data: T;
cacheControl: {
maxAge: number;
staleWhileRevalidate: number;
} | null;
}
async function fetchWithCacheHeaders<T>(
url: string,
options: RequestInit
): Promise<CacheableResponse<T>> {
const response = await fetch(url, options);
const data = await response.json();
const cacheControl = response.headers.get('Cache-Control');
let cacheInfo = null;
if (cacheControl) {
const maxAgeMatch = cacheControl.match(/max-age=(\d+)/);
const staleMatch = cacheControl.match(/stale-while-revalidate=(\d+)/);
cacheInfo = {
maxAge: maxAgeMatch ? parseInt(maxAgeMatch[1], 10) : 0,
staleWhileRevalidate: staleMatch ? parseInt(staleMatch[1], 10) : 0
};
}
return { data, cacheControl: cacheInfo };
}
```

Webhook Alternative
For real-time data, use webhooks instead of polling:

```typescript
// Bad: Polling every 10 seconds (8,640 requests/day)
setInterval(async () => {
const data = await client.request('/sites/xxx/conversions');
processConversions(data);
}, 10000);
// Good: webhooks (0 API requests), handled here with an Express-style route
app.post('/webhooks/pxlpeak', (req, res) => {
if (req.body.type === 'conversion.completed') {
processConversion(req.body.data);
}
res.status(200).send();
});
```

Monitoring Rate Limits
Logging Rate Limit Events

```typescript
class MonitoredApiClient extends RateLimitedClient {
private metrics: {
requests: number;
rateLimited: number;
lastRateLimit: Date | null;
} = {
requests: 0,
rateLimited: 0,
lastRateLimit: null
};
async request<T>(endpoint: string, options?: RequestInit): Promise<T> {
this.metrics.requests++;
try {
return await super.request(endpoint, options);
} catch (error: any) {
if (error.message?.includes('rate_limited')) {
this.metrics.rateLimited++;
this.metrics.lastRateLimit = new Date();
// Alert if rate limited frequently
if (this.metrics.rateLimited > 10) {
this.alertRateLimitIssue();
}
}
throw error;
}
}
private alertRateLimitIssue() {
console.warn('Frequent rate limiting detected', {
totalRequests: this.metrics.requests,
rateLimited: this.metrics.rateLimited,
percentage: (this.metrics.rateLimited / this.metrics.requests * 100).toFixed(2)
});
// Send to monitoring service
// metrics.increment('api.rate_limited');
}
getMetrics() {
return { ...this.metrics };
}
}
```

Dashboard Integration

```typescript
// Report rate limit metrics to monitoring
async function reportRateLimitMetrics(client: MonitoredApiClient) {
const status = client.getRateLimitStatus();
const metrics = client.getMetrics();
await fetch('https://your-monitoring.com/metrics', {
method: 'POST',
body: JSON.stringify({
service: 'pxlpeak-api',
metrics: {
rate_limit_remaining: status.remaining,
rate_limit_percentage: (status.remaining / status.limit) * 100,
total_requests: metrics.requests,
rate_limited_requests: metrics.rateLimited
}
})
});
}
```

Best Practices Summary
| Practice | Impact |
|----------|--------|
| Use batch endpoints | 10-100x fewer requests |
| Implement caching | 50-90% fewer requests |
| Use webhooks for real-time | 99% fewer polling requests |
| Respect Retry-After header | Avoid rate limit escalation |
| Monitor rate limit headers | Prevent unexpected failures |
| Use exponential backoff | Graceful recovery |
| Pre-fetch during low traffic | Spread load over time |
Quota Increases
Need higher limits? Contact us:
- Enterprise customers: Contact your account manager
- Growing teams: Email api-support@pxlpeak.com with:
- Current usage patterns
- Expected growth
- Use case description
Next: See SDKs & Libraries for pre-built integrations with rate limit handling.