Rate Limiting

Ticksupply uses a token bucket algorithm to enforce fair usage limits. This guide explains the limits, how to monitor your usage, and best practices for staying within bounds.
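For context, a token bucket grants a budget of tokens that refills at a steady rate; each request consumes one token, and requests are rejected once the bucket is empty. The sketch below illustrates the general algorithm only, not Ticksupply's actual server implementation:
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, capped at the bucket's capacity
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.refill_per_second,
        )
        self.last_refill = now
        # Spend one token if available; otherwise reject the request
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False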

Rate limit overview

Window      Limit            Scope
Per minute  200 requests     Per account
Per hour    10,000 requests  Per account
Rate limits are shared across all API keys on the same account. If you have multiple applications using different keys, they share the same limits.

Rate limit headers

Every API response includes headers showing your current rate limit status:
Header                        Description
X-RateLimit-Limit-Minute      Maximum requests allowed per minute
X-RateLimit-Remaining-Minute  Requests remaining this minute
X-RateLimit-Reset-Minute      Seconds until the minute window resets
X-RateLimit-Limit-Hour        Maximum requests allowed per hour
X-RateLimit-Remaining-Hour    Requests remaining this hour
X-RateLimit-Reset-Hour        Seconds until the hour window resets
Example response headers:
X-RateLimit-Limit-Minute: 200
X-RateLimit-Remaining-Minute: 195
X-RateLimit-Reset-Minute: 45
X-RateLimit-Limit-Hour: 10000
X-RateLimit-Remaining-Hour: 9850
X-RateLimit-Reset-Hour: 2345
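You can read these headers from any response to track how much headroom you have before hitting a limit. A minimal sketch using the requests library and the exchanges endpoint from the examples in this guide:
import requests

response = requests.get(
    "https://api.ticksupply.com/v1/exchanges",
    headers={"X-Api-Key": API_KEY}
)
remaining = int(response.headers["X-RateLimit-Remaining-Minute"])
reset_in = int(response.headers["X-RateLimit-Reset-Minute"])
print(f"{remaining} requests left this minute; window resets in {reset_in}s")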

When rate limits are exceeded

When you exceed rate limits, the API returns a 429 Too Many Requests response:
{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Retry after 30 seconds."
  }
}
The response includes a Retry-After header indicating how many seconds to wait:
HTTP/1.1 429 Too Many Requests
Retry-After: 30
Content-Type: application/json

Handling rate limits

Basic retry with exponential backoff

import random
import time

import requests

def make_request_with_retry(url, headers, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            # Prefer the server's Retry-After hint; fall back to exponential backoff
            retry_after = int(response.headers.get("Retry-After", 2 ** attempt))
            # Add jitter so concurrent clients don't all retry in lockstep
            wait = retry_after + random.uniform(0, 1)
            print(f"Rate limited. Waiting {wait:.1f} seconds...")
            time.sleep(wait)
            continue

        response.raise_for_status()
        return response.json()

    raise Exception("Max retries exceeded")

# Usage
data = make_request_with_retry(
    "https://api.ticksupply.com/v1/exchanges",
    {"X-Api-Key": API_KEY}
)

Proactive rate limiting

Monitor the rate limit headers to avoid hitting limits:
import time

import requests

class RateLimitedClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.remaining_minute = 200
        self.reset_minute = 0
    
    def request(self, method, url, **kwargs):
        # Check if we should wait
        if self.remaining_minute <= 5:
            wait_time = max(0, self.reset_minute - time.time()) + 1
            if wait_time > 0:
                print(f"Approaching limit, waiting {wait_time:.1f}s")
                time.sleep(wait_time)
        
        headers = kwargs.pop("headers", {})
        headers["X-Api-Key"] = self.api_key
        
        response = requests.request(method, url, headers=headers, **kwargs)
        
        # Update rate limit tracking
        self.remaining_minute = int(
            response.headers.get("X-RateLimit-Remaining-Minute", 200)
        )
        reset_in = int(
            response.headers.get("X-RateLimit-Reset-Minute", 60)
        )
        self.reset_minute = time.time() + reset_in
        
        return response
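A quick usage sketch of the client above, against the exchanges endpoint from earlier examples:
client = RateLimitedClient(API_KEY)
response = client.request("GET", "https://api.ticksupply.com/v1/exchanges")
exchanges = response.json()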

Best practices

Batch requests when possible

Instead of making many individual requests, use endpoints that return multiple items:
# ❌ Avoid: Many individual requests
for subscription_id in subscription_ids:
    response = requests.get(
        f"https://api.ticksupply.com/v1/subscriptions/{subscription_id}",
        headers=headers
    )

# ✅ Better: Single paginated request
response = requests.get(
    "https://api.ticksupply.com/v1/subscriptions",
    headers=headers,
    params={"limit": 100}
)

Use idempotency keys for retries

When retrying failed requests, use idempotency keys to prevent duplicate operations:
curl -X POST -H "X-Api-Key: YOUR_API_KEY" \
  -H "Idempotency-Key: unique-request-id-123" \
  -H "Content-Type: application/json" \
  -d '{"datastream_id": 123}' \
  https://api.ticksupply.com/v1/subscriptions
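The same request in Python, using a random UUID as the key (the datastream_id value is just the placeholder from the curl example):
import uuid

import requests

# Generate one key per logical operation and reuse it for every retry of that operation
idempotency_key = str(uuid.uuid4())

response = requests.post(
    "https://api.ticksupply.com/v1/subscriptions",
    headers={
        "X-Api-Key": API_KEY,
        "Idempotency-Key": idempotency_key,
    },
    json={"datastream_id": 123},
)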

Implement request queuing

For high-volume applications, queue requests and process them at a sustainable rate:
import queue
import threading
import time

import requests

class RequestQueue:
    def __init__(self, api_key, requests_per_second=3):
        self.api_key = api_key
        self.delay = 1.0 / requests_per_second
        self.queue = queue.Queue()
        self.running = True
        # Daemon thread so the worker doesn't block interpreter shutdown
        self.worker = threading.Thread(target=self._process_queue, daemon=True)
        self.worker.start()
    
    def _process_queue(self):
        while self.running:
            try:
                request_info, result_queue = self.queue.get(timeout=1)
                response = requests.request(**request_info)
                result_queue.put(response)
                time.sleep(self.delay)
            except queue.Empty:
                continue
    
    def submit(self, method, url, **kwargs):
        kwargs["headers"] = kwargs.get("headers", {})
        kwargs["headers"]["X-Api-Key"] = self.api_key
        
        result_queue = queue.Queue()
        self.queue.put(({"method": method, "url": url, **kwargs}, result_queue))
        return result_queue.get()
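A brief usage sketch of the queue above, reusing the exchanges endpoint from earlier examples:
rq = RequestQueue(API_KEY, requests_per_second=3)
response = rq.submit("GET", "https://api.ticksupply.com/v1/exchanges")
print(response.status_code)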

Cache responses

Cache catalog data that doesn’t change frequently:
import time

import requests

_cache = {"data": None, "fetched_at": 0.0}
CACHE_TTL = 3600  # seconds; catalog data rarely changes

def get_exchanges():
    # Refetch only when the cached copy is older than the TTL
    if _cache["data"] is None or time.time() - _cache["fetched_at"] > CACHE_TTL:
        response = requests.get(
            "https://api.ticksupply.com/v1/exchanges",
            headers={"X-Api-Key": API_KEY}
        )
        response.raise_for_status()
        _cache["data"] = response.json()
        _cache["fetched_at"] = time.time()
    return _cache["data"]

# The cached response is reused for subsequent calls within the TTL
exchanges = get_exchanges()
Exchange and stream type data changes infrequently. Cache these responses for at least 1 hour to reduce API calls.

Rate limit increases

If you need higher rate limits for your use case, contact our support team.
Rate limit increases are evaluated on a case-by-case basis and may require an upgraded plan.
