Ticksupply uses cursor-based pagination for endpoints that return lists of items. This approach provides stable, consistent results even when data changes between requests.
Paginated endpoints accept two query parameters:
| Parameter | Type | Description |
|---|---|---|
| limit | integer | Number of items per page (see limits below) |
| page_token | string | Cursor for the next page (from previous response) |
Default limits vary by endpoint:
| Endpoints | Default | Max |
|---|---|---|
| Subscriptions, Exports | 50 | 100 |
| Instruments, Datastreams | 100 | 1,000 |
The response includes pagination metadata:
{
  "items": [ ... ],
  "total": 150,
  "limit": 50,
  "next_page_token": "eyJpZCI6IjEyMzQ1IiwidHMiOiIyMDI0LTEyLTIxIn0="
}
| Field | Description |
|---|---|
| items | Array of results for the current page |
| total | Total number of items across all pages |
| limit | Items per page (as requested) |
| next_page_token | Token for the next page (null if no more pages) |
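The total and limit fields let a client estimate how many pages a listing will take. A small illustrative helper (not part of the API):
import math

def page_count(total, limit):
    """Number of pages needed to cover `total` items at `limit` per page."""
    return math.ceil(total / limit)

print(page_count(150, 50))  # the response above spans 3 pages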
First page
Request the first page without a page_token:
curl -H "X-Api-Key: YOUR_API_KEY" \
"https://api.ticksupply.com/v1/subscriptions?limit=10"
Response:
{
  "items": [
    { "id": "sub-1", ... },
    { "id": "sub-2", ... },
    // ... 8 more items
  ],
  "total": 47,
  "limit": 10,
  "next_page_token": "eyJpZCI6InN1Yi0xMCIsInRzIjoiMjAyNC0xMi0yMVQxMjowMDowMFoifQ=="
}
Subsequent pages
Use the next_page_token from the previous response:
curl -H "X-Api-Key: YOUR_API_KEY" \
"https://api.ticksupply.com/v1/subscriptions?limit=10&page_token=eyJpZCI6InN1Yi0xMCIsInRzIjoiMjAyNC0xMi0yMVQxMjowMDowMFoifQ=="
Last page
When there are no more items, next_page_token is null:
{
  "items": [
    { "id": "sub-41", ... },
    // ... remaining items
  ],
  "total": 47,
  "limit": 10,
  "next_page_token": null
}
Fetching all pages
To retrieve a complete dataset, request pages in a loop until next_page_token is null:
import requests

API_KEY = "YOUR_API_KEY"

def fetch_all_subscriptions():
    """Fetch all subscriptions using pagination."""
    all_subscriptions = []
    page_token = None
    while True:
        params = {"limit": 100}
        if page_token:
            params["page_token"] = page_token
        response = requests.get(
            "https://api.ticksupply.com/v1/subscriptions",
            headers={"X-Api-Key": API_KEY},
            params=params,
        )
        response.raise_for_status()
        data = response.json()
        all_subscriptions.extend(data["items"])
        print(f"Fetched {len(all_subscriptions)}/{data['total']} subscriptions")
        page_token = data.get("next_page_token")
        if not page_token:
            break
    return all_subscriptions

# Usage
subscriptions = fetch_all_subscriptions()
print(f"Total: {len(subscriptions)} subscriptions")
Generator pattern
For memory efficiency with large datasets, use a generator pattern:
def iter_subscriptions(limit=100):
    """Iterate through subscriptions one at a time."""
    page_token = None
    while True:
        params = {"limit": limit}
        if page_token:
            params["page_token"] = page_token
        response = requests.get(
            "https://api.ticksupply.com/v1/subscriptions",
            headers={"X-Api-Key": API_KEY},
            params=params,
        )
        response.raise_for_status()
        data = response.json()
        for item in data["items"]:
            yield item
        page_token = data.get("next_page_token")
        if not page_token:
            break

# Usage - processes one item at a time
for subscription in iter_subscriptions():
    process_subscription(subscription)
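Because the generator fetches pages lazily, you can cap how many items you process without downloading the whole dataset, for example with itertools.islice from the standard library:
from itertools import islice

# Process only the first 25 subscriptions; the generator stops
# requesting pages once those items have been yielded.
for subscription in islice(iter_subscriptions(limit=100), 25):
    process_subscription(subscription)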
Cursor stability
Cursor-based pagination provides stable results. Unlike offset-based pagination, it doesn’t skip items when new data is added or duplicate items when data is deleted.
Results are ordered by creation time (newest first) and ID. This ordering remains stable across pages.
Treat page tokens as opaque strings. Don’t parse, modify, or construct them—only use tokens returned by the API.
Page tokens may expire after extended periods. If you receive an error with an old token, start from the first page.
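If a token is rejected, one recovery strategy is to restart from the first page. The sketch below assumes an expired token surfaces as a 4xx error (check the actual error response you receive); fetch_all_with_restart is an illustrative helper, not part of any SDK:
def fetch_all_with_restart(endpoint, limit=100):
    """Fetch all items, restarting once from page one if a stale token is rejected."""
    for attempt in range(2):  # at most one restart
        items = []
        page_token = None
        try:
            while True:
                params = {"limit": limit}
                if page_token:
                    params["page_token"] = page_token
                response = requests.get(
                    endpoint,
                    headers={"X-Api-Key": API_KEY},
                    params=params,
                )
                response.raise_for_status()
                data = response.json()
                items.extend(data["items"])
                page_token = data.get("next_page_token")
                if not page_token:
                    return items
        except requests.HTTPError as exc:
            # Assumption: an expired page token is rejected with a 4xx status.
            if attempt == 0 and 400 <= exc.response.status_code < 500:
                continue  # discard partial results and start over
            raise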
Best practices
Use appropriate page sizes
- Small datasets: Use default limits or smaller for quick responses
- Bulk operations: Use the maximum limit (100 for subscriptions/exports, 1,000 for catalog endpoints) to reduce API calls
- UI pagination: Match your UI’s display capacity
# For listing in UI
response = requests.get(url, params={"limit": 20})

# For bulk processing
response = requests.get(url, params={"limit": 100})
Handle rate limits during pagination
When paginating large datasets, implement rate limit handling:
import time
import requests

def fetch_all_with_rate_limit(endpoint, limit=100):
    all_items = []
    page_token = None
    while True:
        params = {"limit": limit}
        if page_token:
            params["page_token"] = page_token
        response = requests.get(
            endpoint,
            headers={"X-Api-Key": API_KEY},
            params=params,
        )
        if response.status_code == 429:
            retry_after = int(response.headers.get("Retry-After", 30))
            print(f"Rate limited, waiting {retry_after}s...")
            time.sleep(retry_after)
            continue
        response.raise_for_status()
        data = response.json()
        all_items.extend(data["items"])
        page_token = data.get("next_page_token")
        if not page_token:
            break
    return all_items
Don’t store page tokens long-term
Page tokens are meant for immediate pagination, not long-term storage:
# ✅ Good: Use immediately
data = fetch_page_1()
data_2 = fetch_page_2(data["next_page_token"])

# ❌ Bad: Store for later
save_to_database(data["next_page_token"])  # May expire
Paginated endpoints
| Endpoint | Default limit | Max limit |
|---|---|---|
| GET /v1/subscriptions | 50 | 100 |
| GET /v1/exports | 50 | 100 |
| GET /v1/exchanges/{exchange}/instruments | 100 | 1,000 |
| GET /v1/datastreams | 100 | 1,000 |
| GET /v1/exchanges/{exchange}/datastreams | 100 | 1,000 |
| GET /v1/exchanges/{exchange}/instruments/{instrument}/datastreams | 100 | 1,000 |
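As a usage sketch, the generic fetch_all_with_rate_limit helper above works with any of these endpoints; pass the matching maximum limit for bulk reads:
# Subscriptions and exports: max limit 100
subscriptions = fetch_all_with_rate_limit(
    "https://api.ticksupply.com/v1/subscriptions", limit=100)

# Catalog endpoints: max limit 1,000
datastreams = fetch_all_with_rate_limit(
    "https://api.ticksupply.com/v1/datastreams", limit=1000)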
Next steps