## Overview

List endpoints return results wrapped in a standard data / meta envelope. Each paginated response includes:

| Field | Type | Description |
|---|---|---|
| data.items | array | The records for this page |
| data.nextCursor | string \| null | Pass this as cursor in the next request. null when there are no more pages. |
| data.hasMore | boolean | true when more pages exist beyond this one |
| meta.requestId | string | Unique request identifier for debugging |
| meta.timestamp | integer | Server timestamp (Unix ms) |
## Query parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| limit | integer | 50 | Records per page (1–200) |
| cursor | string | — | Cursor from previous response |
| since | integer | — | Unix timestamp (ms); include records at or after this time |
| until | integer | — | Unix timestamp (ms); include records before this time |
## Basic example

First page:

```bash
curl "https://carboncopy.inc/api/v1/portfolio/history?limit=50" \
  -H "Authorization: Bearer $CC_API_KEY"
```

```json
{
  "data": {
    "items": [ ... ],
    "nextCursor": "eyJpZCI6Imsx...",
    "hasMore": true
  },
  "meta": {
    "requestId": "cf7301af-0c1e-4354-85a8-9153db69ae6d",
    "timestamp": 1741600000000
  }
}
```
Next page — pass nextCursor as cursor:

```bash
curl "https://carboncopy.inc/api/v1/portfolio/history?limit=50&cursor=eyJpZCI6Imsx..." \
  -H "Authorization: Bearer $CC_API_KEY"
```

```json
{
  "data": {
    "items": [ ... ],
    "nextCursor": null,
    "hasMore": false
  },
  "meta": {
    "requestId": "a1b2c3d4-5678-90ab-cdef-1234567890ab",
    "timestamp": 1741600030000
  }
}
```
When hasMore is false (or nextCursor is null), you’ve consumed all pages.
## Iterating all pages

### Python

```python
import requests

def fetch_all(endpoint, api_key, **params):
    """Follow nextCursor from page to page, collecting every record."""
    headers = {"Authorization": f"Bearer {api_key}"}
    cursor = None
    results = []
    while True:
        query = {**params, "limit": 200}
        if cursor:
            query["cursor"] = cursor
        response = requests.get(endpoint, headers=headers, params=query)
        response.raise_for_status()
        body = response.json()
        results.extend(body["data"]["items"])
        # Stop when the server signals the final page.
        if not body["data"].get("hasMore") or not body["data"].get("nextCursor"):
            break
        cursor = body["data"]["nextCursor"]
    return results

# Example
trades = fetch_all(
    "https://carboncopy.inc/api/v1/portfolio/history",
    api_key="cc_your_key_here",
    since=1741000000000,
)
print(f"Fetched {len(trades)} trades")
```
### TypeScript

```typescript
interface PaginatedResponse<T> {
  data: {
    items: T[];
    nextCursor: string | null;
    hasMore: boolean;
  };
  meta: {
    requestId: string;
    timestamp: number;
  };
}

async function fetchAll<T>(
  endpoint: string,
  apiKey: string,
  params: Record<string, string | number> = {}
): Promise<T[]> {
  const results: T[] = [];
  let cursor: string | null = null;
  do {
    const url = new URL(endpoint);
    url.searchParams.set("limit", "200");
    for (const [k, v] of Object.entries(params)) {
      url.searchParams.set(k, String(v));
    }
    if (cursor) url.searchParams.set("cursor", cursor);
    const res = await fetch(url.toString(), {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body: PaginatedResponse<T> = await res.json();
    results.push(...body.data.items);
    cursor = body.data.nextCursor;
  } while (cursor !== null);
  return results;
}

// Example
const trades = await fetchAll(
  "https://carboncopy.inc/api/v1/portfolio/history",
  "cc_your_key_here",
  { since: 1741000000000 }
);
```
## Time-range filtering

Use since and until to fetch a specific window without iterating your entire history:

```bash
# All trades in the last 7 days
# (%3N requires GNU date; the BSD/macOS date command does not support it)
NOW=$(date +%s%3N)
SEVEN_DAYS_AGO=$((NOW - 7 * 24 * 60 * 60 * 1000))
curl "https://carboncopy.inc/api/v1/portfolio/history?since=${SEVEN_DAYS_AGO}&limit=100" \
  -H "Authorization: Bearer $CC_API_KEY"
```
For production polling, store the nextCursor from your last run and resume from there rather than re-fetching everything.
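As a sketch of that pattern, the cursor can be persisted to local state between runs. The file name and helper functions below are illustrative, not part of the API:

```python
import json
import os

CURSOR_FILE = "portfolio_cursor.json"  # hypothetical local state file

def load_cursor(path=CURSOR_FILE):
    # Return the cursor saved by the previous run, or None on the first run.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f).get("nextCursor")
    return None

def save_cursor(cursor, path=CURSOR_FILE):
    # Persist the latest nextCursor so the next poll can resume from it.
    with open(path, "w") as f:
        json.dump({"nextCursor": cursor}, f)
```

On each polling run, call load_cursor() and pass the result as cursor when it is non-null, then call save_cursor() with the last nextCursor you received.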
## Notes on cursors

- Cursors are opaque — don’t parse or modify them.
- Cursors are stable — new records arriving during pagination won’t cause you to skip or re-see existing records.
- Cursors may expire after a period of inactivity. If you receive a 400 bad_request error on a cursor, start a fresh request from the beginning.
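One way to handle an expired cursor is to restart pagination from the first page, bounded by a retry budget. This is a minimal sketch: fetch_all_with_restart and its max_restarts parameter are hypothetical names, not part of the API.

```python
import requests

def fetch_all_with_restart(endpoint, api_key, max_restarts=1, **params):
    # Paginate as usual, but if a cursor is rejected with HTTP 400
    # (e.g. it expired), restart from the first page, up to max_restarts times.
    headers = {"Authorization": f"Bearer {api_key}"}
    restarts = 0
    while True:
        results = []
        cursor = None
        try:
            while True:
                query = {**params, "limit": 200}
                if cursor:
                    query["cursor"] = cursor
                resp = requests.get(endpoint, headers=headers, params=query)
                if resp.status_code == 400 and cursor is not None:
                    # The cursor likely expired mid-iteration; restart cleanly.
                    raise RuntimeError("cursor expired")
                resp.raise_for_status()
                data = resp.json()["data"]
                results.extend(data["items"])
                cursor = data["nextCursor"]
                if not data["hasMore"] or cursor is None:
                    return results
        except RuntimeError:
            restarts += 1
            if restarts > max_restarts:
                raise
```

A 400 on the very first request (no cursor sent) is not retried, since it indicates a bad request rather than an expired cursor.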