Records API
Write customer records to the Experiture CDP. The Records API supports synchronous single-record writes for real-time updates and asynchronous batch writes for large datasets.
Base URL: https://api.experiture.ai/public/v1
Authentication: All endpoints require a bearer token in the Authorization header. See Authentication.
Overview
| Operation | Endpoint | Mode | Max Records |
|---|---|---|---|
| Append (single) | POST /records/{object_name}/append | Synchronous | 1 |
| Upsert (single) | POST /records/{object_name}/upsert | Synchronous | 1 |
| Append (batch) | POST /records/{object_name}/append-batch | Asynchronous | 10,000 |
| Upsert (batch) | POST /records/{object_name}/upsert-batch | Asynchronous | 10,000 |
| Get job status | GET /records/{object_name}/jobs/{job_id} | Synchronous | — |
Append vs. Upsert:
- Append — always inserts a new row. Use when every event/record should be preserved (e.g. transaction logs).
- Upsert — inserts or updates based on the optional matchKey field. Use when a record represents a stateful entity (e.g. customer profiles).
Single Record Operations
Append a single record
Insert one record synchronously. Returns immediately with an acceptance payload.
```http
POST /records/{object_name}/append
Authorization: Bearer <token>
Content-Type: application/json
```
Path parameters
| Name | Type | Description |
|---|---|---|
| object_name | string | The object type (e.g. contacts, orders, events). |
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| record | object | Yes | The record payload. Must contain ≥1 property. |
Example request
```bash
curl -X POST https://api.experiture.ai/public/v1/records/contacts/append \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "record": {
      "email": "john@example.com",
      "first_name": "John",
      "signup_source": "web"
    }
  }'
```
Response — 200 OK
```json
{
  "success": true,
  "data": {
    "operation": "append",
    "objectName": "contacts",
    "accepted": true,
    "acceptedAt": "2026-04-21T15:30:00Z",
    "acceptedRecords": 1
  },
  "correlationId": "<uuid>"
}
```
Upsert a single record
Insert or update one record based on a matchKey. If a record matching the key exists, it's merged; otherwise a new one is created.
```http
POST /records/{object_name}/upsert
Authorization: Bearer <token>
Content-Type: application/json
```
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| record | object | Yes | The record payload. Must contain ≥1 property. |
| matchKey | string | No | Field name to match on (e.g. "email"). Defaults to the object's primary key. |
Example request
```bash
curl -X POST https://api.experiture.ai/public/v1/records/contacts/upsert \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "record": {
      "email": "john@example.com",
      "last_seen": "2026-04-21T15:30:00Z"
    },
    "matchKey": "email"
  }'
```
Response — 200 OK
```json
{
  "success": true,
  "data": {
    "operation": "upsert",
    "objectName": "contacts",
    "accepted": true,
    "acceptedAt": "2026-04-21T15:30:00Z",
    "acceptedRecords": 1,
    "matchKey": "email"
  },
  "correlationId": "<uuid>"
}
```
Batch Operations
To write more than one record per request, use the batch endpoints. They process records asynchronously and return a jobId that you poll for status.
Append a batch
```http
POST /records/{object_name}/append-batch
Authorization: Bearer <token>
Content-Type: application/json
```
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| records | array of objects | Yes | 1 – 10,000 records per request. |
Example request
```bash
curl -X POST https://api.experiture.ai/public/v1/records/contacts/append-batch \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"email": "a@example.com", "first_name": "Alice"},
      {"email": "b@example.com", "first_name": "Bob"}
    ]
  }'
```
Response — 202 Accepted
```json
{
  "success": true,
  "data": {
    "operation": "append-batch",
    "objectName": "contacts",
    "jobId": "job_01HXYZ...",
    "state": "queued",
    "acceptedRecords": 2,
    "statusPath": "/public/v1/records/contacts/jobs/job_01HXYZ..."
  },
  "correlationId": "<uuid>"
}
```
Upsert a batch
Same as append-batch, but merges on matchKey: records that match an existing row update it, and the rest are inserted as new rows.
```http
POST /records/{object_name}/upsert-batch
Authorization: Bearer <token>
Content-Type: application/json
```
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| records | array of objects | Yes | 1 – 10,000 records per request. |
| matchKey | string | No | Field name to match on for merge logic. |
Example request
```bash
curl -X POST https://api.experiture.ai/public/v1/records/contacts/upsert-batch \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"email": "a@example.com", "tier": "gold"},
      {"email": "b@example.com", "tier": "silver"}
    ],
    "matchKey": "email"
  }'
```
Response — 202 Accepted
```json
{
  "success": true,
  "data": {
    "operation": "upsert-batch",
    "objectName": "contacts",
    "jobId": "job_01HXYZ...",
    "state": "queued",
    "acceptedRecords": 2,
    "statusPath": "/public/v1/records/contacts/jobs/job_01HXYZ...",
    "matchKey": "email"
  },
  "correlationId": "<uuid>"
}
```
matchKey is echoed back in the response so you can confirm which field the merge will use. It is null in append-batch responses.
Monitor Batch Jobs
Poll this endpoint with the jobId returned from a batch write to check processing status and metrics.
```http
GET /records/{object_name}/jobs/{job_id}
Authorization: Bearer <token>
```
Example request
```bash
curl https://api.experiture.ai/public/v1/records/contacts/jobs/job_01HXYZ... \
  -H "Authorization: Bearer <your_access_token>"
```
Response — 200 OK
```json
{
  "success": true,
  "data": {
    "jobId": "job_01HXYZ...",
    "objectName": "contacts",
    "state": "completed",
    "stage": "merge",
    "fileName": "batch_01HXYZ.jsonl",
    "fileSize": 4821,
    "createdAt": "2026-04-21T15:30:00Z",
    "startedAt": "2026-04-21T15:30:02Z",
    "completedAt": "2026-04-21T15:30:18Z",
    "metrics": {
      "readRows": 2,
      "validRows": 2,
      "invalidRows": 0,
      "mergedInserts": 1,
      "mergedUpdates": 1,
      "mergedRetained": 0
    },
    "rowsTotal": 2,
    "rowsImported": 2,
    "rowsRejected": 0,
    "successRate": 1.0,
    "updatedAt": "2026-04-21T15:30:18Z",
    "errorMessage": null,
    "errorSummary": null
  },
  "correlationId": "<uuid>"
}
```
metrics fields
| Field | Description |
|---|---|
| readRows | Total rows parsed from the request payload. |
| validRows | Rows that passed schema validation. |
| invalidRows | Rows rejected at validation. These do not reach the merge step. |
| mergedInserts | New records written (no existing match found). |
| mergedUpdates | Existing records updated (match found, fields changed). |
| mergedRetained | Existing records matched but unchanged — the incoming values were identical to what was already stored. A high mergedRetained count relative to mergedUpdates means most of your data was already current. |
rowsImported, rowsRejected, and successRate are derived summaries. errorMessage is populated on failed jobs. errorSummary is a { errorCode: count } map when validation failures occurred.
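For quick triage, these fields can be folded into a single log line. A minimal Python sketch, operating on the data object from the job-status response above (the helper name and output format are illustrative, not part of the API):
```python
def summarize_job(data: dict) -> str:
    """Render a one-glance summary of a finished job's metrics."""
    m = data.get("metrics") or {}
    parts = [
        f"job {data['jobId']} [{data['state']}]:",
        f"read={m.get('readRows', 0)}",
        f"imported={data.get('rowsImported', 0)}",
        f"(inserts={m.get('mergedInserts', 0)},"
        f" updates={m.get('mergedUpdates', 0)},"
        f" retained={m.get('mergedRetained', 0)})",
        f"rejected={data.get('rowsRejected', 0)}",
    ]
    # errorSummary is an {errorCode: count} map when validation failed.
    for code, count in (data.get("errorSummary") or {}).items():
        parts.append(f"{code}x{count}")
    return " ".join(parts)
```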
Job states
| State | Description |
|---|---|
| queued | Accepted and waiting to start. |
| running | Actively processing. |
| completed | Finished successfully. Check metrics for details. |
| failed | Processing failed. Check errorMessage. |
Recommended polling: exponential backoff starting at 2s, capped at 30s. Most batches complete within 60s.
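A minimal polling sketch in Python that follows this recommendation. It uses the third-party requests library; wait_for_job and the 300-second overall timeout are illustrative choices, not part of the API:
```python
import time
import requests

BASE_URL = "https://api.experiture.ai/public/v1"

def wait_for_job(object_name: str, job_id: str, token: str,
                 timeout: float = 300.0) -> dict:
    """Poll a batch job until it reaches a terminal state.

    Backs off exponentially from 2s, capped at 30s, per the
    recommendation above.
    """
    delay = 2.0
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{BASE_URL}/records/{object_name}/jobs/{job_id}",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        if data["state"] in ("completed", "failed"):
            return data
        time.sleep(delay)
        delay = min(delay * 2, 30.0)  # back off, capped at 30s
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```
Note that a completed state still warrants a look at rowsRejected and errorSummary, since individual rows can fail validation without failing the job.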
Errors
All error responses follow a common format:
```json
{
  "success": false,
  "error": {
    "code": "CDP_ETL.VALIDATION.REQUEST_INVALID",
    "message": "records[2].email: invalid email format",
    "details": {
      "path": "records[2].email",
      "value": "not-an-email"
    }
  }
}
```
| HTTP Status | Code | Meaning |
|---|---|---|
| 400 | CDP_ETL.VALIDATION.REQUEST_INVALID | Request body is malformed or fields fail validation. |
| 401 | CDP_ETL.AUTH.UNAUTHORIZED | Missing or invalid bearer token. |
| 403 | CDP_ETL.AUTH.FORBIDDEN | Token lacks permission for this object or operation. |
| 404 | CDP_ETL.OBJECT.NOT_FOUND | object_name does not exist in the workspace. |
| 409 | CDP_ETL.WRITE.CONFLICT | Concurrent write conflict on upsert. Retry with backoff. |
| 422 | CDP_ETL.VALIDATION.SCHEMA_MISMATCH | Record fields don't match the object's schema. |
| 429 | CDP_ETL.RATE.LIMITED | Rate limit exceeded. See Rate Limits. |
| 500 | CDP_ETL.INTERNAL_ERROR | Retryable. Use exponential backoff. |
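The 409, 429, and 500 rows above are retryable. A sketch of one reasonable retry policy in Python (the Retry-After handling assumes the API sends that header in seconds on 429 responses, which this page does not confirm; see Rate Limits):
```python
import time
import requests

RETRYABLE = {409, 429, 500}  # conflict, rate-limited, internal error

def post_with_retry(url: str, token: str, body: dict,
                    max_attempts: int = 5) -> dict:
    """POST with exponential backoff on retryable status codes."""
    delay = 2.0
    for attempt in range(max_attempts):
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {token}"},
            json=body,
            timeout=10,
        )
        if resp.status_code not in RETRYABLE:
            resp.raise_for_status()  # surface non-retryable 4xx immediately
            return resp.json()
        if attempt == max_attempts - 1:
            resp.raise_for_status()  # out of attempts: raise the last error
        # Assumption: Retry-After, if present, is a number of seconds.
        time.sleep(float(resp.headers.get("Retry-After", delay)))
        delay = min(delay * 2, 30.0)
```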
Constraints & Limits
- Single write: 1 record per request, ≥1 field
- Batch write: 1 – 10,000 records per request
- Max payload size: 10 MB per request
- Max field count: 500 fields per record
- Rate limits: see Rate Limits
- Idempotency: include an Idempotency-Key header (UUID) to safely retry a write; see the sketch after this list.
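A sketch of an idempotent write in Python: the key is generated once per logical write and reused on every retry of that write, so a duplicate delivery is applied only once. The requests library and the record values are placeholders:
```python
import uuid
import requests

idempotency_key = str(uuid.uuid4())  # one key per logical write

resp = requests.post(
    "https://api.experiture.ai/public/v1/records/contacts/append",
    headers={
        "Authorization": "Bearer <your_access_token>",
        "Idempotency-Key": idempotency_key,  # reuse this on retries
    },
    json={"record": {"email": "john@example.com", "signup_source": "web"}},
    timeout=10,
)
resp.raise_for_status()
```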
See Also
- Batch Import Guide — end-to-end batch workflow
- Authentication — tokens and scopes
- Rate Limits — quotas and backoff strategy
- OpenAPI spec — full machine-readable reference