
Records API

Write customer records to the Experiture CDP. The Records API supports synchronous single-record writes for real-time updates and asynchronous batch writes for large datasets.

Base URL: https://api.experiture.ai/public/v1

Authentication: All endpoints require a bearer token in the Authorization header. See Authentication.


Overview

| Operation | Endpoint | Mode | Max Records |
| --- | --- | --- | --- |
| Append (single) | POST /records/{object_name}/append | Synchronous | 1 |
| Upsert (single) | POST /records/{object_name}/upsert | Synchronous | 1 |
| Append (batch) | POST /records/{object_name}/append-batch | Asynchronous | 10,000 |
| Upsert (batch) | POST /records/{object_name}/upsert-batch | Asynchronous | 10,000 |
| Get job status | GET /records/{object_name}/jobs/{job_id} | Synchronous | n/a |

Append vs. Upsert:

  • Append — always inserts a new row. Use when every event/record should be preserved (e.g. transaction logs).
  • Upsert — inserts or updates based on the optional matchKey field. Use when a record represents a stateful entity (e.g. customer profiles).

Single Record Operations

Append a single record

Insert one record synchronously. Returns immediately with an acceptance payload.

POST /records/{object_name}/append
Authorization: Bearer <token>
Content-Type: application/json

Path parameters

| Name | Type | Description |
| --- | --- | --- |
| object_name | string | The object type (e.g. contacts, orders, events). |

Request body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| record | object | Yes | The record payload. Must contain ≥1 property. |

Example request

curl -X POST https://api.experiture.ai/public/v1/records/contacts/append \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "record": {
      "email": "john@example.com",
      "first_name": "John",
      "signup_source": "web"
    }
  }'

Response — 200 OK

{
  "success": true,
  "data": {
    "operation": "append",
    "objectName": "contacts",
    "accepted": true,
    "acceptedAt": "2026-04-21T15:30:00Z",
    "acceptedRecords": 1
  },
  "correlationId": "<uuid>"
}
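The request above can be sketched in Python using only the standard library. This is a minimal, hypothetical helper (the name `build_append_request` is not part of the API); it constructs the HTTP request without sending it, so you can inspect or reuse it.

```python
import json
import urllib.request

API_BASE = "https://api.experiture.ai/public/v1"

def build_append_request(token, object_name, record):
    """Build a urllib Request for a single-record append.
    The record must contain at least one property."""
    if not record:
        raise ValueError("record must contain at least one property")
    body = json.dumps({"record": record}).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/records/{object_name}/append",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid token):
# with urllib.request.urlopen(build_append_request(token, "contacts",
#         {"email": "john@example.com"})) as resp:
#     result = json.load(resp)
```

Separating request construction from transmission makes it easy to add retry or idempotency logic around the send step later.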

Upsert a single record

Insert or update one record based on a matchKey. If a record matching the key exists, it's merged; otherwise a new one is created.

POST /records/{object_name}/upsert
Authorization: Bearer <token>
Content-Type: application/json

Request body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| record | object | Yes | The record payload. Must contain ≥1 property. |
| matchKey | string | No | Field name to match on (e.g. "email"). Defaults to the object's primary key. |

Example request

curl -X POST https://api.experiture.ai/public/v1/records/contacts/upsert \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "record": {
      "email": "john@example.com",
      "last_seen": "2026-04-21T15:30:00Z"
    },
    "matchKey": "email"
  }'

Response — 200 OK

{
  "success": true,
  "data": {
    "operation": "upsert",
    "objectName": "contacts",
    "accepted": true,
    "acceptedAt": "2026-04-21T15:30:00Z",
    "acceptedRecords": 1,
    "matchKey": "email"
  },
  "correlationId": "<uuid>"
}

Batch Operations

To write more than one record per request, use the batch endpoints. They process asynchronously and return a jobId that you poll for status.
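Because each batch request accepts at most 10,000 records, larger datasets must be split client-side. A minimal sketch (the helper name `chunk_records` is illustrative, not part of the API):

```python
def chunk_records(records, batch_size=10_000):
    """Split a record list into batches that respect the
    10,000-records-per-request limit on the batch endpoints."""
    if not 1 <= batch_size <= 10_000:
        raise ValueError("batch_size must be between 1 and 10,000")
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# 25,000 records become three requests: 10,000 + 10,000 + 5,000.
```

Each resulting chunk becomes the `records` array of one append-batch or upsert-batch request.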

Append a batch

POST /records/{object_name}/append-batch
Authorization: Bearer <token>
Content-Type: application/json

Request body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| records | array of objects | Yes | 1 – 10,000 records per request. |

Example request

curl -X POST https://api.experiture.ai/public/v1/records/contacts/append-batch \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"email": "a@example.com", "first_name": "Alice"},
      {"email": "b@example.com", "first_name": "Bob"}
    ]
  }'

Response — 202 Accepted

{
  "success": true,
  "data": {
    "operation": "append-batch",
    "objectName": "contacts",
    "jobId": "job_01HXYZ...",
    "state": "queued",
    "acceptedRecords": 2,
    "statusPath": "/public/v1/records/contacts/jobs/job_01HXYZ..."
  },
  "correlationId": "<uuid>"
}

Upsert a batch

Identical to append-batch, but accepts an optional matchKey that controls the merge semantics.

POST /records/{object_name}/upsert-batch
Authorization: Bearer <token>
Content-Type: application/json

Request body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| records | array of objects | Yes | 1 – 10,000 records per request. |
| matchKey | string | No | Field name to match on for merge logic. |

Example request

curl -X POST https://api.experiture.ai/public/v1/records/contacts/upsert-batch \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"email": "a@example.com", "tier": "gold"},
      {"email": "b@example.com", "tier": "silver"}
    ],
    "matchKey": "email"
  }'

Response — 202 Accepted

{
  "success": true,
  "data": {
    "operation": "upsert-batch",
    "objectName": "contacts",
    "jobId": "job_01HXYZ...",
    "state": "queued",
    "acceptedRecords": 2,
    "statusPath": "/public/v1/records/contacts/jobs/job_01HXYZ...",
    "matchKey": "email"
  },
  "correlationId": "<uuid>"
}

matchKey is echoed back in the response so you can confirm which field the merge will use. It is null in append-batch responses.


Monitor Batch Jobs

Poll this endpoint with the jobId returned from a batch write to check processing status and metrics.

GET /records/{object_name}/jobs/{job_id}
Authorization: Bearer <token>

Example request

curl https://api.experiture.ai/public/v1/records/contacts/jobs/job_01HXYZ... \
  -H "Authorization: Bearer <your_access_token>"

Response — 200 OK

{
  "success": true,
  "data": {
    "jobId": "job_01HXYZ...",
    "objectName": "contacts",
    "state": "completed",
    "stage": "merge",
    "fileName": "batch_01HXYZ.jsonl",
    "fileSize": 4821,
    "createdAt": "2026-04-21T15:30:00Z",
    "startedAt": "2026-04-21T15:30:02Z",
    "completedAt": "2026-04-21T15:30:18Z",
    "metrics": {
      "readRows": 2,
      "validRows": 2,
      "invalidRows": 0,
      "mergedInserts": 1,
      "mergedUpdates": 1,
      "mergedRetained": 0
    },
    "rowsTotal": 2,
    "rowsImported": 2,
    "rowsRejected": 0,
    "successRate": 1.0,
    "updatedAt": "2026-04-21T15:30:18Z",
    "errorMessage": null,
    "errorSummary": null
  },
  "correlationId": "<uuid>"
}

metrics fields

| Field | Description |
| --- | --- |
| readRows | Total rows parsed from the request payload. |
| validRows | Rows that passed schema validation. |
| invalidRows | Rows rejected at validation. These do not reach the merge step. |
| mergedInserts | New records written (no existing match found). |
| mergedUpdates | Existing records updated (match found, fields changed). |
| mergedRetained | Existing records matched but unchanged; the incoming values were identical to what was already stored. A high mergedRetained count relative to mergedUpdates means most of your data was already current. |

rowsImported, rowsRejected, and successRate are derived summaries. errorMessage is populated on failed jobs. errorSummary is a { errorCode: count } map when validation failures occurred.
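One plausible derivation of those summary fields from the raw metrics, consistent with the example response above (the exact server-side formulas are an assumption; `derive_summaries` is an illustrative helper, not part of the API):

```python
def derive_summaries(metrics):
    """Recompute the derived summary fields from raw job metrics.
    Assumed relationships (matching the example response):
      rowsImported = mergedInserts + mergedUpdates + mergedRetained
      rowsRejected = invalidRows
      successRate  = rowsImported / readRows (0.0 when no rows)."""
    imported = (metrics["mergedInserts"]
                + metrics["mergedUpdates"]
                + metrics["mergedRetained"])
    total = metrics["readRows"]
    return {
        "rowsTotal": total,
        "rowsImported": imported,
        "rowsRejected": metrics["invalidRows"],
        "successRate": imported / total if total else 0.0,
    }
```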

Job states

| State | Description |
| --- | --- |
| queued | Accepted and waiting to start. |
| running | Actively processing. |
| completed | Finished successfully. Check metrics for details. |
| failed | Processing failed. Check errorMessage. |

Recommended polling: exponential backoff starting at 2s, capped at 30s. Most batches complete within 60s.
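The recommended polling schedule can be sketched as follows (a client-side helper, not part of the API; pair it with `time.sleep` in a real polling loop):

```python
def backoff_schedule(attempts, base=2.0, cap=30.0):
    """Polling delays in seconds: exponential backoff starting at
    `base`, doubling each attempt, capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

# A polling loop would sleep for each delay in turn, re-fetching
# the job until state is "completed" or "failed":
#
# import time
# for delay in backoff_schedule(10):
#     job = fetch_job_status(object_name, job_id)  # hypothetical
#     if job["state"] in ("completed", "failed"):
#         break
#     time.sleep(delay)
```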


Errors

All error responses follow a common format:

{
  "success": false,
  "error": {
    "code": "CDP_ETL.VALIDATION.REQUEST_INVALID",
    "message": "records[2].email: invalid email format",
    "details": {
      "path": "records[2].email",
      "value": "not-an-email"
    }
  }
}

| HTTP Status | Code | Meaning |
| --- | --- | --- |
| 400 | CDP_ETL.VALIDATION.REQUEST_INVALID | Request body is malformed or fields fail validation. |
| 401 | CDP_ETL.AUTH.UNAUTHORIZED | Missing or invalid bearer token. |
| 403 | CDP_ETL.AUTH.FORBIDDEN | Token lacks permission for this object or operation. |
| 404 | CDP_ETL.OBJECT.NOT_FOUND | object_name does not exist in the workspace. |
| 409 | CDP_ETL.WRITE.CONFLICT | Concurrent write conflict on upsert. Retry with backoff. |
| 422 | CDP_ETL.VALIDATION.SCHEMA_MISMATCH | Record fields don't match the object's schema. |
| 429 | CDP_ETL.RATE.LIMITED | See Rate Limits. |
| 500 | CDP_ETL.INTERNAL_ERROR | Retryable. Use exponential backoff. |
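Per the table, only some statuses warrant a retry. A minimal client-side classification (the helper name and the cap of 5 attempts are our choices, not API requirements):

```python
# Transient errors per the table above: write conflict (409),
# rate limiting (429), and internal errors (500).
RETRYABLE_STATUSES = {409, 429, 500}

def should_retry(status, attempt, max_attempts=5):
    """Retry transient errors with backoff; treat validation,
    auth, and not-found errors (400/401/403/404/422) as permanent."""
    return status in RETRYABLE_STATUSES and attempt < max_attempts
```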

Constraints & Limits

  • Single write: 1 record per request, ≥1 field
  • Batch write: 1 – 10,000 records per request
  • Max payload size: 10 MB per request
  • Max field count: 500 fields per record
  • Rate limits: see Rate Limits
  • Idempotency: include an Idempotency-Key header (UUID) to safely retry
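A sketch of headers for a safely retryable write, assuming the Idempotency-Key header behaves as described above (generate one key per logical request and reuse it on every retry of that request):

```python
import uuid

def idempotent_headers(token):
    """Headers for a retryable write. The fresh UUID identifies
    this logical request; resending with the same key lets the
    server deduplicate, so the write is not applied twice."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Idempotency-Key": str(uuid.uuid4()),
    }
```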

See Also