Importing Contacts into a List
Bring a CSV of contacts — from a trade show, event registration, survey tool, or manual research — into the CDP and add them to a list in a single workflow. This guide combines the Bulk File Import and List Management patterns into a cohesive end-to-end flow.
What this guide covers:
- Importing a CSV of contacts and creating a new list from the results
- Importing contacts into an existing list
- Handling partial failures (some rows bad, most good)
- Verifying list membership after import
The two approaches
Approach A: Import-and-create (recommended for new lists)
Pass createList: true and listName when initializing the import job. After all valid rows are merged into profiles, the CDP automatically creates a static list containing those profiles.
Note: listName creates a static list. If you want to create a dynamic audience instead, use createAudience: true and audienceName — these are separate options for different use cases.
curl -X POST https://api.experiture.ai/public/v1/import-jobs \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "objectName": "profiles",
    "fileName": "reinvent-2026-leads.csv",
    "fileSize": 98304,
    "createList": true,
    "listName": "AWS re:Invent 2026 Leads"
  }'
When the job reaches completed, all valid rows have been merged into profiles and added to the newly created list. The import status payload itself does not expose the created list's ID, so use a unique listName and look the list up explicitly after the job completes.
Approach B: Import-into-existing-list
Pass targetListId referencing a list you've already created. Successfully imported profiles are added as members.
curl -X POST https://api.experiture.ai/public/v1/import-jobs \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "objectName": "profiles",
    "fileName": "weekly-leads.csv",
    "fileSize": 45000,
    "targetListId": "lst_01HXYZ"
  }'
Use this when you're continuously feeding contacts into a standing list (e.g. a weekly lead upload into your "Active Prospects" list).
Step-by-step: import from a CSV upload
1. Prepare your CSV
Your CSV needs, at minimum, the matchKey column (email in this example). Additional columns become profile fields after mapping.
Example (reinvent-2026-leads.csv):
email,first_name,last_name,company,job_title,country
alice@acme.com,Alice,Smith,Acme Corp,VP Engineering,US
bob@globex.com,Bob,Jones,Globex,Director of IT,UK
Rules:
- First row must be a header
- UTF-8 encoding
- Email column must exist and contain valid addresses
- Timestamp columns must include timezone offset:
2026-04-21T15:30:00Z
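Pre-checking the file locally can save a round trip through a failed job. Below is a minimal sketch of such a validation pass; the email regex and timezone-suffix check are illustrative approximations, not the CDP's exact server-side rules:

```python
import csv
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
# Accepts "Z" or a numeric offset such as "+05:30" at the end of a timestamp.
TZ_SUFFIX_RE = re.compile(r"(Z|[+-]\d{2}:\d{2})$")

def precheck_rows(rows, email_col="email", ts_cols=()):
    """Return a list of (row_number, problem) tuples; empty means clean."""
    problems = []
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        email = (row.get(email_col) or "").strip()
        if not EMAIL_RE.match(email):
            problems.append((i, f"invalid email: {email!r}"))
        for col in ts_cols:
            value = (row.get(col) or "").strip()
            if value and not TZ_SUFFIX_RE.search(value):
                problems.append((i, f"naive datetime in {col}: {value!r}"))
    return problems

def precheck_csv(path, **kwargs):
    # open() with encoding="utf-8" raises on invalid UTF-8, covering that rule too
    with open(path, newline="", encoding="utf-8") as f:
        return precheck_rows(csv.DictReader(f), **kwargs)
```

Run this before initializing the job and fix any reported rows in the source file; it won't catch everything the server validates, but it catches the most common rejections cheaply.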
2. Initialize the job
curl -X POST https://api.experiture.ai/public/v1/import-jobs \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "objectName": "profiles",
    "fileName": "reinvent-2026-leads.csv",
    "fileSize": 98304,
    "createList": true,
    "listName": "AWS re:Invent 2026 Leads"
  }'
The response contains jobId, uploadUrl, and expiresAt:
{
  "success": true,
  "data": {
    "jobId": "imp_01HXYZ",
    "uploadUrl": "https://upload.experiture.ai/jobs/imp_01HXYZ/file?X-Amz-Signature=...",
    "expiresAt": "2026-04-21T16:30:00Z",
    "landingPath": "jobs/imp_01HXYZ/file",
    "method": "PUT",
    "headers": {},
    "requestedBy": "usr_01HXYZ"
  }
}
3. Upload the file
curl -X PUT "https://upload.experiture.ai/jobs/imp_01HXYZ/file" \
  -H "Content-Type: text/csv" \
  --data-binary @reinvent-2026-leads.csv
4. Map the columns
curl -X POST https://api.experiture.ai/public/v1/import-jobs/imp_01HXYZ/mapping \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "sourceFields": {
      "email": "string",
      "first_name": "string",
      "last_name": "string",
      "company": "string",
      "job_title": "string",
      "country": "string"
    },
    "fieldMap": {
      "email": "email",
      "first_name": "first_name",
      "last_name": "last_name",
      "company": "company_name",
      "job_title": "job_title",
      "country": "country_code"
    }
  }'
sourceFields maps each source column name to its data type ("string", "datetime", "integer", etc.). fieldMap maps source column names to destination field names.
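Typing these two dictionaries by hand gets error-prone for wide files. A small helper that builds the mapping request body from the CSV header is sketched below; it assumes every column is a "string" unless overridden, and that unmapped columns keep their source name (both are conveniences of this sketch, not API behavior):

```python
def build_mapping_payload(header, field_map_overrides=None, type_overrides=None):
    """Construct the JSON body for the mapping step from a CSV header row.

    header              -- list of source column names, in file order
    field_map_overrides -- source column -> destination field, where they differ
    type_overrides      -- source column -> type, for non-string columns
    """
    field_map_overrides = field_map_overrides or {}
    type_overrides = type_overrides or {}
    return {
        "sourceFields": {col: type_overrides.get(col, "string") for col in header},
        "fieldMap": {col: field_map_overrides.get(col, col) for col in header},
    }
```

For the example file, `build_mapping_payload(header, {"company": "company_name", "country": "country_code"})` reproduces the request body shown above.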
5. Start the job
curl -X POST https://api.experiture.ai/public/v1/import-jobs/imp_01HXYZ/start \
  -H "Authorization: Bearer <your_access_token>" \
  -H "Content-Type: application/json" \
  -d '{}'
6. Monitor to completion
import os, time, requests

def wait_for_import(job_id, token, timeout=600):
    url = f"https://api.experiture.ai/public/v1/import-jobs/{job_id}"
    headers = {"Authorization": f"Bearer {token}"}
    backoff = 5.0
    deadline = time.time() + timeout
    while time.time() < deadline:
        data = requests.get(url, headers=headers).json()["data"]
        if data["state"] in ("completed", "failed"):
            return data
        time.sleep(backoff)
        backoff = min(backoff * 1.5, 30)
    raise TimeoutError("Import did not complete in time")

result = wait_for_import("imp_01HXYZ", os.environ["EXPERITURE_API_KEY"])
7. Read the result
{
  "success": true,
  "data": {
    "stage": "merge",
    "state": "completed",
    "readRows": 1248,
    "validRows": 1215,
    "invalidRows": 33,
    "mergedInserts": 891,
    "mergedUpdates": 324,
    "rowsTotal": 1248,
    "rowsImported": 1215,
    "rowsRejected": 33,
    "successRate": 0.9736
  }
}
mergedInserts + mergedUpdates = 1215 profiles were written and added to the list. 33 rows failed validation and were not added. The import status payload does not expose the created list's ID, so the safest verification path is:
- Query GET /lists?status=active&page=1&pageSize=200
- Find the list by the exact listName you supplied during initialization
- Use the returned id with GET /lists/{list_id}/members to confirm membership
If you create many lists with similar names, generate a unique name up front (for example, append the upload date or source system) so the lookup remains deterministic.
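The lookup step can be factored into a small helper. This sketch assumes the GET /lists response wraps results in a `data.items` array whose entries carry `id` and `name` fields; check the actual response shape against the Lists API reference before relying on it:

```python
def find_list_id(lists_payload, list_name):
    """Return the id of the list whose name matches exactly, or None."""
    for item in lists_payload.get("data", {}).get("items", []):
        if item.get("name") == list_name:
            return item["id"]
    return None
```

In practice you would call GET /lists page by page, passing each decoded response to `find_list_id`, and stop as soon as it returns a non-None id (or the pages run out).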
Full Python example
import os, time, requests
API_KEY = os.environ["EXPERITURE_API_KEY"]
BASE = "https://api.experiture.ai/public/v1"
hdrs = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
file_path = "reinvent-2026-leads.csv"
file_size = os.path.getsize(file_path)
# 1. Initialize with auto-list creation
init = requests.post(f"{BASE}/import-jobs", headers=hdrs, json={
    "objectName": "profiles",
    "fileName": "reinvent-2026-leads.csv",
    "fileSize": file_size,
    "createList": True,
    "listName": "AWS re:Invent 2026 Leads",
}).json()["data"]
job_id = init["jobId"]
upload_url = init["uploadUrl"]
# 2. Upload file
with open(file_path, "rb") as f:
    requests.put(upload_url, data=f, headers={"Content-Type": "text/csv"}).raise_for_status()
# 3. Set mapping
requests.post(f"{BASE}/import-jobs/{job_id}/mapping", headers=hdrs, json={
    "sourceFields": {
        "email": "string",
        "first_name": "string",
        "last_name": "string",
        "company": "string",
        "job_title": "string",
        "country": "string",
    },
    "fieldMap": {
        "email": "email",
        "first_name": "first_name",
        "last_name": "last_name",
        "company": "company_name",
        "job_title": "job_title",
        "country": "country_code",
    },
}).raise_for_status()
# 4. Start
requests.post(f"{BASE}/import-jobs/{job_id}/start", headers=hdrs, json={}).raise_for_status()
# 5. Monitor (wait_for_import is the polling helper defined in step 6 above)
result = wait_for_import(job_id, API_KEY, timeout=600)
print(f"Members added: {result.get('mergedInserts',0) + result.get('mergedUpdates',0)}")
print(f"Rejected: {result.get('invalidRows',0)}")
# 6. Handle partial failures — download the error file via the errors endpoint
if result.get("invalidRows", 0) > 0:
    errors_meta = requests.get(f"{BASE}/import-jobs/{job_id}/errors", headers=hdrs).json()["data"]
    if errors_meta.get("hasErrors"):
        import urllib.request
        lines = urllib.request.urlopen(errors_meta["errorFileUrl"]).read().decode().strip().split("\n")
        print("\nFirst 5 error lines:")
        for line in lines[:5]:
            print(f"  {line}")
Handling partial failures
Import jobs are not all-or-nothing. Rows that pass validation are committed to profiles and added to the list; rows that fail validation are skipped and reported in the error file.
Common row-level errors:
| Error | Cause | Fix |
|---|---|---|
| invalid email format | Bad email in the matchKey column | Fix in source file; strip whitespace, remove typos |
| naive datetime | Timestamp missing timezone | Add Z or offset: 2026-04-21T10:00:00Z |
| CDP_ETL.VALIDATION.REQUEST_SCHEMA | Column mapped to a field that doesn't exist | Check GET /metadata/objects/profiles and update mapping |
| CDP_ETL.VALIDATION.REQUEST_INVALID (required fields) | Object has required fields not in the CSV | Add the required columns to the file, or update the mapping to point at the correct source columns |
Recovery: download the error file, fix the bad rows in a new CSV, create a new import job targeting only the fixes. If using targetListId, the new job will add the fixed rows to the same list.
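One way to automate building that retry file is to copy only the failed rows out of the original CSV. The sketch below assumes you have already parsed the error file into a set of 1-based row numbers (counting the header as row 1); the error file's exact format is not specified here, so inspect a real one before writing that parser:

```python
import csv

def write_retry_csv(source_path, retry_path, bad_row_numbers):
    """Copy the header plus only the rows that failed into a new CSV
    suitable for a follow-up import job."""
    with open(source_path, newline="", encoding="utf-8") as src, \
         open(retry_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        writer.writerow(next(reader))  # always keep the header row
        for row_number, row in enumerate(reader, start=2):
            if row_number in bad_row_numbers:
                writer.writerow(row)
```

Fix the copied rows in place, then run them through a fresh import job; with targetListId they land in the same list as the original batch.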
Adding to an existing list incrementally
For weekly or recurring uploads into a standing list, use targetListId each time:
# STANDARD_LEAD_MAPPING: your fixed source-column -> destination-field dict,
# e.g. {"email": "email", "company": "company_name", ...}
def weekly_lead_upload(csv_path: str, list_id: str, api_key: str):
    file_size = os.path.getsize(csv_path)
    base = "https://api.experiture.ai/public/v1"
    hdrs = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    init = requests.post(f"{base}/import-jobs", headers=hdrs, json={
        "objectName": "profiles",
        "fileName": os.path.basename(csv_path),
        "fileSize": file_size,
        "targetListId": list_id,
    }).json()["data"]
    job_id = init["jobId"]
    with open(csv_path, "rb") as f:
        requests.put(init["uploadUrl"], data=f, headers={"Content-Type": "text/csv"}).raise_for_status()
    requests.post(f"{base}/import-jobs/{job_id}/mapping", headers=hdrs, json={
        "sourceFields": {k: "string" for k in STANDARD_LEAD_MAPPING},
        "fieldMap": STANDARD_LEAD_MAPPING,
    }).raise_for_status()
    requests.post(f"{base}/import-jobs/{job_id}/start", headers=hdrs, json={}).raise_for_status()
    return wait_for_import(job_id, api_key)
Members added in subsequent imports are appended; existing members are not removed. If you want the list to reflect exactly the current upload (and remove contacts from last week), delete and recreate the list, or remove stale members programmatically.
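For the "exact mirror" case, stale members can be computed as a set difference between the current membership and this week's upload. The pure comparison step is sketched below; fetching members and issuing the removal calls are left to the Lists API, and the case-insensitive email comparison is an assumption of this sketch:

```python
def stale_member_emails(current_member_emails, uploaded_emails):
    """Emails currently in the list but absent from the latest upload,
    compared case-insensitively and returned in sorted order."""
    uploaded = {e.strip().lower() for e in uploaded_emails}
    return sorted(e for e in current_member_emails
                  if e.strip().lower() not in uploaded)
```

Feed it the member emails from GET /lists/{list_id}/members and the email column of this week's CSV, then remove the returned addresses from the list.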
See Also
- Bulk File Import — full import job lifecycle, format details, operational tips
- List Management — creating, updating, and deleting lists
- Import Jobs API reference
- Lists API reference