This endpoint creates a new object in the specified bucket. The object must conform to the bucket’s schema.
Processing: By default, objects are created in DRAFT status and require
batch submission for processing. Set auto_process=true to automatically
create a batch and submit it for processing (zero-touch workflow).
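The zero-touch workflow above can be sketched as a small request builder. The endpoint path follows this document; the base URL is a placeholder, and carrying `auto_process` in the request body (rather than as a query parameter) is an assumption to confirm against the full API reference.

```python
# Sketch: assembling a create-object request that opts into the zero-touch
# workflow. BASE_URL is a placeholder, not the real API host.
BASE_URL = "https://api.example.com"

def build_create_object_request(bucket_id: str, blobs: list, auto_process: bool = False):
    """Return (url, payload) for POST /buckets/{bucket_id}/objects."""
    url = f"{BASE_URL}/buckets/{bucket_id}/objects"
    # Placement of auto_process in the body is an assumption.
    payload = {"blobs": blobs, "auto_process": auto_process}
    return url, payload

url, payload = build_create_object_request(
    "bkt_123",
    [{"property": "content", "type": "PDF"}],
    auto_process=True,
)
```

With `auto_process=True` the object is queued immediately; with the default `False` it stays in DRAFT until a batch is created and submitted separately.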
If the bucket has a unique_key configured, the insertion policy determines how the new object is handled (insert, update, or upsert).
REQUIRED: Bearer token authentication using your API key. Format: 'Bearer sk_xxxxxxxxxxxxx'. You can create API keys in the Mixpeek dashboard under Organization Settings.
"Bearer YOUR_API_KEY"
REQUIRED: Namespace identifier for scoping this request. All resources (collections, buckets, taxonomies, etc.) are scoped to a namespace. You can provide either the namespace name or namespace ID. Format: ns_xxxxxxxxxxxxx (ID) or a custom name like 'my-namespace'
"ns_abc123def456"
"production"
"my-namespace"
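The two required request headers above can be assembled as follows. The Authorization format comes from this document; the namespace header name (`X-Namespace`) is an assumption, so confirm the exact header name in the API reference.

```python
# Minimal sketch of the required headers for this endpoint.
def build_headers(api_key: str, namespace: str) -> dict:
    return {
        "Authorization": f"Bearer {api_key}",
        "X-Namespace": namespace,  # assumed header name; accepts a namespace ID or name
        "Content-Type": "application/json",
    }

headers = build_headers("sk_xxxxxxxxxxxxx", "production")
```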
The unique identifier of the bucket.
Insertion policy for unique key enforcement. Valid values: 'insert', 'update', 'upsert'. Only applies if the bucket has a unique_key configured. Overrides the bucket's default_policy if provided.
"insert"
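A client-side guard for the insertion policy might look like this. The three valid values come from this document; sending the policy as a body field named `insertion_policy` is an assumption about its placement in the request.

```python
# Validate and attach the insertion policy before sending the request.
VALID_POLICIES = {"insert", "update", "upsert"}

def with_insertion_policy(payload: dict, policy=None) -> dict:
    # Omitting the policy falls back to the bucket's default_policy; it only
    # has an effect when the bucket has a unique_key configured.
    if policy is None:
        return payload
    if policy not in VALID_POLICIES:
        raise ValueError(f"invalid insertion policy: {policy!r}")
    return {**payload, "insertion_policy": policy}

payload = with_insertion_policy({"blobs": []}, "upsert")
```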
Automatically create a batch and submit it for processing. When true, the object will be immediately queued for processing without requiring separate batch creation and submission calls. Ideal for onboarding and single-object workflows.
Request model for creating a bucket object.
Objects can be created with blobs from two sources:
Upload Reference Workflow: For large files or client-side uploads, use the presigned URL workflow:
1. POST /buckets/{id}/uploads → returns {upload_id, presigned_url}
2. The client uploads the file to presigned_url (client-side)
3. POST /uploads/{upload_id}/confirm → validates the upload
4. POST /buckets/{id}/objects with upload_id in blobs (this endpoint)
Use Cases:
- Single blob with direct data (simple)
- Multiple blobs from presigned uploads (recommended for large files)
- A mix of direct data and upload references
- Combining multiple uploads into one object
See Also:
- CreateBlobRequest for blob field documentation
- POST /buckets/{id}/uploads for presigned URL generation
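The four-step presigned upload workflow can be sketched as a set of request builders. Each helper returns the method and path for its step; step 2 happens client-side against the presigned URL, and all identifiers shown are placeholders.

```python
def start_upload(bucket_id: str):
    # 1. request an upload slot -> {upload_id, presigned_url}
    return ("POST", f"/buckets/{bucket_id}/uploads")

def confirm_upload(upload_id: str):
    # 3. validate the upload after the client has sent the file to presigned_url
    return ("POST", f"/uploads/{upload_id}/confirm")

def create_object_from_upload(bucket_id: str, upload_id: str):
    # 4. reference the confirmed upload in the blobs list (this endpoint)
    payload = {"blobs": [{"upload_id": upload_id, "property": "content"}]}
    return ("POST", f"/buckets/{bucket_id}/objects", payload)

method, path, payload = create_object_from_upload("bkt_123", "up_456")
```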
Storage key/path prefix of the object; it is used to retrieve the object from storage and sits at the root of the object.
"/contract-2024"
List of blobs to be created in this object
[
{
"data": {
"num_pages": 5,
"title": "Service Agreement 2024"
},
"key_prefix": "/content.pdf",
"metadata": {
"author": "John Doe",
"department": "Legal"
},
"property": "content",
"type": "PDF"
}
]
Skip duplicate blobs: if a blob with the same hash already exists, it will be skipped.
Mirror non-S3 sources into internal S3 and reference canonically.
Force re-upload to S3 even if a blob with identical content already exists.
Successful Response
Response model for bucket objects.
ID of the bucket this object belongs to
Unique identifier for the object
Storage key/path of the object; this will be used to retrieve the object from storage. It is similar to a file path. If not provided, the object is placed at the root of the bucket.
List of blobs contained in this object
Lineage/source details for this object; used for downstream references.
The current status of the object.
PENDING, QUEUED, IN_PROGRESS, PROCESSING, COMPLETED, COMPLETED_WITH_ERRORS, FAILED, CANCELED, UNKNOWN, SKIPPED, DRAFT, ACTIVE, ARCHIVED, SUSPENDED
The error message if the object failed to process.
"Failed to process object: Object not found"
Timestamp when the object was created. Automatically populated by the system.
Timestamp when the object was last updated. Automatically populated by the system.
Number of documents produced from this object across all collections. Populated on GET requests. Null on list responses (expensive query). Use this to check if an object has already been processed.
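The check described above can be sketched as a small helper. The null-on-list semantics come from this document; the response field name `document_count` is an assumption to verify against the actual response schema.

```python
# Decide whether an object has already been processed from its document count.
def already_processed(obj: dict):
    count = obj.get("document_count")  # assumed field name
    if count is None:
        return None  # unknown: list responses omit the count; GET the object instead
    return count > 0

state = already_processed({"object_id": "obj_1", "document_count": 3})
```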