Batch Operations

Batch writes submit multiple events to the same aggregate atomically. All events succeed or none are written. A single OCC check protects the entire batch.

When to batch

Good for:

  • Initial entity creation with multiple setup events
  • Complex state transitions that are logically atomic
  • Bulk imports and data migrations

Not for:

  • Events to different aggregates (use implications instead)
  • Single events (no benefit, adds complexity)
  • Operations needing immediate per-event feedback

Batch write

POST /{aggregate_type}/{aggregate_id}
{
  "events": [
    { "type": "was_created", "data": { "name": "Alice" } },
    { "type": "had_email_updated", "data": { "email": "alice@example.com" } },
    { "type": "had_role_assigned", "data": { "role": "admin" } }
  ],
  "metadata": {
    "actor": { "type": "admin", "id": "550e8400-e29b-41d4-a716-446655440000" },
    "previous_length": 0
  }
}

Each event needs:

  • type: Event type from spec
  • data: Event payload (optional, depends on event schema)

Metadata is shared across all events in the batch (actor, target, previous_length).
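A small client-side helper can catch the request-shape errors the API rejects before sending anything over the wire. This is an illustrative sketch (the function name `buildBatchRequest` is hypothetical, not part of the API):

```javascript
// Hypothetical helper that assembles a batch write body and catches the
// two request-shape errors the API rejects: a missing/empty events array
// and an event without a type.
function buildBatchRequest(events, metadata) {
  if (!Array.isArray(events) || events.length === 0) {
    throw new Error("Events array cannot be empty");
  }
  for (const [i, event] of events.entries()) {
    if (!event.type) throw new Error(`Event at index ${i} is missing 'type'`);
  }
  // Metadata (actor, target, previous_length) is shared across the batch.
  return { events, metadata };
}
```

Pass the returned object to `JSON.stringify` as the POST body.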

Response format

{
  "ok": true,
  "stream_ids": ["1706789012345-0", "1706789012345-1", "1706789012345-2"],
  "count": 3,
  "implied_count": 5
}
Field          Description
stream_ids     Redis stream IDs for each written event
count          Number of events written
implied_count  Total implied events across all triggers (omitted if 0)
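Because implied_count is omitted when no implications fired, client code should default it. A minimal sketch of consuming the response body (the function name `summarizeBatchResponse` is illustrative):

```javascript
// Illustrative handling of a batch write response. implied_count is
// omitted when no implications fired, so default it to 0.
function summarizeBatchResponse(body) {
  if (!body.ok) throw new Error(body.error);
  const { stream_ids, count, implied_count = 0 } = body;
  return `wrote ${count} events (${implied_count} implied), last id ${stream_ids[count - 1]}`;
}
```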

OCC in batches

The previous_length in metadata applies to the batch as a whole:

  • If you specify previous_length: 0, the stream must be empty before writing
  • All events are written atomically after the OCC check passes
  • If OCC fails, none of the events are written

Getting stream length for OCC

Use the length endpoint for an O(1) check:

curl https://myapp.j17.dev/order/ord_123/length \
  -H "Authorization: Bearer $API_KEY"

# Response: {"ok": true, "length": 5}

Then pass it as previous_length in your batch write.

Implications in batches

Each event in a batch can trigger implications. All implications see the pre-batch state, not intermediate states:

Submit: [A, B, C] to order:123

S0 (pre-batch state)
+-- Implications for A see S0
+-- Implications for B see S0
+-- Implications for C see S0

After commit: state = S0 + A + B + C

Idempotency

Batch writes support the X-Idempotency-Key header just like single writes:

curl -X POST https://myapp.j17.dev/user/user_123 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Idempotency-Key: user-setup-user_123-20240115" \
  -d '{
    "events": [
      { "type": "was_created", "data": { "name": "Alice" } },
      { "type": "had_role_assigned", "data": { "role": "admin" } }
    ],
    "metadata": {
      "actor": { "type": "system", "id": "550e8400-e29b-41d4-a716-446655440001" }
    }
  }'

If you retry with the same key and same body, the cached response is returned with X-Idempotency-Replayed: true. Same key with different body returns 422.
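Any stable, deterministic string works as a key. One possible convention, matching the `user-setup-user_123-20240115` example above, derives the key from the operation, the aggregate id, and the date, so a retried job reuses the same key instead of double-writing (the helper name is hypothetical):

```javascript
// One possible convention for deterministic idempotency keys:
// operation name + aggregate id + date, so a retried job sends the
// same key (and gets the cached response) instead of double-writing.
function idempotencyKey(operation, aggregateId, date = new Date()) {
  const day = date.toISOString().slice(0, 10).replace(/-/g, '');
  return `${operation}-${aggregateId}-${day}`;
}
```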

Error handling

400 Bad Request

Validation error on one or more events. No events written.

{
  "ok": false,
  "error": "Event data failed schema validation",
  "event_index": 1,
  "path": "data.email"
}

The event_index field tells you which event in the array failed (0-based).

409 Conflict

OCC check failed. No events written. Refetch the length and retry.
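The refetch-and-retry flow can be sketched as a small loop. Here `getLength` and `writeBatch` are stand-ins for the /length GET and the batch POST (injected so the sketch stays self-contained, not part of any SDK):

```javascript
// Sketch of a 409 retry loop. getLength and writeBatch are stand-ins
// for the /length GET and batch POST; on a conflict we refetch the
// stream length and try again with the fresh previous_length.
async function writeWithOccRetry(getLength, writeBatch, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const previousLength = await getLength();
    const res = await writeBatch(previousLength);
    if (res.status !== 409) return res;
  }
  throw new Error(`OCC conflict persisted after ${maxAttempts} attempts`);
}
```

Bounding the attempts matters: if the stream is under heavy concurrent write load, a retry loop without a cap can spin indefinitely.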

Empty or missing events

{
  "ok": false,
  "error": "Missing 'events' array in request body"
}

Or:

{
  "ok": false,
  "error": "Events array cannot be empty"
}

Bulk import example

Migrating from an existing database:

for (const user of users) {
  const events = [
    { type: 'was_created', data: { name: user.name, email: user.email } },
    { type: 'had_profile_updated', data: { bio: user.bio, avatar: user.avatar } }
  ];

  const res = await fetch(`https://myapp.j17.dev/user/${user.uuid}`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
      'X-Idempotency-Key': `migration-user-${user.uuid}`,
    },
    body: JSON.stringify({
      events,
      metadata: {
        actor: { type: 'system', id: '550e8400-e29b-41d4-a716-446655440002' }
      }
    })
  });

  // Stop on the first failure so the idempotency keys let you resume cleanly.
  if (!res.ok) throw new Error(`Migration failed for ${user.uuid}: ${res.status}`);
}

Use idempotency keys so you can resume if interrupted.
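To speed up a large migration, the per-user writes can run a few at a time. Each batch write targets a single aggregate, so concurrency across different users is safe; a chunking helper like the hypothetical one below just bounds the number of in-flight requests:

```javascript
// Sketch of bounded-concurrency processing: run `worker` over `items`
// in chunks of `chunkSize`, waiting for each chunk before starting the
// next. Each batch write targets one aggregate, so parallelism across
// different users does not create OCC conflicts between them.
async function inChunks(items, chunkSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    results.push(...await Promise.all(chunk.map(worker)));
  }
  return results;
}
```

For example, `await inChunks(users, 50, migrateUser)`, where `migrateUser` performs the fetch shown above for one user.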

When not to batch

Events to different aggregates: Batches only target a single aggregate. For cross-aggregate writes, use implications -- define them in your spec so one trigger event atomically creates derived events in other aggregates.

Events that are truly independent: If event A failing shouldn't block event B, use separate POSTs.

Interactive user actions: Individual POSTs give clearer error feedback than batch failures.

Can't find what you need? support@j17.app