Caching

j17 can cache aggregate state in the background, giving you fast reads and helping you stay within your tier's usage limits.

The trade-off

Without caching, every GET /{type}/{id} replays all events to compute the current state. This is always accurate, but every read consumes compute, which counts against your tier's usage.

With caching enabled:

  • Faster responses: Reads return instantly from pre-computed state
  • Lower resource usage: Helps you stay within tier limits
  • Eventually consistent: There's a brief window (typically under 1 second, worst case ~10 seconds) where reads may be slightly stale after a write

You can always bypass the cache with ?synchronous=true when you need guaranteed fresh data.

Enabling caching

Caching is configured in your spec's modules section. List the aggregate types you want cached:

{
  "aggregate_types": {
    "user": { "events": { "..." : {} } },
    "order": { "events": { "..." : {} } },
    "product": { "events": { "..." : {} } },
    "audit_log": { "events": { "..." : {} } }
  },
  "modules": {
    "cache": ["user", "order", "product"]
  }
}

Only types listed in modules.cache are cached. Types not listed (like audit_log above) are always computed fresh.

Reading cached data

Once enabled, GET /{type}/{id} automatically returns cached state:

curl https://myapp.j17.dev/user/abc123 \
  -H "Authorization: Bearer $API_KEY"
# Returns cached aggregate - fast

When you need fresh data

Use ?synchronous=true to bypass the cache and compute from events:

curl https://myapp.j17.dev/user/abc123?synchronous=true \
  -H "Authorization: Bearer $API_KEY"
# Computes from events, guaranteed fresh
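In application code, a small wrapper can make the cached-vs-fresh trade-off explicit at each call site. A sketch, assuming a standard fetch environment (the helper names and the `fresh` option are illustrative, not part of any j17 client library):

```javascript
// Illustrative helper: build the read URL, optionally bypassing the cache.
// URL construction is separated from the request so it is easy to test.
function buildAggregateUrl(baseUrl, type, id, { fresh = false } = {}) {
  const url = new URL(`${baseUrl}/${type}/${id}`);
  if (fresh) url.searchParams.set("synchronous", "true");
  return url.toString();
}

async function getAggregate(baseUrl, apiKey, type, id, opts) {
  const res = await fetch(buildAggregateUrl(baseUrl, type, id, opts), {
    headers: { "Authorization": `Bearer ${apiKey}` }
  });
  if (!res.ok) throw new Error(`GET failed with status ${res.status}`);
  return res.json();
}
```

Callers then default to the cached read and opt into freshness only where staleness matters, e.g. `getAggregate(base, key, "user", "abc123", { fresh: true })`.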

Cache invalidation

Caches invalidate automatically when:

  • New events are written to the aggregate
  • The spec is redeployed (the cache clears automatically)

You do not manage invalidation. The system handles it.

When to cache

Good candidates:

  • User profiles and settings
  • Product catalogs
  • Dashboards and lists
  • Anything read more often than written

Skip caching for:

  • Audit logs (append-only, rarely read as aggregates)
  • Aggregates where even brief staleness is unacceptable (financial balances, inventory counts during checkout)

Staleness expectations

  • Typical: Under 1 second after a write
  • Worst case: ~10 seconds for very large deployments
  • After spec deploy: Cache automatically invalidates
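If you need read-your-own-writes on top of these staleness bounds, one hedged approach is to force a synchronous read for a short window after each write. A sketch under that assumption (the helper names and the in-memory tracking are illustrative; the window uses the worst-case bound above):

```javascript
// Illustrative read-your-own-writes tracker: after writing to an aggregate,
// reads of that aggregate should bypass the cache until the window passes.
const STALENESS_WINDOW_MS = 10_000;  // worst-case staleness bound
const lastWriteAt = new Map();       // "type/id" -> timestamp of last write

function noteWrite(type, id, now = Date.now()) {
  lastWriteAt.set(`${type}/${id}`, now);
}

function shouldReadFresh(type, id, now = Date.now()) {
  const t = lastWriteAt.get(`${type}/${id}`);
  return t !== undefined && now - t < STALENESS_WINDOW_MS;
}
```

When `shouldReadFresh` returns true, append ?synchronous=true to the read; otherwise take the cached path.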

Optimistic UI pattern

Often you don't need to read after a write at all:

  1. User clicks "Update Profile"
  2. You POST the event to j17
  3. On success, immediately update the UI optimistically
  4. Don't bother reading the aggregate back

async function updateProfile(userId, changes) {
  const res = await fetch(`https://myapp.j17.dev/user/${userId}/had_profile_updated`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      data: changes,
      metadata: { actor: { type: "user", id: userId } }
    })
  });
  // fetch resolves even on HTTP error statuses, so check explicitly
  if (!res.ok) throw new Error(`Write failed: ${res.status}`);

  // Update UI immediately - don't wait for read
  updateLocalState(changes);
  showToast("Profile updated!");
}

Why this works: Event writes are atomic. If the POST succeeds, the event is persisted. The cache will catch up. Your user sees instant feedback.

Handling the rare failure: If the write fails (validation error, network issue), show an error. For conflicts, refetch and let the user retry. Most UIs can show optimistic updates and handle the rare failure gracefully.
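The rollback part of that pattern can be made concrete: snapshot the local state, apply the optimistic update, and restore the snapshot if the write fails. A sketch (the state shape, `postEvent`, and `render` are illustrative stand-ins for your app's write call and UI layer):

```javascript
// Illustrative optimistic update with rollback on failure.
// `postEvent` is any function that resolves on success and rejects on failure.
async function optimisticUpdate(localState, changes, postEvent, render) {
  const previous = { ...localState };        // snapshot for rollback
  const next = { ...localState, ...changes };
  render(next);                              // instant feedback
  try {
    await postEvent(changes);                // persist the event
    return next;
  } catch (err) {
    render(previous);                        // roll back the UI
    throw err;                               // let the caller show an error
  }
}
```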

Optimistic updates with OCC

For cases where conflicts matter (like adding items to a shared order):

async function addItem(orderId, userId, item) {
  const current = await fetch(
    `https://myapp.j17.dev/order/${orderId}?synchronous=true`,
    { headers: { "Authorization": `Bearer ${apiKey}` } }
  ).then(r => r.json());

  // Optimistically update UI
  renderOrder({
    ...current.state,
    items: [...current.state.items, item]
  });

  // Write event with OCC
  const res = await fetch(`https://myapp.j17.dev/order/${orderId}/had_item_added`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      data: item,
      metadata: {
        actor: { type: "user", id: userId },
        previous_length: current.state.items.length
      }
    })
  });

  // fetch only rejects on network failures, not on HTTP error statuses,
  // so detect the conflict via the response status
  if (!res.ok) {
    // Conflict - someone else wrote. Refetch and re-render fresh state.
    const fresh = await fetch(
      `https://myapp.j17.dev/order/${orderId}?synchronous=true`,
      { headers: { "Authorization": `Bearer ${apiKey}` } }
    ).then(r => r.json());
    renderOrder(fresh.state);
  }
}

Note the use of ?synchronous=true to bypass cache when fresh data is needed for conflict detection.
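If conflicts can repeat under contention, the refetch-and-retry step generalizes to a bounded loop. A sketch (the `readFresh` and `writeWithOcc` parameters stand in for the fetch calls above; the attempt limit is illustrative):

```javascript
// Illustrative bounded OCC retry: refetch fresh state, attempt the write,
// and retry on conflict up to maxAttempts times before giving up.
async function writeWithRetry(readFresh, writeWithOcc, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const current = await readFresh();         // ?synchronous=true read
    try {
      return await writeWithOcc(current);      // throws on conflict
    } catch (err) {
      if (attempt === maxAttempts) throw err;  // surface the final failure
    }
  }
}
```

Bounding the attempts keeps a hot aggregate from spinning forever; after the limit, surface the conflict to the user instead.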

Performance comparison

                   Without Cache             With Cache
  Read speed       Slower (replays events)   Fast (pre-computed)
  Resource usage   Higher                    Lower
  Consistency      Always fresh              Eventually consistent
  Best for         Critical reads            Most reads

Best practices

Start without caching. Add it when you see performance issues or want to reduce usage.

Cache the right types. High-read, low-write aggregates benefit most. Don't cache types that change every second.

Use ?synchronous=true sparingly. It bypasses the cache entirely -- use it for critical reads where staleness matters, not as a default.

Monitor your usage. Caching reduces compute usage, which helps you stay within tier limits.

Don't cache user-specific data in shared CDN caches. If you put j17 behind a CDN, use Vary: Authorization to avoid leaking data between users.
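In an Express-style Node proxy in front of j17, for example, that header can be set on every response before it reaches the CDN. A minimal sketch, assuming a standard `(req, res, next)` middleware shape (the function name is illustrative):

```javascript
// Illustrative middleware: ensure shared caches key responses by the
// Authorization header so one user's cached data is never served to another.
function varyByAuthorization(req, res, next) {
  res.setHeader("Vary", "Authorization");
  next();
}
```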

See also

Can't find what you need? support@j17.app