
Performance

Dock's speed promise, measured. The dashboard is a client-interactive SPA; the website is server-rendered. This page documents both halves, the budgets we hold ourselves to, and the CI + smoke tests that fail the build when we break them.

The promise

Two lines, one for each audience:

  • For your team: every interaction inside the app is a sub-200ms crossfade, no matter how much state the workspace holds.
  • For your agents: MCP tool calls hit the same write path as dashboard clicks, with the same latency ceiling and the same events fanning out to every connected client.

The rest of this page is how we back that up.

Two render strategies, honestly split

Dock isn’t a single-page app. It’s a server-rendered website with a dashboard surface that behaves like an SPA because that’s what the interaction model calls for.

Website surfaces (SSR)

Every public route serves full HTML on the first request. Crawlers (Googlebot, GPTBot, Claude, Perplexity, Bing) index the full content; OG images, titles, and structured data are all in the first byte.

  • / home: SSR with hydration for the hero sheet preview
  • /pricing: SSR, plan cards render at request time from lib/plan.ts
  • /docs/*: server components, pure HTML, zero client JS for the text
  • /changelog: SSR from lib/changelog.tsx
  • /login, /invite/*, /oauth/*: SSR, thin

Note on client components in Next 16: a component marked "use client" is still server-rendered for the initial HTML. Crawlers see the full page. Only post-mount useEffect fetches go missing from the server-side HTML — so we keep all SEO-relevant data (meta tags, hero copy, pricing, docs body) server-side.

Dashboard surfaces (SPA-shaped)

  • /workspaces
  • /[org]/[workspace]
  • /settings/*

Auth-gated so crawlers see a login shell (correct behavior). Inside the auth boundary, the dashboard is a persistent React tree: sidebar stays mounted across navigations, workspace state hydrates once and updates via SSE + a 1.5s forward-cursor poll. Every click is a local state change followed by an optimistic mutation, not a full round-trip.

Budgets

Hard numbers the CI fails on. Measured via Playwright + PerformanceObserver against staging and prod.

Mode swap (Table ⇄ Doc)
180ms crossfade via the View Transitions API. Both panels stay mounted so TipTap never cold-inits more than once per session. Fallback on unsupported browsers: instant swap without crossfade, same correctness.
Budget
transition duration   180ms ±20ms
long-task count       0 per swap
CLS delta             < 0.02
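The crossfade-with-fallback pattern above can be sketched as a small helper. This is illustrative, not Dock's actual code: the `swapMode` name is hypothetical, and the document object is injected so the unsupported-browser path is easy to exercise.

```typescript
// Hypothetical helper sketching crossfade-with-fallback via the
// View Transitions API. `doc` is injected for testability.
type ViewTransitionDoc = {
  startViewTransition?: (update: () => void) => unknown;
};

export function swapMode(
  applySwap: () => void,
  doc: ViewTransitionDoc,
): "crossfade" | "instant" {
  if (typeof doc.startViewTransition === "function") {
    // Supported: the browser snapshots the old state, runs the
    // update, and animates between the two snapshots.
    doc.startViewTransition(applySwap);
    return "crossfade";
  }
  // Unsupported browsers: identical state change, no animation.
  applySwap();
  return "instant";
}
```

Either path runs the same `applySwap`, which is what "instant swap without crossfade, same correctness" means in practice.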
Workspace first interaction
Cold cache to first click-ready paint. Bound by the client-side fetch waterfall (/me → /workspaces → /workspace → /rows or /doc). Warm cache cuts this by ~1s.
Budget
FCP  cold  < 1800ms   warm < 700ms
LCP  cold  < 2500ms   warm < 1200ms
CLS  any   < 0.1
Long tasks on mode swap = 0
Sidebar → workspace nav
180ms crossfade via the View Transitions API wrapping router.push. Modifier keys (cmd, ctrl, shift) bypass the transition so new-tab behavior is unchanged.
Budget
perceived latency    < 200ms
network fetches      0 (data prefetched via Next Link)
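The modifier-key bypass reduces to a predicate like the following (a sketch with an illustrative name, not Dock's actual code): any modifier or non-left click falls through to the browser's default navigation.

```typescript
// Illustrative predicate for the modifier-key bypass: when it returns
// true, the click handler returns early without preventDefault so
// new-tab / new-window behavior is untouched.
export interface ClickLike {
  metaKey: boolean;
  ctrlKey: boolean;
  shiftKey: boolean;
  altKey: boolean;
  button: number; // 0 = plain left click
}

export function shouldBypassTransition(e: ClickLike): boolean {
  return e.metaKey || e.ctrlKey || e.shiftKey || e.altKey || e.button !== 0;
}
```

When it returns false, the handler calls preventDefault and wraps the router navigation in the view transition.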
Real-time event fan-out
SSE via Cloudflare Durable Objects (one DO per workspace). Forward-cursor poll on a 1.5s interval catches cross-instance gaps + pauses on hidden tabs.
Budget
same-instance fan-out      < 100ms
cross-instance catch-up    < 1500ms
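The forward-cursor catch-up can be sketched as a pure function (names hypothetical): the client tracks the highest event cursor it has applied, and each poll asks only for events past that cursor, so SSE and poll never double-apply and a cross-instance gap is bounded by one poll interval.

```typescript
// Sketch of forward-cursor catch-up. `lastApplied` is the highest
// cursor the client has already applied (from SSE or a prior poll);
// `polled` is whatever the poll returned.
export interface WsEvent {
  cursor: number;
  payload: string;
}

export function catchUp(
  lastApplied: number,
  polled: WsEvent[],
): { apply: WsEvent[]; nextCursor: number } {
  const apply = polled
    .filter((e) => e.cursor > lastApplied) // drop events SSE already delivered
    .sort((a, b) => a.cursor - b.cursor); // apply strictly in cursor order
  const nextCursor = apply.length ? apply[apply.length - 1].cursor : lastApplied;
  return { apply, nextCursor };
}
```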
Bundle on workspace page
TipTap + 16 extensions + lowlight language packs are lazy-loaded via next/dynamic. Users in table-mode workspaces never download them. Preloads on Doc-tile hover so the click is instant.
Budget
table-only user's workspace bundle:
  no TipTap, no lowlight, no prosemirror
first Doc-tile click after preload:
  perceived latency < 200ms
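The hover-preload trick relies on the dynamic import being memoized: next/dynamic does this internally, and the shape of the mechanism is a one-line helper (illustrative, not Dock's code). The loader runs at most once, whether the first trigger is the hover preload or the click itself.

```typescript
// Generic preload-on-hover helper: wraps an async loader so repeated
// calls share one in-flight (or settled) promise.
export function createPreloadable<T>(load: () => Promise<T>): () => Promise<T> {
  let pending: Promise<T> | undefined;
  return () => (pending ??= load()); // first call kicks off the load; later calls reuse it
}
```

Usage sketch: `const loadDocView = createPreloadable(() => import("./DocView"))`, wired to both `onMouseEnter` (preload) and the click handler (await), so the click after a hover resolves from the already-fetched chunk.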

How we enforce it

Budgets that aren’t measured on every PR rot. Dock’s perf budgets live in CI + smoke:

Per-PR (before staging)

  • Unit tests (Vitest) — render-count regressions. Assert that memoized components don't re-render when props haven't changed. Runs in the TypeScript + lint job (required check).
  • Playwright (nfr) — the FCP / LCP / CLS ceilings above, asserted against a seeded 500-row test workspace.
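The pass/fail logic those Playwright checks apply is a pure comparison against the ceilings published above. A reduced sketch (the function and struct names are illustrative; the budget values are copied from the table in this page):

```typescript
// Pure budget check mirroring what a perf suite asserts after
// collecting paint metrics via PerformanceObserver.
export interface PaintMetrics {
  fcp: number; // ms
  lcp: number; // ms
  cls: number; // unitless
  longTasks: number; // count during the measured interaction
}

// Cold-cache ceilings from the "Workspace first interaction" budget.
export const COLD_BUDGET: PaintMetrics = { fcp: 1800, lcp: 2500, cls: 0.1, longTasks: 0 };

export function overBudget(m: PaintMetrics, budget: PaintMetrics): string[] {
  const failures: string[] = [];
  if (m.fcp >= budget.fcp) failures.push(`FCP ${m.fcp}ms >= ${budget.fcp}ms`);
  if (m.lcp >= budget.lcp) failures.push(`LCP ${m.lcp}ms >= ${budget.lcp}ms`);
  if (m.cls >= budget.cls) failures.push(`CLS ${m.cls} >= ${budget.cls}`);
  if (m.longTasks > budget.longTasks) failures.push(`${m.longTasks} long task(s)`);
  return failures; // empty array = within budget
}
```

A non-empty return fails the test, which fails the required check, which blocks the merge.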

Staging smoke (after every staging deploy)

  • Tier 1.5 TTI — opens staging.trydock.ai, signs in with the sentinel session, navigates to a pinned workspace, measures FCP + LCP. Fails the staging→main promote if either is past budget.

Prod smoke (after every prod deploy)

  • Workspace TTI budget — same check, same budgets, runs against trydock.ai. Triggers Prod Rollback if breached.

In short: if a PR slows the dashboard past the numbers published above, it doesn’t ship.

What shipped (2026-04-23)

First full pass of dashboard perf work. See the changelog for release-level detail.

  • Tier A — both panels stay mounted across mode swap, mode seeded from ?m= URL hint, no cascade delay on workspace detail, View Transitions crossfade on swap.
  • Tier B — DocView lazy-loaded, row fetch skipped on doc-mode entries, TipTap chunk preloaded on hover.
  • Tier C — sidebar nav crossfade, dev-only PerfOverlay for localhost tuning.

What else shipped (2026-04-23 phase 3)

  • C3 — full memoization. All 11 workspace-page handlers now wrap in useCallback with ref-backed state reads. SSE events that don’t touch the visible data skip re-rendering entirely. Regression tests at tests/unit/memo/workspace-memoization.spec.tsx.
  • C1 — virtualize the row list. @tanstack/react-virtual integrated with a 100-row threshold. Below 100: render behavior is identical to before. At or above 100: only the visible rows plus a 10-row overscan stay mounted. Scroll stays 60fps regardless of workspace size. Full contract matrix at tests/e2e/table-virtualization.spec.ts.
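The arithmetic at the core of row virtualization is worth seeing in isolation. This sketch uses a fixed row height for clarity (@tanstack/react-virtual measures rows dynamically); only rows intersecting the viewport, plus the overscan on each side, get mounted.

```typescript
// Which rows should be mounted for a given scroll position?
// Fixed row height; the real virtualizer substitutes measured sizes.
export function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number,
  overscan = 10,
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1;
  return {
    start: Math.max(0, first - overscan), // clamp at the top of the list
    end: Math.min(rowCount - 1, last + overscan), // clamp at the bottom
  };
}
```

With a 600px viewport and 30px rows, a 1000-row table mounts ~40 rows instead of 1000, which is why DOM node count stays flat as workspaces grow.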

What else shipped (2026-04-26 phase 4)

  • C2 — RSC + streaming. The last big cold-load lever. Workspace detail is now an async server component: authentication + access check + initial rows + members all resolve in parallel via Prisma before the first byte of HTML is sent. The client component hydrates into an already-correct DOM — no useEffect fetch waterfall, no blank-sheet flash, no canonical-slug redirect via router.replace after the fact. Contract tests at tests/e2e/rsc-workspace-shell.spec.ts lock seven observable properties (name in first-byte HTML, rows inline, no client-side /api/workspaces fetch on first load, 307 canonical redirect, clean hydration, correct doc-mode seeding).
  • Workspace-detail paint budgets. Warm-cache ceilings enforced on every staging deploy via tests/staging-smoke/tier1-5-paint-budget.spec.ts: TTFB < 500ms, FCP < 900ms, LCP < 1500ms. Red tier 1.5 blocks the staging → main promote, so a perf regression never reaches prod silently.
  • No intermediate skeleton on route transitions. Navigating from one workspace to another keeps the previous route’s content on screen until the new server component streams in. Since the server response lands in ~100-200ms, the swap reads as instant and the SPA feel is preserved. (An earlier pass shipped a branded loading.tsx skeleton here, but the flash was worse than the alternative for fast server responses. Removed 2026-04-27.)
  • Doc-mode row skip. Doc-first workspaces don’t need the 500-row initial slice for first paint — the server now skips loadInitialRows entirely for them. The existing client-side lazy fetch still covers a tab-switch to the table.
  • React cache() on auth + loader. generateMetadata, the page component itself, and the per-workspace OG image all share a single session lookup + single workspace lookup per request. Three server-side callers, one Prisma round-trip.
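The "three callers, one round-trip" dedup in the last bullet is what React's cache() provides automatically, scoped to a single server request. A simplified sketch of the mechanism (this version memoizes globally for illustration, and `loadWorkspace` is a hypothetical loader, not Dock's actual code):

```typescript
// Simplified per-argument promise memoization: the first caller pays
// the database round-trip, later callers share the same promise.
// React's cache() does this per server request rather than globally.
export function requestCache<A, R>(
  fn: (arg: A) => Promise<R>,
): (arg: A) => Promise<R> {
  const memo = new Map<A, Promise<R>>();
  return (arg) => {
    if (!memo.has(arg)) memo.set(arg, fn(arg));
    return memo.get(arg)!;
  };
}
```

Usage sketch: wrap a hypothetical `loadWorkspace(slug)` once, then let generateMetadata, the page component, and the OG image route each await it — one Prisma query serves all three.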

What else shipped (2026-04-27 RUM)

  • Real-user Web Vitals capture. src/lib/web-vitals.ts observes TTFB / FCP / LCP / CLS / INP on every live page session and reports them to /api/vitals on tab hide or pagehide. Anonymous by design: no user id, no PII, IP truncated to /24 before logging. Paired with the staging-smoke tier 1.5 synthetic budget, this closes the distribution-tail gap — a regression that only shows up on cold Vercel lambdas or low-end devices now surfaces as a p95 drift in the log stream, not a silent slowdown.
  • The three-layer budget gate. Budgets are now enforced at three depths: tests/nfr/ (localhost webServer cold ceiling), tests/staging-smoke/tier1-5-paint-budget (canary-workspace warm ceiling on staging.trydock.ai), and /api/vitals (real-user p95 distribution on prod). A synthetic-only claim misses distribution-tail regressions; this closes that gap without shipping a third-party RUM blob.
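The anonymization step mentioned above (IP truncated to /24) is simple enough to show standalone. A sketch with an illustrative name, IPv4 only for brevity:

```typescript
// Drop the host octet before logging, so a stored address identifies
// a /24 network, not a device.
export function truncateIpTo24(ip: string): string {
  const octets = ip.split(".");
  if (octets.length !== 4) return "0.0.0.0"; // unexpected shape: store nothing identifying
  return `${octets[0]}.${octets[1]}.${octets[2]}.0`;
}
```

The reporting side pairs with this: firing on tab hide / pagehide (e.g. via navigator.sendBeacon) is what lets the metrics survive the user closing the tab mid-session.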

What’s next

  • Suspense-streamed members + rows. The next refinement: move the members + rows loaders behind a Suspense boundary so the workspace header + chrome paint as soon as the workspace lookup resolves (~40ms), and the data panels stream in a beat later (~150-200ms). Deferred until we have enough RUM data (a week+ on prod) to confirm the split is worth the added Suspense boundary complexity.
  • RUM p95 dashboard. The rum.web_vitals log event is structured for aggregation — once log drain wiring is in place, a single query across route + day gives the p50/p75/p95 numbers per surface. Internal dashboard to follow.

What we’re honest about

The gaps. Flagging them here so we don’t pretend otherwise.

  • Cold first visit: down from 1-2s to ~400-800ms to first interaction after the RSC migration. Most of the remaining budget is us-east-1 Neon cold-start and Vercel lambda init (~500ms at p95) — infrastructure-side, not code-side.
  • Very long rows: tables with 1000+ rows where individual rows are very tall (e.g. multi-paragraph longtext cells) can still produce scroll friction because the virtualizer measures each row lazily. Mitigation: estimate-size + auto-measurement converges within a few scroll events. Row virtualization itself (see phase 3 above) keeps the DOM node count flat past 100 rows.
  • Offline: none. Refresh loses unsaved keystrokes during a disconnected session.
  • CRDT doc merge: last-write-wins for now. Two writers in the same paragraph in the same tick: later save overwrites.
  • Multi-region: Neon us-east-1. Cross-Atlantic users eat the round-trip.

Related: MCP server reference · Security · Changelog