Pillar: product-architecture | Date: March 2026
Scope: Technology stack decisions and architecture for a Node.js/Supabase/Tailwind collision repair superapp. UX and UI vision for shop environments: dark mode default, large touch targets for shop floor use, Tesla-meets-fintech aesthetic, PWA for iPad and mobile web, responsive desktop experience. Module prioritization and build order for maximum early market impact. MVP definition scoped to win a pilot with a large multi-location MSO. Technical architecture challenges specific to this domain: real-time sync across shop locations (Supabase Realtime), offline-first capability for shop floor where connectivity is unreliable, photo and document management at scale (damage photos, DVI photos, completed repair documentation), insurance EDI integration patterns, parts ordering API integrations with major suppliers. Multi-tenant and multi-location data architecture. Integration API design for coexistence with CCC/Mitchell during the adoption transition period.
Sources: 25 gathered, consolidated, synthesized.
The single most important architectural finding: Supabase consolidates what currently requires five separate vendors — PostgreSQL, Auth, REST API, WebSocket Realtime, and S3-compatible Storage — into one managed platform.[18] Meanwhile, the auto collision repair software market is projected to grow from $3.723 billion (2024) to $25.61 billion by 2035 at a 19.16% CAGR, making a well-architected stack now the difference between owning that growth and subsidizing it for incumbents.[22]
The validated production stack for a collision repair superapp is narrow and settled: Next.js 14+ (App Router) + TypeScript + Tailwind CSS on Vercel, backed by Supabase (PostgreSQL + Auth + Realtime + Storage) and Cloudflare Workers for edge routing. The critical architectural guidance is to start as a monolith — "Only transition to microservices when performance bottlenecks specifically demand separation."[8] The six-service decomposition (work orders, estimating, parts, payments, analytics, integration framework) is the end-state target, not the launch architecture. Drizzle ORM is preferred over Prisma for Supabase workflows because it integrates cleanly with PostgreSQL Row Level Security policies — the foundation of the entire multi-tenant isolation model.[8]
Multi-tenancy for a multi-location MSO requires a two-level hierarchy — organization (MSO group) above location (individual shop) — with 3 non-negotiable domain invariants: every data row is owned by one tenant, a user can belong to multiple tenants, and tenant ID is required, indexed, and part of all uniqueness constraints.[1] Violating any one leaks data across tenants. The canonical schema uses three tables: organizations, locations, and memberships (with location_ids UUID[] for per-location access grants). Every business data table — work orders, estimates, parts — carries both organization_id and location_id as foreign keys. The recommended tenancy model is shared-runtime (shared schema) for all shops at launch — onboarding is a single row insert, cost efficiency is highest, and the architecture can graduate to isolated pod deployments for enterprise MSOs without a rewrite.[1]
Row Level Security enforces tenant isolation at the database layer, making cross-tenant data leaks impossible even if application code has bugs — critical when an MSO's competitive RO pricing could be exposed to a rival shop. RLS performance optimization via composite index placement delivers a 99.94% query speed improvement; wrapping auth functions in a SELECT wrapper ((select auth.uid())) to cache per-query adds another 99.99% improvement for policy evaluation.[12] Unoptimized RLS makes production systems unusably slow at scale — indexes on (tenant_id, created_at DESC) and (tenant_id, location_id, status) must be created before any data volume accumulates. The multi-tenant schema must be built in Phase 0 (Weeks 1–4), before any business logic — retrofitting tenant_id columns onto tables with live data requires an Expand → Backfill → Contract migration that is significantly more expensive than getting it right from day one.[1]
Supabase Realtime runs on Elixir/Phoenix compiled to Erlang — the same foundation powering WhatsApp's messaging infrastructure — enabling millions of concurrent WebSocket connections. The Pro plan supports 500 concurrent Realtime connections, exactly sufficient for 50 locations × 10 concurrent users. Any MSO deployment beyond that threshold requires an Enterprise plan with custom limits.[19] Three Realtime primitives map directly to shop floor use: Broadcast (RO status alerts, parts arrival notifications), Presence (technician-by-bay tracking, advisor availability), and Postgres Changes (live production board synced from database events).[16] One critical constraint: Realtime alone is insufficient for shop floors — large metal buildings, WiFi dead zones near spray booths, and technicians moving between bays create connectivity gaps. Realtime handles online sync; a separate offline layer handles the rest. Building a PWA with Service Worker + IndexedDB outbox delivers ~75% development cost savings vs. separate native iOS/Android apps while achieving identical offline capability.[4] Payments and parts inventory must remain online-only (no offline writes) — strong consistency is non-negotiable for financial operations.[14]
Shop floor UX operates under hard constraints invisible to standard consumer interface design: gloves, grease, ambient light variation from bay doors to frame machine corners, iPads mounted on rolling carts. 82% of mobile users have adopted dark mode, and dark-mode-first is mandatory from launch day — retrofitting dark mode post-launch costs 2–3× more frontend work.[24] Primary actions (status update, photo capture, clock in/out) require 64×72px glove-friendly touch targets, nearly double the WCAG 2.1 AAA minimum of 44×44px.[7] The 80/20 rule governs navigation design: 20% of functionality is used 80% of the time, so critical actions must be reachable in 1 tap from anywhere. Field service UX redesigns that cut taps in half — targeting a reduction from 3 minutes to 30 seconds for an RO status update, and eliminating 90% of paper-based check-in forms — deliver the most measurable ROI during MSO pilot evaluations.[7]
A large MSO generates 60,000–150,000 photos per month (10 locations × 200–500 ROs × ~30 photos on average), driving 300–750GB of monthly storage growth.[9] Supabase Storage's S3-compatible architecture with PostgreSQL metadata enables a capability unavailable from basic cloud storage: querying "all photos for RO-12345" via a single SQL join with tenant-scoped RLS, making real-time customer portals and insurance adjuster access straightforward to build. Image transformations (auto WebP conversion, on-demand thumbnails) reduce mobile bandwidth by 30–40%; client-side compression before upload and 90-day lifecycle archival to cold storage prevent runaway storage costs.[9] The parts ordering workflow has an equally concrete problem: 30–40% of parts on estimates are incorrect, due to incomplete information capture rather than wrong OE numbers, while electronic commerce adoption across parts ordering remains below 20%.[6] VIN-based part number validation before order submission — pulling from OEC CollisionLink (11,000+ dealerships), PartsTrader (insurance-mandated for most DRP programs), and PartsTech (20,000 parts stores, 6 million parts) — can eliminate most of this error rate and is a quantifiable ROI argument for every MSO pilot.[6]
CIECA standards govern all insurer-to-shop data flows. CCC ONE connects 25,000+ body shops and 500+ insurance companies; its AI completes 80% of claim estimating within 2 minutes. Mitchell is fully deployed at Crash Champions' 650+ locations and pre-populates 70% of estimate lines via AI.[5] Any new platform that cannot speak CIECA BMS (current XML standard) and CAPIS (new JSON/OpenAPI 3.1) is functionally locked out of DRP programs. The coexistence strategy during the adoption transition is "layer on top" — import estimates from CCC/Mitchell via their APIs, manage the shop workflow internally, export invoices back via CIECA BMS. Three MVP-level EDI integrations are required before pilot launch: DRP assignment intake (CIECA BMS push from insurer), estimate import from CCC/Mitchell, and invoice/payment submission outbound. Supplement workflow (Phase 2) and photo sharing to insurer portals (Phase 2) can follow.[5]
For practitioners building toward an MSO pilot, the architecture sequence is non-negotiable: Phase 0 (Weeks 1–4) must deliver the complete multi-tenant schema, RLS composite indexes, Supabase Auth with JWT tenant context, and a dark-mode-first Next.js app shell with service worker — every subsequent feature inherits tenant isolation automatically rather than retrofitting it. Phase 1 (Weeks 5–12) delivers the work order spine, unified customer/vehicle database, real-time status board, and cross-location analytics — the four capabilities that directly address MSO owners' core pain of managing remote locations via daily phone calls. Phases 2–3 (Weeks 8–20) layer DVI with offline-capable iPad, photo management, and the CCC/Mitchell/CIECA integration adapters that allow coexistence without forcing shops to abandon their estimating systems. Empirically, full MSO network deployment takes 8–10 months from first pilot shop launch, with meaningful performance data available at ~6 months.[25] Supabase's Pro plan (500 Realtime connections, image transformations, priority support) is the correct launch tier; Enterprise negotiation should begin before reaching 40 locations to avoid connection limit surprises at scale.[19]
The production-validated stack for a Node.js/Supabase collision repair superapp converges on a small set of well-tested primitives. Supabase provides the unified backend infrastructure — PostgreSQL, Auth (GoTrue), auto-generated REST/GraphQL (PostgREST), WebSocket Realtime, S3-compatible Storage, and serverless Edge Functions (Deno) — under a single managed platform.[18] The frontend targets Next.js 14+ with the App Router, TypeScript, and Tailwind CSS, deployed on Vercel with Cloudflare Workers handling edge routing.[8]
Key finding: "Start with a Monolithic architecture (like a Next.js Full Stack app). Only transition to microservices when performance bottlenecks specifically demand separation." The superapp microservices architecture describes the target end-state, not the starting point.[8]
| Layer | Technology | Rationale |
|---|---|---|
| Frontend | Next.js 14+ (App Router), React, TypeScript[8] | SSR/SSG, file-based routing, full-stack in one repo |
| Styling | Tailwind CSS[8] | Utility-first; dark mode via dark: variant |
| ORM | Drizzle ORM (preferred) or Prisma[8] | Drizzle preferred for Supabase RLS workflows |
| Database | Supabase (PostgreSQL + RLS + Auth + Realtime + Storage)[18] | Unified platform; eliminates 5 separate vendors |
| Auth | Supabase Auth (GoTrue) with JWT app_metadata[8][23] | Integrates with PostgreSQL RLS for tenant isolation |
| Serverless logic | Supabase Edge Functions (Deno)[18] | EDI processing, parts API calls, PDF generation, SMS |
| Deployment | Vercel (frontend/API) + Supabase (backend)[8] | Zero-config global deployment |
| Edge routing | Cloudflare Workers[8] | Global latency reduction for multi-location MSOs |
| Cache | Redis (optional, high-traffic only)[8] | Session/rate-limit state; not required in MVP |
| Testing | Vitest (unit/integration)[8] | Native ESM, compatible with Next.js + TypeScript |
| Component | Collision Repair Use Case |
|---|---|
| PostgreSQL with full privileges[18] | Work orders, estimates, parts, labor, payments — full relational schema |
| GoTrue Auth + JWT[18] | Shop staff login; tenant context encoded in app_metadata |
| PostgREST auto-API[18] | Auto-generated REST endpoints; GraphQL via pg_graphql |
| Realtime Engine (Elixir/Phoenix)[3] | Live production board, technician presence tracking, status broadcasts |
| Storage (S3-compatible)[9] | Damage photos, DVI media, PDFs — with Postgres metadata for RO-scoped queries |
| Edge Functions (Deno)[18] | CIECA EDI processing, CCC/Mitchell API calls, SMS/email dispatch |
| Supavisor (connection pooling)[18] | Critical for high-concurrency shop floor — dozens of simultaneous technician sessions |
| Kong API gateway[18] | Rate limiting per tenant, API key management for partner integrations |
| pg_cron extension[18] | Scheduled jobs: DRP report generation, payment reconciliation, data retention |
The target architecture for a collision repair superapp requires six foundational components.[17]
See also: Adoption & Migration
- No service_role keys in client code — server-side only[18]
- Monitor query performance with pg_stat_statements[18]
- Scope Realtime subscriptions narrowly (location_id, wo_id) to minimize subscription overhead[18]
- Use supabase db push for migrations[18]

Multi-tenancy for a collision repair superapp targeting MSOs requires a two-level hierarchy: the organization (e.g., "ABC Auto Group") is the top-level tenant, and locations (individual shops) are first-class entities below it. All business data rows carry both organization_id and location_id, with RLS enforcing isolation at the database layer.[8][1]
Key finding: Three non-negotiable domain invariants apply to every multi-tenant schema: (1) "Every row is owned by one tenant," (2) "A user can belong to multiple tenants," (3) "Tenant ID is required, indexed, and part of uniqueness constraints." Violating any one of these invariants creates cross-tenant data leaks.[1][10]
| Model | Isolation Level | Cost Efficiency | Onboarding Speed | When to Use |
|---|---|---|---|---|
| Shared-Runtime (shared schema)[1] | Code + RLS | Highest | Insert a row | Default for all shops — MVP and growth |
| Separate Schema per Tenant[1] | Postgres namespace | Medium | Schema provisioning | Dozens/hundreds of tenants; per-tenant restore needed |
| Pod Architecture (multi-instance)[1] | App + DB instance | Lower | Infrastructure provisioning | Noisy-neighbor risk for large enterprise MSOs |
| Single-Tenant (bespoke)[1] | Fully dedicated stack | Lowest (linear cost growth) | Full stack deploy | Strict compliance/insurance data residency mandates |
The canonical MSO hierarchy uses three tables — organizations, locations, memberships — plus dual foreign keys on every business data table:[8][11]
```sql
-- Organization (top-level tenant, e.g. "ABC Auto Group")
organizations: id, name, slug

-- Locations (individual shops within the organization)
locations: id, organization_id, name, address JSONB

-- Memberships (users connected to orgs, with optional location scoping)
memberships: id, user_id, organization_id, role TEXT, location_ids UUID[]
-- role values: 'owner', 'manager', 'tech', 'advisor', 'viewer'
-- location_ids: NULL = access to all locations

-- All business data tables carry both keys
work_orders: id, organization_id, location_id, ...
```
| Level | Entity | Scope |
|---|---|---|
| 1 (top) | Organization (MSO/Shop Group)[11] | Owner, Admin, Member roles |
| 2 | Locations (individual shops)[11] | Per-location access grants via location_ids[] |
| 3 | Work Orders[11] | Location-scoped; RLS filters by location_id |
| 4 | RO Documents[11] | Work order-scoped |
| 5 | Audit Logs[11] | Org-scoped; immutable; compliance |
Every request must resolve tenant scope before processing. Four supported mechanisms:[1][10]
- Subdomain (e.g., acme.yourapp.com)
- Path prefix (e.g., yourapp.com/acme)
- Request header (e.g., X-Tenant-ID) for API clients
- JWT claim (app_metadata.tenant_id) — primary mechanism for Supabase Auth

A two-phase auth flow is required for multi-tenant B2B SaaS: first authenticate the user, then resolve or select the active tenant and issue tenant-scoped context.[1][10]
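As a dependency-free sketch (the claim shape and the resolveTenant helper are assumptions, not a Supabase API), tenant resolution can try the JWT claim first and fall back to the subdomain slug:

```typescript
// Sketch only: assumed JWT claim shape; not a Supabase API.
interface Claims {
  app_metadata?: { tenant_id?: string };
}

// Resolve tenant: prefer the JWT app_metadata claim, else the subdomain slug.
function resolveTenant(claims: Claims, host: string, baseDomain = "yourapp.com"): string | null {
  const fromJwt = claims.app_metadata?.tenant_id;
  if (fromJwt) return fromJwt;                          // primary mechanism
  const suffix = "." + baseDomain;
  const sub = host.endsWith(suffix) ? host.slice(0, -suffix.length) : "";
  return sub && sub !== "www" ? sub : null;             // subdomain fallback
}
```

The JWT claim wins when both are present, so a user on a shared device cannot reach another tenant just by visiting a different subdomain.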
Known Supabase Auth Limitation: Supabase Auth enforces global email uniqueness — one email = one account across the entire platform. If the same email needs to belong to multiple tenants (e.g., a vendor who services multiple MSOs), an internal email mapping workaround is required.[23]
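One workaround sketch (the helper name and plus-addressing scheme are assumptions, not a Supabase feature): derive a per-tenant internal auth email and keep the user's real address in a profile table.

```typescript
// Hypothetical helper: maps (real email, tenant) -> unique internal auth email.
// Plus-addressing keeps the derived address deliverable to the real inbox.
function internalAuthEmail(realEmail: string, tenantSlug: string): string {
  const [local, domain] = realEmail.toLowerCase().split("@");
  return `${local}+${tenantSlug}@${domain}`;
}
```

Each tenant then gets a distinct auth account for the same human, which satisfies the global-uniqueness constraint without exposing the mapping to end users.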
| Requirement | Driver |
|---|---|
| SOC 2 Type II[23] | Enterprise MSO customer procurement |
| GDPR/CCPA[23] | Customer PII in repair records and photos |
| Insurance data handling[23] | DRP network requirements from carrier partners |
| Encrypted credentials per tenant[23] | Parts supplier API keys, payment processor tokens |
Shared runtime requires tenant-level fairness controls to prevent one high-volume MSO from degrading others:[1][10]
Live schema changes with zero downtime use the three-phase Expand → Backfill → Contract pattern: expand by adding new nullable columns alongside the old, backfill existing rows in batches, then contract by enforcing constraints and dropping the old columns.[1]
See also: Regulatory Compliance
Row Level Security attaches policies to PostgreSQL tables that function as implicit WHERE clauses on every query — enforcing tenant isolation at the database layer regardless of application code errors.[12] For a collision repair platform where a cross-tenant data leak could expose an MSO's competitive RO data to a rival shop, database-layer enforcement is non-negotiable.
Key finding: RLS performance optimization can deliver 99.94% query speed improvements when policy columns are indexed and functions are wrapped in SELECT to cache results. Unoptimized RLS can make a production system unusably slow at scale.[12]
| Technique | Performance Improvement | Implementation |
|---|---|---|
| Index policy columns (tenant_id, user_id)[12] | 99.94% | CREATE INDEX ON table (tenant_id, ...) |
| Wrap auth functions in SELECT[12] | 99.99% | (select auth.uid()) caches per-query |
| Always filter queries explicitly[12] | 94.74% | Clients pass tenant_id filter; RLS double-checks |
| Security definer functions for lookup tables[12] | 99.78% | Bypass RLS on read-only reference tables |
| Specify roles with TO clause[12] | 99.78% | CREATE POLICY ... TO authenticated |
Extract tenant context from JWT app_metadata via helper function, then apply uniformly across all business tables:[8][23]
```sql
CREATE OR REPLACE FUNCTION auth.tenant_id()
RETURNS text LANGUAGE sql STABLE AS $$
  -- second argument 'true' returns NULL instead of erroring when the claim is absent
  SELECT nullif(
    ((current_setting('request.jwt.claims', true)::jsonb ->> 'app_metadata')::jsonb ->> 'tenant_id'), ''
  )::text
$$;

-- Apply to every tenant-scoped table; wrapping the function in SELECT caches it per query
CREATE POLICY tenant_isolation ON repair_orders
  TO authenticated
  USING ((select auth.tenant_id()) = tenant_id);

CREATE POLICY tenant_insert ON repair_orders
  TO authenticated
  WITH CHECK ((select auth.tenant_id()) = tenant_id);
```
Organization membership checks implemented as SECURITY DEFINER functions bypass RLS on the membership table itself, preventing infinite recursion:[11]
```sql
-- Check membership
is_organization_member(org_id TEXT, user_id TEXT) RETURNS BOOLEAN

-- Check admin rights: role IN ('admin', 'owner')
is_organization_admin(org_id TEXT, user_id TEXT) RETURNS BOOLEAN
```
```sql
CREATE INDEX ON repair_orders (tenant_id, created_at DESC);
CREATE INDEX ON repair_orders (tenant_id, location_id, status);
CREATE INDEX idx_member_user_org ON public.memberships (user_id, organization_id);
```
The tenant_id column must appear first in all composite indexes — every hot query filters by tenant before any other dimension.[23][11]
| Principle | Implementation |
|---|---|
| Defense-in-depth[11] | RLS at database layer; application code is a second check, not the only check |
| Least privilege[11] | Each role receives minimum required permissions (tech can't see financials) |
| Role hierarchy[11] | Owner > Admin > Manager > Advisor > Tech > Viewer — no privilege escalation path |
| Server-side context isolation[11] | Use SET LOCAL to prevent tenant context bleeding across requests in connection pool |
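The SET LOCAL pattern above can be sketched as a transaction wrapper (the Exec type and withTenant helper are illustrative, not a specific driver API). Because the setting is transaction-local, it cannot bleed to the next request that reuses the same pooled connection:

```typescript
// Minimal stand-in for a Postgres driver's query function (assumption).
type Exec = (sql: string, params?: unknown[]) => Promise<unknown>;

// Run fn with a transaction-scoped tenant setting; RLS policies can read it
// via current_setting('app.tenant_id', true).
async function withTenant<T>(exec: Exec, tenantId: string, fn: () => Promise<T>): Promise<T> {
  await exec("BEGIN");
  try {
    // set_config(..., true) is transaction-local — equivalent to SET LOCAL
    await exec("SELECT set_config('app.tenant_id', $1, true)", [tenantId]);
    const result = await fn();
    await exec("COMMIT");
    return result;
  } catch (err) {
    await exec("ROLLBACK");
    throw err;
  }
}
```

On COMMIT or ROLLBACK the setting evaporates with the transaction, which is exactly the isolation property a connection pool needs.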
Supabase Realtime is built on Elixir compiled to Erlang, using Phoenix Framework and Phoenix.PubSub with the PG2 adapter. This architecture enables millions of concurrent WebSocket connections via lightweight Erlang processes rather than OS-level threads — the same foundation that powers WhatsApp's messaging infrastructure at scale.[3][16]
Key finding: "For offline-first scenarios, Realtime alone is insufficient — need local persistence layer + sync queue." Real-time sync and offline-first are complementary, not interchangeable — a shop floor iPad requires both.[3]
| Feature | Technology | Collision Repair Use Cases |
|---|---|---|
| Broadcast[3][16] | Low-latency ephemeral client-to-client messages via shortest path | RO status updates to production board, part arrival alerts, supplement approvals |
| Presence[16][19] | In-memory CRDT key-value store, synchronized across cluster nodes | Active technician tracking per location, which advisor is viewing an RO, estimator availability |
| Postgres Changes[16][19] | Logical replication slots → WebSocket JSON delivery | Live work order board sync, estimate approval broadcast, parts status push |
Supabase Realtime runs as a globally distributed Elixir cluster:[3][16]
| Plan | Concurrent Realtime Connections | MSO Coverage |
|---|---|---|
| Free[19] | 200 | ~20 concurrent users — sufficient for single pilot shop |
| Pro[19] | 500 | 50 locations × 10 concurrent users = 500 — exactly at limit |
| Team / Enterprise[19] | Custom | Required for large MSOs; negotiate per-deployment |
The realtime.messages table is partitioned daily; partitions are retained 3 days before deletion — efficient cleanup without row-level deletions.[3][16] This is sufficient for event replay on technician iPad reconnection after intermittent shop floor connectivity loss.
Supabase added Broadcast and Presence Authorization in 2024, enabling RLS-style policies on channels.[19]
Critical implementation note: Use scoped channel names — shop:${shopId}:work-orders — and ensure clients subscribing to Postgres Changes have appropriate RLS policies. Without scoped channels, cross-tenant data leakage is possible even with application-level auth.[3]
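Centralizing channel naming makes the scoping rule enforceable; a tiny sketch (the helper is an assumption, the supabase-js subscription call itself is unchanged):

```typescript
// Hypothetical helper: all Realtime subscriptions go through one naming
// function so no client can subscribe without a shop (tenant) scope.
type ShopTopic = "work-orders" | "parts" | "presence";

function shopChannel(shopId: string, topic: ShopTopic): string {
  // Require a UUID-shaped id so a raw "*" or empty scope can never slip in.
  if (!/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(shopId)) {
    throw new Error("shopId must be a UUID");
  }
  return `shop:${shopId}:${topic}`;
}
```

A client would then call, e.g., supabase.channel(shopChannel(shopId, "work-orders")) rather than building channel strings ad hoc.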
Shop floor connectivity is unreliable — large metal buildings, WiFi dead zones near spray booths, technicians moving between bays. An offline-first architecture is not a feature; it is a foundational reliability requirement for any shop floor iPad application. The PWA approach delivers ~75% development cost savings vs. separate native iOS/Android apps while achieving the same offline capability.[4]
Key finding: "Offline-first favors low latency over strict consistency" — this is acceptable for shop workflow status updates and DVI capture, but NOT for parts inventory or payment processing, which must be online-only.[14]
| Layer | Technology | Role in Shop Floor App |
|---|---|---|
| App Shell[20] | Minimal HTML/CSS/JS, aggressively cached | Instant launch on iPad even with no connectivity |
| Service Worker[20][4] | Separate thread; manages routing, caching, background sync, push events | Intercepts API requests; returns cached data offline; queues writes for sync |
| Data Layer[20] | IndexedDB (universal) or SQLite/WASM (2025) | Local work orders, photos queue, sync outbox |
| Asset Class | Strategy | Rationale |
|---|---|---|
| Static assets, fonts, icons[20] | Cache-first with revisioning | Instant loads; safe version control via content hashing |
| Content APIs (RO lists, parts catalog)[20] | Stale-while-revalidate | Users see cached data immediately; background refresh |
| User data, authenticated APIs[20] | Network-first with fallback | Prevents stale writes; falls back gracefully offline |
| Vehicle photos, media[20] | Cache-first with LRU limits | Prevents storage bloat; iPad storage is finite |
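The table's routing can be sketched as a single dispatch function inside the service worker (the URL patterns here are assumptions, not fixed routes):

```typescript
type Strategy = "cache-first" | "stale-while-revalidate" | "network-first";

// Map a request URL to the caching strategy from the table above.
function strategyFor(url: string): Strategy {
  if (/\.(js|css|woff2|svg|png|ico)$/.test(url)) return "cache-first";  // static assets
  if (url.includes("/api/catalog")) return "stale-while-revalidate";    // content APIs
  return "network-first";                                               // authenticated user data
}
```

Keeping the mapping in one function makes the policy auditable: any new endpoint defaults to network-first, the safe choice for authenticated writes.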
IndexedDB object stores required for offline DVI and RO workflows:[14]

- work_orders store — localId as keyPath; indexes on updated_at, server_id
- photos store — localId, work_order_id; blob storage for pending uploads
- syncQueue store — pending mutations with type, entity, clientId, changes, timestamp

As a heavier alternative, sql.js and wa-sqlite enable a full relational local DB — superior for complex shop data relationships.[14]

Every user mutation must write to local IndexedDB before network transmission, and must include an idempotency key to prevent duplicate effects on retry.[20][4] The Background Sync API flushes queued writes when connectivity returns — even if the browser tab is closed.[4]
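A minimal in-memory sketch of the outbox write (an array stands in for the IndexedDB syncQueue store; field names follow the store description above):

```typescript
interface Mutation {
  clientId: string;          // idempotency key — the server dedupes retries on this
  type: "update_status" | "add_photo" | "clock_event";
  entity: string;            // e.g. local work-order id
  changes: Record<string, unknown>;
  timestamp: number;
}

const syncQueue: Mutation[] = [];

// Enqueue locally BEFORE any network attempt; the sync worker drains later.
function enqueue(type: Mutation["type"], entity: string, changes: Record<string, unknown>): Mutation {
  // Simplified key; production code would use crypto.randomUUID().
  const clientId = `${Date.now()}-${Math.random().toString(36).slice(2)}`;
  const m: Mutation = { clientId, type, entity, changes, timestamp: Date.now() };
  syncQueue.push(m);
  return m;
}
```

Because the key travels with every retry, a status update replayed twice after a flaky reconnect applies exactly once server-side.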
| Data Type | Strategy | Rationale |
|---|---|---|
| RO status updates[14] | Last-Write-Wins (timestamp) | Simplest; sufficient — only one tech updates status at a time |
| Estimate line items (concurrent edit)[20] | Manual resolution — UI diff surface | Advisor and tech may edit simultaneously; needs human decision |
| Technician presence, DVI checklists[20][14] | CRDTs (Automerge, Yjs) | Mathematically guaranteed convergence; no conflict possible |
| Parts inventory, payments[14] | Online-only (no offline writes) | Strong consistency required; double-orders and double-charges unacceptable |
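The Last-Write-Wins row reduces to a one-line merge (a sketch; field names are assumed):

```typescript
interface StatusUpdate {
  status: string;
  updatedAt: number; // client timestamp in ms
}

// LWW: the newer write wins; ties keep the server copy.
function mergeLww(server: StatusUpdate, client: StatusUpdate): StatusUpdate {
  return client.updatedAt > server.updatedAt ? client : server;
}
```

This is only safe because one technician owns an RO's status at a time; concurrent estimate edits need the manual-resolution path instead.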
| Constraint | Value | Shop Floor Impact |
|---|---|---|
| Idle timeout[4] | 30 seconds | Small photo batches must complete within limit |
| Promise settlement maximum[4] | 5 minutes | Large DVI photo uploads may exceed — use Background Fetch instead |
| Safari Background Sync support[14] | Limited | iPad Safari requires fallback: retry on app foreground |
| Periodic Background Sync minimum interval[4] | 24 hours | Not suitable for operational sync — use active sync on reconnect |
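Given the limited Background Sync support on iPad Safari, the foreground-retry fallback can be isolated as a pure decision function, with the browser wiring shown in comments (helper names are assumptions):

```typescript
// Decide whether to flush the offline outbox right now.
function shouldFlushOutbox(visibility: "visible" | "hidden", online: boolean): boolean {
  return visibility === "visible" && online;
}

// Wiring sketch (browser-only, kept as comments so this file runs anywhere):
//   document.addEventListener("visibilitychange", () => {
//     if (shouldFlushOutbox(document.visibilityState, navigator.onLine)) flushOutbox();
//   });
//   window.addEventListener("online", () => flushOutbox());
```

Separating the predicate from the event wiring keeps the retry logic unit-testable without a DOM.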
For batch photo upload after connectivity return — shows persistent browser UI with user-visible progress. Continues even if the iPad screen locks. Suitable for end-of-day DVI photo sync when the tech returns to the WiFi area.[4]
Auto body shop environments impose extreme constraints on interface design: technicians wear gloves, grease and debris impair touch accuracy, lighting ranges from direct sunlight near bay doors to dim corners near frame machines, and iPads are mounted on rolling carts or walls in both portrait and landscape orientations. Standard consumer-grade UI patterns fail in this environment.[7][24]
Key finding: "A digital solution that will be successful cannot be designed in a boardroom without input from the users in the field — it is imperative to involve end-users early on in the design process." Field service UX redesigns that cut taps in half deliver the most measurable ROI for shop adoption.[7]
82% of mobile users have adopted dark mode,[24] and dark mode is the launch default for the "Tesla-meets-fintech" aesthetic targeting collision repair shops.
| Standard | Minimum Size | Application |
|---|---|---|
| WCAG 2.1 AAA[24] | 44×44px | Accessibility baseline for all interactive elements |
| iOS Apple HIG[24] | 34pt minimum | iPad baseline; all tappable elements |
| Safe minimum[24] | 30×30px to 48×48px | Standard shop floor use |
| Industrial / glove-friendly[7] | 64×72px | Primary actions: status update, photo capture, clock in/out |
Touch precision varies by screen location — sizing must compensate:[24]
| Depth | Actions |
|---|---|
| 1 tap from anywhere[7] | Update RO status, take/attach photo, add time entry/clock in-out, flag part needed/arrived |
| 2–3 taps OK[7] | View estimate details, add technician note, request supplement |
| Menu level[7] | Edit customer info, generate reports, configure settings |
"20% of the functionality is used 80% of the time." Critical actions must be 1–2 taps from any screen.[7] Five principles for shop floor interfaces:
| Orientation | Layout | Primary User |
|---|---|---|
| Portrait[7] | Single-column, mobile phone-like | Technician walking the shop floor with iPad |
| Landscape[7] | Two-column: left = RO list/nav, right = detail view | Estimator at workstation; service advisor at counter |
Define concrete, measurable hypotheses before development:[7]
A large MSO generates 60,000–150,000 photos per month at scale (10 locations × 200–500 ROs × 30 photos average = 300GB–750GB monthly storage growth).[9] Photo management is not a secondary feature — it is a primary workflow touchpoint for both DVI and insurance supplement documentation.
Key finding: Supabase Storage's S3-compatible architecture with PostgreSQL metadata enables a capability unique among basic cloud storage solutions: querying "all photos for RO-12345" with a single SQL join rather than manual file path parsing. This enables real-time customer portals and insurance adjuster access with standard RLS tenant isolation.[9]
| Bucket | Contents | Access Policy |
|---|---|---|
vehicle-photos[9] | Damage intake, repair progress, completion photos | Tenant-scoped RLS; read by customer portal |
dvi-media[9] | Digital vehicle inspection photos and videos | Tenant + location scoped; insurer read access for DRP |
documents[9] | Insurance estimates, supplements, invoices (PDFs), signed authorizations | Tenant-scoped; audit trail required |
avatars[9] | Staff profile photos | Public read; authenticated write |
```text
vehicle-photos/
  {tenant_id}/
    {location_id}/
      {repair_order_id}/
        intake/           damage_001.jpg, damage_002.jpg
        repair_progress/  day1_001.jpg
        completion/       final_001.jpg

documents/
  {tenant_id}/
    {repair_order_id}/
      estimate.pdf, supplement_1.pdf, final_invoice.pdf, authorization_signed.pdf
```
The first path segment is always tenant_id — this enables an RLS storage policy matching (storage.foldername(name))[1].[9]
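The convention is easiest to enforce with a single path builder (a sketch; the function name is an assumption):

```typescript
type Stage = "intake" | "repair_progress" | "completion";

// Canonical object path: tenant_id is always the first segment so a storage
// RLS policy on (storage.foldername(name))[1] can match it.
function photoPath(tenantId: string, locationId: string, roId: string, stage: Stage, file: string): string {
  return [tenantId, locationId, roId, stage, file].join("/");
}
```

Routing every upload through this builder means no client code can accidentally write an object outside its tenant prefix.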
| Capability | Specification | Shop Floor Use |
|---|---|---|
| Resize[9] | 1–2500px; modes: cover, contain, fill | Thumbnail generation for RO list view; avoid storing separate thumbnail copies |
| Quality[9] | 20–100 (default 80) | Compress for web delivery; retain originals at full resolution |
| Auto WebP conversion[9] | When client supports it | 30–40% bandwidth reduction on mobile |
| Format support[9] | PNG, JPEG, WebP, AVIF, GIF, ICO, SVG, BMP, TIFF; HEIC source-only | Accepts iPhone/iPad native HEIC captures |
| Maximum file size[9] | 25MB per file | Sufficient for all photo types; videos need separate handling |
| Maximum resolution[9] | 50 megapixels | Covers all current mobile camera outputs |
| CDN coverage[9] | 285+ cities worldwide | Fast delivery to insurance adjuster portals globally |
| Pricing (transformations)[9] | 100/month included (Pro/Team); $5 per 1,000 additional | At 5 transformations per RO × 3,000 ROs/month = 15,000 ≈ $75/month |
| Parameter | Low Estimate | High Estimate |
|---|---|---|
| Photos per RO[9] | 20 | 50 |
| ROs/month per location[9] | 200 | 500 |
| Total ROs/month (10 locations)[9] | 2,000 | 5,000 |
| Total photos/month (at ~30 avg)[9] | 60,000 | 150,000 |
| Storage growth/month (5MB avg)[9] | 300GB | 750GB |
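The table's arithmetic as a worked example (a sketch; 1 GB treated as 1,000 MB):

```typescript
// Monthly storage growth in GB for a photo-heavy MSO deployment.
function monthlyStorageGB(locations: number, rosPerLocation: number, photosPerRo: number, mbPerPhoto = 5): number {
  const photos = locations * rosPerLocation * photosPerRo; // photos per month
  return (photos * mbPerPhoto) / 1000;                     // GB per month
}
```

At 10 locations and ~30 photos per RO, 200 ROs/location yields 300GB/month and 500 ROs/location yields 750GB/month, matching the table's low and high estimates.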
Additional cost controls: client-side compression before upload, 90-day lifecycle archival to cold storage, and long-lived cache headers (cache-control: max-age=31536000) for completed repair photos.[9]

The complete end-to-end DVI photo flow follows the offline-first architecture described above: capture on the shop floor iPad, queue locally while offline, then batch-upload on reconnect into the tenant-scoped buckets with RO metadata.[9] Videos go to a separate dvi-videos bucket; the 25MB per-file limit is too small for video, so use signed upload URLs direct to object storage.[9]

The Collision Industry Electronic Commerce Association (CIECA) develops the electronic communication standards underlying all insurer-to-shop and estimating-system-to-shop data flows. "The highly-automated processes most insurers use to assign vehicles to repair shops and to receive estimates and invoices are driven by CIECA Standards."[5] Any new platform that cannot speak these standards is functionally locked out of DRP programs.
Key finding: "CCC and Mitchell own the estimate data format. Any new superapp needs to be a 'layer on top' during transition — import estimates from CCC/Mitchell, manage the shop workflow, then export invoices back." Full CIECA compliance is a prerequisite, not a differentiator.[5]
| Format | Description | Status | Requirement |
|---|---|---|---|
| CAPIS[5][13] | JSON/OpenAPI 3.1 | Newest — future-forward standard | Required for new development; all greenfield integrations |
| BMS[5][13] | XML-based Business Message Suite | Current industry standard | Required for most existing insurer integrations |
| EMS[5][13] | dBase IV format | Legacy (1994 original standard) | Still present in older estimating systems; declining |
A cloud-based API network enabling "more than 22,000 collision repairers to connect to apps using the CIECA BMS data standard."[5][13]
| Technical Attribute | Specification |
|---|---|
| Standard | CIECA BMS (Business Message Suite) compliance required[5] |
| API type | RESTful cloud service, JSON/XML[5] |
| Encryption | 128-bit encrypted transmission[5] |
| Architecture | Cloud-to-cloud; no local data pumps or EMS file configuration[5] |
| Access | Developer registration required; BMS message assigned per app based on business purpose[5] |
| Market reach | 22,000+ collision repairers connected via CIECA BMS standard[5][13] |
Enables "near real-time" access to shop-specific job data:[21]
| Attribute | Specification |
|---|---|
| Authentication | Application keys; registration required[21] |
| Rate limiting | 100 rows per API call, paging-based[21] |
| Methods | GET and POST[21] |
| RepairOrder Services | ClaimService (insurance info), CustomerService, JobService (repair lines, status, costs, vehicle details)[21] |
| Opportunity Services | Estimate-level data[21] |
| Shop Services | Shop and employee/user data[21] |
| System | Market Position & Recent Moves |
|---|---|
| CCC ONE[5] | 25,000+ body shops; 500+ insurance companies; AI completes 80% of claim estimating within 2 minutes |
| Mitchell[5] | Crash Champions (650+ locations) fully deployed end-to-end; pre-populates 70% of estimate lines via AI |
| Mitchell/Guidewire[5] | Cloud-native integration with Guidewire ClaimCenter launched — direct carrier workflow integration |
| CCC/RepairLogic[5] | OEConnection RepairLogic integration into CCC ONE expected early 2026 |
| Use Case | Format Required | Priority |
|---|---|---|
| DRP Assignment intake[5][13] | CIECA BMS/CAPIS push from insurer | MVP — required for DRP shop participation |
| Estimate import from CCC/Mitchell[5] | CCC Secure Share API / Mitchell API | MVP — coexistence during transition |
| Invoice/payment submission[5][13] | CIECA BMS/CAPIS outbound | MVP — cash flow critical |
| Supplement workflow[5][13] | BMS/CAPIS; photo sharing via insurer portal | Phase 2 |
| Photo sharing to insurer portal[5] | Insurer-specific (varies) | Phase 2 |
See also: Regulatory Compliance
Parts ordering remains the most manual, error-prone workflow in collision repair. "For most shops, parts orders are still handled by phone or fax" with electronic commerce adoption below 20%.[6] The quality problem is severe: "About 30 percent or 40 percent of parts that appear on estimates are incorrect," primarily due to incomplete information capture rather than wrong OE numbers.[6]
Key finding: 30–40% of parts on estimates are incorrect, costing shops in wrong-order returns, delay penalties, and labor rework. A new platform that validates part numbers against VIN data before ordering can largely eliminate this error class — a quantifiable ROI argument for MSO pilots.[6]
| Platform | Scale | Key Capability | Integration Priority |
|---|---|---|---|
| OEC CollisionLink[6] | 11,000+ dealerships and repair shops | Dominant OEM parts ordering; factory pricing incentives; bi-directional PartsTrader integration | #1 — highest coverage, OEM focus |
| PartsTrader[6] | Insurance-mandated for many DRP programs; 1,700+ dealer network | Online procurement marketplace; integrated with CollisionLink since 2016 | #2 — insurance mandate makes it non-optional for DRP shops |
| PartsTech API[6] | 20,000 parts stores; 6 million parts | Keyword/VIN/plate search; real-time inventory; geolocation for hard-to-find parts | #3 — broadest catalog coverage |
| LKQ/Keystone[6] | LKQ Europe API serves 300+ customers | Aftermarket/recycled parts; uParts "best mix" bundles of OEM+aftermarket+LKQ per estimate | #4 — required for cost-driven DRP programs |
| Partly API[15] | "AI Parts Infrastructure" — unified API layer | Single API across auto parts industry; VIN/plate lookup; image upload for damage assessment; white-label procurement | #5 — reduces N individual integrations to 1 |
| CCC Parts Network[6] | Connected to CCC ONE estimating | Dealers, aftermarket, recyclers; reduces returns via integrated shop communications | #6 — required for CCC ONE shops |
Estimate created → Part line items extracted → Query parts APIs by VIN + part description → Show availability/pricing across OEM/aftermarket/recycled → One-click order → Parts received confirmation → RO updated with actual parts costs
Data required for accurate parts ordering: VIN, OEM part numbers from estimate, Year/Make/Model, part description, quantity, shop location (for delivery/pickup logistics).[6]
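One cheap local gate before any vendor API is queried is VIN well-formedness. The sketch below implements the standard North American VIN check-digit test (ISO 3779 transliteration and weights), which catches transposed or mistyped VINs at capture time; actual part-fitment validation still requires the vendor catalogs:

```typescript
// ISO 3779 transliteration table: letters I, O, and Q are never
// used in VINs, so they are absent here.
const VIN_VALUES: Record<string, number> = {
  A: 1, B: 2, C: 3, D: 4, E: 5, F: 6, G: 7, H: 8,
  J: 1, K: 2, L: 3, M: 4, N: 5, P: 7, R: 9,
  S: 2, T: 3, U: 4, V: 5, W: 6, X: 7, Y: 8, Z: 9,
  "0": 0, "1": 1, "2": 2, "3": 3, "4": 4,
  "5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
};

// Positional weights; position 9 (index 8) is the check digit itself.
const VIN_WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2];

function isValidVin(vin: string): boolean {
  const v = vin.toUpperCase();
  if (!/^[A-HJ-NPR-Z0-9]{17}$/.test(v)) return false; // 17 chars, no I/O/Q
  const sum = [...v].reduce(
    (acc, ch, i) => acc + VIN_VALUES[ch] * VIN_WEIGHTS[i],
    0
  );
  const check = sum % 11;
  const expected = check === 10 ? "X" : String(check);
  return v[8] === expected;
}
```

Rejecting a bad VIN before the parts query keeps the downstream price/availability results from being silently wrong for the vehicle on the lift.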
| Challenge | Data Point | Solution Approach |
|---|---|---|
| Part accuracy[6] | 30–40% of estimate parts are incorrect | VIN-based validation before order submission |
| Multi-vendor management[6] | Shops order across multiple vendors simultaneously | Unified parts order dashboard with per-vendor status |
| Returns/credits[6] | Wrong parts require return tracking + AP integration | Returns workflow linked to original RO and GL codes |
| Price comparison[6] | OEM vs. aftermarket vs. recycled decision per line item | Side-by-side pricing from CollisionLink + LKQ + PartsTech |
| Real-time availability[6] | Inventory status varies by warehouse proximity | PartsTech geolocation API for nearest warehouse ETA |
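The side-by-side pricing row implies a per-line ranking policy once offers come back from multiple vendors. An illustrative reducer follows — the in-stock-then-price-then-ETA ordering is an assumption; a real DRP program may mandate part type (OEM vs. aftermarket) regardless of price:

```typescript
type Offer = {
  vendor: string;    // e.g. "CollisionLink", "LKQ", "PartsTech"
  unitPrice: number; // USD
  inStock: boolean;
  etaDays: number;   // delivery estimate from nearest warehouse
};

// Rank competing offers for one estimate line: in-stock offers only,
// ordered by price, with delivery ETA as the tiebreaker. The sort
// runs on a copy so the caller's array is left untouched.
function bestOffer(offers: Offer[]): Offer | undefined {
  return [...offers]
    .filter(o => o.inStock)
    .sort((a, b) => a.unitPrice - b.unitPrice || a.etaDays - b.etaDays)[0];
}
```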
See also: Workflow Pain Points
The auto collision repair management software market reached $3.723 billion in 2024, projected to grow to $25.61 billion by 2035 at a 19.16% CAGR. The cloud-based segment dominates at $2.5 billion (2024) and is projected to reach $17.5 billion (2035). The large facilities/MSO segment is the fastest-growing and highest-value category, projected at $14.61 billion by 2035.[22]
Key finding: MSO critical pain points are operational, not technical: inability to monitor multiple locations simultaneously, customer/vehicle data fragmented by location, inconsistent technician processes across shops, and growth friction when opening new locations. The MVP must solve all four to win a pilot.[25]
| Pain Point | Business Impact |
|---|---|
| Lack of visibility[25] | Owner cannot monitor multiple locations; relies on daily phone calls |
| Data fragmentation[25] | Customer/vehicle history isolated by location; repeat customers are strangers at every shop |
| Inconsistent service[25] | Technicians use different processes at different shops; quality varies; training is per-location |
| Growth friction[25] | Opening a new location requires rebuilding systems from scratch; no pre-configured templates |
| Phase | Feature | Rationale |
|---|---|---|
| Phase 1 (MVP — MSO Pilot Win)[25] | Multi-location work order management | Core operational requirement; without this, it's not shop management software |
| Phase 1[25] | Centralized customer/vehicle database | Unified profiles searchable across all locations — the "medical record for a car" |
| Phase 1[25] | Role-based access control (owner/manager/advisor/tech/viewer per location) | Required for MSO — not all staff see all locations or all data |
| Phase 1[25] | Real-time status board (per-location + cross-location view for owners) | Replaces daily phone calls; directly addresses visibility pain point |
| Phase 1[25] | Cross-location analytics (KPI, revenue, cycle time comparison) | MSO owners need comparative data to manage remote locations |
| Phase 1[25] | Parts ordering (OEC CollisionLink, PartsTrader) | Mandatory for DRP shops; directly reduces 30–40% parts error rate |
| Phase 1[25] | DVI with photos (offline-capable iPad) | Standardizes inspection process; insurance documentation; tech workflow cornerstone |
| Phase 1[25] | Customer communication (SMS/email automated triggers) | Customer satisfaction KPI; reduces inbound phone volume by 30–50% |
| Phase 1[25] | CCC/Mitchell read integration (pull estimates) | Coexistence during transition — shops won't abandon estimating systems for MVP |
| Phase 2[25] | Insurance supplement workflow | Revenue recovery; requires CIECA BMS compliance |
| Phase 2[25] | Labor time tracking (clock in/out per operation) | Labor costing accuracy; efficiency benchmarking |
| Phase 2[25] | Invoicing and payment processing | Full AR cycle; required for standalone operation |
| Phase 2[25] | Document management (estimate PDFs, photos, sign-offs) | Paperless workflow; audit trail for insurance disputes |
| Phase 3[25] | AI damage assessment integration | AI completes 80% of estimating in <2 min (CCC); competitive parity requirement |
| Phase 3[25] | Insurance DRP network management | Enterprise differentiation; complex insurer relationship management |
| Phase 3[25] | OEM certification workflow management | ADAS/EV repair growth — significant revenue premium for certified shops |
| Milestone | Typical Duration |
|---|---|
| First pilot shop launch[25] | Day 0 |
| Meaningful performance data[25] | ~6 months (first two locations) |
| Full network deployment[25] | 8–10 months from pilot launch |
| Best practice[25] | Pilot single shop before rolling out; validate use cases before MSO-wide rollout |
The capabilities that distinguish MSO-class software from single-shop platforms map directly to the four pain points above: cross-location visibility, a unified customer/vehicle record, standardized processes across shops, and templated onboarding for new locations.[25]
See also: Market Economics; Pricing & Business Model
The strategic architecture challenge is not building a better SMS — it is displacing deeply entrenched incumbents (CCC ONE at 25,000+ shops; Mitchell at 650+ with Crash Champions alone) while maintaining interoperability during the transition period. The work order is the data spine that makes this possible: a single source of truth that ingests estimates from CCC/Mitchell and orchestrates every downstream workflow.[17]
Key finding: "Start with the core RO workflow as the unified data spine, then layer each additional capability module on top. The single source of truth for the work order enables true superapp benefits — parts auto-ordered from estimate, customer texts triggered by status changes." This is the architecture pattern that converts a feature-competitive SMS into a platform.[17]
| Pattern | Implementation | Purpose |
|---|---|---|
| API-First Design[17] | Documented REST APIs for all SMS functions | Simplifies integration with CCC/Mitchell and future partners |
| Event-Driven Integration[17] | Pub/sub patterns for loose coupling | CCC estimate arrival triggers RO creation without tight binding |
| Integration Adaptors[17] | Components translating between modern REST and legacy CIECA XML/EMS | Support all three CIECA formats in a single adapter layer |
| Webhook Management[17] | Secure event notifications with retry capabilities | Insurance supplement approvals, DRP assignment intake |
| Partner Sandboxes[17] | Isolated integration testing environments | New insurer integrations without production risk |
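The event-driven row can be made concrete with a small typed bus. The in-process sketch below is illustrative (event names and payload shapes are assumptions); in production the same contract would ride on a durable mechanism such as a queue or Postgres LISTEN/NOTIFY:

```typescript
// Hypothetical event contract: names and payloads are illustrative.
type Events = {
  "estimate.received": { estimateId: string; vin: string; source: "CCC" | "Mitchell" };
  "ro.created": { roId: string; estimateId: string };
};

// Minimal in-process event bus illustrating loose coupling: the
// CCC/Mitchell import adapter publishes "estimate.received" without
// knowing that an RO module subscribes to it.
class EventBus {
  private handlers = new Map<string, Array<(p: unknown) => void>>();

  on<K extends keyof Events>(event: K, fn: (p: Events[K]) => void): void {
    const list = this.handlers.get(event) ?? [];
    list.push(fn as (p: unknown) => void);
    this.handlers.set(event, list);
  }

  emit<K extends keyof Events>(event: K, payload: Events[K]): void {
    for (const fn of this.handlers.get(event) ?? []) fn(payload);
  }
}
```

Swapping the transport later (bus → queue) leaves publisher and subscriber code untouched, which is the point of the pattern.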
| Current Pain | Superapp Solution |
|---|---|
| 6+ separate logins (SMS, estimating, DVI, parts, customer portal, payments)[17] | Single sign-on; one identity across all modules |
| Multiple isolated databases with no cross-module analytics[17] | Unified data layer; cycle time + cost + satisfaction in one dashboard |
| Manual data re-entry between systems (estimate → RO → parts → invoice)[17] | Work order as data spine; parts auto-ordered from estimate lines |
| Each system vendor owns customer data[17] | Platform owns data layer; customer stickiness through multi-service integration |
| Data Type | Storage Technology | Rationale |
|---|---|---|
| Transactional data (ROs, estimates, parts, payments)[17] | Supabase/PostgreSQL | ACID transactions; RLS tenant isolation; full-text search |
| Photos, PDFs, videos[17] | Supabase Storage (S3-compatible) | CDN delivery; Postgres metadata for querying |
| Session state, rate-limit counters[17] | Redis | Sub-millisecond TTL operations |
| Real-time presence, ephemeral events[17] | Supabase Realtime (Elixir/CRDT) | In-memory; not persisted; handles connection-level state |
| Analytics/warehouse (Phase 3)[17] | Separate analytics DB or Postgres partitioned tables | Historical trend queries without impacting OLTP performance |
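The Redis row maps to the classic fixed-window counter (INCR with a TTL). The sketch below reproduces that windowing logic with an in-memory store standing in for Redis so the behavior is testable; the key naming and limits are illustrative:

```typescript
// Fixed-window rate limiter mirroring the Redis INCR-plus-TTL pattern.
// An in-memory Map stands in for Redis here; in production the counter
// and window expiry would live in Redis so all app instances share them.
class FixedWindowLimiter {
  private counters = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // `now` is injectable so the window logic can be tested deterministically.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counters.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counters.set(key, { count: 1, windowStart: now }); // new window
      return true;
    }
    entry.count++;
    return entry.count <= this.limit;
  }
}
```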
The build order is constrained by data dependencies: the work order entity is the spine that everything else references. Multi-tenant infrastructure must exist before any business logic is built, because retrofitting RLS onto existing tables is significantly more complex than enabling it from day one.[11][1]
Key finding: The highest-risk architectural decision is the multi-tenant schema — getting organization/location/membership right before any business data exists is far cheaper than the Expand → Backfill → Contract migration pattern required to add tenant_id columns to production tables with live data.[1]
| Module | Deliverables | Why First |
|---|---|---|
| Supabase project setup[18] | PostgreSQL, Auth, Storage buckets, Edge Functions runtime | Everything else depends on this |
| Multi-tenant schema[1][8] | organizations, locations, memberships tables with RLS enabled | Cannot add tenancy to existing tables without costly migration |
| Auth + JWT tenant context[8][23] | Supabase Auth; auth.tenant_id() helper function; RLS policies on all tables | Every subsequent feature inherits tenant isolation automatically |
| Next.js + Tailwind + dark mode[8][24] | App shell; PWA manifest; service worker; dark-mode-first design system | Retrofitting dark mode costs 2–3× more if done post-launch |
| RLS composite indexes[12] | CREATE INDEX ON table (tenant_id, ...) for all business tables | 99.94% performance improvement; must exist before data volume grows |
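The JWT tenant-context row has an application-side counterpart: reading the tenant claim from the same token the database-side auth.tenant_id() helper reads. This is a sketch assuming a `tenant_id` claim name; signature verification is deliberately out of scope (Supabase Auth validates tokens before they reach application code):

```typescript
// Extract the tenant claim from a JWT's payload segment. The claim
// name "tenant_id" is an assumption matching the auth.tenant_id()
// helper described above; no signature check is performed here.
function tenantIdFromJwt(jwt: string): string | null {
  const parts = jwt.split(".");
  if (parts.length !== 3) return null; // not header.payload.signature
  try {
    const payload = JSON.parse(
      Buffer.from(parts[1], "base64url").toString("utf8")
    );
    return typeof payload.tenant_id === "string" ? payload.tenant_id : null;
  } catch {
    return null; // malformed base64 or JSON
  }
}
```

Keeping this in one helper means every server-side code path derives tenant context the same way RLS does, so the two layers cannot silently disagree.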
| Module | Deliverables | Dependencies |
|---|---|---|
| Work Order CRUD[25] | Create/edit/view ROs; status workflow; location-scoped RLS | Phase 0 schema |
| Customer/Vehicle database[25] | Unified customer profiles; VIN decode; vehicle history across locations | Work Order entity |
| Real-time status board[3][16] | Supabase Realtime channels per location; cross-location owner view; presence tracking | Work Order CRUD; Realtime channel authorization |
| Role-based access control[25][11] | RBAC for owner/manager/advisor/tech/viewer; per-location permission matrix | Auth + memberships table |
| Cross-location analytics[25] | KPI dashboard; revenue, cycle time, throughput per location; comparative views | Work Order CRUD with timestamps |
| Module | Deliverables | Dependencies |
|---|---|---|
| Offline-first PWA[20][4] | Service worker; IndexedDB stores; sync queue outbox; Background Sync API | Phase 0 app shell |
| DVI checklist + camera[9] | Structured inspection forms; camera capture; offline photo queue | Offline-first module; Supabase Storage buckets |
| Photo management[9] | Tenant-scoped bucket structure; image transformations for thumbnails; CDN delivery | Supabase Storage Pro plan |
| Document management[9] | PDF storage; estimate/invoice/supplement documents; authorization signatures | Supabase Storage; Work Order CRUD |
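Tenant-scoped storage depends on deterministic object keys that policies can authorize by path prefix. Below is a sketch of one possible layout — the tenant/location/RO/category/file hierarchy is an assumption consistent with the bucket structure described above:

```typescript
type PhotoCategory = "damage" | "dvi" | "completed";

// Build a deterministic, tenant-scoped object key so storage policies
// can authorize on prefix (everything under `${tenantId}/` belongs to
// one tenant). The hierarchy shown is illustrative, not prescribed.
function photoPath(
  tenantId: string,
  locationId: string,
  roId: string,
  category: PhotoCategory,
  filename: string
): string {
  // Normalize user-supplied filenames to a safe character set.
  const safe = filename.toLowerCase().replace(/[^a-z0-9._-]/g, "_");
  return `${tenantId}/${locationId}/${roId}/${category}/${safe}`;
}
```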
| Module | Deliverables | Dependencies |
|---|---|---|
| CCC/Mitchell estimate import[5][21] | CCC Secure Share API adapter; Mitchell RepairCenter API adapter; estimate → RO conversion | Work Order CRUD; Edge Functions for API calls |
| CIECA BMS/CAPIS adapter[5][13] | XML BMS parser; JSON CAPIS client; DRP assignment intake; invoice submission | Edge Functions; Work Order CRUD |
| Parts ordering[6][15] | OEC CollisionLink integration; PartsTrader integration; PartsTech API; VIN-based validation | Work Order/estimate line items; Edge Functions |
| Customer communication[25] | SMS/email triggers on RO status changes; pre-built message templates; opt-out management | Work Order status transitions; Realtime events |
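The automated-trigger pattern in the customer communication row reduces to a mapping from RO status transitions to message templates. The sketch below is illustrative — the template text, status names, and transition rule are assumptions; opt-out filtering would run before any send:

```typescript
type RoStatus = "checked_in" | "in_repair" | "ready_for_pickup";

// Templates keyed by destination status; statuses with no entry send
// nothing. Text and status names here are illustrative placeholders.
const TEMPLATES: Partial<Record<RoStatus, (name: string) => string>> = {
  in_repair: n => `Hi ${n}, repairs on your vehicle have started.`,
  ready_for_pickup: n => `Hi ${n}, your vehicle is ready for pickup.`,
};

// Decide what (if anything) to send for a status transition.
function messageForTransition(
  from: RoStatus,
  to: RoStatus,
  customer: string
): string | null {
  if (from === to) return null; // no-op transitions send nothing
  return TEMPLATES[to]?.(customer) ?? null;
}
```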
| Module | Deliverables |
|---|---|
| Labor time tracking[25] | Clock in/out per operation; flat-rate vs. actual time; tech efficiency reporting |
| Invoicing[25] | Invoice generation from estimate; insurance vs. customer split billing; PDF generation |
| Payment processing[25] | Customer payments; insurance payment reconciliation; online-only (no offline writes) |
| Supplement workflow[5][13] | Supplement submission via CIECA BMS; approval tracking; photo attachment to supplement |
| Decision | Choice | Source |
|---|---|---|
| Multi-tenancy model | Shared-runtime (shared schema); graduate to isolated deployments for enterprise MSOs without rewriting[1] | WorkOS Developer Guide |
| Architecture start point | Monolith-first (Next.js full-stack); microservices only when bottlenecks demand[8] | Ryan O'Neill / Supabase community |
| RLS enforcement | Database-layer; application code is a second check only[11] | LockIn architecture pattern |
| Dark mode | Dark-mode-first from launch; retrofitting costs 2–3× more[24] | Smart Interface Design Patterns |
| Offline strategy | IndexedDB + Service Worker outbox; online-only for payments and parts inventory[14] | LogRocket offline-first analysis |
| EDI coexistence | "Layer on top" during transition — import from CCC/Mitchell, manage workflow, export invoices back[5] | CIECA industry analysis |
| Realtime scale | Pro plan (500 connections) for MSO pilot; Enterprise for 50+ location deployments[19] | Supabase Realtime documentation |