ADR 0002 — No Supabase Edge Functions; Deno standalone containers instead
Context
Supabase’s self-hosted stack ships an optional Edge Functions runtime based on Deno. When the
platform was being designed, sdk-prd.md assigned two categories of server-side work to
“Supabase Edge Functions”:
- The research-data aggregation pipeline (anonymised health data export for academic partners)
- Email trigger logic for BillionMail (transactional email — OTP delivery, streak notifications, medication reminders)
platform-roadmap.md Section 8 also referenced Edge Functions in the context of BillionMail
integration.
Memory budget on CX23
CX23 is a Hetzner CX23 instance: 2 vCPU, 4 GB RAM. The Supabase compose stack at steady state consumes approximately 2.6 GB of RAM across its containers (Kong, Auth, PostgREST, Realtime, Storage, imgproxy, pg-meta, vector). Postgres has a reservation of 768 MB. That leaves roughly 600 MB of headroom before memory pressure triggers the Linux OOM killer.
The Supabase Edge Functions runtime (Deno) requires a minimum of approximately 256 MB on top of the base stack. Under concurrent function invocations the runtime can spike to 400–600 MB. Installing it on CX23 would regularly push the host into memory contention, degrading Supabase’s core services (Auth, Realtime) for all users.
Upgrade and operational considerations
Edge Functions on self-hosted Supabase have a more complex upgrade path than the core stack.
The runtime version is pinned to the Supabase release and requires careful coordination during
supabase/supabase compose stack updates. For a small team, maintaining a pinned Deno runtime
alongside the core stack adds disproportionate operational overhead.
Existing Deno container precedent
Two server-side components were already implemented as standalone Deno containers before this decision was formalised:
- firebase-migration-bridge — temporary service that proxied Firebase Auth tokens during the Blutdruck Firebase → Supabase migration.
- RevenueCat webhook handler — receives `INITIAL_PURCHASE`, `RENEWAL`, `EXPIRATION`, and `CANCELLATION` events from RevenueCat and updates `subscriptions` rows in Postgres.
Both containers communicate with Supabase over the internal Docker network using
SUPABASE_URL=http://kong:8000 and a scoped service-role key (webhook handler) or anon key
(migration bridge). Both are deployed and managed via Dokploy on CX43, which maintains them
alongside every other hosted service.
This pattern proved simpler to deploy, upgrade, monitor, and roll back than the embedded Edge Functions runtime would have been.
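The internal-network access pattern used by both containers can be sketched as follows. This is an illustrative fragment, not the actual handler code: the constants would come from environment variables in practice, and `markSubscriptionExpired` is a hypothetical helper showing a PostgREST update through Kong.

```typescript
// Illustrative sketch of the internal Docker network access pattern.
// In real containers these values are injected via Dokploy env vars.
const SUPABASE_URL = "http://kong:8000"; // Kong gateway on the internal network
const SERVICE_ROLE_KEY = "service-role-key-from-env"; // never hardcode in practice

// PostgREST expects the key both as an `apikey` header and as a Bearer token.
function supabaseHeaders(key: string): Record<string, string> {
  return {
    apikey: key,
    Authorization: `Bearer ${key}`,
    "Content-Type": "application/json",
  };
}

// Hypothetical example: patch a subscriptions row via PostgREST
// (defined here for illustration; not invoked at module load).
async function markSubscriptionExpired(userId: string): Promise<Response> {
  return fetch(
    `${SUPABASE_URL}/rest/v1/subscriptions?user_id=eq.${encodeURIComponent(userId)}`,
    {
      method: "PATCH",
      headers: supabaseHeaders(SERVICE_ROLE_KEY),
      body: JSON.stringify({ status: "expired" }),
    },
  );
}
```

Because traffic stays on the Docker network, no TLS termination or public exposure is needed for container-to-Supabase calls.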
Decision
Supabase Edge Functions are not installed on CX23. All server-side business logic is implemented as standalone Deno containers, deployed via Dokploy on CX43, communicating with Supabase over the internal Docker network or via the public Supabase API endpoint.
Each function is its own Docker service:
- Named container in a dedicated compose file (e.g., `docker-compose.functions.yml` or a Dokploy application per function)
- Minimal Deno image (`denoland/deno:alpine`)
- Scoped environment variables — only the secrets the function actually needs
- Dokploy deployment slot with health check and restart policy
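A compose service following these points might look like the sketch below. Service names, env var names, and paths are hypothetical; the health-check command assumes the Alpine base image provides BusyBox `wget`, and the port assumes `Deno.serve`'s default of 8000.

```yaml
# docker-compose.functions.yml — illustrative sketch, not the actual file
services:
  revenuecat-webhook:            # hypothetical service name
    image: denoland/deno:alpine  # pinned to a minor version in practice
    command: ["run", "--allow-net", "--allow-env", "main.ts"]
    working_dir: /app
    volumes:
      - ./revenuecat-webhook:/app:ro
    environment:
      SUPABASE_URL: http://kong:8000
      SUPABASE_SERVICE_ROLE_KEY: ${REVENUECAT_SERVICE_ROLE_KEY}  # injected by Dokploy
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8000/health"]
      interval: 30s
      retries: 3
    restart: unless-stopped
```

Each function getting its own service (or Dokploy application) keeps deploys, rollbacks, and secrets independent per function.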
Consequences
Documentation corrections required
- `sdk-prd.md` references to “Supabase Edge Functions” for the research pipeline and email triggers are incorrect. They must be updated to describe the Deno container pattern.
- `platform-roadmap.md` Section 8 (BillionMail integration) must be updated similarly.
- Any future architecture doc should describe server-side logic as “Deno containers on CX43, deployed via Dokploy” — not as Edge Functions.
Per-function conventions
| Concern | Convention |
|---|---|
| Runtime | denoland/deno:alpine, pinned minor version |
| Entry point | main.ts at repo root |
| Secrets | Injected via Dokploy env vars; never committed |
| Supabase access | SUPABASE_URL + least-privilege key |
| Logging | console.log JSON to stdout; collected by Dokploy log viewer |
| Health check | HTTP GET /health → 200 |
| Deployment | Dokploy app per function; GitLab CI builds and pushes image |
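A minimal `main.ts` satisfying the entry-point, logging, and health-check conventions could look like this. It is a sketch under the conventions above, not the actual handler; the `Deno.serve` call is shown commented out so the module can be imported without opening a socket.

```typescript
// Illustrative main.ts skeleton following the per-function conventions.

// Structured logging: one JSON object per line on stdout,
// collected by the Dokploy log viewer.
function logJson(
  level: string,
  msg: string,
  extra: Record<string, unknown> = {},
): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    msg,
    ...extra,
  });
  console.log(line);
  return line;
}

// Request handler: GET /health → 200 for the Dokploy health check;
// everything else is logged and rejected in this skeleton.
export function handler(req: Request): Response {
  const url = new URL(req.url);
  if (url.pathname === "/health") {
    return new Response("ok", { status: 200 });
  }
  logJson("info", "unhandled request", { method: req.method, path: url.pathname });
  return new Response("not found", { status: 404 });
}

// Entry point in the container (listens on :8000 by default):
// Deno.serve(handler);
```

Keeping the handler a plain `Request → Response` function makes it testable without a running server.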
Secret scoping
Containers receive only the keys they need. The RevenueCat webhook handler receives the
service-role key because it must bypass RLS to update subscriptions. Future functions should
use the anon key plus a Postgres function (SECURITY DEFINER) wherever possible, to limit the
blast radius of a compromised container secret.
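The anon-key pattern can be sketched in Postgres DDL. Table and function names here are hypothetical; the point is that the function runs with its definer’s privileges while the container holds only the anon key, and `EXECUTE` is granted narrowly.

```sql
-- Hypothetical sketch of the anon-key + SECURITY DEFINER pattern.
-- The function runs as its owner, so the container never needs
-- the service-role key to write this table.
create function public.record_webhook_event(p_payload jsonb)
returns void
language sql
security definer
set search_path = public
as $$
  insert into webhook_events (payload) values (p_payload);
$$;

-- Lock down: only PostgREST's anon role may call it via /rpc.
revoke all on function public.record_webhook_event(jsonb) from public;
grant execute on function public.record_webhook_event(jsonb) to anon;
```

A compromised container secret then grants only the ability to call this one function, not to bypass RLS across the database.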
Scalability
Standalone containers can be scaled horizontally by Dokploy (replica count) without touching the Supabase stack. Edge Functions, by contrast, share the Supabase runtime and cannot be scaled independently. For low-volume webhook handling, a single replica is sufficient; the option to scale exists without re-architecting.
Monitoring
GlitchTip (self-hosted on CX43) receives errors from Deno containers via the Sentry-compatible
SDK (@sentry/deno). Each container initialises GlitchTip with its own DSN and a service tag
for filtering in the GlitchTip dashboard.
Alternatives considered
Install Edge Functions on CX23
Would keep all Supabase-related server logic co-located with the Supabase stack. Rejected due to the memory budget constraint (risks OOM on a 4 GB host already running at ~84% baseline utilisation) and the more complex upgrade path. Revisit if CX23 is upgraded to a larger instance tier and if the Edge Functions runtime stabilises its self-hosted upgrade story.
Supabase Cloud’s hosted Edge Functions
Would eliminate the runtime management concern entirely. Rejected because health data must remain on EU infrastructure under direct control. Supabase Cloud’s EU region is a shared multi-tenant environment; self-hosted Supabase on CX23 with dedicated Postgres provides stronger data residency guarantees. This is a hard requirement for GDPR compliance in the health data context.
AWS Lambda / Cloudflare Workers
Serverless functions on third-party platforms. Rejected for the same data residency reason: health data payloads (BP readings, blood sugar logs, medication records) would transit non-EU or uncontrolled infrastructure. Additionally, adding AWS or Cloudflare to the stack contradicts the deliberate strategy of keeping the infrastructure footprint minimal and under direct control.