# Database Backups

Daily encrypted PostgreSQL backups to AWS S3, with per-project end-user exports accessible from the dashboard API.
## Overview

AuthGate runs a daily backup job via GitHub Actions. Each backup includes:

- A full `pg_dump` of the entire database
- Per-project JSON exports of end-users and their accounts
All data is compressed with gzip and encrypted with AES-256-GCM before upload. No plaintext ever reaches S3.
## Architecture

```
GitHub Actions (daily 02:00 UTC)
  -> node scripts/backup-db.ts
     1. pg_dump --format=custom (stdout)
        -> gzip -> AES-256-GCM encrypt -> S3
     2. For each project:
        SELECT end_users, end_user_accounts
        -> JSON -> gzip -> AES-256-GCM encrypt -> S3
```
The backup script (`scripts/backup-db.ts`) handles both steps. The full dump uses `pg_dump --format=custom` piped through Node's `zlib.createGzip()` and then AES-256-GCM encryption before upload. Per-project exports query `end_users` and `end_user_accounts` only — sessions are excluded because they expire.
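The gzip-then-encrypt step can be sketched as follows. This is a minimal sketch assuming the `[12-byte IV][ciphertext][16-byte auth tag]` file format described in the Encryption section; `encryptBackup` is an illustrative name, not the script's actual export.

```typescript
// Sketch of the gzip -> AES-256-GCM step. The function name and
// buffer-based signature are assumptions for illustration.
import { createCipheriv, randomBytes } from "node:crypto";
import { gzipSync } from "node:zlib";

function encryptBackup(plain: Buffer, keyHex: string): Buffer {
  const key = Buffer.from(keyHex, "hex"); // 64-char hex -> 32-byte key
  const iv = randomBytes(12);             // fresh random IV per file
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const gz = gzipSync(plain);             // compress before encrypting
  const ciphertext = Buffer.concat([cipher.update(gz), cipher.final()]);
  // File layout: [12-byte IV][ciphertext][16-byte auth tag]
  return Buffer.concat([iv, ciphertext, cipher.getAuthTag()]);
}
```

The real script presumably streams the dump (`zlib.createGzip()` piped into a cipher stream) rather than buffering it; the buffer version above just makes the byte layout explicit.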
### S3 key layout

```
backups/daily/YYYYMMDDTHHmmss/
  full.dump.gz.enc            # full pg_dump
  end-users/
    proj_abc123.json.gz.enc   # per-project end-user data
    proj_def456.json.gz.enc
```
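For illustration, the snapshot prefix for a given run time could be derived like this. The helper is hypothetical (not the script's actual naming code); it only mirrors the `backups/daily/YYYYMMDDTHHmmss/` layout above.

```typescript
// Hypothetical helper: build the S3 snapshot prefix for a run timestamp.
function snapshotPrefix(when: Date): string {
  // "2026-03-05T02:00:00.000Z" -> "20260305T020000"
  const ts = when.toISOString().replace(/[-:]/g, "").slice(0, 15);
  return `backups/daily/${ts}/`;
}
```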
Each per-project file contains:

```json
{
  "projectId": "proj_abc123",
  "timestamp": "2026-03-05T02:00:00Z",
  "endUsers": [...],
  "endUserAccounts": [...]
}
```
## Encryption
| Property | Value |
|---|---|
| Algorithm | AES-256-GCM |
| Key source | BACKUP_ENCRYPTION_KEY (64-char hex, 32 bytes) |
| IV | Random 12 bytes per file, prepended to ciphertext |
| Auth tag | 16 bytes, appended to ciphertext |
| File format | [12-byte IV][ciphertext][16-byte auth tag] |
`BACKUP_ENCRYPTION_KEY` is separate from `MFA_ENCRYPTION_KEY`. Never reuse keys between systems.
The S3 bucket also has SSE-S3 server-side encryption enabled as a second layer, but the primary encryption is application-level — S3 never sees plaintext.
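The file format table above implies a straightforward decrypt path. A minimal sketch, assuming the `[12-byte IV][ciphertext][16-byte auth tag]` layout; `decryptBackupFile` is an illustrative name, not the restore script's actual API.

```typescript
// Sketch of decrypting one backup file per the format table above.
import { createDecipheriv } from "node:crypto";
import { gunzipSync } from "node:zlib";

function decryptBackupFile(file: Buffer, keyHex: string): Buffer {
  const key = Buffer.from(keyHex, "hex");            // 32-byte key
  const iv = file.subarray(0, 12);                   // prepended IV
  const tag = file.subarray(file.length - 16);       // appended auth tag
  const ciphertext = file.subarray(12, file.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);                          // final() throws on tamper
  const gz = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
  return gunzipSync(gz);                             // undo the gzip layer
}
```

Because GCM is authenticated, `final()` rejects a truncated or modified file instead of returning garbage plaintext.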
## Retention
S3 lifecycle rules automatically expire objects after 30 days. Versioning is enabled on the bucket; object versioning does not extend retention beyond the lifecycle expiry.
For point-in-time recovery between daily snapshots, use Neon's built-in PITR — the S3 backup is the offsite compliance copy.
## GitHub Actions workflow

The workflow lives at `.github/workflows/db-backup.yml`.

- Schedule: daily at 02:00 UTC (`cron: '0 2 * * *'`)
- Manual trigger: `workflow_dispatch` (useful for testing or on-demand backups)
- On failure: GitHub Actions sends a built-in failure email
The workflow installs the PostgreSQL client (for `pg_dump`), then runs:

```sh
npx tsx scripts/backup-db.ts
```

Exit codes: 0 on success, 1 on failure with the error written to stderr.
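Put together, the workflow might look roughly like this. A sketch only: step names, action versions, and the install command are assumptions; the actual file is `.github/workflows/db-backup.yml`.

```yaml
# Illustrative sketch of db-backup.yml (versions and step names assumed).
name: db-backup
on:
  schedule:
    - cron: '0 2 * * *'   # daily at 02:00 UTC
  workflow_dispatch: {}    # manual trigger for testing / on-demand backups
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # pg_dump comes from the PostgreSQL client package
      - run: sudo apt-get update && sudo apt-get install -y postgresql-client
      - run: npx tsx scripts/backup-db.ts
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          BACKUP_ENCRYPTION_KEY: ${{ secrets.BACKUP_ENCRYPTION_KEY }}
          BACKUP_S3_BUCKET: ${{ secrets.BACKUP_S3_BUCKET }}
          BACKUP_S3_ACCESS_KEY_ID: ${{ secrets.BACKUP_S3_ACCESS_KEY_ID }}
          BACKUP_S3_SECRET_ACCESS_KEY: ${{ secrets.BACKUP_S3_SECRET_ACCESS_KEY }}
          BACKUP_S3_REGION: ${{ secrets.BACKUP_S3_REGION }}
```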
### Required GitHub Actions secrets
| Secret | Description |
|---|---|
| `DATABASE_URL` | PostgreSQL connection string |
| `BACKUP_ENCRYPTION_KEY` | 64-char hex key for AES-256-GCM encryption |
| `BACKUP_S3_BUCKET` | S3 bucket name |
| `BACKUP_S3_ACCESS_KEY_ID` | IAM access key |
| `BACKUP_S3_SECRET_ACCESS_KEY` | IAM secret key |
| `BACKUP_S3_REGION` | AWS region (`eu-central-1`) |
## Environment variables

Add these to `apps/web/.env.local` to enable the tRPC backup API in the dashboard:

```sh
BACKUP_ENCRYPTION_KEY=        # 64-char hex, separate from MFA_ENCRYPTION_KEY
BACKUP_S3_BUCKET=             # S3 bucket name
BACKUP_S3_REGION=eu-central-1
BACKUP_S3_ACCESS_KEY_ID=      # IAM credentials for backup bucket
BACKUP_S3_SECRET_ACCESS_KEY=
```
Generate a secure key with `openssl rand -hex 32`. Keep this key backed up separately — losing it makes all existing backups unrecoverable.
## Infrastructure

The S3 bucket and IAM user are provisioned by Pulumi in the `infra/` package (`infra/src/backup.ts`).
Bucket configuration:

- Region: `eu-central-1`
- Name: `authgate-db-backups-{env}`
- SSE-S3 encryption enabled
- Versioning enabled
- All public access blocked
- 30-day lifecycle expiration rule
IAM policy: scoped to `s3:PutObject`, `s3:GetObject`, `s3:ListBucket`, and `s3:DeleteObject` on this bucket only. Credentials are exported as Pulumi stack outputs for use as GitHub Actions secrets.
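The scoping described above corresponds to an IAM policy document along these lines (a sketch; the bucket name shows a hypothetical `prod` value for `{env}`, and the actual document is generated by Pulumi in `infra/src/backup.ts`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::authgate-db-backups-prod/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::authgate-db-backups-prod"
    }
  ]
}
```

Note that object-level actions apply to `bucket/*` while `s3:ListBucket` applies to the bucket ARN itself; both statements are needed.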
See the Email Infrastructure guide for the general Pulumi workflow (`pulumi stack init`, `pulumi up`, reading secret outputs).
## Restore procedures

### Full database restore

Run the restore script with the S3 key of the snapshot you want to restore:

```sh
npx tsx scripts/restore-db.ts backups/daily/20260305T020000/full.dump.gz.enc
```
Required environment variables:

```sh
DATABASE_URL=                 # target database connection string
BACKUP_ENCRYPTION_KEY=        # must match the key used to encrypt the backup
BACKUP_S3_BUCKET=
BACKUP_S3_REGION=eu-central-1
BACKUP_S3_ACCESS_KEY_ID=
BACKUP_S3_SECRET_ACCESS_KEY=
```
The script downloads the file from S3, decrypts it, decompresses it, and pipes the result into `pg_restore`.

A full restore overwrites the target database. Point `DATABASE_URL` at a scratch database or a Neon branch to avoid data loss.
### Per-project end-user restore

Use the `backup.restoreEndUsers` tRPC procedure from the dashboard or via a direct API call. The restore is additive: it inserts missing end-users and accounts but skips records that already exist (matched by ID).
See tRPC Backup API below for the full procedure reference.
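The additive matching can be sketched as a pure planning step. This helper is hypothetical: the real procedure presumably does the equivalent with insert-if-absent database writes, and it additionally reports an `errors` count for failed inserts.

```typescript
// Hypothetical sketch of the additive merge: records whose IDs already
// exist are skipped; everything else would be inserted.
type BackupRecord = { id: string };

function planRestore(backup: BackupRecord[], existingIds: Set<string>) {
  const toInsert = backup.filter((r) => !existingIds.has(r.id));
  return {
    restored: toInsert.length,                  // would be inserted
    skipped: backup.length - toInsert.length,   // already present, untouched
    toInsert,                                   // records to write
  };
}
```

Because existing rows are never updated, re-running a restore against the same snapshot is idempotent.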
## tRPC Backup API
All procedures are project-scoped — they operate on the authenticated project's data only. Cross-project access is not possible.
### `backup.listSnapshots`
Lists available daily snapshots that contain a backup for the authenticated project.
Returns: array of snapshot keys with timestamps.
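Since each snapshot key embeds its timestamp, a client can recover the backup time from the key alone. A hypothetical parser (not part of the API surface), assuming the `backups/daily/YYYYMMDDTHHmmss` key layout from the Architecture section:

```typescript
// Hypothetical helper: parse the UTC timestamp out of a snapshot key prefix.
function snapshotKeyToDate(key: string): Date {
  const m = key.match(/(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})$/);
  if (!m) throw new Error(`unrecognized snapshot key: ${key}`);
  const [, y, mo, d, h, mi, s] = m;
  return new Date(Date.UTC(+y, +mo - 1, +d, +h, +mi, +s));
}
```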
### `backup.downloadEndUsers`
Downloads and decrypts the project's end-user JSON from a given snapshot.
Input:
| Field | Type | Description |
|---|---|---|
| `snapshotKey` | string | S3 key prefix for the snapshot (e.g. `backups/daily/20260305T020000`) |
Returns: parsed JSON with `endUsers` and `endUserAccounts` arrays.
### `backup.restoreEndUsers`
Downloads, decrypts, and restores end-users from a snapshot. Inserts records that do not exist; skips records that already exist (matched by ID). Does not overwrite existing data.
Input:
| Field | Type | Description |
|---|---|---|
| `snapshotKey` | string | S3 key prefix for the snapshot to restore from |
Returns:

```typescript
{
  restored: number  // records inserted
  skipped: number   // records already present
  errors: number    // records that failed to insert
}
```