Deploying to Production

Best practices for deploying Postbase applications to production.

Overview

A typical production deployment:

  1. Provision cloud database
  2. Run migrations
  3. Configure environment
  4. Deploy application
  5. Enable backups and PITR

Step 1: Provision Database

Create Project

postbase cloud projects create myapp

Provision Production Database

postbase cloud provision production \
  -p myapp \
  --region us-west1 \
  --cpu 2 \
  --memory 4096 \
  --storage 50

Get Connection String

postbase cloud url production -p myapp --copy

Step 2: Run Migrations

Apply Migrations

# Set environment variable
export DATABASE_URL=$(postbase cloud url production -p myapp --json | jq -r '.connection_string')
 
# Run migrations
postbase migrate up --database-url "$DATABASE_URL"

Verify Migration Status

postbase migrate status --database-url "$DATABASE_URL"

Step 3: Configure Environment

Environment Variables

# .env.production
# sslmode=disable is only appropriate behind the Railway TCP proxy
# (see the Security Checklist below); otherwise require SSL.
DATABASE_URL="postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable"
NODE_ENV="production"

Secrets Management

Use your platform's secret management:

Vercel:
vercel env add DATABASE_URL production
Railway:
railway variables set DATABASE_URL="..."
AWS:
aws secretsmanager create-secret \
  --name myapp/database-url \
  --secret-string "postgresql://..."
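
Whichever platform you use, fail fast at startup when a required secret is missing rather than discovering it on the first query. A minimal sketch (the requireEnv helper is illustrative, not part of the Postbase SDK):

```typescript
// Illustrative helper: validate required environment variables at startup
export function requireEnv(name: string): string {
  const value = process.env[name]
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Fail fast before the app starts accepting traffic:
// const databaseUrl = requireEnv('DATABASE_URL')
```

Calling this once during boot turns a misconfigured deployment into an immediate, obvious crash instead of runtime connection errors.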

Step 4: Enable Data Protection

Enable PITR

postbase cloud pitr enable -p myapp -d production

Verify Backups

# Check automated backups
postbase cloud backups list -p myapp -d production
 
# Check PITR status
postbase cloud pitr status -p myapp -d production

Step 5: Deploy Application

Next.js on Vercel

# Build and deploy
vercel --prod
 
# Or via Git push
git push origin main

Node.js on Railway

# Link to Railway
railway link
 
# Deploy
railway up

Docker

FROM node:22-alpine
 
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
 
ENV NODE_ENV=production
# Run as the unprivileged user that ships with the official Node image
USER node
CMD ["node", "dist/index.js"]

Connection Best Practices

Connection Pooling

Use connection pooling for production:

import { createClient } from '@postbase/sdk'
 
const db = createClient({
  connectionString: process.env.DATABASE_URL,
  pool: {
    min: 2,
    max: 10,
    idleTimeoutMillis: 30000,
  },
})
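
Pooled connections can still fail transiently (failovers, proxy restarts), so queries that are safe to repeat are worth wrapping in a small backoff helper. A sketch, assuming the operation is idempotent (withRetry is illustrative, not an SDK API):

```typescript
// Illustrative: retry a transient operation with exponential backoff
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      // Wait 100ms, 200ms, 400ms... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
  throw lastError
}

// Usage: const users = await withRetry(() => db.from('users').execute())
```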

Health Checks

Implement database health checks:

app.get('/health', async (req, res) => {
  try {
    await db.query('SELECT 1')
    res.json({ status: 'healthy', database: 'connected' })
  } catch (error) {
    res.status(503).json({ status: 'unhealthy', database: 'disconnected' })
  }
})

Graceful Shutdown

Close connections on shutdown:

process.on('SIGTERM', async () => {
  console.log('Shutting down...')
  // Failsafe: force exit if close() hangs on stuck connections
  const timer = setTimeout(() => process.exit(1), 10_000)
  await db.close()
  clearTimeout(timer)
  process.exit(0)
})

Security Checklist

Credentials

  • Use environment variables, not hardcoded values
  • Never commit .env files to version control
  • Rotate passwords periodically
  • Use different credentials per environment

Network

  • Use SSL in production (if not using Railway proxy)
  • Configure firewall rules if needed
  • Use private networking when available

Access

  • Limit database user permissions
  • Use separate users for app vs admin
  • Enable query logging for auditing

Monitoring

Application Metrics

Track key metrics:

import { metrics } from 'your-metrics-library'
 
// Query timing
const start = Date.now()
const result = await db.from('users').execute()
metrics.timing('db.query.duration', Date.now() - start)
 
// Connection pool
metrics.gauge('db.pool.active', db.pool.totalCount)
metrics.gauge('db.pool.idle', db.pool.idleCount)

Database Metrics

Monitor via Postbase:

# Check WAL receiver health
postbase cloud pitr receiver -p myapp -d production
 
# Check backup status
postbase cloud backups list -p myapp -d production

Alerting

Set up alerts for:

  • Database connection failures
  • High query latency (>100ms)
  • Backup failures
  • WAL lag >5 minutes
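
The latency alert above can be sketched as a threshold check over recent samples; using p95 rather than a single measurement keeps one slow query from paging anyone (the function name and threshold are illustrative):

```typescript
// Illustrative: fire when the p95 of recent query latencies exceeds a threshold
function latencyAlertFires(samplesMs: number[], thresholdMs = 100): boolean {
  if (samplesMs.length === 0) return false
  const sorted = [...samplesMs].sort((a, b) => a - b)
  const index = Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))
  return sorted[index] > thresholdMs
}
```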

Disaster Recovery

Regular Backups

Automated backups run daily. Create manual backups before major changes:

postbase cloud backups create -p myapp -d production

Point-in-Time Recovery

Restore to any point in time:

postbase cloud pitr restore \
  -p myapp \
  -d production \
  --target-time "2026-01-25T14:00:00Z"

Recovery Testing

Test recovery quarterly:

# Provision test environment
postbase cloud provision dr-test -p myapp
 
# Restore to test
postbase cloud pitr restore \
  -p myapp \
  -d dr-test \
  --source production \
  --target-time "2026-01-25T12:00:00Z"
 
# Verify data
postbase cloud psql -p myapp -d dr-test \
  -c "SELECT COUNT(*) FROM users"
 
# Cleanup
postbase cloud destroy dr-test -p myapp

Migration Strategies

Rolling Updates

For zero-downtime migrations:

  1. Add new columns/tables (backward compatible)
  2. Deploy new application code
  3. Migrate data if needed
  4. Remove old columns/tables
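
Steps 1 and 3 above can be sketched against any client exposing a node-postgres-style query interface; the table, columns, and expandAndBackfill helper here are illustrative, not part of the Postbase SDK:

```typescript
interface QueryResult { rowCount: number }
interface Queryable { query(sql: string): Promise<QueryResult> }

// Illustrative expand phase of a rolling update: an additive schema change,
// then a batched backfill that avoids long-held locks.
async function expandAndBackfill(db: Queryable): Promise<void> {
  // Backward compatible: old code simply ignores the new nullable column
  await db.query('ALTER TABLE users ADD COLUMN IF NOT EXISTS full_name text')

  // Backfill in small batches; loop until no unfilled rows remain
  let updated: number
  do {
    const result = await db.query(
      `UPDATE users SET full_name = first_name || ' ' || last_name
       WHERE id IN (SELECT id FROM users WHERE full_name IS NULL LIMIT 1000)`,
    )
    updated = result.rowCount
  } while (updated > 0)

  // The contract phase (dropping first_name/last_name) ships in a later
  // release, only after all old application code has been retired.
}
```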

Blue-Green Deployment

# Create new environment
postbase cloud provision production-new -p myapp
 
# Restore data
postbase cloud backups restore $LATEST_BACKUP \
  -p myapp -d production-new
 
# Run new migrations
postbase migrate up --database-url "$NEW_URL"
 
# Verify
postbase cloud psql -p myapp -d production-new \
  -c "SELECT * FROM _postbase_migrations"
 
# Switch traffic
# (in your load balancer / DNS)
 
# Cleanup old environment later
postbase cloud destroy production-old -p myapp

Cost Optimization

Right-Sizing

Start small and scale up:

# Start with minimal config
postbase cloud provision production \
  -p myapp \
  --cpu 1 \
  --memory 1024 \
  --storage 10
 
# Scale up when needed
postbase cloud scale production \
  -p myapp \
  --cpu 2 \
  --memory 4096

Cleanup

Remove unused resources:

# List all databases
postbase cloud databases list -p myapp
 
# Remove old test environments
postbase cloud destroy staging-old -p myapp