Deploying to Production
Best practices for deploying Postbase applications to production.
Overview
A typical production deployment:
- Provision cloud database
- Run migrations
- Configure environment
- Deploy application
- Enable backups and PITR
Step 1: Provision Database
Create Project
postbase cloud projects create myapp
Provision Production Database
postbase cloud provision production \
-p myapp \
--region us-west1 \
--cpu 2 \
--memory 4096 \
--storage 50
Get Connection String
postbase cloud url production -p myapp --copy
Step 2: Run Migrations
Apply Migrations
# Set environment variable
export DATABASE_URL=$(postbase cloud url production -p myapp --json | jq -r '.connection_string')
# Run migrations
postbase migrate up --database-url $DATABASE_URL
Verify Migration Status
postbase migrate status --database-url $DATABASE_URL
Step 3: Configure Environment
Environment Variables
# .env.production
DATABASE_URL="postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable"
NODE_ENV="production"
Secrets Management
Use your platform's secret management:
Vercel:
vercel env add DATABASE_URL production
Railway:
railway variables set DATABASE_URL="..."
AWS Secrets Manager:
aws secretsmanager create-secret \
--name myapp/database-url \
--secret-string "postgresql://..."
Step 4: Enable Data Protection
Enable PITR
postbase cloud pitr enable -p myapp -d production
Verify Backups
# Check automated backups
postbase cloud backups list -p myapp -d production
# Check PITR status
postbase cloud pitr status -p myapp -d production
Step 5: Deploy Application
Next.js on Vercel
# Build and deploy
vercel --prod
# Or via Git push
git push origin main
Node.js on Railway
# Link to Railway
railway link
# Deploy
railway up
Docker
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
ENV NODE_ENV=production
CMD ["node", "dist/index.js"]Connection Best Practices
Connection Pooling
Use connection pooling for production:
import { createClient } from '@postbase/sdk'

const db = createClient({
  connectionString: process.env.DATABASE_URL,
  pool: {
    min: 2,
    max: 10,
    idleTimeoutMillis: 30000,
  },
})
Health Checks
Implement database health checks:
app.get('/health', async (req, res) => {
  try {
    await db.query('SELECT 1')
    res.json({ status: 'healthy', database: 'connected' })
  } catch (error) {
    res.status(503).json({ status: 'unhealthy', database: 'disconnected' })
  }
})
Graceful Shutdown
Close connections on shutdown:
process.on('SIGTERM', async () => {
  console.log('Shutting down...')
  await db.close()
  process.exit(0)
})
Security Checklist
Credentials
- Use environment variables, not hardcoded values (see the sketch after this list)
- Never commit .env files to version control
- Rotate passwords periodically
- Use different credentials per environment
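As a minimal sketch of the first point, validate required environment variables at startup and fail fast when any are missing (the variable list and file name here are illustrative):
// config.ts - fail fast when required settings are missing
const required = ['DATABASE_URL', 'NODE_ENV'] as const

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
}

export const config = {
  databaseUrl: process.env.DATABASE_URL!,
  nodeEnv: process.env.NODE_ENV!,
}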
Network
- Use SSL in production if not using the Railway proxy (see the sketch after this list)
- Configure firewall rules if needed
- Use private networking when available
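As a rough sketch of the SSL point, the application can request SSL by appending sslmode=require to the connection string when it is not already set; skip this when connecting through the Railway proxy:
// minimal sketch: request SSL unless the connection string already sets sslmode
const base = process.env.DATABASE_URL ?? ''
export const databaseUrl = base.includes('sslmode=')
  ? base
  : `${base}${base.includes('?') ? '&' : '?'}sslmode=require`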
Access
- Limit database user permissions
- Use separate users for app vs admin
- Enable query logging for auditing
Monitoring
Application Metrics
Track key metrics:
import { metrics } from 'your-metrics-library'
// Query timing
const start = Date.now()
const result = await db.from('users').execute()
metrics.timing('db.query.duration', Date.now() - start)
// Connection pool
metrics.gauge('db.pool.active', db.pool.totalCount)
metrics.gauge('db.pool.idle', db.pool.idleCount)
Database Metrics
Monitor via Postbase:
# Check WAL receiver health
postbase cloud pitr receiver -p myapp -d production
# Check backup status
postbase cloud backups list -p myapp -d production
Alerting
Set up alerts for:
- Database connection failures
- High query latency (>100 ms; see the sketch after this list)
- Backup failures
- WAL lag >5 minutes
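As a sketch of the latency alert, time queries in the application and emit a counter when they exceed the 100 ms threshold; the metrics library is the same placeholder used above, and the increment call is an assumption about its API:
import { metrics } from 'your-metrics-library'

// minimal sketch: flag queries that exceed the 100 ms latency threshold
const QUERY_LATENCY_THRESHOLD_MS = 100

export async function timedQuery<T>(name: string, run: () => Promise<T>): Promise<T> {
  const start = Date.now()
  try {
    return await run()
  } finally {
    const elapsed = Date.now() - start
    metrics.timing(`db.query.${name}.duration`, elapsed)
    if (elapsed > QUERY_LATENCY_THRESHOLD_MS) {
      // placeholder call: emit a counter your alerting system can watch
      metrics.increment(`db.query.${name}.slow`)
    }
  }
}

// usage: const users = await timedQuery('users.list', () => db.from('users').execute())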
Disaster Recovery
Regular Backups
Automated backups run daily. Create manual backups before major changes:
postbase cloud backups create -p myapp -d production
Point-in-Time Recovery
Restore to any point in time:
postbase cloud pitr restore \
-p myapp \
-d production \
--target-time "2026-01-25T14:00:00Z"Recovery Testing
Test recovery quarterly:
# Provision test environment
postbase cloud provision dr-test -p myapp
# Restore to test
postbase cloud pitr restore \
-p myapp \
-d dr-test \
--source production \
--target-time "2026-01-25T12:00:00Z"
# Verify data
postbase cloud psql -p myapp -d dr-test \
-c "SELECT COUNT(*) FROM users"
# Cleanup
postbase cloud destroy dr-test -p myapp
Migration Strategies
Rolling Updates
For zero-downtime migrations:
- Add new columns/tables (backward compatible)
- Deploy new application code that works with both schemas (see the sketch after this list)
- Migrate data if needed
- Remove old columns/tables
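As an illustration of step 2, the new application code can read the new column and fall back to the old one until the data migration completes; the full_name and name columns are hypothetical:
// minimal sketch of schema-tolerant code during a rolling migration
// (full_name is the hypothetical new column, name the old one)
interface UserRow {
  id: string
  name?: string       // old column, dropped in the final step
  full_name?: string  // new column, added in the first step
}

export function displayName(user: UserRow): string {
  // prefer the new column, fall back to the old one until backfill finishes
  return user.full_name ?? user.name ?? 'unknown'
}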
Blue-Green Deployment
# Create new environment
postbase cloud provision production-new -p myapp
# Restore data
postbase cloud backups restore $LATEST_BACKUP \
-p myapp -d production-new
# Run new migrations
postbase migrate up --database-url $NEW_URL
# Verify
postbase cloud psql -p myapp -d production-new \
-c "SELECT * FROM _postbase_migrations"
# Switch traffic
# (in your load balancer / DNS)
# Cleanup old environment later
postbase cloud destroy production-old -p myapp
Cost Optimization
Right-Sizing
Start small and scale up:
# Start with minimal config
postbase cloud provision production \
-p myapp \
--cpu 1 \
--memory 1024 \
--storage 10
# Scale up when needed
postbase cloud scale production \
-p myapp \
--cpu 2 \
--memory 4096
Cleanup
Remove unused resources:
# List all databases
postbase cloud databases list -p myapp
# Remove old test environments
postbase cloud destroy staging-old -p myapp