# Postbase > Local-first PostgreSQL development platform with cloud provisioning ## Getting Started Get up and running with Postbase in under 5 minutes. ### Prerequisites Before installing Postbase, ensure you have: * **Node.js** 20 or later * **pnpm** 9 or later (recommended) or npm * **Docker** Desktop or Docker Engine * **Git** (for version control) ### Installation #### From npm (Recommended) ```bash # Install globally npm install -g postbase # Or with pnpm pnpm add -g postbase ``` #### From Source ```bash # Clone the repository git clone https://github.com/zeroexcore/postbase.git cd postbase # Install dependencies pnpm install # Build all packages pnpm build # Link CLI globally pnpm --filter @postbase/cli link --global ``` ### Start the Daemon The Postbase daemon manages your local PostgreSQL instance: ```bash postbase start ``` This will: 1. Start the Postbase API server on port 9432 2. Pull and start a PostgreSQL Docker container 3. Initialize the default database :::info The first run may take a moment to pull the PostgreSQL Docker image (\~400MB). ::: ### Create Your First Database ```bash # Create a new database postbase db create myapp ``` ### Verify Installation ```bash # Check daemon status postbase status # List databases postbase db list # Get connection details postbase connect myapp ``` You should see output like: ``` Database: myapp Connection Details: Host: localhost Port: 5432 Database: myapp User: postgres Password: postgres Connection String: postgresql://postgres:postgres@localhost:5432/myapp ``` ### Open the Admin UI Start the Admin UI development server: ```bash # From the postbase directory pnpm --filter @postbase/ui dev ``` Then open [http://localhost:3000](http://localhost:3000) in your browser. ### Next Steps * [Create migrations](/local/migrations) to define your schema * [Explore the Admin UI](/local/admin-ui) to manage your database * [Set up cloud deployment](/cloud/overview) for staging/production ## Installation Detailed installation instructions for different environments. ### System Requirements | Requirement | Minimum | Recommended | | ----------- | -------- | ----------- | | Node.js | 20.x | 22.x | | pnpm | 9.x | Latest | | Docker | 20.x | Latest | | RAM | 4GB | 8GB+ | | Disk | 2GB free | 10GB+ | ### Installing Node.js #### macOS ```bash # Using Homebrew brew install node@22 # Or using nvm curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash nvm install 22 nvm use 22 ``` #### Linux (Ubuntu/Debian) ```bash # Using NodeSource curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash - sudo apt-get install -y nodejs ``` #### Windows Download the installer from [nodejs.org](https://nodejs.org/) or use: ```powershell # Using winget winget install OpenJS.NodeJS.LTS # Or using Chocolatey choco install nodejs-lts ``` ### Installing pnpm ```bash # Using npm npm install -g pnpm # Or using Corepack (Node.js 16.13+) corepack enable corepack prepare pnpm@latest --activate ``` ### Installing Docker #### macOS ```bash # Using Homebrew brew install --cask docker # Then start Docker Desktop from Applications ``` #### Linux ```bash # Ubuntu/Debian curl -fsSL https://get.docker.com | sh sudo usermod -aG docker $USER # Log out and back in ``` #### Windows Download [Docker Desktop for Windows](https://www.docker.com/products/docker-desktop/). 
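On any platform, it is worth confirming Docker actually works before moving on to Postbase itself. A quick smoke test (both are standard Docker CLI commands):

```bash
# Should print server details without errors
docker info

# Pulls a tiny test image and runs it once
docker run --rm hello-world
```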
### Installing Postbase

#### Option 1: npm Global Install

```bash
npm install -g postbase
```

#### Option 2: pnpm Global Install

```bash
pnpm add -g postbase
```

#### Option 3: From Source

```bash
# Clone repository
git clone https://github.com/zeroexcore/postbase.git
cd postbase

# Install dependencies
pnpm install

# Build all packages
pnpm build

# Link CLI globally
cd packages/cli
pnpm link --global
```

#### Option 4: npx (No Install)

```bash
# Run without installing
npx postbase start
npx postbase db create myapp
```

### Verify Installation

```bash
# Check CLI version
postbase --version

# Check Docker is running
docker info

# Start Postbase
postbase start

# Check status
postbase status
```

Expected output:

```
Postbase Status

Daemon: Running (PID: 12345)
API: http://localhost:9432
PostgreSQL: Running (Port: 5432)
Databases: 2

Databases:
- myapp (12.5 MB)
- demo (3.2 MB)
```

### Uninstalling

#### npm

```bash
npm uninstall -g postbase
```

#### pnpm

```bash
pnpm remove -g postbase
```

#### Complete Cleanup

```bash
# Stop daemon
postbase stop

# Remove Docker container
docker rm -f postbase-postgres

# Remove data directory
rm -rf ~/.postbase
```

### Troubleshooting Installation

#### Docker not found

Ensure Docker is installed and the daemon is running:

```bash
docker info
```

If not running, start Docker Desktop or:

```bash
# Linux
sudo systemctl start docker
```

#### Permission denied

On Linux, add your user to the docker group:

```bash
sudo usermod -aG docker $USER
# Log out and back in
```

#### Port already in use

If port 5432 is already in use:

```bash
# Check what's using the port
lsof -i :5432

# Or start PostgreSQL on a different port
postbase start --pg-port 5433
```

#### Node.js version mismatch

Ensure you're using Node.js 20+:

```bash
node --version
# Should show v20.x.x or later
```

## Quick Start

Build a complete application with Postbase in 10 minutes.
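The walkthrough below assumes the CLI and daemon from the sections above are installed and working. A quick sanity check before starting:

```bash
postbase --version
postbase status
```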
### What We'll Build

A simple task management API with:

* Users table with authentication fields
* Tasks table with foreign key to users
* Database migrations
* Type-safe queries with the SDK

### Step 1: Initialize Project

```bash
# Create project directory
mkdir my-task-app
cd my-task-app

# Initialize npm project
npm init -y

# Install dependencies
npm install @postbase/sdk pg dotenv
npm install -D typescript @types/node tsx
```

### Step 2: Start Postbase

```bash
# Start the daemon (in another terminal)
postbase start

# Create database
postbase db create tasks

# Initialize Postbase config
postbase init
```

This creates a `postbase.toml` file:

```toml
[project]
name = "my-task-app"

[database]
name = "tasks"

[migrations]
directory = "./migrations"
schema = "public"
```

### Step 3: Create Migrations

```bash
# Create users migration
postbase migrate new create_users
```

Edit `migrations/YYYYMMDD_create_users.up.sql`:

```sql
-- Create users table
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(255) NOT NULL,
  password_hash VARCHAR(255) NOT NULL,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Create index for email lookups
CREATE INDEX idx_users_email ON users(email);
```

Edit `migrations/YYYYMMDD_create_users.down.sql`:

```sql
DROP TABLE IF EXISTS users;
```

```bash
# Create tasks migration
postbase migrate new create_tasks
```

Edit `migrations/YYYYMMDD_create_tasks.up.sql`:

```sql
-- Create tasks table
CREATE TABLE tasks (
  id SERIAL PRIMARY KEY,
  user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  title VARCHAR(255) NOT NULL,
  description TEXT,
  status VARCHAR(50) DEFAULT 'pending',
  due_date DATE,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Create indexes
CREATE INDEX idx_tasks_user_id ON tasks(user_id);
CREATE INDEX idx_tasks_status ON tasks(status);
```

### Step 4: Apply Migrations

```bash
# Apply all pending migrations
postbase migrate up

# Check status
postbase migrate status
```

Output:

```
Migration Status for tasks (public)

Applied:
✓ 20260125_create_users (applied: 2026-01-25 10:30:00)
✓ 20260125_create_tasks (applied: 2026-01-25 10:30:01)

Pending: 0
```

### Step 5: Generate Types

```bash
# Generate TypeScript types
postbase types generate \
  --database tasks \
  --all \
  --output ./src/db/types.ts
```

This creates `src/db/types.ts`:

```typescript
export interface Database {
  public: {
    Tables: {
      users: {
        Row: {
          id: number
          email: string
          name: string
          password_hash: string
          created_at: Date
          updated_at: Date
        }
        Insert: {
          email: string
          name: string
          password_hash: string
          created_at?: Date
          updated_at?: Date
        }
        Update: {
          email?: string
          name?: string
          password_hash?: string
          updated_at?: Date
        }
      }
      tasks: {
        Row: {
          id: number
          user_id: number
          title: string
          description: string | null
          status: string
          due_date: Date | null
          created_at: Date
          updated_at: Date
        }
        Insert: {
          user_id: number
          title: string
          description?: string | null
          status?: string
          due_date?: Date | null
        }
        Update: {
          title?: string
          description?: string | null
          status?: string
          due_date?: Date | null
          updated_at?: Date
        }
      }
    }
  }
}
```

### Step 6: Create Database Client

Create `src/db/client.ts`:

```typescript
import { createClient } from '@postbase/sdk'
import type { Database } from './types'

export const db = createClient<Database>({
  connectionString:
    process.env.DATABASE_URL ||
    'postgresql://postgres:postgres@localhost:5432/tasks',
})
```

### Step 7: Build Your Application

Create
`src/index.ts`: ```typescript import { db } from './db/client' async function main() { // Create a user const user = await db .from('users') .insert({ email: 'alice@example.com', name: 'Alice', password_hash: 'hashed_password_here', }) .returning() .single() console.log('Created user:', user) // Create some tasks await db.from('tasks').insert([ { user_id: user.id, title: 'Learn Postbase', status: 'in_progress' }, { user_id: user.id, title: 'Build an app', status: 'pending' }, { user_id: user.id, title: 'Deploy to cloud', status: 'pending' }, ]) // Query tasks const tasks = await db .from('tasks') .eq('user_id', user.id) .order('created_at', { ascending: false }) .execute() console.log('Tasks:', tasks) // Update a task await db .from('tasks') .eq('title', 'Learn Postbase') .update({ status: 'completed' }) // Get completed tasks const completed = await db .from('tasks') .eq('status', 'completed') .execute() console.log('Completed tasks:', completed) } main().catch(console.error) ``` ### Step 8: Run Your Application ```bash # Run with tsx npx tsx src/index.ts ``` Output: ``` Created user: { id: 1, email: 'alice@example.com', name: 'Alice', ... } Tasks: [ { id: 3, title: 'Deploy to cloud', status: 'pending', ... }, { id: 2, title: 'Build an app', status: 'pending', ... }, { id: 1, title: 'Learn Postbase', status: 'in_progress', ... }, ] Completed tasks: [ { id: 1, title: 'Learn Postbase', status: 'completed', ... }, ] ``` ### Step 9: Deploy to Cloud ```bash # Login to Postbase Cloud postbase cloud login # Create a project postbase cloud projects create my-task-app # Provision staging database postbase cloud provision staging # Get connection string postbase cloud url staging ``` Update your `.env`: ```env DATABASE_URL=postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/my_task_app_staging ``` ### Next Steps * [Enable backups](/backups/automated) for data protection * [Set up PITR](/backups/pitr) for point-in-time recovery * [Add realtime](/sdk/realtime) for live updates * [Deploy to production](/guides/production) ## SDK Installation Install and configure the Postbase SDK. 
### Installation

```bash
npm install @postbase/sdk pg
```

Or with pnpm:

```bash
pnpm add @postbase/sdk pg
```

### TypeScript Support

Install type definitions:

```bash
npm install -D @types/pg typescript
```

### Basic Setup

#### Create Client

```typescript
import { createClient } from '@postbase/sdk'

const db = createClient({
  connectionString: process.env.DATABASE_URL,
})
```

#### With Types

```typescript
import { createClient } from '@postbase/sdk'
import type { Database } from './db/types'

const db = createClient<Database>({
  connectionString: process.env.DATABASE_URL,
})
```

### Configuration Options

#### Connection String

```typescript
const db = createClient({
  connectionString: 'postgresql://user:password@host:5432/database',
})
```

#### Individual Parameters

```typescript
const db = createClient({
  host: 'localhost',
  port: 5432,
  database: 'myapp',
  user: 'postgres',
  password: 'postgres',
})
```

#### Connection Pool

```typescript
const db = createClient({
  connectionString: process.env.DATABASE_URL,
  pool: {
    min: 2, // Minimum connections
    max: 10, // Maximum connections
    idleTimeoutMillis: 30000, // Close idle connections after 30s
  },
})
```

#### SSL Configuration

```typescript
import fs from 'node:fs'

// Require SSL
const db = createClient({
  connectionString: process.env.DATABASE_URL,
  ssl: true,
})

// With certificate
const db = createClient({
  connectionString: process.env.DATABASE_URL,
  ssl: {
    ca: fs.readFileSync('/path/to/ca.crt'),
    rejectUnauthorized: true,
  },
})

// Railway (no SSL needed)
const db = createClient({
  connectionString: process.env.DATABASE_URL + '?sslmode=disable',
})
```

### Environment Setup

#### Development

```env
# .env.development
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/myapp"
```

#### Production

```env
# .env.production
DATABASE_URL="postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable"
```

#### Loading Environment

```typescript
import 'dotenv/config'
import { createClient } from '@postbase/sdk'

const db = createClient({
  connectionString: process.env.DATABASE_URL!,
})
```

### Generate Types

Generate TypeScript types from your database:

```bash
postbase types generate \
  --database myapp \
  --all \
  --output ./src/db/types.ts
```

See [Type Generation](/sdk/type-generation) for details.

### Project Structure

Recommended file structure:

```
src/
├── db/
│   ├── client.ts    # Database client
│   └── types.ts     # Generated types
├── services/
│   └── users.ts     # Service using db client
└── index.ts
```

#### db/client.ts

```typescript
import { createClient } from '@postbase/sdk'
import type { Database } from './types'

export const db = createClient<Database>({
  connectionString: process.env.DATABASE_URL!,
  pool: {
    min: 2,
    max: 10,
  },
})

export type { Database }
```

#### services/users.ts

```typescript
import { db } from '../db/client'

export async function getActiveUsers() {
  return db.from('users').eq('active', true).execute()
}

export async function getUserById(id: number) {
  return db.from('users').eq('id', id).single()
}
```

### Framework Integration

#### Express

```typescript
import express from 'express'
import { db } from './db/client'

const app = express()

app.get('/users', async (req, res) => {
  const users = await db.from('users').execute()
  res.json(users)
})

// Graceful shutdown
process.on('SIGTERM', async () => {
  await db.close()
  process.exit(0)
})
```

#### Next.js

```typescript
// lib/db.ts
import { createClient } from '@postbase/sdk'
import type { Database } from './types'

const globalForDb = globalThis as { db?: ReturnType<typeof createClient<Database>> }

export const db =
  globalForDb.db ??
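  // Next.js hot reload re-evaluates modules in development; stashing the
  // client on globalThis reuses one pool instead of creating one per reload.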
  createClient<Database>({
    connectionString: process.env.DATABASE_URL!,
  })

if (process.env.NODE_ENV !== 'production') {
  globalForDb.db = db
}
```

#### Fastify

```typescript
import Fastify from 'fastify'
import { db } from './db/client'

const fastify = Fastify()

fastify.get('/users', async () => {
  return db.from('users').execute()
})

fastify.addHook('onClose', async () => {
  await db.close()
})
```

### Connection Lifecycle

#### Opening Connections

Connections are opened lazily on first query:

```typescript
const db = createClient({ connectionString: '...' })
// No connection yet

const users = await db.from('users').execute()
// Connection opened here
```

#### Closing Connections

Always close connections when done:

```typescript
// Graceful shutdown
process.on('SIGTERM', async () => {
  await db.close()
  process.exit(0)
})
```

#### Health Check

Verify database connectivity:

```typescript
app.get('/health', async (req, res) => {
  try {
    await db.raw(sql`SELECT 1`)
    res.json({ database: 'connected' })
  } catch (error) {
    res.status(503).json({ database: 'disconnected' })
  }
})
```

### Troubleshooting

#### Connection Refused

```
Error: connect ECONNREFUSED 127.0.0.1:5432
```

* Ensure PostgreSQL is running
* Check host and port in connection string

#### Authentication Failed

```
Error: password authentication failed for user "postgres"
```

* Verify username and password
* Check `pg_hba.conf` allows your connection

#### SSL Required

```
Error: The server does not support SSL connections
```

* Add `?sslmode=disable` for Railway
* Or configure SSL properly for production

## SDK Overview

Type-safe PostgreSQL client with Supabase-compatible query builder.

### Features

* **Type-safe queries** - Full TypeScript support from database schema
* **Supabase-compatible API** - Familiar `.eq()`, `.single()`, `.order()` methods
* **Kysely integration** - Powerful query builder under the hood
* **RPC support** - Call PostgreSQL functions with type safety
* **Realtime subscriptions** - LISTEN/NOTIFY with automatic reconnection

### Installation

```bash
npm install @postbase/sdk pg
npm install -D @types/pg
```

### Quick Start

```typescript
import { createClient } from '@postbase/sdk'
import type { Database } from './db/types'

// Create client
const db = createClient<Database>({
  connectionString: process.env.DATABASE_URL,
})

// Query data
const users = await db
  .from('users')
  .eq('active', true)
  .order('created_at', { ascending: false })
  .limit(10)
  .execute()

// Insert data
const newUser = await db
  .from('users')
  .insert({ email: 'alice@example.com', name: 'Alice' })
  .returning()
  .single()

// Update data
await db
  .from('users')
  .eq('id', 1)
  .update({ name: 'Alice Smith' })

// Delete data
await db
  .from('users')
  .eq('id', 1)
  .delete()
```

### Generating Types

Generate TypeScript types from your database:

```bash
postbase types generate \
  --database myapp \
  --all \
  --output ./src/db/types.ts
```

This creates fully typed interfaces:

```typescript
export interface Database {
  public: {
    Tables: {
      users: {
        Row: {
          id: number
          email: string
          name: string
          created_at: Date
        }
        Insert: {
          email: string
          name: string
          created_at?: Date
        }
        Update: {
          email?: string
          name?: string
        }
      }
    }
  }
}
```

### Query Builder

#### Select Queries

```typescript
// All rows
const users = await db.from('users').execute()

// With conditions
const activeUsers = await db
  .from('users')
  .eq('active', true)
  .gt('age', 18)
  .order('name')
  .execute()

// Single row
const user = await db
  .from('users')
  .eq('id', 1)
  .single()

// With limit
const recentUsers = await db
  .from('users')
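  // newest rows first, then cap the result set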
.order('created_at', { ascending: false }) .limit(10) .execute() ``` #### Filter Methods ```typescript // Equality .eq('column', value) // column = value .neq('column', value) // column != value // Comparison .gt('column', value) // column > value .gte('column', value) // column >= value .lt('column', value) // column < value .lte('column', value) // column <= value // Patterns .like('column', 'pattern%') // column LIKE 'pattern%' .ilike('column', 'pattern%') // column ILIKE 'pattern%' // Lists .in('column', [1, 2, 3]) // column IN (1, 2, 3) // Null checks .is('column', null) // column IS NULL .not('column', null) // column IS NOT NULL // Range .between('age', 18, 65) // age BETWEEN 18 AND 65 // Text search .textSearch('content', 'query') ``` #### Ordering & Pagination ```typescript // Order by single column .order('created_at', { ascending: false }) // Order by multiple columns .order('status') .order('created_at', { ascending: false }) // Pagination .limit(10) .offset(20) // Range (convenience) .range(0, 9) // First 10 rows ``` #### Selecting Columns ```typescript // Select specific columns const users = await db .from('users') .select('id', 'name', 'email') .execute() ``` ### Insert Operations ```typescript // Insert single row const user = await db .from('users') .insert({ email: 'alice@example.com', name: 'Alice' }) .returning() .single() // Insert multiple rows const users = await db .from('users') .insert([ { email: 'alice@example.com', name: 'Alice' }, { email: 'bob@example.com', name: 'Bob' }, ]) .returning() .execute() // Upsert (insert or update) const user = await db .from('users') .upsert( { email: 'alice@example.com', name: 'Alice Updated' }, { onConflict: 'email' } ) .returning() .single() ``` ### Update Operations ```typescript // Update with filter await db .from('users') .eq('id', 1) .update({ name: 'New Name' }) // Update multiple rows await db .from('users') .eq('active', false) .update({ deleted_at: new Date() }) // Update with returning const updated = await db .from('users') .eq('id', 1) .update({ name: 'New Name' }) .returning() .single() ``` ### Delete Operations ```typescript // Delete with filter await db .from('users') .eq('id', 1) .delete() // Delete multiple rows await db .from('users') .lt('last_login', new Date('2025-01-01')) .delete() // Soft delete pattern await db .from('users') .eq('id', 1) .update({ deleted_at: new Date() }) ``` ### Error Handling ```typescript try { const user = await db .from('users') .eq('id', 1) .single() } catch (error) { if (error instanceof PostbaseError) { console.error('Database error:', error.message) console.error('Code:', error.code) } } ``` ### Connection Management ```typescript // Create client with connection pool const db = createClient({ connectionString: process.env.DATABASE_URL, pool: { min: 2, max: 10, }, }) // Close connections when done await db.close() ``` ### Next Steps * [Type Generation](/sdk/type-generation) - Generate types from schema * [Query Builder](/sdk/query-builder) - Advanced query patterns * [RPC Functions](/sdk/rpc) - Call PostgreSQL functions * [Realtime](/sdk/realtime) - Live subscriptions ## Query Builder Advanced query patterns with the Postbase SDK. 
### Overview The SDK provides a Supabase-compatible query builder powered by Kysely: ```typescript const users = await db .from('users') .eq('active', true) .order('created_at', { ascending: false }) .limit(10) .execute() ``` ### Select Queries #### Basic Select ```typescript // Select all rows const users = await db.from('users').execute() // Select specific columns const names = await db .from('users') .select('id', 'name', 'email') .execute() ``` #### Single Row ```typescript // Get one row (throws if not found) const user = await db .from('users') .eq('id', 1) .single() // Get one row (returns null if not found) const user = await db .from('users') .eq('id', 1) .maybeSingle() ``` ### Filter Methods #### Equality ```typescript // Equal .eq('column', value) // Not equal .neq('column', value) ``` #### Comparison ```typescript // Greater than .gt('age', 18) // Greater than or equal .gte('age', 18) // Less than .lt('age', 65) // Less than or equal .lte('age', 65) ``` #### Pattern Matching ```typescript // LIKE (case-sensitive) .like('name', 'John%') // ILIKE (case-insensitive) .ilike('email', '%@gmail.com') ``` #### Lists ```typescript // IN .in('status', ['active', 'pending']) // NOT IN .notIn('status', ['deleted', 'banned']) ``` #### Null Checks ```typescript // IS NULL .is('deleted_at', null) // IS NOT NULL .not('email_verified', null) ``` #### Range ```typescript // BETWEEN .between('age', 18, 65) ``` #### Text Search ```typescript // Full-text search .textSearch('content', 'hello world') // With configuration .textSearch('content', 'hello world', { config: 'english' }) ``` ### Combining Filters #### AND (default) ```typescript // Multiple .eq() are ANDed const users = await db .from('users') .eq('active', true) .eq('role', 'admin') .execute() // WHERE active = true AND role = 'admin' ``` #### Complex Conditions For OR and complex conditions, use raw SQL: ```typescript const users = await db .from('users') .where(sql`active = true OR role = 'admin'`) .execute() ``` ### Ordering #### Single Column ```typescript // Ascending (default) .order('name') // Descending .order('created_at', { ascending: false }) ``` #### Multiple Columns ```typescript .order('status') .order('created_at', { ascending: false }) // ORDER BY status ASC, created_at DESC ``` #### Null Handling ```typescript .order('name', { nullsFirst: true }) .order('name', { nullsFirst: false }) // nulls last ``` ### Pagination #### Limit & Offset ```typescript // First 10 rows .limit(10) // Skip first 20, take next 10 .limit(10) .offset(20) ``` #### Range (convenience) ```typescript // Rows 0-9 (first 10) .range(0, 9) // Rows 10-19 (second page) .range(10, 19) ``` ### Insert Queries #### Single Insert ```typescript const user = await db .from('users') .insert({ email: 'alice@example.com', name: 'Alice', }) .returning() .single() ``` #### Multiple Insert ```typescript const users = await db .from('users') .insert([ { email: 'alice@example.com', name: 'Alice' }, { email: 'bob@example.com', name: 'Bob' }, ]) .returning() .execute() ``` #### Upsert ```typescript // Insert or update on conflict const user = await db .from('users') .upsert( { email: 'alice@example.com', name: 'Alice Updated' }, { onConflict: 'email' } ) .returning() .single() ``` ### Update Queries #### Update with Filter ```typescript await db .from('users') .eq('id', 1) .update({ name: 'New Name' }) ``` #### Update Multiple ```typescript await db .from('users') .eq('active', false) .update({ deleted_at: new Date() }) ``` #### Update with Returning ```typescript const 
updated = await db
  .from('users')
  .eq('id', 1)
  .update({ name: 'New Name' })
  .returning()
  .single()
```

### Delete Queries

#### Delete with Filter

```typescript
await db
  .from('users')
  .eq('id', 1)
  .delete()
```

#### Soft Delete Pattern

```typescript
// Soft delete (recommended)
await db
  .from('users')
  .eq('id', 1)
  .update({ deleted_at: new Date() })

// Query non-deleted
const users = await db
  .from('users')
  .is('deleted_at', null)
  .execute()
```

### Counting

```typescript
// Count all rows
const count = await db
  .from('users')
  .count()

// Count with filter
const activeCount = await db
  .from('users')
  .eq('active', true)
  .count()
```

### Raw SQL

For complex queries not supported by the builder:

```typescript
import { sql } from '@postbase/sdk'

// Raw where clause
const users = await db
  .from('users')
  .where(sql`created_at > NOW() - INTERVAL '7 days'`)
  .execute()

// Full raw query
const result = await db.raw(sql`
  SELECT u.*, COUNT(p.id) as post_count
  FROM users u
  LEFT JOIN posts p ON p.user_id = u.id
  GROUP BY u.id
  ORDER BY post_count DESC
`)
```

### Type Safety

With generated types, all queries are fully typed:

```typescript
import type { Database } from './db/types'

const db = createClient<Database>({ connectionString: '...' })

// TypeScript knows the shape
const user = await db.from('users').eq('id', 1).single()
user.email // string
user.name // string
user.foo // Error: Property 'foo' does not exist

// Insert validation
await db.from('users').insert({
  email: 'test@example.com',
  name: 'Test',
  // missing required fields = TypeScript error
})
```

### Performance Tips

#### Use Indexes

Ensure columns used in filters have indexes:

```sql
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_active ON users(active);
```

#### Limit Results

Always use `.limit()` for large tables:

```typescript
// Good
const users = await db.from('users').limit(100).execute()

// Bad (loads all rows)
const users = await db.from('users').execute()
```

#### Select Only Needed Columns

```typescript
// Good
const names = await db.from('users').select('id', 'name').execute()

// Less efficient (selects all columns)
const users = await db.from('users').execute()
```

## Realtime Subscriptions

Subscribe to database changes using PostgreSQL LISTEN/NOTIFY.

### Overview

Postbase SDK provides realtime subscriptions using PostgreSQL's native LISTEN/NOTIFY mechanism:

```typescript
// Subscribe to changes
db.channel('users')
  .on('INSERT', (payload) => {
    console.log('New user:', payload.new)
  })
  .on('UPDATE', (payload) => {
    console.log('User updated:', payload.new)
  })
  .on('DELETE', (payload) => {
    console.log('User deleted:', payload.old)
  })
  .subscribe()
```

### Setup

#### 1. Create Triggers
Set up database triggers to send notifications:

```bash
postbase types realtime-triggers --database myapp --tables users,posts
```

Or create manually:

```sql
-- Function to notify changes
CREATE OR REPLACE FUNCTION notify_changes() RETURNS TRIGGER AS $$
DECLARE
  payload JSON;
BEGIN
  IF TG_OP = 'INSERT' THEN
    payload := json_build_object(
      'operation', 'INSERT',
      'table', TG_TABLE_NAME,
      'schema', TG_TABLE_SCHEMA,
      'new', row_to_json(NEW)
    );
  ELSIF TG_OP = 'UPDATE' THEN
    payload := json_build_object(
      'operation', 'UPDATE',
      'table', TG_TABLE_NAME,
      'schema', TG_TABLE_SCHEMA,
      'old', row_to_json(OLD),
      'new', row_to_json(NEW)
    );
  ELSIF TG_OP = 'DELETE' THEN
    payload := json_build_object(
      'operation', 'DELETE',
      'table', TG_TABLE_NAME,
      'schema', TG_TABLE_SCHEMA,
      'old', row_to_json(OLD)
    );
  END IF;

  PERFORM pg_notify(TG_TABLE_NAME, payload::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Trigger for users table
CREATE TRIGGER users_changes
AFTER INSERT OR UPDATE OR DELETE ON users
FOR EACH ROW EXECUTE FUNCTION notify_changes();
```

#### 2. Subscribe in Your App

```typescript
import { createClient } from '@postbase/sdk'

const db = createClient({
  connectionString: process.env.DATABASE_URL,
})

// Subscribe to users table
db.channel('users')
  .on('INSERT', (payload) => {
    console.log('New user:', payload.new)
  })
  .subscribe()
```

### Channel API

#### Creating a Channel

```typescript
const channel = db.channel('channel_name')
```

The channel name corresponds to the PostgreSQL NOTIFY channel.

#### Event Handlers

```typescript
channel
  // Insert events
  .on('INSERT', (payload) => {
    console.log('Inserted:', payload.new)
  })
  // Update events
  .on('UPDATE', (payload) => {
    console.log('Updated from:', payload.old, 'to:', payload.new)
  })
  // Delete events
  .on('DELETE', (payload) => {
    console.log('Deleted:', payload.old)
  })
  // All events
  .on('*', (payload) => {
    console.log('Change:', payload.operation, payload)
  })
```

#### Payload Structure

```typescript
interface RealtimePayload<T> {
  operation: 'INSERT' | 'UPDATE' | 'DELETE'
  table: string
  schema: string
  old?: T // For UPDATE and DELETE
  new?: T // For INSERT and UPDATE
}
```

#### Starting Subscription

```typescript
// Start listening
channel.subscribe()
```

#### Stopping Subscription

```typescript
// Stop listening
channel.unsubscribe()
```

### Multiple Tables

Subscribe to multiple tables:

```typescript
// Users channel
db.channel('users')
  .on('INSERT', handleNewUser)
  .subscribe()

// Posts channel
db.channel('posts')
  .on('*', handlePostChange)
  .subscribe()

// Comments channel
db.channel('comments')
  .on('INSERT', handleNewComment)
  .subscribe()
```

### Filtered Subscriptions

Filter events in your handlers:

```typescript
db.channel('orders')
  .on('INSERT', (payload) => {
    // Only process high-value orders
    if (payload.new.total > 1000) {
      notifyManager(payload.new)
    }
  })
  .on('UPDATE', (payload) => {
    // Only process status changes
    if (payload.old.status !== payload.new.status) {
      handleStatusChange(payload.old.status, payload.new.status)
    }
  })
  .subscribe()
```

### Automatic Reconnection

The SDK handles connection drops automatically:

```typescript
const channel = db.channel('users')
  .on('INSERT', handleInsert)
  .on('error', (error) => {
    console.error('Connection error:', error)
  })
  .on('reconnect', () => {
    console.log('Reconnected to database')
  })
  .subscribe()
```

### React Integration

#### Custom Hook

```typescript
// hooks/useRealtimeTable.ts
import { useEffect, useState } from 'react'
import { db } from '../db/client'

export function useRealtimeTable<T>(
  tableName: string,
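  // tableName is the NOTIFY channel to listen on; initialData seeds state
  // until the first event arrives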
  initialData: T[] = []
) {
  const [data, setData] = useState<T[]>(initialData)

  useEffect(() => {
    const channel = db.channel(tableName)
      .on('INSERT', (payload) => {
        setData(prev => [...prev, payload.new as T])
      })
      .on('UPDATE', (payload) => {
        setData(prev => prev.map(item =>
          (item as any).id === (payload.new as any).id ? payload.new as T : item
        ))
      })
      .on('DELETE', (payload) => {
        setData(prev => prev.filter(item =>
          (item as any).id !== (payload.old as any).id
        ))
      })
      .subscribe()

    return () => {
      channel.unsubscribe()
    }
  }, [tableName])

  return data
}
```

#### Using the Hook

```tsx
// Assumes a User row type, e.g. Database['public']['Tables']['users']['Row']
function UserList() {
  const [users, setUsers] = useState<User[]>([])

  // Fetch initial data
  useEffect(() => {
    db.from('users').execute().then(setUsers)
  }, [])

  // Subscribe to changes
  const realtimeUsers = useRealtimeTable<User>('users', users)

  return (
    <ul>
      {realtimeUsers.map(user => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  )
}
```

### Express.js Integration

Stream changes to clients via Server-Sent Events:

```typescript
import express from 'express'
import { db } from './db/client'

const app = express()

app.get('/events/users', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream')
  res.setHeader('Cache-Control', 'no-cache')
  res.setHeader('Connection', 'keep-alive')

  const channel = db.channel('users')
    .on('*', (payload) => {
      res.write(`data: ${JSON.stringify(payload)}\n\n`)
    })
    .subscribe()

  req.on('close', () => {
    channel.unsubscribe()
  })
})
```

### Best Practices

#### 1. Limit Trigger Columns

Only notify on relevant column changes:

```sql
CREATE TRIGGER users_email_change
AFTER UPDATE OF email ON users
FOR EACH ROW EXECUTE FUNCTION notify_changes();
```

#### 2. Batch Updates

For high-throughput tables, batch notifications:

```sql
-- Debounced trigger (pseudocode)
CREATE OR REPLACE FUNCTION batch_notify() RETURNS TRIGGER AS $$
BEGIN
  -- Store change in queue table
  INSERT INTO _change_queue (table_name, change_data)
  VALUES (TG_TABLE_NAME, row_to_json(NEW));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Process queue every minute with pg_cron
SELECT cron.schedule(
  'process_changes',
  '* * * * *', -- Every minute
  $$SELECT process_and_notify_changes()$$
);
```

#### 3. Clean Up Subscriptions

Always unsubscribe when done:

```typescript
// In React
useEffect(() => {
  const channel = db.channel('users').on('*', handler).subscribe()
  return () => channel.unsubscribe()
}, [])
```

#### 4. Handle Reconnection

Monitor connection status:

```typescript
db.channel('users')
  .on('reconnect', () => {
    // Refresh data to catch missed events
    fetchLatestData()
  })
  .subscribe()
```

### Troubleshooting

#### No Events Received

1. Check trigger exists: `\d users` in psql
2. Test NOTIFY: `NOTIFY users, '{"test": true}'`
3. Check LISTEN: `LISTEN users` in another session

#### Missing Events

Events may be missed during reconnection. Always fetch latest data on reconnect.

#### High Latency

* Check database connection pooling
* Consider batching high-frequency tables
* Use filtered triggers to reduce volume

## RPC Functions

Call PostgreSQL functions with type safety.
### Overview The SDK's `rpc()` method calls PostgreSQL functions: ```typescript const posts = await db.rpc('get_user_posts', { user_id: 1, limit: 10 }) ``` ### Creating Functions #### Basic Function ```sql CREATE FUNCTION get_user_by_email(email_param TEXT) RETURNS users AS $$ SELECT * FROM users WHERE email = email_param; $$ LANGUAGE sql; ``` #### Function with Multiple Returns ```sql CREATE FUNCTION get_user_posts(user_id_param INTEGER, limit_count INTEGER DEFAULT 10) RETURNS SETOF posts AS $$ SELECT * FROM posts WHERE user_id = user_id_param ORDER BY created_at DESC LIMIT limit_count; $$ LANGUAGE sql; ``` #### Function with Custom Return Type ```sql CREATE TYPE user_stats AS ( user_id INTEGER, post_count INTEGER, comment_count INTEGER, total_likes INTEGER ); CREATE FUNCTION get_user_stats(user_id_param INTEGER) RETURNS user_stats AS $$ SELECT user_id_param, (SELECT COUNT(*) FROM posts WHERE user_id = user_id_param), (SELECT COUNT(*) FROM comments WHERE user_id = user_id_param), (SELECT COALESCE(SUM(likes), 0) FROM posts WHERE user_id = user_id_param) $$ LANGUAGE sql; ``` #### PL/pgSQL Function ```sql CREATE FUNCTION transfer_funds( from_account INTEGER, to_account INTEGER, amount DECIMAL ) RETURNS BOOLEAN AS $$ DECLARE from_balance DECIMAL; BEGIN -- Check balance SELECT balance INTO from_balance FROM accounts WHERE id = from_account; IF from_balance < amount THEN RETURN FALSE; END IF; -- Transfer UPDATE accounts SET balance = balance - amount WHERE id = from_account; UPDATE accounts SET balance = balance + amount WHERE id = to_account; RETURN TRUE; END; $$ LANGUAGE plpgsql; ``` ### Calling Functions #### Basic Call ```typescript const user = await db.rpc('get_user_by_email', { email_param: 'alice@example.com' }) ``` #### With Default Parameters ```typescript // Uses default limit of 10 const posts = await db.rpc('get_user_posts', { user_id_param: 1 }) // Override default const posts = await db.rpc('get_user_posts', { user_id_param: 1, limit_count: 50 }) ``` #### Single Row Result ```typescript const stats = await db.rpc('get_user_stats', { user_id_param: 1 }) // Returns: { user_id: 1, post_count: 42, ... } ``` #### Multiple Row Result ```typescript const posts = await db.rpc('get_user_posts', { user_id_param: 1 }) // Returns: [{ id: 1, title: '...', ... }, { id: 2, ... 
}]
```

#### Boolean Result

```typescript
const success = await db.rpc('transfer_funds', {
  from_account: 1,
  to_account: 2,
  amount: 100.00
})

if (success) {
  console.log('Transfer completed')
}
```

### Type Generation

Generate types for functions:

```bash
postbase types generate \
  --database myapp \
  --include-functions \
  --output ./src/db/types.ts
```

Generated types:

```typescript
export interface Database {
  public: {
    Functions: {
      get_user_posts: {
        Args: {
          user_id_param: number
          limit_count?: number // Optional (has default)
        }
        Returns: {
          id: number
          user_id: number
          title: string
          content: string
        }[]
      }
      get_user_stats: {
        Args: {
          user_id_param: number
        }
        Returns: {
          user_id: number
          post_count: number
          comment_count: number
          total_likes: number
        }
      }
    }
  }
}
```

### Type-Safe RPC

With generated types:

```typescript
// Fully typed arguments
const posts = await db.rpc('get_user_posts', {
  user_id_param: 1, // Must be number
  limit_count: 10, // Optional, must be number if provided
})

// Fully typed return
posts.forEach(post => {
  console.log(post.title) // TypeScript knows this exists
  console.log(post.foo) // Error: Property 'foo' does not exist
})
```

### Use Cases

#### Business Logic

Encapsulate complex logic in the database:

```sql
CREATE FUNCTION create_order(
  customer_id INTEGER,
  items JSONB
) RETURNS orders AS $$
DECLARE
  new_order orders;
  total DECIMAL := 0;
  item JSONB;
BEGIN
  -- Calculate total
  FOR item IN SELECT * FROM jsonb_array_elements(items) LOOP
    total := total + (item->>'quantity')::INTEGER * (item->>'price')::DECIMAL;
  END LOOP;

  -- Create order
  INSERT INTO orders (customer_id, total, status)
  VALUES (customer_id, total, 'pending')
  RETURNING * INTO new_order;

  -- Create order items (alias the element "elem" so it does not clash with
  -- the plpgsql variable "item" declared above)
  INSERT INTO order_items (order_id, product_id, quantity, price)
  SELECT
    new_order.id,
    (elem->>'product_id')::INTEGER,
    (elem->>'quantity')::INTEGER,
    (elem->>'price')::DECIMAL
  FROM jsonb_array_elements(items) AS elem;

  RETURN new_order;
END;
$$ LANGUAGE plpgsql;
```

```typescript
const order = await db.rpc('create_order', {
  customer_id: 123,
  items: [
    { product_id: 1, quantity: 2, price: 29.99 },
    { product_id: 2, quantity: 1, price: 49.99 },
  ]
})
```

#### Aggregations

```sql
CREATE FUNCTION get_dashboard_stats()
RETURNS TABLE (
  total_users INTEGER,
  active_users INTEGER,
  total_orders INTEGER,
  revenue DECIMAL
) AS $$
  SELECT
    (SELECT COUNT(*) FROM users),
    (SELECT COUNT(*) FROM users WHERE last_login > NOW() - INTERVAL '7 days'),
    (SELECT COUNT(*) FROM orders),
    (SELECT COALESCE(SUM(total), 0) FROM orders WHERE status = 'completed')
$$ LANGUAGE sql;
```

```typescript
const stats = await db.rpc('get_dashboard_stats', {})
```

#### Search

```sql
CREATE FUNCTION search_products(
  query TEXT,
  category_id INTEGER DEFAULT NULL,
  min_price DECIMAL DEFAULT NULL,
  max_price DECIMAL DEFAULT NULL
) RETURNS SETOF products AS $$
  SELECT * FROM products
  WHERE
    (query IS NULL OR name ILIKE '%' || query || '%' OR description ILIKE '%' || query || '%')
    AND (category_id IS NULL OR products.category_id = search_products.category_id)
    AND (min_price IS NULL OR price >= min_price)
    AND (max_price IS NULL OR price <= max_price)
  ORDER BY
    CASE WHEN name ILIKE query || '%' THEN 0 ELSE 1 END,
    name
$$ LANGUAGE sql;
```

```typescript
const products = await db.rpc('search_products', {
  query: 'laptop',
  min_price: 500,
  max_price: 2000
})
```

### Error Handling

```typescript
try {
  const result = await db.rpc('transfer_funds', {
    from_account: 1,
    to_account: 2,
    amount: 1000000 // More than balance
  })
} catch (error) {
  if (error.code === 'P0001') {
    // Custom RAISE EXCEPTION in
function console.error('Transfer failed:', error.message) } } ``` ### Best Practices 1. **Use for complex logic** - Keep simple queries in the SDK 2. **Include defaults** - Make functions flexible with DEFAULT parameters 3. **Return meaningful types** - Use custom types for complex returns 4. **Handle errors** - Use RAISE EXCEPTION for validation 5. **Document functions** - Add COMMENT ON FUNCTION ## Type Generation Generate TypeScript types from your database schema. ### Overview The `postbase types generate` command introspects your database and creates TypeScript interfaces that perfectly match your schema. ### Basic Usage ```bash postbase types generate \ --database myapp \ --output ./src/db/types.ts ``` ### Options | Option | Description | | --------------------- | ----------------------------------- | | `--database` | Database name (required) | | `--output` | Output file path (required) | | `--all` | Include functions, views, and enums | | `--include-functions` | Include PostgreSQL functions | | `--include-views` | Include database views | | `--include-enums` | Include enum types | | `--schema` | Target schema (default: public) | | `--dates-as-strings` | Map timestamps to string | ### Generated Types #### Table Types For each table, three interfaces are generated: ```typescript export interface Database { public: { Tables: { users: { // Full row type (SELECT) Row: { id: number email: string name: string created_at: Date updated_at: Date } // Insert type (INSERT) Insert: { email: string name: string created_at?: Date // Optional if has default updated_at?: Date } // Update type (UPDATE) Update: { email?: string name?: string updated_at?: Date } } } } } ``` #### Column Types PostgreSQL types map to TypeScript: | PostgreSQL | TypeScript | | ------------------------------- | -------------------- | | `integer`, `serial`, `smallint` | `number` | | `bigint` | `string` (precision) | | `real`, `double precision` | `number` | | `numeric`, `decimal` | `string` | | `boolean` | `boolean` | | `varchar`, `text`, `char` | `string` | | `timestamp`, `date`, `time` | `Date` | | `timestamptz` | `Date` | | `json`, `jsonb` | `unknown` | | `uuid` | `string` | | `bytea` | `Buffer` | | `array` | `T[]` | #### Nullable Columns Nullable columns use union with `null`: ```typescript { description: string | null deleted_at: Date | null } ``` #### Default Values Columns with defaults are optional in `Insert`: ```typescript { Insert: { email: string // Required created_at?: Date // Optional (has DEFAULT NOW()) status?: string // Optional (has DEFAULT 'pending') } } ``` ### Including Enums ```bash postbase types generate \ --database myapp \ --include-enums \ --output ./src/db/types.ts ``` PostgreSQL: ```sql CREATE TYPE user_role AS ENUM ('user', 'admin', 'moderator'); ``` Generated: ```typescript export type UserRole = 'user' | 'admin' | 'moderator' export interface Database { public: { Enums: { user_role: UserRole } Tables: { users: { Row: { role: UserRole } } } } } ``` ### Including Views ```bash postbase types generate \ --database myapp \ --include-views \ --output ./src/db/types.ts ``` Generated: ```typescript export interface Database { public: { Views: { active_users: { Row: { id: number email: string name: string } } } } } ``` ### Including Functions ```bash postbase types generate \ --database myapp \ --include-functions \ --output ./src/db/types.ts ``` PostgreSQL: ```sql CREATE FUNCTION get_user_posts(user_id integer, limit_count integer DEFAULT 10) RETURNS SETOF posts AS $$ SELECT * FROM posts WHERE 
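  -- $1 and $2 are positional references to user_id and limit_count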
  posts.user_id = $1
  LIMIT $2;
$$ LANGUAGE sql;
```

Generated:

```typescript
export interface Database {
  public: {
    Functions: {
      get_user_posts: {
        Args: {
          user_id: number
          limit_count?: number // Has default
        }
        Returns: {
          id: number
          user_id: number
          content: string
        }[]
      }
    }
  }
}
```

### Using Generated Types

#### With SDK

```typescript
import { createClient } from '@postbase/sdk'
import type { Database } from './db/types'

const db = createClient<Database>({
  connectionString: process.env.DATABASE_URL,
})

// Fully typed queries
const users = await db.from('users').execute()
//    ^? User[]

const user = await db
  .from('users')
  .eq('id', 1)
  .single()
//    ^? User (throws if no row matches; use .maybeSingle() for User | null)

// Type-safe inserts
await db.from('users').insert({
  email: 'alice@example.com', // Required
  name: 'Alice', // Required
  // created_at optional (has default)
})
```

#### Type Utilities

Extract types for use elsewhere:

```typescript
import type { Database } from './db/types'

// Table row type
type User = Database['public']['Tables']['users']['Row']

// Insert type
type NewUser = Database['public']['Tables']['users']['Insert']

// Update type
type UserUpdate = Database['public']['Tables']['users']['Update']

// Function args
type GetUserPostsArgs = Database['public']['Functions']['get_user_posts']['Args']
```

### Dates as Strings

For frameworks that serialize dates to JSON:

```bash
postbase types generate \
  --database myapp \
  --dates-as-strings \
  --output ./src/db/types.ts
```

```typescript
{
  Row: {
    created_at: string // Instead of Date
    updated_at: string
  }
}
```

### Multi-Schema Support

Generate types for multiple schemas:

```bash
# Public schema
postbase types generate \
  --database myapp \
  --schema public \
  --output ./src/db/types/public.ts

# Auth schema
postbase types generate \
  --database myapp \
  --schema auth \
  --output ./src/db/types/auth.ts
```

### Watch Mode

Regenerate types on schema changes. In `package.json`:

```json
{
  "scripts": {
    "db:types": "postbase types generate --database myapp --all --output ./src/db/types.ts",
    "db:types:watch": "nodemon --exec 'npm run db:types' --watch migrations"
  }
}
```

### CI Integration

Generate types in CI to catch schema drift:

```yaml
# .github/workflows/typecheck.yml
jobs:
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - name: Generate types
        run: npm run db:types
      - name: Check for changes
        run: |
          if git diff --exit-code src/db/types.ts; then
            echo "Types up to date"
          else
            echo "Types out of date! Run 'npm run db:types'"
            exit 1
          fi
```

### Best Practices

#### Commit Generated Types

Include `types.ts` in version control:

* Ensures consistent types across team
* Catches schema drift in code review
* Works without database connection

#### Regenerate After Migrations

Add to your migration workflow:

```bash
postbase migrate up
postbase types generate --database myapp --all --output ./src/db/types.ts
```

#### Use Strict TypeScript

Enable strict mode for full type safety:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true
  }
}
```

## Architecture

Technical architecture of the Postbase platform.
### System Overview ``` ┌─────────────────────────────────────────────────────────────────────────────┐ │ POSTBASE PLATFORM │ ├─────────────────────────────────┬───────────────────────────────────────────┤ │ LOCAL DEVELOPMENT │ CLOUD PLATFORM │ │ │ │ │ ┌─────────────────────────┐ │ ┌─────────────────────────────────────┐│ │ │ @postbase/cli │ │ │ @postbase/cloud ││ │ │ • Database commands │ │ │ Cloudflare Worker at Edge ││ │ │ • Migration engine │ │ │ ││ │ │ • Type generation │───┼──▶│ • GitHub OAuth ││ │ │ • Cloud operations │ │ │ • Project management ││ │ └───────────┬─────────────┘ │ │ • Database provisioning ││ │ │ │ │ • Backup orchestration ││ │ ┌───────────▼─────────────┐ │ └──────────────┬──────────────────────┘│ │ │ @postbase/daemon │ │ │ │ │ │ Hono API on :9432 │ │ ┌──────────────▼──────────────────────┐│ │ │ │ │ │ Infrastructure ││ │ │ • Docker management │ │ │ ││ │ │ • PostgreSQL control │ │ │ ┌────────────┐ ┌────────────────┐ ││ │ │ • Migration execution │ │ │ │ Railway │ │ Cloudflare │ ││ │ │ • Query proxy │ │ │ │ PostgreSQL │ │ D1 + R2 + KV │ ││ │ └───────────┬─────────────┘ │ │ └────────────┘ └────────────────┘ ││ │ │ │ │ ││ │ ┌───────────▼─────────────┐ │ │ ┌────────────────────────────────┐ ││ │ │ PostgreSQL │ │ │ │ @postbase/backup-service │ ││ │ │ Docker Container │ │ │ │ Railway Docker Container │ ││ │ │ │ │ │ │ │ ││ │ │ • supabase/postgres │ │ │ │ • pg_dump / pg_restore │ ││ │ │ • Multiple databases │ │ │ │ • pg_receivewal (PITR) │ ││ │ │ • Pre-installed exts │ │ │ │ • R2 streaming upload │ ││ │ └─────────────────────────┘ │ │ └────────────────────────────────┘ ││ │ │ └─────────────────────────────────────┘│ │ ┌─────────────────────────┐ │ │ │ │ @postbase/ui │ │ │ │ │ React Admin Panel │ │ │ │ │ │ │ │ │ │ • Schema browser │ │ │ │ │ • Data viewer/editor │ │ │ │ │ • SQL editor │ │ │ │ │ • Migration manager │ │ │ │ └─────────────────────────┘ │ │ └─────────────────────────────────┴───────────────────────────────────────────┘ ``` ### Package Structure ``` postbase/ ├── packages/ │ ├── cli/ # Command-line interface │ ├── daemon/ # Local API server │ ├── cloud/ # Cloudflare Worker API │ ├── backup-service/ # Backup Docker container │ ├── postgres-image/ # Custom PostgreSQL image │ ├── sdk/ # TypeScript client SDK │ ├── shared/ # Shared types and schemas │ ├── ui/ # React admin interface │ ├── cdn/ # Static asset hosting │ └── docs/ # Documentation (Vocs) ├── docs/ # Architecture documents └── package.json # pnpm monorepo root ``` ### Technology Stack #### Frontend | Package | Technologies | | ---------------- | ---------------------------------------------------------------------------- | | `@postbase/ui` | React 18, Vite, TanStack Query/Table, Monaco Editor, shadcn/ui, Tailwind CSS | | `@postbase/docs` | Vocs, MDX, React | #### Backend | Package | Technologies | | -------------------------- | ----------------------------------------------- | | `@postbase/daemon` | Hono, TypeScript, tsx, dockerode, pg | | `@postbase/cloud` | Hono, Cloudflare Workers, D1, KV, R2 | | `@postbase/backup-service` | Hono, Docker, pg\_dump, pg\_receivewal, AWS SDK | #### CLI & SDK | Package | Technologies | | ------------------ | --------------------------------- | | `@postbase/cli` | Commander.js, chalk, ora, pg, tsx | | `@postbase/sdk` | Kysely, pg, TypeScript | | `@postbase/shared` | Zod, TypeScript | ### Data Flow #### Local Development ``` ┌──────────┐ HTTP ┌──────────┐ postgres ┌────────────┐ │ CLI │──────────────▶│ Daemon │───────────────▶│ PostgreSQL │ │ │ │ :9432 │ │ :5432 │ └──────────┘ 
└──────────┘ └────────────┘ │ ┌──────────┐ HTTP │ │ UI │────────────────────┘ │ :3000 │ └──────────┘ ``` #### Cloud Platform ``` ┌──────────┐ HTTPS ┌──────────┐ Railway ┌────────────┐ │ CLI │──────────────▶│ Cloud │──────API─────▶│ PostgreSQL │ │ │ │ API │ │ Railway │ └──────────┘ └──────────┘ └────────────┘ │ D1 / KV / R2 │ ┌──────────▼──────────┐ │ Backup Service │ │ (Railway) │ └─────────────────────┘ ``` #### Backup Flow ``` ┌────────────────┐ │ Cloud API │ │ (Cloudflare) │ └───────┬────────┘ │ POST /backup │ ┌───────▼────────┐ │ Backup Service │ │ (Railway) │ └───────┬────────┘ │ ┌───────────────┼───────────────┐ │ │ │ ┌───────▼───────┐ │ ┌───────▼───────┐ │ PostgreSQL │──pg_dump──────│ R2 │ │ (Railway) │ │ (Cloudflare) │ └───────────────┘ └───────────────┘ │ callback │ ┌───────▼────────┐ │ Cloud API │ │ Update D1 meta │ └────────────────┘ ``` #### PITR Flow ``` Continuous ┌───────────────────┐ WAL Streaming │ PostgreSQL │──────pg_receivewal────────────┐ │ (Railway) │ │ └───────────────────┘ │ ┌───────▼───────┐ │Backup Service │ └───────┬───────┘ │ Upload WAL │ ┌───────▼───────┐ │ R2 │ │ WAL Storage │ └───────────────┘ Recovery: ┌──────────┐ ┌───────────────┐ │ Cloud API│──POST /pitr-restore│Backup Service │ └──────────┘ └───────┬───────┘ │ ┌───────────────┼───────────────┐ │ │ │ ┌───────▼───────┐ │ ┌───────▼───────┐ │ R2 │───download────│ temp restore │ │ base + WAL │ │ folder │ └───────────────┘ └───────┬───────┘ │ pg_restore + WAL replay │ ┌───────▼───────┐ │ PostgreSQL │ └───────────────┘ ``` ### Cloud Services #### Cloudflare | Service | Purpose | | ------- | --------------------------------------------------------- | | Workers | Cloud API at api.postbase.sh | | D1 | Relational metadata (users, projects, databases, backups) | | KV | Session storage, device codes | | R2 | Backup storage, WAL archives | #### Railway | Service | Purpose | | ---------- | ------------------------ | | PostgreSQL | Cloud database instances | | Docker | Backup service container | #### GitHub | Feature | Purpose | | ------------------ | ------------------------ | | OAuth | User authentication | | Container Registry | Custom PostgreSQL images | ### Security #### Authentication 1. **Local**: No authentication (trusted local environment) 2. **Cloud**: GitHub OAuth with device flow 3. **Backup Service**: Shared secret token #### Data Protection 1. **Passwords**: 32-character cryptographically random 2. **Backups**: Encrypted at rest in R2 3. **Transit**: HTTPS for all cloud communication 4. **Credentials**: Stored in `~/.postbase/`, never in version control #### Network 1. **Railway**: Private networking, SSL optional 2. **Cloudflare**: Edge TLS termination 3. **Backup Service**: Internal Railway network ### Scalability #### Current Limits | Resource | Limit | | --------------------- | ----------------- | | Databases per project | 10 | | Backup size | 10 GB | | WAL retention | 7 days | | API rate limit | 60 req/min (free) | #### Future Scaling 1. **Read Replicas**: Railway regional deployment 2. **Connection Pooling**: PgBouncer integration 3. **Multi-region**: Edge routing with Cloudflare 4. **High Availability**: Primary-replica failover ## Contributing Guide Thank you for your interest in contributing to Postbase! This guide covers everything you need to know to contribute effectively. 
### Quick Start

```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/postbase.git
cd postbase

# Install dependencies
pnpm install

# Create feature branch
git checkout -b feature/your-feature

# Start development
pnpm dev
```

### Development Setup

#### Prerequisites

* **Node.js** 20 or later
* **pnpm** 9 or later
* **Docker Desktop** for PostgreSQL
* **PostgreSQL client** (optional)

#### Running Services

```bash
# Start all services
pnpm dev

# Run specific package
pnpm --filter @postbase/cli dev
pnpm --filter @postbase/admin dev
pnpm --filter @postbase/docs dev
pnpm --filter @postbase/web dev

# Run tests
pnpm test

# Build all
pnpm build
```

### Project Structure

```
postbase/
├── packages/
│   ├── cli/              # CLI tool (@postbase/cli)
│   ├── admin/            # Admin UI (@postbase/admin)
│   ├── daemon/           # Background service (@postbase/daemon)
│   ├── sdk/              # TypeScript SDK (@postbase/sdk)
│   ├── cloud-api/        # Cloud API (@postbase/cloud-api)
│   ├── backup-service/   # Backup worker
│   ├── docs/             # Documentation (docs.postbase.sh)
│   └── web/              # Landing page (postbase.sh)
├── docs/                 # Internal documentation
├── CONTRIBUTING.md       # Contribution guidelines
└── .cursorrules          # Project guidelines
```

### Documentation Requirements

:::warning Documentation is Required
All contributions **MUST** include appropriate documentation updates. PRs without documentation will not be merged.
:::

#### Documentation Checklist

Before submitting a PR, update:

| Change Type | Required Updates |
| --------------- | -------------------------------- |
| New feature | README, docs site, roadmap |
| Bug fix | Troubleshooting (if user-facing) |
| API change | API reference, SDK docs |
| CLI change | CLI reference |
| Config change | Configuration docs |
| Breaking change | Migration guide, changelog |

#### Files to Update

* **Package READMEs**: `packages/*/README.md`
* **Docs Site**: `packages/docs/pages/`
* **Roadmap**: `packages/docs/pages/reference/roadmap.mdx`
* **Cursor Rules**: `.cursorrules` (for major changes)

#### Preview Documentation

```bash
# Start docs dev server
pnpm --filter @postbase/docs dev

# Build to verify
pnpm --filter @postbase/docs build
```

### Code Style

#### TypeScript Guidelines

```typescript
// Use explicit types
function createDatabase(name: string): Promise<void> {
  // ...
}

// Prefer type over interface
type DatabaseConfig = {
  name: string
  schema?: string
}

// Use const assertions
const ENVIRONMENTS = ['development', 'staging', 'production'] as const
type Environment = typeof ENVIRONMENTS[number]
```

#### Naming Conventions

| Type | Convention | Example |
| ------------------- | ---------------- | --------------------- |
| Variables/Functions | camelCase | `createDatabase` |
| Types/Classes | PascalCase | `DatabaseConfig` |
| Constants | SCREAMING\_SNAKE | `MAX_CONNECTIONS` |
| Files | kebab-case | `database-manager.ts` |

### Pull Request Process

#### Before Submitting

1. ✅ Update documentation
2. ✅ Run `pnpm test`
3. ✅ Run `pnpm lint`
4. ✅ Run `pnpm build`
5.
✅ Test changes locally #### PR Title Format Use conventional commits: ``` feat(cli): add database branching fix(sdk): handle connection timeout docs(api): update PITR endpoints chore: upgrade dependencies ``` #### PR Template ```markdown ## Summary Brief description of changes ## Changes - Change 1 - Change 2 ## Documentation - [ ] Updated README - [ ] Updated docs site - [ ] Added to roadmap ## Testing - [ ] Tests pass - [ ] Manual testing done ``` ### Commit Messages Follow [Conventional Commits](https://www.conventionalcommits.org/): ```bash # Feature git commit -m "feat(cli): add database branching command" # Fix git commit -m "fix(sdk): handle connection pool exhaustion" # Documentation git commit -m "docs(api): document PITR endpoints" # Refactor git commit -m "refactor(daemon): extract health check logic" ``` #### Commit Types | Type | Description | | ---------- | ---------------- | | `feat` | New feature | | `fix` | Bug fix | | `docs` | Documentation | | `style` | Formatting | | `refactor` | Code restructure | | `perf` | Performance | | `test` | Tests | | `chore` | Maintenance | ### Getting Help * **GitHub Issues**: Bugs and features * **GitHub Discussions**: Questions * **Email**: [contributors@postbase.sh](mailto\:contributors@postbase.sh) ### Thank You! Every contribution makes Postbase better. We appreciate your time and effort! ## Roadmap Planned features and improvements for Postbase. ### Current Version **v0.6.0** (January 2026) - Beta ### Release Notes #### v0.6.0 - Documentation & Landing Page (January 27, 2026) **New Features:** * **Documentation Site**: Comprehensive docs at [docs.postbase.sh](https://docs.postbase.sh) * 36 documentation pages covering all features * Getting started guides and tutorials * Full CLI and API reference * SDK documentation with examples * Built with Vocs for fast, modern experience * **Landing Page**: New marketing site at [postbase.sh](https://postbase.sh) * Modern dark theme with React + Tailwind * Feature showcase and pricing information * Deployed on Cloudflare Workers **Infrastructure:** * New `@postbase/docs` package with Vocs * New `@postbase/web` package with React + Vite * Cloudflare Workers deployment for both sites * SPA routing support for web package #### v0.5.2 - Point-in-Time Recovery (January 2026) **New Features:** * Custom PostgreSQL image with WAL streaming * Continuous WAL archiving to Cloudflare R2 * Point-in-time recovery to any second * Recovery monitoring and management * PITR status in backup service **CLI Commands:** ```bash # Restore to specific point in time postbase cloud pitr restore production \ --target-time "2026-01-25T14:30:00Z" \ -p myapp # Check PITR status postbase cloud pitr status production -p myapp ``` #### v0.5.1 - Data Protection (January 2026) **New Features:** * Daily automated backups at 2:00 AM UTC * Manual backup creation via CLI * Backup restore operations * Cloudflare R2 storage for zero egress fees * Backup retention policies **CLI Commands:** ```bash # Create manual backup postbase cloud backups create production -p myapp # List backups postbase cloud backups list production -p myapp # Restore from backup postbase cloud backups restore production --backup-id abc123 ``` #### v0.5.0 - Cloud Platform (January 2026) **New Features:** * Cloud PostgreSQL provisioning via Railway * GitHub OAuth authentication * Project and database management * Environment-based deployments (staging/production) * Secure auto-generated passwords **CLI Commands:** ```bash # Login to cloud postbase cloud login # 
Create project postbase cloud projects create myapp # Provision database postbase cloud provision production -p myapp ``` ### 2026 Roadmap #### Q1 2026 - Usage-Based Billing **Status**: In Development Granular, per-second billing with Solana payments: * Prepaid credits via USDC on Solana * Per-second compute billing * Real-time usage dashboard * Low balance alerts (\< 48 hours remaining) * Auto-pause at $0 (data preserved) * CLI commands for credits management **Pricing Model**: ``` Compute: $0.013 - $0.21/hour (per instance size) Storage: $0.20/GB-month Backups: $0.02/GB-month PITR: $0.10/day (beyond 3 free days) Egress: $0.06/GB ``` ```bash # Check balance postbase cloud balance # Add credits (opens wallet connection) postbase cloud credits add 25 # View usage breakdown postbase cloud usage ``` #### Q2 2026 - Read Replicas **Status**: Planning Read replicas for horizontal scaling: * Regional read replicas * Automatic failover * Connection routing * Replica lag monitoring ```bash # Create read replica postbase cloud replicas create production-read \ -p myapp \ --source production \ --region eu-west1 # Get read-only connection string postbase cloud replicas url production-read -p myapp ``` #### Q3 2026 - High Availability **Status**: Planning Multi-node PostgreSQL clusters: * Automatic failover * Zero-downtime upgrades * Connection pooling (PgBouncer) * Health monitoring #### Q4 2026 - Teams & Security **Status**: Planning Team collaboration features: * Team invitations * Role-based access control (RBAC) * Audit logging * Password rotation * Encryption at rest ```bash # Invite team member postbase cloud teams invite alice@example.com --role admin # Set permissions postbase cloud databases permissions set production \ --user alice@example.com \ --role read-only ``` ### 2027 Roadmap #### Q1 2027 - Monitoring Dashboard **Status**: Planning Comprehensive monitoring: * Query performance metrics * Connection analytics * Storage utilization * Custom dashboards * Alert configuration #### Q2-Q3 2027 - Advanced Features **Status**: Planning Enterprise capabilities: * Multi-region deployments * Database branching * Schema diff and sync * Advanced query optimization ### 2028 Roadmap #### Q1-Q2 2028 - AI/ML Integration **Status**: Conceptual AI-powered features: * Query optimization suggestions * Anomaly detection * Natural language queries * Schema recommendations #### Q3-Q4 2028 - Enterprise **Status**: Conceptual Enterprise-grade features: * SSO/SAML integration * Compliance certifications (SOC2, HIPAA) * Custom SLAs * Dedicated support ### Feature Requests Have a feature request? Let us know: * **GitHub Issues**: [https://github.com/zeroexcore/postbase/issues](https://github.com/zeroexcore/postbase/issues) * **Discord**: Coming soon * **Email**: [features@postbase.sh](mailto\:features@postbase.sh) ### Version History | Version | Date | Highlights | | ------- | ------------ | ----------------------------------- | | 0.6.0 | Jan 27, 2026 | Documentation site, landing page | | 0.5.2 | Jan 2026 | PITR, custom PostgreSQL image | | 0.5.1 | Jan 2026 | Backups, SDK, Supabase image | | 0.5.0 | Jan 2026 | Cloud platform, Railway integration | | 0.4.0 | Dec 2025 | Admin UI, migrations | | 0.3.0 | Nov 2025 | CLI, daemon | | 0.2.0 | Oct 2025 | Initial release | ### Contributing Postbase is open source. Contributions welcome: 1. Fork the repository 2. Create a feature branch 3. Submit a pull request See [Contributing Guide](/reference/contributing) for guidelines. ## Troubleshooting Common issues and solutions. 
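Most local problems come down to one of two causes: Docker isn't running, or the daemon isn't listening. Before working through the sections below, a quick probe of the daemon API can narrow it down. A minimal sketch (TypeScript, using the daemon's documented `GET /databases` endpoint and the `POSTBASE_DAEMON_URL` variable; the exact error-response body is not specified here, so only the HTTP status is inspected):

```typescript
// triage.ts - quick check that the Postbase daemon is reachable.
// Run with: npx tsx triage.ts
const DAEMON_URL = process.env.POSTBASE_DAEMON_URL ?? 'http://localhost:9432'

async function probe(): Promise<void> {
  try {
    // The daemon exposes GET /databases (see the Database Management page).
    const res = await fetch(`${DAEMON_URL}/databases`)
    if (!res.ok) {
      console.error(`Daemon responded with HTTP ${res.status} - check 'postbase status'`)
      return
    }
    console.log('Daemon is up. Databases:', await res.json())
  } catch {
    // Nothing listening at all: the daemon (or Docker) is likely down.
    console.error(`No response from ${DAEMON_URL} - run 'postbase start' and check 'docker info'`)
  }
}

probe()
```

If the probe succeeds but your application still can't connect, the problem is usually on the PostgreSQL side (port 5432) rather than the daemon, which the sections below cover.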
### Local Development #### Daemon Won't Start **Symptom**: `postbase start` fails or hangs **Solutions**: 1. Check Docker is running: ```bash docker info ``` 2. Check port availability: ```bash lsof -i :9432 lsof -i :5432 ``` 3. Remove stale state: ```bash rm -f ~/.postbase/state.json postbase start ``` 4. Check Docker logs: ```bash docker logs postbase-postgres ``` #### Port Already in Use **Symptom**: `Error: Port 5432 already in use` **Solutions**: 1. Find and stop the process: ```bash lsof -i :5432 kill <PID> ``` 2. Use a different port: ```bash postbase start --pg-port 5433 ``` 3. Stop existing PostgreSQL: ```bash # macOS with Homebrew brew services stop postgresql # Linux sudo systemctl stop postgresql ``` #### Cannot Connect to Database **Symptom**: `Connection refused` or `ECONNREFUSED` **Solutions**: 1. Check daemon is running: ```bash postbase status ``` 2. Verify PostgreSQL container: ```bash docker ps | grep postbase ``` 3. Test direct connection: ```bash psql "postgresql://postgres:postgres@localhost:5432/postgres" ``` 4. Restart the daemon: ```bash postbase stop postbase start ``` #### Migration Checksum Mismatch **Symptom**: `Checksum mismatch for migration X` **Cause**: Migration file was modified after being applied. **Solutions**: 1. If intentional, reset the checksum: ```sql UPDATE _postbase_migrations SET checksum = '' WHERE name = 'migration_name'; ``` 2. If accidental, restore the original file from version control. 3. Create a new migration instead of modifying existing ones. ### Cloud Platform #### Authentication Failed **Symptom**: `401 Unauthorized` or `Authentication required` **Solutions**: 1. Re-authenticate: ```bash postbase cloud logout postbase cloud login ``` 2. Check credentials file: ```bash cat ~/.postbase/cloud.json ``` 3. Verify token hasn't expired (tokens are long-lived but may be revoked). #### Database Provisioning Stuck **Symptom**: Database stuck in "provisioning" status **Solutions**: 1. Check Railway status: [https://status.railway.app](https://status.railway.app) 2. Wait a few minutes (provisioning can take 2-5 minutes) 3. Check detailed status: ```bash postbase cloud databases list -p <project> --json ``` 4. Contact support if stuck for >10 minutes #### Cannot Connect to Cloud Database **Symptom**: `Connection refused` or SSL errors **Solutions**: 1. Get correct connection string: ```bash postbase cloud url <database> -p <project> ``` 2. Ensure SSL is disabled (Railway proxy doesn't require SSL): ``` postgresql://...?sslmode=disable ``` 3. Check network connectivity: ```bash telnet <host> <port> ``` #### Backup Failed **Symptom**: Backup shows "failed" status **Solutions**: 1. Check backup service health: ```bash curl https://postbase-backup-service.up.railway.app/health ``` 2. Check database size (large databases may timeout) 3. Retry the backup: ```bash postbase cloud backups create -p <project> -d <database> ``` 4. Check backup service logs (contact support) ### PITR Issues #### WAL Receiver Not Running **Symptom**: `pitr status` shows "WAL Receiver: NOT RUNNING" **Solutions**: 1. Check receiver status: ```bash postbase cloud pitr receiver -p <project> -d <database> ``` 2. Restart the receiver: ```bash postbase cloud pitr restart-receiver -p <project> -d <database> ``` 3. Verify database supports replication: * Must use `ghcr.io/zeroexcore/postbase:postgres-15-pitr` image 4. Check backup service health: ```bash postbase cloud pitr service-status -p <project> -d <database> ``` #### High WAL Lag **Symptom**: WAL lag is >5 minutes **Causes**: * High write volume on database * Network issues between Railway and backup service * Backup service overloaded **Solutions**: 1. 
Monitor lag: ```bash postbase cloud pitr receiver -p <project> -d <database> ``` 2. Restart receiver: ```bash postbase cloud pitr restart-receiver -p <project> -d <database> ``` 3. Contact support if lag persists #### Cannot Restore to Target Time **Symptom**: `Target time outside recovery window` **Solutions**: 1. Check recovery window: ```bash postbase cloud pitr status -p <project> -d <database> ``` 2. Choose a time within the window 3. The earliest time is the base backup time 4. The latest time is the most recent WAL segment ### SDK Issues #### Type Errors **Symptom**: TypeScript errors when using SDK **Solutions**: 1. Regenerate types: ```bash postbase types generate --database myapp --all --output ./src/db/types.ts ``` 2. Ensure types match SDK version: ```bash npm update @postbase/sdk ``` 3. Check import path: ```typescript import type { Database } from './db/types' ``` #### Connection Pool Exhausted **Symptom**: `Connection pool exhausted` or slow queries **Solutions**: 1. Close unused connections: ```typescript await db.close() ``` 2. Increase pool size: ```typescript const db = createClient({ connectionString: '...', pool: { max: 20 } }) ``` 3. Use connection pooling (e.g., PgBouncer) for high-traffic apps #### Realtime Not Receiving Events **Symptom**: Subscriptions don't receive updates **Solutions**: 1. Verify triggers exist: ```sql \d users -- Check triggers section ``` 2. Test NOTIFY manually: ```sql NOTIFY users, '{"test": true}'; ``` 3. Check channel name matches table: ```typescript db.channel('users') // Must match NOTIFY channel ``` 4. Check for connection errors: ```typescript channel.on('error', console.error) ``` ### Getting Help #### Debug Information Collect this info when reporting issues: ```bash # CLI version postbase --version # System info uname -a node --version docker --version # Daemon status postbase status --json # Docker containers docker ps # Recent logs docker logs postbase-postgres --tail 50 ``` #### Support Channels * **GitHub Issues**: [https://github.com/zeroexcore/postbase/issues](https://github.com/zeroexcore/postbase/issues) * **Discord**: Coming soon * **Email**: [support@postbase.sh](mailto\:support@postbase.sh) #### Known Issues Check the GitHub issues for known problems: [https://github.com/zeroexcore/postbase/issues?q=is%3Aopen+label%3Abug](https://github.com/zeroexcore/postbase/issues?q=is%3Aopen+label%3Abug) ## Admin UI Web-based interface for database management. ### Overview The Postbase Admin UI provides a visual interface for: * Browsing database schemas * Viewing and editing data * Executing SQL queries * Managing migrations ### Starting the UI ```bash # From postbase directory pnpm --filter @postbase/ui dev ``` Open [http://localhost:3000](http://localhost:3000) ### Features #### Dashboard The dashboard shows: * Connected databases * Quick stats (tables, size, connections) * Recent activity #### Schema Browser Navigate your database structure: ``` 📁 public ├── 📋 users │ ├── id (serial) │ ├── email (varchar) │ ├── name (varchar) │ └── created_at (timestamp) ├── 📋 posts │ ├── id (serial) │ ├── user_id (integer) │ └── content (text) └── 📋 comments ``` Click any table to view its structure and data. 
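The tree mirrors what PostgreSQL itself reports in `information_schema`, so the same listing is available outside the UI. A rough equivalent as a sketch (it assumes the SDK's `db.query` helper accepts a raw SQL string, as in the health-check example in the deployment docs, and that it returns the result rows):

```typescript
import { createClient } from '@postbase/sdk'

const db = createClient({
  connectionString: 'postgresql://postgres:postgres@localhost:5432/myapp',
})

// List tables and columns in the public schema - roughly the
// information the schema browser renders as a tree.
const columns = await db.query(`
  SELECT table_name, column_name, data_type
  FROM information_schema.columns
  WHERE table_schema = 'public'
  ORDER BY table_name, ordinal_position
`)
console.log(columns)
```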
#### Data Viewer Browse and edit table data: | id | email | name | created\_at | | -- | ---------------------------------------------- | ----- | ---------------- | | 1 | [alice@example.com](mailto\:alice@example.com) | Alice | 2026-01-25 10:30 | | 2 | [bob@example.com](mailto\:bob@example.com) | Bob | 2026-01-25 11:00 | Features: * **Pagination** - Navigate large datasets * **Sorting** - Click column headers * **Filtering** - Filter by column values * **Inline editing** - Double-click to edit * **Add/Delete rows** - CRUD operations #### Schema Editor Create and modify tables visually: ##### Create Table 1. Click "New Table" 2. Enter table name 3. Add columns with types 4. Configure constraints (PK, NOT NULL, UNIQUE) 5. Click "Create" ##### Modify Table 1. Select table from browser 2. Click "Structure" tab 3. Add/edit/remove columns 4. Manage indexes 5. Configure foreign keys ##### Supported Column Types | Type | Description | | ------------ | ------------------------- | | `serial` | Auto-incrementing integer | | `integer` | 32-bit integer | | `bigint` | 64-bit integer | | `varchar(n)` | Variable-length string | | `text` | Unlimited text | | `boolean` | true/false | | `timestamp` | Date and time | | `jsonb` | JSON data | | `uuid` | UUID | #### SQL Editor Execute arbitrary SQL: ```sql SELECT u.name, COUNT(p.id) as post_count FROM users u LEFT JOIN posts p ON p.user_id = u.id GROUP BY u.id ORDER BY post_count DESC LIMIT 10; ``` Features: * **Monaco editor** - VS Code-like experience * **Syntax highlighting** - PostgreSQL-aware * **Auto-completion** - Table and column names * **Multiple queries** - Run multiple statements * **Export results** - CSV/JSON export Keyboard shortcuts: * `Ctrl+Enter` / `Cmd+Enter` - Execute query * `Ctrl+Shift+Enter` - Execute selected text * `Ctrl+S` / `Cmd+S` - Save to history #### Migration Manager View and manage migrations: ##### Applied Migrations | Name | Applied | Checksum | | ----------------------- | ------------ | --------- | | 20260125\_create\_users | Jan 25 10:30 | abc123... | | 20260125\_create\_posts | Jan 25 10:31 | def456... | ##### Pending Migrations | Name | Status | | ---------------------- | ------- | | 20260126\_add\_indexes | Pending | Actions: * **Apply All** - Run all pending migrations * **Apply One** - Run specific migration * **Rollback** - Revert last migration ### Configuration #### Theme Toggle dark/light mode in Settings. #### Connection View and copy connection details: ``` Host: localhost Port: 5432 Database: myapp User: postgres Password: postgres ``` #### Preferences * Default schema * Results per page * Editor font size * Auto-refresh interval ### Screenshots #### Dashboard View all your databases at a glance. #### Schema Browser Navigate tables, views, and functions in a tree structure. #### Data Viewer Edit data inline with pagination and filtering. #### SQL Editor Execute queries with syntax highlighting and auto-completion. ### Keyboard Shortcuts | Shortcut | Action | | ------------ | ------------- | | `Ctrl+K` | Quick search | | `Ctrl+N` | New query tab | | `Ctrl+Enter` | Execute query | | `Ctrl+S` | Save query | | `Escape` | Close modal | ## Database Management Create, list, and manage local PostgreSQL databases. 
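Everything on this page works from the CLI or, for scripting, through the daemon's HTTP API (covered at the end of this page). For example, a minimal TypeScript sketch that creates a database via the documented `POST /databases` endpoint, equivalent to `postbase db create newdb` (the response body shape is assumed to be JSON):

```typescript
// create-db.ts - create a database through the daemon API.
const res = await fetch('http://localhost:9432/databases', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'newdb' }),
})
if (!res.ok) throw new Error(`Create failed: HTTP ${res.status}`)
console.log(await res.json())
```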
### Creating Databases #### Basic Creation ```bash postbase db create myapp ``` Output: ``` ✓ Database 'myapp' created Connection: postgresql://postgres:postgres@localhost:5432/myapp ``` #### With Template Create from an existing database: ```bash postbase db create myapp_test --template myapp ``` ### Listing Databases ```bash postbase db list ``` Output: ``` Databases: myapp Size: 12.5 MB Tables: 8 Created: 2026-01-25 10:30:00 demo Size: 3.2 MB Tables: 3 Created: 2026-01-24 15:45:00 ``` #### JSON Output ```bash postbase db list --json ``` ```json { "databases": [ { "name": "myapp", "size": 13107200, "tableCount": 8, "createdAt": "2026-01-25T10:30:00Z" } ] } ``` ### Database Details ```bash postbase db info myapp ``` Output: ``` Database: myapp Size: 12.5 MB Tables: 8 Views: 2 Functions: 5 Indexes: 12 Schemas: - public (8 tables) - auth (2 tables) Extensions: - uuid-ossp - pgcrypto ``` ### Deleting Databases #### With Confirmation ```bash postbase db drop myapp ``` Output: ``` ⚠ This will permanently delete the database 'myapp' and all its data. This cannot be undone. Are you sure? (y/N): y ✓ Database 'myapp' dropped ``` #### Force Delete ```bash postbase db drop myapp --force ``` ### Connection Details #### Get Connection String ```bash postbase connect myapp ``` Output: ``` Database: myapp Connection Details: Host: localhost Port: 5432 Database: myapp User: postgres Password: postgres Connection String: postgresql://postgres:postgres@localhost:5432/myapp psql Command: psql "postgresql://postgres:postgres@localhost:5432/myapp" ``` #### Copy to Clipboard ```bash postbase connect myapp --copy ``` #### JSON Output ```bash postbase connect myapp --json ``` ```json { "host": "localhost", "port": 5432, "database": "myapp", "user": "postgres", "password": "postgres", "connectionString": "postgresql://postgres:postgres@localhost:5432/myapp" } ``` ### Using with Applications #### Environment Variables ```bash # .env DATABASE_URL="postgresql://postgres:postgres@localhost:5432/myapp" ``` #### Node.js (pg) ```javascript import pg from 'pg' const pool = new pg.Pool({ connectionString: 'postgresql://postgres:postgres@localhost:5432/myapp', }) const result = await pool.query('SELECT * FROM users') ``` #### Postbase SDK ```typescript import { createClient } from '@postbase/sdk' const db = createClient({ connectionString: 'postgresql://postgres:postgres@localhost:5432/myapp', }) const users = await db.from('users').execute() ``` #### Prisma ```prisma // schema.prisma datasource db { provider = "postgresql" url = env("DATABASE_URL") } ``` #### Drizzle ```typescript import { drizzle } from 'drizzle-orm/node-postgres' import { Pool } from 'pg' const pool = new Pool({ connectionString: 'postgresql://postgres:postgres@localhost:5432/myapp', }) const db = drizzle(pool) ``` ### Direct psql Access ```bash # Connect with psql psql "postgresql://postgres:postgres@localhost:5432/myapp" # Or using environment variable export PGPASSWORD=postgres psql -h localhost -U postgres -d myapp ``` ### Database Operations via API All operations are available via the daemon API: #### List Databases ```bash curl http://localhost:9432/databases ``` #### Create Database ```bash curl -X POST http://localhost:9432/databases \ -H "Content-Type: application/json" \ -d '{"name": "newdb"}' ``` #### Get Database Details ```bash curl http://localhost:9432/databases/myapp ``` #### Delete Database ```bash curl -X DELETE http://localhost:9432/databases/myapp ``` ### Best Practices #### Naming Conventions * Use lowercase names: `myapp` not 
`MyApp` * Use underscores for multi-word: `my_app` not `my-app` * Avoid reserved words: `user`, `order`, `table` #### Development Workflow ```bash # Create development database postbase db create myapp_dev # Create test database postbase db create myapp_test --template myapp_dev # Run tests npm test # Drop test database postbase db drop myapp_test --force ``` #### Database per Environment ```bash # Local development postbase db create myapp_dev # Local testing postbase db create myapp_test # CI environment (use cloud) postbase cloud provision ci --ephemeral ``` ## Migrations SQL-first migrations with checksum verification and rollback support. ### Overview Postbase uses plain SQL files for migrations: ``` migrations/ ├── 20260125090000_create_users.up.sql ├── 20260125090000_create_users.down.sql ├── 20260125090100_create_posts.up.sql ├── 20260125090100_create_posts.down.sql ├── 20260125090200_add_indexes.up.sql └── 20260125090200_add_indexes.down.sql ``` * `.up.sql` - Applied when migrating up * `.down.sql` - Applied when rolling back ### Creating Migrations #### Generate Files ```bash postbase migrate new create_users ``` Creates: ``` Created migration files: migrations/20260125090000_create_users.up.sql migrations/20260125090000_create_users.down.sql ``` #### Custom Directory ```bash postbase migrate new create_users --dir ./db/migrations ``` #### Write Migration SQL Edit `migrations/20260125090000_create_users.up.sql`: ```sql -- Create users table CREATE TABLE users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, name VARCHAR(255) NOT NULL, password_hash VARCHAR(255) NOT NULL, role VARCHAR(50) DEFAULT 'user', email_verified BOOLEAN DEFAULT FALSE, created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(), updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() ); -- Create indexes CREATE INDEX idx_users_email ON users(email); CREATE INDEX idx_users_role ON users(role); -- Create trigger for updated_at CREATE OR REPLACE FUNCTION update_updated_at_column() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = NOW(); RETURN NEW; END; $$ LANGUAGE plpgsql; CREATE TRIGGER update_users_updated_at BEFORE UPDATE ON users FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); ``` Edit `migrations/20260125090000_create_users.down.sql`: ```sql DROP TRIGGER IF EXISTS update_users_updated_at ON users; DROP FUNCTION IF EXISTS update_updated_at_column(); DROP TABLE IF EXISTS users; ``` ### Applying Migrations #### Apply All Pending ```bash postbase migrate up ``` Output: ``` Applying migrations to myapp (public)... ↑ 20260125090000_create_users ✓ Applied in 45ms ↑ 20260125090100_create_posts ✓ Applied in 32ms Applied 2 migrations in 77ms ``` #### Apply to Specific Database ```bash postbase migrate up --database myapp ``` #### Apply to Specific Schema ```bash postbase migrate up --schema auth ``` #### Dry Run Preview without applying: ```bash postbase migrate up --dry-run ``` ### Rolling Back #### Rollback Last Migration ```bash postbase migrate down ``` Output: ``` Rolling back last migration in myapp (public)... 
↓ 20260125090100_create_posts ✓ Rolled back in 28ms ``` #### Rollback Multiple ```bash postbase migrate down --count 3 ``` #### Rollback to Specific Migration ```bash postbase migrate down --to 20260125090000 ``` ### Checking Status ```bash postbase migrate status ``` Output: ``` Migration Status for myapp (public) Applied (2): ✓ 20260125090000_create_users Applied: 2026-01-25 10:30:00 ✓ 20260125090100_create_posts Applied: 2026-01-25 10:30:01 Pending (1): ○ 20260125090200_add_indexes Total: 3 migrations, 2 applied, 1 pending ``` #### JSON Output ```bash postbase migrate status --json ``` ### Multi-Schema Support #### Separate Migration Directories ``` migrations/ ├── public/ │ ├── 20260125_create_users.up.sql │ └── 20260125_create_users.down.sql └── auth/ ├── 20260125_create_sessions.up.sql └── 20260125_create_sessions.down.sql ``` #### Apply per Schema ```bash # Apply public schema migrations postbase migrate up --schema public # Apply auth schema migrations postbase migrate up --schema auth ``` #### Status per Schema ```bash postbase migrate status --schema auth ``` ### Migration Tracking Postbase tracks migrations in the `_postbase_migrations` table: ```sql SELECT * FROM _postbase_migrations; ``` | id | name | checksum | applied\_at | | -- | ----------------------- | --------- | ------------------- | | 1 | 20260125\_create\_users | abc123... | 2026-01-25 10:30:00 | | 2 | 20260125\_create\_posts | def456... | 2026-01-25 10:30:01 | #### Checksum Verification Postbase computes SHA-256 checksums of migration files. If a file changes after being applied, you'll see a warning: ``` ⚠ Checksum mismatch for 20260125_create_users File may have been modified after migration was applied. ``` ### Best Practices #### Naming Conventions ``` YYYYMMDDHHMMSS_descriptive_name.up.sql ``` Examples: * `20260125090000_create_users.up.sql` * `20260125090100_add_email_verified_to_users.up.sql` * `20260125090200_create_orders_table.up.sql` #### Idempotent Migrations Use `IF NOT EXISTS` and `IF EXISTS`: ```sql -- Up migration CREATE TABLE IF NOT EXISTS users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL ); -- Down migration DROP TABLE IF EXISTS users; ``` #### Transaction Safety Wrap complex migrations in transactions: ```sql BEGIN; ALTER TABLE users ADD COLUMN role VARCHAR(50); UPDATE users SET role = 'user' WHERE role IS NULL; ALTER TABLE users ALTER COLUMN role SET NOT NULL; COMMIT; ``` #### Data Migrations Separate schema and data migrations: ```sql -- 20260125_add_role_to_users.up.sql (schema change) ALTER TABLE users ADD COLUMN role VARCHAR(50); -- 20260125_set_default_roles.up.sql (data migration) UPDATE users SET role = 'user' WHERE role IS NULL; ALTER TABLE users ALTER COLUMN role SET NOT NULL; ``` #### Testing Migrations ```bash # Create test database postbase db create myapp_test # Apply migrations postbase migrate up --database myapp_test # Run tests npm test # Cleanup postbase db drop myapp_test --force ``` ### Common Patterns #### Adding a Column ```sql -- Up ALTER TABLE users ADD COLUMN phone VARCHAR(20); -- Down ALTER TABLE users DROP COLUMN phone; ``` #### Adding an Index ```sql -- Up CREATE INDEX CONCURRENTLY idx_users_phone ON users(phone); -- Down DROP INDEX CONCURRENTLY IF EXISTS idx_users_phone; ``` #### Renaming a Column ```sql -- Up ALTER TABLE users RENAME COLUMN name TO full_name; -- Down ALTER TABLE users RENAME COLUMN full_name TO name; ``` #### Adding a Foreign Key ```sql -- Up ALTER TABLE posts ADD CONSTRAINT fk_posts_user FOREIGN KEY (user_id) REFERENCES users(id) ON 
DELETE CASCADE; -- Down ALTER TABLE posts DROP CONSTRAINT fk_posts_user; ``` #### Creating an Enum ```sql -- Up CREATE TYPE user_role AS ENUM ('user', 'admin', 'moderator'); ALTER TABLE users ADD COLUMN role user_role DEFAULT 'user'; -- Down ALTER TABLE users DROP COLUMN role; DROP TYPE user_role; ``` ## Local Development Overview Postbase provides a complete local PostgreSQL development environment that runs entirely on your machine. ### Architecture ``` ┌─────────────────────────────────────────────────────────┐ │ Your Machine │ ├─────────────────────────────────────────────────────────┤ │ │ │ ┌─────────────┐ ┌─────────────────────────────┐ │ │ │ Your App │ │ Admin UI │ │ │ │ │ │ localhost:3000 │ │ │ └──────┬──────┘ └────────────┬────────────────┘ │ │ │ │ │ │ │ ┌────────────────┴────────────────┐ │ │ │ │ Postbase Daemon │ │ │ │ │ localhost:9432 │ │ │ │ │ │ │ │ │ │ • Database Management │ │ │ │ │ • Migration Engine │ │ │ │ │ • Schema Operations │ │ │ │ │ • Query Execution │ │ │ │ └────────────────┬────────────────┘ │ │ │ │ │ │ │ ┌────────────────┴────────────────┐ │ │ └────────▶│ PostgreSQL (Docker) │ │ │ │ localhost:5432 │ │ │ │ │ │ │ │ • Single instance │ │ │ │ • Multiple databases │ │ │ │ • Pre-installed extensions │ │ │ └─────────────────────────────────┘ │ │ │ └─────────────────────────────────────────────────────────┘ ``` ### Key Features #### Single PostgreSQL Instance Unlike solutions that create one container per project, Postbase uses a single PostgreSQL instance that hosts multiple databases. This approach: * **Reduces resource usage** - One container instead of many * **Simplifies management** - Single point of control * **Enables cross-database queries** - Connect to multiple databases * **Mirrors production** - Same pattern as managed databases #### SQL-First Migrations Migrations are plain SQL files, giving you full control: ```sql -- migrations/20260125_create_users.up.sql CREATE TABLE users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, created_at TIMESTAMP DEFAULT NOW() ); CREATE INDEX idx_users_email ON users(email); ``` Benefits: * **No abstraction layer** - Write real PostgreSQL SQL * **Version control friendly** - Review changes in PRs * **Checksum verification** - Detect unauthorized changes * **Multi-schema support** - Organize by schema #### Modern Admin UI A web-based interface for database management: * **Schema browser** - Navigate tables, views, functions * **Data viewer** - Browse and edit data with inline editing * **SQL editor** - Execute queries with Monaco editor * **Migration manager** - Apply and rollback migrations #### Pre-installed Extensions The Supabase PostgreSQL image includes popular extensions: | Extension | Purpose | | -------------------- | ------------------------ | | `pg_cron` | Job scheduling | | `pgvector` | Vector similarity search | | `PostGIS` | Geospatial queries | | `pg_stat_statements` | Query performance | | `uuid-ossp` | UUID generation | | `pgcrypto` | Cryptographic functions | ### Getting Started #### Start the Daemon ```bash postbase start ``` #### Create a Database ```bash postbase db create myapp ``` #### Get Connection Info ```bash postbase connect myapp ``` #### Open Admin UI ```bash # Start the UI (from postbase repo) pnpm --filter @postbase/ui dev ``` Then open [http://localhost:3000](http://localhost:3000) ### Configuration #### Default Ports | Service | Port | | ---------- | ---- | | Daemon API | 9432 | | PostgreSQL | 5432 | | Admin UI | 3000 | #### Default Credentials ``` Host: localhost Port: 5432 User: 
postgres Password: postgres ``` #### Data Directory All data is stored in `~/.postbase/`: ``` ~/.postbase/ ├── data/ # PostgreSQL data files ├── state.json # Daemon state └── logs/ # Log files ``` ### Next Steps * [Database Management](/local/databases) - Create, list, delete databases * [Migrations](/local/migrations) - Manage schema changes * [Admin UI](/local/admin-ui) - Use the web interface ## SQL Editor Execute PostgreSQL queries with a modern code editor. ### Features * **Monaco Editor** - Same editor as VS Code * **Syntax Highlighting** - PostgreSQL-aware highlighting * **Auto-completion** - Tables, columns, functions * **Multiple Tabs** - Work on multiple queries * **Query History** - Access previous queries * **Result Export** - CSV and JSON formats ### Executing Queries #### Basic Execution Type your query and press `Ctrl+Enter` (or `Cmd+Enter` on Mac): ```sql SELECT * FROM users WHERE active = true; ``` #### Multiple Statements Separate statements with semicolons: ```sql SELECT COUNT(*) FROM users; SELECT COUNT(*) FROM posts; SELECT COUNT(*) FROM comments; ``` All results appear in separate tabs. #### Execute Selection Select specific text and press `Ctrl+Shift+Enter`: ```sql -- This won't run SELECT * FROM users; -- Only this runs when selected SELECT * FROM posts; ``` ### Auto-completion The editor provides intelligent suggestions: #### Tables ```sql SELECT * FROM us| ↓ users user_sessions user_roles ``` #### Columns ```sql SELECT email, | ↓ id name created_at ``` #### Functions ```sql SELECT NOW| ↓ NOW() NULLIF() ``` #### Keywords ```sql SEL| ↓ SELECT SELF ``` ### Query Results #### Table View Results display in a sortable, scrollable table: | id | email | name | | -- | ---------------------------------------------- | ----- | | 1 | [alice@example.com](mailto\:alice@example.com) | Alice | | 2 | [bob@example.com](mailto\:bob@example.com) | Bob | #### Row Count Shows affected rows for INSERT, UPDATE, DELETE: ``` Query executed successfully 3 rows affected Time: 45ms ``` #### Errors Syntax and runtime errors display with line numbers: ``` ERROR: column "emali" does not exist LINE 1: SELECT emali FROM users ^ HINT: Perhaps you meant to reference the column "users.email" ``` ### Exporting Results #### CSV Export Click "Export CSV": ```csv id,email,name,created_at 1,alice@example.com,Alice,2026-01-25 10:30:00 2,bob@example.com,Bob,2026-01-25 11:00:00 ``` #### JSON Export Click "Export JSON": ```json [ {"id": 1, "email": "alice@example.com", "name": "Alice"}, {"id": 2, "email": "bob@example.com", "name": "Bob"} ] ``` ### Query History Access previous queries via the history panel: 1. Click "History" in sidebar 2. Browse recent queries 3. Click to load into editor 4. Edit and re-execute History is stored locally and persists across sessions. 
### Query Templates Common query patterns: #### Select All ```sql SELECT * FROM {table} LIMIT 100; ``` #### Count Rows ```sql SELECT COUNT(*) FROM {table}; ``` #### Find by ID ```sql SELECT * FROM {table} WHERE id = {id}; ``` #### Join Tables ```sql SELECT a.*, b.name as related_name FROM {table_a} a JOIN {table_b} b ON a.{fk} = b.id LIMIT 100; ``` #### Group and Count ```sql SELECT {column}, COUNT(*) as count FROM {table} GROUP BY {column} ORDER BY count DESC; ``` ### Keyboard Shortcuts | Shortcut | Action | | ------------------ | ---------------------- | | `Ctrl+Enter` | Execute query | | `Ctrl+Shift+Enter` | Execute selection | | `Ctrl+S` | Save to history | | `Ctrl+/` | Toggle comment | | `Ctrl+D` | Select next occurrence | | `Alt+Up/Down` | Move line up/down | | `Ctrl+Shift+K` | Delete line | | `Ctrl+Space` | Trigger autocomplete | ### Settings #### Font Size Adjust editor font size in Settings. #### Theme Editor follows app theme (dark/light). #### Tab Size Configure indentation (default: 2 spaces). #### Word Wrap Toggle line wrapping for long queries. ### Tips #### Format SQL Use online formatters or install: ```bash npm install -g sql-formatter ``` #### Explain Queries Analyze query performance: ```sql EXPLAIN ANALYZE SELECT * FROM users WHERE email LIKE '%@example.com'; ``` #### Transaction Testing Test changes without committing: ```sql BEGIN; UPDATE users SET name = 'Test' WHERE id = 1; SELECT * FROM users WHERE id = 1; ROLLBACK; ``` #### Parameterized Queries Use `$1`, `$2` syntax for parameters (SDK handles this): ```sql SELECT * FROM users WHERE email = $1 AND active = $2; ``` ## CI/CD Integration Automate database operations in your CI/CD pipeline. ### Overview Integrate Postbase into your pipeline: 1. Run migrations 2. Generate types 3. Run tests 4. Deploy ### GitHub Actions #### Basic Setup ```yaml # .github/workflows/deploy.yml name: Deploy on: push: branches: [main] jobs: deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: 22 - name: Install dependencies run: npm ci - name: Install Postbase CLI run: npm install -g postbase - name: Run migrations env: POSTBASE_TOKEN: ${{ secrets.POSTBASE_TOKEN }} run: | postbase cloud migrate up \ -p my-project \ -d production - name: Deploy application run: npm run deploy ``` #### Full Pipeline ```yaml name: CI/CD Pipeline on: push: branches: [main, develop] pull_request: branches: [main] env: POSTBASE_TOKEN: ${{ secrets.POSTBASE_TOKEN }} jobs: test: runs-on: ubuntu-latest services: postgres: image: supabase/postgres:15.14.1.072 env: POSTGRES_PASSWORD: postgres ports: - 5432:5432 options: >- --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: 22 cache: npm - name: Install dependencies run: npm ci - name: Install Postbase CLI run: npm install -g postbase - name: Run migrations (local) env: DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres run: postbase migrate up - name: Generate types env: DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres run: postbase types generate --output ./src/db/types.ts --all - name: Check types match run: | git diff --exit-code src/db/types.ts || \ (echo "Types out of date! 
Run 'postbase types generate'" && exit 1) - name: Run tests env: DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres run: npm test deploy-staging: needs: test if: github.ref == 'refs/heads/develop' runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: 22 - name: Install Postbase CLI run: npm install -g postbase - name: Run migrations (staging) run: | postbase cloud migrate up \ -p my-project \ -d staging - name: Deploy to staging run: npm run deploy:staging deploy-production: needs: test if: github.ref == 'refs/heads/main' runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: 22 - name: Install Postbase CLI run: npm install -g postbase - name: Create backup run: | postbase cloud backups create \ -p my-project \ -d production - name: Run migrations (production) run: | postbase cloud migrate up \ -p my-project \ -d production - name: Deploy to production run: npm run deploy:production ``` ### GitLab CI ```yaml # .gitlab-ci.yml stages: - test - deploy variables: POSTGRES_PASSWORD: postgres DATABASE_URL: postgresql://postgres:postgres@postgres:5432/postgres test: stage: test image: node:22 services: - name: supabase/postgres:15.14.1.072 alias: postgres script: - npm ci - npm install -g postbase - postbase migrate up - npm test deploy: stage: deploy image: node:22 only: - main script: - npm ci - npm install -g postbase - postbase cloud migrate up -p my-project -d production - npm run deploy environment: name: production ``` ### CircleCI ```yaml # .circleci/config.yml version: 2.1 jobs: test: docker: - image: node:22 - image: supabase/postgres:15.14.1.072 environment: POSTGRES_PASSWORD: postgres steps: - checkout - run: npm ci - run: npm install -g postbase - run: name: Wait for Postgres command: | for i in {1..30}; do pg_isready -h localhost -p 5432 && break sleep 1 done - run: name: Run migrations environment: DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres command: postbase migrate up - run: npm test deploy: docker: - image: node:22 steps: - checkout - run: npm ci - run: npm install -g postbase - run: name: Deploy migrations command: | postbase cloud migrate up \ -p my-project \ -d production - run: npm run deploy workflows: build-and-deploy: jobs: - test - deploy: requires: - test filters: branches: only: main ``` ### Best Practices #### 1. Pre-Deployment Backups Always backup before migrations: ```yaml - name: Create backup run: postbase cloud backups create -p $PROJECT -d production ``` #### 2. Migration Dry Run Test migrations before applying: ```yaml - name: Dry run migrations run: postbase migrate up --dry-run ``` #### 3. Type Checking Verify generated types match: ```yaml - name: Check types run: | postbase types generate --output ./src/db/types.ts git diff --exit-code src/db/types.ts ``` #### 4. Rollback Strategy Have rollback ready: ```yaml - name: Rollback on failure if: failure() run: | postbase migrate down # Or restore from backup postbase cloud backups restore $BACKUP_ID -p $PROJECT -d production ``` #### 5. Environment Isolation Use separate databases per environment: ```yaml deploy-staging: environment: staging env: DATABASE: staging deploy-production: environment: production env: DATABASE: production ``` ### Secrets Management #### GitHub ```bash # Set secrets gh secret set POSTBASE_TOKEN --body "pb_xxx..." gh secret set DATABASE_URL --body "postgresql://..." 
``` #### GitLab Settings → CI/CD → Variables #### CircleCI Project Settings → Environment Variables ### Troubleshooting #### Migration Failed ```yaml - name: Check migration status if: failure() run: postbase migrate status ``` #### Connection Issues ```yaml - name: Test connection run: | psql "$DATABASE_URL" -c "SELECT 1" ``` #### Token Expired Regenerate token: ```bash postbase cloud login # Copy new token to CI secrets ``` ## Migrating from Supabase Migrate your Supabase project to Postbase. ### Overview Postbase is designed for easy migration from Supabase: * Same PostgreSQL extensions * Compatible query API * Similar project structure ### Migration Steps #### 1. Export Database From Supabase Dashboard: 1. Go to Settings → Database 2. Click "Download backup" 3. Save the `.sql` file Or via CLI: ```bash # Using Supabase CLI supabase db dump -f backup.sql ``` #### 2. Create Postbase Project ```bash # Create cloud project postbase cloud projects create myapp # Provision database postbase cloud provision production -p myapp ``` #### 3. Import Database ```bash # Get connection string POSTBASE_URL=$(postbase cloud url production -p myapp) # Import backup psql "$POSTBASE_URL" < backup.sql ``` #### 4. Update SDK Replace Supabase client: ```typescript // Before: Supabase import { createClient } from '@supabase/supabase-js' const supabase = createClient( process.env.SUPABASE_URL!, process.env.SUPABASE_KEY! ) // After: Postbase import { createClient } from '@postbase/sdk' import type { Database } from './db/types' const db = createClient({ connectionString: process.env.DATABASE_URL!, }) ``` #### 5. Update Queries Most queries work unchanged: ```typescript // Both work the same const users = await db .from('users') .eq('active', true) .order('created_at', { ascending: false }) .limit(10) .execute() ``` ### API Compatibility #### Fully Compatible | Method | Status | | ------------------------------------ | ------ | | `.from()` | ✅ | | `.select()` | ✅ | | `.insert()` | ✅ | | `.update()` | ✅ | | `.delete()` | ✅ | | `.eq()` | ✅ | | `.neq()` | ✅ | | `.gt()`, `.gte()`, `.lt()`, `.lte()` | ✅ | | `.like()`, `.ilike()` | ✅ | | `.in()` | ✅ | | `.is()` | ✅ | | `.order()` | ✅ | | `.limit()` | ✅ | | `.single()` | ✅ | | `.maybeSingle()` | ✅ | | `.rpc()` | ✅ | #### Different Approach | Feature | Supabase | Postbase | | -------- | ---------------- | -------------- | | Realtime | WebSocket | LISTEN/NOTIFY | | Auth | Supabase Auth | Bring your own | | Storage | Supabase Storage | Bring your own | ### Extension Compatibility Postbase includes the same PostgreSQL extensions as Supabase: | Extension | Status | | -------------------- | ------ | | `pg_cron` | ✅ | | `pgvector` | ✅ | | `PostGIS` | ✅ | | `uuid-ossp` | ✅ | | `pgcrypto` | ✅ | | `pg_stat_statements` | ✅ | ### Realtime Migration #### Supabase Realtime ```typescript // Supabase supabase .channel('changes') .on('postgres_changes', { event: 'INSERT', table: 'users' }, handler) .subscribe() ``` #### Postbase Realtime ```typescript // Postbase (requires trigger setup) db.channel('users') .on('INSERT', handler) .subscribe() ``` See [Realtime](/sdk/realtime) for setup instructions. ### Auth Migration Postbase doesn't include built-in auth. Options: 1. **Auth.js** (NextAuth) - OAuth, email, credentials 2. **Lucia** - Session-based auth 3. **Clerk** - Managed auth service 4. 
**Custom** - Roll your own #### Database Tables Keep your existing auth tables or migrate to Auth.js: ```sql -- Your existing users table works SELECT * FROM auth.users; -- Or create new schema CREATE TABLE users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, password_hash VARCHAR(255) NOT NULL, created_at TIMESTAMP DEFAULT NOW() ); ``` ### Storage Migration Postbase doesn't include object storage. Options: 1. **Cloudflare R2** - S3-compatible, cheap 2. **AWS S3** - Industry standard 3. **MinIO** - Self-hosted S3 #### Migrating Files ```bash # Export from Supabase Storage supabase storage ls -b bucket-name > files.txt # Download files cat files.txt | while read file; do supabase storage cp "gs://bucket-name/$file" "./$file" done # Upload to R2/S3 aws s3 sync ./files s3://new-bucket/ ``` ### Edge Functions → Cloudflare Workers #### Supabase Edge Function ```typescript // supabase/functions/hello/index.ts Deno.serve(async (req) => { return new Response('Hello!') }) ``` #### Cloudflare Worker ```typescript // src/index.ts export default { fetch(request: Request) { return new Response('Hello!') } } ``` ### Environment Variables Update your environment: ```bash # Before SUPABASE_URL=https://xxx.supabase.co SUPABASE_ANON_KEY=eyJ... SUPABASE_SERVICE_KEY=eyJ... # After DATABASE_URL=postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable ``` ### Testing Migration 1. **Local testing first**: ```bash postbase start postbase db create myapp_test psql "postgresql://localhost:5432/myapp_test" < backup.sql ``` 2. **Run your test suite** 3. **Deploy to staging**: ```bash postbase cloud provision staging -p myapp ``` 4. **Production cutover** ### Rollback Plan Keep Supabase running during migration: 1. Enable read-only mode on Supabase 2. Export final backup 3. Import to Postbase 4. Switch traffic 5. Monitor for issues 6. Decommission Supabase after validation period ## Multi-Schema Setup Organize your database with multiple schemas. ### Why Multiple Schemas? 
Schemas help organize large databases: * **Separation of concerns** - auth, billing, app data * **Access control** - Different permissions per schema * **Name collision avoidance** - Same table names in different contexts * **Logical grouping** - Related tables together ### Creating Schemas ```sql -- Create schemas CREATE SCHEMA auth; CREATE SCHEMA billing; CREATE SCHEMA app; ``` ### Migration Structure Organize migrations by schema: ``` migrations/ ├── public/ │ ├── 20260125_create_users.up.sql │ └── 20260125_create_users.down.sql ├── auth/ │ ├── 20260125_create_sessions.up.sql │ └── 20260125_create_sessions.down.sql └── billing/ ├── 20260125_create_subscriptions.up.sql └── 20260125_create_subscriptions.down.sql ``` ### Running Migrations #### Per Schema ```bash # Apply public schema postbase migrate up --schema public # Apply auth schema postbase migrate up --schema auth # Apply billing schema postbase migrate up --schema billing ``` #### All Schemas ```bash # Apply all schemas in order for schema in public auth billing; do postbase migrate up --schema $schema done ``` ### Example Schema: Auth ```sql -- migrations/auth/20260125_create_sessions.up.sql CREATE SCHEMA IF NOT EXISTS auth; CREATE TABLE auth.users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, password_hash VARCHAR(255) NOT NULL, email_verified BOOLEAN DEFAULT FALSE, created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(), updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() ); CREATE TABLE auth.sessions ( id UUID PRIMARY KEY DEFAULT gen_random_uuid(), user_id INTEGER NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE, token_hash VARCHAR(255) NOT NULL, expires_at TIMESTAMP WITH TIME ZONE NOT NULL, created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() ); CREATE INDEX idx_sessions_user_id ON auth.sessions(user_id); CREATE INDEX idx_sessions_expires_at ON auth.sessions(expires_at); ``` ### Example Schema: Billing ```sql -- migrations/billing/20260125_create_subscriptions.up.sql CREATE SCHEMA IF NOT EXISTS billing; CREATE TABLE billing.plans ( id SERIAL PRIMARY KEY, name VARCHAR(100) NOT NULL, price_cents INTEGER NOT NULL, features JSONB DEFAULT '[]', created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() ); CREATE TABLE billing.subscriptions ( id SERIAL PRIMARY KEY, user_id INTEGER NOT NULL, -- References auth.users plan_id INTEGER NOT NULL REFERENCES billing.plans(id), status VARCHAR(50) DEFAULT 'active', current_period_start TIMESTAMP WITH TIME ZONE NOT NULL, current_period_end TIMESTAMP WITH TIME ZONE NOT NULL, created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() ); CREATE INDEX idx_subscriptions_user_id ON billing.subscriptions(user_id); ``` ### Querying Across Schemas #### Direct Query ```sql SELECT u.email, s.status, p.name as plan_name FROM auth.users u JOIN billing.subscriptions s ON s.user_id = u.id JOIN billing.plans p ON p.id = s.plan_id WHERE u.id = 1; ``` #### SDK with Schema ```typescript // Query specific schema const users = await db .from('auth.users') .eq('email_verified', true) .execute() // Query billing const subscriptions = await db .from('billing.subscriptions') .eq('user_id', 1) .execute() ``` ### Type Generation Generate types for each schema: ```bash # Generate for all schemas postbase types generate \ --database myapp \ --schema public \ --output ./src/db/types/public.ts postbase types generate \ --database myapp \ --schema auth \ --output ./src/db/types/auth.ts postbase types generate \ --database myapp \ --schema billing \ --output ./src/db/types/billing.ts ``` #### Unified Types Create a combined type 
file: ```typescript // src/db/types/index.ts import type { Database as PublicDB } from './public' import type { Database as AuthDB } from './auth' import type { Database as BillingDB } from './billing' export interface Database { public: PublicDB['public'] auth: AuthDB['auth'] billing: BillingDB['billing'] } ``` ### Search Path Set default schema search order: ```sql -- Set for session SET search_path TO app, public; -- Set for role ALTER ROLE myapp_user SET search_path TO app, public; ``` ### Schema Permissions #### Create Role per Schema ```sql -- Auth schema role CREATE ROLE auth_admin; GRANT ALL ON SCHEMA auth TO auth_admin; GRANT ALL ON ALL TABLES IN SCHEMA auth TO auth_admin; -- Billing schema role CREATE ROLE billing_admin; GRANT ALL ON SCHEMA billing TO billing_admin; GRANT ALL ON ALL TABLES IN SCHEMA billing TO billing_admin; -- App role (read-only on auth/billing) CREATE ROLE app_user; GRANT USAGE ON SCHEMA auth TO app_user; GRANT SELECT ON auth.users TO app_user; GRANT USAGE ON SCHEMA billing TO app_user; GRANT SELECT ON billing.subscriptions TO app_user; ``` ### Best Practices #### 1. Schema Naming * Use lowercase: `auth`, `billing`, `app` * Keep names short but descriptive * Avoid generic names like `data` or `misc` #### 2. Cross-Schema References Prefer soft references for flexibility: ```sql -- Soft reference (user_id without FK) CREATE TABLE billing.subscriptions ( user_id INTEGER NOT NULL, -- No FK to auth.users ... ); ``` #### 3. Migration Order Define clear dependency order: ```bash # Order matters for foreign keys SCHEMAS="public auth billing app" for schema in $SCHEMAS; do postbase migrate up --schema $schema done ``` #### 4. Consistent Naming Use consistent column names across schemas: ```sql -- All tables use same pattern user_id INTEGER NOT NULL -- Always user_id, not uid or user created_at TIMESTAMP -- Always created_at, not created ``` ## Deploying to Production Best practices for deploying Postbase applications to production. ### Overview A typical production deployment: 1. Provision cloud database 2. Run migrations 3. Configure environment 4. Deploy application 5. Enable backups and PITR ### Step 1: Provision Database #### Create Project ```bash postbase cloud projects create myapp ``` #### Provision Production Database ```bash postbase cloud provision production \ -p myapp \ --region us-west1 \ --cpu 2 \ --memory 4096 \ --storage 50 ``` #### Get Connection String ```bash postbase cloud url production -p myapp --copy ``` ### Step 2: Run Migrations #### Apply Migrations ```bash # Set environment variable export DATABASE_URL=$(postbase cloud url production -p myapp --json | jq -r '.connection_string') # Run migrations postbase migrate up --database-url $DATABASE_URL ``` #### Verify Migration Status ```bash postbase migrate status --database-url $DATABASE_URL ``` ### Step 3: Configure Environment #### Environment Variables ```bash # .env.production DATABASE_URL="postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable" NODE_ENV="production" ``` #### Secrets Management Use your platform's secret management: **Vercel:** ```bash vercel env add DATABASE_URL production ``` **Railway:** ```bash railway variables set DATABASE_URL="..." ``` **AWS:** ```bash aws secretsmanager create-secret \ --name myapp/database-url \ --secret-string "postgresql://..." 
``` ### Step 4: Enable Data Protection #### Enable PITR ```bash postbase cloud pitr enable -p myapp -d production ``` #### Verify Backups ```bash # Check automated backups postbase cloud backups list -p myapp -d production # Check PITR status postbase cloud pitr status -p myapp -d production ``` ### Step 5: Deploy Application #### Next.js on Vercel ```bash # Build and deploy vercel --prod # Or via Git push git push origin main ``` #### Node.js on Railway ```bash # Link to Railway railway link # Deploy railway up ``` #### Docker ```dockerfile FROM node:22-alpine WORKDIR /app COPY package*.json ./ RUN npm ci --only=production COPY . . ENV NODE_ENV=production CMD ["node", "dist/index.js"] ``` ### Connection Best Practices #### Connection Pooling Use connection pooling for production: ```typescript import { createClient } from '@postbase/sdk' const db = createClient({ connectionString: process.env.DATABASE_URL, pool: { min: 2, max: 10, idleTimeoutMillis: 30000, }, }) ``` #### Health Checks Implement database health checks: ```typescript app.get('/health', async (req, res) => { try { await db.query('SELECT 1') res.json({ status: 'healthy', database: 'connected' }) } catch (error) { res.status(503).json({ status: 'unhealthy', database: 'disconnected' }) } }) ``` #### Graceful Shutdown Close connections on shutdown: ```typescript process.on('SIGTERM', async () => { console.log('Shutting down...') await db.close() process.exit(0) }) ``` ### Security Checklist #### Credentials * [ ] Use environment variables, not hardcoded values * [ ] Never commit `.env` files to version control * [ ] Rotate passwords periodically * [ ] Use different credentials per environment #### Network * [ ] Use SSL in production (if not using Railway proxy) * [ ] Configure firewall rules if needed * [ ] Use private networking when available #### Access * [ ] Limit database user permissions * [ ] Use separate users for app vs admin * [ ] Enable query logging for auditing ### Monitoring #### Application Metrics Track key metrics: ```typescript import { metrics } from 'your-metrics-library' // Query timing const start = Date.now() const result = await db.from('users').execute() metrics.timing('db.query.duration', Date.now() - start) // Connection pool metrics.gauge('db.pool.active', db.pool.totalCount) metrics.gauge('db.pool.idle', db.pool.idleCount) ``` #### Database Metrics Monitor via Postbase: ```bash # Check WAL receiver health postbase cloud pitr receiver -p myapp -d production # Check backup status postbase cloud backups list -p myapp -d production ``` #### Alerting Set up alerts for: * Database connection failures * High query latency (>100ms) * Backup failures * WAL lag >5 minutes ### Disaster Recovery #### Regular Backups Automated backups run daily. Create manual backups before major changes: ```bash postbase cloud backups create -p myapp -d production ``` #### Point-in-Time Recovery Restore to any point in time: ```bash postbase cloud pitr restore \ -p myapp \ -d production \ --target-time "2026-01-25T14:00:00Z" ``` #### Recovery Testing Test recovery quarterly: ```bash # Provision test environment postbase cloud provision dr-test -p myapp # Restore to test postbase cloud pitr restore \ -p myapp \ -d dr-test \ --source production \ --target-time "2026-01-25T12:00:00Z" # Verify data postbase cloud psql -p myapp -d dr-test \ -c "SELECT COUNT(*) FROM users" # Cleanup postbase cloud destroy dr-test -p myapp ``` ### Migration Strategies #### Rolling Updates For zero-downtime migrations: 1. 
**Add** new columns/tables (backward compatible) 2. **Deploy** new application code 3. **Migrate** data if needed 4. **Remove** old columns/tables #### Blue-Green Deployment ```bash # Create new environment postbase cloud provision production-new -p myapp # Restore data postbase cloud backups restore $LATEST_BACKUP \ -p myapp -d production-new # Run new migrations postbase migrate up --database-url $NEW_URL # Verify postbase cloud psql -p myapp -d production-new \ -c "SELECT * FROM _postbase_migrations" # Switch traffic # (in your load balancer / DNS) # Cleanup old environment later postbase cloud destroy production-old -p myapp ``` ### Cost Optimization #### Right-Sizing Start small and scale up: ```bash # Start with minimal config postbase cloud provision production \ -p myapp \ --cpu 1 \ --memory 1024 \ --storage 10 # Scale up when needed postbase cloud scale production \ -p myapp \ --cpu 2 \ --memory 4096 ``` #### Cleanup Remove unused resources: ```bash # List all databases postbase cloud databases list -p myapp # Remove old test environments postbase cloud destroy staging-old -p myapp ``` ## Environment Variables Configure Postbase with environment variables. ### CLI Environment Variables | Variable | Description | Default | | --------------------- | ---------------------------------- | -------------------------------------------------- | | `DATABASE_URL` | Default database connection string | - | | `POSTBASE_API_URL` | Cloud API URL | [https://api.postbase.sh](https://api.postbase.sh) | | `POSTBASE_DAEMON_URL` | Local daemon URL | [http://localhost:9432](http://localhost:9432) | | `POSTBASE_DATA_DIR` | Data directory | \~/.postbase | | `POSTBASE_TOKEN` | Cloud API token | - | ### Usage #### DATABASE\_URL Most commands use this for database connection: ```bash export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/myapp" # Now commands use this database postbase migrate up postbase types generate --output ./types.ts ``` #### POSTBASE\_TOKEN For CI/CD environments: ```bash export POSTBASE_TOKEN="pb_xxx..." # CLI uses this token instead of stored credentials postbase cloud projects list ``` ### .env Files #### Local Development ```bash # .env DATABASE_URL="postgresql://postgres:postgres@localhost:5432/myapp" ``` #### Load with dotenv ```bash # package.json { "scripts": { "dev": "dotenv -- tsx src/index.ts", "migrate": "dotenv -- postbase migrate up" } } ``` Or in code: ```typescript import 'dotenv/config' ``` ### Environment-Specific Files ``` myapp/ ├── .env # Default (development) ├── .env.staging # Staging ├── .env.production # Production ├── .env.example # Template (committed) └── .env.local # Local overrides (not committed) ``` #### Example Files ```bash # .env.example (commit this) DATABASE_URL=postgresql://postgres:postgres@localhost:5432/myapp POSTBASE_TOKEN= # .env.production (don't commit) DATABASE_URL=postgresql://postgres:xxx@prod.example.com:5432/myapp POSTBASE_TOKEN=pb_xxx... ``` ### CI/CD Configuration #### GitHub Actions ```yaml env: DATABASE_URL: ${{ secrets.DATABASE_URL }} POSTBASE_TOKEN: ${{ secrets.POSTBASE_TOKEN }} steps: - name: Run Migrations run: postbase migrate up ``` #### Vercel ```bash vercel env add DATABASE_URL production ``` #### Railway Environment variables are automatic for Railway databases. #### Docker ```dockerfile FROM node:22-alpine ENV DATABASE_URL="" # Or use ARG for build-time ARG DATABASE_URL ``` ```bash docker run -e DATABASE_URL="postgresql://..." 
myapp ``` ### SDK Configuration The SDK reads `DATABASE_URL` automatically: ```typescript import { createClient } from '@postbase/sdk' // Uses DATABASE_URL if no connectionString provided const db = createClient() // Or explicitly const db = createClient({ connectionString: process.env.DATABASE_URL, }) ``` ### Security Best Practices #### Never Commit Secrets ```gitignore # .gitignore .env .env.* !.env.example ``` #### Use Secret Managers Production deployments should use secret managers: * **AWS Secrets Manager** * **HashiCorp Vault** * **Doppler** * **1Password Secrets** #### Rotate Credentials Regularly rotate database passwords: ```bash # Generate new password NEW_PASSWORD=$(openssl rand -base64 24) # Update in Railway/secret manager # Update DATABASE_URL ``` ### Troubleshooting #### Variable Not Set ``` Error: DATABASE_URL environment variable is not set ``` Ensure the variable is exported: ```bash export DATABASE_URL="..." # or source .env ``` #### Wrong Environment Check which .env file is loaded: ```bash echo $DATABASE_URL ``` #### Quotes in Values Avoid quotes in .env files: ```bash # Bad DATABASE_URL="postgresql://..." # Good DATABASE_URL=postgresql://... ``` (Unless you need spaces or special characters) ## postbase.toml Project configuration file for Postbase. ### Overview The `postbase.toml` file configures your Postbase project: ```toml [project] name = "myapp" [database] name = "myapp" [migrations] directory = "./migrations" schema = "public" ``` ### Location Place `postbase.toml` in your project root: ``` myapp/ ├── postbase.toml ├── migrations/ ├── src/ └── package.json ``` ### Configuration Sections #### \[project] ```toml [project] name = "myapp" # Project name (used in cloud) ``` #### \[database] ```toml [database] name = "myapp" # Default database name host = "localhost" # Database host (optional) port = 5432 # Database port (optional) user = "postgres" # Database user (optional) password = "postgres" # Database password (optional) ``` #### \[migrations] ```toml [migrations] directory = "./migrations" # Migration files directory schema = "public" # Default schema for migrations ``` #### \[types] ```toml [types] output = "./src/db/types.ts" # Generated types output path include_functions = true # Include PostgreSQL functions include_views = true # Include database views include_enums = true # Include enum types dates_as_strings = false # Map timestamps to string ``` #### \[environments] ```toml [environments.development] database = "myapp_dev" connection_string = "postgresql://postgres:postgres@localhost:5432/myapp_dev" [environments.staging] database = "myapp_staging" connection_string = "${STAGING_DATABASE_URL}" [environments.production] database = "myapp_production" connection_string = "${PRODUCTION_DATABASE_URL}" ``` ### Full Example ```toml [project] name = "myapp" [database] name = "myapp" [migrations] directory = "./migrations" schema = "public" [types] output = "./src/db/types.ts" include_functions = true include_views = true include_enums = true [environments.development] database = "myapp_dev" connection_string = "postgresql://postgres:postgres@localhost:5432/myapp_dev" [environments.staging] database = "myapp_staging" connection_string = "${STAGING_DATABASE_URL}" [environments.production] database = "myapp_production" connection_string = "${PRODUCTION_DATABASE_URL}" ``` ### Using Environments #### CLI Usage ```bash # Use default (development) postbase migrate up # Use specific environment postbase migrate up --env staging # Use production postbase migrate up 
--env production
```

#### Environment Variables

Reference environment variables with `${VAR_NAME}`:

```toml
[environments.production]
connection_string = "${DATABASE_URL}"
```

### Creating Config

#### Initialize

```bash
postbase init
```

Creates `postbase.toml` with defaults.

#### Interactive Setup

```bash
postbase init --interactive
```

Prompts for configuration values.

### Validation

Validate your configuration:

```bash
postbase config validate
```

### Override with CLI

CLI flags override the config file:

```bash
# Override database
postbase migrate up --database other_db

# Override migrations directory
postbase migrate new create_users --dir ./db/migrations

# Override schema
postbase migrate up --schema auth
```

### Environment Inheritance

Environments inherit settings from the base sections:

```toml
[database]
user = "postgres"
password = "postgres"

[environments.development]
host = "localhost"
port = 5432

[environments.production]
host = "prod.example.com"
port = 5432
# Inherits user and password from [database]
```

### Best Practices

#### Keep Secrets Out

Never commit secrets:

```toml
# Bad - hardcoded password
[environments.production]
connection_string = "postgresql://postgres:secret123@..."

# Good - use environment variable
[environments.production]
connection_string = "${PRODUCTION_DATABASE_URL}"
```

#### Use .env Files

```bash
# .env (development)
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/myapp

# .env.production
DATABASE_URL=postgresql://postgres:xxx@prod.example.com:5432/myapp
```

#### Version Control

Commit `postbase.toml`, but keep `.env` files out of version control:

```gitignore
# .gitignore
.env
.env.*
!.env.example
```

## Cloud Authentication

Authenticate with Postbase Cloud using GitHub OAuth.

### Overview

Postbase uses GitHub OAuth with the device flow for CLI authentication:

1. Run `postbase cloud login`
2. Open the provided URL in your browser
3. Enter the verification code
4. Authorize the Postbase application
5. CLI receives access token

### Login Process

#### Step 1: Start Authentication

```bash
postbase cloud login
```

Output:

```
Authenticating with Postbase Cloud...

Open this URL in your browser:
https://github.com/login/device

Enter this code: ABCD-1234

Waiting for authorization... (press Ctrl+C to cancel)
```

#### Step 2: Enter Code

1. Open [https://github.com/login/device](https://github.com/login/device)
2. Enter the code shown in the terminal (e.g., `ABCD-1234`)
3. Click "Continue"

#### Step 3: Authorize

1. Review the permissions requested
2. Click "Authorize postbase"
3. The CLI completes automatically

#### Step 4: Confirmation

```
✓ Authenticated as username

Welcome to Postbase Cloud!
Run 'postbase cloud projects list' to get started.
```

### Permissions

Postbase requests minimal GitHub permissions:

| Scope        | Purpose                         |
| ------------ | ------------------------------- |
| `read:user`  | Get your username and email     |
| `user:email` | Access verified email addresses |

We do **not** request access to your repositories or organizations.

### Token Storage

Access tokens are stored locally:

```
~/.postbase/cloud.json
```

This file contains:

```json
{
  "access_token": "pb_xxx...",
  "user": {
    "id": "user_xxx",
    "login": "username",
    "email": "user@example.com"
  }
}
```

:::warning
Keep this file secure. Never commit it to version control.
:::

### Logout

Clear stored credentials:

```bash
postbase cloud logout
```

```
✓ Logged out from Postbase Cloud
```

This removes `~/.postbase/cloud.json`.

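Scripts that call the cloud API directly can reuse these stored credentials. A minimal sketch, assuming the `cloud.json` layout shown under Token Storage above (the `loadToken` helper is illustrative, not part of the SDK):

```typescript
import { readFileSync } from 'node:fs'
import { homedir } from 'node:os'
import { join } from 'node:path'

// Shape of ~/.postbase/cloud.json as documented above
interface CloudCredentials {
  access_token: string
  user: { id: string; login: string; email: string }
}

// Illustrative helper: prefer an explicit POSTBASE_TOKEN (useful in CI),
// then fall back to the file written by `postbase cloud login`.
function loadToken(): string {
  if (process.env.POSTBASE_TOKEN) return process.env.POSTBASE_TOKEN
  const file = join(homedir(), '.postbase', 'cloud.json')
  const creds = JSON.parse(readFileSync(file, 'utf8')) as CloudCredentials
  return creds.access_token
}
```

Since `postbase cloud logout` deletes this file, scripts should be prepared for it to be missing.
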
### Check Current User View your authenticated account: ```bash postbase cloud whoami ``` ``` Logged in as: username Email: user@example.com User ID: user_xxx ``` ### Token Refresh Tokens are long-lived and don't expire automatically. If you need a fresh token: ```bash postbase cloud logout postbase cloud login ``` ### Programmatic Access #### Using the Token ```typescript const token = process.env.POSTBASE_TOKEN const response = await fetch('https://api.postbase.sh/projects', { headers: { 'Authorization': `Bearer ${token}` } }) ``` #### Environment Variables Set `POSTBASE_TOKEN` for CI/CD: ```bash export POSTBASE_TOKEN="pb_xxx..." # CLI will use this token instead of stored credentials postbase cloud projects list ``` ### CI/CD Integration #### GitHub Actions ```yaml jobs: deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Setup Node uses: actions/setup-node@v4 with: node-version: 22 - name: Install Postbase CLI run: npm install -g postbase - name: Run Migrations env: POSTBASE_TOKEN: ${{ secrets.POSTBASE_TOKEN }} run: | postbase cloud migrate up \ -p my-project \ -d production ``` #### Getting a Token for CI 1. Log in locally: `postbase cloud login` 2. Copy token from `~/.postbase/cloud.json` 3. Add as secret in your CI platform ### Multiple Accounts Currently, Postbase supports one account at a time. Switch accounts by logging out and in: ```bash postbase cloud logout postbase cloud login # Log in with different GitHub account ``` ### Troubleshooting #### Login Stuck If the CLI is waiting forever: 1. Check browser completed authorization 2. Try refreshing the GitHub page 3. Press Ctrl+C and retry #### Invalid Token If you see `401 Unauthorized`: ```bash # Re-authenticate postbase cloud logout postbase cloud login ``` #### Token Missing If credentials file is missing: ```bash # Re-authenticate to create new file postbase cloud login ``` #### GitHub OAuth Error If GitHub shows an error: 1. Ensure you're logged into GitHub 2. Try a different browser 3. Clear browser cookies for github.com 4. Contact support if persistent ## Configuration Configure cloud database infrastructure. 
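Every option below is set when a database is provisioned, either through CLI flags or through the HTTP API. As a quick orientation, a sketch of the API form; the endpoint and payload mirror the "Configuration via API" section later on this page, and `provisionWithConfig` is an illustrative helper, not an SDK function:

```typescript
// Provision a database with explicit infrastructure configuration.
// Endpoint and body shape follow "Configuration via API" below.
async function provisionWithConfig(token: string, projectId: string) {
  const response = await fetch(
    `https://api.postbase.sh/projects/${projectId}/databases`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        name: 'production',
        region: 'us-west1',
        config: { cpu: 2, memory: 4096, storage: 50 },
      }),
    }
  )
  if (!response.ok) throw new Error(`Provisioning failed: ${response.status}`)
  return response.json()
}
```
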
### Available Options | Option | Description | Default | | --------- | ----------------- | -------- | | `region` | Deployment region | us-west1 | | `cpu` | vCPU count | 1 | | `memory` | Memory in MB | 1024 | | `storage` | Storage in GB | 10 | ### Regions #### Available Regions | ID | Location | Provider | | --------------- | ------------- | -------- | | `us-west1` | Oregon, USA | Railway | | `us-east1` | Virginia, USA | Railway | | `eu-west1` | Ireland | Railway | | `ap-southeast1` | Singapore | Railway | #### Choosing a Region Select based on: * **User proximity** - Minimize latency * **Compliance** - Data residency requirements * **Cost** - May vary by region #### Set at Provisioning ```bash postbase cloud provision production -p myapp --region eu-west1 ``` ### Compute Resources #### CPU | vCPUs | Connections | Use Case | | ----- | ----------- | ------------------- | | 1 | \~100 | Development | | 2 | \~200 | Production (small) | | 4 | \~400 | Production (medium) | | 8 | \~800 | Production (large) | #### Memory | Memory | Shared Buffers | Use Case | | ------ | -------------- | ------------------- | | 1 GB | \~256 MB | Development | | 2 GB | \~512 MB | Production (small) | | 4 GB | \~1 GB | Production (medium) | | 8 GB | \~2 GB | Production (large) | #### Recommendations | Workload | CPU | Memory | Storage | | ------------------- | --- | ------ | ------- | | Development | 1 | 1024 | 10 | | Staging | 1 | 2048 | 25 | | Production (small) | 2 | 4096 | 50 | | Production (medium) | 4 | 8192 | 100 | ### Storage #### Size Options | GB | Use Case | | --- | -------------------- | | 10 | Development, testing | | 25 | Small production | | 50 | Medium production | | 100 | Large production | | 250 | Enterprise | #### Storage Includes * Database files * Indexes * WAL files (temp) * Temporary tables #### Storage Does NOT Include * Backups (stored in R2) * WAL archives (stored in R2) ### PostgreSQL Configuration #### Default Settings The PostgreSQL image includes optimized defaults: ``` max_connections = 100 shared_buffers = 256MB (scales with memory) work_mem = 4MB maintenance_work_mem = 64MB effective_cache_size = 768MB # WAL Settings (for PITR) wal_level = replica max_wal_senders = 10 max_replication_slots = 10 ``` #### Connection Limits | Memory | max\_connections | | ------ | ---------------- | | 1 GB | 100 | | 2 GB | 150 | | 4 GB | 200 | | 8 GB | 300 | ### Network Configuration #### SSL Railway databases use a proxy that handles SSL termination: ``` # Use sslmode=disable for Railway proxy postgresql://...?sslmode=disable ``` #### Connection Pooling For high-traffic applications, consider: 1. **Application-level pooling**: ```typescript const db = createClient({ connectionString: '...', pool: { max: 20 } }) ``` 2. **PgBouncer** (coming in HA tier) #### Firewall Railway databases are publicly accessible. 
Use strong passwords and consider: * IP allowlisting (coming soon) * VPN/private networking (coming soon) ### Example Configurations #### Minimal (Development) ```bash postbase cloud provision dev -p myapp \ --region us-west1 \ --cpu 1 \ --memory 1024 \ --storage 10 ``` Cost: \~$5/month #### Standard (Production) ```bash postbase cloud provision production -p myapp \ --region us-west1 \ --cpu 2 \ --memory 4096 \ --storage 50 ``` Cost: \~$25/month #### High Performance ```bash postbase cloud provision production -p myapp \ --region us-west1 \ --cpu 4 \ --memory 8192 \ --storage 100 ``` Cost: \~$75/month ### Scaling (Coming Soon) Scale existing databases without downtime: ```bash # Scale up postbase cloud scale production -p myapp \ --cpu 4 \ --memory 8192 # Scale storage (storage can only increase) postbase cloud scale production -p myapp \ --storage 100 ``` ### Configuration via API ```http POST /projects/:project_id/databases Authorization: Bearer Content-Type: application/json { "name": "production", "region": "us-west1", "config": { "cpu": 2, "memory": 4096, "storage": 50 } } ``` ### Monitoring Configuration Check current configuration: ```bash postbase cloud databases info production -p myapp ``` ``` Database: production Status: running Region: us-west1 Configuration: CPU: 2 vCPU Memory: 4096 MB Storage: 50 GB Used: 12.5 GB (25%) Performance: Connections: 45/200 CPU Usage: 15% Memory Usage: 62% ``` ## Cloud Platform Overview Deploy PostgreSQL databases to the cloud with one command. ### Architecture ``` ┌─────────────────────────────────────────────────────────────┐ │ Postbase Cloud │ ├─────────────────────────────────────────────────────────────┤ │ │ │ ┌─────────────┐ ┌─────────────────────────────┐│ │ │ CLI │──────────▶│ Cloud API ││ │ │ │ │ api.postbase.sh ││ │ └─────────────┘ │ ││ │ │ • GitHub OAuth ││ │ │ • Project Management ││ │ │ • Database Provisioning ││ │ └──────────┬──────────────────┘│ │ │ │ │ ┌───────────────────┼───────────────────┐│ │ │ │ ││ │ ┌───────▼───────┐ ┌───────▼───────┐ ┌──────▼──────┐│ │ │ Railway │ │ Cloudflare │ │ Cloudflare ││ │ │ PostgreSQL │ │ D1 │ │ R2 ││ │ │ │ │ (Metadata) │ │ (Backups) ││ │ └───────────────┘ └───────────────┘ └──────────────┘│ │ │ │ │ ┌───────▼───────┐ │ │ │Backup Service │ │ │ │ (Railway) │ │ │ └───────────────┘ │ │ │ └─────────────────────────────────────────────────────────────┘ ``` ### Features #### One-Command Provisioning ```bash # Login with GitHub postbase cloud login # Provision a database postbase cloud provision production # Done! 
Get connection string
postbase cloud url production
```

#### Enterprise PostgreSQL

Databases run on Railway with the Supabase PostgreSQL image:

| Feature        | Description          |
| -------------- | -------------------- |
| PostgreSQL 15  | Stable major version |
| pg\_cron       | Scheduled jobs       |
| pgvector       | Vector embeddings    |
| PostGIS        | Geospatial data      |
| pgcrypto       | Encryption           |
| 50+ extensions | Pre-installed        |

#### Automated Backups

* **Daily backups** at 3 AM UTC
* **3-day retention** (free) or **30-day retention** (pro)
* **Point-in-Time Recovery** (PITR)
* **Zero egress fees** with Cloudflare R2

#### Infrastructure Configuration

```bash
postbase cloud provision production \
  --region us-west1 \
  --cpu 2 \
  --memory 4096 \
  --storage 50
```

### Pricing

#### Free Tier

| Resource  | Limit           |
| --------- | --------------- |
| Databases | 1               |
| Storage   | 1 GB            |
| Backups   | 3-day retention |
| PITR      | Not included    |

#### Pro Tier ($20/mo)

| Resource  | Limit            |
| --------- | ---------------- |
| Databases | Unlimited        |
| Storage   | 100 GB           |
| Backups   | 30-day retention |
| PITR      | Included         |

### Getting Started

#### 1. Login

```bash
postbase cloud login
```

This opens your browser for GitHub authentication.

#### 2. Create a Project

```bash
postbase cloud projects create myapp
```

#### 3. Provision Database

```bash
# Staging environment
postbase cloud provision staging

# Production environment
postbase cloud provision production
```

#### 4. Get Connection String

```bash
postbase cloud url production
```

#### 5. Enable Backups

```bash
# Automated backups are enabled by default

# Enable PITR for point-in-time recovery
postbase cloud pitr enable -d production
```

### Project Structure

```
myapp/                  # Your project
├── postbase.toml       # Configuration
├── migrations/         # Database migrations
├── .env                # Environment variables
└── .env.production     # Production credentials
```

#### postbase.toml

```toml
[project]
name = "myapp"

[environments.staging]
provider = "railway"
connection_string = "${STAGING_DATABASE_URL}"

[environments.production]
provider = "railway"
connection_string = "${PRODUCTION_DATABASE_URL}"
```

### Security

#### Secure Passwords

All cloud databases use auto-generated 32-character passwords:

```
YfF6Yfeqaf66IHSGLzhPcIAQ5Bs3Ayxa
```

#### Network Security

* Private networking within Railway
* SSL/TLS optional (disabled by default for Railway proxy)
* IP allowlisting (coming soon)

#### Credential Storage

Local credentials stored in:

```
~/.postbase/cloud.json
```

Never committed to version control.

### Supported Regions

| Region        | Location      |
| ------------- | ------------- |
| us-west1      | Oregon, USA   |
| us-east1      | Virginia, USA |
| eu-west1      | Ireland       |
| ap-southeast1 | Singapore     |

### Next Steps

* [Authentication](/cloud/authentication) - GitHub OAuth setup
* [Projects](/cloud/projects) - Project management
* [Provisioning](/cloud/provisioning) - Database deployment
* [Configuration](/cloud/configuration) - Infrastructure options

## Projects

Organize databases into projects.

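Projects can be managed from the CLI, as shown throughout this section, or from code against the API endpoints listed at the end of the section. A minimal sketch that lists your projects, assuming a token obtained via `postbase cloud login` (the `listProjects` helper is illustrative):

```typescript
// Project shape as returned by GET /projects (see "API" below)
interface Project {
  id: string
  name: string
  created_at: string
}

async function listProjects(token: string): Promise<Project[]> {
  const response = await fetch('https://api.postbase.sh/projects', {
    headers: { Authorization: `Bearer ${token}` },
  })
  if (!response.ok) throw new Error(`Request failed: ${response.status}`)
  const body = (await response.json()) as { projects: Project[] }
  return body.projects
}
```
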
### Overview

Projects are the top-level organizational unit in Postbase Cloud:

```
Organization (your account)
└── Project (myapp)
    ├── Database (production)
    ├── Database (staging)
    └── Database (development)
```

### Creating Projects

#### Via CLI

```bash
postbase cloud projects create myapp
```

Output:

```
✓ Project created

ID: proj_D3uqtMIZXOoU
Name: myapp

Next: Provision a database with:
postbase cloud provision production -p myapp
```

#### Naming Rules

* Lowercase letters, numbers, hyphens
* Must start with a letter
* 3-50 characters
* Unique within your account

Valid: `myapp`, `my-app-2`, `production-api`

Invalid: `My App`, `123app`, `my_app`

### Listing Projects

```bash
postbase cloud projects list
```

Output:

```
Projects

ID                NAME         DATABASES  CREATED
proj_D3uqtMIZ...  myapp        3          2026-01-20
proj_K9Rvw2Hx...  demo-app     1          2026-01-15
proj_P7nFqLmA...  api-service  2          2026-01-10

Total: 3 projects
```

#### JSON Output

```bash
postbase cloud projects list --json
```

### Project Details

```bash
postbase cloud projects info myapp
```

Output:

```
Project: myapp

ID: proj_D3uqtMIZXOoU
Created: 2026-01-20 10:00:00 UTC

Databases (3):
  production   running  us-west1  12.5 GB
  staging      running  us-west1   3.2 GB
  development  running  us-west1   1.1 GB

Total Storage: 16.8 GB
```

### Deleting Projects

:::warning
Deleting a project permanently destroys all databases and backups.
:::

```bash
# With confirmation
postbase cloud projects delete myapp

# Force delete
postbase cloud projects delete myapp --force
```

You must delete all databases first, or use `--force`.

### Project Settings

#### Default Region

Set default region for new databases:

```bash
postbase cloud projects settings myapp --default-region eu-west1
```

#### Project Limits

| Tier | Databases per Project |
| ---- | --------------------- |
| Free | 3                     |
| Pro  | Unlimited             |

### Working with Multiple Projects

#### CLI Project Flag

Most commands accept `-p` or `--project`:

```bash
# Specify project explicitly
postbase cloud databases list -p myapp
postbase cloud provision staging -p myapp
postbase cloud backups list -p myapp -d production
```

#### Default Project

Set a default project:

```bash
postbase cloud projects default myapp
```

Then omit `-p`:

```bash
postbase cloud databases list  # Uses myapp
```

#### Project Context

View current project context:

```bash
postbase cloud projects current
```

```
Current project: myapp (proj_D3uqtMIZ...)
```

### Project-Level Resources

Resources scoped to projects:

| Resource     | Scope                         |
| ------------ | ----------------------------- |
| Databases    | Per project                   |
| Backups      | Per database (within project) |
| WAL Archives | Per database (within project) |
| Settings     | Per project                   |

### API

#### List Projects

```http
GET /projects
Authorization: Bearer <token>
```

#### Create Project

```http
POST /projects
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "myapp"
}
```

#### Get Project

```http
GET /projects/:id
Authorization: Bearer <token>
```

#### Delete Project

```http
DELETE /projects/:id
Authorization: Bearer <token>
```

## Database Provisioning

Create cloud PostgreSQL databases.

### Quick Start ```bash # Create a project first postbase cloud projects create myapp # Provision a database postbase cloud provision production -p myapp # Get connection string postbase cloud url production -p myapp ``` ### Provisioning Options #### Basic Provisioning ```bash postbase cloud provision production -p myapp ``` Uses default configuration: * Region: `us-west1` * CPU: 1 vCPU * Memory: 1024 MB * Storage: 10 GB #### Custom Configuration ```bash postbase cloud provision production -p myapp \ --region us-west1 \ --cpu 2 \ --memory 4096 \ --storage 50 ``` ### Configuration Options #### Regions | Region | Location | Latency | | --------------- | ------------- | ---------------- | | `us-west1` | Oregon, USA | Best for US West | | `us-east1` | Virginia, USA | Best for US East | | `eu-west1` | Ireland | Best for Europe | | `ap-southeast1` | Singapore | Best for Asia | #### CPU | Option | Description | Use Case | | ------ | ----------- | ---------------------------- | | `1` | 1 vCPU | Development, low traffic | | `2` | 2 vCPU | Production, moderate traffic | | `4` | 4 vCPU | High traffic | | `8` | 8 vCPU | Enterprise | #### Memory | Option | Description | | ------ | ----------- | | `1024` | 1 GB RAM | | `2048` | 2 GB RAM | | `4096` | 4 GB RAM | | `8192` | 8 GB RAM | #### Storage | Option | Description | | ------ | ----------- | | `10` | 10 GB | | `25` | 25 GB | | `50` | 50 GB | | `100` | 100 GB | | `250` | 250 GB | ### Provisioning Process ``` 1. Create Railway project [10s] 2. Deploy PostgreSQL [30s] 3. Configure networking [10s] 4. Generate credentials [5s] 5. Initialize database [15s] ─────── ~70s total ``` ### Provisioning Status #### Check Status ```bash postbase cloud databases info production -p myapp ``` Output: ``` Database: production Status: provisioning (45%) Region: us-west1 Created: 2026-01-25 14:30:00 Configuration: CPU: 2 vCPU Memory: 4096 MB Storage: 50 GB Estimated completion: ~30 seconds ``` #### Wait for Ready ```bash postbase cloud provision production -p myapp --wait ``` Blocks until database is ready. 
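If you script provisioning instead of using `--wait`, you can poll the database status yourself. A sketch against the API, assuming the `status` values shown above (`waitForReady` is an illustrative helper, and the 5-second interval is arbitrary):

```typescript
// Poll GET /projects/:project_id/databases/:name until status is "running".
async function waitForReady(
  token: string,
  projectId: string,
  name: string,
  timeoutMs = 5 * 60_000
): Promise<void> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const response = await fetch(
      `https://api.postbase.sh/projects/${projectId}/databases/${name}`,
      { headers: { Authorization: `Bearer ${token}` } }
    )
    if (!response.ok) throw new Error(`Request failed: ${response.status}`)
    const db = (await response.json()) as { status: string }
    if (db.status === 'running') return
    await new Promise((resolve) => setTimeout(resolve, 5_000)) // poll every 5s
  }
  throw new Error('Timed out waiting for database to become ready')
}
```
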
### Connection Details

#### Get Connection String

```bash
postbase cloud url production -p myapp
```

Output:

```
postgresql://postgres:YfF6Yfeqaf66IHSGLzhPcIAQ5Bs3Ayxa@xxx.proxy.rlwy.net:12345/railway?sslmode=disable
```

#### Copy to Clipboard

```bash
postbase cloud url production -p myapp --copy
```

#### Detailed Connection Info

```bash
postbase cloud url production -p myapp --verbose
```

```
Connection Details

Host: xxx.proxy.rlwy.net
Port: 12345
Database: railway
User: postgres
Password: YfF6Yfeqaf66IHSGLzhPcIAQ5Bs3Ayxa

Connection String:
postgresql://postgres:YfF6Yfeqaf66IHSGLzhPcIAQ5Bs3Ayxa@xxx.proxy.rlwy.net:12345/railway?sslmode=disable

psql Command:
psql "postgresql://postgres:YfF6Yfeqaf66IHSGLzhPcIAQ5Bs3Ayxa@xxx.proxy.rlwy.net:12345/railway?sslmode=disable"
```

### PostgreSQL Image

Cloud databases use the custom Postbase PostgreSQL image:

```
ghcr.io/zeroexcore/postbase:postgres-15-pitr
```

Based on `supabase/postgres:15.14.1.072` with:

* PostgreSQL 15
* WAL streaming support
* 50+ extensions

#### Included Extensions

| Extension            | Purpose           |
| -------------------- | ----------------- |
| `pg_cron`            | Job scheduling    |
| `pgvector`           | Vector embeddings |
| `postgis`            | Geospatial data   |
| `uuid-ossp`          | UUID generation   |
| `pgcrypto`           | Encryption        |
| `pg_stat_statements` | Query analysis    |

### Environment-Based Provisioning

#### Development

```bash
postbase cloud provision development -p myapp \
  --cpu 1 --memory 1024 --storage 10
```

#### Staging

```bash
postbase cloud provision staging -p myapp \
  --cpu 1 --memory 2048 --storage 25
```

#### Production

```bash
postbase cloud provision production -p myapp \
  --cpu 2 --memory 4096 --storage 50
```

### Destroying Databases

```bash
# With confirmation
postbase cloud destroy production -p myapp

# Force destroy
postbase cloud destroy production -p myapp --force
```

:::warning
Destroying deletes all data. Create a backup first if needed.
:::

### Scaling (Coming Soon)

Scale existing databases:

```bash
postbase cloud scale production -p myapp \
  --cpu 4 --memory 8192 --storage 100
```

### API

#### Provision Database

```http
POST /projects/:project_id/databases
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "production",
  "region": "us-west1",
  "config": {
    "cpu": 2,
    "memory": 4096,
    "storage": 50
  }
}
```

#### Get Database

```http
GET /projects/:project_id/databases/:name
Authorization: Bearer <token>
```

#### Delete Database

```http
DELETE /projects/:project_id/databases/:name
Authorization: Bearer <token>
```

## CLI Reference

Complete reference for the Postbase command-line interface.

### Installation ```bash npm install -g postbase ``` ### Global Options ```bash postbase [command] [options] Options: -V, --version Output version number -h, --help Display help for command --json Output in JSON format (where supported) --quiet Suppress non-essential output ``` ### Commands Overview #### Local Development | Command | Description | | ------------------ | ------------------------- | | `postbase start` | Start the Postbase daemon | | `postbase stop` | Stop the daemon | | `postbase status` | Show daemon status | | `postbase db` | Database management | | `postbase migrate` | Migration operations | | `postbase connect` | Get connection details | | `postbase types` | Generate TypeScript types | #### Cloud Platform | Command | Description | | -------------------------- | ------------------------ | | `postbase cloud login` | Authenticate with GitHub | | `postbase cloud logout` | Clear credentials | | `postbase cloud projects` | Manage projects | | `postbase cloud provision` | Create cloud database | | `postbase cloud url` | Get connection string | | `postbase cloud destroy` | Delete cloud database | | `postbase cloud backups` | Backup operations | | `postbase cloud pitr` | Point-in-Time Recovery | ### Command Reference #### `postbase start` Start the Postbase daemon and PostgreSQL container. ```bash postbase start [options] Options: --port Daemon API port (default: 9432) --pg-port PostgreSQL port (default: 5432) --data-dir Data directory (default: ~/.postbase) ``` **Examples:** ```bash # Start with defaults postbase start # Custom ports postbase start --port 9000 --pg-port 5433 ``` #### `postbase stop` Stop the Postbase daemon. ```bash postbase stop [options] Options: --force Force stop without confirmation --keep-container Keep PostgreSQL container running ``` #### `postbase status` Show daemon and database status. ```bash postbase status [options] Options: --json Output in JSON format ``` **Output:** ``` Postbase Status Daemon: Running (PID: 12345) API: http://localhost:9432 PostgreSQL: Running (Port: 5432) Databases: 3 Databases: - myapp (12.5 MB) - demo (3.2 MB) - test (1.1 MB) ``` #### `postbase db` Database management commands. ```bash postbase db Commands: list List all databases create Create a new database drop Delete a database info Show database details ``` **Examples:** ```bash # List databases postbase db list # Create database postbase db create myapp # Create from template postbase db create myapp_test --template myapp # Get database info postbase db info myapp # Delete database postbase db drop myapp postbase db drop myapp --force ``` #### `postbase migrate` Migration management commands. ```bash postbase migrate Commands: new Create new migration files up Apply pending migrations down Rollback migrations status Show migration status ``` **Options:** ```bash # Common options --database Target database (default: from config) --schema Target schema (default: public) --dir Migrations directory (default: ./migrations) ``` **Examples:** ```bash # Create migration postbase migrate new create_users # Apply all pending postbase migrate up # Apply to specific database postbase migrate up --database myapp # Apply to specific schema postbase migrate up --schema auth # Dry run postbase migrate up --dry-run # Rollback last migration postbase migrate down # Rollback multiple postbase migrate down --count 3 # Check status postbase migrate status ``` #### `postbase connect` Get database connection details. 
```bash postbase connect [options] Options: --copy Copy connection string to clipboard --json Output in JSON format ``` **Example:** ```bash postbase connect myapp ``` **Output:** ``` Database: myapp Connection Details: Host: localhost Port: 5432 Database: myapp User: postgres Password: postgres Connection String: postgresql://postgres:postgres@localhost:5432/myapp ``` #### `postbase types` Generate TypeScript types from database schema. ```bash postbase types generate [options] Options: --database Database name --output Output file path --all Include functions, views, enums --include-functions Include PostgreSQL functions --include-views Include views --include-enums Include enum types --schema Target schema (default: public) --dates-as-strings Map timestamps to string ``` **Example:** ```bash postbase types generate \ --database myapp \ --all \ --output ./src/db/types.ts ``` #### `postbase cloud login` Authenticate with GitHub OAuth. ```bash postbase cloud login ``` Opens browser for GitHub authentication using device flow. #### `postbase cloud projects` Manage cloud projects. ```bash postbase cloud projects Commands: list List all projects create Create a new project delete Delete a project ``` **Examples:** ```bash # List projects postbase cloud projects list # Create project postbase cloud projects create myapp # Delete project postbase cloud projects delete proj_xxx --force ``` #### `postbase cloud provision` Create a cloud PostgreSQL database. ```bash postbase cloud provision [options] Options: -p, --project Project ID --region Deployment region --cpu CPU cores (1, 2, 4, 8) --memory Memory in MB (1024, 2048, 4096, 8192) --storage Storage in GB (10, 25, 50, 100) ``` **Examples:** ```bash # Basic provisioning postbase cloud provision production -p myapp # With configuration postbase cloud provision production \ -p myapp \ --region us-west1 \ --cpu 2 \ --memory 4096 \ --storage 50 ``` #### `postbase cloud url` Get cloud database connection string. ```bash postbase cloud url [options] Options: -p, --project Project ID --copy Copy to clipboard ``` #### `postbase cloud backups` Backup management commands. ```bash postbase cloud backups Commands: list List backups create Create manual backup download Download backup file restore Restore from backup delete Delete manual backup ``` **Options:** ```bash -p, --project Project ID -d, --database Database name ``` **Examples:** ```bash # List backups postbase cloud backups list -p myapp -d production # Create backup postbase cloud backups create -p myapp -d production # Download backup postbase cloud backups download bkp_xxx \ -p myapp -d production \ -o backup.sql # Restore from backup postbase cloud backups restore bkp_xxx \ -p myapp -d production # Delete backup postbase cloud backups delete bkp_xxx \ -p myapp -d production --force ``` #### `postbase cloud pitr` Point-in-Time Recovery commands. 
```bash postbase cloud pitr Commands: status Show PITR status enable Enable PITR disable Disable PITR wal List WAL archives restore Restore to point in time restores List restore operations receiver Show WAL receiver status restart-receiver Restart WAL receiver service-status Show backup service status ``` **Options:** ```bash -p, --project Project ID -d, --database Database name ``` **Examples:** ```bash # Check status postbase cloud pitr status -p myapp -d production # Enable PITR postbase cloud pitr enable -p myapp -d production # List WAL archives postbase cloud pitr wal -p myapp -d production # Restore to specific time postbase cloud pitr restore \ -p myapp -d production \ --target-time "2026-01-25T14:30:00Z" \ --force # Restore to latest postbase cloud pitr restore \ -p myapp -d production \ --target-time latest \ --force # Check restore status postbase cloud pitr restores -p myapp -d production # Check WAL receiver health postbase cloud pitr receiver -p myapp -d production # Restart WAL receiver postbase cloud pitr restart-receiver -p myapp -d production ``` ### Environment Variables | Variable | Description | Default | | --------------------- | ------------------------- | -------------------------------------------------- | | `POSTBASE_API_URL` | Cloud API URL | [https://api.postbase.sh](https://api.postbase.sh) | | `POSTBASE_DAEMON_URL` | Local daemon URL | [http://localhost:9432](http://localhost:9432) | | `POSTBASE_DATA_DIR` | Data directory | \~/.postbase | | `DATABASE_URL` | Default connection string | - | ### Configuration File `postbase.toml`: ```toml [project] name = "myapp" [database] name = "myapp" [migrations] directory = "./migrations" schema = "public" [daemon] port = 9432 [postgresql] port = 5432 ``` ## Automated Backups Daily automated backups for all cloud databases. ### Overview Postbase automatically backs up all cloud databases: * **Schedule**: Daily at 3:00 AM UTC * **Format**: pg\_dump custom format * **Compression**: gzip * **Storage**: Cloudflare R2 (zero egress fees) ### Retention Policy | Tier | Retention | Storage | | ---- | --------- | ------- | | Free | 3 days | 10 GB | | Pro | 30 days | 100 GB | Old backups are automatically deleted after the retention period. ### Viewing Backups #### CLI ```bash postbase cloud backups list -p myapp -d production ``` Output: ``` Backups for production ID TYPE STATUS SIZE CREATED EXPIRES bkp_D3uqtMIZ... automated completed 12.5 MB 2026-01-25 03:00:00 2026-01-28 bkp_K9Rvw2Hx... automated completed 12.3 MB 2026-01-24 03:00:00 2026-01-27 bkp_P7nFqLmA... 
automated completed 12.1 MB 2026-01-23 03:00:00 2026-01-26 Total: 3 automated backups ``` #### API ```bash curl -H "Authorization: Bearer $TOKEN" \ https://api.postbase.sh/projects/myapp/databases/production/backups ``` ### Backup Contents Each backup includes: * **Schema**: All tables, indexes, constraints, views, functions * **Data**: All table rows * **Permissions**: GRANT statements * **Extensions**: Enabled extensions Not included: * Connection settings * Runtime configuration * Active connections * Temporary tables ### Backup Storage Backups are stored in Cloudflare R2: ``` postbase-backups/ ├── proj_xxx/ │ ├── db_production/ │ │ ├── bkp_D3uqtMIZ.dump.gz │ │ ├── bkp_K9Rvw2Hx.dump.gz │ │ └── bkp_P7nFqLmA.dump.gz │ └── db_staging/ │ └── bkp_Abc123.dump.gz ``` ### Monitoring Backups #### Check Recent Backups ```bash # List last 7 days postbase cloud backups list -p myapp -d production --days 7 ``` #### Verify Backup Completed ```bash # Check specific backup postbase cloud backups info bkp_D3uqtMIZ -p myapp -d production ``` Output: ``` Backup Details ID: bkp_D3uqtMIZXOoU_WPo2gbi3_xu Type: automated Status: completed Size: 12.5 MB Database: production Created: 2026-01-25 03:00:00 UTC Expires: 2026-01-28 03:00:00 UTC Duration: 45 seconds ``` ### Backup Notifications #### Webhook (Coming Soon) Configure webhooks for backup events: ```json { "event": "backup.completed", "backup_id": "bkp_D3uqtMIZ...", "database": "production", "status": "completed", "size_bytes": 13107200, "timestamp": "2026-01-25T03:00:45Z" } ``` #### Email Alerts (Coming Soon) Receive email notifications for: * Backup failures * Storage quota warnings * Retention expiration ### Backup Performance #### Typical Backup Times | Database Size | Backup Time | | ------------- | --------------- | | \< 1 GB | \< 1 minute | | 1-10 GB | 1-5 minutes | | 10-50 GB | 5-15 minutes | | > 50 GB | Contact support | #### Impact on Database Automated backups use: * `pg_dump` with custom format * Minimal locking (consistent snapshot) * Background priority Impact: * Negligible for most workloads * May increase I/O during backup * No locks on tables (MVCC snapshot) ### Restoring from Backup #### Via CLI ```bash # Restore to same database postbase cloud backups restore bkp_D3uqtMIZ \ -p myapp -d production --force # Restore to different database postbase cloud backups restore bkp_D3uqtMIZ \ -p myapp -d production-restored ``` #### What Restore Does 1. Downloads backup from R2 2. Stops active connections (if `--force`) 3. Drops existing data 4. Runs pg\_restore 5. Rebuilds indexes 6. Restores permissions #### Restore Duration | Backup Size | Restore Time | | ----------- | ------------- | | \< 1 GB | \< 2 minutes | | 1-10 GB | 2-10 minutes | | 10-50 GB | 10-30 minutes | ### Downloading Backups #### Download via CLI ```bash postbase cloud backups download bkp_D3uqtMIZ \ -p myapp -d production \ -o ./backup-2026-01-25.dump ``` #### Download via API ```bash # Get download URL curl -H "Authorization: Bearer $TOKEN" \ https://api.postbase.sh/projects/myapp/databases/production/backups/bkp_D3uqtMIZ/download ``` Response: ```json { "download_url": "https://r2.postbase.sh/...", "expires_at": "2026-01-25T04:00:00Z" } ``` #### Local Restore Restore downloaded backup locally: ```bash # Decompress gunzip backup-2026-01-25.dump.gz # Restore pg_restore -d myapp backup-2026-01-25.dump ``` ### Troubleshooting #### Backup Failed If automated backup fails: 1. Check database is running 2. Check storage quota 3. 
Check backup service health ```bash postbase cloud pitr service-status -p myapp -d production ``` #### Large Database Timeout For databases >50GB, contact support for: * Extended timeout * Dedicated backup window * Parallel backup configuration #### Missing Backup If expected backup is missing: 1. Check backup ran (may be delayed) 2. Check retention (may have expired) 3. Check manual deletion ```bash # Check all backups including deleted postbase cloud backups list -p myapp -d production --include-deleted ``` ## Manual Backups Create on-demand backups for additional data protection. ### Overview Manual backups supplement automated daily backups: * Create before deployments * Create before destructive operations * Create for archival purposes ### Creating Manual Backups #### Via CLI ```bash postbase cloud backups create -p myapp -d production ``` Output: ``` Creating backup for production... ✓ Backup created ID: bkp_ManualXyz123 Status: pending Size: Calculating... Monitor progress with: postbase cloud backups info bkp_ManualXyz123 -p myapp -d production ``` #### Via API ```bash curl -X POST \ -H "Authorization: Bearer $TOKEN" \ https://api.postbase.sh/projects/myapp/databases/production/backups ``` ### Manual Backup Limits | Tier | Limit | Retention | | ---- | --------- | --------- | | Free | 1 backup | 7 days | | Pro | Unlimited | 90 days | If at limit, delete an existing manual backup first: ```bash postbase cloud backups delete bkp_OldBackup -p myapp -d production ``` ### Checking Backup Status #### Immediate Status ```bash postbase cloud backups info bkp_ManualXyz123 -p myapp -d production ``` Output: ``` Backup Details ID: bkp_ManualXyz123 Type: manual Status: in_progress Created: 2026-01-25 14:30:00 UTC Progress: 45% Estimated completion: ~2 minutes ``` #### Completed Status ``` Backup Details ID: bkp_ManualXyz123 Type: manual Status: completed Size: 12.5 MB Database: production Created: 2026-01-25 14:30:00 UTC Expires: 2026-02-01 14:30:00 UTC Duration: 45 seconds ``` ### Pre-Deployment Backups Best practice: Create backup before major changes. ```bash # 1. Create backup postbase cloud backups create -p myapp -d production # 2. Wait for completion postbase cloud backups list -p myapp -d production --watch # 3. Note the backup ID BACKUP_ID=bkp_ManualXyz123 # 4. Deploy changes npm run deploy # 5. Run migrations postbase migrate up # 6. If issues, restore postbase cloud backups restore $BACKUP_ID -p myapp -d production --force ``` ### Downloading Manual Backups #### CLI Download ```bash postbase cloud backups download bkp_ManualXyz123 \ -p myapp -d production \ -o ./manual-backup-2026-01-25.dump ``` #### Get Download URL ```bash postbase cloud backups download-url bkp_ManualXyz123 \ -p myapp -d production ``` URL valid for 1 hour. ### Restoring from Manual Backup ```bash # Restore with confirmation postbase cloud backups restore bkp_ManualXyz123 -p myapp -d production # Restore without confirmation postbase cloud backups restore bkp_ManualXyz123 -p myapp -d production --force ``` :::warning Restoring replaces all current data. Create a backup first if needed. ::: ### Deleting Manual Backups ```bash # Delete with confirmation postbase cloud backups delete bkp_ManualXyz123 -p myapp -d production # Force delete postbase cloud backups delete bkp_ManualXyz123 -p myapp -d production --force ``` Only manual backups can be deleted. Automated backups expire automatically. 
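If you hit the manual backup limit regularly, pruning can be scripted with the list and delete endpoints from the API reference. A sketch, where `pruneManualBackups` and the 30-day cutoff are illustrative choices:

```typescript
// Backup shape as returned by the list-backups endpoint
interface Backup {
  id: string
  type: 'automated' | 'manual'
  status: string
  created_at: string
}

// Delete manual backups older than `maxAgeDays` (automated backups
// cannot be deleted; they expire on their own).
async function pruneManualBackups(
  token: string,
  projectId: string,
  database: string,
  maxAgeDays = 30
): Promise<void> {
  const base = `https://api.postbase.sh/projects/${projectId}/databases/${database}/backups`
  const headers = { Authorization: `Bearer ${token}` }

  const response = await fetch(base, { headers })
  if (!response.ok) throw new Error(`Request failed: ${response.status}`)
  const { backups } = (await response.json()) as { backups: Backup[] }

  const cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000
  for (const backup of backups) {
    if (backup.type !== 'manual') continue
    if (new Date(backup.created_at).getTime() >= cutoff) continue
    const res = await fetch(`${base}/${backup.id}`, { method: 'DELETE', headers })
    if (!res.ok) throw new Error(`Failed to delete ${backup.id}: ${res.status}`)
  }
}
```
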
### Listing All Backups ```bash # All backups postbase cloud backups list -p myapp -d production # Only manual postbase cloud backups list -p myapp -d production --type manual # Only automated postbase cloud backups list -p myapp -d production --type automated ``` ### Use Cases #### Before Schema Migration ```bash # Create safety backup postbase cloud backups create -p myapp -d production # Run migration postbase migrate up --database-url $PROD_URL # If migration fails postbase cloud backups restore $BACKUP_ID -p myapp -d production --force postbase migrate down --database-url $PROD_URL # Fix and retry ``` #### Before Data Import ```bash # Backup before import postbase cloud backups create -p myapp -d production # Import data psql $PROD_URL < large_import.sql # If import corrupted data postbase cloud backups restore $BACKUP_ID -p myapp -d production --force ``` #### Archival/Compliance ```bash # Create monthly archive postbase cloud backups create -p myapp -d production # Download for off-site storage postbase cloud backups download bkp_Archive \ -p myapp -d production \ -o ./archive-$(date +%Y-%m).dump # Upload to cold storage aws s3 cp ./archive-2026-01.dump s3://my-cold-storage/backups/ ``` ### Best Practices 1. **Create before deployments** - Always have a rollback point 2. **Don't rely on single backup** - PITR provides continuous protection 3. **Test restores** - Regularly verify backups are restorable 4. **Monitor quotas** - Delete old manual backups to stay within limits 5. **Download for compliance** - Keep off-site copies for regulations ## Backups & Recovery Overview Comprehensive data protection with automated backups and point-in-time recovery. ### Architecture ``` ┌─────────────────────────────────────────────────────────────────┐ │ Data Protection │ ├─────────────────────────────────────────────────────────────────┤ │ │ │ PostgreSQL Backup Service R2 │ │ (Railway) (Railway) (Cloudflare) │ │ │ │ ┌─────────┐ ┌─────────────┐ ┌──────────┐│ │ │ Primary │──pg_dump─────▶│ │───────▶│ Backups ││ │ │ Database│ │ Backup │ │ /*.dump ││ │ │ │ │ Service │ └──────────┘│ │ │ │ │ │ │ │ │ │──pg_receivewal│ │ ┌──────────┐│ │ │ │───────────────▶ │───────▶│ WAL ││ │ └─────────┘ └─────────────┘ │ Archives ││ │ └──────────┘│ │ │ │ Recovery Options: │ │ • Full restore from daily backup │ │ • PITR to any second within retention window │ │ • Latest recovery (most recent state) │ │ │ └─────────────────────────────────────────────────────────────────┘ ``` ### Backup Types #### Automated Backups | Feature | Description | | ----------- | ----------------------------- | | Schedule | Daily at 3 AM UTC | | Format | pg\_dump custom format | | Compression | gzip | | Retention | 3 days (free) / 30 days (pro) | | Storage | Cloudflare R2 | #### Manual Backups | Feature | Description | | --------- | --------------------------------------- | | Trigger | On-demand via CLI/API | | Format | pg\_dump custom format | | Limit | 1 per database (free) / Unlimited (pro) | | Retention | 7 days (free) / 90 days (pro) | #### WAL Archives (PITR) | Feature | Description | | ----------- | --------------------------------------- | | Method | Continuous streaming via pg\_receivewal | | Granularity | Per-transaction | | Retention | 1 day (free) / 7 days (pro) | | Recovery | Any point in time | ### Quick Start #### List Backups ```bash postbase cloud backups list -d production ``` #### Create Manual Backup ```bash postbase cloud backups create -d production ``` #### Restore from Backup ```bash postbase cloud backups restore bkp_xxx 
-d production ``` #### Enable PITR ```bash postbase cloud pitr enable -d production ``` #### Restore to Point in Time ```bash postbase cloud pitr restore \ -d production \ --target-time "2026-01-25T14:30:00Z" ``` ### Storage & Costs #### Cloudflare R2 * **Zero egress fees** - No charge for downloads * **S3-compatible** - Standard API * **Global availability** - Edge caching #### Storage Pricing | Tier | Storage | Price | | ----- | ------- | ------------ | | Free | 10 GB | $0 | | Pro | 100 GB | Included | | Extra | Per GB | $0.015/GB/mo | ### Recovery Time Objectives | Recovery Type | RTO | RPO | | ------------- | -------- | ----------- | | Full Restore | \~5 min | 24 hours | | PITR | \~10 min | \< 1 second | | Latest | \~10 min | Real-time | ### Best Practices #### Testing Restores Regularly test your backup restoration: ```bash # Create a test database postbase cloud provision test-restore # Restore backup to test postbase cloud backups restore bkp_xxx -d test-restore # Verify data postbase cloud url test-restore # Cleanup postbase cloud destroy test-restore ``` #### Pre-Deployment Backups Create a backup before major deployments: ```bash # Create backup postbase cloud backups create -d production # Deploy changes npm run deploy # If issues, restore postbase cloud backups restore bkp_xxx -d production ``` #### PITR for Critical Data Enable PITR for production databases: ```bash # Enable PITR postbase cloud pitr enable -d production # Check status postbase cloud pitr status -d production ``` ### Monitoring #### Backup Status ```bash postbase cloud backups list -d production --json ``` #### PITR Status ```bash postbase cloud pitr status -d production ``` #### WAL Receiver Health ```bash postbase cloud pitr receiver -d production ``` ### Next Steps * [Automated Backups](/backups/automated) - Daily backup configuration * [Manual Backups](/backups/manual) - On-demand backups * [Restore Operations](/backups/restore) - Recovery procedures * [Point-in-Time Recovery](/backups/pitr) - PITR setup and usage ## Point-in-Time Recovery (PITR) Restore your database to any second within the retention window. ### How PITR Works PITR combines: 1. **Base backup** - Full database snapshot 2. **WAL archives** - Transaction log segments To recover, Postbase: 1. Restores the base backup 2. Replays WAL segments up to the target time 3. Returns the database to the exact state at that moment ``` Timeline: ────────────────────────────────────────────────────────────────▶ │ │ Base Backup Target Time (2026-01-25 03:00) (2026-01-25 14:30) │ │ └─────── WAL Segments (continuous) ─────────────────────────────┘ 000001, 000002, 000003, ... , 000050 ``` ### Enabling PITR #### Via CLI ```bash postbase cloud pitr enable -d production ``` Output: ``` Enabling PITR for production... ✓ PITR enabled Base Backup: bkp_D3uqtMIZXOoU_WPo2gbi3_xu Retention: 7 days WAL Receiver: Starting... The database is now protected with continuous WAL archiving. 
Recovery point objective (RPO): < 1 second ``` #### With Custom Retention ```bash postbase cloud pitr enable -d production --retention 14 ``` ### Checking PITR Status ```bash postbase cloud pitr status -d production ``` Output: ``` PITR Status for production Status: Enabled WAL Receiver: running WAL Files: 42 Recovery Window: From: 2026-01-24 03:00:00 To: 2026-01-25 14:45:00 Lag: 15s Base: 2026-01-24 03:00:00 View WAL archives: postbase cloud pitr wal -d production ``` ### Monitoring WAL Receiver #### Receiver Status ```bash postbase cloud pitr receiver -d production ``` Output: ``` WAL Receiver Status for production Status: ✓ Healthy Receiver: running Started: 2026-01-24 03:05:00 Uptime: 1d 11h 40m WAL Activity: Files: 42 Data: 672 MB Last File: 000000010000000000000042 Last WAL: 2026-01-25 14:44:45 Lag: 15s ``` #### Restart Receiver If the receiver has issues: ```bash postbase cloud pitr restart-receiver -d production ``` ### Viewing WAL Archives ```bash postbase cloud pitr wal -d production ``` Output: ``` WAL Archives for production FILENAME SIZE ARCHIVED AT 000000010000000000000042 16 MB 2026-01-25 14:44:45 000000010000000000000041 16 MB 2026-01-25 14:30:12 000000010000000000000040 16 MB 2026-01-25 14:15:33 ... Total: 42 archives, 672 MB ``` ### Recovery Window The recovery window shows the time range available for PITR: ```bash postbase cloud pitr status -d production ``` ``` Recovery Window: From: 2026-01-24 03:00:00 (base backup time) To: 2026-01-25 14:45:00 (latest WAL) ``` You can restore to any point between these times. ### Performing PITR Restore #### To Specific Time ```bash postbase cloud pitr restore \ -d production \ --target-time "2026-01-25T14:30:00Z" \ --force ``` :::warning This will replace all data after the target time. Create a backup first if needed. ::: #### To Latest Restore to the most recent available state: ```bash postbase cloud pitr restore \ -d production \ --target-time latest \ --force ``` #### Dry Run Preview what would be restored: ```bash postbase cloud pitr restore \ -d production \ --target-time "2026-01-25T14:30:00Z" \ --dry-run ``` Output: ``` PITR Restore Preview Database: production Target Time: 2026-01-25T14:30:00Z Recovery Plan: 1. Stop current database 2. Restore base backup: bkp_D3uqtMIZXOoU_WPo2gbi3_xu 3. Apply WAL segments: 000001 through 000035 4. Recovery to: 2026-01-25T14:30:00Z Estimated Time: ~5 minutes Run without --dry-run to execute. ``` ### Checking Restore Progress #### List Restores ```bash postbase cloud pitr restores -d production ``` Output: ``` PITR Restores for production ID TARGET TIME STATUS WAL FILES DURATION rstr_abc123... 2026-01-25 14:30:00 completed 35 4m 32s rstr_def456... 2026-01-24 18:00:00 completed 12 2m 15s ``` #### Get Restore Details ```bash postbase cloud pitr restores rstr_abc123 -d production ``` ### Disabling PITR ```bash postbase cloud pitr disable -d production --force ``` :::warning Disabling PITR stops WAL archiving. Existing archives are retained until expiration. ::: ### Best Practices #### Monitor WAL Lag Keep WAL lag under 5 minutes: ```bash postbase cloud pitr receiver -d production ``` If lag is high, check database activity and network. 
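Lag can also be watched from a monitoring job. A sketch using the receiver-status endpoint from the API reference, which reports `lag_seconds`; the `checkWalLag` helper and the 5-minute threshold mirror the guidance above but are otherwise illustrative:

```typescript
// Alert when the WAL receiver is unhealthy or lag exceeds a threshold,
// per the "keep WAL lag under 5 minutes" guidance above.
async function checkWalLag(
  token: string,
  projectId: string,
  database: string,
  thresholdSeconds = 300
): Promise<void> {
  const response = await fetch(
    `https://api.postbase.sh/projects/${projectId}/databases/${database}/pitr/receiver-status`,
    { headers: { Authorization: `Bearer ${token}` } }
  )
  if (!response.ok) throw new Error(`Request failed: ${response.status}`)
  const status = (await response.json()) as {
    status: string
    lag_seconds: number
  }
  if (status.status !== 'running' || status.lag_seconds > thresholdSeconds) {
    // Wire this into your alerting system of choice
    console.error(
      `WAL receiver unhealthy: status=${status.status}, lag=${status.lag_seconds}s`
    )
  }
}
```
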
#### Test Recovery Regularly Monthly recovery tests: ```bash # Create test database postbase cloud provision production-pitr-test # Restore to test postbase cloud pitr restore \ -d production-pitr-test \ --source production \ --target-time "2026-01-25T12:00:00Z" # Verify data postbase cloud psql -d production-pitr-test \ -c "SELECT COUNT(*) FROM users" # Cleanup postbase cloud destroy production-pitr-test ``` #### Coordinate with Deployments Before major changes: ```bash # Note the current time date -u +"%Y-%m-%dT%H:%M:%SZ" # 2026-01-25T14:00:00Z # Deploy changes npm run migrate # If issues, restore to pre-deployment postbase cloud pitr restore \ -d production \ --target-time "2026-01-25T14:00:00Z" \ --force ``` ### Troubleshooting #### WAL Receiver Not Running ```bash # Check status postbase cloud pitr receiver -d production # Restart if needed postbase cloud pitr restart-receiver -d production ``` #### High WAL Lag Causes: * High write volume * Network issues * Backup service overloaded Solutions: 1. Check service status 2. Review database write patterns 3. Contact support if persistent #### Recovery Window Too Short If you need longer retention: ```bash # Increase retention postbase cloud pitr enable -d production --retention 14 ``` #### Cannot Restore to Specific Time Error: "Target time outside recovery window" The requested time is: * Before the base backup * After the latest WAL segment Check the recovery window: ```bash postbase cloud pitr status -d production ``` ## Restore Operations Restore databases from backups or point-in-time. ### Restore Types | Type | Use Case | RPO | | ------------------ | ---------------------- | -------------- | | **Backup Restore** | Full database recovery | Up to 24 hours | | **PITR Restore** | Precise recovery | \< 1 second | ### Backup Restore #### List Available Backups ```bash postbase cloud backups list -p myapp -d production ``` ``` ID TYPE STATUS SIZE CREATED bkp_D3uqtMIZ... automated completed 12.5 MB 2026-01-25 03:00 bkp_K9Rvw2Hx... automated completed 12.3 MB 2026-01-24 03:00 bkp_ManualXyz... manual completed 12.1 MB 2026-01-23 14:30 ``` #### Restore to Same Database ```bash postbase cloud backups restore bkp_D3uqtMIZ \ -p myapp -d production --force ``` :::warning This replaces all current data in the database. ::: #### Restore to New Database ```bash # Provision new database postbase cloud provision production-restored -p myapp # Restore backup to new database postbase cloud backups restore bkp_D3uqtMIZ \ -p myapp -d production-restored ``` ### PITR Restore Point-in-Time Recovery allows restoring to any second within the recovery window. #### Check Recovery Window ```bash postbase cloud pitr status -p myapp -d production ``` ``` Recovery Window: From: 2026-01-18 03:00:00 To: 2026-01-25 14:45:00 ``` #### Restore to Specific Time ```bash postbase cloud pitr restore \ -p myapp -d production \ --target-time "2026-01-25T14:30:00Z" \ --force ``` #### Restore to Latest Restore to the most recent available state: ```bash postbase cloud pitr restore \ -p myapp -d production \ --target-time latest \ --force ``` ### Restore Process #### What Happens During Restore 1. **Download** - Backup files downloaded from R2 2. **Stop Connections** - Active connections terminated 3. **Drop Data** - Existing data removed 4. **Restore** - pg\_restore runs backup file 5. **WAL Replay** (PITR only) - Replay transactions to target time 6. **Rebuild Indexes** - Rebuild indexes for consistency 7. 
**Verify** - Run integrity checks

#### Estimated Duration

| Operation         | Duration      |
| ----------------- | ------------- |
| \< 1 GB backup    | \< 2 minutes  |
| 1-10 GB backup    | 2-10 minutes  |
| 10-50 GB backup   | 10-30 minutes |
| PITR + WAL replay | +1-5 minutes  |

### Monitoring Restore Progress

#### CLI

```bash
postbase cloud backups restore-status $RESTORE_ID -p myapp -d production
```

```
Restore Status

ID: rstr_Abc123
Status: in_progress
Progress: 65%
Started: 2026-01-25 14:30:00
Phase: Restoring data

Estimated remaining: ~2 minutes
```

#### Watch Mode

```bash
postbase cloud backups restore-status $RESTORE_ID \
  -p myapp -d production --watch
```

### Restore Verification

#### Check Data

```bash
# Connect and verify
postbase cloud psql -p myapp -d production
```

```sql
-- Check row counts
SELECT 'users' AS table_name, COUNT(*) FROM users
UNION ALL
SELECT 'posts', COUNT(*) FROM posts
UNION ALL
SELECT 'comments', COUNT(*) FROM comments;
```

#### Check Schema

```bash
postbase cloud psql -p myapp -d production -c "\dt"
```

#### Check Recent Data

```bash
postbase cloud psql -p myapp -d production \
  -c "SELECT * FROM users ORDER BY created_at DESC LIMIT 5"
```

### Rollback Scenarios

#### Accidental Deletion

```sql
-- Oops!
DELETE FROM users WHERE active = false;
-- Deleted 1000 users
```

```bash
# Check PITR window
postbase cloud pitr status -p myapp -d production

# Restore to 5 minutes ago
postbase cloud pitr restore \
  -p myapp -d production \
  --target-time "2026-01-25T14:25:00Z" \
  --force
```

#### Bad Migration

```bash
# Migration went wrong
postbase migrate up
# Corrupted data

# Find last good backup
postbase cloud backups list -p myapp -d production

# Restore
postbase cloud backups restore bkp_BeforeMigration \
  -p myapp -d production --force
```

#### Application Bug

```typescript
// Bug in code wrote bad data
await db.from('orders').update({ total: 0 })
// All orders now $0
```

```bash
# Use PITR to go back to before the bug
postbase cloud pitr restore \
  -p myapp -d production \
  --target-time "2026-01-25T10:00:00Z" \
  --force
```

### Best Practices

#### Pre-Restore Backup

Before restoring to production, create a backup of the current state:

```bash
# Backup current state
postbase cloud backups create -p myapp -d production

# Then restore
postbase cloud backups restore $OLD_BACKUP_ID -p myapp -d production --force
```

#### Test Restores First

Restore to a test database before production:

```bash
# Create test database
postbase cloud provision restore-test -p myapp

# Restore to test
postbase cloud backups restore $BACKUP_ID -p myapp -d restore-test

# Verify data
postbase cloud psql -p myapp -d restore-test \
  -c "SELECT COUNT(*) FROM users"

# If good, restore to production
postbase cloud backups restore $BACKUP_ID -p myapp -d production --force

# Cleanup
postbase cloud destroy restore-test -p myapp
```

#### Document Recovery

Keep a recovery runbook:

```markdown
## Recovery Runbook

### Accidental Data Deletion
1. Identify when deletion occurred
2. Check PITR window: `postbase cloud pitr status`
3. Restore to pre-deletion time: `postbase cloud pitr restore --target-time "..."`

### Full Database Recovery
1. List backups: `postbase cloud backups list`
2. Select appropriate backup
3. Restore: `postbase cloud backups restore`
4. 
Verify data integrity
```

### Troubleshooting

#### Restore Failed

```bash
# Check restore status
postbase cloud backups restore-status $RESTORE_ID -p myapp -d production

# If failed, check logs
# Contact support with restore ID
```

#### Target Time Invalid

```
Error: Target time outside recovery window
```

Check the available recovery window:

```bash
postbase cloud pitr status -p myapp -d production
```

#### Incomplete Restore

If a restore partially completed:

1. Don't panic
2. Create a backup of the current state (if possible)
3. Retry the restore
4. Contact support if the problem persists

## API Reference

RESTful API for the Postbase Cloud platform.

### Base URL

```
https://api.postbase.sh
```

### Authentication

All API requests require a Bearer token:

```bash
Authorization: Bearer <token>
```

Get your token via the CLI:

```bash
postbase cloud login
# Token stored in ~/.postbase/cloud.json
```

### Response Format

Successful requests return the requested resource as plain JSON (see the endpoint examples below). Error responses follow this structure:

```json
{
  "success": false,
  "error": {
    "code": "ERROR_CODE",
    "message": "Human-readable error message"
  }
}
```

### Rate Limits

| Tier | Requests/minute | Burst |
| ---- | --------------- | ----- |
| Free | 60              | 10    |
| Pro  | 1000            | 100   |

### API Endpoints

#### Health Check

```http
GET /
```

**Response:**

```json
{
  "name": "Postbase Cloud API",
  "version": "0.5.2",
  "status": "healthy"
}
```

***

### Authentication

#### Start Device Flow

```http
POST /auth/device
```

Initiates GitHub OAuth device flow.

**Response:**

```json
{
  "device_code": "abc123...",
  "user_code": "ABCD-1234",
  "verification_uri": "https://github.com/login/device",
  "expires_in": 900,
  "interval": 5
}
```

#### Poll for Token

```http
POST /auth/device/token
Content-Type: application/json

{
  "device_code": "abc123..."
}
```

**Response (pending):**

```json
{
  "status": "pending"
}
```

**Response (success):**

```json
{
  "access_token": "pb_xxx...",
  "token_type": "Bearer",
  "user": {
    "id": "user_xxx",
    "github_id": 12345,
    "login": "username",
    "email": "user@example.com"
  }
}
```

#### Get Current User

```http
GET /auth/me
Authorization: Bearer <token>
```

**Response:**

```json
{
  "id": "user_xxx",
  "github_id": 12345,
  "login": "username",
  "email": "user@example.com",
  "created_at": "2026-01-20T10:00:00Z"
}
```

***

### Projects

#### List Projects

```http
GET /projects
Authorization: Bearer <token>
```

**Response:**

```json
{
  "projects": [
    {
      "id": "proj_xxx",
      "name": "myapp",
      "created_at": "2026-01-20T10:00:00Z"
    }
  ]
}
```

#### Create Project

```http
POST /projects
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "myapp"
}
```

**Response:**

```json
{
  "id": "proj_xxx",
  "name": "myapp",
  "created_at": "2026-01-20T10:00:00Z"
}
```

#### Get Project

```http
GET /projects/:id
Authorization: Bearer <token>
```

#### Delete Project

```http
DELETE /projects/:id
Authorization: Bearer <token>
```

***

### Databases

#### List Databases

```http
GET /projects/:project_id/databases
Authorization: Bearer <token>
```

**Response:**

```json
{
  "databases": [
    {
      "id": "db_xxx",
      "name": "production",
      "railway_service_id": "srv_xxx",
      "region": "us-west1",
      "status": "running",
      "created_at": "2026-01-20T10:00:00Z"
    }
  ]
}
```

#### Provision Database

```http
POST /projects/:project_id/databases
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "production",
  "region": "us-west1",
  "config": {
    "cpu": 2,
    "memory": 4096,
    "storage": 50
  }
}
```

**Response:**

```json
{
  "id": "db_xxx",
  "name": "production",
  "railway_service_id": "srv_xxx",
  "region": "us-west1",
  "status": "provisioning",
  "created_at": "2026-01-20T10:00:00Z"
}
```

#### Get Database

```http
GET /projects/:project_id/databases/:name
Authorization: Bearer <token>
```

#### Get Connection URL

```http
GET /projects/:project_id/databases/:name/url
Authorization: Bearer <token>
```

**Response:**

```json
{
  "host": "xxx.proxy.rlwy.net",
  "port": 12345,
  "database": "railway",
  "user": "postgres",
  "connection_string": "postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable"
}
```

#### Delete Database

```http
DELETE /projects/:project_id/databases/:name
Authorization: Bearer <token>
```

***

### Backups

#### List Backups

```http
GET /projects/:project_id/databases/:name/backups
Authorization: Bearer <token>
```

**Response:**

```json
{
  "backups": [
    {
      "id": "bkp_xxx",
      "type": "automated",
      "status": "completed",
      "size_bytes": 12345678,
      "created_at": "2026-01-25T03:00:00Z",
      "expires_at": "2026-01-28T03:00:00Z"
    }
  ]
}
```

#### Create Backup

```http
POST /projects/:project_id/databases/:name/backups
Authorization: Bearer <token>
```

**Response:**

```json
{
  "id": "bkp_xxx",
  "type": "manual",
  "status": "pending",
  "created_at": "2026-01-25T14:30:00Z"
}
```

#### Get Backup Details

```http
GET /projects/:project_id/databases/:name/backups/:backup_id
Authorization: Bearer <token>
```

#### Download Backup

```http
GET /projects/:project_id/databases/:name/backups/:backup_id/download
Authorization: Bearer <token>
```

Returns a pre-signed URL for downloading the backup file.

***

### Projects

#### List Projects

```http
GET /projects
Authorization: Bearer <token>
```

**Response:**

```json
{
  "projects": [
    {
      "id": "proj_xxx",
      "name": "myapp",
      "created_at": "2026-01-20T10:00:00Z"
    }
  ]
}
```

#### Create Project

```http
POST /projects
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "myapp"
}
```

**Response:**

```json
{
  "id": "proj_xxx",
  "name": "myapp",
  "created_at": "2026-01-20T10:00:00Z"
}
```

#### Get Project

```http
GET /projects/:id
Authorization: Bearer <token>
```

#### Delete Project

```http
DELETE /projects/:id
Authorization: Bearer <token>
```

***

### Databases

#### List Databases

```http
GET /projects/:project_id/databases
Authorization: Bearer <token>
```

**Response:**

```json
{
  "databases": [
    {
      "id": "db_xxx",
      "name": "production",
      "railway_service_id": "srv_xxx",
      "region": "us-west1",
      "status": "running",
      "created_at": "2026-01-20T10:00:00Z"
    }
  ]
}
```

#### Provision Database

```http
POST /projects/:project_id/databases
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "production",
  "region": "us-west1",
  "config": {
    "cpu": 2,
    "memory": 4096,
    "storage": 50
  }
}
```

**Response:**

```json
{
  "id": "db_xxx",
  "name": "production",
  "railway_service_id": "srv_xxx",
  "region": "us-west1",
  "status": "provisioning",
  "created_at": "2026-01-20T10:00:00Z"
}
```

#### Get Database

```http
GET /projects/:project_id/databases/:name
Authorization: Bearer <token>
```

#### Get Connection URL

```http
GET /projects/:project_id/databases/:name/url
Authorization: Bearer <token>
```

**Response:**

```json
{
  "host": "xxx.proxy.rlwy.net",
  "port": 12345,
  "database": "railway",
  "user": "postgres",
  "connection_string": "postgresql://postgres:xxx@xxx.proxy.rlwy.net:12345/railway?sslmode=disable"
}
```

#### Delete Database

```http
DELETE /projects/:project_id/databases/:name
Authorization: Bearer <token>
```

***

### Backups

#### List Backups

```http
GET /projects/:project_id/databases/:name/backups
Authorization: Bearer <token>
```

**Response:**

```json
{
  "backups": [
    {
      "id": "bkp_xxx",
      "type": "automated",
      "status": "completed",
      "size_bytes": 12345678,
      "created_at": "2026-01-25T03:00:00Z",
      "expires_at": "2026-01-28T03:00:00Z"
    }
  ]
}
```

#### Create Backup

```http
POST /projects/:project_id/databases/:name/backups
Authorization: Bearer <token>
```

**Response:**

```json
{
  "id": "bkp_xxx",
  "type": "manual",
  "status": "pending",
  "created_at": "2026-01-25T14:30:00Z"
}
```

#### Get Backup Details

```http
GET /projects/:project_id/databases/:name/backups/:backup_id
Authorization: Bearer <token>
```

#### Download Backup

```http
GET /projects/:project_id/databases/:name/backups/:backup_id/download
Authorization: Bearer <token>
```

Returns a pre-signed URL for downloading the backup file.

**Response:**

```json
{
  "download_url": "https://...",
  "expires_at": "2026-01-25T15:30:00Z"
}
```

#### Restore from Backup

```http
POST /projects/:project_id/databases/:name/backups/:backup_id/restore
Authorization: Bearer <token>
```

**Response:**

```json
{
  "restore_id": "rstr_xxx",
  "status": "pending",
  "started_at": "2026-01-25T14:30:00Z"
}
```

#### Delete Backup

```http
DELETE /projects/:project_id/databases/:name/backups/:backup_id
Authorization: Bearer <token>
```

***

### PITR (Point-in-Time Recovery)

#### Get PITR Config

```http
GET /projects/:project_id/databases/:name/pitr/config
Authorization: Bearer <token>
```

**Response:**

```json
{
  "enabled": true,
  "base_backup_id": "bkp_xxx",
  "retention_days": 7,
  "recovery_window": {
    "from": "2026-01-18T03:00:00Z",
    "to": "2026-01-25T14:45:00Z"
  }
}
```

#### Enable PITR

```http
POST /projects/:project_id/databases/:name/pitr/enable
Authorization: Bearer <token>
Content-Type: application/json

{
  "retention_days": 7
}
```

**Response:**

```json
{
  "enabled": true,
  "base_backup_id": "bkp_xxx",
  "wal_receiver_status": "starting"
}
```

#### Disable PITR

```http
POST /projects/:project_id/databases/:name/pitr/disable
Authorization: Bearer <token>
```

#### List WAL Archives

```http
GET /projects/:project_id/databases/:name/pitr/wal
Authorization: Bearer <token>
```

**Response:**

```json
{
  "archives": [
    {
      "filename": "000000010000000000000042",
      "size_bytes": 16777216,
      "archived_at": "2026-01-25T14:44:45Z"
    }
  ],
  "total_count": 42,
  "total_size_bytes": 704643072
}
```

#### Get Recovery Window

```http
GET /projects/:project_id/databases/:name/pitr/recovery-window
Authorization: Bearer <token>
```

**Response:**

```json
{
  "from": "2026-01-18T03:00:00Z",
  "to": "2026-01-25T14:45:00Z",
  "base_backup_time": "2026-01-18T03:00:00Z",
  "latest_wal_time": "2026-01-25T14:45:00Z"
}
```

#### Initiate PITR Restore

```http
POST /projects/:project_id/databases/:name/pitr/restore
Authorization: Bearer <token>
Content-Type: application/json

{
  "target_time": "2026-01-25T14:30:00Z"
}
```

Or for latest recovery:

```json
{
  "target_time": "latest"
}
```

**Response:**

```json
{
  "restore_id": "rstr_xxx",
  "target_time": "2026-01-25T14:30:00Z",
  "status": "pending",
  "started_at": "2026-01-25T14:50:00Z"
}
```

#### List PITR Restores

```http
GET /projects/:project_id/databases/:name/pitr/restores
Authorization: Bearer <token>
```

**Response:**

```json
{
  "restores": [
    {
      "id": "rstr_xxx",
      "target_time": "2026-01-25T14:30:00Z",
      "status": "completed",
      "wal_files_applied": 35,
      "started_at": "2026-01-25T14:50:00Z",
      "completed_at": "2026-01-25T14:55:00Z"
    }
  ]
}
```

#### Get Restore Status

```http
GET /projects/:project_id/databases/:name/pitr/restores/:restore_id
Authorization: Bearer <token>
```

#### Get WAL Receiver Status

```http
GET /projects/:project_id/databases/:name/pitr/receiver-status
Authorization: Bearer <token>
```

**Response:**

```json
{
  "status": "running",
  "started_at": "2026-01-24T03:05:00Z",
  "last_wal_file": "000000010000000000000042",
  "bytes_received": 704643072,
  "errors": 0,
  "lag_seconds": 15
}
```

#### Restart WAL Receiver

```http
POST /projects/:project_id/databases/:name/pitr/restart-receiver
Authorization: Bearer <token>
```
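A typical point-in-time restore combines three of the endpoints above: read the recovery window, initiate the restore, then poll its status until it leaves `pending`. A minimal sketch reusing the illustrative `api` helper from the Response Format section; the 5-second poll interval and the treatment of any non-`pending` status as terminal are my assumptions, not documented guarantees:

```typescript
// Sketch: point-in-time restore via the PITR endpoints above.
// `api` is the hypothetical envelope helper defined earlier.
async function restoreToPointInTime(
  project: string,
  db: string,
  targetTime: string, // ISO-8601 UTC timestamp, or "latest"
): Promise<string> {
  const base = `/projects/${project}/databases/${db}/pitr`;

  // ISO-8601 UTC timestamps compare correctly as plain strings.
  const win = await api<{ from: string; to: string }>("GET", `${base}/recovery-window`);
  if (targetTime !== "latest" && (targetTime < win.from || targetTime > win.to)) {
    throw new Error(`Target time outside recovery window (${win.from} .. ${win.to})`);
  }

  const restore = await api<{ restore_id: string; status: string }>(
    "POST",
    `${base}/restore`,
    { target_time: targetTime },
  );

  // Poll until the restore leaves the pending state.
  let status = restore.status;
  while (status === "pending") {
    await new Promise((r) => setTimeout(r, 5_000));
    status = (
      await api<{ status: string }>("GET", `${base}/restores/${restore.restore_id}`)
    ).status;
  }
  return status; // e.g. "completed"
}
```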
***

### Error Codes

| Code                  | Description                             |
| --------------------- | --------------------------------------- |
| `AUTH_REQUIRED`       | Missing or invalid authorization        |
| `PROJECT_NOT_FOUND`   | Project does not exist                  |
| `DATABASE_NOT_FOUND`  | Database does not exist                 |
| `BACKUP_NOT_FOUND`    | Backup does not exist                   |
| `PITR_NOT_ENABLED`    | PITR is not enabled for this database   |
| `TARGET_TIME_INVALID` | Target time is outside recovery window  |
| `RATE_LIMIT_EXCEEDED` | Too many requests                       |
| `INTERNAL_ERROR`      | Server error                            |
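Because failures carry a stable `code`, clients should branch on it rather than string-matching messages. An illustrative fragment, again reusing the hypothetical `api`/`ApiError` helpers from the Response Format sketch:

```typescript
// Illustrative: branching on the stable error codes above.
// `api` and `ApiError` are the hypothetical helpers from the
// Response Format sketch earlier in this reference.
async function restoreWithDiagnostics(): Promise<void> {
  try {
    await api("POST", "/projects/proj_xxx/databases/production/pitr/restore", {
      target_time: "2026-01-25T14:30:00Z",
    });
  } catch (err) {
    if (!(err instanceof ApiError)) throw err;
    switch (err.code) {
      case "PITR_NOT_ENABLED":
        console.error("Enable PITR first (POST .../pitr/enable), then retry.");
        break;
      case "TARGET_TIME_INVALID":
        console.error("Pick a target time inside the recovery window.");
        break;
      case "RATE_LIMIT_EXCEEDED":
        console.error("Rate limited; back off before retrying.");
        break;
      default:
        console.error(`${err.code}: ${err.message}`);
    }
  }
}
```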