\chapter{Database Design and Management}

\section{Overview}

Database design and management tasks represent some of the most critical infrastructure challenges in Claude Code development work. These tasks involve architecting data storage systems, designing schemas, implementing CRUD operations, and ensuring data integrity and performance. Success requires systematic planning, understanding of database principles, and careful consideration of scalability, security, and maintenance requirements.

\subsection{\textbf{Key Characteristics}}
\begin{itemize}
\item \textbf{Scope}: Database architecture, schema design, data modeling, migration management
\item \textbf{Complexity}: Medium to Very High (3-5 on complexity scale)  
\item \textbf{Typical Duration}: Multiple sessions spanning hours to weeks
\item \textbf{Success Factors}: Requirements analysis, data modeling, performance optimization, security design
\item \textbf{Common Patterns}: Requirements → Schema Design → Implementation → Migration → Optimization
\end{itemize}

\subsection{\textbf{When to Use This Task Type}}
\begin{itemize}
\item Designing database schemas for new applications
\item Implementing user authentication and authorization systems
\item Creating data models for complex business logic
\item Setting up database migrations and versioning systems
\item Optimizing database performance and query efficiency
\item Managing log databases and analytics systems
\item Implementing multi-user subscription and billing systems
\item Setting up database backup, recovery, and monitoring systems
\end{itemize}

\section{Real-World Examples from Session Analysis}

\subsection{\textbf{Example 1: Multi-User Subscription Platform Database}}
Task: design a comprehensive database system for an ArXiv subscription platform.

Initial problem pattern:
\begin{lstlisting}[language=bash]
"it failed to build this service via docker, so try direct run on localhost. 
But, we should configure the redis and postgresql-17 and use the correct .env configuration"
\end{lstlisting}

Database architecture requirements:
\begin{itemize}
\item PostgreSQL 17 for primary data storage
\item Redis for caching and session management
\item User authentication and authorization
\item Subscription management with billing
\item Multi-tenant data isolation
\item CORS and API security integration
\end{itemize}

Development approach:
\begin{enumerate}
\item Database service configuration and startup
\item Environment variable management for database connections
\item Schema design for users, subscriptions, and authentication
\item API integration with database operations
\item Migration management and version control
\item Performance optimization and caching strategies
\end{enumerate}
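The \texttt{.env} configuration mentioned above usually reduces to a single \texttt{DATABASE\_URL}. As a minimal sketch (the example URL, the \texttt{DatabaseConfig} shape, and the function name are illustrative assumptions, not taken from the session), a connection string can be split into the discrete settings a pool constructor expects:

```typescript
// Minimal sketch: split a DATABASE_URL into discrete settings.
// The example URL and the DatabaseConfig shape are illustrative assumptions.
interface DatabaseConfig {
    host: string;
    port: number;
    database: string;
    username: string;
    password: string;
}

function parseDatabaseUrl(raw: string): DatabaseConfig {
    const url = new URL(raw);
    return {
        host: url.hostname,
        port: url.port ? Number(url.port) : 5432, // PostgreSQL default port
        database: url.pathname.replace(/^\//, ''),
        username: decodeURIComponent(url.username),
        password: decodeURIComponent(url.password),
    };
}

const cfg = parseDatabaseUrl('postgresql://app:secret@localhost:5432/arxiv_platform');
// cfg.database === 'arxiv_platform'
```

Node's WHATWG \texttt{URL} parser handles the \texttt{postgresql://} scheme, so no extra dependency is needed for this split.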

\subsection{\textbf{Example 2: Log Management Database System}}
\begin{lstlisting}[language=bash]
Task: Design SQLite database for GCR solver build and test log management

Working directory:
/home/linden/code/work/Helmholtz/gcr-nccl

Initial requirements:
"ultrathink to analyze if the database design for the management of building 
and running logs is too complex, how to optimize it to make it simple"
\end{lstlisting}

Database simplification approach:
\begin{itemize}
\item Simplified from PostgreSQL + Redis to a single SQLite database
\item Reduced from 5+ complex tables to 1 streamlined table
\item File-based log storage with database indexing
\item Time zone management for consistent logging (CST)
\item Status tracking for builds and test runs
\item Integration with command-line tools for querying
\end{itemize}

Final schema design:
\begin{lstlisting}[language=SQL]
CREATE TABLE runs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp TEXT,
    type TEXT,      -- 'build' or 'test'
    solver TEXT,    -- 'gcr', 'gmres', 'ca-gcr', etc.
    gpu_type TEXT,  -- 'cuda', 'hip', 'cpu'
    status TEXT,    -- 'success', 'failed', 'running'
    log_file TEXT,
    duration REAL
);
\end{lstlisting}
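Because everything lives in one table, downstream tooling stays simple. As an illustrative sketch (the \texttt{Run} type and the sample rows are invented, not from the session), rows fetched from \texttt{runs} can be reduced to a per-solver success rate in a few lines of application code:

```typescript
// Illustrative sketch: summarize rows fetched from the runs table.
// The Run shape mirrors the schema above; the sample data is invented.
interface Run {
    solver: string;
    status: 'success' | 'failed' | 'running';
    duration: number;
}

function successRateBySolver(runs: Run[]): Map<string, number> {
    const totals = new Map<string, { ok: number; done: number }>();
    for (const run of runs) {
        if (run.status === 'running') continue; // only count finished runs
        const t = totals.get(run.solver) ?? { ok: 0, done: 0 };
        t.done += 1;
        if (run.status === 'success') t.ok += 1;
        totals.set(run.solver, t);
    }
    const rates = new Map<string, number>();
    totals.forEach((t, solver) => rates.set(solver, t.ok / t.done));
    return rates;
}

const sample: Run[] = [
    { solver: 'gcr', status: 'success', duration: 12.5 },
    { solver: 'gcr', status: 'failed', duration: 3.1 },
    { solver: 'gmres', status: 'success', duration: 8.0 },
];
// successRateBySolver(sample).get('gcr') === 0.5
```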

\subsection{\textbf{Example 3: Task Management Database with SQLite}}
\begin{lstlisting}[language=bash]
Task: Create task management system with database-backed storage

Initial prompt pattern:
"write a simple command-based notepad editor and also serve as a task manager...
note will save the full task prompts to markdown file and manage it use database tech"
\end{lstlisting}

Database requirements:
\begin{itemize}
\item Task metadata storage (title, work directory, creation time)
\item Full-text search capabilities for task content
\item Relationship management between tasks and working directories
\item Integration with the file system for markdown storage
\item Command-line interface for database queries
\item Time-based sorting and task history tracking
\end{itemize}

Implementation pattern:
\begin{enumerate}
\item SQLite database for lightweight, embedded storage
\item Task table with foreign key relationships
\item File system integration for markdown content storage
\item CLI commands: \texttt{note task -l}, \texttt{note task -s <task-id>}, \texttt{cdtask <task-id>}
\item Integration with external AI services for content analysis
\item Database migration and backup strategies
\end{enumerate}

\subsection{\textbf{Example 4: Authentication Database Integration}}
\begin{lstlisting}[language=bash]
Task: Fix authentication system with proper database integration

Problem analysis:
"when I login it report error: Sign in failed: Error: Invalid credentials"
\end{lstlisting}

Database integration challenges:
\begin{itemize}
\item Backend API database connection issues
\item Field name mismatches (snake\_case vs camelCase)
\item Authentication service integration with the database
\item CORS configuration for database API access
\item Environment variable management for database URLs
\item User session management and token storage
\end{itemize}

Resolution approach:
\begin{enumerate}
\item Database connection debugging and configuration
\item API endpoint design for authentication operations
\item User table schema design with proper field naming
\item Password hashing and security implementation
\item Session token management and validation
\item Database transaction handling for user operations
\end{enumerate}
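Field-name mismatches of this kind are usually solved with a small mapping layer at the API boundary rather than by renaming columns. A minimal sketch (type and function names here are my own, not from the session) that converts snake\_case database rows into camelCase API objects:

```typescript
// Minimal sketch of a snake_case -> camelCase mapping layer for rows
// returned by the database; names are illustrative.
function snakeToCamel(key: string): string {
    return key.replace(/_([a-z])/g, (_, ch: string) => ch.toUpperCase());
}

function mapRow<T extends Record<string, unknown>>(row: T): Record<string, unknown> {
    const out: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(row)) {
        out[snakeToCamel(key)] = value;
    }
    return out;
}

const row = { full_name: 'Ada', password_hash: 'x', is_active: true };
const user = mapRow(row);
// user -> { fullName: 'Ada', passwordHash: 'x', isActive: true }
```

Applying the mapping in one place (e.g. a repository base class) keeps SQL snake\_case and application camelCase from leaking into each other.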

\section{Templates and Procedures}

\subsection{\textbf{Database Design Planning Template}}

A planning checklist to work through before any schema is written:

\subsubsection{1. Requirements Analysis}
\begin{itemize}
\item Data entities and relationships identification
\item Expected data volume and growth patterns
\item Query patterns and access frequencies
\item Performance requirements (latency, throughput)
\item Consistency requirements (ACID vs eventual consistency)
\item Integration requirements (APIs, external systems)
\end{itemize}

\subsubsection{2. Technology Selection}
\begin{itemize}
\item Database type decision (SQL vs NoSQL)
\item Specific database selection (PostgreSQL, MySQL, SQLite, MongoDB)
\item Caching strategy (Redis, Memcached, in-memory)
\item Connection pooling and management
\item Backup and recovery requirements
\item Monitoring and observability needs
\end{itemize}

\subsubsection{3. Schema Design}
\begin{itemize}
\item Entity-relationship diagram creation
\item Table structure and column definitions
\item Primary key and foreign key relationships
\item Index strategy for query optimization
\item Data validation and constraint rules
\item Normalization level decisions
\end{itemize}

\subsubsection{4. Security Considerations}
\begin{itemize}
\item Authentication and authorization mechanisms
\item Data encryption (at rest and in transit)
\item Access control and user permissions
\item SQL injection prevention strategies
\item Audit logging and compliance requirements
\item Data privacy and GDPR compliance
\end{itemize}

\subsubsection{5. Migration and Versioning}
\begin{itemize}
\item Database version control strategy
\item Migration script development and testing
\item Rollback procedures and safety mechanisms
\item Environment-specific configurations
\item Data seeding and initialization scripts
\item Deployment automation and CI/CD integration
\end{itemize}

\subsection{\textbf{Schema Development Template}}

\subsubsection{Initial Design Phase}
\begin{itemize}
\item Create a conceptual data model with entities and relationships
\item Define business rules and constraints
\item Identify required indexes for query performance
\item Plan for future scalability and growth
\end{itemize}

\subsubsection{Implementation Template}
\begin{lstlisting}[language=SQL]
-- Users table with authentication
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    full_name VARCHAR(255),
    display_name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    is_active BOOLEAN DEFAULT true,
    gdpr_consent BOOLEAN DEFAULT false,
    marketing_consent BOOLEAN DEFAULT false
);

-- Index strategy
-- (the UNIQUE constraint on email already creates an index in PostgreSQL)
CREATE INDEX idx_users_active ON users(is_active);
CREATE INDEX idx_users_created ON users(created_at);

-- Audit table for user changes
CREATE TABLE user_audit (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    action VARCHAR(50) NOT NULL,
    changed_data JSONB,
    changed_by INTEGER REFERENCES users(id),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
\end{lstlisting}

\subsubsection{Validation and Testing}
\begin{itemize}
\item Test data integrity constraints
\item Verify foreign key relationships
\item Performance test with sample data
\item Test migration scripts in a staging environment
\item Validate backup and recovery procedures
\end{itemize}

\subsection{\textbf{Database Integration Template}}

\subsubsection{Connection Management}
\begin{lstlisting}
// Environment configuration
interface DatabaseConfig {
    host: string;
    port: number;
    database: string;
    username: string;
    password: string;
    ssl?: boolean;
    poolSize?: number;
}

// Connection pool setup
const pool = new Pool({
    connectionString: process.env.DATABASE_URL,
    ssl: process.env.NODE_ENV === 'production',
    max: 20,
    idleTimeoutMillis: 30000,
    connectionTimeoutMillis: 2000,
});

// Connection health check
async function healthCheck(): Promise<boolean> {
    try {
        const client = await pool.connect();
        await client.query('SELECT 1');
        client.release();
        return true;
    } catch (error) {
        console.error('Database health check failed:', error);
        return false;
    }
}
\end{lstlisting}

\subsubsection{CRUD Operations Template}
\begin{lstlisting}
// User repository pattern
class UserRepository {
    async create(userData: CreateUserRequest): Promise<User> {
        const query = `
            INSERT INTO users (email, password_hash, full_name, display_name, gdpr_consent)
            VALUES ($1, $2, $3, $4, $5)
            RETURNING *
        `;
        const values = [
            userData.email,
            userData.passwordHash,
            userData.fullName,
            userData.displayName,
            userData.gdprConsent
        ];

        const result = await pool.query(query, values);
        return result.rows[0];
    }

    async findByEmail(email: string): Promise<User | null> {
        const query = 'SELECT * FROM users WHERE email = $1 AND is_active = true';
        const result = await pool.query(query, [email]);
        return result.rows[0] || null;
    }

    async update(id: number, updates: UpdateUserRequest): Promise<User> {
        // Build "col = $n" pairs; $1 is reserved for the id
        const setClause = Object.keys(updates)
            .map((key, index) => `${key} = $${index + 2}`)
            .join(', ');

        const query = `
            UPDATE users
            SET ${setClause}, updated_at = CURRENT_TIMESTAMP
            WHERE id = $1
            RETURNING *
        `;

        const values = [id, ...Object.values(updates)];
        const result = await pool.query(query, values);
        return result.rows[0];
    }
}
\end{lstlisting}

\subsubsection{Transaction Management}
\begin{lstlisting}
async function transferWithTransaction(fromId: number, toId: number, amount: number) {
    const client = await pool.connect();

    try {
        await client.query('BEGIN');

        // Debit from source account
        await client.query(
            'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
            [amount, fromId]
        );

        // Credit to destination account
        await client.query(
            'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
            [amount, toId]
        );

        // Record transaction
        await client.query(
            'INSERT INTO transactions (from_account, to_account, amount) VALUES ($1, $2, $3)',
            [fromId, toId, amount]
        );

        await client.query('COMMIT');
    } catch (error) {
        await client.query('ROLLBACK');
        throw error;
    } finally {
        client.release();
    }
}
\end{lstlisting}

\subsection{\textbf{Database Operations Template}}

\subsubsection{Migration Management}

\begin{lstlisting}[language=bash]
#!/bin/bash
# Database migration script template

set -e

DB_HOST=${DB_HOST:-localhost}
DB_PORT=${DB_PORT:-5432}
DB_NAME=${DB_NAME:-myapp}
DB_USER=${DB_USER:-postgres}

# Migration directory structure
MIGRATIONS_DIR="./migrations"
CURRENT_VERSION=$(psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -t -c "SELECT version FROM schema_versions ORDER BY applied_at DESC LIMIT 1" 2>/dev/null || echo "0")

echo "Current schema version: $CURRENT_VERSION"

# Apply pending migrations (relies on zero-padded, lexically sortable version names)
for migration in $MIGRATIONS_DIR/*.sql; do
    version=$(basename "$migration" .sql)
    if [[ "$version" > "$CURRENT_VERSION" ]]; then
        echo "Applying migration: $version"
        psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -f "$migration"
        psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -c "INSERT INTO schema_versions (version, applied_at) VALUES ('$version', CURRENT_TIMESTAMP)"
    fi
done
\end{lstlisting}
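Note that the migration loop compares version strings lexically (\texttt{[[ "\$version" > "\$CURRENT\_VERSION" ]]}), which is only correct while version names are zero-padded to the same width. A small sketch of the pitfall and a numeric-aware comparison (the helper name and file names are my own, illustrative only):

```typescript
// Lexical comparison ("9" > "10") breaks unpadded migration versions;
// comparing the numeric prefix avoids the pitfall. Names are illustrative.
function compareVersions(a: string, b: string): number {
    const num = (v: string) => parseInt(v.split('_')[0], 10); // '002_add_audit' -> 2
    return num(a) - num(b);
}

// Lexically, '10_add_index' sorts before '9_add_audit' -- the wrong order:
const lexical = ['10_add_index', '9_add_audit'].sort();
// A numeric-aware sort restores the intended order:
const ordered = ['10_add_index', '9_add_audit'].sort(compareVersions);
// ordered[0] === '9_add_audit'
```

Zero-padding (\texttt{001}, \texttt{002}, \dots) keeps the simple bash comparison safe; the helper above is only needed once version numbers outgrow the padding.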

\subsubsection{Backup and Recovery Procedures}
\begin{lstlisting}[language=bash]
#!/bin/bash
# Automated backup script

DB_NAME="myapp"
BACKUP_DIR="/backups/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_$DATE.sql"

# Create backup directory
mkdir -p $BACKUP_DIR

# Perform backup
pg_dump -h localhost -U postgres -d $DB_NAME > $BACKUP_FILE

# Compress backup
gzip $BACKUP_FILE

# Clean old backups (keep last 7 days)
find $BACKUP_DIR -name "${DB_NAME}_*.sql.gz" -mtime +7 -delete

echo "Backup completed: ${BACKUP_FILE}.gz"
\end{lstlisting}

\subsubsection{Performance Monitoring}
\begin{lstlisting}[language=SQL]
-- Query performance analysis (requires the pg_stat_statements extension;
-- since PostgreSQL 13 the columns are total_exec_time / mean_exec_time)
SELECT
    query,
    calls,
    total_exec_time,
    mean_exec_time,
    rows,
    100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Column statistics (distinct values and physical ordering correlation)
SELECT
    schemaname,
    tablename,
    attname,
    n_distinct,
    correlation
FROM pg_stats
WHERE schemaname = 'public'
ORDER BY n_distinct DESC;

-- Lock/wait monitoring (the old boolean "waiting" column was replaced
-- by wait_event_type / wait_event in PostgreSQL 9.6)
SELECT
    pid,
    state,
    query,
    query_start,
    wait_event_type,
    wait_event
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;
\end{lstlisting}

\section{Common Database Patterns}

\subsection{\textbf{Relational vs NoSQL Design Decisions}}

\textbf{Choose Relational (PostgreSQL/MySQL) When:}
\begin{itemize}
\item ACID compliance is critical
\item Complex relationships between entities
\item Strong consistency requirements
\item Complex query patterns with joins
\item Mature ecosystem and tooling needs
\end{itemize}

\textbf{Choose NoSQL (MongoDB/Redis) When:}
\begin{itemize}
\item Horizontal scaling is priority
\item Flexible schema requirements
\item Document-oriented data structures
\item High-speed read/write operations
\item Eventual consistency is acceptable
\end{itemize}

\subsection{\textbf{CRUD Operation Patterns and Optimization}}

\begin{lstlisting}
// Optimized batch operations
// Note: table and column names are interpolated into the SQL text, so they
// must come from trusted code, never from user input.
class BatchOperationService {
    async batchInsert<T extends Record<string, unknown>>(table: string, records: T[]): Promise<void> {
        const batchSize = 1000;

        for (let i = 0; i < records.length; i += batchSize) {
            const batch = records.slice(i, i + batchSize);
            const columns = Object.keys(batch[0]);
            // Build "($1, $2), ($3, $4), ..." placeholder groups
            const values = batch.map((_, rowIndex) =>
                `(${columns.map((_, colIndex) =>
                    `$${rowIndex * columns.length + colIndex + 1}`
                ).join(', ')})`
            ).join(', ');

            const query = `INSERT INTO ${table} (${columns.join(', ')}) VALUES ${values}`;
            const flatValues = batch.flatMap(record => Object.values(record));

            await pool.query(query, flatValues);
        }
    }

    async batchUpdate<T extends Record<string, unknown>>(table: string, updates: Array<{id: number, data: T}>): Promise<void> {
        const client = await pool.connect();

        try {
            await client.query('BEGIN');

            for (const update of updates) {
                const setClause = Object.keys(update.data)
                    .map((key, index) => `${key} = $${index + 2}`)
                    .join(', ');

                await client.query(
                    `UPDATE ${table} SET ${setClause} WHERE id = $1`,
                    [update.id, ...Object.values(update.data)]
                );
            }

            await client.query('COMMIT');
        } catch (error) {
            await client.query('ROLLBACK');
            throw error;
        } finally {
            client.release();
        }
    }
}
\end{lstlisting}
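The placeholder arithmetic inside \texttt{batchInsert} is the part most easily gotten wrong. Extracted as a pure helper (the function name is my own), it can be unit-tested without a database:

```typescript
// Pure helper (illustrative name): generate "($1, $2), ($3, $4), ..."
// placeholder groups for a multi-row INSERT.
function buildInsertPlaceholders(rowCount: number, colCount: number): string {
    const rows: string[] = [];
    for (let r = 0; r < rowCount; r++) {
        const cols: string[] = [];
        for (let c = 0; c < colCount; c++) {
            cols.push(`$${r * colCount + c + 1}`); // placeholders are 1-based
        }
        rows.push(`(${cols.join(', ')})`);
    }
    return rows.join(', ');
}

// buildInsertPlaceholders(2, 3) -> "($1, $2, $3), ($4, $5, $6)"
```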

\subsection{\textbf{Database Migration and Versioning Strategies}}

\begin{lstlisting}[language=SQL]
-- Migration versioning table
CREATE TABLE schema_versions (
    id SERIAL PRIMARY KEY,
    version VARCHAR(50) UNIQUE NOT NULL,
    description TEXT,
    applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    applied_by VARCHAR(100) DEFAULT CURRENT_USER
);

-- Example migration file: 001_create_users_table.sql
-- Version: 001
-- Description: Create initial users table with authentication

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Update version
INSERT INTO schema_versions (version, description)
VALUES ('001', 'Create initial users table with authentication')
ON CONFLICT (version) DO NOTHING;
\end{lstlisting}

\subsection{\textbf{Connection Pooling and Performance Optimization}}

\begin{lstlisting}
// Advanced connection pool configuration
const poolConfig = {
    // Connection limits
    max: parseInt(process.env.DB_POOL_MAX || '20'),
    min: parseInt(process.env.DB_POOL_MIN || '5'),

    // Timeouts
    acquireTimeoutMillis: 60000,
    createTimeoutMillis: 30000,
    destroyTimeoutMillis: 5000,
    idleTimeoutMillis: 30000,
    reapIntervalMillis: 1000,
    createRetryIntervalMillis: 200,

    // Validation
    validate: (connection) => connection.state !== 'disconnected',

    // Lifecycle hooks
    afterCreate: async (connection) => {
        // Set connection-specific parameters
        await connection.query("SET timezone TO 'UTC'");
        await connection.query("SET statement_timeout TO '30s'");
    }
};

// Query optimization patterns
class QueryOptimizer {
    // Use prepared statements for repeated queries
    private preparedStatements = new Map();

    async executePrepared(name: string, query: string, params: any[]): Promise<any> {
        if (!this.preparedStatements.has(name)) {
            await pool.query(`PREPARE ${name} AS ${query}`);
            this.preparedStatements.set(name, true);
        }
        const placeholders = params.map((_, i) => `$${i + 1}`).join(', ');
        return pool.query(`EXECUTE ${name}(${placeholders})`, params);
    }

    // Batch similar queries on a single connection
    async batchQuery(queries: Array<{query: string, params: any[]}>): Promise<any[]> {
        const client = await pool.connect();
        const results = [];

        try {
            for (const {query, params} of queries) {
                results.push(await client.query(query, params));
            }
        } finally {
            client.release();
        }
        return results;
    }
}
\end{lstlisting}

\section{Best Practices}

\subsection{\textbf{How to Structure Database Conversations with Claude}}

\begin{enumerate}
\item \textbf{Start with Requirements Analysis}
\begin{itemize}
\item Clearly define data entities and relationships
\item Specify performance and scalability requirements
\item Identify security and compliance needs
\item Document integration requirements with existing systems
\end{itemize}

\item \textbf{Provide Context About Your Environment}
\begin{itemize}
\item Database technology constraints (existing infrastructure)
\item Development environment (local vs cloud)
\item Team expertise and maintenance capabilities
\item Deployment and operational requirements
\end{itemize}

\item \textbf{Use an Iterative Design Approach}
\begin{itemize}
\item Begin with a conceptual model discussion
\item Refine schema design through multiple iterations
\item Test and validate each component before proceeding
\item Plan for future evolution and scalability needs
\end{itemize}
\end{enumerate}

\subsection{\textbf{When to Use Different Database Technologies}}

\textbf{SQLite:}
\begin{itemize}
\item Lightweight applications and prototyping
\item Single-user or low-concurrency applications
\item Embedded systems and desktop applications
\item Development and testing environments
\end{itemize}

\textbf{PostgreSQL:}
\begin{itemize}
\item Complex applications with ACID requirements
\item Applications requiring advanced SQL features
\item High-concurrency web applications
\item Applications with complex data relationships
\end{itemize}

\textbf{MySQL:}
\begin{itemize}
\item Web applications with read-heavy workloads
\item Applications requiring high availability
\item E-commerce and content management systems
\item Applications with established MySQL expertise
\end{itemize}

\textbf{Redis:}
\begin{itemize}
\item Caching and session management
\item Real-time analytics and counting
\item Message queues and pub/sub systems
\item High-speed data structure operations
\end{itemize}

\textbf{MongoDB:}
\begin{itemize}
\item Document-oriented applications
\item Rapid prototyping with evolving schemas
\item Applications with nested data structures
\item Big data applications with horizontal scaling needs
\end{itemize}

\subsection{\textbf{Schema Design Principles and Normalization}}

\begin{enumerate}
\item \textbf{Normalization Guidelines}
\begin{itemize}
\item First Normal Form: eliminate repeating groups
\item Second Normal Form: remove partial dependencies
\item Third Normal Form: eliminate transitive dependencies
\item Consider denormalization for read-heavy applications
\end{itemize}

\item \textbf{Index Strategy}
\begin{itemize}
\item Index foreign keys and frequently queried columns
\item Use composite indexes for multi-column queries
\item Monitor index usage and remove unused indexes
\item Consider covering indexes for query optimization
\end{itemize}

\item \textbf{Data Type Selection}
\begin{itemize}
\item Use appropriate data types for storage efficiency
\item Consider future requirements and data growth
\item Use constraints to enforce data integrity
\item Plan for internationalization and character encoding
\end{itemize}
\end{enumerate}

\subsection{\textbf{Security and Performance Considerations}}

\textbf{Security Best Practices:}
\begin{itemize}
\item Use parameterized queries to prevent SQL injection
\item Implement proper authentication and authorization
\item Encrypt sensitive data at rest and in transit
\item Audit database access and changes
\item Follow principle of least privilege for database users
\end{itemize}

\textbf{Performance Optimization:}
\begin{itemize}
\item Analyze query execution plans regularly
\item Optimize slow queries with proper indexing
\item Use connection pooling to manage database connections
\item Monitor database performance metrics
\item Plan for horizontal and vertical scaling
\end{itemize}

\section{Advanced Techniques}

\subsection{\textbf{Distributed Database Architectures}}

\begin{lstlisting}
// Database sharding strategy
class DatabaseShardManager {
    private shards: Map<string, Pool> = new Map();

    constructor(shardConfigs: Array<{id: string, config: PoolConfig}>) {
        shardConfigs.forEach(({id, config}) => {
            this.shards.set(id, new Pool(config));
        });
    }

    // Hash-based sharding (assumes shards are registered as shard_0, shard_1, ...)
    getShardForUser(userId: number): Pool {
        const shardId = `shard_${userId % this.shards.size}`;
        return this.shards.get(shardId)!;
    }

    // Geography-based sharding
    getShardForRegion(region: string): Pool {
        const regionShardMap: Record<string, string> = {
            'us-east': 'shard_us_east',
            'us-west': 'shard_us_west',
            'europe': 'shard_europe',
            'asia': 'shard_asia'
        };

        return this.shards.get(regionShardMap[region]) || this.shards.values().next().value;
    }

    // Execute a query across all shards and merge the rows
    async executeAcrossShards(query: string, params: any[]): Promise<any[]> {
        const results = await Promise.all(
            Array.from(this.shards.values()).map(shard =>
                shard.query(query, params)
            )
        );

        return results.flatMap(result => result.rows);
    }
}
\end{lstlisting}

\subsection{\textbf{Database Sharding and Partitioning}}

\begin{lstlisting}[language=SQL]
-- Table partitioning by date range
CREATE TABLE user_activities (
    id SERIAL,
    user_id INTEGER NOT NULL,
    activity_type VARCHAR(50) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    data JSONB
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE user_activities_2025_01 PARTITION OF user_activities
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

CREATE TABLE user_activities_2025_02 PARTITION OF user_activities
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- Hash partitioning by user_id
CREATE TABLE user_data (
    id SERIAL,
    user_id INTEGER NOT NULL,
    data JSONB,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) PARTITION BY HASH (user_id);

CREATE TABLE user_data_0 PARTITION OF user_data
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE user_data_1 PARTITION OF user_data
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE user_data_2 PARTITION OF user_data
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE user_data_3 PARTITION OF user_data
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);
\end{lstlisting}
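Creating one range partition per month quickly becomes repetitive, so teams often generate the DDL. A small sketch (the helper name \texttt{monthlyPartitionDDL} is my own, illustrative only) that emits statements like the monthly partitions above, including the December-to-January rollover:

```typescript
// Illustrative helper: generate monthly RANGE partition DDL for a table
// partitioned by a timestamp column, as in user_activities above.
function monthlyPartitionDDL(table: string, year: number, month: number): string {
    const pad = (n: number) => String(n).padStart(2, '0');
    const nextYear = month === 12 ? year + 1 : year;   // roll over the year
    const nextMonth = month === 12 ? 1 : month + 1;
    const from = `${year}-${pad(month)}-01`;
    const to = `${nextYear}-${pad(nextMonth)}-01`;
    return `CREATE TABLE ${table}_${year}_${pad(month)} PARTITION OF ${table}\n` +
           `    FOR VALUES FROM ('${from}') TO ('${to}');`;
}

// monthlyPartitionDDL('user_activities', 2025, 1) reproduces the
// user_activities_2025_01 statement above.
```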

\subsection{\textbf{Real-time Replication and Synchronization}}

\begin{lstlisting}
// Master-slave replication management
class ReplicationManager {
    private master: Pool;
    private slaves: Pool[];
    private currentSlaveIndex = 0;

    constructor(masterConfig: PoolConfig, slaveConfigs: PoolConfig[]) {
        this.master = new Pool(masterConfig);
        this.slaves = slaveConfigs.map(config => new Pool(config));
    }

    // Write operations go to master
    async write(query: string, params: any[]): Promise<any> {
        return this.master.query(query, params);
    }

    // Read operations use round-robin slave selection
    async read(query: string, params: any[]): Promise<any> {
        const slave = this.slaves[this.currentSlaveIndex];
        this.currentSlaveIndex = (this.currentSlaveIndex + 1) % this.slaves.length;

        try {
            return await slave.query(query, params);
        } catch (error) {
            // Fallback to master if slave is unavailable
            console.warn('Slave unavailable, falling back to master:', error);
            return this.master.query(query, params);
        }
    }

    // Health check for replication lag
    async checkReplicationLag(): Promise<Array<{slave: number, lag: number}>> {
        const masterLSN = await this.master.query('SELECT pg_current_wal_lsn()');
        const lagPromises = this.slaves.map(async (slave, index) => {
            try {
                const slaveLSN = await slave.query('SELECT pg_last_wal_receive_lsn()');
                const lag = await this.master.query(
                    'SELECT pg_wal_lsn_diff($1, $2) AS lag',
                    [masterLSN.rows[0].pg_current_wal_lsn, slaveLSN.rows[0].pg_last_wal_receive_lsn]
                );
                return { slave: index, lag: lag.rows[0].lag };
            } catch (error) {
                return { slave: index, lag: -1 };
            }
        });

        return Promise.all(lagPromises);
    }
}
\end{lstlisting}

\subsection{\textbf{Advanced Query Optimization and Indexing}}

\begin{lstlisting}[language=SQL]
-- Advanced indexing strategies
-- Partial index for active users only
CREATE INDEX idx_users_active_email
ON users(email)
WHERE is_active = true;

-- Expression index for case-insensitive searches
CREATE INDEX idx_users_email_lower
ON users(LOWER(email));

-- Covering index to avoid table lookups
CREATE INDEX idx_users_covering
ON users(id, email, full_name, created_at)
WHERE is_active = true;

-- Multi-column index with proper ordering
CREATE INDEX idx_user_activities_compound
ON user_activities(user_id, created_at DESC, activity_type);

-- GIN index for JSONB data
CREATE INDEX idx_user_data_gin
ON user_data USING GIN (data);

-- Query optimization techniques
-- Use EXPLAIN ANALYZE for query planning
EXPLAIN ANALYZE
SELECT u.email, u.full_name, COUNT(ua.id) AS activity_count
FROM users u
LEFT JOIN user_activities ua ON u.id = ua.user_id
WHERE u.is_active = true
  AND u.created_at >= '2025-01-01'
  AND (ua.created_at >= '2025-01-01' OR ua.created_at IS NULL)
GROUP BY u.id, u.email, u.full_name
ORDER BY activity_count DESC
LIMIT 100;

-- Window functions for analytics
SELECT
    user_id,
    activity_type,
    created_at,
    COUNT(*) OVER (PARTITION BY user_id ORDER BY created_at
                   RANGE BETWEEN INTERVAL '1 hour' PRECEDING
                   AND CURRENT ROW) AS activities_last_hour,
    ROW_NUMBER() OVER (PARTITION BY user_id, activity_type
                       ORDER BY created_at DESC) AS recent_rank
FROM user_activities
WHERE created_at >= CURRENT_DATE - INTERVAL '7 days';

-- Common Table Expressions for complex queries
WITH monthly_stats AS (
    SELECT
        DATE_TRUNC('month', created_at) AS month,
        COUNT(*) AS total_users,
        COUNT(CASE WHEN is_active THEN 1 END) AS active_users
    FROM users
    WHERE created_at >= CURRENT_DATE - INTERVAL '1 year'
    GROUP BY DATE_TRUNC('month', created_at)
),
growth_rates AS (
    SELECT
        month,
        total_users,
        active_users,
        LAG(total_users) OVER (ORDER BY month) AS prev_total,
        LAG(active_users) OVER (ORDER BY month) AS prev_active
    FROM monthly_stats
)
SELECT
    month,
    total_users,
    active_users,
    CASE
        WHEN prev_total > 0 THEN
            ROUND((total_users - prev_total) * 100.0 / prev_total, 2)
        ELSE 0
    END AS total_growth_percent,
    CASE
        WHEN prev_active > 0 THEN
            ROUND((active_users - prev_active) * 100.0 / prev_active, 2)
        ELSE 0
    END AS active_growth_percent
FROM growth_rates
ORDER BY month;
\end{lstlisting}

\section{Conclusion}

Database design and management tasks require careful planning, systematic implementation, and ongoing optimization. Success depends on understanding both the technical requirements and business context, choosing appropriate technologies, and following established best practices for security, performance, and maintainability.

The examples and templates in this chapter provide proven patterns for common database scenarios, from simple SQLite applications to complex distributed systems. Use these templates as starting points, adapting them to your specific requirements while maintaining focus on data integrity, performance, and security.

Remember that database design is often iterative: start with a solid foundation based on current requirements, but plan for future evolution and scaling needs. Regular monitoring, optimization, and maintenance are essential for long-term success.
