\chapter{Infrastructure and DevOps Tasks}

\section{Overview}

Infrastructure \& DevOps tasks represent some of the most complex operational challenges in Claude Code development work. These tasks involve deploying, configuring, and managing production systems that require careful orchestration, security considerations, and operational excellence. Success depends on systematic planning, understanding of distributed systems concepts, and comprehensive testing strategies.

\subsection{\textbf{Key Characteristics}}
\begin{itemize}
\item \textbf{Scope}: Production deployment, system configuration, operational automation
\item \textbf{Complexity}: Very High (4-5 on complexity scale)  
\item \textbf{Typical Duration}: Multiple sessions spanning days to weeks
\item \textbf{Success Factors}: Infrastructure planning, security hardening, monitoring setup, scalability design
\item \textbf{Common Patterns}: Planning → Configuration → Deployment → Monitoring → Optimization
\end{itemize}

\subsection{\textbf{When to Use This Task Type}}
\begin{itemize}
\item Setting up production deployment pipelines
\item Configuring containerized applications with Docker/Kubernetes
\item Implementing CI/CD automation and testing workflows
\item Setting up monitoring, logging, and alerting systems  
\item Configuring web servers, load balancers, and networking
\item Managing multi-service architectures and microservices
\item Implementing backup, recovery, and disaster planning
\item Automating operational tasks and maintenance procedures
\end{itemize}

\section{Real-World Examples from Session Analysis}

\subsection{\textbf{Example 1: Multi-Service Application Deployment}}
\begin{lstlisting}[language=bash]
Task: Deploy ArXiv subscription platform with frontend/backend services

Initial Problem Pattern:
"./start.sh
🚀 Starting ArXiv Subscription Platform...
🔍 Checking for processes on port 3001...
⚡ Killing non-browser processes on port 3001: 22046
🔍 Checking for processes on port 3002...
⚡ Killing non-browser processes on port 3002: 18347
📡 Starting backend server on port 3001...
🌐 Starting frontend server on port 3002...

when I login it report error: ❌ Sign in failed: Error: Invalid credentials"

Development Approach:
1. Process management and port conflict resolution
2. Service startup script optimization
3. Cross-origin resource sharing (CORS) configuration
4. Authentication service integration debugging
5. Log aggregation and error tracking setup
6. Environment variable management across services
\end{lstlisting}
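The CORS misconfiguration implied above is a frequent cause of such login failures when the frontend (port 3002) and backend (port 3001) run as separate services. A minimal sketch of the fix in plain Node.js follows; the allowed origin and the helper name \texttt{applyCors} are illustrative assumptions, not taken from the session:

\begin{lstlisting}[language=javascript]
// cors-sketch.js - minimal CORS handling without external dependencies.
// The allowed origin (frontend dev server on port 3002) is an assumption.
const ALLOWED_ORIGINS = new Set(['http://localhost:3002']);

// Returns true when the request was a preflight and has been fully answered.
function applyCors(req, res) {
  const origin = req.headers.origin;
  if (origin && ALLOWED_ORIGINS.has(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Access-Control-Allow-Credentials', 'true');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  }
  if (req.method === 'OPTIONS') {
    // Answer the preflight immediately so the browser proceeds to the real call
    res.writeHead(204);
    res.end();
    return true;
  }
  return false;
}

module.exports = { applyCors };
\end{lstlisting}

Registering this before the authentication routes lets the browser's credentialed fetch from port 3002 reach the backend on 3001 instead of failing client-side.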

\subsection{\textbf{Example 2: Docker Container Environment Setup}}
\begin{lstlisting}[language=bash]
Task: Configure GSI (Gridpoint Statistical Interpolation) system in Docker

Working Directory Pattern:
/home/docker/comgsi/tutorial/comGSIv3.7_EnKFv1.3/src/comGSI-wiki

Infrastructure Requirements:
- Multi-language runtime environment (Fortran, Julia, Python)
- Scientific computing libraries and dependencies
- Build system automation for complex mathematical software
- Cross-platform compatibility and containerization
- Performance monitoring for computational workloads
- Data pipeline orchestration for large-scale processing
\end{lstlisting}
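Environments like this are typically captured in a Dockerfile so the toolchain is reproducible. The sketch below is a hypothetical starting point only; the base image and package names are assumptions, not the actual GSI container:

\begin{lstlisting}[language=dockerfile]
# Hypothetical build environment for a Fortran/MPI scientific code base
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential gfortran cmake \
        libnetcdf-dev libnetcdff-dev \
        libopenmpi-dev openmpi-bin \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /home/docker/comgsi
\end{lstlisting}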

\subsection{\textbf{Example 3: No-Docker Service Installation and Configuration}}
\begin{lstlisting}[language=bash]
Task: Install GitHub MCP server without Docker containerization

Initial Prompt Pattern:
"no-docker install the github-mcp-server and configurate it for claude cli"

DevOps Approach:
1. Direct system installation without containerization
2. Native dependency management and conflict resolution
3. Service configuration for Claude CLI integration
4. Authentication setup and credential management
5. Process monitoring and service management
6. Integration testing with existing CLI tools
\end{lstlisting}
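For step 3, MCP servers are usually registered with a small JSON entry in the CLI configuration. The fragment below is a rough sketch only; the exact schema, file location, and binary name should be verified against the Claude CLI documentation:

\begin{lstlisting}
{
  "mcpServers": {
    "github": {
      "command": "github-mcp-server",
      "args": ["stdio"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<token-from-your-secret-store>"
      }
    }
  }
}
\end{lstlisting}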

\subsection{\textbf{Example 4: Automated Pipeline and Testing Systems}}
\begin{lstlisting}[language=bash]
Task: Create automated pipeline for GCR solver comparison

System Requirements:
- Automated shell script execution for multiple software variants
- Python-based results comparison and analysis
- Log file processing and automated report generation
- Parallel execution coordination across different solver implementations
- Performance benchmarking and statistical analysis
- Automated experiment report generation in markdown format

Pipeline Pattern:
1. Shell scripts for software execution coordination
2. Python scripts for log analysis and comparison
3. Automated calling of analysis scripts from shell scripts
4. Complete automated pipeline with minimal human intervention
\end{lstlisting}
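The log-analysis step (pattern 2) can be sketched in Python. The log line format \texttt{iter N residual R} is a hypothetical stand-in for the real solver output:

\begin{lstlisting}[language=python]
# compare_logs.py - sketch of the per-variant log comparison step.
# The "iter N residual R" line format is an assumption, not the real GCR output.
import re

RESIDUAL_RE = re.compile(r"iter\s+(\d+)\s+residual\s+([0-9.eE+-]+)")

def final_residual(log_text: str):
    """Return (iteration, residual) from the last matching line, or None."""
    last = None
    for m in RESIDUAL_RE.finditer(log_text):
        last = (int(m.group(1)), float(m.group(2)))
    return last

def compare(variants: dict) -> str:
    """Render a small markdown table comparing final residuals per variant."""
    rows = ["| variant | iters | final residual |", "|---|---|---|"]
    for name, text in sorted(variants.items()):
        it, res = final_residual(text) or ("-", float("nan"))
        rows.append(f"| {name} | {it} | {res:.2e} |")
    return "\n".join(rows)
\end{lstlisting}

A driver shell script would run each solver variant, capture its log, and call this module to emit the markdown report.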

\section{Templates and Procedures}

\subsection{\textbf{Infrastructure Planning Template}}

\begin{lstlisting}[language=bash]
# Infrastructure Planning Checklist

## 1. Requirements Analysis
- [ ] Performance requirements (throughput, latency, concurrency)
- [ ] Scalability targets (expected growth, load patterns)
- [ ] Security requirements (authentication, authorization, data protection)
- [ ] Compliance needs (regulatory, organizational policies)
- [ ] Budget constraints (infrastructure costs, operational overhead)
- [ ] Integration requirements (external services, APIs, databases)

## 2. Architecture Design
- [ ] Service topology (monolith vs microservices)
- [ ] Data flow and storage requirements
- [ ] Network architecture and security boundaries
- [ ] High availability and disaster recovery planning
- [ ] Monitoring and observability strategy
- [ ] Deployment strategy (blue-green, rolling, canary)

## 3. Technology Stack Selection
- [ ] Container orchestration platform (Kubernetes, Docker Swarm)
- [ ] Web server and reverse proxy (Nginx, Apache, HAProxy)
- [ ] Database systems (relational, NoSQL, caching)
- [ ] Message queues and event streaming
- [ ] Monitoring and logging tools
- [ ] CI/CD pipeline tools

## 4. Security Considerations
- [ ] Network security (firewalls, VPNs, security groups)
- [ ] SSL/TLS certificate management
- [ ] Secret management (API keys, database passwords)
- [ ] Access control and user management
- [ ] Security scanning and vulnerability management
- [ ] Audit logging and compliance monitoring

## 5. Operations Planning
- [ ] Backup and recovery procedures
- [ ] Update and maintenance schedules
- [ ] Incident response procedures
- [ ] Performance monitoring and alerting
- [ ] Cost optimization strategies
- [ ] Documentation and runbooks
\end{lstlisting}

\subsection{\textbf{Deployment Pipeline Template}}

\begin{lstlisting}[language=bash]
# CI/CD Pipeline Configuration

## 1. Source Code Management
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# .github/workflows/deploy.yml
name: Deploy to Production
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - run: npm ci
      - run: npm run test
      - run: npm run build
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 2. Build and Package
- [ ] Automated testing (unit, integration, e2e)
- [ ] Code quality checks (linting, security scanning)
- [ ] Artifact building (Docker images, packages)
- [ ] Vulnerability scanning
- [ ] Performance testing
- [ ] Documentation generation

## 3. Deployment Automation
\end{lstlisting}

\begin{lstlisting}[language=bash]
#!/bin/bash
# deploy.sh - Production deployment script

set -euo pipefail

# Configuration
APP_NAME="myapp"
DOCKER_IMAGE="$APP_NAME:$BUILD_NUMBER"
NAMESPACE="production"

# Pre-deployment checks
echo "🔍 Running pre-deployment checks..."
kubectl get nodes
kubectl get namespaces

# Deploy application
echo "🚀 Deploying $DOCKER_IMAGE to $NAMESPACE..."
kubectl set image deployment/$APP_NAME $APP_NAME=$DOCKER_IMAGE -n $NAMESPACE

# Wait for rollout
echo "⏳ Waiting for rollout to complete..."
kubectl rollout status deployment/$APP_NAME -n $NAMESPACE --timeout=300s

# Post-deployment verification
echo "✅ Running post-deployment checks..."
kubectl get pods -n $NAMESPACE
curl -f http://myapp.example.com/health || exit 1

echo "🎉 Deployment completed successfully!"
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 4. Progressive Deployment Strategy
- [ ] Canary deployments (1%, 10%, 50%, 100%)
- [ ] Blue-green deployment switching
- [ ] Feature flag integration
- [ ] Automated rollback triggers
- [ ] Health check validation
- [ ] Performance monitoring during rollout

## 5. Quality Gates
- [ ] Automated test suite passage (>95% coverage)
- [ ] Security scan completion (no critical vulnerabilities)
- [ ] Performance benchmarks (response time <200ms)
- [ ] Load testing validation (handles expected traffic)
- [ ] Manual approval for production releases
- [ ] Documentation updates
\end{lstlisting}

\subsection{\textbf{Container Orchestration Template}}

\begin{lstlisting}[language=bash]
# Container Orchestration Setup

## 1. Dockerfile Best Practices
\end{lstlisting}

\begin{lstlisting}[language=dockerfile]
# Multi-stage build for optimization
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:18-alpine AS runtime
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

WORKDIR /app
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --chown=nextjs:nodejs . .

USER nextjs
EXPOSE 3000
ENV NODE_ENV=production

CMD ["npm", "start"]
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 2. Kubernetes Deployment Configuration
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 3. Service and Ingress Configuration
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: production
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP

---
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 4. ConfigMap and Secret Management
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: production
data:
  app.properties: |
    debug=false
    log.level=info
    cache.ttl=3600

---
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: production
type: Opaque
stringData:
  database-url: "postgresql://user:pass@db:5432/myapp"
  api-key: "your-secret-api-key"
\end{lstlisting}
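A Deployment consumes these objects through environment variables and volume mounts. A minimal sketch, assuming the same \texttt{myapp-config} and \texttt{myapp-secrets} names:

\begin{lstlisting}[language=yaml]
# Fragment of a pod spec consuming the ConfigMap and Secret above
containers:
  - name: myapp
    image: myapp:latest
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: myapp-secrets
            key: database-url
    volumeMounts:
      - name: config
        mountPath: /etc/myapp   # app.properties appears as a file here
volumes:
  - name: config
    configMap:
      name: myapp-config
\end{lstlisting}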

\subsection{\textbf{Monitoring and Operations Template}}

\begin{lstlisting}[language=bash]
# Monitoring and Operations Setup

## 1. Application Monitoring
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# monitoring/prometheus-config.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "alert_rules.yml"

scrape_configs:
  - job_name: 'myapp'
    static_configs:
      - targets: ['myapp-service:80']
    metrics_path: /metrics
    scrape_interval: 10s

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 2. Logging Configuration
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# logging/fluentd-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type kubernetes_logs
      read_from_head true
      tag kubernetes.*
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
      kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
      bearer_token_file /var/run/secrets/kubernetes.io/serviceaccount/token
      ca_file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    </filter>

    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.monitoring.svc.cluster.local
      port 9200
      index_name kubernetes
      type_name fluentd
    </match>
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 3. Alerting Rules
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# monitoring/alert-rules.yml
groups:
  - name: myapp.rules
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected"
          description: "Error rate is above 10% for 5 minutes"

      - alert: HighMemoryUsage
        expr: container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage"
          description: "Memory usage is above 90%"

      - alert: PodCrashLooping
        expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Pod is crash looping"
          description: "Pod {{ $labels.pod }} is restarting frequently"
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 4. Health Check Implementation
\end{lstlisting}

\begin{lstlisting}[language=javascript]
// health-check.js - Express.js health endpoints
const express = require('express');
const app = express();

// Basic health check
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime()
  });
});

// Readiness check (includes dependency checks)
app.get('/ready', async (req, res) => {
  try {
    // Check database connection
    await checkDatabase();

    // Check external services
    await checkExternalServices();

    res.status(200).json({
      status: 'ready',
      timestamp: new Date().toISOString(),
      dependencies: {
        database: 'healthy',
        external_api: 'healthy'
      }
    });
  } catch (error) {
    res.status(503).json({
      status: 'not_ready',
      error: error.message,
      timestamp: new Date().toISOString()
    });
  }
});

async function checkDatabase() {
  // Database connection check logic
  return new Promise((resolve, reject) => {
    // Implement your database health check
    resolve();
  });
}

async function checkExternalServices() {
  // External service health check logic
  return new Promise((resolve, reject) => {
    // Implement your external service checks
    resolve();
  });
}
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 5. Automated Backup and Recovery
\end{lstlisting}

\begin{lstlisting}[language=bash]
#!/bin/bash
# backup.sh - Automated backup script

set -euo pipefail

BACKUP_DIR="/backups/$(date +%Y%m%d)"
DATABASE_NAME="myapp_production"
S3_BUCKET="myapp-backups"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Database backup
echo "📦 Creating database backup..."
pg_dump "$DATABASE_NAME" | gzip > "$BACKUP_DIR/database.sql.gz"

# Application data backup
echo "📁 Backing up application data..."
kubectl exec deployment/myapp -- tar czf - /app/data > "$BACKUP_DIR/app-data.tar.gz"

# Upload to S3
echo "☁️ Uploading to S3..."
aws s3 sync "$BACKUP_DIR" "s3://$S3_BUCKET/$(date +%Y%m%d)/"

# Cleanup local backups (keep last 7 days)
find /backups -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;

echo "✅ Backup completed successfully"
\end{lstlisting}
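To run such a script on a schedule inside the cluster, one option is a Kubernetes CronJob; the image name and schedule below are assumptions:

\begin{lstlisting}[language=yaml]
# k8s/backup-cronjob.yaml - nightly run of backup.sh (image is hypothetical)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myapp-backup
  namespace: production
spec:
  schedule: "0 3 * * *"   # daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: myapp-backup:latest   # bundles backup.sh, pg_dump, aws cli
              command: ["/bin/bash", "/scripts/backup.sh"]
\end{lstlisting}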

\section{Common Infrastructure Patterns}

\subsection{\textbf{Blue-Green Deployment Strategy}}
\begin{lstlisting}[language=bash]
# Blue-Green Deployment Pattern

## Concept
Maintain two identical production environments ("Blue" and "Green") where only one serves live traffic at any time. Deploy to the inactive environment, test thoroughly, then switch traffic.

## Implementation Steps
1. **Prepare Green Environment**: Deploy new version to inactive environment
2. **Validate Deployment**: Run comprehensive tests on Green environment
3. **Switch Traffic**: Update load balancer to route traffic to Green
4. **Monitor Health**: Watch metrics and logs for any issues
5. **Keep Blue as Fallback**: Maintain Blue environment for quick rollback

## Benefits
- Zero-downtime deployments
- Easy rollback capability
- Full production testing before traffic switch
- Reduced deployment risk

## Considerations
- Requires double infrastructure resources
- Database migration challenges
- Session state management complexity
\end{lstlisting}
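In Kubernetes, the traffic switch in step 3 can be sketched as a Service whose selector is repointed between the two environments; the \texttt{version} labels and \texttt{myapp} names are illustrative:

\begin{lstlisting}[language=yaml]
# Both blue and green Deployments run; the Service selector picks the live one.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue        # flip to "green" to switch traffic
  ports:
    - port: 80
      targetPort: 3000

# Switch (or rollback) is a one-line selector change, e.g.:
# kubectl patch service myapp \
#   -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
\end{lstlisting}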

\subsection{\textbf{Canary Deployment Strategy}}
\begin{lstlisting}[language=bash]
# Canary Deployment Pattern

## Concept
Gradually roll out new versions by routing a small percentage of traffic to the new version while monitoring for issues.

## Implementation Phases
1. **1% Traffic**: Route 1% of users to the new version
2. **Monitor Metrics**: Check error rates, response times, user feedback
3. **10% Traffic**: If metrics look good, increase to 10%
4. **50% Traffic**: Continue the gradual increase
5. **100% Traffic**: Complete the rollout, or roll back if issues are detected

## Kubernetes Implementation
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# Argo Rollouts canary definition
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp-rollout
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: {}
        - setWeight: 50
        - pause: {duration: 30m}
        - setWeight: 100
  selector:
    matchLabels:
      app: myapp
  template:
    # Pod template spec
\end{lstlisting}

\begin{lstlisting}[language=bash]
## Benefits
- Reduced blast radius of deployment issues
- Real user feedback on new features
- Ability to measure business impact
- Automated rollback on metric thresholds

## Considerations
- Complex traffic routing setup
- Requires sophisticated monitoring
- Feature flag integration needed
- Database consistency challenges
\end{lstlisting}

\subsection{\textbf{Infrastructure as Code (IaC) Patterns}}

\begin{lstlisting}[language=bash]
# Infrastructure as Code Best Practices

## Terraform Configuration Structure
\end{lstlisting}

\begin{lstlisting}
# main.tf
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  backend "s3" {
    bucket = "myapp-terraform-state"
    key    = "production/terraform.tfstate"
    region = "us-west-2"
  }
}

# modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

# environments/production/main.tf
module "vpc" {
  source = "../../modules/vpc"

  environment = "production"
  vpc_cidr    = "10.0.0.0/16"
}
\end{lstlisting}

\begin{lstlisting}[language=bash]
## Ansible Playbook Structure
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# playbooks/site.yml
---
- name: Configure web servers
  hosts: webservers
  become: yes
  roles:
    - common
    - nginx
    - ssl-certificates
    - monitoring

- name: Configure database servers
  hosts: databases
  become: yes
  roles:
    - common
    - postgresql
    - backup
    - monitoring

# roles/nginx/tasks/main.yml
---
- name: Install nginx
  package:
    name: nginx
    state: present

- name: Configure nginx
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: yes
  notify: restart nginx

- name: Enable and start nginx
  service:
    name: nginx
    state: started
    enabled: yes
\end{lstlisting}

\begin{lstlisting}[language=bash]
## Benefits
- Version-controlled infrastructure
- Reproducible environments
- Documentation through code
- Automated provisioning and updates
- Reduced configuration drift

## Best Practices
- Use modules for reusable components
- Implement proper state management
- Include comprehensive testing
- Document dependencies and prerequisites
- Implement approval workflows for production changes
\end{lstlisting}

\subsection{\textbf{Service Mesh Architecture}}
\begin{lstlisting}[language=bash]
# Service Mesh Implementation with Istio

## Concept
Service mesh provides communication infrastructure for microservices with features like traffic management, security, and observability without requiring application changes.

## Istio Configuration
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - myapp.example.com
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: myapp-tls
      hosts:
        - myapp.example.com

# virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-vs
spec:
  hosts:
    - myapp.example.com
  gateways:
    - myapp-gateway
  http:
    - match:
        - uri:
            prefix: /api/v2
      route:
        - destination:
            host: myapp-v2
            port:
              number: 80
          weight: 10
        - destination:
            host: myapp-v1
            port:
              number: 80
          weight: 90
\end{lstlisting}

\begin{lstlisting}[language=bash]
## Traffic Policies
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# destination-rule.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp-dr
spec:
  host: myapp
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 2
    circuitBreaker:
      consecutiveErrors: 3
      interval: 30s
      baseEjectionTime: 30s
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
\end{lstlisting}

\begin{lstlisting}[language=bash]
## Benefits
- Centralized traffic management
- Built-in security with mTLS
- Comprehensive observability
- Policy enforcement
- Gradual feature rollouts

## Considerations
- Additional infrastructure complexity
- Learning curve for operations teams
- Performance overhead
- Debugging challenges in distributed systems
\end{lstlisting}
\section{Best Practices}

\subsection{\textbf{How to Structure Infrastructure Conversations with Claude}}
\begin{lstlisting}[language=bash]
# Effective Infrastructure Conversation Patterns

## 1. Start with Context Setting
"I need to deploy a [application type] with [specific requirements]:
- Expected traffic: [concurrent users/requests per second]
- Performance requirements: [response times, throughput]
- Security requirements: [authentication, data protection]
- Budget constraints: [infrastructure costs]
- Integration needs: [databases, external services]"

## 2. Progressive Complexity Approach
Phase 1: Basic deployment configuration
Phase 2: Security and monitoring setup
Phase 3: Scalability and optimization
Phase 4: Advanced features and automation

## 3. Include Environmental Details
- Current infrastructure (cloud provider, existing resources)
- Team expertise level (DevOps experience, tool familiarity)
- Compliance requirements (regulations, organizational policies)
- Maintenance windows and deployment schedules

## 4. Request Specific Deliverables
- Configuration files with explanations
- Step-by-step deployment procedures
- Monitoring and alerting setup
- Troubleshooting guides and runbooks
- Testing and validation procedures
\end{lstlisting}
\subsection{\textbf{When to Use Different Deployment Approaches}}
\begin{lstlisting}[language=bash]
# Deployment Strategy Selection Guide

## Single Server Deployment
**Use When:**
- Development/testing environments
- Low-traffic applications (<1000 users)
- Simple architecture with minimal dependencies
- Limited infrastructure budget

**Example Pattern:**
"Set up a simple production deployment on a single VPS with:
- Nginx reverse proxy
- PM2 for process management
- SSL certificate configuration
- Basic monitoring with system metrics"

## Container-Based Deployment
**Use When:**
- Multiple services with different technology stacks
- Need for consistent environments across dev/staging/production
- Scaling requirements (horizontal scaling)
- Team prefers infrastructure as code

**Example Pattern:**
"Create a Docker-based deployment with:
- Multi-stage Dockerfile for optimization
- Docker Compose for local development
- Container registry integration
- Health checks and restart policies"

## Kubernetes Orchestration
**Use When:**
- Complex microservices architecture
- High availability requirements
- Advanced traffic management needs
- Team has Kubernetes expertise
- Enterprise-scale applications

**Example Pattern:**
"Implement Kubernetes deployment with:
- Namespace isolation for environments
- Ingress controller for traffic routing
- ConfigMap and Secret management
- HorizontalPodAutoscaler for scaling
- Service mesh for advanced traffic management"

## Serverless Deployment
**Use When:**
- Event-driven architectures
- Unpredictable or bursty traffic patterns
- Minimal operational overhead requirements
- Cost optimization for low-utilization periods

**Example Pattern:**
"Deploy using serverless architecture with:
- AWS Lambda functions for compute
- API Gateway for HTTP endpoints
- CloudFormation for infrastructure management
- CloudWatch for monitoring and logging"
\end{lstlisting}
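The single-VPS pattern above usually centers on an Nginx reverse proxy in front of the PM2-managed process. A minimal sketch, assuming the app listens on port 3000 (domain and port are placeholders):

\begin{lstlisting}
# /etc/nginx/sites-available/myapp - reverse proxy for a Node app under PM2
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # PM2-managed app port (assumed)
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;   # websocket support
        proxy_set_header Connection "upgrade";
    }
}
\end{lstlisting}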
\subsection{\textbf{Security Hardening Procedures}}
\begin{lstlisting}[language=bash]
# Infrastructure Security Checklist

## 1. Network Security
- [ ] Configure firewall rules (minimal required ports)
- [ ] Implement VPC/subnet isolation
- [ ] Set up VPN for administrative access
- [ ] Configure DDoS protection
- [ ] Implement rate limiting at multiple layers
- [ ] Use private subnets for databases and internal services

## 2. Access Control
- [ ] Implement least-privilege access principles
- [ ] Set up multi-factor authentication (MFA)
- [ ] Use service accounts with minimal permissions
- [ ] Implement certificate-based authentication where possible
- [ ] Configure audit logging for all access attempts
- [ ] Review and clean up access grants regularly

## 3. Data Protection
- [ ] Encrypt data at rest (databases, file systems)
- [ ] Encrypt data in transit (TLS/SSL everywhere)
- [ ] Implement proper key management
- [ ] Configure database access controls
- [ ] Set up automated backups with encryption
- [ ] Test backup restoration procedures

## 4. Container Security
\end{lstlisting}
\begin{lstlisting}[language=dockerfile]
# Security-focused Dockerfile
FROM node:18-alpine

# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

# Update packages and remove unnecessary ones
RUN apk update && apk upgrade && apk add --no-cache dumb-init

WORKDIR /app
COPY --chown=nextjs:nodejs package*.json ./
RUN npm ci --only=production && npm cache clean --force

COPY --chown=nextjs:nodejs . .

# Switch to non-root user
USER nextjs

# Use dumb-init for proper signal handling
ENTRYPOINT ["dumb-init", "--"]
CMD ["npm", "start"]
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 5. Kubernetes Security
\end{lstlisting}

\begin{lstlisting}[language=yaml]
# Security contexts and policies
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 2000
  containers:
    - name: myapp
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
\end{lstlisting}

\begin{lstlisting}[language=bash]
## 6. Compliance and Auditing
- [ ] Implement comprehensive logging
- [ ] Set up log aggregation and analysis
- [ ] Configure security event monitoring
- [ ] Establish incident response procedures
- [ ] Run regular vulnerability scans
- [ ] Automate compliance reporting
\end{lstlisting}

\subsection{\textbf{Operational Excellence Practices}}
\chapter{Operational Excellence Framework}

\section{1. Monitoring Strategy}
\begin{lstlisting}[language=bash]
# Comprehensive monitoring setup
monitoring:
  infrastructure:
    - CPU, memory, disk, network metrics
    - Service health and availability
    - Resource utilization trends

  application:
    - Request rates and response times
    - Error rates and types
    - Business metrics and KPIs
    - User experience metrics

  security:
    - Failed authentication attempts
    - Unusual access patterns
    - Security policy violations
    - Vulnerability scan results

alerting:
  levels:
    critical: "Immediate response required"
    warning: "Investigation needed within 4 hours"
    info: "Awareness, no immediate action"

  channels:
    - PagerDuty for critical alerts
    - Slack for warning/info alerts
    - Email for daily/weekly summaries
\end{lstlisting}
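The alerting policy above is ultimately a severity-to-channel map; encoding it as data keeps the routing rules reviewable and testable. A sketch (channel names mirror the list above; no real integrations are wired up):

```python
from typing import List

# Mirrors the alerting policy: one channel list per severity level.
ROUTES = {
    "critical": ["pagerduty"],  # immediate response required
    "warning": ["slack"],       # investigate within 4 hours
    "info": ["slack"],          # awareness only
}

def route_alert(severity: str) -> List[str]:
    """Return notification channels for a severity, failing loudly on typos."""
    if severity not in ROUTES:
        raise ValueError(f"unknown severity: {severity!r}")
    return ROUTES[severity]
```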

\section{2. Incident Response Procedures}
\section{Severity Classification}
\begin{itemize}
\item \textbf{P0 (Critical)}: Complete service outage, security breach
\item \textbf{P1 (High)}: Major functionality impaired, performance degraded $>$50\%
\item \textbf{P2 (Medium)}: Minor functionality issues, performance degraded $<$50\%
\item \textbf{P3 (Low)}: Cosmetic issues, documentation problems
\end{itemize}

\section{Response Timeline}
\begin{itemize}
\item P0: Acknowledge within 15 minutes, resolution target 1 hour
\item P1: Acknowledge within 30 minutes, resolution target 4 hours
\item P2: Acknowledge within 2 hours, resolution target 24 hours
\item P3: Acknowledge within 24 hours, resolution target 1 week
\end{itemize}

\section{Response Process}
\begin{enumerate}
\item \textbf{Acknowledge}: Confirm incident receipt and assign an owner
\item \textbf{Assess}: Determine severity and impact scope
\item \textbf{Communicate}: Notify stakeholders and provide status updates
\item \textbf{Mitigate}: Implement temporary fixes to restore service
\item \textbf{Resolve}: Apply a permanent fix and verify resolution
\item \textbf{Review}: Conduct post-incident analysis and improvement planning
\end{enumerate}
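The response timeline translates directly into data that paging tooling can enforce. A Python sketch (the overdue check and type names are illustrative):

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class ResponseTarget:
    acknowledge: timedelta
    resolve: timedelta

# Encodes the response-timeline table above.
TARGETS = {
    "P0": ResponseTarget(timedelta(minutes=15), timedelta(hours=1)),
    "P1": ResponseTarget(timedelta(minutes=30), timedelta(hours=4)),
    "P2": ResponseTarget(timedelta(hours=2), timedelta(hours=24)),
    "P3": ResponseTarget(timedelta(hours=24), timedelta(weeks=1)),
}

def is_ack_overdue(severity: str, minutes_since_page: float) -> bool:
    """True when the acknowledgement target for this severity has passed."""
    return timedelta(minutes=minutes_since_page) > TARGETS[severity].acknowledge

print(is_ack_overdue("P0", 20))  # True -> P0 must be acknowledged in 15 minutes
```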

\section{3. Change Management}
\section{Change Categories}
\begin{itemize}
\item \textbf{Emergency}: Security patches, critical bug fixes
\item \textbf{Standard}: Routine updates, configuration changes
\item \textbf{Major}: Architecture changes, new feature releases
\end{itemize}

\section{Approval Process}
\begin{itemize}
\item Emergency: Post-change review
\item Standard: Automated approval for pre-approved changes
\item Major: Change advisory board approval required
\end{itemize}

\section{Implementation Requirements}
\begin{itemize}
\item All changes must have rollback procedures
\item Testing in a staging environment is required
\item Documentation must be updated before deployment
\item Monitoring must be extended to cover new components
\item A communication plan is required for user-impacting changes
\end{itemize}
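The approval rules can likewise live as data, so a CI gate can enforce them instead of a wiki page. A small sketch (the strings are illustrative labels, not a real workflow engine):

```python
def approval_path(category: str) -> str:
    """Return the approval requirement for a change category,
    mirroring the approval process above."""
    paths = {
        "emergency": "post-change review",
        "standard": "automated approval (pre-approved changes)",
        "major": "change advisory board approval",
    }
    try:
        return paths[category]
    except KeyError:
        raise ValueError(f"unknown change category: {category!r}") from None

print(approval_path("emergency"))  # post-change review
```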

\section{4. Capacity Planning}
\section{Resource Monitoring}
\begin{itemize}
\item Track resource utilization trends over time
\item Identify seasonal and cyclical patterns
\item Monitor growth rates and scaling triggers
\item Plan for expected business growth
\end{itemize}

\section{Scaling Triggers}
\begin{itemize}
\item CPU utilization $>$ 70\% for 10 minutes
\item Memory utilization $>$ 80\% for 5 minutes
\item Request queue depth $>$ 100 for 2 minutes
\item Response time $>$ 500ms for 15 minutes
\end{itemize}

\section{Scaling Actions}
\begin{itemize}
\item Horizontal pod autoscaling for stateless services
\item Vertical scaling for databases and stateful services
\item Load balancer pool expansion for traffic distribution
\item Database read replica addition for read-heavy workloads
\end{itemize}
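Each scaling trigger above has the same shape: a metric must stay above a threshold for a sustained window. A sketch of that window check, assuming one sample per minute:

```python
from typing import Sequence

def sustained_above(samples: Sequence[float], threshold: float,
                    minutes_required: int, interval_minutes: int = 1) -> bool:
    """True when the most recent samples all exceed `threshold` for at
    least `minutes_required` (one sample per `interval_minutes`)."""
    needed = minutes_required // interval_minutes
    recent = samples[-needed:]
    return len(recent) >= needed and all(s > threshold for s in recent)

# CPU > 70% for 10 minutes, per the triggers above (1-minute samples assumed).
cpu = [55, 60, 72, 75, 78, 74, 71, 73, 76, 80, 79, 81]
print(sustained_above(cpu, 70, 10))  # True: last 10 samples all above 70
```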
\section{Advanced Techniques}

\subsection{\textbf{Multi-Cloud and Hybrid Deployment Strategies}}

\section{1. Active-Active Multi-Cloud}
\begin{lstlisting}[language=bash]
# Terraform multi-cloud configuration
# AWS resources
provider "aws" {
  region = "us-west-2"
  alias  = "primary"
}

resource "aws_eks_cluster" "primary" {
  provider = aws.primary
  name     = "myapp-primary"
  # EKS configuration
}

# GCP resources
provider "google" {
  project = "myapp-project"
  region  = "us-central1"
  alias   = "secondary"
}

resource "google_container_cluster" "secondary" {
  provider = google.secondary
  name     = "myapp-secondary"
  # GKE configuration
}

# DNS-based traffic distribution
resource "cloudflare_record" "app" {
  zone_id = var.cloudflare_zone_id
  name    = "app"
  type    = "A"
  value   = aws_lb.primary.dns_name
  ttl     = 60

  # Weighted routing for traffic distribution
  weighted_routing_policy {
    weight = 70  # 70% to AWS
  }
}

resource "cloudflare_record" "app_secondary" {
  zone_id = var.cloudflare_zone_id
  name    = "app"
  type    = "A"
  value   = google_compute_global_forwarding_rule.secondary.ip_address
  ttl     = 60

  weighted_routing_policy {
    weight = 30  # 30% to GCP
  }
}
\end{lstlisting}
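The 70/30 weights in the DNS records above govern how traffic splits across clouds. Smooth weighted round-robin is one deterministic way a resolver or load balancer can realize such a split; a sketch (the algorithm choice is an illustration, not how any particular DNS provider works):

```python
from typing import Dict, List

def weighted_sequence(weights: Dict[str, int], n: int) -> List[str]:
    """Smooth weighted round-robin: spreads picks so that over a window
    the selection ratio matches the configured weights."""
    current = {k: 0 for k in weights}
    total = sum(weights.values())
    out = []
    for _ in range(n):
        for k in weights:
            current[k] += weights[k]
        pick = max(current, key=current.get)
        current[pick] -= total
        out.append(pick)
    return out

# The 70/30 split configured in the DNS records above.
seq = weighted_sequence({"aws": 70, "gcp": 30}, 10)
print(seq.count("aws"), seq.count("gcp"))  # 7 3
```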

\section{Benefits}
\begin{itemize}
\item Avoid vendor lock-in
\item Improved disaster recovery
\item Cost optimization through provider competition
\item Regional compliance requirements
\end{itemize}

\section{Challenges}
\begin{itemize}
\item Increased complexity in management
\item Data consistency across clouds
\item Network latency and cross-cloud communication
\item Skills and tooling requirements
\end{itemize}

\section{2. Hybrid Cloud Strategy}
\begin{lstlisting}[language=bash]
# On-premises + Cloud hybrid setup
# Local Kubernetes cluster with cloud bursting

apiVersion: v1
kind: ConfigMap
metadata:
  name: hybrid-config
data:
  cloud-provider: "aws"
  burst-threshold: "80"
  local-capacity: "1000"
  
---
# HPA with custom metrics for cloud bursting
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 100
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
\end{lstlisting}

\section{Implementation Considerations}
\begin{itemize}
\item Network connectivity (VPN, dedicated connections)
\item Data synchronization strategies  
\item Security boundary management
\item Compliance and regulatory requirements
\item Disaster recovery procedures
\end{itemize}
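The \texttt{burst-threshold} value in the ConfigMap above drives a simple placement decision: keep workloads on-premises until local utilization approaches capacity, then burst to the cloud. A sketch of that decision (the function and defaults are illustrative):

```python
def placement(local_utilization_pct: float,
              burst_threshold_pct: float = 80.0) -> str:
    """Cloud-bursting decision mirroring the ConfigMap above: stay local
    until utilization crosses the burst threshold, then burst to cloud."""
    return "cloud" if local_utilization_pct > burst_threshold_pct else "local"

print(placement(65))  # local
print(placement(92))  # cloud
```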
\subsection{\textbf{Advanced Container Orchestration}}

\section{1. Operator Pattern Implementation}
\begin{lstlisting}
// Custom Resource Definition for application management
// myapp-operator/api/v1/myapp_types.go
package v1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type MyAppSpec struct {
    Replicas   int32            `json:"replicas"`
    Image      string           `json:"image"`
    Database   DatabaseConfig   `json:"database"`
    Monitoring MonitoringConfig `json:"monitoring"`
}

type MyAppStatus struct {
    Phase         string `json:"phase"`
    Replicas      int32  `json:"replicas"`
    ReadyReplicas int32  `json:"readyReplicas"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
type MyApp struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MyAppSpec   `json:"spec,omitempty"`
    Status MyAppStatus `json:"status,omitempty"`
}
\end{lstlisting}

\begin{lstlisting}[language=bash]
# Custom Resource usage
apiVersion: mycompany.io/v1
kind: MyApp
metadata:
  name: production-app
spec:
  replicas: 5
  image: "myapp:v1.2.3"
  database:
    type: postgresql
    size: "100Gi"
    backup:
      enabled: true
      schedule: "0 2 * * *"
  monitoring:
    enabled: true
    alerts:
    - name: high-error-rate
      threshold: 0.05
\end{lstlisting}
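Behind this custom resource, the operator runs a reconcile loop: observe status, compare it to spec, and act to converge them. One step of that loop can be sketched in Python for brevity (the type and action names are illustrative, not the controller-runtime API):

```python
from dataclasses import dataclass

@dataclass
class Desired:
    replicas: int

@dataclass
class Observed:
    ready_replicas: int

def reconcile(desired: Desired, observed: Observed) -> str:
    """One step of an operator's reconcile loop: compare spec to status
    and return the action that converges them."""
    if observed.ready_replicas < desired.replicas:
        return f"scale up by {desired.replicas - observed.ready_replicas}"
    if observed.ready_replicas > desired.replicas:
        return f"scale down by {observed.ready_replicas - desired.replicas}"
    return "no-op"

print(reconcile(Desired(5), Observed(3)))  # scale up by 2
```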

\section{2. Advanced Scheduling and Resource Management}
\begin{lstlisting}[language=bash]
# Node affinity and pod anti-affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 6
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-type
                operator: In
                values: ["compute-optimized"]
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values: ["myapp"]
              topologyKey: "kubernetes.io/hostname"
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "myapp"
        effect: "NoSchedule"
\end{lstlisting}
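Because the anti-affinity above is only \emph{preferred}, the scheduler may still co-locate replicas under resource pressure, so it is worth checking the actual spread. A sketch of that check (the placement mapping is illustrative):

```python
from collections import Counter
from typing import Dict

def max_pods_per_node(pod_to_node: Dict[str, str]) -> int:
    """Report the worst-case co-location after scheduling; with perfect
    anti-affinity spread this would be 1."""
    counts = Counter(pod_to_node.values())
    return max(counts.values()) if counts else 0

placements = {"myapp-1": "node-a", "myapp-2": "node-b", "myapp-3": "node-a"}
print(max_pods_per_node(placements))  # 2 -> two replicas share node-a
```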

\section{3. Advanced Networking with Service Mesh}
\begin{lstlisting}[language=bash]
# Istio traffic management
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-canary
spec:
  http:
  - match:
    - headers:
        canary:
          exact: "true"
    route:
    - destination:
        host: myapp
        subset: v2
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10
    fault:
      delay:
        percentage:
          value: 0.1
        fixedDelay: 5s
      abort:
        percentage:
          value: 0.1
        httpStatus: 500
\end{lstlisting}
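The VirtualService encodes two rules: header-pinned canary traffic, then a 90/10 weighted split. The routing logic it implies can be sketched as (this models Istio's behavior; it is not Istio code):

```python
import random
from typing import Dict

def route(headers: Dict[str, str], rng: random.Random) -> str:
    """Header `canary: true` pins a request to v2; all other traffic
    splits 90/10 between v1 and v2, as configured above."""
    if headers.get("canary") == "true":
        return "v2"
    return "v1" if rng.random() < 0.90 else "v2"

rng = random.Random(0)
print(route({"canary": "true"}, rng))  # v2, regardless of the weights
```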
\subsection{\textbf{GitOps and Declarative Infrastructure}}

The examples below implement GitOps with ArgoCD.

\section{1. Repository Structure}
\begin{lstlisting}
gitops-repo/
├── applications/
│   ├── myapp/
│   │   ├── base/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml  
│   │   │   └── kustomization.yaml
│   │   └── environments/
│   │       ├── staging/
│   │       │   ├── kustomization.yaml
│   │       │   └── patches/
│   │       └── production/
│   │           ├── kustomization.yaml
│   │           └── patches/
├── infrastructure/
│   ├── monitoring/
│   ├── ingress/
│   └── storage/
└── argocd/
    ├── applications/
    └── projects/
\end{lstlisting}

\section{2. ArgoCD Application Definition}
\begin{lstlisting}[language=bash]
# argocd/applications/myapp-production.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/mycompany/gitops-repo
    targetRevision: HEAD
    path: applications/myapp/environments/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
  revisionHistoryLimit: 10
\end{lstlisting}

\section{3. Kustomization for Environment-Specific Config}
\begin{lstlisting}[language=bash]
# applications/myapp/environments/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: production

resources:
- ../../base

patchesStrategicMerge:
- patches/deployment-production.yaml
- patches/ingress-production.yaml

images:
- name: myapp
  newTag: v1.2.3

configMapGenerator:
- name: myapp-config
  files:
  - config/production.env
\end{lstlisting}

\section{4. Automated Promotion Pipeline}
\begin{lstlisting}[language=bash]
# .github/workflows/promote.yml
name: Promote to Production
on:
  push:
    tags:
    - 'v*'

jobs:
  promote:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
      with:
        token: ${{ secrets.GITOPS_TOKEN }}
        repository: mycompany/gitops-repo

    - name: Update production image tag
      run: |
        cd applications/myapp/environments/production
        kustomize edit set image myapp=myapp:${GITHUB_REF#refs/tags/}

    - name: Commit and push changes
      run: |
        git config --local user.email "action@github.com"
        git config --local user.name "GitHub Action"
        git add .
        git commit -m "Promote myapp to ${GITHUB_REF#refs/tags/}"
        git push
\end{lstlisting}

\section{Benefits}
\begin{itemize}
\item Declarative configuration management
\item Automated synchronization and drift detection
\item Clear audit trail through Git history
\item Simplified rollback procedures
\item Separation of concerns (app code vs config)
\end{itemize}

\section{Best Practices}
\begin{itemize}
\item Use separate repositories for application code and configuration
\item Implement pull request workflows for configuration changes
\item Set up automated testing for configuration changes
\item Use sealed secrets for sensitive data management
\item Implement progressive delivery with automated rollbacks
\end{itemize}
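Drift detection, the mechanism behind \texttt{selfHeal: true}, reduces to diffing the desired state in Git against the live state in the cluster. A field-level sketch of that diff (real tools compare full manifests; the dictionaries here are illustrative):

```python
from typing import Any, Dict, List

def drift(desired: Dict[str, Any], live: Dict[str, Any]) -> List[str]:
    """List the fields where the cluster diverges from Git; an empty
    list means the application is in sync."""
    keys = set(desired) | set(live)
    return sorted(k for k in keys if desired.get(k) != live.get(k))

desired = {"image": "myapp:v1.2.3", "replicas": 5}
live = {"image": "myapp:v1.2.2", "replicas": 5}
print(drift(desired, live))  # ['image'] -> a sync would update the image
```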
\subsection{\textbf{Site Reliability Engineering Practices}}

\section{1. Service Level Objectives (SLOs)}
\begin{lstlisting}[language=bash]
# SLO definition for myapp service
slos:
  availability:
    target: 99.9%  # 43.2 minutes of downtime per 30-day window
    measurement_window: 30d
    indicators:
    - success_rate_from_lb_logs
    - health_check_success_rate

  latency:
    target: 95th percentile < 200ms
    measurement_window: 7d
    indicators:
    - response_time_from_application_metrics

  throughput:
    target: 1000 RPS sustained
    measurement_window: 1d
    indicators:
    - requests_per_second_from_metrics

error_budget_policy:
- if error_budget > 50%: all deployments allowed
- if error_budget 10-50%: staged rollouts only
- if error_budget < 10%: deployment freeze, focus on reliability
\end{lstlisting}

\section{2. Error Budget Implementation}
\begin{lstlisting}[language=Python]
# error-budget-calculator.py
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SLO:
    name: str
    target: float  # e.g., 0.999 for 99.9%
    window_days: int

class ErrorBudgetCalculator:
    def __init__(self, slo: SLO):
        self.slo = slo

    def calculate_budget(self) -> Dict:
        window_minutes = self.slo.window_days * 24 * 60
        allowed_downtime = window_minutes * (1 - self.slo.target)

        return {
            "total_budget_minutes": allowed_downtime,
            "budget_per_day": allowed_downtime / self.slo.window_days,
            "budget_per_hour": allowed_downtime / (self.slo.window_days * 24),
        }

    def current_burn_rate(self, incidents: List[Dict]) -> float:
        # Calculate the current error budget burn rate.
        # Implementation depends on your metrics system.
        pass
\end{lstlisting}
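The arithmetic behind the calculator is worth spelling out: a 99.9\% target over a 30-day window allows $30 \times 24 \times 60 \times 0.001 = 43.2$ minutes of downtime. A compact burn-rate sketch built on that (function names are illustrative):

```python
def budget_minutes(target: float, window_days: int) -> float:
    """Allowed downtime in minutes for an availability target over a window."""
    return window_days * 24 * 60 * (1 - target)

def burn_rate(downtime_minutes: float, target: float, window_days: int) -> float:
    """Fraction of the error budget already consumed; a value above 1.0
    means the SLO is blown for this window."""
    return downtime_minutes / budget_minutes(target, window_days)

# 99.9% over 30 days allows 43.2 minutes of downtime.
print(round(budget_minutes(0.999, 30), 1))   # 43.2
print(round(burn_rate(10.8, 0.999, 30), 2))  # 0.25 -> a quarter consumed
```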

\section{3. Chaos Engineering Implementation}
\begin{lstlisting}[language=bash]
# chaos-monkey.yaml - Chaos engineering experiments
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: myapp-pod-failure
spec:
  action: pod-failure
  mode: fixed-percent
  value: "10"
  selector:
    labelSelectors:
      app: myapp
  scheduler:
    cron: "0 12 * * 1-5"  # Weekdays at noon
  duration: "30s"

---
apiVersion: chaos-mesh.org/v1alpha1  
kind: NetworkChaos
metadata:
  name: myapp-network-delay
spec:
  action: delay
  mode: all
  selector:
    labelSelectors:
      app: myapp
  delay:
    latency: "10ms"
    correlation: "25"
    jitter: "0ms"
  duration: "60s"
  scheduler:
    cron: "0 14 * * 1-5"
\end{lstlisting}
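The \texttt{fixed-percent} mode above terminates a fixed share of the matching pods; the selection amounts to random sampling. A sketch (the round-down-but-at-least-one rule is an assumption for illustration, not Chaos Mesh's documented behavior):

```python
import random
from typing import List

def pick_victims(pods: List[str], percent: int,
                 rng: random.Random) -> List[str]:
    """Select `percent`% of matching pods at random, at least one."""
    k = max(1, len(pods) * percent // 100)
    return rng.sample(pods, k)

pods = [f"myapp-{i}" for i in range(20)]
victims = pick_victims(pods, 10, random.Random(42))
print(len(victims))  # 2 -> 10% of 20 pods
```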

\section{4. Runbook Automation}
\begin{lstlisting}[language=Python]
# runbook-automation.py
import subprocess
import logging
from typing import Dict, Any

class RunbookAutomation:
    def __init__(self):
        self.logger = logging.getLogger(__name__)
    
    def high\_error\_rate\_remediation(self, alert\_data: Dict[str, Any]):
        """Automated response to high error rate alerts"""
        
        # 1. Collect diagnostic data
        self.collect\_diagnostics()
        
        # 2. Check if it's a known issue
        if self.is\_known\_issue(alert\_data):
            self.apply\_known\_fix(alert\_data)
        else:
            # 3. Implement circuit breaker
            self.enable\_circuit\_breaker()
            
            # 4. Scale up resources
            self.scale\_up\_resources()
            
            # 5. Notify on-call engineer
            self.notify\_oncall(alert\_data)
    
    def collect\_diagnostics(self):
        """Collect logs, metrics, and traces"""
        subprocess.run([
            "kubectl", "logs", "--tail=100", 
            "-l", "app=myapp", 
            "--since=5m"
        ])
        
    def enable\_circuit\_breaker(self):
        """Enable circuit breaker to protect downstream services"""
        subprocess.run([
            "kubectl", "patch", "destinationrule", "myapp",
            "--type=merge", "-p", 
            '{"spec":{"trafficPolicy":{"circuitBreaker":{"consecutiveErrors":3}}}}'
        ])
\end{lstlisting}

\section{5. SRE Dashboard Implementation}
\begin{lstlisting}[language=bash]
# Grafana dashboard configuration
dashboard:
  title: "SRE Golden Signals - MyApp"
  panels:
  - title: "Availability SLO"
    type: "stat"
    targets:
    - expr: '(1 - rate(http_requests_total{status=~"5.."}[5m]) / rate(http_requests_total[5m])) * 100'
      legendFormat: "Success Rate %"
    thresholds:
    - color: "red"
      value: 99.0
    - color: "yellow"
      value: 99.5
    - color: "green"
      value: 99.9

  - title: "Error Budget Burn Rate"
    type: "graph"
    targets:
    - expr: 'increase(http_requests_total{status=~"5.."}[1h]) / (0.001 * increase(http_requests_total[1h]))'
      legendFormat: "Hourly Burn Rate"
    alert:
      conditions:
      - query: A
        reducer: last
        evaluator:
          params: [2.0]
          type: gt
      message: "Error budget burning too fast"
      frequency: "1m"
\end{lstlisting}

\section{Benefits}
\begin{itemize}
\item Quantified reliability targets
\item Automated incident response
\item Proactive reliability testing  
\item Data-driven reliability decisions
\item Balance between feature velocity and reliability
\end{itemize}

\section{Implementation Steps}
\begin{enumerate}
\item Define SLOs based on user experience
\item Implement SLI collection and monitoring
\item Create error budget policies
\item Automate common remediation procedures
\item Regular reliability reviews and improvements
\end{enumerate}

This comprehensive chapter provides practical, evidence-based guidance for infrastructure and DevOps tasks with Claude Code. The templates and examples are designed to be immediately actionable while following industry best practices for security, scalability, and operational excellence.
