\chapter{Gradient Computation and Optimization}
\label{ch:gradient_computation}

The gradient computation system in GSI implements the mathematical core of variational data assimilation through calculation of the cost function gradient with respect to control variables. This chapter examines the implementation of gradient evaluation, decomposition methods, and integration with iterative minimization algorithms.

\section{Mathematical Foundation}

The gradient of the total cost function points in the direction of steepest ascent; its negative supplies the descent direction required by iterative minimization. The complete gradient is expressed as:

\begin{equation}
\nabla J(\mathbf{x}) = \nabla J_b(\mathbf{x}) + \nabla J_o(\mathbf{x}) + \nabla J_c(\mathbf{x}) + \nabla J_l(\mathbf{x})
\label{eq:total_gradient}
\end{equation}

where:
\begin{itemize}
\item $\nabla J_b = \mathbf{B}^{-1}(\mathbf{x} - \mathbf{x}_b)$ is the background gradient
\item $\nabla J_o = \mathbf{H}^T \mathbf{R}^{-1} (\mathbf{H}(\mathbf{x}) - \mathbf{y})$ is the observation gradient
\item $\nabla J_c$ represents constraint gradients (mass conservation, moisture limits)
\item $\nabla J_l$ includes bias correction gradients
\end{itemize}

The implementation includes nonlinear quality control through a probabilistic framework:

\begin{equation}
\nabla J_o = \sum_{\text{obs}} \mathbf{H}^T \, \frac{P_o - W_{\text{gross}}}{P_o} \, \frac{\mathbf{H}(\mathbf{x}) - \mathbf{y}_o}{\sigma_o^2}
\label{eq:nonlinear_qc_gradient}
\end{equation}

where $d = (\mathbf{H}(\mathbf{x}) - \mathbf{y}_o)/\sigma_o$ is the normalized departure and $P_o = W_{\text{notgross}} \exp(-\tfrac{1}{2}d^2) + W_{\text{gross}}$ is the quality-control probability density. Since $P_o - W_{\text{gross}} = W_{\text{notgross}} \exp(-\tfrac{1}{2}d^2)$, the factor $(P_o - W_{\text{gross}})/P_o$ is a weight in $[0,1]$ that smoothly downweights departures more consistent with gross error than with the Gaussian error model.
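The behavior of this weighting is easy to verify numerically. A minimal Python sketch (function and variable names are illustrative, not GSI code) computes $P_o$ and the resulting quality-control weight for a single observation:

```python
import math

def qc_weight(departure, sigma_o, w_gross):
    """Nonlinear QC weight (P_o - W_gross)/P_o for one observation.

    departure: observed-minus-simulated value
    sigma_o:   observation error standard deviation
    w_gross:   assumed prior probability of gross error
    """
    w_notgross = 1.0 - w_gross
    d = departure / sigma_o              # normalized departure
    gauss = math.exp(-0.5 * d * d)
    p_o = w_notgross * gauss + w_gross   # QC probability density
    return w_notgross * gauss / p_o      # equals (p_o - w_gross)/p_o
```

With `w_gross = 0.01`, a one-sigma departure keeps a weight above 0.98, while a ten-sigma departure is downweighted to essentially zero, reproducing variational quality control without any hard rejection threshold.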

\section{Gradient Evaluation Framework}

The \texttt{jgrad} subroutine provides the main interface for cost function and gradient evaluation.

\subsection{Core Algorithm Implementation}

\begin{lstlisting}[language=Fortran,caption={Gradient Evaluation Main Loop},label=code:jgrad_main]
subroutine jgrad(xhat, yhat, fjcost, gradx, lupdfgs, nprt, calledby)
  use control_vectors, only: control_vector, dot_product
  use state_vectors, only: allocate_state, deallocate_state
  use gsi_bundlemod, only: gsi_bundle, assignment(=)
  
  type(control_vector), intent(in) :: xhat      ! Control variable
  type(control_vector), intent(inout) :: yhat   ! Work vector
  real(r_quad), intent(out) :: fjcost          ! Cost function value
  type(control_vector), intent(out) :: gradx   ! Gradient vector
  logical, intent(in) :: lupdfgs               ! Update first guess
  integer(i_kind), intent(in) :: nprt          ! Print level
  character(len=*), intent(in) :: calledby     ! Calling routine
  
  ! Local variables
  type(gsi_bundle) :: sval(nsubwin)             ! State vectors
  type(gsi_bundle) :: rval(nsubwin)             ! Gradient state vectors
  type(predictors) :: sbias, rbias              ! Bias predictors
  real(r_quad) :: pjo, pjb, pjc, pjl           ! Cost function components
  integer(i_kind) :: ii                         ! Subwindow loop index
  
  ! Initialize cost function and gradient
  fjcost = zero_quad
  call allocate_cv(gradx)
  gradx = zero
  
  ! Allocate state and bias vectors
  do ii = 1, nsubwin
    call allocate_state(sval(ii))
    call allocate_state(rval(ii))
  end do
  call allocate_preds(sbias)
  call allocate_preds(rbias)
  
  ! Forward transformation: control -> state (all subwindows at once)
  call control2state(xhat, sval, sbias)
  
  ! Evaluate observation cost function Jo
  call evaljo(pjo, nobs_used, nprt, .false.)
  fjcost = fjcost + pjo
  
  ! Evaluate background cost function Jb  
  call evaljb(xhat, pjb)
  fjcost = fjcost + pjb
  
  ! Initialize gradient computation
  do ii = 1, nsubwin
    rval(ii) = zero
  end do
  rbias = zero
  
  ! Compute observation gradient contributions
  call intall(sval, sbias, rval, rbias)
  
  ! Add constraint contributions
  if (ljcpdry) call intjcpdry(rval, sval, nsubwin)
  if (ljcdfi) call intjcdfi(rval, sval)
  
  ! Adjoint transformation: state -> control
  call control2state_ad(rval, rbias, gradx)
  
  ! Add background gradient: since Jb = 1/2 * x^T * x in preconditioned
  ! control space, grad(Jb) = x
  gradx = gradx + xhat
  
  ! Print diagnostics
  if (nprt >= 1) then
    write(stdout,*) 'Cost function components:'
    write(stdout,'(a,f15.6)') '  Jo =', real(pjo)
    write(stdout,'(a,f15.6)') '  Jb =', real(pjb) 
    write(stdout,'(a,f15.6)') 'Total J =', real(fjcost)
    write(stdout,'(a,f15.6)') '|grad| =', sqrt(dot_product(gradx, gradx))
  end if
  
  ! Cleanup
  do ii = 1, nsubwin
    call deallocate_state(sval(ii))
    call deallocate_state(rval(ii))
  end do
  call deallocate_preds(sbias)
  call deallocate_preds(rbias)
end subroutine jgrad
\end{lstlisting}
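A routine like \texttt{jgrad} is typically validated with a finite-difference gradient check: the directional derivative predicted by the computed gradient must match $(J(\mathbf{x}+\epsilon\mathbf{p}) - J(\mathbf{x}-\epsilon\mathbf{p}))/2\epsilon$ for random directions $\mathbf{p}$. A Python sketch of the test on a toy quadratic cost (all names illustrative, not GSI code):

```python
import numpy as np

def cost(x, x_b, B_inv, H, R_inv, y):
    """Toy quadratic cost J = Jb + Jo used to exercise the check."""
    dxb = x - x_b
    dep = H @ x - y
    return 0.5 * dxb @ B_inv @ dxb + 0.5 * dep @ R_inv @ dep

def gradient(x, x_b, B_inv, H, R_inv, y):
    """Analytic gradient B^{-1}(x - x_b) + H^T R^{-1} (H x - y)."""
    return B_inv @ (x - x_b) + H.T @ R_inv @ (H @ x - y)

def gradient_check(x, p, eps, *args):
    """Central finite difference vs. p . grad J; both should agree."""
    fd = (cost(x + eps * p, *args) - cost(x - eps * p, *args)) / (2.0 * eps)
    return fd, p @ gradient(x, *args)
```

For a quadratic cost the central difference is exact up to rounding, so any disagreement beyond machine precision points to a bug in the adjoint chain.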

\subsection{Background Cost Function Evaluation}

In control variable space, the background term simplifies to:

\begin{lstlisting}[language=Fortran,caption={Background Cost Function},label=code:jb_evaluation]
subroutine evaljb(xhat, pjb)
  type(control_vector), intent(in) :: xhat
  real(r_quad), intent(out) :: pjb
  real(r_quad) :: pjb_static, pjb_ensemble
  
  ! In control space: Jb = 1/2 * x^T * x
  ! Background error covariance preconditioning built into control transform
  pjb = 0.5_r_quad * dot_product(xhat, xhat)
  
  ! For hybrid ensemble systems, modify background term
  if (l_hyb_ens) then
    ! Static component
    pjb_static = 0.5_r_quad * (1.0_r_quad - ensemble_alpha) * &
                 dot_product(xhat, xhat)
    
    ! Ensemble component (handled in separate routines)
    call evaljb_ensemble(xhat, pjb_ensemble)
    
    pjb = pjb_static + pjb_ensemble
  end if
end subroutine evaljb
\end{lstlisting}

\section{Observation Gradient Integration}

The \texttt{intall} subroutine computes observation contributions to the gradient with nonlinear quality control:

\subsection{Nonlinear Quality Control Implementation}

\begin{lstlisting}[language=Fortran,caption={Observation Gradient with QC},label=code:intall_qc]
subroutine intall(sval, sbias, rval, rbias)
  type(gsi_bundle), intent(in) :: sval(:)
  type(predictors), intent(in) :: sbias
  type(gsi_bundle), intent(inout) :: rval(:)
  type(predictors), intent(inout) :: rbias
  
  ! Local variables for nonlinear QC
  real(r_kind) :: departure, wnotgross, wgross, po, penalty
  real(r_kind) :: qc_weight, error_variance
  integer(i_kind) :: ii, obstype
  type(obs_diag), pointer :: obsptr
  
  ! Initialize gradient arrays
  do ii = 1, nsubwin
    rval(ii) = zero
  end do
  
  ! Loop over observation types
  do obstype = 1, nobs_type
    obsptr => obsdiags(obstype, 1)%ptr
    
    do while (associated(obsptr))
      if (obsptr%luse .and. obsptr%muse(jiter)) then
        ! Compute observation departure
        departure = obsptr%res  ! o - H(x)
        error_variance = obsptr%err2
        wnotgross = obsptr%wnotgross
        wgross = obsptr%wgross
        
        ! Nonlinear QC probability computation
        po = wnotgross * exp(-0.5_r_kind * (departure**2 / error_variance)) + wgross
        
        ! Quality control weight
        if (po > tiny_r_kind) then
          qc_weight = (wnotgross * exp(-0.5_r_kind * (departure**2 / error_variance))) / po
        else
          qc_weight = zero
        end if
        
        ! Penalty gradient term (the sign convention for res = o - H(x)
        ! is applied inside the adjoint routine)
        penalty = departure * qc_weight / error_variance
        
        ! Apply observation operator adjoint
        call obsmod_adjoint(obsptr, penalty, sval, rval)
      end if
      
      obsptr => obsptr%next
    end do
  end do
end subroutine intall
\end{lstlisting}
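The \texttt{qc\_weight} factor computed in the listing is exactly the derivative factor of the QC-modified observation cost $J_{qc}(d) = -\ln P_o(d)$ (up to an additive constant): $dJ_{qc}/dd = d\,(P_o - W_{\text{gross}})/P_o$. A short Python sketch confirming this by finite differences, in terms of the normalized departure $d$ (names illustrative):

```python
import math

def p_o(d, w_gross):
    """QC probability density for normalized departure d."""
    return (1.0 - w_gross) * math.exp(-0.5 * d * d) + w_gross

def qc_cost(d, w_gross):
    """QC-modified observation cost, -ln P_o up to a constant."""
    return -math.log(p_o(d, w_gross))

def qc_cost_gradient(d, w_gross):
    """Analytic derivative: d times the QC weight (P_o - w_gross)/P_o."""
    p = p_o(d, w_gross)
    return d * (p - w_gross) / p
```

The check shows why gross-error handling flattens the cost function for large departures: the weight, and hence the gradient, decays toward zero instead of growing linearly with $d$.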

\subsection{Individual Observation Operator Integration}

Each observation type implements its specific adjoint operator:

\begin{lstlisting}[language=Fortran,caption={Temperature Observation Integration},label=code:intt_integration]
subroutine intt(tval, rval, sval)
  ! Temperature observation operator adjoint
  real(r_kind), intent(in) :: tval      ! Observation penalty
  type(gsi_bundle), intent(inout) :: rval
  type(gsi_bundle), intent(in) :: sval
  
  real(r_kind), pointer :: t_ptr(:,:,:)
  real(r_kind), pointer :: rt_ptr(:,:,:)
  integer(i_kind) :: i, j, k, istatus
  
  ! Get pointers to temperature fields
  call gsi_bundlegetpointer(sval, 'tv', t_ptr, istatus)
  call gsi_bundlegetpointer(rval, 'tv', rt_ptr, istatus)
  
  ! Apply interpolation weights (adjoint)
  do k = 1, nsig
    do j = 1, lon2
      do i = 1, lat2
        ! Bilinear interpolation adjoint
        rt_ptr(i,j,k) = rt_ptr(i,j,k) + &
                       interpolation_weight(i,j,k) * tval
      end do
    end do
  end do
end subroutine intt
\end{lstlisting}

\section{Constraint Gradient Computation}

The GSI system implements several types of physical constraints through penalty terms:

\subsection{Dry Mass Conservation}

The dry mass constraint maintains global mass conservation:

\begin{lstlisting}[language=Fortran,caption={Dry Mass Conservation Gradient},label=code:intjcpdry]
subroutine intjcpdry(rval, sval, nsubwin)
  type(gsi_bundle), intent(inout) :: rval(:)
  type(gsi_bundle), intent(in) :: sval(:)
  integer(i_kind), intent(in) :: nsubwin
  
  real(r_kind) :: global_mass_tendency
  real(r_kind) :: constraint_weight, penalty_gradient
  real(r_kind), pointer :: ps_ptr(:,:), rps_ptr(:,:)
  integer(i_kind) :: ii, i, j, istatus
  
  ! Compute global mass tendency
  call intjcpdry1(sval, global_mass_tendency)
  
  ! Apply constraint gradient
  constraint_weight = factjcpdry  ! Lagrange multiplier weight
  penalty_gradient = constraint_weight * global_mass_tendency
  
  ! Distribute gradient to surface pressure
  do ii = 1, nsubwin
    call gsi_bundlegetpointer(sval(ii), 'ps', ps_ptr, istatus)
    call gsi_bundlegetpointer(rval(ii), 'ps', rps_ptr, istatus)
    
    do j = 1, lon2
      do i = 1, lat2
        rps_ptr(i,j) = rps_ptr(i,j) + penalty_gradient * area_weight(i,j)
      end do
    end do
  end do
end subroutine intjcpdry

subroutine intjcpdry1(sval, global_tendency)
  ! Compute global dry mass tendency
  type(gsi_bundle), intent(in) :: sval(:)
  real(r_kind), intent(out) :: global_tendency
  
  real(r_kind) :: local_integral, global_integral
  real(r_kind), pointer :: ps_ptr(:,:)
  integer(i_kind) :: ii, i, j, istatus, ierror
  
  local_integral = zero
  
  do ii = 1, nsubwin
    call gsi_bundlegetpointer(sval(ii), 'ps', ps_ptr, istatus)
    
    do j = 1, lon2
      do i = 1, lat2
        local_integral = local_integral + ps_ptr(i,j) * area_weight(i,j)
      end do
    end do
  end do
  
  ! Global reduction
  call mpi_allreduce(local_integral, global_integral, 1, &
                    mpi_real8, mpi_sum, mpi_comm_world, ierror)
  
  global_tendency = global_integral / global_area
end subroutine intjcpdry1
\end{lstlisting}
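The structure of \texttt{intjcpdry} corresponds to a quadratic penalty on the area-weighted mean surface-pressure increment: if $J_c = \tfrac{\lambda}{2} m^2$ with $m = \sum_i a_i p_i / \sum_i a_i$, then $\partial J_c/\partial p_i = \lambda\, m\, a_i / \sum_j a_j$, which is exactly a scalar multiple of the area weight distributed back to every grid point. A hedged Python sketch verifying this per-point gradient against a finite difference (names illustrative, not GSI code):

```python
import numpy as np

def mass_penalty(ps, area, lam):
    """J_c = (lam/2) * m**2, m = area-weighted mean ps increment."""
    m = np.sum(ps * area) / np.sum(area)
    return 0.5 * lam * m * m

def mass_penalty_grad(ps, area, lam):
    """Per-point gradient lam * m * area_i / total_area."""
    total = np.sum(area)
    m = np.sum(ps * area) / total
    return lam * m * area / total
```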

\subsection{Moisture and Physical Variable Limits}

Negative value constraints prevent unphysical analysis states:

\begin{lstlisting}[language=Fortran,caption={Moisture Limit Constraint},label=code:intlimq]
subroutine intlimq(rval, sval, itbin)
  ! Apply negative specific humidity constraint
  type(gsi_bundle), intent(inout) :: rval
  type(gsi_bundle), intent(in) :: sval
  integer(i_kind), intent(in) :: itbin
  
  real(r_kind), pointer :: q_ptr(:,:,:), rq_ptr(:,:,:)
  real(r_kind) :: penalty, constraint_derivative
  integer(i_kind) :: i, j, k, istatus
  real(r_kind), parameter :: qmin = 1.e-6_r_kind
  
  call gsi_bundlegetpointer(sval, 'q', q_ptr, istatus)
  call gsi_bundlegetpointer(rval, 'q', rq_ptr, istatus)
  
  do k = 1, nsig
    do j = 1, lon2
      do i = 1, lat2
        ! Apply penalty for negative values
        if (q_ptr(i,j,k) < qmin) then
          penalty = factqlim * (q_ptr(i,j,k) - qmin)**2
          constraint_derivative = 2.0_r_kind * factqlim * (q_ptr(i,j,k) - qmin)
          rq_ptr(i,j,k) = rq_ptr(i,j,k) + constraint_derivative
        end if
      end do
    end do
  end do
end subroutine intlimq
\end{lstlisting}
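The derivative in \texttt{intlimq} follows from the one-sided quadratic penalty $J_q = \texttt{factqlim}\,(q - q_{\min})^2$, active only where $q < q_{\min}$. A short Python sketch checking that gradient (illustrative names, not GSI code):

```python
import numpy as np

def qlim_penalty(q, qmin, factqlim):
    """One-sided quadratic penalty, active only where q < qmin."""
    viol = np.minimum(q - qmin, 0.0)
    return factqlim * np.sum(viol ** 2)

def qlim_gradient(q, qmin, factqlim):
    """Derivative 2*factqlim*(q - qmin) where q < qmin, zero elsewhere."""
    viol = np.minimum(q - qmin, 0.0)
    return 2.0 * factqlim * viol
```

Because the penalty and its derivative both vanish at $q = q_{\min}$, the gradient is continuous there, which keeps the constraint well behaved inside a smooth minimizer.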

\section{Gradient Decomposition and Analysis}

The GSI system provides detailed gradient analysis for diagnostic purposes:

\subsection{Component-wise Gradient Norms}

\begin{lstlisting}[language=Fortran,caption={Gradient Component Analysis},label=code:gradient_analysis]
subroutine analyze_gradient_components(gradx)
  type(control_vector), intent(in) :: gradx
  
  real(r_kind) :: grad_sf, grad_vp, grad_t, grad_q, grad_ps
  real(r_kind) :: grad_total
  real(r_kind), pointer :: ptr_3d(:,:,:), ptr_2d(:,:)
  integer(i_kind) :: istatus
  
  ! Zero the norms in case a variable is absent from the control vector
  grad_sf = zero; grad_vp = zero; grad_t = zero; grad_q = zero; grad_ps = zero
  
  ! Stream function gradient norm
  if (getindex(cvars3d, 'sf') > 0) then
    call gsi_bundlegetpointer(gradx, 'sf', ptr_3d, istatus)
    grad_sf = sqrt(sum(ptr_3d**2))
  end if
  
  ! Velocity potential gradient norm  
  if (getindex(cvars3d, 'vp') > 0) then
    call gsi_bundlegetpointer(gradx, 'vp', ptr_3d, istatus)
    grad_vp = sqrt(sum(ptr_3d**2))
  end if
  
  ! Temperature gradient norm
  if (getindex(cvars3d, 'tv') > 0) then
    call gsi_bundlegetpointer(gradx, 'tv', ptr_3d, istatus)
    grad_t = sqrt(sum(ptr_3d**2))
  end if
  
  ! Humidity gradient norm
  if (getindex(cvars3d, 'q') > 0) then
    call gsi_bundlegetpointer(gradx, 'q', ptr_3d, istatus)
    grad_q = sqrt(sum(ptr_3d**2))
  end if
  
  ! Surface pressure gradient norm
  if (getindex(cvars2d, 'ps') > 0) then
    call gsi_bundlegetpointer(gradx, 'ps', ptr_2d, istatus)
    grad_ps = sqrt(sum(ptr_2d**2))
  end if
  
  grad_total = dot_product(gradx, gradx)
  
  ! Print analysis
  write(stdout,*) 'Gradient component analysis:'
  write(stdout,'(a,f12.6)') '  |∇J/∇sf| =', grad_sf
  write(stdout,'(a,f12.6)') '  |∇J/∇vp| =', grad_vp  
  write(stdout,'(a,f12.6)') '  |∇J/∇t|  =', grad_t
  write(stdout,'(a,f12.6)') '  |∇J/∇q|  =', grad_q
  write(stdout,'(a,f12.6)') '  |∇J/∇ps| =', grad_ps
  write(stdout,'(a,f12.6)') '  |∇J|     =', sqrt(grad_total)
end subroutine analyze_gradient_components
\end{lstlisting}

\subsection{Observation Type Contribution Analysis}

\begin{lstlisting}[language=Fortran,caption={Observation Gradient Contributions},label=code:obs_gradient_contrib]
subroutine analyze_observation_contributions(gradx_by_type)
  type(control_vector), intent(in) :: gradx_by_type(:)
  
  real(r_kind) :: contribution_norm(nobs_type)
  integer(i_kind) :: itype
  
  do itype = 1, nobs_type
    contribution_norm(itype) = sqrt(dot_product(gradx_by_type(itype), &
                                               gradx_by_type(itype)))
    
    write(stdout,'(a,a12,f12.6)') '  Gradient from ', &
          trim(cobstype(itype)), contribution_norm(itype)
  end do
end subroutine analyze_observation_contributions
\end{lstlisting}

\section{Ensemble Gradient Integration}

The hybrid ensemble-variational system requires specialized gradient computation:

\subsection{Ensemble Background Gradient}

\begin{lstlisting}[language=Fortran,caption={Hybrid Ensemble Gradient},label=code:ensemble_gradient]
subroutine compute_ensemble_gradient(xhat, gradx_ens)
  type(control_vector), intent(in) :: xhat
  type(control_vector), intent(out) :: gradx_ens
  
  ! Local variables for ensemble processing
  real(r_kind), allocatable :: ens_weights(:)
  
  if (l_hyb_ens) then
    ! Compute ensemble background gradient
    ! ∇Jb_ens = α * (P_ens)^(-1/2) * x_ens
    
    allocate(ens_weights(n_ens))
    
    ! Extract ensemble weights from control vector
    call extract_ensemble_weights(xhat, ens_weights)
    
    ! Apply localization if configured
    if (l_localization) then
      call apply_localization_gradient(ens_weights, gradx_ens)
    else
      call direct_ensemble_gradient(ens_weights, gradx_ens)
    end if
    
    deallocate(ens_weights)
  end if
end subroutine compute_ensemble_gradient
\end{lstlisting}
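The blending in \texttt{evaljb} above can be illustrated with a toy hybrid background term. Assuming a single weight $w$ (called \texttt{ensemble\_alpha} in the listing), $J_b = \tfrac{1-w}{2}\,\mathbf{x}_s^T\mathbf{x}_s + \tfrac{w}{2}\,\mathbf{a}^T\mathbf{a}$, so the gradient splits into a static block $(1-w)\,\mathbf{x}_s$ and an ensemble-amplitude block $w\,\mathbf{a}$. A Python sketch under these assumptions (not the full GSI hybrid formulation):

```python
import numpy as np

def hybrid_jb(x_s, a, w):
    """Toy hybrid background term with blending weight w in [0, 1]."""
    return 0.5 * (1.0 - w) * (x_s @ x_s) + 0.5 * w * (a @ a)

def hybrid_jb_grad(x_s, a, w):
    """Gradient blocks for the static and ensemble-amplitude parts."""
    return (1.0 - w) * x_s, w * a
```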

\section{Performance Optimization}

The gradient computation system implements several optimization strategies:

\subsection{Memory Access Patterns}

\begin{lstlisting}[language=Fortran,caption={Optimized Memory Access},label=code:optimized_access]
subroutine optimize_gradient_computation(sval, rval)
  ! Optimize memory access patterns for cache efficiency
  
  ! Process by vertical level for better cache locality
  do k = 1, nsig
    ! Process all horizontal points at this level
    call process_level_gradient(k, sval, rval)
  end do
end subroutine optimize_gradient_computation

subroutine process_level_gradient(k_level, sval, rval)
  integer(i_kind), intent(in) :: k_level
  type(gsi_bundle), intent(in) :: sval
  type(gsi_bundle), intent(inout) :: rval
  
  ! Vectorizable loop over horizontal domain
  do j = 1, lon2
    do i = 1, lat2
      call compute_gradient_point(i, j, k_level, sval, rval)
    end do
  end do
end subroutine process_level_gradient
\end{lstlisting}

\subsection{Parallel Processing}

\begin{lstlisting}[language=Fortran,caption={Parallel Gradient Computation},label=code:parallel_gradient]
! OpenMP parallelization over observation types (assumes each type
! accumulates into its own work array, combined after the loop, so
! there are no races on rval)
!$OMP PARALLEL DO PRIVATE(itype)
do itype = 1, nobs_type
  ! Process observation type independently
  call compute_obstype_gradient(itype, sval, rval)
end do
!$OMP END PARALLEL DO

! Subwindows are independent: each iteration updates only rval(ii),
! so no reduction clause is required (and OpenMP does not permit
! REDUCTION on a derived type such as gsi_bundle)
!$OMP PARALLEL DO PRIVATE(ii)
do ii = 1, nsubwin
  call add_constraint_gradients(rval(ii), sval(ii))
end do
!$OMP END PARALLEL DO
\end{lstlisting}

\section{Convergence Monitoring}

The gradient system provides comprehensive convergence diagnostics:

\subsection{Gradient Norm Tracking}

\begin{lstlisting}[language=Fortran,caption={Convergence Monitoring},label=code:convergence_monitoring]
subroutine monitor_convergence(gradx, fjcost, iteration)
  type(control_vector), intent(in) :: gradx
  real(r_quad), intent(in) :: fjcost
  integer(i_kind), intent(in) :: iteration
  
  real(r_kind) :: grad_norm, cost_reduction
  real(r_kind), save :: prev_grad_norm = huge(1.0_r_kind)
  real(r_quad), save :: prev_cost = huge(1.0_r_quad)
  
  grad_norm = sqrt(dot_product(gradx, gradx))
  
  if (iteration > 1) then
    cost_reduction = (prev_cost - fjcost) / prev_cost
    
    write(stdout,'(a,i3)') 'Iteration:', iteration
    write(stdout,'(a,e12.4)') '  Cost function:', real(fjcost)
    write(stdout,'(a,e12.4)') '  Gradient norm:', grad_norm
    write(stdout,'(a,e12.4)') '  Cost reduction:', cost_reduction
    write(stdout,'(a,e12.4)') '  Grad reduction:', &
          (prev_grad_norm - grad_norm) / prev_grad_norm
    
    ! Check convergence criteria
    if (grad_norm < gradient_tolerance) then
      write(stdout,*) 'Convergence achieved: gradient norm below threshold'
    end if
    
    if (cost_reduction < cost_tolerance) then
      write(stdout,*) 'Convergence achieved: cost reduction below threshold'
    end if
  end if
  
  prev_grad_norm = grad_norm
  prev_cost = fjcost
end subroutine monitor_convergence
\end{lstlisting}
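The listing tests absolute thresholds; minimizers more commonly declare convergence when the gradient norm has fallen by a fixed factor relative to its value at the first iteration. A hedged Python sketch of such a relative criterion (threshold names and defaults are illustrative):

```python
def converged(grad_norm, grad_norm0, cost, prev_cost,
              grad_tol=1e-2, cost_tol=1e-6):
    """Relative stopping tests: gradient norm reduced to grad_tol of its
    initial value, or relative cost reduction below cost_tol."""
    grad_ok = grad_norm <= grad_tol * grad_norm0
    cost_ok = prev_cost is not None and \
        (prev_cost - cost) / abs(prev_cost) < cost_tol
    return grad_ok or cost_ok
```

A relative test is preferable because the absolute magnitude of $|\nabla J|$ depends on the number of observations and the control-variable scaling, which vary from case to case.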

\section{Summary}

The gradient computation framework in GSI provides:

\begin{itemize}
\item \textbf{Mathematical rigor}: Precise implementation of variational calculus principles
\item \textbf{Nonlinear quality control}: Robust handling of observation outliers through probabilistic methods
\item \textbf{Physical constraints}: Enforcement of mass conservation and variable bounds
\item \textbf{Comprehensive diagnostics}: Detailed analysis of gradient components and contributions
\item \textbf{Ensemble integration}: Seamless hybrid ensemble-variational gradient computation  
\item \textbf{Performance optimization}: Memory-efficient and parallel-optimized implementation
\item \textbf{Convergence monitoring}: Real-time tracking of minimization progress
\end{itemize}

The gradient system enables GSI to achieve optimal analysis solutions through efficient and accurate iterative minimization, maintaining numerical stability while handling diverse observation types and physical constraints. This forms the computational backbone of the GSI variational data assimilation system.