
#import "template.typ" : *

#show: doc => simple-slide(doc, 
  subtitle: "",
  author: "Yue Tao"
)

// Title slide
// = Characterization & Identification of Cluster Workloads in LLM Applications: A Survey
= Workload Characterization in LLM Applications: A Survey

= Research Objectives

*Research Goal*: Study workload behavior in large distributed systems that host LLM-based applications, such as:

- LLM-based recommendation systems
- Agentic AI systems  
- Distributed database systems

*Key Questions*:
1. What are the reference designs and typical implementations?
2. What are the roles and responsibilities of hardware nodes in such systems?
3. How do we characterize the workload behavior in such systems?


= Adopters: Xiaohongshu

*Big Data Architecture for Search, Recommendation & Advertising*

#figure(
  image("survey/image.png", width: 50%),
  caption: [Xiaohongshu's big data architecture for search, recommendation and advertising scenarios]
)

= Adopters: Xiaohongshu

#set text(size: 12pt)
#table(
  columns: (0.6fr, 1.2fr, 2fr),
  align: left,
  [*Node Type*], [*Hardware Preferences*], [*Typical Bottlenecks*],
  [Online Recommender], [CPU, low-latency memory & cache], [QPS spikes cause CPU saturation; insufficient cache capacity],
  [Kafka Message Queue], [Disk I/O, memory, bandwidth], [Message backlog causes memory exhaustion, training delays],
  [Flink Real-time Computing], [Multi-core CPU, memory, bandwidth], [Large state data triggers GC; CPU saturation increases latency],
  [KV Feature Cache], [Memory, CPU], [Insufficient memory → cache breakdown, requests fall through to backend storage],
  [OLAP Database], [CPU, large memory, SSD I/O], [Multi-dimensional analytical queries cause CPU saturation; ETL delays],
  [Hive/DW], [CPU, disk I/O], [ETL delays; cannot respond quickly to experiments],
  [Online Training], [CPU/memory, network bandwidth], [Data surges cause CPU/memory exhaustion; parameter-synchronization delays],
  [Offline Training (PS + Worker)], [GPU, CPU, memory, network bandwidth], [GPU compute sits idle; insufficient PS memory; PS-worker network bottleneck],
)

= Adopters: Facebook

#figure(
  placement: bottom,
  image("survey/image 1.png", width: 100%),
)

Facebook's distributed training architecture @Zhao_2022 demonstrates the importance of workload characterization in production systems.

= Adopters: Facebook

#set text(size: 12pt)
#table(
  columns: (0.6fr, 1fr, 2fr, 2fr),
  align: left,
  [*Resource*], [*Load*], [*Behavior*], [*Bottlenecks*],
  [CPU], [Tokenization, audio/image, augmentations], [Python UDFs, GIL affected, parallelizable but cross-process copying], [Single-core throughput insufficient, serialization overhead],
  [GPU], [Model forward/backward], [High throughput, depends on a stable input feed; pipelined step time Te2e = max(Tp, Tg)], [GPU idles when Tp > Tg; input cannot be delivered fast enough],
  [Memory], [Prefetch buffers, batch aggregation], [Prefetching improves Te2e (~30-35%); large data occupies RAM], [Insufficient memory limits prefetching; excessive copying],
  [Network], [Cross-node transmission, offload], [Offload cost depends on bandwidth/latency; serialized transmission consumes CPU/network], [Insufficient bandwidth / high latency, high egress cost],
  [Storage I/O], [Sample reading, shuffle, loading], [Small file delay significant; TFRecord/Arrow can improve], [Backend throughput insufficient, random access waiting],
  [Serialization], [Embedding, tensor, Python objects], [Large object serialization consumes CPU/network; Arrow/zero-copy reduces cost], [Large embeddings become dual bottlenecks],
  [Cache], [Hot data, prefetch output], [Smooths I/O, covers training steps, reduces backend access], [Insufficient capacity or wrong placement reduces benefits]
)
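The table's end-to-end timing can be made concrete with a toy model (a minimal sketch, not the paper's code): under the common pipelined-loader assumption, data preprocessing (Tp) overlaps the GPU pass (Tg), so the steady-state step time is max(Tp, Tg) and the GPU idles whenever preprocessing is the slower stage.

```python
# Toy model of a pipelined training step (assumption: perfect overlap
# between data preprocessing and GPU compute, as in a prefetching loader).

def step_time(tp: float, tg: float) -> float:
    """Steady-state end-to-end step time with a pipelined data loader."""
    return max(tp, tg)

def gpu_idle_fraction(tp: float, tg: float) -> float:
    """Fraction of each step the GPU waits on input (0 when Tg >= Tp)."""
    return max(tp - tg, 0.0) / step_time(tp, tg)

# Example: 120 ms preprocessing vs 100 ms GPU pass -> GPU idles 1/6 of the step.
assert step_time(120, 100) == 120
assert abs(gpu_idle_fraction(120, 100) - 20 / 120) < 1e-9
```

This is why prefetch buffers improve Te2e: they keep the effective Tp below Tg so the GPU stage stays the bottleneck.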

= Vendors: NVIDIA

Merlin @nvidia2024recsys is a recommender framework provided by NVIDIA.

#figure(
  image("survey/image 2.png", width: 80%),
  caption: [NVIDIA's recommendation system architecture overview]
)

= Vendors: NVIDIA

#figure(
  image("survey/image 3.png", width: 80%),
  caption: [Detailed hardware resource allocation in recommendation systems]
)

= Vendors: NVIDIA

#set text(size: 12pt)
#table(
  columns: (0.6fr, 1fr, 2fr, 2fr),
  align: left,
  [*Component*], [*Node Type*], [*Hardware Resources*], [*Load Behavior*],
  [Data Preprocessing], [GPU acceleration, storage], [GPU, CPU, RAM, SSD/HDD], [GPU: efficient loading (NVTabular), parallel features; CPU: validation; Storage: Parquet read/write],
  [Traditional ML Training], [GPU computing], [GPU, GPU RAM], [GPU: accelerates XGBoost, LightGBM; GPU RAM: stores data and parameters],
  [DL Training], [Single/multi-GPU, clusters], [GPU, RAM, CPU, NVLink, InfiniBand, storage], [GPU: embedding lookup, MLP, mixed precision; NVLink: embedding tables, model parallelism],
  [Deployment & Serving], [Inference servers], [GPU, CPU, RAM], [GPU: online inference, mixed precision, CUDA Graphs; Auto-scaling through Triton],
  [Logging & Monitoring], [General computing], [CPU, memory, storage], [CPU: KPI monitoring, model retraining; Storage: log persistence]
)

= Academia: Characterizing LLM Workload @vellaisamy2025characterizingoptimizingllminference

#set text(size: 11pt)
#table(
  columns: (1fr, 1fr, 1fr),
  align: left,
  [*Analysis Dimension*], [*Methodology*], [*Key Insights*],
  [*Operator-to-Kernel Tracing*], [
    - Build dependency graph from PyTorch Aten operators to CUDA kernels
    - Capture CPU-GPU interaction patterns
    - Identify operator-kernel mapping relationships
  ], [
    - Fine-grained workload behavior analysis
    - CPU-side operator triggers GPU kernel execution
    - Kernel launch and queuing behavior patterns
  ],
  [*Performance Boundary Classification*], [
    - TKLQT (Total Kernel Launch and Queuing Time) metric
    - TKLQT = $sum(t_"s,h"(k_i) - t_"s,h"(l_i))$
    - Distinguishes CPU-bound vs GPU-bound workloads
  ], [
    - CPU-bound: kernel launch overhead dominates
    - GPU-bound: kernel queuing time dominates
    - GH200 has 4x larger CPU-bound region than PCIe systems
  ],
  [*Kernel Fusion Analysis*], [
    - Proximity Score: PS(C) = f(C)/f(k_i)
    - Identifies deterministic kernel execution patterns
    - Quantifies fusion potential for different kernel chains
  ], [
    - CPU-bound region: ideal acceleration up to 6.8x
    - More effective for tightly-coupled systems
    - Data-driven fusion recommendations
  ]
)
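The TKLQT idea can be illustrated with a small sketch (hypothetical event format, not the paper's tooling): given matched launch/kernel timestamp pairs, the per-kernel gap between host-side launch start and device-side kernel start accumulates launch overhead plus queuing delay, and comparing that total against GPU busy time gives a rough CPU-bound vs GPU-bound call.

```python
# Sketch of a TKLQT-style metric over matched (launch, kernel-start)
# timestamp pairs, both in microseconds. Hypothetical trace format.

def tklqt(pairs):
    """Sum of per-kernel (kernel start - launch start) gaps."""
    return sum(k_start - l_start for l_start, k_start in pairs)

def classify(pairs, gpu_busy_time):
    """If launch/queue time exceeds GPU busy time, the host side is the
    likelier bottleneck (CPU-bound); otherwise the device is (GPU-bound)."""
    return "cpu-bound" if tklqt(pairs) > gpu_busy_time else "gpu-bound"

events = [(0, 50), (100, 180), (200, 230)]  # (launch, kernel-start) pairs
assert tklqt(events) == 50 + 80 + 30        # 160 us of launch + queue time
assert classify(events, gpu_busy_time=120) == "cpu-bound"
```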

= Academia: Literature Surveys @liang2024resourceallocationworkloadscheduling @asgar2025efficientscalableagenticai

#set text(size: 12pt)
#table(
  columns: 4,
  align: left,
  [*Resource*], [*Role*], [*Implementation*], [*Bottlenecks*],
  [*GPU Compute*], [Main computation (forward/backward)], [Space sharing (MIG), time sharing (Salus), elastic scaling (Pollux)], [Low utilization (25-50%), fragmentation, context switching],
  [*GPU Memory*], [Store parameters, activations, gradients], [Memory isolation (Muxflow), dynamic scaling (Zico), batch optimization (Fluid)], [Insufficient memory, frequent spilling, multi-task interference],
  [*Network*], [Parameter sync, gradient communication], [Job-level scheduling (Liquid), gradient block scheduling (Prophet)], [Communication bottlenecks, uneven bandwidth, topology differences],
  [*CPU & Memory*], [Preprocessing, scheduling, auxiliary tasks], [VM/container scheduling, unified memory (CUDA UM)], [CPU preprocessing slow, migration overhead],
  [*Storage/I/O*], [Data loading, parameter saving], [Distributed filesystems (HDFS, Ceph), caching], [I/O delay, storage imbalance]
)

= Academia: Trace-Based Analysis

The MLaaS study @276938 provides a comprehensive analysis based on trace data from Alibaba's PAI cluster.

#set text(size: 12pt)
#table(
  columns: (1fr, 1fr, 1fr, 1fr),
  align: left,
  [*Task Structure & Scheduling*], [*Temporal Patterns*], [*Resource Request vs. Usage*], [*Machine Resource Utilization*],
  [
    - Task distribution is skewed: top 5% of users submit 77% of instances
    - 85% require gang scheduling
    - GPU locality matters: same-machine placement (10x speedup with NVLink)
  ],
  [
    - Day-night submission patterns
    - Runtime spans 4 orders of magnitude (median: 23 min, P90: 4.5 hours)
    - Queuing delays vary by GPU type
  ],
  [
    - Heavy-tailed request distribution
    - Actual usage much lower: median 1.4 vCPU, 0.042 GPU, 3.5 GiB memory
    - 18% barely use GPU
  ],
  [
    - 8-GPU: P90 GPU 82%, CPU 77%
    - 2-GPU: P90 GPU 77%, CPU 42%
    - Memory \<60% (not bottleneck)
    - Network/I/O: 34-54% of guaranteed bandwidth
  ]
)

= Academia: Trace-Based Analysis

*High GPU Tasks (Low proportion but critical)*:
- NLP tasks (BERT, XLNet): 40% request >1 GPU, actual usage >0.4 GPU
- Image classification (ResNet-100k): depends on NVLink for gradient exchange (10.5x speedup)

*Low GPU Tasks (High proportion)*:
- CTR prediction: 75% inference, high CPU usage, low GPU usage
- GNN training: graph preprocessing takes 30-90% of runtime; CPU usage higher than GPU
- Reinforcement learning: 72% of tasks have ≥10 instances; simulation is CPU-heavy

#grid(
  columns: 2,
  gutter: 10pt,
  figure(
    image("survey/req-usage.png", width: 100%),
  ),
  figure(
    image("survey/utilization.png", width: 100%),
  )
)


// Bibliography
#let bib-content = {
  set text(size: 12pt)
  bibliography("references.bib", title: "References")
}
#bib-content
#set text(size: 14pt)
= Tracing Data Sources

*Alibaba Cluster Tracing Data*
- https://github.com/alibaba/clusterdata

#figure(
  image("survey/tracing.png", width: 70%),
  caption: [Alibaba Cluster Tracing Data #raw("cluster-trace-v2026-spot-gpu")]
)
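A minimal starting point for working with such traces (file name and column names here are hypothetical; check the repo's schema before use) is to stream the CSV with the standard library and aggregate a resource column:

```python
# Sketch: reading a cluster-trace-style CSV and summing GPU requests of
# terminated jobs. The sample data and column names are made up.
import csv
import io

# Stand-in for a trace file such as those under alibaba/clusterdata.
sample = io.StringIO(
    "job_id,gpu_request,status\n"
    "j1,1.0,Terminated\n"
    "j2,0.25,Terminated\n"
    "j3,2.0,Failed\n"
)

rows = list(csv.DictReader(sample))
finished = [r for r in rows if r["status"] == "Terminated"]
total_gpu = sum(float(r["gpu_request"]) for r in finished)

assert len(finished) == 2
assert total_gpu == 1.25
```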

= Conclusion & Future Work
#set text(size: 18pt)

*Key Findings*:
- A finer-grained hardware resource model is required for better workload characterization.
- There is little existing work on predicting workload behavior.
- Studies of complex LLM-integrated services in practice remain limited.
- Public real-world tracing data is coarse-grained.

*Next steps*:
1. Build a unified hierarchical model of fine-grained hardware resources to better illustrate workload behaviors and characteristics.
2. Further study workload characteristics in different LLM-based scenarios.
3. Mine existing tracing data for more information relevant to workload characterization.

= 😊

#align(center)[
  #v(2em)
  #text(size: 2.5em, weight: "bold")[Thank You!]
  
  #v(1em)
  #text(size: 1.5em)[Questions & Discussion]
  
]