| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,012,071,335
|
fully_shard() for huggingface 72B model: pytorch caches too much GPU memory
|
mingdianliu
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 5
|
NONE
|
Dear Community,
I'm working on fine-tuning the Qwen2-VL model using `fully_shard()` and wrote a script for it. However, it runs into OOM when I try to fine-tune the 72B model on 128 GPUs.
I found this is due to the PyTorch cache: the allocated and reserved GPU memory is quite small, while the cached GPU memory is even higher than 50 GB. I tried calling `torch.cuda.empty_cache()` after each training iteration, but the GPU memory cached during each iteration is still high (~20 GB). I wonder whether this is a bug in FSDP2. If not, is there any method that can mitigate this issue?
I'd really appreciate any insights or suggestions you might have. Thanks in advance!
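For reference, a minimal sketch (not part of the original script) of how the allocated/reserved numbers can be logged per iteration from the caching allocator, for comparison against the `nvidia-smi` numbers below:
```python
# Hedged helper, assuming one CUDA device per rank: log what the caching
# allocator currently holds; reserved minus allocated is the cached portion.
import torch

def log_cuda_memory(tag: str) -> None:
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"{tag}: allocated={allocated:.2f} GiB, reserved={reserved:.2f} GiB, "
          f"cached-but-unallocated={reserved - allocated:.2f} GiB")
```
Setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` is sometimes suggested for fragmentation-related growth, though I cannot confirm it applies here.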
### My code
```
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLProcessor, AutoModelForVision2Seq, AutoConfig
from qwen_vl_utils import process_vision_info
from peft import LoraConfig, get_peft_model
from datasets import load_dataset
import numpy as np
from PIL import Image
import io
import logging
import os
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.device_mesh import init_device_mesh
from transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLDecoderLayer, Qwen2VLVisionBlock
from torch.distributed._composable.fsdp import fully_shard
from torch.distributed import init_process_group, destroy_process_group
from torch.distributed.checkpoint import DefaultLoadPlanner, DefaultSavePlanner
from torch.distributed._composable.fsdp import (
CPUOffloadPolicy,
fully_shard,
MixedPrecisionPolicy,
)
# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
# init dist
distributed_backend = "nccl" # gloo for cpu
dist.init_process_group(distributed_backend)
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)
# model_name = "Qwen/Qwen2-VL-2B-Instruct"
# revision = "895c3a49bc3fa70a340399125c650a463535e71c"
model_name = "Qwen/Qwen2-VL-7B-Instruct"
revision = "a28a094eb66a9f2ac70eef346f040d8a79977472"
# model_name = "Qwen/Qwen2-VL-72B-Instruct"
# revision = "f9b556a74d58e6d9915f73227c21045c87342b42"
dataset_id = "HuggingFaceM4/ChartQA"
processor = Qwen2VLProcessor.from_pretrained(model_name,
revision=revision,
)
# Configuration
class Config:
dataset_id = "HuggingFaceM4/ChartQA"
output_dir = "/tmp_ckpt"
batch_size = 2
num_epochs = 3
learning_rate = 5e-5
max_seq_length = 512
lora_rank = 32
lora_alpha = 64
lora_dropout = 0.1
device = "cuda" if torch.cuda.is_available() else "cpu"
system_message = """You are a Vision Language Model specialized in interpreting visual data from chart images.
Your task is to analyze the provided chart image and respond to queries with concise answers, usually a single word, number, or short phrase.
The charts include a variety of types (e.g., line charts, bar charts) and contain colors, labels, and text.
Focus on delivering accurate, succinct answers based on the visual information. Avoid additional explanation unless absolutely necessary."""
def format_data(sample):
return [
{
"role": "system",
"content": [{"type": "text", "text": system_message}],
},
{
"role": "user",
"content": [
{
"type": "image",
"image": sample["image"],
},
{
"type": "text",
"text": sample["query"],
},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": sample["label"][0]}],
},
]
# Training function
def train_model(model, train_loader, optimizer, config):
model.train()
total_steps = len(train_loader) * config.num_epochs
step = 0
scaler = torch.amp.GradScaler("cuda", enabled=True)
for epoch in range(config.num_epochs):
total_loss = 0
for batch_idx, batch in enumerate(train_loader):
inputs, labels = batch
inputs = inputs.to(config.device)
labels = labels.to(config.device)
# Mixed precision training
loss = model(**inputs, labels=labels).loss
loss.backward() # no scaler
optimizer.step()
optimizer.zero_grad()
step += 1
logger.info(f"Epoch {epoch+1}/{config.num_epochs}, Step {step}/{total_steps}, Loss: {loss.item():.4f}")
del loss
# Create a data collator to encode text and image pairs
def collate_fn(examples):
# Get the texts and images, and apply the chat template
texts = [
processor.apply_chat_template(example, tokenize=False) for example in examples
] # Prepare texts for processing
image_inputs = [process_vision_info(example)[0] for example in examples] # Process the images to extract inputs
# Tokenize the texts and process the images
batch = processor(
text=texts, images=image_inputs, return_tensors="pt", padding=True
) # Encode texts and images into tensors
# The labels are the input_ids, and we mask the padding tokens in the loss computation
labels = batch["input_ids"].clone() # Clone input IDs for labels
labels[labels == processor.tokenizer.pad_token_id] = -100 # Mask padding tokens in labels
# Ignore the image token index in the loss computation (model specific)
if isinstance(processor, Qwen2VLProcessor): # Check if the processor is Qwen2VLProcessor
image_tokens = [151652, 151653, 151655] # Specific image token IDs for Qwen2VLProcessor
else:
image_tokens = [processor.tokenizer.convert_tokens_to_ids(processor.image_token)] # Convert image token to ID
# Mask image token IDs in the labels
for image_token_id in image_tokens:
labels[labels == image_token_id] = -100 # Mask image token IDs in labels
return batch, labels
# Main function
def main():
config = Config()
# Load model and processor
logger.info("Loading model and processor...")
hf_config = AutoConfig.from_pretrained(
model_name,
revision=revision,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
with torch.device("meta"):
model = AutoModelForVision2Seq.from_config(hf_config, torch_dtype=torch.bfloat16)
mp_policy=MixedPrecisionPolicy(param_dtype=torch.bfloat16,
reduce_dtype=torch.bfloat16,
output_dtype=torch.bfloat16,
cast_forward_inputs=True)
offload_policy = CPUOffloadPolicy(pin_memory=False)
# apply FSDP2
device_mesh = init_device_mesh("cuda", (world_size,))
for module in model.modules():
if isinstance(module, Qwen2VLDecoderLayer):
fully_shard(module,
mesh=device_mesh,
reshard_after_forward=True,
mp_policy=mp_policy,
# offload_policy=offload_policy,
)
model = fully_shard(model,
mesh=device_mesh,
reshard_after_forward=True,
mp_policy=mp_policy,
# offload_policy=offload_policy,
)
model.to_empty(device='cuda')
model_state_dict = model.state_dict()
model_dir = "/cache/fsdp_test/72B_8_files"
# load qwen2-vl model
dcp.load(
state_dict=model_state_dict,
checkpoint_id=model_dir,
planner=DefaultLoadPlanner(allow_partial_load=True),
)
model = model.to(torch.bfloat16).cuda()
# Load dataset
logger.info("Loading dataset...")
train_dataset, eval_dataset, test_dataset = load_dataset(
config.dataset_id, split=['train[:10%]', 'val[:10%]', 'test[:10%]'])
train_dataset = [format_data(sample) for sample in train_dataset]
train_dataloader = torch.utils.data.DataLoader(
train_dataset,
batch_size=1,
collate_fn=collate_fn,
shuffle=True,
)
# Optimizer
optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
# Create output directory
os.makedirs(config.output_dir, exist_ok=True)
# Train
logger.info("Starting training...")
train_model(model, train_dataloader, optimizer, config)
if __name__ == "__main__":
main()
destroy_process_group()
logger.info("Training completed.")
```
### Running command
`torchrun --nnodes=2 --nproc_per_node=8 qwenvl_train_fsdp.py`
`torchrun --nnodes=4 --nproc_per_node=8 qwenvl_train_fsdp.py`
`torchrun --nnodes=8 --nproc_per_node=8 qwenvl_train_fsdp.py`
### GPU memory monitor
The following are screenshots of `nvidia-smi` output:
16 GPU: (screenshot omitted)
32 GPU: (screenshot omitted)
64 GPU: (screenshot omitted)
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu
| true
|
3,012,059,868
|
[Observability][Optimus] Fix the tlparse name
|
mengluy0125
|
open
|
[
"fb-exported",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Summary:
This is a follow-up to PR https://github.com/pytorch/pytorch/pull/151635
As suggested by James Wu, we need to remove the space from the name.
Test Plan:
```
TORCH_TRACE=~/my_trace_log_dir CUDA_VISIBLE_DEVICES=5 buck2 run mode/opt aps_models/ads/ecosystem/tooling/tools/efficient_module_suite/benchmark:omnifm_perf_benchmark -- benchmark-with-prod-model --prod_config mast_omnifm_v1-5_mwb --prod_config_override prod_config_override_jointarch --batch_size 1024 --enable_pt2 True --mfu_profile_module uniarch_layers.0.seq_summarization_layer.seq_summarization_modules.fb_feed_vpv_p90
```
tlparse link: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpyYiYdN/dedicated_log_torch_trace_vz90m_xo.log/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
Differential Revision: D73459951
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,012,041,979
|
`torch.bmm` is slow on non-contiguous BF16 CPU tensors
|
ar0ck
|
open
|
[
"module: performance",
"module: cpu",
"triaged",
"module: bfloat16",
"module: half",
"matrix multiplication"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I observe that `torch.bfloat16` is ~4x slower than `torch.float32` on the following benchmark:
```python
import torch
from torch.utils.benchmark import Compare, Timer
B = 1 << 6
N = 1 << 12
results = []
for dtype in (torch.float32, torch.bfloat16):
x = torch.randn(B, 1, N, dtype=dtype)
y = torch.randn(1, N, N, dtype=dtype).expand(B, N, N)
results.append(Timer(
"torch.bmm(x, y)",
globals={"x": x, "y": y},
description=str(dtype),
).timeit(1))
Compare(results).print()
```
On my machine, this produces:
```
[--------------------------- ---------------------------]
| torch.float32 | torch.bfloat16
1 threads: -----------------------------------------------
torch.bmm(x, y) | 209.1 | 855.3
Times are in milliseconds (ms).
```
Things which "fix" this include:
- making `y` contiguous
- using `device="cuda"`
- using `torch.compile`
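For comparison, a variant of the benchmark above (not from the original report) that exercises the first workaround by materializing `y` before the `bmm`; note the copy cost is included in the timed statement:
```python
import torch
from torch.utils.benchmark import Compare, Timer

B, N = 1 << 6, 1 << 12  # same shapes as the benchmark above
results = []
for dtype in (torch.float32, torch.bfloat16):
    x = torch.randn(B, 1, N, dtype=dtype)
    y = torch.randn(1, N, N, dtype=dtype).expand(B, N, N)
    results.append(Timer(
        "torch.bmm(x, y.contiguous())",  # contiguous copy timed together with the bmm
        globals={"x": x, "y": y},
        description=f"{dtype} (contiguous copy)",
    ).timeit(1))
Compare(results).print()
```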
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250422+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 20 MiB (16 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] torch==2.8.0.dev20250422+cpu
[conda] Could not collect
```
cc @msaroufim @jerryzh168 @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
3,012,033,982
|
[CI] Remove protobuf from docker image
|
clee2000
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Pretty sure the source should be the one in third-party
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,011,993,116
|
Exporting the operator 'aten::lift_fresh' to ONNX - not supported
|
kraza8
|
open
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Hi,
Attempting to export this model to onnx. Export keeps failing:
**torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::lift_fresh' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.**
Tried with lower and higher opset versions, same failure.
**Code:**
```python
wrapperpolicy = PolicyWrapper(policy)
dummy_obs = torch.randn(56, 2, 20, dtype=torch.float32)
dummy_mask = torch.ones(56, 2, 20, dtype=torch.bool)
onnx_model_path = "/proj/work/kraza/git/diffusion_policy_cnn_stanford/DynamoWrapperDiffusionPolicyCNN.onnx"
print("starting onnx export\n\n")
torch.onnx.export(wrapperpolicy, (dummy_obs, dummy_mask), onnx_model_path, opset_version=17, input_names=["input_node"], output_names=["output_node"], report=True, export_params=True)
```
**Error:**
```
:: # /proj/work/kraza/git/diffusion_policy_cnn_stanford/diffusion_policy/diffusion_policy/policy/diffusion_unet_lowdim_policy.py:165:0
return (%1270965)
Traceback (most recent call last):
File "/proj/work/kraza/git/diffusion_policy_cnn_stanford/diffusion_policy/eval_dynamo_mlir_compile_v2_onnx_wrapper.py", line 105, in <module>
main()
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/proj/work/kraza/git/diffusion_policy_cnn_stanford/diffusion_policy/eval_dynamo_mlir_compile_v2_onnx_wrapper.py", line 82, in main
torch.onnx.export(wrapperpolicy, (dummy_obs, dummy_mask), onnx_model_path, opset_version=17, input_names=["input_node"], output_names=["output_node"], report=True, export_params=True)
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/torch/onnx/__init__.py", line 383, in export
export(
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 495, in export
_export(
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1428, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1057, in _model_to_graph
graph = _optimize_graph(
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 632, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/proj/work/kraza/git/mlir-compiler2/mlir-compiler/venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1709, in _run_symbolic_function
raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::lift_fresh' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
```
### Versions
**Environment:**
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (GCC) 13.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1019-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GH200 480GB
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: ARM
Model name: Neoverse-V2
Model: 0
Thread(s) per core: 1
Core(s) per socket: 72
Socket(s): 1
Stepping: r0p0
Frequency boost: disabled
CPU max MHz: 3375.0000
CPU min MHz: 81.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache: 4.5 MiB (72 instances)
L1i cache: 4.5 MiB (72 instances)
L2 cache: 72 MiB (72 instances)
L3 cache: 114 MiB (1 instance)
NUMA node(s): 9
NUMA node0 CPU(s): 0-71
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):
NUMA node4 CPU(s):
NUMA node5 CPU(s):
NUMA node6 CPU(s):
NUMA node7 CPU(s):
NUMA node8 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.3
[pip3] onnx==1.17.0
[pip3] onnxruntime-training==1.20.0+cpu
[pip3] onnxscript==0.2.3
[pip3] torch==2.6.0
[pip3] torchvision==0.13.1
[conda] Could not collect
| true
|
3,011,871,773
|
[inductor] fix lowering for cummin, cummax for one element tensors
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151931
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/issues/151738
| true
|
3,011,847,417
|
Add operator name to the size/strides/alignment assertion
|
shunting314
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"internal ramp-up task"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
_No response_
### Error logs
_No response_
### Versions
Inductor generates size/stride/alignment assertions for `ir.FallbackKernel` (e.g. the output of a custom op).
Those assertions look like this:
```
assert_size_stride(buf2, (16, 32), (32, 1))
assert_alignment(buf2, 16)
```
The assertion does not contain the name of the op concerned. It would be convenient if Inductor added the op name to the assertion's error message.
This test case can be a good starting point:
```
python test/inductor/test_torchinductor.py -k test_custom_op_1_cuda
```
It defines a custom op `torch.ops.test.foo`. We should include this op name in the assertion's error message.
Add `TORCH_LOGS=output_code` to see the Inductor-generated code.
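For anyone picking this up, a hedged sketch (using a hypothetical op name `mylib::foo`, not the `torch.ops.test.foo` defined in the test above) of a custom op whose compiled output goes through `ir.FallbackKernel` and therefore gets these assertions:
```python
import torch

# Hypothetical custom op for illustration; the referenced test defines its own.
@torch.library.custom_op("mylib::foo", mutates_args=())
def foo(x: torch.Tensor) -> torch.Tensor:
    return x.clone()

@foo.register_fake
def _(x: torch.Tensor) -> torch.Tensor:
    return torch.empty_like(x)

@torch.compile
def f(x: torch.Tensor) -> torch.Tensor:
    return foo(x) + 1  # foo lowers to a FallbackKernel with asserted outputs

print(f(torch.randn(16, 32)))
```
With `TORCH_LOGS=output_code`, the generated wrapper should show the `assert_size_stride` / `assert_alignment` calls for `foo`'s output.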
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519 who has asked for this a few times.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,011,814,244
|
[ROCm][CI] Update dockerfile to use centos9
|
ethanwee1
|
open
|
[
"module: rocm",
"open source",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Upstream contents of centos stream dockerfile into upstream dockerfile
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,011,810,740
|
Improve cache key graph printing performance
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Teach the graph printer to allow overriding how SymTypes (`SymInt`, `SymFloat`, `SymBool`) are printed, and use that to reuse the fast SymNode printing from `torch._inductor.utils.sympy_str()` so that computing the cache key is faster.
On my computer the repro from #151823 goes from 480s -> 80s (still terrible... but better).
Fixes #151823
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151928
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,792,848
|
[rocm6.4_internal_testing] Dockerfile swap
|
ethanwee1
|
closed
|
[
"oncall: distributed",
"module: rocm",
"release notes: releng",
"module: inductor",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
rocm6.4_internal_testing move contents of centos stream dockerfile into dockerfile
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,780,475
|
[Break XPU] generalize newly introduced device bias code in Inductor UT.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151926
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,774,747
|
[feature request][AOTI] Expand check input assertions to cover input guards created during compilation?
|
henrylhtsang
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Hi, I want to request the following feature. The purpose is for debugging. There is a chance it exists already, in which case please let me know.
# Feature
Add input assertions for guards for AOTI. Check for shapes at least.
# Problem
The following function expects the size of x to be divisible by 100.
```
import torch
def main():
class M(torch.nn.Module):
def forward(self, x):
y = x.reshape(100, -1).clone()
y = y + 1
return y
input1 = (torch.rand(100, device="cuda"),)
input2 = (torch.rand(2099, device="cuda"),)
inputs = [input1, input2]
model = M().cuda()
_ = model(*input1)
dynamic_shapes = {
"x": {0: torch.export.Dim.DYNAMIC},
}
ep = torch.export.export(model, input1, dynamic_shapes=dynamic_shapes, strict=False)
path = torch._inductor.aoti_compile_and_package(ep)
aot_model = torch._inductor.aoti_load_package(path)
for input in inputs:
_ = aot_model(*input)
print("done")
if __name__ == "__main__":
main()
```
Side note: to my surprise, AOTI runs **without errors**, even after adding flags like
```
TORCH_SHOW_CPP_STACKTRACES=1 TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCH_LOGS="output_code" PYTORCH_NO_CUDA_MEMORY_CACHING=1 CUDA_LAUNCH_BLOCKING=1
```
and even using compute-sanitizer.
But regardless, this kind of function is expected to fail. It fails in eager mode, it fails in torch.compile.
```
# eager
RuntimeError: shape '[100, -1]' is invalid for input of size 2099
# torch.compile
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_method reshape(*(FakeTensor(..., device='cuda:0', size=(s77,)), 100, -1), **{}): got RuntimeError("shape '[100, -1]' is invalid for input of size s77")
```
# Guards
If you look at tlparse, you can actually see the following guards:
```
**Eq(Mod(s35, 100), 0) | User Stack | Framework Stack**
Ne(100*((s35//100)), 0) | User Stack | Framework Stack
Eq((s35//100), 1) | User Stack | Framework Stack
Eq(Max(1, (s35//100)), 1) | User Stack | Framework Stack
Eq(s35, 100*((s35//100))) | User Stack | Framework Stack
100*((s35//100)) < 2147483648 | User Stack | Framework Stack
((s35//100)) + 99 < 2147483648 | User Stack | Framework Stack
s35 < 2147483648 | User Stack | Framework Stack
```
In fact, if we look at AOTI codegen, it has some input guards. Just the trivial ones:
```
AOTI_NOINLINE static void check_input_0(
AtenTensorHandle* input_handles
) {
ConstantHandle arg0_1 = ConstantHandle(input_handles[0]);
int32_t arg0_1_dtype;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_dtype(arg0_1, &arg0_1_dtype));
int32_t arg0_1_expected_dtype = aoti_torch_dtype_float32();
if (arg0_1_expected_dtype != arg0_1_dtype) {
std::stringstream ss;
ss << "input_handles[0]: unmatched dtype, "
<< "expected: " << arg0_1_expected_dtype << "(at::kFloat), "
<< "but got: " << arg0_1_dtype << "\n";
throw std::runtime_error(ss.str());
}
auto arg0_1_size = arg0_1.sizes();
if (arg0_1_size[0] < 2) {
std::stringstream ss;
ss << "input_handles[0]: dim value is too small at 0, "
<< "expected it to be >= 2, " << "but got: "
<< arg0_1_size[0] << "\n";
throw std::runtime_error(ss.str());
}
if (arg0_1_size[0] > 2147483647) {
std::stringstream ss;
ss << "input_handles[0]: dim value is too large at 0, "
<< "expected to be <= 2147483647, " << "but got: "
<< arg0_1_size[0] << "\n";
throw std::runtime_error(ss.str());
}
auto arg0_1_stride = arg0_1.strides();
if (1 != arg0_1_stride[0]) {
std::stringstream ss;
ss << "input_handles[0]: unmatched stride value at 0, "
<< "expected: 1, " << "but got: " << arg0_1_stride[0]
<< "\n";
throw std::runtime_error(ss.str());
}
int32_t arg0_1_device_type;
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_get_device_type(arg0_1, &arg0_1_device_type));
int32_t arg0_1_expected_device_type = 1;
if (arg0_1_expected_device_type != arg0_1_device_type) {
std::stringstream ss;
ss << "input_handles[0]: unmatched device type, "
<< "expected: " << arg0_1_expected_device_type << "1(cuda), "
<< "but got: " << arg0_1_device_type << "\n";
throw std::runtime_error(ss.str());
}
}
```
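As a point of comparison for the request, a hedged Python-side stopgap (not a PyTorch API) that re-checks the first tlparse guard, `Eq(Mod(s35, 100), 0)`, before calling the loaded package:
```python
# Hypothetical wrapper: re-validate the divisibility guard that the AOTI
# codegen above does not check, then forward to the compiled model.
def checked_aot_model(aot_model):
    def run(x):
        if x.shape[0] % 100 != 0:
            raise RuntimeError(
                f"guard Eq(Mod(s35, 100), 0) violated: got size(0)={x.shape[0]}"
            )
        return aot_model(x)
    return run
```
In the repro above, wrapping the result of `torch._inductor.aoti_load_package(path)` with `checked_aot_model` would make `input2` fail loudly instead of silently succeeding.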
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,011,743,615
|
Skip fuse attention on fp32 if not tf32
|
eellison
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151924
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,725,922
|
[pytorch][triton] flex attention fwd kernel with Q and K TMA loads
|
mandroid6
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary:
Device side TMA for flex_attention fwd kernel, Q K tensors
NOTE: V tensor TMA has numeric issues, to be updated in followup.
Test Plan:
Unit test:
```
buck test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:flex_attention -- test_tma_with_customer_kernel_options
```
https://www.internalfb.com/intern/testinfra/testrun/14355223891618726
Bench comparison:
- with TMA: 41.03
- without TMA (default best): 49.78
TMA shows a ~18% speed-up.
Differential Revision: D71082691
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,715,478
|
[StandaloneCompile] Autotune at compile time
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151922
* #151921
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,715,005
|
[MegaCache] Return None on no compilation
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151922
* __->__ #151921
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,670,828
|
Add precedence to the infix printing done by sympy_str.
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Add precedence to the infix printing done by sympy_str.
Without this change sympy_str will print the same string for both `a+b*(c+d)` and `(a+b)*(c+d)`.
While there I also cleaned up the printing for `-a` and `a - b`.
Added some tests.
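For concreteness, the two expressions from the description, built with plain sympy (the PR itself changes `torch._inductor.utils.sympy_str`; this is only to make the ambiguity easy to reproduce):
```python
import sympy

a, b, c, d = sympy.symbols("a b c d")
e1 = a + b * (c + d)
e2 = (a + b) * (c + d)
# A precedence-unaware infix printer renders both with the same token
# sequence; sympy's own printer keeps them distinct.
print(e1)  # a + b*(c + d)
print(e2)  # (a + b)*(c + d)
```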
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151920
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,660,204
|
[WIP]: track remaining runtime asserts for backward codegen instead of trying to regenerate all
|
laithsakka
|
open
|
[
"release notes: fx",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151919
Addresses https://github.com/pytorch/pytorch/issues/151879.
Runtime assertion code generation tracks defined unbacked symints; once all the unbacked symints that a given assert depends on have been seen (defined), the runtime assertion is emitted.
Before this PR, unbacked symints that are inputs to the graph were not detected as defined, so assertions depending on them were never emitted.
One issue with the fix is emitting runtime assertions for the backward graph. Before this PR, the backward pass would try to regenerate all the assertions again; ignoring input-defined unbacked symints acted as a proxy to avoid regenerating assertions that should already have been emitted in the forward (based on the assumption that the unbacked symint comes from a forward output and would therefore already have been defined).
However, now that I removed that check, we start failing. I could say "for backward, do not consider input-defined unbacked symints", but that sounds risky and incomplete: what if an assertion depends on both a forward `.item()` call and a backward `.item()` call?
The proposed fix is:
(1) when we finish forward codegen, store the remaining runtime assertions and only try to emit those in backward;
(2) after backward, ensure all runtime assertions have been emitted.
My only concern is whether caching the forward but not the backward can interfere here: if we cache-hit the forward and cache-miss the backward, we won't know what remains to be emitted.
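For context, a hedged minimal example (my own, not the repro from #151879) of the kind of user code that produces such deferred runtime assertions: an input-dependent `.item()` value checked with `torch._check`.
```python
import torch

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(fullgraph=True)
def f(x, n):
    u = n.item()          # unbacked symint
    torch._check(u >= 2)  # becomes a runtime assertion in the generated code
    return x * u

print(f(torch.randn(8), torch.tensor(4)))
```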
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,657,528
|
Inductor generates wrong code for `torch.embedding`
|
YouJiacheng
|
open
|
[
"triaged",
"oncall: pt2",
"module: decompositions",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Without `torch.compile`, negative indices will trigger ```/pytorch/aten/src/ATen/native/cuda/Indexing.cu:1369: indexSelectSmallIndex: block: [0,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.``` on CUDA, and `IndexError: index out of range in self` on CPU.
With `torch.compile`, negative indices will be viewed as indices starting from the end of the array (behaves like `w[idx]`).
Interestingly, the gradient for `-1` (which is the placeholder for `padding_idx`) will be zeroed.
```python
import torch
from torch import Tensor
from torch._logging._internal import trace_structured # type: ignore
import torch._inductor.codecache
import torch._inductor.graph
def _patched_trace_structured(name, metadata_fn, **kwargs):
if name == "inductor_output_code":
print(f"inductor_output_code: {metadata_fn().get('filename', 'Unknown')}")
trace_structured(name, metadata_fn, **kwargs)
torch._inductor.codecache.trace_structured = _patched_trace_structured # type: ignore
torch._inductor.graph.trace_structured = _patched_trace_structured # type: ignore
@torch.compile
def foo(w: Tensor, idx: Tensor):
return torch.embedding(w, idx)
dtype = torch.float32 # bfloat16 also has this problem
with torch.device("cuda"):
w = torch.zeros(8, 1, dtype=dtype) + torch.arange(8)[:, None]
w.requires_grad_(True)
idx = torch.arange(8) - 4
out = foo(w.detach(), idx)
out_w_grad = foo(w, idx)
out_w_grad.backward(torch.ones_like(out))
print(out)
print(out_w_grad)
print(w.grad)
```
The output:
```
inductor_output_code: /tmp/torchinductor_root/ik/cik6xiyvgk4l6qrmtzhstl7y2rii35e6ypnrlerwhzu6sy2l6vmx.py
inductor_output_code: /tmp/torchinductor_root/g7/cg7qb4d3yn5u7hzorvswvawpbul4dcyenxexcd3jxq3lavvuhai6.py
inductor_output_code: /tmp/torchinductor_root/gz/cgzm7ubuxpihujm7bnkl2jknrcqvufyv64gu6foclevh3yx5gamn.py
tensor([[4.],
[5.],
[6.],
[7.],
[0.],
[1.],
[2.],
[3.]], device='cuda:0')
tensor([[4.],
[5.],
[6.],
[7.],
[0.],
[1.],
[2.],
[3.]], device='cuda:0', grad_fn=<CompiledFunctionBackward>)
tensor([[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[0.]], device='cuda:0')
```
The backward code generated by Inductor:
```python
triton_poi_fused_embedding_dense_backward_1 = async_compile.triton('triton_poi_fused_embedding_dense_backward_1', '''
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.pointwise(
size_hints={'x': 8},
filename=__file__,
triton_meta={'signature': {'in_ptr0': '*i64', 'in_ptr1': '*fp32', 'out_ptr0': '*fp32', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=132, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_embedding_dense_backward_1', 'mutated_arg_names': ['out_ptr0'], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 2, 'num_reduction': 0, 'backend_hash': 'A0D3A2B50857E9501D843044B01F725922648D76E6D26323B14F8A4EA4473D1B', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_embedding_dense_backward_1(in_ptr0, in_ptr1, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 8
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tmp8 = tl.load(in_ptr1 + (x0), xmask)
tmp1 = tl.full([XBLOCK], 8, tl.int32)
tmp2 = tmp0 + tmp1
tmp3 = tmp0 < 0
tmp4 = tl.where(tmp3, tmp2, tmp0)
tl.device_assert(((0 <= tmp4) & (tmp4 < 8)) | ~(xmask), "index out of bounds: 0 <= tmp4 < 8")
tmp6 = tl.full([1], -1, tl.int64)
tmp7 = tmp0 == tmp6
tmp9 = 0.0
tmp10 = tl.where(tmp7, tmp9, tmp8)
tl.atomic_add(out_ptr0 + (tl.broadcast_to(tmp4, [XBLOCK])), tmp10, xmask, sem='relaxed')
''', device_str='cuda')
```
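For contrast, a minimal eager-mode check (same `w`/`idx` construction as the repro, CPU for simplicity) showing that without `torch.compile` the negative indices raise instead of wrapping around:
```python
import torch

w = torch.zeros(8, 1) + torch.arange(8)[:, None]
idx = torch.arange(8) - 4
try:
    torch.embedding(w, idx)
except (IndexError, RuntimeError) as e:
    print("eager raised:", e)  # "index out of range in self" on CPU
```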
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 11 2025, 17:26:57) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-74
Off-line CPU(s) list: 75-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,011,618,840
|
Reland fast gather and index implementation
|
ngimel
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/inductor",
"ci-no-td"
] | 5
|
COLLABORATOR
|
This PR reapplies #151490 and #151753 together, and adds some missing checks when applying the fast path.
Previously missed checks:
1) The indexing path has the stride in the indexed dimension in bytes, while the gather path has the stride in the indexed dimension in elements. When checking whether the fast path is applicable, I didn't take this difference into account and still multiplied the indexing stride by the element size. Fixed, and a test added.
2) We want to take the fast path only when we are copying contiguous, equally spaced slices of the input and all the necessary alignment requirements are met. The effective tensor size should be 2D (after all possible flattening is applied), the index stride in the last dimension should be 0, and, since the kernel does not apply non-indexing-related offsets to the src tensor, the src tensor stride in the second dimension should be 0. This automatically happens for gather with dim=0, so I didn't put in an explicit condition for it. Sometimes all conditions except the first-dim "effective" stride being equal to 0 are satisfied for gather on a non-zero dim, when the index size in the indexing dimension is 1 and is thus collapsed (dimensions of size 1 are always collapsed), e.g.
```
# test gather along 1st dim that can accidentally trigger fast path
# because due to index dimension in the gather dim being 1
# an unexpected squashing in tensorIterator happens
src = make_tensor((16, 2, 16), device=device, dtype=dtype)
ind = torch.randint(2, (16, 1), device=device).view(16, 1, 1).expand(16, 1, 16)
res = torch.gather(src, dim=1, index=ind)
if res.device.type == "cuda":
ref_cpu = torch.gather(src.cpu(), dim=1, index=ind.cpu())
self.assertEqual(res.cpu(), ref_cpu, atol=0, rtol=0)
```
Note that if index size here was (16, 2, 16) instead of (16, 1, 16) then the middle dimension could not be collapsed and we wouldn't end up incorrectly taking fast path.
We could update the kernel to take this stride into account when computing offsets into src tensor, or we could specifically disallow non-zero stride on the first dimension. I took the second path for now.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,011,558,696
|
[WIP] Deprecate AcceleratorHooksInterface isPinnedPtr, use at::getHostAllocator()->is_pinned instead
|
guangyey
|
open
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151531
* __->__ #151916
* #151913
| true
|
3,011,497,088
|
Deprecated pkg_resources and use distributions instead
|
FFFrog
|
open
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151915
As the title states.
| true
|
3,011,480,148
|
[OpenReg] Add _lazy_init and rng_state support for OpenReg
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151914
As the title states.
**Changes**:
- Add get_rng_state & set_rng_state support for OpenReg
- Add _lazy_init support for OpenReg
- Remove redundant code for cuda/Module.cpp
| true
|
3,011,401,982
|
Add MPS support for getHostAllocator API
|
guangyey
|
open
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151531
* #151916
* __->__ #151913
# Motivation
In https://github.com/pytorch/pytorch/pull/151431, PyTorch provides a unified API `at::getHostAllocator(...)` that facilitates writing device-agnostic code. This PR aims to add MPS support for `at::getHostAllocator` API.
# Additional Context
The following APIs are not supported on `at::getHostAllocator(at::kMPS)` yet:
- `at::getHostAllocator()->record_event`
- `at::getHostAllocator()->get_stats`
- `at::getHostAllocator()->reset_accumulated_stats`
- `at::getHostAllocator()->reset_peak_stats`
| true
|
3,011,150,732
|
bfloat16 numerical errors for SDPA math backend
|
AmdSampsa
|
open
|
[
"triaged",
"oncall: pt2"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
## Faulty triton kernel in the math backend
SDPA attention for `bfloat16` has numerical errors (as compared to eager mode), when using the math backend (`SDPBackend.MATH`).
Comparing the results between eager mode and `torch.compile` gives a maximum abs difference of `0.0156`. I've confirmed this both for MI300 and A100.
For a minimum repro, please see [this gist](https://gist.github.com/AmdSampsa/ea1d8a6daddcc4d9bdf8fbf34d9b25cc).
When looking at the triton code produced by `torch.compile` and for the math backend, one finds this suspicious kernel:
```python
def triton_poi_fused_add_4(in_ptr0, in_ptr1, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 196608
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = tl.full([XBLOCK], True, tl.int1)
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), None).to(tl.float32)
tmp1 = tl.load(in_ptr1 + (x0), None)
tmp2 = tmp1.to(tl.float32)
tmp3 = tmp0 + tmp2
tl.store(out_ptr0 + (x0), tmp3, None)
```
I've traced the numerical errors to this kernel, and they are likely caused by the casts to `tl.float32` even though we are computing in `bfloat16`.
As a side note, `SDPBackend.FLASH_ATTENTION` works fine.
## Defaulting to math backend
I also find this weird behaviour (when using the provided repro):
`rm -rf /tmp/torchinductor_root && python sdpa_repro.py --flash`
-> **OK**
`rm -rf /tmp/torchinductor_root && python sdpa_repro.py --math`
-> **ERRORS**
`rm -rf /tmp/torchinductor_root && python sdpa_repro.py --math && python sdpa_repro.py --flash`
-> **ERRORS IN BOTH CASES!**
So it seems that SDPA defaults to the math backend if it has been used earlier, even when we try to force the FA backend!
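For reference, a hedged sketch of how a single backend is typically forced for this kind of eager-vs-compile comparison (the linked gist may do it differently):
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.bfloat16)
           for _ in range(3))

def attn(q, k, v):
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)

compiled_attn = torch.compile(attn)
with sdpa_kernel(SDPBackend.MATH):  # or SDPBackend.FLASH_ATTENTION
    eager_out = attn(q, k, v)
    compiled_out = compiled_attn(q, k, v)
print((eager_out - compiled_out).abs().max())
```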
## Summary
- Found numerical errors in the math backend for bfloat16 because of a dodgy triton kernel produced by inductor
- Forcing the flash attention SDPA backend doesn't work if the math backend has previously been used; to get rid of this persistence, one must clear the inductor cache
- Of course, it's another matter whether the math backend is even relevant for SDPA as of today
## Related
- maybe/kinda: https://github.com/pytorch/pytorch/issues/139352
### Error logs
_No response_
### Versions
```bash
Collecting environment information...
PyTorch version: 2.7.0a0+ecf3bae40a.nv25.02
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 30%
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 48 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+git0d4682f0b.nvinternal
[pip3] torch==2.7.0a0+ecf3bae40a.nv25.2
[pip3] torch_geometric==2.5.3
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0
[conda] Could not collect
```
cc @chauhang @penguinwu
| true
|
3,011,092,138
|
Enable type promotions in slice_scatter (pytorch#147842)
|
tommyadams5
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #147842
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,006,903
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_256_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_256_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40921259228).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_256_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1219, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 864.12 MiB is free. Including non-PyTorch memory, this process has 21.19 GiB memory in use. Of the allocated memory 6.78 GiB is allocated by PyTorch, and 14.15 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_256_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,006,767
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40921259228).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1219, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 718.12 MiB is free. Including non-PyTorch memory, this process has 21.34 GiB memory in use. Of the allocated memory 6.84 GiB is allocated by PyTorch, and 14.23 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE3_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,006,617
|
DISABLED test_captured_buffers_all_dims_float32_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_captured_buffers_all_dims_float32_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40921259228).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_captured_buffers_all_dims_float32_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,011,006,465
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod0_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod0_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40920556633).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod0_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,010,943,907
|
Exempt overriding methods from docstring_linter (fix #151692)
|
rec
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152256
* #148959
* __->__ #151906
| true
|
3,010,790,742
|
Fix dependencies version error when building docs locally
|
dangnguyen2910
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
NONE
|
Fixes #151786
This PR downgrades the versions of 2 dependencies, which allows `pip install -r docs/requirements.txt` to succeed
| true
|
3,010,771,607
|
[graph pickler] [inductor compile async] imprecise filter for non standard op?
|
ChuanqiXu9
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In https://github.com/pytorch/pytorch/blob/159e2f96e3d7aec0187b03c483e177c5e96b1ebb/torch/fx/_graph_pickler.py#L410-L413
there is a check for `torch.ops.aten` and the pickler will raise an exception saying it is not picklable.
This is slightly confusing to me. I would expect that if something is not picklable, pickling it would simply fail on its own. In practice, this check prevents my demo from working, and everything works fine after I comment the check out. At the very least, the condition seems incomplete to me.
Reproducer:
```
import torch
import json
from transformers import BertModel, BertConfig
CONFIG = """
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.6.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
"""
config = json.loads(CONFIG)
bloom_config = BertConfig(**config)
model = BertModel(bloom_config).half().cuda()
vocab_size = 30522
torch.compiler.reset()
torch.cuda.empty_cache()
warmup = torch.randint(0, vocab_size, (2, 3)).cuda()
warmup_attention_mask = torch.ones(2, 3).cuda()
torch._dynamo.mark_dynamic(warmup, (0, 1))
torch._dynamo.mark_dynamic(warmup_attention_mask, (0, 1))
compiled_fn = torch.compile(model)
with torch.no_grad():
compiled_fn(warmup, warmup_attention_mask)
```
To observe the failure, we also need to raise or at least inspect the exception here: https://github.com/pytorch/pytorch/blob/159e2f96e3d7aec0187b03c483e177c5e96b1ebb/torch/_inductor/compile_fx_ext.py#L484-L491
otherwise execution may continue unexpectedly and silently.
Then we need to compile the reproducer with compile async mode:
```
TORCHINDUCTOR_FX_COMPILE_MODE=async+SUBPROCESS TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 python reproducer.py
```
### Versions
trunk 2.8+9c2ac2b876b1f74aaf7ed3fbfa04db8c973f4b52
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,010,632,970
|
[cp] Context Parallel: dispatch flex_attention to CP impl in TorchDispatchMode
|
XilunWu
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151903
* #151900
* #151685
* #151507
* #151495
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,010,632,843
|
[cp] cast tensor to DTensor for flex_attention
|
XilunWu
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151903
* __->__ #151902
* #151900
* #151685
* #151497
* #151507
* #151495
| true
|
3,010,531,506
|
Issue while Building the Documentation
|
rohit-kumar-manav
|
open
|
[
"module: build",
"module: docs",
"triaged"
] | 1
|
NONE
|
### 📚 The doc issue
Trying to build the documentation following the link below.
https://github.com/pytorch/pytorch#from-source
Steps mentioned:
1. cd docs/
2. pip install -r requirements.txt
3. make html
4. make serve
Getting the following error in step 2, attached below:

@ezyang @svekars
### Suggest a potential alternative/fix
_No response_
cc @malfet @seemethere @svekars @sekyondaMeta @AlannaBurke
| true
|
3,010,513,141
|
[cp] dispatch flex_attention on DTensor to cp implementation
|
XilunWu
|
open
|
[
"oncall: distributed",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151900
* #151495
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,010,508,314
|
[BE] Upgrade XPU support package to 2025.1 in CICD
|
chuanqi129
|
open
|
[
"open source",
"release notes: releng",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
Address #151097. Including below changes,
- Add XPU support package 2025.1 build and test in CI for both Linux and Windows
- Keep the XPU support package 2025.0 build in CI to ensure nothing breaks until the PyTorch 2.8 release
- Upgrade XPU support package from 2025.0 to 2025.1 in CD for both Linux and Windows
- Enable XCCL in the Linux CD wheel and oneMKL integration on both Linux and Windows
- Update XPU runtime pypi packages of CD wheels
- Remove deprecated support package docker image build
| true
|
3,010,430,112
|
[XPU] Get [ZE]: 0x78000011 on torch.compile with new driver
|
Stonepia
|
closed
|
[
"triaged",
"module: xpu"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The latest Intel GPU Driver introduces breaking changes that may lead to `torch.compile` failure with the error message of `Triton Error [ZE]: 0x78000011` on Windows.
For instance, one might fail in the following cases:
- The `imf` functions were renamed, which means Triton can no longer find the former libdevice name and raises a load failure. For example: https://github.com/intel/intel-xpu-backend-for-triton/pull/3936
- `select_scatter` can produce errors, and it may fail when interacting with the `slice` op. For example: https://github.com/intel/intel-xpu-backend-for-triton/issues/3916
The Intel GPU driver team is currently working on a fix. Until it lands, the failure can be mitigated by rolling back to the prior driver version. Please see the `Solution` section below for details.
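For reference, a minimal hedged sketch of the kind of `torch.compile` run that exercises the Triton-on-XPU path (this assumes an XPU build of PyTorch on Windows and is illustrative only, not the exact workload from the reports above):
```
import torch

# Illustrative only: any compiled elementwise kernel goes through the Triton XPU
# backend, which is where the ZE 0x78000011 load failure surfaces on affected drivers.
@torch.compile
def fn(x):
    return torch.nn.functional.relu(x) + 1.0

x = torch.randn(1024, device="xpu")
print(fn(x).sum())
```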
### Error logs
Here is an example of the error message.
```Bash
Traceback (most recent call last):
"C:\Users\user\AppData\Local\Temp\torchinductor_user\7n\c7nlelwrngcnizvcufhtqm7gds4sbaacn7qzxwal23rzxxt34efu.py", line 49, in <module>
triton_poi_fused_0 = async_compile.triton('triton_poi_fused_0', '''
...
File "F:\miniforge\envs\nightly\lib\site-packages\triton\compiler\compiler.py", line 422, in _init_handles
self.module, self.function, self.n_regs, self.n_spills = driver.active.utils.load_binary(
File "F:\miniforge\envs\nightly\lib\site-packages\triton\backends\intel\driver.py", line 208, in load_binary
return self.shared_library.load_binary(args)
RuntimeError: Triton Error [ZE]: 0x78000011
```
### Solution
A temporary solution is to roll back to the previous version of the driver.
#### Affected Driver
We have only observed this issue on Windows. Driver versions `32.0.101.6734` and `32.0.101.6737` are affected.
#### Rollback method
You can download the previous driver from the [Intel® Arc™ & Iris® Xe Graphics - Windows*](https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html) page; a prior version such as `32.0.101.6647` works.
Please click on the "perform a clean installation" checkbox when installing.
Also note that if you hit other Triton-on-XPU issues on Windows, it might be because of the `LEVEL_ZERO_V1_SDK_PATH` setting. With the old driver, this variable needs to be set manually. Please refer to [Windows.md](https://github.com/intel/intel-xpu-backend-for-triton/blob/main/.github/WINDOWS.md#level-zero-sdk) for details.
Normally, it requires two additional steps:
1. Download level-zero-win-sdk-*.*.*.zip from https://github.com/oneapi-src/level-zero/releases and extract it somewhere, like `C:\level_zero`.
2. Set an environment variable. For example, if you are using PowerShell, use `$env:LEVEL_ZERO_V1_SDK_PATH = "C:\level_zero"`.
Then you could run normally.
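A small sanity-check sketch (the path in the message is illustrative) that can be run before a `torch.compile` workload to catch a missing `LEVEL_ZERO_V1_SDK_PATH` early:
```
import os

# Assumption: this only verifies the environment variable described above;
# it does not change how Triton locates the SDK.
sdk_path = os.environ.get("LEVEL_ZERO_V1_SDK_PATH")
if not sdk_path or not os.path.isdir(sdk_path):
    raise RuntimeError(
        "Set LEVEL_ZERO_V1_SDK_PATH to the extracted Level Zero SDK, e.g. C:\\level_zero"
    )
```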
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,010,404,620
|
Add an additional check to trigger graph break for sparse tensor
|
js00070
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 20
|
CONTRIBUTOR
|
Fixes #151522
This PR fixes an issue where Dynamo failed to trigger a graph break for sparse tensors in certain code paths. I added an additional check to handle this case, which resolves the original problem.
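For context, a hedged sketch of the kind of code path involved (illustrative, not the PR's test case): Dynamo is expected to graph-break on the sparse tensor and fall back to eager rather than failing while tracing.
```
import torch

# Illustrative: sum() on a sparse COO tensor; with the extra check, Dynamo
# graph-breaks here and the frame runs in eager mode.
@torch.compile(backend="eager")
def fn(x):
    return x.sum()

sp = torch.tensor([[0.0, 1.0], [2.0, 0.0]]).to_sparse()
print(fn(sp))
```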
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,010,182,920
|
Avoid differing results in `linalg.(tensor_)solve`
|
Flamefire
|
open
|
[
"triaged",
"module: mkldnn",
"open source",
"ciflow/trunk",
"release notes: nn",
"ciflow/linux-aarch64"
] | 7
|
COLLABORATOR
|
Remove an optimization potentially using a transposed matrix as input for `linalg_lu_factor_ex_out`.
Depending on whether the input memory layout is contiguous, this may lead to slightly different results, which can cause larger differences in subsequent steps and ultimately test failures in e.g. `test_vmapvjp_linalg_tensorsolve_cpu_float32` & `test_vmapvjpvjp_linalg_tensorsolve_cpu_float32`.
The intended optimization no longer applies after 59bc76f so this code can be removed too resolving the accuracy issues observed in those tests.
With this change the code path used for the "regular" and "vmap" cases are identical: A batched tensor is iterated over in the batch dimension in [lu_solve](https://github.com/pytorch/pytorch/blob/f84062f78d723543e4962ffa1d38dcf42947e3f1/aten/src/ATen/native/BatchLinearAlgebraKernel.cpp#L1009) and [lu_factor](https://github.com/pytorch/pytorch/blob/f84062f78d723543e4962ffa1d38dcf42947e3f1/aten/src/ATen/native/BatchLinearAlgebraKernel.cpp#L955)
Prior to this it might not be the case as either tensor would/could have been non-contiguous leading to using a transposed tensor for the LU factorization instead.
So the (CPU) results should now be identical.
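As a hedged illustration of that claim (shapes and values below are arbitrary, not taken from the test suite): solving with a contiguous input and with a non-contiguous view holding the same values should now give bitwise-identical CPU results.
```
import torch

torch.manual_seed(0)
a = torch.randn(8, 8)
b = torch.randn(8, 3)
# Same values as `a`, but stored in a non-contiguous (column-major-like) layout.
a_noncontig = a.t().contiguous().t()
x1 = torch.linalg.solve(a, b)
x2 = torch.linalg.solve(a_noncontig, b)
print(torch.equal(x1, x2))
```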
Fixes #151440
Fixes #114868
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,010,140,950
|
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/nn_module.py`
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 15
|
CONTRIBUTOR
|
Part of #147913
Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/nn_module.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,010,057,266
|
Add `LinearLR` compute lr formula in doc
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
Fixes #68058
## Test Result
### Before

### After

| true
|
3,010,048,156
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE3_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE3_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40911525300).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE3_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,010,048,059
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40909504426).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1219, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 718.12 MiB is free. Including non-PyTorch memory, this process has 21.34 GiB memory in use. Of the allocated memory 6.84 GiB is allocated by PyTorch, and 14.23 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod3_BLOCK_SIZE2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,010,048,058
|
DISABLED test_skip_odd_keys_bfloat16_cuda_bfloat16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_skip_odd_keys_bfloat16_cuda_bfloat16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40909504426).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_skip_odd_keys_bfloat16_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1445, in test_skip_odd_keys
self.run_test(score_mod, dtype, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 509, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 808.12 MiB is free. Including non-PyTorch memory, this process has 21.25 GiB memory in use. Of the allocated memory 6.89 GiB is allocated by PyTorch, and 14.08 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_skip_odd_keys_bfloat16_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,010,047,232
|
DISABLED test_cublas_and_lt_reduced_precision_fp16_accumulate_cuda (__main__.TestMatmulCudaCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: cublas",
"skipped"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cublas_and_lt_reduced_precision_fp16_accumulate_cuda&suite=TestMatmulCudaCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40909504436).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cublas_and_lt_reduced_precision_fp16_accumulate_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_matmul_cuda.py", line 183, in test_cublas_and_lt_reduced_precision_fp16_accumulate
out = torch.baddbmm(a, b, c)
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
To execute this test, run the following from the base repo dir:
python test/test_matmul_cuda.py TestMatmulCudaCUDA.test_cublas_and_lt_reduced_precision_fp16_accumulate_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_matmul_cuda.py`
cc @clee2000 @csarofeen @ptrblck @xwang233 @eqy
| true
|
3,010,047,171
|
DISABLED test_builtin_score_mods_dynamic_float16_score_mask_mod7_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_dynamic_float16_score_mask_mod7_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40909504426).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_dynamic_float16_score_mask_mod7_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,009,950,458
|
Fix xrefs
|
shoumikhin
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"Merged",
"release notes: quantization",
"fx",
"release notes: distributed (torchelastic)"
] | 3
|
CONTRIBUTOR
|
Fix existing cross references and remove old ones
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad
| true
|
3,009,933,351
|
[Inductor][CPP] Fix Codegen Issue when Parallel Reduction under the vectorization
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151887
**Summary**
Fixes [#151290](https://github.com/pytorch/pytorch/issues/151290) and [#151523](https://github.com/pytorch/pytorch/issues/151523), which are regressions introduced by [#144020](https://github.com/pytorch/pytorch/pull/144020). That PR enabled parallelization at the inner loop level.
However, a currently unsupported case arises when parallel reduction occurs under the vectorization loop level, specifically in patterns like:
```
for vec_loop_level:
do_parallel_reduction
```
In such cases, a temporary buffer `tmp_acc_array` is allocated for tail scalar kernels, and another temporary buffer `tmp_acc_array` is also defined for parallel reduction. This results in a conflict due to overlapping temporary buffers. This PR disables the problematic case to avoid the conflict until proper support is implemented.
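For illustration, a hedged sketch of the shape of workload involved (this is not the exact repro from the linked issues): a CPU reduction that Inductor may both vectorize and parallelize.
```
import torch

# Illustrative only: a reduction over the last dimension compiled on CPU,
# the kind of kernel where vectorized and parallel reduction codegen interact.
@torch.compile
def reduce_fn(x):
    return (x * x).sum(dim=-1)

x = torch.randn(64, 4096)
print(reduce_fn(x).shape)
```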
**Test Plan**
```
python test/inductor/test_flex_attention.py -k test_make_block_mask_cpu
python test/inductor/test_cpu_repro.py -k test_parallel_reduction_vectorization
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,903,839
|
[Do Not Merge] update get start xpu
|
ZhaoqiongZ
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
update link and product name
add a print of the ```torch.xpu.is_available()``` result to the code snippet, for users who are not running the commands in an interactive Python session (see the sketch below)
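Roughly, the snippet change described above amounts to:
```
import torch

# Print the result explicitly so users running this as a script (rather than in
# an interactive session) still see whether XPU is available.
print(torch.xpu.is_available())
```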
| true
|
3,009,902,114
|
Remove as_tensor epilogue from float outputs
|
bobrenjc93
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151885
* #151766
Fixes #151470
This was originally added as a nicety - ensuring all graphs maintain
the invariant that outputs are tensors. This, however, breaks simple
graphs such as
```
def f(y):
y_scalar = y.item()
return y_scalar
```
since inductor only knows how to codegen tensors and sizevars (aka
ints), it ends up barfing when doing the codegen of this epilogue.
Notice in particular that for the following example, it doesn't correctly
codegen the float kernel argument.
```
cpp_fused_scalar_tensor_0 = async_compile.cpp_pybinding(['double*'], '''
extern "C" void kernel(double* out_ptr0)
{
{
{
{
auto tmp0 = zuf0;
auto tmp1 = c10::convert<double>(tmp0);
out_ptr0[static_cast<int64_t>(0L)] = tmp1;
}
}
}
}
''')
```
Since this is just a nicety, let's get rid of it for now until we
officially support floats in inductor. By doing this we stay in
Python when doing item() calls, resulting in the same behavior for eager vs.
compile:
```
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (), ())
zuf0 = arg0_1.item()
buf0 = None
del arg0_1
return (zuf0, )
```
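A hedged way to check that behavior (names here are illustrative, not the PR's test):
```
import torch

def f(y):
    return y.item()

y = torch.tensor(3.5)
eager_out = f(y)
compiled_out = torch.compile(f)(y)
# With the epilogue removed, both paths should return a plain Python float.
print(type(eager_out), type(compiled_out))
```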
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,009,842,421
|
The "eager" and "aot_eager" backends have different behavior for the expected gradient tensor of the torch.expend_as operator
|
yihuaxu
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 3
|
NONE
|
### 🐛 Describe the bug
It cannot get the expected grad of "expand_as" when using the "aot_eager" backend.
**Reproduce steps:**
1. install torch
$pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
2. run the below scripts:
python expand_as_ut_cpu.py
**Exception Result:**
Traceback (most recent call last):
File "/tmp/expand_as_ut_cpu.py", line 61, in <module>
assert((result_bwd_0[0] is None) == (result_bwd_1[0] is None))
AssertionError
**Basic Analysis:**
Using aot_dispatch_autograd_graph to create the fx graph via the trace method loses the relation between the "self" and "other" tensors (the "expand_as" operator is replaced with "expand" + "shape").
**Before:**
> /home/test/torch/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py(150)__call__()
-> compiled_gm = compiler_fn(gm, example_inputs)

**After:**
> /home/test/torch/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py(818)aot_dispatch_autograd()
-> disable_amp = torch._C._is_any_autocast_enabled()

### Error logs
_No response_
### Versions
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.8.0.dev20250421+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.8 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 100 MiB (80 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39,80-119
NUMA node1 CPU(s): 40-79,120-159
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.8.0.dev20250421+cpu
[pip3] torchaudio==2.6.0.dev20250421+cpu
[pip3] torchvision==0.22.0.dev20250421+cpu
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
3,009,803,440
|
[Easy] Remove redundant code
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151883
As the title stated.
| true
|
3,009,785,997
|
Add upload of rc source code to download.pytorch.org
|
zklaus
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Adds upload of pre-release source code to `create_release.yml`.
Closes #124759
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152098
* __->__ #151882
| true
|
3,009,771,264
|
Update description for torch.random.fork_rng
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151881
As the title stated.
Related ISSUE:
https://github.com/pytorch/pytorch/issues/151784
| true
|
3,009,733,122
|
[BE] Replace `std::runtime_error` with `TORCH_CHECK` [1/N]
|
shink
|
closed
|
[
"open source",
"module: amp (automated mixed precision)",
"Merged",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Part of: #148114
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @albanD
| true
|
3,009,725,255
|
Runtime assertion not generated in inductor for input unbacked symints
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
```
import torch
from torch._inductor.utils import fresh_inductor_cache  # needed for the `with fresh_inductor_cache():` block below
@torch.compile(fullgraph=True, dynamic=True, backend="inductor")
def func(a, b, c):
torch._check(a.size()[0]==b.size()[0])
return a * 10, c.item()
a = torch.rand(1,1)
b = torch.rand(2,2)
c = torch.tensor([2])
torch._dynamo.decorators.mark_unbacked(a, 0)
torch._dynamo.decorators.mark_unbacked(a, 1)
torch._dynamo.decorators.mark_unbacked(b, 0)
torch._dynamo.decorators.mark_unbacked(b, 1)
with fresh_inductor_cache():
func(a, b, c)
```
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,009,703,101
|
Potential error in calling scaled_dot_product_attention in pytorch/torch/testing/_internal/distributed/_tensor /common_dtensor.py ?
|
githubsgi
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor",
"module: context parallel"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Wondering whether the following call [here](https://github.com/pytorch/pytorch/blob/a02eae8142ddd8fbf068a3e17fc0dd276d92fc78/torch/testing/_internal/distributed/_tensor/common_dtensor.py#L155) is correct. The mask is set to `None` and `use_attn_mask` is set to `True`.
```
output = F.scaled_dot_product_attention(
    queries,
    keys,
    values,
    None,
    self.dropout_p if self.training else 0,
    self.use_attn_mask,
)
```
Per the documentation [here](https://github.com/pytorch/pytorch/blob/40cf49d4607cf59453193321986eac34a8fbaa93/torch/nn/functional.py#L5806) and [here](https://github.com/pytorch/pytorch/blob/40cf49d4607cf59453193321986eac34a8fbaa93/aten/src/ATen/native/transformers/attention.cpp#L718), there is no `use_attn_mask` argument for `scaled_dot_product_attention`.
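For reference, a hedged sketch of the documented signature: the sixth positional slot is `is_causal`, so a value passed there is interpreted as the causal flag rather than as enabling a mask (the tensors below are illustrative).
```
import torch
import torch.nn.functional as F

q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)

# Keyword arguments make the intent explicit; there is no `use_attn_mask` parameter.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=True)
print(out.shape)
```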
### Versions
The version is master, but it does not matter here.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu
| true
|
3,009,664,362
|
[dynamo][ci] Fix recently broken test
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151847
* __->__ #151877
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,009,660,009
|
[Inductor] define custom pass as list
|
Valentine233
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
Custom passes are defined as a `CustomGraphPassType` or an optional `Callable`. So for now, we can only register a single pass for each custom pass slot.
```
post_grad_custom_pre_pass: torch._inductor.custom_graph_pass.CustomGraphPassType = None
post_grad_custom_post_pass: torch._inductor.custom_graph_pass.CustomGraphPassType = None
joint_custom_pre_pass: Optional[Callable[[torch.fx.Graph], None]] = None
joint_custom_post_pass: Optional[Callable[[torch.fx.Graph], None]] = None
pre_grad_custom_pass: Optional[Callable[[torch.fx.graph.Graph], None]] = None
```
For example, if `joint_custom_pre_pass` is assigned twice, the later pass silently overwrites the previous one, without any warning (see the sketch below).
Hence, it would be better to extend each custom-pass hook to accept a list of passes.
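A minimal sketch of the silent-overwrite behavior described above (the pass bodies are placeholders; only the config attribute comes from Inductor):
```
import torch
import torch._inductor.config as inductor_config


def fuse_pass(graph: torch.fx.Graph) -> None:
    ...  # first custom joint pass


def layout_pass(graph: torch.fx.Graph) -> None:
    ...  # second custom joint pass


inductor_config.joint_custom_pre_pass = fuse_pass
inductor_config.joint_custom_pre_pass = layout_pass  # silently replaces fuse_pass
print(inductor_config.joint_custom_pre_pass is layout_pass)  # True, fuse_pass is gone
```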
### Versions
Pytorch: main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,009,647,075
|
[cutlass backend] delay construction of cutlass presets to when called
|
henrylhtsang
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151875
In hindsight, always constructing the dict is a bit silly. We should only construct it when we need it.
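A minimal sketch of the lazy-construction pattern this PR moves to (a generic illustration, not the actual cutlass-backend code):
```
import functools


@functools.lru_cache(maxsize=1)
def cutlass_presets() -> dict:
    # pretend this is expensive to build; it now only runs on the first call
    return {"default": ["config_a", "config_b"]}


# call sites ask for cutlass_presets()["default"] instead of reading an
# eagerly constructed module-level dict
print(cutlass_presets()["default"])
```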
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,644,647
|
The operator 'aten::_linalg_eigh.eigenvalues' is not currently implemented for the MPS device
|
PhysicsMizu
|
open
|
[
"triaged",
"enhancement",
"module: linear algebra",
"actionable",
"module: mps"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
I'm working on projects that require calculating eigenvalues repeatedly on an Apple Silicon device and would like `_linalg_eigh` to work for MPS devices.
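A common workaround sketch until MPS support lands: fall back to the CPU for the eigendecomposition and move the results back (assumes the matrix is made symmetric first, as `eigh` requires):
```
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.randn(4, 4, device=device)
sym = (x + x.mT) / 2                 # eigh expects a symmetric/Hermitian matrix
w, v = torch.linalg.eigh(sym.cpu())  # compute on CPU where eigh is implemented
w, v = w.to(device), v.to(device)    # move eigenvalues/eigenvectors back
print(w)
```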
### Alternatives
_No response_
### Additional context
_No response_
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
3,009,623,352
|
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/lists.py`
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 19
|
CONTRIBUTOR
|
Part of #147913
Replace `unimplemented` with`unimplemented_v2` in `torch/_dynamo/variables/lists.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,009,615,927
|
[Testing] Enable `test_mutations_loop_fusion_mps`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151872
* #151871
* #151869
By testing it against float32 rather than double dtype
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,613,384
|
[MPSInductor] Implement `atomic_add` store mode
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151872
* __->__ #151871
* #151869
This fixes `GPUTests.test_index_put2_mps`, `GPUTests.test__unsafe_masked_index_put_accumulate_mps`, and a dozen scatter/gather tests that rely on the atomic_add store mode.
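For context, a small example of the accumulate-store pattern these kernels implement; `index_put_` with `accumulate=True` needs atomic adds whenever indices repeat:
```
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.zeros(4, device=device)
idx = torch.tensor([0, 0, 1], device=device)
x.index_put_((idx,), torch.ones(3, device=device), accumulate=True)
print(x)  # tensor([2., 1., 0., 0.]) -- index 0 was accumulated twice
```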
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,612,805
|
Add BufferDict that works like ParameterDict
|
zeshengzong
|
open
|
[
"open source"
] | 2
|
CONTRIBUTOR
|
Fixes #37386
| true
|
3,009,599,111
|
[MPS] Extend index_put to half precision floats
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151872
* #151871
* __->__ #151869
By reusing `c10/metal/atomic.h`
This also fixes `GPUTests.test_index_put_fallback[12]_mps`, which is unrolled by Inductor, so no dedicated atomic_add support is needed there.
TODOs:
- Get rid of the indexing kernel and compute it directly when the kernel is run
- Simulate atomic_add for int64 types as a series of int32 atomic-add-and-fetch operations
- Set up tolerances correctly to pass float16/bfloat16 tests (as CPU always takes the sequential strategy)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,599,058
|
[Metal][BE] Move atomic ops to c10/metal/atomic.h
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151872
* #151871
* #151869
* __->__ #151868
To be reused by the indexing and MPSInductor implementations of atomic_add stores.
Added a wrapper for `metal::atomic<int>` (to be used by a follow-up PR).
| true
|
3,009,582,609
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900492673).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 873, in sdpa_dense_backward
grad_scores = grad_scores * scale
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 718.12 MiB is free. Including non-PyTorch memory, this process has 21.34 GiB memory in use. Of the allocated memory 6.84 GiB is allocated by PyTorch, and 14.23 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod1_BLOCK_SIZE2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,582,483
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE2_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE2_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900463508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE2_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 881, in sdpa_dense_backward
grad_scores = torch.where(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 864.12 MiB is free. Including non-PyTorch memory, this process has 21.19 GiB memory in use. Of the allocated memory 6.78 GiB is allocated by PyTorch, and 14.15 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE2_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,582,482
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900463508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1013 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 7, in forward
mul_2 = torch.ops.aten.mul.Tensor(arg5_1, arg0_1); arg5_1 = arg0_1 = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 452.12 MiB is free. Including non-PyTorch memory, this process has 21.60 GiB memory in use. Of the allocated memory 6.77 GiB is allocated by PyTorch, and 14.57 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float32_score_mod2_BLOCK_SIZE_128_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,582,270
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE2_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE2_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900463508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE2_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 501, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 497, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 357, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.905 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 9, in forward
where = torch.ops.aten.where.self(ge, add, scalar_tensor); add = scalar_tensor = where = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 588.12 MiB is free. Including non-PyTorch memory, this process has 21.46 GiB memory in use. Of the allocated memory 6.70 GiB is allocated by PyTorch, and 14.50 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE2_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,582,267
|
DISABLED test_non_equal_head_dims_score_mod4_float32_head_dims0_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod4_float32_head_dims0_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900463508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod4_float32_head_dims0_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,582,141
|
DISABLED test_cublas_addmm_size_1000_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"module: linear algebra",
"skipped"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cublas_addmm_size_1000_cuda_float16&suite=TestMatmulCudaCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900492676).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cublas_addmm_size_1000_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_matmul_cuda.py`
cc @clee2000 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,009,582,093
|
DISABLED test_event_elapsed_time (__main__.TestOpenReg)
|
pytorch-bot[bot]
|
open
|
[
"module: windows",
"module: cpp",
"triaged",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_event_elapsed_time&suite=TestOpenReg&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40900968612).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 6 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_event_elapsed_time`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "C:\actions-runner\_work\pytorch\pytorch\test\test_openreg.py", line 133, in test_event_elapsed_time
self.assertTrue(ms > 0)
File "C:\Jenkins\Miniconda3\lib\unittest\case.py", line 688, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test\test_openreg.py TestOpenReg.test_event_elapsed_time
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_openreg.py`
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jbschlosser @clee2000
| true
|
3,009,577,914
|
[aot][ca] save bw_module in AOTAutogradCache
|
xmfan
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: composability",
"module: dynamo",
"ciflow/inductor",
"merging",
"module: compiled autograd",
"ci-no-td"
] | 18
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151860
* #152119
* #151962
* #151731
Compiled Autograd retraces AOT's bw_module at backward runtime into a larger graph, and today this runs into an issue on warm-cache runs because the bw_module is not restored. This PR adds it to the cache, first stripping it of unserializable metadata. I also intentionally differentiate the cached and non-cached versions to avoid accidental attempts at AOT compilation with a restored bw_module (which would probably crash).
Note that since the cache entry may be used both by runs that use compiled autograd and by runs that do not, we need to cache both the lowered backward and the bw_module.
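A generic sketch of the stripping idea (not the actual AOTAutogradCache code): drop node metadata that cannot be serialized before pickling the graph module.
```
import pickle

import torch


def strip_for_cache(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    # keep only metadata known to be picklable; everything else is dropped
    for node in gm.graph.nodes:
        node.meta = {k: v for k, v in node.meta.items() if k == "stack_trace"}
    return gm


def f(x):
    return x.sin() + 1


gm = torch.fx.symbolic_trace(f)
blob = pickle.dumps(strip_for_cache(gm))  # would fail if meta held unpicklable objects
print(len(blob) > 0)
```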
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,009,561,072
|
[inductor] handle offset in ReinterpretView for alignment
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151859
* #151841
Fix https://github.com/pytorch/pytorch/issues/151589
It's interesting that the Q4_K dequantization example in the referenced GH issue does not crash even when Inductor passes Triton the wrong alignment information. I dug into this a bit. The main reason is that two things in Triton decide the vectorization size:
1. alignment
2. the maximum number of contiguous elements a thread needs to process
Here is the Triton code that decides the vectorization size [link](https://github.com/triton-lang/triton/blob/c5fed8e1ca66c1e04b773ffa9d4e49d481fc43ad/third_party/nvidia/lib/TritonNVIDIAGPUToLLVM/LoadStoreOpToLLVM.cpp#L147-L157), and here is the Triton code that considers contiguity for vectorization [link](https://github.com/triton-lang/triton/blob/c5fed8e1ca66c1e04b773ffa9d4e49d481fc43ad/lib/Analysis/AxisInfo.cpp#L1250-L1269).
When Inductor wrongly tells Triton that an unaligned tensor is aligned, Triton may still skip vectorization (or not fully vectorize) because of the second restriction.
Check this test:
```
@parametrize(
    "size",
    (
        128,
        1024,
        1024 * 1024,
    ),
)
def test_slice_view_dtype(self, size):
    offset = 1

    def f(x):
        return x[2:].view(dtype=torch.float32) + 1

    x = torch.randn((size + offset) * 2, dtype=torch.bfloat16, device=self.device)
    self.common(f, (x,), reference_in_float=False)
```
Before the fix, Inductor would tell Triton that the output of the aten.view.dtype op is aligned even though it is not. That tensor is then passed to the Triton kernel for the aten.add. Triton may make different vectorization decisions depending on the tensor size:
1. when size = 128, Triton picks ld.global.b32 to load data from global memory
2. when size = 1024, Triton uses ld.global.v2.b32
3. when size = 1024 * 1024, Triton uses ld.global.v4.b32
So whether the wrong alignment metadata causes an issue depends on whether Triton picks the vectorized instructions, which in turn depends on the Triton config (block size) decided by Inductor and on Triton's internal logic (how it assigns elements to each thread). We'd better make sure Inductor always generates correct metadata so that such hidden issues do not turn into crashes later.
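The misalignment itself is easy to observe; a small snippet (mirroring the test above) showing that the float32 view starts 4 bytes into the storage and is therefore typically not 16-byte aligned:
```
import torch

x = torch.randn(130 * 2, dtype=torch.bfloat16)
y = x[2:].view(dtype=torch.float32)  # starts 4 bytes into x's storage
# x's base allocation is typically 16-byte aligned, so y's offset shows up here
print(x.data_ptr() % 16, y.data_ptr() % 16)  # usually prints: 0 4
```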
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,560,489
|
constructing DTensor on a 2D device mesh SIGTERMs
|
wangkuiyi
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 19
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I am working with @wanchaol on drafting a tutorial about DTensor https://wkyi.quip.com/YVhXArYw2a5c/PyTorch-DTensor-From-Zero-To-Hero
You can run all examples in the above Quip doc using the same command line:
```shell
OMP_NUM_THREADS=1 torchrun --nproc_per_node=4 a.py
```
All the examples work well on a Runpod.ai pod with four GPUs except for the last one, whose complete source code is attached below:
```
# OMP_NUM_THREADS=1 torchrun --nproc_per_node=4 a.py
import os
import torch
import torch.distributed as dist
import contextlib


@contextlib.contextmanager
def distributed_context():
    try:
        local_rank = int(os.environ["LOCAL_RANK"])
        local_device = torch.device("cuda", local_rank)
        dist.init_process_group(backend="nccl", device_id=local_device)
        yield local_device
    finally:
        dist.barrier()
        dist.destroy_process_group()
        print(f"Rank {local_rank} finished")


def main(local_device):
    import torch.distributed.tensor.debug
    import torch.distributed.tensor as dt

    local_rank = local_device.index
    mesh = dist.init_device_mesh("cuda", (2, 2), mesh_dim_names=["dp", "tp"])
    placements = [dt.Shard(dim=0), dt.Shard(dim=1)]
    dtensor = dt.full((4, 4), 1.23, device_mesh=mesh, placements=placements)
    print(f"Rank {local_rank} created \n{dtensor}")
    dt.debug.visualize_sharding(dtensor)


if __name__ == "__main__":
    with distributed_context() as local_device:
        main(local_device)
```
Running it gave me the following errors
```
root@42cc9eef3ad3:/w# OMP_NUM_THREADS=1 torchrun --nproc_per_node=4 a.py
Rank 0 created
DTensor(local_tensor=tensor([[1.2300, 1.2300],
[1.2300, 1.2300]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [[0, 1], [2, 3]], mesh_dim_names=('dp', 'tp')), placements=(Shard(dim=0), Shard(dim=1)))
Col 0-1 Col 2-3
------- --------- ---------
Row 0-1 cuda:0 cuda:1
Row 2-3 cuda:2 cuda:3
Rank 1 created
DTensor(local_tensor=tensor([[1.2300, 1.2300],
[1.2300, 1.2300]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [[0, 1], [2, 3]], mesh_dim_names=('dp', 'tp')), placements=(Shard(dim=0), Shard(dim=1)))
Rank 2 created
DTensor(local_tensor=tensor([[1.2300, 1.2300],
[1.2300, 1.2300]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [[0, 1], [2, 3]], mesh_dim_names=('dp', 'tp')), placements=(Shard(dim=0), Shard(dim=1)))
Rank 3 created
DTensor(local_tensor=tensor([[1.2300, 1.2300],
[1.2300, 1.2300]], device='cuda:0'), device_mesh=DeviceMesh('cuda', [[0, 1], [2, 3]], mesh_dim_names=('dp', 'tp')), placements=(Shard(dim=0), Shard(dim=1)))
W0422 00:38:25.921000 3596 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3599 closing signal SIGTERM
W0422 00:38:25.922000 3596 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3600 closing signal SIGTERM
E0422 00:38:26.086000 3596 torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: -11) local_rank: 0 (pid: 3598) of binary: /usr/bin/python
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/distributed/run.py", line 892, in main
run(args)
File "/usr/local/lib/python3.11/dist-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/usr/local/lib/python3.11/dist-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
======================================================
a.py FAILED
------------------------------------------------------
Failures:
[1]:
time : 2025-04-22_00:38:25
host : 42cc9eef3ad3
rank : 3 (local_rank: 3)
exitcode : -11 (pid: 3601)
error_file: <N/A>
traceback : Signal 11 (SIGSEGV) received by PID 3601
------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-04-22_00:38:25
host : 42cc9eef3ad3
rank : 0 (local_rank: 0)
exitcode : -11 (pid: 3598)
error_file: <N/A>
traceback : Signal 11 (SIGSEGV) received by PID 3598
======================================================
```
### Versions
Version 2.8.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu @chauhang @penguinwu
| true
|
3,009,527,970
|
Proposal: Beautify torch.distributed.tensor.debug.visualize_sharding
|
wangkuiyi
|
closed
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
`jax.debug.visualize_array_sharding` prints colorized result in terminals or notebooks like the following (cf. [my LinkedIn post](https://www.linkedin.com/posts/yidewang_i-coauthored-this-notebook-with-wanchao-activity-7319523841595076608-bcm3?utm_source=share&utm_medium=member_desktop&rcm=ACoAAALdDjQBRj38KfRE5-nY27SqXVIIS8171vE)):

`torch.distributed.tensor.debug.visualize_sharding` prints the following (cf. [a DTensor tutorial draft](https://wkyi.quip.com/YVhXArYw2a5c/PyTorch-Distributed-From-Zero-To-Hero)):
<img width="515" alt="Image" src="https://github.com/user-attachments/assets/7a974d53-41b7-4360-8c55-ff49a0e8c275" />
Is it a good idea to make the visualization of DTensors closer to that of JAX arrays?
cc. @wanchaol
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu
| true
|
3,009,512,171
|
[draft export] normalize sympy expressions for data-dependent counting
|
pianpwk
|
open
|
[
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Introduces an expression normalization scheme so we don't deduplicate different framework-code expressions that share the same user-code stack.
| true
|
3,009,504,503
|
Better error msg for too big to optimize
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: In the "too big to optimize" error message, tell the user that they should use the `torch._inductor.config.aot_inductor.compile_wrapper_opt_level = 'O0'` flag.
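For reference, the user-side workaround the improved message points to looks like this (attribute name taken from the summary above):
```
import torch._inductor.config as inductor_config

inductor_config.aot_inductor.compile_wrapper_opt_level = "O0"
```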
Test Plan:
This is not added to the unit test cases because it runs for a while before hitting the expected failure:
```
def test_runtime_checks_error_msg(self):
    with torch.library._scoped_library("mylib", "FRAGMENT") as lib:
        torch.library.define(
            "mylib::foo",
            "(Tensor a, Tensor b) -> Tensor",
            tags=torch.Tag.pt2_compliant_tag,
            lib=lib,
        )

        @torch.library.impl("mylib::foo", "cpu", lib=lib)
        def foo(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            return a + b

        @torch.library.impl_abstract("mylib::foo", lib=lib)
        def foo_fake_impl(a, b):
            return a + b

        class Model(torch.nn.Module):
            def __init__(self) -> None:
                super().__init__()

            def forward(self, x):
                for i in range(10000):
                    x = torch.ops.mylib.foo(x, x)
                return x

        inputs = (torch.ones(8, 8, 8),)
        model = Model()
        with self.assertRaisesRegex(
            Exception, "torch._inductor.config.aot_inductor.compile_wrapper_opt_level"
        ):
            with torch.no_grad():
                AOTIRunnerUtil.compile(
                    model,
                    inputs,
                )
```
Differential Revision: D72323380
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,496,585
|
Add assert_on_assumption on to guard_or_true, and guard_or_false
|
laithsakka
|
open
|
[
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
One thing that we will be doing often is asserting on the assumption made by
`guard_or_false` and `guard_or_true`.
For example, in `expand`, when we hit the `input == -1` check we will assume the input is not -1 and want to add a runtime assertion that it is not -1, and we want that assertion only when we actually make such an assumption.
Similarly, this will be used for broadcast checks and some other locations where we do not want to silently deviate; a mock of the proposed semantics is sketched below.
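A self-contained mock of the proposed semantics (plain Python, not the real PyTorch API; the flag name is taken from the PR title):
```
from typing import List, Optional

runtime_asserts: List[str] = []  # stand-in for the deferred runtime asserts this PR would record


def guard_or_false(cond: Optional[bool], assert_on_assumption: bool = False) -> bool:
    """Return cond if it is statically known, otherwise optimistically assume False."""
    if cond is None:  # statically unknown, e.g. involves an unbacked symint
        if assert_on_assumption:
            runtime_asserts.append("assert cond is False at runtime")
        return False
    return cond


# e.g. in expand(): assume size != -1, but keep a runtime check around
assumed_minus_one = guard_or_false(None, assert_on_assumption=True)
print(assumed_minus_one, runtime_asserts)
```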
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,496,519
|
[DO NOT REVIEW] PIAN PR PLACE HOLDER
|
laithsakka
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151854
* __->__ #151853
* #151852
* #151041
* #151038
* #151023
| true
|
3,009,496,454
|
[DO NOT REVIEW] PIAN PR PLACE HOLDER
|
laithsakka
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151854
* #151853
* __->__ #151852
* #151041
* #151038
* #151023
Summary:
Differential Revision: D72483950
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,009,478,741
|
[MTIA] Store original file path and line number that FX op derives from. Print the file path and line number for each op in IR printing.
|
patrick-toulme
|
closed
|
[
"fb-exported",
"fx",
"release notes: export"
] | 12
|
NONE
|
Summary:
When tracing with torch export to FX IR, we need to keep track of the original file path and line number that the FX op originates from in the user code. We then need to print this at the far right of the Node in IR printing.
Without a mapping of user PyTorch code to FX IR, it is very difficult to debug the IR, especially in cases where there are mysterious ops appearing.
I also added support for propagating the line number and file path through decomposition. Side note: I find it interesting that decomposition actually produces a second traced ExportedProgram. Why do we not just run decomposition during the first tracing?
JAX/XLA currently offers this to users, and it is extremely useful for debugging.
**We default the printing to not print debug file source / line info by default.**
IR Example for Llama: **Scroll to the right**
```
%div_tensor_12 : f16[1, 32, 2960, 2960][#users=1] = call_function[target=aten.div.Tensor](kwargs = {input: %reshape_default_906, other: 11.313708498984761}) file_path = transformers/models/llama/modeling_llama.py line_number = 419
%slice_tensor_229 : f16[1, 1, 2960, 2960][#users=1] = call_function[target=aten.slice.Tensor](kwargs = {input: %clone_default_152, dim: 0, start: 0, end: 1, step: 1}) file_path = transformers/models/llama/modeling_llama.py line_number = 423
%slice_tensor_230 : f16[1, 1, 2960, 2960][#users=1] = call_function[target=aten.slice.Tensor](kwargs = {input: %slice_tensor_229, dim: 1, start: 0, end: 1, step: 1}) file_path = transformers/models/llama/modeling_llama.py line_number = 423
%slice_tensor_231 : f16[1, 1, 2960, 2960][#users=1] = call_function[target=aten.slice.Tensor](kwargs = {input: %slice_tensor_230, dim: 2, start: 0, end: 2960, step: 1}) file_path = transformers/models/llama/modeling_llama.py line_number = 423
%slice_tensor_232 : f16[1, 1, 2960, 2960][#users=1] = call_function[target=aten.slice.Tensor](kwargs = {input: %slice_tensor_231, dim: 3, start: 0, end: 2960, step: 1}) file_path = transformers/models/llama/modeling_llama.py line_number = 423
%add_tensor_135 : f16[1, 32, 2960, 2960][#users=1] = call_function[target=aten.add.Tensor](kwargs = {input: %div_tensor_12, other: %slice_tensor_232, alpha: 1}) file_path = transformers/models/llama/modeling_llama.py line_number = 423
%_to_copy_default_194 : f32[1, 32, 2960, 2960][#users=1] = call_function[target=aten._to_copy.default](kwargs = {input: %add_tensor_135, dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: None, non_blocking: False, memory_format: None}) file_path = transformers/models/llama/modeling_llama.py line_number = 426
%_softmax_default_12 : f32[1, 32, 2960, 2960][#users=1] = call_function[target=aten._softmax.default](kwargs = {input: %_to_copy_default_194, dim: -1, half_to_float: False}) file_path = transformers/models/llama/modeling_llama.py line_number = 426
%_to_copy_default_195 : f16[1, 32, 2960, 2960][#users=1] = call_function[target=aten._to_copy.default](kwargs = {input: %_softmax_default_12, dtype: torch.float16, layout: None, device: None, pin_memory: None, non_blocking: False, memory_format: None}) file_path = transformers/models/llama/modeling_llama.py line_number = 426
%clone_default_74 : f16[1, 32, 2960, 2960][#users=1] = call_function[target=aten.clone.default](kwargs = {input: %_to_copy_default_195, memory_format: None}) file_path = transformers/models/llama/modeling_llama.py line_number = 427
%repeat_default_173 : f16[1, 32, 2960, 2960][#users=1] = call_function[target=aten.repeat.default](kwargs = {input: %clone_default_74, repeats: [1, 1, 1, 1]}) file_path = transformers/models/llama/modeling_llama.py line_number = 428
%reshape_default_907 : f16[32, 2960, 2960][#users=1] = call_function[target=aten.reshape.default](kwargs = {input: %repeat_default_173, shape: [32, 2960, 2960]}) file_path = transformers/models/llama/modeling_llama.py line_number = 428
%repeat_default_174 : f16[1, 32, 2960, 128][#users=1] = call_function[target=aten.repeat.default](kwargs = {input: %reshape_default_903, repeats: [1, 1, 1, 1]}) file_path = transformers/models/llama/modeling_llama.py line_number = 428
%reshape_default_908 : f16[32, 2960, 128][#users=1] = call_function[target=aten.reshape.default](kwargs = {input: %repeat_default_174, shape: [32, 2960, 128]}) file_path = transformers/models/llama/modeling_llama.py line_number = 428
%bmm_default_72 : f16[32, 2960, 128][#users=1] = call_function[target=aten.bmm.default](kwargs = {input: %reshape_default_907, mat2: %reshape_default_908}) file_path = transformers/models/llama/modeling_llama.py line_number = 428
%reshape_default_909 : f16[1, 32, 2960, 128][#users=1] = call_function[target=aten.reshape.default](kwargs = {input: %bmm_default_72, shape: [1, 32, 2960, 128]}) file_path = transformers/models/llama/modeling_llama.py line_number = 428
%permute_default_453 : f16[1, 2960, 32, 128][#users=1] = call_function[target=aten.permute.default](kwargs = {input: %reshape_default_909, dims: [0, 2, 1, 3]}) file_path = transformers/models/llama/modeling_llama.py line_number = 436
%clone_default_75 : f16[1, 2960, 32, 128][#users=1] = call_function[target=aten.clone.default](kwargs = {input: %permute_default_453, memory_format: torch.contiguous_format}) file_path = transformers/models/llama/modeling_llama.py line_number = 436
%reshape_default_910 : f16[1, 2960, 4096][#users=1] = call_function[target=aten.reshape.default](kwargs = {input: %clone_default_75, shape: [1, 2960, -1]}) file_path = transformers/models/llama/modeling_llama.py line_number = 438
%permute_default_454 : f16[4096, 4096][#users=1] = call_function[target=aten.permute.default](kwargs = {input: %model_model_model_layers_12_self_attn_o_proj_weight, dims: [1, 0]}) file_path = transformers/models/llama/modeling_llama.py line_number = 445
%reshape_default_911 : f16[2960, 4096][#users=1] = call_function[target=aten.reshape.default](kwargs = {input: %reshape_default_910, shape: [2960, 4096]}) file_path = transformers/models/llama/modeling_llama.py line_number = 445
%mm_default_87 : f16[2960, 4096][#users=1] = call_function[target=aten.mm.default](kwargs = {input: %reshape_default_911, mat2: %permute_default_454}) file_path = transformers/models/llama/modeling_llama.py line_number = 445
%reshape_default_912 : f16[1, 2960, 4096][#users=1] = call_function[target=aten.reshape.default](kwargs = {input: %mm_default_87, shape: [1, 2960, 4096]}) file_path = transformers/models/llama/modeling_llama.py line_number = 445
%add_tensor_136 : f16[1, 2960, 4096][#users=2] = call_function[target=aten.add.Tensor](kwargs = {input: %add_tensor_131, other: %reshape_default_912, alpha: 1}) file_path = transformers/models/llama/modeling_llama.py line_number = 740
```
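For comparison, a minimal sketch (assuming `torch.export` is available) of the per-node source provenance FX already records via `node.stack_trace`, which is the kind of information this change surfaces directly in IR printing:
```
import torch


class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1


ep = torch.export.export(M(), (torch.randn(4),))
for node in ep.graph.nodes:
    src = (node.stack_trace or "").strip().splitlines()
    print(node.format_node(), "<-", src[-1] if src else "<no source info>")
```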
Test Plan: Added unit test
Differential Revision: D73346635
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,009,469,783
|
Rewrite the guts of torch::jit::Lexer to speed it up
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: jit",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151850
* #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* #151801
The trie-based approach was, apparently, not efficient. This incidentally fixes a bug where "not inp" and "is note" were lexed incorrectly; see test_lexer.cpp update.
Differential Revision: [D73129443](https://our.internmc.facebook.com/intern/diff/D73129443/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,009,469,734
|
Add simple direct C++ tests for torch::jit::Lexer
|
swolchok
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151850
* __->__ #151849
* #151810
* #151807
* #151806
* #151805
* #151804
* #151803
* #151802
* #151801
We have test_jit.py, but given that I'm working on
significant changes to the lexer, it seems nice to have direct C++
tests. (Also, writing the tests caught a pair of related bugs; see the
two tests with "Bug" in their name. The rewrite will fix them.)
Differential Revision: [D73402367](https://our.internmc.facebook.com/intern/diff/D73402367/)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,009,463,990
|
[MKLDNN] Check that strides are positive
|
malfet
|
closed
|
[
"module: cpu",
"module: mkldnn",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: bug fixes",
"ciflow/linux-aarch64"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151848
For pooling ops: prevents a division-by-zero when the stride argument is invalid.
Fixes https://github.com/pytorch/pytorch/issues/149274
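As a hedged illustration only (the real fix lives in the MKLDNN C++ pooling path, and the exact exception type and message here are assumptions), the intended behavior is that a non-positive stride surfaces as a clear argument error instead of a division-by-zero crash:
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# A zero (or negative) stride is an invalid argument for pooling; with the
# check in place it should be rejected up front rather than hitting a
# division-by-zero inside the MKLDNN kernel. Exact error type/message may vary.
try:
    F.max_pool2d(x, kernel_size=2, stride=0)
except Exception as e:
    print("rejected as expected:", e)
```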
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,009,444,879
|
[Dynamo] Use LazyVariableTracker in base VT
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151847
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,009,440,566
|
[FlexAttention] Fix device test instantation
|
drisspg
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151959
* __->__ #151846
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,402,399
|
[reland][ROCm] remove caffe2 from hipify
|
jeffdaily
|
open
|
[
"module: rocm",
"triaged",
"open source",
"release notes: rocm",
"skip-pr-sanity-checks",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm"
] | 2
|
COLLABORATOR
|
Reland of https://github.com/pytorch/pytorch/pull/137157.
Needs https://github.com/pytorch/FBGEMM/pull/4028 to be merged first to keep FBGEMM functional.
- Remove all "MasqueradingAsCUDA" files and classes.
- Do not rename "CUDA" classes to "HIP".
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,393,866
|
Add scripts to check xrefs and urls
|
shoumikhin
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Traverses the docs and code to find any broken links
| true
|
3,009,387,522
|
[EZ/Profiler] Update Submodule
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: Update to https://github.com/pytorch/kineto/commit/d82680bbd44f872aa04394fa5bba23a7992f9fa4
Test Plan: CI
Differential Revision: D73397323
| true
|
3,009,377,863
|
[export] Enable symint inputs for AdditionalInputs and ShapesCollection
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
With `AdditionalInputs`, the behavior is the same as with tensors:
```python
import torch

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

additional_inputs = torch.export.AdditionalInputs()
additional_inputs.add((5, 5))
additional_inputs.add((3, 5))
additional_inputs.add((5, 4))

ep = torch.export.export(
    M(), (6, 7), dynamic_shapes=additional_inputs, strict=False
)
```
With `ShapesCollection`, we now need to wrap integer inputs as `_IntWrapper` so that we can have a unique identifier for each integer input.
```python
import torch
from torch.export import Dim
from torch.export.dynamic_shapes import _IntWrapper

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

args = (_IntWrapper(5), _IntWrapper(5))
# Or we can do `args = pytree.tree_map_only(int, lambda a: _IntWrapper(a), orig_args)`

shapes_collection = torch.export.ShapesCollection()
shapes_collection[args[0]] = Dim.DYNAMIC
shapes_collection[args[1]] = Dim.DYNAMIC

ep = torch.export.export(
    M(), args, dynamic_shapes=shapes_collection, strict=False
)
```
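A hedged usage sketch (not part of the PR text, and assuming `ep` comes from either snippet above): the exported program should then accept other integer values for the inputs that were marked dynamic.
```python
# Follow-up sketch: exercise the exported program with new int values.
gm = ep.module()             # runnable module produced from the ExportedProgram
print(gm(2, 10))             # the symint inputs take fresh values, e.g. 12
print(ep.range_constraints)  # symbolic ranges recorded for the dynamic ints
```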
| true
|
3,009,362,373
|
[Inductor] move alignment tests to a separate file
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151859
* __->__ #151841
This is a pure code movement. test_torchinductor.py is already 15K lines of code, so move the alignment-related tests I added recently into a separate file. I plan to add more tests of this kind.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,361,835
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE3_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE3_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40883373165).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE3_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 1201, in test_builtin_score_mods_different_block_size
self.run_test(score_mod, dtype, block_mask=block_mask, device=device)
File "/var/lib/jenkins/workspace/test/inductor/test_flex_attention.py", line 491, in run_test
golden_out.backward(backward_grad.to(torch.float64))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 679, in backward
) = flex_attention_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 320, in maybe_run_autograd
return self(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 132, in __call__
return super().__call__(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 490, in __call__
return wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 486, in wrapper
return self.dispatch(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 346, in dispatch
return kernel(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 870, in sdpa_dense_backward
grad_scores, _, _, _, _, *grad_score_mod_captured = joint_score_mod(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/apis.py", line 202, in wrapped
return vmap_impl(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 334, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 484, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 833, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 409, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 396, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.1013 from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py:1265 in wrapped", line 9, in forward
where = torch.ops.aten.where.self(ge, add, scalar_tensor); add = scalar_tensor = where = None
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 795, in __call__
return self._op(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacity of 22.05 GiB of which 452.12 MiB is free. Including non-PyTorch memory, this process has 21.60 GiB memory in use. Of the allocated memory 6.70 GiB is allocated by PyTorch, and 14.63 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_builtin_score_mods_different_block_size_float16_score_mod6_BLOCK_SIZE3_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
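The allocator hint in the error message refers to an environment variable rather than a test-side change; as a hedged sketch, it must be set before CUDA is initialized, and the reserved-versus-allocated gap can then be inspected with the standard memory stats:
```python
import os
# Must be set before the CUDA context is created (ideally before importing torch).
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    allocated = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    # A large reserved-minus-allocated gap indicates cached/fragmented segments.
    print(f"allocated: {allocated:.1f} MiB, reserved: {reserved:.1f} MiB")
```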
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,361,834
|
DISABLED test_non_equal_head_dims_score_mod3_float16_head_dims1_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_equal_head_dims_score_mod3_float16_head_dims1_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40890572206).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_equal_head_dims_score_mod3_float16_head_dims1_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,361,833
|
DISABLED test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_128_cuda_float32 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_128_cuda_float32&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40883373165).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float32_score_mod0_BLOCK_SIZE_128_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,009,359,766
|
DISABLED test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE_128_cuda_float16 (__main__.TestFlexAttentionCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE_128_cuda_float16&suite=TestFlexAttentionCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40890572206).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_builtin_score_mods_different_block_size_float16_score_mod7_BLOCK_SIZE_128_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_flex_attention.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|