id int64 2.74B 3.05B | title stringlengths 1 255 | user stringlengths 2 26 | state stringclasses 2 values | labels listlengths 0 24 | comments int64 0 206 | author_association stringclasses 4 values | body stringlengths 7 62.5k ⌀ | is_title bool 1 class |
|---|---|---|---|---|---|---|---|---|
2,812,541,502 | Add correct `__repr__` for parallel distributed modules | ArthurZucker | open | [
"oncall: distributed",
"triaged",
"oncall: pt2"
] | 3 | NONE | ### 🚀 The feature, motivation and pitch
Sorry if this is a duplicate, but instantiating a `transformers` model and then parallelizing it will print the same model, which is unintuitive.
```python
import torch
import os
from transformers import LlamaConfig, LlamaModel
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
bs = 1
seqlen = 4096
# Get distributed settings
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
# Initialize distributed
device = torch.device(f"cuda:{rank}")
torch.distributed.init_process_group("nccl", device_id=device)
device_mesh = torch.distributed.init_device_mesh("cuda", (world_size,))
# Get model config
config = LlamaConfig.from_pretrained(model_id)
config.hidden_size = 2048
config.attention_bias = False
# Instantiate model
with device:
    model = LlamaModel(config).to(dtype=torch.float16)
model.eval()
# Tensor Parallel
if world_size > 1:
    model.tensor_parallel(device_mesh)
print(model)
```
This should, in my opinion, print something like `ColumnParallel(..., device_mesh=[0, 1])` etc.
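A minimal sketch of the kind of `__repr__` being requested (plain Python; the class and field names such as `ColwiseParallelLinear` are hypothetical, not torch's actual API):

```python
class ColwiseParallelLinear:
    """Toy stand-in for a tensor-parallel wrapped Linear (hypothetical names)."""

    def __init__(self, in_features, out_features, device_mesh):
        self.in_features = in_features
        self.out_features = out_features
        self.device_mesh = device_mesh

    def __repr__(self):
        # Surface the parallelization info instead of printing like a
        # plain, non-distributed Linear.
        return (
            f"ColwiseParallel(Linear(in_features={self.in_features}, "
            f"out_features={self.out_features}), device_mesh={self.device_mesh})"
        )

print(ColwiseParallelLinear(2048, 2048, [0, 1]))
```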
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu | true |
2,812,534,440 | Add `sharding strategy` for `torch.distributed.tensor.parallel.ParallelStyle` with `inference_mode` | ArthurZucker | closed | [
"oncall: distributed",
"triaged",
"module: dtensor"
] | 3 | NONE | ### 🚀 The feature, motivation and pitch
When using tensor parallel together with `torch.inference_mode()`, any `torch.distributed.tensor.parallel.ParallelStyle` layer fails with:
`aten.mm.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`.
With `transformers == 4.48` and `torch==2.5.1`
```python
import torch
from transformers import LlamaConfig, LlamaModel
import os
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
bs = 1
seqlen = 4096
# Get distributed settings
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
# Initialize distributed
device = torch.device(f"cuda:{rank}")
torch.distributed.init_process_group("nccl", device_id=device)
device_mesh = torch.distributed.init_device_mesh("cuda", (world_size,))
# Get model config
config = LlamaConfig.from_pretrained(model_id)
config.hidden_size = 2048
config.attention_bias = False
# Instantiate model
with device:
    model = LlamaModel(config).to(dtype=torch.float16)
model.eval()
model.tensor_parallel(device_mesh)
inputs = torch.randint(config.vocab_size, (bs, seqlen), device=device)
# Test compile
with torch.inference_mode():
    out = model(inputs)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu @albanD as we talked about this
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,812,288,005 | internal assert failed -- trying to update all from pinokio comfy ui | lingtalfi | closed | [
"module: windows",
"triaged"
] | 3 | NONE | ### 🐛 Describe the bug
I'm on Windows 11 with an NVIDIA 4090, playing with ComfyUI installed via Pinokio.
Everything works well, but when I clicked the "update all" button in ComfyUI's node manager, I got a message suggesting I report the bug to PyTorch, so here I am.
Note that I can simply restart the ComfyUI server and ComfyUI works fine after that, so the "bug" is not harmful and only occurs when I try to update all (this is the second time I've noticed it; the first time was also during an "update all").
Here is my terminal output when starting ComfyUI (via Pinokio):
```
Microsoft Windows [Version 10.0.22631.4751]
(c) Microsoft Corporation. All rights reserved.
D:\tools\ai\pinokio\api\comfy.git\app>conda_hook && conda deactivate && conda deactivate && conda deactivate && conda activate base && D:\tools\ai\pinokio\api\comfy.git\app\env\Scripts\activate D:\tools\ai\pinokio\api\comfy.git\app\env && python main.py
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-27 08:11:47.070
** Platform: Windows
** Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:07:43) [MSC v.1942 64 bit (AMD64)]
** Python executable: D:\tools\ai\pinokio\api\comfy.git\app\env\Scripts\python.exe
** ComfyUI Path: D:\tools\ai\pinokio\api\comfy.git\app
** ComfyUI Base Folder Path: D:\tools\ai\pinokio\api\comfy.git\app
** User directory: D:\tools\ai\pinokio\api\comfy.git\app\user
** ComfyUI-Manager config path: D:\tools\ai\pinokio\api\comfy.git\app\user\default\ComfyUI-Manager\config.ini
** Log path: D:\tools\ai\pinokio\api\comfy.git\app\user\comfyui.log
[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
Prestartup times for custom nodes:
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-easy-use
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\rgthree-comfy
17.5 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Manager
Checkpoint files will always be loaded safely.
Total VRAM 24564 MB, total RAM 130780 MB
pytorch version: 2.5.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using pytorch attention
ComfyUI version: 0.3.12
[Prompt Server] web root: D:\tools\ai\pinokio\api\comfy.git\app\web
Adding D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes to sys.path
Could not find efficiency nodes
[comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
Loaded ControlNetPreprocessors nodes from D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui_controlnet_aux
Could not find AdvancedControlNet nodes
Could not find AnimateDiff nodes
Could not find IPAdapter nodes
Could not find VideoHelperSuite nodes
Could not load ImpactPack nodes Could not find ImpactPack nodes
[Crystools INFO] Crystools version: 1.21.0
[Crystools INFO] CPU: Intel(R) Core(TM) i9-14900KS - Arch: AMD64 - OS: Windows 10
[Crystools INFO] Pynvml (Nvidia) initialized.
[Crystools INFO] GPU/s:
[Crystools INFO] 0) NVIDIA GeForce RTX 4090
[Crystools INFO] NVIDIA Driver: 551.86
Depthcrafter Nodes Loaded
[ComfyUI-Easy-Use] server: v1.2.7 Loaded
[ComfyUI-Easy-Use] web root: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-easy-use\web_version/v2 Loaded
### Loading: ComfyUI-Impact-Pack (V8.4.1)
[Impact Pack] Wildcards loading done.
Total VRAM 24564 MB, total RAM 130780 MB
pytorch version: 2.5.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
### Loading: ComfyUI-Manager (V3.9.4)
### ComfyUI Version: v0.3.12-25-g4f011b9a | Released on '2025-01-26'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
------------------------------------------
Comfyroll Studio v1.76 : 175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------
Initializing ControlAltAI Nodes
D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\albumentations\__init__.py:13: UserWarning: A new version of Albumentations is available: 2.0.1 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
FETCH ComfyRegistry Data: 5/31
WAS Node Suite: Importing styles from `D:\tools\ai\pinokio\api\comfy.git\app\user\default\prompt-styles\sd-styles.csv`.
WAS Node Suite: Styles import complete.
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\pr-was-node-suite-comfyui-47064894\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 220 nodes successfully.
"Art is the mirror that reflects the beauty within us." - Unknown
[rgthree-comfy] Loaded 42 fantastic nodes. 🎉
Traceback (most recent call last):
File "D:\tools\ai\pinokio\api\comfy.git\app\nodes.py", line 2110, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\stable-point-aware-3d\__init__.py", line 17, in <module>
from spar3d.models.mesh import QUAD_REMESH_AVAILABLE, TRIANGLE_REMESH_AVAILABLE
File "D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\stable-point-aware-3d\spar3d\models\mesh.py", line 10, in <module>
from jaxtyping import Float, Integer
ModuleNotFoundError: No module named 'jaxtyping'
Cannot import D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\stable-point-aware-3d module for custom nodes: No module named 'jaxtyping'
Import times for custom nodes:
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-styles_csv_loader
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-GGUF
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui_controlnet_aux
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui_controlaltai_nodes
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-frame-interpolation
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_Comfyroll_CustomNodes
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-depthcrafter-nodes
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\websocket_image_save.py
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-liveportraitkj
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui_essentials
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-kjnodes
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-hunyuanvideowrapper
Microsoft Windows [Version 10.0.22631.4751]
(c) Microsoft Corporation. All rights reserved.
D:\tools\ai\pinokio\api\comfy.git\app>conda_hook && conda deactivate && conda deactivate && conda deactivate && conda activate base && D:\tools\ai\pinokio\api\comfy.git\app\env\Scripts\activate D:\tools\ai\pinokio\api\comfy.git\app\env && python main.py
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-27 08:15:27.048
** Platform: Windows
** Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:07:43) [MSC v.1942 64 bit (AMD64)]
** Python executable: D:\tools\ai\pinokio\api\comfy.git\app\env\Scripts\python.exe
** ComfyUI Path: D:\tools\ai\pinokio\api\comfy.git\app
** ComfyUI Base Folder Path: D:\tools\ai\pinokio\api\comfy.git\app
** User directory: D:\tools\ai\pinokio\api\comfy.git\app\user
** ComfyUI-Manager config path: D:\tools\ai\pinokio\api\comfy.git\app\user\default\ComfyUI-Manager\config.ini
** Log path: D:\tools\ai\pinokio\api\comfy.git\app\user\comfyui.log
[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
```
And then, when I click the "update all" button, here is where the "bug" happens:
```
#######################################################################
[ComfyUI-Manager] Starting dependency installation/(de)activation for the extension
Downloading https://storage.googleapis.com/comfy-registry/drltdata/comfyui-impact-pack/8.5.1/node.zip to D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\CNR_temp_cc707452-be31-4a13-8364-7cb1c4aed1dd.zip
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.63M/1.63M [00:00<00:00, 7.77MB/s]
Extracted zip file to D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-impact-pack
'D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-impact-pack' is moved to 'D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-impact-pack'
Install: pip packages for 'D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-impact-pack'
[SKIP] Downgrading pip package isn't allowed: scipy (cur=1.15.1)
Install: install script for 'D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-impact-pack'
[!]
[!] WARN: The `COMFYUI_PATH` environment variable is not set. Assuming `D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-impact-pack/../../` as the ComfyUI path.
[!]
[!] WARN: The `COMFYUI_MODEL_PATH` environment variable is not set. Assuming `D:\tools\ai\pinokio\api\comfy.git\app\models` as the ComfyUI path.
### ComfyUI-Impact-Pack: Check dependencies
[!]### ComfyUI-Impact-Pack: Check basic models
Downloading https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth to D:\tools\ai\pinokio\api\comfy.git\app\models\sams\sam_vit_b_01ec64.pth
[!] 0%| | 0.00/375M [00:00<?, ?B/s]
[!] 0%| | 983k/375M [00:00<00:38, 9.68MB/s]
[!] 1%| | 1.97M/375M [00:00<00:38, 9.77MB/s]
[!] 1%| | 2.98M/375M [00:00<00:39, 9.45MB/s]
[!] 1%|▌ | 4.03M/375M [00:00<00:37, 9.80MB/s]
[!] 1%|▌ | 5.01M/375M [00:00<00:39, 9.45MB/s]
...
[!] 100%|█████████▌| 375M/375M [00:40<00:00, 9.33MB/s]
[!] 100%|██████████| 375M/375M [00:40<00:00, 9.37MB/s]
### ComfyUI-Impact-Pack: onnx model directory created (D:\tools\ai\pinokio\api\comfy.git\app\models\onnx)
[ComfyUI-Manager] Startup script completed.
#######################################################################
[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
Prestartup times for custom nodes:
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\rgthree-comfy
0.0 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-easy-use
46.8 seconds: D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Manager
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
File "D:\tools\ai\pinokio\api\comfy.git\app\main.py", line 136, in <module>
import execution
File "D:\tools\ai\pinokio\api\comfy.git\app\execution.py", line 13, in <module>
import nodes
File "D:\tools\ai\pinokio\api\comfy.git\app\nodes.py", line 22, in <module>
import comfy.diffusers_load
File "D:\tools\ai\pinokio\api\comfy.git\app\comfy\diffusers_load.py", line 3, in <module>
import comfy.sd
File "D:\tools\ai\pinokio\api\comfy.git\app\comfy\sd.py", line 6, in <module>
from comfy import model_management
File "D:\tools\ai\pinokio\api\comfy.git\app\comfy\model_management.py", line 166, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
File "D:\tools\ai\pinokio\api\comfy.git\app\comfy\model_management.py", line 129, in get_torch_device
return torch.device(torch.cuda.current_device())
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\torch\cuda\__init__.py", line 940, in current_device
_lazy_init()
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\torch\cuda\__init__.py", line 319, in _lazy_init
torch._C._cuda_init()
RuntimeError: config[i] == get()->name() INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\c10\\cuda\\CUDAAllocatorConfig.cpp":230, please report a bug to PyTorch. Allocator backend parsed at runtime != allocator backend parsed at load time
(env) (base) D:\tools\ai\pinokio\api\comfy.git\app>
```
I noticed that the internal assert failure references a path on the C: drive, while my entire install is on the D: drive.
That's all I can tell; since I only use the GUI of these tools, I don't know what's going on under the hood.
Hope this helps.
### Versions
my env is from pinokio
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | true |
2,812,281,001 | [dynamo][dicts] Fix dict.__new__ bug | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145559
* #145753
* #145744
* __->__ #145723
* #145558
* #145547
* #145519
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,812,237,128 | [MPS] optimize cholesky | Isalia20 | closed | [
"triaged",
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 4 | COLLABORATOR | Followup to #145701
Optimizes the SYRK and TRSM kernels of the Cholesky decomposition on MPS. The SYRK kernel now does matmuls with Apple's simdgroup matrices instead of a tiled implementation, and the TRSM kernel uses vectorized loads. This PR also moves the command encoder inside the stream queue dispatch (as discussed on the last PR).
Script to collect perf:
```
import torch
import numpy as np
import time
import csv

matrix_sizes = [512, 1024, 2048, 4096]
batch_sizes = [1, 2, 4, 8, 16]
num_runs = 10
warmup_runs = 3

def create_spd_matrix(n, batch_size):
    torch.manual_seed(42)
    A = torch.randn(batch_size, n, n, dtype=torch.float32)
    return A @ A.transpose(-2, -1) + n * torch.eye(n).expand(batch_size, -1, -1)

def run_cholesky_mps(A):
    torch.mps.synchronize()
    start = time.perf_counter()
    b = torch.linalg.cholesky(A, upper=False)
    torch.mps.synchronize()
    end = time.perf_counter()
    return b, end - start

results = {
    'N': [],
    'batch_size': [],
    'mean_time': [],
    'std_time': []
}

for n in matrix_sizes:
    for batch_size in batch_sizes:
        print(f"\nBenchmarking N={n}, batch_size={batch_size}")
        try:
            A_cpu = create_spd_matrix(n, batch_size)
            A_mps = A_cpu.to("mps")

            for _ in range(warmup_runs):
                _, _ = run_cholesky_mps(A_mps)

            times = []
            for _ in range(num_runs):
                _, t = run_cholesky_mps(A_mps)
                times.append(t)

            mean_time = np.mean(times)
            std_time = np.std(times)

            results['N'].append(n)
            results['batch_size'].append(batch_size)
            results['mean_time'].append(mean_time)
            results['std_time'].append(std_time)

            print(f"Mean time: {mean_time:.4f}s ± {std_time:.4f}s")
        except RuntimeError as e:
            print(f"Error for N={n}, batch_size={batch_size}: {e}")
            continue

with open('cholesky_benchmark_times.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['N', 'batch_size', 'mean_time', 'std_time'])
    for i in range(len(results['N'])):
        writer.writerow([
            results['N'][i],
            results['batch_size'][i],
            results['mean_time'][i],
            results['std_time'][i]
        ])
```
Observed speedups on M1 Pro

cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,812,227,068 | [Customized Optimus][Inductor] Add split cat pattern in aten level | mengluy0125 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"inductor_pattern_match"
] | 7 | CONTRIBUTOR | Summary:
Thanks to Microve for discovering that recGPT has some repeated similar kernels that could be optimized through Optimus. After investigation, I designed a pattern at the aten level to remove such excessive kernels.
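As a toy illustration (pure Python, not the actual Optimus pass): splitting a sequence and immediately concatenating the chunks back along the same dimension is an identity, which is why such a split/cat chain of kernels is redundant and removable.

```python
def split(xs, size):
    # Split a flat sequence into chunks of `size` (last chunk may be shorter),
    # mimicking a split along a single dimension.
    return [xs[i:i + size] for i in range(0, len(xs), size)]

def cat(chunks):
    # Concatenate the chunks back along the same dimension.
    return [v for chunk in chunks for v in chunk]

x = list(range(10))
# split followed by cat over the same dim reproduces the input exactly,
# so the pair of kernels can be eliminated by a pattern match.
assert cat(split(x, 4)) == x
```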
trace: https://fburl.com/perfdoctor/82fauil7
tlparse: https://fburl.com/98q6tadx
Test Plan:
# unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_split_cat_post_grad
```
Buck UI: https://www.internalfb.com/buck2/e8458d63-b8ca-498b-a731-77a83fb4d1cb
Test UI: https://www.internalfb.com/intern/testinfra/testrun/16325548715106567
Network: Up: 341KiB Down: 359KiB (reSessionID-7d3de666-7fc1-4988-8d11-d75ba958016d)
Executing actions. Remaining 0/3
Command: test. Finished 2 local
Time elapsed: 3:04.8s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
# local run
```
buck2 run @//mode/opt aps_models/ads/recgpt_exp:recgpt_launcher -- mode=local_recgpt_ranking_30x_v0_unified_seq_1115
```
https://www.internalfb.com/mlhub/pipeline/1630903954173593
# E2E
```
buck2 run @//mode/opt aps_models/ads/recgpt_exp:recgpt_launcher -- mode=mast_recgpt_ranking_30x_v0_unified_seq_1115 launcher.oncall=ads_model_platform launcher.data_project=ai_large_scale launcher.fbl_entitlement=ads_global_tc_training_efficiency launcher.tags=[ads_ranking_taxonomy_mc_qps_optimization] launcher.hardware=SMC_T20 launcher.job_name=recgpt_ranking_1115_pt2_with_optimus data_loader.dataset.table_ds=[2024-12-13,2024-12-14,2024-12-15,2024-12-16,2024-12-17,2024-12-18]
```
### how to add the config
Add the following patterns to the dynamo config
```
post_grad_fusion_options: {
"normalization_aten_pass": {},
"split_cat_aten_pass": {},
}
```
{F1974700331}
baseline:
aps-recgpt_ranking_1115_pt2_5-8cb4905c7d
{F1974700216}
proposal:
Differential Revision: D68695717
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,812,161,492 | Fix support for nccl < 2.17 | oraluben | open | [
"oncall: distributed",
"open source",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing"
] | 28 | CONTRIBUTOR | Fix build failure with older (< 2.17) NCCL.
Refactoring NCCL version related code:
1. Fix failure against old NCCL versions since #138527 cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o ;
2. remove unused checks caused by unsupported NCCL version (since there's a static assert checking NCCL >= 2.7: #142023);
3. move NCCL macros to `torch/csrc/cuda/nccl.h` from various places and unify some style (`#if` to `#ifdef`), which I hope improves maintainability of the NCCL part.
Resolves #141914
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,812,082,404 | [BE] Use copy_method to import all tests | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145718
Fewer chances for a typo when doing the imports
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,812,035,836 | add input shape check for _local_scalar_dense | jiayisunx | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145717
Fix https://github.com/pytorch/pytorch/issues/145066.
| true |
2,812,022,132 | support arm architectures in your Docker images on docker hub? ( | cboettig | open | [
"triaged",
"enhancement",
"module: docker",
"module: arm"
] | 2 | NONE | ### 🚀 The feature, motivation and pitch
`pytorch` images on Docker Hub are still only available for the amd64 architecture. The nvidia/cuda images support both arm64 and amd64. I believe we are seeing more and more devices pairing NVIDIA GPUs with ARM CPU architectures, and multi-arch builds are commonplace. It would be great to have PyTorch Docker images that support these.
Thanks!
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @snadampal @milpuz01 | true |
2,811,957,447 | Log cache state for AOTAutograd in title of file | jamesjwu | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145715
Differential Revision: [D68692755](https://our.internmc.facebook.com/intern/diff/D68692755/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,811,947,398 | setitem node shouldn't be deadcode eliminated | leslie-fang-intel | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145714
**Summary**
Fix issue https://github.com/pytorch/pytorch/issues/145697. The `operator.setitem` node was being eliminated as dead code, causing a correctness issue. This PR marks it as impure so it is no longer removed.
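A minimal sketch (plain Python, not FX) of why `operator.setitem` must be treated as impure: its return value is unused, so a purity-unaware dead-code pass would drop the call even though it mutates its argument.

```python
import operator

def run_graph(keep_setitem_node):
    """Toy model of executing a graph with or without the setitem node."""
    d = {}
    if keep_setitem_node:
        # operator.setitem returns None, so a purity-unaware DCE pass sees an
        # unused result and would delete this node -- but the call mutates `d`.
        operator.setitem(d, "weight", 1.0)
    return d

assert run_graph(True) == {"weight": 1.0}   # node kept: correct result
assert run_graph(False) == {}               # node eliminated: silently wrong
```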
**TestPlan**
```
python -u -m pytest -s -v test/fx/test_dce_pass.py -k test_keep_setitem
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,811,856,964 | Doubt about the actual behavior of handle.wait() in torch.distributed | Edenzzzz | closed | [
"oncall: distributed",
"triaged"
] | 7 | NONE | ### 📚 The doc issue
For `comm_handle.wait()` in the case of async comm, the doc (https://pytorch.org/docs/stable/distributed.html#synchronous-and-asynchronous-collective-operations) says
`In the case of CUDA collectives, will block until the operation has been successfully enqueued onto a CUDA stream and the output can be utilized on the default stream without further synchronization.`
However, I suspect this should instead say:
`the output can be utilized on the stream in which wait() is called`.
### Suggest a potential alternative/fix
I used nsys to profile a snippet and verified that `handle.wait()` calls `cudaStreamWaitEvent()`. It occurred to me that when called in a stream context, the comm output will be synchronized for use on that stream. Line 14 in the Python backtrace is exactly `handle.wait()` followed by an element-wise add, and I think this should ensure the correct output.
```
from torch.multiprocessing import spawn
import torch.distributed as dist
import torch
import os
def test(rank, world_size):
    # Code runs on each rank.
    dist.init_process_group("nccl", rank=rank, world_size=2)
    output = torch.ones((1000, 1000)).cuda(rank)
    s = torch.cuda.Stream()
    handle = dist.all_reduce(output, async_op=True)
    # Wait ensures the operation is enqueued, but not necessarily complete.
    # Using result on non-default stream.
    with torch.cuda.stream(s):
        handle.wait()
        # s.wait_stream(torch.cuda.default_stream())
        output.add_(100)
    if rank == 0:
        print(output)
    dist.destroy_process_group()

if __name__ == "__main__":
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29503'
    spawn(test, args=(2,), nprocs=2, join=True)
```

## One remaining question:
In the C++ source code I see [Work::wait](https://github.com/pytorch/pytorch/blob/c6ad08357bf8e766b5220bfb5cbbfdb2a4ec0ca5/torch/csrc/distributed/c10d/Work.cpp#L81) has nothing to do with `cudaStreamWaitEvent`. I searched for `cudaStreamWaitEvent` in the whole codebase and couldn't find anything related to torch.distributed. Would love an explanation of the call stack.
Thanks for your attention.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,811,816,932 | [inductor] Remove type ignores from scheduler.py | jansel | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145712
* #145692
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,812,021 | [Inductor-CPU] Templated codegened kernel name may not correctly reflect epilogue fusions | sanchitintel | open | [
"oncall: pt2",
"oncall: cpu inductor"
] | 4 | COLLABORATOR | ### 🐛 Describe the bug
Ran LLaMA2 with BF16 AMP & max-autotune enabled for CPU.
Some codegened kernel names are of the type `cpp_fused_add_mul_silu_xxx`, but `SiLU` & `mul` are not present in the generated code, and the only epilogue present in the sample codegened code below is `add`.
UPDATE: The issue is related to the kernel naming scheme being non-intuitive - it also includes input nodes
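A toy sketch (hypothetical helper, not Inductor's actual naming code) of how a fused-kernel name can be assembled from the op names of all origin nodes of the fusion, including producer (input) nodes whose compute never appears in the generated kernel:

```python
def fused_kernel_name(origin_ops, index):
    # Collect op names from *all* origin nodes -- including producer (input)
    # nodes -- then sort and deduplicate them to build the kernel name.
    return "cpp_fused_" + "_".join(sorted(set(origin_ops))) + f"_{index}"

# 'silu' and 'mul' come from input nodes; only 'add' is a real epilogue here,
# yet all three end up in the name.
print(fused_kernel_name(["silu", "mul", "add"], 255))  # -> cpp_fused_add_mul_silu_255
```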
cc @chauhang @penguinwu @jianan-gu @leslie-fang-intel
```
extern "C"
void cpp_fused_add_mul_silu_255(const bfloat16* X, const bfloat16* W, const bfloat16* in_ptr2, bfloat16* Y)
{
RECORD_FUNCTION("graph_1_cpp_fused_add_mul_silu_255", c10::ArrayRef<c10::IValue>({}));
constexpr int64_t num_threads = 192;
constexpr int64_t N = 4096;
constexpr int64_t K = 11008;
constexpr int64_t Mr = 48;
constexpr int64_t Nr = 16;
constexpr int64_t Kr = 32;
constexpr int64_t Nr_blocks = (N + Nr - 1) / Nr;
constexpr int64_t Kr_blocks = (K + Kr - 1) / Kr;
constexpr int64_t M = static_cast<int64_t>(1L);
constexpr int64_t Mr_blocks = (M + Mr - 1) / Mr;
constexpr int64_t Mt_blocks = 1;
constexpr int64_t Nt_blocks = 2;
constexpr int64_t Kt_blocks = 344;
constexpr int64_t Mc_blocks = 1;
constexpr int64_t Nc_blocks = 2;
constexpr int64_t Kc_blocks = 38;
constexpr int64_t num_Mc_blocks = (Mr_blocks + Mc_blocks - 1) / Mc_blocks;
constexpr int64_t num_Nc_blocks = (Nr_blocks + Nc_blocks - 1) / Nc_blocks;
constexpr int64_t num_Mt_blocks = (Mr_blocks + Mt_blocks - 1) / Mt_blocks;
constexpr int64_t num_Nt_blocks = (Nr_blocks + Nt_blocks - 1) / Nt_blocks;
constexpr int64_t num_Kt_blocks = (Kr_blocks + Kt_blocks - 1) / Kt_blocks;
// make sure all partitions are assigned
TORCH_CHECK(
Mt_blocks * Nt_blocks * Kt_blocks * 192 >= Mr_blocks * Nr_blocks * Kr_blocks,
"Not all partitions are assigned."
);
#pragma omp parallel num_threads(192)
{
const int tid = omp_get_thread_num();
const int64_t k_group_id = tid / num_Kt_blocks;
const int64_t k_slice_id = tid % num_Kt_blocks;
const int64_t n_group_id = k_group_id / num_Nt_blocks;
const int64_t n_slice_id = k_group_id % num_Nt_blocks;
const int64_t k_block_start = k_slice_id * Kt_blocks;
const int64_t k_block_end = std::min(k_block_start + Kt_blocks, Kr_blocks);
const int64_t n_block_start = n_slice_id * Nt_blocks;
const int64_t n_block_end = std::min(n_block_start + Nt_blocks, Nr_blocks);
const int64_t m_block_start = std::min(n_group_id * Mt_blocks, Mr_blocks);
const int64_t m_block_end = std::min(m_block_start + Mt_blocks, Mr_blocks);
const int64_t num_Mc_blocks_per_thread = (m_block_end - m_block_start + Mc_blocks - 1) / Mc_blocks;
AMXState amx_state;
auto _local_acc_buf = std::make_unique<float[]>(static_cast<int64_t>(Mc_blocks*Mr*Nc_blocks*Nr)); auto local_acc_buf = _local_acc_buf.get();
for (int64_t mc_block_id = 0; mc_block_id < num_Mc_blocks_per_thread; mc_block_id++) {
const int64_t my_mc_block_id = (mc_block_id + n_slice_id) % num_Mc_blocks_per_thread;
const int64_t mc = m_block_start + my_mc_block_id * Mc_blocks;
const int64_t m_start = mc * Mr;
const int64_t m_end = std::min(std::min(mc + Mc_blocks, m_block_end) * Mr, M);
const int64_t m_size = m_end - m_start;
for (int64_t nc = n_block_start; nc < n_block_end; nc += Nc_blocks) {
const int64_t n_start = nc * Nr;
const int64_t n_end = std::min(std::min(nc + Nc_blocks, n_block_end) * Nr, N);
const int64_t n_size = n_end - n_start;
// NB: assume we pad N, nc_block_end won't exceed padded N here.
const int64_t nc_block_end = std::min(nc + Nc_blocks, n_block_end);
if (_local_acc_buf == nullptr) { _local_acc_buf = std::make_unique<float[]>(static_cast<int64_t>(Mc_blocks*Mr*Nc_blocks*Nr)); local_acc_buf = _local_acc_buf.get(); }
for (int64_t kc = k_block_start; kc < k_block_end; kc += Kc_blocks) {
int64_t k_start = kc * Kr;
int64_t k_end = std::min(std::min(kc + Kc_blocks, k_block_end) * Kr, K);
for (int64_t nci = nc; nci < nc_block_end; nci++) {
if (kc == k_block_start) {
cpp_fused_add_mul_silu_255_micro_gemm<static_cast<bool>(false)>(
amx_state,
&(X[static_cast<int64_t>(k_start)]),
&(W[static_cast<int64_t>(16L*k_start + 176128L*nci)]),
&(local_acc_buf[static_cast<int64_t>(Nr*nci + ((-1L)*Nr*nc))]),
static_cast<int64_t>(m_end + ((-1L)*m_start)),
static_cast<int64_t>(Nr),
static_cast<int64_t>(k_end + ((-1L)*k_start)),
static_cast<int64_t>(0L),
static_cast<int64_t>(16L),
static_cast<int64_t>(Nc_blocks*Nr)
);
} else {
cpp_fused_add_mul_silu_255_micro_gemm<static_cast<bool>(true)>(
amx_state,
&(X[static_cast<int64_t>(k_start)]),
&(W[static_cast<int64_t>(16L*k_start + 176128L*nci)]),
&(local_acc_buf[static_cast<int64_t>(Nr*nci + ((-1L)*Nr*nc))]),
static_cast<int64_t>(m_end + ((-1L)*m_start)),
static_cast<int64_t>(Nr),
static_cast<int64_t>(k_end + ((-1L)*k_start)),
static_cast<int64_t>(0L),
static_cast<int64_t>(16L),
static_cast<int64_t>(Nc_blocks*Nr)
);
}
}
}
{
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(m_end + ((-1L)*m_start)); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(n_end + ((-1L)*n_start)); x1+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x1 >= static_cast<int64_t>(0) && x1 < static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(n_end + ((-1L)*n_start)), static_cast<int64_t>(16L))))))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(local_acc_buf + static_cast<int64_t>(x1), static_cast<int64_t>(16));
auto tmp1 = at::vec::Vectorized<bfloat16>::loadu(in_ptr2 + static_cast<int64_t>(n_start + x1), static_cast<int64_t>(16));
auto tmp2 = at::vec::convert<float>(tmp1);
auto tmp3 = tmp0 + tmp2;
auto tmp4 = at::vec::convert<bfloat16>(tmp3);
tmp4.store(Y + static_cast<int64_t>(n_start + x1 + 4096L*m_start + 4096L*x0), static_cast<int64_t>(16));
}
if(C10_UNLIKELY(x1 >= static_cast<int64_t>(16L*(c10::div_floor_integer(static_cast<int64_t>(n_end + ((-1L)*n_start)), static_cast<int64_t>(16L)))) && x1 < static_cast<int64_t>(n_end + ((-1L)*n_start))))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(local_acc_buf + static_cast<int64_t>(x1), static_cast<int64_t>(n_end + ((-1L)*n_start) + ((-16L)*(c10::div_floor_integer(static_cast<int64_t>(n_end + ((-1L)*n_start)), static_cast<int64_t>(16L))))));
auto tmp1 = at::vec::Vectorized<bfloat16>::loadu(in_ptr2 + static_cast<int64_t>(n_start + x1), static_cast<int64_t>(n_end + ((-1L)*n_start) + ((-16L)*(c10::div_floor_integer(static_cast<int64_t>(n_end + ((-1L)*n_start)), static_cast<int64_t>(16L))))));
auto tmp2 = at::vec::convert<float>(tmp1);
auto tmp3 = tmp0 + tmp2;
auto tmp4 = at::vec::convert<bfloat16>(tmp3);
tmp4.store(Y + static_cast<int64_t>(n_start + x1 + 4096L*m_start + 4096L*x0), static_cast<int64_t>(n_end + ((-1L)*n_start) + ((-16L)*(c10::div_floor_integer(static_cast<int64_t>(n_end + ((-1L)*n_start)), static_cast<int64_t>(16L))))));
}
}
}
}
}
}
}
}
amx_state.release([]() { _tile_release(); });
}
}
```
### Versions
Main branch commit cd68d549111a8c5d0e056bbb2922e6b37bf88841, dated Jan 21 | true |
2,811,805,366 | LBFGS-B with cuda implementation (and CPU one too) | AnFunctionArray | open | [
"module: optimizer",
"triaged"
] | 4 | NONE | ### 🚀 The feature, motivation and pitch
I've found LBFGS to be a bottleneck in my application, so I searched and found [this](https://github.com/raymondyfei/lbfgsb-gpu) - I then implemented it in my own LBFGS1.cpp and LBFGS1.h using the cuda files from [culbfgsb](https://github.com/raymondyfei/lbfgsb-gpu/tree/master/culbfgsb) (there seems to be a CPU implementation as well, tbh) - it might be worth considering as an alternative to the slow strong-Wolfe CPU algorithm.
I would add it myself if the Windows build were running on my machine instead of failing at runtime (although I could still do the work and someone else could test the whole build).
**(Currently said cpp and h files are nicely added to my Visual Studio 2022 project and compiled and used there)**
The implementation is fairly simple anyway.
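For readers unfamiliar with the algorithm, the expensive part that a GPU port parallelizes is applying the inverse-Hessian approximation via the two-loop recursion. Below is a minimal pure-Python sketch of that recursion on a toy quadratic — purely illustrative of the textbook algorithm, not the culbfgsb implementation (which additionally handles the "-B" box constraints):

```python
# Illustrative L-BFGS two-loop recursion on f(x) = sum(x_i^2); textbook
# algorithm only -- culbfgsb additionally handles the "-B" box constraints.
def lbfgs_direction(grad, s_list, y_list):
    """Approximate -H^{-1} @ grad from stored curvature pairs (s, y)."""
    q = list(grad)
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append((a, rho))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    if s_list:  # scale by gamma = (s . y) / (y . y) of the newest pair
        s, y = s_list[-1], y_list[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for (a, rho), (s, y) in zip(reversed(alphas), zip(s_list, y_list)):
        b = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]

def minimize(x, steps=20, lr=0.5, history=10):
    grad_f = lambda v: [2.0 * vi for vi in v]
    s_list, y_list = [], []
    for _ in range(steps):
        g = grad_f(x)
        d = lbfgs_direction(g, s_list, y_list)
        x_new = [xi + lr * di for xi, di in zip(x, d)]
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(grad_f(x_new), g)]
        if sum(si * yi for si, yi in zip(s, y)) > 1e-12:  # curvature check
            s_list, y_list = (s_list + [s])[-history:], (y_list + [y])[-history:]
        x = x_new
    return x
```

The inner dot products over the stored pairs are exactly the memory-bound vector ops a CUDA port batches on device.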
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | true |
2,811,794,179 | [Inductor-CPU] Templated codegened kernel names are ambiguous in PyTorch profiler results | sanchitintel | open | [
"oncall: pt2",
"oncall: cpu inductor"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
When max-autotune is used with Inductor-CPU & PyTorch Profiler is used to collect performance data, kernel names are ambiguous - when a templated FlexAttention kernel or a templated GEMM is used, the kernel name doesn't reflect whether the kernel pertains to GEMM or to attention.
e.g. `cpp_fused_add_mul_99` may either be a GEMM kernel with epilogues, or it might be an attention kernel with epilogues.
UPDATE:
The following Inductor config fixes this issue for codegened templated GEMM kernels.
```
inductor_config.cpp.descriptive_names = "inductor_node"
```
### Versions
Current main branch
cc @chauhang @penguinwu | true |
2,811,756,969 | PEP585: .github release triggers | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145708
| true |
2,811,756,935 | PEP585: .github | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145708
* __->__ #145707
| true |
2,811,734,182 | Modify torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py | rec | closed | [
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145706
* #145636
| true |
2,811,726,803 | [MPSInductor] Add rand support | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145718
* __->__ #145705
Using Philox4 as PRNG
Test plan (other than CI)
Run
```python
import torch
from torch._inductor.utils import run_and_get_code
from contextlib import nullcontext

def foo(x):
    return x * torch.randn_like(x)

foo_c = torch.compile(foo)
x = torch.ones(100, 100, device="mps")
y = foo_c(x)
print(y.mean().item(), y.std().item())
for i in range(25):
    print(y[i].mean(), y[i].std())
```
And observe that printed values are close to 0 and 1
TODO: Better `randint` algorithm for large ranges
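For reference, a counter-based generator like Philox maps a (key, counter) pair deterministically to random bits, which is what makes it convenient for stateless GPU kernels. The sketch below is the textbook Philox4x32-10 round function; the exact variant and key schedule used by the MPS kernel may differ:

```python
# Textbook Philox4x32-10 counter-based PRNG (Salmon et al., 2011).
# The MPSInductor kernel may use a different variant; this sketch only
# illustrates the design: the same (key, counter) always yields the
# same four 32-bit outputs, so no per-thread state is needed.
MASK = 0xFFFFFFFF
M0, M1 = 0xD2511F53, 0xCD9E8D57        # round multipliers
W0, W1 = 0x9E3779B9, 0xBB67AE85        # Weyl key increments

def philox4x32(counter, key, rounds=10):
    c = list(counter)
    k = list(key)
    for _ in range(rounds):
        hi0, lo0 = divmod(M0 * c[0], 1 << 32)
        hi1, lo1 = divmod(M1 * c[2], 1 << 32)
        c = [(hi1 ^ c[1] ^ k[0]) & MASK, lo1,
             (hi0 ^ c[3] ^ k[1]) & MASK, lo0]
        k = [(k[0] + W0) & MASK, (k[1] + W1) & MASK]
    return c
```

In a GPU kernel the thread index typically feeds the counter, so each thread draws an independent stream without synchronization.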
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,650,043 | Bot spamming on PR trying to deploy to upload-benchmark-results environment | huydhn | closed | [
"triaged",
"module: devx"
] | 6 | CONTRIBUTOR | ### 🐛 Describe the bug
Reported by @ezyang, for example https://github.com/pytorch/pytorch/pull/144695
<img width="903" alt="Image" src="https://github.com/user-attachments/assets/099bbf5d-340a-4ccc-9cfe-000569e55b03" />
### Versions
The issue manifests on both regular and ghstack PRs
cc @ZainRizvi @kit1980 @clee2000 | true |
2,811,644,603 | CUDA IPC tensors keep occupying GPU memory that cannot be freed | fingertap | closed | [] | 0 | NONE | ### 🐛 Describe the bug
CUDA IPC is very important for sharing tensors between multiple ray actors in RL applications to accelerate the communication. Currently the CUDA IPC memory cannot get freed:
```python
import ray
import torch
from torch.multiprocessing.reductions import reduce_tensor
def create_handle(tensor):
    return reduce_tensor(tensor)

def rebuild_from_handle(handle):
    func, args = handle
    return func(*args)

@ray.remote(num_gpus=1)
class Actor:
    def tensor(self):
        return torch.zeros(4, 1024, 1024, 1024, device="cuda:0")  # 16GB tensor

    def get_handle(self):
        return create_handle(self.tensor())

    def memory_allocated(self):
        torch.cuda.synchronize()
        torch.cuda.empty_cache()
        return torch.cuda.memory_allocated() // 1024 ** 2

def main():
    ray.init()
    a = Actor.remote()
    handle = ray.get(a.get_handle.remote())
    rebuilt = rebuild_from_handle(handle)
    assert rebuilt.shape == (4, 1024, 1024, 1024)
    del rebuilt
    torch.cuda.empty_cache()
    memalloc = ray.get(a.memory_allocated.remote())
    print("Main process memory allocated:", torch.cuda.memory_allocated() // 1024 ** 2, "MB")
    print("Actor memory allocated:", memalloc, "MB")
    assert memalloc == 0
    ray.shutdown()

if __name__ == "__main__":
    main()
```
The output:
```
Main process memory allocated: 0 MB
Actor memory allocated: 16384 MB
```
The behavior I expect is:
```
Main process memory allocated: 0 MB
Actor memory allocated: 0 MB
```
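A CPU-side analogy of the lifetime semantics involved, using stdlib shared memory (illustrative only — CUDA IPC internals differ): the consumer closing its view does not free the segment; only the owning side can release it.

```python
# Stdlib analogy: a consumer's view of a shared segment can be closed
# without freeing the memory -- only the creator's unlink releases it.
# This mirrors the report above, where the actor's 16GB stays allocated
# after the importing process drops its rebuilt tensor.
from multiprocessing import shared_memory

owner = shared_memory.SharedMemory(create=True, size=1024)  # "actor" side
view = shared_memory.SharedMemory(name=owner.name)          # "rebuilt" side

view.close()                      # consumer drops its mapping...
still_there = shared_memory.SharedMemory(name=owner.name)   # ...segment lives on
still_there.close()

owner.close()
owner.unlink()                    # the owner frees the memory for real
```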
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-95
Off-line CPU(s) list: 96-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0+cu121 pypi_0 pypi
[conda] torchvision 0.19.0+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | true |
2,811,602,739 | PyTorch VS2022 official build Windows binary illegal instruction on AVX2(max ISA level) CPU | xuhancn | open | [
"high priority",
"module: crash",
"module: build",
"module: windows",
"module: cpu",
"triaged"
] | 17 | COLLABORATOR | ### 🐛 Describe the bug
# Background
This issue is a re-submission of https://github.com/pytorch/pytorch/issues/145042. We discussed that issue and felt its framing could make it look like an `XPU`-related issue, but it is actually a `CPU`-only issue.
# Reproduce steps:
1. It is easy to reproduce: just switch the Windows CPU build to VS2022. Reference: https://github.com/pytorch/pytorch/pull/143791
2. Add `ciflow/binaries` tag to trigger nightly binary build.
3. Download `wheel-py3_10-cpu` wheel from `Artifacts` page https://github.com/pytorch/pytorch/actions/runs/12972478091?pr=143791
4. Install the wheel to the CPU, which max ISA level is `AVX2`.
5. Run the reproduce code via `pytest`:
```python
# cpu_vs2022_inst_issue.py
import torch
class TestClass:
    def test_grid_sampler_2d(self):
        torch.manual_seed(0)
        b = torch.rand(2, 13, 10, 2, dtype=torch.float64)
        a = torch.rand(2, 3, 5, 20, dtype=torch.float64)
        torch.grid_sampler_2d(a, b, interpolation_mode=0, padding_mode=0, align_corners=False)
```
command line:
```cmd
pytest -v cpu_vs2022_inst_issue.py
```
# Root cause
<img width="1395" alt="Image" src="https://github.com/user-attachments/assets/7dcb8960-9653-49ce-9fa7-30085761cf22" />
I debugged it via WinDBG; the reason is that VS2022 generated `AVX512` instructions, which were then run on a client CPU whose max ISA level is `AVX2`.
# Additional information
1. This issue does not impact the current PyTorch official binaries, because they are built with VS2019.
2. The PyTorch CI can't catch this issue, because the CI runs on server CPUs, which support AVX512.
3. I tried to reproduce it in my local VS2022 build environment, but it could not be reproduced there. I think it only occurs with the PyTorch official build environment.
4. I opened this issue to track it, so we avoid hitting this problem when upgrading to VS2022 in the future.
5. It does not impact the official PyTorch binaries, so I will add the `low priority` tag.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,811,472,484 | [MPS] cholesky implementation | Isalia20 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"ciflow/inductor"
] | 4 | COLLABORATOR | Requested in #77764
Closed #144193 due to a lot of conflicts when rebasing | true |
2,811,367,294 | [POC] [CPU][Inductor] Support INT8 SDPA based on CPP template | Valentine233 | closed | [
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | COLLABORATOR | This PR implements the Int8 SDPA CPU kernel based on CPP template following the RFC #144941.
ARs:
- [ ] INT8 SDPA patterns:
- Done: FP32 with/wo mask, batch size >/= 1;
- Remain: BF16 with/wo mask, batch size >/= 1.
- [ ] Add the `select_strategy` to generate the kernel with various parallel loop strategies.
- [ ] Enable and validate on related models, and make sure the good accuracy/perf.
- [ ] Refactor the codes, and make best use of the common parts.
- [ ] Add necessary comments.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,315,942 | PEP585: .github 1b1 | aorenste | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145699
| true |
2,811,302,087 | PEP585: .github 1b2 | aorenste | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145698
| true |
2,811,298,530 | Constant folding pass leads to model inference errors | fernchen | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | ### 🐛 Describe the bug
I accidentally discovered that PyTorch's built-in constant-folding optimization pass (`torch._inductor.constant_folding.constant_fold`) may cause inference errors in the model in certain situations, as shown below:
```Python
import torch
import torch._dynamo as torch_dynamo
import torchvision.models as models
import torch.ao.quantization.pt2e.duplicate_dq_pass
from torch._inductor import config
config.freezing = True
@torch_dynamo.register_backend(name="tmp")
def tmp_compile(gm, inputs, **kwargs):
    # print("before const fold: \n", gm.graph)
    # return gm
    import torch._inductor.constant_folding
    torch._inductor.constant_folding.constant_fold(gm)
    # print("after const fold: \n", gm.graph)
    return gm

def load_model():
    model = models.swin_v2_s(weights=True)
    model.eval()
    return model

if __name__ == "__main__":
    model = load_model()
    x = torch.randn(1, 3, 224, 224)
    x_cp = x.detach().clone()
    import copy
    model_cp = copy.deepcopy(model)
    eager_out = model_cp(x_cp)
    sm = torch.compile(model, backend="tmp")
    with torch.inference_mode():
        out = sm(x)
    torch.testing.assert_close(eager_out, out, atol=1e-5, rtol=1e-5)
```
The tested model is swin_v2_s from torchvision, compiled with `torch.compile`. It should be noted that I customized a simple backend and did nothing else except constant folding.
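For context on what the pass does: constant folding rewrites subexpressions whose operands are all constants into precomputed literals, so any numerical divergence in the folded values shows up directly in inference results. A generic stdlib illustration of the idea on Python ASTs (not the Inductor implementation, which works on FX graphs):

```python
# Minimal constant-folding pass over Python expression ASTs; the Inductor
# pass applies the same idea to FX graph nodes with constant tensor inputs.
import ast

class FoldBinOps(ast.NodeTransformer):
    """Fold arithmetic on literal operands, e.g. 2 * 3 + x -> 6 + x."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            fn = ops.get(type(node.op))
            if fn is not None:
                folded = ast.Constant(fn(node.left.value, node.right.value))
                return ast.copy_location(folded, node)
        return node

def fold(expr):
    tree = ast.parse(expr, mode="eval")
    tree = ast.fix_missing_locations(FoldBinOps().visit(tree))
    return ast.unparse(tree)
```

e.g. `fold("2 * 3 + x")` returns `"6 + x"`; a bug report like this one means the "folded" value differs from what eager evaluation would produce.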
### Versions
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 19.1.6 (https://github.com/conda-forge/clangdev-feedstock a097c63bb6a9919682224023383a143d482c552e)
CMake version: version 3.31.2
Libc version: glibc-2.31
Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8369HB CPU @ 3.30GHz
Stepping: 11
CPU MHz: 3800.114
CPU max MHz: 4200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6600.06
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 ida arat avx512_vnni
Versions of relevant libraries:
[pip3] flake8==3.8.2
[pip3] flake8-bugbear==20.1.4
[pip3] flake8-comprehensions==3.3.0
[pip3] flake8-executable==2.0.4
[pip3] flake8-pyi==20.5.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.5.1+cpu pypi_0 pypi
[conda] torchaudio 2.5.1+cpu pypi_0 pypi
[conda] torchvision 0.20.1+cpu pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | true |
2,811,297,236 | [CI][CUDA][cuSPARSELt] cusparselt 0.6.3 and cu121 related cleanups | nWEIdia | closed | [
"oncall: distributed",
"open source",
"module: amp (automated mixed precision)",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 4 | COLLABORATOR | Fix inconsistency between CD (nightly binary) and CI (what ci jobs install for cu126)
Remove remaining cu121 ci/docker jobs
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mcarilli @ptrblck @leslie-fang-intel @jgong5 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan @atalman @malfet @eqy @tinglvv | true |
2,811,288,468 | [inductor] Remove mask_str from IndexingOptions | jansel | closed | [
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145689
* #145688
* __->__ #145695
* #145671
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,281,430 | feat: improve indexing error messages | MattGPT-ai | closed | [
"triaged",
"open source",
"Stale",
"release notes: mps"
] | 3 | NONE | Improves the error messages when certain tensor operations fail, for example
`RuntimeError: index_select(): Expected dtype int32 or int64 for index` will now say
e.g.
`RuntimeError: index_select(): Expected dtype int32 or int64 for index, got float32`
| true |
2,811,278,439 | AMD MI300A Unified Memory Support | lancelotnd | open | [
"feature",
"module: rocm",
"triaged",
"topic: new features"
] | 4 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
I am working on improving performance for LLM workloads on the AMD Instinct™ MI300A Accelerator. This APU has a fully unified memory architecture that PyTorch does not take advantage of at this time. Because the GPU and CPU share the same physical memory, memcpy ops become redundant; they waste time and duplicate buffers, which limits the size of the model we can train. Adding support for unified memory on ROCm for that particular APU would allow for zero-copy operations.
The motivation is similar to that of #140787, but for ROCm instead of MPS.
Given that this APU is targeted for the most demanding HPC ML workloads, there is a great interest in optimizing the performance of PyTorch for it. Notably, El Capitan, the top1 Supercomputer from [top500](https://top500.org/) runs exclusively with AMD's MI300A.
### Alternatives
_No response_
### Additional context
To facilitate understanding I provide more details as to the kind of changes this involves.
To understand the differences in operations between non-unified and unified memory, let us consider a regular matrix multiplication of matrices $A$ and $B$ where the result is stored in matrix $C$.
In a non-unified setup with a discrete GPU (device):
1. `malloc` matrices $A,B,C$ on the host each of size $n \times n$.
2. ... values are written on matrices $A$,$B$
3. `CudaMalloc` to allocate memory for matrices $A',B',C'$
4. `CudaMemcpy` $A \rightarrow A'$, and $B \rightarrow B'$ (`hostToDevice`)
5. Kernel launch (results are written in $C'$).
6. `CudaMemcpy` $C' \rightarrow C$ (`DeviceToHost`) to get back the results
Whereas with unified-memory you would have:
1. `CudaMallocManaged` matrices $A,B,C$ on the host each of size $n \times n$.
2. ... values are written on matrices $A$,$B$
5. Kernel launch (results are written in $C$).
On machines with discrete GPUs the concept of unified-memory is purely virtual and still results in memory movement by way of page faults and page migrations. This adds a lot of overhead.
On architectures where the CPU and GPU share the same physical memory, such as Apple Silicon Macs or the AMD MI300A, any memcpy operation becomes pointless and wastes space and time.
The quickest and dirtiest *hack* for the support of unified memory on Pytorch is to replace all `cudaMalloc` by `cudaMallocManaged` and to get rid of `memCopy` operations. [Like in this paper](https://doi.org/10.1109/ACSOS-C52956.2021.00029). This however is not ideal, nor portable.
Perhaps a better way to do it would be to toggle on or off the unified memory. Given that this is a relatively new architecture, more hardware is likely to come out with such configuration from different manufacturers and so it would be great to have a device-agnostic support of unified memory.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,811,276,060 | [inductor] Change type of get_backend_features to OrderedSet | jansel | closed | [
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145712
* __->__ #145692
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,275,967 | [inductor] Add some typing to common.py | jansel | closed | [
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145712
* #145692
* __->__ #145691
* #145690
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,234,883 | [inductor] Add some typing to simd.py | jansel | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145712
* #145692
* #145691
* __->__ #145690
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,224,680 | [inductor] Support non-power-of-2 cooperative RSPLIT | jansel | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143812
* #142295
* __->__ #145689
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,224,646 | [inductor] Add some typing to triton.py | jansel | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145689
* __->__ #145688
* #145695
* #145671
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,211,133 | [Inductor-CPU] `add` is not being fused with templated int8 WoQ GEMM while running LLaMA2 | sanchitintel | open | [
"oncall: pt2",
"oncall: cpu inductor"
] | 3 | COLLABORATOR | ### 🐛 Describe the bug
When max-autotune is enabled with Inductor-CPU, and when int8 WoQ GEMM is used with auto-tuning enabled, `add` isn't being fused with it.
So there's an opportunity cost: for LLaMA in BF16, with the BF16 templated GEMM, we are even able to fuse `add` and RMSNorm's decomposed ops as epilogues.
But the same doesn't happen when the activation is BF16 & the weights are quantized to int8.
This issue is not reproducible with a simple UT, so will have to check the precise scenario in LLaMA2 to ascertain why the intended fusions are not happening.
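For reference, the computation at stake is roughly `y = x @ dequant(w_int8) + bias`, where the trailing `add` is the epilogue that currently fails to fuse into the templated kernel. A tiny pure-Python reference (shapes and the per-output-channel scaling scheme are illustrative, not the exact Inductor layout):

```python
# Reference for int8 weight-only-quantized GEMM with a fused `add` epilogue:
# y = x @ (w_int8 * scale) + bias, with per-output-channel scales.
def woq_gemm_add(x, w_int8, scale, bias):
    m, k = len(x), len(x[0])
    n = len(w_int8[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for t in range(k):
                acc += x[i][t] * (w_int8[t][j] * scale[j])  # dequantize on the fly
            out[i][j] = acc + bias[j]  # epilogue: the `add` that should fuse
    return out
```

When the `add` is not fused, the accumulator is written out and re-read by a separate elementwise kernel, which is the overhead this issue is about.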
cc @chauhang @penguinwu @leslie-fang-intel @Guobing-Chen @chunyuan-w @jianan-gu
### Versions
Current main branch | true |
2,811,162,520 | [inductor] Adjust test_log_fp64 to only run when float64 is supported. | dcci | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,124,908 | [Easy] update pip sources for ROCm in nightly pull tool | XuehaiPan | open | [
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149143
* __->__ #145685
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,811,113,793 | [inductor triton] Disable incorrect TF32 usage on CUDA capability < 8 | benjaminglass1 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"module: inductor",
"ciflow/inductor"
] | 5 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145683
* #145655
* #145654
* #145095
* __->__ #145684
Triton 2.2 and greater have a bug where allowing TF32 generation for a GPU that does not support TF32 will cause code generation errors. Patch around this problem by:
1. Adding a function to `torch.cuda` that determines whether CUDA hardware is capable of using the TF32 format.
2. Using that function to explicitly disable TF32 generation when calling Triton, where needed.
To demonstrate that this fix works, try running `test/inductor/test_max_autotune.py` on a GPU with CUDA compute capability < 8 (e.g. any NVIDIA consumer GPU) without this fix.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,110,100 | cpp_wrapper: fix CPU cpp_wrapper and max-autotune tests | benjaminglass1 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146424
* #146109
* __->__ #145683
* #145655
* #145654
* #145095
Both of these tests mostly failed due to incorrect assumptions about the generated code.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,811,088,644 | PEP585: .github 2 | aorenste | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145682
| true |
2,811,055,638 | Avoid data-dependent errors by runtime assert substitution. | ysiraichi | open | [
"open source",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 9 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142372
* __->__ #145681
This PR adds a simplification step, using runtime asserts, whenever we
are about to raise a data-dependent error. The recorded runtime
asserts serve as a source of knowledge for substituting the free symbols in
the given expression.
This is useful for avoiding data-dependent errors — specifically, avoiding
guards on expressions with unbacked integers that are deducible from
past runtime asserts.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,811,039,337 | Add icdf to Gamma dist | moghadas76 | open | [
"triaged",
"open source",
"Stale"
] | 6 | NONE | Fixes #145679
| true |
2,811,029,133 | Gamma cdf function | moghadas76 | open | [
"module: distributions",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
```python
chi2 = torch.distributions.Gamma(df=self.df.squeeze(-1)).icdf(uniform)
```
File "/home/seyed/miniconda3/envs/env/lib/python3.11/site-packages/torch/distributions/distribution.py", line 212, in icdf
raise NotImplementedError
NotImplementedError
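Until `icdf` lands for `Gamma`, one possible pure-Python workaround (a rough sketch with hypothetical helper names; rate fixed to 1, and the bisection bracket does not cover the extreme upper tail) is to invert the regularized lower incomplete gamma series numerically:

```python
import math

def gamma_cdf(shape, x, terms=200):
    """Regularized lower incomplete gamma P(shape, x) via its power series:
    P(a, x) = x^a * e^(-x) / Gamma(a) * sum_{n>=0} x^n / (a*(a+1)*...*(a+n))."""
    if x <= 0.0:
        return 0.0
    total, term = 0.0, 1.0 / shape
    for n in range(terms):
        total += term
        term *= x / (shape + n + 1)
    return total * math.exp(shape * math.log(x) - x - math.lgamma(shape))

def gamma_icdf(shape, q, iters=200):
    """Invert the CDF by bisection; the CDF is monotone in x, and the upper
    bracket is a rough bound covering all but the extreme upper tail."""
    lo, hi = 0.0, shape + 10.0 * math.sqrt(shape) + 40.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(shape, mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Gamma(concentration=1, rate=1) is Exponential(1): icdf(0.5) = ln 2
print(gamma_icdf(1.0, 0.5))  # ~0.6931
```

For tensor inputs this would need to be vectorized, but it illustrates that a series-plus-bisection `icdf` is feasible.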
### Versions
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] easy-torch==1.3.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.15.0
[pip3] pytorch-forecasting==1.2.0
[pip3] pytorch-lightning==2.2.0
[pip3] torch==2.3.0
[pip3] torch_cluster==1.6.3+pt23cu121
[pip3] torch_geometric==2.4.0
[pip3] torch_scatter==2.1.2+pt23cu121
[pip3] torch_sparse==0.6.18+pt23cu121
[pip3] torch_spline_conv==1.2.2+pt23cu121
[pip3] torch-summary==1.4.5
[pip3] torchaudio==2.3.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.3.0.post0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.18.0
[pip3] triton==2.3.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.52 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.52 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-forecasting 1.2.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
cc @fritzo @neerajprad @alicanb @nikitaved | true |
2,811,019,416 | torch.tensor reports inconsistent errors when running on GPU and CPU | CZXIANGOvO | closed | [] | 1 | NONE | ### 🐛 Describe the bug
# description
The error messages displayed for `torch.tensor` indexing are inconsistent between GPU and CPU: the GPU message is very abstract and difficult to understand, while the CPU message is straightforward.
# GPU
```python
import torch
data = torch.tensor(np.random.randn(1, 10), dtype=torch.float32).to("cuda:0")
newout = data[range(data.shape[1]), 0]
```

# CPU
```python
import torch
data = torch.tensor(np.random.randn(1, 10), dtype=torch.float32).to("cpu")
newout = data[range(data.shape[1]), 0]
```

### Versions
ubuntu 20.04.6
torch 1.13.1+cu116
CUDA Version: 12.4 | true |
2,810,916,337 | [1/N] Improve typing in torch/_C/__init__.pyi.in | cyyever | closed | [
"oncall: distributed",
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 14 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mcarilli @ptrblck @leslie-fang-intel @jgong5 | true |
2,810,907,179 | Build RowwiseScaledMM.cu for SM89 | alexsamardzic | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"topic: build",
"module: inductor",
"ciflow/inductor"
] | 10 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145676
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,810,815,305 | [c10d] Add NCCL memory allocator | kwen2501 | closed | [
"oncall: distributed",
"module: nccl",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 15 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145675
This PR implements a small UI improvement over #133603.
It prepares an NCCL memory allocator in torch C++ and exposes it via pybind, so that users can use it directly.
UI:
```
pool = torch.cuda.MemPool(backend.mem_allocator)
with torch.cuda.use_mem_pool(pool):
tensor = torch.arange(1024 * 1024 * 2, device=device)
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,810,812,915 | Convert Tensor lr to 0-dim as needed for the optimizer to normally work | Tony-Y | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 62 | CONTRIBUTOR | Fixes #145461
| true |
2,810,747,029 | [Custom Ops] Fix f-strings in custom ops error message | yanboliang | closed | [
"module: custom-operators",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145673
* #145588
| true |
2,810,741,786 | [3/N] Remove unnecessary once flag usage | cyyever | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,810,691,146 | [inductor] Fix handling of fixed XBLOCK larger than xnumel=1 | jansel | closed | [
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145689
* #145688
* #145695
* __->__ #145671
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,810,677,890 | compiling for rocm gfx1010, getting cuda errors | bragadzilla | open | [
"module: build",
"module: rocm",
"triaged"
] | 8 | NONE | ### 🐛 Describe the bug
I am compiling for ROCm, but getting a CUDA error.
The same error has been discussed in https://github.com/pytorch/pytorch/issues/108344, but the user said it was solved by using `USE_ROCM` and did not elaborate further.
```
export VERBOSE=1
export PYTORCH_ROCM_ARCH="gfx1010"
export USE_ROCM=1
export ROCM_PATH=/usr
export USE_CUDA=0
export USE_XPU=0
```
```
[6372/7445] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -DXNN_LOG_LEVEL=0 -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/user/git/pytorch/build/aten/src -I/home/user/git/pytorch/aten/src -I/home/user/git/pytorch/build -I/home/user/git/pytorch -I/home/user/git/pytorch/cmake/../third_party/benchmark/include -I/home/user/git/pytorch/third_party/onnx -I/home/user/git/pytorch/build/third_party/onnx -I/home/user/git/pytorch/nlohmann -I/home/user/git/pytorch/torch/csrc/api -I/home/user/git/pytorch/torch/csrc/api/include -I/home/user/git/pytorch/caffe2/aten/src/TH -I/home/user/git/pytorch/build/caffe2/aten/src/TH -I/home/user/git/pytorch/build/caffe2/aten/src -I/home/user/git/pytorch/build/caffe2/../aten/src -I/home/user/git/pytorch/torch/csrc -I/home/user/git/pytorch/third_party/miniz-3.0.2 -I/home/user/git/pytorch/third_party/kineto/libkineto/include -I/home/user/git/pytorch/third_party/kineto/libkineto/src -I/home/user/git/pytorch/third_party/cpp-httplib -I/home/user/git/pytorch/aten/src/ATen/.. -I/home/user/git/pytorch/third_party/FXdiv/include -I/home/user/git/pytorch/c10/.. 
-I/home/user/git/pytorch/third_party/pthreadpool/include -I/home/user/git/pytorch/third_party/cpuinfo/include -I/home/user/git/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/home/user/git/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/home/user/git/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/home/user/git/pytorch/third_party/NNPACK/include -I/home/user/git/pytorch/third_party/fbgemm/include -I/home/user/git/pytorch/third_party/fbgemm -I/home/user/git/pytorch/third_party/fbgemm/third_party/asmjit/src -I/home/user/git/pytorch/third_party/ittapi/src/ittnotify -I/home/user/git/pytorch/third_party/FP16/include -I/home/user/git/pytorch/third_party/tensorpipe -I/home/user/git/pytorch/build/third_party/tensorpipe -I/home/user/git/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/user/git/pytorch/third_party/fmt/include -I/home/user/git/pytorch/build/third_party/ideep/mkl-dnn/include -I/home/user/git/pytorch/third_party/ideep/mkl-dnn/src/../include -I/home/user/git/pytorch/third_party/flatbuffers/include -isystem /home/user/git/pytorch/build/third_party/gloo -isystem /home/user/git/pytorch/cmake/../third_party/gloo -isystem /home/user/git/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/user/git/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/user/git/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/user/git/pytorch/third_party/protobuf/src -isystem /home/user/git/pytorch/third_party/XNNPACK/include -isystem /home/user/git/pytorch/third_party/ittapi/include -isystem /home/user/git/pytorch/cmake/../third_party/eigen -isystem /home/user/git/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/user/git/pytorch/third_party/ideep/include -isystem /home/user/git/pytorch/INTERFACE -isystem /home/user/git/pytorch/third_party/nlohmann/include -isystem /home/user/git/pytorch/build/include -D_GLIBCXX_USE_CXX11_ABI=1 
-fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-error=redundant-move -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -DASMJIT_STATIC -fopenmp -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o -c /home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp: In member function 'bool torch::jit::Node::hasSideEffects() const':
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1181:16: error: 'set_stream' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::set_stream'?
1181 | case cuda::set_stream:
| ^~~~~~~~~~
In file included from /home/user/git/pytorch/torch/csrc/jit/ir/ir.h:18,
from /home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1:
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:223:11: note: 'c10::cuda::set_stream' declared here
223 | _(cuda, set_stream) \
| ^~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1182:16: error: '_set_device' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::_set_device'?
1182 | case cuda::_set_device:
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:222:11: note: 'c10::cuda::_set_device' declared here
222 | _(cuda, _set_device) \
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1183:16: error: '_current_device' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::_current_device'?
1183 | case cuda::_current_device:
| ^~~~~~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:224:11: note: 'c10::cuda::_current_device' declared here
224 | _(cuda, _current_device) \
| ^~~~~~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1184:16: error: 'synchronize' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::synchronize'?
1184 | case cuda::synchronize:
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:225:11: note: 'c10::cuda::synchronize' declared here
225 | _(cuda, synchronize) \
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
```
<details>
<summary>output of `rocminfo`</summary>
```
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
Runtime Ext Version: 1.6
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 9 5950X 16-Core Processor
Uuid: CPU-XX
Marketing Name: AMD Ryzen 9 5950X 16-Core Processor
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3400
BDFID: 0
Internal Node ID: 0
Compute Unit: 32
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 4
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx1010
Uuid: GPU-XX
Marketing Name: AMD Radeon RX 5700 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 4096(0x1000) KB
Chip ID: 29471(0x731f)
ASIC Revision: 2(0x2)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2100
BDFID: 2048
Internal Node ID: 1
Compute Unit: 40
SIMDs per CU: 2
Shader Engines: 2
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Memory Properties:
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 1280(0x500)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 151
SDMA engine uCode:: 35
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1010:xnack-
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
</details>
[full log](https://github.com/user-attachments/files/18544411/log.txt)
please do not tell me to use docker
### Versions
<details>
<summary>output of `python torch/utils/collect_env.py`</summary>
```
Collecting environment information... PyTorch version: 2.7.0.dev20250123+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Gentoo Linux (x86_64)
GCC version: (Gentoo 13.2.1_p20240210 p14) 13.2.1 20240210
Clang version: 19.1.5
CMake version: version 3.31.4
Libc version: glibc-2.40
Python version: 3.12.8 (main, Jan 12 2025, 23:50:05) [GCC 14.2.1 20241116] (64-bit runtime)
Python platform: Linux-6.7.7-gentoo-dist-x86_64-AMD_Ryzen_9_5950X_16-Core_Processor-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 5700 XT (gfx1010:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 73%
CPU max MHz: 5083.3979
CPU min MHz: 2200.0000
BogoMIPS: 6789.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] onnx==1.16.2
[pip3] optree==0.14.0
[pip3] pytorch-triton-rocm==3.2.0+git0d4682f0
[pip3] pytorch-triton-xpu==3.2.0+gite98b6fcb
[pip3] torch==2.7.0.dev20250123+rocm6.3
[pip3] torchaudio==2.6.0.dev20250124+rocm6.3
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0.dev20250124+rocm6.3
[conda] Could not collect
```
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,810,670,135 | [CUDA][B200] Update the number of threads in `avg_pool2d` backward for SM 10.0 | eqy | closed | [
"module: cuda",
"open source",
"Merged",
"module: pooling",
"ciflow/trunk",
"topic: not user facing"
] | 34 | COLLABORATOR | Fixes register count issue when launching on SM 10.0, originally authored by @bilal2vec
cc @ptrblck @msaroufim @mikaylagawarecki | true |
2,810,648,759 | Pull request | rohansudarshan1810 | closed | [
"open source"
] | 3 | NONE | Fixes #ISSUE_NUMBER
| true |
2,810,647,083 | Make sure to evaluate annotation strings in the context of where the prototype was created | aorenste | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | CONTRIBUTOR | This was incorrectly evaluating the annotation in the context of infer_schema - make sure to evaluate annotation strings in the context of where the prototype was created instead.
Fixes #145481
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145667
| true |
2,810,632,136 | [autocast][pytorch] Support autocast for MTIA (policy) | nautsimon | closed | [
"fb-exported",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mtia"
] | 7 | MEMBER | Summary: Add autocast support for MTIA (policy)
Reviewed By: egienvalue
Differential Revision: D68604796
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @egienvalue | true |
2,810,632,070 | Pickling duplicates Storage | roosephu | open | [
"module: pickle",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
When pickling two tensors with the same underlying storage, I expected the storage to be only pickled once. However, it seems that the storage is saved twice.
```python
import torch
import pickle
x = torch.zeros(1_000_000)
print(len(pickle.dumps(x)), len(pickle.dumps([x, x[:]])))
```
Output:
```
4000402 8000697
```
Expected output:
```
4000402 4000???
```
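For context, the duplication is consistent with pickle's identity-based memoization: pickle stores an object only once when it sees the *same* Python object twice, and each tensor's reduce path hands pickle a distinct storage wrapper object. A torch-free sketch of the memo behavior, with plain lists standing in for storages:

```python
import pickle

payload = list(range(100_000))
a = {"payload": payload}
b = {"payload": payload}     # same list object: memoized, stored once
c = {"payload": payload[:]}  # a copy: distinct object, stored again

shared = len(pickle.dumps([a, b]))
copied = len(pickle.dumps([a, c]))
print(shared < copied)  # → True: the shared object is pickled only once
```

For comparison, `torch.save` serializes each underlying storage once, so it can serve as a workaround when shared views must stay shared on disk.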
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-112-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 176
On-line CPU(s) list: 0-175
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468V
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 44
Socket(s): 2
Stepping: 8
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.1 MiB (88 instances)
L1i cache: 2.8 MiB (88 instances)
L2 cache: 176 MiB (88 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-87
NUMA node1 CPU(s): 88-175
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.20
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchmetrics==1.5.0
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] _anaconda_depends 2024.06 py312_mkl_2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.8 py312h5eee18b_0
[conda] mkl_random 1.2.4 py312hdb19cb5_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] numpydoc 1.7.0 py312h06a4308_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.20 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torchaudio 2.5.1+cu121 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
``` | true |
2,810,570,168 | Remove FBGEMM sccache hack | huydhn | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Testing https://github.com/pytorch/pytorch/actions/runs/12959358756, sccache is working correctly now
| true |
2,810,533,892 | The difference between CPP and python3 on matrix permute | George0726 | closed | [] | 0 | NONE | ### 🐛 Describe the bug
I am trying to port some code to C++.
However, I found that the permute and transpose functions in C++ produce different values from permute in Python 3, even though the tensor after `view` is the same. I tried:
``` python
output = tensor1.view(1, -1, 24, 128).permute(0, 2, 1, 3) #python3 code
```
``` cpp
output = tensor1.view({1, -1, 24, 128}).permute({0, 2, 1, 3}); //CPP code
```
Follow-up: I am testing the difference between C++ and Python 3. For my output of shape (1, 24, 128, 128), I found that only output[:,0,0,:] and output[:,23,127,:] match, while the other slices differ significantly. I also added **contiguous()** before permuting and inspected tensor.stride(); neither helped with debugging.
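For reference, here is a minimal torch-free sketch of how permute works as a pure stride swap over one flat buffer — the same model both the Python and C++ frontends use — which suggests checking whether the C++ side reads the buffer without honoring strides (e.g. a raw memcpy of a non-contiguous view):

```python
# Sketch: permute only reorders shape/stride metadata; the data stays put.
def contiguous_strides(shape):
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def permute(shape, strides, dims):
    return [shape[d] for d in dims], [strides[d] for d in dims]

shape = (1, 6, 24, 128)                # stand-in for view(1, -1, 24, 128)
strides = contiguous_strides(shape)
pshape, pstrides = permute(shape, strides, (0, 2, 1, 3))

# Flat offset of an element, honoring strides.
flat = lambda idx, st: sum(i * s for i, s in zip(idx, st))

# Element (0, c, r, k) of the permuted view is element (0, r, c, k) of the
# original layout — same flat offset, same buffer.
assert flat((0, 5, 3, 7), pstrides) == flat((0, 3, 5, 7), strides)
print(pshape, pstrides)  # → [1, 24, 6, 128] [18432, 128, 3072, 1]
```

If the C++ code copies or compares the raw buffer of a permuted (non-contiguous) view without applying these strides, every element except a few boundary slices will appear "different", which matches the symptom described above.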
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.2.65
[pip3] nvidia-cuda-cupti-cu12==12.4.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.99
[pip3] nvidia-cuda-runtime-cu12==12.4.99
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.0.44
[pip3] nvidia-curand-cu12==10.3.5.119
[pip3] nvidia-cusolver-cu12==11.6.0.99
[pip3] nvidia-cusparse-cu12==12.3.0.142
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.99
[pip3] nvidia-nvtx-cu12==12.4.99
[pip3] rotary-embedding-torch==0.8.6
[pip3] torch==2.4.1+cu124
[pip3] torchaudio==2.4.1+cu124
[pip3] torchmetrics==1.6.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.19.1+cu124
[pip3] triton==3.0.0
[conda] Could not collect | true |
2,810,500,337 | [CUDA] Change slim-wheel libraries load order | pytorchbot | closed | [
"open source"
] | 1 | COLLABORATOR | There is no libnvjitlink in CUDA-11.x , so attempts to load it first will abort the execution and prevent the script from preloading nvrtc
Fixes issues reported in https://github.com/pytorch/pytorch/pull/145614#issuecomment-2613107072
cc @atalman @malfet @ptrblck @eqy @tinglvv
| true |
2,810,473,542 | PYTORCH_NO_CUDA_MEMORY_CACHING=0 affects native allocator as if: = 1 | Teriks | closed | [
"module: docs",
"module: cuda",
"triaged"
] | 3 | NONE | ### 🐛 Describe the bug
It came as a surprise to me that setting `PYTORCH_NO_CUDA_MEMORY_CACHING=0` is treated the same as `PYTORCH_NO_CUDA_MEMORY_CACHING=1` in the native allocator code.
The Python code in torch handles this env var by checking its value, whereas the C++ allocator code essentially just checks whether it is set at all and changes behavior accordingly.
I experienced unexpected OOM conditions after explicitly setting this env var to 0 in my environment, and it took a while to realize what was happening.
I see at least one stale pull referencing this behavior at: https://github.com/pytorch/pytorch/pull/90431
It definitely confused me for a bit, and it also affects HIP?
See line: https://github.com/pytorch/pytorch/blob/2a70de7e9257e3f8c2874a10e3612c8939b79867/c10/cuda/CUDACachingAllocator.cpp#L3267
This issue might arise elsewhere as well; I didn't check.
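A minimal sketch of value-based flag parsing — i.e. what the Python side does and the C++ allocator currently does not. `env_flag` is a hypothetical helper for illustration, not an existing torch API:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    # Interpret the variable's *value*, not its mere presence.
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() not in ("0", "false", "no", "off", "")

os.environ["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "0"
print(env_flag("PYTORCH_NO_CUDA_MEMORY_CACHING"))  # → False ("0" means disabled)
os.environ["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "1"
print(env_flag("PYTORCH_NO_CUDA_MEMORY_CACHING"))  # → True
```

With presence-only checking, both branches above would take the same path, which is the surprise described in this issue.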
### Versions
Torch 2.5.1
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy | true |
2,810,449,563 | Reapply "refactor tensorify restart logic to use sources (#141517)" (#143623) | bobrenjc93 | closed | [
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145660
* #145659
* #145650
This reverts commit 4f8b7c4272db521f7ffc4070ce1bdece513d1183.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,449,469 | add speculation log divergence test | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145659
Followup from a SEV. Confirmed that this breaks when stacked on top of https://github.com/pytorch/pytorch/pull/145660 (offending PR that caused the SEV)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,431,288 | [dynamo] Fix read/write conflicts in a cuda test | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145658
Prior to this patch, the `test_cuda_event_created_outside_of_graph`
is flaky in CI, and that's because we have read and write to the same
`foo` tensor buffer from 2 different streams. This patch eliminates that
by adding a synchronization to wait till read finishes before starting
the write.
Fixes #133837, #133828.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,405,320 | PEP585: .github 1a | aorenste | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145657
| true |
2,810,395,787 | Stable C bindings for libtorch | ehartford | open | [
"module: cpp",
"triaged"
] | 4 | NONE | ### 🚀 The feature, motivation and pitch
Thank you for all your hard work on PyTorch and LibTorch! The C++ API is excellent, but there’s a recurring need across many developer communities for an official or semi-official C API (i.e., a stable libtorch_c.so / “C shim”).
## Why a C API?
- Language Interop: Most languages (Go, Rust, Nim, D, Julia, R, etc.) have straightforward foreign function interfaces to C but not necessarily to C++ with templates/exceptions. An official C layer would dramatically simplify and standardize these bindings.
- Stability: A C API can be versioned more predictably and avoid name-mangling or ABI mismatches. This reduces the maintenance burden on community wrappers that rely on private or auto-generated bindings.
- Ecosystem Expansion: With a stable C library, more open-source contributors could create robust, long-lived libraries in many different languages without reinventing a custom “C++ → C → X Language” layer each time.
## Proposed Approach
- Minimal, Carefully Designed C API: Provide a stable subset of tensor creation, basic ops, module loading, forward passes, etc. The full C++ feature set isn’t necessary initially.
- Officially Documented & Versioned: Offer minimal but clear function signatures, guaranteed not to break across patch/minor releases.
- Built/Distributed Alongside LibTorch: So that developers can do -ltorch_c or similar, just as they do for -ltorch.
## Potential Impact
- Ease of Development: By lowering the complexity of bridging languages, new communities can adopt PyTorch with less friction.
- Reduced Maintenance for the Core Team: While it might be some up-front work, a stable C layer cuts down on repeated bug reports/issues from “unofficial, hacky” C++ wrappers that break each release.
- Wider Adoption: A formal C API can accelerate usage of PyTorch in more domains and languages where official support is lacking.
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser | true |
2,810,384,503 | cpp_wrapper: enable all CPU repro tests | benjaminglass1 | closed | [
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146424
* #146109
* #145683
* __->__ #145655
* #145654
* #145095
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,810,384,444 | cpp_wrapper: fix set_.source_Tensor lowering | benjaminglass1 | closed | [
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146424
* #146109
* #145683
* #145655
* __->__ #145654
* #145095
Adds a C-shim fallback for `set_.source_Tensor`, which is effectively required by `ir.SetSourceTensorKernel`. As a necessary prerequisite to use that IR node, updates `CppWrapperCpu` to handle in-place returns in C-shim ops (the arguments for those returns are silently dropped by `torchgen`).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,810,373,279 | [draft_export] fix dense-in-memory check for inferring fakes | pianpwk | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4 | CONTRIBUTOR | Test Plan: fixes check for dense tensors with size-1 dimensions
Differential Revision: D68644028
| true |
2,810,360,776 | [c10d] implement ReduceOp.unbox() | yifuwang | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145652
* #144886
```python
>>> import torch
>>> op = torch.classes.c10d.ReduceOp()
>>> op
<torch.ScriptObject object at 0x5b688b0>
>>> torch.distributed.ReduceOp.unbox(op)
<torch.distributed.distributed_c10d.ReduceOp object at 0x7fd9e3066ff0>
>>> torch.distributed.ReduceOp.unbox(op).op
<RedOpType.SUM: 0>
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,810,351,107 | Add link to non_blocking/pinmem tutorial in `Tensor.to` docstrings | vmoens | closed | [
"module: docs",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145651
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | true |
2,810,340,548 | Fix nonzero meta function striding | bobrenjc93 | closed | [
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
Fixes #130290
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,306,970 | Add fake_impl for unique_consecutive | zou3519 | closed | [
"Merged",
"ciflow/trunk",
"release notes: composability",
"module: dynamo",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145649
Summary:
It's fairly similar to torch.unique and torch.unique_dim.
Test Plan:
New test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,290,499 | Test distributions compilation | vmoens | open | [
"Stale",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145648
* #145647
* #145646
* #145645
| true |
2,810,290,331 | Fix distributions dynamo tracing (`__init__`, sample and log_prob) | vmoens | open | [
"Stale",
"module: dynamo",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145648
* __->__ #145647
* #145646
* #145645
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,290,151 | refactor Distribution class for compile support | vmoens | open | [
"Stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145648
* #145647
* __->__ #145646
* #145645
| true |
2,810,289,953 | Parametrize distributions tests | vmoens | open | [
"Stale",
"topic: not user facing"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145648
* #145647
* #145646
* __->__ #145645
| true |
2,810,281,752 | [inductor] Fix crash running wrapper_benchmark with no device | mgraczyk | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 11 | CONTRIBUTOR | Fixes #145434
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,810,268,911 | [mps/inductor] Add support for `erfinv`. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | After several rounds of refactoring, this seems to be done now.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,810,255,093 | torch/utils/cpp_extension.py doesn't support sm_90a | zhengqigao | closed | [
"module: cpp-extensions",
"module: cuda",
"triaged",
"module: regression"
] | 3 | NONE | ### 🐛 Describe the bug
I am compiling a custom CUDA extension on an H200 GPU using torch.utils.cpp_extension and get the following error:
```
/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1973, in <listcomp>
supported_sm = [int(arch.split('_')[1])
ValueError: invalid literal for int() with base 10: '90a'
```
I checked the source code: `torch.cuda.get_arch_list()` returns the list `['sm_50', 'sm_80', 'sm_86', 'sm_89', 'sm_90', 'sm_90a']`, and I think the last element, `sm_90a`, causes the error.
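A possible fix sketch that tolerates suffixed arch names like `sm_90a`; `parse_sm` is a hypothetical helper written for illustration, not the actual `cpp_extension` code:

```python
import re

def parse_sm(arch: str) -> int:
    # Accept "sm_<digits>" with an optional letter suffix, e.g. "sm_90a".
    m = re.fullmatch(r"sm_(\d+)([a-z]*)", arch)
    if m is None:
        raise ValueError(f"unrecognized arch: {arch}")
    return int(m.group(1))

archs = ['sm_50', 'sm_80', 'sm_86', 'sm_89', 'sm_90', 'sm_90a']
print([parse_sm(a) for a in archs])  # → [50, 80, 86, 89, 90, 90]
```

The crash above comes from `int(arch.split('_')[1])`, which chokes on the `"90a"` suffix that this version keeps but ignores.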
### Versions
Collecting environment information...
Model: 0
Thread(s) per core: 1
Core(s) per socket: 72
Socket(s): 1
Stepping: r0p0
Frequency boost: disabled
CPU max MHz: 3375.0000
CPU min MHz: 81.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh
L1d cache: 4.5 MiB (72 instances)
L1i cache: 4.5 MiB (72 instances)
L2 cache: 72 MiB (72 instances)
L3 cache: 114 MiB (1 instance)
NUMA node(s): 9
NUMA node0 CPU(s): 0-71
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):
NUMA node4 CPU(s):
NUMA node5 CPU(s):
NUMA node6 CPU(s):
NUMA node7 CPU(s):
NUMA node8 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] open_clip_torch==2.29.0
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] cuda-cudart 12.6.77 0 nvidia
[conda] cuda-cudart-dev 12.6.77 0 nvidia
[conda] cuda-cudart-dev_linux-aarch64 12.6.77 0 nvidia
[conda] cuda-cudart-static 12.6.77 0 nvidia
[conda] cuda-cudart-static_linux-aarch64 12.6.77 0 nvidia
[conda] cuda-cudart_linux-aarch64 12.6.77 0 nvidia
[conda] cuda-cupti 12.6.80 0 nvidia
[conda] cuda-cupti-dev 12.6.80 0 nvidia
[conda] cuda-libraries 12.6.3 0 nvidia
[conda] cuda-libraries-dev 12.6.3 0 nvidia
[conda] cuda-nvrtc 12.6.85 0 nvidia
[conda] cuda-nvrtc-dev 12.6.85 0 nvidia
[conda] cuda-nvtx 12.6.77 0 nvidia
[conda] libcublas 12.6.4.1 0 nvidia
[conda] libcublas-dev 12.6.4.1 0 nvidia
[conda] libcufft 11.3.0.4 0 nvidia
[conda] libcufft-dev 11.3.0.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcurand-dev 10.3.7.77 0 nvidia
[conda] libcusolver 11.7.1.2 0 nvidia
[conda] libcusolver-dev 11.7.1.2 0 nvidia
[conda] libcusparse 12.5.4.2 0 nvidia
[conda] libcusparse-dev 12.5.4.2 0 nvidia
[conda] libnvjitlink 12.6.85 0 nvidia
[conda] libnvjitlink-dev 12.6.85 0 nvidia
[conda] numpy 2.0.2 pypi_0 pypi
[conda] open-clip-torch 2.29.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim @eqy @seemethere @pytorch/pytorch-dev-infra | true |
2,810,245,001 | [CI][CUDA][Blackwell] sm_\d\d no longer matches sm_100. | nWEIdia | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13 | COLLABORATOR | Therefore, this change makes it `sm_\d+`.
Fixes this unit test failure: python test/test_cpp_extensions_jit.py -k TestCppExtensionJIT.test_jit_cuda_archflags
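The pattern change can be illustrated directly:

```python
import re

# Two fixed digits no longer cover three-digit SM numbers such as SM 10.0.
assert re.match(r"sm_\d\d$", "sm_100") is None       # old pattern misses sm_100
assert re.match(r"sm_\d+$", "sm_100") is not None    # sm_\d+ matches
assert re.match(r"sm_\d+$", "sm_90") is not None     # older arches still match
print("ok")
```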
cc @atalman @malfet @ptrblck @eqy @tinglvv | true |
2,810,186,360 | [dynamo] clear out traced frames at the start of `test_log_traced_frames` | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145640
The test was being flaky in CI, and this patch fixes it.
Fixes #137461.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,810,179,123 | [AOTInductor] Refactor CPU and GPU to remove ifdef macros | muchulee8 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Summary: Remove #ifdef USE_CUDA macros through some refactor
Test Plan: Refactor code, existing tests.
Differential Revision: D68636743
| true |
2,810,140,747 | [CUDA] Change slim-wheel libraries load order | nWEIdia | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: build"
] | 16 | COLLABORATOR | There is no libnvjitlink in CUDA-11.x , so attempts to load it first will abort the execution and prevent the script from preloading nvrtc
Fixes issues reported in https://github.com/pytorch/pytorch/pull/145614#issuecomment-2613107072
cc @atalman @malfet @ptrblck @eqy @tinglvv
| true |
2,810,138,356 | Remove duplicate code in _aot_autograd/dispatch_and_compile_graph.py | rec | closed | [
"topic: not user facing",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145637
* #145636
| true |
2,810,137,955 | Simplify functional composition in _aot_autograd/dispatch_and_compile_graph.py | rec | closed | [
"open source",
"topic: not user facing",
"ciflow/inductor"
] | 14 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145636
| true |
2,810,135,973 | [ROCm] Improvements to non-vectorized elementwise kernels | jerrymannil | closed | [
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/periodic",
"rocm",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 11 | CONTRIBUTOR | * Unroll loops manually to hide memory access latency
* Strided access for coalesced memory accesses
Cherry-pick of https://github.com/pytorch/pytorch/pull/145635
Co-authors: @akadutta @doru1004 @amd-hhashemi @carlobertolli
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,810,129,477 | Revert D68241649 | ezyang | closed | [
"fb-exported"
] | 6 | CONTRIBUTOR | Summary:
This diff reverts D68241649
Revert D68241649 as it impacts model inference performance.
The impacted performance was observed in the Shaker patform, which runs embedded Linux.
Test Plan: NA
Reviewed By: ezyang
Differential Revision: D68636019
| true |
2,810,120,056 | Simplify inplace decomp logic by avoiding recursion | albanD | closed | [
"topic: not user facing",
"ciflow/inductor"
] | 2 | COLLABORATOR | This significantly reduces the number of infinite-recursion cases when writing decomps.
Helps with https://github.com/pytorch/pytorch/issues/145094 | true |
2,810,113,565 | Enable `sm_89` support for relevant ops in PyTorch | vgoklani | closed | [
"module: build",
"module: cuda",
"triaged",
"module: m1"
] | 14 | NONE | Please add sm_89 to the list of target architectures for the stable, nightly, and Docker images. While I’ve seen references indicating that sm_89 might not need explicit builds due to binary compatibility with sm_86 and sm_80, that compatibility does not hold for FP8-related features on sm_89.
For more details, see this comment: https://github.com/pytorch/ao/issues/1057#issuecomment-2613095265.
Thanks in advance!
cc: @alexsamardzic
cc @malfet @seemethere @ptrblck @msaroufim @eqy | true |
2,810,093,266 | Back out "Add generator parameter to rand*_like functions (#136780)" | ezyang | closed | [
"fb-exported"
] | 5 | CONTRIBUTOR | Summary: Revert D68241649 as it impacts model inference performance
Test Plan: TBD
Reviewed By: ezyang
Differential Revision: D68635488
| true |
2,810,048,146 | Introduce aoti_call_delegate HOP | SherlockNoMad | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 22 | CONTRIBUTOR | Summary:
Previously, the AOTI compile node was represented as a kernel-less custom op in the exported program. The node was not runnable in eager mode, although eager execution is a common practice for numerical validation during lowering.
I introduce a new HOP to address this.
The schema is the following:
```
aoti_call_delegate(lower_moduel: AOTInductorEPModule, original_gm: fx.GraphModule, weights: List[Tensor], inputs: List[Tensor])
```
A few problems are exposed by the HOP:
- AOTI expects an FX graph with weights as getattr nodes, a.k.a. a stateful graph. The HOP expects graph_module arguments to be stateless, and the export serializer also expects a stateless graph. Currently, to make AOTI happy, I am making `original_gm` stateful and bypassing serialization for `original_gm`.
- As a result, the HOP is not re-traceable, as functionalization on a stateful graph-module argument will fail.
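A small, hedged illustration of the stateful-vs-stateless distinction using plain `torch.fx` (not the AOTI lowering itself): tracing a module keeps its weights as `get_attr` nodes, which is the stateful form AOTI expects, whereas the stateless form threads weights in as explicit inputs.

```python
import torch
import torch.fx as fx

class Stateful(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x):
        return x @ self.weight

# Symbolic tracing keeps the parameter as a get_attr node (stateful graph).
gm = fx.symbolic_trace(Stateful())
has_get_attr = any(n.op == "get_attr" for n in gm.graph.nodes)
print(has_get_attr)  # True: the weight is fetched from module state

# The stateless form would instead take the weight as an explicit input,
# e.g. forward(weight, x) -- analogous to the weights list in the HOP schema.
```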
Test Plan: buck2 test 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:cpu_lowering_utils_test
Reviewed By: zhxchen17
Differential Revision: D68359391
| true |
2,809,968,113 | [ROCm] trunk.yml only runs pre-merge via ciflow/trunk label | amdfaa | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3 | CONTRIBUTOR | cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,809,944,166 | Add __all__ for torch.nn.init | mikaylagawarecki | closed | [
"Stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145628
| true |
2,809,910,190 | [autocast][pytorch] Support autocast for MTIA | nautsimon | closed | [
"fb-exported",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mtia"
] | 4 | MEMBER | Summary: Add autocast support to MTIA
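For context, a hedged sketch of the existing `torch.autocast` API that this diff presumably extends to MTIA; it is shown with `device_type="cpu"` so the snippet runs anywhere (the `"mtia"` spelling is an assumption, not confirmed by this summary):

```python
import torch

# Standard autocast usage; this change presumably enables
# device_type="mtia" analogously (assumption). CPU is used here so the
# sketch is runnable without special hardware.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    a = torch.randn(8, 8)
    b = torch.randn(8, 8)
    out = a @ b  # matmul is autocast-eligible, so it runs in bfloat16

print(out.dtype)  # torch.bfloat16
```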
Reviewed By: egienvalue
Differential Revision: D68572548
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @egienvalue | true |
2,809,892,278 | [Rocm][Inductor][CK] silence ck package not installed warning when CK backend is not used to autotune bmm | tenpercent | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 5 | COLLABORATOR | As titled
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |