| id (int64) | title (string) | user (string) | state (string, 2 classes) | labels (list) | comments (int64) | author_association (string, 4 classes) | body (string, nullable) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,966,257,840
|
[torchrun] Fix: Use Correctly Reachable Host Address in c10d Rendezvous
|
kuizhiqing
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (torchelastic)"
] | 2
|
NONE
|
Fixes https://github.com/pytorch/pytorch/issues/150532
In this PR, we replace `socket.getfqdn()` with `socket.gethostbyname(socket.getfqdn())`, ensuring that an IP address is used instead of a potentially unresolvable hostname.
In any case, using an IP address is more reliable than a hostname here.
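For illustration only (a minimal standalone sketch, not the actual diff), the substitution described above looks like this:
```python
import socket

# Sketch of the change described above: resolve the FQDN to an IP address so
# that peers can reach this node even when its hostname is only resolvable
# locally. Error handling is omitted; this is not the PR's actual code.
hostname = socket.getfqdn()
local_addr = socket.gethostbyname(hostname)  # e.g. "10.0.0.12" instead of "node-42.internal"
print(f"{hostname} -> {local_addr}")
```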
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,966,233,840
|
torchrun Hangs Due to Unresolvable Hostname in c10d Rendezvous
|
kuizhiqing
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 1
|
NONE
|
I'm managing a cluster with a large number of nodes, where each node's `hostname` is only resolvable locally on that node.
This causes my `torchrun` program to hang when using the `c10d` rendezvous backend:
```bash
export PET_NPROC_PER_NODE=8
export PET_NNODES=2
export PET_RDZV_ENDPOINT=<MASTER_IP>:36123
export PET_RDZV_BACKEND=c10d
torchrun demo.py
```
After investigating the issue, I found that the problem originates from the `local_addr` being retrieved via `socket.getfqdn()`. This method does not return a correctly reachable hostname, leading to connectivity issues during rendezvous.
More precisely, in `torch/distributed/elastic/rendezvous/dynamic_rendezvous.py`
```python
class _NodeDescGenerator:
    def generate(self, local_addr: Optional[str] = None) -> _NodeDesc:
        return _NodeDesc(local_addr or socket.getfqdn(), os.getpid(), local_id)
```
A potential issue also exists in `torch/distributed/elastic/rendezvous/api.py`
```python
class RendezvousStoreInfo:
    def build(...):
        if rank == 0:
            addr = local_addr or socket.getfqdn()
```
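A quick way to check whether a node is affected (a minimal sketch, assuming it is run on the node in question) is to test whether the FQDN that `torchrun` would pick up actually resolves:
```python
import socket

fqdn = socket.getfqdn()
try:
    ip = socket.gethostbyname(fqdn)
    print(f"{fqdn} resolves to {ip}")
except socket.gaierror as exc:
    # An FQDN that does not resolve is exactly the situation that makes c10d rendezvous hang.
    print(f"{fqdn} does not resolve: {exc}")
```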
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,965,892,385
|
Intermittent SSL certificate expiry warnings for `download.pytorch.org` (load balancer?)
|
charlienewey-odin
|
open
|
[
"triaged"
] | 12
|
NONE
|
### 🐛 Describe the bug
This was tested at 10:00 GMT (11:00 London time). We're based in the UK (might be relevant if the issue is specific to e.g. UK geo).
On _some_ HTTPS requests to `download.pytorch.org`, the server presents an expired SSL certificate. This is intermittent, so I suspect an expired certificate on one load-balanced node or something similar.
Here is an example of an expired certificate:
```
11:01:28.183488 [0-0] * ALPN: server accepted h2
11:01:28.183499 [0-0] * Server certificate:
11:01:28.183510 [0-0] * subject: CN=pytorch.org
11:01:28.183519 [0-0] * start date: Mar 4 00:00:00 2024 GMT
11:01:28.183527 [0-0] * expire date: Apr 1 23:59:59 2025 GMT
11:01:28.183538 [0-0] * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M03
```
The failing node in this case appears to be `11:01:28.183657 [0-0] * Connected to download.pytorch.org (108.156.46.108) port 443`. I can't verify whether this is the only failing node.
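A minimal sketch (not part of the report above) for spot-checking a single resolved edge IP with SNI for `download.pytorch.org`; an expired certificate surfaces as a verification error:
```python
import socket
import ssl

ip = "108.156.46.108"  # the node reported above; substitute any resolved address
ctx = ssl.create_default_context()
try:
    with socket.create_connection((ip, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="download.pytorch.org") as tls:
            cert = tls.getpeercert()
            print("certificate valid:", cert["notBefore"], "->", cert["notAfter"])
except ssl.SSLCertVerificationError as exc:
    print("certificate verification failed (e.g. expired):", exc)
```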
Here is the full output from `curl --insecure -vvI https://download.pytorch.org/models/resnet101-cd907fc2.pth 2>&1`:
```
11:04:51.467721 [0-0] * Host download.pytorch.org:443 was resolved.
11:04:51.467771 [0-0] * IPv6: 2600:9000:2491:7c00:d:607e:4540:93a1, 2600:9000:2491:9a00:d:607e:4540:93a1, 2600:9000:2491:fa00:d:607e:4540:93a1, 2600:9000:2491:2800:d:607e:4540:93a1, 2600:9000:2491:5000:d:607e:4540:93a1, 2600:9000:2491:8200:d:607e:4540:93a1, 2600:9000:2491:1800:d:607e:4540:93a1, 2600:9000:2491:a200:d:607e:4540:93a1
11:04:51.467780 [0-0] * IPv4: 108.138.26.122, 108.138.26.24, 108.138.26.16, 108.138.26.43
11:04:51.467794 [0-0] * [HTTPS-CONNECT] created with 1 ALPNs -> 0
11:04:51.467804 [0-0] * [HTTPS-CONNECT] added
11:04:51.467817 [0-0] * [HTTPS-CONNECT] connect, init
11:04:51.467840 [0-0] * Trying [2600:9000:2491:7c00:d:607e:4540:93a1]:443...
11:04:51.467923 [0-0] * Immediate connect fail for 2600:9000:2491:7c00:d:607e:4540:93a1: Network is unreachable
11:04:51.467945 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.467956 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.467972 [0-0] * Trying [2600:9000:2491:9a00:d:607e:4540:93a1]:443...
11:04:51.467989 [0-0] * Immediate connect fail for 2600:9000:2491:9a00:d:607e:4540:93a1: Network is unreachable
11:04:51.467999 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468007 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.468018 [0-0] * Trying [2600:9000:2491:fa00:d:607e:4540:93a1]:443...
11:04:51.468028 [0-0] * Immediate connect fail for 2600:9000:2491:fa00:d:607e:4540:93a1: Network is unreachable
11:04:51.468037 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468045 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.468064 [0-0] * Trying [2600:9000:2491:2800:d:607e:4540:93a1]:443...
11:04:51.468074 [0-0] * Immediate connect fail for 2600:9000:2491:2800:d:607e:4540:93a1: Network is unreachable
11:04:51.468083 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468091 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.468104 [0-0] * Trying [2600:9000:2491:5000:d:607e:4540:93a1]:443...
11:04:51.468116 [0-0] * Immediate connect fail for 2600:9000:2491:5000:d:607e:4540:93a1: Network is unreachable
11:04:51.468127 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468137 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.468149 [0-0] * Trying [2600:9000:2491:8200:d:607e:4540:93a1]:443...
11:04:51.468162 [0-0] * Immediate connect fail for 2600:9000:2491:8200:d:607e:4540:93a1: Network is unreachable
11:04:51.468173 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468182 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.468194 [0-0] * Trying [2600:9000:2491:1800:d:607e:4540:93a1]:443...
11:04:51.468206 [0-0] * Immediate connect fail for 2600:9000:2491:1800:d:607e:4540:93a1: Network is unreachable
11:04:51.468217 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468227 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 0 socks
11:04:51.468239 [0-0] * Trying [2600:9000:2491:a200:d:607e:4540:93a1]:443...
11:04:51.468253 [0-0] * Immediate connect fail for 2600:9000:2491:a200:d:607e:4540:93a1: Network is unreachable
11:04:51.468269 [0-0] * Trying 108.138.26.122:443...
11:04:51.468337 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.468348 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 1 socks
11:04:51.470431 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.470452 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 1 socks
11:04:51.488784 [0-0] * ALPN: curl offers h2,http/1.1
11:04:51.488933 [0-0] } [5 bytes data]
11:04:51.488957 [0-0] * TLSv1.3 (OUT), TLS handshake, Client hello (1):
11:04:51.488967 [0-0] } [512 bytes data]
11:04:51.489033 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.489046 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 1 socks
11:04:51.507775 [0-0] { [5 bytes data]
11:04:51.507835 [0-0] * TLSv1.3 (IN), TLS handshake, Server hello (2):
11:04:51.507861 [0-0] { [122 bytes data]
11:04:51.508358 [0-0] * TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
11:04:51.508376 [0-0] { [1 bytes data]
11:04:51.508431 [0-0] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
11:04:51.508454 [0-0] { [19 bytes data]
11:04:51.508518 [0-0] * [HTTPS-CONNECT] connect -> 0, done=0
11:04:51.508546 [0-0] * [HTTPS-CONNECT] adjust_pollset -> 1 socks
11:04:51.508670 [0-0] { [1 bytes data]
11:04:51.508711 [0-0] * TLSv1.3 (IN), TLS handshake, Certificate (11):
11:04:51.508731 [0-0] { [3811 bytes data]
11:04:51.509455 [0-0] * TLSv1.3 (IN), TLS handshake, CERT verify (15):
11:04:51.509470 [0-0] { [264 bytes data]
11:04:51.509600 [0-0] * TLSv1.3 (IN), TLS handshake, Finished (20):
11:04:51.509615 [0-0] { [36 bytes data]
11:04:51.509685 [0-0] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
11:04:51.509700 [0-0] } [1 bytes data]
11:04:51.509751 [0-0] * TLSv1.3 (OUT), TLS handshake, Finished (20):
11:04:51.509769 [0-0] } [36 bytes data]
11:04:51.509849 [0-0] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / x25519 / RSASSA-PSS
11:04:51.509868 [0-0] * ALPN: server accepted h2
11:04:51.509887 [0-0] * Server certificate:
11:04:51.509908 [0-0] * subject: CN=pytorch.org
11:04:51.509929 [0-0] * start date: Apr 2 00:00:00 2025 GMT
11:04:51.509947 [0-0] * expire date: May 1 23:59:59 2026 GMT
11:04:51.509964 [0-0] * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M04
11:04:51.509984 [0-0] * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
11:04:51.510002 [0-0] * Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
11:04:51.510020 [0-0] * Certificate level 1: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
11:04:51.510033 [0-0] * Certificate level 2: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
11:04:51.510052 [0-0] * [HTTPS-CONNECT] connect+handshake h2: 42ms, 1st data: 39ms
11:04:51.510089 [0-0] * [HTTP/2] [0] created h2 session
11:04:51.510111 [0-0] * [HTTP/2] [0] -> FRAME[SETTINGS, len=18]
11:04:51.510128 [0-0] * [HTTP/2] [0] -> FRAME[WINDOW_UPDATE, incr=1048510465]
11:04:51.510145 [0-0] * [HTTP/2] cf_connect() -> 0, 1,
11:04:51.510163 [0-0] * [HTTPS-CONNECT] connect -> 0, done=1
11:04:51.510186 [0-0] * Connected to download.pytorch.org (108.138.26.122) port 443
11:04:51.510206 [0-0] * using HTTP/2
11:04:51.510234 [0-0] * [HTTP/2] [1] OPENED stream for https://download.pytorch.org/models/resnet101-cd907fc2.pth
11:04:51.510248 [0-0] * [HTTP/2] [1] [:method: HEAD]
11:04:51.510261 [0-0] * [HTTP/2] [1] [:scheme: https]
11:04:51.510270 [0-0] * [HTTP/2] [1] [:authority: download.pytorch.org]
11:04:51.510288 [0-0] * [HTTP/2] [1] [:path: /models/resnet101-cd907fc2.pth]
11:04:51.510297 [0-0] * [HTTP/2] [1] [user-agent: curl/8.12.1]
11:04:51.510314 [0-0] * [HTTP/2] [1] [accept: */*]
11:04:51.510329 [0-0] * [HTTP/2] [1] submit -> 112, 0
11:04:51.510351 [0-0] * [HTTP/2] [1] -> FRAME[HEADERS, len=62, hend=1, eos=1]
11:04:51.510374 [0-0] } [5 bytes data]
11:04:51.510401 [0-0] * [HTTP/2] [0] egress: wrote 135 bytes
11:04:51.510416 [0-0] * [HTTP/2] [1] cf_send(len=112) -> 112, 0, eos=1, h2 windows 65535-65535 (stream-conn), buffers 0-0 (stream-conn)
11:04:51.510424 [0-0] > HEAD /models/resnet101-cd907fc2.pth HTTP/2
11:04:51.510424 [0-0] > Host: download.pytorch.org
11:04:51.510424 [0-0] > User-Agent: curl/8.12.1
11:04:51.510424 [0-0] > Accept: */*
11:04:51.510424 [0-0] >
11:04:51.510512 [0-0] * [HTTP/2] [0] progress ingress: done
11:04:51.510525 [0-0] * [HTTP/2] [1] cf_recv(len=102400) -> -1 81, window=0/65535, connection 1048576000/1048576000
11:04:51.510541 [0-0] * Request completely sent off
11:04:51.528973 [0-0] { [5 bytes data]
11:04:51.529045 [0-0] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
11:04:51.529070 [0-0] { [157 bytes data]
11:04:51.529182 [0-0] * [HTTP/2] [0] ingress: read 40 bytes
11:04:51.529212 [0-0] * [HTTP/2] [0] <- FRAME[SETTINGS, len=18]
11:04:51.529240 [0-0] * [HTTP/2] [0] MAX_CONCURRENT_STREAMS: 128
11:04:51.529264 [0-0] * [HTTP/2] [0] ENABLE_PUSH: TRUE
11:04:51.529289 [0-0] * [HTTP/2] [0] notify MAX_CONCURRENT_STREAMS: 128
11:04:51.529324 [0-0] * [HTTP/2] [0] <- FRAME[WINDOW_UPDATE, incr=2147418112]
11:04:51.529343 [0-0] * [HTTP/2] [0] progress ingress: inbufg=0
11:04:51.529373 [0-0] { [5 bytes data]
11:04:51.529442 [0-0] * [HTTP/2] [0] ingress: read 9 bytes
11:04:51.529466 [0-0] * [HTTP/2] [0] <- FRAME[SETTINGS, ack=1]
11:04:51.529481 [0-0] * [HTTP/2] [0] progress ingress: inbufg=0
11:04:51.529509 [0-0] * [HTTP/2] [0] progress ingress: done
11:04:51.529537 [0-0] * [HTTP/2] [0] -> FRAME[SETTINGS, ack=1]
11:04:51.529558 [0-0] } [5 bytes data]
11:04:51.529601 [0-0] * [HTTP/2] [0] egress: wrote 9 bytes
11:04:51.529626 [0-0] * [HTTP/2] [1] cf_recv(len=102400) -> -1 81, window=0/65536, connection 1048576000/1048576000
11:04:51.529654 [0-0] { [5 bytes data]
11:04:51.529696 [0-0] * [HTTP/2] [0] ingress: read 386 bytes
11:04:51.529718 [0-0] < HTTP/2 200
11:04:51.529756 [0-0] * [HTTP/2] [1] local window update by 10420224
11:04:51.529783 [0-0] * [HTTP/2] [1] status: HTTP/2 200
11:04:51.529816 [0-0] < content-type: application/x-www-form-urlencoded; charset=utf-8
11:04:51.529846 [0-0] * [HTTP/2] [1] header: content-type: application/x-www-form-urlencoded; charset=utf-8
11:04:51.529875 [0-0] < content-length: 178814045
11:04:51.529912 [0-0] * [HTTP/2] [1] header: content-length: 178814045
11:04:51.529949 [0-0] < last-modified: Wed, 10 Nov 2021 13:13:40 GMT
11:04:51.529977 [0-0] * [HTTP/2] [1] header: last-modified: Wed, 10 Nov 2021 13:13:40 GMT
11:04:51.530006 [0-0] < x-amz-version-id: WxVjHsX41t.Gox4D9vXBqw8_BNcgtttq
11:04:51.530030 [0-0] * [HTTP/2] [1] header: x-amz-version-id: WxVjHsX41t.Gox4D9vXBqw8_BNcgtttq
11:04:51.530052 [0-0] < accept-ranges: bytes
11:04:51.530074 [0-0] * [HTTP/2] [1] header: accept-ranges: bytes
11:04:51.530099 [0-0] < server: AmazonS3
11:04:51.530123 [0-0] * [HTTP/2] [1] header: server: AmazonS3
11:04:51.530149 [0-0] < date: Tue, 01 Apr 2025 10:38:55 GMT
11:04:51.530177 [0-0] * [HTTP/2] [1] header: date: Tue, 01 Apr 2025 10:38:55 GMT
11:04:51.530199 [0-0] < etag: "e06d6d4c722f9d6a4848468cb70ea3df-11"
11:04:51.530222 [0-0] * [HTTP/2] [1] header: etag: "e06d6d4c722f9d6a4848468cb70ea3df-11"
11:04:51.530242 [0-0] < x-cache: Hit from cloudfront
11:04:51.530261 [0-0] * [HTTP/2] [1] header: x-cache: Hit from cloudfront
11:04:51.530285 [0-0] < via: 1.1 9672a97668a5842cedcfaee3e743019e.cloudfront.net (CloudFront)
11:04:51.530311 [0-0] * [HTTP/2] [1] header: via: 1.1 9672a97668a5842cedcfaee3e743019e.cloudfront.net (CloudFront)
11:04:51.530332 [0-0] < x-amz-cf-pop: FRA56-P7
11:04:51.530358 [0-0] * [HTTP/2] [1] header: x-amz-cf-pop: FRA56-P7
11:04:51.530379 [0-0] < x-amz-cf-id: I9q0GFiron5llgpNG3QFDYBkbK5zPEpaqqW3mEZlf1Ki8MRyOHTw6Q==
11:04:51.530399 [0-0] * [HTTP/2] [1] header: x-amz-cf-id: I9q0GFiron5llgpNG3QFDYBkbK5zPEpaqqW3mEZlf1Ki8MRyOHTw6Q==
11:04:51.530422 [0-0] < age: 84357
11:04:51.530438 [0-0] * [HTTP/2] [1] header: age: 84357
11:04:51.530455 [0-0] * [HTTP/2] [1] <- FRAME[HEADERS, len=377, hend=1, eos=1]
11:04:51.530478 [0-0] <
11:04:51.530499 [0-0] * [HTTP/2] [1] DRAIN select_bits=1
11:04:51.530513 [0-0] * [HTTP/2] [1] CLOSED
11:04:51.530533 [0-0] * [HTTP/2] [1] DRAIN select_bits=1
11:04:51.530557 [0-0] * [HTTP/2] [0] progress ingress: inbufg=0
11:04:51.530579 [0-0] * [HTTP/2] [1] DRAIN select_bits=1
11:04:51.530601 [0-0] * [HTTP/2] [0] progress ingress: done
11:04:51.530627 [0-0] * [HTTP/2] [1] returning CLOSE
11:04:51.530649 [0-0] * [HTTP/2] handle_stream_close -> 0, 0
11:04:51.530672 [0-0] * [HTTP/2] [1] cf_recv(len=102400) -> 0 0, window=-1/-1, connection 1048576000/1048576000
11:04:51.530689 [0-0] { [0 bytes data]
0 170M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
11:04:51.530826 [0-0] * Connection #0 to host download.pytorch.org left intact
HTTP/2 200
content-type: application/x-www-form-urlencoded; charset=utf-8
content-length: 178814045
last-modified: Wed, 10 Nov 2021 13:13:40 GMT
x-amz-version-id: WxVjHsX41t.Gox4D9vXBqw8_BNcgtttq
accept-ranges: bytes
server: AmazonS3
date: Tue, 01 Apr 2025 10:38:55 GMT
etag: "e06d6d4c722f9d6a4848468cb70ea3df-11"
x-cache: Hit from cloudfront
via: 1.1 9672a97668a5842cedcfaee3e743019e.cloudfront.net (CloudFront)
x-amz-cf-pop: FRA56-P7
x-amz-cf-id: I9q0GFiron5llgpNG3QFDYBkbK5zPEpaqqW3mEZlf1Ki8MRyOHTw6Q==
age: 84357
```
### Versions
This is not version specific.
| true
|
2,965,826,377
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39821368287).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float32]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,965,824,087
|
Remove redundant code in cuda/__init__.py
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150529
As the title stated.
Follow: https://github.com/pytorch/pytorch/pull/147078
Fix issue: https://github.com/pytorch/pytorch/issues/150519
| true
|
2,965,768,931
|
Work on API Forwarding
|
kpouget
|
closed
|
[
"oncall: distributed",
"module: rocm",
"module: cpu",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo"
] | 3
|
NONE
|
PR opened against the wrong repo :/
| true
|
2,965,680,428
|
Add BF16 SVE intrinsics
|
Ryo-not-rio
|
open
|
[
"module: cpu",
"open source",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
** DO NOT REVIEW **
Draft PR for a squashed version of https://github.com/pytorch/pytorch/pull/143666
| true
|
2,965,637,848
|
Consider context when tuning kernels for max-autotune to more accurately reflect the performance of real workloads
|
CaoE
|
open
|
[
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 2
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
### Motivation
This request was initiated by https://github.com/pytorch/pytorch/pull/147368, which adds float16 support for CppMicroGemmAMX to get better performance for float16 templated GEMM. We see improvements in micro-benchmarks with a single linear layer, but we found regressions in real workloads. Profiling results show that the kernels run after the templated GEMM are affected.
For example:
* max-autotune enabled:
<img width="567" alt="Image" src="https://github.com/user-attachments/assets/45cb279d-6c63-411f-a2f6-7e2d7141b082" />
* max-autotune disabled:
<img width="560" alt="Image" src="https://github.com/user-attachments/assets/0d5d49e6-a779-43bb-9b0a-42b58faea15d" />
From the above results, the selected templated GEMMs are faster than mkldnn linear, but the flash attention kernel and `cpp_fused__log_softmax__to_copy_masked_fill_...` become slower.
Such impacts may differ across cores due to differing cache behavior or load imbalance.
Benchmarks in autotune need to more accurately reflect the performance of real workloads. In more detail, they may need to reflect the overall performance of the current kernel together with subsequent and previous kernels.
### Alternatives
Benchmarks in max-autotune may need a context environment (the previous and following kernels) to more accurately reflect the performance of real workloads.
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,965,633,827
|
[Do Not Review][WIP] Enable Mkldnn fusion for XPU.
|
etaf
|
open
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/binaries_wheel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,965,576,593
|
Fix CPU bitwise shifts for out-of-limit values in VSX-vec
|
Flamefire
|
open
|
[
"module: cpu",
"triaged",
"open source"
] | 2
|
COLLABORATOR
|
Similar to #96659 this implements the conditionals handling the out-of-limit values in the shift amounts (rhs) for the vectorized VSX code using the same logic as the scalar code.
Fixes #109777
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,965,527,857
|
[Question] How to load extremely large model checkpoint for FSDP wrapped model?
|
zigzagcai
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 2
|
NONE
|
Hello,
We tried to train the DeepSeek V3 model with `FSDP + Expert Parallel` parallelism. It works well with randomly initialized weights, but if we want to do SFT or RLHF, we need to load the 670B model weights from https://huggingface.co/deepseek-ai/DeepSeek-V3-0324/tree/main
So, does PyTorch have a way to load an extremely large model weight checkpoint into an FSDP-wrapped model?
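Not an authoritative answer, but one common direction is the `torch.distributed.checkpoint` state-dict helpers, which can broadcast a full state dict held on rank 0 into an FSDP-sharded model so that no other rank materializes the full weights. A minimal sketch, assuming `model` is already FSDP-wrapped, the process group is initialized, and `full_sd` contains the full weights on rank 0 only (other ranks pass an empty dict):
```python
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

# full_state_dict=True: `full_sd` is an unsharded state dict (populated on rank 0);
# broadcast_from_rank0=True: each rank receives only its own shards, so ranks
# other than 0 never hold the full checkpoint in memory.
options = StateDictOptions(full_state_dict=True, broadcast_from_rank0=True)
set_model_state_dict(model, full_sd, options=options)
```
Converting the weights into a `torch.distributed.checkpoint` sharded checkpoint offline and loading it with `torch.distributed.checkpoint.load` is another option that avoids holding the full state dict even on rank 0.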
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,965,451,384
|
[AOTInductor] Fix autotuning code's codegen
|
muchulee8
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary:
Codegen used to generate `tmp_arg_{index}` as temporary args, where `index` is the argument's position in the caller.
We changed the codegen logic so that previously generated samples can be reused and are only deleted once an arg is no longer used. In this case, we need to make `{index}` unique, since different functions could reuse the same `tmp_arg_{index}` name string while it corresponds to different args.
Test Plan: `python test/inductor/test_aot_inductor.py -k test_autotuning_args_reuse`
Differential Revision: D72297084
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,965,443,235
|
[ROCm][Windows] Include AOTriton dependent sources in Windows build
|
ikalinic
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 13
|
CONTRIBUTOR
|
Includes ATen native transformers hipified sources in the ROCm + Windows build. These sources were removed because Triton is not available on Windows, but that removal causes further linker errors. Setting `USE_FLASH_ATTENTION=0` and `USE_MEM_EFF_ATTENTION=0` during the build mitigates the missing headers without causing any linker errors, so we will use this approach for now.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,965,408,110
|
[XPU] Fix XPU unit test on Windows
|
LuFinch
|
closed
|
[
"open source",
"Merged",
"module: testing",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu",
"module: xpu"
] | 14
|
CONTRIBUTOR
|
This PR resolves the issue reported in https://github.com/intel/torch-xpu-ops/issues/1478
Two test cases fail in our Windows CI enablement.
- **test_xpu.py::TestXpuXPU::test_lazy_init_xpu** needs an `if __name__ == '__main__':` guard on Windows when using multiprocessing (see the sketch after this list). Refer to https://stackoverflow.com/a/18205006
```
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "C:\Users\sdp\lufengqing\torch-xpu-ops\test\xpu\xpu_test_utils.py", line 24, in <module>
test_multi_process(model, input)
File "C:\Users\sdp\lufengqing\torch-xpu-ops\test\xpu\xpu_test_utils.py", line 16, in test_multi_process
assert p.exitcode == 0
AssertionError
```
- **test_xpu.py::TestXpuXPU::test_wrong_xpu_fork_xpu** is a Linux-only test case; we should skip it on Windows. Refer to https://github.com/pytorch/pytorch/blob/248487f455e943cbba368404119ca9bcb14c0499/test/test_multiprocessing.py#L609
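For the first failure, a minimal sketch of the required guard (toy example, not the actual test code):
```python
import torch.multiprocessing as mp

def worker(rank):
    print(f"worker {rank} started")

# On Windows the spawn start method re-imports this module in each child
# process, so process creation must be guarded by the main-module check.
if __name__ == "__main__":
    mp.spawn(worker, nprocs=2)
```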
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,965,375,858
|
Potential redundant code
|
MisterLin1995
|
open
|
[
"module: cuda",
"triaged",
"better-engineering"
] | 1
|
NONE
|
These lines look redundant to me since we already get the handler through the previous line.
https://github.com/pytorch/pytorch/blob/main/torch/cuda/__init__.py#L1214
https://github.com/pytorch/pytorch/blob/main/torch/cuda/__init__.py#L1223:L1224
https://github.com/pytorch/pytorch/blob/main/torch/cuda/__init__.py#L1230:L1231
https://github.com/pytorch/pytorch/blob/main/torch/cuda/__init__.py#L1273:L1274
https://github.com/pytorch/pytorch/blob/main/torch/cuda/__init__.py#L1294:L1295
https://github.com/pytorch/pytorch/blob/main/torch/cuda/__init__.py#L1315:L1316
cc @ptrblck @msaroufim @eqy
| true
|
2,965,366,942
|
fix bug in logging code
|
exclamaforte
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/150379
```python
>>> key = "aten._int_mm_1_2_3"
>>> m, n, k = key.split("_")[-3:]
>>> m, n, k
('1', '2', '3')
>>> name = "_".join(key.split("_")[:-3])
>>> name
'aten._int_mm'
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,965,267,880
|
Pinned memory doubles memory usage for tensors slightly over 128MB
|
scott306lr
|
open
|
[
"module: cuda",
"module: memory usage",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
This issue appears related to #95823 but with smaller tensors.
Although #95823 is closed, the underlying problem persists.
PyTorch seems to allocate memory up to the next power of two (256MB) when pinning tensors slightly above 128MB in size.
This causes nearly double the expected memory usage.
### Minimal Example
```python
import torch
def get_free():
    import subprocess
    r = subprocess.run(["free", "-m"], capture_output=True)
    d = r.stdout.decode('utf-8')
    s = d.split(':')[1].split()
    return f"[used={s[1]:7}, shared={s[3]:7}] "
model_weight = torch.randn(18944, 3584, dtype=torch.float16, device='cpu') #129.5MB (qwen2.5 7b, up_proj)
# model_weight = torch.randn(14336, 4096, dtype=torch.float16, device='cpu') #112.0MB (llama3.1 7b, up_proj)
print("weight memory usage:", model_weight.element_size() * model_weight.nelement() / (1024 ** 2), "MB")
# Pinning memory
print(get_free() + "Before pin")
model_weight = model_weight.pin_memory()
print(get_free() + "After pin")
```
### Observed Behavior
It allocates almost double the memory when pinning qwen2.5 7b's up_proj (129.5 MB):
```bash
weight memory usage: 129.5 MB
[used=9306 , shared=108 ] Before pin
[used=9334 , shared=372 ] After pin
```
Pinning llama3.1 8b's up_proj (112.0 MB) takes much less memory:
```bash
weight memory usage: 112.0 MB
[used=9280 , shared=108 ] Before pin
[used=9321 , shared=244 ] After pin
```
Although the additional memory used to pin a single tensor is less noticeable, it scales up when pinning all decoder layers and significantly inflates DRAM usage.
For instance, it results in approximately 12GB of extra memory overhead for Qwen2.5 7B.
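A rough back-of-the-envelope illustration of the suspected rounding (assumption: the pinned-host allocator rounds each allocation up to the next power of two), which matches the observed numbers:
```python
import math

def next_pow2_mb(size_mb: float) -> float:
    # Size of the pinned block if the allocation is rounded up to a power of two.
    return float(2 ** math.ceil(math.log2(size_mb)))

for size_mb in (112.0, 129.5):
    print(f"{size_mb} MB tensor -> {next_pow2_mb(size_mb)} MB pinned block")
# 112.0 MB -> 128.0 MB, 129.5 MB -> 256.0 MB: roughly the ~2x usage observed above
```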
### Versions
PyTorch version: 2.6.0+cu126
cc @ptrblck @msaroufim @eqy
| true
|
2,965,197,092
|
OLMo in-loop evals change with `torch.compile()` in 2.7.0
|
dirkgr
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2"
] | 11
|
CONTRIBUTOR
|
### 🐛 Describe the bug
OLMo-core is the LLM trainer used for the OLMo series of models. It features in-loop evals that compute perplexity on held-out validation sets. With torch 2.7.0, these evals start the same as with torch 2.6.0, but start diverging at some point.
<img width="420" alt="Image" src="https://github.com/user-attachments/assets/4bcdb5e4-4e90-4b61-8172-f692ac631a03" />
After a brief discussion on the PyTorch Slack, I have put together a self-contained repro in the OLMo-core codebase. It takes about three minutes to reproduce on one H100. Please don't be alarmed by how much code there is. OLMo-core has a lot of features, but most of it doesn't run in this example. Most of the flags needed below are just there to turn stuff off and force the trainer to just run the eval, instead of training.
To reproduce the problem:
1. Check out https://github.com/allenai/OLMo-core
2. Switch to the `1B-ReproForTorch` branch
3. `pip install -e .[all]`
4. To see the bug, install torch 2.7.0 at this point. For the baseline / expected behavior, skip this step.
5. Run this gnarly command: `torchrun --standalone src/scripts/train/OLMo2-1B.py train titan-baseline-5T-eval-local local --train_module.optim.compile=true --trainer.callbacks.lm_evaluator.eval_on_startup=true --trainer.load_path=s3://ai2-llm-public/checkpoints/dirkg/titan-baseline-5T/step200000 --trainer.callbacks.comet.enabled=false --trainer.hard_stop.unit=steps --trainer.hard_stop.value=200001 --trainer.callbacks.lm_evaluator.eval_interval=2 --trainer.callbacks.downstream_evaluator.enabled=false --trainer.load_strategy=always --trainer.save_folder=./runs/test --dataset.mix_base_dir=http://olmo-data.org --trainer.callbacks.lm_evaluator.eval_dataset.mix_base_dir=http://olmo-data.org`
6. The command starts up the trainer, loads the model and data (from the internet the first time, cached after that), and performs an evaluation right away. Then runs out of memory because you can't train with these settings on a single GPU, but we don't care about that. We just care about the evaluation. It will print some lines that look like the following:
```
pile-validation/CE loss=2.230
pile-validation/PPL=9.296
```
A CE loss around 2.25 is expected. CE loss of 2.90 or worse shows the bug.
More notes:
* In the command, you can turn off compile with `--train_module.compile_model=False`.
* The model checkpoint this is loading was trained with torch 2.7.0. This seems to be an eval-only issue.
### Error logs
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-135-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 570.124.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8468
Stepping: 8
Frequency boost: enabled
CPU MHz: 3800.010
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchmetrics==1.7.0
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] torch 2.7.0+cu128 pypi_0 pypi
[conda] torchaudio 2.7.0+cu128 pypi_0 pypi
[conda] torchmetrics 1.7.0 pypi_0 pypi
[conda] torchvision 0.22.0+cu128 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
2,965,193,552
|
[export] Fix deserialization issue
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
An internal model was serialized in 2023, and is now breaking while loading with the following error:
```
File "<eval_with_key>.1675", line 4
def forward(self, arg1163_1, arg1164_1, , arg1166_1, , arg1168_1, arg1169_1, arg1170_1, , arg1172_1, arg1173_1, arg1174_1, arg1175_1, arg1176_1, arg1177_1, arg1178_1, arg1179_1, arg1180_1, arg1181_1, arg1182_1, arg1183_1, arg1184_1, arg1185_1, arg1186_1, arg1187_1, arg1188_1, arg1189_1, arg1190_1, arg1191_1, arg1192_1, arg1193_1, arg1194_1, arg1195_1, arg1196_1, arg1197_1, arg1198_1, arg1199_1, arg1200_1, arg1201_1, arg1202_1, arg1203_1, arg1204_1, arg1205_1, arg1206_1, arg1207_1, arg1208_1, arg1209_1, arg1210_1, arg1211_1, arg1212_1, arg1213_1, arg1214_1, arg1215_1, arg1216_1, , arg1218_1, arg1219_1, arg1220_1, arg1221_1, arg1222_1, arg1223_1, arg1224_1, , arg1226_1, arg1227_1, arg1228_1, , arg1230_1, , , , , , , , , , , , , , , ):
^
SyntaxError: invalid syntax
```
The syntax errors are due to inputs that are `None` when exporting. Prior to the changes in https://github.com/pytorch/pytorch/pull/123590 (landed 4/2024), input specs for `None` inputs looked like `InputSpec(userInput=UserInputSpec(arg=Argument(asNone=True)))`, and during deserialization, when creating a node, we would just use a dummy name `arg`. After those changes, the input specs for `None` inputs look like `InputSpec(constantInput=InputToConstantInputSpec(name='y', value=ConstantValue(asNone=True)))`, and when creating a node we use the name `y`. However, the PR didn't handle the case of loading an old package that doesn't have this name, so it ended up putting empty names in the placeholder nodes.
This error was uncovered after https://github.com/pytorch/pytorch/pull/149717, where we now use the GraphModule's python codegen to run the UnflattenedModule instead of going through the interpreter path. The placeholder nodes having empty names caused the python codegen to fail.
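As a toy illustration of the symptom only (not the export code path), compiling a generated signature that contains an empty parameter name fails in exactly this way:
```python
# Hypothetical reproduction of the SyntaxError above: an empty placeholder name
# turns into a bare comma in the generated `forward` signature.
src = "def forward(self, arg1163_1, , arg1166_1):\n    return arg1163_1\n"
try:
    compile(src, "<eval_with_key>", "exec")
except SyntaxError as exc:
    print("invalid generated code:", exc)
```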
| true
|
2,965,140,419
|
Conv2D performance regression
|
jiqing-feng
|
closed
|
[
"triaged",
"topic: performance",
"intel"
] | 6
|
NONE
|
### 🐛 Describe the bug
Conv2D is too slow.
CMD: `numactl -C 0-31 -m 0 python test_conv.py`
```python
import time
import torch
conv_layer = torch.nn.Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dtype=torch.float16)
input_tensor = torch.rand([16, 256, 512, 512]).to(conv_layer.weight.dtype) - 0.5
with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA,
    ],
) as prof:
    with torch.no_grad():
        for i in range(2):
            start = time.time()
            out = conv_layer(input_tensor)
            end = time.time()
            print(f"time costs: {end-start} s")
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
Regression version: torch-2.8.0.dev20250331+cpu
Fine version: torch-2.7.0.dev20250216+cpu
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250401+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) 6972P
BIOS Model name: Intel(R) Xeon(R) 6972P
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acp
i mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm
pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_faul
t epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad f
sgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb int
el_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local spli
t_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnm
i avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid b
us_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 a
mx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 9 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 384 MiB (192 instances)
L3 cache: 960 MiB (2 instances)
NUMA node(s): 6
NUMA node0 CPU(s): 0-31,192-223
NUMA node1 CPU(s): 32-63,224-255
NUMA node2 CPU(s): 64-95,256-287
NUMA node3 CPU(s): 96-127,288-319
NUMA node4 CPU(s): 128-159,320-351
NUMA node5 CPU(s): 160-191,352-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not a
ffected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.8.0+git6daf1d8
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-metric-learning==2.8.1
[pip3] pytorch-msssim==1.0.0
[pip3] pytorchvideo==0.1.5
[pip3] torch==2.8.0.dev20250401+cpu
[pip3] torch-audiomentations==0.11.1
[pip3] torch_pitch_shift==1.2.5
[pip3] torchaudio==2.6.0.dev20250401+cpu
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.22.0.dev20250401+cpu
[conda] Could not collect
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,965,137,329
|
[c10d] Add logging for desync debug report
|
fduwjj
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7
|
CONTRIBUTOR
|
Summary: We want to add logging to first understand the distribution of desync debug reports.
Test Plan: Test with logger staging
Differential Revision: D72249281
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,965,133,326
|
[BE] Fix triton windows build
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes #150480
| true
|
2,965,122,393
|
Inductor respects exact strides on custom ops by default
|
zou3519
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150511
* #148104
If a tag is not specified on a custom operator, then inductor will
assume that it needs exact strides.
Test Plan:
- tests + CI
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,965,099,565
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39806742309).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float16]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,965,099,518
|
DISABLED test_foreach_l2_large_value_input__foreach_norm_cuda_float16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 3
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_l2_large_value_input__foreach_norm_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39810501223).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_l2_large_value_input__foreach_norm_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1420, in only_fn
return fn(slf, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 1004, in test_foreach_l2_large_value_input
actual = fn(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_norm', keys=('aten::_foreach_norm', 'Unrecognized', 'aten::zeros', 'aten::empty', 'aten::zero_', 'aten::fill_', 'cudaLaunchKernel', 'Lazy Function Loading', 'void at::native::lpnorm_cleanup<c10::Half, (at::native::NormType)1, c10::Half, float>(float const*, at::native::TensorListAddresses, int)', 'void at::native::vectorized_elementwise_kernel<8, at::native::FillFunctor<c10::Half>, std::array<char*, 1ul> >(int, at::native::FillFunctor<c10::Half>, std::array<char*, 1ul>)', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(0,), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(0,), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(0,), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(0,), device="cuda:0", dtype=torch.float16], Tensor[size=(0,), device="cuda:0", dtype=torch.float16]], args=(), kwargs={'ord': '0', 'dtype': 'torch.float64'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_foreach_l2_large_value_input__foreach_norm_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,965,095,245
|
caffe2: Fix lint errors in native/xnnpack/Linear.cpp
|
EricGriffith
|
closed
|
[
"triaged",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72275403
| true
|
2,965,094,979
|
caffe2: Fix lint errors in native/TensorShape.cpp
|
EricGriffith
|
open
|
[
"fb-exported"
] | 7
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72275198
| true
|
2,965,094,244
|
caffe2: Fix lint errors in native/TensorAdvancedIndexing.cpp
|
EricGriffith
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72274536
| true
|
2,965,093,901
|
caffe2: Fix lint errors in native/RNN.cpp
|
EricGriffith
|
open
|
[
"fb-exported"
] | 7
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72273826
| true
|
2,965,093,577
|
caffe2: Fix lint errors in native/quantized/TensorAdvancedIndexing
|
EricGriffith
|
open
|
[
"fb-exported",
"release notes: quantization"
] | 7
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72273049
| true
|
2,965,093,253
|
caffe2: Fix lint errors in native/int4mm_kernel
|
EricGriffith
|
open
|
[
"module: cpu",
"fb-exported"
] | 6
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72218816
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,965,058,558
|
Enable weekly test for operator benchmark
|
LifengWang
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/op-benchmark"
] | 3
|
CONTRIBUTOR
|
To regularly track the performance of the operator benchmark, enable the weekly test.
Hi, @huydhn, as you mentioned in https://github.com/pytorch/pytorch/pull/143733#issuecomment-2578317520, we could integrate the performance data from the weekly test into the OSS benchmark database for the dashboard.
| true
|
2,964,948,179
|
caffe2: Fix lint errors in runtime/register_prim_ops.cpp
|
EricGriffith
|
open
|
[
"oncall: jit",
"fb-exported",
"release notes: jit"
] | 6
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72276588
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,964,944,307
|
Revert "[fx] Move map_aggregate to C++ (#148243)"
|
clee2000
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150500
* #150499
* #150498
* #150497
* #150496
Something in this stack causes a memory leak; some context can be found in #150059. My guess is #150498
It is also causing issues in internal [S503111](https://www.internalfb.com/sevmanager/view/503111)
Manual revert because merge conflicts in expected results csv
This reverts commit bec7bdad47a4a96863af623a63029dfc5ea8d011.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D72289031](https://our.internmc.facebook.com/intern/diff/D72289031)
| true
|
2,964,943,884
|
Revert "[fx] Move Node._update_args_kwargs to C++ (#148260)"
|
clee2000
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150500
* __->__ #150499
* #150498
* #150497
* #150496
This reverts commit bf752c36da08871d76a66fd52ad09f87e66fc770.
Differential Revision: [D72289029](https://our.internmc.facebook.com/intern/diff/D72289029)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,925,173
|
Revert "[fx] Move Node._prepend/Node._remove_from_list to C++ (#148261)"
|
clee2000
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150500
* #150499
* __->__ #150498
* #150497
* #150496
This reverts commit 5d4e7d58b42623a9024a84f0050967ff0318dcdb.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D72289030](https://our.internmc.facebook.com/intern/diff/D72289030)
| true
|
2,964,924,890
|
Revert "[fx] Optimizations for node name generation (#148288)"
|
clee2000
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150500
* #150499
* #150498
* __->__ #150497
* #150496
This reverts commit 8f858e226ba81fde41d39aa34f1fd4cb4a4ecc51.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D72289033](https://our.internmc.facebook.com/intern/diff/D72289033)
| true
|
2,964,924,803
|
Revert "[fx] Optimize TracerBase.create_arg and Graph._gen_python_code (#148292)"
|
clee2000
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150500
* #150499
* #150498
* #150497
* __->__ #150496
This reverts commit a60b4ed6236fea46bd41c6410204612f85c37818.
Differential Revision: [D72289032](https://our.internmc.facebook.com/intern/diff/D72289032)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,919,127
|
Fix _del_library
|
zou3519
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150511
* #148104
* __->__ #150495
On library deletion, we need to clear fx's schema cache.
Test Plan:
- Tested via the top PR in the stack; I don't have a good standalone test case for this PR.
| true
|
2,964,890,747
|
[inductor][autotune cache] add torch_key() to configs hash
|
davidberard98
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 20
|
CONTRIBUTOR
|
Summary:
**Context**: https://github.com/pytorch/pytorch/pull/150122 (D71982587 - let's call this "the WS diff") introduces "bc/fc-breaking" cache changes.
In particular, it introduces `num_consumer_groups` and adds it to the cached config. In versions of torch that include the WS diff, `num_consumer_groups` is treated as a class variable on a triton.Config object (i.e. `triton.Config({..kwargs..}, num_consumer_groups=num_consumer_groups, ...`). And in versions of torch that don't include the WS diff, you generally don't expect to see this kwarg.
But if a program is run with WS-torch (i.e. torch w/ the WS diff), and then later you run the same program with non-WS-torch, then non-WS-torch is going to find this autotune cache entry and interpret `num_consumer_groups` as a kwarg, because there's no special handling for num_consumer_groups in this version of torch. Then the program crashes with a triton failure message.
**The fix**: add the torch version / torch key into the hash, so that any changes to inductor will invalidate the cache (ensuring that other changes to triton_heuristics won't cause these bc/fc issues).
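Conceptually, the fix amounts to folding a torch-level key into the cache hash. A simplified sketch of the idea (using `torch.__version__` as a stand-in for the internal `torch_key()` used by the PR):
```python
import hashlib
import torch

# Simplified sketch: mix a torch-level key into the autotune cache key so
# that any change to the installed torch/inductor invalidates stale entries.
def autotune_cache_key(config_repr: str) -> str:
    payload = f"{torch.__version__}::{config_repr}".encode()
    return hashlib.sha256(payload).hexdigest()

print(autotune_cache_key("num_warps=4, num_stages=3"))
```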
Test Plan: D72285868 (or https://gist.github.com/davidberard98/2ea697eb550c94d0d1948fedb5c5c7d8, but this doesn't repro in OSS because this version of warp specialization is not available in oss triton) can repro the failure, and the failure is fixed after this PR is patched.
Also, added a test in test/inductor/test_codecache.py which verifies that there's no cache hit if the torch_key changes (and verified that without the functional changes in this PR, the test fails).
Differential Revision: D72285303
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,836,843
|
[DTensor] add _explicit_order_placements util
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150887
* #150862
* #150650
* #150490
* __->__ #150493
The util converts a list of placements in the traditional DTensor format
(e.g. [_StridedShard(0), Shard(0)], where list position is mesh_dim and sharding
is always applied left-to-right (from dim 0 to higher dims))
to a more explicitly ordered format, also replacing '_StridedShard' with
simple 'Shard' placements in the process.
(e.g. the above becomes [(1, Shard(0)), (0, Shard(0))] where the first
item in the tuple is the mesh_dim and the ordering of the tuples is the
sharding order.)
This is useful so far as a helper for fixing local shape computation for
strided sharding in the uneven shape case, in the following PR- but may
also be useful more broadly if we can use explicit orderings to simplify
other parts of DTensor logic.
This skips implementing some combinations of _StridedSharding that are
not currently used in the wild today, but could be supported easily.
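To make the reordering concrete, here is a toy, illustrative-only sketch (not the real DTensor code, and only handling the single pattern from the example above):
```python
# Illustrative-only: placements are listed by mesh_dim, but a _StridedShard on
# an earlier mesh dim was logically applied *after* the plain Shard on a later
# mesh dim, so it is emitted later in the explicit (mesh_dim, placement) order.
def explicit_order(placements):
    ordered, deferred = [], []
    for mesh_dim, p in enumerate(placements):
        if p.startswith("_StridedShard"):
            deferred.append((mesh_dim, p.replace("_StridedShard", "Shard")))
        else:
            ordered.append((mesh_dim, p))
    return ordered + deferred

print(explicit_order(["_StridedShard(0)", "Shard(0)"]))
# [(1, 'Shard(0)'), (0, 'Shard(0)')]
```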
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,964,827,662
|
Expired SSL breaking CI builds
|
harshalparekh6
|
closed
|
[
"module: ci",
"ci: sev"
] | 5
|
NONE
|
The SSL certificate is expired causing this error:
```
Could not fetch URL https://download.pytorch.org/whl/cpu/torchvision/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='download.pytorch.org', port=443): Max retries exceeded with url: /whl/cpu/torchvision/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1133)'))) - skipping
```
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
2,964,826,276
|
Security certificate expired on https://download.pytorch.org/whl/
|
djgagne
|
closed
|
[] | 7
|
NONE
|
I have a CI pipeline that depends on the Linux CPU version of PyTorch and downloads from https://download.pytorch.org/whl/cpu. The CI script failed, so I visited the wheel site and discovered that the site's RSA security certificate expired on Tuesday, April 1, 2025 at 5:59:59 PM Mountain Daylight Time. When will the certificate be renewed?
| true
|
2,964,795,100
|
[DTensor] StridedShard support uneven sharding
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor",
"release notes: distributed (dtensor)",
"release notes: distributed (checkpoint)",
"merging"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150887
* #150862
* #150650
* __->__ #150490
This enables using FSDP+TP on parameters with dimensions that aren't
evenly divisible by the DP/TP mesh sizes.
- this may not support all possible combinations of strided and regular
shardings, but the support before this PR is not complete anyway
This contains several fixes for different aspects of DTensor behavior
relating to uneven strided sharding:
- original creation of the strided tensor requires fixes in
StridedShard._split_tensor
- full_tensor() reconstruction requires fixes in
StridedShard._to_replicate_tensor to correctly reshuffle the data into
the original pre-sharded order
- Distributed Checkpointing support requires correct computation of the
compute_local_shape_and_global_offset util so it knows how a local
shard maps to the global tensor, for reconstruction during
load/reshard.
This PR also adds a util `_explicit_order_placements` which converts a list of
placements with StridedSharding into a list of placements with only
regular sharding, with the order shuffled such that it is equivalent.
Builds on and completes the work started in https://github.com/pytorch/pytorch/pull/148894
Uneven Sharding Example
-------
(copied from _StridedShard._to_replicate_tensor docstring)
mesh = (DP=2, TP=2)
original = torch.arange(5)
**Applying Sharding**
Step 1 - Apply TP sharding
`tp = distribute_tensor(x, world_mesh['tp'], [Shard(0)])`
local_tensors:
rank0: [0,1,2] rank1: [3,4]
rank2: [0,1,2] rank3: [3,4]
Step 2 - Apply FSDP sharding
`dp_tp = ...` (the process of creating a strided-shard tensor is skipped over as it is hacky and complicated)
dp_tp has placement (_StridedShard(0, split_factor=2), Shard(0))
local_tensors:
rank0: [0,1] rank1: [3]
rank2: [2] rank3: [4]
**Reconstructing the Full Tensor**
Now, say someone wants to reconstruct dp_tp's full tensor. This will invoke 'redistribute' to replicate.
redistribute will first replicate the "Shard(0)" placement on the rightmost mesh dim, then replicate the
StridedShard placement second, which is implemented by this function.
So our starting point (`local_tensor` arg) is the result of replicating the Shard(0) placement across the
TP dim, which looks like this.
Note the discrepancy with the 'tp sharded tensor' line above! We'll fix it by locally shuffling data.
local_tensors:
rank0: [0,1,3] rank1: [0,1,3]
rank2: [2,4] rank3: [2,4]
Step 1: replicate over the DP dimension. Afterwards, each rank can locally sort the values.
note: we need padding to do this allgather, and we'll need to keep track of the padding amount for later
local_tensors:
rank0: [0,1,3,2,4] rank1: [0,1,3,2,4]
rank2: [0,1,3,2,4] rank3: [0,1,3,2,4]
Step 2: chunk and shuffle values around to account for the wrong order of operations above
and get the original tensor content back
01324# <- our allgather includes padding, if padding was applied in step 1
01324 <- Remove the padding
013, 24 <- chunk once, 'undoing' the DP allgather
01, 3, 2, 4 <- chunk each chunk, 'undoing' the initial (wrong) TP allgather performed by Shard(0)->Replicate()
012, 34 <- interleave with stride=TP mesh dim size
01234 <- concatenate
Co-authored-by: Luca Wehrstedt <lw@meta.com>
Co-authored-by: Will Constable <whc@meta.com>
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,964,780,337
|
[Inductor] Refactor accuracy check to `allclose_many` function
|
blaine-rister
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
**Note: This seems like a duplicate of `torch._dynamo.utils.same`. I will likely close this PR in favor of that.**
Preparatory refactor for https://github.com/pytorch/pytorch/pull/146942.
# Feature
This is a small change to refactor an existing test utility into a common library, so it can be reused across test modules. The feature is a function called `allclose_many`, which checks accuracy across a pytree of tensors. Most end-to-end tests perform this type of check. I'm not sure if there's an existing utility for this, but this one seems simple enough.
As a bonus, `allclose_many` calls into its own helper `call_many`, which can be used for other types of checks. This is essentially a variadic form of `pytree.tree_map`.
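For reference, a minimal sketch of what such a helper could look like (an illustration of the description above, not the PR's actual implementation):
```python
import torch
from torch.utils import _pytree as pytree

def allclose_many(expected, actual, rtol=1.3e-6, atol=1e-5):
    # Flatten both pytrees and compare leaves pairwise: tensors via allclose,
    # everything else via equality. A structure mismatch fails immediately.
    exp_leaves, exp_spec = pytree.tree_flatten(expected)
    act_leaves, act_spec = pytree.tree_flatten(actual)
    if exp_spec != act_spec:
        return False
    return all(
        torch.allclose(e, a, rtol=rtol, atol=atol)
        if isinstance(e, torch.Tensor)
        else e == a
        for e, a in zip(exp_leaves, act_leaves)
    )

print(allclose_many({"x": torch.ones(2)}, {"x": torch.ones(2)}))  # True
```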
# Test plan
This feature is used by existing block pointer tests. Also, this PR adds a new unit test checking that `allclose_many` correctly spots an accuracy bug.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,779,061
|
Lightweight CUDAGraph backend
|
BoyuanFeng
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor",
"vllm-compile"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
There can be a large runtime overhead from TorchDynamo cache lookup and cudagraph tree runtime checks. This overhead is small relative to a large computation graph; however, it becomes noticeable when the computation graph is small.
In one example, the breakdown is
1) Other torch.compile overhead (e.g., TorchDynamoCacheLookup): 176 us
2) Cudagraph tree overhead: 128 us
3) cudaGraphLaunch time: 41 us
4) Actual cuda kernel time: 72 us
1)+2)+3)+4) = 417 us
Ideally we only need 3) + 4), so the overall latency reduces from 417 us -> 113 us. In this example, the user turns on fullgraph=True and there is a single cudagraph.
This may also apply to vLLM case where there is 1 cudagraph for each layer after graph partition.
Another use case is multiple smaller cudagraphs from graph partition.
We may consider a “CUDAGraph List” when there is a sequence of CUDAGraphs that always run one after another. Then we don’t need runtime checks (e.g., [check_invariants](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/cudagraph_trees.py#L2116-L2122)) and can avoid overhead. This loses generality but also reduces some overhead.
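A rough sketch of the "CUDAGraph list" idea using the raw CUDA graph API; the shapes and ops below are made up for illustration, and a real design would need careful memory-pool and lifetime handling:
```python
import torch

x = torch.randn(16, device="cuda")

# Warm up the ops on a side stream before capture, as CUDA graphs require.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    y_warm = x.sin()
    z_warm = y_warm.cos()
torch.cuda.current_stream().wait_stream(s)

# Capture two graphs once; the second shares the first's memory pool since
# they will always be replayed in this order.
g1, g2 = torch.cuda.CUDAGraph(), torch.cuda.CUDAGraph()
with torch.cuda.graph(g1):
    y = x.sin()            # static output buffer reused on every replay
with torch.cuda.graph(g2, pool=g1.pool()):
    z = y.cos()            # consumes g1's static output

def run_chain():
    # "CUDAGraph list": replay in a fixed order, no per-call runtime checks.
    g1.replay()
    g2.replay()
    return z

out = run_chain()
```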
Internal ref: https://fb.workplace.com/groups/1075192433118967/permalink/1633387517299453/
cc @mcarilli @ezyang @eellison @penguinwu @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,964,766,992
|
Expose symbols on macos in the xplat pytorch stack
|
stepanhruda
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/pytorch/executorch/pull/9819
Had to revert D71321310 because it affected way too many targets and build sizes.
These changes should expose just enough symbols to be buildable in arvr mode on macOS. Could potentially narrow it down even more by avoiding e.g. `get_pt_compiler_flags`
Differential Revision: D72255474
| true
|
2,964,766,244
|
[invoke_subgraph] Filter out grad_out where fw_out requires_grad is False
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150486
* #150450
* #150082
I am not sure if this is the right way.
| true
|
2,964,762,336
|
[inductor][test] Disable Triton GEMM backend tests for SM89
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148622
* __->__ #150485
Motivation: To deprecate a silent fallback behavior https://github.com/pytorch/pytorch/issues/150390
Problem: On SM89, the Triton GEMM backend isn't working. This seems to be a pre-existing issue. I don't have access to SM89 to debug further.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,723,206
|
API to specify cudagraph sizes
|
BoyuanFeng
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor",
"vllm-compile"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Currently PT2 cudagraph supports automated dynamic shapes. Specifically, we cache on symint function args and record a new cudagraph whenever we see a new dynamic shape. When there are many dynamic shapes, we keep recording new cudagraphs until reaching a certain threshold (e.g., 256 cudagraphs). This automated experience frees users from considering dynamic shapes in cudagraph. However, it also adds runtime overhead.
An alternative UX is to allow users to specify a set of important input shapes and only record a cudagraph for those shapes. For all other shapes, we fall back to a general but non-cudagraphed code path. This only targets pro-users.
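For context, a small runnable illustration of the automated recording behavior described above (assuming a CUDA device; the function and sizes are arbitrary examples):
```python
import torch

@torch.compile(mode="reduce-overhead", dynamic=True)
def f(x):
    return x.sin() + 1

# Today: each new dynamic size seen at runtime can trigger recording of a new
# CUDA graph, up to an internal limit. The proposal above would make the set
# of recorded sizes user-controllable.
for n in (8, 16, 32, 64):
    f(torch.randn(n, device="cuda"))
```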
cc @mcarilli @ezyang @eellison @penguinwu @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,964,722,314
|
[dynamic shapes] guard_or_false for computeStorageNbytes
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
removes fast path for computing storage, fixes some adjacent tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,713,890
|
API to specify static input indices for cudagraph
|
BoyuanFeng
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor",
"vllm-compile"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Currently we rely on AOTAutograd to identify static input indices. However, if the graph module is not captured by dynamo, we don't have static input indices anymore ([code](https://github.com/pytorch/pytorch/blob/main/torch/_functorch/aot_autograd.py#L1013-L1015)). This leads to cudagraph issues where we unnecessarily copy all parameters/buffers to static tensor addresses. One use case is when users want to call inductor's compile_fx directly on a graph (e.g., vLLM).
To fix this issue, we should add an API for users to specify static input indices which will be used by cudagraph in PT2.
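For reference, a minimal sketch of the use case above (compiling an FX graph directly with inductor, bypassing dynamo); the toy function and shapes are made up, and in this path there is currently no argument to mark static inputs:
```python
import torch
from torch._inductor.compile_fx import compile_fx

# Toy example of compiling an FX graph directly with inductor (no dynamo).
# There is currently no way here to tell cudagraphs which inputs are static
# (e.g. parameters/buffers), which is what the requested API would add.
def f(x, w):
    return x @ w

gm = torch.fx.symbolic_trace(f)
example_inputs = [torch.randn(4, 4, device="cuda"), torch.randn(4, 4, device="cuda")]
compiled = compile_fx(gm, example_inputs)
out = compiled(*example_inputs)  # calling convention mirrors the traced function
```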
cc @mcarilli @ezyang @eellison @penguinwu @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,964,710,303
|
[dynamic shapes] guard_or_false rewrite for scatter, gather, index metas
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,707,750
|
[XPU] Triton Windows build failing release 2.7
|
atalman
|
closed
|
[
"module: binaries",
"topic: binaries",
"module: xpu"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Observing the following failure:
https://github.com/pytorch/pytorch/actions/runs/14203257536/job/39804771702
Error: Filename longer than 260 characters:
```
2025-04-01T21:28:58.8631176Z ninja: error: Stat(C:/Users/runneruser/AppData/Local/Temp/tmpzgzp7pwt/triton/python/build/cmake.win-amd64-cpython-3.10/_deps/spirv-llvm-translator-subbuild/spirv-llvm-translator-populate-prefix/src/spirv-llvm-translator-populate-stamp/spirv-llvm-translator-populate-patch-info.txt): Filename longer than 260 characters
2025-04-01T21:28:58.8632966Z
2025-04-01T21:28:58.8633695Z CMake Error at C:/actions-runner/_work/pytorch/pytorch/pytorch/Miniconda3/envs/py310/Lib/site-packages/cmake/data/share/cmake-4.0/Modules/FetchContent.cmake:1918 (message):
2025-04-01T21:28:58.8634660Z Build step for spirv-llvm-translator failed: 1
2025-04-01T21:28:58.8635024Z Call Stack (most recent call first):
2025-04-01T21:28:58.8636375Z C:/actions-runner/_work/pytorch/pytorch/pytorch/Miniconda3/envs/py310/Lib/site-packages/cmake/data/share/cmake-4.0/Modules/FetchContent.cmake:1609 (__FetchContent_populateSubbuild)
2025-04-01T21:28:58.8638648Z C:/actions-runner/_work/pytorch/pytorch/pytorch/Miniconda3/envs/py310/Lib/site-packages/cmake/data/share/cmake-4.0/Modules/FetchContent.cmake:2145:EVAL:2 (__FetchContent_doPopulation)
2025-04-01T21:28:58.8640199Z C:/actions-runner/_work/pytorch/pytorch/pytorch/Miniconda3/envs/py310/Lib/site-packages/cmake/data/share/cmake-4.0/Modules/FetchContent.cmake:2145 (cmake_language)
2025-04-01T21:28:58.8641734Z C:/actions-runner/_work/pytorch/pytorch/pytorch/Miniconda3/envs/py310/Lib/site-packages/cmake/data/share/cmake-4.0/Modules/FetchContent.cmake:1978:EVAL:1 (__FetchContent_Populate)
2025-04-01T21:28:58.8643270Z C:/actions-runner/_work/pytorch/pytorch/pytorch/Miniconda3/envs/py310/Lib/site-packages/cmake/data/share/cmake-4.0/Modules/FetchContent.cmake:1978 (cmake_language)
2025-04-01T21:28:58.8644409Z third_party/intel/cmake/FindSPIRVToLLVMTranslator.cmake:23 (FetchContent_Populate)
2025-04-01T21:28:58.8645058Z third_party/intel/lib/Target/SPIRV/CMakeLists.txt:2 (find_package)
```
### Versions
2.8.0
cc @seemethere @malfet @osalpekar @gujinghui @EikanWang @fengyuan14 @guangyey
cc @EikanWang @chuanqi129
| true
|
2,964,704,309
|
[MPS] tril op not handling infs correctly
|
pytorchbot
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Fixes #149813
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,964,681,990
|
[CUDAGraph] support meta tensor
|
BoyuanFeng
|
closed
|
[
"Merged",
"module: cuda graphs",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Previously, cudagraph was skipped if the graph contained any meta tensor. However, we should not skip, since meta tensors do not involve any actual computation. This PR fixes the issue.
### Example
```python
import torch
def foobar(x, y):
return x * 2, y * 3
foo_c = torch.compile(mode="reduce-overhead")(foobar)
t = torch.empty((1, 16, 128, 128), device="meta")
y = torch.rand([64], device="cuda")
eager_out = foobar(t, y)
for _ in range(3):
compiled_out = foo_c(t, y)
```
Prior to this PR, the above code leads to
```
skipping cudagraphs due to multiple devices: device(type='cuda', index=0), device(type='meta')
```
With this PR, we don't skip.
cc @mcarilli @ezyang @eellison @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,681,518
|
[dynamo] improve graph break message causing skipped frame
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: compile ux"
] | 0
|
MEMBER
|
Example:
```python
import torch
class CtxMgr:
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
def fn(x):
with CtxMgr():
assert x is None
torch.compile(fn, backend="eager")(torch.randn(3))
```
Logs:
```
Graph break: skip: from user code at:
File "/data/users/williamwen/pytorch/playground.py", line 16, in fn
assert x is None
Traceback (most recent call last):
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 1233, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 1079, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 779, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 815, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 3511, in run
super().run()
File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
^^^^^^^^^^^
File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 646, in inner
jump_graph_break(self, inst, value)
File "/data/users/williamwen/pytorch/torch/_dynamo/symbolic_convert.py", line 594, in jump_graph_break
unimplemented_v2(
File "/data/users/williamwen/pytorch/torch/_dynamo/exc.py", line 517, in unimplemented_v2
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: Should not compile partial graph (data-dependent branching)
Explanation: Dynamo has determined when encountering data-dependent branching (e.g. `if my_tensor.item() > 0:`) that it should not compile the partial graph.
Developer debug context:
from user code:
File "/data/users/williamwen/pytorch/playground.py", line 16, in fn
assert x is None
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
Notes:
- We should hide the internal compiler stack trace if verbose logging is not set
- We should better explain what is meant by "skip"
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,964,678,385
|
Better test coverage on _inductor/scheduler.py
|
exclamaforte
|
open
|
[
"triaged",
"better-engineering",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I noticed that we have almost no direct tests of the classes in scheduler.py. It's not clear how much of an issue this is as the scheduler is covered by almost every other inductor test. Ideally, we'd get some coverage stats and then add tests to cover the gaps:
- [ ] get coverage of scheduler.py
- [ ] fill in gaps
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,674,925
|
expect fail scan test in sigmoid
|
ydwu4
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: as titled.
Test Plan: see modified test.
Differential Revision: D72271976
| true
|
2,964,674,616
|
[dynamic shapes] rewrite slice_forward decomp with guard_or_false
|
pianpwk
|
open
|
[
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Uses guard_or_false in place of size-oblivious reasoning to assume, if not already known, that the start/end indices are in-bounds.
Adds torch._checks for this, checking `start_val >= 0`, `end_val <= sizes[dim]`, and `start_val <= end_val`, which helps guarantee that the output size at runtime matches the symbolic expression `end_val - start_val`.
Without these checks the reported symbolic size might not match; e.g. if end_val < start_val, eager returns a size-0 tensor but the symbolic size is negative.
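A tiny eager example of the mismatch described above:
```python
import torch

x = torch.randn(8)
# Out-of-order bounds: eager clamps and returns an empty tensor, while the
# naive symbolic size end_val - start_val would be negative (3 - 5 = -2).
print(x[5:3].shape)  # torch.Size([0])
```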
| true
|
2,964,674,274
|
[ROCm] code cleanup of architecture checks
|
apakbin
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 7
|
CONTRIBUTOR
|
This PR replaces several calls to `at::cuda::getCurrentDeviceProperties()->gcnArchName` and `at::cuda::getDeviceProperties(device_index)->gcnArchName` when checking to see if the GPU architecture is in a certain list.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,964,671,453
|
torch.library.custom_op doesn't handle 1-element tuples returns
|
zou3519
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher",
"internal ramp-up task"
] | 1
|
CONTRIBUTOR
|
```
import torch
@torch.library.custom_op("mylib::add", mutates_args=())
def add(x: torch.Tensor, y: torch.Tensor) -> tuple[torch.Tensor]:
return (x.clone(),)
x = torch.randn(3)
ret = add(x, x)
```
gives:
```
yset, *args, **kwargs)
720 def redispatch(self, /, keyset, *args, **kwargs):
--> 721 return self._handle.redispatch_boxed(keyset, *args, **kwargs)
RuntimeError: Unable to cast (tensor([-0.3896, 0.1958, -0.0152]),) to Tensor
```
cc @chauhang @penguinwu @bdhirsh
| true
|
2,964,664,187
|
[dynamo] emit only 1 graph break message on unrecoverable data-dependent assert fail
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150471
Addresses https://fb.workplace.com/groups/1075192433118967/permalink/1625299684774903/
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,649,000
|
[pytorch][triton] Allow warp spec for FlexAttention kernel
|
mandroid6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary:
Given inductor support for warp-specialization for `TritonTemplateKernel`, this change adds:
- num_consumer_groups
- num_buffers_warp_spec
to the flexattention template generated by inductor in `torch.compile`.
NOTE: Currently the default config doesn't enable warp-spec; explicit `num_consumer_groups` and `num_buffers_warp_spec` args must be passed in the kernel options to enable it.
Test Plan:
### Functional Testing
```Py
import torch
from torch.nn.attention.flex_attention import flex_attention
from triton.testing import do_bench
make_tensor = lambda: torch.rand(8, 16, 8192, 128, device="cuda", dtype=torch.bfloat16)
q, k, v = make_tensor(), make_tensor(), make_tensor()
flex_compiled = torch.compile(flex_attention, fullgraph=True)
print(do_bench(lambda: flex_compiled(q, k, v, kernel_options={"num_warps": 4, "num_consumer_groups": 2,
"num_buffers_warp_spec": 3,})))
```
- (best config) without WS: 11.06
- with WS: 9.35
Differential Revision: D70501880
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,640,394
|
[torchrec] update local_shards_wrapper to latest version
|
iamzainhuda
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Summary: Adding new ops, support for empty shards, and fixed initializations for downstream checkpointing.
Test Plan: buck2 run 'fbcode//mode/dev-nosan' fbcode//torchrec/distributed/tests:test_shards_wrapper
Differential Revision: D72271275
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,964,634,759
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_bool (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_bool&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39794859365).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_bool`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bool], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bool], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bool], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bool], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bool], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bool], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bool], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bool], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bool], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bool], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bool], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bool], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bool], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bool], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bool], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bool], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bool], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bool], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bool], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bool]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_bool
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,964,634,694
|
DISABLED test_foreach_l2_large_value_input__foreach_norm_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_foreach_l2_large_value_input__foreach_norm_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39793119621).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_foreach_l2_large_value_input__foreach_norm_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,964,619,998
|
Add some CPython tests to dynamo
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150466
* #147990
* #146506
* #146501
* #146500
CPython tests included:
* test_baseexception.py
* test_cmath.py
* test_complex.py
* test_contextlib.py
* test_dict.py
* test_exceptions.py
* test_float.py
* test_generators.py
* test_generator_stop.py
* test_grammar.py
* test_int_literal.py
* test_int.py
* test_iter.py
* test_list.py
* test_math.py
* test_ordered_dict.py
* test_raise.py
* test_setcomps.py
* test_set.py
* test_sort.py
* test_string.py
* test_sys.py
* test_tuple.py
* test_userdict.py
* test_userlist.py
* test_userstring.py
* unittest/test_assertions.py
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,606,785
|
`torch._dynamo.nonstrict_trace` has confusing user code stacktrace
|
StrongerXi
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
@torch._dynamo.nonstrict_trace
def f(x, items):
it = iter(items)
return next(it), x.sin()
opt_f = torch.compile(f, backend="eager", fullgraph=True)
x = torch.randn(3)
dct = {'a': 3, 'b': 3}
ref = f(x, dct.items())
print(ref)
res = opt_f(x, dct.items())
print(res)
# Traceback (most recent call last):
# File "/home/ryanguo99/scratch/test.py", line 15, in <module>
# res = opt_f(x, dct.items())
# ^^^^^^^^^^^^^^^^^^^^^
# File "/home/ryanguo99/repos/pytorch/torch/_dynamo/eval_frame.py", line 667, in _fn
# raise e.with_traceback(None) from e.__cause__
# torch._dynamo.exc.Unsupported:
# For `nonstrict_trace`-ed function, the only allowed input types are basic types (e.g., torch.Tensor, int, float) or pytree containers of those. Here you are calling the function with arguments that contain a value of type <dict_items>, please use one of the following to register the type with pytree:
# * `torch.utils._pytree.register_constant`
# * `torch.utils._pytree.register_dataclass`
# * `torch.utils._pytree.register_pytree_node`
#
#
# from user code:
# File "/home/ryanguo99/repos/pytorch/torch/_dynamo/external_utils.py", line 70, in inner
# return fn(*args, **kwargs)
```
### Error logs
_No response_
### Versions
main 0d96c38b76b, python 3.11
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,964,577,373
|
[BE] Do not allow PyTorch codebase to use `c10::optional`
|
malfet
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/mps",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Extensions can still rely on it, and we should decorate it with deprecated, but it is a C++20 feature.
XPU still uses it, so exclude XPU builds until https://github.com/intel/torch-xpu-ops/pull/1615 is merged
Test plan:
- https://github.com/pytorch/pytorch/pull/150464/commits/0def9b4acc81f9bcb032f57f8c606a71234564c9 should fail MPS builds
```
/Users/ec2-user/runner/_work/pytorch/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:975:44: error: no template named 'optional' in namespace 'c10'; did you mean 'std::optional'?
c10::optional<int64_t> extra) {
^~~~~~~~~~~~~
std::optional
```
- https://github.com/pytorch/pytorch/pull/150464/commits/a769759dd42cb8b370d9cbfac5c161832ee033b8 should fail CUDA builds
```
/var/lib/jenkins/workspace/torch/csrc/distributed/c10d/CUDASymmetricMemoryOps.cu(530): error: namespace "c10" has no member "nullopt"
input, c10::nullopt, reduce_op, group_name, out);
^
1 error detected in the compilation of
```
Fixes https://github.com/pytorch/pytorch/issues/150313
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,964,529,349
|
[ROCm][TunableOp] Fix UT race condition and reduce UT duration.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 3
|
COLLABORATOR
|
This PR fixes two race conditions that occur when unit tests are run:
- In a particular order within a single shard.
- Concurrently in multiple shards. Each test now gets a unique filename that depends on the test name.
There were two other minor improvements to the UTs:
- matmul_offline_mgpu could occasionally fail if run on 8 GPUs. Criteria was relaxed.
- bmm_tunableop_rocm checks that the rotating buffer is not zero. Otherwise, the test is not useful.
Additionally, several UTs took over 1 minute to run. Their duration was reduced by a combination of setting max tuning iterations to one, setting the rotating buffer size to zero, and/or reducing the matrix dimensions.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,964,477,866
|
enable torch.compile for torch._scaled_mm nvfp4 recipe
|
vkuzo
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: improvements",
"fx"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150462
Summary:
Updates the meta registration for `torch._scaled_mm` to work for the
nvfp4 recipe.
Test Plan:
```bash
pytest test/test_matmul_cuda.py -s -k test_blockwise_nvfp4
```
Reviewers:
Subscribers:
Tasks:
Tags:
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,964,472,635
|
[release] Make pytorch source distribution package respect pep-0517
|
atalman
|
open
|
[
"module: binaries",
"triaged",
"topic: binaries"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I would like to make modifications to the source distribution package so that it respects https://peps.python.org/pep-0517/
Our source packaging was initially introduced by https://github.com/pytorch/pytorch/pull/63022
and has not changed since then.
I would like to modify create-release yml to build sdist respecting PEP 0517:
https://github.com/pytorch/pytorch/blob/main/.github/workflows/create_release.yml#L68
PyPi documentation on generating sdist:
https://packaging.python.org/en/latest/tutorials/packaging-projects/#generating-distribution-archives
Currently if one tries to install the tar.gz file used in the release, we get something like this:
```
pip install pytorch-v2.6.0.tar.gz
Processing ./pytorch-v2.6.0.tar.gz
ERROR: Exception:
Traceback (most recent call last):
File "/Users/atalman/miniconda3/lib/python3.9/tarfile.py", line 2617, in next
tarinfo = self.tarinfo.fromtarfile(self)
File "/Users/atalman/miniconda3/lib/python3.9/tarfile.py", line 1295, in fromtarfile
obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
File "/Users/atalman/miniconda3/lib/python3.9/tarfile.py", line 1231, in frombuf
raise EmptyHeaderError("empty header")
tarfile.EmptyHeaderError: empty header
```
### Versions
2.8.0
cc @seemethere @malfet @osalpekar
| true
|
2,964,426,151
|
[Cherry-pick] Make PyTorch buildable with cmake-4
|
malfet
|
closed
|
[
"module: cpu",
"ciflow/binaries",
"release notes: quantization",
"release notes: releng"
] | 1
|
CONTRIBUTOR
|
This cherry-picks following two PRs into release/2.7 branch
- **[Cmake] Make PyTorch buildable by CMake-4.x (#150203)**
- **Make PyTorch buildable by CMake-4.x on s390x (#150294)**
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,964,418,981
|
[test] testing binary builds for 150226
|
clee2000
|
closed
|
[
"ciflow/binaries",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
testing binary builds for https://github.com/pytorch/pytorch/pull/150226
| true
|
2,964,407,696
|
[Inductor] Refactor wrapper codegen to use Wrapper IR.
|
blaine-rister
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Preparatory refactor for https://github.com/pytorch/pytorch/pull/146942.
# Feature
This PR refactors the existing wrapper codegen into `WrapperLine` subclasses, extending the existing Memory Planning IR into a fully-fledged Wrapper IR. See the diagram below.

The IR currently supports the following ops:
- All existing memory planning IR ops (`AllocateLine`, `FreeIfNotReusedLine`, etc.)
- Reinterpret views (`ReinterpretLine`)
- Kernel definitions (`KernelDefinitionLine`)
- Calls to defined kernels (`KernelCallLine`)
- Calls to extern kernels (`ExternKernelLine`, `ExternKernelAllocLine`)
- Ops with multiple outputs (`MultiOutputLine`)
- Tensor cleanup at the end of a graph (`FreeLine`)
- Leaving comments in code (`CommentLine`)
There are two main motivations for this refactor:
1. Unlike free-form C++ and Python code, Wrapper IR lines provide structured information about what the wrapper code does. This serves as a natural extension point for other types of wrapper codegen. For example, the parent PR generates FX IR from Wrapper IR. Wrapper IR aims to give new backends enough information to generate wrapper code without needing to modify core Inductor files such as `ir.py`.
2. This design will hopefully promote stronger modularity and encapsulation.
a. Inductor's core compilation passes don't need to worry about whether they're targeting Python, C++, FX or anything else. They can simply focus on generating Wrapper IR, and target-specific code can be refactored into the various backends.
b. Backends do not need to know about all the details and internal state of `V.graph` IR. For example, they don't need to consider whether a buffer has been removed from the graph when generating code. Wrapper IR will hopefully provide a simpler interface for generating wrapper code, which abstracts away the details of device code.
# Implementation details
The implementation mainly consists of separating direct C++/Python codegen into two phases:
1. Emit Wrapper IR lines describing what the wrapper code is supposed to do.
2. Inside the `codegen()` method of each `WrapperLine`, call backend methods which generate pure Python/C++ code using the information stored in the Wrapper IR line. For example, `KernelCallLine` calls `wrapper._generate_kernel_call_helper`, which is overriden by the various Python and C++ backends to generate the final wrapper code.
The main difficulty in implementing this is that we need to be careful that code is generated in the correct order. Wrapper codegen happens in two passes: first we write code into `self.lines` which mainly contains wrapper IR, but can also contain raw Python or C++ lines in some situations. Then, we convert the wrapper IR into the final Python/C++ code in `self.wrapper_call`. Since the same macros may be used in both passes, it's difficult to ensure that code is written to the correct buffer. The easiest solution for this was to implement a context manager overriding the `writeline` method to write to `self.wrapper_call` after memory planning is finished. This way, `writeline` writes to `self.lines` in the first pass, and `self.wrapper_call` in the second. This obviated the need to pass `code` or `writeline` variables all the way through the call stack, which would have touched most of the existing macros.
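To make the two-phase flow concrete, here is a toy, purely illustrative sketch (these are not the real Inductor classes or signatures, just the shape of the pattern):
```python
from dataclasses import dataclass

# Toy sketch: a Wrapper IR line records *what* to do (phase 1), and codegen()
# asks a backend helper to decide *how* to print it (phase 2).
class PyBackend:
    def _generate_kernel_call_helper(self, name, args):
        return f"{name}({', '.join(args)})"

class CppBackend:
    def _generate_kernel_call_helper(self, name, args):
        return f"{name}({', '.join(args)});"

@dataclass
class KernelCallLine:  # stand-in for the real Inductor class of the same name
    kernel_name: str
    call_args: tuple

    def codegen(self, backend):
        return backend._generate_kernel_call_helper(self.kernel_name, self.call_args)

lines = [KernelCallLine("triton_poi_fused_add_0", ("buf0", "arg0_1"))]  # phase 1
print([line.codegen(PyBackend()) for line in lines])   # phase 2, Python target
print([line.codegen(CppBackend()) for line in lines])  # phase 2, C++ target
```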
# Test plan
Since this refactor touches all the existing wrapper codegen classes, the existing CI provides good coverage.
The parent PR introduces new tests for the FX IR backend. Among other things, these tests assert that `self.lines` only contains Wrapper IR lines, and no free-form code. While this would not be true of all programs today, the tests suggests that the IR implemented in this PR is sufficient to cover basic PyTorch usage.
# Future directions
These two goals are only partially realized by this PR. These are several important steps which still undergo direct Python/C++ codegen in core files:
- User-defined Triton kernels.
- Reinterpret views on outputs, from `gen_output_refs()`. (In the parent PR, the FX converter has a custom way of handling this. This can eventually be ported into Wrapper IR.)
- Fallback ops with custom `codegen()` methods, e.g. `ScatterFallback`.
- Misc. C++ lines emitted by the various cpp backends, e.g. declaring constants.
These cases will gradually be handled in subsequent PRs, as the Inductor->FX converter expands its coverage. Given that these refactors are pretty tricky to do, it seems wiser to execute them in stages, as opposed to porting everything to Wrapper IR at once. Some Python and C++ codegen still lives in core files such as `ir.py`, as described in previous sections. Hopefully, this PR will serve as a starting point which moves the codebase towards a more modular design. Over time, we can gradually refactor the remaining codegen (mainly in `ir.py`) into backend classes.
One limitation of this PR is that codegen still happens in two phases during `PythonWrapperCodegen`. First, we generate Wrapper IR into `self.lines`, and from there we generate Python or C++ code into `self.wrapper_call`, `self.header`, etc. In the long term, it would be cleaner to split wrapper IR into its own class which doesn't deal with Python/C++ codegen at all. (See the diagram at the top.) That would strictly enforce the boundary between Wrapper IR and Python/C++ wrapper code. However, this would probably be a much larger refactor.
Another limitation of the current code is that the helper functions have a lot of call args. It's also possible to clean this up by passing Wrapper IR ops e.g. `KernelCallLine` into helper functions like `_generate_kernel_call_helper`, since they store all the arguments. However, that change would likely be prone to merge conflicts, so I would like to save it for follow-up PRs if possible.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,402,519
|
[MPSInductor] Add `store_reduce` method
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150457
* #150452
That restricts the store operation to the 0th thread, which should be much better, shouldn't it?
(Though I don't observe it in the benchmark.)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,392,667
|
[dynamic shapes] add sym_and, sym_or
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
This has been pretty helpful for the size-oblivious rewrite. Wanted the variadic args version to avoid `sym_or(a, sym_or(b, sym_or(c, d)))` in favor of `sym_or(a, b, c, d)`. Happy to change this to ban the 1-arg version.
This is better than plain and/or because the whole symbolic expression gets preserved, and if we guard on it or defer as a runtime assert, we preserve all branches.
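A hedged usage sketch of the variadic form (the import path is an assumption about where the helpers are exposed, and the function below is purely illustrative):
```python
from torch.fx.experimental.symbolic_shapes import sym_or  # assumed import path

def any_dim_is_one(a, b, c, d):
    # Instead of sym_or(a == 1, sym_or(b == 1, sym_or(c == 1, d == 1))),
    # the variadic call keeps all branches in one symbolic expression, so a
    # guard or deferred runtime assert sees the whole condition.
    return sym_or(a == 1, b == 1, c == 1, d == 1)
```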
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,964,389,111
|
[dynamic shapes] oblivious rewrite for meta_select
|
pianpwk
|
open
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Uses `guard_or_true` in place of the size-oblivious check to assume, if not already known, that the index is in bounds.
Adds tests checking the runtime asserts for out-of-bounds indexing.
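A rough sketch of the pattern (illustrative only, not the actual `meta_select` code; the `guard_or_true` import path is an assumption):
```python
import torch
from torch.fx.experimental.symbolic_shapes import guard_or_true  # assumed path

def maybe_wrap_index(index, size):
    # Hypothetical helper: guard_or_true(cond) evaluates cond when it is
    # statically decidable and otherwise assumes True instead of raising a
    # data-dependent error; torch._check records the assumption as a runtime
    # assert, so out-of-range indices still fail at runtime.
    if guard_or_true(index >= 0):
        torch._check(index >= 0, lambda: f"expected a non-negative index, got {index}")
        return index
    return index + size
```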
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,964,380,841
|
torch.compile on MPS: error running compiled RMSNorm
|
manuelcandales
|
open
|
[
"triaged",
"module: mps",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
torch.compile on MPS generates syntactically incorrect shader for RMSNorm
```python
import torch
model = torch.compile(torch.nn.RMSNorm(2048, device="mps"))
x = torch.randn(2048, device="mps")
y = model(x)
```
Error:
```
Traceback (most recent call last):
File "/Users/mcandales/github/experiment/rms_norm_compile.py", line 27, in <module>
y = model(x)
^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 671, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1234, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3701, in RETURN_VALUE
self._return(inst)
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3686, in _return
self.output.compile_subgraph(
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1158, in compile_subgraph
self.compile_and_call_fx_graph(
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1451, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1501, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1533, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/__init__.py", line 2355, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2162, in compile_fx
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2149, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1165, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 835, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1150, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 1107, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1996, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 642, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 774, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 759, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1337, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1226, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2199, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2246, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2872, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/var/folders/d1/qlny9nnj0s97c0pljt5x0b8w0000gn/T/torchinductor_mcandales/54/c54ce5ll7l3ttfvx7q3g3wj6n3v6ivpnewh7uovrfjzbwebkx3bl.py", line 42, in <module>
mps_lib_0 = compile_mps_shader("""
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mcandales/miniconda3/envs/gptfast/lib/python3.12/site-packages/torch/_inductor/runtime/runtime_utils.py", line 181, in compile_mps_shader
raise SyntaxError(f"failed to compile {source} with {err.msg}") from err
torch._inductor.exc.InductorError: SyntaxError: failed to compile
#include <c10/metal/random.h>
#include <c10/metal/special_math.h>
#include <c10/metal/utils.h>
#include <c10/metal/reduction_utils.h>
kernel void generated_kernel(
device float* out_ptr1,
device float* out_ptr2,
constant float* in_ptr0,
constant float* in_ptr1,
uint2 thread_pos [[thread_position_in_grid]],
uint2 group_pos [[thread_position_in_threadgroup]]
) {
auto xindex = thread_pos.x;
auto r0_index = thread_pos.y;
threadgroup float tmp_acc_0[1024];
tmp_acc_0[r0_index] = 0;
for(auto r0_0_cnt = 0; r0_0_cnt < 2; ++r0_0_cnt) {
int r0_0 = 2 * r0_index + r0_0_cnt;
auto tmp0 = in_ptr0[r0_0];
auto tmp1 = tmp0 * tmp0;
tmp_acc_0[r0_index] += tmp1;
}
auto tmp2 = c10::metal::threadgroup_sum(tmp_acc_0, 1024);
auto tmp3 = 2048.0;
auto tmp4 = tmp2 / tmp3;
auto tmp5 = 1.1920928955078125e-07;
auto tmp6 = tmp4 + tmp5;
auto tmp7 = metal::rsqrt(tmp6);
out_ptr1[0] = static_cast<float>(tmp7);
auto tmp9 = in_ptr1[r0_0];
auto tmp8 = tmp0 * tmp7;
auto tmp10 = tmp8 * tmp9;
out_ptr2[r0_0] = static_cast<float>(tmp10);
}
with program_source:2120:29: error: use of undeclared identifier 'r0_0'
auto tmp9 = in_ptr1[r0_0];
^
program_source:2121:21: error: use of undeclared identifier 'tmp0'
auto tmp8 = tmp0 * tmp7;
^
program_source:2123:18: error: use of undeclared identifier 'r0_0'
out_ptr2[r0_0] = static_cast<float>(tmp10);
^
```
### Versions
PyTorch version: 2.8.0.dev20250330
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.1 | packaged by Anaconda, Inc. | (main, Jan 19 2024, 09:45:58) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.8.0.dev20250330
[pip3] torchaudio==2.6.0.dev20250330
[pip3] torchvision==0.18.0.dev20240223
[conda] numpy 1.26.4 py312h7f4fdc5_0
[conda] numpy-base 1.26.4 py312he047099_0
[conda] torch 2.8.0.dev20250330 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250330 pypi_0 pypi
[conda] torchvision 0.18.0.dev20240223 py312_cpu pytorch-nightly
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,964,365,573
|
Proactively remove CompiledTritonKernels before loading from cache/starting inductor compile
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150453
We're still running into this issue intermittently and it's hard to debug, so I thought a more aggressive cache-clear strategy may fix it as a stopgap until we can statically launch CUDA kernels and avoid some of this stuff.
Differential Revision: [D72257973](https://our.internmc.facebook.com/intern/diff/D72257973/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,352,149
|
[MPS][Testing] Benchmark reduction ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150457
* __->__ #150452
That compares eager vs compile
On my M4Pro mini I'm getting the following now
```
[--------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------]
| eager-512x512 | compile-512x512 | eager-1024x1024 | compile-1024x1024 | eager-2048x2048 | compile-2048x2048 | eager-4096x4096 | compile-4096x4096
1 threads: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
sum (torch.float32) | 121.0 | 201.5 | 130.3 | 772.3 | 179.4 | 1470.5 | 476.1 | 2980.0
max (torch.float32) | 154.1 | 165.9 | 198.7 | 211.6 | 344.2 | 386.9 | 1326.6 | 1345.6
```
| true
|
2,964,349,364
|
[ROCm] Build Pytorch extensions with amdclang++
|
akashveramd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 11
|
CONTRIBUTOR
|
The following modifications were made to cpp_extension.py:
1) Changed the compiler flag to use `--version`.
2) Added a feature to convert the alphanumeric version string returned by the compiler into a numeric string. This was the source of the error, as the parser was failing on the alphanumeric version string (see the sketch below).
Built with the following PyTorch extensions: Apex, TorchVision, TorchAudio & DeepSpeed.
Unit tested with the following PyTorch extensions: Apex, TorchVision.
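For illustration, a minimal sketch of the kind of version-string sanitization described in (2); the helper name is made up and this is not the actual cpp_extension.py code:
```python
import re

def numeric_version(version: str) -> str:
    # Drop non-digit characters from each component so a compiler version
    # such as "17.0.0git" parses as "17.0.0".
    parts = []
    for component in re.split(r"[.\-]", version):
        digits = re.sub(r"\D", "", component)
        if not digits:
            break
        parts.append(digits)
    return ".".join(parts) if parts else "0"
```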
(cherry picked from commit c873aeac35851a7d5000eb7f24561d3f56c2ffbd)
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,964,342,317
|
[invoke_subgraph] Do not cache fake tensors for AOTDispatcher first pass
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150486
* __->__ #150450
* #150082
| true
|
2,964,339,210
|
[2.7 RC] [Intel GPU] fused optimizers use FP64 internally which fails on A770
|
aleqs
|
closed
|
[
"triaged",
"module: xpu"
] | 4
|
NONE
|
### 🐛 Describe the bug
torch.optim.SGD/Adam/AdamW all fail when step() is invoked on A770 whenever fused is set to True. Below is the typical exception:
> ...
> File "/home/xxx/dev/nn/train_util.py", line 825, in train_model
> opt.step()
> File "/home/xxx/anaconda3/lib/python3.12/site-packages/torch/optim/optimizer.py", line 485, in wrapper
> out = func(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^
> File "/home/xxx/anaconda3/lib/python3.12/site-packages/torch/optim/optimizer.py", line 79, in _use_grad
> ret = func(self, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/xxx/anaconda3/lib/python3.12/site-packages/torch/optim/sgd.py", line 125, in step
> sgd(
> File "/home/xxx/anaconda3/lib/python3.12/site-packages/torch/optim/sgd.py", line 300, in sgd
> func(
> File "/home/xxx/anaconda3/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
> return fn(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^
> File "/home/xxx/anaconda3/lib/python3.12/site-packages/torch/optim/sgd.py", line 513, in _fused_sgd
> torch._fused_sgd_(
> RuntimeError: Required aspect fp64 is not supported on the device
I hacked together a cast to fp32 of every single input into `_fused_adamw`, and the result is still the same, which means fp64 is used internally even though the current software layer doesn't seem to support it.
Minimal example:
```
import math
import torch
# change this around to observe the failure
device = "xpu"
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
model = torch.nn.Sequential(
torch.nn.Linear(3, 1),
torch.nn.Flatten(0, 1)
)
xx = xx.float().to(device)
y = y.float().to(device)
model = model.to(device)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-3
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate, fused=True)
for t in range(2000):
y_pred = model(xx)
# Compute and print loss.
loss = loss_fn(y_pred, y)
if t % 100 == 99:
print(t, loss.item())
optimizer.zero_grad(set_to_none=True)
loss.backward()
optimizer.step()
linear_layer = model[0]
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.9 | packaged by conda-forge | (main, Feb 14 2025, 08:00:06) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7900X 12-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 78%
CPU max MHz: 5733.0000
CPU min MHz: 400.0000
BogoMIPS: 9381.97
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] pytorch-triton-xpu==3.3.0
[pip3] torch==2.7.0+xpu
[pip3] torchaudio==2.7.0+xpu
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.22.0+xpu
[conda] numpy 2.1.3 py312h58c1407_0 conda-forge
[conda] pytorch-triton-xpu 3.3.0 pypi_0 pypi
[conda] torch 2.7.0+xpu pypi_0 pypi
[conda] torchaudio 2.7.0+xpu pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchvision 0.22.0+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,964,325,535
|
[Windows][inductor] fix blank space break windows file path
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Fixes #149310
From the original error message:
```cmd
Command:
cl /I C:/Program Files/Python310/Include /I c:/code/.env/lib/site-packages/torch/include /I c:/code/.env/lib/site-packages/torch/include/torch/csrc/api/include /I c:/code/.env/lib/site-packages/torch/include/TH /I c:/code/.env/lib/site-packages/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp /LD /FeC:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.pyd /link /LIBPATH:c:/code/.env/Scripts/libs /LIBPATH:c:/code/.env/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34809 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
cl : Command line warning D9024 : unrecognized source file type 'Files/Python310/Include', object file assumed
coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp
C:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp(21): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory
```
Python is installed under the `C:/Program Files/Python310` path, and the blank space breaks the file path.
Solution:
Add quotes around Windows file paths; after that:
```cmd
cl /I "C:/Users/Xuhan/.conda/envs/new_build/Include" /I "C:/Users/Xuhan/.conda/envs/new_build/lib/site-packages/torch/include" /I "C:/Users/Xuhan/.conda/envs/new_build/lib/site-packages/torch/include/torch/csrc/api/include" /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /D CPU_CAPABILITY_AVX512 /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/Xuhan/AppData/Local/Temp/tmp1wsj0m8r/za/czarp3ly5c22ge3hydvnzvad4cjimyr3hkwvofodxqffgil7frfd.cpp /arch:AVX512 /FeC:/Users/Xuhan/AppData/Local/Temp/tmp1wsj0m8r/za/czarp3ly5c22ge3hydvnzvad4cjimyr3hkwvofodxqffgil7frfd.pyd /LD /link /LIBPATH:"C:/Users/Xuhan/.conda/envs/new_build/libs" /LIBPATH:"C:/Users/Xuhan/.conda/envs/new_build/lib/site-packages/torch/lib" "torch.lib" "torch_cpu.lib" "torch_python.lib" "sleef.lib"
```
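A minimal sketch of the idea behind the fix (the helper below is illustrative, not the actual patch):
```python
def quote_windows_path(path: str) -> str:
    # Wrap paths containing spaces (e.g. "C:/Program Files/Python310/Include")
    # in double quotes so cl.exe does not split them into separate arguments.
    if " " in path and not (path.startswith('"') and path.endswith('"')):
        return f'"{path}"'
    return path

include_args = " ".join(
    f"/I {quote_windows_path(p)}"
    for p in [
        "C:/Program Files/Python310/Include",
        "c:/code/.env/lib/site-packages/torch/include",
    ]
)
```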
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,321,011
|
[inductor] Fix inductor windows linker error
|
pytorchbot
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150256
Fixes #149889
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,320,594
|
[ONNX] [Dynamo] std.dim needs implementation
|
MilesV64
|
open
|
[
"module: onnx",
"triaged",
"OSS contribution wanted"
] | 4
|
NONE
|
### 🐛 Describe the bug
No decomposition for torch.std.dim
```
<class 'torch.onnx._internal.exporter._errors.DispatchError'>: No ONNX function found for <OpOverload(op='prims.broadcast_in_dim', overload='default')>. Failure message: No decompositions registered for the real-valued input
⬆️
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %broadcast_in_dim : [num_users=1] = call_function[target=torch.ops.prims.broadcast_in_dim.default](args = (%var, [3, 1, 1], [0]), kwargs = {}). See the stack trace for more information.
```
Full reproduction code:
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x: torch.Tensor) -> torch.Tensor:
return x.std((1, 2), keepdim=True)
m = Model()
input = torch.randn((3, 4, 5), device='cpu')
args = (input,)
ep = torch.onnx.export(
m,
args,
dynamo=True,
report=True
)
print(ep)
```
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 | packaged by conda-forge | (main, Feb 14 2025, 07:56:32) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.3.0.dev20250401
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[conda] Could not collect
| true
|
2,964,296,143
|
caffe2: Fix lint errors in FlashAttentionKernel
|
EricGriffith
|
open
|
[
"module: cpu",
"fb-exported",
"release notes: quantization"
] | 10
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72218753
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,964,294,678
|
DISABLED test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39778937526).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 318, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4094, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,288,657
|
caffe2: Fix lint errors in native/CPUFallback.cpp
|
EricGriffith
|
open
|
[
"module: cpu",
"fb-exported",
"release notes: quantization"
] | 9
|
CONTRIBUTOR
|
Summary: See title
Test Plan: Sandcastle
Differential Revision: D72218921
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,964,261,773
|
cuda.h not found error when missing local CTK
|
tinglvv
|
open
|
[
"module: cuda",
"oncall: releng",
"triaged",
"module: aotinductor"
] | 7
|
COLLABORATOR
|
### 🐛 Describe the bug
Seeing the error below with AOTInductor when testing the 2.6.0 RC wheel in a plain Docker container without a local CTK.
Opening this issue to track the fix. When this error occurs, we should give a clear error message pointing out that the missing file comes from the CUDA Toolkit installation (a rough sketch of such a check follows the log below).
Or should we add the file when it is missing?
Opening the issue for later follow-up (non-urgent).
cc @ptrblck @msaroufim @eqy @desertfire @chenyang78 @penguinwu @yushangdi @benjaminglass1 @chauhang @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @malfet @atalman @nWEIdia
```
root@47634bb57d4b:/opt/pytorch/pytorch# python test/inductor/test_aot_inductor_package.py TestAOTInductorPackage_cuda.test_linear
In file included from /usr/local/lib/python3.12/dist-packages/torch/include/torch/csrc/inductor/aoti_runtime/model.h:17,
from /tmp/tmpf2szg6ab/cnz4ulmnfd7mraahh23lgc2lmejzgx67etxgjpcfh3h7yn6pu5h5/cmt6f253zl4hyovrmrmpja5p6g7cjd2i4ookgutf7baavulnnrc6.cpp:4:
/usr/local/lib/python3.12/dist-packages/torch/include/torch/csrc/inductor/aoti_runtime/device_utils.h:14:10: fatal error: cuda.h: No such file or directory
14 | #include <cuda.h>
| ^~~~~~~~
compilation terminated.
```
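A hedged sketch of the kind of pre-flight check suggested above; the function and the environment-variable handling are illustrative assumptions, not existing PyTorch behavior:
```python
import os

def check_cuda_header_available() -> None:
    # Hypothetical check: fail early with an actionable message instead of the
    # compiler's bare "fatal error: cuda.h: No such file or directory".
    cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH") or "/usr/local/cuda"
    if not os.path.isfile(os.path.join(cuda_home, "include", "cuda.h")):
        raise RuntimeError(
            "cuda.h was not found. AOTInductor compiles its generated C++ against "
            "the CUDA Toolkit headers, so a local CUDA Toolkit installation is "
            "required (install the CTK or point CUDA_HOME at an existing install)."
        )
```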
### Versions
```
root@47634bb57d4b:/opt/pytorch/pytorch# python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6444Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 8
CPU(s) scaling MHz: 100%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (16 instances)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.11.0
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] onnx==1.17.0
[pip3] optree==0.14.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8b.nvinternal
[pip3] torch==2.6.0
[pip3] torch-geometric==2.6.1
[pip3] torch_tensorrt==2.7.0a0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
2,964,255,377
|
[Inductor] Reland Merge Triton ScaledMM as epilogue to MM template #150045
|
PaulZhang12
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Merges https://github.com/pytorch/pytorch/pull/150438 and https://github.com/pytorch/pytorch/pull/150045. https://github.com/pytorch/pytorch/pull/150045 was already landed, but did not include a change needed for it to land internally.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,215,993
|
[dynamo] add dynamo disable reasons to codebase
|
williamwen42
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150440
* #150341
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,964,208,537
|
Update PyTorchStreamReader API to take cpu allocator override
|
huxintong
|
closed
|
[
"caffe2",
"triaged",
"open source",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary: Add allocator param in getRecord
Test Plan:
newly added UT
```
buck test caffe2/caffe2/serialize:inline_container_test
```
Differential Revision: D72252585
| true
|
2,964,193,676
|
Fix scaled_mm template migration missing endif block
|
PaulZhang12
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150438
* #150437
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,193,163
|
Consolidate mm_scaled into mm template
|
PaulZhang12
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150438
* __->__ #150437
| true
|
2,964,173,051
|
[Inductor] Hide reinplace_fsdp_all_gather pass behind skip_fsdp_hooks config
|
yf225
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
The `reinplace_fsdp_all_gather` pass is currently only for Traceable FSDP2 and doesn't work together with SimpleFSDP. We should hide the pass behind the `skip_fsdp_hooks` config, which makes it apply only to Traceable FSDP2.
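For context, a hedged sketch of the gating (this assumes the existing `torch._dynamo.config.skip_fsdp_hooks` flag; the helper is illustrative, not the actual Inductor code):
```python
import torch._dynamo.config as dynamo_config

# Traceable FSDP2 is the mode in which FSDP hooks are traced,
# i.e. skip_fsdp_hooks is False.
dynamo_config.skip_fsdp_hooks = False

def should_run_reinplace_fsdp_all_gather() -> bool:
    # Hypothetical gate: only run the pass when FSDP hooks are being traced.
    return not dynamo_config.skip_fsdp_hooks
```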
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150436
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,964,147,675
|
[dynamo] Improve trace rules reasoning
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: compile ux"
] | 0
|
MEMBER
|
The reasoning infra for trace_rules.py is outdated and not very user friendly.
For example, we get graph break logs such as
```
Attempted to inline function marked as skipped
Explanation: Dynamo developers have intentionally marked that the function `skip` should not be traced.
Hint: Avoid calling the function `skip`.
Hint: Remove the function `case.py` from torch/_dynamo/trace_rules.py. More graph breaks may occur as a result of attempting to trace into the function.
Hint: Please file an issue to PyTorch.
Developer debug context: qualname: skip, name: skip, filename: `case.py`, skip reason: skipped according trace_rules.lookup SKIP_DIRS
```
(What is SKIP_DIRS? And the hint to modify `trace_rules.py` isn't precise.)
And we have a lot of cases where the trace_rules reason is missing:
```
Attempted to call function marked as skipped
Explanation: Dynamo developers have intentionally marked that the function `disable` in file `_dynamo/decorators.py` should not be traced.
Hint: Avoid calling the function `disable`.
Developer debug context: module: torch._dynamo.decorators, qualname: disable, skip reason: <missing reason>
```
Internal example of lack of clarity: https://fb.workplace.com/groups/1075192433118967/permalink/1638325513472320/
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,964,103,934
|
Faster way to test self hosted GPU runner
|
zhe-thoughts
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"ciflow/periodic"
] | 2
|
NONE
|
This is for experimenting with hosting GitHub runners on NVIDIA-managed hardware.
| true
|