| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch
| 163,519
|
For mixed-precision training, does FSDP2 also need `amp.grad_scaler.GradScaler`, or does FSDP2 already handle this?
|
In mixed-precision training with DDP, `amp.grad_scaler.GradScaler` is needed to dynamically scale the loss. My question is: does FSDP2 also need `amp.grad_scaler.GradScaler`, or does FSDP2 already handle this?
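For reference, a minimal sketch of the DDP-style fp16 loop I'm referring to (plain `torch.amp` usage with a placeholder model, not FSDP2 code):
```python
import torch

# Placeholder model/optimizer; in real DDP training the model would be wrapped in DistributedDataParallel.
model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.amp.GradScaler("cuda")

for _ in range(3):
    x = torch.randn(4, 10, device="cuda")
    with torch.autocast("cuda", dtype=torch.float16):
        loss = model(x).sum()
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscales gradients, skips the step on inf/nan
    scaler.update()
    optimizer.zero_grad()
```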
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci
|
https://github.com/pytorch/pytorch/issues/163519
|
closed
|
[
"oncall: distributed"
] | 2025-09-22T15:01:43Z
| 2025-09-29T08:19:23Z
| 11
|
EquationWalker
|
huggingface/lerobot
| 1,995
|
Questions about SmolVLA design
|
Hi! I am looking into the details of the SmolVLA implementation, and I have some questions.
I wonder whether the following points are necessary or beneficial for performance.
1.
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/smolvlm_with_expert.py#L354C63-L354C74
In the cross-attention layer, the VLM keys and values are linear-projected before the attention interface.
They already have a compatible shape without the projection, and ROPE is not applied after the projection (although ROPE is applied in the VLM part, the interaction between the ROPE'd queries and the projected keys might no longer behave as a rotation?).
2.
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/modeling_smolvla.py#L566
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/modeling_smolvla.py#L592C1-L593C1
Image and text embeddings are multiplied by `sqrt(dim)` before they are fed to the LLM and expert layers (a tiny sketch of what I mean is at the end of this list).
I could not find the same multiplication in the SmolVLM modeling code (https://github.com/huggingface/transformers/blob/main/src/transformers/models/smolvlm/modeling_smolvlm.py).
I guess that this multiplication might change the distribution of the image-text features.
3.
SmolVLM and SmolVLA are trained with different ROPE max frequencies.
It seems like SmolVLM is trained with 100_000, and SmolVLA is trained with 10_000.
4.
It seems like SmolVLM uses a causal mask for all LLM layers (no bidirectional attention for images), while SmolVLA uses a mask similar to PI0 (paligemma).
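To be concrete about point 2, a tiny sketch of the operation I mean (the shapes here are made up, not the actual SmolVLA dimensions):
```python
import math
import torch

# Illustrative only: multiply the prefix (image/text) embeddings by sqrt(hidden_dim)
# before feeding them to the LLM / expert layers, as described in point 2.
embs = torch.randn(1, 64, 960)            # (batch, seq, hidden) -- assumed shape
embs = embs * math.sqrt(embs.shape[-1])   # roughly x31 for hidden=960
```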
|
https://github.com/huggingface/lerobot/issues/1995
|
open
|
[
"question",
"policies"
] | 2025-09-22T11:53:01Z
| 2025-10-17T01:58:12Z
| null |
gliese581gg
|
huggingface/lerobot
| 1,994
|
How to improve success rate and generalization
|
Hi, I have a question regarding the success rate: if I ensure the object appears in the frame of the wrist camera at the beginning of dataset collection/inference, will this lead to a higher success rate for a pick-and-place task?
In my initial attempt, the object appeared in the side-view camera but did not appear in the wrist camera at the initial point/beginning of dataset collection/inference.
**Should I ensure the object appears in both the side-view camera and the wrist camera at the starting point of the program?**
|
https://github.com/huggingface/lerobot/issues/1994
|
closed
|
[
"question",
"policies"
] | 2025-09-22T09:55:53Z
| 2025-09-23T09:26:16Z
| null |
Liu9999ai
|
pytorch/ao
| 3,040
|
Int4WeightOnly quantized model slower than the default model (x86 machine, A100)
|
My Int4WeightOnly quantized model is slower and less accurate at OCR than the default model. Why is this happening?
Here is some info to help you guys
Model - Qwen2-VL-7B-Instruct fine-tuned and saved in 16bit using unsloth
GPU -
<img width="679" height="265" alt="Image" src="https://github.com/user-attachments/assets/8bcceaa8-569f-4dfa-bb67-267f5e82634b" />
code -
```
from unsloth import FastVisionModel
import torch
from PIL import Image
import os
from jiwer import wer, cer
import glob
from tqdm import tqdm
import pandas as pd
# Model paths
adapter_path = "qwen2-ocr"
ckpt_no = adapter_path.split("/")[-1]
# Load both models
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers import AutoConfig,TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig
config = AutoConfig.from_pretrained(adapter_path)
quant_config = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForVision2Seq.from_pretrained(
    adapter_path,
    config=config,
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)
processor = AutoProcessor.from_pretrained(adapter_path)
# Instructions
instruction_ft = "Perform OCR"
# Directory paths
image_dir = "images"
golden_text_dir = "texts"
output_dir = f"qwen-torchao-int4weight"
# Create output directory if it doesn't exist
os.makedirs(output_dir, exist_ok=True)
# Get all image files
image_files = glob.glob(os.path.join(image_dir, "*.jpg"))
# Initialize results storage
results = []
# Process images in batches
BATCH_SIZE = 32 # Adjust based on your GPU memory
total_images = len(image_files)
pbar = tqdm(total=total_images, desc="Processing images")
for i in range(0, len(image_files), BATCH_SIZE):
    batch_size = min(BATCH_SIZE, len(image_files) - i)  # Handle the last batch correctly
    batch_image_paths = image_files[i:i+BATCH_SIZE]
    batch_images = []
    batch_image_names = []
    batch_actuals = []
    # Prepare batch data
    for image_path in batch_image_paths:
        image_name = os.path.basename(image_path)
        batch_image_names.append(image_name)
        try:
            # Load image
            image = Image.open(image_path)
            batch_images.append(image)
            # Load ground truth text
            text_file = os.path.join(golden_text_dir, image_name.replace(".jpg", ".txt"))
            try:
                with open(text_file, 'r', encoding='utf-8') as txt_file:
                    actual = txt_file.readlines()[0].strip()
                batch_actuals.append(actual)
            except FileNotFoundError:
                print(f"Warning: Ground truth file not found for {image_name}, skipping evaluation")
                batch_actuals.append(None)
        except Exception as e:
            print(f"Error loading {image_name}: {str(e)}")
            batch_images.append(None)
            batch_actuals.append(None)
    # Filter out None values
    valid_indices = [idx for idx, img in enumerate(batch_images) if img is not None]
    if not valid_indices:
        continue
    valid_images = [batch_images[idx] for idx in valid_indices]
    valid_image_names = [batch_image_names[idx] for idx in valid_indices]
    valid_actuals = [batch_actuals[idx] for idx in valid_indices]
    try:
        # Prepare batch messages
        batch_messages = []
        for image in valid_images:
            messages = [
                {"role": "user", "content": [
                    {"type": "image"},
                    {"type": "text", "text": instruction_ft}
                ]}
            ]
            batch_messages.append(messages)
        # Process batch using Hugging Face's batched inference approach
        texts = [
            processor.apply_chat_template(msg, add_generation_prompt=True)
            for msg in batch_messages
        ]
        # Create batch inputs
        batch_inputs = processor(
            valid_images,
            texts,
            add_special_tokens=False,
            padding=True,
            return_tensors="pt",
        ).to("cuda")
        # Generate outputs in batch
        batch_outputs = quantized_model.generate(
            **batch_inputs,
            use_cache=True,
            temperature=0.5,
            min_p=0.1,
            max_new_tokens=1024
        )
        # Process batch outputs
        input_lengths = [batch_inputs["input_ids"][i].shape[0] for i in range(len(valid_images))]
        for idx, (image_name, actual, input_length) in enumerate(zip(valid_image_names, valid_actuals, input_lengths)):
            generated_response = processor.tokenizer.decode(batch_outputs[idx][input_length:], skip_special_tokens=True)
            # Calculate metrics if ground truth is available
            if actual is not None:
                ft_word_error = wer(generated_response
|
https://github.com/pytorch/ao/issues/3040
|
open
|
[
"quantize_",
"triaged"
] | 2025-09-22T06:53:54Z
| 2025-10-01T11:10:42Z
| 5
|
Rakshith12-pixel
|
pytorch/torchtitan
| 1,733
|
Gradient accumulation broken in PP
|
### Bug description
Using gradient accumulation is incompatible with the `PipelineSchedule(..., scale_grads=True)` option, which defaults to True.
When this option is set, all gradients are scaled by the number of micro-batches at each step. This works fine for a single gradient accumulation step, but with multiple steps it rescales the total accumulated gradient by this factor at every step, not just once at the end of gradient accumulation.
The result is that the accumulated gradient is an exponential moving average rather than a sum. Overall, the resulting gradients are much smaller than they should be, and using gradient accumulation with PP is not equivalent to using it without PP -- the loss curves diverge substantially and the gradient norms are far off.
A secondary consequence is that at every step it divides the gradients by n_microbatches, which is computationally expensive when applied to a large model.
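A toy numeric sketch of the effect (assumed `scale_grads` semantics, not torchtitan code):
```python
n_microbatches = 4
grads = [1.0, 1.0, 1.0]    # gradient contribution of each of 3 accumulation steps
acc = 0.0
for g in grads:
    acc += g               # gradient accumulation (no zero_grad between steps)
    acc /= n_microbatches  # scale_grads=True rescales the *accumulated* gradient every step
print(acc)                 # ~0.33, instead of 3.0 / 4 = 0.75 if scaled only once at the end
```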
I identified the same issue in my own pipeline trainer implementation a week or two ago. When checking how Torch Titan addressed the issue, I discovered that Titan probably has the same bug.
I had the time to confirm the presence of the issue today and have submitted https://github.com/pytorch/torchtitan/pull/1732 to resolve the issue.
### Versions
torch 2.10.0.dev20250915+cu126
For anyone who may be interested, I have added support for Torch Titan to my configuration framework, which is what I used for reproducing the issue.
https://github.com/jdinalt/forgather/tree/main/examples/torchtitan
|
https://github.com/pytorch/torchtitan/issues/1733
|
closed
|
[
"high priority",
"triage review"
] | 2025-09-22T05:55:07Z
| 2025-09-24T20:13:06Z
| 8
|
jdinalt
|
huggingface/smol-course
| 248
|
[QUESTION] About applying chat template for base model via `clone_chat_template` from trl
|
In the course [Supervised Fine-Tuning](https://huggingface.co/learn/smol-course/unit1/3), the author uses the base model `HuggingFaceTB/SmolLM3-3B-Base`, but I chose `HuggingFaceTB/SmolLM2-135M` because it is lighter. I found that the base model `SmolLM2-135M` does not have its own chat template, but it already has special tokens. However, the special tokens may be incorrect; for example, bos_token and eos_token share the same token `<|endoftext|>`.
<img width="654" height="305" alt="Image" src="https://github.com/user-attachments/assets/87a4cea8-c372-4540-b617-9c41825f5a7e" />
I also referred to the course [LLM Course, Fine-Tuning with SFTTrainer](https://huggingface.co/learn/llm-course/en/chapter11/3?fw=pt#implementation-with-trl), where the author uses `setup_chat_format` to create a chat template for a base model's tokenizer that does not have its own chat template.
However, [`setup_chat_format`](https://github.com/huggingface/trl/blob/86f74b486fda475e5530a451d06b835361d959ac/trl/models/utils.py#L87) only supports `chatml` format and will be deprecated in trl version 0.26.0. That is why I use [`clone_chat_template`](https://github.com/huggingface/trl/blob/86f74b486fda475e5530a451d06b835361d959ac/trl/models/utils.py#L165) instead.
But another issue appears here: while `clone_chat_template` only overwrites the eos token from the source tokenizer onto the target tokenizer, `setup_chat_format` overwrites all of bos, eos, and pad. After I cloned `Llama-3.2-Instruct`'s chat template, only eos changed to `<|eot_id|>`:
`model, tokenizer, added_tokens = clone_chat_template(model=model, tokenizer=tokenizer, source_tokenizer_path='meta-llama/Llama-3.2-1B-Instruct')`
<img width="633" height="186" alt="Image" src="https://github.com/user-attachments/assets/4428af4d-b8d8-4974-893f-af4033d516ed" />
Question:
1. Why does the base model already have special tokens even though its tokenizer does not have a chat template?
2. `clone_chat_template` does not overwrite all special tokens (bos, eos, pad, ...), so does this have any impact on SFT training, and what is the solution for this?
I am new to SFT and would appreciate any support. Thank you.
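For context, the direction I'm currently considering looks like this (just a sketch built on standard tokenizer/model attributes; the `clone_chat_template` import path is an assumption on my side):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import clone_chat_template  # import path assumed; I call it as in the snippet above

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")
model, tokenizer, added_tokens = clone_chat_template(
    model=model, tokenizer=tokenizer, source_tokenizer_path="meta-llama/Llama-3.2-1B-Instruct"
)

# clone_chat_template only updates eos, so align pad manually to keep SFT padding well-defined.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```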
|
https://github.com/huggingface/smol-course/issues/248
|
open
|
[
"question"
] | 2025-09-22T03:03:56Z
| 2025-09-22T19:13:17Z
| null |
binhere
|
huggingface/transformers.js
| 1,419
|
Why is `token-classification` with T5 not available? (`T5ForTokenClassification`)
|
### Question
In Python `transformers` I can do:
```python
model = AutoModelForTokenClassification.from_pretrained("google-t5/t5-base")
```
and use it with `Trainer` to train it (quite successfully).
Or
```python
classifier = pipeline("token-classification", model="google-t5/t5-base")
```
and use it for token classification.
Instead, if I try to use it in `transformers.js` (web, 3.7.3):
```js
classifier = await pipeline('token-classification', "google-t5/t5-base")
```
I receive this error:
```
Unsupported model type: t5
```
How come? Or is there another way to use T5 for token classification in JavaScript?
|
https://github.com/huggingface/transformers.js/issues/1419
|
open
|
[
"question"
] | 2025-09-21T23:30:22Z
| 2025-09-24T21:42:56Z
| null |
debevv
|
huggingface/transformers.js
| 1,418
|
EmbeddingGemma usage
|
### Question
I'm new to transformers.js.
I want to use EmbeddingGemma in my web app, and I've looked at the example of its usage at this link:
https://huggingface.co/blog/embeddinggemma#transformersjs
At the same time I've seen different code, using pipeline, for embeddings:
https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesfeatureextractionpipeline
I'm trying to create a custom pipeline, and in TypeScript I'm building the pipeline like this:
```ts
class EmbeddingPipeline {
private static instance: Promise<FeatureExtractionPipeline> | null = null;
private static model = 'onnx-community/embeddinggemma-300m-ONNX';
private static readonly task = 'feature-extraction';
// Detected device (defaults to wasm)
private static device: 'webgpu' | 'wasm' = 'wasm';
private static deviceInitPromise: Promise<void> | null = null;
private static async detectDeviceOnce(): Promise<void> {
if (this.deviceInitPromise) return this.deviceInitPromise;
this.deviceInitPromise = (async () => {
if (typeof navigator !== 'undefined' && 'gpu' in navigator) {
try {
const adapter = await (navigator as any).gpu.requestAdapter();
if (adapter) {
this.device = 'webgpu';
return;
}
} catch {
// ignore, fallback to wasm
}
}
this.device = 'wasm';
})();
return this.deviceInitPromise;
}
static getSelectedDevice(): 'webgpu' | 'wasm' {
return this.device;
}
static async getInstance(progress_callback?: ProgressCallback): Promise<FeatureExtractionPipeline> {
if (this.instance) return this.instance;
// Detect the device only once
await this.detectDeviceOnce();
const build = async (device: 'webgpu' | 'wasm') =>
pipeline(
this.task,
this.model,
{
progress_callback,
dtype: 'q8',
device
}
) as Promise<FeatureExtractionPipeline>;
this.instance = (async (): Promise<FeatureExtractionPipeline> => {
try {
return await build(this.device);
} catch (e) {
if (this.device === 'webgpu') {
// Automatically fall back to wasm
this.device = 'wasm';
return await build('wasm');
}
throw e;
}
})();
return this.instance;
}
}
const getEmbeddingDevice = () => EmbeddingPipeline.getSelectedDevice();
const embedding_prefixes_per_task: Record<EmbeddingTask, string> = {
'query': "task: search result | query: ",
'document': "title: none | text: ",
};
export type EmbeddingTask = 'query' | 'document';
export const getEmbedding = async (task: EmbeddingTask, text: string): Promise<Float32Array> => {
const extractor = await EmbeddingPipeline.getInstance();
const prefix = embedding_prefixes_per_task[task];
const result = await extractor(`${prefix}${text}`, { pooling: 'mean', normalize: true });
return result.data as Float32Array;
};
```
I'm using the same sentences (with prefixes) used by your example (I'm running both my class and your code to check whether they match), and the embedding results are different.
What am I doing wrong? Do you have a reference to proper docs that explain how this works?
Thanks
|
https://github.com/huggingface/transformers.js/issues/1418
|
open
|
[
"question",
"v4"
] | 2025-09-21T10:26:22Z
| 2025-11-08T15:33:16Z
| null |
MithrilMan
|
huggingface/diffusers
| 12,359
|
Chroma pipeline documentation bug regarding the `guidance_scale` parameter
|
### Describe the bug
From my understanding, Chroma is a retrained and dedistilled version of the Flux architecture, so it uses true CFG, unlike Flux. I can indeed confirm that this is true by tracing through the source code.
However, currently the documentation for the `guidance_scale` parameter in the `ChromaPipeline.__call__()` method mentions otherwise, presumably because it was copied over from the `FluxPipeline` documentation.
### Reproduction
The current documentation for the `guidance_scale` parameter in the `ChromaPipeline.__call__()` method:
```python
'''
guidance_scale (float, optional, defaults to 3.5) — Embedded guiddance scale is enabled by setting guidance_scale > 1. Higher guidance_scale encourages a model to generate images more aligned with prompt at the expense of lower image quality.
Guidance-distilled models approximates true classifer-free guidance for guidance_scale > 1. Refer to the [paper](https://huggingface.co/papers/2210.03142) to learn more.
'''
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.36.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.4
- Transformers version: 4.55.0
- Accelerate version: 1.10.0
- PEFT version: 0.17.0
- Bitsandbytes version: 0.47.0
- Safetensors version: 0.6.2
- xFormers version: 0.0.31.post1
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@stevhliu
|
https://github.com/huggingface/diffusers/issues/12359
|
closed
|
[
"bug"
] | 2025-09-21T08:34:15Z
| 2025-09-22T20:04:15Z
| 1
|
mingyi456
|
pytorch/pytorch
| 163,435
|
[Fuzzer][Eager/Compile Divergence] a var subtracted from itself should equal 0?
|
### 🐛 Describe the bug
```
import torch
import sys
torch._dynamo.config.capture_scalar_outputs = True
torch._dynamo.config.capture_dynamic_output_shape_ops = True
torch._inductor.config.emulate_precision_casts = True
def foo(arg0, arg1, arg2, arg3):
    t0 = arg0  # size=(), stride=(), dtype=float16, device=cuda
    t1 = torch.tanh(t0)  # size=(), stride=(), dtype=float16, device=cuda
    t2 = arg1  # size=(), stride=(), dtype=float16, device=cuda
    t3 = arg2  # size=(), stride=(), dtype=float16, device=cuda
    t4 = arg3  # size=(), stride=(), dtype=float16, device=cuda
    t5 = t2 + t0 + t3 + t0 + t4  # size=(), stride=(), dtype=float16, device=cuda
    t6 = t1 * t1 * t5  # size=(), stride=(), dtype=float16, device=cuda
    t7 = (t6) - t6  # size=(), stride=(), dtype=float16, device=cuda
    output = t7  # output tensor
    return output

arg0 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True)  # size=(), stride=(), dtype=float16, device=cuda
arg1 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True)  # size=(), stride=(), dtype=float16, device=cuda
arg2 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True)  # size=(), stride=(), dtype=float16, device=cuda
arg3 = torch.rand([], dtype=torch.float16, device='cuda', requires_grad=True)  # size=(), stride=(), dtype=float16, device=cuda

if __name__ == '__main__':
    out_eager = foo(arg0, arg1, arg2, arg3)
    out_eager.sum().backward()
    print('Eager Success! ✅')
    compiled_foo = torch.compile(foo, fullgraph=True, dynamic=True)
    out_compiled = compiled_foo(arg0, arg1, arg2, arg3)
    out_compiled.sum().backward()
    print('Compile Success! ✅')
    # Compare outputs (forward)
    out_eager_sum = out_eager.sum()
    out_compiled_sum = out_compiled.sum()
    diff = (out_eager_sum - out_compiled_sum).abs().item()
    rel_diff = diff / (out_eager_sum.abs().item() + 1e-12) * 100
    print(f'Relative diff (sum): {rel_diff:.6f}%')
    if rel_diff > 5:
        print(f'❌ Forward output sums differ significantly (relative)!')
        print('out_eager_sum:', out_eager_sum.item())
        print('out_compiled_sum:', out_compiled_sum.item())
        print('Absolute diff:', diff)
        print('Relative diff (%):', rel_diff)
        sys.exit(1)
```
```
(/home/bobren/local/a/pytorch-env) [22:16] devgpu035:/home/bobren/local/a/pytorch/torchfuzz python /tmp/torchfuzz/fuzz_d9fffb614acbd1dd.py
Eager Success! ✅
Compile Success! ✅
Relative diff (sum): 9441375732.421875%
❌ Forward output sums differ significantly (relative)!
out_eager_sum: 0.0
out_compiled_sum: -9.441375732421875e-05
Absolute diff: 9.441375732421875e-05
Relative diff (%): 9441375732.421875
```
### Versions
N/A
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben
|
https://github.com/pytorch/pytorch/issues/163435
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] | 2025-09-21T05:19:32Z
| 2025-09-24T17:43:02Z
| 3
|
bobrenjc93
|
pytorch/tutorials
| 3,581
|
Feedback about Parametrizations Tutorial
|
There is the following issue on this page: https://docs.pytorch.org/tutorials/intermediate/parametrizations.html
Parametrization is not a topic known to everyone. You could add some context to the tutorial about the definition of parametrization: why was the need for it born? What does it solve? Then go into the examples. Adding references for further reading could be very helpful, especially for beginners.
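For example, the kind of mechanism the tutorial jumps into looks roughly like this (a minimal sketch using the `torch.nn.utils.parametrize` API):
```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Symmetric(nn.Module):
    def forward(self, X):
        # Reparametrize a square weight so that it is always symmetric.
        return X.triu() + X.triu(1).transpose(-1, -2)

layer = nn.Linear(3, 3)
parametrize.register_parametrization(layer, "weight", Symmetric())
print(layer.weight)  # recomputed from the underlying parameter on every access
```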
|
https://github.com/pytorch/tutorials/issues/3581
|
open
|
[] | 2025-09-21T00:21:09Z
| 2025-09-21T00:21:09Z
| 0
|
pentanol2
|
pytorch/pytorch
| 163,359
|
RFC: Support CUDA Stream Protocol
|
### 🚀 The feature, motivation and pitch
Hello! I am the CUDA Python tech lead and I'm filing this RFC to improve the interoperability between Python GPU libraries.
`cuda.core` is an official CUDA Python project: https://nvidia.github.io/cuda-python/cuda-core/latest/index.html. It offers a pythonic, self-contained, lightweight, and official interface over the CUDA programming model. For new Python projects, we encourage them to just use `cuda.core.<experimental>.Stream`.
For existing Python projects such as PyTorch, transitioning to `cuda.core` may or may not be immediately feasible. As a result, we encourage projects that already expose a CUDA stream to Python to follow the CUDA Stream protocol:
https://nvidia.github.io/cuda-python/cuda-core/latest/interoperability.html#cuda-stream-protocol
and add a `__cuda_stream__` method to the stream class, so as to improve interoperability without introducing extra `ExternalStream`-like types.
Here is a PyTorch example of how it'd be used interoperably with `cuda.core`, courtesy of @msaroufim 🙂:
https://github.com/NVIDIA/cuda-python/blob/c4f4ffe83d246eafb6adf1574e5a7c86bbcef944/cuda_core/examples/pytorch_example.py
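For illustration, adopting the protocol on an existing stream wrapper could look roughly like this (a sketch assuming the protocol returns a `(version, stream_address)` tuple; the linked spec is authoritative):
```python
class MyStream:
    """Sketch of a library-owned stream wrapper adopting the __cuda_stream__ protocol."""

    def __init__(self, cuda_stream_address: int):
        self._address = cuda_stream_address  # raw cudaStream_t handle as an integer

    def __cuda_stream__(self):
        # (protocol version, stream address) -- assumed shape per the linked spec
        return (0, self._address)
```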
cc @ptrblck @msaroufim @eqy @jerryzh168 @kkraus14 @pbielak @aterrel @rparolin for vis
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/163359
|
closed
|
[
"module: cuda",
"triaged",
"topic: new features"
] | 2025-09-19T19:23:41Z
| 2025-09-25T19:45:40Z
| 2
|
leofang
|
huggingface/trl
| 4,110
|
How does `trl` know which part of the dataset is the prompt and which is the completion in the following situation?
|
### Reproduction
```python
import torch
import trl as r
import peft as p
import datasets as d
import accelerate as a
import transformers as t
allowed_entities = ['AGE', 'EYECOLOR', 'GENDER', 'HEIGHT', 'WEIGHT', 'SEX']
entity_mapping = {
"ACCOUNTNAME": "account_name",
"ACCOUNTNUMBER": "account_number",
"AGE": "age",
"AMOUNT": "amount",
"BIC": "bic",
"BITCOINADDRESS": "bitcoin_address",
"BUILDINGNUMBER": "building_number",
"CITY": "city",
"COMPANYNAME": "company_name",
"COUNTY": "county",
"CREDITCARDCVV": "credit_card_cvv",
"CREDITCARDISSUER": "credit_card_issuer",
"CREDITCARDNUMBER": "credit_card_number",
"CURRENCY": "currency",
"CURRENCYCODE": "currency_code",
"CURRENCYNAME": "currency_name",
"CURRENCYSYMBOL": "currency_symbol",
"DATE": "date",
"DOB": "dob",
"EMAIL": "email",
"ETHEREUMADDRESS": "ethereum_address",
"EYECOLOR": "eye_color",
"FIRSTNAME": "first_name",
"GENDER": "gender",
"HEIGHT": "height",
"IBAN": "iban",
"IP": "ip",
"IPV4": "ipv4",
"IPV6": "ipv6",
"JOBAREA": "job_area",
"JOBTITLE": "job_title",
"JOBTYPE": "job_type",
"LASTNAME": "last_name",
"LITECOINADDRESS": "litecoin_address",
"MAC": "mac",
"MASKEDNUMBER": "masked_number",
"MIDDLENAME": "middle_name",
"NEARBYGPSCOORDINATE": "nearby_gps_coordinate",
"ORDINALDIRECTION": "ordinal_direction",
"PASSWORD": "password",
"PHONEIMEI": "phone_imei",
"PHONENUMBER": "phone_number",
"PIN": "pin",
"PREFIX": "prefix",
"SECONDARYADDRESS": "secondary_address",
"SEX": "sex",
"SSN": "ssn",
"STATE": "state",
"STREET": "street",
"TIME": "time",
"URL": "url",
"USERAGENT": "user_agent",
"USERNAME": "username",
"VEHICLEVIN": "vehicle_vin",
"VEHICLEVRM": "vehicle_vrm",
"ZIPCODE": "zip_code"
}
def formatting_function(x):
    entities = []
    for entity in x['privacy_mask']:
        if entity['label'] not in allowed_entities:
            entities.append({'value': entity['value'], 'label': entity_mapping[entity['label']]})
    prompt = f"Extract all the personal information from the following text and classify it: {x['source_text']}"
    completion = str(entities)
    return {"text": f"### PROMPT\n{prompt}\n\n### COMPLETION\n{completion}"}

def main():
    model_name = "Qwen/Qwen3-0.6B"
    dataset_name = "ai4privacy/pii-masking-200k"
    quantization = False
    quantization_bits = "8"
    lora = True
    lora_rank = 8
    lora_alpha = 16
    lora_dropout = 0.05
    use_mixed_precision = True
    # Training parameters
    completion_only_loss = True
    output_dir = f"/scratch/bminesh-shah/phi-ner/{model_name.replace('/', '-')}_pii_finetuned_prompt_completion"
    learning_rate = 1e-4
    num_train_epochs = 10
    per_device_train_batch_size = 2
    gradient_accumulation_steps = 8
    accelerator = a.Accelerator()
    dataset = d.load_dataset(dataset_name)
    dataset = dataset.filter(lambda x: x['language'] == 'en')
    dataset = dataset.remove_columns(['target_text', 'span_labels', 'mbert_text_tokens', 'mbert_bio_labels', 'id', 'language', 'set'])
    dataset = dataset['train']
    dataset = dataset.train_test_split(test_size=0.2, seed=24, shuffle=True)
    print(dataset)
    if accelerator.is_main_process:
        dataset = dataset.map(formatting_function, remove_columns=['source_text', 'privacy_mask'])
        print(dataset)
        print(dataset['train'][0])
    tokenizer = t.AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"
    bnb_config = None
    if quantization and quantization_bits == "4":
        bnb_config = t.BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True)
    elif quantization and quantization_bits == "8":
        bnb_config = t.BitsAndBytesConfig(load_in_8bit=True)
    model = t.AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map={"": accelerator.process_index},
        dtype=torch.bfloat16 if use_mixed_precision else torch.float32,
        trust_remote_code=True
    )
    if quantization:
        model = p.prepare_model_for_kbit_training(model)
    model.config.use_cache = False
    model.config.pretraining_tp = 1
    model.config.pad_token_id = model.config.eos_token_id
    if lora:
        lora_config = p.LoraConfig(r=lora_rank, lora_alpha=lora_alpha, lora_dropout=lora_dropout, bias="none", task_type="CAUSAL_LM")
        model = p.get_peft_model(model, lora_config)
    model.train()
    sft_config = r.SFTConfig(
        learning_rate=learning_rate,
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=per_device_train_batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        output_dir=output_dir,
        eval_s
|
https://github.com/huggingface/trl/issues/4110
|
closed
|
[
"🐛 bug",
"📚 documentation"
] | 2025-09-19T17:42:26Z
| 2025-09-19T20:02:16Z
| null |
bminesh-shah
|
pytorch/pytorch
| 163,342
|
[CD] - Manywheel CUDA builds failing since Sept 18
|
### 🐛 Describe the bug
This hasn't been seen in a nightly yet, but I just rebased onto `viable/strict` and I'm getting this error in the `ciflow/binaries_wheel` flow; it's happening in other people's jobs too.
Broken Workflow - https://github.com/pytorch/pytorch/actions/workflows/generated-linux-binary-manywheel-nightly.yml
https://github.com/pytorch/pytorch/actions/runs/17856217452/job/50775774205
and
https://github.com/pytorch/pytorch/actions/runs/17855187279
```
In file included from /pytorch/c10/cuda/CUDAException.h:5,
from /pytorch/third_party/fbgemm/fbgemm_gpu/experimental/gen_ai/src/quantize/common/utils.cpp:12:
/pytorch/c10/cuda/CUDAMiscFunctions.h:6:10: fatal error: cuda_runtime.h: No such file or directory
6 | #include <cuda_runtime.h>
```
It looks like fbgemm was recently updated in https://github.com/pytorch/pytorch/pull/162590. @pragupta, can we check this error?
### Versions
2.10.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @atalman @ptrblck @eqy @jerryzh168
|
https://github.com/pytorch/pytorch/issues/163342
|
closed
|
[
"high priority",
"triage review",
"module: binaries",
"module: cuda",
"triaged",
"module: regression"
] | 2025-09-19T14:29:24Z
| 2025-09-20T12:16:28Z
| 5
|
robert-hardwick
|
huggingface/transformers
| 41,005
|
Do we have an official Qwen3VL model published by Alibaba?
|
### Model description
Reference - https://huggingface.co/docs/transformers/main/en/model_doc/qwen3_vl#transformers.Qwen3VLForConditionalGeneration
If not, when can we expect it? Any guess?
|
https://github.com/huggingface/transformers/issues/41005
|
closed
|
[
"New model"
] | 2025-09-19T13:59:34Z
| 2025-09-20T10:00:04Z
| 1
|
Dineshkumar-Anandan-ZS0367
|
pytorch/pytorch
| 163,331
|
Support Query Bug !!
|
Hey Guys,
I have been working on an ML project, and I have a GPU server (an ancient one) which is backed by CUDA 3.0. What is the minimum version supported by PyTorch?
Thank You :)
|
https://github.com/pytorch/pytorch/issues/163331
|
closed
|
[] | 2025-09-19T10:09:06Z
| 2025-09-20T14:42:36Z
| 2
|
Harishankar14
|
huggingface/transformers
| 40,993
|
HfArgumentParser cannot parse TRL Config
|
### System Info
transformers==4.56.1
trl==0.17.0
I used to run the code below
```python
from transformers import HfArgumentParser
from trl import (
ScriptArguments, ModelConfig, SFTConfig
)
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
script_arguments, trainer_config, model_config = parser.parse_args_into_dataclasses()
```
to parse training args, but after updating transformers to 4.56, it does not work:
```
Traceback (most recent call last):
File "D:\mytest.py", line 5, in <module>
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
File "E:\Anaconda3\envs\myopenai\lib\site-packages\transformers\hf_argparser.py", line 143, in __init__
self._add_dataclass_arguments(dtype)
File "E:\Anaconda3\envs\myopenai\lib\site-packages\transformers\hf_argparser.py", line 260, in _add_dataclass_arguments
raise RuntimeError(
RuntimeError: Type resolution failed for <class 'trl.trainer.sft_config.SFTConfig'>. Try declaring the class in global scope or removing line of `from __future__ import annotations` which opts in Postponed Evaluation of Annotations (PEP 563)
```
How to fix it?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run
```python
from transformers import HfArgumentParser
from trl import (
ScriptArguments, ModelConfig, SFTConfig
)
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
script_arguments, trainer_config, model_config = parser.parse_args_into_dataclasses()
```
### Expected behavior
It should work.
|
https://github.com/huggingface/transformers/issues/40993
|
closed
|
[
"bug"
] | 2025-09-19T08:29:48Z
| 2025-09-19T09:06:20Z
| 5
|
caoyang-sufe
|
huggingface/lerobot
| 1,978
|
Is there a best fit model to each sim env?
|
I tried to train diffusion, smolvla, and even pi0 on aloha with 200k steps, and found that they all perform much worse (less than 10% success rate) than the ACT policy. Why? Does each env task have a best-fit policy, or are there problems with my training strategy?
|
https://github.com/huggingface/lerobot/issues/1978
|
closed
|
[
"question",
"policies",
"simulation"
] | 2025-09-19T02:45:14Z
| 2025-10-17T11:25:27Z
| null |
shs822
|
pytorch/pytorch
| 163,283
|
RFC move to Pyrefly for Type Checking
|
Currently, mypy is used to typecheck PyTorch, with lint runner and dmypy. We appreciate the community’s work maintaining mypy and type coverage in PyTorch and want to build on that foundation. [Pyrefly](https://pyrefly.org/) is a new standards-compliant Python type checker. The Pyrefly team has been hard at work on building a performant and backwards compatible type checker, that we think can improve the current type checking setup and the development experience for PyTorch users.
For example, the current setup can make it tricky to know which files are being typechecked and in which mode ([strict](https://github.com/pytorch/pytorch/blob/main/mypy-strict.ini) vs. [default](https://github.com/pytorch/pytorch/blob/main/mypy.ini)). Use of `type: ignore` and `# mypy: ignore-errors` to disable checks on certain files, adds to the challenge. We think we can make typechecking simpler, and improve the overall experience.
We are proposing that we add Pyrefly as a typechecker to Pytorch.
The benefits to PyTorch will be:
* Whole repo checks in seconds:
- Eliminates inconsistencies created by dmypy between local and CI revisions
- Fast CI signal
* A richer IDE experience with types that match hover and diagnostics. We support most major IDEs: https://pyrefly.org/en/docs/IDE/#other-editors
* Modern type checking features:
- Container inference, conformance to the typing spec for feature support
- We’ve found over 200 bugs in PyTorch just by enabling Pyrefly for testing!
* Clear configuration and ownership:
- The Pyrefly team will help support typing in PyTorch and respond quickly to issues on our github
Here’s how we would propose to get Pyrefly up and running in PyTorch:
Phase 1:
* Check in Pyrefly configs along with the suppressions needed for Pyrefly to check cleanly
* Add a non-blocking CI linter runner job to observe changes and test the integration
* Gather community feedback on the checker and IDE extension via:
- Download the VSCode Extension
- Run pyrefly through lint runner
Phase 2:
* With community buy-in, we'll swap the checker from mypy to pyrefly over the weekend to be least disruptive
* After Pyrefly has been enabled smoothly for a few days, we’ll cleanup the unused `type: ignores` that remain in the code. We plan to cleanup approximately 600 unused mypy ignores
Phase 3:
* Work with the community to help add types where they are useful in PyTorch and answer questions around typing features and usage
* Set up additional jobs like `pyrefly infer` to ease the process of adding types
* Work with the community to better export types to consumers of PyTorch
We’d love for you to try Pyrefly, share your experiences, and help us make it (and PyTorch) even better 🙂
cc @lolpack @ndmitchell @kinto0 @samwgoldman
|
https://github.com/pytorch/pytorch/issues/163283
|
closed
|
[
"module: typing",
"triaged",
"needs research"
] | 2025-09-18T19:52:40Z
| 2025-11-24T19:20:08Z
| 3
|
maggiemoss
|
huggingface/accelerate
| 3,784
|
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'?
|
### System Info
```Shell
- Name: accelerate Version: 1.10.1
- Name: transformers Version: 4.54.0
- Name: deepspeed Version: 0.17.5
- Name: torch Version: 2.8.0
- Name: wandb Version: 0.21.4
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [x] My own task or dataset (give details below)
### Reproduction
This is a DeepSpeed stage 2 config, in JSON:
```
json = {
    "fp16": {
        "enabled": false,
        "auto_cast": true,
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": true
    },
    "amp": {
        "enabled": false
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": 0.0003,
            "betas": [0.9, 0.999],
            "eps": 1e-08,
            "weight_decay": 0.001
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": 0,
            "warmup_max_lr": 0.0003,
            "warmup_num_steps": 0
        }
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 5.000000e+08,
        "overlap_comm": false,
        "reduce_scatter": true,
        "reduce_bucket_size": 9.000000e+05,
        "contiguous_gradients": true,
        "use_multi_rank_bucket_allreduce": false
    },
    "zero_state": 2,
    "gradient_accumulation_steps": 1,
    "gradient_clipping": 1,
    "train_micro_batch_size_per_gpu": 4,
    "mixed_precision": "bf16",
    "communication_data_type": "bf16",
    "steps_per_print": inf
}
```
I use `accelerate` to spin up 8 workers on an AWS EC2 instance:
```bash
accelerate launch --config_file configs/deepspeed.yaml scripts/main.py
```
The following error is raised when the `trainer` runs `train`:
```
File "/home/ubuntu/llm-classifiier/scripts/main.py", line 88, in <module>
train_qwen_any(cli_args, run_args)
File "/home/ubuntu/llm-classifiier/scripts/train_qwen.py", line 138, in train_qwen_any
trainer.train()
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py", line 2237, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py", line 2758, in _inner_training_loop
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer_callback.py", line 509, in on_train_end
return self.call_event("on_train_end", args, state, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer_callback.py", line 556, in call_event
result = getattr(callback, event)(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/integrations/integration_utils.py", line 958, in on_train_end
fake_trainer.save_model(temp_dir)
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py", line 3965, in save_model
state_dict = self.accelerator.get_state_dict(self.deepspeed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/accelerate/accelerator.py", line 3903, in get_state_dict
zero3_sharding = self.deepspeed_config["zero_optimization"]["stage"] == 3
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'?
```
I am not using zero3 sharding, so I don't know why this is an issue at all!
My deepspeed.yaml looks like this
```
compute_environment: LOCAL_MACHINE
debug: true
deepspeed_config:
deepspeed_config_file: configs/deepspeed_stg2.json
distributed_type: DEEPSPEED
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
And the actual json file is above.
Because of this I cannot save my models or state_dicts.
### Expected behavior
Unless I am missing something profound, this really shouldn't be happening.
|
https://github.com/huggingface/accelerate/issues/3784
|
closed
|
[] | 2025-09-18T17:07:54Z
| 2025-10-27T15:08:19Z
| 1
|
alexge233
|
huggingface/lerobot
| 1,969
|
How to record a multi-task dataset on so101?
|
I found that I can only use "dataset.single_task" to record, but I need to record a dataset that contains more than 3 tasks. How can I solve this?
|
https://github.com/huggingface/lerobot/issues/1969
|
closed
|
[] | 2025-09-18T10:18:00Z
| 2025-09-21T02:50:59Z
| null |
Temmp1e
|
huggingface/lerobot
| 1,966
|
SO101FollowerEndEffector?
|
I am trying to get inverse kinematics to work on my SO-101, and I found SO100FollowerEndEffector but there is no SO101FollowerEndEffector?
I suspect they are interchangeable, but when I use SO100FollowerEndEffector on my SO-101, it wants me to recalibrate, so I just want to make sure before I break anything.
|
https://github.com/huggingface/lerobot/issues/1966
|
open
|
[
"question",
"robots"
] | 2025-09-17T23:56:38Z
| 2025-10-30T08:56:22Z
| null |
cashlo
|
pytorch/ao
| 3,020
|
How to use FP8 training with MoE models?
|
I’m trying to train a Mixture of Experts (MoE) model with FP8 precision. However, I couldn’t find any documentation or examples that describe how to enable FP8 training for MoE in torchao.
Is FP8 training for MoE models currently supported?
If yes, could you point me to a tutorial or usage guide?
If not, is there a roadmap or prototype feature under development? If this feature is still in progress, could you share the current status and future plan?
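For context, this is the dense-model float8 flow I'm starting from (a sketch assuming the `torchao.float8` entry point; please correct me if the API differs for your version):
```python
import torch
import torch.nn as nn
from torchao.float8 import convert_to_float8_training  # entry point assumed; check your torchao version

# Dense-layer float8 training flow (real fp8 gemms require recent Hopper/Ada-class GPUs).
model = nn.Sequential(nn.Linear(1024, 4096), nn.SiLU(), nn.Linear(4096, 1024)).cuda().to(torch.bfloat16)
convert_to_float8_training(model)  # swaps nn.Linear modules for float8 training variants
out = model(torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16))
out.sum().backward()
```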
Thanks!
|
https://github.com/pytorch/ao/issues/3020
|
open
|
[
"moe"
] | 2025-09-17T12:18:14Z
| 2025-10-02T18:20:44Z
| null |
BIGBALLON
|
pytorch/torchtitan
| 1,716
|
float8 Grouped MM kernels
|
- **Is there any plan to support float8 Grouped MM for llama4 / qwen3 MoE model training?**
- **Is this the correct way to train a MoE model with FP8?**
Currently, the available Grouped GEMM kernels only support float16, and they do not work with float8.
``` python
@expert_parallel
def _run_experts_grouped_mm(
    w1: torch.Tensor,
    w2: torch.Tensor,
    w3: torch.Tensor,
    x: torch.Tensor,
    num_tokens_per_expert: torch.Tensor,
) -> torch.Tensor:
    offsets = torch.cumsum(num_tokens_per_expert, dim=0, dtype=torch.int32)
    # grouped mm between a 2D tensor and a 3D tensor
    assert x.dim() == 2
    h = F.silu(
        torch._grouped_mm(x.bfloat16(), w1.bfloat16().transpose(-2, -1), offs=offsets)
    )
    h = h * torch._grouped_mm(
        x.bfloat16(), w3.bfloat16().transpose(-2, -1), offs=offsets
    )
    out = torch._grouped_mm(h, w2.bfloat16().transpose(-2, -1), offs=offsets).type_as(x)
    return out
```
For training large MoE models such as LLaMA4 and Qwen3, float8 is increasingly important to reduce memory footprint and improve training efficiency. However, the lack of float8 Grouped GEMM support becomes a bottleneck when scaling up MoE training.
**Feature request:**
Add support for float8 Grouped GEMM kernels.
Ensure compatibility with MoE training workloads (e.g., LLaMA4, Qwen3).
This would enable more efficient large-scale MoE training under float8 precision.
|
https://github.com/pytorch/torchtitan/issues/1716
|
open
|
[
"question"
] | 2025-09-17T09:57:25Z
| 2025-09-30T02:54:53Z
| null |
BIGBALLON
|
pytorch/pytorch
| 163,153
|
FSDP2 implicit prefetch does not work
|
### 🐛 Describe the bug
I'm using the official [example of FSDP2](https://github.com/pytorch/examples/blob/acc295dc7b90714f1bf47f06004fc19a7fe235c4/distributed/FSDP2/example.py) with some small modifications:
```python
# distributed/FSDP2/example.py
import argparse
import os
import torch
from checkpoint import Checkpointer
from model import ModelArgs, Transformer
from torch.distributed.fsdp import fully_shard, MixedPrecisionPolicy
from utils import inspect_mixed_precision, inspect_model
from torch.profiler import profile, record_function, ProfilerActivity
def verify_min_gpu_count(min_gpus: int = 2) -> bool:
    """ verification that we have at least 2 gpus to run dist examples """
    has_gpu = torch.accelerator.is_available()
    gpu_count = torch.accelerator.device_count()
    return has_gpu and gpu_count >= min_gpus

def set_modules_to_forward_prefetch(model, num_to_forward_prefetch):
    for i, layer in enumerate(model.layers):
        if i >= len(model.layers) - num_to_forward_prefetch:
            break
        layers_to_prefetch = [
            model.layers[i + j] for j in range(1, num_to_forward_prefetch + 1)
        ]
        layer.set_modules_to_forward_prefetch(layers_to_prefetch)

def set_modules_to_backward_prefetch(model, num_to_backward_prefetch):
    for i, layer in enumerate(model.layers):
        if i < num_to_backward_prefetch:
            continue
        layers_to_prefetch = [
            model.layers[i - j] for j in range(1, num_to_backward_prefetch + 1)
        ]
        layer.set_modules_to_backward_prefetch(layers_to_prefetch)

def main(args):
    _min_gpu_count = 2
    if not verify_min_gpu_count(min_gpus=_min_gpu_count):
        print(f"Unable to locate sufficient {_min_gpu_count} gpus to run this example. Exiting.")
        exit()
    rank = int(os.environ["LOCAL_RANK"])
    if torch.accelerator.is_available():
        device_type = torch.accelerator.current_accelerator()
        device = torch.device(f"{device_type}:{rank}")
        torch.accelerator.set_device_index(rank)
        print(f"Running on rank {rank} on device {device}")
    else:
        device = torch.device("cpu")
        print(f"Running on device {device}")
    backend = torch.distributed.get_default_backend_for_device(device)
    torch.distributed.init_process_group(backend=backend, device_id=device)
    torch.manual_seed(0)
    vocab_size = 1024
    batch_size = 4
    seq_len = 1024
    model_args = ModelArgs(
        n_layers=10,
        n_heads=8,
        dim=4096,
        vocab_size=vocab_size,
        max_seq_len=seq_len,
        dropout_p=0,
    )
    with torch.device("meta"):
        model = Transformer(model_args)
    fsdp_kwargs = {}
    if args.mixed_precision:
        fsdp_kwargs["mp_policy"] = MixedPrecisionPolicy(
            param_dtype=torch.bfloat16,
            reduce_dtype=torch.float32,
        )
    for layer in model.layers:
        fully_shard(layer, **fsdp_kwargs)
    fully_shard(model, **fsdp_kwargs)
    inspect_model(model)
    if args.explicit_prefetching:
        set_modules_to_forward_prefetch(model, num_to_forward_prefetch=2)
        set_modules_to_backward_prefetch(model, num_to_backward_prefetch=2)
    checkpointer = Checkpointer("checkpoints", dcp_api=args.dcp_api)
    if checkpointer.last_training_time is None:
        model.to_empty(device=device)
        model.reset_parameters()
    else:
        checkpointer.load_model(model)
    if args.mixed_precision:
        inspect_mixed_precision(model)
    optim = torch.optim.Adam(model.parameters(), lr=1e-2)
    if checkpointer.last_training_time is not None:
        checkpointer.load_optim(model, optim)
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        record_shapes=True,
        with_stack=True,
    ) as prof:
        for _ in range(10):
            if args.explicit_prefetching:
                model.unshard()
            x = torch.randint(0, vocab_size, (batch_size, seq_len), device=device)
            loss = model(x).sum()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optim.step()
            optim.zero_grad()
    prof.export_chrome_trace(f"fsdp2_trace_r{torch.distributed.get_rank()}.json")
    # checkpointer.save(model, optim)
    torch.distributed.destroy_process_group()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="PyTorch FSDP2 example")
    parser.add_argument("--explicit-prefetching", action="store_true", default=False)
    parser.add_argument("--mixed-precision", action="store_true", default=True)
    parser.add_argument("--dcp-api", action="store_true", default=False)
    args = parser.parse_args()
    main(args)
```
## Current behavior
The profiling result shows that the all-gather kernel is not overlapping with computation.
<img width="2540" height="414" alt="Image" src="https://github.com/user-attachments/assets/403c8560-c090-4038-b629-51aa5e3ab1ef" />
From the t
|
https://github.com/pytorch/pytorch/issues/163153
|
closed
|
[
"oncall: distributed"
] | 2025-09-17T09:30:31Z
| 2025-09-17T18:04:42Z
| 1
|
zhc7
|
pytorch/tutorials
| 3,569
|
Feedback about What is torch.nn really?
|
There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/nn_tutorial.html
The GitHub URL for the MNIST archive needs to change from:
```
URL = "https://github.com/pytorch/tutorials/raw/main/_static/"
```
to
```
URL = 'https://github.com/pytorch/tutorials/raw/refs/heads/main/_static/'
```
|
https://github.com/pytorch/tutorials/issues/3569
|
open
|
[] | 2025-09-16T20:51:42Z
| 2025-09-16T20:51:42Z
| null |
robertbcalhoun
|
huggingface/lighteval
| 970
|
How to use a configuration file?
|
The documentation references using configuration YAML files, e.g. [here](https://huggingface.co/docs/lighteval/main/en/use-litellm-as-backend), but it doesn't give the name of the file or which option to use to feed the config to lighteval. I tried creating a `config.yaml` / `config.yml` in the current directory and passing a `--config` option (which doesn't exist).
|
https://github.com/huggingface/lighteval/issues/970
|
closed
|
[] | 2025-09-16T20:13:48Z
| 2025-09-24T22:08:32Z
| null |
oluwandabira
|
huggingface/transformers
| 40,915
|
HfArgumentParser does not support peft.LoraConfig
|
### System Info
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@ydshieh (I am not really sure who to tag here)
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from peft import LoraConfig # v0.17.1
from transformers import HfArgumentParser # Built from source
p = HfArgumentParser(dataclass_types=LoraConfig) # fails
```
### Expected behavior
I would expect LoraConfig to be supported by HfArgumentParser.
As I understand it, this fails because HfArgumentParser does not support fields of type `Optional[Union[List[str], str]]`.
Is there a plan to support such fields?
|
https://github.com/huggingface/transformers/issues/40915
|
closed
|
[
"bug"
] | 2025-09-16T16:23:56Z
| 2025-09-23T05:16:14Z
| 5
|
romitjain
|
pytorch/pytorch
| 163,071
|
Lintrunner not flagging CI issues in PRs
|
### 🐛 Describe the bug
The PR https://github.com/pytorch/pytorch/pull/162659 introduced some small changes in the `.github/workflows/pull.yml` workflow, changing the `linux-jammy-py3_10-clang18-asan-build` job.
After merging, lintrunner started flagging the change as an inconsistency in the workflows (https://github.com/pytorch/pytorch/actions/runs/17750680257/job/50444889451). But it did not flag it in the PR itself.
We should discuss whether the rule is valid and how to fix lintrunner so that it is always red in the PR if merging the PR could cause lintrunner to be red in trunk.
### Versions
main (trunk)
|
https://github.com/pytorch/pytorch/issues/163071
|
closed
|
[
"module: lint",
"triaged"
] | 2025-09-16T12:56:57Z
| 2025-09-22T15:00:35Z
| 3
|
jeanschmidt
|
huggingface/diffusers
| 12,338
|
`AutoencoderDC` bug with `pipe.enable_vae_slicing()` and decoding multiple images
|
### Describe the bug
When using the Sana_Sprint_1.6B_1024px and the SANA1.5_4.8B_1024px models, I cannot enable VAE slicing when generating multiple images. I guess this issue will affect the rest of the Sana model and pipeline configurations because they all use the same `AutoencoderDC` model.
I traced the issue to the following [line of code](https://github.com/huggingface/diffusers/blob/751e250f70cf446ae342c8a860d92f6a8b78261a/src/diffusers/models/autoencoders/autoencoder_dc.py#L620), and if I remove the `.sample` part the issue seems to be fixed.
I intend to submit a PR with my proposed fix. Can you confirm that this is the correct solution?
### Reproduction
```python
from diffusers import SanaSprintPipeline
import torch
pipe = SanaSprintPipeline.from_pretrained("Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers", text_encoder=text_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.enable_vae_slicing()
prompt = "A girl"
num_images_per_prompt = 8
output = pipe(
prompt=prompt,
height=1024,
width=1024,
num_inference_steps=2,
num_images_per_prompt=num_images_per_prompt,
intermediate_timesteps=1.3,
max_timesteps=1.56830,
timesteps=None
).images
```
### Logs
```shell
Traceback (most recent call last):
File "F:\AI setups\Diffusers\scripts\inference sana-sprint.py", line 24, in <module>
output = pipe(
^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\pipelines\sana\pipeline_sana_sprint.py", line 874, in __call__
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_dc.py", line 620, in decode
decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_dc.py", line 620, in <listcomp>
decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Tensor' object has no attribute 'sample'
```
### System Info
- 🤗 Diffusers version: 0.36.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.4
- Transformers version: 4.55.0
- Accelerate version: 1.10.0
- PEFT version: 0.17.0
- Bitsandbytes version: 0.47.0
- Safetensors version: 0.6.2
- xFormers version: 0.0.31.post1
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@yiyixuxu @DN6
|
https://github.com/huggingface/diffusers/issues/12338
|
closed
|
[
"bug"
] | 2025-09-16T12:23:29Z
| 2025-09-22T06:55:35Z
| 0
|
mingyi456
|
pytorch/pytorch
| 163,066
|
PyTorch is including internal headers, leading to ODR violations
|
### 🐛 Describe the bug
In [functorch/csrc/dim/dim_opcode.c](https://github.com/pytorch/pytorch/blob/e3783a9575b810f9a3f51334270668357463958e/functorch/csrc/dim/dim_opcode.c#L8-L10) and [torch/csrc/dynamo/cpython_defs.c](https://github.com/pytorch/pytorch/blob/e3783a9575b810f9a3f51334270668357463958e/torch/csrc/dynamo/cpython_defs.c#L21-L34), PyTorch is including private headers from CPython and specifically doing so in a way that includes symbols defined in those headers. This causes problems if you ever try to link both PyTorch and CPython together (which we do for static hermetic builds), because the symbols get defined more than once, leading to a violation of the One Definition Rule.
It is probably true that CPython should use `extern` for these symbols (though I'm told that this is not necessarily possible within CPython itself), but also it is definitely true that PyTorch should not be using the private interface of CPython.
I am not super familiar with the reason that this needs to be done so I am not sure what the right solution is. Is this something that can be accomplished without using the internal headers? If not, what is missing from the CPython API that would make it possible to avoid this situation?
### Versions
This is a more abstract issue about the code itself.
cc @malfet @seemethere @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela
|
https://github.com/pytorch/pytorch/issues/163066
|
open
|
[
"module: build",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-09-16T10:22:40Z
| 2025-09-24T17:41:27Z
| 6
|
pganssle-google
|
pytorch/pytorch
| 163,061
|
GIL is not released when calling torch.compile kernels
|
### 🐛 Describe the bug
In most cases, PyTorch releases GIL when calling CUDA APIs, but I found the GIL is held when calling torch.compile kernels, is this expected? Is it possible to release GIL when calling torch.compile kernels?
To reproduce, script `torch_compile.py`:
```python
import torch
import triton
import triton.language as tl


def torch_add(x: torch.Tensor, y: torch.Tensor):
    return x + y


@torch.compile
def torch_compile_add(x: torch.Tensor, y: torch.Tensor):
    return x + y


@triton.jit
def add_kernel(x_ptr,
               y_ptr,
               output_ptr,
               n_elements,
               BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)


def triton_add(x: torch.Tensor, y: torch.Tensor):
    output = torch.empty_like(x)
    n_elements = output.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']), )
    add_kernel[grid](x, y, output, n_elements, BLOCK_SIZE=1024)
    return output


def main():
    x = torch.randn(4096, 4096, device='cuda')
    y = torch.randn(4096, 4096, device='cuda')
    for _ in range(10):
        torch_add(x, y)
        torch_compile_add(x, y)
        triton_add(x, y)


if __name__ == "__main__":
    main()
```
Run it with
```bash
nsys profile -f true --wait primary -t cuda,nvtx,python-gil --cudabacktrace=all --python-backtrace=cuda --python-sampling=true -o torch_compile python torch_compile.py
```
<img width="1539" height="443" alt="Image" src="https://github.com/user-attachments/assets/c6aebdcd-1b25-435e-998a-a7a4208e766b" />
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+34c6371d24.nv25.08
Is debug build: False
CUDA used to build PyTorch: 13.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 4.0.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1086-nvidia-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 13.0.48
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
Nvidia driver version: 575.57.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.12.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.12.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 33%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_windo
|
https://github.com/pytorch/pytorch/issues/163061
|
closed
|
[
"module: performance",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-09-16T09:00:22Z
| 2025-09-30T06:49:01Z
| 7
|
syuoni
|
pytorch/xla
| 9,646
|
Correct behavior of `torch.ops.xla.write_mlir_debuginfo`
|
## ❓ Correct behavior of `torch.ops.xla.write_mlir_debuginfo`
What is the correct behavior of `torch.ops.xla.write_mlir_debuginfo`? It seems to add debug info to all upstream operations, not just the direct upstream op. Is that the expected behavior?
```python
import torch
import torch_xla
import torch_xla.experimental.xla_mlir_debuginfo
from torch_xla.stablehlo import (StableHLOExportOptions,
                                 exported_program_to_stablehlo)


class SampleModel(torch.nn.Module):

    def forward(self, x, y):
        x = x + y
        x = x - y
        x = torch.ops.xla.write_mlir_debuginfo(x, "MY_SUB")
        return x


model = SampleModel()
exported_program = torch.export.export(model,
                                       (torch.rand(10), torch.rand(10)))
mlir_text = exported_program_to_stablehlo(
    exported_program).get_stablehlo_text()
print(mlir_text)
```
```
#loc1 = "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>xla__device_data"
module @IrToHlo.12 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {
func.func @main(%arg0: tensor<10xf32> "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>xla__device_data", %arg1: tensor<10xf32> "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>xla__device_data") -> tensor<10xf32> {
%0 = stablehlo.add %arg1, %arg0 : tensor<10xf32> "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>aten__add"
%1 = stablehlo.subtract %0, %arg0 : tensor<10xf32> "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>aten__sub"
return %1 : tensor<10xf32> [unknown]
} [unknown]
} [unknown]
#loc = [unknown]
#loc2 = "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>aten__add"
#loc3 = "<XLA_MLIR_DEBUGINFO_BEGIN>MY_SUB<XLA_MLIR_DEBUGINFO_END>aten__sub"
```
|
https://github.com/pytorch/xla/issues/9646
|
open
|
[
"question",
"stablehlo"
] | 2025-09-16T00:20:05Z
| 2025-09-16T14:01:06Z
| null |
tlsdmstn56
|
huggingface/optimum
| 2,355
|
Support exporting text-ranking for BERT models
|
### Feature request
Currently, `optimum-cli export onnx --model cross-encoder/ms-marco-MiniLM-L-12-v2 cross-encoder--ms-marco-MiniLM-L-12-v2-onnx` says:
```
ValueError: Asked to export a bert model for the task text-ranking (auto-detected), but the Optimum ONNX exporter only supports the tasks feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification for bert. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task text-ranking to be supported in the ONNX export for bert.
```
### Motivation
I'm working on a tool that I intend to distribute to others, for example via `brew install`. It's difficult to package and ship Python, and I also want to prioritize the speed of many filesystem and related operations, so I'm writing it in Rust, using candle.
It can be a lot of work to implement every single model type by hand in candle. candle-transformers doesn't implement BertForSequenceClassification. Moreover, as model architectures change, I don't want to have to implement each one. It's great to be able to have the entire computation graph stored as data, as in ONNX.
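As a possible interim workaround (an assumption on my part, untested: since the checkpoint is a `BertForSequenceClassification`, the already-supported text-classification task may produce the graph a cross-encoder needs), forcing the task explicitly might work:

```python
# Hedged sketch: override the auto-detected "text-ranking" task with the
# supported "text-classification" task. Whether the exported graph matches
# cross-encoder scoring semantics is an assumption to verify.
from optimum.exporters.onnx import main_export

main_export(
    "cross-encoder/ms-marco-MiniLM-L-12-v2",
    output="cross-encoder--ms-marco-MiniLM-L-12-v2-onnx",
    task="text-classification",
)
```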
### Your contribution
I'm willing to take a stab at this! If you think it would be helpful, and if you could give a couple pointers how to start!
|
https://github.com/huggingface/optimum/issues/2355
|
closed
|
[
"Stale"
] | 2025-09-15T21:23:35Z
| 2025-10-21T02:10:29Z
| 1
|
kshitijl
|
pytorch/pytorch
| 162,971
|
[CD] Reasonable time constraint for binary builds
|
### 🐛 Describe the bug
It looks like the CUDA+aarch64, Win+XPU, and ROCm builds are all close to exceeding the 6h threshold.
- Could we have some sort of a plan on how to deal with those. I.e. can some build dependencies be cached and build ahead of time as part of the docker image?
- Is there a matrix somewhere on what types of runners are currently used, and should we switch to a bigger ones?
### Versions
CI
cc @seemethere @atalman @pytorch/pytorch-dev-infra
|
https://github.com/pytorch/pytorch/issues/162971
|
open
|
[
"module: binaries",
"module: ci",
"triaged"
] | 2025-09-15T16:21:01Z
| 2025-09-23T20:23:03Z
| 1
|
malfet
|
pytorch/pytorch
| 162,957
|
torch.linalg.eigh uses a large amount of memory in pytorch 2.8.0
|
### 🐛 Describe the bug
Running torch.linalg.eigh spikes allocated GPU memory in pytorch 2.8.0. For repeated calls on tensors of different batch dimensions the allocated memory increases successively until reaching a plateau. In 2.7.0 the code below consistently uses ~200 MB, in 2.8.0 2-5 GB were allocated for different runs. Memory usage was monitored with nvidia-smi.
```python
import torch
for i in range(100):
    N = torch.randint(4000, 4100, (1,)).item()
    cov = torch.randn((N, 3, 3), device="cuda")
    cov = cov @ cov.transpose(-1, -2)
    cov = cov + torch.eye(3, device="cuda")[None, :, :] * 0.01
    val, vec = torch.linalg.eigh(cov)
```
Among the system specified in Versions below, I reproduced the same issue on an RTX 6000 Ada.
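For reference, the peak can also be quantified from within PyTorch itself (rather than nvidia-smi) using the documented torch.cuda memory stats; a minimal sketch around the same loop:

```python
import torch

torch.cuda.reset_peak_memory_stats()
for i in range(100):
    N = torch.randint(4000, 4100, (1,)).item()
    cov = torch.randn((N, 3, 3), device="cuda")
    cov = cov @ cov.transpose(-1, -2)
    cov = cov + torch.eye(3, device="cuda")[None, :, :] * 0.01
    val, vec = torch.linalg.eigh(cov)
torch.cuda.synchronize()
# Peak memory requested by the caching allocator vs. what it reserved from the driver
print(f"max allocated: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
print(f"max reserved:  {torch.cuda.max_memory_reserved() / 2**20:.1f} MiB")
```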
### Versions
Collecting environment information...
PyTorch version: 2.8.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A3000 Laptop GPU
Nvidia driver version: 573.24
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
BogoMIPS: 4992.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves vnmi avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affec
|
https://github.com/pytorch/pytorch/issues/162957
|
open
|
[
"needs reproduction",
"module: cuda",
"module: memory usage",
"triaged",
"module: linear algebra"
] | 2025-09-15T12:18:40Z
| 2025-09-16T08:08:01Z
| 2
|
fjneumann
|
pytorch/pytorch
| 162,952
|
The FSDPModule.set_requires_gradient_sync should control reduce-scatter sync and all-reduce sync separately
|
### 🚀 The feature, motivation and pitch
The current `FSDPModule.set_requires_gradient_sync` implementation controls both `reduce-scatter` and `all-reduce` together. For the multi-node HSDP scenario (replication between nodes, intra-node parameter sharding), during gradient accumulation, keeping `reduce-scatter` on but turning `all-reduce` off reduces unnecessary network communication between nodes without increasing GPU peak memory usage. If `reduce-scatter` is also turned off, `FSDPParam.unsharded_accumulated_grad` holds an unsharded gradient, which causes GPU peak memory to increase.
## Why disabling all-reduce sync does not increase GPU memory usage
If `all-reduce` is turned off, `FSDPParamGroup` maintains `_partial_reduce_output`, which holds the sharded gradients of the `FSDPParam.sharded_param`s owned by the current group, and `FSDPParam.sharded_param.grad` stays None. If `all-reduce` is turned on, the gradient is assigned to each `FSDPParam.sharded_param.grad`. So there is no extra GPU memory footprint.
See more code detail at [FSDPParamGroup.post_backward](https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_fully_shard/_fsdp_param_group.py#L478) and [foreach_reduce](https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_fully_shard/_fsdp_collectives.py#L447).
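For context, a rough sketch of the gradient-accumulation pattern this is about (training-loop names such as `batches`, `loss_fn`, and `optimizer` are placeholders; `model` is assumed to be wrapped with `fully_shard` on an HSDP mesh):

```python
# Sketch only: today set_requires_gradient_sync toggles reduce-scatter and
# all-reduce together; the request is a finer-grained knob so reduce-scatter
# can stay enabled during accumulation while the inter-node all-reduce is skipped.
accum_steps = 4
for step, batch in enumerate(batches):
    is_last_microbatch = (step + 1) % accum_steps == 0
    model.set_requires_gradient_sync(is_last_microbatch)
    loss = loss_fn(model(batch))
    loss.backward()
    if is_last_microbatch:
        optimizer.step()
        optimizer.zero_grad()
```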
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci
|
https://github.com/pytorch/pytorch/issues/162952
|
closed
|
[
"oncall: distributed"
] | 2025-09-15T09:01:29Z
| 2025-09-21T03:01:33Z
| 3
|
EquationWalker
|
pytorch/pytorch
| 162,908
|
new sparse tensor format implementation: tips
|
### 🚀 The feature, motivation and pitch
Hi,
I'm currently working on implementing a new sparse tensor format. I wish to add a method to the tensor object so that I can call `A.to_new_format()`, where `A` is a tensor object.
Can someone point me to how to implement this kind of feature directly as a method of the tensor object?
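For illustration only (hypothetical names, and not the official C++/ATen extension path): from pure Python you can attach a method to `torch.Tensor` like any other attribute, which is often enough for prototyping a conversion routine:

```python
import torch

def to_new_format(self: torch.Tensor):
    # Placeholder conversion: return whatever object represents the new sparse
    # layout. Here we just bundle the non-zero values and their indices.
    indices = self.nonzero(as_tuple=False)
    values = self[tuple(indices.t())]
    return {"indices": indices, "values": values, "shape": tuple(self.shape)}

# Monkey-patch the method onto the Tensor class (prototype-only approach).
torch.Tensor.to_new_format = to_new_format

A = torch.eye(3)
print(A.to_new_format())
```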
Thanks
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/162908
|
closed
|
[] | 2025-09-14T11:31:49Z
| 2025-09-14T22:07:20Z
| 1
|
ricvigi
|
pytorch/pytorch
| 162,898
|
Script ./export/unflatten.py has some bugs.
|
### 🐛 Describe the bug
I'm using torch.distributed.pipelining to implement Pipeline Parallelism for my model, but I'm encountering the following error:
<img width="2174" height="232" alt="Image" src="https://github.com/user-attachments/assets/fd9e00b0-8be8-4e41-aa27-07d79c568305" />
After reviewing the source code, I found what appears to be a bug in the run_outer() function. The code handles the case where node.op == "placeholder" and immediately calls run_from(). However, inside run_from() there is an assert node.op != "placeholder". If the graph has multiple placeholder nodes, this assertion will definitely cause the program to crash. I believe this is a bug, so I've filed this issue.
<img width="1234" height="1180" alt="Image" src="https://github.com/user-attachments/assets/187e6ef5-7e0d-4c4e-9487-39598a6f294d" />
If my assessment is wrong, I would appreciate any advice from the team on how to resolve my error.
Note: My development environment is using PyTorch version 2.6.0, but I've checked your latest version, 2.8.0, and this section of the code appears to be the same.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.261.03
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Skylake, IBRS)
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 4
BogoMIPS: 5985.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ibrs ibpb fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 160 MiB (40 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvid
|
https://github.com/pytorch/pytorch/issues/162898
|
open
|
[
"oncall: distributed",
"module: pipelining"
] | 2025-09-14T05:19:54Z
| 2025-10-05T13:33:18Z
| 1
|
lileicaca
|
pytorch/vision
| 9,215
|
MixUp and CutMix transforms for semantic segmentation
|
Is there any way to use the MixUp and CutMix transforms for semantic segmentation masks? I could not find any documentation on it.
If this functionality does not exist, I will be happy to submit a PR for the same.
Motivation - CutMix is used in SOTA semi-supervised semantic segmentation methods such as [UniMatch](https://arxiv.org/abs/2410.10777) and MixUp is used in knowledge distillation methods such as ["Knowledge distillation: A good teacher is patient and consistent"](https://arxiv.org/abs/2106.05237)
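For reference, applying CutMix to dense targets mostly amounts to pasting the same box into both the image and the mask instead of mixing label vectors; a rough sketch on plain tensors (not the torchvision transform API):

```python
import torch

def cutmix_segmentation(images, masks, alpha=1.0):
    # images: (B, C, H, W) float, masks: (B, H, W) long
    B, _, H, W = images.shape
    perm = torch.randperm(B)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Cut a box whose area is roughly (1 - lam) of the image
    cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
    # Paste the same region from the permuted batch into image and mask alike
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    masks[:, y1:y2, x1:x2] = masks[perm, y1:y2, x1:x2]
    return images, masks
```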
|
https://github.com/pytorch/vision/issues/9215
|
open
|
[] | 2025-09-13T11:23:35Z
| 2025-09-19T18:52:48Z
| 1
|
vedantdalimkar
|
pytorch/pytorch
| 162,870
|
[RFC] library function with 64+ arguments
|
### Custom op support with 64+ arguments
Is there any plan to support 64+ arguments? I have a custom kernel that takes 64+ arguments.
```python
import torch
from torch.library import Library, impl, register_fake
num_args = 65
# Create a new custom namespace
my_lib = Library("my_ops", "LIB")
# Define a custom operator with a list of tensors as input
args = ", ".join([f"Tensor t{i}" for i in range(num_args)])
my_lib.define(f"a_func({args}) -> Tensor")
```
```
Traceback (most recent call last):
File "/test/torch_test.py", line 18, in <module>
my_lib.define(f"a_func({args}) -> Tensor")
File "/miniconda3/envs/test-venv/lib/python3.11/site-packages/torch/library.py", line 172, in define
result = self.m.define(schema, alias_analysis, tuple(tags))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The function schema has 65 arguments but this PyTorch build only supports 64
```
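As a possible workaround until/unless the limit is raised, the schema language already supports tensor lists, so the arguments can be packed into a single `Tensor[]` (a minimal sketch reusing the setup above; the implementation body is a placeholder):

```python
import torch
from torch.library import Library

my_lib = Library("my_ops", "LIB")
# A single Tensor[] argument sidesteps the 64-argument schema limit.
my_lib.define("a_func(Tensor[] ts) -> Tensor")

def a_func_impl(ts):
    # Placeholder implementation: unpack the list inside the wrapper.
    return ts[0].clone()

my_lib.impl("a_func", a_func_impl, "CompositeExplicitAutograd")

out = torch.ops.my_ops.a_func([torch.randn(2) for _ in range(65)])
```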
### Alternatives
_No response_
### Additional context
_No response_
cc @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh
|
https://github.com/pytorch/pytorch/issues/162870
|
open
|
[
"triaged",
"module: custom-operators",
"module: library"
] | 2025-09-13T04:03:09Z
| 2025-09-15T23:25:57Z
| 1
|
tlsdmstn56
|
pytorch/pytorch
| 162,859
|
[RFC] support symmetric memory in torch.compile
|
The proposal originally came up in vLLM-compile sync with @ProExpertProg, @Chillee, and @Amir-19 and was also discussed with @ngimel @kwen2501. Recording it here to make sure we're all on the same page.
## Pitch
For any collective operator (built-in or custom), a user can specify which input must have symmetric memory.
torch.compile (Inductor) will figure out where the input is coming from and ensure that it is allocated with symmetric memory.
There are two cases for what type of operator produced the input.
1) built-in operator. Inductor might already preallocate the buffer that is the output of the operator (via memory planning) and it just needs to allocate it with symmetric memory.
```py
requires_symmetric_memory(collective, input=0)

def user_code(x):
    y = x.sin()
    z = y.cos()
    return collective(z)

def inductor_generated_code(x):
    with symmetric_memory():
        buffer = torch.empty()
        triton_inplace_sin_cos_fused(buffer)
    return collective(buffer)
```
2) custom operator. Inductor just needs to run the custom operator underneath the symmetric memory context manager. The main risk of this is that more buffers than are needed get allocated with symmetric memory (all tensors produced by the custom op get allocated with symmetric memory), but the user can just re-write their custom op to optimize this
```py
requires_symmetric_memory(collective, input=0)

def user_code(x):
    y = custom_op(x)
    return collective(y)

def inductor_generated_code(x):
    with symmetric_memory():
        y = custom_op(x)
    return collective(y)
```
## What about eager-mode?
The API to specify which input needs symmetric memory only applies to torch.compile. So a user would end up writing code that looks like:
```py
requires_symmetric_memory(collective, input=0)

def user_code(x):
    if torch.compiler.is_compiling():
        with symmetric_memory():
            y = custom_op(x)
    else:
        y = custom_op(x)
    return collective(y)
```
## What is the API to specify which input needs symmetric memory?
@kwen2501 noted that the choice of which input needs symmetric memory is specific to the collective operator. So one design is to specify, during operator registration, that the input needs symmetric memory.
1. torch.library.define("my_collective(SymmMemTensor x) -> Tensor")
2. torch.library.define("my_collective(Tensor x) -> Tensor", symm_mem_hint="x")
Another design is a torch.compiler API:
torch.compiler.specify_symmetric_memory(my_collective, "x").
If we think the choice is actually dynamic (or that some collectives may accept both symmetric and non-symmetric memory?) then this could instead be a context manager:
```py
@torch.compile
def user_code(y):
    x = custom_op(y)
    with torch.compiler.specify_symmetric_memory(my_collective, "x"):
        my_collective(x)
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @coconutruben
|
https://github.com/pytorch/pytorch/issues/162859
|
open
|
[
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: inductor",
"vllm-compile",
"module: vllm",
"module: symm_mem"
] | 2025-09-12T22:27:49Z
| 2025-12-16T18:19:59Z
| 26
|
zou3519
|
pytorch/pytorch
| 162,854
|
Move test_quantization tests to run weekly
|
Currently test_quantization is running on every commit / PR, it's not necessary since we are deprecating the flow: https://docs.pytorch.org/docs/main/quantization.html
Although the API is still used, we want to reduce the cadence at which the tests run to weekly.
Main test file: https://github.com/pytorch/pytorch/blob/0dcd9304aa0ea404c2807cb058660e49c9810c20/test/test_quantization.py#L4
1. We need to find how it is called in CI and remove the run, e.g. remove https://github.com/pytorch/pytorch/blob/0dcd9304aa0ea404c2807cb058660e49c9810c20/tools/testing/modulefinder_determinator.py#L43
2. We need to find how to run weekly jobs, and add test_quantization.py run there
Will likely need dev-infra's help on both of the above.
cc @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry
|
https://github.com/pytorch/pytorch/issues/162854
|
closed
|
[
"oncall: quantization",
"module: ci",
"module: tests"
] | 2025-09-12T21:56:16Z
| 2025-09-24T11:31:14Z
| 1
|
jerryzh168
|
huggingface/lerobot
| 1,923
|
Deploying SmolVLA with a simulator
|
Has anyone been able to deploy the SmolVLA model to control say the SO-100 on a simulator like IsaacSim?
Even if the fine-tuning reliably converges the observed performance on the simulator seems erratic. Do we apply the predicted actions from SmolVLA directly into the Articulation controller as positions?
|
https://github.com/huggingface/lerobot/issues/1923
|
closed
|
[
"question",
"policies",
"simulation"
] | 2025-09-12T21:06:40Z
| 2025-12-11T22:07:02Z
| null |
aditya1709
|
pytorch/torchtitan
| 1,708
|
FSDP + compiled autograd
|
Hi! I was trying out some debug runs using FSDP with compile enabled and found that compiled autograd doesn't seem to work well with FSDP (a single-GPU run without FSDP works).
Is it possible to make such a setup work, or is it just not supported as of now?
Launching a train run with the arguments below
```python
torchrun \
--standalone \
--nproc-per-node 2 \
--role rank \
--tee 3 \
--local-ranks-filter 0 \
-m torchtitan.train \
--job.config_file torchtitan/models/llama3/train_configs/debug_model.toml \
--training.compile \
--parallelism.enable_compiled_autograd \
--activation-checkpoint.mode none
```
fails with an error
```
loss.backward()
File "/usr/local/lib/python3.12/dist-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/graph.py", line 829, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/compiled_autograd.py", line 1354, in set_node_origin
raise RuntimeError(
RuntimeError: This compiled backward function was saved by AOTAutogradCache, which does not support
compiled autograd. Please turn off AOTAutogradCache using `TORCHINDUCTOR_AUTOGRAD_CACHE=0`.
```
Setting `TORCHINDUCTOR_AUTOGRAD_CACHE=0` doesn't seem to help much
```
loss.backward()
File "/usr/local/lib/python3.12/dist-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/usr/local/lib/python3.12/dist-packages/torch/autograd/graph.py", line 829, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/compiled_autograd.py", line 1041, in runtime_wrapper
out = compiled_fn(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 372, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1767, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1778, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 699, in compile_wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 840, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 416, in __call__
raise e
File "/usr/local/lib/python3.12/dist-packages/torch/fx/graph_module.py", line 403, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1767, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1778, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<eval_with_key>.8", line 4, in forward
def forward(self, inputs, sizes, scalars, hooks, packed_data):
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 893, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/utils.py", line 4381, in wrapper
return compiled_fn(flat_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 893, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 1214, in boxed_forward
return compiled_fn(flat_args)
^^^^^^^^^^^^
|
https://github.com/pytorch/torchtitan/issues/1708
|
open
|
[
"module: fsdp",
"module: torch.compile"
] | 2025-09-12T20:42:31Z
| 2025-09-15T16:24:02Z
| 3
|
antony-frolov
|
huggingface/swift-transformers
| 237
|
Please help. Seeing issues with Hub when integrating
|
Hello, I'm trying to integrate WhisperKit via https://github.com/argmaxinc/WhisperKit/blob/main/Package.swift, but that brings in [swift-transformers](https://github.com/huggingface/swift-transformers) and Hub. I'm seeing the issues below:
Hub.package.swiftinterface:34:32: warning: 'BinaryDistinctCharacter' is not a member type of struct 'Hub.Hub'
23:54:09 32 | public init(_ str: Foundation.NSString)
23:54:09 33 | public init(_ str: Swift.String)
23:54:09 34 | public init(_ character: Hub.BinaryDistinctCharacter)
23:54:09 | `- warning: 'BinaryDistinctCharacter' is not a member type of struct 'Hub.Hub'
23:54:09 35 | public init(_ characters: [Hub.BinaryDistinctCharacter])
23:54:09 36 | public init(stringLiteral value: Swift.String
I'm on Xcode 16.4 and using Swift 5.10. Please help!! Thanks in advance!
|
https://github.com/huggingface/swift-transformers/issues/237
|
closed
|
[
"question"
] | 2025-09-12T17:06:28Z
| 2025-09-17T15:36:52Z
| null |
rpatnayakuni22
|
pytorch/pytorch
| 162,820
|
[CI][CUDA][Distributed] test_ring_flex_attention failed on 8xB200 Runner
|
### 🐛 Describe the bug
Tracked in umbrella https://github.com/pytorch/pytorch/issues/162178
Job link: https://github.com/pytorch/pytorch/actions/runs/17660052730/job/50193312091
Failure message:
```
2025-09-12T05:47:07.8805304Z expect_out, expect_lse = compiled_flex_attention(
2025-09-12T05:47:07.8805570Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 841, in compile_wrapper
2025-09-12T05:47:07.8805776Z raise e.with_traceback(None) from e.__cause__ # User compiler error
2025-09-12T05:47:07.8806030Z torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped
2025-09-12T05:47:07.8806214Z Explanation: Dynamo does not know how to trace the Python builtin `_warnings.warn`.
2025-09-12T05:47:07.8806549Z Hint: If you are attempting to call a logging function (e.g. `_warnings.warn`), you can try adding it to `torch._dynamo.config.reorderable_logging_functions`.
2025-09-12T05:47:07.8806723Z Hint: Please file an issue on GitHub so the PyTorch team can add support for it.
2025-09-12T05:47:07.8806754Z
2025-09-12T05:47:07.8806953Z Developer debug context: module: _warnings, qualname: warn, skip reason: <missing reason>
2025-09-12T05:47:07.8806957Z
2025-09-12T05:47:07.8807256Z For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0007.html
2025-09-12T05:47:07.8807260Z
2025-09-12T05:47:07.8807348Z from user code:
2025-09-12T05:47:07.8807707Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1613, in flex_attention
2025-09-12T05:47:07.8807824Z _warn_once(
2025-09-12T05:47:07.8808100Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 65, in _warn_once
2025-09-12T05:47:07.8808250Z warnings.warn(message, category, stacklevel=2)
2025-09-12T05:47:07.8808254Z
2025-09-12T05:47:07.8808676Z Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
2025-09-12T05:47:07.8808680Z
2025-09-12T05:47:07.8808683Z
2025-09-12T05:47:07.8808831Z To execute this test, run the following from the base repo dir:
2025-09-12T05:47:07.8809086Z python test/distributed/tensor/test_attention.py RingFlexAttentionTest.test_ring_flex_attention
2025-09-12T05:47:07.8809090Z
2025-09-12T05:47:07.8809286Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2025-09-12T05:47:07.8809421Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
2025-09-12T05:47:07.8809584Z ================== 1 failed, 5 deselected, 2 rerun in 40.40s ===================
```
### Versions
TOT
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
|
https://github.com/pytorch/pytorch/issues/162820
|
open
|
[
"oncall: distributed",
"module: ci",
"module: tests",
"triaged",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2025-09-12T16:29:01Z
| 2025-09-22T22:23:48Z
| 3
|
nWEIdia
|
pytorch/ao
| 2,989
|
Quantized model is slower than original model!
|
Hello,
I have put together this benchmark and I am wondering why the quantized version is so much slower. Is there something that I have missed, or is it simply that the model is small and the overhead of quantization is not worth it in this case?
The results are the following.
```
Benchmarking: model_fp32.onnx
Warming up (100 iterations)...
Running benchmark (1000 iterations)...
Average: 0.014 ms
Median: 0.014 ms
Std Dev: 0.002 ms
Min/Max: 0.012/0.063 ms
Throughput: 70994.7 samples/sec
```
```
Benchmarking: model_quantized.onnx
Warming up (100 iterations)...
Running benchmark (1000 iterations)...
Average: 0.045 ms
Median: 0.044 ms
Std Dev: 0.007 ms
Min/Max: 0.042/0.144 ms
Throughput: 22114.3 samples/sec
```
here is the code
```
import torch
from torchao.quantization import quantize_, Int8DynamicActivationInt4WeightConfig
from torchao.quantization.qat import QATConfig
from torchvision.ops import MLP
import onnxruntime as ort
import numpy as np
import time
import statistics
from typing import Dict, Tuple

input_dims = 512
group_size = 64

def get_model():
    return MLP(
        in_channels=input_dims,
        hidden_channels=[256, 128, 64, 1]
    )

def train_loop(m: torch.nn.Module):
    optimizer = torch.optim.SGD(m.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-5)
    loss_fn = torch.nn.CrossEntropyLoss()
    for i in range(10):
        example = torch.randn(32, input_dims)
        target = torch.randn(32, 1)
        output = m(example)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

def benchmark_onnx_inference(
    model_path: str,
    input_shapes: Dict[str, Tuple],
    num_warmup: int = 10,
    num_iterations: int = 100
) -> Dict:
    """Benchmark ONNX model inference speed"""
    print(f"Benchmarking: {model_path}")
    # Create inference session
    session = ort.InferenceSession(model_path)
    # Get input/output info
    input_names = [inp.name for inp in session.get_inputs()]
    output_names = [out.name for out in session.get_outputs()]
    # Prepare inputs with exact shapes provided
    inputs = {}
    for name, shape in input_shapes.items():
        inputs[name] = np.random.randn(*shape).astype(np.float32)
    # Warmup
    print(f" Warming up ({num_warmup} iterations)...")
    for _ in range(num_warmup):
        _ = session.run(output_names, inputs)
    # Actual benchmark
    print(f" Running benchmark ({num_iterations} iterations)...")
    times = []
    for i in range(num_iterations):
        start_time = time.perf_counter()
        outputs = session.run(output_names, inputs)
        end_time = time.perf_counter()
        times.append((end_time - start_time) * 1000)  # Convert to ms
    # Calculate statistics
    avg_time = statistics.mean(times)
    median_time = statistics.median(times)
    std_time = statistics.stdev(times) if len(times) > 1 else 0
    min_time = min(times)
    max_time = max(times)
    # Determine batch size from first input shape
    batch_size = list(input_shapes.values())[0][0] if input_shapes else 1
    results = {
        'avg_ms': avg_time,
        'median_ms': median_time,
        'std_ms': std_time,
        'min_ms': min_time,
        'max_ms': max_time,
        'throughput_samples_per_sec': batch_size * 1000 / avg_time,
        'all_times': times,
        'batch_size': batch_size,
        'input_shapes': input_shapes
    }
    print(f" Average: {avg_time:.3f} ms")
    print(f" Median: {median_time:.3f} ms")
    print(f" Std Dev: {std_time:.3f} ms")
    print(f" Min/Max: {min_time:.3f}/{max_time:.3f} ms")
    print(f" Throughput: {batch_size * 1000 / avg_time:.1f} samples/sec")
    return results

def comprehensive_quantization_test():
    """Complete test to verify quantization is working"""
    print("=== Comprehensive Quantization Verification ===\n")
    # Create models
    model_fp32 = get_model()
    model_quantized = get_model()
    # Apply quantization
    base_config = Int8DynamicActivationInt4WeightConfig(group_size=group_size)
    quantize_(model_quantized, QATConfig(base_config, step="prepare"))
    # Train quantized model
    train_loop(model_quantized)
    quantize_(model_quantized, QATConfig(base_config, step="convert"))
    # save models to onnx
    torch.onnx.export(model_fp32, torch.randn(1, input_dims), "model_fp32.onnx", dynamo=True)
    torch.onnx.export(model_quantized, torch.randn(1, input_dims), "model_quantized.onnx", dynamo=True)
    input_shapes = {"input": (1, input_dims)}
    results_fp32 = benchmark_onnx_inference(
        "model_fp32.onnx",
        input_shapes,
        num_warmup=100,
        num_iterations=1000
    )
    results_quantized = benchmark_onnx_inference(
        "model_quantized.onnx",
        input_shapes,
        num_warmup=100,
        num_iterations=1000
    )

comprehensive_quantization_test()
```
|
https://github.com/pytorch/ao/issues/2989
|
open
|
[] | 2025-09-12T05:00:23Z
| 2025-09-12T18:31:28Z
| 8
|
timpiperseek
|
pytorch/pytorch
| 162,782
|
Is `torch.nn.functional.gumbel_softmax` going to be deprecated?
|
Is this function really going to be deprecated going forward? If so, I will write my own version. Thanks!
The note in question is on this page: https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html
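For anyone in the same situation, the function is small enough to carry in user code; a self-contained sketch of the usual straight-through Gumbel-Softmax (not necessarily the exact internal implementation):

```python
import torch
import torch.nn.functional as F

def my_gumbel_softmax(logits, tau=1.0, hard=False, dim=-1):
    # Sample Gumbel(0, 1) noise via -log(Exp(1)).
    gumbels = -torch.empty_like(logits).exponential_().log()
    y_soft = F.softmax((logits + gumbels) / tau, dim=dim)
    if hard:
        # Straight-through: one-hot in the forward pass, soft gradients in backward.
        index = y_soft.argmax(dim, keepdim=True)
        y_hard = torch.zeros_like(logits).scatter_(dim, index, 1.0)
        return y_hard - y_soft.detach() + y_soft
    return y_soft
```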
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
https://github.com/pytorch/pytorch/issues/162782
|
open
|
[
"module: nn",
"triaged",
"module: deprecation"
] | 2025-09-12T00:48:11Z
| 2025-09-19T17:36:22Z
| 1
|
michaelfortunato
|
pytorch/pytorch
| 162,719
|
linalg.eig does not get parallelized on CPU
|
### 🐛 Describe the bug
I have a lengthy calculation that relies on eigendecomposition of non-Hermitian matrices in one place. The reason I picked PyTorch is the straightforward parallel nature of its ops; however, that does not seem to be the case with `eig`. While I know it calls a BLAS routine under the hood, I am actually calculating batches of matrices, so there is potential for a speedup there. However, looking at the code below:
```python
import torch
import timeit
import psutil
import matplotlib.pyplot as plt
import numba
import numpy as np

stmt = "torch.linalg.eig(e)"
runtimes = []
threads = [1] + [t for t in range(2, 30, 2)]
for t in threads:
    torch.set_num_threads(t)
    try:
        numba.set_num_threads(t)
    except ValueError:
        pass
    r = timeit.timeit(
        setup="e = torch.randn(200, 25, 25, dtype=torch.cdouble)",
        stmt=stmt,
        number=100,
        globals=globals(),
    )
    runtimes.append(r)

plt.plot(threads, runtimes)
plt.xlabel("Number of Threads")
plt.ylabel("Runtime (seconds)")
plt.title(stmt)
num_cores = psutil.cpu_count(logical=False)
num_threads = psutil.cpu_count()
if num_threads is not None and num_cores is not None:
    plt.axvline(x=num_cores, color='g', linestyle='--', label='Physical Cores')
    plt.axvline(x=num_threads, color='r', linestyle='--', label='Logical Cores')
    plt.legend()
plt.grid()
plt.show()
```
I get the following relation:
<img width="567" height="455" alt="Image" src="https://github.com/user-attachments/assets/306ee486-84a7-445c-a83a-a9c86ad2c5c9" />
So not only is there no speedup, there is even a slowdown caused by threads!
Since BLAS routines may utilise multiple threads, I compared it with a custom numba based torch op:
```python
@numba.jit(nopython=True, parallel=True, cache=True)
def batch_eig(batch):
    shape = batch.shape
    batch_dims = shape[:-2]  # All dimensions except the last two (matrix dimensions)
    n = shape[-1]
    total_matrices = 1
    for dim in batch_dims:
        total_matrices *= dim
    flat_batch = batch.reshape(total_matrices, n, n)
    eigvecs = np.zeros_like(flat_batch)
    eigvals = np.zeros((total_matrices, n), dtype=np.complex128)
    for i in numba.prange(total_matrices):
        eigvals[i], eigvecs[i] = np.linalg.eig(flat_batch[i])
    return eigvals.reshape(batch_dims + (n,)), eigvecs.reshape(batch_dims + (n, n))


@torch.library.custom_op("mylib::eig", mutates_args=())
def eig(pic: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    E, U = batch_eig(pic.numpy())
    return torch.from_numpy(E), torch.from_numpy(U)


@eig.register_fake
def _(pic):
    eigvals = torch.empty(pic.shape[:-1], dtype=torch.cdouble)
    eigvecs = torch.empty(pic.shape, dtype=torch.cdouble)
    return eigvals, eigvecs
```
and the result can be seen below:
<img width="554" height="455" alt="Image" src="https://github.com/user-attachments/assets/861a9be7-674e-48a8-bd22-0b824d8a1313" />
Unfortunately, I'm no expert in torch internals, but I also need autograd and don't want to rely on my `numba` based implementation. Perhaps it is connected to the fact that `eig` does not even compile: #159445
### Versions
PyTorch version: 2.8.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro (10.0.26100 64-bit)
GCC version: (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r2) 14.2.0
Clang version: 19.1.1
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 576.80
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 13th Gen Intel(R) Core(TM) i5-13600KF
Manufacturer: GenuineIntel
Family: 205
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3500
MaxClockSpeed: 3500
L2CacheSize: 20480
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.2.6
[pip3] torch==2.8.0+cpu
[conda] Could not collect
cc @jerryzh168 @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
|
https://github.com/pytorch/pytorch/issues/162719
|
open
|
[
"module: performance",
"module: cpu",
"triaged",
"module: linear algebra"
] | 2025-09-11T12:09:57Z
| 2025-10-02T12:03:49Z
| 5
|
krokosik
|
huggingface/transformers
| 40,815
|
get_decoder feature regression in 4.56.0
|
### System Info
In the release of transformers v4.56.0, this PR https://github.com/huggingface/transformers/pull/39509 introduced a refactor of the public `get_decoder` method, which previously existed on models, by moving it to the PreTrainedModel class.
Unfortunately this introduced a significant behavior change: `*ForCausalLM` models no longer have the behavior of `get_decoder()` returning the underlying base model.
For example a `MistralForCausalLM` model named `model` returns `None` when `model.get_decoder()` is called.
The logic for why is occurring is obvious when looking at the offending PR:
```python
def get_decoder(self):
    """
    Best-effort lookup of the *decoder* module.
    Order of attempts (covers ~85 % of current usages):
    1. `self.decoder`
    2. `self.model` (many wrappers store the decoder here)
    3. `self.model.get_decoder()` (nested wrappers)
    4. fallback: raise for the few exotic models that need a bespoke rule
    """
    if hasattr(self, "decoder"):
        return self.decoder
    if hasattr(self, "model"):
        inner = self.model
        if hasattr(inner, "get_decoder"):
            return inner.get_decoder()
        return inner
    return None
```
In these cases the `if hasattr(self, "model"):` conditional block is entered, and the underlying model has a `get_decoder` method, since it is a `PreTrainedModel`, as all transformers models are, so this block will always be entered. At this point we are in the decoder itself, calling its `get_decoder` method. The decoder has no decoder or model attribute, so the function returns `None`, which is then passed back to the parent caller.
There are a couple of ways this could be fixed, but I don't know what their current impact would be on other parts of the code. I may open a PR, but I am quite busy at the moment. @molbap @ArthurZucker since you were the authors and reviewers here, do you mind taking another look at this?
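One possible direction, as a sketch only (not a vetted patch): keep the nested lookup but fall back to the wrapped module when that lookup comes back empty.

```python
def get_decoder(self):
    if hasattr(self, "decoder"):
        return self.decoder
    if hasattr(self, "model"):
        inner = self.model
        if hasattr(inner, "get_decoder"):
            nested = inner.get_decoder()
            # For *ForCausalLM wrappers the inner base model has no decoder/model
            # attribute of its own, so treat None as "the inner module is the decoder".
            return nested if nested is not None else inner
        return inner
    return None
```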
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use `get_decoder` on say a `MistralForCausalLM` model.
### Expected behavior
The underlying `model` attribute should be returned for `*ForCausalLM` models, not None, as these models are decoder only models by transformers convention.
|
https://github.com/huggingface/transformers/issues/40815
|
closed
|
[
"bug"
] | 2025-09-11T09:25:12Z
| 2025-09-16T08:57:14Z
| 4
|
KyleMylonakisProtopia
|
huggingface/transformers
| 40,813
|
Incorrect sharding configuration for Starcoder2 model
|
### System Info
Transformers main branch (commit [0f1b128](https://github.com/huggingface/transformers/commit/0f1b128d3359a26bd18be99c26d7f04fb3cba914) )
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0a0+5228986c39.nv25.06 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: tensor-parallel
- Using GPU in script?: yes
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running TP inference on `bigcode/starcoder2-7b` throws an error about incorrect tensor shapes due to a `base_model_tp_plan` misconfiguration.
`demo.py`:
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigcode/starcoder2-7b"
model = AutoModelForCausalLM.from_pretrained(model_id, tp_plan="auto")
model._tp_plan['model.layers.*.mlp.c_proj'] = 'rowwise'
print(f"TP plan: {model._tp_plan}, class: {type(model._tp_plan)}")
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
# distributed run
outputs = model(inputs)
# print the output
print(outputs)
```
run with
```
torchrun --nproc_per_node=2 demo.py
```
The correct `base_model_tp_plan` should replace:
```
['model.layers.*.mlp.c_proj'] = 'colwise'
```
with
```
['model.layers.*.mlp.c_proj'] = 'rowwise'
```
### Expected behavior
Throws:
```
(...)
[rank0]: File "/lustre/fs1/portfolios/coreai/users/gkwasniewski/hf-repo/transformers/src/transformers/models/starcoder2/modeling_starcoder2.py", line 65, in forward
[rank0]: hidden_states = self.c_proj(hidden_states)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1857, in _call_impl
[rank0]: return inner()
[rank0]: ^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1805, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/linear.py", line 125, in forward
[rank0]: return F.linear(input, self.weight, self.bias)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_compile.py", line 51, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_dispatch.py", line 160, in dispatch
[rank0]: self.sharding_propagator.propagate(op_info)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 266, in propagate
[rank0]: OutputSharding, self.propagate_op_sharding(op_info.schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 45, in __call__
[rank0]: return self.cache(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 279, in propagate_op_sharding_non_cached
[rank0]: out_tensor_meta = self._propagate_tensor_meta_non_cached(op_schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 126, in _propagate_tensor_meta_non_cached
[rank0]: fake_out = op_schema.op(*fake_args, **fake_kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[ra
|
https://github.com/huggingface/transformers/issues/40813
|
closed
|
[
"bug"
] | 2025-09-11T09:02:53Z
| 2025-09-15T08:46:33Z
| 1
|
greg-kwasniewski1
|
huggingface/lerobot
| 1,911
|
How to avoid re-writing cache data from pyarrow into parquet every time?
|
Hi Authors,
When using a LeRobot dataset in a PyTorch dataloader, the dataset writes a huge cache, converted from pyarrow to Apache Parquet. How can I avoid that?
I can think of two options:
1. Avoid the conversion and read directly from the parquet data. But this may lose reading performance.
2. Can we instead store the Parquet data?
Thanks.
Songlin
|
https://github.com/huggingface/lerobot/issues/1911
|
open
|
[] | 2025-09-10T22:19:25Z
| 2025-09-10T22:19:25Z
| null |
songlinwei-we
|
pytorch/pytorch
| 162,638
|
Gradient Clipping in Pipeline Parallelism Schedules
|
### 🚀 The feature, motivation and pitch
The current PP schedules like `Schedule1F1B` don't seem to have built-in gradient clipping support.
Is there a recommended approach for implementing gradient clipping in pipeline parallelism, and what would be the most efficient way to compute global gradient norms across sharded parameters?
Would it be possible to add gradient clipping as a built-in feature to the PP schedule classes with parameters like `grad_clip_norm` and `clip_interval`?
This would be really helpful for users who need gradient clipping in their PP training workflows, especially for scenarios where training stability is important.
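For reference, the usual manual recipe is to compute the local grad norm on each stage and combine it across the pipeline group before clipping; a rough sketch with assumed names (`pp_group` is the stage's process group, and the CUDA fallback device is an assumption):

```python
import torch
import torch.distributed as dist

def clip_grad_norm_across_pp(parameters, max_norm, pp_group, eps=1e-6):
    grads = [p.grad for p in parameters if p.grad is not None]
    # Sum of squared local norms; assumes at least one grad lives on this stage.
    local_sq = sum(g.float().norm(2) ** 2 for g in grads) if grads else torch.zeros((), device="cuda")
    # Sum squared norms over all pipeline stages to get the global norm.
    dist.all_reduce(local_sq, op=dist.ReduceOp.SUM, group=pp_group)
    total_norm = local_sq.sqrt()
    clip_coef = torch.clamp(max_norm / (total_norm + eps), max=1.0)
    for g in grads:
        g.mul_(clip_coef.to(g.dtype))
    return total_norm
```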
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
|
https://github.com/pytorch/pytorch/issues/162638
|
open
|
[
"oncall: distributed",
"module: autograd"
] | 2025-09-10T20:48:20Z
| 2025-09-11T15:12:36Z
| 0
|
nvlas
|
pytorch/pytorch
| 162,630
|
[RFC] Intrusive Caching DLPack for Fast Conversion
|
Currently DLPack is being used for Tensor data exchange. This conversion, which involves populating metadata such as shape, data pointer, and strides, can introduce a small but non-negligible overhead, typically in the range of 40-80 nanoseconds on the C++ side. While this latency is already quite low, frequent tensor exchanges—such as those involving model weights or intermediate values used multiple times—can accumulate this overhead.
This RFC addresses the question of whether this overhead can be further reduced, particularly in scenarios where the same tensor is converted to DLPack multiple times during its lifetime.
It does involve a change to the c10::TensorImpl data structure, so it likely needs to be done with care. This post puts the high-level idea out to the community to seek feedback.
## Proposal
We propose an approach that integrates a caching mechanism directly into the framework's tensor object. The high-level concept is as follows:
- **Cache Storage**: A std::unique_ptr<DLManagedTensorVersioned> will be added as a member field to the framework's tensor object (e.g., TensorImpl). This modification requires a change to the framework's internal tensor structure.
- **On-Demand Population**: When the ToDLPack conversion method is called for the first time on a given tensor, the DLManagedTensorVersioned object will be created and populated. The framework's internal metadata will be transferred, and the manager_ctx of the DLManagedTensorVersioned will be set to point back to the TensorImpl itself. The deleter will also be configured at this time.
- **Ref counting integration**
- To prevent the TensorImpl from being deallocated while a DLPack consumer holds a reference, a new reference will be added to the TensorImpl intrusive reference counter each time a DLManagedTensorVersioned is returned.
- The DLManagedTensorVersioned's deleter function will be configured to simply decrement the TensorObj's reference counter. This ensures that the TensorImpl and its cached DLManagedTensorVersioned are not deallocated until all DLPack and internal references are released.
- **Cache Reuse**: For subsequent calls to ToDLPack on the same tensor object, the cached DLManagedTensorVersioned will be directly returned. The only overhead will be a pointer lookup and a reference count increment, which is an extremely fast operation, measured to be **as low as 3.8 nanoseconds** in preliminary benchmarks.
## Thread Safety
In a C++ environment, different threads may concurrently write to the cached field, so it is important to consider thread safety and ensure that only one cached value is written and returned to the user. Here is an updated version, at a high level:
- Different threads can race to create their own DLManagedTensorVersioned when they find the cached field is nullptr
- Use atomic_compare_exchange_strong_explicit to ensure exactly one of the values gets stored, and only store it when the cached field is nullptr
- Always return the stored value; if the value was created by another thread, delete the current one and return the value created by the other thread
## Expected Benefits and Tradeoffs
- **Significant Performance Improvement**: This caching strategy can reduce the DLPack conversion overhead from 40-80ns to a mere 1ns for repeated conversions.
- **Reduced Redundancy**: Avoids repeated allocation and population of DLManagedTensorVersioned objects for the same tensor.
- **Minimal Cost**: The overhead of this approach is limited to one extra pointer field per tensor object, which is negligible given the typical size of tensor metadata and data.
## Example Implementation
The following C++ code snippet illustrates the proposed mechanism within a hypothetical TensorImpl class that uses intrusive reference counting.
```c++
#include <atomic>
// TensorImpl is a target of an intrusive ptr that contains a reference counter.
// in the context of PyTorch, based on my understanding,
// it would be c10::TensorImpl or something c10::TensorImpl holds like ExtraMeta
class TensorImpl : public intrusive_ptr_target<TensorImpl> {
public:
~TensorImpl() {
// deleting the cached dl managed tensor versioned
// We need to acquire the value in case it is released by another thread
// However, because this destructor is triggered as part of the intrusive pointer deletion
// there is already a memory fence in intrusive pointer deleter triggering to ensure
// all fields of the TensorImpl are visible here, so we do not have to do acquire, actually
// we can even do a non-atomic load here
DLManagedTensorVersioned* cached = cached_dl_managed_tensor_.load(
std::memory_order_relaxed);
if (cached != nullptr) {
delete cached;
}
}
/*!
* \brief Converts the current Tensor to a DLPack Tensor.
* \return The converted DLManagedTensorVersioned pointer.
*/
DLManagedTensorVersioned* ToDLPack() const {
// this function holds a strong reference to the TensorImpl
TensorImpl* self =
|
https://github.com/pytorch/pytorch/issues/162630
|
closed
|
[
"triaged",
"enhancement",
"module: dlpack"
] | 2025-09-10T20:00:53Z
| 2025-09-12T20:26:48Z
| 15
|
tqchen
|
pytorch/pytorch
| 162,606
|
Tensorpipe - ROCm support
|
Raising this issue to discuss the path forward for enabling the tensorpipe feature on ROCm.
Why it is required:
- UT gap: currently tensorpipe-related UTs are skipped on ROCm but executed for CUDA.
The Tensorpipe repo was archived a few years back and no changes were accepted. Recently (https://github.com/pytorch/tensorpipe/commits/main/) it was reopened.
As far as I know from discussing with @atalman, CI for tensorpipe has been removed.
So we want to discuss how to push changes to support it on ROCm.
Old PRs that tried to enable it for ROCm but were dropped for various reasons:
- https://github.com/pytorch/tensorpipe/pull/398
- https://github.com/pytorch/tensorpipe/pull/401
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @osalpekar @jiayisuse @lw @beauby @pritamdamania87 @mrshenli @jjlilley @gqchen @malfet @atalman @pragupta @dwiddows
|
https://github.com/pytorch/pytorch/issues/162606
|
open
|
[
"module: rocm",
"triaged",
"module: tensorpipe",
"rocm"
] | 2025-09-10T16:03:52Z
| 2025-12-17T02:56:09Z
| 8
|
pruthvistony
|
pytorch/ao
| 2,967
|
Deprecation for IntxWeightOnlyConfig/Int8DynamicActivationIntxWeightConfig (version 1) and the models
|
This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.
What is deprecated:
* IntxWeightOnlyConfig/Int8DynamicActivationIntxWeightConfig with version=1 is now deprecated. Please use version=2 (current default).
* Quantized checkpoints quantized with the version 1 config previously are deprecated as well, and we plan to remove support for loading these checkpoints after the pytorch 2.11 release (around 9 months from now)
Timeline:
0.14.0: announce deprecation for version 1 config
after all tensors are migrated: remove support for version 1 config
after pytorch 2.11 release: remove support for version 1 checkpoints
|
https://github.com/pytorch/ao/issues/2967
|
open
|
[] | 2025-09-09T20:35:13Z
| 2025-10-02T20:50:10Z
| 0
|
metascroy
|
pytorch/pytorch
| 162,512
|
Default Google Search to Off in docs
|
<img width="967" height="722" alt="Image" src="https://github.com/user-attachments/assets/820499bb-1237-4a9c-9946-71c67ef88f6d" />
Two comments on the search bar in the new UI:
1. It is inconvenient that the search bar is not on the same screen as the search results, so I cannot see both at the same time.
2. I searched "quantile", which in the new search bar yields no obvious results. Looking a little harder encourages me to click on the .diag result, which then is one more sidebar click away from what I'm actually looking for. Contrast this to the old experience, which directly suggested the right page.
<img width="1053" height="575" alt="Image" src="https://github.com/user-attachments/assets/46fe8b7f-13b7-43cb-8d3c-4c7cf5f3c36d" />
I'm slowly realizing that maybe this poor experience is just because the toggle in the search bar that says "Search Google" is on, and turning it off has been better. Should we turn Google Search off by default then?
cc @svekars @sekyondaMeta @AlannaBurke
|
https://github.com/pytorch/pytorch/issues/162512
|
open
|
[
"module: docs",
"triaged"
] | 2025-09-09T18:14:06Z
| 2025-09-09T18:24:50Z
| 1
|
janeyx99
|
huggingface/transformers
| 40,767
|
3D Object Detection Models
|
### Model description
Hi all,
Is there a reason 3D models like those in mmdet3d have not been implemented, or any other thread where this is discussed? I have not found any discussion.
Thanks
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
BEVFormer:
https://github.com/fundamentalvision/BEVFormer
|
https://github.com/huggingface/transformers/issues/40767
|
open
|
[
"New model"
] | 2025-09-09T13:16:33Z
| 2025-11-13T21:18:40Z
| 3
|
SeucheAchat9115
|
pytorch/pytorch
| 162,481
|
Inconsistent tracking of device activities when calling profiler.step() in torch profiler
|
### 🐛 Describe the bug
Here is a simple example of using profiler's scheduling functionality:
```python
import torch
def bench_kineto(fn, num_tests: int):
flush_l2_size = int(8e9 // 4)
schedule = torch.profiler.schedule(wait=0, warmup=1, active=1, repeat=1)
profiler = torch.profiler.profile(activities=[torch.profiler.ProfilerActivity.CUDA], schedule=schedule)
with profiler:
for i in range(2):
for _ in range(num_tests):
torch.empty(flush_l2_size, dtype=torch.int, device='cuda').zero_()
fn()
profiler.step()
print(num_tests)
print(profiler.key_averages().table(sort_by='cuda_time_total', max_name_column_width=50))
@torch.inference_mode()
def main():
torch.set_default_device("cuda")
torch.set_default_dtype(torch.bfloat16)
a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)
def func():
return a @ b
bench_kineto(func, 10)
bench_kineto(func, 10)
if __name__ == "__main__":
main()
```
The output is:
```text
10
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
void at::native::vectorized_elementwise_kernel<... 0.00% 0.000us 0.00% 0.000us 0.000us 45.480ms 99.71% 45.480ms 606.404us 75
nvjet_tst_128x64_64x8_1x2_h_bz_NNT 0.00% 0.000us 0.00% 0.000us 0.000us 130.304us 0.29% 130.304us 6.858us 19
cudaLaunchKernel 0.26% 118.601us 0.26% 118.601us 2.965us 0.000us 0.00% 0.000us 0.000us 40
cuLaunchKernelEx 0.08% 35.264us 0.08% 35.264us 3.526us 0.000us 0.00% 0.000us 0.000us 10
cudaDeviceSynchronize 99.66% 45.347ms 99.66% 45.347ms 45.347ms 0.000us 0.00% 0.000us 0.000us 1
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 45.501ms
Self CUDA time total: 45.611ms
10
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
void at::native::vectorized_elementwise_kernel<... 0.00% 0.000us 0.00% 0.000us 0.000us 47.279ms 99.71% 47.279ms 606.140us 78
nvjet_tst_128x64_64x8_1x2_h_bz_NNT 0.00% 0.000us 0.00% 0.000us 0.000us 137.343us 0.29% 137.343us 6.867us 20
cudaLaunchKernel 0.26% 121.269us 0.26% 121.269us 3.032us 0.000us 0.00% 0.000us 0.000us 40
cuLaunchKernelEx 0.08% 36.090us 0.08% 36.090us 3.609us 0.000us 0.00% 0.000us 0.000us 10
cudaDeviceSynchronize 99.67% 47.243ms 99.67% 47.243ms 47.243ms 0.000us 0.00% 0.000us 0.000us 1
-------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 47.401ms
Self CUDA time total: 47.416ms
```
What's problematic:
`nvjet_tst_128x64_64x8_1x2_h_bz_NNT` kernel is recorded 19 times or 20 times, while it should be 10 times by design.
Further analysis shows that, if I add `torch.cuda.synchronize()` before calling `profiler.step()`, it works as expected.
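For reference, the workaround described above is a one-line change inside the profiling loop of the snippet at the top of this report; this is a sketch of the adjusted loop, not a fix for the underlying behavior:
```python
with profiler:
    for i in range(2):
        for _ in range(num_tests):
            torch.empty(flush_l2_size, dtype=torch.int, device='cuda').zero_()
            fn()
        # Wait for all queued device work before stepping, so each active window
        # records exactly the kernels launched within it.
        torch.cuda.synchronize()
        profiler.step()
```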
Note: the code is adapted from https://docs.pytorch.org/tutorials/reci
|
https://github.com/pytorch/pytorch/issues/162481
|
open
|
[
"oncall: profiler"
] | 2025-09-09T11:58:11Z
| 2025-12-01T18:41:45Z
| 5
|
youkaichao
|
huggingface/lerobot
| 1,899
|
Has anyone tried to export the smolvla as onnx model for deployment?
|
I have tried the trained smolvla model on my PC and it works. I now want to deploy smolvla on our target board.
I looked into the model structure of smolvla; for the vision-encoder and language-embedding parts I can refer to smolvlm and export them as two onnx models. I think the robot state embedding also needs to be exported as a separate onnx model.
For the most important part of smolvla inference, I met several issues and have no good idea how to export it as an onnx model.
Has anyone tried and successfully exported the smolvla as onnx models for deployment? Thanks!
|
https://github.com/huggingface/lerobot/issues/1899
|
open
|
[
"question",
"policies",
"performance"
] | 2025-09-09T10:41:14Z
| 2025-10-07T20:50:12Z
| null |
TankerLee
|
huggingface/huggingface_hub
| 3,339
|
What is the best replacement of HfFileSystem.glob with HfApi
|
In some of our code, we were using something like
```python
hf_fs = HfFileSystem()
files = hf_fs.glob('my/repo/*/model.onnx')
```
But I found that HfFileSystem is much less stable than HfApi, especially in edge cases (e.g. an unstable network).
So what is the best replacement for HfFileSystem.glob using HfApi? Any suggestions?
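One possible replacement, assuming the repo is public and client-side pattern matching is acceptable: list the repo files with `HfApi` and filter them locally, as in this sketch (`my/repo` is the placeholder repo id from above):
```python
import fnmatch
from huggingface_hub import HfApi

api = HfApi()
# list_repo_files returns every file path in the repo, relative to the repo root.
all_files = api.list_repo_files("my/repo")
# Apply the glob pattern locally instead of through the filesystem abstraction.
onnx_files = fnmatch.filter(all_files, "*/model.onnx")
```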
|
https://github.com/huggingface/huggingface_hub/issues/3339
|
closed
|
[] | 2025-09-09T09:02:07Z
| 2025-09-15T09:12:04Z
| null |
narugo1992
|
huggingface/transformers
| 40,754
|
Potentially incorrect value assignment of Llama4TextModel's output in Llama4ForCausalLM's output?
|
### System Info
**System Info**
- `transformers` version: 4.55.4
- Platform: Linux-6.15.9-201.fc42.x86_64-x86_64-with-glibc2.41
- Python version: 3.13.5
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A6000
### Who can help?
@ArthurZucker
@amyeroberts
@qubvel
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
**Task Detail**
Obtaining hidden_states from the outputs of Llama4ForCausalLM
**Problem**
In the source code [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py), the outputs of Llama4ForCausalLM contains a *hidden_states* (See [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642)), which is assigned with *outputs.hidden_states*. Here, the *outputs* is the output of Llama4TextModel (See [line 619](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L619C9-L619C16)). However, the output of Llama4TextModel consists of a *last_hidden_state* (assigned the value of *hidden_states*) and a *past_key_values*, but no *hidden_states* (See [line 554-557](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L554-L557)).
Thus, I'm wondering if there is either a typo in [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642) where the *hidden_states=outputs.hidden_states* should be replaced by *hidden_states=outputs.last_hidden_state*, or a typo in [line 555](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L555C13-L555C45) where the *last_hidden_state=hidden_states* should be replaced by *hidden_states=hidden_states*?
Thank you for your patience!
### Expected behavior
An explanation or a correction of the source code in [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py)
|
https://github.com/huggingface/transformers/issues/40754
|
closed
|
[
"Usage",
"bug"
] | 2025-09-08T12:31:39Z
| 2025-09-16T19:25:03Z
| 3
|
st143575
|
huggingface/transformers
| 40,752
|
How to extract attention weights for the first generated token?
|
**Title:** Request for clarification: How to extract attention weights for the first generated token?
**Description:**
Hi, I'm trying to extract the attention weights **of the first generated token** (i.e., the first new token produced by `generate()`) with respect to the input prompt. However, I'm observing inconsistent behavior in the shape of `attentions` returned by `model.generate(..., output_attentions=True)`.
Here's what I found:
- For `step 0` (the first generation step), `attentions[0][layer].shape` is `(batch, heads, seq_len, seq_len)` — e.g., `[1, 16, 1178, 1178]`, where `seq_len` equals the input prompt length.
- This appears to be the **full self-attention matrix of the prompt context**, not the attention of the newly generated token.
- Starting from `step 1`, the shape becomes `(batch, heads, 1, ctx_len)`, which correctly represents the attention of a single generated token.
**Question:**
- Is there a way to directly extract the attention weights **from the first generated token** (i.e., the query of the first new token attending to the prompt keys)?
- Or is the intended behavior to use the last position of the context attention (i.e., `attentions[0][layer][..., -1, :]`) as a proxy for the generation decision?
**Use Case:**
I want to interpret which parts of the input prompt the model attends to when generating the first output token, for interpretability and analysis purposes.
**Environment:**
- Transformers version: [4.51.3]
- Model: [Qwen3]
- Code snippet:
```python
outputs = model.generate(
input_ids,
output_attentions=True,
return_dict_in_generate=True
)
# outputs.attentions[0][layer] has shape (1, 16, 1178, 1178)
```
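Based on the shapes described above, a minimal sketch of the proxy approach (whether the last prompt row is the right thing to use is exactly the open question here):
```python
layer = -1  # hypothetical choice: inspect the last decoder layer

# Step 0: full prompt self-attention, shape (batch, heads, seq_len, seq_len).
# The last query row corresponds to the position that produces the first new token.
first_token_attn = outputs.attentions[0][layer][:, :, -1, :]   # (batch, heads, seq_len)

# Step 1 onward: each step already has a single query position,
# shape (batch, heads, 1, ctx_len).
second_token_attn = outputs.attentions[1][layer][:, :, 0, :]
```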
|
https://github.com/huggingface/transformers/issues/40752
|
closed
|
[] | 2025-09-08T09:53:16Z
| 2025-09-08T11:41:22Z
| null |
VincentLHH
|
huggingface/transformers.js
| 1,407
|
Expected time to load a super-resolution model locally
|
### Question
Loading a image super-resolution model locally can take more than 10 seconds on my MacBook Pro (M1 Max). Is this expected behavior?
```javascript
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.backends.onnx.wasm.wasmPaths = `/wasm/`;
const upscaler = ref(null);
onMounted(async () => {
upscaler.value = await pipeline('image-to-image', 'Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr', {
dtype: 'fp32',
device: 'webgpu',
})
});
```
Warnings observed during the model loading:
```
ort-wasm-simd-threaded.jsep.mjs:100
2025-09-08 13:58:52.881399 [W:onnxruntime:, session_state.cc:1280 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
ort-wasm-simd-threaded.jsep.mjs:100
2025-09-08 13:58:52.882499 [W:onnxruntime:, session_state.cc:1282 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
```
### System Info
npm: @huggingface/transformers@3.7.2
OS: macOS Sequoia 15.6.1
model: Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr
|
https://github.com/huggingface/transformers.js/issues/1407
|
closed
|
[
"question"
] | 2025-09-08T06:26:49Z
| 2025-09-30T19:22:34Z
| null |
ymtoo
|
huggingface/lerobot
| 1,891
|
How to checkout a commit id?
|
The underlying datasets supports a "revision" flag. Does lerobot?
|
https://github.com/huggingface/lerobot/issues/1891
|
closed
|
[] | 2025-09-08T04:39:37Z
| 2025-09-10T22:53:18Z
| null |
richardrl
|
huggingface/transformers
| 40,743
|
Support for 4D attention mask for T5
|
### Feature request
Currently, T5 cannot take 4D attention masks (batch_size, num_heads, seq_len, seq_len) as inputs. Passing a 4D attention_mask and a 4D decoder_attention_mask like so leads to a shape-related exception :
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("Where is", return_tensors="pt").input_ids
decoder_input_ids = tokenizer("<pad>", return_tensors="pt").input_ids
batch_size, seq_len = input_ids.shape
tgt_len = decoder_input_ids.shape[1]
num_heads = model.config.num_heads
attention_mask = torch.ones(batch_size, num_heads, seq_len, seq_len)
decoder_attention_mask = torch.ones(batch_size, num_heads, tgt_len, tgt_len).tril(0)
model(
input_ids,
decoder_input_ids=decoder_input_ids,
attention_mask=attention_mask,
decoder_attention_mask=decoder_attention_mask,
)
```
One of the problems in the current code is in the handling of the cross-attention mask. Currently, it is created using the 1D encoder attention mask when supplied. However, in the case of a 4D mask, it seems unclear how to correctly use the encoder mask: therefore, the best solution might be to introduce a new 4D mask argument `cross_attention_mask` of shape `(batch_size, num_heads, tgt_len, seq_len)`. This lets the user control all attention masks if necessary.
### Motivation
4D masks are useful for many purposes, as outlined by #27539 and [this blog post](https://huggingface.co/blog/poedator/4d-masks), but not all models support them.
### Your contribution
I propose to fix the code to handle 4D attention masks, and to add a new `cross_attention_mask` argument to add the possibility to control the cross attention mask manually. I wrote a version of that code in [this fork](https://github.com/Aethor/transformers/tree/t5-4d-attention-mask).
I'm happy to create a PR with my code, but:
1. This is my first transformers contribution, I need help with some things such as handling the "Copy" code duplication mechanism of transformers. Should other similar models with copied functions from T5 be changed as well?
2. Although I wrote a [first test with trivial masks](https://github.com/Aethor/transformers/blob/22dc62edbdbc3f2afeb90a31c75047711c1afc5c/tests/models/t5/test_modeling_t5.py#L1876), I am not entirely sure how to test this
3. I want to be sure that adding the new `cross_attention` mask parameter is the right way to do this and will be approved
|
https://github.com/huggingface/transformers/issues/40743
|
open
|
[
"Feature request"
] | 2025-09-07T07:18:05Z
| 2025-09-09T11:43:33Z
| 5
|
Aethor
|
huggingface/lerobot
| 1,882
|
Pretrain - Code for pretraining smolvla
|
## Guidance on Replicating the Pre-training Process with Community Datasets
Hi team,
First off, thank you for the fantastic work on SmolVLA and for open-sourcing the model and code. It's a great contribution to the community.
I am trying to replicate the pre-training process as described in the original paper. I have located the pre-training data on the Hugging Face Hub, specifically:
- `HuggingFaceVLA/community_dataset_v1`
- `HuggingFaceVLA/community_dataset_v2`
My plan is to download both datasets and merge them into a single directory, for example `/path/to/my/pretrain_data/`, to serve as the input for the pre-training script.
To ensure I am on the right track, I would be grateful if you could provide some guidance on the following points:
1: **Data Preparation & Merging**: Regarding the two datasets (community_dataset_v1 and v2), what is the correct procedure for using them together? Should I manually download and merge their contents into a single local directory? I also noticed the data is in a multi-directory (sharded) format, unlike many simpler single-folder datasets. Does the training code handle this structure automatically once the data is prepared locally?
2: **Dataset Configuration**: How should the combined dataset be specified in the configuration file? My main confusion is that the parameter dataset.repo_id appears to be a required field that accepts a single repository ID. How can I configure the training script to use the merged data from both v1 and v2, which I have stored locally?
3: **Training Script & Execution**: Once the data is correctly prepared and configured, could you please point me to the exact script and provide an example command to launch the pre-training? Since the VLM weights are initialized, what I need is the script that, after initializing the VLM weights, trains on the large-scale community dataset. In particular, what should `dataset.repo_id` be if I store v1 and v2 under the same folder, since I discovered this param cannot be None?
Any help or pointers to the relevant documentation would be greatly appreciated. I believe a short tutorial or a section in the README on pre-training would also be immensely helpful for others in the community looking to build upon your work.
Thank you for your time and consideration!
|
https://github.com/huggingface/lerobot/issues/1882
|
closed
|
[
"question",
"dataset"
] | 2025-09-07T03:18:04Z
| 2025-09-23T09:06:13Z
| null |
ruiheng123
|
pytorch/ao
| 2,948
|
Deprecation for Int4WeightOnlyConfig (version 1) and the models
|
This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.
What is deprecated:
* We added the version 2 Int4WeightOnlyConfig in various PRs in https://github.com/pytorch/ao/issues/2752 and switched the default version to 2 in https://github.com/pytorch/ao/pull/2949. The version 1 config is now deprecated; please use the version 2 config to quantize the model.
* Checkpoints previously quantized with the version 1 config are deprecated as well, and we plan to remove support for loading these checkpoints after the pytorch 2.11 release (around 9 months from now)
Timeline:
0.14.0: announce deprecation for version 1 config
after all tensors are migrated: remove support for version 1 config
after pytorch 2.11 release: remove support for version 1 checkpoints
|
https://github.com/pytorch/ao/issues/2948
|
open
|
[
"tracker"
] | 2025-09-05T23:31:36Z
| 2025-10-02T20:49:44Z
| 0
|
jerryzh168
|
huggingface/transformers
| 40,708
|
When using a custom model, it copies the code into Hugging Face’s cache directory.
|
```
model = AutoModel.from_pretrained(
model_args.model_name_or_path,
trust_remote_code=True,
torch_dtype=compute_dtype,
device_map=device_map,
# init_vision=True,
# init_audio=False,
# init_tts=False,
)
```
`model_args.model_name_or_path=/mnt/241hdd/wzr/MiniCPM-V-CookBook/MiniCPM-V-4_5`
The code actually runs in `/root/.cache/huggingface/modules/transformers_modules/MiniCPM-V-4_5`.
This makes my debugging difficult.
Is there a way to run the code directly?
|
https://github.com/huggingface/transformers/issues/40708
|
closed
|
[] | 2025-09-05T07:21:40Z
| 2025-11-15T08:03:16Z
| 4
|
wzr0108
|
huggingface/transformers
| 40,690
|
Batches loaded from wrong epoch when resuming from second epoch
|
### System Info
**Required system information**
```text
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): 2.15.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: GRID A100D-16C
```
### Who can help?
@zach-huggingface @SunMarc as it concerns `transfomers`' `Trainer`
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
### **1. Bug description**
Let's take the example of the provided script:
- number of data points: 10
- batch size: 2
So 1 epoch = 5 steps.
If we launch a training until the end and monitor the data order:
- epoch 0: 4, 1, 7, 5, 3, 9, 0, 8, 6, 2
- epoch 1: 5, 6, **|| 1, 2, 0, 8, 9, 3, 7, 4**
- epoch 2: 8, 7, 1, 5, 6, 9, 0, 4, 2, 3
But if we stop the training at step 6 and resume (from character `||`) the training to the end, we get the following data order:
- epoch 0: 4, 1, _7, 5, 3, 9, 0, 8, 6, 2_
- epoch 1: 5, 6 **|| 7, 5, 3, 9, 0, 8, 6, 2**
- epoch 2: 8, 7, 1, 5, 6, 9, 0, 4, 2, 3
We spotted that the `epoch_dataloader.iteration` is not properly set for the first epoch after resuming. It is initially set to 0, this is why it loads the same order as in epoch 0 (cf data order in italic of the last 4 batches of epoch 0).
### **2. Reproducing the error**
The script to run is available at https://github.com/ngazagna-qc/transformers/blob/fix-data-order-resumed-epoch/reproduce_wrong_resumed_epoch.py.
Run:
```shell
python reproduce_wrong_resumed_epoch.py --trainer-class Trainer
```
### Expected behavior
### **3. Bug fix**
We provide the fixed `Trainer` here: https://github.com/ngazagna-qc/transformers/blob/fix-data-order-resumed-epoch/src/transformers/trainer_fixed.py#L56
The fix only consists of adding a line to the `_inner_training_loop` method:
```python
if steps_trained_in_current_epoch > 0:
epoch_dataloader = skip_first_batches(epoch_dataloader, steps_trained_in_current_epoch)
#### BEGINNING OF THE FIX ####
epoch_dataloader.iteration = epochs_trained # FIX: set dataloader to correct epoch
#### END OF THE FIX ####
steps_skipped = steps_trained_in_current_epoch
steps_trained_in_current_epoch = 0
rng_to_sync = True
```
It can be tested that this solves the order by running:
```shell
python reproduce_wrong_resumed_epoch.py --trainer-class TrainerFixed
```
|
https://github.com/huggingface/transformers/issues/40690
|
closed
|
[
"bug"
] | 2025-09-04T11:48:41Z
| 2025-12-03T13:14:04Z
| 6
|
ngazagna-qc
|
huggingface/optimum
| 2,347
|
Gemma3n convert to onnx format
|
Hello,
How do I convert the Gemma3n model to the ONNX format using the Optimum CLI?
Thanks in advance.
|
https://github.com/huggingface/optimum/issues/2347
|
closed
|
[
"Stale"
] | 2025-09-04T09:13:19Z
| 2025-10-15T02:09:55Z
| 2
|
shahizat
|
huggingface/transformers
| 40,680
|
Idea: Exploring Mathematical Extensions for GPT-style Models (teaser)
|
Hi Transformers team 👋,
I’ve been experimenting with a conceptual enhancement to GPT-style architectures—introducing mathematical mechanisms for memory and adaptive learning—while keeping the overall transformer backbone intact.
I’ve documented the approach in Markdown (README + comparison notes), but haven’t published it yet. Before I share more, I’d love your input:
- Does this kind of experimental idea fit within the scope of Transformers?
- Would you be open to viewing or discussing the draft privately?
Looking forward to hearing your thoughts 🙏
|
https://github.com/huggingface/transformers/issues/40680
|
closed
|
[] | 2025-09-04T07:23:29Z
| 2025-10-12T08:02:38Z
| 3
|
muzamil-ashiq
|
pytorch/torchtitan
| 1,680
|
How is SDPA TP parallelized?
|
In llama3, the TransformerBlock is TP parallelized [here](https://github.com/pytorch/torchtitan/blob/21799393c3e6dc710e694ef1a65852f2136ba58d/torchtitan/models/llama3/infra/parallelize.py#L204). However, I do not see any specific TP parallelization for scaled_dot_product. How is SDPA TP parallelized, then?
|
https://github.com/pytorch/torchtitan/issues/1680
|
open
|
[] | 2025-09-04T03:23:27Z
| 2025-09-04T22:11:08Z
| 2
|
githubsgi
|
huggingface/transformers
| 40,647
|
how to get response text during training
|
I want to obtain the inferred output text during the evaluation step in the training process, not just the eval loss.
<img width="1264" height="211" alt="Image" src="https://github.com/user-attachments/assets/9dd432c5-74ea-4290-adff-7865cf3ea481" />
|
https://github.com/huggingface/transformers/issues/40647
|
closed
|
[] | 2025-09-03T10:37:51Z
| 2025-10-12T08:02:43Z
| null |
zyandtom
|
huggingface/diffusers
| 12,276
|
The image is blurry.
|
How to solve image blurriness during fine-tuning?
|
https://github.com/huggingface/diffusers/issues/12276
|
open
|
[] | 2025-09-03T08:29:38Z
| 2025-09-03T08:29:38Z
| 0
|
sucessfullys
|
huggingface/gym-hil
| 32
|
how to perform hil in sim
|
https://github.com/huggingface/gym-hil/issues/32
|
closed
|
[] | 2025-09-02T17:10:05Z
| 2025-09-16T14:02:32Z
| null |
prathamv0811
|
|
pytorch/vision
| 9,202
|
torch thread yield after launch nccl kernel
|
### 🐛 Describe the bug
I'm using torch to benchmark nccl performance. The default nccl version that torch uses is 2.21.5. With default setting, the performance looks normal.
Then I use LD_PRELOAD to use the latest nccl version 2.27.7 instead, and the performance degrades drastically.
nsys shows that with nccl 2.27.7, the thread yields after every nccl call, very close to the kernel launch. The yield of the torch thread induces launch skew, which causes performance to drop.
<img width="1435" height="315" alt="Image" src="https://github.com/user-attachments/assets/92fc747c-b68e-4112-96a1-346dac9ea704" />
but with nccl 2.21.5 the thread won't yield, and the benchmark performance looks normal.
<img width="1620" height="313" alt="Image" src="https://github.com/user-attachments/assets/13197b34-2291-441a-b695-b04597f383d6" />
I've checked the torch source code in ProcessGroupNCCL.cpp and distributed_c10d.py, but I haven't found a clue.
How can I get the right benchmark performance with nccl 2.27.7?
source code:
[bench.py](https://github.com/user-attachments/files/22094686/bench.py)
command:
```
# bench.py
/usr/local/bin/mpirun --allow-run-as-root -np 8 \
-x LD_PRELOAD=/root/nccl/build/lib/libnccl.so.2.27.7 \
python3 ./bench.py -b 8k -e 1024m -f 2 -n 100 -w 5 --op all_reduce
# nccl-tests
/usr/local/bin/mpirun --allow-run-as-root -np 8 \
-x LD_PRELOAD=/root/nccl/build/lib/libnccl.so.2.27.7 \
/root/nccl-tests/build/all_reduce_perf -b 8k -e 1024m -f 2 -n 100 -w 5
```
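Since bench.py is only attached, here is a generic sketch of the kind of timing loop it presumably contains, not the attached script itself; it assumes a torchrun-style launch where rank and world size come from environment variables:
```python
import time
import torch
import torch.distributed as dist

dist.init_process_group("nccl")  # assumes MASTER_ADDR/RANK/WORLD_SIZE are set by the launcher
rank = dist.get_rank()
torch.cuda.set_device(rank % torch.cuda.device_count())
x = torch.ones(256 * 1024 * 1024 // 4, device="cuda")  # 256 MiB of float32

for _ in range(5):                       # warmup iterations
    dist.all_reduce(x)
torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(100):
    dist.all_reduce(x)
torch.cuda.synchronize()                 # wait for all collectives before timing stops
if rank == 0:
    print(f"avg all_reduce time: {(time.perf_counter() - start) / 100 * 1e3:.3f} ms")
dist.destroy_process_group()
```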
### Versions
Versions
```
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, May 27 2025, 17:12:29) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-3.0.3.kwai.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800
GPU 2: NVIDIA H800
GPU 3: NVIDIA H800
GPU 4: NVIDIA H800
GPU 5: NVIDIA H800
GPU 6: NVIDIA H800
GPU 7: NVIDIA H800
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel
Model name: Intel(R) Xeon(R) Platinum 8468
BIOS Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166
|
https://github.com/pytorch/vision/issues/9202
|
closed
|
[] | 2025-09-02T13:09:26Z
| 2025-09-02T13:44:52Z
| 1
|
tobi1031
|
huggingface/transformers
| 40,606
|
GPT-OSS attention backends available for SM120 other than Eager?
|
I was wondering whether there is any attention backend we can use for long context on an SM120 GPU. The "eager_attention_forward" uses the naive implementation that computes the full attention in one go, which can lead to OOM for long contexts, but I couldn't use other implementations since they either do not support sinks or do not support SM120.
Many thanks!
|
https://github.com/huggingface/transformers/issues/40606
|
closed
|
[] | 2025-09-02T03:21:16Z
| 2025-10-12T08:02:48Z
| 4
|
TheTinyTeddy
|
pytorch/TensorRT
| 3,803
|
Performance Issue when using tools/llm
|
## ❓ Question
<!-- Your question -->
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.8.0
- CPU Architecture: amd
- OS (e.g., Linux): ubuntu 22.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): NO
- Are you using local sources or building from archives: NO
- Python version: 3.10
- CUDA version: 12.8
- GPU models and configuration: NVIDIA
- Any other relevant information: directly use torch-tensorrt 2.8.0 wheel with github 2.8.0 tag to run tools/llm
## Additional context
Hi there, I tried to use tools/llm with static_cache_v2 to run the qwen2.5 model, using the following command:
python run_llm.py --model Qwen/Qwen2.5-0.5B-Instruct --prompt "What is parallel programming?" --precision FP16 --num_tokens 128 --cache static_v2 --benchmark
When I use Nsight Systems to profile, I found that using static_cache_v2 brings launch overhead to the tensorrt engine in each prefill/decode block. Do you have this problem too? I think this overhead is too large; it almost makes torch-tensorrt the same speed as just enabling torch.compile.
Here is the nsys profiling result: the red line shows approximately 1.7 ms of overhead with no GPU activity at all (when disabling static_cache_v2 there are no such bubbles; maybe this is caused by shape copies or other operators introduced by static_cache_v2?)
<img width="1488" height="942" alt="Image" src="https://github.com/user-attachments/assets/394800a7-cd8e-40ff-abbf-9a2a4b928aeb" />
looking forward to your reply, thanks a lot!
|
https://github.com/pytorch/TensorRT/issues/3803
|
open
|
[
"question"
] | 2025-09-01T17:10:38Z
| 2025-09-04T08:43:24Z
| null |
ChiikawaSama
|
huggingface/peft
| 2,764
|
merge_and_unload returns the base (prior to fine-tuning) back!!!!
|
I have fine-tuned a model using PEFT and now I want to merge the adapter into the base model. This is what I am doing:
```
base_model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')
model_finetuned = PeftModel.from_pretrained(base_model, adapter_path)
```
Now the size of `model_finetuned` is roughly 42GB, but when I do the following to merge the adapter into the base:
`merged_model = model_finetuned.merge_and_unload()`
the size of `merged_model` is 36GB and its performance is like the base model; it seems the adapter effect is gone.
I remember using this feature in the past to get a merged model; has anything changed?
This is a related post, where the last comment says this is normal; can someone elaborate?
https://github.com/huggingface/peft/issues/868
Can I just save `model_finetuned` as my merged model? Can someone explain what is going on and why merge_and_unload() is doing the opposite of what it is supposed to do.
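For reference, a minimal sketch of the intended load, merge, and save flow; `model_id`, `adapter_path`, and the output directory are placeholders:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
model_finetuned = PeftModel.from_pretrained(base_model, adapter_path)

# merge_and_unload folds the adapter deltas into the base weights and
# returns a plain transformers model with no PEFT wrappers.
merged_model = model_finetuned.merge_and_unload()
merged_model.save_pretrained("merged-model")  # hypothetical output directory
```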
|
https://github.com/huggingface/peft/issues/2764
|
closed
|
[] | 2025-09-01T04:07:36Z
| 2025-10-09T15:26:15Z
| 12
|
manitadayon
|
huggingface/lerobot
| 1,822
|
As of 08/31/2025, how do you create a v2.1 dataset from raw data?
|
My search is cursory, but I can't find any tutorial or example on creating a v2.1 dataset on the main branch. So, how do you create a Lerobot dataset in the current version? Should I refer to older commits?
|
https://github.com/huggingface/lerobot/issues/1822
|
open
|
[
"question",
"dataset"
] | 2025-08-31T18:29:34Z
| 2025-10-08T13:02:44Z
| null |
IrvingF7
|
huggingface/text-generation-inference
| 3,318
|
Infinite tool call loop: `HuggingFaceModel` and `text-generation-inference`
|
## Description
Hello. Needless to say, amazing library. Please let me know if you'd like me to try something or if you need more info.
I've been going through various local model providers trying to find one that works well, when I cam across a rather shocking bug when running against Huggingface's TGI model host.
The problem appears whether using the OpenAI "compatible" endpoints or the `HuggingfaceModel` with custom `AsyncInferenceClient` and `HuggingFaceProvider`. The latter probably being the official approach, the code included here will be using that.
## System Info
`curl 127.0.0.1:8080/info | jq`:
```json
{
"model_id": "/models/meta-llama/Meta-Llama-3-8B-Instruct",
"model_sha": null,
"model_pipeline_tag": null,
"max_concurrent_requests": 128,
"max_best_of": 2,
"max_stop_sequences": 4,
"max_input_tokens": 8191,
"max_total_tokens": 8192,
"validation_workers": 2,
"max_client_batch_size": 4,
"router": "text-generation-router",
"version": "3.3.4-dev0",
"sha": "9f38d9305168f4b47c8c46b573f5b2c07881281d",
"docker_label": "sha-9f38d93"
}
```
`nvidia-smi`:
```shell
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.05 Driver Version: 575.64.05 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 On | Off |
| 40% 54C P2 61W / 450W | 21499MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 Off | 00000000:48:00.0 Off | Off |
| 30% 43C P2 52W / 450W | 21394MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
## Reproduction
### Setup
Here's the `docker-compose.yaml` I'm using to start TGI:
```yaml
services:
text-generation-inference:
image: ghcr.io/huggingface/text-generation-inference:latest
container_name: tgi
ports:
- "8081:80"
volumes:
- ../../../models:/models:ro
- tgi-data:/data
environment:
- RUST_LOG=info
# I have also tested with 3.1-8B and 3.2-3B with the same end results
command: >
--model-id /models/meta-llama/Meta-Llama-3-8B-Instruct
--hostname 0.0.0.0
--port 80
--trust-remote-code
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ["0", "1"]
capabilities: [gpu]
shm_size: "64g"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes:
tgi-data:
driver: local
```
### Code
All code is running in a Jupyter notebook.
Here's the common setup cell:
```python
from huggingface_hub import AsyncInferenceClient
from pydantic_ai.models.huggingface import HuggingFaceModel
from pydantic_ai.providers.huggingface import HuggingFaceProvider
from pydantic_ai.providers.openai import OpenAIProvider
provider = OpenAIProvider(base_url="http://localhost:8081/v1") # Just used to get the model slug
models = await provider.client.models.list()
client = AsyncInferenceClient(base_url="http://localhost:8081/")
print(f"Connected to TGI. Available models: {len(models.data)}")
for model in models.data:
print(f" - {model.id}")
# Create the model instance
agent_model = HuggingFaceModel(
models.data[0].id,
provider=HuggingFaceProvider(hf_client=client, api_key="None"),
# Annoyingly, despite this being basically the default profile, Llama 3's tool calls often fall through to the response without this
profile=ModelProfile(
supports_tools=True,
json_schema_transformer=InlineDefsJsonSchemaTransformer
)
)
```
### Working: Basic requests and history
1. Create the basic agent
```python
from pydantic_ai import Agent
simple_agent = Agent(model=agent_model)
```
2. Make a simple request
```python
simple_result = await simple_agent.run("Tell me a joke.")
simple_result.output # "Why couldn't the bicycle stand up by itself?\n\nBecau
|
https://github.com/huggingface/text-generation-inference/issues/3318
|
open
|
[] | 2025-08-31T08:23:46Z
| 2025-08-31T08:58:13Z
| 1
|
baughmann
|
pytorch/audio
| 4,076
|
[STABLE ABI] Porting rir/rir.cpp rir/ray_tracing.cpp
|
This issue collects tasks that block porting [rir/rir.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/rir/rir.cpp) and [rir/ray_tracing.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/rir/ray_tracing.cpp) to use torch stable ABI.
- [ ] implement `mutable_data_ptr<T>()` and `const_data_ptr<T>()` in torch/csrc/stable/tensor_struct.h. For instance, this simplifies porting of expressions like `tensor.data_ptr<scalar_t>()`. Currently, one needs to rewrite this as `reinterpret_cast<scalar_t*>(tensor.data_ptr())` where tensor is a `torch::stable::Tensor`. Not really a blocker but would be nice to have.
Fix available: https://github.com/pytorch/pytorch/pull/161891
- [ ] import `arange` as a stable/ops.h factory function
- [ ] implement `torch::fft::fftshift` and `torch::fft::irfft` as a stable/ops.h operation
Resolution: delete rir/ray_tracing.cpp as unused
- [ ] implement `index` as a `torch::stable::Tensor` method. Can we use torch::indexing::Slice() in torch stable ABI code?
- [ ] expose `AT_DISPATCH_FLOATING_TYPES_AND_HALF` and `AT_DISPATCH_FLOATING_TYPES` to stable ABI. Not really a blocker but would be nice to have.
For a workaround, see https://github.com/pytorch/audio/issues/4078
- [ ] implement `zeros` and `full` as a `stable/ops.h` factory functions. Currently, one can use `new_empty` and `fill_` to mimic these functions. Not really a blocker but would be nice to have.
- [ ] implement `tensor` as a `stable/ops.h` factory function. Currently, one can use `new_empty` but it is really clumsy to mimic `tensor`, especially for CUDA tensors.
- [ ] implement `dot`, `norm`, and `max` as a `torch::stable::Tensor` method or a `stable/ops.h` operation
- [ ] implement `item<T>()` as a `torch::stable::Tensor` template method
For a workaround, see https://github.com/pytorch/audio/issues/4078
^ @NicolasHug @scotts @janeyx99
|
https://github.com/pytorch/audio/issues/4076
|
closed
|
[] | 2025-08-30T19:46:50Z
| 2025-11-04T11:34:21Z
| 2
|
pearu
|
pytorch/audio
| 4,075
|
[STABLE ABI] Porting overdrive.cpp
|
This issue collects tasks that block porting [overdrive.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/overdrive.cpp) to use torch stable ABI.
- [x] implement `accessor` template as a `torch::stable::Tensor` template method
Fix available: https://github.com/pytorch/pytorch/pull/161967
- [x] can we use `at::parallel_for` in torch stable ABI code?
- [x] expose `AT_DISPATCH_FLOATING_TYPES` to stable ABI; currently one needs to implement the dispatch logic using a `switch` block. Not a blocker but would be nice to have.
For a workaround, see https://github.com/pytorch/audio/issues/4078
^ @NicolasHug @scotts @janeyx99
|
https://github.com/pytorch/audio/issues/4075
|
closed
|
[] | 2025-08-30T19:23:39Z
| 2025-11-20T14:17:04Z
| 0
|
pearu
|
pytorch/audio
| 4,074
|
[STABLE ABI] Porting lfilter.cpp
|
This issue collects tasks that block porting [lfilter.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/lfilter.cpp) to use torch stable ABI.
- [x] implement `mutable_data_ptr<T>()` and `const_data_ptr<T>()` in torch/csrc/stable/tensor_struct.h. For instance, this simplifies porting of expressions like `tensor.data_ptr<scalar_t>()`. Currently, one needs to rewrite this as `reinterpret_cast<scalar_t*>(tensor.data_ptr())` where `tensor` is a `torch::stable::Tensor`.
Fix available: https://github.com/pytorch/pytorch/pull/161891
- [x] can we use `at::parallel_for` in torch stable ABI code?
- [x] implement `unsqueeze` as a `stable/ops.h` operation
- [x] implement `select` as a `stable/ops.h` operation
- [x] implement `at::matmul` as a `stable/ops.h` operation
- [x] implement `index_put_` as `torch::stable::Tensor` method or a `stable/ops.h` operation. Can we use `torch::indexing::Slice()` in torch stable ABI code?
^ @NicolasHug @scotts @janeyx99
|
https://github.com/pytorch/audio/issues/4074
|
closed
|
[] | 2025-08-30T19:13:55Z
| 2025-12-01T09:41:54Z
| 4
|
pearu
|
pytorch/ao
| 2,914
|
Support for LR-QAT
|
Qualcomm research proposed a technique LR-QAT in their paper "Low-Rank Quantization-Aware Training for LLMs".
The core idea is that the low-rank weights are placed within the quantization grid of the model's weights using a custom downcasting operator.
The unique advantage of this is that it allows for a low rank adapter to control for the impact of quantization while still being absorbed into the main weights at inference, meaning that there's no inference overhead of the technique (and no lossy upcast to merge a LoRA adapter with, for example, NF4 weights in something like QLoRA).
Once a language model has been optimized under this framework, it's still suitable for further fine tuning, meaning that if one self distills a single target model using LR-QAT, it can be trained for a variety of downstream applications.
The memory use is quite favorable (relatively comparable to Q-LoRA), but has a variety of advantages to downstream inference usage.
Now, the good things out of the way, there's a few problems:
- The upcasting operator is a bit of development overhead. It requires a completely bespoke LoRA implementation that's not, to my eye, suitable for integration with existing tools.
- While a lot of logic is shared, I don't think the quantization grid logic will cleanly map into existing code.
- There's also some extra fixed point operators that are going to be a bit of a migraine to deal with.
- While memory-cheap, there is some computational overhead to the technique. I still think it's interesting, and has a lot of really favorable properties, but it's worth bearing in mind.
So, is there any possibility of or interest in adopting this technique within TorchAO? It's a fairly accessible recipe (particularly for end developers) and its inclusion could mean a fairly rich library of accessible model checkpoints up to about 24B parameters in size (as that's around the limit of what I think most developers will be able to optimize on GPUs at home), and my intuition is that even up to around 70B dense models should be accessible on fairly cheap GPU instances, as well.
So far as MoE, I think it'd take a lot of consideration because there's already a lot of ecosystem growing pains surrounding them (see: ongoing issues with expert dispatch in the Huggingface Transformers ecosystem which has been inherited by most finetuning frameworks with the notable exception of Torchtune), and many existing implementations have poor support / prospects for funky operators (particularly LoRA, etc).
|
https://github.com/pytorch/ao/issues/2914
|
open
|
[] | 2025-08-30T18:16:10Z
| 2025-09-04T01:14:35Z
| 1
|
Juahyori
|
huggingface/diffusers
| 12,257
|
[Looking for community contribution] support Wan 2.2 S2V: an audio-driven cinematic video generation model
|
We're super excited about the Wan 2.2 S2V (Speech-to-Video) model and want to get it integrated into Diffusers! This would be an amazing addition, and we're looking for experienced community contributors to help make this happen.
- **Project Page**: https://humanaigc.github.io/wan-s2v-webpage/
- **Source Code**: https://github.com/Wan-Video/Wan2.2#run-speech-to-video-generation
- **Model Weights**: https://huggingface.co/Wan-AI/Wan2.2-S2V-14B
This is a priority for us, so we will try to review fast and actively collaborate with you throughout the process :)
|
https://github.com/huggingface/diffusers/issues/12257
|
open
|
[
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-29T08:04:43Z
| 2025-08-29T10:23:52Z
| 0
|
yiyixuxu
|
pytorch/torchtitan
| 1,661
|
CPU Mode Request
|
Hi all, just getting in to using Torch Titan and have really loved it! One thing I personally would find useful is the ability to do small day to day development on my laptop in a CPU mode. I realize that TorchTitan is a distributed training repo, but I think a lot of researchers would still find a CPU dev/debug mode useful (VMs are expensive for just tracking down my latest brand of random bugs that I introduce to my code base 😅 )
Would there be an appetite for cpu only compatibility? Happy to make a PR as I will be doing this for my own fork.
Thanks,
Donal
|
https://github.com/pytorch/torchtitan/issues/1661
|
open
|
[
"question"
] | 2025-08-29T07:48:31Z
| 2025-08-29T17:32:46Z
| null |
djbyrne
|
pytorch/executorch
| 13,787
|
How to enable XNN_ENABLE_SPARSE in Executorch
|
### 🚀 The feature, motivation and pitch
I would like to ask if there is any plan to support XNN_ENABLE_SPARSE in Executorch.
I am working on a model that contains a significant amount of sparse operations, and I believe enabling XNN_ENABLE_SPARSE could lead to a substantial performance improvement.
Is this feature currently supported? If not, are there any plans to add this in the future roadmap? Any guidance on how to enable it or potential workarounds would be greatly appreciated.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
cc @digantdesai @mcr229 @cbilgin
|
https://github.com/pytorch/executorch/issues/13787
|
open
|
[
"module: xnnpack"
] | 2025-08-29T04:04:39Z
| 2025-09-08T16:32:36Z
| null |
HKLee2040
|
huggingface/optimum-onnx
| 44
|
How to use streaming inference for onnx models exported from QWEN3-4B models
|
How to use streaming inference for onnx models exported from QWEN3-4B models
|
https://github.com/huggingface/optimum-onnx/issues/44
|
closed
|
[] | 2025-08-29T01:48:07Z
| 2025-10-06T12:29:34Z
| null |
williamlzw
|
huggingface/diffusers
| 12,255
|
[BUG] Misleading ValueError when subclassing StableDiffusionImg2ImgPipeline with a mismatched __init__ signature
|
### Describe the bug
When subclassing diffusers.StableDiffusionImg2ImgPipeline, if the subclass's __init__ signature does not include the requires_safety_checker: bool = True argument, the default .from_pretrained() loader raises a confusing and indirect ValueError.
The official documentation for StableDiffusionImg2ImgPipeline confirms that requires_safety_checker is an explicit keyword argument in its __init__ signature.
The current ValueError (pasted below) reports a component list mismatch between 'kwargs' and 'requires_safety_checker'. This error message hides the true root cause—a TypeError from the signature mismatch—making the problem very difficult to debug.
### Reproduction
The following minimal script reliably reproduces the error.
```
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.schedulers import KarrasDiffusionSchedulers
from transformers import CLIPTextModel, CLIPTokenizer
from typing import Optional, Any
# A custom pipeline inheriting from StableDiffusionImg2ImgPipeline,
# but with an incorrect __init__ signature. It incorrectly tries
# to catch `requires_safety_checker` with **kwargs.
class MyCustomPipeline(StableDiffusionImg2ImgPipeline):
def __init__(
self,
vae: AutoencoderKL,
text_encoder: CLIPTextModel,
tokenizer: CLIPTokenizer,
unet: UNet2DConditionModel,
scheduler: KarrasDiffusionSchedulers,
safety_checker: Optional[Any] = None,
feature_extractor: Optional[Any] = None,
image_encoder: Optional[Any] = None,
**kwargs,
):
super().__init__(
vae=vae,
text_encoder=text_encoder,
tokenizer=tokenizer,
unet=unet,
scheduler=scheduler,
safety_checker=safety_checker,
feature_extractor=feature_extractor,
image_encoder=image_encoder,
**kwargs,
)
# This line will fail and raise the misleading ValueError.
# It can be copy-pasted directly to reproduce the bug.
pipe = MyCustomPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```
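For comparison, a sketch of a subclass signature that should satisfy the loader. It simply mirrors the documented `StableDiffusionImg2ImgPipeline.__init__`, making `requires_safety_checker` an explicit keyword argument instead of hiding it in `**kwargs`; the class name is illustrative and I have not exhaustively tested it:
```python
from typing import Any, Optional

from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.schedulers import KarrasDiffusionSchedulers
from transformers import CLIPTextModel, CLIPTokenizer

class MyFixedPipeline(StableDiffusionImg2ImgPipeline):
    def __init__(
        self,
        vae: AutoencoderKL,
        text_encoder: CLIPTextModel,
        tokenizer: CLIPTokenizer,
        unet: UNet2DConditionModel,
        scheduler: KarrasDiffusionSchedulers,
        safety_checker: Optional[Any] = None,
        feature_extractor: Optional[Any] = None,
        image_encoder: Optional[Any] = None,
        requires_safety_checker: bool = True,  # explicit, so from_pretrained's introspection matches
    ):
        super().__init__(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            safety_checker=safety_checker,
            feature_extractor=feature_extractor,
            image_encoder=image_encoder,
            requires_safety_checker=requires_safety_checker,
        )
```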
### Logs
```shell
ValueError: MyCustomPipeline {
"_class_name": "MyCustomPipeline",
"_diffusers_version": "0.29.0.dev0", # Replace with your version
"feature_extractor": [
"transformers",
"CLIPImageProcessor"
],
"image_encoder": [
null,
null
],
"requires_safety_checker": true,
"safety_checker": [
"stable_diffusion",
"StableDiffusionSafetyChecker"
],
"scheduler": [
"diffusers",
"PNDMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}
has been incorrectly initialized or <class '__main__.MyCustomPipeline'> is incorrectly implemented. Expected ['feature_extractor', 'image_encoder', 'kwargs', 'safety_checker', 'scheduler', 'text_encoder', 'tokenizer', 'unet', 'vae'] to be defined, but ['feature_extractor', 'image_encoder', 'requires_safety_checker', 'safety_checker', 'scheduler', 'text_encoder', 'tokenizer', 'unet', 'vae'] are defined.
```
### System Info
diffusers version: 0.34.0
Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Python version: 3.12.11 | [GCC 11.2.0]
PyTorch version: 2.5.1+cu121
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/12255
|
closed
|
[
"bug"
] | 2025-08-28T18:31:14Z
| 2025-08-30T07:41:16Z
| 2
|
BoostZhu
|
pytorch/torchtitan
| 1,653
|
Interleaved 1F1B weight-gradient computation decoupling
|
Hi torchtitan team,
The Kimi K2 reports apparently do not use DualPipe; instead they use interleaved 1F1B and "decouple the weight-gradient computation from each micro-batch’s backward pass and execute it in parallel with the corresponding PP communication" to mitigate the PP communication overhead. I am curious how hard it would be to implement this with torchtitan.
<img width="1166" height="311" alt="Image" src="https://github.com/user-attachments/assets/484077a5-41e9-417d-af1d-fccd4627228b" />
I tried out interleaved 1F1B in the other thread, but there appear to be significant bubbles:
<img width="1497" height="556" alt="Image" src="https://github.com/user-attachments/assets/f8140eec-95a5-46d7-90fb-8d52e25007ce" />
See https://drive.google.com/drive/folders/1F-d-ETeHbRbkAtuTkgApaWiOoGYotSXj?usp=sharing.
I'm not sure if it's possible to try out Kimi K2-style interleaved 1F1B with DeepSeek v3.
Thanks!
|
https://github.com/pytorch/torchtitan/issues/1653
|
open
|
[
"question",
"module: pipelining"
] | 2025-08-28T18:21:15Z
| 2025-09-05T20:19:24Z
| null |
vwxyzjn
|
huggingface/peft
| 2,759
|
PeftModel trainable parameters with multiple adapters
|
### System Info
peft-0.17.1
python 3.9
### Who can help?
@BenjaminBossan
### Reproduction
**1) modules_to_save requires_grad is True even when is_trainable=False**
The adapter has both modules_to_save and target_modules.
```
peft_backbone = PeftModel.from_pretrained(
target_backbone,
safe_encoder_adapter_path1,
adapter_name=adapter_name1,
is_trainable=False
)
status = peft_backbone.get_model_status()
check_trainable_params(target_backbone)
```
```
def check_trainable_params(model, print_layers=True):
total_params = 0
trainable_params = 0
for name, param in model.named_parameters():
num_params = param.numel()
total_params += num_params
if param.requires_grad:
trainable_params += num_params
if print_layers:
print(f"[TRAINABLE] {name} - shape: {tuple(param.shape)}")
elif print_layers:
print(f"[FROZEN] {name} - shape: {tuple(param.shape)}")
print(f"\nTotal parameters: {total_params:,}")
print(f"Trainable parameters: {trainable_params:,}")
print(f"Frozen parameters: {total_params - trainable_params:,}")
print(f"Trainable ratio: {100 * trainable_params / total_params:.2f}%")
return trainable_params, total_params
```
Example of the printed trainable params:
[TRAINABLE] blocks.0.modules_to_save.adapter1.norm1.weight - shape: (1408,)
[FROZEN] blocks.2.attn.qkv.lora_A.adapter1.weight - shape: (32, 1408)
**2) Loading an adapter after using from_pretrained**
```
peft_backbone = PeftModel.from_pretrained(
target_backbone,
safe_encoder_adapter_path1,
adapter_name=modality_name,
is_trainable=False
)
status = peft_backbone.get_model_status()
target_backbone.load_adapter(safe_encoder_adapter_path2, is_trainable=False, adapter_name="adapter2")
status = peft_backbone.get_model_status()
```
The status before load_adapter shows {'adapter1': False}, while after load_adapter it shows {'adapter2': False, 'adapter1': True}.
I think the issue comes from BaseTunerLayer.set_adapter, which sets requires_grad=True on all my adapter1 LoRA layers while correctly setting requires_grad=False on the adapter2 LoRA layers.
BaseTunerLayer.set_adapter is called via self.add_adapter inside PeftModel.load_adapter.
### Expected behavior
**1) modules_to_save requires_grad is True even when is_trainable=False**
Expecting requires_grad to be False for the modules_to_save layers; it works properly for the LoRA layers.
**2) Loading an adapter after using from_pretrained**
Expecting adapter1 to keep requires_grad=False (it was loaded with is_trainable=False via from_pretrained) even after loading another adapter.
**Other information:**
Regarding issue 1): in the code from 2), the modules_to_save parameters for adapter2 were correctly set to requires_grad=False when using load_adapter with is_trainable=False.
[TRAINABLE] base_model.model.blocks.39.modules_to_save.adapter1.mlp.fc2.bias - shape: (1408,)
[FROZEN] base_model.model.blocks.39.modules_to_save.adapter2.norm1.weight - shape: (1408,)
More generally, is there any reason PeftModel has to change the requires_grad of adapters when calling set_adapter? (https://github.com/huggingface/peft/issues/2749)
I assume it might be related to a potential problem with having a non-active adapter whose parameters still have requires_grad=True?
When using the library, I expected to be able to decide which parameters should be trained for each adapter when loading them with from_pretrained and load_adapter (or manually), and then simply switch between adapters during training with set_adapter.
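For now, the workaround I'd expect to need after every load_adapter/set_adapter call is to re-freeze the adapters manually. A minimal sketch using the adapter names from the examples above; this is a plain PyTorch loop, nothing PEFT-specific:
```python
def freeze_adapter(model, adapter_name: str) -> None:
    # Force requires_grad=False on every parameter belonging to this adapter,
    # covering both the LoRA layers and the modules_to_save copies.
    marker = f".{adapter_name}."
    for name, param in model.named_parameters():
        if marker in name or name.endswith(f".{adapter_name}"):
            param.requires_grad = False

freeze_adapter(peft_backbone, "adapter1")
```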
|
https://github.com/huggingface/peft/issues/2759
|
closed
|
[] | 2025-08-28T16:36:25Z
| 2025-10-06T15:04:09Z
| 8
|
NguyenRichard
|
pytorch/ao
| 2,896
|
[CPU][FP8][Inductor] How to support fp8 quant for inductor on CPU
|
What we want to do is enable FP8 quantization in PyTorch. Similar to INT8 quantization, this requires inserting quantize and dequantize operations into the computational graph. In order to reuse the INT8 pattern-matching logic, we need to register FP8 quant and dequant ops.
To address this, we attempted to register quant in [#2379](https://github.com/pytorch/ao/pull/2379), but the PR was reverted in [#2672](https://github.com/pytorch/ao/pull/2672) because it caused a performance regression on H100 GPUs.
It will take a lot of effort to find the root cause of the GPU regression.
Maybe we can register quant specifically for CPU, but this requires defining and registering a separate function for CPU.
@jerryzh168 @vkuzo Do you have some suggestions about it?
cc @Xia-Weiwen
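One direction I was considering for the CPU-only path (a rough, untested sketch; it assumes a recent PyTorch with `torch.library.custom_op`, the `torchao_cpu` namespace and op name are made up here, and whether this plays well with the existing INT8-style pattern matching is exactly the open question):
```python
import torch
import torchao

# Wrap the existing FP8 quant primitive in a CPU-only custom op so that the graph
# keeps a single quantize node instead of the decomposed clamp/convert sequence.
@torch.library.custom_op("torchao_cpu::quantize_affine_float8", mutates_args=(), device_types="cpu")
def quantize_affine_float8_cpu(tensor: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return torchao.quantization.quant_primitives._quantize_affine_float8(
        tensor=tensor, scale=scale, float8_dtype=torch.float8_e4m3fn
    )

# Fake (meta) implementation so the op can be traced/compiled without running the kernel.
@quantize_affine_float8_cpu.register_fake
def _(tensor, scale):
    return torch.empty_like(tensor, dtype=torch.float8_e4m3fn)
```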
I created the following test to show the issue.
```python
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["TORCHINDUCTOR_FREEZING"] = "1"
os.environ["TORCH_COMPILE_DEBUG"] = "1"
os.environ["TORCHDYNAMO_PRINT_GUARD_FAILS"] = "1"
import torch
import torchao
dtype = torch.float
qtype = torch.float8_e4m3fn
def dequantize_per_tensor(
tensor: torch.Tensor,
scale: float,
output_dtype: torch.dtype
) -> torch.Tensor:
res = torchao.quantization.quant_primitives._dequantize_affine_float8(
tensor=tensor,
scale=torch.tensor([scale]),
output_dtype=output_dtype
)
return res
def quantize_per_tensor(
tensor: torch.Tensor,
scale: float,
) -> torch.Tensor:
return torchao.quantization.quant_primitives._quantize_affine_float8(
tensor=tensor,
scale=torch.tensor([scale]),
float8_dtype=torch.float8_e4m3fn,
)
class FP8QDQLinear(torch.nn.Module):
def __init__(self, in_features, out_features):
super().__init__()
self.weight = torch.randn((out_features, in_features),).to(qtype)
self.weight_scale = 1.0
self.scale = 1.0
self.bias = None
def forward(self, input):
weight = dequantize_per_tensor(
self.weight.data,
self.weight_scale,
dtype,
)
q_input = quantize_per_tensor(
input,
self.scale,
)
dq_input = dequantize_per_tensor(
q_input,
self.scale,
dtype
)
out = torch.nn.functional.linear(dq_input, weight, self.bias)
return out
from torch._inductor import config as inductor_config
from torch._dynamo import config
config.error_on_recompile = True
#inductor_config.cpp_wrapper = True
inductor_config.max_autotune = False
inductor_config.freezing = True
inductor_config.aot_inductor.debug_compile = False
model = FP8QDQLinear(13, 16)
example_inputs = (torch.randn(128, 13),)
with torch.no_grad():
refe = model(*example_inputs)
test_eager = model(*example_inputs)
model = torch.compile(model)
model(*example_inputs)
test = model(*example_inputs)
```
Adding a debug print in [freezing_patterns.py](https://github.com/pytorch/pytorch/blob/a7c949089af218f71daf3ad25f409f75794e6830/torch/_inductor/fx_passes/freezing_patterns.py#L70) shows that the quant has been decomposed into clamp_min, clamp_max, and convert_element_type.
```python
# print(gm)
<lambda>()
def forward(self, arg1_1):
arg0_1 = self._frozen_param0
full_default = torch.ops.aten.full.default([1], 1.0, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
dequantize_affine_float8 = torch.ops.torchao.dequantize_affine_float8.default(arg0_1, full_default); arg0_1 = None
clamp_min = torch.ops.aten.clamp_min.default(arg1_1, -448.0); arg1_1 = None
clamp_max = torch.ops.aten.clamp_max.default(clamp_min, 448.0); clamp_min = None
convert_element_type = torch.ops.prims.convert_element_type.default(clamp_max, torch.float8_e4m3fn); clamp_max = None
dequantize_affine_float8_1 = torch.ops.torchao.dequantize_affine_float8.default(convert_element_type, full_default); convert_element_type = full_default = None
permute = torch.ops.aten.permute.default(dequantize_affine_float8, [1, 0]); dequantize_affine_float8 = None
mm = torch.ops.aten.mm.default(dequantize_affine_float8_1, permute); dequantize_affine_float8_1 = permute = None
return (mm,)
```
For comparison, here are the results for INT8: the quant stays as a separate operator (torch.ops.quantized_decomposed.quantize_per_tensor.default).
```python
def forward(self, arg4_1):
arg0_1 = self._frozen_param0
arg1_1 = self._frozen_param1
arg2_1 = self._frozen_param2
arg3_1 = self._frozen_param3
dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(arg3_1, arg1_1, arg2_1, 0, -128, 127, torch.int8); arg3_1 = arg1_1 = arg2_1 = None
quantize_per_tensor = torch.ops.quantized_decomposed.quantize_per_tensor.default(arg4_1, 0.027873406186699867, 128, 0, 255, torch.uint8); arg4_1 = None
dequantize_per_tensor = torch.ops.quantized_decomposed.dequant
```
|
https://github.com/pytorch/ao/issues/2896
|
closed
|
[] | 2025-08-28T06:07:47Z
| 2025-09-21T09:53:16Z
| null |
shiyang-weng
|
pytorch/vision
| 9,196
|
Why am I getting a discrepancy between SSDLite Scores and the Full Probability Vector?
|
I am noticing a slight discrepancy between the scores output by the SSDLite model and the full probability vector you get from feeding the features extracted by the backbone through the model head. While the difference is small, around 0.004, I find the behavior peculiar and can't find an explanation. Please see the code below:
```
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large
from torchvision.transforms import functional as F
from PIL import Image
import requests
model = ssdlite320_mobilenet_v3_large(weights=True)
model.eval();
model_categories_url = 'https://raw.githubusercontent.com/levan92/coco-classes-mapping/refs/heads/master/coco91.names'
model_categories = requests.get(model_categories_url).text.split('\n')
model_categories.insert(0, '')
image_url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'
image_label = 'dog'
img = Image.open(requests.get(image_url, stream = True).raw)
img_tensor = F.to_tensor(img)
img_tensor = F.resize(img_tensor, [320, 320])
img_tensor = img_tensor.unsqueeze(0)
with torch.no_grad():
# 1. Pass through backbone and head
backbone_output = model.backbone(img_tensor)
#2 Convert OrderedDict to list of feature maps
features = list(backbone_output.values())
head_outputs = model.head(features)
# 3. Compute class logits (before NMS)
class_logits = head_outputs['cls_logits'] # shape: [batch_size, num_anchors, num_classes]
bbox_regression = head_outputs['bbox_regression']
# 4. Apply softmax to get probabilities
class_probs = torch.softmax(class_logits, dim=-1) # shape: [1, num_anchors, num_classes]
class_index = model_categories.index(image_label)
class_detections = class_probs[0, (class_probs[0].argmax(dim = 1) == class_index)]
sorted_indices = torch.argsort(class_detections[:, class_index], descending = True)
print(f"Full Probability Vector Max Value: {class_detections[sorted_indices][0].max(): .4f}")
img_tensor = img_tensor.detach()
model_output = model(img_tensor)
print(f"Model Output Max Sore: {model_output[0]['scores'][0]: .4f}")
>>> Full Probability Vector Max Value: 0.9851
>>> Model Output Max Score: 0.9890
```
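One thing I have not ruled out is preprocessing: in the full `model(...)` call the input goes through `model.transform` (resize/normalization plus batching) before it reaches the backbone, which may not be identical to the manual `F.resize` above. A small check I was considering (a sketch that reuses the objects from the script above and assumes the standard torchvision detection `transform` attribute):
```python
with torch.no_grad():
    # What the backbone actually sees inside model(img_tensor)
    image_list, _ = model.transform([F.to_tensor(img)])
    internal_input = image_list.tensors
    print(internal_input.shape)                        # expected: [1, 3, 320, 320]
    print((internal_input - img_tensor).abs().max())   # how far the manual preprocessing deviates
```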
|
https://github.com/pytorch/vision/issues/9196
|
closed
|
[
"question"
] | 2025-08-28T04:24:56Z
| 2025-09-06T14:58:19Z
| null |
Aneesh-Sandhir
|
huggingface/transformers
| 40,462
|
Question about RoPE Implementation in modeling_llama: Should torch.cat be repeat_interleave?
|
Hi,
I was going through the code for `modeling_llama` and the RoPE implementation. I came across the following function:
```
def forward(self, x, position_ids):
inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
position_ids_expanded = position_ids[:, None, :].float()
device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
with torch.autocast(device_type=device_type, enabled=False): # Force float32
freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
emb = torch.cat((freqs, freqs), dim=-1)
cos = emb.cos() * self.attention_scaling
sin = emb.sin() * self.attention_scaling
return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
```
I believe the line `emb = torch.cat((freqs, freqs), dim=-1)` should use `repeat_interleave` instead. This is because, for the pairwise rotation, I would expect the cosine/sine angles to be structured like:
```
[cos(θ₁), cos(θ₁), cos(θ₂), cos(θ₂), cos(θ₃), cos(θ₃), ...]
```
This way, further downstream, when we compute:
```
q_embed = (q * cos) + (rotate_half(q) * sin)
```
...the values are aligned properly for pairwise rotation. However, the current `torch.cat((freqs, freqs), dim=-1)` produces:
```
[cos(θ₁), cos(θ₂), cos(θ₃), cos(θ₁), cos(θ₂), cos(θ₃), ...]
```
which seems incorrect. Am I missing something?
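To make the question concrete, here is a minimal sketch comparing the two layouts next to the `rotate_half` definition used in the same modeling file (dimensions are made small for readability):
```python
import torch

def rotate_half(x):
    # shown for reference, as defined in modeling_llama:
    # it pairs element i with element i + head_dim // 2, not adjacent elements
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

head_dim = 8
inv_freq = 1.0 / (10000 ** (torch.arange(0, head_dim, 2).float() / head_dim))
position = torch.tensor([3.0])
freqs = position[:, None] * inv_freq[None, :]        # [1, head_dim // 2]

cat_layout = torch.cat((freqs, freqs), dim=-1)       # [θ1, θ2, θ3, θ4, θ1, θ2, θ3, θ4]
interleaved = freqs.repeat_interleave(2, dim=-1)     # [θ1, θ1, θ2, θ2, θ3, θ3, θ4, θ4]
print(cat_layout)
print(interleaved)
```
Since `rotate_half` pairs element `i` with element `i + head_dim // 2` rather than neighbouring elements, is the `cat` layout the one that matches that pairing, and would the interleaved layout only be needed for the "adjacent pairs" convention?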
Thanks,
Abhidip
|
https://github.com/huggingface/transformers/issues/40462
|
closed
|
[] | 2025-08-26T16:32:41Z
| 2025-08-27T10:01:11Z
| 2
|
abhidipbhattacharyya
|
huggingface/transformers
| 40,459
|
`use_kernels=True` does not invoke custom kernels
|
### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@ArthurZucker
### Reproduction
```python
import logging
logging.basicConfig(level=logging.INFO)
import torch
from transformers import (
AutoTokenizer, AutoModelForCausalLM,
)
model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto,
use_kernels=True,
).eval()
messages = [
{"role": "system", "content": "What is Tensor Parallelism?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="low",
).to(model.device)
with torch.inference_mode():
generated = model.generate(
**inputs,
do_sample=False,
temperature=None,
max_new_tokens=64,
disable_compile=True,
)
decoded_generation = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print(decoded_generation)
```
### Expected behavior
Since I have activated logging, I should see log lines for all the custom kernels being invoked. While `LigerRMSNorm` is invoked, I do not see `MegaBlocksMoeMLP` being used as it should be (as [stated in the modeling file here](https://github.com/huggingface/transformers/blob/263d06fedc17bb28f70dabe2acae562bc617ef9b/src/transformers/models/gpt_oss/modeling_gpt_oss.py#L156)).
I also note that while `LigerRMSNorm` is invoked, it complains that it cannot be used because it is not compatible with torch.compile:
```
INFO:root:Using layer `LigerRMSNorm` from repo `kernels-community/liger_kernels` (revision: main) for layer `LigerRMSNorm`
INFO:root:Layer does not support torch.compile, using fallback
```
I have used `disable_compile=True,` in the `.generate()` method, which should have taken care of the issue.
### Solution
The way I could invoke the custom kernels was to swap out these lines:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5241-L5243
With the following
```py
from kernels import Device, Mode, kernelize
kernelize(model, device=Device(type=model.device.type), mode=Mode.INFERENCE)
```
While this is not a proper fix (the mode should really be inferred from the model's state rather than hard-coded), I am listing my current personal workaround here for ease of ideation.
|
https://github.com/huggingface/transformers/issues/40459
|
closed
|
[
"bug"
] | 2025-08-26T13:32:35Z
| 2025-09-16T08:50:55Z
| 1
|
ariG23498
|
huggingface/diffusers
| 12,241
|
WAN2.1 FLF2V: Incorrect MASK Creation????
|
Hello! I think this may be an error. (Or not; please explain it to me!)
In **WanImageToVideoPipeline** class in `pipline_wan_i2v.py`,
<img width="868" height="243" alt="Image" src="https://github.com/user-attachments/assets/8108a9e9-8632-44a1-93b8-abd9ae6a22cd" />
(the code is the part of `prepare_latents` function)
**For I2V**, the mask looks like this:
```
[[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]]
```
My understanding: where the mask is 1, the corresponding input video frame is kept unchanged.
(*Mask shape: [1, 4, 21, 60, 104] = [B, C, F, H, W])
**But in the FLF2V case,** the mask looks like this:
```
[[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 1]]
```
Here, **why does the last-frame mask have a 1 only in the last channel?**
Can anyone explain this part?
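For concreteness, a tiny sketch that just reproduces the two layouts I am describing (this is only a restatement of the observation above, not the actual pipeline code):
```python
import torch

# [B, C, F, H, W] = [1, 4, 21, 60, 104]
mask = torch.zeros(1, 4, 21, 60, 104)
mask[:, :, 0] = 1     # I2V: first latent frame is marked in all 4 channels
mask[:, -1, -1] = 1   # FLF2V: last latent frame is marked only in the last channel
print(mask[0, :, :, 0, 0])  # 4 x 21 view matching the matrices above
```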
|
https://github.com/huggingface/diffusers/issues/12241
|
open
|
[] | 2025-08-26T12:23:09Z
| 2025-08-27T02:10:49Z
| 1
|
KyujinHan
|
huggingface/lerobot
| 1,792
|
How to train a lerobot model offline with pre-downloaded data?
|
Hi, I'm trying to configure lerobot to train with pre-downloaded models and datasets. However, I'm stuck on how to organize the model and dataset caches, and on how to tell the train script to run fully offline.
I tried to download the model and dataset:
```
$ hf download lerobot/pi0 --cache-dir ~/lerobot_download/hf_models/lerobot/pi0/
$ hf download lerobot/aloha_sim_transfer_cube_human --repo-type dataset --cache-dir ~/lerobot_download/hf_datasets/lerobot/aloha_sim_transfer_cube_human/
```
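The only generic knobs I'm aware of are the huggingface_hub ones below (a sketch; nothing lerobot-specific, and I'm not sure the train script respects them, which is part of my question):
```python
import os

# Forbid network access and point the Hub cache at the pre-downloaded snapshots.
# Note: `hf download --cache-dir <dir>` creates a `models--lerobot--pi0` /
# `datasets--lerobot--...` layout inside <dir>, so sharing one cache directory
# for both downloads is usually simpler than one directory per repo.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_HUB_CACHE"] = os.path.expanduser("~/lerobot_download/hf_cache")
```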
|
https://github.com/huggingface/lerobot/issues/1792
|
closed
|
[] | 2025-08-26T10:20:56Z
| 2025-09-03T10:48:37Z
| null |
dalishi
|