| repo (string, 147 classes) | number (int64, 1 to 172k) | title (string, length 2 to 476) | body (string, length 0 to 5k) | url (string, length 39 to 70) | state (string, 2 classes) | labels (list, length 0 to 9) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58) | user (string, length 2 to 28) |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/torchtitan
| 803
|
Gradient Scaling With Pipeline Parallelism
|
The idiomatic way to perform gradient scaling is something like this:
```python
preds = model(inputs)
loss = loss_fn(preds, targets)
scaler.scale(loss).backward()
```
Given that the current PyTorch PP API handles the backward pass *internally*, I find it difficult to do gradient scaling under a PP regime.
```python
if is_first_stage:
pp_schedule.step(inputs) # bwd performed internally
elif is_last_stage:
losses = []
pp_schedule.step(target=targets, losses=losses) # bwd performed internally
else:
pp_schedule.step() # bwd performed internally
loss = (
torch.mean(torch.stack(losses)).to(device)
if is_last_stage
else torch.tensor([-1.0], device=device)
)
# scaler.scale(loss).backward() <-- !? backward pass has already been performed
```
Is there currently a good way to do gradient scaling with Pipeline Parallelism? And if not, will the Pipeline Parallelism API support gradient scaling in the near-term future?
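For what it's worth, one workaround sketch (not an official torchtitan or PyTorch API): fold the scaler's current scale into the loss function handed to the pipeline schedule, so the internally-run backward produces scaled gradients, and let the scaler unscale at the optimizer step. Names and the loss itself are placeholders.
```python
# Sketch of a possible workaround, assuming `scaler` is a torch.cuda.amp.GradScaler
# and the loss function below is the one passed to the pipeline schedule.
import torch

scaler = torch.cuda.amp.GradScaler()

def scaled_loss_fn(preds, targets):
    # standard cross-entropy, multiplied by the scaler's current scale so that the
    # gradients produced by the schedule's internal backward come out scaled
    loss = torch.nn.functional.cross_entropy(preds.flatten(0, 1), targets.flatten(0, 1))
    return loss * scaler.get_scale()

# Build the schedule with loss_fn=scaled_loss_fn and call pp_schedule.step(...) as usual;
# afterwards, roughly: scaler.unscale_(optimizer) / scaler.step(optimizer); scaler.update().
```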
|
https://github.com/pytorch/torchtitan/issues/803
|
open
|
[
"question",
"module: pipelining"
] | 2025-01-24T12:16:16Z
| 2025-02-06T23:28:00Z
| null |
windsornguyen
|
huggingface/trl
| 2,642
|
How to stop `SFTTrainer` from auto-tokenizing my messages?
|
I want to tokenize my text in a custom way in a custom data collator, but for some reason the data keeps being auto-tokenized.
I passed `processing_class=None` to stop this, but nothing changed. How can I stop the auto-tokenization process?
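For what it's worth, a sketch of one way this is commonly handled, under the assumption that your TRL version supports skipping the built-in dataset preparation via `SFTConfig.dataset_kwargs` (worth verifying against the installed version); `my_dataset` and `my_custom_collator` are placeholders.
```python
# Sketch, not a definitive answer: hand the raw dataset to a custom collator by
# skipping SFTTrainer's own dataset preparation/tokenization step.
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="out",
    dataset_kwargs={"skip_prepare_dataset": True},  # assumption: supported by your TRL version
)
trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",          # placeholder model id
    args=training_args,
    train_dataset=my_dataset,          # placeholder: your untokenized dataset
    data_collator=my_custom_collator,  # placeholder: does the custom tokenization
)
```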
|
https://github.com/huggingface/trl/issues/2642
|
closed
|
[
"❓ question",
"🏋 SFT"
] | 2025-01-24T02:58:26Z
| 2025-02-18T18:59:42Z
| null |
MohamedAliRashad
|
pytorch/xla
| 8,617
|
Single core of TPU gives inference results different than the CPU results
|
# Description
I encountered an issue when using PyTorch XLA to train a model on TPU. My main code gives different results than training with CPU or GPU, so I decided to check with a toy example and found that prediction using PyTorch XLA gives results different from prediction using the CPU.
I also tried checking with PyTorch Lightning, but it gives the same result as the CPU, so how do I set up PyTorch XLA to give results identical to Lightning's?
[Notebook](https://www.kaggle.com/code/saadsallam/tpu-cpu)
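As a side note, a minimal parity check along these lines (my own sketch; small differences are expected from XLA's fusion and default precision settings, so a tolerance-based comparison is more meaningful than exact equality):
```python
# Sketch: compare a CPU forward pass against the same model on an XLA device.
import torch
import torch_xla.core.xla_model as xm

model = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)

cpu_out = model(x)

device = xm.xla_device()
xla_out = model.to(device)(x.to(device)).cpu()  # .cpu() forces XLA execution

print(torch.allclose(cpu_out, xla_out, rtol=1e-3, atol=1e-3))
print((cpu_out - xla_out).abs().max())
```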
|
https://github.com/pytorch/xla/issues/8617
|
closed
|
[
"duplicate",
"xla:tpu"
] | 2025-01-23T21:47:15Z
| 2025-02-06T14:39:41Z
| 1
|
mohamedamara7
|
pytorch/tutorials
| 3,254
|
How to download pretrained word language quantized model?
|
In the word language quantized model tutorial, we assume we already have a pretrained model.
But where can we download that model?
https://github.com/pytorch/tutorials/blob/main/advanced_source/dynamic_quantization_tutorial.py#L151-L157
|
https://github.com/pytorch/tutorials/issues/3254
|
closed
|
[
"easy",
"docathon-h1-2025"
] | 2025-01-23T20:29:10Z
| 2025-06-04T21:05:05Z
| null |
Achilles718611
|
huggingface/diffusers
| 10,637
|
Issues with FlowMatchEulerDiscreteScheduler.set_timesteps()
|
### Describe the bug
Why does `num_inference_steps` have the default `None`? It's not an `Optional`. It cannot be `None`. This leads to weird error messages if you skip this parameter.
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L239
`sigmas` is undocumented:
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L241
`mu` is undocumented, even though it can be a required parameter (depending on configuration):
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L242
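For context, a sketch of the two call patterns implied by those lines, assuming the signature is `set_timesteps(num_inference_steps=None, device=None, sigmas=None, mu=None)`:
```python
# Sketch only: illustrates why a None default plus an undocumented `mu` is confusing.
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler()

# usual path: num_inference_steps must effectively be provided despite the None default
scheduler.set_timesteps(num_inference_steps=30)

# if the scheduler is configured with use_dynamic_shifting=True, `mu` becomes required:
# scheduler.set_timesteps(num_inference_steps=30, mu=1.0)
```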
### Reproduction
see above
### Logs
```shell
```
### System Info
HEAD
### Who can help?
@yiyixuxu
|
https://github.com/huggingface/diffusers/issues/10637
|
closed
|
[
"bug"
] | 2025-01-23T20:22:51Z
| 2025-02-16T15:29:08Z
| 4
|
dxqb
|
huggingface/transformers.js
| 1,165
|
Releasing the Florence 2 ONNX conversion script?
|
### Question
Hi,
This might not be the correct place to raise this issue, but I have not found a better option. There have been many requests from people trying to use their tuned Florence 2 models here and in other repos (https://github.com/huggingface/transformers.js/issues/815#issuecomment-2217220254, https://github.com/microsoft/onnxruntime-genai/issues/619, https://github.com/microsoft/onnxruntime/issues/21118, https://huggingface.co/onnx-community/Florence-2-base-ft/discussions/4). @xenova, since you've managed to export these models into ONNX, could you please share the conversion script, even if it's just something experimental?
|
https://github.com/huggingface/transformers.js/issues/1165
|
closed
|
[
"question"
] | 2025-01-23T11:35:05Z
| 2025-03-31T10:02:53Z
| null |
ir2718
|
huggingface/transformers
| 35,853
|
How to load a model directly into the GPU memory?
|
I have enough GPU memory, but not enough CPU memory. When I use the
"from_pretrained" function, the program gets killed due to insufficient memory.
|
https://github.com/huggingface/transformers/issues/35853
|
closed
|
[] | 2025-01-23T09:47:04Z
| 2025-01-23T15:19:01Z
| null |
LiBai531
|
huggingface/nanotron
| 273
|
What is the purpose of "task"
|
What is the purpose of the "tasks" argument in this line?
https://github.com/huggingface/nanotron/blob/9055c664c28a3b430b4e53bfcb5a074068c90f2a/tools/preprocess_data.py#L102C9-L102C28
Thanks
|
https://github.com/huggingface/nanotron/issues/273
|
open
|
[] | 2025-01-23T09:44:35Z
| 2025-02-07T17:09:12Z
| null |
laiviet
|
huggingface/transformers.js
| 1,164
|
`onnxruntime-node` uncompressed too large for NextJS 15 API routes
|
### Question
Hello! I'm trying to deploy `xenova/bge-small-en-v1.5` locally to embed text in a Next 15 API route, but I'm encountering this error about the route's unzipped size exceeding the 250 MB maximum. I wanted to check in to see if there's some error on my side? It doesn't seem like `onnxruntime-node` should be ~720 MB uncompressed by itself? Thanks!

`generateEmbeddingV2()` below is called within the API route.
```typescript
import {
  FeatureExtractionPipeline,
  layer_norm,
  pipeline,
  PreTrainedTokenizer,
  env,
} from '@huggingface/transformers'

const MAX_TOKENS = 512
const MATRYOSHKA_DIM = 768

let cachedExtractor: FeatureExtractionPipeline | null = null

const getExtractor = async () => {
  if (!cachedExtractor) {
    cachedExtractor = await pipeline(
      'feature-extraction',
      'xenova/bge-small-en-v1.5',
      { dtype: 'fp16' }
    )
  }
  return cachedExtractor
}

const chunkText = (text: string, tokenizer: PreTrainedTokenizer) => {
  const tokens = tokenizer.encode(text)
  const chunks = []
  for (let i = 0; i < tokens.length; i += MAX_TOKENS) {
    const chunk = tokens.slice(i, i + MAX_TOKENS)
    chunks.push(chunk)
  }
  return chunks.map((chunk) => tokenizer.decode(chunk))
}

export const generateEmbeddingV2 = async (value: string) => {
  const extractor = await getExtractor()
  const chunks = chunkText(value, extractor.tokenizer)
  let embedding = await extractor(chunks[0], { pooling: 'mean' }) // note: `chunks`, not `chunk`
  embedding = layer_norm(embedding, [embedding.dims[1]])
    .slice(null, [0, MATRYOSHKA_DIM])
    .normalize(2, -1)
  return embedding.tolist()[0]
}
```
I also tried downloading the model file locally, but that didn't work in deployment either.
|
https://github.com/huggingface/transformers.js/issues/1164
|
open
|
[
"question"
] | 2025-01-23T03:28:16Z
| 2025-10-22T20:42:41Z
| null |
raymondhechen
|
huggingface/smolagents
| 322
|
How to capture CodeAgent's full thinking, including the code (not just the final response), into a variable
|
When we run a CodeAgent in a notebook, it prints the question/task, the LLM model used, the code (Executing this code, Execution logs), and the final answer.
The return value from agent.run contains only the final response.
I'm working on some demos for which I want to run a number of tasks, capture all the output (not just the final answer), and write it to an md or html file, so that I can show everything, including the code generated by the agent, without running the agents live in the demo.
I tried logging, stdout, `from contextlib import redirect_stdout`, etc., but couldn't capture the full output to a variable.
Thanks,
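A rough sketch of one approach, assuming the smolagents version in use keeps the executed steps on the agent after `run()` (the attribute has been `agent.logs` in some releases and `agent.memory.steps` in others, so adjust for your version):
```python
# Sketch (assumption: the agent retains its step records after run(); attribute
# names differ across smolagents versions).
from smolagents import CodeAgent, HfApiModel

agent = CodeAgent(tools=[], model=HfApiModel())
final_answer = agent.run("What is 2 + 2?")

steps = getattr(agent, "logs", None) or agent.memory.steps  # version-dependent
with open("demo_run.md", "w") as f:
    f.write(f"# Task run\n\nFinal answer: {final_answer}\n\n")
    for i, step in enumerate(steps):
        f.write(f"## Step {i}\n\n```\n{step}\n```\n\n")
```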
|
https://github.com/huggingface/smolagents/issues/322
|
open
|
[] | 2025-01-23T02:50:34Z
| 2025-01-23T13:17:49Z
| null |
KannamSridharKumar
|
pytorch/torchtitan
| 801
|
[Possible Bug] RoPE here is GPT-J style instead of NeoX/Llama style?
|
I might be missing something, so please let me know if I am, and in that case I will close the issue.
As we know, GPT-J and NeoX/Llama apply RoPE slightly differently (per the Hugging Face implementation):
- the way GPT-J treats `q, k` as "complex tensor" is an interleaving style: `[q_0_real, q_0_imaginary, q_1_real, q_1_imaginary, ...]`
- the way NeoX/Llama and almost all other RoPE based models treat them by "rotating half": `[q_0_real, q_1_real, ..., q_0_imaginary, q_1_imaginary, ...]` (see [here](https://github.com/huggingface/transformers/blob/2c3a44f9a769e98597d62ecdc7383785318be5a2/src/transformers/models/llama/modeling_llama.py#L150))
The way it is written here seems interesting:
https://github.com/pytorch/torchtitan/blob/d9898423ecef131825d13c6c8b521a24e889785f/torchtitan/models/llama/model.py#L108
If I'm not mistaken, it is actually the interleaving style, because `view_as_complex` uses the last axis for the real and imaginary parts, which are entries next to each other. I was able to confirm this by spinning up a notebook session and comparing it with Hugging Face's attention layer side-by-side. After fixing `apply_rotary_emb`, it is possible to match the attention outputs to a very good degree (though I haven't been able to match the outputs of the entire model with Hugging Face).
These two conventions can be unified by carefully rearranging the columns in the weights of `wq` and `wk`, but I don't see that done in the model conversion script https://github.com/pytorch/torchtitan/blob/main/scripts/convert_llama_to_dcp.py
Is it an oversight or did I miss something in the code?
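For illustration, a small standalone sketch (my own, not torchtitan code) of the permutation relating the two layouts, which is the kind of column rearrangement of `wq`/`wk` mentioned above:
```python
# Sketch: converting a head-dim vector between the interleaved (GPT-J style)
# layout [r0, i0, r1, i1, ...] and the rotate-half (NeoX/Llama style) layout
# [r0, r1, ..., i0, i1, ...].
import torch

head_dim = 8
x = torch.arange(head_dim)

interleaved_to_half = torch.cat([torch.arange(0, head_dim, 2), torch.arange(1, head_dim, 2)])
half_to_interleaved = torch.argsort(interleaved_to_half)

print(x[interleaved_to_half])                       # tensor([0, 2, 4, 6, 1, 3, 5, 7])
print(x[interleaved_to_half][half_to_interleaved])  # back to the original order
```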
|
https://github.com/pytorch/torchtitan/issues/801
|
closed
|
[] | 2025-01-22T23:32:36Z
| 2025-01-22T23:58:48Z
| 1
|
honglu2875
|
huggingface/smolagents
| 312
|
how to exec a bin and use the output as agent arg ?
|
hi
a simple exec tool as exec(path,[args]) should be in examples.
then an agent call as "use exec(/bin/ls,/bin)" put the result in sql db "(as bin-name) for later use and tell me how much of them are scripts while using sbx -z on each non-scripts"
as a short example
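For reference, a rough sketch of what such a tool could look like with smolagents' `@tool` decorator (my own example, not an official one; sandboxing and safety are deliberately not addressed here):
```python
# Sketch of a minimal exec tool for a smolagents agent.
import subprocess
from smolagents import tool

@tool
def exec_binary(path: str, args: str) -> str:
    """Runs a binary with the given arguments and returns its standard output.

    Args:
        path: Path to the executable, e.g. "/bin/ls".
        args: Space-separated argument string, e.g. "/bin".
    """
    result = subprocess.run([path, *args.split()], capture_output=True, text=True)
    return result.stdout
```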
|
https://github.com/huggingface/smolagents/issues/312
|
open
|
[] | 2025-01-22T12:55:22Z
| 2025-01-22T12:55:22Z
| null |
malv-c
|
pytorch/text
| 2,279
|
Could we have Android (Termux) Support?
|
# In the name of Allah, the Most Gracious, the Most Merciful. Prayers and peace be upon our master Muhammad and all his family.
## Feature/Issue
* Building this project on mobile is pretty hard because it uses Ninja, which tries to build everything concurrently; this made my phone hang for a few minutes before the process was OOM-killed.
* Also, the build rebuilds `third-party/*` even if those dependencies are already installed on the host device.
## Related
* [Termux Open Issue](https://github.com/termux/termux-packages/issues/19405)
|
https://github.com/pytorch/text/issues/2279
|
open
|
[] | 2025-01-22T05:22:38Z
| 2025-01-22T08:45:23Z
| 0
|
TunifyBasic
|
huggingface/datatrove
| 326
|
How to choose the best timeout value in extractors?
|
Hi,
I do not know how to choose the best timeout threshold for running an extractor. Shouldn't this threshold be hardware-aware?
|
https://github.com/huggingface/datatrove/issues/326
|
open
|
[] | 2025-01-22T03:14:58Z
| 2025-02-10T09:53:03Z
| null |
jordane95
|
huggingface/datasets
| 7,377
|
Support for sparse arrays with the Arrow Sparse Tensor format?
|
### Feature request
AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**.
Arrow has support for sparse tensors.
https://arrow.apache.org/docs/format/Other.html#sparse-tensor
It would be a big deal if Hugging Face Datasets supported sparse tensors as a feature type, natively.
### Motivation
This is important for example in the field of transcriptomics (modeling and understanding gene expression), because a large fraction of the genes are not expressed (zero). More generally, in science, sparse arrays are very common, so adding support for them would be very beneficial; it would make just using Hugging Face Dataset objects a lot more straightforward and clean.
### Your contribution
We can discuss this further once the team comments on what they think about the feature, whether there were previous attempts at making it work, and their evaluation of how hard it would be. My intuition is that it should be fairly straightforward, as the Arrow backend already supports it.
|
https://github.com/huggingface/datasets/issues/7377
|
open
|
[
"enhancement"
] | 2025-01-21T20:14:35Z
| 2025-01-30T14:06:45Z
| 1
|
JulesGM
|
huggingface/peft
| 2,339
|
Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named 'peft.utils.config'" error
|
### System Info
Hello,
I'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.
This new version does not support the 0.4.0 version of peft, so we have upgraded to 0.14.0 and to a compatible diffusers version. The SageMaker endpoint deploys correctly with these new versions, but once it's run, we receive the following error:
`No module named 'peft.utils.config'`
I dug around and found that there's no usage of peft.utils.config in our inference code. The only usage I could find is here, in the peft code itself: https://github.com/huggingface/peft/blob/main/src/peft/config.py. However, in this code, it looks like utils.config does not exist at all.
Here's what I'm currently using:
diffusers==0.32.2
peft==0.14.0
Is the peft library somehow breaking itself by looking for a peft.utils.config that doesn't exist? Have I missed a step that would create the utils.config file? Or is there another hidden dependency using peft.utils.config?
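As a possible stopgap (an assumption on my part, not an official fix): if something in the stack is still importing the pre-0.5 module path `peft.utils.config`, aliasing it to the current `peft.config` module may unblock loading.
```python
# Sketch of a workaround, assuming the error comes from something (e.g. a pickled
# object or older integration code) importing the removed path `peft.utils.config`.
# PeftConfig and friends live in `peft.config` in recent releases.
import sys
import peft.config

sys.modules["peft.utils.config"] = peft.config
```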
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Create a sagemaker endpoint using the new `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` huggingface DLC image.
Use a requirements.txt that looks like the following:
diffusers==0.32.2
peft==0.14.0
Observe that all requests to the sagemaker endpoint respond with 500 errors.
### Expected behavior
The Sagemaker endpoint should continue to process requests as it did before the version upgrade (using peft 0.4.0)
|
https://github.com/huggingface/peft/issues/2339
|
closed
|
[] | 2025-01-21T20:00:07Z
| 2025-03-02T15:03:46Z
| 2
|
incchar
|
huggingface/smolagents
| 298
|
How to pass images as input to CodeAgent?
|
Hello,
I want to pass an input image along with the prompt to `CodeAgent.run`. I see that there is an `additional_args` argument but when I pass the image as `{"image": "path/to/image.png"}`, the agent ends up loading the image via pytesseract to read the contents of the image instead of passing it to OpenAI/Anthropic directly. Is there any way that I can ensure that the image is passed along with the prompt so that the model can infer information from it instead of using external libraries to load the image when using the LiteLLM integration?
My code for reference:
```
agent = CodeAgent(
    tools=[],
    model=LiteLLMModel(
        model_id="openai/gpt-4o",
        api_key=os.environ.get('OPENAI_API_KEY'),
        temperature=1,
        top_p=0.95,
    ),
    add_base_tools=True,
    additional_authorized_imports=["sqlite3", "csv", "json", "os", "datetime", "requests", "pandas", "numpy", "sys"],
    max_steps=10,
)
agent.run(prompt, additional_args={"image": "path/to/image.png"})
```
|
https://github.com/huggingface/smolagents/issues/298
|
closed
|
[] | 2025-01-21T17:14:27Z
| 2025-02-18T18:41:27Z
| null |
DarshanDeshpande
|
huggingface/lerobot
| 650
|
use a camera
|
can I use a camera to collect and train?
|
https://github.com/huggingface/lerobot/issues/650
|
closed
|
[
"question"
] | 2025-01-21T10:35:02Z
| 2025-04-07T15:53:26Z
| null |
lwx2024
|
huggingface/transformers
| 35,807
|
How to change data
https://huggingface.co/facebook/rag-token-nq
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")

generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])

# should give michael phelps => sounds reasonable
```
My attempts:
https://github.com/kim90000/Attempts-with-facebook-rag-token-nq/blob/main/README.md
|
https://github.com/huggingface/transformers/issues/35807
|
closed
|
[] | 2025-01-21T06:17:09Z
| 2025-02-28T08:03:38Z
| null |
kim90000
|
pytorch/vision
| 8,871
|
SE module is missing in 'class FusedMBConv', 'efficientnet.py'. Is there a reason for it?
|
According to the paper, the FusedMBConv block has an SE module. But I can't find it in the code.
|
https://github.com/pytorch/vision/issues/8871
|
closed
|
[] | 2025-01-21T05:38:16Z
| 2025-01-30T11:34:06Z
| 5
|
Morris-Chen007
|
huggingface/accelerate
| 3,356
|
how to configure accelerate on 2 Mac machines
|
https://huggingface.co/docs/accelerate/usage_guides/distributed_inference
I used `accelerate config`, and when I run the model, it blocks and then I get an error saying it cannot connect to the IP and port.
Who can help me?
|
https://github.com/huggingface/accelerate/issues/3356
|
closed
|
[] | 2025-01-20T11:35:35Z
| 2025-02-25T02:20:41Z
| null |
hsoftxl
|
huggingface/transformers.js
| 1,160
|
How to use sentence-transformers/static-similarity-mrl-multilingual-v1 model?
|
### Question
If I try to use `sentence-transformers/static-similarity-mrl-multilingual-v1`, it fails because `tokenizer.json` is not found. Is it possible to somehow convert the model so it can be used? The ONNX runtime is already there.
|
https://github.com/huggingface/transformers.js/issues/1160
|
open
|
[
"question"
] | 2025-01-19T15:09:18Z
| 2025-01-19T17:27:49Z
| null |
michalkvasnicak
|
huggingface/diffusers
| 10,606
|
pred_original_sample in FlowMatchEulerDiscreteScheduler
|
Will pred_original_sample be supported in FlowMatchEulerDiscreteScheduler? How can I get the predicted x_0?
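For what it's worth, a minimal sketch of how x_0 can be recovered under the flow-matching parameterization these schedulers use (x_t = (1 - sigma) * x_0 + sigma * noise, with the model predicting roughly noise - x_0); this is my own derivation, not diffusers code:
```python
# Sketch: recover the predicted clean sample at the current step.
# Assumes x_t = (1 - sigma) * x_0 + sigma * noise and model_output ~ noise - x_0.
def pred_original_sample(sample, model_output, sigma):
    return sample - sigma * model_output
```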
|
https://github.com/huggingface/diffusers/issues/10606
|
closed
|
[] | 2025-01-19T10:02:22Z
| 2025-02-14T12:21:33Z
| 2
|
haofanwang
|
pytorch/vision
| 8,868
|
torchvision 0.14.0 with CUDA 11.6 support wheel file suddenly disappeared from download.pytorch.org
|
Dear Community team,
I have been using PyTorch 1.13.0 and torchvision 0.14.0 with CUDA 11.6 for my application (PyTorch 2.x does not work for my app, and torchvision 0.15 does not support PyTorch 1.x).
I was embarrassed to find out that torchvision 0.14.0 with CUDA 11.6 disappeared all of a sudden today.
I have been downloading and installing the packages with the following command from the old archives at https://download.pytorch.org/whl/cu116:
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
But today, the torchvision installation doesn't work. It seems many torchvision wheels that support PyTorch 1.x have gone missing.
Is there any way I can get torchvision==0.14.0+cu116 back, or any info about why this happened?
Any advice would be a big help to me.
Thanks in advance
|
https://github.com/pytorch/vision/issues/8868
|
closed
|
[] | 2025-01-19T09:28:21Z
| 2025-01-20T00:40:31Z
| 0
|
chulminkw
|
pytorch/torchtitan
| 797
|
What is the point of the first part of this assertion?
|
Why do we need to `assert 0 <= 1`?
https://github.com/pytorch/torchtitan/blob/d9898423ecef131825d13c6c8b521a24e889785f/torchtitan/models/llama/model.py#L79
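For context, assuming the line is the chained comparison `assert 0 <= 1 < ndim` carried over from the original Llama code, a minimal sketch of how Python evaluates it:
```python
# A chained comparison `assert 0 <= 1 < ndim` is equivalent to
# `assert (0 <= 1) and (1 < ndim)`. The first clause is trivially true;
# only `1 < ndim` actually checks anything.
ndim = 3
assert 0 <= 1 < ndim            # passes
assert (0 <= 1) and (1 < ndim)  # identical meaning
```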
|
https://github.com/pytorch/torchtitan/issues/797
|
closed
|
[] | 2025-01-19T07:05:24Z
| 2025-01-19T15:30:25Z
| null |
gameofdimension
|
huggingface/transformers.js
| 1,157
|
When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress?
|
### Question
When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress?
```bash
npm i kokoro-js
```
```typescript
import { KokoroTTS } from "kokoro-js"; // import needed to make this snippet self-contained

const model_id = "onnx-community/Kokoro-82M-ONNX";
const tts = await KokoroTTS.from_pretrained(model_id, {
  dtype: "q8", // Options: "fp32", "fp16", "q8", "q4", "q4f16"
});

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text, {
  // Use `tts.list_voices()` to list all available voices
  voice: "af_bella",
});
audio.save("audio.wav");
```
|
https://github.com/huggingface/transformers.js/issues/1157
|
closed
|
[
"question"
] | 2025-01-18T03:36:28Z
| 2025-10-13T04:46:59Z
| null |
emojiiii
|
pytorch/executorch
| 7,732
|
Be able to install ET when torch is compiled from source instead of prebuilt (e.g., nightly, release)
|
Be able to install ET when torch is compiled from source instead of prebuilt (e.g., nightly, release).
There are a few use cases why this is useful:
- If there are cross-dependencies between core and ET that need to progress in lock step, then we need to be able to install ET and test against a locally compiled core.
- Sometimes prebuilt wheels are not available for torch, for example on Intel Mac. In those cases, users compile torch from source, and we should provide an easy way to integrate that into ET.
cc @byjlw
|
https://github.com/pytorch/executorch/issues/7732
|
closed
|
[
"triaged",
"module: user experience"
] | 2025-01-17T18:31:51Z
| 2025-07-28T11:34:10Z
| null |
mergennachin
|
pytorch/xla
| 8,588
|
Run XLA container with DDP in Vertex AI
|
## ❓ Questions and Help
Hey there! I prepared a Docker container that trains a model using DDP, which works fine in a TPU VM. However, when I run the training job in Vertex AI, it fails. I suspect it's because the `--privileged --net host --shm-size=16G` parameters are not available for the container in Vertex AI. Is there a way to run the container without these parameters, or is there a workaround for Vertex AI?
I also prepared a minimal example.
`run.py`:
```Python
import torch_xla

def mp_fn(index):
    print(str(index) + ' is ready.')

if __name__ == '__main__':
    torch_xla.launch(
        mp_fn,
        args=()
    )
```
`Dockerfile`:
```
FROM us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.5.0_3.10_tpuvm
COPY run.py /app/run.py
WORKDIR /app/
RUN export PJRT_DEVICE=TPU
ENTRYPOINT ["python"]
CMD ["/app/run.py"]
```
I create v5litepod-8 TPU VM according to [docs](https://cloud.google.com/tpu/docs/run-in-container#train_a_pytorch_model_in_a_docker_container) and run the container as:
`sudo docker run --rm --privileged --net host --shm-size=16G -it us-central1-docker.pkg.dev/my_registry/tpu_fail_example:latest` it works alright.
Now to run the same in Vertex AI
`train-job-spec.yaml`:
```yaml
workerPoolSpecs:
  machineSpec:
    machineType: ct5lp-hightpu-8t
    tpuTopology: 2x4
  replicaCount: 1
  containerSpec:
    imageUri: us-central1-docker.pkg.dev/my_registry/tpu_fail_example:latest
```
And run it:
```bash
gcloud ai custom-jobs create \
--region=us-central1 \
--display-name=$HOSTNAME-tpu-fail \
--config=train-job-spec.yaml
```
It results in error:
```
ERROR 2025-01-15T11:03:07.776877384Z [resource.labels.taskName: workerpool0-0] concurrent.futures.process._RemoteTraceback:
ERROR 2025-01-15T11:03:07.776892524Z [resource.labels.taskName: workerpool0-0] """
ERROR 2025-01-15T11:03:07.776899374Z [resource.labels.taskName: workerpool0-0] Traceback (most recent call last):
ERROR 2025-01-15T11:03:07.776904664Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
ERROR 2025-01-15T11:03:07.776919484Z [resource.labels.taskName: workerpool0-0] r = call_item.fn(*call_item.args, **call_item.kwargs)
ERROR 2025-01-15T11:03:07.776924384Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 205, in _process_chunk
ERROR 2025-01-15T11:03:07.776928944Z [resource.labels.taskName: workerpool0-0] return [fn(*args) for args in chunk]
ERROR 2025-01-15T11:03:07.776935634Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 205, in <listcomp>
ERROR 2025-01-15T11:03:07.776940274Z [resource.labels.taskName: workerpool0-0] return [fn(*args) for args in chunk]
ERROR 2025-01-15T11:03:07.776945034Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 58, in _run_thread_per_device
ERROR 2025-01-15T11:03:07.776951384Z [resource.labels.taskName: workerpool0-0] initializer_fn(local_rank, local_world_size)
ERROR 2025-01-15T11:03:07.776955894Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 121, in initialize_multiprocess
ERROR 2025-01-15T11:03:07.776960434Z [resource.labels.taskName: workerpool0-0] devices = xm.get_xla_supported_devices()
ERROR 2025-01-15T11:03:07.776972114Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 93, in get_xla_supported_devices
ERROR 2025-01-15T11:03:07.776977254Z [resource.labels.taskName: workerpool0-0] devices = torch_xla._XLAC._xla_get_devices()
ERROR 2025-01-15T11:03:07.776981934Z [resource.labels.taskName: workerpool0-0] RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Failed to establish SliceBuilder grpc channel to localhost:8482.
ERROR 2025-01-15T11:03:07.776987123Z [resource.labels.taskName: workerpool0-0] """
ERROR 2025-01-15T11:03:07.776993474Z [resource.labels.taskName: workerpool0-0] {"levelname":"ERROR", "message":""}
ERROR 2025-01-15T11:03:07.776998343Z [resource.labels.taskName: workerpool0-0] The above exception was the direct cause of the following exception:
ERROR 2025-01-15T11:03:07.777002583Z [resource.labels.taskName: workerpool0-0] {"levelname":"ERROR", "message":""}
ERROR 2025-01-15T11:03:07.777008234Z [resource.labels.taskName: workerpool0-0] Traceback (most recent call last):
ERROR 2025-01-15T11:03:07.777013183Z [resource.labels.taskName: workerpool0-0] File "/app/tpu_minimal_fail/run.py", line 11, in <module>
ERROR 2025-01-15T11:03:07.777017814Z [resource.labels.taskName: workerpool0-0] torch_xla.launch(
ERROR 2025-01-15T11:03:07.777023334Z [resource.labels.taskName: workerpool0-0] File "/usr/local/lib/python3.10/site-packages/torch_xla/torch_xla.py", line 233, in launch
ERROR 2025-01-15T11:03:07.777027923Z [resource.labe
|
https://github.com/pytorch/xla/issues/8588
|
closed
|
[] | 2025-01-17T11:22:13Z
| 2025-01-27T09:55:30Z
| 1
|
SteshinSS
|
huggingface/transformers.js
| 1,154
|
Text generation pipeline memory spike
|
### Question
## Description
The text generation pipeline has a memory spike at the start of every generation request and settles down after a few seconds. We tested this in a lower VRAM and system memory environment, and it failed to generate anything because of this issue. It also generates a nonsensical bunch of tokens if we pass a long context.
### Screenshots

- Input messages
```
[{
role: "system",
content: "You are a highly skilled meeting summarizer. Your role is to create comprehensive, well-organized summaries
of meetings that capture all essential information while maintaining clarity and accessibility. Follow these
guidelines to generate thorough meeting summaries:
STRUCTURE AND ORGANIZATION:
1. Meeting Metadata
- Date and time of the meeting
- Duration
- Meeting type/purpose
- Attendees (with roles if specified)
- Location/platform used
2. Executive Summary
- Brief 2-3 sentence overview capturing the meeting's main purpose and key outcomes
- Highlight critical decisions or major announcements
3. Detailed Discussion Points
- Organize by agenda items or natural topic transitions
- Maintain chronological order within each topic
- Include for each discussion point:
* Context and background information
* Key arguments or perspectives shared
* Questions raised and answers provided
* Concerns or challenges mentioned
* Solutions proposed
* Related sub-topics that emerged
4. Decisions and Action Items
- Document all decisions made, including:
* The final decision
* Key factors that influenced the decision
* Any dissenting opinions or concerns noted
- For each action item, specify:
* The assigned owner/responsible party
* Specific deliverables or expected outcomes
* Deadlines or timeframes
* Dependencies or prerequisites
* Resources needed or allocated
5. Follow-up Items
- Topics deferred to future meetings
- Scheduled follow-up discussions
- Required approvals or reviews
- Outstanding questions requiring research
IMPORTANT GUIDELINES:
Language and Tone:
- Use clear, professional language
- Maintain objectivity in describing discussions
- Avoid editorializing or interpreting beyond stated information
- Use active voice for clarity and direct attribution
- Include relevant direct quotes when they capture important points precisely
Detail Preservation:
- Capture nuanced discussions, not just high-level points
- Document both majority and minority viewpoints
- Include context for technical terms or project-specific references
- Note any significant non-verbal elements (demonstrations, whiteboard sessions, etc.)
- Preserve the rationale behind decisions, not just the outcomes
Organization Principles:
- Use consistent formatting for similar types of information
- Create clear hierarchical relationships between main topics and subtopics
- Use bullet points and subpoints for complex items
- Include cross-references when topics are interrelated
- Maintain clear distinction between facts, opinions, and decisions
Quality Checks:
- Ensure all agenda items are addressed
- Verify all action items have clear owners and deadlines
- Confirm all decisions are documented with their context
- Check that all participant contributions are fairly represented
- Validate that no discussion points are orphaned or incomplete
FORMAT SPECIFICATIONS:
# Meeting Summary: Meeting Title
## Meeting Details
- Date: Date
- Time: Start Time - End Time
- Location: Location/Platform
- Duration: Duration
- Meeting Type: Type/Purpose
### Attendees
- Name (Role) - Meeting Lead
- Names and roles of other attendees
## Executive Summary
2-3 sentences capturing key outcomes and major decisions
## Key Decisions
1. Decision 1
- Context: Brief context
- Outcome: Final decision
- Rationale: Key factors
2. Decision 2
Same structure as above
## Discussion Topics
### 1. Topic 1
#### Background
Context and background information
#### Key Points Discussed
- Main point 1
* Supporting detail
* Supporting detail
- Main point 2
* Supporting detail
* Supporting detail
#### Outcomes
- Specific outcome or conclusion
- Any decisions made
### 2. Topic 2
Same structure as Topic 1
## Action Items
1. Action Item 1
- Owner: Name
- Deadline: Date
- Deliverable: Specific expected outcome
- Dependencies: Any prerequisites
2. Action Item 2
Same structure as above
## Follow-up Items
- Deferred topic 1
- Scheduled follow-up 1
- Outstanding question 1
## Additional Notes
Any important information that doesn't fit in the above categories
FINAL VERIFICATION CHECKLIST:
1. All agenda items addressed
2. All decisions documented with context
3. All action items have owners and deadlines
4. All participant contributions included
5. All technical terms explained
6. All follow-up items clearly spe
|
https://github.com/huggingface/transformers.js/issues/1154
|
open
|
[
"question"
] | 2025-01-17T06:30:06Z
| 2025-02-07T03:18:49Z
| null |
ashen007
|
pytorch/xla
| 8,587
|
[torch_xla2] Wire `torch_xla2.compile`d function with torch `AutogradFunction`
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Currently, if we wrap the model with `torch_xla2.compile` and want to train it using the traditional torch training loop, similar to https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/examples/basic_training.py
you would notice that it doesn't work.
The reason is that the compile wrapper [`JittableModule`](https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/torch_xla2/interop.py#L50) will eventually call a `jax.jit`'d callable, and torch doesn't know how to compute the gradient of that callable.
The solution is to create a `torch.autograd.Function` subclass on the fly, with backward defined to call `jax.vjp`, similar to this tutorial: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
The result would be that a model wrapped with `torch_xla2.compile` is still trainable.
## Motivation
Having the forward and backward compiled with jax jit is faster to run.
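A minimal standalone sketch of that idea (my own, not the torch_xla2 implementation; it uses a host round-trip between torch and jax for clarity, whereas the real code would keep data on device):
```python
# Sketch: make a jax function differentiable by torch autograd via jax.vjp.
import numpy as np
import torch
import jax
import jax.numpy as jnp

def t2j(t):
    # torch tensor -> jax array (host round-trip; fine for a sketch)
    return jnp.asarray(t.detach().cpu().numpy())

def j2t(a):
    # jax array -> torch tensor
    return torch.from_numpy(np.asarray(a))

def wrap_jax_fn(jax_fn):
    class _JaxFn(torch.autograd.Function):
        @staticmethod
        def forward(ctx, *args):
            out, vjp_fn = jax.vjp(jax_fn, *(t2j(a) for a in args))  # record VJP for backward
            ctx.vjp_fn = vjp_fn
            return j2t(out)

        @staticmethod
        def backward(ctx, grad_out):
            grads = ctx.vjp_fn(t2j(grad_out))
            return tuple(j2t(g) for g in grads)
    return _JaxFn.apply

# usage:
# f = wrap_jax_fn(jax.jit(lambda x, w: (x @ w).sum()))
# loss = f(torch.randn(2, 3, requires_grad=True), torch.randn(3, 4, requires_grad=True))
# loss.backward()
```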
|
https://github.com/pytorch/xla/issues/8587
|
open
|
[
"enhancement",
"torchxla2"
] | 2025-01-17T01:18:27Z
| 2025-02-11T12:19:27Z
| 0
|
qihqi
|
huggingface/datasets
| 7,372
|
Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets
|
### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_dataset("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 1350 samples.
- `test` has 150 samples.
#### Code 2: Using `load_from_disk`
```python
from datasets import Dataset, load_from_disk
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_from_disk("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 450 samples.
- `test` has 50 samples.
### Expected Behavior
I expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:
- `load_dataset` seems to merge all shards, resulting in a combined dataset.
- `load_from_disk` only loads the last saved dataset, ignoring previous shards.
### Questions
1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?
2. If this is not intentional, could this be considered a bug?
3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?
Thank you for your time and effort in maintaining this great library! I look forward to your feedback.
|
https://github.com/huggingface/datasets/issues/7372
|
open
|
[] | 2025-01-16T05:47:20Z
| 2025-01-16T05:47:20Z
| 0
|
gaohongkui
|
pytorch/kineto
| 1,028
|
Needs help, how to write trace files to remote storage
|
Recently, we deployed dynolog in our GPU cluster to collect trace files via kineto on-demand profiling. It takes extra effort to collect trace files dumped to local storage via `kineto` for distributed applications. We saw in https://github.com/facebookincubator/dynolog/blob/main/docs/pytorch_profiler.md that kineto supports dumping trace files to remote storage, which is exactly what we want, but there are no other docs or tutorials introducing how to use remote storage. Could you provide an introduction or a tip on how to configure kineto to write trace files to remote storage?
|
https://github.com/pytorch/kineto/issues/1028
|
open
|
[] | 2025-01-16T03:52:48Z
| 2025-03-11T20:39:30Z
| null |
staugust
|
pytorch/torchtitan
| 790
|
should we have an extension point for model transforms out of tree?
|
In [torchao](https://github.com/pytorch/ao), we have various low precision training features which are in prototype: MX, int8, bitnet. While we expect most of these to eventually end up in the main torchao APIs, it often takes ~months for a prototype to graduate.
torchtitan is extremely useful for helping us test low precision prototypes in real-world settings. For now, we've been creating unlanded PRs to test functionality (examples: https://github.com/pytorch/torchtitan/pull/614, https://github.com/pytorch/torchtitan/pull/778). Would torchtitan consider building an extension point to support this kind of experimentation fully out-of-tree?
An example of how this could look:
1. torchtitan provides a "model transformation" hook that it calls at a specified point in the initialization stage (for quantization, that should be after model init and before parallelization / torch.compile)
2. user can provide a custom pass to transform the model (such as a prototype low precision training conversion pass)
I'm not entirely sure how this hook would be implemented, since the current interface of torchtitan is CLI based, but I wanted to share the request and start the discussion; a rough sketch follows.
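A minimal sketch of what such a hook could look like (names are illustrative, not an existing torchtitan API):
```python
# Sketch: a registry of model-transform callables applied after model init and
# before parallelization / torch.compile.
from typing import Callable, Dict, List
import torch.nn as nn

_MODEL_TRANSFORMS: Dict[str, Callable[[nn.Module], nn.Module]] = {}

def register_model_transform(name: str):
    def decorator(fn: Callable[[nn.Module], nn.Module]):
        _MODEL_TRANSFORMS[name] = fn
        return fn
    return decorator

def apply_model_transforms(model: nn.Module, names: List[str]) -> nn.Module:
    # hypothetical call site: the trainer would invoke this between model init
    # and parallelization / torch.compile
    for name in names:
        model = _MODEL_TRANSFORMS[name](model)
    return model

@register_model_transform("mx_prototype")
def convert_to_mx(model: nn.Module) -> nn.Module:
    # placeholder for an out-of-tree (e.g. torchao prototype) conversion pass
    return model
```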
|
https://github.com/pytorch/torchtitan/issues/790
|
closed
|
[
"enhancement"
] | 2025-01-15T19:26:32Z
| 2025-02-26T06:45:52Z
| 17
|
vkuzo
|
pytorch/pytorch
| 144,847
|
torch.compile(): in my use case, the model's outputs are inconsistent after compilation. I suspect that operator fusion via Triton may have introduced precision deviations, and I am unsure how to locate and fix this issue.
|
### 🐛 Describe the bug
My Torch environment is as follows:
2.2.2+cu121
My goal is to use torch.compile() to optimize the inference time of our model. In fact, it does work and achieves over a 50% reduction in inference time in the default mode.
The model code is as follows:
```python
"""
copy from https://github.com/alimama-tech/NeurIPS_Auto_Bidding_AIGB_Track_Baseline/blob/main/bidding_train_env/baseline/dd/DFUSER.py
"""
from torch.optim import Adam
import os
from typing import Optional, Tuple, List
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import gin
from .temporal import TemporalUnet
from .basic import (
    cosine_beta_schedule,
    Losses,
    extract,
    apply_conditioning,
    apply_conditioning_with_fix,
)

class ReduceSum(nn.Module):
    def forward(self, x):
        return torch.sum(x, dim=-1)

@gin.configurable
class GaussianInvDynDiffusion(nn.Module):
    def __init__(self, model, horizon, observation_dim, action_dim, n_timesteps=1000,
                 clip_denoised=False, predict_epsilon=True, hidden_dim=256,
                 loss_discount=1.0, returns_condition=False,
                 condition_guidance_w=0.1,
                 inv_bias=True,
                 ):
        super().__init__()
        self.horizon = horizon
        self.observation_dim = observation_dim
        self.action_dim = action_dim
        self.transition_dim = observation_dim + action_dim
        self.model = model
        self.inv_model = nn.Sequential(
            nn.Linear(4 * self.observation_dim, hidden_dim, bias=inv_bias),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),
            nn.ReLU(),
            # ReduceSum(),
            nn.Linear(hidden_dim, self.action_dim, bias=inv_bias),
        )
        self.returns_condition = returns_condition
        self.condition_guidance_w = condition_guidance_w
        betas = cosine_beta_schedule(n_timesteps)
        alphas = 1. - betas
        alphas_cumprod = torch.cumprod(alphas, axis=0)
        alphas_cumprod_prev = torch.cat([torch.ones(1), alphas_cumprod[:-1]])
        self.n_timesteps = int(n_timesteps)
        self.clip_denoised = clip_denoised
        self.predict_epsilon = predict_epsilon
        self.register_buffer('betas', betas)
        self.register_buffer('alphas_cumprod', alphas_cumprod)
        self.register_buffer('alphas_cumprod_prev', alphas_cumprod_prev)
        # calculations for diffusion q(x_t | x_{t-1}) and others
        self.register_buffer('sqrt_alphas_cumprod', torch.sqrt(alphas_cumprod))
        self.register_buffer('sqrt_one_minus_alphas_cumprod', torch.sqrt(1. - alphas_cumprod))
        self.register_buffer('log_one_minus_alphas_cumprod', torch.log(1. - alphas_cumprod))
        self.register_buffer('sqrt_recip_alphas_cumprod', torch.sqrt(1. / alphas_cumprod))
        self.register_buffer('sqrt_recipm1_alphas_cumprod', torch.sqrt(1. / alphas_cumprod - 1))
        # calculations for posterior q(x_{t-1} | x_t, x_0)
        posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
        self.register_buffer('posterior_variance', posterior_variance)
        self.register_buffer('posterior_log_variance_clipped',
                             torch.log(torch.clamp(posterior_variance, min=1e-20)))
        self.register_buffer('posterior_mean_coef1',
                             betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))
        self.register_buffer('posterior_mean_coef2',
                             (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))
        loss_weights = self.get_loss_weights(loss_discount)
        self.loss_fn = Losses['state_l2'](loss_weights)

    def get_loss_weights(self, discount):
        self.action_weight = 1
        dim_weights = torch.ones(self.observation_dim, dtype=torch.float32)
        discounts = discount ** torch.arange(self.horizon, dtype=torch.float)
        discounts = discounts / discounts.mean()
        loss_weights = torch.matmul(discounts[:, None], dim_weights[None, :])
        if self.predict_epsilon:
            loss_weights[0, :] = 0
        return loss_weights

    # ------------------------------------------ sampling ------------------------------------------#
    def predict_start_from_noise(self, x_t, t, noise):
        if self.predict_epsilon:
            return (
                extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
                extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
            )
        else:
            return noise

    def q_posterior(self, x_start, x_t, t):
        posterior_mean = (
            extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
            extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
        )
        posterior_variance = extract(self.poste
```
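One way to narrow down where the numerics diverge (a generic sketch, not tied to this specific model; `model` and `example_inputs` are placeholders) is to compare eager and compiled outputs on identical inputs with a tolerance:
```python
# Sketch: measure the deviation between eager and torch.compile outputs to judge
# whether it is within what fused kernels would normally introduce.
import torch

def compare_eager_vs_compiled(model, example_inputs, rtol=1e-4, atol=1e-4):
    model.eval()
    compiled = torch.compile(model)
    with torch.no_grad():
        ref = model(*example_inputs)   # eager reference
        out = compiled(*example_inputs)
    max_abs = (ref - out).abs().max().item()
    print(f"max abs diff: {max_abs:.3e}, "
          f"allclose: {torch.allclose(ref, out, rtol=rtol, atol=atol)}")
```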
|
https://github.com/pytorch/pytorch/issues/144847
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-01-15T07:35:30Z
| 2025-04-22T11:18:54Z
| null |
liangshaopeng
|
pytorch/vision
| 8,854
|
Local Windows Torchvision Build fails
|
I am trying to build torchvision locally in a conda environment on my CPU-only Windows laptop, and even though the build seems successful, when I try to import the torchvision package it fails with this error: **RuntimeError: operator torchvision::nms does not exist**. I tried multiple times (with different versions of Python, 3.8 and the latest 3.12, in fresh conda environments) and the result is the same. What can I do to fix this?
|
https://github.com/pytorch/vision/issues/8854
|
closed
|
[] | 2025-01-14T10:06:38Z
| 2025-02-19T11:58:25Z
| 1
|
alinpahontu2912
|
huggingface/safetensors
| 561
|
Feature Request: Support for Ellipsis (...) in Indexing
|
### Feature request
Thank you very much for your effort in maintaining this great project!
I'm writing to request the addition of support for ellipsis (...) in `safetensors.safe_open` indexing functionality. This would enhance usability and align SafeTensor's API more closely with the standard Python indexing conventions used in NumPy and PyTorch.
### Motivation
## What Does Ellipsis (...) Do?
The ellipsis (...) is a shorthand in Python indexing that simplifies working with multi-dimensional arrays. It allows users to skip explicitly specifying a subset of dimensions, particularly when dealing with high-dimensional data. For example:
```python
tensor[..., 0:100, 0:100]
```
This indicates that all dimensions up to the last two should be included in their entirety. The `...` is dynamically replaced by as many colons (: or slice(None)) as needed to account for the unspecified dimensions.
### Your contribution
I can do a PR if it is considered relevant.
## Workaround
A class that deals with the key can be used to transform the key into a slice object, which is supported by safetensors.
```python
from typing import Union, Tuple, Any
from itertools import islice

class SliceTransformer:
    __slots__ = ('ndim',)  # Optimize memory usage

    def __init__(self, ndim: int):
        if not isinstance(ndim, int) or ndim < 1:
            raise ValueError("ndim must be a positive integer")
        self.ndim = ndim

    def transform(self, key: Union[slice, int, Tuple[Any, ...], Any]) -> Tuple[slice, ...]:
        # Handle single key case without tuple conversion
        if isinstance(key, (slice, int)):
            result = [slice(key, key + 1) if isinstance(key, int) else key]
            result.extend(slice(None) for _ in range(self.ndim - 1))
            return tuple(result)
        if not isinstance(key, tuple):
            raise TypeError(f"Unsupported key type: {type(key)}")
        # Pre-allocate result list with known size
        result = []
        result_append = result.append  # Local reference for faster access
        # Fast path for common case (no ellipsis)
        if Ellipsis not in key:
            for item in islice(key, self.ndim):
                result_append(slice(item, item + 1) if isinstance(item, int) else item)
            result.extend(slice(None) for _ in range(self.ndim - len(result)))
            return tuple(result[:self.ndim])
        # Handle ellipsis case
        ellipsis_idx = key.index(Ellipsis)
        remaining_dims = self.ndim - (len(key) - 1)
        # Pre-ellipsis items
        for item in islice(key, ellipsis_idx):
            result_append(slice(item, item + 1) if isinstance(item, int) else item)
        # Fill ellipsis slots
        result.extend(slice(None) for _ in range(remaining_dims))
        # Post-ellipsis items
        for item in islice(key, ellipsis_idx + 1, None):
            if item is Ellipsis:
                raise ValueError("Multiple ellipsis found in key")
            result_append(slice(item, item + 1) if isinstance(item, int) else item)
        if len(result) != self.ndim:
            raise ValueError(f"Key length {len(result)} does not match ndim {self.ndim}")
        return tuple(result)

    def __getitem__(self, key):
        return self.transform(key)


import numpy as np
import safetensors.numpy
import safetensors

toy_data = np.random.rand(3, 5, 7, 128, 128)
safetensors.numpy.save_file({"data": toy_data}, "model.safetensors")

# Will not work
with safetensors.safe_open("model.safetensors", "np") as tensor:
    tensor.get_slice("data")[..., 0:100, 0:200]

# Will work
with safetensors.safe_open("model.safetensors", "np") as tensor:
    tensor_slice = tensor.get_slice("data")
    tensor_shape = tensor_slice.get_shape()
    new_keys = SliceTransformer(ndim=len(tensor_shape))[..., 0:100, 0:100]
    tensor_slice[new_keys]
```
|
https://github.com/huggingface/safetensors/issues/561
|
open
|
[] | 2025-01-14T05:13:54Z
| 2025-01-14T05:13:54Z
| 0
|
csaybar
|
huggingface/diffusers
| 10,566
|
Unnecessary operations in `CogVideoXTransformer3DModel.forward()`?
|
### Describe the bug
Here are few rows of codes in `CogVideoXTransformer3DModel.forward()` :
```py
# 3. Transformer blocks
...
if not self.config.use_rotary_positional_embeddings:
# CogVideoX-2B
hidden_states = self.norm_final(hidden_states)
else:
# CogVideoX-5B
hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)
hidden_states = self.norm_final(hidden_states)
hidden_states = hidden_states[:, text_seq_length:]
# 4. Final block
...
```
where `self.norm_final` is a `LayerNorm` defined by:
```py
self.norm_final = nn.LayerNorm(inner_dim, norm_eps, norm_elementwise_affine)
```
Since the `normalized_shape` of `self.norm_final` is one-dimensional, meaning only the last dimension is normalized, it seems that **the "cat -> layernorm -> slice" logic on the 2nd dimension in the CogVideoX-5B branch is unnecessary, because it does the same thing as**
```py
hidden_states = self.norm_final(hidden_states)
```
This code was introduced via [PR#9203](https://github.com/huggingface/diffusers/pull/9203/files#diff-6e4d5c6638b71b7a0e7de21357c5b55ffd5ff6373dd1ced70070650855830173R469). @zRzRzRzRzRzRzR @yiyixuxu could you possibly walk me through why these changes were necessary? Thanks a lot for your help!
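For what it's worth, a quick standalone check of that claim (my own sketch, not diffusers code): LayerNorm over the last dimension is applied per position, so concatenating along dim=1, normalizing, and slicing gives the same result as normalizing `hidden_states` directly.
```python
# Sketch: verify that "cat -> layernorm(last dim) -> slice" equals a direct layernorm.
import torch
import torch.nn as nn

dim = 16
norm = nn.LayerNorm(dim)
text = torch.randn(2, 5, dim)   # stands in for encoder_hidden_states
vid = torch.randn(2, 7, dim)    # stands in for hidden_states

a = norm(vid)
b = norm(torch.cat([text, vid], dim=1))[:, text.shape[1]:]
print(torch.allclose(a, b, atol=1e-6))  # True
```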
### Reproduction
.
### Logs
```shell
```
### System Info
.
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10566
|
closed
|
[
"bug",
"stale"
] | 2025-01-14T04:01:20Z
| 2025-02-13T22:11:26Z
| 2
|
townwish4git
|
huggingface/diffusers
| 10,565
|
Different generation with `Diffusers` in I2V tasks for LTX-video
|
### Describe the bug
Hello, I encountered an issue with the generation when attempting the I2V task using `Diffusers`. Is there any difference between the `diffusers` implementation and the `LTX-Video` inference script for the I2V task?
- The above is the result from the `inference.py`, and the following is the result generated with `diffuser`.
- Prompts: `a person`
https://github.com/user-attachments/assets/6e2aeeaf-c52b-402c-ae92-aff2d325464b
https://github.com/user-attachments/assets/59f815ad-1746-4ec5-ae1c-a47dcfa0fd02
https://github.com/user-attachments/assets/8ca3c79b-8003-4fa2-82b1-8ae17beccb9c
- test img

Besides, it seems that the text prompt has a significant impact on the I2V generation with 'diffusers'. Could I be missing any important arguments?
https://huggingface.co/docs/diffusers/api/pipelines/ltx_video
- results
https://github.com/user-attachments/assets/c062c21f-5611-4860-ba17-441dd26a8913
https://github.com/user-attachments/assets/991ec853-ee26-43a7-914b-622d115a9b7f
https://github.com/user-attachments/assets/ff3e7f04-c17d-4f0a-9aba-2db68aae792d
https://github.com/user-attachments/assets/f2699759-c36e-4839-bddd-37b84a85e2c7
### Reproduction
- for LTX-video generation
https://github.com/Lightricks/LTX-Video/blob/main/inference.py
```
python inference.py \
--ckpt_path ./pretrained_models/LTX-Video \
--output_path './samples' \
--prompt "A person." \
--input_image_path ./samples/test_cases.png \
--height 512 \
--width 512 \
--num_frames 49 \
--seed 42
```
- for diffusers generation: it seems that the negative prompts are causing the issues. However, even when I remove them, the results are still not satisfactory.
```
import argparse
import torch
from diffusers import LTXVideoTransformer3DModel
from diffusers import LTXImageToVideoPipeline
from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKLLTXVideo
from diffusers.utils import export_to_video, load_image, load_video
from moviepy import VideoFileClip, AudioFileClip
import numpy as np
from pathlib import Path
import os
import imageio
from einops import rearrange
from PIL import Image
import random

def seed_everething(seed: int):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)

def generate_video(args):
    pipe = LTXImageToVideoPipeline.from_pretrained(args.ltx_model_path, torch_dtype=torch.bfloat16)
    pipe.to("cuda")
    negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
    image = load_image(args.validation_image)
    prompt = "A person."
    negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
    generator = torch.Generator(
        device="cuda" if torch.cuda.is_available() else "cpu"
    ).manual_seed(42)
    video = pipe(
        image=image,
        prompt=prompt,
        guidance_scale=3,
        # stg_scale=1,
        generator=generator,
        callback_on_step_end=None,
        negative_prompt=negative_prompt,
        width=512,
        height=512,
        num_frames=49,
        num_inference_steps=50,
        decode_timestep=0.05,
        decode_noise_scale=0.025,
    ).frames[0]
    export_to_video(video, args.output_file, fps=24)
```
- for demo images with different text prompts
https://huggingface.co/docs/diffusers/api/pipelines/ltx_video
```
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = LTXImageToVideoPipeline.from_pretrained("./pretrained_models/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = load_image("samples/image.png")
prompt = "A young girl stands."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
video = pipe(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
modified_prompt = "-".join(prompt.split()[:14])
export_to_video(video, f"samples/test_out/demo-{modified_prompt}.mp4", fps=24)
```
### Logs
```shell
```
### System Info
torch 2.4.1
torchao 0.7.0
torchvision 0.19.1
diffusers 0.32.1
python 3.10
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10565
|
open
|
[
"bug",
"stale"
] | 2025-01-14T03:24:06Z
| 2025-09-09T07:21:31Z
| 11
|
Kaihui-Cheng
|
huggingface/transformers.js
| 1,146
|
Why does the local models keep downloading everyday?
|
### Question
Every day when I come back to chat with the local models via transformers.js, it downloads the models again. Can't I persist the downloaded models so that I can chat with them instantly anytime?
Thank you.
|
https://github.com/huggingface/transformers.js/issues/1146
|
closed
|
[
"question"
] | 2025-01-14T02:56:34Z
| 2025-01-18T15:11:09Z
| null |
Nithur-M
|
huggingface/chat-ui
| 1,646
|
Inline audio/video in the output
|
If a model returns markdown content with an image (``), the chat-ui will display the image inline.
Is there something similar for audio and video? How can a model return audio or video content to the user?
I don't know if this is currently supported or not.
(I'm using the OpenAI endpoint)
btw, thanks a lot for the project, it's very nice!
|
https://github.com/huggingface/chat-ui/issues/1646
|
open
|
[
"enhancement"
] | 2025-01-14T01:20:54Z
| 2025-02-28T11:32:48Z
| 1
|
laurentlb
|
huggingface/lerobot
| 633
|
[Question] How to set training to a local dataset?
|
Is there a way to train on a local dataset without manually adding the `local_files_only` arg to the `make_dataset` function of the train script?
I have set the `LEROBOT_HOME` env variable.
|
https://github.com/huggingface/lerobot/issues/633
|
closed
|
[
"question",
"dataset"
] | 2025-01-13T15:27:00Z
| 2025-10-08T08:37:55Z
| null |
tlpss
|
huggingface/lerobot
| 630
|
Removing episodes from LeRobotDataset
|
Hi, thanks for building this. It's great.
Is there a way to easily remove episodes from a dataset? I had a decent amount of diversity in my episodes and wanted to reduce it, so I had to remove ~1/2 of the episodes. Rather than re-recording them, I wanted to remove specific episodes (let's say all even episodes). Is there an easy way to do this? I tried just removing them from the `episodes.jsonl` file, but it seemed to load all of the episodes anyway, and deleting unwanted episode videos/data and renaming the files ran into some issues when loading the dataset. Is there a better way to do this?
|
https://github.com/huggingface/lerobot/issues/630
|
closed
|
[
"question",
"dataset",
"stale"
] | 2025-01-13T01:22:32Z
| 2025-10-17T12:07:56Z
| null |
andlyu
|
huggingface/safetensors
| 559
|
serialize & deserialize do not work as the documentation specifies.
|
### System Info
- `transformers` version: 4.42.3
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.5.2
- Accelerate version: 0.27.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3050 Laptop GPU
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Reproduction
Hi,
I'm unsure if this is expected behavior or a bug, since it does not align with what the documentation for these functions describes. Below is the code to reproduce the issue:
### Expected behavior
```python
import numpy as np
import safetensors
from safetensors.numpy import save_file, load
# Save as a safetensors file
data_ran_uint16 = np.random.randint(0, 255, (2, 2, 2)).astype(np.uint16)
save_file({"toy": data_ran_uint16}, "toy.safetensors")
# Deserialize the file
with open("toy.safetensors", "rb") as f:
    fbytes = safetensors.deserialize(f.read())
# Expected to work
serialized = safetensors.serialize({"toy": fbytes[0][1]})
# Workaround
fbytes[0][1]["data"] = bytes(fbytes[0][1]["data"]) # I had to convert the bytearray to bytes
fbytes[0][1]["dtype"] = "uint16" # I had to change the dtype to uint16
fbytes[0][1]["shape"]
serialized = safetensors.serialize({"toy": fbytes[0][1]})
load(serialized)
```
|
https://github.com/huggingface/safetensors/issues/559
|
open
|
[] | 2025-01-12T20:22:57Z
| 2025-01-12T20:23:18Z
| 0
|
csaybar
|
huggingface/transformers.js
| 1,142
|
Make in-browser WebGPU as seamless as in WebLLM
|
### Question
Hi there! ๐
I've noticed something interesting about WebGPU support in browsers:
✅ [WebLLM's demo](https://chat.webllm.ai/) detects and uses my GPU automatically
❌ [transformers.js examples](https://huggingface.co/spaces/Xenova/nanollava-1.5-webgpu) fail with:
```Error: no available backend found. ERR: [webgpu] TypeError: e.requestAdapterInfo is not a function```
This ease-of-use difference matters a lot for adoption. I believe reducing friction in GPU setup is crucial for adoption of in-browser ML models - when users need to modify browser settings or follow additional configuration steps, it can significantly impact their willingness to try new applications. WebLLM shows that seamless GPU detection is possible for in-browser ML models.
Environment:
- Chrome 131.0.6778.205
- macOS
Could transformers.js adopt a similar approach to WebLLM for automatic GPU detection? Happy to provide more details if needed!
Best regards
|
https://github.com/huggingface/transformers.js/issues/1142
|
closed
|
[
"question"
] | 2025-01-12T15:06:17Z
| 2025-01-27T11:45:03Z
| null |
Anna-iroro
|
huggingface/peft
| 2,322
|
model merge and unload feature for AdaLora
|
### Feature request
Unlike the LoRA or IA3 adapter types, AdaLoRA does not provide a method to merge LoRA adapter weights into the original weights so that the result can be used as a standalone model. I built that feature for a personal use case and want to make a PR to make it accessible to everyone.
### Motivation
This feature lets people easily merge AdaLoRA adapter weights into the original weights, which makes further fine-tuning possible (i.e. when one wants to resume AdaLoRA training from checkpoints that were already trained with AdaLoRA, resuming training is not possible with unmerged weights).
### Your contribution
I'll submit a PR. I followed the example of IA3 `merge_and_unload`
Following is the overview of change :
```
def _unload_and_optionally_merge(
    self,
    merge: bool = True,
    safe_merge: bool = False,
    adapter_names: Optional[list[str]] = None,
    eps: float = 1e-5
) -> torch.nn.Module:
    """
    This method unloads the AdaLoRA adapter modules and optionally merges them into the base model weights.
    Args:
        merge (`bool`, defaults to `True`):
            If True, merges the adapter weights into base model weights.
            If False, it will only unload the adapters without merging.
        safe_merge (`bool`, defaults to `False`):
            If True, performs the merge operation with extra safety checks.
        adapter_names (`List[str]`, *optional*):
            The list of adapter names to merge. If None, all active adapters will be merged.
        eps (`float`, defaults to 1e-5):
            Small constant for numerical stability when dividing by ranknum.
    Returns:
        model (`torch.nn.Module`):
            The resulting PyTorch model.
    """
    if getattr(self.model, "is_loaded_in_8bit", False):
        raise ValueError("Cannot merge adalora layers when the model is loaded in 8-bit mode")
    if getattr(self.model, "is_loaded_in_4bit", False):
        raise ValueError("Cannot merge adalora layers when the model is loaded in 4-bit mode")
    if adapter_names is not None:
        raise ValueError(f"AdaLoRA does not support merging specific adapters. Got adapter_names={adapter_names}")
    # Create a copy of the base model state dict to modify
    original_state_dict = self.model.state_dict()
    if merge:
        for name, module in self.model.named_modules():
            if hasattr(module, "base_layer") and hasattr(module, "lora_A"):
                # Extract base layer weight name
                layer_name = name.replace(".lora_A", "")
                layer_name = layer_name.replace("base_model.model.", "")
                base_weight_name = f"{layer_name}.weight"
                # Get SVD parameters
                lora_A = module.lora_A["default"]  # [r x d_in]
                lora_B = module.lora_B["default"]  # [d_out x r]
                lora_E = module.lora_E["default"]  # [r x 1]
                # Calculate active ranks
                ranknum = (lora_E != 0).sum()
                scaling = module.scaling["default"] if hasattr(module, "scaling") else 16
                # Safety check if requested
                if safe_merge and (torch.isnan(lora_A).any() or torch.isnan(lora_B).any() or torch.isnan(lora_E).any()):
                    raise ValueError(f"NaN detected in adapter weights for layer {name}")
                # Scale A with E: A' = AE
                scaled_A = lora_A * lora_E  # [r x d_in]
                # Compute update: ΔW = BA'
                if ranknum > 0:
                    update = (lora_B @ scaled_A) * scaling / (ranknum + eps)
                else:
                    update = torch.zeros_like(original_state_dict[base_weight_name])
                # Update base weights
                if base_weight_name in original_state_dict:
                    original_state_dict[base_weight_name] += update
    # Load the merged state dict back into a clean version of the model
    self.model.load_state_dict(original_state_dict)
    return self.model

def merge_and_unload(
    self,
    safe_merge: bool = False,
    adapter_names: Optional[list[str]] = None,
    eps: float = 1e-5
) -> torch.nn.Module:
    """
    Merge the active adapters into the base model and unload the adapters.
    Args:
        safe_merge (`bool`, defaults to `False`):
            If True, performs the merge operation with extra safety checks.
        adapter_names (`List[str]`, *optional*):
            List of adapter names to merge. If None, merges all active adapters.
        eps (`floa
|
https://github.com/huggingface/peft/issues/2322
|
closed
|
[] | 2025-01-12T09:20:01Z
| 2025-01-14T12:47:35Z
| 6
|
DaehanKim
|
huggingface/sentence-transformers
| 3,166
|
How to report a security issue responsibly?
|
I have just found a potential security issue in the repo and want to know how I can report it to your team privately, thanks!
|
https://github.com/huggingface/sentence-transformers/issues/3166
|
closed
|
[] | 2025-01-12T04:24:15Z
| 2025-01-12T08:52:43Z
| null |
zpbrent
|
pytorch/vision
| 8,848
|
ValueError for Image size: Height 480 , Width 854 in RAFT
|
### ๐ Describe the bug
...
device = "cuda" if torch.cuda.is_available() else "cpu"
raft_model = raft_small(pretrained=True, progress=False).to(device)
raft_model = raft_model.eval()
transform = transforms.ToTensor()
with torch.no_grad():
list_of_flows = raft_model(old_batch.to(device), new_batch.to(device))
...
### Versions
Hi there,
I am testing the torchvision.models.optical_flow module raft_small; the code runs fine for image sizes (480, 752) and (800, 848).
However, when I test it on an image of size height 480, width 854, the code throws
```
ValueError: The feature encoder should downsample H and W by 8
```
I debugged the code at https://github.com/pytorch/vision/blob/d3beb52a00e16c71e821e192bcc592d614a490c0/torchvision/models/optical_flow/raft.py#L494
```
fmaps = self.feature_encoder(torch.cat([image1, image2], dim=0))
fmap1, fmap2 = torch.chunk(fmaps, chunks=2, dim=0)
if fmap1.shape[-2:] != (h // 8, w // 8):
raise ValueError("The feature encoder should downsample H and W by 8")
```
**Image size: Height 480 , Width 854**
where `fmap1.shape[-2:]` is `torch.Size([60, 107])` and `h // 8 = 60`, but `w // 8 = 106`, which triggers the ValueError.
I think this issue is related to the output dimension of `self.feature_encoder`. Looking for help, thanks!
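For reference, a minimal sketch of the usual workaround: pad H and W up to the next multiple of 8 before calling the model and crop the padding off the predicted flow afterwards (similar in spirit to the input padding used in the reference RAFT implementation). The random tensors are only there to demonstrate the shape handling; real images should still go through the usual RAFT preprocessing.
```python
import torch
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_small

def pad_to_multiple_of_8(img):
    # F.pad order for the last two dims is (left, right, top, bottom).
    h, w = img.shape[-2:]
    return F.pad(img, (0, (-w) % 8, 0, (-h) % 8)), (h, w)

model = raft_small(pretrained=True, progress=False).eval()
img1 = torch.rand(1, 3, 480, 854)  # the problematic size from the report
img2 = torch.rand(1, 3, 480, 854)

img1_p, (h, w) = pad_to_multiple_of_8(img1)
img2_p, _ = pad_to_multiple_of_8(img2)

with torch.no_grad():
    flows = model(img1_p, img2_p)        # list of flow predictions
flow = flows[-1][..., :h, :w]            # crop the padding back off
print(flow.shape)                        # torch.Size([1, 2, 480, 854])
```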
|
https://github.com/pytorch/vision/issues/8848
|
closed
|
[] | 2025-01-11T18:24:13Z
| 2025-03-18T12:20:48Z
| 1
|
Neoyning
|
pytorch/torchtitan
| 785
|
Why use RowwiseParallel for nn.Embedding instead of ColwiseParallel?
|
Colwise makes the logic a bit more clear. Rowwise splits on the token dimension, leading to confusion on how the different shards handle tokens that are not present within their shard. From a bit of debugging it seems like there is a special case for this somewhere deep in pytorch source code, but I could not find it.
With colwise, the embedding weight matrix is split on the model dim dimension, so all shards have all the tokens, just different parts of the model dim.
https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L133
```
parallelize_module(
model,
tp_mesh,
{
"tok_embeddings": RowwiseParallel(
input_layouts=Replicate(),
output_layouts=Shard(1),
),
```
Can someone provide some insight?
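For concreteness, a hypothetical sketch of the colwise variant described above (this is not torchtitan's actual plan; `model` and `tp_mesh` are the objects from the snippet, and the chosen layouts are assumptions). Note that with `ColwiseParallel` the embedding weight is sharded on the model dim, so the natural output is sharded on the hidden dimension rather than on the sequence dimension that the rowwise plan produces, which is one practical difference downstream layers would have to absorb.
```python
from torch.distributed.tensor.parallel import ColwiseParallel, parallelize_module
from torch.distributed.tensor import Replicate, Shard  # torch.distributed._tensor on older versions

parallelize_module(
    model,
    tp_mesh,
    {
        "tok_embeddings": ColwiseParallel(
            input_layouts=Replicate(),    # every rank sees every token id
            output_layouts=Shard(-1),     # activations sharded on the hidden dim
        ),
    },
)
```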
|
https://github.com/pytorch/torchtitan/issues/785
|
open
|
[
"question"
] | 2025-01-10T15:16:34Z
| 2025-08-21T03:04:35Z
| null |
ghost
|
huggingface/datasets
| 7,365
|
A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas()
|
### Describe the bug
I am interested in creating train, test and eval splits from a pandas DataFrame, so I was looking at the possibilities available. I noticed the `split` parameter and hoped to use it to generate the 3 splits at once; however, while trying to understand the code, I noticed that it has no added value (correct me if I am wrong or misunderstood the code).
`from_pandas` function code:
```python
if info is not None and features is not None and info.features != features:
    raise ValueError(
        f"Features specified in `features` and `info.features` can't be different:\n{features}\n{info.features}"
    )
features = features if features is not None else info.features if info is not None else None
if info is None:
    info = DatasetInfo()
info.features = features
table = InMemoryTable.from_pandas(
    df=df,
    preserve_index=preserve_index,
)
if features is not None:
    # more expensive cast than InMemoryTable.from_pandas(..., schema=features.arrow_schema)
    # needed to support the str to Audio conversion for instance
    table = table.cast(features.arrow_schema)
return cls(table, info=info, split=split)
```
### Steps to reproduce the bug
```python
from datasets import Dataset
# Filling the split parameter with whatever causes no harm at all
data = Dataset.from_pandas(self.raw_data, split='egiojegoierjgoiejgrefiergiuorenvuirgurthgi')
```
### Expected behavior
Would be great if there is no split parameter (if it isn't working), or to add a concrete example of how it can be used.
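For reference, a minimal sketch of how splits can currently be built from a DataFrame (since `split` only names a single split); the DataFrame contents here are placeholders.
```python
import pandas as pd
from datasets import Dataset, DatasetDict

df = pd.DataFrame({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})  # placeholder data

full = Dataset.from_pandas(df)
tmp = full.train_test_split(test_size=0.5, seed=42)           # train / rest
rest = tmp["test"].train_test_split(test_size=0.5, seed=42)   # valid / test

splits = DatasetDict(
    {"train": tmp["train"], "valid": rest["train"], "test": rest["test"]}
)
print(splits)
```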
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.27.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
|
https://github.com/huggingface/datasets/issues/7365
|
open
|
[] | 2025-01-10T13:39:33Z
| 2025-01-10T13:39:33Z
| 0
|
NourOM02
|
pytorch/TensorRT
| 3,351
|
โ [Question] How to install torch_tensorrt corresponding to pytorch tensorrt version
|
For example, I am using PyTorch 2.2.1 and TensorRT 10.2.0. How can I install torch_tensorrt without changing the PyTorch or TensorRT versions?
|
https://github.com/pytorch/TensorRT/issues/3351
|
open
|
[
"question"
] | 2025-01-10T07:12:50Z
| 2025-01-15T23:47:47Z
| null |
swearirh
|
huggingface/peft
| 2,319
|
Import error , is it a version issue?
|
### System Info
When I execute the finetune.py file, an error occurs as follows: cannot import name 'prepare_model_for_int8_training'. Is it a version issue? My version is 0.14.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
cannot import name 'prepare_model_for_int8_training' from 'peft' (/path/python3.10/site-packages/peft/__init__.py)
### Expected behavior
Who can help me answer this question? Thanks!
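A minimal sketch of the likely fix, assuming the script only needs the renamed helper: newer peft releases dropped `prepare_model_for_int8_training` and peft 0.14 exposes `prepare_model_for_kbit_training` instead. The tiny model below is just a stand-in for whatever quantized base model finetune.py loads.
```python
from transformers import AutoModelForCausalLM
from peft import prepare_model_for_kbit_training  # replaces prepare_model_for_int8_training

# Placeholder model; in finetune.py this would be the 8-bit/4-bit quantized base model.
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")
model = prepare_model_for_kbit_training(model)
```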
|
https://github.com/huggingface/peft/issues/2319
|
closed
|
[] | 2025-01-10T02:34:52Z
| 2025-01-13T10:13:18Z
| 3
|
zhangyangniubi
|
pytorch/audio
| 3,870
|
SQUIM running in real-time
|
I applied SQUIM to assess speech quality as a way to correct the direction-of-arrival of a location-based speech enhancement system. [More info here](https://www.sciencedirect.com/science/article/pii/S1051200424005840).
I'm feeding the last 3-second window of the input to SQUIM, every 0.1 seconds. It is able to respond in less than that time: it featured a maximum response time of 0.0704 seconds. Thus, in terms of response time, SQUIM seems to be able to run in real-time.
However, it does seem to struggle to provide a consistent speech quality assessment throughout. I'm using the SI-SDR metric from the objective model. With a speech recording with no enhancement or spatial variation carried out, the ideal behavior would be that SQUIM provided the same SI-SDR measurement through time, but, as can be seen in Figure 2 of the aforementioned paper, it does not. It varies wildly, which required some smoothing to work well with the rest of the system.
So here are my questions:
- Is it possible to modify SQUIM for this type of real-time application? I'm assuming it would need some sort of causalness built into it. Or not? I was actually impressed it was able to provide a workable result without any modification. Maybe a fine-tuning would be enough?
- If so, what steps would you recommend I take to fine-tune SQUIM? I've taken a look at [this paper](https://arxiv.org/pdf/2206.12285) that @nateanl provided to another user who inquired about it (in #3424), but it is still not clear to me how I should proceed.
- Is SQUIM the best alternative for this? I've looked at other techniques for non-reference speech quality assessment, and it seems SQUIM is up there with the best of them for offline applications. But for real-time scenarios, I'm not sure.
Thank you in advance for any help/guidance you can provide. I'm open to help out in any way, if need be, to make SQUIM work better in real-time applications.
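For context, a minimal sketch of the setup described above (objective SQUIM over a sliding 3-second window, updated every 0.1 s, with simple exponential smoothing). The 16 kHz sample rate, the random placeholder waveform, and the smoothing factor are assumptions for illustration only.
```python
import torch
import torchaudio

model = torchaudio.pipelines.SQUIM_OBJECTIVE.get_model().eval()

sample_rate = 16000               # SQUIM objective expects 16 kHz input
window = 3 * sample_rate          # 3-second analysis window
hop = int(0.1 * sample_rate)      # update every 0.1 s
alpha = 0.9                       # arbitrary smoothing factor

waveform = torch.randn(1, 10 * sample_rate)  # placeholder 10-second recording
smoothed = None
with torch.no_grad():
    for start in range(0, waveform.shape[-1] - window + 1, hop):
        chunk = waveform[:, start : start + window]
        stoi, pesq, si_sdr = model(chunk)
        value = si_sdr.item()
        smoothed = value if smoothed is None else alpha * smoothed + (1 - alpha) * value
        print(f"t={start / sample_rate:5.1f}s  si_sdr={value:6.2f}  smoothed={smoothed:6.2f}")
```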
|
https://github.com/pytorch/audio/issues/3870
|
open
|
[] | 2025-01-09T19:43:35Z
| 2025-01-09T19:43:35Z
| 0
|
balkce
|
huggingface/Google-Cloud-Containers
| 138
|
entrypoint.sh for TGI does not implement the requirements.txt installation process
|
Hello team,
Like this sample, https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/pytorch/inference/gpu/2.3.1/transformers/4.46.1/py311/entrypoint.sh
The entrypoint needs a requirements.txt provisioning process.
But this TGI sample does not contain that procedure.
https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/tgi/gpu/3.0.1/entrypoint.sh
Is it missing, or is it handled internally by the text_generation_launcher process?
|
https://github.com/huggingface/Google-Cloud-Containers/issues/138
|
closed
|
[
"question"
] | 2025-01-09T08:09:14Z
| 2025-01-21T07:44:52Z
| null |
jk1333
|
huggingface/lerobot
| 623
|
Why different dimensionality state tensor with n_obs_steps vs not?
|
Curious about a design decision: why not have ACT take a [batch, n_obs_steps, state_dim] tensor and assert that n_obs_steps is 1, instead of [batch, state_dim]?
Currently, we have to detect different dimensionality and handle when we're writing policy-agnostic code
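For illustration, a minimal sketch of the kind of shim policy-agnostic code ends up writing today (tensor names and dimensions are made up for the example):
```python
import torch

def as_obs_sequence(state: torch.Tensor, n_obs_steps: int = 1) -> torch.Tensor:
    """Normalize state to [batch, n_obs_steps, state_dim] regardless of policy."""
    if state.dim() == 2:            # ACT-style [batch, state_dim]
        state = state.unsqueeze(1)  # -> [batch, 1, state_dim]
    assert state.shape[1] == n_obs_steps
    return state

state = torch.randn(8, 14)           # e.g. batch of 8, 14-dim proprio state
print(as_obs_sequence(state).shape)  # torch.Size([8, 1, 14])
```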
|
https://github.com/huggingface/lerobot/issues/623
|
closed
|
[
"question",
"policies",
"stale"
] | 2025-01-08T18:16:51Z
| 2025-10-19T02:32:27Z
| null |
genemerewether
|
pytorch/TensorRT
| 3,348
|
โ [Question] How to save tensorrt engine ?
|
## โ Question
<!-- Your question -->
I have already saved the torch.jit model and inference with the PyTorch backend works, but I have tried to find examples in the project and issues and cannot find any case, code, example, or tutorial showing how to save a TensorRT engine so it can be run by the TensorRT backend. Can you help me?
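A heavily hedged sketch of one route that may work for TorchScript-compiled modules, using `convert_method_to_trt_engine` (please check that your Torch-TensorRT build exposes it and accepts these arguments); the tiny model and input spec are placeholders for the real ones.
```python
import torch
import torch_tensorrt

# Placeholder TorchScript module; in practice this is the model you already scripted/traced.
model = torch.jit.script(torch.nn.Sequential(torch.nn.Linear(8, 4)).eval().cuda())
inputs = [torch_tensorrt.Input((1, 8), dtype=torch.float32)]

# Returns a serialized TensorRT engine that the plain TensorRT runtime can deserialize.
serialized_engine = torch_tensorrt.convert_method_to_trt_engine(
    model, "forward", inputs=inputs, enabled_precisions={torch.float32}
)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```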
|
https://github.com/pytorch/TensorRT/issues/3348
|
open
|
[
"question"
] | 2025-01-08T12:24:23Z
| 2025-01-08T15:10:27Z
| null |
lzcchl
|
huggingface/diffusers
| 10,496
|
NF4 quantized flux models with loras
|
Is there any update here? With NF4-quantized Flux models, I could not use any LoRA.
> **Update**: NF4 serialization and loading are working fine. @DN6 let's brainstorm how we can support it more easily? This would help us unlock doing LoRAs on the quantized weights, too (cc: @BenjaminBossan for PEFT). I think this will become evidently critical for larger models.
>
> `transformers` has a nice reference for us to follow. Additionally, `accelerate` has: https://huggingface.co/docs/accelerate/en/usage_guides/quantization, but it doesn't support NF4 serialization yet.
>
> Cc: @SunMarc for jamming on this together.
>
> _Originally posted by @sayakpaul in https://github.com/huggingface/diffusers/issues/9165#issuecomment-2287694518_
>
|
https://github.com/huggingface/diffusers/issues/10496
|
closed
|
[] | 2025-01-08T11:41:01Z
| 2025-01-13T19:42:03Z
| 12
|
hamzaakyildiz
|
pytorch/torchchat
| 1,453
|
Unabled to import torchao experimental quant_api
|
### ๐ Describe the bug
So I tried to export my model and quantize it into a .pte file using this command:
python3 torchchat.py export llama3.2-1b-instruct --quantize torchchat/quant_config/mobile.json --output-pte-path llama3.2_1b_instruct.pte
Before doing this, I had already activated the venv and the executorch env,
but I got this error:
PyTorch version 2.6.0.dev20241218+cpu available.
Unabled to import torchao experimental quant_api with error: [Errno 2] No such file or directory: '/home/-/torchchat/torchao-build/src/ao/torchao/experimental/quant_api.py'
Using device=cpu
Setting max_seq_length to 128 for ExecuTorch export.
Loading model...
Time to load model: 1.25 seconds
Quantizing the model with: {'embedding': {'bitwidth': 4, 'groupsize': 32}, 'linear:a8w4dq': {'groupsize': 256}}
Killed
I tried to find torchao:
Name: torchao
Version: 0.8.0+git2e032c6b
Summary: Package for applying ao techniques to GPU models
Home-page: https://github.com/pytorch-labs/ao
Author:
Author-email:
License:
Location: /home/-/.pyenv/versions/3.10.0/lib/python3.10/site-packages
Requires:
Required-by:
I think maybe this is the problem. I want to know how I can change torchchat to look for the module in /home/-/.pyenv/versions/3.10.0/lib/python3.10/site-packages instead of /home/-/torchchat/torchao-build/src/ao/torchao/experimental/quant_api.py
### Versions
PyTorch version: 2.6.0.dev20241218+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.0 (default, Jan 4 2025, 09:08:08) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4050 Laptop GPU
Nvidia driver version: 555.99
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of rele
|
https://github.com/pytorch/torchchat/issues/1453
|
closed
|
[] | 2025-01-08T11:05:43Z
| 2025-01-10T12:43:55Z
| 1
|
Arthamna
|
pytorch/torchchat
| 1,452
|
Why Torchchat uses MATH as SDPA backend?
|
### ๐ Describe the bug
Hi maintainers,
I find that Torchchat uses MATH as the SDPA backend in https://github.com/pytorch/torchchat/blob/main/torchchat/generate.py#L542. However, other libraries like vLLM use flash attention as the default backend.
So why does Torchchat use MATH as the default backend? Is this required for accuracy? If not, I can help add an argument to let the user set the backend. Thanks!
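For context, a minimal sketch of how a caller can opt into flash attention with PyTorch's SDPA context manager (torch 2.4-style API; the tensors are dummies just to show the mechanism, not Torchchat's actual code path):
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

# Allow flash attention, falling back to the efficient kernel, instead of MATH.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)
```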
### Versions
*
|
https://github.com/pytorch/torchchat/issues/1452
|
closed
|
[
"enhancement",
"triaged"
] | 2025-01-08T08:40:03Z
| 2025-01-22T01:57:41Z
| 8
|
yanbing-j
|
huggingface/diffusers
| 10,489
|
Bug in SanaPipeline example?
|
### Describe the bug
I think there might be something wrong with the `SanaPipeline` example code at https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline
It results in a shape mismatch (see detailed logs below): `mat1 and mat2 shapes cannot be multiplied (600x256000 and 2304x1152)`
I've noticed that the `text_encoder` model looks different depending on the way it is loaded.
* If I **load it with the official example code** (=code in `Reproduction`), `pipeline.text_encoder` looks like this:
```
Gemma2ForCausalLM(
(model): Gemma2Model(
(embed_tokens): Embedding(256000, 2304, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x Gemma2DecoderLayer(
(self_attn): Gemma2Attention(
(q_proj): Linear(in_features=2304, out_features=2048, bias=False)
(k_proj): Linear(in_features=2304, out_features=1024, bias=False)
(v_proj): Linear(in_features=2304, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=2304, bias=False)
(rotary_emb): Gemma2RotaryEmbedding()
)
(mlp): Gemma2MLP(
(gate_proj): Linear(in_features=2304, out_features=9216, bias=False)
(up_proj): Linear(in_features=2304, out_features=9216, bias=False)
(down_proj): Linear(in_features=9216, out_features=2304, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(pre_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
)
)
(norm): Gemma2RMSNorm((2304,), eps=1e-06)
)
(lm_head): Linear(in_features=2304, out_features=256000, bias=False)
)
```
If however I **don't load the components separately** but with the code provided by @lawrence-cj [here](https://github.com/huggingface/diffusers/issues/10334#issuecomment-2558359268) it 1) works and 2) the `text_encoder` looks different:
```
Gemma2Model(
(embed_tokens): Embedding(256000, 2304, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x Gemma2DecoderLayer(
(self_attn): Gemma2Attention(
(q_proj): Linear(in_features=2304, out_features=2048, bias=False)
(k_proj): Linear(in_features=2304, out_features=1024, bias=False)
(v_proj): Linear(in_features=2304, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=2304, bias=False)
(rotary_emb): Gemma2RotaryEmbedding()
)
(mlp): Gemma2MLP(
(gate_proj): Linear(in_features=2304, out_features=9216, bias=False)
(up_proj): Linear(in_features=2304, out_features=9216, bias=False)
(down_proj): Linear(in_features=9216, out_features=2304, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(pre_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
)
)
(norm): Gemma2RMSNorm((2304,), eps=1e-06)
)
```
-> the language modeling head `lm_head` is gone. I guess that's all expected (?) but I haven't found any documentation of this behaviour or where in the pipeline code this happens.
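If the pipeline's `text_encoder` component really is the bare `Gemma2Model` (no LM head), then presumably the separately loaded encoder should be built with that class rather than `AutoModelForCausalLM`. A sketch under that assumption:
```python
import torch
from transformers import Gemma2Model

# Load the text encoder as the bare Gemma2Model (no lm_head), matching what
# SanaPipeline.from_pretrained() appears to construct internally.
text_encoder = Gemma2Model.from_pretrained(
    "Efficient-Large-Model/Sana_600M_1024px_diffusers",
    subfolder="text_encoder",
    torch_dtype=torch.float16,
)
print(type(text_encoder).__name__)  # Gemma2Model
```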
### Reproduction
```python
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModelForCausalLM
quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModelForCausalLM.from_pretrained(
"Efficient-Large-Model/Sana_600M_1024px_diffusers",
subfolder="text_encoder",
# quantization_config=quant_config,
torch_dtype=torch.float16,
)
quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SanaTransformer2DModel.from_pretrained(
"Efficient-Large-Model/Sana_600M_1024px_diffusers",
subfolder="transformer",
# quantization_config=quant_config,
torch_dtype=torch.float16,
)
pipeline = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_600M_1024px_diffusers",
text_encoder=text_encoder_8bit,
transformer=transformer_8bit,
torch_dtype=torch.float16,
device_map="balanced",
)
prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("sana.png")
```
I am loading without `quantization_config` because for some reason it does not work on my Mac, but I tried the same code on a 4090 and it fails there too.
### Logs
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[5], line 30
|
https://github.com/huggingface/diffusers/issues/10489
|
closed
|
[
"bug"
] | 2025-01-07T17:14:27Z
| 2025-01-08T05:18:05Z
| 2
|
geronimi73
|
pytorch/pytorch
| 144,324
|
FSDP: How to support w8a8 quantization?
|
### ๐ Describe the bug
I replaced nn.Linear with QuantLinear, substituting the nn.Linear operator with an int8 quantized operator.
```python
act_tensor_int8, pertoken_scale = torch_npu.npu_dynamic_quant(x)
quant_out = torch_npu.npu_quant_matmul(
    act_tensor_int8,
    self.weight.to(torch.int8),
    self.weight_scale,  # weight scale
    offset=None,
    bias=self.bias,
    pertoken_scale=pertoken_scale,
    output_dtype=torch.bfloat16,
)
```
This change has achieved performance gains on a single GPU. However, when wrapped with FSDP (Fully Sharded Data Parallel) on multiple GPUs,
```python
model_fsdp = FullyShardedDataParallel(model, **settings)
```
it fails to run because FSDP performs parameter sharding and cannot handle this quantized operator. The error message is as follows:
[rank4]: RuntimeError: call aclnnQuantMatmulV4 failed, detail:E69999: Inner Error!
[rank4]: E69999: [PID: 1182939] 2025-01-07-17:15:19.281.742 op[QuantBatchMatmulV3], [InferShape] dimensions a(12608) and b(128) must be equal[FUNC:InferNDimWithBias][FILE:matmul_infer_fns.cc][LINE:322]
Do you have any good solutions for this issue?
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang @penguinwu
|
https://github.com/pytorch/pytorch/issues/144324
|
closed
|
[
"triaged",
"module: fsdp",
"oncall: pt2"
] | 2025-01-07T13:17:02Z
| 2025-07-02T08:19:36Z
| null |
Lenan22
|
huggingface/distil-whisper
| 164
|
How to finetune distil-whisper/distil-large-v2 model?
|
How to finetune distil-whisper/distil-large-v2 model?
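For orientation, a minimal sketch of a single supervised training step with dummy inputs, just to show the moving parts; the full recipe (dataset preparation, data collator, `Seq2SeqTrainer`) follows the usual Whisper fine-tuning guides. The 80 mel bins × 3000 frames shape is an assumption for a large-v2-based checkpoint, and the dummy features/labels are placeholders for real data.
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model = WhisperForConditionalGeneration.from_pretrained("distil-whisper/distil-large-v2")
processor = WhisperProcessor.from_pretrained("distil-whisper/distil-large-v2")

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Dummy batch: log-mel features [batch, 80, 3000] and token labels.
input_features = torch.randn(1, 80, 3000)
labels = processor.tokenizer(" hello world", return_tensors="pt").input_ids

loss = model(input_features=input_features, labels=labels).loss
loss.backward()
optimizer.step()
print(float(loss))
```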
|
https://github.com/huggingface/distil-whisper/issues/164
|
open
|
[] | 2025-01-07T12:59:42Z
| 2025-01-07T13:00:59Z
| null |
dhattareddy
|
pytorch/xla
| 8,541
|
Slow XLA training performance.
|
## โ Questions and Help
I'm evaluating PyTorch/XLA for training, but noticed that there is a big degradation in performance compared to the native PyTorch device. Is it a known problem, or is there a problem with the way I use PyTorch/XLA? I tested a simple MNIST training example, comparing the performance between the PyTorch CUDA device and the XLA CUDA device. The native CUDA device is twice as fast.
Appreciate any thoughts, suggestions or links to known performance issues, thanks!
### Environment
note: there is no difference in performance measurements with the latest 2.5.0
- torch 2.4.0
- torch-xla 2.4.0
- torch_xla_cuda_plugin 2.4.0.dev20240902
- torchvision 0.19.0
### How To Reproduce
Run the test program with `xla = True` and `xla = False`
``` python
import os
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch_xla.core.xla_model as xm
def get_device(xla):
if xla:
os.environ["PJRT_DEVICE"] = "CUDA"
os.environ["GPU_NUM_DEVICES"] = "1"
import torch_xla_cuda_plugin
from torch_xla.experimental import plugins
import torch_xla.runtime as xr
plugins.use_dynamic_plugins()
plugins.register_plugin('CUDA', torch_xla_cuda_plugin.CudaPlugin())
xr.set_device_type('CUDA')
device = xm.xla_device(devkind="CUDA")
else:
device = torch.device('cuda:0')
os.environ["PJRT_DEVICE"] = "CUDA"
os.environ["GPU_NUM_DEVICES"] = "1"
return device
xla = True
device = get_device(xla)
print(f"Using device: {device}")
class SimpleNN(nn.Module):
def __init__(self):
super(SimpleNN, self).__init__()
self.fc1 = nn.Linear(28 * 28, 512) # number of neurons
self.fc2 = nn.Linear(512, 256) # number of neurons
self.fc3 = nn.Linear(256, 10) # Output layer (10 classes for digits 0-9)
def forward(self, x):
x = x.view(-1, 28 * 28) # Flatten the image
x = torch.relu(self.fc1(x)) # Apply ReLU activation
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return x
# Load the MNIST dataset and apply transformations
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)) # Normalize to [-1, 1]
])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
# Initialize the model and move it to the device
model = SimpleNN().to(device)
# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training loop, 20 epochs
for epoch in tqdm(range(20)):
model.train() # Set the model to training mode
running_loss = 0.0
for data, target in tqdm(train_loader):
data, target = data.to(device), target.to(device) # Move data to the device
optimizer.zero_grad() # Zero the gradients
output = model(data) # Get model predictions
loss = criterion(output, target) # Compute the loss
loss.backward() # Backpropagate the gradients
optimizer.step() # Update model parameters
running_loss += loss.item()
if xla:
xm.mark_step()
print(f'Epoch {epoch + 1}, Loss: {running_loss / len(train_loader)}')
# Test the model
model.eval()
correct = 0
total = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device) # Move data to CUDA device
output = model(data)
_, predicted = torch.max(output, 1)
total += target.size(0)
correct += (predicted == target).sum().item()
print(f'Accuracy: {100 * correct / total}%')
```
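Not a guaranteed fix, but two common torch_xla adjustments worth trying against the training loop above: wrap the loader in `MpDeviceLoader` (which also inserts the per-batch `mark_step`) and avoid the per-step `.item()` call, which forces a device sync every batch. A sketch:
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

# Wrap the existing DataLoader so host->device transfer overlaps with compute
# and xm.mark_step() is issued automatically after each batch.
device_loader = pl.MpDeviceLoader(train_loader, device)

for epoch in range(20):
    model.train()
    running_loss = 0.0
    for data, target in device_loader:   # batches arrive already on the XLA device
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        xm.optimizer_step(optimizer)      # reduces grads (no-op on one device) and steps
        running_loss += loss.detach()     # keep it lazy; no .item() sync per batch
    # One sync per epoch instead of one per step.
    print(f"Epoch {epoch + 1}, Loss: {running_loss.item() / len(train_loader)}")
```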
|
https://github.com/pytorch/xla/issues/8541
|
open
|
[
"performance",
"xla:gpu"
] | 2025-01-07T09:49:12Z
| 2025-02-11T13:50:46Z
| 4
|
tzstoyanov
|
huggingface/doc-builder
| 539
|
How to Deploy huggingface/doc-builder Artifacts to GitHub Pages?
|
Hi,
I am currently working with the `huggingface/doc-builder` and I'm looking to deploy the generated documentation artifacts to GitHub Pages. Could you provide guidance or best practices on how to achieve this?
Specifically, I am interested in understanding:
1. The steps required to configure the deployment process.
2. Any necessary settings or configurations within GitHub Pages.
3. Common pitfalls or issues to be aware of during deployment.
Thank you for your assistance!
|
https://github.com/huggingface/doc-builder/issues/539
|
open
|
[] | 2025-01-07T08:37:05Z
| 2025-01-07T08:37:05Z
| null |
shunk031
|
huggingface/peft
| 2,310
|
Comparison of Different Fine-Tuning Techniques for Conversational AI
|
### Feature request
It would be incredibly helpful to have a clear comparison or support for various fine-tuning techniques specifically for conversational AI. This feature could include insights into their strengths, limitations, and ideal use cases, helping practitioners choose the right approach for their needs.
Here's a list of techniques to consider:
LoRa
AdaLoRa
BONE
VeRa
XLora
LN Tuning
VbLora
HRA (Hyperparameter Regularization Adapter)
IA3 (Input-Aware Adapter)
Llama Adapter
CPT (Conditional Prompt Tuning), etc.
### Motivation
With the growing number of fine-tuning techniques for conversational AI, it can be challenging to identify the most suitable approach for specific use cases. A comprehensive comparison of these techniquesโhighlighting their strengths, limitations, and ideal scenariosโwould save time, reduce trial-and-error, and empower users to make informed decisions. This feature would bridge the gap between research and practical application, enabling more effective model customization and deployment.
### Your contribution
I'd be happy to collaborate on this! While I might not have a complete solution right now, I'm willing to contribute by gathering resources, reviewing papers, or helping organize comparisons. If others are interested in teaming up, we could work together on a PR to make this feature happen. Let's connect and brainstorm how we can tackle this effectively!
|
https://github.com/huggingface/peft/issues/2310
|
open
|
[
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-07T07:07:50Z
| 2025-12-15T09:58:10Z
| 44
|
ImamaDev
|
huggingface/smolagents
| 83
|
How to save/extract executed code
|
Is it possible to save the executed code? It's already in the log. It will be very useful.
ex.
```
╭─ Executing this code: ──────────────────────────────────────────────────────────────────────────────╮
  1 attractions_list = [
  2     ["Attraction", "Description"],
  3     ["Sensoji Temple", "The oldest temple in Tokyo, offering beautiful architecture and a rich history."],
  4     ["Nakamise Shopping Street", "A historic shopping street with souvenirs and traditional snacks."],
  5     ["Kibi Dango", "A traditional rice cake snack available at Nakamise Street."],
  6     ["Asakusa Jinja", "A historic Shinto shrine that survived the bombings during WWII."],
  7     ["Kimono Experience", "Rent a kimono and walk around Asakusa."],
  8     ["Asakusa Culture Tourist Information Center", "A building with unique architecture, great for photos."],
  9     ["Tokyo Skytree", "The tallest structure in Tokyo, offering panoramic views."],
 10     ["Hanayashiki", "Japan's oldest amusement park with nostalgic charm."],
 11     ["Demboin Garden", "A serene Japanese garden adjacent to Sensoji Temple."],
 12     ["Azuma-bashi Bridge", "An iconic bridge offering views of the Tokyo Skytree."]
 13 ]
 14
 15 # Convert the list to CSV format (string)
 16 csv_data = "\n".join([",".join(row) for row in attractions_list])
 17
 18 # Save the CSV data to file
 19 save_csv(data=csv_data, filename='asakusa_trip.csv')
╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
|
https://github.com/huggingface/smolagents/issues/83
|
closed
|
[] | 2025-01-06T15:40:17Z
| 2025-02-16T17:43:40Z
| null |
Lodimup
|
huggingface/diffusers
| 10,475
|
[SD3]The quality of the images generated by the inference is not as high as on the validation set during fine-tuning?
|
### Describe the bug
Why is the quality of the images I generate with `StableDiffusion3Pipeline` not as good as the quality of the validation images in the log generated when fine-tuning with dreambooth_lora?
Maybe I need some other plugin or parameter setting to maintain the same image quality as the validation set?
### Reproduction
```
# Here is my inference code:
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained('./diffusers/stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipe.load_lora_weights("./my_path/pytorch_lora_weights.safetensors", adapter_name="test_lora")
img = pipe(
"my prompt...",
generator=torch.manual_seed(1),
num_inference_steps=40,
guidance_scale=6
).images[0].save('/root/my_img.png')
```
### Logs
_No response_
### System Info
Diffuser Version: stable-diffusion-3-medium
CUDA Version: 12.4
GPU: NVIDIA A800 80GB
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/10475
|
closed
|
[
"bug",
"stale"
] | 2025-01-06T14:52:57Z
| 2025-02-06T12:17:47Z
| 8
|
ytwo-hub
|
huggingface/datasets
| 7,356
|
How about adding a feature to pass the key when performing map on DatasetDict?
|
### Feature request
Add a feature to pass the key of the DatasetDict when performing map
### Motivation
I often preprocess using map on DatasetDict.
Sometimes, I need to preprocess train and valid data differently depending on the task.
So, I thought it would be nice to pass the key (like train, valid) when performing map on DatasetDict.
What do you think?
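For comparison, a minimal sketch of today's workaround, which is roughly what the proposed feature would automate: map each split separately and pass the key through `fn_kwargs` (the tiny datasets here are placeholders).
```python
from datasets import Dataset, DatasetDict

dsdict = DatasetDict(
    {
        "train": Dataset.from_dict({"text": ["a", "b"]}),
        "valid": Dataset.from_dict({"text": ["c"]}),
    }
)

def preprocess(example, split):
    # Branch on the split name, e.g. only transform the training data.
    example["text"] = example["text"].upper() if split == "train" else example["text"]
    return example

dsdict = DatasetDict(
    {name: ds.map(preprocess, fn_kwargs={"split": name}) for name, ds in dsdict.items()}
)
print(dsdict["train"][0], dsdict["valid"][0])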
### Your contribution
I can submit a pull request to add the feature to pass the key of the DatasetDict when performing map.
|
https://github.com/huggingface/datasets/issues/7356
|
closed
|
[
"enhancement"
] | 2025-01-06T08:13:52Z
| 2025-03-24T10:57:47Z
| null |
jp1924
|
huggingface/diffusers
| 10,468
|
What is accelerate_ds2.yaml๏ผ
|
I can't find an accelerate config file named "accelerate_ds2.yaml".
Please give me the file.
Thanks very much!
|
https://github.com/huggingface/diffusers/issues/10468
|
closed
|
[] | 2025-01-06T07:53:06Z
| 2025-01-12T05:32:01Z
| null |
aa327chenge
|
huggingface/transformers
| 35,523
|
How about adding a combined step and epoch feature to save_strategy?
|
### Feature request
Add epoch+steps functionality to save_strategy
### Motivation
I often set save_strategy to epoch for saving, but sometimes I need to run experiments with steps.
Recently, I had to compare checkpoints saved at both epoch and step intervals, which required running the experiment twice and was quite cumbersome. Having a combined feature would be really helpful. What do you think?
### Your contribution
I can add the epoch+steps functionality to save_strategy.
|
https://github.com/huggingface/transformers/issues/35523
|
closed
|
[
"Feature request"
] | 2025-01-06T02:21:22Z
| 2025-02-17T00:02:42Z
| null |
jp1924
|
huggingface/transformers
| 35,512
|
Perhaps your features (`videos` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
|
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.46.1
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@ArthurZucker
class BatchEncoding(UserDict):
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
<s> [INST] What are the names of some famous actors that started their careers on Broadway? [/INST] Some famous actors that started their careers on Broad[78/1906]
de:
1. Hugh Jackman
2. Meryl Streep
3. Denzel Washington
4. Julia Roberts
5. Christopher Walken
6. Anthony Rapp
7. Audra McDonald
8. Nathan Lane
9. Sarah Jessica Parker
10. Lin-Manuel Miranda</s>
label_ids:
[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2909, 8376, 16760, 369,
2774, 652, 26072, 356, 24331, 3024, 28747, 28705, 13, 28740, 28723, 22389, 4299, 1294, 28705, 13, 28750, 28723, 351, 1193, 28714, 4589, 615, 28705, 13, 28770, 2872
3, 4745, 10311, 5924, 28705, 13, 28781, 28723, 19526, 18021, 28705, 13, 28782, 28723, 17561, 9863, 269, 28705, 13, 28784, 28723, 15089, 399, 763, 28705, 13, 28787,
28723, 14421, 520, 25999, 28705, 13, 28783, 28723, 20514, 19029, 28705, 13, 28774, 28723, 12642, 24062, 19673, 28705, 13, 28740, 28734, 28723, 6678, 28733, 2356,
3009, 9154, 5904, 2]
labels:
Some famous actors that started their careers on Broadway include:
1. Hugh Jackman
2. Meryl Streep
3. Denzel Washington
4. Julia Roberts
5. Christopher Walken
|
https://github.com/huggingface/transformers/issues/35512
|
closed
|
[
"bug"
] | 2025-01-05T06:51:26Z
| 2025-02-13T08:45:39Z
| null |
yxy-kunling
|
huggingface/diffusers
| 10,452
|
pipe.disable_model_cpu_offload
|
**Is your feature request related to a problem? Please describe.**
If I enable the following in a Gradio interface
sana_pipe.enable_model_cpu_offload()
and during the next generation I want to disable CPU offload, how do I do it? I mentioned Gradio specifically because command-line inference will not have this problem unless, after initializing the pipe, you generate multiple times with and without CPU offload.
I already searched but found nothing:
https://github.com/search?q=repo%3Ahuggingface%2Fdiffusers%20disable_model_cpu_offload&type=code
**Describe the solution you'd like.**
Add methods to disable:
1. enable_model_cpu_offload()
2. enable_sequential_cpu_offload()
**Describe alternatives you've considered.**
I will have to delete the pipe completely and load it again for each inference in the Gradio UI.
Kindly suggest an alternative solution if there is one.
```
import torch
from diffusers import SanaPipeline
pipe = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.float32
)
pipe.to("cuda")
pipe.text_encoder.to(torch.bfloat16)
pipe.transformer = pipe.transformer.to(torch.bfloat16)
pipe.enable_model_cpu_offload()
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0]
image[0].save("output.png")
pipe.disable_model_cpu_offload()
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana 1"')[0]
image[0].save("output1.png")
```
P.S. How do I delete a pipe completely so that all models are removed and GPU memory is freed?
I did check the documentation but was unable to find anything relevant:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py
https://github.com/huggingface/diffusers/blob/4e44534845d35248436abf87688906f52e71b868/src/diffusers/pipelines/pipeline_utils.py
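For the P.S., a minimal sketch of the usual way to drop a pipeline and reclaim VRAM; this is plain Python/CUDA housekeeping rather than a dedicated Diffusers API, and it assumes no other references to the pipeline or its sub-models are kept around.
```python
import gc
import torch

# Drop all references to the pipeline (and any sub-modules you stored separately).
del pipe
gc.collect()
torch.cuda.empty_cache()
print(torch.cuda.memory_allocated() / 1024**2, "MiB still allocated")
```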
|
https://github.com/huggingface/diffusers/issues/10452
|
closed
|
[] | 2025-01-04T16:39:01Z
| 2025-01-07T08:29:32Z
| 3
|
nitinmukesh
|
huggingface/diffusers
| 10,448
|
Load DDUF file with Diffusers using mmap
|
DDUF support in Diffusers is there, and DDUF supports mmap.
But the Diffusers example doesn't use or expose mmap.
How can I load a DDUF file into Diffusers with mmap?
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(
"photo a cat holding a sign that says Diffusers", num_inference_steps=50, guidance_scale=3.5
).images[0]
image.save("cat.png")
```
|
https://github.com/huggingface/diffusers/issues/10448
|
open
|
[
"stale"
] | 2025-01-04T00:42:09Z
| 2025-02-03T15:02:46Z
| 1
|
adhikjoshi
|
huggingface/lerobot
| 613
|
Starting off with pretrained models
|
Are there any pretrained models available that can be fine-tuned using our own dataset for tasks like pick-and-place and manipulation?
|
https://github.com/huggingface/lerobot/issues/613
|
closed
|
[
"question",
"stale"
] | 2025-01-03T21:09:40Z
| 2025-10-08T20:53:09Z
| null |
rabhishek100
|
huggingface/optimum
| 2,148
|
Support for Exporting Specific Sub-Modules (e.g., Encoder, Decoder)
|
### Feature request
Currently, when converting transformer models (like T5, but potentially others) to ONNX using the Optimum library, it appears to generate a single ONNX file encompassing the entire model architecture (both encoder and decoder). This occurs regardless of the specific task option selected during conversion.
```
optimum-cli export onnx --model . . --task text-classification
optimum-cli export onnx --model . . --task feature-extraction
```
I propose a feature that provides users with more granular control over the ONNX export process. Specifically, this feature should allow users to selectively export specific sub-modules of a transformer model, such as:
* Only the encoder
* Only the decoder
* Potentially other distinct components of the model
This enhancement would enable users to optimize ONNX models for specific use cases where only a portion of the full model is required.
Evidence of the feasibility and need for this is the existence of separately exported encoder and decoder ONNX models for various transformer architectures on Hugging Face:
- https://huggingface.co/dmmagdal/flan-t5-large-onnx-js/tree/main/onnx
- https://huggingface.co/onnx-community/Florence-2-base-ft/tree/main/onnx
### Motivation
I am encountering a limitation with the current ONNX export functionality in Optimum. When converting transformer models, the resulting ONNX file invariably includes the entire model, even when I only require a specific part, like the encoder.
This is frustrating because:
* **Increased Model Size:** The generated ONNX model is larger than necessary, consuming more storage and potentially impacting loading times.
* **Performance Overhead:** When deploying the ONNX model for tasks that only utilize a specific sub-module (e.g., using only the encoder for embedding generation), the presence of the unnecessary decoder can introduce performance overhead.
* **Lack of Flexibility:** The current approach lacks the flexibility to tailor the exported ONNX model to specific application needs.
As observed on Hugging Face, users have successfully exported individual components (like encoders and decoders) of various transformer models to ONNX. This indicates that it's technically possible and a desirable workflow. The Optimum library should provide a more direct and user-friendly way to achieve this without requiring manual workarounds.
### Your contribution
While my direct expertise in the internal workings of the Optimum library for ONNX export is limited, I am willing to contribute by:
* **Testing:** Thoroughly testing any implementation of this feature on various transformer models.
* **Providing Feedback:** Offering detailed feedback on the usability and effectiveness of the new feature.
* **Sharing Use Cases:** Providing specific use cases and examples that highlight the benefits of this functionality.
|
https://github.com/huggingface/optimum/issues/2148
|
closed
|
[
"Stale"
] | 2025-01-03T14:48:36Z
| 2025-04-08T02:09:03Z
| 4
|
happyme531
|
pytorch/vision
| 8,836
|
Question: Modify Resnet File structure and how to import it
|
Hi, I would like to modify the structure of the [Resnet50](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) model. My goal is neither to add nor to remove layers, only to replace the convolutions that the code performs with the PyTorch `nn.Conv2d` function with convolutions performed by the NVIDIA CUTLASS library (https://github.com/NVIDIA/cutlass/blob/main/examples/python/02_pytorch_extension_grouped_gemm.ipynb).
I don't intend to retrain or modify weights, only to substitute the convolution calls with calls to a CUTLASS convolution, in a similar way to how I describe it on the PyTorch forum: https://discuss.pytorch.org/t/using-diffetent-conv2d-ops-with-pre-trained-models/214367
My question is whether this is possible within the guidelines of the repository, and, if so, how I can import the [Resnet50](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) file from another file or from PyTorch, as would be done with https://pytorch.org/vision/main/models.html.
Thanks.
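For what it's worth, a minimal sketch of importing the torchvision model and swapping every `nn.Conv2d` for a drop-in wrapper while reusing the pretrained weights. `WrappedConv2d` is a placeholder for the CUTLASS-backed op; here it just calls the original module so the sketch stays runnable.
```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class WrappedConv2d(nn.Module):
    """Stand-in for a CUTLASS-backed convolution; reuses the pretrained nn.Conv2d."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv  # keeps the pretrained weight/bias and conv hyperparameters

    def forward(self, x):
        # A real version would call the CUTLASS kernel here, using
        # self.conv.weight, self.conv.bias, stride, padding, etc.
        return self.conv(x)

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()

# Swap every Conv2d for the wrapper without touching the weights or the architecture.
for module in list(model.modules()):
    for child_name, child in list(module.named_children()):
        if isinstance(child, nn.Conv2d):
            setattr(module, child_name, WrappedConv2d(child))

with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```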
|
https://github.com/pytorch/vision/issues/8836
|
closed
|
[] | 2025-01-03T12:43:50Z
| 2025-04-08T15:45:32Z
| null |
IzanCatalan
|
huggingface/smolagents
| 52
|
How to implement human in the loop?
|
How to implement human in the loop?
There are two scenarios: one where more information and input from the user are required, and another where the user's consent is needed to perform a certain action.
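One pattern is to expose the human as tools the agent can call: one for extra input, one for consent. A sketch using smolagents' `@tool` decorator (the exact agent wiring may differ by version, and the `CodeAgent` line is only indicative):
```python
from smolagents import tool

@tool
def ask_user(question: str) -> str:
    """Ask the human operator for missing information.

    Args:
        question: The question to show the user.
    """
    return input(f"[agent] {question}\n> ")

@tool
def request_approval(action: str) -> bool:
    """Ask the human operator to approve an action before it is performed.

    Args:
        action: Description of the action that needs consent.
    """
    return input(f"[agent] OK to {action}? (y/n) ").strip().lower() == "y"

# These tools can then be passed to the agent, e.g. (assuming a configured model):
# agent = CodeAgent(tools=[ask_user, request_approval], model=model)
```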
|
https://github.com/huggingface/smolagents/issues/52
|
closed
|
[] | 2025-01-03T12:19:01Z
| 2025-02-18T18:49:15Z
| null |
waderwu
|
huggingface/lerobot
| 611
|
Can ACT policy support pushT task?
|
I want to train the ACT policy with the PushT dataset, but the evaluation accuracy is only 0%.

Here is my yaml
[act_pusht.txt](https://github.com/user-attachments/files/18299197/act_pusht.txt)
And my training command is
```
python lerobot/scripts/train.py \
hydra.run.dir=outputs/train/2025_1_3_1654_act_pusht \
hydra.job.name=act_pusht \
policy=act_pusht \
policy.use_vae=true \
env=pusht \
env.task=PushT-v0 \
dataset_repo_id=lerobot/pusht \
training.offline_steps=50000 \
training.save_freq=25000 \
training.eval_freq=5000 \
eval.n_episodes=50 \
wandb.enable=false \
device=cuda
```
|
https://github.com/huggingface/lerobot/issues/611
|
closed
|
[
"question",
"policies",
"stale"
] | 2025-01-03T11:30:40Z
| 2025-10-19T02:32:28Z
| null |
Kimho666
|
pytorch/tutorials
| 3,211
|
๐ก [REQUEST] - Making the tutorial more coherent
|
### ๐ Describe the improvement or the new tutorial
The 3-part tutorial series (linked in the existing tutorial set) is disconnected in terms of concepts being introduced and reused; for example:
- the "Dataset", which is introduced in the first tutorial but is not leveraged in the next;
- intricate details, like an explanation of the use of `torch.LongTensor`, are skipped in part 2 (generation).
I wish to modify the tutorials content by:
- adding a linear flow of concepts and then updating the code in follow-up sections so that the end-user is aware of what is different from last time;
- adding details explaining what we are doing and why;
- adding pictures that reinforce what we are doing and how it relates to the big picture we want to achieve.
### Existing tutorials on this topic
Tutorials with the issue
- https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html
- https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html
- https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
### Additional context
Hey,
I love tech-writing and wish to make tech adoption easier for all.
Bit of my works you can find.
- https://github.com/LunaticMaestro/Content-Based-Book-Recommender
- I am author of the AI Core tutorials (m ex-SAP employee): https://developers.sap.com/group.ai-core-get-started-basics.html
|
https://github.com/pytorch/tutorials/issues/3211
|
open
|
[
"nlp"
] | 2025-01-03T08:46:30Z
| 2025-04-16T18:11:36Z
| 1
|
LunaticMaestro
|
pytorch/torchtitan
| 770
|
How many H100 GPUs should I use to train Llama-3.1-70B models with Torchtitan?
|
I am planning to train the Llama-3.1-70B model using the Torchtitan framework and need advice on the optimal number of NVIDIA H100 GPUs required. My goal is to ensure efficient training in terms of time and cost, while maintaining a balance between hardware usage and model convergence. Iโd appreciate insights on batch size considerations, GPU memory utilization, and any recommended configurations for Torchtitan with this model. Additionally, if there are any benchmarks or past experiences with similar setups, please share them.
Thanks!
|
https://github.com/pytorch/torchtitan/issues/770
|
closed
|
[] | 2025-01-03T02:21:50Z
| 2025-01-04T04:46:32Z
| null |
jacklanda
|
pytorch/executorch
| 7,486
|
How to run ExecuTorch on Linux with aarch64-oe-linux-gcc11.2?
|
Hi, I am new to ExecuTorch and currently trying to build and run it on a Linux-based Qualcomm board (QCS/QCM8550). The board's specifications are:
OS: Linux
Compiler: aarch64-oe-linux-gcc11.2
SOC Model: 66
Hexagon Arch: V73
I noticed that most guides are focused on Android environments. Could you please provide any hints or suggestions for building and running ExecuTorch on Linux with this setup?
Any help or resources would be greatly appreciated!
Thank you in advance!
cc @mergennachin @byjlw
|
https://github.com/pytorch/executorch/issues/7486
|
closed
|
[
"module: doc",
"need-user-input",
"triaged"
] | 2025-01-03T00:28:56Z
| 2025-02-04T02:42:53Z
| null |
suhyun01150
|
huggingface/optimum
| 2,147
|
Convert Stable Diffusion Inpainting model to FP16 with FP32 inputs
|
### Feature request
I've used [this script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) to convert models to ONNX in FP16 format but maintaining the FP32 inputs. One of the models that I converted was [Stable Diffusion 2 Inpainting](https://huggingface.co/jdp8/sd-2-inpainting-fp16) to FP16 and tried to use it in ONNX Runtime and ONNX Runtime Web but it doesn't give me the expected results in either engine. I also converted [the model](https://huggingface.co/jdp8/optimum-sd-2-inpainting-onnx-fp32) with the Optimum conversion script to FP32 and this model gives me the expected result in ONNX Runtime. Results are shown below:
Input Image:

Mask Image:

Correct Onnx Runtime Output (converted with Optimum script):

Incorrect Onnx Runtime Output (converted with Stable-Diffusion-ONNX-FP16 script):

Incorrect Onnx Runtime Web Output (converted with Stable-Diffusion-ONNX-FP16 script):

I've also used the Optimum conversion script to convert the model to FP16 and this worked but the inputs are expected to be FP16. This datatype does not exist in JavaScript (specifically, `Float16Array`) and therefore cannot be used in ONNX Runtime Web.
With that being said, is it possible to convert a model to FP16 but leaving the inputs as FP32 in order for the UNET to be less than 2 GB?
### Motivation
I would like to run Stable Diffusion Inpainting in ONNX Runtime Web and for the UNET to be less than 2GB. The FP16 model that I have at the moment gives me an output that is not as expected in ONNX Runtime and ONNX Runtime Web. So far, only the Optimum models give me a correct output in ONNX Runtime but I would like to use this in ONNX Runtime Web.
### Your contribution
I am willing to contribute to this change given some guidance. Not sure how difficult it would be but I believe it would be similar to how it's implemented in [the script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) mentioned beforehand.
|
https://github.com/huggingface/optimum/issues/2147
|
closed
|
[] | 2025-01-02T21:28:43Z
| 2025-01-25T00:15:54Z
| 0
|
jdp8
|
huggingface/diffusers
| 10,433
|
[Docs] Broken Links in a Section of Documentation
|
### Broken Links in a Section of Documentation
>Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
In this section of the docs, the `reuse components across pipelines` link is broken or not directed to the proper target.
`reuse components across pipelines` should point to the [Reuse a pipeline](https://huggingface.co/docs/diffusers/en/using-diffusers/loading#reuse-a-pipeline) section instead of the [Load pipelines](https://huggingface.co/docs/diffusers/en/using-diffusers/loading#reuse-components-across-pipelines) section in the following files.
<details>
<summary>In these docs</summary>
docs/source/en/api/pipelines/animatediff.md
docs/source/en/api/pipelines/attend_and_excite.md
docs/source/en/api/pipelines/audioldm.md
docs/source/en/api/pipelines/audioldm2.md
docs/source/en/api/pipelines/blip_diffusion.md
docs/source/en/api/pipelines/controlnet.md
docs/source/en/api/pipelines/controlnet_flux.md
docs/source/en/api/pipelines/controlnet_hunyuandit.md
docs/source/en/api/pipelines/controlnet_sd3.md
docs/source/en/api/pipelines/controlnet_sdxl.md
docs/source/en/api/pipelines/controlnetxs.md
docs/source/en/api/pipelines/controlnetxs_sdxl.md
docs/source/en/api/pipelines/dance_diffusion.md
docs/source/en/api/pipelines/ddpm.md
docs/source/en/api/pipelines/dit.md
docs/source/en/api/pipelines/i2vgenxl.md
docs/source/en/api/pipelines/kandinsky.md
docs/source/en/api/pipelines/kandinsky3.md
docs/source/en/api/pipelines/kandinsky_v22.md
docs/source/en/api/pipelines/latent_diffusion.md
docs/source/en/api/pipelines/marigold.md
docs/source/en/api/pipelines/musicldm.md
docs/source/en/api/pipelines/paint_by_example.md
docs/source/en/api/pipelines/panorama.md
docs/source/en/api/pipelines/pix2pix.md
docs/source/en/api/pipelines/self_attention_guidance.md
docs/source/en/api/pipelines/semantic_stable_diffusion.md
docs/source/en/api/pipelines/shap_e.md
docs/source/en/api/pipelines/stable_unclip.md
docs/source/en/api/pipelines/text_to_video.md
docs/source/en/api/pipelines/text_to_video_zero.md
docs/source/en/api/pipelines/unclip.md
docs/source/en/api/pipelines/unidiffuser.md
docs/source/en/api/pipelines/value_guided_sampling.md
</details>
---
Some links to `reuse components across pipelines` are broken in the files below.
<details>
<summary>In these docs</summary>
docs/source/en/api/pipelines/allegro.md
docs/source/en/api/pipelines/cogvideox.md
docs/source/en/api/pipelines/latte.md
docs/source/en/api/pipelines/ltx_video.md
docs/source/en/api/pipelines/lumina.md
docs/source/en/api/pipelines/pixart.md
docs/source/en/api/pipelines/sana.md
</details>
---
Also, `docs/source/en/api/pipelines/hunyuan_video.md` and `docs/source/en/api/pipelines/hunyuandit.md` are not in the proper format.
@stevhliu
|
https://github.com/huggingface/diffusers/issues/10433
|
closed
|
[] | 2025-01-02T18:24:44Z
| 2025-01-06T18:07:39Z
| 0
|
SahilCarterr
|
huggingface/transformers
| 35,485
|
How to run the model on another machine and send the answer to another machine.
|
### System Info
transformers 4.31.0 , window os , python 3.10.12
### Who can help?
vision models: @amyeroberts, @qubvel
I have tried using this model on my machine, and it works normally, but the processing is very slow because the GPU on my machine is not that powerful. However, I have a server with a strong GPU. Is it possible to install this model on the server and run the code on my machine, so that when it reaches the video processing stage it sends the task to the server, the server sends back the result, and my machine then prints and displays the answer? If so, how can I do it?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
I expect it to work in a hybrid way between my computer and the server to achieve faster results.
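Transformers itself does not ship a client/server split for this, but a common pattern (sketch only; the endpoint name, port, and `run_inference` helper are illustrative placeholders) is to wrap the model in a small HTTP service on the GPU server and call it from the local machine:
```python
# server.py (runs on the GPU server; load your vision model/processor here)
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/predict")
async def predict(file: UploadFile):
    data = await file.read()
    answer = run_inference(data)  # your existing video-processing code, moved server-side
    return {"answer": answer}
```
```python
# client.py (runs on the local machine)
import requests

with open("video.mp4", "rb") as f:
    resp = requests.post("http://<server-ip>:8000/predict", files={"file": f})
print(resp.json()["answer"])
```
Any serving stack follows the same request/response idea; only the transport details differ.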
|
https://github.com/huggingface/transformers/issues/35485
|
closed
|
[
"bug"
] | 2025-01-02T10:03:42Z
| 2025-01-07T10:20:46Z
| null |
ixn3rd3mxn
|
huggingface/accelerate
| 3,320
|
How to save self-defined model with deepspeed zero 3?
|
### System Info
```Shell
- `Accelerate` version: 1.0.1
- Python version: 3.10.0
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 128.00 GB
- GPU type: NVIDIA H20
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
My custom model inherits from torch.nn.Module.
I am training this model with 4 H20 GPUs using deepspeed zero 3.
I am trying to save a checkpoint with this code:
```python
# save model
if (idx % args.save_per_steps == 0) and (idx != 0):
    accelerator.wait_for_everyone()
    if accelerator.is_local_main_process:
        accelerator.print('Saving model ...')
        save_dir = os.path.join(args.save_path, args.save_name + '_epoch_' + str(epoch) + '_step_' + str(idx))
        accelerator.print('Getting state dict ...')
        state_dict = accelerator.get_state_dict(model)
        accelerator.print('Unwraping model ...')
        unwrapped_model = accelerator.unwrap_model(model)
        accelerator.print('Saving checkpoint ...')
        unwrapped_model.save_checkpoint(save_dir, idx, state_dict)
        accelerator.print('Model saved!')
    accelerator.wait_for_everyone()
```
### Expected behavior
The code gets stuck when getting the state dict.
I also tried `accelerator.save_model` but it didn't work.
I am wondering what's the recommended way to save and load a large model trained with deepspeed zero 3?
Thank you very much.
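For what it's worth, here is a rough sketch of the pattern that usually avoids the hang with ZeRO-3 (my assumption is that `accelerator.get_state_dict(model)` is a collective call under ZeRO-3, so it has to run on every rank, not only the main process):
```python
# Sketch: gather the consolidated state dict on all ranks, save it on the main rank only.
import os

accelerator.wait_for_everyone()

# Collective under ZeRO-3: every rank must participate or the call blocks forever.
state_dict = accelerator.get_state_dict(model)

if accelerator.is_main_process:
    os.makedirs(save_dir, exist_ok=True)
    accelerator.save(state_dict, os.path.join(save_dir, "pytorch_model.bin"))

accelerator.wait_for_everyone()
```
For fully resumable checkpoints (optimizer, scheduler, RNG state), `accelerator.save_state(save_dir)` called on all ranks is another option, with `accelerator.load_state(save_dir)` to resume.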
|
https://github.com/huggingface/accelerate/issues/3320
|
closed
|
[] | 2025-01-02T08:15:36Z
| 2025-02-10T15:07:18Z
| null |
amoyplane
|
pytorch/executorch
| 7,467
|
How to run Qwen using Executorch?
|
### ๐ The doc issue
Hi! I just wanted to know how I would go about running Qwen using ExecuTorch. I was able to create the .pte file for Qwen. The example for Llama had a step 'Create a llama runner for android'. Do we have to do something similar for Qwen by creating a custom runner? Also, the Qwen repository on the Hugging Face Hub does not have a 'tokenizer.model' file, but the Llama example requires it for running inference using the adb shell. How do I navigate around this?
### Suggest a potential alternative/fix
_No response_
cc @mergennachin @cccclai @helunwencser @dvorjackz
|
https://github.com/pytorch/executorch/issues/7467
|
closed
|
[
"triaged",
"module: llm"
] | 2025-01-02T07:16:56Z
| 2025-08-28T21:17:24Z
| null |
Arya-Hari
|
huggingface/diffusers
| 10,425
|
Euler Flow Matching Scheduler Missing Documentation for Parameters
|
### Describe the bug
The Euler flow matching scheduler in Hugging Face Diffusers is missing clear documentation for its parameters, making it difficult for users to understand how to configure the scheduler effectively for different use cases.
### Reproduction
Steps to Reproduce:
Visit the Hugging Face Diffusers documentation page and locate the section for the Euler flow matching scheduler.
Try to find documentation for the schedulerโs parameters.
Notice that the documentation does not clearly define the parameters or explain their effects.
### Logs
_No response_
### System Info
Hugging Face Diffusers version: 0.16.1
PyTorch version: 2.1.0
CUDA version: 11.8
CPU: Intel Core i7-12700K
GPU: NVIDIA RTX 3090
### Who can help?
@sayakpaul @DN6
|
https://github.com/huggingface/diffusers/issues/10425
|
closed
|
[
"bug"
] | 2025-01-02T01:37:38Z
| 2025-01-02T01:38:38Z
| 0
|
hanshengzhu0001
|
huggingface/transformers.js
| 1,130
|
Tips on Converting Newer Models
|
### Question
Happy New Year to the incredible Transformers.js team!
I'm working on converting new (text-generation) models for use with Transformers.js.
Here's what I've tried since last week:
* python converter script
* optimum cli onnx
* onnx-community/convert-to-onnx spaces
The problem I encounter as I move on to newer models is that the converter looks for specific files like the ones below, which are easy to convert both locally and online:

while some newer models consist of files like the following, which I couldn't convert:

I have no problem with PC specs at all; I may be missing some steps, rules, or understanding about converting models. I'd greatly appreciate any tips, best practices, or resources you could share to streamline the process and ensure compatibility.
Much Appreciated!
|
https://github.com/huggingface/transformers.js/issues/1130
|
open
|
[
"question"
] | 2025-01-01T05:32:09Z
| 2025-01-01T05:32:09Z
| null |
josephencila
|
huggingface/lerobot
| 606
|
Dataset does not support length of feature shape > 1
|
Hi,
Thank you for this excellent project!
I am trying to create a custom dataset with additional sensory information (such as tactile data), which is an Array3D tensor, but I find that when I use the approach mentioned in #547, there is no support for adding custom tensor-like observations to the episode buffer.
Specifically, there are assertions that require the feature shape to be at most a 1D array [here](https://github.com/huggingface/lerobot/blob/59e275743499c5811a9f651a8947e8f881c4058c/lerobot/common/datasets/utils.py#L274)
|
https://github.com/huggingface/lerobot/issues/606
|
closed
|
[
"question",
"dataset",
"stale"
] | 2024-12-31T21:08:26Z
| 2025-10-19T02:32:29Z
| null |
akashsharma02
|
huggingface/finetrainers
| 169
|
How to build a dataset for finetuning CogVideoX I2V 1.5
|
Hi,
I want to finetune the CogVideoX I2V 1.5 (5B) model. I have a set of videos that I want to use, but first I need to preprocess them so they meet the requirements of the model. Do I have to make sure that my fine-tuning dataset meets the generation properties of the model? That is, in the case of CogVideoX 1.5, the videos should be:
- Min(W, H) = 768
- 768 โค Max(W, H) โค 1360
- Max(W, H) % 16 = 0
- Video Length: 5 seconds or 10 seconds
- Frame Rate: 16 frames / second
Do I need to make sure that all my fine-tuning videos follow those guidelines?
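In case a concrete starting point helps, here is a rough preprocessing sketch for those constraints (my assumptions: `ffmpeg` is installed and on PATH, and extreme aspect ratios may need cropping instead of plain scaling):
```python
# Sketch: bring a clip to min(W, H) = 768, max(W, H) <= 1360, both divisible by 16,
# 16 fps, and a 5-second duration.
import subprocess

def target_size(width: int, height: int) -> tuple[int, int]:
    scale = 768 / min(width, height)                          # short side -> 768
    w, h = round(width * scale), round(height * scale)
    w, h = min(w, 1360) // 16 * 16, min(h, 1360) // 16 * 16   # cap long side, snap to /16
    return w, h

def preprocess(src: str, dst: str, width: int, height: int) -> None:
    w, h = target_size(width, height)
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-t", "5",
         "-vf", f"scale={w}:{h},fps=16", "-c:v", "libx264", "-an", dst],
        check=True,
    )
```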
|
https://github.com/huggingface/finetrainers/issues/169
|
closed
|
[] | 2024-12-31T19:55:00Z
| 2025-03-08T23:43:31Z
| null |
royvelich
|
pytorch/torchtitan
| 765
|
Can I load from non-FSDP optimizer state with FSDP2?
|
I have been running training on a different framework with FSDP1, where I saved the states with FULL_STATE_DICT - leading to optimizer states that are in a normal `torch.save` format. I'd love to resume from this checkpoint - is this currently supported by FSDP2 / DCP? When I naively try `dcp.load` it resulted in a shard index out of range error.
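One route that might work (a sketch, assuming a reasonably recent PyTorch where `torch.distributed.checkpoint.format_utils` is available) is to convert the `torch.save` checkpoint into DCP's sharded format offline first, then point the usual `dcp.load` at the converted directory:
```python
# Sketch: offline conversion of a FULL_STATE_DICT torch.save file into DCP format.
from torch.distributed.checkpoint.format_utils import torch_save_to_dcp

torch_save_to_dcp("optimizer_full.pt", "converted_checkpoint_dcp/")  # placeholder paths
```
The remaining caveat is key layout: the flattened keys in the FSDP1 FULL_STATE_DICT have to line up with what the FSDP2/DCP state dict expects, which may require some manual renaming.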
|
https://github.com/pytorch/torchtitan/issues/765
|
closed
|
[
"question"
] | 2024-12-31T15:52:59Z
| 2025-01-28T18:47:26Z
| null |
syncdoth
|
huggingface/diffusers
| 10,416
|
Euler flow matching scheduler is missing documentation for parameters
|

I think there are some undocumented parameters here.
|
https://github.com/huggingface/diffusers/issues/10416
|
closed
|
[] | 2024-12-31T13:15:35Z
| 2025-01-09T18:54:41Z
| 4
|
bghira
|
huggingface/chat-ui
| 1,636
|
Any way to pass authorization header from Oauth2 down to custom endpoint?
|
## Describe your feature request
It would be nice to be able to pass the authorization header from OAuth2 to a custom endpoint. I have an endpoint that mimics TGI, and I would like to authenticate every request in order to protect the API.
## Implementation idea
Just pass an authorization header from the frontend to the BFF and pass it further to the endpoint. It could be a custom header if that would conflict with the current authorization configuration for endpoints. The current configuration allows passing a static auth header, but I want to be able to pass the JWT of the authenticated user.
|
https://github.com/huggingface/chat-ui/issues/1636
|
open
|
[
"enhancement"
] | 2024-12-31T13:00:22Z
| 2024-12-31T13:00:22Z
| 0
|
corte
|
huggingface/diffusers
| 10,415
|
[Pipelines] Add AttentiveEraser
|
### Model/Pipeline/Scheduler description
I've worked on a project called AttentiveEraser, which is a tuning-free method for object removal in images using diffusion models. The code for this project is built upon modifications to existing Diffusers pipelines, so it should be relatively straightforward to integrate it into the library.
## About AttentiveEraser
AttentiveEraser enhances object removal capabilities by using self-attention redirection guidance. It supports different levels of mask precision (semantic segmentation, bounding boxes, and hand-drawn masks) and effectively fills in removed regions by leveraging the generative power of diffusion models.
## Help Needed
As someone new to this process, I'm unsure how to properly package this into a Diffusers pipeline. Is anyone interested in collaborating on this integration or able to provide guidance on the steps I should take next?
I'd love to contribute this feature to the community, and the relevant code is already available!
Code: <https://github.com/Anonym0u3/AttentiveEraser>
Looking forward to any suggestions or assistance!

### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/10415
|
closed
|
[
"stale"
] | 2024-12-31T07:44:48Z
| 2025-02-05T15:54:43Z
| 7
|
Anonym0u3
|
huggingface/diffusers
| 10,414
|
[<languageCode>] Translating docs to Chinese
|
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community.
Who would want to translate? Please follow the [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about Diffusers).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* If you'd like others to help you with the translation, you can also post in the [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).
Thank you so much for your help!
|
https://github.com/huggingface/diffusers/issues/10414
|
closed
|
[] | 2024-12-31T06:45:21Z
| 2024-12-31T06:49:52Z
| 0
|
S20180576
|
huggingface/peft
| 2,301
|
How to pass in an attention_mask that is one dimension more than input_ids
|
### System Info
Hello, how can I pass in an `attention_mask` that has one more dimension than `input_ids`? For example: `output = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)`. The `input_ids` dimension is [batch_size, N], and the `attention_mask` dimension is [batch_size, N, N].
Under this condition, when the above line of code is run, the following error will be reported:
File "/root/anaconda3/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 179, in _expand_mask bsz, src_len = mask.size()
ValueError: too many values to unpack (expected 2)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
input_ids = torch.cat([
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|mmu|>']).to(device),
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|soi|>']).to(device),
    image_tokens,
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|eoi|>']).to(device),
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|sot|>']).to(device),
    input_ids
], dim=1).long()

attention_mask = create_attention_mask_for_mmu(input_ids.to(device),
                                               eoi_id=int(uni_prompting.sptids_dict['<|eoi|>']))

cont_toks_list = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)
```
### Expected behavior
Load the model for fine-tuning and inference.
|
https://github.com/huggingface/peft/issues/2301
|
closed
|
[] | 2024-12-31T02:26:14Z
| 2025-02-07T15:03:57Z
| null |
Chinesehou97
|
pytorch/pytorch
| 143,988
|
Add a knob to control how many blocks are used by persistent matmul/attn kernels
|
### ๐ The feature, motivation and pitch
We train a transformer-style model using FSDP, and we have a very good overlap between the matmul kernels (from cuBLAS) and the NCCL operation in the background. However, when profiling, we have observed that the **matmuls take 2x as long** to complete when they are overlapped with a NCCL kernel!
We believe this is easily explained: we're running on H100 GPUs and, upon inspection, all the matmuls look like they are using "persistent" kernels. That is, they launch as many CUDA blocks as there are SMs on the GPU (i.e., 132) and each of these blocks will process several tiles in a row. What we're observing is thus a form of "wave quantization" where, due to NCCL occupying some SMs, not all blocks of the matmuls can be scheduled at once, thus breaking them into two waves, which thus take twice as long to complete.
Since NCCL only occupies ~10% of the SMs, it would be much more efficient if the matmuls tried to launch a number of blocks that corresponds to ~90% of the SMs. This would allow the two kernels to run simultaneously in a single wave, with the matmuls only being ~10% slower, not ~50%!
For that, however, we need PyTorch to add a new knob allowing us to control such a value, and to forward that knob when launching its cuBLAS kernels (and others).
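To make the arithmetic concrete (illustrative numbers only, not a PyTorch API):
```python
# Wave-quantization math for a persistent matmul overlapped with NCCL on an H100.
sms_total = 132            # SMs on an H100
sms_nccl = 16              # assumption: ~12% of SMs held by the overlapped NCCL kernel
sms_free = sms_total - sms_nccl

blocks_today = 132         # persistent kernel launches one block per SM
waves_today = -(-blocks_today // sms_free)    # ceil(132 / 116) = 2 waves -> ~2x latency

blocks_tuned = sms_free    # what the requested knob would let us launch
waves_tuned = -(-blocks_tuned // sms_free)    # 1 wave -> only ~132/116, i.e. ~1.14x slower
print(waves_today, waves_tuned)
```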
### Alternatives
None. We couldn't find any environment variable provided by cuBLAS that allows us to override the number of blocks launched.
### Additional context
With longer NCCL kernels, matmuls take a long time:
<img width="1555" alt="Screenshot 2024-12-30 at 17 29 23" src="https://github.com/user-attachments/assets/d91d192e-16e9-4108-9d8e-5cb7caef80f6" />
With shorter NCCL kernels, the non-overlapped matmuls now take less time:
<img width="1439" alt="Screenshot 2024-12-30 at 17 29 42" src="https://github.com/user-attachments/assets/6e1fff67-b1a8-4b3b-a582-6648fc8b00bf" />
cc @ptrblck @msaroufim @eqy @csarofeen @xwang233 @jianyuh @nikitaved @pearu @mruberry @walterddr @Lezcano
|
https://github.com/pytorch/pytorch/issues/143988
|
closed
|
[
"module: cuda",
"triaged",
"module: cublas",
"module: linear algebra"
] | 2024-12-30T16:31:05Z
| 2025-07-10T11:20:38Z
| null |
lw
|
huggingface/diffusers
| 10,411
|
How to load the LoRA weights obtained from examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py
|
I followed the tutorial provided at https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final LoRA weights, but I did not find a way to load and use them. Could you provide a demo of running inference with these weights? Thank you very much!
the training set:
```
#!/bin/bash
# Define the variables
PRETRAINED_TEACHER_MODEL="/ai/yzy/latent-consistency-model-main/stable-diffusion-v1-5"
OUTPUT_DIR="/ai/yzy/latent-consistency-model-main/output_sd001"
RESOLUTION=512
LORA_RANK=64
LEARNING_RATE=1e-6
LOSS_TYPE='huber'
ADAM_WEIGHT_DECAY=0.0
MAX_TRAIN_STEPS=1000
MAX_TRAIN_SAMPLES=4000000
DATALOADER_NUM_WORKERS=4
TRAIN_SHARDS_PATH_OR_URL='/ai/yzy/latent-consistency-model-main/00000.tar'
VALIDATION_STEPS=200
CHECKPOINTING_STEPS=200
CHECKPOINTS_TOTAL_LIMIT=10
TRAIN_BATCH_SIZE=8
GRADIENT_ACCUMULATION_STEPS=1
SEED=453645634
# Run the training script
python ./LCM_Training_Script/consistency_distillation/train_lcm_distill_lora_sd_wds.py \
--pretrained_teacher_model=$PRETRAINED_TEACHER_MODEL \
--output_dir=$OUTPUT_DIR \
--mixed_precision=fp16 \
--resolution=$RESOLUTION \
--lora_rank=$LORA_RANK \
--learning_rate=$LEARNING_RATE \
--loss_type=$LOSS_TYPE \
--adam_weight_decay=$ADAM_WEIGHT_DECAY \
--max_train_steps=$MAX_TRAIN_STEPS \
--max_train_samples=$MAX_TRAIN_SAMPLES \
--dataloader_num_workers=$DATALOADER_NUM_WORKERS \
--train_shards_path_or_url=$TRAIN_SHARDS_PATH_OR_URL \
--validation_steps=$VALIDATION_STEPS \
--checkpointing_steps=$CHECKPOINTING_STEPS \
--checkpoints_total_limit=$CHECKPOINTS_TOTAL_LIMIT \
--train_batch_size=$TRAIN_BATCH_SIZE \
--gradient_checkpointing \
--enable_xformers_memory_efficient_attention \
--gradient_accumulation_steps=$GRADIENT_ACCUMULATION_STEPS \
--use_8bit_adam \
--resume_from_checkpoint=latest \
--seed=$SEED
```
the output:

|
https://github.com/huggingface/diffusers/issues/10411
|
closed
|
[] | 2024-12-30T12:06:07Z
| 2024-12-31T07:21:40Z
| null |
yangzhenyu6
|
huggingface/text-embeddings-inference
| 461
|
How to Set the Threshold for gte-multilingual-reranker
|
I want to use the gte-multilingual-reranker-base model to re-rank the retrieved documents and discard some of them based on a threshold. I have seen examples on Hugging Face where the logits are used as the output scores, but how can I determine the appropriate threshold?
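There is no single threshold baked into the model; a common approach (a sketch, assuming the standard sequence-classification usage of `Alibaba-NLP/gte-multilingual-reranker-base`) is to squash the logit through a sigmoid and tune the cut-off on a small labeled set of query/document pairs:
```python
# Sketch: turn reranker logits into probabilities and apply a tunable threshold.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "Alibaba-NLP/gte-multilingual-reranker-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True)
model.eval()

pairs = [["what is a panda?", "The giant panda is a bear species endemic to China."]]
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt", max_length=512)

with torch.no_grad():
    scores = torch.sigmoid(model(**inputs).logits.view(-1))  # map logits to (0, 1)

THRESHOLD = 0.5  # starting point only; sweep this on labeled pairs for precision/recall
kept = scores >= THRESHOLD
print(scores, kept)
```
The value that maximizes, say, F1 on your own labeled pairs is usually a better choice than any universal constant.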
|
https://github.com/huggingface/text-embeddings-inference/issues/461
|
open
|
[] | 2024-12-30T11:39:48Z
| 2025-02-09T06:29:02Z
| null |
ketusrai
|
huggingface/optimum
| 2,140
|
KeyError: 'swinv2 model type is not supported yet in NormalizedConfig.
|
### System Info
```shell
Google Colab
T4 GPU
transformers Version: 4.47.1
optimum Version: 1.24.0.dev0
```
### Who can help?
@michaelbenayoun, @JingyaHuang, @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
from optimum.onnxruntime import ORTModelForVision2Seq
model = ORTModelForVision2Seq.from_pretrained("/content/swin-xlm-image-recognition", export=True, use_cache=False)
model.save_pretrained("swin-xlm-image-recognition-onnx")
### Expected behavior
How to solve this issue? I am trying to convert my VisionEncoderDecoderModel to onnx using optimum, but I am getting this error: `KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. Only albert, bart, bert, blenderbot, blenderbot-small, bloom, falcon, camembert, codegen, cvt, deberta, deberta-v2, deit, distilbert, donut-swin, electra, encoder-decoder, gemma, gpt2, gpt-bigcode, gpt-neo, gpt-neox, gptj, imagegpt, llama, longt5, marian, markuplm, mbart, mistral, mixtral, mpnet, mpt, mt5, m2m-100, nystromformer, opt, pegasus, pix2struct, phi, phi3, phi3small, poolformer, regnet, resnet, roberta, segformer, speech-to-text, splinter, t5, trocr, vision-encoder-decoder, vit, whisper, xlm-roberta, yolos, qwen2, granite are supported. If you want to support swinv2 please propose a PR or open up an issue.'`
The encoder is "swinv2" and the decoder is "xlm-roberta".
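Until swinv2 is supported upstream, one workaround that may help (a sketch; it monkey-patches a private mapping, so it can break across Optimum versions, and it assumes `NormalizedVisionConfig` is an adequate default for a Swin-family encoder) is to register the model type before exporting:
```python
# Sketch: register swinv2 in NormalizedConfigManager, then export as before.
from optimum.utils.normalized_config import NormalizedConfigManager, NormalizedVisionConfig
from optimum.onnxruntime import ORTModelForVision2Seq

NormalizedConfigManager._conf["swinv2"] = NormalizedVisionConfig  # private API, use with care

model = ORTModelForVision2Seq.from_pretrained(
    "/content/swin-xlm-image-recognition", export=True, use_cache=False
)
model.save_pretrained("swin-xlm-image-recognition-onnx")
```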
|
https://github.com/huggingface/optimum/issues/2140
|
open
|
[
"bug"
] | 2024-12-30T10:29:14Z
| 2024-12-30T10:29:14Z
| 0
|
Billybeast2003
|
huggingface/optimum-intel
| 1,096
|
How to use trainer.train() with OVModelForCausalLM() model
|
I am currently converting a local LLM to OpenVINO. I would like to fine-tune my model with the Trainer function, but I get an error stating: AttributeError: 'OVModelForCausalLM' object has no attribute 'named_children'.
Please let me know if there is a way to fine-tune OpenVINO models that are loaded with OVModelForCausalLM().
Attached is my script
[Fine_Tuning_mistral_7b_v3 (2).zip](https://github.com/user-attachments/files/18271287/Fine_Tuning_mistral_7b_v3.2.zip)
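`OVModelForCausalLM` wraps a compiled OpenVINO graph rather than a `torch.nn.Module`, which is why `Trainer` cannot walk its children. A sketch of the usually suggested workflow (assumption: fine-tune the original PyTorch checkpoint first and export to OpenVINO afterwards; the model path and dataset are placeholders):
```python
# Sketch: fine-tune in PyTorch with Trainer, then export the result to OpenVINO IR.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from optimum.intel import OVModelForCausalLM

model_id = "path/to/local-mistral-7b-v3"        # placeholder for your local model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", per_device_train_batch_size=1),
    train_dataset=my_dataset,                   # assumed to exist and be tokenized already
)
trainer.train()
trainer.save_model("finetuned")

# Only now convert the fine-tuned weights to OpenVINO for inference.
ov_model = OVModelForCausalLM.from_pretrained("finetuned", export=True)
ov_model.save_pretrained("finetuned-ov")
```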
|
https://github.com/huggingface/optimum-intel/issues/1096
|
closed
|
[] | 2024-12-29T23:54:26Z
| 2025-02-27T14:54:20Z
| null |
CJames1261
|