Dataset columns (GitHub issues):
- repo: string (147 distinct values)
- number: int64 (1 to 172k)
- title: string (2 to 476 characters)
- body: string (0 to 5k characters)
- url: string (39 to 70 characters)
- state: string (2 distinct values)
- labels: list (0 to 9 items)
- created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
- updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
- comments: int64 (0 to 58)
- user: string (2 to 28 characters)
huggingface/diffusers
9,900
Potential bug in repaint?
https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322 According to line 5 of Algorithm 1 in the paper, shouldn't the second part of line 322 drop the `**0.5`? Thanks!
https://github.com/huggingface/diffusers/issues/9900
closed
[]
2024-11-10T10:41:26Z
2024-12-16T19:38:22Z
3
jingweiz
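For context on the issue above, a hedged note (it is not stated which RePaint step line 322 implements, so both candidates are shown): in the RePaint formulation, both the re-noising ("undo") step and the known-region sample scale the noise by a standard deviation, i.e. the square root of the variance written in the paper.

```latex
% Re-noising ("undo") step: x_t | x_{t-1} ~ N( (1-\beta_t)^{1/2} x_{t-1}, \beta_t I )
x_t = \sqrt{1-\beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)

% Known-region sample: x^{known}_{t-1} | x_0 ~ N( \bar{\alpha}_{t-1}^{1/2} x_0, (1-\bar{\alpha}_{t-1}) I )
x^{\text{known}}_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, x_0 + \sqrt{1-\bar{\alpha}_{t-1}}\, \epsilon
```

In both cases the `**0.5` in code corresponds to converting the Gaussian's variance into a standard deviation before sampling, so whether it should be removed hinges on whether the paper's line 5 states a variance or a standard deviation.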
pytorch/vision
8,722
The **Multi-view Stereo Correspondence** link in the docs is dead
### 📚 The doc issue [The link](http://matthewalunbrown.com/patchdata/patchdata.html) of **Multi-view Stereo Correspondence** doesn't exist in [the doc](https://pytorch.org/vision/stable/datasets.html#image-pairs) as shown below: ![Screenshot 2024-11-10 102207](https://github.com/user-attachments/assets/a279a8a3-831b-4d03-8993-96caabdd5e4b) ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/vision/issues/8722
open
[ "module: documentation" ]
2024-11-10T01:31:15Z
2024-11-27T17:56:47Z
3
hyperkai
pytorch/serve
3,362
Trying to find a doc explaining how the scaling works (min_worker to max_worker)
### 📚 The doc issue Can anyone help out? ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/serve/issues/3362
open
[]
2024-11-09T22:01:02Z
2024-11-09T22:01:02Z
null
lschaupp
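There does not seem to be a single doc page for this, but worker scaling is normally driven through the TorchServe management API (default port 8081), which keeps a registered model's worker count between `min_worker` and `max_worker`. A minimal sketch, assuming a model registered as `my_model` (placeholder name); how the range is interpreted at runtime is exactly what the missing doc would need to clarify:

```python
import requests

# Ask TorchServe to scale workers for "my_model"; synchronous=true blocks until the
# scaling operation has completed.
resp = requests.put(
    "http://localhost:8081/models/my_model",
    params={"min_worker": 2, "max_worker": 4, "synchronous": "true"},
)
print(resp.status_code, resp.text)
```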
huggingface/finetrainers
82
[question] What is the difference between the CogVideoX schedulers and the normal diffusers schedulers?
### Feature request CogVideoXDPMScheduler vs. DPMScheduler, and CogVideoXDDIMScheduler vs. DDIMScheduler. Hi Aryan, is there any sampling difference between these schedulers? @a-r-r-o-w ### Motivation / ### Your contribution /
https://github.com/huggingface/finetrainers/issues/82
closed
[]
2024-11-09T17:15:57Z
2024-12-19T14:43:23Z
null
foreverpiano
huggingface/optimum
2,092
Add support for RemBERT in the ONNX export
### Feature request Add RemBERT to the supported architectures for ONNX export. ### Motivation Support for [RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert) was previously available in Transformers, see [here](https://github.com/huggingface/transformers/issues/16308). However, it now seems that RemBERT is no longer supported. ### Your contribution I can help by testing the implementation, or by providing the code if there is a tutorial to follow. I was not able to find documentation on how to do that.
https://github.com/huggingface/optimum/issues/2092
closed
[ "onnx" ]
2024-11-08T15:12:34Z
2024-12-02T13:54:10Z
1
mlynatom
pytorch/xla
8,366
Export training model to StableHlo
## ❓ Questions and Help The export API only supports `torch.nn.module` as input, is any method to export a training model with **step_fn** to StableHlo? Here is a simple training case from [example](https://github.com/pytorch/xla/blob/6454b42fd404d13f2008730ed4ad33b3a91723e3/examples/train_resnet_base.py#L16): ```python def __init__(self): ... self.device = torch_xla.device() self.model = torchvision.models.resnet50().to(self.device) self.optimizer = optim.SGD(self.model.parameters(), weight_decay=1e-4) self.loss_fn = nn.CrossEntropyLoss() ... def run_optimizer(self): self.optimizer.step() def step_fn(self, data, target): self.optimizer.zero_grad() output = self.model(data) loss = self.loss_fn(output, target) loss.backward() self.run_optimizer() return loss ``` The guidance https://pytorch.org/xla/master/features/stablehlo.html#torch-export-to-stablehlo only introduced how to export the original `self.model`, but it didn't tell how to export the model with Optimizer and Loss functions.
https://github.com/pytorch/xla/issues/8366
closed
[]
2024-11-08T08:02:01Z
2025-01-09T02:00:38Z
3
Zantares
huggingface/lerobot
502
Low accuracy for diffusion policy + ALOHA env + sim_transfer_cube_human dataset
I'm trying to train a diffusion policy in the ALOHA env on the sim_transfer_cube_human dataset, but after 60,000 training steps the evaluation accuracy is only 2%-6%, and I don't know why. If I load the pre-trained ACT policy, the accuracy can reach 80%.
https://github.com/huggingface/lerobot/issues/502
open
[ "question", "simulation" ]
2024-11-08T02:20:14Z
2025-11-29T02:48:27Z
null
Kimho666
pytorch/torchchat
1,358
Create doc and tests for distributed inference
### 🚀 The feature, motivation and pitch Once distributed inference integration into torchchat is functional, let's add a docs/distributed.md with an example, and plumb that example into `.ci/scripts/run-docs distributed`. (updown.py extracts all commands between triple backticks into a test script.) torchchat has the same runners as pytorch/pytorch, so at least a minimal 2 or 4 GPU setup on a single node would be great. Not sure whether we can run multi-node testing, you can suppress commands from tests with `[skip default]: begin` and `[skip default]: end` around those commands. cc: @mreso @lessw2020 @kwen2501 ### Alternatives None ### Additional context _No response_ ### RFC (Optional) _No response_
https://github.com/pytorch/torchchat/issues/1358
closed
[ "documentation", "actionable", "Distributed", "triaged" ]
2024-11-08T02:08:33Z
2025-01-18T06:15:01Z
2
mikekgfb
huggingface/local-gemma
41
How to load from file?
How can I load the model from a local file, e.g. an .h5 file, instead of downloading it? In particular, a model saved by keras_nlp.
https://github.com/huggingface/local-gemma/issues/41
open
[]
2024-11-07T03:01:25Z
2024-11-07T03:03:31Z
null
datdq-abivin
pytorch/FBGEMM
3,338
How to add `-r` in the build instructions?
<img width="1053" alt="image" src="https://github.com/user-attachments/assets/63c8565c-55b6-4ee0-a209-60862c51fe68">
https://github.com/pytorch/FBGEMM/issues/3338
open
[]
2024-11-07T02:06:21Z
2024-11-07T06:03:40Z
null
zhaozheng09
pytorch/xla
8,359
Query regarding using 1 chip (2 cores of TPU v3) for Inference
## ❓ Questions and Help Hello, I am trying to benchmark the performance of TPU v3 for inference. However, I would like to use 2 cores (1 chip). Please point me to any documentation that I can get started on. Also, is it possible to launch 2 inferences on 2 cores as separate independent processes? (This would just give 2x the performance of one core) Thanks again, Deepak
https://github.com/pytorch/xla/issues/8359
open
[ "question", "xla:tpu" ]
2024-11-06T18:03:21Z
2025-02-18T12:45:15Z
null
deepakkumar2440
pytorch/vision
8,713
`torchvision.ops.boxes.batched_nms` slow on large box numbers
### 🐛 Describe the bug ## Description `torchvision.ops.boxes.batched_nms` on CUDA GPU slows down considerably when then number of bounding boxes involved increases. The slow down is associated with Device -> Host transfer, and is linked to the iterative part of the Non Maximum Suppression (NMS) algorithm. In a nutshell the IoU map is computed on the device, then the mask is copied to the CPU to perform the iterative unwrap, which result is copied back to the device (from [here and below](https://github.com/pytorch/vision/blob/868a3b42f4bffe29e4414ad7e4c7d9d0b4690ecb/torchvision/csrc/ops/cuda/nms_kernel.cu#L136)). The mask size grows quadratically with the number of input bounding boxes and we see a large TX rate when running on 30_000+ boxes. In comparison the [OpenLabs mmcv](https://github.com/open-mmlab/mmcv) solution does the same thing for the IoU map but runs a custom kernel to do the unwrap directly on the device. The[ implemented kernel](https://github.com/open-mmlab/mmcv/blob/71437a361cc8918fc398ae408267cf019f4ca03f/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh#L76) is not very efficient compute wise but save the data transfer cost, which is the main bottleneck. I benchmarked `torchvision` batched_nms against `mmcv`'s on `V100` and `A100` GPUs. ![A100_bench_rel_loglog](https://github.com/user-attachments/assets/12fbc0c7-e883-446d-8e3d-c753072abd5b) ![V100_bench_rel_loglog](https://github.com/user-attachments/assets/15fa6971-1f70-4355-93ea-094f3b9d9509) Both figures show the speed factor when comparing a solution to `torchvision.ops.boxes._batched_nms_vanilla` (there is 2 nms in torchvision, selected based on the number of elements. Here , `torchvision.ops.boxes._batched_nms_vanilla` is used a base comparison and we compare `torchvision.ops.boxes._batched_nms_coordinate_trick` and `mmcv` batched_nms). From 30k boxes and above `mmcv` NMS is x20+ faster. Is there a reason why we keep this GPU -> CPU transfer ? Could we improve the scalability by having a similar on-device additional kernel ? ## Additional informations * All boxes are from the same class * Benchmark has been done using `torch.utils.benchmark.Timer` on 100 examples for each NMS. * I did not know if this should be put as Bug report or Feature request. 
### Versions ``` PyTorch version: 2.5.0+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.24.1 Libc version: glibc-2.35 Python version: 3.10.14 (main, May 14 2024, 06:11:20) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.10.219-208.866.amzn2.x86_64-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB Nvidia driver version: 535.183.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 96 On-line CPU(s) list: 0-95 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 Stepping: 7 BogoMIPS: 5999.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke Hypervisor vendor: KVM Virtualization type: full L1d cache: 1.5 MiB (48 instances) L1i cache
https://github.com/pytorch/vision/issues/8713
closed
[]
2024-11-06T12:58:13Z
2025-02-20T17:16:10Z
1
Ghelfi
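To reproduce the scaling behaviour described above, a rough benchmark sketch (box counts, box sizes and the IoU threshold are arbitrary choices here, not the reporter's exact setup):

```python
import torch
from torch.utils import benchmark
from torchvision.ops import batched_nms

def make_boxes(n, device="cuda"):
    # Random xyxy boxes with x2 > x1 and y2 > y1; all boxes share one class (idxs all zero).
    xy = torch.rand(n, 2, device=device) * 1000
    wh = torch.rand(n, 2, device=device) * 50 + 1
    boxes = torch.cat([xy, xy + wh], dim=1)
    scores = torch.rand(n, device=device)
    idxs = torch.zeros(n, dtype=torch.long, device=device)
    return boxes, scores, idxs

for n in (1_000, 10_000, 30_000):
    boxes, scores, idxs = make_boxes(n)
    t = benchmark.Timer(
        stmt="batched_nms(boxes, scores, idxs, 0.5)",
        globals={"batched_nms": batched_nms, "boxes": boxes, "scores": scores, "idxs": idxs},
    )
    print(n, t.timeit(20))
```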
huggingface/diffusers
9,876
Why isn’t VRAM being released after training LoRA?
### Describe the bug When I use train_dreambooth_lora_sdxl.py, the VRAM is not released after training. How can I fix this? ### Reproduction Not used. ### Logs _No response_ ### System Info - 🤗 Diffusers version: 0.31.0.dev0 - Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17 - Running on Google Colab?: No - Python version: 3.8.20 - PyTorch version (GPU?): 2.2.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.25.2 - Transformers version: 4.45.2 - Accelerate version: 1.0.1 - PEFT version: 0.13.2 - Bitsandbytes version: 0.44.1 - Safetensors version: 0.4.5 - xFormers version: not installed - Accelerator: NVIDIA H800, 81559 MiB - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/9876
open
[ "bug", "stale" ]
2024-11-06T11:58:59Z
2024-12-13T15:03:25Z
14
hjw-0909
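A generic workaround sketch (not the training script's own teardown): after training ends, drop the remaining references and flush the CUDA caching allocator; reserved-but-cached memory shows up as "used" in nvidia-smi until it is released.

```python
import gc
import torch

# Drop whatever the script still holds alive (names below are placeholders for this sketch),
# then return cached allocator blocks to the driver.
for name in ("unet", "text_encoder", "vae", "optimizer", "lr_scheduler"):
    globals().pop(name, None)

gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
print(f"{torch.cuda.memory_allocated() / 2**20:.0f} MiB still allocated")
```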
pytorch/ao
1,230
How to skip decomposition of dequantize_affine and quantize_affine custom ops in inductor?
I want to use the `torch.ops.quant.quantize_affine` (Q) and `torch.ops.quant.dequantize_affine` (DQ) to represent a quant model DAG in QDQ style, and do quant fusion using inductor's [pattern matcher](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/pattern_matcher.py), for instance: ``` x(i8) w(i8) b(i32) x(i8) w(i8) b(i32) | | | | | | DQ DQ DQ | | | \ | / \ | / torch.ops.aten.linear.default -> my_q_linear_triton_impl | | Q | | | y(i8) y(i8) ``` However, since `torch.ops.quant.quantize_affine` and `torch.ops.quant.dequantize_affine` are registered to inductor's decomposition table, as well as with `CompositeImplicitAutograd` flag, they are decomposed in aot_autograd. I wonder how to preserve the original Q-DQ ops after aot_autograd? I noticed that the torch's built-in custom Q-DQ ops, such as `torch.ops.quantized_decomposed.quantize_per_tensor` and `torch.ops.quantized_decomposed.dequantize_per_tensor`, can be preserved after aot_autograd, and there are [pattern rewrites](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/fx_passes/quantization.py) based on these Q-DQ ops. (BTW, what's the relationship between torchao and torch.ao module, will torchao be merged into torch.ao in the future?)
https://github.com/pytorch/ao/issues/1230
closed
[]
2024-11-06T08:01:46Z
2024-11-12T05:35:06Z
null
Nullkooland
huggingface/diffusers
9,866
Flux ControlNet can't be trained, does this script really work?
### Describe the bug When run with one process, the code breaks down and returns: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`. When run with more than one process, the code breaks down and returns: Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. ### Reproduction Just follow the instructions and it will be reproduced. ### Logs _No response_ ### System Info diffusers v0.32 ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/9866
closed
[ "bug", "stale" ]
2024-11-05T08:51:57Z
2024-12-05T15:19:12Z
4
liuyu19970607
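The first error above is the standard DDP unused-parameters failure. A hedged sketch of the usual workaround when a script is driven by 🤗 Accelerate (this is not necessarily how the official training script wires it up):

```python
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# Tell DDP to tolerate parameters that receive no gradient in a given step
# (e.g. ControlNet branches skipped for some batches).
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

# model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
```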
pytorch/executorch
6,655
How to Build and Run Llama 3.2 1B Instruct with the Qualcomm AI Engine Direct Backend?
### Right Case When I follow the doc : https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#enablement, I export the Llama3.2-1B-Instruct:int4-spinquant-eo8 model to xnnpack backend pte successfully, and working alright on cpu. [ ![SpinQuant_XNNPACK](https://github.com/user-attachments/assets/4a9da7f9-e68b-4682-8fde-88bae0b4800f) ](url) ### Bad Case But as the link: https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md, when I export to the qnn backend using mode Llama3.2-1B-Instruct, I can get the out pte file, but when I make it running on the android device, it not working right. **I export pte file like this:** python -m examples.models.llama.export_llama --checkpoint "${MODEL_DIR}/consolidated.00.pth" -p "${MODEL_DIR}/params.json" -kv --disable_dynamic_shape --qnn --pt2e_quantize qnn_16a4w -d fp32 --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' --soc_model SM8550 --output_name="llama3_2_ptq_qnn_.pte" **This is the part of output when I export** INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_permute_copy_default_979, aten.permute_copy.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_squeeze_copy_dims_175, aten.squeeze_copy.dims INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_add_tensor_79, aten.add.Tensor INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_select_copy_int_512, aten.select_copy.int INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_rms_norm_default_32, aten.rms_norm.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_view_copy_default_288, aten.view_copy.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_permute_copy_default_980, aten.permute_copy.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_convolution_default_112, aten.convolution.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_permute_copy_default_981, aten.permute_copy.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_view_copy_default_289, aten.view_copy.default INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: quantized_decomposed_dequantize_per_tensor_tensor, quantized_decomposed.dequantize_per_tensor.tensor [INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters [INFO] [Qnn ExecuTorch]: Destroy Qnn context [INFO] [Qnn ExecuTorch]: Destroy Qnn device [INFO] [Qnn ExecuTorch]: Destroy Qnn backend /home/hebaotong/AI/Executorch/executorch_new/executorch/exir/emit/_emitter.py:1512: UserWarning: Mutation on a buffer in the model is detected. ExecuTorch assumes buffers that are mutated in the graph have a meaningless initial state, only the shape and dtype will be serialized. warnings.warn( INFO:root:Required memory for activation in bytes: [0, 17552384] modelname: llama3_2_ptq_qnn_ output_file: llama3_2_ptq_qnn_.pte INFO:root:Saved exported program to llama3_2_ptq_qnn_.pte **Screenshot of run status** ![PTQ_QNN](https://github.com/user-attachments/assets/c2959707-51cc-4f9d-982f-41186ee3ddfe)
https://github.com/pytorch/executorch/issues/6655
open
[ "partner: qualcomm", "triaged", "module: qnn", "module: llm" ]
2024-11-05T08:00:19Z
2025-12-19T19:15:57Z
null
baotonghe
pytorch/serve
3,357
413 Request Entity Too Large
### 📚 The doc issue When making a request, sometimes 413 Request Entity Too Large is reported. Is there any configuration for torchserve that can increase the threshold of request size? ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/serve/issues/3357
open
[]
2024-11-05T02:38:59Z
2025-01-12T05:21:34Z
1
pengxin233
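TorchServe reads payload size limits from config.properties; a minimal sketch with both limits raised to roughly 100 MB (the value is arbitrary here; the keys below are the documented request/response size settings, in bytes):

```
# config.properties
max_request_size=104857600
max_response_size=104857600
```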
pytorch/tutorials
3,143
New Search Engine should link to right branch (stable/main/preview pr branch)
The search feature should match the branch that the docs loaded in. Why? The use case I often have is I use the search bar to quickly navigate to the page I had just edited in my PR to see how it'd render in prod. The new search engine produces results that always directs to stable, though, so there's no easy way to navigate to the page I wanted to check. This is a regression from the previous search experience. For example, in the following preview, I'm looking for the custom operators page, which I had modified in the PR. I search for it in the preview docs, but all the results point to stable. Ideally, these would point to the docs built for my PR branch, which was the old behavior. ![image](https://github.com/user-attachments/assets/640c00d2-e6ec-4413-a81a-530aedd0f447) It would also be good for those look at docs on main to stay in docs on main (vs be redirected to stable). ## Alternatives Allow the old search engine
https://github.com/pytorch/tutorials/issues/3143
closed
[ "regression" ]
2024-11-04T19:42:26Z
2024-11-19T19:19:34Z
0
janeyx99
pytorch/xla
8,355
Offer user guide instructions to users to leverage various `libtpu` versions
## 📚 Documentation Offer user guide instructions to users to leverage various `libtpu` versions. We want users to have a clear understanding of how to set their expectations when choosing between different libtpu options. Here is a snippet of the various libtpu install options. I will add more details (as needed) to this bug. ``` # Install latest libtpu release $ pip install libtpu -f https://storage.googleapis.com/libtpu-wheels/index.html # Install specific libtpu release $ pip install libtpu==x.y.z -f https://storage.googleapis.com/libtpu-wheels/index.html # Install latest libtpu nightly build $ pip install libtpu --pre -f https://storage.googleapis.com/libtpu-wheels/index.html # Install specific libtpu nightly build $ pip install libtpu==0.0.3.dev20241029 -f https://storage.googleapis.com/libtpu-wheels/index.html ``` Asking @mikegre-google to help with adding this information to the README. cc @tengyifei to assist
https://github.com/pytorch/xla/issues/8355
closed
[ "usability", "documentation" ]
2024-11-04T18:12:13Z
2025-03-03T18:32:33Z
15
miladm
huggingface/optimum-quanto
346
How to support 4-bit activation quantization?
As mentioned in the title.
https://github.com/huggingface/optimum-quanto/issues/346
closed
[ "Stale" ]
2024-11-04T09:59:21Z
2024-12-10T02:10:31Z
null
Ther-nullptr
pytorch/vision
8,714
I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?
### 🐛 Describe the bug I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue? ### Versions I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?
https://github.com/pytorch/vision/issues/8714
closed
[]
2024-11-04T07:23:48Z
2024-12-11T09:35:34Z
5
jiangsu415
huggingface/transformers
34,591
How to retrain the GLIP model on the Object365 dataset
Since I made some modifications to the GLIP model, I need to perform some pre-training again to improve performance. I replaced `_base_ = [../_base_/datasets/coco_detection.py]` with `_base_ = [../_base_/datasets/objects365v1_detection.py]` in `glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py` to train on Object365. Is this correct?
https://github.com/huggingface/transformers/issues/34591
closed
[]
2024-11-04T03:54:17Z
2024-11-04T06:46:17Z
null
Polarisamoon
huggingface/diffusers
9,847
Merge Lora weights into base model
I have fine-tuned the Stable Diffusion model and would like to merge the LoRA weights into the model itself. In PEFT this is supported via the `merge_and_unload` function, but I can't find an equivalent option in diffusers. Is there any way to get the base model with the fine-tuned weights merged in? If I am not wrong, only the UNet part of the model weights needs to be merged. This is necessary for tasks like feature extraction.
https://github.com/huggingface/diffusers/issues/9847
closed
[]
2024-11-02T18:00:28Z
2024-11-03T03:03:45Z
1
yaswanth19
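For reference, a hedged sketch of the closest diffusers equivalent to PEFT's `merge_and_unload`: `fuse_lora()` folds the LoRA deltas into the base weights in place. Paths below are placeholders, and the exact behaviour around saving a fused pipeline can vary between diffusers versions:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/lora")  # placeholder path to the fine-tuned LoRA
pipe.fuse_lora(lora_scale=1.0)          # merge the LoRA deltas into the base (UNet/text encoder) weights
# Depending on the diffusers version, dropping the adapter modules may also be needed
# before saving: pipe.unload_lora_weights()
pipe.save_pretrained("sd15-lora-merged")  # placeholder output directory

unet = pipe.unet  # the UNet now carries the merged weights, e.g. for feature extraction
```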
huggingface/chat-ui
1,550
Add full-text search in chat history
## Describe your feature request Allow users to search for specific keywords or phrases within the chat history, making it easier to find and recall previous conversations. ## Screenshots (if relevant) An example of the search bar placement could be found in #1079 ## Implementation idea One possible implementation could be to use a library to index the chat history data. This would allow for efficient and scalable search functionality. The search bar could be added to the chat history interface, and when a user enters a search query, it would send a request to the search index to retrieve relevant results. The results could be displayed in a dropdown list or a separate search results page, with links to the original chat messages. ## Previous proposals and why this one is different I'm aware that a similar proposal was made in the past #243, but it was rejected in favor of using the browser's page search functionality (ctrl + F). However, I'd like to argue that page search does not provide the same functionality as a dedicated full-text search in chat history. Here's why: - Page search is limited to the currently loaded chat history and previous chat names, whereas a dedicated search would allow users to search across the entire conversation history, even if it's not currently loaded on the page. - Page search does not provide any contextual information, such as the date and time of the message, or the conversation, whereas a dedicated search could provide this information and make it easier for users to understand the context of the search results. Given these differences, I believe that a dedicated full-text search in chat history is a valuable feature that would greatly improve the user experience, and I'd like to propose it again for consideration. Personally, I tend to create a new chat for each small problem to keep the LLM focused on what's important. As a result, I end up with too many chats with similar names, which makes the browser page search nearly useless.
https://github.com/huggingface/chat-ui/issues/1550
closed
[ "enhancement" ]
2024-11-01T19:27:41Z
2025-05-28T15:03:19Z
5
kadykov
pytorch/torchchat
1,338
can't build AOTI runner
### 🐛 Describe the bug `torchchat/utils/scripts/build_native.sh aoti` Fails with ``` Building aoti native runner... Defaulting TORCHCHAT_ROOT to /home/warden/source/torchchat/torchchat/utils/scripts/../../.. since it is unset. ~/source/torchchat ~/source/torchchat Synchronizing submodule url for 'tokenizer/third-party/abseil-cpp' Synchronizing submodule url for 'tokenizer/third-party/re2' Synchronizing submodule url for 'tokenizer/third-party/sentencepiece' ~/source/torchchat -- VERSION: 0.2.1 -- Not Found TCMalloc: TCMALLOC_LIB-NOTFOUND -- Using ET BUILD DIR: --[et-build]-- -- TORCHCHAT_ROOT="/home/warden/source/torchchat" -- Looking for excutorch in /home/warden/source/torchchat/et-build/install -- Could NOT find executorch (missing: executorch_DIR) -- Caffe2: CUDA detected: 12.0 -- Caffe2: CUDA nvcc is: /usr/bin/nvcc -- Caffe2: CUDA toolkit directory: /usr -- Caffe2: Header version is: 12.0 -- Could NOT find nvtx3 (missing: nvtx3_dir) -- USE_CUDNN is set to 0. Compiling without cuDNN support -- USE_CUSPARSELT is set to 0. Compiling without cuSPARSELt support -- USE_CUDSS is set to 0. Compiling without cuDSS support -- USE_CUFILE is set to 0. Compiling without cuFile support -- Autodetected CUDA architecture(s): 8.9 8.6 -- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89;-gencode;arch=compute_86,code=sm_86 -- Configuring done (0.3s) -- Generating done (0.1s) -- Build files have been written to: /home/warden/source/torchchat/cmake-out [1/4] Linking CXX static library tokenizer/third-party/sentencepiece/src/libsentencepiece.a [2/4] Building CXX object tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o FAILED: tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o /usr/bin/c++ -I/home/warden/source/torchchat/tokenizer -I/home/warden/source/torchchat/tokenizer/third-party/sentencepiece/src -I/home/warden/source/torchchat/tokenizer/third-party/re2 -I/home/warden/source/torchchat/tokenizer/third-party/abseil-cpp -D_GLIBCXX_USE_CXX11_ABI=0 -MD -MT tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o -MF tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o.d -o tokenizer/CMakeFiles/tokenizer.dir/tiktoken.cpp.o -c /home/warden/source/torchchat/tokenizer/tiktoken.cpp In file included from /home/warden/source/torchchat/tokenizer/tiktoken.cpp:18: /home/warden/source/torchchat/tokenizer/base64.h:37:11: error: ‘uint32_t’ does not name a type 37 | constexpr uint32_t DECODE_TABLE[] = { | ^~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:29:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’? 28 | #include <string> +++ |+#include <cstdint> 29 | #include <string_view> /home/warden/source/torchchat/tokenizer/base64.h:57:13: error: variable or field ‘validate’ declared void 57 | inline void validate(uint32_t v) { | ^~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:57:22: error: ‘uint32_t’ was not declared in this scope 57 | inline void validate(uint32_t v) { | ^~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:57:22: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’? /home/warden/source/torchchat/tokenizer/base64.h: In function ‘void base64::detail::decode(const std::string_view&, std::string&)’: /home/warden/source/torchchat/tokenizer/base64.h:70:3: error: ‘uint32_t’ was not declared in this scope 70 | uint32_t val = 0; | ^~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:70:3: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’? 
/home/warden/source/torchchat/tokenizer/base64.h:72:3: error: ‘uint8_t’ was not declared in this scope 72 | uint8_t c = input[0]; | ^~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:72:3: note: ‘uint8_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’? /home/warden/source/torchchat/tokenizer/base64.h:73:12: error: ‘DECODE_TABLE’ was not declared in this scope 73 | auto v = DECODE_TABLE[c]; | ^~~~~~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:73:25: error: ‘c’ was not declared in this scope 73 | auto v = DECODE_TABLE[c]; | ^ /home/warden/source/torchchat/tokenizer/base64.h:74:3: error: ‘validate’ was not declared in this scope 74 | validate(v); | ^~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:75:3: error: ‘val’ was not declared in this scope 75 | val = v; | ^~~ /home/warden/source/torchchat/tokenizer/base64.h: In function ‘void base64::detail::decode_1_padding(const std::string_view&, std::string&)’: /home/warden/source/torchchat/tokenizer/base64.h:105:3: error: ‘uint32_t’ was not declared in this scope 105 | uint32_t val = 0; | ^~~~~~~~ /home/warden/source/torchchat/tokenizer/base64.h:105:3: note: ‘uint32_t’ is defined in
https://github.com/pytorch/torchchat/issues/1338
closed
[]
2024-11-01T17:52:21Z
2024-11-01T21:36:12Z
1
byjlw
huggingface/diffusers
9,837
[Feature] Is it possible to customize latents.shape / prepare_latent for context parallel case?
**Is your feature request related to a problem? Please describe.** One may need to extend the code to the context-parallel case, where the latent sequence length needs to be divided across ranks. Instead of copying all the code in pipeline.py, the minimal modification is just adding a few lines that divide the latent shape and all_gather the result from the output. I suggest adding this feature so that monkey patching becomes easier.
https://github.com/huggingface/diffusers/issues/9837
closed
[ "stale" ]
2024-11-01T14:32:05Z
2024-12-01T15:07:36Z
3
foreverpiano
huggingface/diffusers
9,836
[Feature] Can we record layer_id for DiT model?
**Is your feature request related to a problem? Please describe.** Some layerwise algorithms are based on the layer id. This only needs a simple modification to Transformer2DModel and its inner modules (e.g. the attention and batch-norm parts): just pass the layer_id as an extra parameter.
https://github.com/huggingface/diffusers/issues/9836
closed
[ "stale" ]
2024-11-01T14:26:31Z
2025-01-27T01:31:21Z
9
foreverpiano
huggingface/diffusers
9,835
unused parameters lead to an error when training controlnet_sd3
### Discussed in https://github.com/huggingface/diffusers/discussions/9834 <div type='discussions-op-text'> <sup>Originally posted by **Zheng-Fang-CH** November 1, 2024</sup> ![b1fa13bdb595284dce31e3cf189876b](https://github.com/user-attachments/assets/12faa0fc-acb8-4c98-ba03-b0e41bc9075a) Has anyone else met this issue? I get this error no matter whether I train on a single GPU or multiple GPUs.</div>
https://github.com/huggingface/diffusers/issues/9835
closed
[]
2024-11-01T13:57:03Z
2024-11-17T07:33:25Z
6
Daryu-Fan
huggingface/diffusers
9,833
SD3.5-large. Why is it OK when calling with a single thread, but not with multiple threads?
### Describe the bug First, I created a SD3.5-large service: ```python import os os.environ["CUDA_VISIBLE_DEVICES"] = "1" import uuid from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, DDIMScheduler, DDPMParallelScheduler from diffusers import StableDiffusion3Pipeline import torch from transformers import T5EncoderModel import time from flask import request, jsonify import logging import sys import flask app = flask.Flask("sd_server") handler = logging.StreamHandler(sys.stdout) handler.setFormatter(logging.Formatter("[%(asctime)s] %(levelname)s in %(module)s: %(message)s")) app.logger.handlers.clear() app.logger.addHandler(handler) app.logger.setLevel(logging.INFO) # model pipeline model_id = "../stable-diffusion-3.5-large" nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model_nf4 = SD3Transformer2DModel.from_pretrained( model_id, subfolder="transformer", quantization_config=nf4_config, torch_dtype=torch.bfloat16 ) model_nf4 = model_nf4.to("cuda:0") pipeline = StableDiffusion3Pipeline.from_pretrained( model_id, transformer=model_nf4, torch_dtype=torch.bfloat16 ) # pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) # pipeline.scheduler = DDPMParallelScheduler.from_config(pipeline.scheduler.config) pipeline = pipeline.to("cuda:0") # # diffusers/t5-nf4 # t5_nf4 = T5EncoderModel.from_pretrained("text_encoder_3", torch_dtype=torch.bfloat16) # t5_nf4 = t5_nf4.to("cuda:0") # pipeline = StableDiffusion3Pipeline.from_pretrained( # model_id, # transformer=model_nf4, # text_encoder_3=t5_nf4, # torch_dtype=torch.bfloat16 # ) # pipeline = pipeline.to("cuda:0") def generate_uuid_filename(extension=".jpeg"): filename = f"{uuid.uuid4()}{extension}" return filename def image_generation(prompt, negative_prompt, width, height, save_path, num_inference_steps=28, guidance_scale=4.5, max_sequence_length=512): image = pipeline( prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=num_inference_steps, width=width, height=height, guidance_scale=guidance_scale, max_sequence_length=max_sequence_length, ).images[0] file_name = generate_uuid_filename() image.save(os.path.join(save_path, file_name)) torch.cuda.empty_cache() return f"{file_name}保存完毕..." def update_prompt(req_data): trans = {"natural":["cinematic photo ```%s``` , photograph, film, bokeh, professional, 4k, highly detailed", "drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly"], "vivid":["HDR photo of ``%s``` . 
High dynamic range, vivid, rich details, clear shadows and highlights, realistic, intense, enhanced contrast, highly detailed", "flat, low contrast, oversaturated, underexposed, overexposed, blurred, noisy"]} style = "natural" try: if req_data.get('style') != None: if req_data.get('style') in trans.keys(): style = req_data.get('style') except: pass import re try: req_data["promptEnglish"] = re.findall(r'\\"(.+)\\"',req_data["promptEnglish"])[0] except: pass prompt = trans[style][0]%req_data["promptEnglish"] negative_prompt = trans[style][1] if req_data["negativePromptEnglish"] not in [None ,'']: negative_prompt = req_data["negativePromptEnglish"] return prompt, negative_prompt @app.route('/api/text_to_img', methods=['POST']) def route(): res = {"id": "", "object": "image", "created":int(time.time()), "data":[]} req_data = request.json app.logger.info(req_data) prompt, negative_prompt = update_prompt(req_data) app.logger.info(prompt+"|"+negative_prompt) width = int(req_data["size"].split("x")[0]) height= int(req_data["size"].split("x")[1]) res["data"] = image_generation(prompt, negative_prompt, width, height, './') return jsonify(res) if __name__ == '__main__': app.run(host='0.0.0.0',port=12571,threaded=True, debug=False) ``` Then I called this service concurrently and the following problems occurred: ```bash [2024-11-01 07:32:12,370] INFO in app: {'prompt': '', 'promptEnglish': 'A capybara holding a sign that reads Hello Fast World', 'negative_prompt': '', 'negativePromptEnglish': None, 'style': 'natural', 'size': '1024x1024'} [2024-11-01 07:32:12,371] INFO in app: cinematic photo ```A capybara holding a sign that reads Hello Fast World``` , photograph, film, bokeh, professional, 4k, highly detailed|drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly 4%|███▋
https://github.com/huggingface/diffusers/issues/9833
closed
[ "bug" ]
2024-11-01T08:00:04Z
2024-11-02T02:14:50Z
1
EvanSong77
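One likely culprit in the service above: a single diffusers pipeline object is shared across Flask threads, and pipelines are generally not thread-safe (per-call state such as scheduler step state can interleave across concurrent calls). A minimal sketch of serializing inference with a lock, wrapping the service's own `image_generation()`; a request queue or batching layer would be the more scalable fix:

```python
import threading

# One lock guarding the shared pipeline: only one request runs inference at a time.
pipeline_lock = threading.Lock()

def image_generation_threadsafe(*args, **kwargs):
    """Wrap the service's image_generation() so concurrent Flask threads cannot interleave."""
    with pipeline_lock:
        return image_generation(*args, **kwargs)
```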
huggingface/diffusers
9,825
Support IPAdapters for FLUX pipelines
### Model/Pipeline/Scheduler description IPAdapter for FLUX is available now, do you have any plans to add IPAdapter to FLUX pipelines? ### Open source status - [X] The model implementation is available. - [X] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation model implementation: * https://github.com/XLabs-AI/x-flux/blob/main/src/flux/xflux_pipeline.py#L55 model weights: * https://huggingface.co/XLabs-AI/flux-ip-adapter-v2 * https://huggingface.co/XLabs-AI/flux-ip-adapter
https://github.com/huggingface/diffusers/issues/9825
closed
[ "help wanted", "wip", "contributions-welcome", "IPAdapter" ]
2024-10-31T23:07:32Z
2024-12-21T17:49:59Z
10
chenxiao111222
huggingface/diffusers
9,822
Loading SDXL loras into Flux
### Describe the bug Currently it's possible to load SDXL LoRAs into Flux without any warning. ### Reproduction Would it be possible to raise a warning (and an error when a boolean flag is set) when the list of layers here is empty: https://github.com/huggingface/diffusers/blob/41e4779d988ead99e7acd78dc8e752de88777d0f/src/diffusers/loaders/lora_pipeline.py#L1905 ### Logs _No response_ ### System Info ubuntu ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/9822
closed
[ "bug" ]
2024-10-31T18:01:29Z
2024-12-10T14:37:32Z
8
christopher5106
huggingface/datasets
7,268
load_from_disk
### Describe the bug I have data saved with save_to_disk. The data is big (700 GB). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that? ### Steps to reproduce the bug Try to load data using load_from_disk after it was saved using save_to_disk. ### Expected behavior Runs out of disk space. ### Environment info latest version
https://github.com/huggingface/datasets/issues/7268
open
[]
2024-10-31T11:51:56Z
2025-07-01T08:42:17Z
3
ghaith-mq
pytorch/xla
8,342
Instructions in CONTRIBUTING.md for using VS Code don't seem to work
## 📚 Documentation I've followed the instructions in CONTRIBUTING.md to set up a dev environment using VS Code. Next I run python and then tried to import torch_xla as xla and I get an error: ``` >>> import torch_xla as xla Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/workspaces/xla_vs_files/pytorch/xla/torch_xla/__init__.py", line 259, in <module> from .stablehlo import save_as_stablehlo, save_torch_model_as_stablehlo File "/workspaces/xla_vs_files/pytorch/xla/torch_xla/stablehlo.py", line 18, in <module> from torch_xla._dynamo import dynamo_bridge File "/workspaces/xla_vs_files/pytorch/xla/torch_xla/_dynamo/dynamo_bridge.py", line 20, in <module> from torch._inductor.fx_passes.post_grad import ConstructorMoverPass File "/usr/local/lib/python3.10/site-packages/torch/_inductor/fx_passes/post_grad.py", line 22, in <module> from .. import config, ir, pattern_matcher File "/usr/local/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 96, in <module> from .lowering import fallback_node_due_to_unsupported_type File "/usr/local/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 6639, in <module> from . import kernel File "/usr/local/lib/python3.10/site-packages/torch/_inductor/kernel/__init__.py", line 1, in <module> from . import mm, mm_common, mm_plus_mm, unpack_mixed_mm File "/usr/local/lib/python3.10/site-packages/torch/_inductor/kernel/mm.py", line 16, in <module> from torch._inductor.codegen.cpp_gemm_template import CppPackedGemmTemplate File "/usr/local/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_gemm_template.py", line 14, in <module> from ..kernel.mm_common import mm_args File "/usr/local/lib/python3.10/site-packages/torch/_inductor/kernel/mm_common.py", line 10, in <module> from torch._inductor.select_algorithm import realize_inputs File "/usr/local/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 22, in <module> from filelock import FileLock ModuleNotFoundError: No module named 'filelock' ``` So it appears something isn't configured correctly. If I follow the instructions for directly using a container, everything works as expected.
https://github.com/pytorch/xla/issues/8342
closed
[ "documentation" ]
2024-10-30T18:16:38Z
2024-10-30T18:36:37Z
1
mikegre-google
huggingface/peft
2,188
How to change 'modules_to_save' setting when reloading a lora finetuned model
### System Info - `transformers` version: 4.36.2 - Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.19 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @BenjaminBossan ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [X] My own task or dataset (give details below) ### Reproduction @BenjaminBossan 1. I use lora to finetune whisper,and get the model A. The settings are ``` config = LoraConfig(r=8, lora_alpha=16,target_modules=target_modules,modules_to_save=modules_to_save,lora_dropout=0.05, bias="none") model = get_peft_model(model, config) ``` and then I change the source code of model A, I add an additional layer. I now want to train a model with an extra layer based on the lora trained model A. I use: ``` model_lora_path = "../lora_path/" + 'checkpoint-56416' model = PeftModel.from_pretrained(model,model_lora_path,ignore_mismatched_sizes=True).cuda() ``` But the model LoraConfig's "modules_to_save" can not be changed, I want to store the additional layer in to 'adapter_model.safetensors' How can I change my code? In short, I want to add parameters to modules_to_save in LoraConfig during the reload process based on the trained lora model so that the additional layer can be stored. I tried to use `model.peft_config['default'].modules_to_save.extend(modules_to_save)` to add the “modules_to_save” but it doesn't work. ### Expected behavior Change reload lora model's LoraConfig settings
https://github.com/huggingface/peft/issues/2188
closed
[]
2024-10-30T12:26:37Z
2024-12-08T15:03:37Z
null
dengchengxifrank
huggingface/huggingface.js
996
@huggingface/hub: how to use `modelInfo` with proper typing
THe `modelInfo` method is allowing the caller to define which field will be provided, it has been added in https://github.com/huggingface/huggingface.js/pull/946 https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L9-L11 Here is an example ```typescript $: const info = await modelInfo({ name: "openai-community/gpt2", }); $: console.log(info); { id: '621ffdc036468d709f17434d', name: 'openai-community/gpt2', private: false, task: 'text-generation', downloads: 13764131, gated: false, likes: 2334, updatedAt: 2024-02-19T10:57:45.000Z } ``` We can ask for additional fields, using the `additionalFields`. Here is an example ```typescript $: const info = await modelInfo({ name: "openai-community/gpt2", additionalFields: ['author'], }); $: console.log(info); { // ... omitted author: 'openai-community', } ``` However I am not able to find proper typing for the method calling and return type. The return type of `modelInfo` is the following https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L21 The additionalFields is the following https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L15 But, I am getting an error when doing the following ```typescript const info = await modelInfo<'author'>({ name: "openai-community/gpt2", additionalFields: ['author'], }); ``` `TS2344: Type string does not satisfy the constraint never` I am also interesting in getting the full `ApiModelInfo` object, but I am not able to use the method with the right typing :thinking: . cc @coyotte508 :)
https://github.com/huggingface/huggingface.js/issues/996
closed
[]
2024-10-30T10:41:36Z
2024-10-30T12:02:47Z
null
axel7083
huggingface/diffusers
9,802
Multidiffusion (panorama pipeline) is missing segmentation inputs?
I'm looking at the multidiffusion panorama pipeline page (https://huggingface.co/docs/diffusers/en/api/pipelines/panorama). It looks like there is no way to specify the segmentation and associated prompts as in the original paper https://multidiffusion.github.io/ . If the code only has the panorama capability and not the region based generation using segmentation and prompts, then it should be extended to include the regional generation... If it does have region based generation then the documentation should be updated to show how to use it!
https://github.com/huggingface/diffusers/issues/9802
open
[ "stale" ]
2024-10-29T20:15:15Z
2024-12-24T15:03:30Z
5
jloveric
pytorch/TensorRT
3,267
❓ [Question] How do you properly deploy a quantized model with tensorrt
## ❓ Question I have a PTQ model and a QAT model trained with the official pytorch API following the quantization tutorial, and I wish to deploy them on TensorRT for inference. The model is metaformer-like using convolution layers as token mixer. One part of the quantized model looks like this: ![image](https://github.com/user-attachments/assets/8efe5705-9044-4609-98ad-74e4be1c5ba0) ## What you have already tried I have tried different ways to make things work: 1. the package torch2trt: there's huge problem with dynamic input. The dataset consists of different inputs (B,C,H,W) where H and W are not necessarily the same. There's a torch2trt-dynamic package but I think there are bugs in the plugins. The code basically looks like this: `model_trt = torch2trt( model_fp32, [torch.randn(1, 11, 64, 64).to('cuda')], max_batch_size=batch_size, fp16_mode=False, int8_mode=True, calibrator= trainLoader, input_shapes=[(None, 11, None, None)] )` 3. torch.compile() with backends=tensorrt. When I was trying to compile the PTQ model, there's RuntimeError: quantized::conv2d (ONEDNN): data type of input should be QUint8. And when I was trying to use the QAT model, there's W1029 14:21:17.640402 139903289382080 torch/_dynamo/utils.py:1195] [2/0] Unsupported: quantized nyi in meta tensors with fake tensor propagation. Here's the code I used: `trt_gm = torch.compile( model, dynamic= True, backend="tensorrt",) ` 4. try to convert the torch model to an onnx model, then convert it into the trt engine. There are several problems in this case: - The onnx model is runs weirdly slow with onnx runtime. Furthermore, the loss calculated is extremely high. Here's an example: ![image](https://github.com/user-attachments/assets/fb0f1f3a-2c5c-4f8d-8bf5-c6a3b5aac6ac) - I tried to visualize the quantized ONNX model with Netron because converting the quantized ONNX model to TRT engine always raise ![image](https://github.com/user-attachments/assets/b09bf68a-9a8a-4ce0-8fbb-04c5bc30bf71) This is the problematic part of the graph ![image](https://github.com/user-attachments/assets/9c11d90d-8880-4e1f-9716-342edb1c4864) The rightmost DequantizeLinear node is causing problem. I checked the x and found that it's an in32 constant array and the x_scale is a float32 constant array. The output of this node turned out to be the bias passed into the Conv layer. There must be something wrong in the behavior of the conversion. When doing quantization with the pytorch API, only activations and weights were observed by the defined observer, so I was expecting only the leftmost and the middle DequantizeLinear Nodes while bias should be stored in fp32 and directly passed into the Conv layer. Using onnx_simplified is not able to get rid of the node. With the incompatibility between the conversion of quantized torch model to ONNX model, I'm not able to further convert the model into trt engine. I've considered using the onnx API for quantization, but the performance drop thing from unquantized original torch model to ONNX model is quite concerning. 
The converting code looks like this: `torch.onnx.export( quantized_model, dummy_input, args.onnx_export_path, input_names=["input"], output_names=["output"], opset_version=13, export_params= True, keep_initializers_as_inputs=False, dynamic_axes= {'input': {0:'batch_size', 2: "h", 3: "w"}, 'output': {0:'batch_size', 2: "h", 3: "w"} } )` ## Environment > Build information about Torch-TensorRT can be found by turning on debug messages - PyTorch Version: 2.3.1 - CPU Architecture: x86_64 - OS: Ubuntu 20.04.4 LTS - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda - Are you using local sources or building from archives: No - Python version: 3.9.19 - CUDA version: 12.1 - GPU models and configuration: - Torch_TensorRT: 2.3.0 - torch2trt: 0.5.0 - onnx:1.16.1 ## Additional context Personally I think the torch.compile() API is the most possible for me to successfully convert the quantized model since there's no performance drop. Does anyone has relevant experience on handling quantized model?
https://github.com/pytorch/TensorRT/issues/3267
open
[ "question" ]
2024-10-29T15:06:54Z
2025-03-03T22:30:06Z
null
Urania880519
pytorch/torchtitan
658
Questions about FSDP2 support and memory usage.
What is the current status of FSDP2 support in main PyTorch? I just see this here https://github.com/pytorch/pytorch/blob/main/torch/distributed/_composable/fully_shard.py#L45 > "`torch.distributed._composable.fully_shard` will be removed after PyTorch 2.5." Will FSDP2 be deprecated? Can FSDP1 work with DTensor as well as TP? I tried FSDP2 in my new project, but I got higher GPU memory usage compared to FSDP1; what might cause this? The model is a 10B DiT-like model with an extra embedding layer compared to LLMs. My main concern is whether I need to wrap more modules with fully_shard to reduce memory usage. Since the transformer block is quite similar to Llama, I use the same fully_shard wrapping as your project.
https://github.com/pytorch/torchtitan/issues/658
closed
[ "question" ]
2024-10-29T11:09:01Z
2025-08-21T02:57:19Z
null
tangjiasheng
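On the wrapping question above: with FSDP2, peak memory depends heavily on how the model is split into `fully_shard` units, since each unit's parameters are all-gathered as one group. A rough sketch of per-block wrapping in the torchtitan style (the import path is the one used around PyTorch 2.4/2.5 and may move in later releases; the block class is passed in as a placeholder):

```python
import torch.nn as nn
from torch.distributed._composable.fsdp import fully_shard

def apply_fsdp2(model: nn.Module, block_cls: type) -> nn.Module:
    # Shard every transformer/DiT block as its own unit, then the root module for
    # whatever is left (embeddings, final projection, ...). Smaller units mean
    # smaller transient all-gather buffers at the cost of more communication calls;
    # a large extra embedding living only in the root unit can raise peak memory.
    for module in model.modules():
        if isinstance(module, block_cls):
            fully_shard(module)
    fully_shard(model)
    return model
```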
huggingface/transformers.js
1,000
Error while converting Llama-3.1-8B to ONNX
### Question Hey @xenova, Thanks a lot for this library! I tried converting [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) to ONNX using the following command (on `main`): ```bash python -m scripts.convert --quantize --model_id "meta-llama/Llama-3.1-8B-Instruct" ``` Using the following `requirements.py` file (in a fresh env): ``` transformers[torch]==4.43.4 onnxruntime==1.19.2 optimum==1.21.3 onnx==1.16.2 onnxconverter-common==1.14.0 tqdm==4.66.5 onnxslim==0.1.31 --extra-index-url https://pypi.ngc.nvidia.com onnx_graphsurgeon==0.3.27 ``` But got the following error: ``` Framework not specified. Using pt to export the model. Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:27<00:00, 6.99s/it] Automatic task detection to text-generation-with-past (possible synonyms are: causal-lm-with-past). Using the export variant default. Available variants are: - default: The default ONNX variant. ***** Exporting submodel 1/1: LlamaForCausalLM ***** Using framework PyTorch: 2.5.0 Overriding 1 configuration item(s) - use_cache -> True We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache) /site-packages/transformers/models/llama/modeling_llama.py:1037: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if sequence_length != 1: Traceback (most recent call last): File "/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "scripts/convert.py", line 462, in <module> main() File "scripts/convert.py", line 349, in main main_export(**export_kwargs) File "/site-packages/optimum/exporters/onnx/__main__.py", line 365, in main_export onnx_export_from_model( File "/site-packages/optimum/exporters/onnx/convert.py", line 1170, in onnx_export_from_model _, onnx_outputs = export_models( File "/site-packages/optimum/exporters/onnx/convert.py", line 776, in export_models export( File "/site-packages/optimum/exporters/onnx/convert.py", line 881, in export export_output = export_pytorch( File "/site-packages/optimum/exporters/onnx/convert.py", line 577, in export_pytorch onnx_export( File "/site-packages/torch/onnx/__init__.py", line 375, in export export( File "/site-packages/torch/onnx/utils.py", line 502, in export _export( File "/site-packages/torch/onnx/utils.py", line 1564, in _export graph, params_dict, torch_out = _model_to_graph( File "/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph graph = _optimize_graph( File "/site-packages/torch/onnx/utils.py", line 663, in _optimize_graph _C._jit_pass_onnx_graph_shape_type_inference( RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name. 
``` I saw this somewhat related issue #967, but the error didn't happen on the ONNX library (I think `v3` has been merged now). Do you have a fix for larger models such as this one? I also tried with [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), but I got the same error, even though I see [here](https://huggingface.co/onnx-community/Llama-3.2-3B-Instruct) that you managed to convert it successfully. Thanks!
https://github.com/huggingface/transformers.js/issues/1000
open
[ "question" ]
2024-10-29T09:40:14Z
2024-10-29T09:40:14Z
null
charlesbvll
pytorch/torchchat
1,334
Multimodal Eval Enablement (Looking for Developer to Implement Design)
### 🚀 The feature, motivation and pitch ***Please note that since the actual implementation is going to be simple, and the design has already been reviewed, the purpose of this GitHub Issue is to look for a developer to implement this feature ASAP.*** LLM eval stands for the process of assessing the perplexity, performance and capabilities of LLMs, usually by having the model complete one or a series of tasks and assigning them scores. Torchchat is already using EleutherAI’s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to do eval on text LLM ([code pointer](https://github.com/pytorch/torchchat/blob/11dcbebe6bd2ee933f7302b4e14baa23761abc0c/torchchat/usages/eval.py#L198)). Recently, torchtune has worked with EleutherAI to enable eval on text-image models in the harness, and has integrated this feature into torchtune ([code pointer](https://github.com/pytorch/torchtune/blob/d0c6460b51fc18245b3da0220568e10b3de06b63/recipes/eleuther_eval.py#L40)). Torchchat wants to just copy that solution from torchtune for text-image models. Without the ability to do eval on multimodal LLMs, the enablement of multimodal LLMs on torchchat is incomplete. It’s critical to understand how well torchchat performs with image inputs. ### Additional context ## Assumptions * The eval for text LLMs is already enabled on torchchat. Code pointer to the [core eval function](https://github.com/pytorch/torchchat/blob/11dcbebe6bd2ee933f7302b4e14baa23761abc0c/torchchat/usages/eval.py#L172) and the [main function](https://github.com/pytorch/torchchat/blob/11dcbebe6bd2ee933f7302b4e14baa23761abc0c/torchchat/usages/eval.py#L226). * The Llama 3.2-11b multimodal model has been onboarded to torchchat, and in the future there will be more multimodal LLMs on torchchat. * EleutherAI’s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) has enabled eval on llama3.2-11b, thus we don’t need to make code changes in EleutherAI repo. ## The Main Goal A torchchat user can run eval on the llama 3.2-11b model (which image-text-in, text-out). Note that we don’t need to worry about the internals of how the eval happens because we will only be calling the EleutherAI’s eval libraries and report the metrics it returns. The user interface will be a commandline `python torchchat.py eval <model-name>` with additional arguments specifying detailed requirements for the eval tasks. The result will be printed out on the terminal which include the following metrics: * Tasks that have been run * The score to each task * The time it took to run each task ### RFC (Optional) # Design ## Overview In this design, the multimodal eval in torchchat will borrow from the implementation of multimodal eval in torchtune which utilizes EleutherAI’s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The reason we can do this is that torchchat uses the same Llama 3.2-11b model definition as torchtune. ## Details ### The Core Eval Implementation #### [Preferred] Approach A: import the implementation of `HFMultimodalLM` from torchtune directly The easiest implementation is to import the implementation of <code>HFMultimodalLM </code>directly from torchtune, then call <code>evaluate()</code> with this wrapper class passed in. </em> Here’s torchtune’s implementation of `HFMultimodalLM`: [code pointer](https://github.com/pytorch/torchtune/blob/ced1a840300b1ab550dac4fc2054b187f5b45c8c/recipes/eleuther_eval.py#L68). 
*Pseudocode:* ``` # In eval.py from torchtune.recipes.eleuther_eval import _VLMEvalWrapper if model is text-based: do the existing text-based model eval elif model is text-image-based: eval_results = evaluate(_VLMEvalWrapper(...)) ``` The pros and cons of this solution is discussed in the following “Alternatives Discussion” section. This solution should be the one to start with given how quick it can enable multimodal eval on torchchat. If for some unforeseen reason that it doesn’t work, then take the following approach that requires more work. #### Approach B: copy the implementation of `HFMultimodalLM` from torchtune 1. Creating a wrapper class that overrides class <code>[HFMultimodalLM](https://github.com/EleutherAI/lm-evaluation-harness/blob/0845b588303f1f59af98dd1c5bdbd78a9e75a1e2/lm_eval/models/hf_vlms.py#L30)</code>, which is an abstract Hugging Face model class for multimodal models. The implementation of this class can be copied from torchtune, [code pointer](https://github.com/pytorch/torchtune/blob/ced1a840300b1ab550dac4fc2054b187f5b45c8c/recipes/eleuther_eval.py#L68). 2. Then call <code>evaluate()</code> with this wrapper class passed in. *Pseudocode:* ``` # In eval.py from lm_eval.models.hf_vlms import HFMultimodalLM from lm_eval.evaluator import evaluate class VLMEvalWrapper(HFMultimodalLM): ...# implementation if model is text-based: do the existing text-
https://github.com/pytorch/torchchat/issues/1334
closed
[ "enhancement", "good first issue", "actionable", "Llama 3.2- Multimodal", "triaged" ]
2024-10-29T01:01:50Z
2025-03-25T06:24:18Z
26
Olivia-liu
huggingface/chat-ui
1,545
Support markdown & code blocks in text input
## Describe your feature request It would be nice to support code blocks in the text input bar, which would make it easier to input code. We could also support basic markdown features like bold or italic, though probably not headings. ## Screenshots (if relevant) Try https://claude.ai/new to see an example of how this could work
https://github.com/huggingface/chat-ui/issues/1545
open
[ "enhancement", "front" ]
2024-10-28T08:42:58Z
2024-11-11T20:26:32Z
2
nsarrazin
huggingface/peft
2,181
How can I export the model in GGUF format?
### Feature request This is a good project; I just got it today and encountered some problems. Here is my code: ``` python from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Qwen2-0.5B") model = AutoModelForCausalLM.from_pretrained("model") model.save_pretrained('directory') ``` I need a GGUF file to deploy with Ollama. When I export the model in GGUF format, I use ```shell !python llama.cpp/convert_hf_to_gguf.py directory ``` but it errors: ``` INFO:hf-to-gguf:Loading model: directory Traceback (most recent call last): File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 4436, in <module> main() File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 4404, in main hparams = Model.load_hparams(dir_model) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 462, in load_hparams with open(dir_model / "config.json", "r", encoding="utf-8") as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: 'directory/config.json' ``` <img width="1328" alt="image" src="https://github.com/user-attachments/assets/4d74c66e-b092-47f2-b570-b6e35767a6ce"> ### Motivation I need a GGUF file to deploy with Ollama. Is there any other way to deploy the PEFT model? Thank you very much. ### Your contribution I simply reproduced it on top
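For reference, a hedged sketch of one way to get a directory that `convert_hf_to_gguf.py` can read: merge the LoRA adapter into its base model first, so `save_pretrained` writes a full checkpoint including `config.json`. The directory names below are placeholders, not paths from the report above.

```python
# Sketch, assuming "adapter_dir" holds a LoRA adapter trained with PEFT on Qwen2-0.5B.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("adapter_dir")
merged = model.merge_and_unload()        # fold the LoRA weights into the base model
merged.save_pretrained("merged_dir")     # writes config.json + full model weights
AutoTokenizer.from_pretrained("adapter_dir").save_pretrained("merged_dir")  # converter also needs tokenizer files

# then convert the merged directory:
#   python llama.cpp/convert_hf_to_gguf.py merged_dir
```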
https://github.com/huggingface/peft/issues/2181
closed
[]
2024-10-26T13:51:45Z
2024-10-26T13:59:18Z
null
xu756
pytorch/xla
8,327
Add documentation for persistent caching
## 📚 Documentation Add documentation for persistent caching; the [current documentation](https://github.com/pytorch/xla/blob/310ff8f41858db7782f97542e76aeb60fa527d14/API_GUIDE.md#compilation-caching) briefly explains how to enable the cache, but it does little to 1. introduce the feature 2. explain what problem it solves 3. explain how it works 4. explain how it can be transferred from one VM to another VM 5. explain what its limitations are Let's add the new documentation under https://github.com/pytorch/xla/tree/master/docs cc @mikegre-google to help review this documentation
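Until the new page exists, a minimal sketch of how the feature is enabled today, following the snippet in the API guide linked above (the cache path is an arbitrary example):

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr

# Must run before any XLA computation so compiled programs can be stored on / read from disk.
xr.initialize_cache('/tmp/xla_compile_cache', readonly=False)

t = torch.randn(4, 4, device=xm.xla_device())
print((t @ t).sum())  # first run compiles and populates the cache; later runs reuse it
```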
https://github.com/pytorch/xla/issues/8327
open
[ "documentation" ]
2024-10-26T01:01:36Z
2024-10-26T01:01:37Z
0
miladm
huggingface/diffusers
9,772
Support ControlNetPlus Union if not already supported
It's not clear if ControlNetPlus is already supported by diffusers https://github.com/xinsir6/ControlNetPlus/tree/main/pipeline which consists of union controlnet for SDXL. This model seems to support the only SDXL segmentation that I'm aware of. If not already supported, it should be! https://github.com/xinsir6/ControlNetPlus/tree/main
https://github.com/huggingface/diffusers/issues/9772
closed
[ "help wanted", "Good second issue", "contributions-welcome" ]
2024-10-25T17:43:43Z
2024-12-11T17:07:54Z
5
jloveric
huggingface/transformers.js
994
Will these warnings have an impact?
### Question After AutoProcessor.from_pretrained finishes loading, the following warning is printed: ````text ort-wasm-simd-thread…jsep.wasm:0x10367e0 2024-10-25 20:11:31.705399 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf. ort-wasm-simd-thread…jsep.wasm:0x10367e0 2024-10-25 20:11:31.706300 [W:onnxruntime:, session_state.cc:1170 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments. ````
https://github.com/huggingface/transformers.js/issues/994
open
[ "question" ]
2024-10-25T12:17:03Z
2024-11-12T11:10:11Z
null
aidscooler
pytorch/vision
8,696
PyTorch & Torchvision compatibility issue on Jetson Orin
### 🐛 Describe the bug Previous discussion: https://forums.developer.nvidia.com/t/pytorch-torchversion-compatible-issue-on-l4t35-5-0/310929/9 ```bash daniel@daniel-nvidia:~/Work/yolov5$ python detect.py --weights yolov5s.pt --source ../../Videos/Worlds_longest_drone_fpv_one_shot.mp4 WARNING ⚠️ Python>=3.10 is required, but Python==3.8.10 is currently installed /home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source? warn( detect: weights=['yolov5s.pt'], source=../../Videos/Worlds_longest_drone_fpv_one_shot.mp4, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_format=0, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 🚀 v7.0-378-g2f74455a Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Orin, 7451MiB) Fusing layers... YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients Traceback (most recent call last): File "detect.py", line 437, in <module> main(opt) File "detect.py", line 432, in main run(**vars(opt)) File "/home/daniel/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "detect.py", line 210, in run pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) File "/home/daniel/Work/yolov5/utils/general.py", line 1104, in non_max_suppression i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS File "/home/daniel/.local/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 40, in nms _assert_has_ops() File "/home/daniel/.local/lib/python3.8/site-packages/torchvision/extension.py", line 46, in _assert_has_ops raise RuntimeError( RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install. ``` ### Versions ```bash daniel@daniel-nvidia:~/Work/yolov5$ python -c "import torch; import torchvision; print(f'PyTorch version: {torch.__version__}'); print(f'Torchvision version: {torchvision.__version__}')" /home/daniel/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/daniel/.local/lib/python3.8/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. 
Did you have `libjpeg` or `libpng` installed before building `torchvision` from source? warn( PyTorch version: 2.1.0a0+41361538.nv23.06 Torchvision version: 0.16.1+fdea156 ``` ``` daniel@daniel-nvidia:~/Work$ python collect_env.py Collecting environment information... PyTorch version: 2.1.0a0+41361538.nv23.06 Is debug build: False CUDA used to build PyTorch: 11.4 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (aarch64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.8.10 (default, Sep 11 2024, 16:02:53) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.10.192-tegra-aarch64-with-glibc2.29 Is CUDA available: True CUDA runtime version: 11.4.315 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0 /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0 /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0 /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0 /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0 HIP runtime ver
https://github.com/pytorch/vision/issues/8696
open
[]
2024-10-25T07:12:11Z
2024-10-25T07:28:44Z
0
lida2003
huggingface/transformers.js
993
How do I track the loading progress when loading an .onnx file?
### Question Because the .onnx file is large (about 170 MB), I decided to show a loading progress indicator. Code as below: ```` typescript const modelSettings = { // Do not require config.json to be present in the repository config: { model_type: "custom" }, subfolder: "", process_callback: (progress) => { modelLoadingProgress.value = Math.round(progress * 100); console.log("model : " + progress) } }; modelSettings.device = "webgpu"; modelSettings.dtype = "fp32"; model = await AutoModel.from_pretrained('briaai/RMBG-1.4', modelSettings); ```` I found that process_callback is never called. Can anyone help?
https://github.com/huggingface/transformers.js/issues/993
open
[ "question" ]
2024-10-25T05:52:12Z
2024-10-25T17:54:30Z
null
aidscooler
huggingface/finetrainers
70
How to set the resolutions when finetuning I2V model?
I want to train a video diffusion model at lower resolutions. I set height_buckets=256 and width_buckets=256 in prepare_dataset.sh and processed the data, but I run into the following error while running the train_image_to_video_lora.sh script: ValueError: It is currently not possible to generate videos at a different resolution that the defaults. This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues. How should I set the hyperparameters to train at different resolutions?
https://github.com/huggingface/finetrainers/issues/70
closed
[]
2024-10-25T05:36:19Z
2024-11-11T18:27:29Z
null
TousakaNagio
huggingface/optimum
2,080
"ValueError: Trying to export a codesage model" while trying to export codesage/codesage-large
### System Info ```shell optimum 1.23.2 MacOS 14.7 Python 3.9 ``` ### Who can help? @michaelbenayoun ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) This is a PyTorch embedding model released by AWS, as described here: https://www.linkedin.com/posts/changsha-ma-9ba7a485_yes-code-needs-its-own-embedding-models-activity-7163196644258226176-bFSW Hoping I can use it with RAG under ollama for code understanding. ``` huggingface-cli download codesage/codesage-large optimum-cli export onnx --model codesage/codesage-large codesage-large-onnx --task default --trust-remote-code ``` The error: "ValueError: Trying to export a codesage model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type codesage to be supported natively in the ONNX export." I am grateful for any help you can provide! ### Expected behavior An exported ONNX file.
https://github.com/huggingface/optimum/issues/2080
open
[ "bug" ]
2024-10-25T05:27:22Z
2024-10-25T05:27:22Z
0
TurboEncabulator9000
pytorch/pytorch
138,888
How to implement multi-card parallel inference with torchrun?
Hello everyone, I'm trying to use torchrun for dual-card parallel inference, and I have two questions. First, I found that torchrun is mainly used for model training, so can it be used for model inference? If it can, my inference process is divided into two parts: model loading and inference. I only want to load the model once and then run inference multiple times. How can I implement this? Thank you. cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
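torchrun can launch inference workers just as well as training workers. A hedged sketch of the load-once, infer-many-times pattern follows; the tiny `nn.Linear` stands in for a real model, and the script would be launched with something like `torchrun --nproc_per_node=2 infer.py`:

```python
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")          # torchrun sets rank/world_size env vars
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device(f"cuda:{local_rank}")

    model = torch.nn.Linear(16, 4).to(device)        # stand-in: load your real model here, once per rank
    model.eval()

    with torch.no_grad():                             # ...then run inference as many times as needed
        for step in range(10):                        # each rank handles its own slice of requests
            x = torch.randn(8, 16, device=device)
            y = model(x)
    if dist.get_rank() == 0:
        print("last output shape:", y.shape)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```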
https://github.com/pytorch/pytorch/issues/138888
closed
[ "oncall: distributed" ]
2024-10-25T03:52:20Z
2024-11-27T01:05:31Z
null
lcf2610
huggingface/chat-ui
1,543
RFC enable multimodal and tool usage at once for OAI endpoints ?
https://github.com/huggingface/chat-ui/blob/8ed1691ecff94e07d10dfb2874d3936d293f4842/src/lib/server/endpoints/openai/endpointOai.ts#L191C53-L191C65 I just played around with combining both of these. What do you think about enabling tool calling only if no image is in the conversation? Otherwise we need to add models twice, once for multimodal and once for tool usage. A quick solution could be to check whether image_url is part of one of the messages and, if it is, skip the tools check. I struggled with this because the upload file button was there but didn't do anything with the uploaded image until I checked the code. @nsarrazin wdyt?
https://github.com/huggingface/chat-ui/issues/1543
open
[]
2024-10-24T17:37:50Z
2024-10-24T17:39:14Z
0
flozi00
pytorch/tutorials
3,113
💡 [REQUEST] - Update tutorials with device-generic APIs
### 🚀 Describe the improvement or the new tutorial We should use the latest device-generic APIs when they come out in 2.6 in all tutorials to improve readability. ### Existing tutorials on this topic https://pytorch.org/tutorials/beginner/basics/buildmodel_tutorial is an example of one we should update. There is most likely more. ### Additional context cc @guangyey that might be a good follow up to consider before 2.6
https://github.com/pytorch/tutorials/issues/3113
closed
[]
2024-10-24T17:33:29Z
2025-01-29T09:35:10Z
3
albanD
huggingface/transformers.js
991
Loading models from "non-URL" locations in the browser
### Question Hi! I have an application where the model files will be pre-loaded in a custom format into the browser's IndexedDB. Based on my understanding, transformers.js currently only supports loading models by URL and then caches them in the browser cache. Getting the model files from IndexedDB instead seems a little tricky, as it would require "copying" a lot of the loading logic. Other ideas were to use a ServiceWorker to intercept the model download and mock the response with the files from IndexedDB, or to write the files directly into the browser cache that transformers.js uses. Both solutions seem hacky... So, before I embark on writing my own loading logic, I wanted to ask if you have any ideas or suggestions on how to approach this? Thanks in advance!
https://github.com/huggingface/transformers.js/issues/991
open
[ "question" ]
2024-10-24T12:18:19Z
2024-12-04T19:30:07Z
null
AKuederle
huggingface/finetrainers
68
How to set the hyperparameters when finetuning I2V model with LoRA?
File "/home/shinji106/ntu/cogvideox-factory/training/dataset.py", line 411, in __iter__ self.buckets[(f, h, w)].append(data) KeyError: (16, 320, 720) The resolution is (13, 320, 480) so the key of self.bucket does not match with input. How do I set the hyperparameters when running the prepare_dataset.sh and train_image_to_video_lora.sh so that the key will match?
https://github.com/huggingface/finetrainers/issues/68
closed
[]
2024-10-24T08:06:33Z
2025-01-10T23:40:06Z
null
TousakaNagio
huggingface/datasets
7,249
How to debug a custom loading script
### Describe the bug I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder (which contains the _info,_split_generators and _generate_examples methods) classes. Testing with simple data was able to output the results of the processing, but when I wished to do more complex processing, I found that I was unable to debug (even the simple samples were inaccessible). There are no errors reported, and I am able to print the _info,_split_generators and _generate_examples messages, but I am unable to access the breakpoints. ### Steps to reproduce the bug # my_dataset.py import json import datasets class MyDatasetConfig(datasets.BuilderConfig): def __init__(self, **kwargs): super(MyDatasetConfig, self).__init__(**kwargs) class MyDataset(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ MyDatasetConfig( name="default", version=VERSION, description="myDATASET" ), ] def _info(self): print("info") # breakpoints return datasets.DatasetInfo( description="myDATASET", features=datasets.Features( { "id": datasets.Value("int32"), "text": datasets.Value("string"), "label": datasets.ClassLabel(names=["negative", "positive"]), } ), supervised_keys=("text", "label"), ) def _split_generators(self, dl_manager): print("generate") # breakpoints data_file = "data.json" return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file} ), ] def _generate_examples(self, filepath): print("example") # breakpoints with open(filepath, encoding="utf-8") as f: data = json.load(f) for idx, sample in enumerate(data): yield idx, { "id": sample["id"], "text": sample["text"], "label": sample["label"], } #main.py import os os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1" from datasets import load_dataset dataset = load_dataset("my_dataset.py", split="train", cache_dir=None) print(dataset[:5]) ### Expected behavior Pause at breakpoints while running debugging ### Environment info pycharm
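One likely cause, stated as an assumption: `load_dataset` copies the loading script into the Hugging Face modules cache and imports that copy, so breakpoints set on the local file never bind to the code that actually runs. A hedged workaround sketch is to drive the builder class directly from the local module:

```python
from my_dataset import MyDataset  # import your local script so breakpoints bind to it

builder = MyDataset()
builder.download_and_prepare()          # steps through _info / _split_generators / _generate_examples
ds = builder.as_dataset(split="train")
print(ds[:5])
```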
https://github.com/huggingface/datasets/issues/7249
open
[]
2024-10-24T01:03:51Z
2024-10-24T01:03:51Z
null
ShDdu
huggingface/sentence-transformers
3,015
How to customize the dataloader? e.g. Custom Data Augmentation
Hi, I've always been used to the old .fit behaviour where I could pass in my own DataLoader, implementing the Dataset myself according to my needs. With the new trainer interface, how am I supposed to tweak the dataloader? Let's say I want to apply some random transformations to the input text; how can I do that right now? Of course, changing the original dataset by augmenting it statically is a no-go. Thanks!
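One possible approach, sketched under the assumption that the trainer fetches rows through the dataset's regular `__getitem__` path: attach an on-the-fly transform to the `datasets.Dataset` you pass in, so augmentation is re-applied every time a row is read. `augment` and the column names below are placeholders.

```python
import random
from datasets import Dataset

def augment(text: str) -> str:
    # placeholder augmentation: random word deletion
    words = text.split()
    if len(words) > 3 and random.random() < 0.5:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

train_dataset = Dataset.from_dict({
    "anchor": ["a man is eating food", "a dog runs in the park"],
    "positive": ["a person eats a meal", "a puppy is running outside"],
})

def on_the_fly(batch):
    return {
        "anchor": [augment(t) for t in batch["anchor"]],
        "positive": [augment(t) for t in batch["positive"]],
    }

train_dataset.set_transform(on_the_fly)  # re-applied on every row access
print(train_dataset[0])                  # a different augmentation each time
```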
https://github.com/huggingface/sentence-transformers/issues/3015
open
[]
2024-10-23T17:11:13Z
2024-11-15T10:32:35Z
null
msciancalepore98
huggingface/diffusers
9,756
Could not find loading_adapters.ipynb
### Describe the bug while reading doc [Load adapters](https://huggingface.co/docs/diffusers/using-diffusers/loading_adapters) I tried to open in Colab to run an example on this page. <img width="504" alt="open_colab" src="https://github.com/user-attachments/assets/0b1397f1-d266-4d83-84ab-276ea796a2a4"> It will get Notebook not found on a new page. It can't find loading_adapters.ipynb in [huggingface/notebooks](https://github.com/huggingface/notebooks) ### Reproduction I follow the doc and write down a Google Colab [Google Colab loading_adapters](https://colab.research.google.com/drive/1pYpvsOf6U9CAZfughY1aUltUQTFsw4OI) Can I contribute a pr for this? Do you know how I can do this? Commit to notebook repo? Or something different? ### Logs _No response_ ### System Info Google Colab ### Who can help? @stevhliu @sayakpaul
https://github.com/huggingface/diffusers/issues/9756
closed
[ "bug" ]
2024-10-23T13:03:11Z
2024-11-01T15:27:56Z
6
thliang01
huggingface/accelerate
3,190
How to save the optimizer state when saving the model with DeepSpeed enabled
### System Info ```Shell Unrelated to configuration ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`) - [X] My own task or dataset (give details below) ### Reproduction ``` unwrapped_model = accelerator.unwrap_model(transformer) unwrapped_model.save_pretrained(save_directory, save_function=accelerator.save, state_dict=accelerator.get_state_dict(transformer)) ``` I am using Deepspeed Zero2. I want to save the model state and optimizer state, but the current `save_pretrained()` only supports saving the model state. How can I save the optimizer state? ### Expected behavior I would like to know if it supports saving optimizer state and how to use it. THANKS!
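A hedged sketch of the usual alternative: `save_pretrained` only covers model weights, while `Accelerator.save_state` / `load_state` checkpoint the prepared model together with optimizer, scheduler, and RNG state (deferring to the DeepSpeed engine when DeepSpeed is enabled). `"checkpoint_dir"` is a placeholder path.

```python
from accelerate import Accelerator

accelerator = Accelerator()
# model, optimizer, dataloader come from your own training script
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# ... training loop ...
accelerator.save_state("checkpoint_dir")   # writes model + optimizer + scheduler + RNG state

# later, to resume:
accelerator.load_state("checkpoint_dir")
```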
https://github.com/huggingface/accelerate/issues/3190
closed
[]
2024-10-23T11:58:08Z
2024-11-01T02:53:38Z
null
ITerydh
huggingface/diffusers
9,750
Is it possible to provide img2img code for CogView3?
Is it possible to provide img2img code for CogView3?
https://github.com/huggingface/diffusers/issues/9750
open
[ "stale", "contributions-welcome" ]
2024-10-23T07:40:38Z
2024-12-20T15:04:01Z
3
ChalvYongkang
pytorch/serve
3,352
GPU not detected inside torchserve docker container
### 🐛 Describe the bug I am trying to create a Docker image for my custom handler of diffusers. I can create the Docker image and then a Docker container from it, but the Docker container is not able to detect the GPU. I have used the official TorchServe Docker image from Docker Hub, but it still cannot use the GPU inside the container. I have also added --gpus all in the Docker container run command, but it still does not work. How can I enable the GPU inside the container so that my custom handler can use it? ### Error logs ``` WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. 2024-10-23T06:11:55,474 [DEBUG] main org.pytorch.serve.util.ConfigManager - xpu-smi not available or failed: Cannot run program "xpu-smi": error=2, No such file or directory 2024-10-23T06:11:55,498 [WARN ] main org.pytorch.serve.util.ConfigManager - Your torchserve instance can access any URL to load models. When deploying to production, make sure to limit the set of allowed_urls in config.properties 2024-10-23T06:11:55,513 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager... 2024-10-23T06:11:55,560 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml 2024-10-23T06:11:55,750 [INFO ] main org.pytorch.serve.ModelServer - Torchserve version: 0.12.0 TS Home: /home/venv/lib/python3.9/site-packages Current directory: /home/model-server Temp directory: /home/model-server/tmp Metrics config path: /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml Number of GPUs: 1 Number of CPUs: 12 Max heap size: 1966 M Python executable: /home/venv/bin/python Config file: /home/model-server/config.properties Inference address: http://0.0.0.0:8080 Management address: http://0.0.0.0:8081 Metrics address: http://0.0.0.0:8082 Model Store: /home/model-server/model-store Initial Models: all Log dir: /home/model-server/logs Metrics dir: /home/model-server/logs Netty threads: 0 Netty client threads: 0 Default workers per model: 1 Blacklist Regex: N/A Maximum Response Size: 6553500 Maximum Request Size: 6553500 Limit Maximum Image Pixels: true Prefer direct buffer: false Allowed Urls: [file://.*|http(s)?://.*] Custom python dependency for model allowed: true Enable metrics API: true Metrics mode: LOG Disable system metrics: false Workflow Store: /home/model-server/model-store CPP log config: N/A Model config: {"text-to-image": {"1.0": {"defaultVersion": true,"marName": "text-to-image.mar","minWorkers": 1,"maxWorkers": 1,"batchSize": 4,"maxBatchDelay": 5000,"responseTimeout": 120}}} System metrics command: default Model API enabled: true 2024-10-23T06:11:55,762 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin... 2024-10-23T06:11:55,763 [DEBUG] main org.pytorch.serve.ModelServer - Loading models from model store: text-to-image.mar 2024-10-23T06:12:10,680 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model text-to-image 2024-10-23T06:12:10,681 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model text-to-image 2024-10-23T06:18:40,296 [INFO ] main org.pytorch.serve.wlm.ModelManager - Installed custom pip packages for model text-to-image 2024-10-23T06:18:40,297 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model text-to-image loaded. 
2024-10-23T06:18:40,297 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: text-to-image, count: 1 2024-10-23T06:18:40,329 [DEBUG] W-9000-text-to-image_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.9/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9000, --metrics-config, /home/venv/lib/python3.9/site-packages/ts/configs/metrics.yaml] 2024-10-23T06:18:40,334 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel. 2024-10-23T06:18:40,443 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080 2024-10-23T06:18:40,444 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel. 2024-10-23T06:18:40,446 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081 2024-10-23T06:18:40,446 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel. 2024-10-23T06:18:40,458 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082 Model server started. 2024-10-23T06:18:40,741 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet. 2024-10-23T06:18:41,407 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Hos
https://github.com/pytorch/serve/issues/3352
closed
[]
2024-10-23T06:47:13Z
2024-10-23T10:36:06Z
1
dummyuser-123
pytorch/xla
8,301
Provide debugging and troubleshooting tips to Pallas developer
## 📚 Documentation Please provide documentation on how to troubleshoot pallas issues. One place we can put this information is in this [Pallas doc](https://github.com/pytorch/xla/blob/master/docs/source/features/pallas.md) cc @mikegre-google to help review the upcoming PR
https://github.com/pytorch/xla/issues/8301
open
[ "documentation" ]
2024-10-22T22:50:15Z
2024-10-25T21:58:56Z
0
miladm
huggingface/optimum
2,076
Problem converting tinyllama to onnx model with optimum-cli
### System Info ```shell main branch newest local pip install ``` ### Who can help? @michaelbenayoun ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) optimum-cli export onnx --model /home/wangzhiqun/TinyLlama-1.1B-Chat-v1.0 --task text-generation --batch_size 1 --sequence_length 128 tinyllama_onnx_file ### Expected behavior To specify the batch_size and sequence_length, I use the following "optimum-cli export onnx --model /home/wangzhiqun/TinyLlama-1.1B-Chat-v1.0 --task text-generation --batch_size 1 --sequence_length 128 tinyllama_onnx_file". But the exported onnx model still holds the shape [batch_size, sequence_length]. How can I specify the fixed dimensions?
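There doesn't appear to be a dedicated CLI flag for this, so one hedged workaround is to pin the symbolic dimensions in the exported graph afterwards with the `onnx` package (onnxruntime also ships a `make_dynamic_shape_fixed` helper for the same purpose). The file names below are placeholders; the exact dim names can be checked with a model inspector such as Netron, and past-key-value inputs may carry additional symbolic dims.

```python
import onnx

model = onnx.load("tinyllama_onnx_file/model.onnx")
fixed = {"batch_size": 1, "sequence_length": 128}

for value in list(model.graph.input) + list(model.graph.output):
    for dim in value.type.tensor_type.shape.dim:
        if dim.dim_param in fixed:
            dim.dim_value = fixed[dim.dim_param]  # assigning dim_value replaces the symbolic dim

onnx.save_model(model, "tinyllama_onnx_file/model_static.onnx",
                save_as_external_data=True)       # needed when weights exceed the 2 GB protobuf limit
```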
https://github.com/huggingface/optimum/issues/2076
open
[ "bug" ]
2024-10-22T06:23:51Z
2024-10-22T06:36:42Z
0
hayyaw
pytorch/torchtitan
639
How to load previous distributed checkpoint after using FP8Linear + torch.compile?
FP8Linear + torch.compile changes the parameter names. If I do convert to FP8Linear -> torch.compile -> FSDP2 wrapping -> load distributed checkpoint, the parameter names do not match the checkpoint we want to resume from, and it's not straightforward to rename the parameters in the distributed checkpoint. So my question is: what's the expected solution for this workflow? Thanks a lot!
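In case it helps, a hedged, generic sketch of one workaround: compare `model.state_dict().keys()` against the checkpoint keys and remap before loading. `torch.compile` wraps the module so its parameter names gain an `_orig_mod.` prefix; whether the FP8 conversion renames anything further depends on the torchao version, so treat the prefix below as the common case rather than the full answer.

```python
def strip_prefix(state_dict, prefix="_orig_mod."):
    """Remove the torch.compile wrapper prefix from checkpoint/model keys."""
    return {(k[len(prefix):] if k.startswith(prefix) else k): v for k, v in state_dict.items()}

def add_prefix(state_dict, prefix="_orig_mod."):
    """Add the prefix back so keys line up with a compiled model's state_dict."""
    return {prefix + k: v for k, v in state_dict.items()}
```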
https://github.com/pytorch/torchtitan/issues/639
closed
[]
2024-10-21T23:27:33Z
2024-10-25T18:35:40Z
null
goldhuang
pytorch/ao
1,132
What are the expected inference steps after I apply torchao in training?

Hello, I have integrated torchao into my training, but I don't think it's 100% clear what inference should look like. Should I use the converted FP8 linear layers to do inference? Is delayed scaling supposed to work at inference time? Or should I use the original linear layers to do inference? Thanks a lot in advance if you can help clarify!
https://github.com/pytorch/ao/issues/1132
closed
[ "float8" ]
2024-10-21T22:19:57Z
2024-12-09T18:59:50Z
null
goldhuang
pytorch/torchtitan
638
What are the expected inference steps after I apply torchao in training?
Hello, I have integrated torchao into my training, but I think it's not very clear what inference should look like. Should I use the converted FP8 linear layers to do inference? Is delayed scaling supposed to work at inference time? Or should I use the original linear layers to do inference? Thanks in advance if you can help clarify!
https://github.com/pytorch/torchtitan/issues/638
open
[ "question" ]
2024-10-21T22:19:06Z
2024-10-22T03:33:39Z
null
goldhuang
pytorch/xla
8,295
litepod and tpu sample not working anymore: https://cloud.google.com/tpu/docs/pytorch-pods
## 🐛 Bug Sample located here doesn't seem to work on tpu v5e16 pod (previous did as of 3 days ago) https://cloud.google.com/tpu/docs/pytorch-pods ## To Reproduce Following the steps here: https://cloud.google.com/tpu/docs/pytorch-pods Before running the example: 1. set up SSH key pair using: ssh-keygen -t rsa -f .ssh/google_compute_engine -C user 2. added SSH to project meta via gcp console 3. propagate key to tpu vm: eval `ssh-agent -s` ssh-add ~/.ssh/google_compute_engine Only other change than what is in the sample is changing tpu to v5litepod-16. The vm is created and all looks correct, but the process hangs. This occurs when getting the xla device. Output on the error is below. Thank you very much for the help! Exact same procedure was working consistently until yesterday. gcloud compute tpus tpu-vm ssh tpu-vm-sample --zone=us-central1-a --project=sample_tpu_project --worker=all --command="PJRT_DEVICE=TPU python3 ~/xla/test/test_train_mp_imagenet.py \ --fake_data \ --model=resnet50 \ --num_epochs=1 2>&1 | tee ~/logs.txt" Using ssh batch size of 1. Attempting to SSH into 1 nodes with a total of 4 workers. SSH: Attempting to connect to worker 0... SSH: Attempting to connect to worker 1... SSH: Attempting to connect to worker 2... SSH: Attempting to connect to worker 3... concurrent.futures.process._RemoteTraceback: """ Traceback (most recent call last): File "/usr/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker r = call_item.fn(*call_item.args, **call_item.kwargs) File "/usr/lib/python3.10/concurrent/futures/process.py", line 205, in _process_chunk return [fn(*args) for args in chunk] File "/usr/lib/python3.10/concurrent/futures/process.py", line 205, in <listcomp> return [fn(*args) for args in chunk] File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper return fn(*args, **kwargs) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 59, in _run_thread_per_device initializer_fn(local_rank, local_world_size) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper return fn(*args, **kwargs) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 125, in initialize_multiprocess devices = xm.get_xla_supported_devices() File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 99, in get_xla_supported_devices devices = torch_xla._XLAC._xla_get_devices() RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Worker failed to join a slice within 15m """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/temp_user/xla/test/test_train_mp_imagenet.py", line 381, in <module> xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=FLAGS.num_cores) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper return fn(*args, **kwargs) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 38, in spawn return pjrt.spawn(fn, nprocs, start_method, args) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 214, in spawn run_multiprocess(spawn_fn, start_method=start_method) File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper return fn(*args, **kwargs) File 
"/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 174, in run_multiprocess replica_results = list( File "/home/temp_user/.local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 175, in <genexpr> itertools.chain.from_iterable( File "/usr/lib/python3.10/concurrent/futures/process.py", line 570, in _chain_from_iterable_of_lists for element in iterable: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator yield _result_or_cancel(fs.pop()) File "/usr/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel return fut.result(timeout) File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result return self.__get_result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Worker failed to join a slice within 15m concurrent.futures.process._RemoteTraceback: """ Traceback (most recent call last): File "/usr/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker r = call_item.fn(*call_item.args, **call_item.kwargs) File "/usr/lib/python3.10/concurrent/futures/process.py", line 205, in _process_chunk return [fn(*args) fo
https://github.com/pytorch/xla/issues/8295
closed
[]
2024-10-21T16:43:31Z
2024-10-22T00:54:24Z
8
ttdd11
huggingface/diffusers
9,731
How to use Playground2.5 to train lora with own dataset to generate pictures of a specific style?
### Describe the bug Hi, I have been working on training models using the same dataset as "stabilityai/stable-diffusion-xl-base-1.0" with the script examples/text_to_image/train_text_to_image_lora_sdxl.py, and I achieved quite promising results. Now, I am trying to further improve the performance by switching to Dreambooth. I am currently using playground2.5 with examples/dreambooth/train_dreambooth_lora_sdxl.py. However, after multiple parameter tuning attempts, the performance is still not as good as the SDXL base model. I am unsure what might be causing this. ### Reproduction ![image](https://github.com/user-attachments/assets/339a0e9b-de08-408d-a43a-495f86b5e1df) ### Logs _No response_ ### System Info - 🤗 Diffusers version: 0.31.0.dev0 - Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17 - Running on Google Colab?: No - Python version: 3.8.20 - PyTorch version (GPU?): 2.2.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.25.2 - Transformers version: 4.45.2 - Accelerate version: 1.0.1 - PEFT version: 0.13.2 - Bitsandbytes version: 0.44.1 - Safetensors version: 0.4.5 - xFormers version: not installed - Accelerator: NVIDIA H800, 81559 MiB - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/9731
open
[ "bug", "stale" ]
2024-10-21T12:10:12Z
2024-11-20T15:03:04Z
null
hjw-0909
huggingface/diffusers
9,727
FLUX.1-dev dreambooth save problem trained on multigpu
### Describe the bug I tried to train flux using accelerate and deepspeed, but when using two L40s, the model could not be saved properly. What is the problem? ### Reproduction train.sh: accelerate launch --config_file config.yaml train_flux.py \ --pretrained_model_name_or_path="./FLUX.1-dev" \ --resolution=1024 \ --train_batch_size=1 \ --output_dir="output1" \ --num_train_epochs=10 \ --checkpointing_steps=5 \ --validation_steps=500 \ --max_train_steps=40001 \ --learning_rate=4e-05 \ --seed=12345 \ --mixed_precision="fp16" \ --revision="fp16" \ --use_8bit_adam \ --gradient_accumulation_steps=1 \ --gradient_checkpointing \ --lr_scheduler="constant_with_warmup" --lr_warmup_steps=2500 \ config.yaml: compute_environment: LOCAL_MACHINE debug: false deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 1.0 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: false zero_stage: 2 distributed_type: DEEPSPEED downcast_bf16: 'no' gpu_ids: 0,1 enable_cpu_affinity: false machine_rank: 0 main_training_function: main mixed_precision: fp16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ### Logs ```shell Using /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... No modifications detected for re-loaded extension module utils, skipping build step... Loading extension module utils... Time to load utils op: 0.00030350685119628906 seconds 10/21/2024 02:58:18 - INFO - __main__ - ***** Running training ***** 10/21/2024 02:58:18 - INFO - __main__ - Num examples = 2109730 10/21/2024 02:58:18 - INFO - __main__ - Num batches each epoch = 1054865 10/21/2024 02:58:18 - INFO - __main__ - Num Epochs = 1 10/21/2024 02:58:18 - INFO - __main__ - Instantaneous batch size per device = 1 10/21/2024 02:58:18 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2 10/21/2024 02:58:18 - INFO - __main__ - Gradient Accumulation steps = 1 10/21/2024 02:58:18 - INFO - __main__ - Total optimization steps = 40001 Steps: 0%| | 0/40001 [00:00<?, ?it/s]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor Using /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... No modifications detected for re-loaded extension module utils, skipping build step... Loading extension module utils... Time to load utils op: 0.0007116794586181641 seconds [2024-10-21 02:58:29,496] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648 Steps: 0%| | 1/40001 [00:11<127:38:44, 11.49s/it, loss=0.544, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor [2024-10-21 02:58:36,774] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2147483648, reducing to 1073741824 Steps: 0%| | 2/40001 [00:18<100:07:40, 9.01s/it, loss=0.36, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor [2024-10-21 02:58:44,052] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. 
Attempted loss scale: 1073741824, reducing to 536870912 Steps: 0%| | 3/40001 [00:26<91:19:39, 8.22s/it, loss=0.543, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor [2024-10-21 02:58:51,324] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 536870912, reducing to 268435456 Steps: 0%| | 4/40001 [00:33<87:10:01, 7.85s/it, loss=1.14, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor [2024-10-21 02:58:58,612] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 268435456, reducing to 134217728 Steps: 0%|
https://github.com/huggingface/diffusers/issues/9727
closed
[ "bug" ]
2024-10-21T03:37:23Z
2024-10-29T06:38:00Z
1
jyy-1998
huggingface/diffusers
9,726
FLUX.1-dev dreambooth problem trained on multigpu
### Describe the bug I tried to use accelerate and deepspeed to train flux, and it worked fine when using two L40s, but an error occurred when using two a100s. What is the reason? ### Reproduction train.sh: accelerate launch --config_file config.yaml train_flux.py \ --pretrained_model_name_or_path="./FLUX.1-dev" \ --resolution=1024 \ --train_batch_size=1 \ --output_dir="output0" \ --num_train_epochs=10 \ --checkpointing_steps=5 \ --validation_steps=500 \ --max_train_steps=40001 \ --learning_rate=4e-05 \ --seed=12345 \ --mixed_precision="fp16" \ --revision="fp16" \ --use_8bit_adam \ --gradient_accumulation_steps=1 \ --gradient_checkpointing \ --lr_scheduler="constant_with_warmup" --lr_warmup_steps=2500 \ --mask_accept_threshold=0.6 \ --empty_prompt_prob=0.1 \ --dilate_factor=4 \ --crop_img \ --mask_cover_percent=0.0 \ --mask_cover_percent_person=0.5 \ config.yaml: compute_environment: LOCAL_MACHINE debug: false deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 1.0 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: false zero_stage: 2 distributed_type: DEEPSPEED downcast_bf16: 'no' gpu_ids: 0,1 enable_cpu_affinity: false machine_rank: 0 main_training_function: main mixed_precision: fp16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ### Logs ```shell Installed CUDA version 11.8 does not match the version torch was compiled with 11.7 but since the APIs are compatible, accepting this combination Installed CUDA version 11.8 does not match the version torch was compiled with 11.7 but since the APIs are compatible, accepting this combination Using /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root... 
[1/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -c /opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o [2/3] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX512__ -D__ENABLE_CUDA__ -DBF16_AVAILABLE -c /opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o [3/3] c++ cpu_adam.o custom_cuda_kernel.cuda.o -shared -lcurand -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o cpu_adam.so Loading extension module cpu_adam... Time to load cpu_adam op: 27.327727794647217 seconds Loading extension module cpu_adam... Time to load cpu_adam op: 21.32274580001831 seconds Adam Optimizer #0 is created with AVX512 arithmetic capability. Config: alpha=0.001000, betas=(0.900000, 0.999000), weight_decay=0.000100, adam_w=1 [2024-10-21 03:05:17,566] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.3, git-hash=unknown, git-branch=unknown Adam Optimizer #0 is created with AVX512 arithmetic capability. Config: alpha=0.001000, betas=(0.900000, 0.999000), weight_decay=0.000100, adam_w=1 10/21/2024 03:06:08 - INFO - torch.distributed.dis
https://github.com/huggingface/diffusers/issues/9726
closed
[ "bug" ]
2024-10-21T03:20:44Z
2024-10-21T03:32:42Z
0
jyy-1998
huggingface/tokenizers
1,661
How to Read Information in Large Tokenizer's Vocabulary
TLDR; This is how the byte-level BPE works. Main advantages are: - Smaller vocabularies - No unknown token This is totally expected behavior. The byte-level BPE converts all the Unicode code points into multiple byte-level characters: 1. Each Unicode code point is decomposed into bytes (1 byte for ASCII characters, and up to 4 bytes for UTF-8 Unicode code points) 2. Each byte value gets a "visible" character assigned to it from the beginning of the Unicode table. This is especially important because there are a lot of control characters, so we can't just have a simple mapping ASCII Table character <-> byte value. So some characters get other representations, like for example the white space `U+0020` becomes `Ġ`. The purpose is, by doing so, you end up with an initial alphabet of 256 tokens. These 256 tokens can then be merged together to represent any other token in the vocabulary. This results in smaller vocabularies, that won't ever need an "unknown" token. _Originally posted by @n1t0 in https://github.com/huggingface/tokenizers/issues/203#issuecomment-605105611_ @n1t0 Thank you for your previous responses. I have been working with the large tokenizer of an LLM, and I've noticed that the vocabulary contains many tokens that look like these unreadable codes. I wonder if there are any methods or tools available to help me read and interpret the information in the tokenizer's vocabulary. For example, is there a way to map these tokens back to their original words or phrases, or any other approach to make the vocabulary more interpretable?
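For reference, a hedged sketch of mapping the byte-level symbols back to readable text with the tokenizer itself; `gpt2` is just an example of a byte-level BPE model.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

token = "Ġhello"                       # how the vocabulary stores " hello"
tid = tok.convert_tokens_to_ids(token)
print(tok.decode([tid]))               # -> " hello"

# or map a slice of the vocabulary back to readable strings
for t, i in list(tok.get_vocab().items())[:5]:
    print(i, repr(tok.convert_tokens_to_string([t])))
```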
https://github.com/huggingface/tokenizers/issues/1661
closed
[]
2024-10-20T13:38:53Z
2024-10-21T07:29:43Z
null
kaizhuanren
pytorch/torchtitan
636
DDP + Pipeline parallelism
For fine tuning/training with `PP + DDP`, is there documentation or modification that can be done to achieve this using torchtitan? The following check in `parallelize_llama.py` was the point of error when trying the configuration on my end. `if world_mesh.ndim > 1: raise RuntimeError("DDP has not supported > 1D parallelism")` The use case I am imagining is: for a host with multiple GPUs that is responsible for a particular pipeline stage (part of model), as long as there is enough memory `DDP` might be a viable option.
https://github.com/pytorch/torchtitan/issues/636
closed
[ "question" ]
2024-10-20T12:36:55Z
2024-11-08T00:03:05Z
null
prathameshtd
pytorch/torchtitan
635
data shuffling
I understand that the current version of the code doesn't shuffle the data during training, _i.e._ examples are consumed in order in each rank (in fact, there's a note to that effect [here](https://github.com/pytorch/torchtitan/blob/0edd2fb36c8c3468086986efd049e9bb0ff3414e/torchtitan/datasets/hf_datasets.py#L99)). I'm kind of new to large-scale LLM training, so I was just wondering if this is common practice in LLM training. It seems not ideal potentially, since consecutive gradients will likely be more correlated than under random shuffling. If I wanted to randomly shuffle the data during training, how could I go about doing that? I thought about using `ds.shuffle()` before splitting the dataset by node [here](https://github.com/pytorch/torchtitan/blob/0edd2fb36c8c3468086986efd049e9bb0ff3414e/torchtitan/datasets/hf_datasets.py#L101C22-L101C43), but that would (pseudo-)shuffle the data rows, which doesn't seem quite right, since I think we really want to shuffle concatenated `seq_len` long chunks of text instead.
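As a hedged sketch of the first idea (shuffling the raw rows with a streaming buffer before the per-rank split), with the caveat noted above that this shuffles text rows rather than the packed `seq_len`-long chunks:

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
ds = ds.shuffle(seed=42, buffer_size=10_000)          # approximate shuffle over a sliding buffer
ds = split_dataset_by_node(ds, rank=0, world_size=8)  # then shard per rank as the loader already does

for sample in ds.take(2):
    print(sample["text"][:80])
```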
https://github.com/pytorch/torchtitan/issues/635
closed
[ "question" ]
2024-10-20T03:39:35Z
2024-10-24T02:08:43Z
null
eminorhan
huggingface/diffusers
9,719
`disable_progress_bar` is ignored for some models (Loading checkpoint shards)
### Describe the bug When loading some pipelines, `diffusers.utils.logging.disable_progress_bar()` doesn't disable all progress bars. In particular the "Loading checkpoint shards" progress bar still appears. The "Loading pipeline components..." progress bar, however, is disabled as expected. Models I found, where this occurs, are: * [`stabilityai/stable-diffusion-3-medium-diffusers`](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers) * [`black-forest-labs/FLUX.1-schnell`](https://huggingface.co/black-forest-labs/FLUX.1-schnell) The image generation progress bar also doesn't respect this setting, but can be disabled with `pipe.set_progress_bar_config(disable=True)`. When files are downloaded, the progress bars are also not disabled. These two cases seem like they might be intentional. Are they? Is there better way to disable progress bars globally for diffusers? Can the "Loading checkpoint shards" progress bar be disabled specifically? ### Reproduction ```python import diffusers diffusers.utils.logging.disable_progress_bar() # pipe = diffusers.StableDiffusion3Pipeline.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers') pipe = diffusers.FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-schnell') pipe('test') ``` ### Logs ```shell >>> pipe = diffusers.FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-schnell') Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.56s/it] You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers >>> ``` ### System Info Google Colab or locally: - 🤗 Diffusers version: 0.30.3 - Running on Google Colab?: No - Python version: 3.12.7 - PyTorch version (GPU?): 2.5.0+cu124 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.26.0 - Transformers version: 4.45.2 - Accelerate version: 1.0.1 - PEFT version: not installed - Bitsandbytes version: not installed - Safetensors version: 0.4.5 - xFormers version: not installed ### Who can help? @sayakpaul @DN6
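A hedged workaround that seems to cover the remaining bars: the "Loading checkpoint shards" bar comes from transformers (the text encoders), download bars from huggingface_hub, and the denoising bar from the pipeline itself, so each has its own switch.

```python
import diffusers
import transformers
from huggingface_hub.utils import disable_progress_bars

diffusers.utils.logging.disable_progress_bar()
transformers.utils.logging.disable_progress_bar()  # covers "Loading checkpoint shards"
disable_progress_bars()                            # covers hub download bars

pipe = diffusers.FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell")
pipe.set_progress_bar_config(disable=True)         # covers the per-step denoising bar
```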
https://github.com/huggingface/diffusers/issues/9719
closed
[ "bug" ]
2024-10-19T17:42:37Z
2024-10-19T19:29:12Z
2
JonasLoos
pytorch/tutorials
3,100
💡 [REQUEST] - Add minGRU Tutorial for Efficient Sequence Modeling
### 🚀 Describe the improvement or the new tutorial I propose adding a tutorial on implementing and using minGRU (minimal Gated Recurrent Unit) to the PyTorch tutorials. This addition would provide valuable insights into efficient sequence modeling techniques for the PyTorch community. - Efficiency: Up to 1324x faster than standard GRU for 4096-token sequences, with comparable accuracy. - Competitive Performance: Matches state-of-the-art models like Mamba in language modeling and reinforcement learning. - Learning Tool: Bridges simple RNNs and complex attention-based models, aiding learner progression. ### Benefits for PyTorch users: - Efficient Sequence Processing: Implement and train RNNs for long sequences, crucial for modern NLP and time series analysis. - Parallel Training Skills: Learn to leverage parallel computing for RNN training, applicable to various deep learning tasks. - Versatile Solution: Practical alternative to traditional RNNs and complex models, balancing efficiency and performance. ### Paper [were rnns all we need](https://arxiv.org/pdf/2410.01201) ### Existing tutorials on this topic _No response_ ### Additional context If you guys like this idea, I'm ready to jump in! I could have a PR ready as soon as tomorrow. I'm thinking of contributing a tutorial on how to use or train minGRU for language modeling @svekars @albanD
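For context, a hedged sketch of the recurrence the paper proposes (sequential form only; the tutorial would replace the Python loop with the log-space parallel scan that gives the claimed speedups):

```python
import torch
import torch.nn as nn

class MinGRU(nn.Module):
    def __init__(self, dim_in: int, dim_hidden: int):
        super().__init__()
        self.to_z = nn.Linear(dim_in, dim_hidden)        # gate depends on x_t only
        self.to_h_tilde = nn.Linear(dim_in, dim_hidden)  # candidate state, no h_{t-1} inside

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim_in)
        b, t, _ = x.shape
        z = torch.sigmoid(self.to_z(x))
        h_tilde = self.to_h_tilde(x)
        h = torch.zeros(b, self.to_z.out_features, device=x.device)
        outs = []
        for step in range(t):   # the paper replaces this loop with a parallel scan
            h = (1 - z[:, step]) * h + z[:, step] * h_tilde[:, step]
            outs.append(h)
        return torch.stack(outs, dim=1)

x = torch.randn(2, 16, 32)
print(MinGRU(32, 64)(x).shape)  # torch.Size([2, 16, 64])
```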
https://github.com/pytorch/tutorials/issues/3100
closed
[]
2024-10-19T16:35:32Z
2025-04-16T22:02:23Z
1
dame-cell
huggingface/optimum
2,069
High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release
### System Info ```shell Optimum version: 1.22.0 Platform: Linux (Ubuntu 22.04.4 LTS) Python version: 3.12.2 ONNX Runtime Version: 1.19.2 CUDA Version: 12.1 CUDA Execution Provider: Yes (CUDA 12.1) ``` ### Who can help? @JingyaHuang @echarlaix ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) ```python def load_model(self, model_name): session_options = ort.SessionOptions() session_options.add_session_config_entry('cudnn_conv_use_max_workspace', '0') session_options.enable_mem_pattern = False session_options.arena_extend_strategy = "kSameAsRequested" session_options.gpu_mem_limit = 10 * 1024 * 1024 * 1024 model = ORTModelForSeq2SeqLM.from_pretrained(model_name, provider="CUDAExecutionProvider", session_options=session_options) tokenizer = AutoTokenizer.from_pretrained(model_name) return tokenizer, model def inference(self, batch, doc_id='-1'): responses, status = '', False try: encodings = self.tokenizer(batch, padding=True, truncation=True, max_length=8192, return_tensors="pt").to(self.device) with torch.no_grad(): generated_ids = self.model.generate( encodings.input_ids, max_new_tokens=1024 ) responses = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True) status = True except Exception as e: logger.error(f"Failed to do inference on LLM, error: {e}") torch.cuda.empty_cache() return status, responses ``` ### Expected behavior I expect the CUDA memory to decrease and be released after processing smaller inputs, optimizing memory usage for subsequent inputs. ![Picture1](https://github.com/user-attachments/assets/a188ede0-2287-4603-a84e-ba62d309a940)
https://github.com/huggingface/optimum/issues/2069
closed
[ "question", "Stale" ]
2024-10-19T02:45:54Z
2024-12-25T02:02:08Z
null
niyathimariya
pytorch/data
1,344
Delete datapipes and dataloader 2 documentation
### 📚 The doc issue Since these are gone on main, we should delete nightly documentation as well. Basically they need to disappear from here: https://pytorch.org/data/main/ ### Suggest a potential alternative/fix _No response_
https://github.com/meta-pytorch/data/issues/1344
closed
[ "documentation" ]
2024-10-18T23:14:59Z
2024-10-19T20:29:46Z
0
andrewkho
huggingface/transformers.js
981
Any gotchas with manually adding items to transformers-cache?
### Question For [papeg.ai](https://www.papeg.ai) I've set things up so that the service worker caches `.wasm` files from `jsDelivr` that Transformers.js [wasn't caching itself yet](https://github.com/huggingface/transformers.js/issues/685#issuecomment-2325125036). I've been caching those files in the 'main' Papeg.ai cache until now, but I want to switch to saving those files in the `transformers-cache` instead. That would (hopefully) make it so that the .wasm files don't have to be downloaded again if I update papeg.ai (which clears the papeg.ai cache). And vice-versa: the transformers cache could be fully cleared independently of the papeg.ai cache (ideally Transformers.js would manage all this itself).
- Is this a reasonable idea?
- Is this in line with your plans for a future improved caching system? Or do you, for example, plan to keep wasm, onnx and config files in separate caches, like WebLLM?
- Will Transformers.js even look for those .wasm files in `transformers-cache` first? With the service worker this doesn't technically matter, as requests to jsDelivr are captured anyway. But the service worker isn't always available.

Tangentially, would it be an idea to (also) store the code and wasm files on Hugging Face itself? Because of EU privacy regulations, and good privacy design in general, I'd like to keep the number of third parties that the site needs to connect to to an absolute minimum. I'd love to eliminate jsDelivr and rely only on GitHub and Hugging Face. Or is there perhaps a way to tell Transformers.js where to look? Then I could host the files on GitHub/Hugging Face manually.

Just for fun, here's a service worker code snippet that, from now on, stores the jsDelivr files in the transformers-cache:
```
let target_cache = cacheName;
if (request.url.indexOf('https://cdn.jsdelivr.net/npm/@huggingface/transformers') != -1) {
    console.log("service_worker: saving to transformers-cache: ", request.url);
    target_cache = 'transformers-cache';
}
caches.open(target_cache)
.then(function(cache) {
    cache.put(request, fetch_response_clone);
})
.catch((err) => {
    console.error("service worker: caught error adding to cache: ", err);
})
```
https://github.com/huggingface/transformers.js/issues/981
open
[ "question" ]
2024-10-18T12:53:07Z
2024-10-18T12:56:21Z
null
flatsiedatsie
huggingface/transformers
34,241
How to output token by token using transformers?
### System Info ... ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ... ### Expected behavior How to output token by token use transformers?
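For what it's worth, a minimal sketch of one common way to stream tokens one at a time uses `TextIteratorStreamer` with `generate` running in a background thread; the `gpt2` checkpoint below is only a placeholder for any causal LM:

```python
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_name = "gpt2"  # placeholder: any causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# generate() pushes decoded text into the streamer as each new token is produced
generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=50)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
for text_chunk in streamer:
    print(text_chunk, end="", flush=True)
thread.join()
```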
https://github.com/huggingface/transformers/issues/34241
closed
[ "Discussion", "bug" ]
2024-10-18T09:45:19Z
2024-11-26T08:04:43Z
null
xuanzhangyang
huggingface/lerobot
477
Collecting human operated datasets in simulation
Hello, Can you provide info on how human supervision was provided for the simulated datasets (e.g. `lerobot/aloha_sim_transfer_cube_human`)? I am starting to setup a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect/train on some human teleop data, but it seems like the current `control_robot.py` script and data collection examples are setup only for physical robots. Is there a branch somewhere with the code used to collect `lerobot/aloha_sim_transfer_cube_human` that I can reference? Thanks!
https://github.com/huggingface/lerobot/issues/477
closed
[ "question", "dataset", "simulation" ]
2024-10-17T23:24:17Z
2025-10-08T08:49:32Z
null
mmurray
pytorch/pytorch
138,280
Refactor FlexibleLayout to separate out "this stride can be changed" and "how this buffer is allocated can be changed"
### 🚀 The feature, motivation and pitch Currently, we have two layouts: - FixedLayout - FlexibleLayout Where FixedLayout basically means "We already decided the layout, don't change it" while FlexibleLayout means "we are free to change this layout". However, I think there are actually two different components of "decided this layout": 1. What is the output **stride** of this layout? 2. Who allocates the actual buffer for this tensor? I believe conflating these causes some problems: - For inductor template tuning, we care about the **stride** of the output layout, but we don't care who allocated the buffer (e.g. if it's just a view into a larger concat buffer). And Elias points out that he noticed this too here: https://github.com/pytorch/pytorch/pull/132554#issue-2445835622 - For Yifu's recent PR (https://github.com/pytorch/pytorch/pull/138029), he cares about "who allocates the buffer for this layout", but he doesn't care about "what is the actual stride of this layout". My proposal is that we scrap our current subclasses of Layout and refactor it into: ``` class Layout: stride: FlexibleStride or FixedStride allocator: NonOwningAllocator (view into another allocation) or Flexible or SymmMem ``` cc: @eellison @yifuwang @shunting314 @jansel ### Alternatives _No response_ ### Additional context _No response_ cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @aakhundov @coconutruben @jataylo @ezyang @yf225 @chenyang78 @ColinPeppler @desertfire
https://github.com/pytorch/pytorch/issues/138280
open
[ "triaged", "oncall: pt2", "module: inductor", "internal ramp-up task" ]
2024-10-17T23:10:36Z
2025-12-02T17:11:15Z
null
Chillee
huggingface/lighteval
365
[FT] Using lighteval to evaluate a model on a single sample, how?
Thank you to the team for the great work. I have a question: can you please help me use lighteval to evaluate a model on a single sample? For example, if I have an input I from MMLU and my model generates output O, how can I use lighteval to score O with the accuracy (Acc) metric? Thanks!
https://github.com/huggingface/lighteval/issues/365
closed
[ "feature" ]
2024-10-17T12:43:45Z
2024-10-24T10:12:54Z
null
dxlong2000
huggingface/diffusers
9,700
Flux inversion
The current img2img does not work very well. [RF Inversion](https://rf-inversion.github.io/) provides an inversion method for real-image editing with Flux. Can we implement it using diffusers? Or how can we use DDIM-style inversion with Flux?
https://github.com/huggingface/diffusers/issues/9700
closed
[]
2024-10-17T07:03:59Z
2024-12-17T16:00:30Z
8
yuxu915
pytorch/pytorch
138,179
How to resolve the libfmt.a conflict in React Native.
### 🚀 The feature, motivation and pitch I want to develop a React Native module that primarily integrates LibTorch and includes some methods for loading models and making predictions. I created the module using `npx create-expo-module` and then proceeded with the development. When I run `pod install `in ios, it prompts me that `"The 'Pods-expoptexample' target has libraries with conflicting names: libfmt.a." `This issue does not occur when I build without installing LibTorch. I would like to know what I should do to avoid this problem. ### Alternatives I tried the following methods, but none of them resolved the issue: 1.I added the following code to the Podfile to exclude libfmt.a: ```ruby installer.pods_project.targets.each do |target| target.build_configurations.each do |config| config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] ||= [] config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] << 'libfmt.a' end end ``` 2.I tried `pod deintegrate` and then `pod install`. ### Additional context **this is my Podfile** ```ruby require File.join(File.dirname(`node --print "require.resolve('expo/package.json')"`), "scripts/autolinking") require File.join(File.dirname(`node --print "require.resolve('react-native/package.json')"`), "scripts/react_native_pods") require 'json' podfile_properties = JSON.parse(File.read(File.join(__dir__, 'Podfile.properties.json'))) rescue {} ENV['RCT_NEW_ARCH_ENABLED'] = podfile_properties['newArchEnabled'] == 'true' ? '1' : '0' ENV['EX_DEV_CLIENT_NETWORK_INSPECTOR'] = podfile_properties['EX_DEV_CLIENT_NETWORK_INSPECTOR'] use_autolinking_method_symbol = ('use' + '_native' + '_modules!').to_sym origin_autolinking_method = self.method(use_autolinking_method_symbol) self.define_singleton_method(use_autolinking_method_symbol) do |*args| if ENV['EXPO_UNSTABLE_CORE_AUTOLINKING'] == '1' Pod::UI.puts('Using expo-modules-autolinking as core autolinking source'.green) config_command = [ 'node', '--no-warnings', '--eval', 'require(require.resolve(\'expo-modules-autolinking\', { paths: [require.resolve(\'expo/package.json\')] }))(process.argv.slice(1))', 'react-native-config', '--json', '--platform', 'ios' ] origin_autolinking_method.call(config_command) else origin_autolinking_method.call() end end platform :ios, podfile_properties['ios.deploymentTarget'] || '13.4' install! 'cocoapods', :deterministic_uuids => false prepare_react_native_project! target 'expoptexample' do use_expo_modules! config = use_native_modules! use_frameworks! :linkage => podfile_properties['ios.useFrameworks'].to_sym if podfile_properties['ios.useFrameworks'] use_frameworks! :linkage => ENV['USE_FRAMEWORKS'].to_sym if ENV['USE_FRAMEWORKS'] use_react_native!( :path => config[:reactNativePath], :hermes_enabled => podfile_properties['expo.jsEngine'] == nil || podfile_properties['expo.jsEngine'] == 'hermes', # An absolute path to your application root. :app_path => "#{Pod::Config.instance.installation_root}/..", :privacy_file_aggregation_enabled => podfile_properties['apple.privacyManifestAggregationEnabled'] != 'false', ) post_install do |installer| react_native_post_install( installer, config[:reactNativePath], :mac_catalyst_enabled => false, :ccache_enabled => podfile_properties['apple.ccacheEnabled'] == 'true', ) # This is necessary for Xcode 14, because it signs resource bundles by default # when building for devices. 
installer.target_installation_results.pod_target_installation_results .each do |pod_name, target_installation_result| target_installation_result.resource_bundle_targets.each do |resource_bundle_target| resource_bundle_target.build_configurations.each do |config| config.build_settings['CODE_SIGNING_ALLOWED'] = 'NO' end end end # Exclude libfmt.a to avoid naming conflicts installer.pods_project.targets.each do |target| target.build_configurations.each do |config| config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] ||= [] config.build_settings['EXCLUDED_SOURCE_FILE_NAMES'] << 'libfmt.a' end end end post_integrate do |installer| begin expo_patch_react_imports!(installer) rescue => e Pod::UI.warn e end end end ``` ### this is my .podspec file ```ruby require 'json' package = JSON.parse(File.read(File.join(__dir__, '..', 'package.json'))) Pod::Spec.new do |s| s.name = 'ExpoPt' s.version = package['version'] s.summary = package['description'] s.description = package['description'] s.license = package['license'] s.author = package['author'] s.homepage = package['homepage'] s.platforms = { :ios => '13.4',
https://github.com/pytorch/pytorch/issues/138179
closed
[ "triage review" ]
2024-10-17T06:37:21Z
2024-10-21T17:35:00Z
null
wangyujiaoflag
pytorch/xla
8,270
Clarify that torch_xla2 is only recommended for inference
## 📚 Documentation <!-- A clear and concise description of what content is an issue. --> My understanding is that torch_xla2 is only recommended for inference. Address this in the [README](https://github.com/pytorch/xla/tree/master/experimental/torch_xla2)
https://github.com/pytorch/xla/issues/8270
closed
[ "question", "documentation" ]
2024-10-17T04:53:36Z
2025-02-27T13:08:45Z
null
cloudchrischan
huggingface/diffusers
9,698
Unable to Retrieve Intermediate Gradients with CogVideoXPipeline
### Describe the bug When generating videos using the CogVideoXPipeline model, we need to access the gradients of intermediate tensors. However, we do not require additional training or parameter updates for the model. We tried using register_forward_hook to capture the gradients, but this approach failed because the CogVideoXPipeline disables gradient calculations. Specifically, in pipelines/cogvideo/pipeline_cogvideox.py at line 478, gradient tracking is turned off with @torch.no_grad(). How can we resolve this issue and retrieve the gradients without modifying the model’s parameters or performing extra training?
### Reproduction
Sample code:
```python
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
```
Pipeline code reference (pipelines/cogvideo/pipeline_cogvideox.py at line 478):
```python
@torch.no_grad()
@replace_example_docstring(EXAMPLE_DOC_STRING)
def __call__(
    self,
    prompt: Optional[Union[str, List[str]]] = None,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    height: int = 480,
    width: int = 720,
```
### Logs
_No response_
### System Info
Diffusers version: 0.30.3
### Who can help?
_No response_
https://github.com/huggingface/diffusers/issues/9698
closed
[ "bug" ]
2024-10-17T04:30:56Z
2024-10-27T10:24:41Z
4
lovelyczli
huggingface/diffusers
9,697
train_text_to_image_sdxl training results are very poor
I use DeepSpeed to train with train_text_to_image_sdxl.py.
1. The dataset contains 231 samples.
2. DeepSpeed JSON config: ![企业微信截图_17291359065532](https://github.com/user-attachments/assets/f82ad033-d786-4fe4-9264-3b6236304170)
3. Training script: ![企业微信截图_17291362274700](https://github.com/user-attachments/assets/ae5a6207-dbc8-4dde-b5d7-dcdaa0ac2783)
4. After training, generating with the same prompts used during training gives the results below: ![企业微信截图_17291363542986](https://github.com/user-attachments/assets/004d3e51-de2e-453b-864a-803794659d2c)

Could anyone tell me why the generated results are so poor?
https://github.com/huggingface/diffusers/issues/9697
closed
[]
2024-10-17T03:40:17Z
2024-10-17T08:32:44Z
2
wzhiyuan2016
huggingface/finetrainers
41
cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value
During both I2V and t2V training, sometimes I encountered the error ``` [rank1]: File "/root/projects/cogvideox-factory/training/cogvideox_text_to_video_lora.py", line 762, in main [rank1]: "gradient_norm_before_clip": gradient_norm_before_clip, [rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [rank1]: UnboundLocalError: cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value ``` This is probably [here](https://github.com/a-r-r-o-w/cogvideox-factory/blob/a6c246c29d11d78e4aa3fb4b137c5ffd8d719d94/training/cogvideox_text_to_video_lora.py#L715) in the following code ``` if accelerator.sync_gradients: gradient_norm_before_clip = get_gradient_norm(transformer.parameters()) accelerator.clip_grad_norm_(transformer.parameters(), args.max_grad_norm) gradient_norm_after_clip = get_gradient_norm(transformer.parameters()) ``` somehow `accelerator.sync_gradients` is false sometimes. Is there a quick fix? Is it only for logging?
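For reference, a minimal sketch of one possible workaround, reusing the names from the snippet above (`accelerator`, `transformer`, `args`, `get_gradient_norm` and `loss` are assumed from that context, and this is not necessarily the maintainers' intended fix): only compute and log the norms on steps where gradients actually sync.

```python
# Guard the norm computation and the corresponding log entries together.
gradient_norm_before_clip = None
gradient_norm_after_clip = None

if accelerator.sync_gradients:
    gradient_norm_before_clip = get_gradient_norm(transformer.parameters())
    accelerator.clip_grad_norm_(transformer.parameters(), args.max_grad_norm)
    gradient_norm_after_clip = get_gradient_norm(transformer.parameters())

logs = {"loss": loss.detach().item()}
if gradient_norm_before_clip is not None:
    # only log the norms on steps where gradients were actually synced
    logs["gradient_norm_before_clip"] = gradient_norm_before_clip
    logs["gradient_norm_after_clip"] = gradient_norm_after_clip
```

With gradient accumulation enabled, `accelerator.sync_gradients` is False on the accumulation micro-steps, which is presumably why the variable is sometimes unbound at logging time, so yes, it looks like a logging-only issue.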
https://github.com/huggingface/finetrainers/issues/41
closed
[]
2024-10-16T18:34:19Z
2024-12-06T08:09:46Z
null
Yuancheng-Xu
huggingface/finetrainers
40
How to load the fine-tuned I2V model's LoRA module
I have successfully fine-tuned an I2V model (locally, without pushing to HF) and would like to load it for inference. I use the following code suggested in the readme:
```python
model_name = "THUDM/CogVideoX-5b-I2V"
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    model_name, torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights("MyLocalLoRAPath", adapter_name=["cogvideox-lora"])
pipe.set_adapters(["cogvideox-lora"], [1.0])
```
However, I encounter the error:
```
File ~/anaconda3/envs/cogvideox-i2v/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py:2451, in CogVideoXLoraLoaderMixin.load_lora_into_transformer(cls, state_dict, transformer, adapter_name, _pipeline):
    if adapter_name in getattr(transformer, "peft_config", {}):
        raise ValueError(
            f"Adapter name {adapter_name} already in use in the transformer - please select a new adapter name."
        )

TypeError: unhashable type: 'list'
```
Note: in the trained LoRA folder, there is only a `pytorch_lora_weights.safetensors`.
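For what it's worth, the traceback suggests that `adapter_name` is expected to be a single string rather than a list (only `set_adapters` takes a list). A minimal sketch of the loading call with that change, assuming the same `pipe` and local path as above:

```python
# adapter_name as a plain string; the list form goes to set_adapters only
pipe.load_lora_weights(
    "MyLocalLoRAPath",                               # folder containing pytorch_lora_weights.safetensors
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cogvideox-lora",
)
pipe.set_adapters(["cogvideox-lora"], [1.0])
```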
https://github.com/huggingface/finetrainers/issues/40
closed
[]
2024-10-16T17:25:21Z
2024-12-03T03:01:23Z
null
Yuancheng-Xu
pytorch/pytorch
138,073
`export()` fails for `full((n,), v)` but succeeds for `ones((n,)) * v` where `v` is dynamic
### 🐛 Describe the bug When using `torch.full((n,), v)` to create a tensor with a dynamic value, one receives a `Pending unbacked symbols` error. A simple workaround is to use `torch.ones((n,)) * v`, but unless I'm missing something the former should work just as well. Below is a minimal example to reproduce the error: ```python import torch import torch._dynamo import torch.export class FullConstNDynamicV(torch.nn.Module): def forward(self, x): n = 7 v = x[0, 0] out = torch.full((n,), v) # Replacing the above line with the following will fix export 'Pending unbacked symbols' error: # out = torch.ones((n,)) * v return out input_tensor = torch.ones(1, 100) torch.export.export(FullConstNDynamicV(), (input_tensor, )) ``` Note that an example that uses a dynamic value for `n` but non-dynamic `v` does work. I have [a sample notebook available](https://colab.research.google.com/drive/1L5lNvDs94tLj-qPwUzT52IADWY0WacS_?usp=sharing) to review the following cases: ``` OK: OnesConstNDynamicV Error: FullConstNDynamicV OK: OnesDynamicNConstV OK: FullDynamicNConstV OK: OnesDynamicNDynamicV Error: FullDynamicNDynamicV ``` Where OK means the corresponding code was export OK, and Error if it produced an error... It is expected that all modules should be export OK. ### Versions Ran in Google Colab. ``` PyTorch version: 2.4.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.30.4 Libc version: glibc-2.35 Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.1.85+-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.20GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 ``` cc @ezyang @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
https://github.com/pytorch/pytorch/issues/138073
closed
[ "oncall: pt2", "module: dynamic shapes", "module: dynamo", "oncall: export" ]
2024-10-16T13:29:52Z
2025-03-26T17:56:33Z
null
kwikwag
huggingface/transformers.js
975
Supporting Multiple Pipelines?
### Question First of all, thank you so much for creating transformers.js! This is a fantastic library, and I had lots of fun building with it! I have a question regarding the pipelines API: would it be possible to start multiple pipelines? For example, instead of using just one pipeline to run inference, could we create a pool of pipelines and push jobs into this pool, to better utilize the multiple cores on modern laptops? The goal here is really to understand whether there are ways to utilize multiple cores. No worries if not! I just want to understand where the limits are. Thanks!
https://github.com/huggingface/transformers.js/issues/975
closed
[ "question" ]
2024-10-16T08:06:44Z
2024-10-21T15:58:20Z
null
kelayamatoz
huggingface/chat-ui
1,525
Standardize Chat Prompt Templates to Use Jinja Format
## Describe your feature request Currently, the `chatPromptTemplate` for each model that can be set in env uses **Handlebars** format. However, the `chat_prompt` in the actual model's `tokenizer_config.json` uses **Jinja** format. This inconsistency is causing significant inconvenience. Since **Jinja** is widely used and preferred, it would be beneficial to standardize on **Jinja** format for both `chatPromptTemplate` and `chat_prompt`. This will improve consistency and ease of use for developers. ## Screenshots (if relevant) ## Implementation idea To implement this change, the following steps can be taken: 1. Update Codebase: Update the codebase to handle **Jinja** templates for `chatPromptTemplate`. 2. Documentation: Update the documentation to reflect this change and provide examples of how to use **Jinja** templates. 3. Testing: Thoroughly test the changes to ensure compatibility and that all existing templates work correctly with the new format.
https://github.com/huggingface/chat-ui/issues/1525
open
[ "enhancement" ]
2024-10-16T05:26:12Z
2024-11-20T00:44:16Z
8
calycekr
pytorch/torchtitan
620
Is there a way to offload training memory to DRAM (using FSDP2?) for training Llama3-8B with torchtitan?
I am training Llama3-8B using 2 RTX A6000ada 48GB, but got OOM. Is there way to offload training memory to DRAM (using FSDP2?) for training Llama3-8B with torchtitan? Thanks! ***Error message: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 112.00 MiB. GPU 0 has a total capacity of 47.48 GiB of which 92.81 MiB is free. Including non-PyTorch memory, this process has 46.71 GiB memory in use. Of the allocated memory 45.56 GiB is allocated by PyTorch, and 448.27 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) ***Here is my training config: # torchtitan Config.toml # NOTE: this toml config is a preset for 64 A100 GPUs. [job] dump_folder = "./outputs" description = "Llama 3 8B training" [profiling] enable_profiling = true save_traces_folder = "profile_trace" profile_freq = 100 [metrics] log_freq = 10 enable_tensorboard = true save_tb_folder = "tb" [model] name = "llama3" flavor = "8B" norm_type = "rmsnorm" # layernorm / np_layernorm / rmsnorm / fused_rmsnorm tokenizer_path = "./torchtitan/datasets/tokenizer/original/tokenizer.model" [optimizer] name = "AdamW" lr = 3e-4 [training] batch_size = 2 #1 seq_len = 256 #512 #8192 warmup_steps = 200 # lr scheduler warm up max_norm = 1.0 # grad norm clipping steps = 1000 data_parallel_replicate_degree = 1 #1 data_parallel_shard_degree = -1 #-1 tensor_parallel_degree = 2 #1 compile = true dataset = "c4" [experimental] pipeline_parallel_degree = 1 #1 enable_async_tensor_parallel = true [checkpoint] enable_checkpoint = false #false folder = "checkpoint" interval_type = "steps" interval = 500 model_weights_only = false export_dtype = "bfloat16" #32 async_mode = "disabled" # ["disabled", "async", "async_with_pinned_mem"] [activation_checkpoint] mode = 'selective' # ['none', 'selective', 'full'] selective_ac_option = 'op' # 'int' = ac every positive int layer or 'op', ac based on ops policy [float8] enable_float8_linear = true enable_fsdp_float8_all_gather = true precompute_float8_dynamic_scale_for_fsdp = true
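Not an authoritative answer, but FSDP2 does expose a CPU-offload policy that keeps sharded parameters, gradients and optimizer state in host DRAM. A rough sketch of how it is wired up outside of torchtitan's config system; the import path is an assumption and may differ across torch versions, and calling `fully_shard` requires an initialized process group:

```python
# Sketch only: assumes the FSDP2 APIs live under torch.distributed._composable.fsdp.
import torch.nn as nn
from torch.distributed._composable.fsdp import fully_shard, CPUOffloadPolicy

def apply_fsdp2_with_cpu_offload(model: nn.Module) -> nn.Module:
    offload = CPUOffloadPolicy()          # params/grads/optimizer state kept in DRAM
    for block in model.children():        # shard submodules first, then the root
        fully_shard(block, offload_policy=offload)
    fully_shard(model, offload_policy=offload)
    return model
```

CPU offload trades step time for memory, and whether torchtitan exposes it through the TOML config likely depends on the version, so a code-level change may be needed; full activation checkpointing and a smaller seq_len or batch size are cheaper levers to try first.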
https://github.com/pytorch/torchtitan/issues/620
closed
[ "question" ]
2024-10-15T19:54:17Z
2024-10-28T22:27:50Z
null
0781532
pytorch/serve
3,348
Getting started guide client samples broken?
### 🐛 Describe the bug following the getting started guide: https://github.com/pytorch/serve/blob/master/docs/getting_started.md i get following error messages when trying to run the client examples. Am I doing something wrong? ### Error logs ``` serve$ python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=ts_scripts --grpc_python_out=ts_scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto google/rpc/status.proto: File not found. inference.proto:6:1: Import "google/rpc/status.proto" was not found or had errors. inference.proto:32:14: "google.rpc.Status" is not defined. ``` and ``` serve$ python ts_scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg Traceback (most recent call last): File "[..]serve/ts_scripts/torchserve_grpc_client.py", line 7, in <module> import inference_pb2 ModuleNotFoundError: No module named 'inference_pb2' ``` ### Installation instructions followed the getting started guide: https://github.com/pytorch/serve/blob/master/docs/getting_started.md ### Model Packaging from getting started guide ### config.properties from getting started guide ### Versions ------------------------------------------------------------------------------------------ Environment headers ------------------------------------------------------------------------------------------ Torchserve branch: torchserve==0.12.0 torch-model-archiver==0.12.0 Python version: 3.10 (64-bit runtime) Python executable: /home/nikste/workspace-abnoba/serving_test/venv/bin/python Versions of relevant python libraries: captum==0.6.0 numpy==1.24.3 nvgpu==0.10.0 pillow==10.3.0 psutil==5.9.8 requests==2.32.0 torch==2.4.0+cu121 torch-model-archiver==0.12.0 torch-workflow-archiver==0.2.15 torchaudio==2.4.0+cu121 torchserve==0.12.0 torchvision==0.19.0+cu121 wheel==0.42.0 torch==2.4.0+cu121 **Warning: torchtext not present .. torchvision==0.19.0+cu121 torchaudio==2.4.0+cu121 Java Version: OS: Ubuntu 22.04.5 LTS GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.30.3 Environment: library_path (LD_/DYLD_): /usr/local/cuda-11.8/lib64: ### Repro instructions getting started guide ### Possible Solution missing packages in the requirements?
https://github.com/pytorch/serve/issues/3348
open
[]
2024-10-15T16:47:42Z
2024-12-26T04:00:44Z
1
nikste
huggingface/alignment-handbook
201
Full parameter fine-tuning keeps consuming system RAM and leads to a crash
I am using alignment handbook to perform a full parameter fine-tuning of llama3 models with Deepspeed stage 2 on my own dataset which is relatively large (400k+ records). The training was performed on a slurm cluster with two nodes (each has 4 H100 GPUs). I have noticed that during the training, the system memory utilization keeps increasing even though I set torch_empty_cache_steps=500. I wonder if there is something wrong with the HF trainer? Any suggestions how to fix/debug? There is also a similar issue at https://github.com/huggingface/transformers/issues/30119 - Below is the system ram usage report from wandb: ![Screenshot 2024-10-15 at 10 41 49 AM](https://github.com/user-attachments/assets/1201d5ad-26ee-4d15-81c1-9ef33128bba0) ![Screenshot 2024-10-15 at 10 41 46 AM](https://github.com/user-attachments/assets/200b887c-38bd-40f9-a160-e61c14c25870) ![Screenshot 2024-10-15 at 10 41 43 AM](https://github.com/user-attachments/assets/4fee96b4-fd08-4073-a17a-dd7d4cfd8e34) - my config: ```yaml # Model arguments model_name_or_path: ~/models/Meta-Llama-3-8B model_revision: main torch_dtype: bfloat16 attn_implementation: flash_attention_2 # Data training arguments chat_template: "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set system_message = '### System Instruction: ' + messages[0]['content'] | trim + '' %}{% set messages = messages[1:] %}{% else %}{% set system_message = '' %}{% endif %}{{ bos_token + system_message }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '### Context: ' + message['content'] | trim + '' }}{% elif message['role'] == 'assistant' %}{{ '### Result: ' + message['content'] | trim + ' ' + eos_token }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '### Result: ' }}{% endif %}" dataset_mixer: ~/data/processed_data_open_sourced_xml_to_text/merged_open_sourced_xml_to_text_dataset: 1.0 dataset_splits: - train_sft - test_sft preprocessing_num_workers: 4 dataloader_num_workers: 2 # SFT trainer config bf16: true do_eval: true # evaluation_strategy: epoch eval_strategy: epoch max_grad_norm: 1.0 # gradient_accumulation_steps: 16 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: False log_level: info logging_steps: 5 logging_strategy: steps learning_rate: 2.0e-05 lr_scheduler_type: cosine_with_min_lr # cosine_with_min_lr lr_scheduler_kwargs: min_lr: 5e-6 optim: adamw_torch # adamw_torch paged_adamw_32bit galore_adamw lion_32bit optim_target_modules: all-linear weight_decay: 0.01 max_seq_length: 12800 packing: false dataset_num_proc: 16 max_steps: -1 num_train_epochs: 1 output_dir: /~/alignment-handbook/experiments/models/llama3 overwrite_output_dir: true per_device_eval_batch_size: 1 per_device_train_batch_size: 1 # this is per device, you need to manual calculate global batch by per device * gas * gpu * node gradient_accumulation_steps: 8 push_to_hub: false remove_unused_columns: true report_to: - wandb # - tensorboard save_strategy: "steps" save_steps: 500 torch_empty_cache_steps: 500 save_total_limit: 30 seed: 42 warmup_ratio: 0.1 ``` - training launch script (brief version) ```sh #!/bin/bash #SBATCH --job-name=train #SBATCH --nodes=2 #SBATCH --ntasks-per-node=1 #SBATCH --gpus-per-node=4 #SBATCH --gpus-per-task=4 #SBATCH --cpus-per-task=32 #SBATCH --mem=512gb #SBATCH --time=96:00:00 #SBATCH --output=output #SBATCH --partition=batch # apptainer 
CONTAINER=pt2402.sif TRAIN_CONF=config.yaml DEEPSPEED_CONF=deepspeed_zs2.json CMD=torchrun \ --nproc_per_node=$SLURM_GPUS_ON_NODE \ --nnode=$SLURM_JOB_NUM_NODES \ --node_rank=$SLURM_NODEID \ --master_addr=$PRIMARY \ --master_port=$PRIMARY_PORT \ ${ROOT}/scripts/run_sft.py \ $TRAIN_CONF \ --deepspeed=$DEEPSPEED_CONF \ --tee=3 srun --jobid $SLURM_JOB_ID apptainer exec --nv $CONTAINER bash -c $CMD ``` - deepspeed config: ```json { "fp16": { "enabled": false, "loss_scale": 0, "auto_cast": false, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "consecutive_hysteresis": false, "min_loss_scale": 1 }, "bf16": { "enabled": true }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "weight_decay": "auto", "betas": "auto", "eps": "auto", "torch_adam": true, "adam_w_mode": true } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": 1e-8, "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }
https://github.com/huggingface/alignment-handbook/issues/201
closed
[]
2024-10-15T15:04:18Z
2024-10-17T18:56:53Z
2
xiyang-aads-lilly
huggingface/chat-ui
1,522
Add example prompt field to tools
## Describe your feature request This lets the user specify a prompt that would call the tool. It can be shown as a demo if you're not sure how to use a tool. We should show it somewhere in the UI so the user can easily start a conversation from that demo. It can also be used for validating that a tool works. (Run the example server-side; if the tool does not get called or does not return an output, then something is wrong and we don't let users publish it.) ## Implementation idea Storing the prompt itself is straightforward since you can just store it as a string. Most tools use file inputs though, so we should ideally also support that, which means storing example files in the DB.
https://github.com/huggingface/chat-ui/issues/1522
open
[ "enhancement", "front", "back", "tools" ]
2024-10-15T12:42:42Z
2024-10-15T12:42:43Z
0
nsarrazin
pytorch/torchtitan
619
Question about torch.compile having better throughput with 128 GPUs than with 8 GPUs
Thank you for publishing the paper. I hope to get your answer to the following question: normally, training throughput declines as the number of GPUs increases, yet in the paper, with torch.compile enabled, the throughput with 128 GPUs is better than with 8 GPUs. What explains this? ![compile](https://github.com/user-attachments/assets/d6ea4dc3-6dd1-4286-a5d4-aee754b22c55)
https://github.com/pytorch/torchtitan/issues/619
closed
[ "question" ]
2024-10-15T09:14:25Z
2024-11-19T21:37:23Z
null
dz1iang
huggingface/optimum
2,060
Support int8 tinyllama tflite export.
### Feature request A TFLite exporter for decoder-only LLMs such as TinyLlama. ### Motivation Some platforms only support full-int8 ops, so only fully int8 TFLite models can be deployed on them. Is there a plan to support this? Looking forward to your reply, thank you. ### Your contribution no
https://github.com/huggingface/optimum/issues/2060
closed
[ "feature-request", "Stale" ]
2024-10-15T03:25:54Z
2024-12-09T02:11:36Z
1
hayyaw
huggingface/diffusers
9,673
high cpu usage when loading multiple loras at once.
### Describe the bug
Hi, I was building a synthesis system using Celery and diffusers, and I found that CPU usage spikes when loading LoRAs. It is fine when I use just one worker, but it becomes a problem when using 8 workers at once. It happens the first time a LoRA is loaded, and I think it is caused by peft, because I didn't have any trouble before the peft integration. Is there any way to lower CPU usage when loading LoRAs, or a way to avoid going through peft when loading SDXL LoRAs?
### Reproduction
```python
# test lora downloaded from https://civitai.com/models/150986/blueprintify-sd-xl-10
from diffusers import AutoPipelineForText2Image
import torch
from uuid import uuid4
from tqdm import tqdm

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")

num_of_iterations = 10
for _ in tqdm(range(num_of_iterations)):
    lora_name = str(uuid4().hex)
    pipeline.load_lora_weights(
        "./test",
        weight_name="lora.safetensors",
        adapter_name=lora_name,
        low_cpu_mem_usage=True,
    )
    pipeline.set_adapters([lora_name], adapter_weights=[1.0])
```
### Logs
_No response_
### System Info
torch==2.1.1+cu121 diffusers==0.30.3 accelerate==0.32.1 peft==0.13.0 transformers==4.42.3 python==3.9.5
### Who can help?
@sayakpaul
https://github.com/huggingface/diffusers/issues/9673
closed
[ "bug" ]
2024-10-15T01:49:37Z
2024-10-15T05:07:40Z
5
gudwns1215
huggingface/datasets
7,226
Add R (via the Polars R library) as a "How to use" option
### Feature request The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add: ## Add Polars (R) option The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has the Hugging Face functionality as well.
```r
library(polars)
df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet")
```
## Polars (python) option ![image](https://github.com/user-attachments/assets/8f1bcd19-e578-4b18-b324-7cc00b80ac0a) ## Libraries Currently ![image](https://github.com/user-attachments/assets/0cf50063-f9db-443c-97b4-3ef0664b6e6e) ### Motivation There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars), and having this included could really help in bringing this community into the Hugging Face ecosystem. **This is a small/low-hanging-fruit front end change but would make a big impact expanding the community** ### Your contribution I am not sure which repository this should be in, but I have experience in R, Python and JS and am happy to submit a PR in the appropriate repository.
https://github.com/huggingface/datasets/issues/7226
open
[ "enhancement" ]
2024-10-14T19:56:07Z
2024-10-14T19:57:13Z
null
ran-codes
huggingface/lerobot
472
How to resume training with more offline steps than initially set up?
### System Info ```Shell - `lerobot` version: unknown - Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.25.2 - Dataset version: 3.0.1 - Numpy version: 1.26.4 - PyTorch version (GPU?): 2.4.1 (True) - Cuda version: 11080 - Using GPU in script?: <fill in> ``` ### Information - [X] One of the scripts in the examples/ folder of LeRobot - [X] My own task or dataset (give details below) ### Reproduction 1. python lerobot/scripts/train.py \ hydra.run.dir=outputs/train/pusht\ device=cuda env=pusht_act \ env.task=pusht-v0 \ dataset_repo_id= takuzennn/pusht_v0 \ policy=act_pusht \ training.eval_freq=2000 \ training.log_freq=250 \ training.offline_steps=300000 \ training.save_model=true \ training.save_freq=2000 \ eval.n_episodes=30 \ eval.batch_size=12 \ wandb.enable=true \ 2. python lerobot/scripts/train.py \ hydra.run.dir=outputs/train/pusht \ training.offline_steps=800000 \ resume=true ### Expected behavior I expect it to stop at 800000 steps, but it still stops at 300000 steps.
https://github.com/huggingface/lerobot/issues/472
closed
[]
2024-10-13T19:28:04Z
2024-10-22T05:51:42Z
null
Takuzenn