| repo (string, 147 classes) | number (int64, 1 to 172k) | title (string, 2 to 476 chars) | body (string, 0 to 5k chars) | url (string, 39 to 70 chars) | state (string, 2 classes) | labels (list, 0 to 9 items) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58, may be null) | user (string, 2 to 28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js
| 973
|
I would like to help
|
### Question
Hi, I would like to help with the project. Is there anything that needs to be done?
Currently I found an issue, probably in ONNXRuntime. I will look into it next week.
Here is an example of WebGPU Whisper that works on mobile platforms, including iPhone and Android: https://github.com/FL33TW00D/whisper-turbo
The current Transformers.js solution has some bugs: it crashes after model loading and the page restarts on a mobile device. I tried to connect remote debugging from Chrome on a PC via an iOS remote debugging bridge, but it just restarts and I cannot get any logs. Any help on how to get logs would be appreciated, as I don't have much experience with iOS Safari debugging and I only have a Windows PC.
Here is a photo from Safari on iPhone; you can see it does not support float32, only float16. I suspect this is the issue, and there are about 3 separate pull requests in ONNX to fix something around float16 support. But I have not had time to merge all the current ONNX PRs and build it yet. First I would like to see a log with the actual error.

This is what I will be working on next weekend.
If there is something else I should look into or help with testing, let me know.
Thank you for a great project and great work! :-)
|
https://github.com/huggingface/transformers.js/issues/973
|
open
|
[
"question"
] | 2024-10-12T20:29:07Z
| 2024-10-14T19:37:51Z
| null |
cyberluke
|
huggingface/diffusers
| 9,661
|
from_pretrained: filename argument removed?
|
**What API design would you like to have changed or added to the library? Why?**
I do believe there was a `filename` argument in the past to load a specific checkpoint in a huggingface repository. It appears that this has been removed with no replacement.
**What use case would this enable or better enable? Can you give us a code example?**
It's impossible to use any of the checkpoints here https://huggingface.co/SG161222/Realistic_Vision_V6.0_B1_noVAE/tree/main without manually downloading and using `from_single_file`. The checkpoint I want to load is called `Realistic_Vision_V6.0_NV_B1_fp16.safetensors`, but it seems that the procedure in `from_pretrained` tries to force and impose a specific name on the user. I understand the need for standards, but many have not respected the standards in the past and now these models cannot be used without additional work.
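A minimal sketch of the current workaround, assuming `huggingface_hub` is available and that `from_single_file` accepts a local path (the repo and file names are the ones mentioned above):
```python
import torch
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionPipeline

# Fetch only the desired checkpoint file from the repo.
ckpt_path = hf_hub_download(
    repo_id="SG161222/Realistic_Vision_V6.0_B1_noVAE",
    filename="Realistic_Vision_V6.0_NV_B1_fp16.safetensors",
)

# Load the single-file checkpoint instead of the standard diffusers folder layout.
pipe = StableDiffusionPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16)
```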
|
https://github.com/huggingface/diffusers/issues/9661
|
closed
|
[
"stale"
] | 2024-10-12T20:02:31Z
| 2024-11-13T00:37:52Z
| 4
|
oxysoft
|
pytorch/torchchat
| 1,297
|
Can torchchat call/use the models already downloaded under Ollama?
|
### 🚀 The feature, motivation and pitch
Can torchchat pick up the models that have already been downloaded by Ollama? Is there a way to use them without downloading them again with an HF user id?
```
PS C:\Users\siva> ollama list
NAME                    ID              SIZE
qwen2.5-coder:latest    87098ba7390d    4.7 GB
llama3.2:latest         a80c4f17acd5    2.0 GB
```
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
|
https://github.com/pytorch/torchchat/issues/1297
|
closed
|
[] | 2024-10-12T16:35:12Z
| 2024-10-15T15:22:03Z
| 1
|
sivaramn
|
huggingface/transformers
| 34,107
|
How to specify customized force_token_ids in Whisper
|
```
ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'> with values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f4230cfac50> has been passed to `.generate()`, but it has already been created with the values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f422829c510>. <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f422829c510> has been created by passing the corresponding arguments to generate or by the model's config default values. If you just want to change the default values of logits processor consider passing them as arguments to `.generate()` instead of using a custom logits processor
```
This way doesn't work:
```
inputs = inputs.to(self.model.dtype)
with torch.no_grad():
    if forced_decoder_ids is not None:
        generated_ids = self.model.generate(
            inputs, forced_decoder_ids=forced_decoder_ids
        )
    else:
        generated_ids = self.model.generate(inputs)
```
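A sketch of two commonly suggested alternatives, assuming a recent `transformers`; the model id, dummy input, and language/task values are only examples, and whether either resolves this exact error is untested:
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Dummy 1-second input; in practice this comes from processor(audio, ...).
inputs = processor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt").input_features

# Option 1: clear the ids baked into the generation config, then pass custom ones.
model.generation_config.forced_decoder_ids = None
forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
with torch.no_grad():
    generated_ids = model.generate(inputs, forced_decoder_ids=forced_decoder_ids)

# Option 2 (preferred on recent versions): let generate() build the prompt itself.
with torch.no_grad():
    generated_ids = model.generate(inputs, language="zh", task="transcribe")
```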
|
https://github.com/huggingface/transformers/issues/34107
|
closed
|
[
"Generation",
"Audio"
] | 2024-10-12T07:34:38Z
| 2024-12-28T08:06:48Z
| null |
MonolithFoundation
|
pytorch/torchtitan
| 610
|
[Compile] Understand why FSDP2 saves both SDPA out and wo in for bwd
|
With FSDP2 and transformer block compile, `torch.compile` saves both the SDPA output and the contiguous transposed tensor for backward:
https://github.com/pytorch/torchtitan/blob/7e93822e402c3f470bb7ddb925bbc43701bf8573/torchtitan/models/llama/model.py#L210-L213
However, with simpleFSDP with full model compile, `torch.compile` only saves the SDPA output. This means that FSDP2 saves an extra `(bs, seq_len, dim)` tensor per transformer block.
Traditionally, SDPA output is required for SDPA backward, and the input to `wo` is required for the `wo` backward. However, it may be profitable memory-wise to recompute one from the other (e.g. recompute SDPA output from undo-ing the transpose of `wo` input).
One question is why the activations saved for backward differ between simple FSDP with full model compile vs. FSDP2 with transformer block compile.
|
https://github.com/pytorch/torchtitan/issues/610
|
open
|
[
"question",
"module: torch.compile"
] | 2024-10-11T15:29:04Z
| 2025-12-10T18:30:41Z
| null |
awgu
|
pytorch/ao
| 1,057
|
How to use float8 with SM89 hardware - i.e. NVIDIA A6000 ADA?
|
I am running torchao: 0.5 and torch: '2.5.0a0+b465a5843b.nv24.09' on an NVIDIA A6000 ADA card (sm89) which supports FP8.
I ran the generate.py code from the benchmark:
python generate.py --checkpoint_path $CHECKPOINT_PATH --compile --compile_prefill --write_result /root/benchmark_results__baseline.txt
> Average tokens/sec: 57.01
> Average Bandwidth: 855.74 GB/s
> Peak Memory Usage: 16.19 GB
> Model Size: 15.01 GB
> 20241011143042, tok/s= 57.01, mem/s= 855.74 GB/s, peak_mem=16.19 GB, model_size=15.01 GB quant: None, mod: Meta-Llama-3-8B, kv_quant: False, compile: True, compile_prefill: True, dtype: torch.bfloat16, device: cuda repro: python generate.py --checkpoint_path /models/Meta-Llama-3-8B/consolidated.00.pth --device cuda --precision torch.bfloat16 --compile --compile_prefill --num_samples 5 --max_new_tokens 200 --top_k 200 --temperature 0.8
python generate.py --checkpoint_path $CHECKPOINT_PATH --compile --compile_prefill --quantization float8wo --write_result /root/benchmark_results__float8wo.txt
> Average tokens/sec: 57.00
> Average Bandwidth: 855.62 GB/s
> Peak Memory Usage: 16.19 GB
> Model Size: 15.01 GB
> 20241011143316, tok/s= 57.00, mem/s= 855.62 GB/s, peak_mem=16.19 GB, model_size=15.01 GB quant: float8wo, mod: Meta-Llama-3-8B, kv_quant: False, compile: True, compile_prefill: True, dtype: torch.bfloat16, device: cuda repro: python generate.py
--quantization float8wo --checkpoint_path /models/Meta-Llama-3-8B/consolidated.00.pth --device cuda --precision torch.bfloat16 --compile --compile_prefill --num_samples 5 --max_new_tokens 200 --top_k 200 --temperature 0.8
The `float8wo` flag does not appear to be doing anything. Am I missing a step? Thanks!
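As a cross-check outside the benchmark script, a minimal sketch that applies float8 weight-only quantization directly through the torchao API (assuming a torchao build that exports `quantize_` and `float8_weight_only`; the toy module stands in for the real checkpoint):
```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, float8_weight_only

# Toy stand-in for the Llama checkpoint used in the benchmark above.
model = nn.Sequential(nn.Linear(4096, 4096, bias=False)).to(torch.bfloat16).cuda()

# Swap Linear weights to float8 weight-only quantization in place.
quantize_(model, float8_weight_only())

# The weight should now be a quantized tensor subclass, not a plain bf16 tensor.
print(type(model[0].weight))
```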
|
https://github.com/pytorch/ao/issues/1057
|
closed
|
[
"question",
"float8"
] | 2024-10-11T14:40:38Z
| 2025-01-24T18:24:46Z
| null |
vgoklani
|
pytorch/pytorch
| 137,779
|
Flex attention with mask depending on queries and keys lengths (or how to implement `causal_lower_right` masking)
|
### 🐛 Describe the bug
I tried to implement the `causal_lower_right` masking in flex attention. This requires the masking function to know the difference in lengths of keys and queries:
```python
QL = query.size(2)
KL = key.size(2)

def causal_mask(b, h, q_idx, kv_idx):
    return q_idx - QL >= kv_idx - KL
```
It is easy to use it with flex attention and it works on the first call to flex attention (regardless of using `torch.compile` on it or not). However, it fails on a call with differently shaped `query` and `key` matrices.
I don't know if the usage of queries and keys shape is allowed. If it is, then the second call shouldn't fail. If it is not allowed, then how can one implement `causal_lower_right` masking, which requires knowing the shapes?
Full reproduction code:
```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention


def causal_attention(
    query,
    key,
    value,
):
    # all shapes Bs x Nh x Len x Dim
    B = query.size(0)
    H = query.size(1)
    QL = query.size(2)
    KL = key.size(2)

    def causal_mask(b, h, q_idx, kv_idx):
        return q_idx - QL >= kv_idx - KL

    block_mask = create_block_mask(causal_mask, B, H, QL, KL, device=query.device)
    return flex_attention(
        query,
        key,
        value,
        None,
        block_mask,
    )


def test(ql, kl):
    bs = 32
    nh = 8
    hd = 64
    q = torch.rand(
        bs, nh, ql, hd, dtype=torch.bfloat16, device="cuda", requires_grad=True
    )
    k = torch.rand(
        bs, nh, kl, hd, dtype=torch.bfloat16, device="cuda", requires_grad=True
    )
    v = torch.rand(
        bs, nh, kl, hd, dtype=torch.bfloat16, device="cuda", requires_grad=True
    )
    causal_attention(q, k, v)
    print(f"test({ql}, {kl}) worked")


print("torch.__version__", torch.__version__)
# First calls always succeed.
test(512, 512)
test(512, 512)
# These calls fail, unless the above are commented out.
test(512, 1024)
test(512, 1024)
test(512, 512)
```
Traceback:
```
torch.__version__ 2.6.0.dev20241009
test(512, 512) worked
test(512, 512) worked
Traceback (most recent call last):
File "/home/janek/projects/llm_ng/flex_trouble.py", line 52, in <module>
test(512, 1024)
File "/home/janek/projects/llm_ng/flex_trouble.py", line 42, in test
causal_attention(q, k, v)
File "/home/janek/projects/llm_ng/flex_trouble.py", line 20, in causal_attention
return flex_attention(
File "/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1113, in flex_attention
out, lse = torch.compile(
File "/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 487, in _fn
return fn(*args, **kwargs)
File "/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1100, in _flex_attention_hop_wrapper
def _flex_attention_hop_wrapper(*args, **kwargs):
File "/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 654, in _fn
return fn(*args, **kwargs)
File "<eval_with_key>.9", line 28, in forward
File "/mnt/scratch/janek/pixi/babydragon-12050631407633866471/envs/nightly/lib/python3.10/site-packages/torch/_higher_order_ops/flex_attention.py", line 113, in __call__
raise RuntimeError("Other buffers must be tensors.")
RuntimeError: Other buffers must be tensors.
```
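One possible workaround (a sketch, not verified against this exact nightly) is to capture the length difference as a tensor instead of Python ints, since the error suggests non-tensor closure captures are what gets rejected:
```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

def causal_lower_right_attention(query, key, value):
    B, H, QL, KL = query.size(0), query.size(1), query.size(2), key.size(2)
    # Capture the offset as a tensor so the mask_mod closes over a tensor buffer,
    # not a Python int that gets baked in and rejected on re-specialization.
    offset = torch.tensor(KL - QL, device=query.device)

    def causal_mask(b, h, q_idx, kv_idx):
        # q_idx - QL >= kv_idx - KL  <=>  q_idx + (KL - QL) >= kv_idx
        return q_idx + offset >= kv_idx

    block_mask = create_block_mask(causal_mask, B, H, QL, KL, device=query.device)
    return flex_attention(query, key, value, block_mask=block_mask)
```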
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241009
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @zou3519 @bdhirsh @penguinwu @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng @ezyang @chauhang @ydwu4
|
https://github.com/pytorch/pytorch/issues/137779
|
closed
|
[
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 2024-10-11T13:21:40Z
| 2024-11-12T00:12:28Z
| null |
janchorowski
|
huggingface/finetrainers
| 25
|
How to fix it? training/cogvideox_text_to_video_lora.py FAILED
|
### System Info / 系統信息
cuda11.8
x2 3090
linux ubuntu 22.04 lts
pytorch2.4
### Information / 问题信息
- [X] The official example scripts / 官方的示例脚本
- [X] My own modified scripts / 我自己修改的脚本和任务
### Reproduction / 复现过程
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/dev_ml/cogvideox-factory/wandb/offline-run-20241011_154425-t76nveyh
wandb: Find logs at: wandb/offline-run-20241011_154425-t76nveyh/logs
[rank0]:I1011 15:44:57.956000 124307873129088 torch/_dynamo/utils.py:335] TorchDynamo compilation metrics:
[rank0]:I1011 15:44:57.956000 124307873129088 torch/_dynamo/utils.py:335] Function, Runtimes (s)
[rank0]:V1011 15:44:57.956000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.956000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
W1011 15:45:01.515000 129677780091520 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 177223 closing signal SIGTERM
E1011 15:45:02.282000 129677780091520 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 177222) of binary: /home/dev_ml/cogvideox-factory/venv/bin/python3.10
Traceback (most recent call last):
File "/home/dev_ml/cogvideox-factory/venv/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1159, in launch_command
multi_gpu_launcher(args)
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
distrib_run.run(args)
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
training/cogvideox_text_to_video_lora.py FAILED
---------------------------------
|
https://github.com/huggingface/finetrainers/issues/25
|
closed
|
[] | 2024-10-11T08:49:23Z
| 2024-12-23T07:40:41Z
| null |
D-Mad
|
huggingface/finetrainers
| 22
|
What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?
|
About Dataset Preparation,
What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?
For example: 1280x720, below 5 Mbps, H.264 encoder recommended.
Are there any suggestions?
|
https://github.com/huggingface/finetrainers/issues/22
|
closed
|
[] | 2024-10-11T05:12:57Z
| 2024-10-14T07:20:36Z
| null |
Erwin11
|
huggingface/accelerate
| 3,156
|
How to load a model with fp8 precision for inference?
|
### System Info
```Shell
Is it possible to load the model using the accelerate library with fp8 inference?
I have H100 GPU access.
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-72B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Expected behavior
...
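A minimal sketch of one possible route, assuming a recent `transformers` with FBGEMM FP8 support (`fbgemm-gpu` installed, H100-class GPU); whether this fits the accelerate-only workflow asked about here is an open question:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, FbgemmFp8Config

model_name = "Qwen/Qwen2.5-72B-Instruct"

# Quantize the weights to fp8 on the fly while loading; requires fbgemm-gpu and
# an H100-class (compute capability >= 9.0) GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    quantization_config=FbgemmFp8Config(),
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```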
|
https://github.com/huggingface/accelerate/issues/3156
|
closed
|
[] | 2024-10-11T04:31:47Z
| 2024-12-02T15:07:58Z
| null |
imrankh46
|
huggingface/diffusers
| 9,643
|
Flux does not support multiple Controlnets?
|
### Describe the bug
I'm encountering an issue with the FluxControlNetPipeline. The `controlnet` parameter is supposed to accept a `List[FluxControlNetModel]`. However, when I attempt to execute my code, I run into the following error:
```
Traceback (most recent call last):
File "/opt/tiger/test_1/h.py", line 8, in <module>
pipe = FluxControlNetPipeline.from_pretrained('/mnt/bn/x/sd_models/flux_schnell/', controlnet=controlnet, torch_dtype=torch.bfloat16).to("cuda")
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 940, in from_pretrained
model = pipeline_class(**init_kwargs)
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 206, in __init__
self.register_modules(
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 162, in register_modules
library, class_name = _fetch_class_library_tuple(module)
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 731, in _fetch_class_library_tuple
library = not_compiled_module.__module__.split(".")[0]
AttributeError: 'list' object has no attribute '__module__'. Did you mean: '__mul__'?
```
### Reproduction
```
import torch
from diffusers import FluxControlNetPipeline, FluxControlNetModel

controlnet = [
    FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
    FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
]
pipe = FluxControlNetPipeline.from_pretrained('/mnt/bn/x/sd_models/flux_schnell/', controlnet=controlnet, torch_dtype=torch.bfloat16).to("cuda")
```
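A possible workaround sketch, assuming a diffusers version that exposes the `FluxMultiControlNetModel` wrapper (repo ids and the local path are taken from the reproduction above):
```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline, FluxMultiControlNetModel

controlnet = FluxMultiControlNetModel([
    FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
    FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
])

# Wrapping the list avoids register_modules() receiving a plain Python list.
pipe = FluxControlNetPipeline.from_pretrained(
    "/mnt/bn/x/sd_models/flux_schnell/", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")
```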
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.14
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.24.5
- Transformers version: 4.38.2
- Accelerate version: 0.33.0
- PEFT version: 0.12.0
- Bitsandbytes version: 0.44.1
- Safetensors version: 0.4.4
- xFormers version: 0.0.27
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9643
|
closed
|
[
"bug"
] | 2024-10-11T03:47:06Z
| 2024-10-11T17:39:20Z
| 1
|
RimoChan
|
huggingface/diffusers
| 9,639
|
How to use my own trained LoRA on a local computer?
|
```
local_model_path = r"D:\downloads\FLUX.1-schnell"
pipe = FluxPipeline.from_pretrained(local_model_path, torch_dtype=torch.bfloat16)

# lora not working by this way
pipe.load_lora_weights("XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors")
pipe.load_lora_weights(r"D:\AI\stable-diffusion-webui-forge\models\Lora\myflux\myhsr.safetensors")
pipe.fuse_lora()
pipe.unload_lora_weights()

# pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
pipe.enable_sequential_cpu_offload()
```
But it does not seem to load my own LoRA properly.
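A sketch of how two LoRAs are usually kept separate, assuming a diffusers version with `adapter_name` / `set_adapters` support; the paths come from the snippet above and the adapter names are arbitrary:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(r"D:\downloads\FLUX.1-schnell", torch_dtype=torch.bfloat16)

# Give each LoRA its own adapter name so the second load does not overwrite the first;
# for the local file, point at its directory and pass the file via weight_name.
pipe.load_lora_weights("XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors", adapter_name="disney")
pipe.load_lora_weights(
    r"D:\AI\stable-diffusion-webui-forge\models\Lora\myflux",
    weight_name="myhsr.safetensors",
    adapter_name="myhsr",
)

# Choose which adapters are active (and their weights) before fusing.
pipe.set_adapters(["disney", "myhsr"], adapter_weights=[0.8, 1.0])
pipe.fuse_lora()
pipe.unload_lora_weights()

pipe.enable_sequential_cpu_offload()
```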
|
https://github.com/huggingface/diffusers/issues/9639
|
closed
|
[] | 2024-10-10T23:19:47Z
| 2024-11-10T08:49:08Z
| null |
derekcbr
|
pytorch/benchmark
| 2,499
|
How is TorchBench applied to testing new versions of PyTorch?
|
Hello, may I ask what tasks will be used for end-to-end testing before the release of the new version of PyTorch?
Will the tests focus on the consistency of metrics between the previous and new versions, such as training loss, iteration speed, etc.?
|
https://github.com/pytorch/benchmark/issues/2499
|
open
|
[] | 2024-10-10T16:40:53Z
| 2024-10-16T20:28:47Z
| null |
HLH13297997663
|
huggingface/evaluation-guidebook
| 14
|
[TOPIC] How to design a good benchmark depending on your eval goals
|
Eval goals can be finding a good model for you vs ranking models vs choosing a good training config.
Request by Luca Soldaini
Cf https://x.com/soldni/status/1844409854712218042
|
https://github.com/huggingface/evaluation-guidebook/issues/14
|
closed
|
[] | 2024-10-10T16:20:40Z
| 2025-09-18T08:31:15Z
| null |
clefourrier
|
huggingface/diffusers
| 9,633
|
Confusion about accelerator.num_processes in get_scheduler
|
In the example code from [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image_sdxl.py#L974):
```python
num_warmup_steps = args.lr_warmup_steps * args.gradient_accumulation_steps
```
But in [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image.py#L830):
```python
num_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes
```
Why is there such a difference in these two cases?
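For context, both values end up being passed to `get_scheduler`; a self-contained sketch of that call shape follows (scheduler name and step counts are arbitrary). One common rationale for the `num_processes` scaling is that a scheduler prepared by accelerate is stepped once per process, but whether that explains the SDXL script's different choice is exactly the question here.
```python
import torch
from accelerate import Accelerator
from diffusers.optimization import get_scheduler

accelerator = Accelerator()
optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)

lr_warmup_steps, max_train_steps = 500, 10_000

# Warmup/total steps scaled by the number of processes, as in train_text_to_image.py.
lr_scheduler = get_scheduler(
    "constant_with_warmup",
    optimizer=optimizer,
    num_warmup_steps=lr_warmup_steps * accelerator.num_processes,
    num_training_steps=max_train_steps * accelerator.num_processes,
)
```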
|
https://github.com/huggingface/diffusers/issues/9633
|
closed
|
[
"stale"
] | 2024-10-10T08:39:12Z
| 2024-11-09T15:37:33Z
| 5
|
hj13-mtlab
|
huggingface/transformers.js
| 968
|
It's ready
|
### Question
The project I've been working on for the past few months is now ready enough to reveal to the world. Transformers.js is an essential part of it, and I just want to say thank you for your amazing work.
https://www.papeg.ai
As you can see in the source code, there are lots of workers built around Transformers.js: translation, image description, STT, TTS, speaker verification, image and music generation, RAG embedding, and more!
https://github.com/flatsiedatsie/papeg_ai
Keep on rockin' !
// Reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1g0jehn/ive_been_working_on_this_for_6_months_free_easy/
(Feel free to close this issue at any time)
|
https://github.com/huggingface/transformers.js/issues/968
|
closed
|
[
"question"
] | 2024-10-10T04:39:48Z
| 2025-05-29T22:49:24Z
| null |
flatsiedatsie
|
pytorch/torchtitan
| 608
|
Why is xformers not used for attention computation?
|
Curious why xformers is not used. Is it for simplicity, or is there a performance reason?
|
https://github.com/pytorch/torchtitan/issues/608
|
closed
|
[
"question"
] | 2024-10-09T23:21:23Z
| 2024-11-22T00:15:17Z
| null |
jason718
|
pytorch/xla
| 8,245
|
Improve documentation for `get_memory_info`
|
## 📚 Documentation
Improve documentation for `get_memory_info`. This feature is only lightly documented on the [PyTorch/XLA documentation page](https://pytorch.org/xla/release/r2.4/index.html#torch_xla.core.xla_model.get_memory_info). Please provide an explanation of what details it returns and potentially offer examples.
Additionally, it's important to provide documentation that clarifies how the `get_memory_info` API works so that users can easily compare/contrast it against [`torch.cuda.mem_get_info`](https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html).
cc @mikegre-google to help follow up
@JackCaoG
|
https://github.com/pytorch/xla/issues/8245
|
open
|
[
"enhancement",
"usability"
] | 2024-10-09T20:33:18Z
| 2025-02-27T13:10:42Z
| 0
|
miladm
|
pytorch/TensorRT
| 3,224
|
❓ [Question] How to decide if an Op should support dynamic shape or not
|
## ❓ Question
<!-- Your question -->
Since only some of the ops support dynamic shapes and others do not, what are the criteria for deciding whether an op supports dynamic shapes?
For some existing ops that are not marked as `supports_dynamic_shapes=True`, can I write a converter that wraps the existing converter and mark my own converter with high priority? Is this the recommended way?
Or should I just turn on `assume_dynamic_shape_support`, which seems to be a global flag for all converters?
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.4.1
- CPU Architecture: x86_64
- OS (e.g., Linux): linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.11.9
- CUDA version: 12.1
- GPU models and configuration: Nvidia L4
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/3224
|
open
|
[
"question"
] | 2024-10-09T16:46:56Z
| 2024-10-30T23:52:26Z
| null |
sean-xiang-applovin
|
huggingface/datasets
| 7,211
|
Describe only selected fields in README
|
### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields in order not to overcomplicate the Dataset Preview and filter out some fields
### Motivation
The `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50
### Your contribution
I'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible
|
https://github.com/huggingface/datasets/issues/7211
|
open
|
[
"enhancement"
] | 2024-10-09T16:25:47Z
| 2024-10-09T16:25:47Z
| 0
|
alozowski
|
pytorch/xla
| 8,240
|
XLA2 does not work with jax 0.4.34 (but did work on jax 0.4.33)
|
## 🐛 Bug
A toy MNIST example using XLA2 (torch_xla2) does not work on the latest version of jax (0.4.34) on a 64-core Trillium machine (v6e-64), but downgrading to 0.4.33 fixes the issue.
## To Reproduce
1. Download the toy training example from [here](https://gist.githubusercontent.com/Chaosruler972/2461fe9d5a7a558ff4cb257ce88ad702/raw/1c354fbdae9dae2ff83917341aea957172897e71/mnist.py)
2. Allocate a v6e-64 Trillium TPU on GCP
3. Copy that file to all the VM machines using GCP scp
4. Prepare an environment containing torch_xla2 (refer to the [readme here](https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/README.md))
5. Install jax/jaxlib 0.4.33 from pip:
```
pip install jax==0.4.33 jaxlib==0.4.33 libtpu-nightly==0.1.dev20241008+nightly -f https://storage.googleapis.com/libtpu-releases/index.html
```
6. Run your training and verify it is working well
7. Upgrade to jax 0.4.34:
```
pip install jax==0.4.34 jaxlib==0.4.34 libtpu-nightly==0.1.dev20241008+nightly -f https://storage.googleapis.com/libtpu-releases/index.html
```
8. Run your training again; note how the training loop exits without warnings/messages after the loss was extracted
## Expected behavior
Only small variations in results between runs on different versions of jax.
## Environment
- Reproducible on XLA backend TPU
- Using a Trillium 64-core machine
- torch_xla2 version: 0.0.1
|
https://github.com/pytorch/xla/issues/8240
|
closed
|
[
"bug",
"torchxla2"
] | 2024-10-09T14:35:32Z
| 2025-03-04T18:22:21Z
| 3
|
zmelumian972
|
huggingface/transformers.js
| 965
|
Error: cannot release session. invalid session id
|
### Question
I'm trying to get ASR + segmentation to run on a mobile phone (Pixel 6A, 6GB ram). This time on Brave mobile ;-)
ASR alone works fine. But I have a question about also getting the speaker recognition to run (segmentation+verification).
In the example implementation a `promiseAll` is used to run both ASR and segmentation in parallel. For my implementation I've tried to run them one after the other, hoping that this would mean less memory is needed. E.g.:
- Create ASR instance
  - Get text and chunks from audio
- Dispose of ASR instance
- Create segmentation instance
  - Get segments from audio
- Dispose of segmentation instance
- Create verification instance
  - Run verification on chunks of audio from each segment
- Dispose of verification instance
I don't know if it's related, but I noticed the error below:
<img width="550" alt="Screenshot 2024-10-09 at 15 11 13" src="https://github.com/user-attachments/assets/27873ca1-218b-44b9-8d9a-3af3a46bdb5c">
My questions are:
- Is it a valid assumption that doing things consecutively will allow this cascade to run on devices with less memory? Or was there a good reason that a `promiseAll` was used?
- What does the error mean?
- Is running them consecutively part of why the error occurs?
- Can I use `quantized` with the segmentation and verification models in order to save memory? Currently the ASR (tiny-whisper.en_timestamped) is 114MB, and then the segmentation and verification seem to be 512 MB together.
I haven't split up loading the segmentation and verification instances yet, as I thought I'd get your opinion first.
```
class SegmentationSingleton {
    static instance = null;

    static segmentation_model_id = 'onnx-community/pyannote-segmentation-3.0';
    static segmentation_instance = null;
    static segmentation_processor = null;
    static loaded_segmentation = false;

    static verification_model_id = 'Xenova/wavlm-base-plus-sv'; // Xenova/wavlm-base-plus-sv
    //static verification_model_id = 'onnx-community/wespeaker-voxceleb-resnet34-LM';
    static verification_instance = null;
    static verification_processor = null;

    static instance_exists(){
        return this.segmentation_instance != null;
    }

    static set_to_null(var_to_null=null){
        if(typeof var_to_null == 'string' && typeof this[var_to_null] != 'undefined'){
            this[var_to_null] = null;
            //console.log("SegmentationSingleton: set_to_null: ", var_to_null);
        }
    }

    //static async getInstance(progress_callback=null,model_name='onnx-community/whisper-base_timestamped',preferences={},load_segmentation=true) {
    static async getInstance(progress_callback=null,preferences={}) {
        //console.log("Whisper_worker: SegmentationSingleton: getInstance");

        if(self.is_mobile){
            console.log("mobile, so setting quantized to true for segmentation AI's");
            preferences['quantized'] = true;
        }

        this.loaded_segmentation = true
        console.log("segmentationSingleton: creating segmentation instances");

        this.segmentation_processor ??= AutoProcessor.from_pretrained(this.segmentation_model_id, {
            ...preferences,
            progress_callback,
        });
        this.segmentation_instance ??= AutoModelForAudioFrameClassification.from_pretrained(this.segmentation_model_id, {
            // NOTE: WebGPU is not currently supported for this model
            // See https://github.com/microsoft/onnxruntime/issues/21386
            device: 'wasm',
            //dtype: 'fp32',
            dtype: 'q8',
            ...preferences,
            progress_callback,
        });

        if(this.verification_model_id.endsWith('wespeaker-voxceleb-resnet34-LM')){
            self.similarity_threshold = 0.5;
            self.perfect_simillarity_threshold = 0.7;
        }
        else{
            self.similarity_threshold = 0.95;
            self.perfect_simillarity_threshold = 0.98;
        }

        this.verification_processor ??= AutoProcessor.from_pretrained(this.verification_model_id, {
            device: 'wasm',
            dtype: 'fp32',
            //device: 'webgpu',
            //dtype: 'q8',
            ...preferences,
            progress_callback,
        });
        this.verification_instance ??= AutoModel.from_pretrained(this.verification_model_id, {
            device: 'wasm',
            dtype: 'fp32',
            //device: 'webgpu',
            //dtype: 'q8',
            ...preferences,
            progress_callback,
        });

        return Promise.all([this.segmentation_processor, this.segmentation_instance, this.verification_processor, this.verification_instance]);
    }
}
```
|
https://github.com/huggingface/transformers.js/issues/965
|
open
|
[
"question"
] | 2024-10-09T13:57:48Z
| 2024-10-09T15:51:02Z
| null |
flatsiedatsie
|
huggingface/chat-ui
| 1,509
|
(BUG) OAuth login splash is BROKEN/does NOT work
|
On newer versions of chat-ui the login splash screen does not work. Say, for instance, you have OAuth set up and are not logged in. You should get a popup prompting you to log in and not see the interface. This used to work without a problem. I just realized this is no longer working on the newer versions. I have OAuth set up through Hugging Face and working perfectly.
Note: even though the splash is not shown, someone would still be prevented from using the chatbot, as it just won't work if you're not logged in. However, I kind of like the splash. Does anyone know how to get this working again? Already messed with it? Save me some time. Thank you, Hugging Face, for creating this project. Are we going to get any of the newer options implemented into HuggingChat, specifically the continue button and the new search/agent control popup panel vs just search on/off? Thanks and wish y'all the best.
***Splash on 0.8.4 (Working)

***Splash on 0.9.3 (Not Working)

|
https://github.com/huggingface/chat-ui/issues/1509
|
closed
|
[
"bug"
] | 2024-10-08T18:06:01Z
| 2024-11-27T15:02:46Z
| 2
|
bpawnzZ
|
huggingface/trl
| 2,196
|
How to exit training when the loss is less than a specified value in SFTTrainer?
|
I asked ChatGPT this question first, and it gave the answer below:
```
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

# Define customized Trainer class
class CustomSFTTrainer(SFTTrainer):
    def __init__(self, *args, min_loss_threshold=0.001, **kwargs):
        super().__init__(*args, **kwargs)
        self.min_loss_threshold = min_loss_threshold

    def train(self, *args, **kwargs):
        # Rewrite the train() method to monitor the loss.
        for step, batch in enumerate(self.get_train_dataloader()):
            outputs = self.model(**batch)
            loss = outputs.loss
            loss.backward()
            self.optimizer.step()
            self.lr_scheduler.step()
            self.optimizer.zero_grad()
            # If the loss is less than a specified value, exit training.
            if loss.item() < self.min_loss_threshold:
                print(f"Stopping training early at step {step} as loss {loss.item()} is below threshold {self.min_loss_threshold}")
                break
            # Print loss log.
            if step % self.args.logging_steps == 0:
                print(f"Step {step}, Loss: {loss.item()}")

# Initialize the customized Trainer.
trainer = CustomSFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=ds_split['train'],
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    dataset_num_proc=2,
    min_loss_threshold=0.001,  # Specify the loss threshold
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=200,
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```
However, the code above produced the error below:
`# Calls into the C++ engine to run the backward pass RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 482, 3584]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). `
I fed the error back to ChatGPT, and it advised adding 2 lines to the code:
```
...
loss = outputs.loss
# Avoid inplace-updating
loss = loss.clone()
loss.backward()
...
```
I re-ran the code, and it produced the errors below:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-8-079eb3ca0b07>](https://localhost:8080/#) in <cell line: 2>()
1 torch.autograd.set_detect_anomaly(True)
----> 2 trainer_stats = trainer.train()
3 frames
[/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py](https://localhost:8080/#) in _engine_run_backward(t_outputs, *args, **kwargs)
767 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
768 try:
--> 769 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
770 t_outputs, *args, **kwargs
771 ) # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 256, 3584]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
What should I do?
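Rather than overriding `train()`, the usual pattern is a `TrainerCallback` that sets `should_training_stop`; a minimal sketch (the threshold value is arbitrary and this is untested with unsloth):
```python
from transformers import TrainerCallback

class StopOnLossThreshold(TrainerCallback):
    def __init__(self, min_loss_threshold=0.001):
        self.min_loss_threshold = min_loss_threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        # "loss" shows up in the logs every `logging_steps` steps.
        if logs is not None and logs.get("loss", float("inf")) < self.min_loss_threshold:
            print(f"Stopping early at step {state.global_step}: "
                  f"loss {logs['loss']} < {self.min_loss_threshold}")
            control.should_training_stop = True
        return control

# Pass it to an unmodified SFTTrainer:
# trainer = SFTTrainer(..., callbacks=[StopOnLossThreshold(min_loss_threshold=0.001)])
```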
|
https://github.com/huggingface/trl/issues/2196
|
closed
|
[
"❓ question",
"🏋 SFT"
] | 2024-10-08T03:13:27Z
| 2024-10-08T10:39:51Z
| null |
fishfree
|
huggingface/safetensors
| 532
|
Documentation about multipart safetensors
|
### Feature request
Add examples to the documentation about handling multipart safetensors files (`*-00001.safetensors`, `*-00002.safetensors`, etc.). How to load/save them?
### Motivation
This is a widespread format, but the README and docs don't contain enough information about it.
### Your contribution
I can't help with this myself.
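A minimal sketch of loading a sharded checkpoint by hand, assuming the usual Hugging Face layout where a `model.safetensors.index.json` sits next to the shards (the directory path is a placeholder):
```python
import json
import os

from safetensors.torch import load_file

model_dir = "path/to/checkpoint"  # placeholder: directory containing the shards

# The index file maps each tensor name to the shard file that stores it.
with open(os.path.join(model_dir, "model.safetensors.index.json")) as f:
    index = json.load(f)

state_dict = {}
for shard_name in sorted(set(index["weight_map"].values())):
    state_dict.update(load_file(os.path.join(model_dir, shard_name)))

print(f"Loaded {len(state_dict)} tensors from {len(set(index['weight_map'].values()))} shards")
```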
|
https://github.com/huggingface/safetensors/issues/532
|
closed
|
[] | 2024-10-07T20:14:48Z
| 2025-01-03T17:36:31Z
| 6
|
attashe
|
pytorch/audio
| 3,838
|
How to train a real-time av-asr pretrain model
|
### 🚀 The feature
There is an example of HuBERT training [here](https://github.com/pytorch/audio/tree/main/examples/self_supervised_learning), but there is no example of real-time AV-ASR for other languages.
### Motivation, pitch
I'm working on lipreading, but without a pretrained model to continue training from, like the one real-time AV-ASR has.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/audio/issues/3838
|
open
|
[] | 2024-10-07T12:23:32Z
| 2024-10-07T12:23:32Z
| null |
Zhaninh
|
huggingface/diffusers
| 9,599
|
Why is there no LoRA-only finetune example for FLUX.1?
|
**Is your feature request related to a problem? Please describe.**
The only example of LoRA finetuning for FLUX.1 I discovered is here:
https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py
which is a DreamBooth example. DreamBooth is VRAM intensive and not useful for the scenario where the dataset is big enough and does not need regularization images.
**Describe the solution you'd like.**
A LoRA only example for FLUX.1
**Describe alternatives you've considered.**
Provide some tips for me to modify by myself.
|
https://github.com/huggingface/diffusers/issues/9599
|
closed
|
[] | 2024-10-07T06:22:54Z
| 2024-10-09T12:48:32Z
| 3
|
eeyrw
|
huggingface/chat-ui
| 1,506
|
Add support for local models
|
## Describe your feature request
I was looking for an open-source alternative to PocketPal, which lets you converse with local models on iOS and Android (https://apps.apple.com/us/app/pocketpal-ai/id6502579498), and I was wondering if HuggingChat could be this alternative. The idea is to have an e2e open-source solution, providing e2e privacy.
I hope I didn't miss anything in the app allowing to support this.
Thanks
## Screenshots (if relevant)
## Implementation idea
I'm happy to help provided support from the community and the HuggingFace team. I have experience on web development, but not with running LLM on mobile.
|
https://github.com/huggingface/chat-ui/issues/1506
|
closed
|
[
"enhancement"
] | 2024-10-06T20:18:24Z
| 2024-10-07T13:45:45Z
| 3
|
arnaudbreton
|
pytorch/torchchat
| 1,278
|
AOTI Export ignores user --device flag - expected behavior?
|
### 🐛 Describe the bug
Hi all,
I ran into some confusion when trying to export llama3 on my system. I have a small graphics card (8GB VRAM on an AMD GPU) but a decent amount of RAM (24GB). Obviously, the model won't fit on my GPU un-quantized but it should fit into my RAM + swap.
I tried running:
```
python3 torchchat.py export llama3 --output-dso-path exportedModels/llama3.so --quantize torchchat/quant_config/desktop.json --device cpu
```
However, I ran into multiple HIP OOM errors (basically equivalent to CUDA). Why would we try to allocate CUDA memory if the target device is CPU?
On further inspection, during export, the device is replaced with whatever is present in the quantize config:
In `cli.py`
https://github.com/pytorch/torchchat/blob/b21715835ab9f61e23dbcf32795b0c0a2d654908/torchchat/cli/cli.py#L491C10-L494C1
```
args.device = get_device_str(
    args.quantize.get("executor", {}).get("accelerator", args.device)
)
```
In this case, the device in `desktop.json` is "fast". The `get_device_str` function replaces this with "cuda" simply based on `torch.cuda.is_available` without consulting the flag I passed in.
## Other cases
Doing a quick grep of the repo, I only found one other case in `generate.py` where `torch.cuda.is_available()` is consulted for monitoring memory usage. We should be careful switching based simply on `torch.cuda.is_available()` and make sure to pin to the user's request if we're using ambiguous devices like "fast".
Another small issue: since I use an AMD GPU, the default `install/install_requirements.sh` will download the CPU-only version instead of the ROCm version of PyTorch. To use my GPU, I have to re-run the torch installation manually. Luckily, it's quite easy to find this command at https://pytorch.org/get-started/locally/. It should be straightforward to check if ROCm is available on the system during this script; we can just run `rocminfo` and check if the command is available.
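For illustration only, a rough sketch of the resolution order I'd expect, using hypothetical helper names rather than torchchat's actual code: an explicit `--device` should win over the quantize config and over auto-detection.
```python
import torch

def resolve_device(cli_device: str | None, quantize_accelerator: str | None) -> str:
    """Hypothetical resolution order: explicit --device flag > quantize config > auto-detection."""
    if cli_device not in (None, "fast"):
        return cli_device  # honor an explicit --device cpu/cuda/mps
    requested = quantize_accelerator or cli_device or "fast"
    if requested != "fast":
        return requested
    return "cuda" if torch.cuda.is_available() else "cpu"

# e.g. resolve_device("cpu", "fast") == "cpu"  -> never tries to allocate GPU memory
```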
### Versions
```
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
--2024-10-06 12:03:44-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8001::154, 2606:50c0:8002::154, 2606:50c0:8003::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8001::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23357 (23K) [text/plain]
Saving to: ‘collect_env.py’
collect_env.py 100%[===========================================================================================>] 22.81K --.-KB/s in 0.02s
2024-10-06 12:03:44 (1.10 MB/s) - ‘collect_env.py’ saved [23357/23357]
Collecting environment information...
PyTorch version: 2.4.1+rocm6.1
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.1.40091-a8dbc0c19
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.4-060104-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 6700S (gfx1030)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.1.40091
MIOpen runtime version: 3.1.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 6900HS with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 4933.8862
CPU min MHz: 1600.0000
BogoMIPS: 6587.56
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce
```
|
https://github.com/pytorch/torchchat/issues/1278
|
closed
|
[
"bug",
"good first issue",
"actionable"
] | 2024-10-06T19:06:51Z
| 2024-11-16T01:15:38Z
| 5
|
vmpuri
|
pytorch/torchchat
| 1,277
|
Android demo app poor model performance
|
### 🐛 Describe the bug
I wanted to try the new Llama 3.2 1B parameter model on mobile. I downloaded the model and generated the `pte` like so:
```
python torchchat.py download llama3.2-1b
python torchchat.py export llama3.2-1b --quantize torchchat/quant_config/mobile.json --output-pte-path llama3_2-1b.pte
```
Then I pushed `llama3_2-1b.pte` file and `tokenizer.model` files to the mobile phone using `adb`.
I executed the demo app in `torchchat/edge/android/torchchat` using Android Studio with `.aar` file provided on the TorchChat repo readme.
However, when I chat with the AI, its responses are fairly useless and feel quite different from what I get with the same prompt on my computer:


Is there a problem with the default quantization parameters? I tried to not quantize but then the app crashed when loading the model.
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240901
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.4 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-14.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] executorch==0.5.0a0+286799c
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240901
[pip3] torchao==0.5.0+git0916b5b
[pip3] torchaudio==2.5.0.dev20240901
[pip3] torchsr==1.0.4
[pip3] torchtune==0.3.0.dev20240928+cpu
[pip3] torchvision==0.20.0.dev20240901
[conda] executorch 0.5.0a0+286799c pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.5.0.dev20240901 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240901 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.3.0.dev20240928+cpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20240901 pypi_0 pypi
|
https://github.com/pytorch/torchchat/issues/1277
|
closed
|
[
"actionable",
"Mobile - Android",
"ExecuTorch"
] | 2024-10-06T15:10:55Z
| 2024-10-25T08:19:10Z
| 11
|
fran-aubry
|
pytorch/xla
| 8,223
|
How to use torch.float16 in a diffusers pipeline with PyTorch/XLA
|
## ❓ Questions and Help
```
import diffusers, torch, os
import torch_xla.core.xla_model as xm
pipeline = diffusers.DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", safety_checker=None, use_safetensors=True, torch_dtype=torch.float16)
# Move the model to the first TPU core
pipeline = pipeline.to(xm.xla_device())
image = pipeline("a cloud tpu winning a kaggle competition", num_inference_steps=20).images[0]
image
```
I run the above code in Kaggle and get:
```
RuntimeError Traceback (most recent call last)
Cell In[2], line 8
6 # Move the model to the first TPU core
7 pipeline = pipeline.to(xm.xla_device())
----> 8 image = pipeline("a cloud tpu winning a kaggle competition", num_inference_steps=20).images[0]
9 image
File /usr/local/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /usr/local/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1000, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)
997 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
999 # predict the noise residual
-> 1000 noise_pred = self.unet(
1001 latent_model_input,
1002 t,
1003 encoder_hidden_states=prompt_embeds,
1004 timestep_cond=timestep_cond,
1005 cross_attention_kwargs=self.cross_attention_kwargs,
1006 added_cond_kwargs=added_cond_kwargs,
1007 return_dict=False,
1008 )[0]
1010 # perform guidance
1011 if self.do_classifier_free_guidance:
File /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/site-packages/diffusers/models/unets/unet_2d_condition.py:1169, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, down_intrablock_additional_residuals, encoder_attention_mask, return_dict)
1164 encoder_hidden_states = self.process_encoder_hidden_states(
1165 encoder_hidden_states=encoder_hidden_states, added_cond_kwargs=added_cond_kwargs
1166 )
1168 # 2. pre-process
-> 1169 sample = self.conv_in(sample)
1171 # 2.5 GLIGEN position net
1172 if cross_attention_kwargs is not None and cross_attention_kwargs.get("gligen", None) is not None:
File /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/site-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
462 def forward(self, input: Tensor) -> Tensor:
--> 463 return self._conv_forward(input, self.weight, self.bias)
File /usr/local/lib/python3.8/site-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
455 if self.padding_mode != 'zeros':
456 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
457 weight, bias, self.stride,
458 _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.str
```
|
https://github.com/pytorch/xla/issues/8223
|
open
|
[
"bug"
] | 2024-10-06T00:02:41Z
| 2025-02-27T13:17:50Z
| null |
ghost
|
huggingface/tokenizers
| 1,644
|
How to build a custom tokenizer on top of an existing Llama 3.2 tokenizer?
|
Hi,
I was trying to create a custom tokenizer for a different language which is not covered by the Llama 3.2 tokenizer.
I could not find exactly which tokenizer from HF is an exact alternative to Llama's tokenizer [link](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py), so that I would be able to train a new tokenizer.
Currently I am using the following code to train a tokenizer, but the final output does not match the one Llama 3.2 has.
It would be nice if anyone could share their experience of adapting a Llama model to a new language.
```
import json
import argparse
from datasets import load_dataset, concatenate_datasets
from tokenizers import SentencePieceBPETokenizer
from transformers import LlamaTokenizerFast, AutoTokenizer
from tqdm import tqdm
from typing import List

hf_datasets = ["yakhyo/uz-wiki", "yakhyo/uz-news", "agentlans/high-quality-english-sentences"]


def normalize_text(text: str) -> str:
    """
    Normalize Uzbek characters, replacing variations of o‘, o', o`, and ’ (curved apostrophe).
    """
    return text.replace("‘", "'").replace("`", "'").replace("’", "'").replace("()", "")


def prepare_datasets(datasets_list: List[str]):
    all_data = []
    for dataset_name in datasets_list:
        try:
            data = load_dataset(dataset_name)
            for split in ["train", "test", "validation"]:
                try:
                    all_data.append(data[split])
                except KeyError:
                    pass
        except:
            print(f"dataset: `{dataset_name}` not found, skipping...")

    concat_data = []
    for data in tqdm(all_data):
        data = data.map(lambda example: {"text": normalize_text(example["text"])})
        data = data.remove_columns([col for col in data.column_names if col != "text"])
        concat_data.append(data)
    return concatenate_datasets(concat_data)


def main(args):
    dataset = prepare_datasets(hf_datasets)
    # select num_samples from the dataset
    dataset = dataset.shuffle(seed=42).select(range(len(dataset)))

    # Create a SentencePieceBPETokenizer
    tokenizer = SentencePieceBPETokenizer(
        replacement="Ġ"
    )

    # Train the SentencePieceBPETokenizer on the dataset
    tokenizer.train_from_iterator(
        iterator=dataset['text'],
        vocab_size=args.vocab_size,
        show_progress=True,
        special_tokens=[
            "<unk>",
            "<s>",
            "</s>",
            "<pad>"
        ],
    )

    # Save the tokenizer
    tokenizer.save("new-sentencepiece-tokenizer.json", pretty=True)

    # Load reference tokenizer
    if args.reference_tokenizer is not None:
        reference_tokenizer = AutoTokenizer.from_pretrained(args.reference_tokenizer)
        reference_tokenizer.save_pretrained("reference-tokenizer")
    else:
        raise ValueError(
            "No tokenizer name provided or no hub token provided. Try using --reference_tokenizer 'meta-llama/Llama-2-7b-hf'")

    # Read and dump the json file for the new tokenizer and the reference tokenizer
    with open("new-sentencepiece-tokenizer.json") as f:
        new_llama_tokenizer_json = json.load(f)
    with open("reference-tokenizer/tokenizer.json") as f:
        reference_tokenizer_json = json.load(f)

    # Add the reference tokenizer's config to the new tokenizer's config
    new_llama_tokenizer_json["normalizer"] = reference_tokenizer_json["normalizer"]
    new_llama_tokenizer_json["pre_tokenizer"] = reference_tokenizer_json["pre_tokenizer"]
    new_llama_tokenizer_json["post_processor"] = reference_tokenizer_json["post_processor"]
    new_llama_tokenizer_json["decoder"] = reference_tokenizer_json["decoder"]
    new_llama_tokenizer_json["model"]['fuse_unk'] = reference_tokenizer_json["model"]['fuse_unk']
    new_llama_tokenizer_json["model"]['byte_fallback'] = reference_tokenizer_json["model"]['byte_fallback']

    # Dump the new tokenizer's config
    with open("new-sentencepiece-tokenizer.json", "w") as f:
        json.dump(new_llama_tokenizer_json, f, indent=2, ensure_ascii=False)

    # Load the new tokenizer as a LlamaTokenizerFast
    new_llama_tokenizer = LlamaTokenizerFast(
        tokenizer_file="new-sentencepiece-tokenizer.json",
        unk_token="<unk>",
        unk_token_id=0,
        bos_token="<s>",
        bos_token_id=1,
        eos_token="</s>",
        eos_token_id=2,
        pad_token="<pad>",
        pad_token_id=3,
        padding_side="right",
    )

    # Save the new tokenizer
    new_llama_tokenizer.save_pretrained("new-llama-tokenizer")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Llama Tokenizer using SentencePieceBPE")
    parser.add_argument(
        "--reference_tokenizer",
        type=str,
        default=None,
        help="The name of the reference tokenizer to use"
    )
    parser.ad
```
|
https://github.com/huggingface/tokenizers/issues/1644
|
closed
|
[
"training"
] | 2024-10-05T13:18:55Z
| 2025-02-26T12:06:15Z
| null |
yakhyo
|
pytorch/xla
| 8,222
|
unsupported operand type(s) for %: 'int' and 'NoneType'
|
## ❓ Questions and Help
I followed https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb,
but the line `image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]`
raises:
```
TypeError Traceback (most recent call last)
Cell In[8], line 4
1 generator = torch.Generator().manual_seed(0)
2 # xm.mark_step compiles and executes the graph after each iteration.
3 # The first few steps will be much slower than the rest.
----> 4 image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]
5 image
File /usr/local/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /usr/local/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1035, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)
1033 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1034 progress_bar.update()
-> 1035 if callback is not None and i % callback_steps == 0:
1036 step_idx = i // getattr(self.scheduler, "order", 1)
1037 callback(step_idx, t, latents)
TypeError: unsupported operand type(s) for %: 'int' and 'NoneType'
```
How can I fix this problem?
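For reference, a minimal workaround sketch, assuming the deprecated `callback` API of the diffusers version in the traceback: `callback_steps` defaults to `None`, so passing it explicitly (or moving to the newer `callback_on_step_end` argument) avoids the modulo on `None`. The `pipeline` and `prompt` objects are the ones from the earlier notebook cells.
```python
import torch
import torch_xla.core.xla_model as xm

# Pass callback_steps explicitly so `i % callback_steps` never sees None.
generator = torch.Generator().manual_seed(0)
image = pipeline(
    prompt,
    callback=lambda *args: xm.mark_step(),
    callback_steps=1,  # fire the callback (and xm.mark_step()) on every denoising step
    generator=generator,
).images[0]
```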
|
https://github.com/pytorch/xla/issues/8222
|
closed
|
[
"question",
"xla:tpu"
] | 2024-10-05T12:11:52Z
| 2025-02-27T13:20:08Z
| null |
ghost
|
pytorch/xla
| 8,216
|
Random OOM and crashes
|
## ❓ Questions and Help
I've found that I'm unable to train more than ~20-80K steps without a crash and it's difficult to figure out how to debug this. In a typical PyTorch training run, I would get a clear OOM message at a particular line, or any other error and this would be printed to log/console.
However, about half the time, my training run simply exits with no message on any rank, and the other half the time it's clearly due to memory with a "Resource Exhausted" message. The issue is it's not clear where this new allocation happens (I have a fairly standard decoder-based transformer, not even any eval batches, and I'm not using any eager modes). I tried to switch to nightly to get a recent dataloader memory fix, but that doesn't seem to fix it.
I know there are many flags that can be used for debugging, but it's unclear exactly which ones can be used during training without a large performance hit. I've done all the suggested steps, including profiling and making sure there isn't re-compilation happening, etc. Perhaps it would be good to clarify the impact of the flags somewhere to make it clear which are safe, and any other advice on how to debug this would be great!
Also, I should note this occurs with SPMD multi-node training; I have not spent time testing other modes, but this has happened with between 2 and 8 TPU v4 VMs, in both DDP-like configurations and several other mesh configurations.
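For context, one low-overhead thing that can stay enabled for the whole job, unlike full profiling, is periodically dumping torch_xla's built-in metrics report and diffing it around the crash point; this is a sketch assuming a recent torch_xla build, and setting `PT_XLA_DEBUG=1` at launch is, as far as I know, another relatively cheap option.
```python
import torch_xla.debug.metrics as met

# Minimal sketch: print compile/execute counters and other runtime metrics every N steps,
# so a run that later crashes leaves a trail of reports that can be compared over time.
def log_xla_metrics(step: int, every: int = 1000) -> None:
    if step % every == 0:
        print(f"step {step}\n{met.metrics_report()}")
```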
|
https://github.com/pytorch/xla/issues/8216
|
closed
|
[
"question",
"distributed",
"xla:tpu"
] | 2024-10-04T18:51:52Z
| 2025-02-27T13:21:33Z
| null |
alexanderswerdlow
|
pytorch/xla
| 8,215
|
How to use all TPU cores in PyTorch/XLA
|
## ❓ Questions and Help
I followed the code in https://github.com/pytorch/xla/blob/master/contrib/kaggle/distributed-pytorch-xla-basics-with-pjrt.ipynb,
but used `xmp.spawn(print_device, args=(lock,), nprocs=8, start_method='fork')`.
The source code:
```
import os
os.environ.pop('TPU_PROCESS_ADDRESSES')

import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
import multiprocessing as mp

lock = mp.Manager().Lock()

def print_device(i, lock):
    device = xm.xla_device()
    with lock:
        print('process', i, device)

xmp.spawn(print_device, args=(lock,), nprocs=8, start_method='fork')
```
WARNING:root:Unsupported nprocs (8), ignoring...
process 4 xla:0
process 5 xla:1
process 0 xla:0
process 1 xla:1
process 2 xla:0
process 3 xla:1
process 6 xla:0
process 7 xla:1
XLA can only see 2 XLA devices. But when I run `xm.get_xla_supported_devices()` it lists all of ['xla:0', 'xla:1', 'xla:2', 'xla:3', 'xla:4', 'xla:5', 'xla:6', 'xla:7']. I want to know how to use all TPU cores.
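For comparison, a sketch of the spawn call without forcing `nprocs` (which the PJRT runtime ignores anyway, per the warning above): letting the runtime choose the process count and printing the global ordinal makes it easier to confirm that every core is participating, even though each process only addresses its local device(s).
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def print_device(index):
    # Each spawned process owns its local device(s); the global ordinal is unique across all cores.
    device = xm.xla_device()
    print("ordinal", xm.get_ordinal(), "of", xm.xrt_world_size(), "->", device)

if __name__ == "__main__":
    xmp.spawn(print_device, args=())  # let the PJRT runtime decide the process count
```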
|
https://github.com/pytorch/xla/issues/8215
|
closed
|
[
"question",
"distributed",
"xla:tpu"
] | 2024-10-04T02:54:18Z
| 2025-02-27T13:22:25Z
| null |
ghost
|
pytorch/torchchat
| 1,262
|
Support Granite Code 3B/8B
|
### 🚀 The feature, motivation and pitch
The `torchchat` framework provides an excellent platform for embedding models into many different edge-centric platforms.
The [Granite Code models](https://huggingface.co/collections/ibm-granite/granite-code-models-6624c5cec322e4c148c8b330), specifically the [3B-128k](https://huggingface.co/ibm-granite/granite-3b-code-instruct-128k) and [8B-128k](https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k) variants, are a family of models from IBM that support a wide variety of code-related tasks. The models are released under the Apache 2.0 license and are therefore well-suited to embedded use-cases where code intelligence is needed.
The request here is to extend the model support in `torchchat` to support running the 3B and 8B long-context variants of Granite Code in order to enable usage of these models across embedded use-cases.
### Alternatives
Depending on the goals of the `torchchat` framework, extending support to non-llama models may or may not be a project goal. There are other embedded frameworks out there (notably `llama.cpp` and the many projects that wrap it), so these can be used to run Granite Code in embedded environments. Our goal at IBM is to provide users with as many choices as possible on how to run all of our Granite family models, so our hope is that `torchchat` can be a strong piece of this story!
### Additional context
The 3B and 8B models use the `llama` architecture in `transformers`, so they are _close_ to fully supported as-is. There are a few crucial pieces that are present in the `transformers` implementation that are missing in `torchchat`:
* Safetensors support: https://github.com/pytorch/torchchat/issues/1249
* Tied word embeddings: https://github.com/pytorch/torchchat/issues/1252
* Bias tensors: https://github.com/pytorch/torchchat/issues/1250
* Non-tiktoken/sentencepiece tokenizers: https://github.com/pytorch/torchchat/issues/1251
### RFC (Optional)
I've worked through the initial steps of solving all of these outstanding issues (see the corresponding issues). Once these are solved, the addition of these Granite Code models should consist of the following steps:
* Adding new entries to [models.json](https://github.com/pytorch/torchchat/blob/main/torchchat/model_config/models.json)
* Adding the right set of model-specific params to [model_params](https://github.com/pytorch/torchchat/tree/main/torchchat/model_params)
|
https://github.com/pytorch/torchchat/issues/1262
|
closed
|
[] | 2024-10-03T16:18:08Z
| 2024-12-19T10:13:55Z
| 0
|
gabe-l-hart
|
huggingface/datasets
| 7,196
|
concatenate_datasets does not preserve shuffling state
|
### Describe the bug
After calling concatenate_datasets on iterable datasets, the shuffling state is destroyed, similar to #7156
This means concatenation can't be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623
I also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether this is actually surprising or not
### Steps to reproduce the bug
```python
import datasets
import torch.utils.data

def gen(shards):
    yield {"shards": shards}

def main():
    dataset1 = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={"shards": list(range(25))}  # TODO: how to understand this?
    )
    dataset2 = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={"shards": list(range(25, 50))}  # TODO: how to understand this?
    )
    dataset1 = dataset1.shuffle(buffer_size=1)
    dataset2 = dataset2.shuffle(buffer_size=1)
    print(dataset1.n_shards)
    print(dataset2.n_shards)
    dataset = datasets.concatenate_datasets(
        [dataset1, dataset2]
    )
    print(dataset.n_shards)
    # dataset = dataset1
    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        num_workers=0,
    )
    for i, batch in enumerate(dataloader):
        print(batch)
    print("\nNew epoch")
    dataset = dataset.set_epoch(1)
    for i, batch in enumerate(dataloader):
        print(batch)

if __name__ == "__main__":
    main()
```
### Expected behavior
Shuffling state should be preserved
### Environment info
Latest datasets
|
https://github.com/huggingface/datasets/issues/7196
|
open
|
[] | 2024-10-03T14:30:38Z
| 2025-03-18T10:56:47Z
| 1
|
alex-hh
|
huggingface/diffusers
| 9,575
|
diffusers version update to 0.27.0 from 0.20.0, training code seems not to work
|
I have trained an inpainting model using diffusers 0.20.0. The trained model works as expected. However, something seems wrong when I update the diffusers version to 0.27.0, while keeping the training code and other requirements the same. The training code runs successfully, but the inference outputs look like noise. Is there anything I should pay attention to in this case?
|
https://github.com/huggingface/diffusers/issues/9575
|
closed
|
[] | 2024-10-03T14:30:21Z
| 2024-10-15T08:58:36Z
| 4
|
huangjun12
|
pytorch/serve
| 3,339
|
Clarification on minWorkers and maxWorkers parameters
|
### 📚 The doc issue
I have some questions related to model parameters:
1. I know there is no autoscaling in TorchServe, and looking at the code, models scale to `minWorkers` workers on startup. `maxWorkers` seems to be used only when downscaling a model, meaning if `currentWorkers > maxWorkers`, it will kill `currentWorkers - maxWorkers` workers (`WorkloadManager.java:151`). Given that the number of workers only changes on a `scaleWorkers` API call, is there any practical use case for setting `minWorkers` != `maxWorkers`? For example, in `examples/cloud_storage_stream_inference/config.properties` `minWorkers` is set to 10 and `maxWorkers` to 1000; when do we want that?
2. In `docs/getting_started.md` it reads: `If you specify model(s) when you run TorchServe, it automatically scales backend workers to the number equal to available vCPUs (if you run on a CPU instance) or to the number of available GPUs (if you run on a GPU instance).`. I can't find any evidence of this behavior in the code; could somebody clarify whether this statement is true and how it works?
Thank you!
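For reference, a sketch of the management-API call that actually changes the worker count at runtime (model name and host are placeholders); as far as I can tell, this is the only path that moves a model between `minWorkers` and `maxWorkers` outside of startup and shutdown.
```python
import requests

# Ask TorchServe (management API, port 8081 by default) to scale "my_model" to 2-4 workers.
resp = requests.put(
    "http://localhost:8081/models/my_model",
    params={"min_worker": 2, "max_worker": 4, "synchronous": "true"},
)
print(resp.status_code, resp.text)
```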
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/3339
|
open
|
[] | 2024-10-03T13:07:00Z
| 2024-10-03T13:07:00Z
| 0
|
krzwaraksa
|
huggingface/transformers
| 33,909
|
How to implement weight decay towards the pre-trained model?
|
Hello, let me ask one question.
If using the HF Trainer for supervised fine-tuning, how do I implement penalizing the distance between the starting and current weights? This was shown to be effective in https://arxiv.org/abs/1706.03610
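To make the question concrete, here is a rough sketch (not an official recipe) of what I have in mind with a `Trainer` subclass: keep a frozen copy of the starting weights and add an L2 penalty toward them in `compute_loss`. The `l2sp_lambda` name and the CPU copy are my own choices, and parameter-name handling under DDP/FSDP wrapping would need extra care.
```python
import torch
from transformers import Trainer

class L2SPTrainer(Trainer):
    """Sketch: penalize squared distance to the pre-trained (starting) weights."""

    def __init__(self, *args, l2sp_lambda: float = 0.01, **kwargs):
        super().__init__(*args, **kwargs)
        self.l2sp_lambda = l2sp_lambda
        # Frozen copy of the starting point, kept on CPU to avoid doubling GPU memory.
        self.ref_params = {n: p.detach().clone().cpu() for n, p in self.model.named_parameters()}

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        loss, outputs = super().compute_loss(model, inputs, return_outputs=True, **kwargs)
        penalty = 0.0
        for name, param in model.named_parameters():
            if param.requires_grad and name in self.ref_params:
                penalty = penalty + ((param - self.ref_params[name].to(param.device)) ** 2).sum()
        loss = loss + self.l2sp_lambda * penalty
        return (loss, outputs) if return_outputs else loss
```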
|
https://github.com/huggingface/transformers/issues/33909
|
open
|
[
"Usage",
"Feature request"
] | 2024-10-03T11:18:53Z
| 2024-10-22T13:16:26Z
| null |
sedol1339
|
pytorch/serve
| 3,338
|
Throughput increases non-linearly with the number of workers
|
### 🐛 Describe the bug
I am hosting a BERT-like model using the TorchServe config below.
```
inference_address=http://localhost:8080
management_address=http://localhost:8081
metrics_address=http://localhost:8082
load_models=model_name=weights.mar
async_logging=true
job_queue_size=200
models={ "model_name": { "1.0": { "minWorkers": 8 , "batchSize": 8 , "maxBatchDelay": 10 } } }
```
I have 8 GPUs, so this setting gives me 1 worker per GPU.
Then I ran load tests with both k6 and Locust; the chart below shows the relationship between the number of workers (from 1 to 8) and throughput.

As can be seen in the chart, GPU usage drops as the number of workers increases, so it feels like the load balancer in TorchServe introduces the inefficiency. Can anyone give me some clues on how to improve the throughput further?
### Error logs
throughput increase non-linearly with number of workers
### Installation instructions
torchserve = "^0.10.0"
### Model Packaging
torchserve = "0.10.0"
### config.properties
inference_address=http://localhost:8080
management_address=http://localhost:8081
metrics_address=http://localhost:8082
load_models=model_name=weights.mar
async_logging=true
job_queue_size=200
models={ "model_name": { "1.0": { "minWorkers": 8 , "batchSize": 8 , "maxBatchDelay": 10 } } }
### Versions
$ python serve/ts_scripts/print_env_info.py
------------------------------------------------------------------------------------------
Environment headers
------------------------------------------------------------------------------------------
Torchserve branch:
torchserve==0.10.0
torch-model-archiver==0.11.0
Python version: 3.11 (64-bit runtime)
Python executable: /home/me/.cache/pypoetry/virtualenvs/pre-deploy-j4GApv9r-py3.11/bin/python
Versions of relevant python libraries:
numpy==1.24.3
nvgpu==0.10.0
pillow==10.4.0
psutil==6.0.0
requests==2.32.3
torch==2.3.1+cu121
torch-model-archiver==0.11.0
torch_tensorrt==2.3.0+cu121
torchserve==0.10.0
torchvision==0.18.1
transformers==4.44.2
wheel==0.44.0
torch==2.3.1+cu121
**Warning: torchtext not present ..
torchvision==0.18.1
**Warning: torchaudio not present ..
Java Version:
OS: Debian GNU/Linux 12 (bookworm)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: version 3.25.1
Is CUDA available: Yes
CUDA runtime version: N/A
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.54.15
cuDNN version: None
Environment:
library_path (LD_/DYLD_):
### Repro instructions
wget http://mar_file.mar
torch-model-archiver ...
torchserve --start
### Possible Solution
_No response_
|
https://github.com/pytorch/serve/issues/3338
|
open
|
[] | 2024-10-03T07:32:22Z
| 2024-10-08T10:33:28Z
| 2
|
vandesa003
|
pytorch/ao
| 1,002
|
How to calibrate a w8a8 quantized model?
|
I used the following code to quantize an LLM, employing a w8a8 quantization setting:
```python
model = AutoModelForCausalLM.from_pretrained("./Qwen1.5-0.5B-Chat").to(dtype=torch.bfloat16, device='cpu')
quantize_(model, int8_dynamic_activation_int8_weight())
```
Everything is running smoothly, but the model's accuracy has decreased significantly. How can I calibrate a quantized model to enhance its accuracy?
---
I have another question:
I printed out a parameter and noticed that the weights were quantized using per-channel quantization. What is the purpose of the fp16 AffineQuantizedTensor? Shouldn't the activation only require one scale parameter when using per-tensor quantization?
I'm not very familiar with the quantization mechanism in PyTorch, and I hope you can give me some tips.
```plaintxt
Parameter Name: model.layers.0.self_attn.q_proj.weight
Parameter Shape: torch.Size([1024, 1024])
Parameter Values: LinearActivationQuantizedTensor(AffineQuantizedTensor(data=tensor([[ 0.2148, -0.1196, -0.0898, ..., -0.0388, 0.0869, 0.0898],
[ 0.0830, -0.2188, -0.1436, ..., 0.0566, 0.0679, 0.0830],
[ 0.0552, -0.2480, -0.1621, ..., 0.0242, 0.0688, 0.0830],
...,
[ 0.0742, -0.0417, -0.1641, ..., -0.0356, 0.1543, -0.0566],
[-0.0640, 0.0771, 0.2695, ..., 0.0537, -0.1982, 0.0938],
[-0.1216, 0.1025, -0.1074, ..., -0.0327, 0.1592, -0.1123]],
dtype=torch.bfloat16)..., shape=torch.Size([1024, 1024]), block_size=(1, 1024), device=cpu, dtype=torch.bfloat16, requires_grad=False, layout_tensor=PlainAQTLayout(data=tensor([[ 72, -40, -30, ..., -13, 29, 30],
[ 22, -58, -38, ..., 15, 18, 22],
[ 16, -72, -47, ..., 7, 20, 24],
...,
[ 25, -14, -55, ..., -12, 52, -19],
[-19, 23, 80, ..., 16, -59, 28],
[-26, 22, -23, ..., -7, 34, -24]], dtype=torch.int8)... , scale=tensor([0.0030, 0.0038, 0.0034, ..., 0.0030, 0.0034, 0.0047],
dtype=torch.bfloat16)... , zero_point=tensor([0, 0, 0, ..., 0, 0, 0])... , layout_type=PlainLayoutType())), <function _int8_symm_per_token_reduced_range_quant at 0x751a4815fe20>)
```
|
https://github.com/pytorch/ao/issues/1002
|
closed
|
[] | 2024-10-03T03:55:31Z
| 2024-10-04T01:26:58Z
| null |
chenghuaWang
|
huggingface/datasets
| 7,189
|
Audio preview in dataset viewer for audio array data without a path/filename
|
### Feature request
Hugging Face has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, that all these guides assume the audio array data decoded/inserted into an HF dataset always originates from individual files. The [Audio dataclass](https://github.com/huggingface/datasets/blob/3.0.1/src/datasets/features/audio.py#L20) appears designed with this assumption in mind. Looking at its source code, it returns a dictionary with the keys `path`, `array` and `sampling_rate`.
However, sometimes users may have different pipelines where they themselves decode the audio array. This feature request is about clarifying in the guides whether it is possible, and if so how, to insert already decoded audio array data into datasets (pandas DataFrame, HF dataset or whatever) that are later saved as parquet, and still get a functioning audio preview in the dataset viewer.
Do I perhaps need to write a tempfile of my audio array slice to wav and capture the bytes object with `io.BytesIO` and pass that to `Audio()`?
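For what it's worth, a minimal sketch of the array-only route (no tempfiles), assuming `datasets` can encode a dict of `array` + `sampling_rate` once the column is declared as `Audio()` (this needs `soundfile` installed); whether the Hub viewer then renders a player for the exported parquet is exactly what I would like the docs to confirm.
```python
import numpy as np
from datasets import Audio, Dataset, Features

# One decoded slice per row: a dict with "array" and "sampling_rate", no source file path.
rows = {"audio": [{"array": np.random.randn(16_000).astype("float32"), "sampling_rate": 16_000}]}

features = Features({"audio": Audio(sampling_rate=16_000)})
ds = Dataset.from_dict(rows, features=features)  # arrays are encoded to audio bytes internally
ds.to_parquet("audio_slices.parquet")
```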
### Motivation
I'm working with large audio datasets, and my pipeline reads (decodes) audio from larger files, and slices the relevant portions of audio from that larger file based on metadata I have available.
The pipeline is designed this way to avoid having to store multiple copies of data, and to avoid having to store tens of millions of small files.
I tried [test-uploading parquet files](https://huggingface.co/datasets/Lauler/riksdagen_test) where I store the audio array data of decoded slices of audio in an `audio` column with a dictionary with the keys `path`, `array` and `sampling_rate`. But I don't know the secret sauce of what the Huggingface Hub expects and requires to be able to display audio previews correctly.
### Your contribution
I could contribute a tool agnostic guide of creating HF audio datasets directly as parquet to the HF documentation if there is an interest. Provided you help me figure out the secret sauce of what the dataset viewer expects to display the preview correctly.
|
https://github.com/huggingface/datasets/issues/7189
|
open
|
[
"enhancement"
] | 2024-10-02T16:38:38Z
| 2024-10-02T17:01:40Z
| 0
|
Lauler
|
huggingface/transformers.js
| 958
|
Zombies in memory - something is blocking (re)loading of Whisper after a page is closed and re-opened
|
### Question
I've been trying to debug this issue all afternoon, but haven't gotten any further. The code runs on desktop, but not on Android Chrome.
This is with V3 Alpha 19.
<img width="571" alt="Screenshot 2024-10-02 at 16 06 16" src="https://github.com/user-attachments/assets/c5fbb2cb-0cdf-431a-8099-021d19a10384">
<img width="569" alt="Screenshot 2024-10-02 at 16 06 40" src="https://github.com/user-attachments/assets/d09a6b09-0a05-4d38-af0e-d1c88a08003c">
<img width="569" alt="Screenshot 2024-10-02 at 16 06 56" src="https://github.com/user-attachments/assets/fc3de899-dfdb-425a-92c1-69e3c40b4fd8">
|
https://github.com/huggingface/transformers.js/issues/958
|
closed
|
[
"question"
] | 2024-10-02T14:10:27Z
| 2024-10-18T12:47:17Z
| null |
flatsiedatsie
|
pytorch/vision
| 8,669
|
performance degradation in to_pil_image after v0.17
|
### 🐛 Describe the bug
`torchvision.transforms.functional.to_pil_image` is much slower when converting torch.float16 image tensors to PIL Images, based on my benchmarks (serializing 360 images):
Dependencies:
```
Python 3.11
Pillow 10.4.0
```
Before (torch 2.0.1, torchvision v0.15.2, [Code here](https://github.com/pytorch/vision/blob/fa99a5360fbcd1683311d57a76fcc0e7323a4c1e/torchvision/transforms/functional.py#L244)): 23 seconds
After ( torch 2.2.0, torchvision v0.17, [Code here](https://github.com/pytorch/vision/blob/b2383d44751bf85e58cfb9223bbf4e5961c09fa1/torchvision/transforms/functional.py#L245)): 53 seconds
How to reproduce:
```python
import time
import torch
from torchvision.transforms.functional import to_pil_image

rand_img_tensor = torch.rand(3, 512, 512, dtype=torch.float16)
start_time = time.time()
for _ in range(50):
    pil_img = to_pil_image(rand_img_tensor)
end_time = time.time()
print(end_time - start_time) # seconds
```
Run the above script with both versions of dependencies listed, and the time difference is apparent.
The cause seems to be [this PR](https://github.com/pytorch/vision/commit/15c166ac127db5c8d1541b3485ef5730d34bb68a)
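A possible interim workaround (a sketch, not a fix for the regression itself, and I have not benchmarked it across versions) is to convert to `uint8` on the tensor side first, so `to_pil_image` never takes the slow float16 path:
```python
import time
import torch
from torchvision.transforms.functional import to_pil_image

rand_img_tensor = torch.rand(3, 512, 512, dtype=torch.float16)

start_time = time.time()
for _ in range(50):
    # Scale to [0, 255] and hand to_pil_image a uint8 tensor instead of float16.
    pil_img = to_pil_image((rand_img_tensor * 255).round().to(torch.uint8))
print(f"{time.time() - start_time:.2f} seconds")
```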
|
https://github.com/pytorch/vision/issues/8669
|
open
|
[] | 2024-10-02T08:25:01Z
| 2024-10-25T13:06:15Z
| 5
|
seymurkafkas
|
huggingface/diffusers
| 9,567
|
[community] Improving docstrings and type hints
|
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:
    r"""
    Function that creates a simulation.

    Args:
        parameter_1 (`str` or `List[str]`):
            Description of game level.
        parameter_2 (`int`, *optional*):
            Kardashev scale of civilization.
        parameter_3 (`float`, defaults to `42.0`):
            Difficulty scale.

    Returns:
        [`~simulations.objects.Civilization`]
            A civilization simulation with provided initialization parameters.
    """
```
Some examples that don't follow the docstring convention are:
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations
- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas as [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after "Args", but should be before
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation
There are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!
Please limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews.
|
https://github.com/huggingface/diffusers/issues/9567
|
closed
|
[
"documentation",
"good first issue",
"contributions-welcome"
] | 2024-10-02T03:20:44Z
| 2025-11-13T22:45:59Z
| 16
|
a-r-r-o-w
|
huggingface/datasets
| 7,186
|
pinning `dill<0.3.9` without pinning `multiprocess`
|
### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9`, which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for `multiprocess`, something like `multiprocess<=0.70.16`, so that the `dill` version stays compatible?
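Until a pin lands upstream, a user-side workaround is to constrain both packages together so the resolver never has to backtrack; the exact bounds below are an assumption on my part, chosen only so that `dill` and `multiprocess` agree with each other.
```python
# Hypothetical excerpt of a downstream project's dependency list (setup.py / pyproject style):
install_requires = [
    "datasets",
    "dill>=0.3.0,<0.3.9",
    "multiprocess<0.70.17",  # 0.70.17 is the first release that requires dill>=0.3.9
]
```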
### Steps to reproduce the bug
NA
### Expected behavior
NA
### Environment info
NA
|
https://github.com/huggingface/datasets/issues/7186
|
closed
|
[] | 2024-10-01T22:29:32Z
| 2024-10-02T06:08:24Z
| 0
|
shubhbapna
|
pytorch/torchchat
| 1,249
|
Support Huggingface models from safetensors
|
### 🚀 The feature, motivation and pitch
There are many models on Huggingface that are published as `safetensors` rather than `model.pth` checkpoints. The request here is to support converting and loading those checkpoints into a format that is usable with `torchchat`.
There are several places where this limitation is currently enforced:
* [_download_hf_snapshot](https://github.com/pytorch/torchchat/blob/main/torchchat/cli/download.py#L36) method explicitly ignores `safetensors` files.
* [convert_hf_checkpoint](https://github.com/pytorch/torchchat/blob/main/torchchat/cli/convert_hf_checkpoint.py#L44) explicitly looks for `pytorch_model.bin.index.json` which would be named differently for models that use `safetensors` (e.g. `model.safetensors.index.json`)
* [convert_hf_checkpoint](https://github.com/pytorch/torchchat/blob/main/torchchat/cli/convert_hf_checkpoint.py#L99) only supports `torch.load` to load the `state_dict` rather than `safetensors.torch.load`
### Alternatives
Currently, this `safetensors` -> `model.pth` conversion can be accomplished manually after downloading a model locally, so this could be solved with documentation instead of code.
### Additional context
This issue is a piece of the puzzle for adding support for Granite Code 3b/8b, which use the `llama` architecture in `transformers` but take advantage of several pieces of the architecture that are not currently supported by `torchchat`. The work-in-progress for Granite Code can be found on my fork: https://github.com/gabe-l-hart/torchchat/tree/GraniteCodeSupport
### RFC (Optional)
I have a working implementation to support `safetensors` during download and conversion that I plan to submit as a PR. The changes address the three points in code referenced above:
1. Allow the download of `safetensors` files in `_download_hf_snapshot`
* I'm not yet sure how to avoid double-downloading weights for models that have both `safetensors` and `model.pth`, so will look to solve this before concluding the work
2. When looking for the tensor index file, search for all files ending in `.index.json`, and if a single file is found, use that one
3. When loading the `state_dict`, use the correct method based on the type of file (`torch.load` or `safetensors.torch.load`)
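As a rough illustration of point 3, the loader switch could be as small as the sketch below; the paths are placeholders and `weights_only` assumes a reasonably recent torch.
```python
import torch
from safetensors.torch import load_file

def load_checkpoint(path: str) -> dict:
    # Both branches return a flat {tensor_name: tensor} state dict on CPU.
    if path.endswith(".safetensors"):
        return load_file(path, device="cpu")
    return torch.load(path, map_location="cpu", weights_only=True)
```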
|
https://github.com/pytorch/torchchat/issues/1249
|
closed
|
[] | 2024-10-01T22:07:59Z
| 2024-10-04T19:18:22Z
| 2
|
gabe-l-hart
|
pytorch/torchtitan
| 594
|
Support Gemma2 in torchtitan
|
Are there any plans to support Gemma2 in torchtitan? I tried to use torchtitan to finetune a Gemma2 model, but got stuck on the following problem: how to parallelize the tied embedding layer in the Gemma2 model? Maybe somebody knows the solution to this problem 😄
|
https://github.com/pytorch/torchtitan/issues/594
|
closed
|
[
"bug",
"question"
] | 2024-10-01T11:50:15Z
| 2025-03-20T18:32:31Z
| null |
pansershrek
|
huggingface/chat-ui
| 1,499
|
Error 500 "RPError" | OpenID Connect + SafeNet Trusted Access (STA)
|
Hello,
I would like to deploy OpenID Connect with SafeNet Trusted Access (STA).
From this 3-minute video, I've done all the steps, except for OAuth.tools which I don't use:
https://www.youtube.com/watch?v=hSWXFSadpQQ
Here's my bash script that deploys the containers (```deploy.sh```):
```bash
#!/bin/bash
# previous containers removed
sudo docker rm -f ollama
sudo docker rm -f mongodb
sudo docker rm -f chat-ui
sudo docker rm -f nginx
# previous networks removed
sudo docker network rm backend >/dev/null 2>&1
sudo docker network rm proxy >/dev/null 2>&1
# create networks
sudo docker network create backend
sudo docker network create proxy
# ollama
sudo docker run -d -p 11434:11434 -e HTTPS_PROXY="${HTTPS_PROXY}" -v /home/<my-user>/chat-ui/ollama:/root/.ollama --name ollama --network backend ollama-with-ca
sleep 5
sudo docker exec ollama taskset -c 0-40 ollama run llama3.1
# mongodb
sudo docker run -d -p 27017:27017 -v mongodb-data:/data/db --name mongodb --network backend mongo:latest
# chat-ui
sudo docker run -d -p 3000:3000 -e HTTPS_PROXY="${HTTPS_PROXY}" --mount type=bind,source="$(pwd)/.env.local",target=/app/.env.local -v chat-ui:/data --name chat-ui --network backend ghcr.io/huggingface/chat-ui-db
sudo docker network connect proxy chat-ui
# nginx
sudo docker run -d -p 80:80 -p 443:443 -v "$(pwd)/nginx:/etc/nginx/conf.d" -v "$(pwd)/ssl:/etc/ssl" --name nginx --network proxy nginx:latest
```
Here's my ```nginx``` configuration :
```nginx
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name <my-chat-ui>.fr;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name <my-chat-ui>.fr;
ssl_certificate /etc/ssl/chat-ui.crt;
ssl_certificate_key /etc/ssl/chat-ui.key;
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
send_timeout 60;
client_max_body_size 2G;
proxy_buffering off;
client_header_buffer_size 8k;
location / {
proxy_pass http://chat-ui:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
add_header 'Access-Control-Allow-Origin' 'https://<my-chat-ui>.fr' always;
}
}
```
Finally, here's my ```.env.local``` using Llama3.1 8B model :
```.env
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=hf_*****
OPENID_CONFIG=`{
"PROVIDER_URL": "https://idp.eu.safenetid.com/auth/realms/<realm-ID>-STA/protocol/openid-connect/auth",
"CLIENT_ID": "*****",
"CLIENT_SECRET": "*****",
"SCOPES": "openid profile"
}`
MODELS=`[
{
"name": "Ollama | Llama3.1",
"id": "llama3.1-8b",
"description": "llama3.1-8b",
"chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}<|start_header_id|>assistant<|end_header_id|>\n\n",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["<|end_of_text|>", "<|eot_id|>"]
},
"endpoints": [
{
"type": "ollama",
"url" : "http://ollama:11434",
"ollamaName" : "llama3.1:latest"
}
]
}
]`
```
And I get this error when I press the "Login" button:

When I run the command ```sudo docker logs chat-ui```, I see this line:
```{"level":50,"time":1727703253975,"pid":30,"hostname":"fe9d8f548283","locals":{"sessionId":"3b700cd7b4efc2a2b47c0f13134904e01f01c3b7d6ff05c6726390e19ea5d431"},"url":"https://ia.chu-lyon.fr/login","params":{},"request":{},"message":"Internal Error","error":{"name":"RPError"},"errorId":"8d7d74e3-b12c-4c1e-9dc5-9847d5e61ea2","status":500}```
**Note that by adding the ```OPENID_CONFIG``` (with probably incorrect data), the application stops working completely and I can't launch prompts or delete/edit existing ones !**
**When I comment ```OPENID_CONFIG```, everything starts working properly again.**
I don't really know what to put exactly, especially for ```PROVIDER_URL``` and ```SCOPES```.
Can you help me to resolve this issue ?
Thanks in advance.
|
https://github.com/huggingface/chat-ui/issues/1499
|
open
|
[
"support"
] | 2024-09-30T12:54:16Z
| 2024-09-30T12:57:51Z
| 0
|
avirgos
|
huggingface/diffusers
| 9,560
|
FP32 training for sd3 controlnet
|
Hi,
I have been using `examples\controlnet\train_controlnet_sd3.py` for ControlNet training for a while, and I have some questions and would like your advice.
1. In line 1097:
`vae.to(accelerator.device, dtype=torch.float32)`
It seems we should use fp32 for the VAE, but as far as I know, SD3 currently has no fp32 checkpoints, so does it really work if we upcast the fp16 weights to fp32?
2. Before running the training script, `accelerate config` lets you specify whether to use mixed precision. Since SD3 only has an fp16 checkpoint at present, I don't know which option to choose, 'fp16' or 'no'.
Really appreciate your advice!
@sayakpaul @DavyMorgan
|
https://github.com/huggingface/diffusers/issues/9560
|
closed
|
[
"stale"
] | 2024-09-30T08:07:04Z
| 2024-10-31T15:13:19Z
| 11
|
xduzhangjiayu
|
huggingface/huggingface_hub
| 2,578
|
What is the highest Python version currently supported?
|
### Describe the bug
I utilized Hugging Face Spaces to construct my application, which was built using Gradio on a ZeroGPU Space; the link is: https://huggingface.co/spaces/tanbw/CosyVoice
In the README.md, I specified the Python version as 3.8.9, but the version of Python that the application prints out is still 3.10. What is the highest Python version currently supported?



### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.24.5
- Platform: Linux-5.10.223-211.872.amzn2.x86_64-x86_64-with-glibc2.36
- Python version: 3.10.13
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/user/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 10.4.0
- hf_transfer: 0.1.8
- gradio: 4.44.0
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: 2.7.0
- aiohttp: 3.10.0
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/user/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/user/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/user/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: True
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
|
https://github.com/huggingface/huggingface_hub/issues/2578
|
closed
|
[
"bug"
] | 2024-09-29T14:37:38Z
| 2024-09-30T07:05:29Z
| null |
tanbw
|
huggingface/diffusers
| 9,555
|
[Flux Controlnet] Add control_guidance_start and control_guidance_end
|
It'd be nice to have `control_guidance_start` and `control_guidance_end` parameters added to the Flux ControlNet and ControlNet Inpainting pipelines.
I'm currently experimenting with Flux ControlNet Inpainting, but the results are poor even with `controlnet_conditioning_scale` set to 0.6.
I have to set `controlnet_conditioning_scale` to 0.4 to get non-broken results.
Maybe giving more control with the guidance start and end would help reach better results?
|
https://github.com/huggingface/diffusers/issues/9555
|
closed
|
[
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-29T12:37:39Z
| 2024-10-10T12:29:03Z
| 8
|
simbrams
|
huggingface/hub-docs
| 1,435
|
How to check if a space is duplicated from another one using HF API?
|
I cannot find any related specifications in the documentation...Thanks!
|
https://github.com/huggingface/hub-docs/issues/1435
|
open
|
[] | 2024-09-28T23:52:08Z
| 2025-01-16T17:08:34Z
| null |
zhimin-z
|
huggingface/diffusers
| 9,551
|
How to use x-labs flux controlnet models in diffusers?
|
### Model/Pipeline/Scheduler description
The following ControlNets are supported in ComfyUI, but I was wondering how we can use them in diffusers as well. Afaik, there is no `from_single_file` method for `FluxControlNet` to load the safetensors?
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/XLabs-AI/flux-controlnet-canny
https://huggingface.co/XLabs-AI/flux-controlnet-canny-v3
_No response_
|
https://github.com/huggingface/diffusers/issues/9551
|
closed
|
[] | 2024-09-28T20:01:15Z
| 2024-09-29T06:59:46Z
| null |
neuron-party
|
huggingface/text-generation-inference
| 2,583
|
How to turn on the KV cache when serving a model?
|
### System Info
TGI 2.3.0
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
The TTFT is much slower than vLLM. Can it be improved? If so, how do I turn on the KV cache when launching a model?
```
model=HuggingFaceH4/zephyr-7b-beta
# share a volume with the Docker container to avoid downloading weights every run
volume=$PWD/data
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
ghcr.io/huggingface/text-generation-inference:2.3.0 --model-id $model
```
### Expected behavior
Improve the TTFT and latency
|
https://github.com/huggingface/text-generation-inference/issues/2583
|
open
|
[] | 2024-09-28T19:32:15Z
| 2024-10-25T12:47:02Z
| null |
hahmad2008
|
pytorch/torchchat
| 1,222
|
Clearer model download documentation
|
### 🐛 Describe the bug
From the README, it's not very clear how to download different flavors/sizes of the models from HF, unless someone goes to the next section and finds the inventory list https://github.com/pytorch/torchchat#download-weights
It might be helpful to show the inventory list command before the download command.
Also, now that we have 3.2, it would be great to update the docs.
```
/torchchat$ python3 torchchat.py list
Model Aliases Downloaded
-------------------------------------------- ---------------------------------------------------------- -----------
meta-llama/llama-2-7b-hf llama2-base, llama2-7b
meta-llama/llama-2-7b-chat-hf llama2, llama2-chat, llama2-7b-chat
meta-llama/llama-2-13b-chat-hf llama2-13b-chat
meta-llama/llama-2-70b-chat-hf llama2-70b-chat
meta-llama/meta-llama-3-8b llama3-base
meta-llama/meta-llama-3-8b-instruct llama3, llama3-chat, llama3-instruct Yes
meta-llama/meta-llama-3-70b-instruct llama3-70b
meta-llama/meta-llama-3.1-8b llama3.1-base
meta-llama/meta-llama-3.1-8b-instruct llama3.1, llama3.1-chat, llama3.1-instruct
meta-llama/meta-llama-3.1-70b-instruct llama3.1-70b
meta-llama/meta-llama-3.1-8b-instruct-tune llama3.1-tune, llama3.1-chat-tune, llama3.1-instruct-tune
meta-llama/meta-llama-3.1-70b-instruct-tune llama3.1-70b-tune
meta-llama/meta-llama-3.2-1b llama3.2-1b-base
meta-llama/meta-llama-3.2-1b-instruct llama3.2-1b, llama3.2-1b-chat, llama3.2-1b-instruct
meta-llama/llama-guard-3-1b llama3-1b-guard, llama3.2-1b-guard
meta-llama/meta-llama-3.2-3b llama3.2-3b-base
meta-llama/meta-llama-3.2-3b-instruct llama3.2-3b, llama3.2-3b-chat, llama3.2-3b-instruct
meta-llama/llama-3.2-11b-vision llama3.2-11B-base, Llama-3.2-11B-Vision-base
meta-llama/llama-3.2-11b-vision-instruct llama3.2-11B, Llama-3.2-11B-Vision, Llama-3.2-mm
meta-llama/codellama-7b-python-hf codellama, codellama-7b
meta-llama/codellama-34b-python-hf codellama-34b
mistralai/mistral-7b-v0.1 mistral-7b-v01-base
mistralai/mistral-7b-instruct-v0.1 mistral-7b-v01-instruct
mistralai/mistral-7b-instruct-v0.2 mistral, mistral-7b, mistral-7b-instruct
openlm-research/open_llama_7b open-llama, open-llama-7b
stories15m
stories42m
stories110m
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0.dev20240901+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1068-aws-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s):
|
https://github.com/pytorch/torchchat/issues/1222
|
closed
|
[
"documentation",
"actionable"
] | 2024-09-27T22:16:38Z
| 2024-09-30T16:02:55Z
| 4
|
HamidShojanazeri
|
pytorch/xla
| 8,088
|
Is this content still relevant?
|
## 📚 Documentation
xla/docs/README contains the following text. Is this text still relevant? The link to CircleCi is broken and I'm not sure if this information is useful:
-------------------------------
## Publish documentation for a new release.
CI job `pytorch_xla_linux_debian11_and_push_doc` is specified to run on `release/*` branches, but it was not
run on release branches due to "Only build pull requests" setting. Turning off "Only build pull requests" will result
in much larger volumes in jobs which is often unnecessary. We're waiting for [this feature request](https://ideas.circleci.com/ideas/CCI-I-215)
to be implemented so that we could override this setting on some branches.
Before the feature is available on CircleCi side, we'll use a manual process to publish documentation for release.
[Documentation for master branch](http://pytorch.org/xla/master/) is still updated automatically by the CI job.
But we'll need to manually commit the new versioned doc and point http://pytorch.org/xla to the documentation of new
stable release.
Take 2.3 release as example:
```
# Build pytorch/pytorch:release/2.3 and pytorch/xla:release/2.3 respectively.
# In pytorch/xla/docs
./docs_build.sh
git clone -b gh-pages https://github.com/pytorch/xla.git /tmp/xla
cp -r build/* /tmp/xla/release/2.3
cd /tmp/xla
# Update `redirect_url` in index.md
git add .
git commit -m "Publish 2.3 documentation."
git push origin gh-pages
```
--------------------------------------
I would suggest we remove this and replace it with instructions on how to update index.rst to include any new documentation on pytorch.org.
|
https://github.com/pytorch/xla/issues/8088
|
closed
|
[
"question",
"documentation"
] | 2024-09-27T22:02:37Z
| 2025-03-06T13:05:38Z
| null |
mikegre-google
|
pytorch/TensorRT
| 3,192
|
❓ [Question] When should I use Torch-TensorRT instead of TensorRT?
|
I generally use NVIDIA's TensorRT as the inference framework. I want to know the advantages and disadvantages of Torch-TensorRT compared to TensorRT, so that I can decide when to use Torch-TensorRT. I guess Torch-TensorRT might be simpler and more user-friendly. Also, have you tested and compared their inference speed and GPU memory usage?
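For concreteness, here is roughly what the Torch-TensorRT path looks like (a sketch; the model and shapes are placeholders): the module stays a regular PyTorch model, unsupported ops can fall back to PyTorch, and there is no hand-written engine-building or serialization step, which seems to be the main convenience over raw TensorRT.
```python
import torch
import torch_tensorrt
import torchvision.models as models

# Compile an eval-mode model for TensorRT execution while keeping the PyTorch module interface.
model = models.resnet18(weights=None).eval().cuda()
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # allow TensorRT to pick fp16 kernels where it helps
)

with torch.no_grad():
    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
print(out.shape)
```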
|
https://github.com/pytorch/TensorRT/issues/3192
|
closed
|
[
"question"
] | 2024-09-27T15:51:32Z
| 2024-10-02T16:22:54Z
| null |
EmmaThompson123
|
huggingface/transformers.js
| 948
|
Getting Local models/wasm working with Create React App
|
### Question
I realize there's been a lot of talk about this in other issues, but I'm trying to figure out whether local-only model and wasm files will work with Create React App. I'm using `WhisperForConditionalGeneration` from `@huggingface/transformers` version `3.0.0-alpha.9`.
My setup:
```
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.backends.onnx.wasm.wasmPaths = process.env.PUBLIC_URL + "/dictation/";
env.localModelPath = process.env.PUBLIC_URL + "/dictation/models/";
```
... and in my `{packagename}/public/models` folder I've got:
```
ort-wasm-simd-threaded.jsep.wasm
models/config.json
models/generation_config.json
models/preprocessor_config.json
models/tokenizer_config.json
models/tokenizer.json
models/onnx/decoder_model_merged_q4.onnx
models/onnx/encoder_model.onnx
```
This returns the `SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON` error that has been [discussed in other issues](https://github.com/xenova/transformers.js/issues/142). If I set `env.allowRemoteModels = true;` and
`env.allowLocalModels = false;`, and clear my application cache, this works fine. My questions on that:
1. How can I get the `wasm` file to load locally only? It caches fine and calls locally (http://localhost:3000/dictation/ort-wasm-simd-threaded.jsep.wasm) after the initial CDN call, but I don't want to rely on an external CDN.
2. How can I get the model files to only call locally? (we will need to further train our own models). I have yet to get this working, but I assume the above error is to blame.
3. The main question: is this a limitation with CRA? I noticed that if I load the wasm file from the CDN first, it caches fine locally. It's just that initial call to the wasm local file (if not cached from the CDN) that fails, which people have said may be a CRA issue.
Thanks! Sorry for the long-winded question. Happy to provide any more code if needed.
|
https://github.com/huggingface/transformers.js/issues/948
|
closed
|
[
"question"
] | 2024-09-26T20:42:33Z
| 2024-09-26T21:26:30Z
| null |
stinoga
|
huggingface/blog
| 2,369
|
How to finetune jina-embeddings-v3 with LoRA?
|
https://github.com/huggingface/blog/issues/2369
|
open
|
[] | 2024-09-26T07:25:16Z
| 2024-09-26T07:25:16Z
| null |
LIUKAI0815
|
|
pytorch/vision
| 8,661
|
references/segmentation/coco_utils might require merging rles?
|
https://github.com/pytorch/vision/blob/6d7851bd5e2bedc294e40e90532f0e375fcfee04/references/segmentation/coco_utils.py#L27-L41 The code above seems to assume that objects are not occluded, since it does not merge the RLEs returned by `frPyObjects`. In such a case, I think it should be changed to
```python
rles = coco_mask.frPyObjects(polygons, height, width)
rle = coco_mask.merge(rles)
mask = coco_mask.decode(rle)
```
Is there any specific reason for this, or am I wrong?
|
https://github.com/pytorch/vision/issues/8661
|
open
|
[] | 2024-09-26T02:53:47Z
| 2024-10-11T13:36:25Z
| 1
|
davidgill97
|
huggingface/text-generation-inference
| 2,569
|
Question: What is the preferred way to cite TGI/the repo? Didn't see a citation file.
|
https://github.com/huggingface/text-generation-inference/issues/2569
|
open
|
[] | 2024-09-26T02:07:42Z
| 2024-09-26T02:07:42Z
| null |
mkultraWasHere
|
|
huggingface/lerobot
| 454
|
Venv isn't needed in docker
|
I noticed in your Dockerfiles you are using a virtual environment. Docker is already a virtual environment at the system level. Is there a reason for using a Python virtual environment as well? Typically, this is redundant/unnecessary, and you'd only use venv or similar on your local machine.
If there isn't a good reason, we could go ahead and delete these dependencies from the Docker images.
|
https://github.com/huggingface/lerobot/issues/454
|
closed
|
[
"enhancement",
"question",
"stale"
] | 2024-09-25T16:33:17Z
| 2025-10-23T02:29:11Z
| null |
MichaelrMentele
|
pytorch/xla
| 8,071
|
Optimizer Memory in AdamW/Adam vs SGD
|
## ❓ Questions and Help
It is my understanding that Adam should use more memory than SGD because it keeps track of additional optimizer state per parameter. However, when I look at my profiles for the Adam and SGD optimizers, I see that they use roughly the same amount of memory.
Does torch XLA somehow do optimizations on the optimizers to reduce the memory usage or something else? Any guidance on how to investigate this would be appreciated!
|
https://github.com/pytorch/xla/issues/8071
|
closed
|
[] | 2024-09-25T16:01:53Z
| 2024-11-16T20:30:20Z
| 1
|
dangthatsright
|
pytorch/audio
| 3,835
|
Not building CUDA 12.6
|
### 🐛 Describe the bug
It's not building with the latest version of CUDA (12.6.1) on a Jetson AGX Orin.
```bash
#!/usr/bin/env bash
set -ex
echo "Building torchaudio ${TORCHAUDIO_VERSION}"
apt-get update
apt-get install -y --no-install-recommends \
git \
pkg-config \
libffi-dev \
libsndfile1
rm -rf /var/lib/apt/lists/*
apt-get clean
git clone --branch v${TORCHAUDIO_VERSION} --recursive --depth=1 https://github.com/pytorch/audio /opt/torchaudio
cd /opt/torchaudio
git checkout v${TORCHAUDIO_VERSION}
BUILD_SOX=1 python3 setup.py bdist_wheel --verbose --dist-dir /opt
cd ../
rm -rf /opt/torchaudio
pip3 install --no-cache-dir --verbose /opt/torchaudio*.whl
pip3 show torchaudio && python3 -c 'import torchaudio; print(torchaudio.__version__);'
twine upload --verbose /opt/torchaudio*.whl || echo "failed to upload wheel to ${TWINE_REPOSITORY_URL}"
```
```bash
src/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda/include -Wall -D_GLIBCXX_USE_CXX11_ABI=1 -O3 -DNDEBUG -std=gnu++17 -fPIC -D_GLIBCXX_USE_CXX11_ABI=1 -MD -MT src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg4.dir/pybind/pybind.cpp.o -MF src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg4.dir/pybind/pybind.cpp.o.d -o src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg4.dir/pybind/pybind.cpp.o -c /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp
In file included from /usr/local/lib/python3.10/dist-packages/torch/include/c10/util/Exception.h:5,
from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/BlasBackend.h:3,
from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/Context.h:3,
from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/ATen.h:7,
from /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /opt/torchaudio/src/libtorio/ffmpeg/ffmpeg.h:3,
from /opt/torchaudio/src/libtorio/ffmpeg/hw_context.h:3,
from /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:1:
/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp: In function ‘int torio::io::{anonymous}::{anonymous}::read_func(void*, uint8_t*, int)’:
/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:125:19: warning: comparison of integer expressions of different signedness: ‘long unsigned int’ and ‘int’ [-Wsign-compare]
125 | chunk_len <= request,
| ~~~~~~~~~~^~~~~~~~~~
In file included from /opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavutil/avutil.h:296,
from /opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavutil/samplefmt.h:24,
from /opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavcodec/avcodec.h:31,
from /opt/torchaudio/src/libtorio/ffmpeg/ffmpeg.h:10,
from /opt/torchaudio/src/libtorio/ffmpeg/hw_context.h:3,
from /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:1:
/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp: In function ‘int torio::io::{anonymous}::read_bytes(void*, uint8_t*, int)’:
/opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f4-src/include/libavutil/common.h:105:25: warning: comparison of integer expressions of different signedness: ‘std::basic_string_view<char>::size_type’ {aka ‘long unsigned int’} and ‘int’ [-Wsign-compare]
105 | #define FFMIN(a,b) ((a) > (b) ? (b) : (a))
| ~~~~^~~~~
/opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp:202:19: note: in expansion of macro ‘FFMIN’
202 | auto num_read = FFMIN(wrapper->src.size() - wrapper->index, buf_size);
| ^~~~~
[82/92] /usr/bin/c++ -DTORIO_FFMPEG_EXT_NAME=_torio_ffmpeg6 -DUSE_C10D_GLOO -DUSE_C10D_MPI -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_torio_ffmpeg6_EXPORTS -I/opt/torchaudio/src -I/usr/include/python3.10 -I/opt/torchaudio/build/temp.linux-aarch64-cpython-310/_deps/f6-src/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda/include -Wall -D_GLIBCXX_USE_CXX11_ABI=1 -O3 -DNDEBUG -std=gnu++17 -fPIC -D_GLIBCXX_USE_CXX11_ABI=1 -MD -MT src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg6.dir/pybind/pybind.cpp.o -MF src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg6.dir/pybind/pybind.cpp.o.d -o src/libtorio/ffmpeg/CMakeFiles/_torio_ffmpeg6.dir/pybind/pybind.cpp.o -c /opt/torchaudio/src/libtorio/ffmpeg/pybind/pybind.cpp
In file included from /usr/local/lib/python3.10/dist-packages/torch/include/c10/util/Exception.h:5,
from /usr/local/lib/python3.10/dist-packages/torch/include/ATen/BlasBackend.h:3,
from /usr/loca
|
https://github.com/pytorch/audio/issues/3835
|
closed
|
[] | 2024-09-25T10:10:21Z
| 2025-01-08T12:54:20Z
| 2
|
johnnynunez
|
huggingface/diffusers
| 9,528
|
load_ip_adapter for distilled sd models
|
Is it possible to load IP-Adapter for distilled SD v1 or v2 based models such as nota-ai/bk-sdm-tiny or nota-ai/bk-sdm-v2-tiny?
When I tried to load the IP-Adapter using bk-sdm-tiny:
```python
pipe.load_ip_adapter(
"h94/IP-Adapter",
subfolder="models",
weight_name="ip-adapter-plus_sd15.bin",
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True
)
```
I got errors, probably because of differences in unet structures.
```
RuntimeError: Error(s) in loading state_dict for IPAdapterAttnProcessor2_0:
size mismatch for to_k_ip.0.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for to_v_ip.0.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([640, 768]).
```
How can I solve this problem?
|
https://github.com/huggingface/diffusers/issues/9528
|
closed
|
[
"stale"
] | 2024-09-25T04:31:00Z
| 2025-01-12T06:01:40Z
| 7
|
kmpartner
|
pytorch/examples
| 1,289
|
Does torchrun + FSDP create multiple copies of the same dataset and model?
|
In the [example T5 training code](https://github.com/pytorch/examples/blob/cdef4d43fb1a2c6c4349daa5080e4e8731c34569/distributed/FSDP/T5_training.py#L77C24-L77C35), the main function creates a copy of the model and dataset regardless of the worker rank before passing it to FSDP. Does this mean that there are n copies of the model and dataset when running the script with torchrun and n processes?
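Related to this, a sketch of the usual pattern (assuming torchrun has already initialized the default process group): the dataset and model objects are indeed constructed once per process, but a `DistributedSampler` restricts which indices each rank iterates, and FSDP shards the parameters after that per-process construction, so the ranks don't end up training on duplicated batches.
```python
import torch
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

# Every rank builds the same dataset object, but only iterates its own shard of indices.
dataset = TensorDataset(torch.arange(1024).unsqueeze(1))
sampler = DistributedSampler(dataset, shuffle=True)  # reads rank/world size from the process group
loader = DataLoader(dataset, batch_size=8, sampler=sampler, num_workers=2)

for epoch in range(2):
    sampler.set_epoch(epoch)  # so each epoch gets a different shuffle
    for (batch,) in loader:
        pass
```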
|
https://github.com/pytorch/examples/issues/1289
|
open
|
[] | 2024-09-25T03:59:24Z
| 2024-09-25T04:25:55Z
| 1
|
tsengalb99
|
huggingface/chat-ui
| 1,486
|
Getting 403 on chat ui config for aws sagemaker endpoint
|
Hi All,
Looking into configuring chat-ui with an AWS SageMaker endpoint and getting the following error:

```
DOTENV_LOCAL was found in the ENV variables. Creating .env.local file.
{"level":30,"time":1727231014113,"pid":23,"hostname":"fbe21dc3ad38","msg":"Starting server..."}
{"level":30,"time":1727231014147,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] Begin check..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Update search assistants\" already applied. Skipping..."}
Listening on 0.0.0.0:3000
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Update deprecated models in assistants with the default model\" should not be applied for this run. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Add empty 'tools' record in settings\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Convert message updates to the new schema\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Convert message files to the new schema\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Trim message updates to reduce stored size\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Reset tools to empty\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] All migrations applied. Releasing lock"}
{"level":30,"time":1727231014207,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014216,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"updatedAt","span":"day","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014219,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"message","msg":"Computing conversation stats"}
{"level":30,"time":1727231014220,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"updatedAt","span":"week","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014224,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014227,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"createdAt","span":"month","type":"message","msg":"Computing conversation stats"}
{"level":30,"time":1727231014229,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014229,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"updatedAt","span":"day","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014230,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"message","msg":"Computed conversation stats"}
{"level":30,"time":1727231014230,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"updatedAt","span":"week","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014231,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"message","msg":"Computing conversation stats"}
{"level":30,"time":1727231014235,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"createdAt","span":"month","type":"message","msg":"Computed conversation stats"}
{"level":30,"time":1727231014236,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014236,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"message","msg":"Computed conversation stats"}
{"level":30,"time":1727231014238,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"createdAt","span":"month","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014239,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"updatedAt","span":"month","type":"conversation","msg":"Computing conve
|
https://github.com/huggingface/chat-ui/issues/1486
|
open
|
[
"support"
] | 2024-09-25T02:41:08Z
| 2024-09-25T02:41:08Z
| 0
|
nauts
|
huggingface/chat-macOS
| 7
|
Asking "what time is it?" will always return the local time of Paris, regardless of your location (⌘R+)
|
<img width="487" alt="Screenshot 2024-09-24 at 11 54 17 AM" src="https://github.com/user-attachments/assets/02d26c05-ae37-4caf-a3ff-5bc6aec42068">
I wonder how we can localize questions like this. I've tried ⌘R+, which always gives me the local time of Paris. Qwen2.5-72B and Llama 3.1 make up another non-specific time that's not my local time. I have web search enabled too, and I can see that they're using it, but they can't get it right, even when I give them my exact location, both in the model's system prompt on HuggingChat and in the chat context of the app itself.
|
https://github.com/huggingface/chat-macOS/issues/7
|
open
|
[
"good first issue"
] | 2024-09-24T23:09:31Z
| 2024-10-23T20:08:57Z
| null |
Reza2kn
|
huggingface/diffusers
| 9,520
|
UNetMotionModel.dtype is really expensive to call, is it possible to cache it during inference?
|
**What API design would you like to have changed or added to the library? Why?**
We are using the class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin),
and its `forward()` implementation calls self.dtype, which is very expensive.

From my profiling trace, calling self.dtype takes 6-10ms each time.
Can we somehow cache it to save time?

I took a look at the ModelMixin.dtype property: it gathers all of the model's parameters into a tuple just to check the first parameter's dtype. I don't think it makes sense to do this every time, right?

**What use case would this enable or better enable? Can you give us a code example?**
We are using this model to do video generation, so the inference is running repeatedly. Is it easy to optimize this ~10ms latency?
Thanks!
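For now, the workaround we're considering on our side is reading the dtype once outside the hot path and reusing it. A minimal sketch with a stand-in module (the real model would be the UNetMotionModel, so treat the module here as a placeholder):
```python
import torch
import torch.nn as nn

# Stand-in module; the same idea would apply to UNetMotionModel (assumption on our side).
model = nn.Linear(8, 8)

# One-time lookup instead of the ModelMixin.dtype property, which gathers all
# parameters on every access just to read the first one's dtype.
cached_dtype = next(model.parameters()).dtype

x = torch.randn(1, 8)
for _ in range(3):                      # repeated inference loop
    with torch.no_grad():
        y = model(x.to(cached_dtype))   # reuse cached_dtype instead of model.dtype
```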
|
https://github.com/huggingface/diffusers/issues/9520
|
closed
|
[
"wip",
"performance"
] | 2024-09-24T18:03:28Z
| 2025-01-02T13:40:51Z
| 7
|
xiang9156
|
huggingface/chat-ui
| 1,484
|
Header prompt displayed using Llama3.1 with ollama
|
Hello,
I'm using the ```llama3.1:latest``` model with ```ollama``` and I'm having trouble correctly initializing the ```chatPromptTemplate``` variable.
I used this Github issue to initialize this variable : https://github.com/huggingface/chat-ui/issues/1035
Here is my ```.env.local``` file :
```.env
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=<hf-token>
PUBLIC_APP_NAME=<name>
MODELS=`[
{
"name": "Ollama | Llama3.1",
"chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["<|end_of_text|>", "<|eot_id|>"]
},
"endpoints": [
{
"type": "ollama",
"url" : "http://ollama:11434",
"ollamaName" : "llama3.1:latest"
}
]
}
]`
```
But ```<|start_header_id|>assistant<|end_header_id|>``` appears in every response:

Can you help me make it disappear by modifying the ```chatPromptTemplate``` variable?
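One thing I plan to try (just a guess based on the Llama 3 prompt format, so treat it as an assumption) is appending the assistant header as a generation prompt at the end of the template, so the model does not have to emit it itself:
```
"chatPromptTemplate": "<|begin_of_text|>...{{/ifAssistant}}{{/each}}<|start_header_id|>assistant<|end_header_id|>\n\n",
```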
Thanks in advance.
|
https://github.com/huggingface/chat-ui/issues/1484
|
closed
|
[
"support"
] | 2024-09-24T13:33:16Z
| 2024-09-30T08:43:06Z
| 3
|
avirgos
|
pytorch/xla
| 8,059
|
Poor performance with 1 GPU?
|
Hello, I am trying to evaluate the impact of XLA on our models, but before that I want to be sure that I know how to adapt our code and execute XLA models without problems.
GPU: Nvidia 4090 GTX 24GB
Cuda 12.2
```bash
$ pip freeze | grep torch
torch==2.4.0
torch-xla==2.4.0
torch_xla_cuda_plugin @ https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla_cuda_plugin-2.4.0-py3-none-any.whl#sha256=208085526f67739c2ea2ab15f1707935b2cfee7c1501116a524cfaa8d7b252d2
torchvision==0.19.0
```
I have been trying a simple model with MNIST
```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
from tqdm import tqdm
import random
from torch_xla.amp import syncfree, GradScaler, autocast
import torch_xla.debug.metrics as met
def random_seed(seed_value, use_cuda):
np.random.seed(seed_value) # cpu vars
torch.manual_seed(seed_value) # cpu vars
random.seed(seed_value) # Python
if use_cuda:
torch.cuda.manual_seed(seed_value)
torch.cuda.manual_seed_all(seed_value) # gpu vars
torch.backends.cudnn.deterministic = True #needed
torch.backends.cudnn.benchmark = False
random_seed(42,True)
XLA = True
# Enable XLA SPMD execution mode.
# xr.use_spmd()
if XLA:
device = xm.xla_device()
else:
device = "cuda"
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
model = ToyModel()
model.to(device)
transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = torchvision.datasets.MNIST(
'.', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=32, shuffle=False
)
n_epochs = 10
criterion = torch.nn.MSELoss()
if XLA:
optimizer = syncfree.SGD(model.parameters(), lr=0.1) # torch_xla
else:
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
if XLA:
scaler = GradScaler(use_zero_grad=True) # torch_xla
else:
scaler = torch.amp.GradScaler()
for epoch in tqdm(range(n_epochs)):
xm.mark_step()
for i, (images, labels) in tqdm(enumerate(train_loader), leave=False):
if not XLA:
optimizer.zero_grad()
if i >= 2000:
break
images = images.to(device)
labels = labels.to(device)
# Forward pass
if XLA:
autoamp = autocast(device, dtype=torch.bfloat16)
else:
autoamp = torch.autocast(device)
with autoamp:
outputs = model(images)
loss = F.nll_loss(outputs, labels)
# Backward
scaler.scale(loss).backward()
if XLA:
gradients = xm._fetch_gradients(optimizer)
xm.all_reduce('sum', gradients, scale=1.0 / xr.world_size())
scaler.step(optimizer)
scaler.update()
xm.mark_step()
print(loss)
```
And I haven't seen any performance improvement; at best the execution time is the same. I thought that maybe the model was being recompiled too many times or something, so I followed https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
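One thing I still plan to try, based on the troubleshooting guide, is feeding batches through torch_xla's device loader instead of calling `.to(device)` inside the Python loop. A minimal sketch that wraps the `train_loader` and `model` defined in the script above (so it assumes that same script):
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

# Sketch only: host-to-device transfers overlap with graph execution,
# so `images` and `labels` arrive already placed on the XLA device.
device = xm.xla_device()
train_device_loader = pl.MpDeviceLoader(train_loader, device)

for images, labels in train_device_loader:
    outputs = model(images)   # no explicit .to(device) needed here
```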
Metrics are
```
Metric: DeviceLockWait
TotalSamples: 37520
Accumulator: 113ms908.380us
ValueRate: 475.217us / second
Rate: 159.174 / second
Percentiles: 1%=000.972us; 5%=000.989us; 10%=000.999us; 20%=001.010us; 50%=004.627us; 80%=004.978us; 90%=005.046us; 95%=005.112us; 99%=005.205us
Metric: InputOutputAliasCount
TotalSamples: 2
Accumulator: 42.00
ValueRate: 21.95 / second
Rate: 1.04547 / second
Percentiles: 1%=8.00; 5%=8.00; 10%=8.00; 20%=8.00; 50%=34.00; 80%=34.00; 90%=34.00; 95%=34.00; 99%=34.00
Metric: IrValueTensorToXlaData
TotalSamples: 37508
Accumulator: 02s925ms072.075us
ValueRate: 007ms438.792us / second
Rate: 159.175 / second
Percentiles: 1%=030.320us; 5%=030.752us; 10%=030.926us; 20%=031.205us; 50%=059.240us; 80%=061.600us; 90%=062.326us; 95%=062.728us; 99%=067.959us
Metric: LazyTracing
TotalSamples: 3525066
Accumulator: 46s352ms512.571us
ValueRate: 216ms224.1
|
https://github.com/pytorch/xla/issues/8059
|
closed
|
[] | 2024-09-24T13:24:42Z
| 2024-11-17T19:39:48Z
| 3
|
Patataman
|
pytorch/xla
| 8,057
|
PjRtComputationClient::ExecuteReplicated core dump when encountering a scalar
|
## ❓ Questions and Help
In my test code, I found that a PjRtData may be passed as the argument type (the argument is a scalar), and then the core dump happens.
https://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/pjrt_computation_client.cc#L806
I wrote a test function earlier that tried to transform all the arguments manually, but it still core dumped.


|
https://github.com/pytorch/xla/issues/8057
|
open
|
[
"question",
"distributed"
] | 2024-09-24T10:35:31Z
| 2025-03-31T21:30:22Z
| null |
mars1248
|
pytorch/audio
| 3,834
|
Ability to build manylinux2014 compliant wheels for other archs (ppc64le)
|
### 🚀 The feature
I'd like to have the possibility to create manylinux2014-compliant wheels for ppc64le. Is there documentation for this?
### Motivation, pitch
PowerPC has in-core accelerator engines (MMA, Matrix-Multiply Assist) focused on AI inferencing, and it is preferable for packages such as torch/audio/vision to have prebuilt manylinux wheels.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/audio/issues/3834
|
open
|
[] | 2024-09-23T21:59:39Z
| 2024-09-23T21:59:39Z
| 0
|
mgiessing
|
huggingface/diffusers
| 9,508
|
AnimateDiff SparseCtrl RGB does not work as expected
|
Relevant comments are [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255416318) and [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255478105).
AnimateDiff SparseCtrl RGB does not work similar to other implementations and cannot replicate their outputs. This makes me believe that there is something incorrect with our SparseControlNet or MotionAdapter implementation.
When comparing the results of the [original](https://github.com/guoyww/AnimateDiff)/[Comfy](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) implementation to Diffusers implementation, one can notice that if an image is used with an unrelated prompt, the Diffusers implementation ignores the image and just follows the prompt whereas the other implementations try to incorporate both.
Since the original and Comfy implementations produce this behaviour consistently, this seems more like a problem with Diffusers implementation. However, I've not been able to spot differences in implementation just by comparing the code visually. I also tried matching outputs layerwise and it seemed to be alright (although I didn't investigate this as deeply as I should have due to other priorities).
If someone from the community actively following/using the AnimateDiff implementations can help determine the cause of this bug, it would be really awesome and helpful.
|
https://github.com/huggingface/diffusers/issues/9508
|
open
|
[
"bug",
"help wanted",
"stale",
"contributions-welcome",
"advanced"
] | 2024-09-23T21:42:54Z
| 2025-08-10T16:47:50Z
| 9
|
a-r-r-o-w
|
pytorch/xla
| 8,049
|
How to run XLA with CPU offloaded models
|
## ❓ Questions and Help
How do you run models that are offloaded to the CPU? When trying to work with ```enable_sequential_cpu_offload``` or ```enable_model_cpu_offload``` and then running ```torch_xla.sync()/xm.mark_step()```, the graph does not seem to account for the offloading and in turn takes much more memory than when running the model on the CPU only. For example, a workload that reportedly peaks at 25GB on the CPU takes up 170GB on XLA devices; this was tested with the EasyAnimate V4 model generating a 960x1680, 24fps video. If needed, I can provide code if this has not been implemented.
```RuntimeError: Bad StatusOr access: RESOURCE_EXHAUSTED: Compilation failure: Aborting compilation early because it's unlikely to have enough device memory. Requires 170.73G, has 14.71G available. If more detailed logging is desired, set --xla_tpu_impure_oom_fast_exit_threshold=-1```
|
https://github.com/pytorch/xla/issues/8049
|
open
|
[
"enhancement",
"performance"
] | 2024-09-23T10:59:06Z
| 2025-03-31T15:42:09Z
| null |
radna0
|
huggingface/lerobot
| 451
|
Inquiry about Implementation of "Aloha Unleashed"
|
First and foremost, I would like to extend my heartfelt gratitude for your incredible work on the LeRobot project.
I recently came across the paper "Aloha Unleashed", published by the Aloha team a few months ago, and I am curious to know whether there are any plans to implement the methodologies and findings from this paper in LeRobot.
Thank you once again for your hard work and for providing such a fantastic tool to the community. I look forward to your response.
paper link:https://aloha-unleashed.github.io/
|
https://github.com/huggingface/lerobot/issues/451
|
open
|
[
"question",
"robots"
] | 2024-09-23T09:14:56Z
| 2025-08-20T19:42:37Z
| null |
lightfate
|
pytorch/TensorRT
| 3,173
|
❓ [Question] torchscript int8 quantization degradation in recent versions
|
TS INT8 degradation in later versions
Hi all, I get a degradation in results after INT8 quantization with TorchScript, after updating my torch_tensorrt, torch and tensorrt versions. I have listed the dependencies for both cases below; is this expected?
Earlier Version (Works Well):
Torch: 2.0.1
CUDA: 11.8
torch_tensorrt: 1.4.0
Tensorrt: 8.5.3.1
GPU: A100
Python: 3.9
Later Version (Degradation in Results):
Torch: 2.4.0
CUDA 12.1
torch_tensorrt: 2.4.0
Tensorrt: 10.1.0
GPU: A100
Python: 3.11
Script (Approximately, as I can't submit the model):
```
import torch
import time
from pathlib import Path
import PIL
import PIL.Image
import torch_tensorrt
import torch_tensorrt.ptq
from torchvision.transforms.functional import to_tensor, center_crop
from torch.utils.data import Dataset, DataLoader
class CalibrationDataset(Dataset):
def __init__(self, tile_size: int, model: torch.nn.Module, dtype: torch.dtype) -> None:
self._tile_size = tile_size
self._images = [f for f in Path("images").glob("**/*")]
self._length = len(self._images)
print("Dataset size:", self._length)
self._model = model
self._dtype = dtype
def __len__(self) -> int:
return self._length
def _to_tensor(self, img_path: Path) -> torch.Tensor:
pil_img = PIL.Image.open(img_path).convert("RGB")
return to_tensor(pil_img).to(device="cuda", dtype=self._dtype).unsqueeze(0)
def __getitem__(self, idx: int) -> tuple[torch.Tensor, torch.Tensor]:
print(f"CalibrationDataset called with {idx=}")
input_file = self._images[idx]
input_tensor = center_crop(self._to_tensor(input_file), output_size=self._tile_size)
return input_tensor, self._model(input_tensor)
def compile_to_tensort_and_quantize() -> None:
HALF = True
dtype = torch.float16
batch_size, tile_size = 1, 538
model = ImageToImageModel.create(checkpoint = "base", half=HALF, device=torch.device("cuda"))# Proprietary upscaling model, cannot submit code
with torch.no_grad():
calibration_dataset = CalibrationDataset(tile_size=tile_size, model=model, dtype=dtype)
testing_dataloader = DataLoader(
calibration_dataset, batch_size=4, shuffle=True, num_workers=0,)
calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(
testing_dataloader,
cache_file="./calibration.cache",
use_cache=False,
algo_type=torch_tensorrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
device=torch.device("cuda"),
)
dummy_input = torch.randn(1, 3, tile_size, tile_size, device=torch.device("cuda"), dtype=dtype)
inputs = torch.randn(1, 3, tile_size, tile_size, device=torch.device("cuda"), dtype=dtype)
torch_script_module = torch.jit.trace(model, example_inputs=inputs)
with torch_tensorrt.logging.debug():
trt_ts_module = torch_tensorrt.compile(
torch_script_module,
truncate_long_and_double=True,
inputs=[dummy_input],
enabled_precisions={torch.int8},
calibrator=calibrator,
device={
"device_type": torch_tensorrt.DeviceType.GPU,
"gpu_id": 0,
"dla_core": 0,
"allow_gpu_fallback": False,
"disable_tf32": False
},
)
torch.jit.save(trt_ts_module, "trt_OLD.ts")
print("Benchmark")
times = []
for _ in range(5):
t1 = time.monotonic()
out = trt_ts_module(inputs)
print(out)
torch.cuda.synchronize()
times.append(time.monotonic() - t1)
print(times)
if __name__ == "__main__":
compile_to_tensort_and_quantize()
```
Note: in the later version, I need to switch `import torch_tensorrt.ptq` to `import torch_tensorrt.ts.ptq`; the rest of the script is identical.
While the previous versions work well (I get a quantized model that produces close-enough results to the original model), with the later version I get garbage outputs (I can see there is something wrong with the calibration, as the output tensor values are always within a small range, 0.18-0.21, whereas they should take any value between -1 and 1). I'm posting the quantization script approximately; however, I unfortunately cannot post the model details, as it's proprietary.
I would appreciate all forms of help :) and would also love to submit a fix for the underlying issue (if one is present).
|
https://github.com/pytorch/TensorRT/issues/3173
|
open
|
[
"question"
] | 2024-09-22T14:46:00Z
| 2024-09-23T16:44:03Z
| null |
seymurkafkas
|
huggingface/text-generation-inference
| 2,541
|
How to serve local models with python package (not docker)
|
### System Info
`pip install text-generation` with version '0.6.0'.
I need to use the Python package, not Docker.
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
```
from text_generation import Client
# Initialize the client
client = Client("/path/to/model/locally")
# Generate text
response = client.generate("Your input text here")
```
error:
```
MissingSchema: Invalid URL '/path/to/model/locally': No scheme supplied. Perhaps you meant [/path/to/model/locally](/path/to/model/locally?
```
I also tried the following with some models from the Hugging Face Hub as well as local models, and it doesn't work either!
```
from text_generation import InferenceAPIClient
client = InferenceAPIClient("NousResearch/Meta-Llama-3.1-8B-Instruct")
text = client.generate("Why is the sky blue?").generated_text
print(text)
# ' Rayleigh scattering'
# Token Streaming
text = ""
for response in client.generate_stream("Why is the sky blue?"):
if not response.token.special:
text += response.token.text
print(text)
```
error:
```
NotSupportedError: Model `NousResearch/Meta-Llama-3.1-8B-Instruct` is not available for inference with this client.
Use `huggingface_hub.inference_api.InferenceApi` instead.
```
### Expected behavior
- I can load any model (local or from the HF Hub)
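For reference, my current understanding (which may be wrong) is that the `text_generation` package is only an HTTP client and needs a running TGI server; it never loads weights itself. A minimal sketch assuming a server is already serving the local model at `127.0.0.1:8080`:
```python
from text_generation import Client

# The client expects the URL of a running text-generation-inference server,
# not a filesystem path to a model (the server address below is an assumption).
client = Client("http://127.0.0.1:8080")

response = client.generate("Why is the sky blue?", max_new_tokens=64)
print(response.generated_text)
```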
|
https://github.com/huggingface/text-generation-inference/issues/2541
|
open
|
[] | 2024-09-20T21:10:09Z
| 2024-09-26T06:55:50Z
| null |
hahmad2008
|
huggingface/competitions
| 41
|
how to debug a script submission
|
Is there a way to see the logs or errors of a script-based submission?
|
https://github.com/huggingface/competitions/issues/41
|
closed
|
[] | 2024-09-20T18:04:44Z
| 2024-09-30T16:08:42Z
| null |
ktrapeznikov
|
huggingface/diffusers
| 9,485
|
Can we allow making everything on gpu/cuda for scheduler?
|
**What API design would you like to have changed or added to the library? Why?**
Is it possible to allow setting every tensor attribute of the scheduler to a CUDA device?
In https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py
It looks like attributes such as `scheduler.alphas_cumprod` are tensors on the CPU, while `scheduler.set_timesteps()` allows placing `scheduler.timesteps` on a GPU/CUDA device. Isn't this causing a device mismatch when indexing `scheduler.alphas_cumprod` with `scheduler.timesteps`? Below is the code snippet where the pipeline indexes a CPU tensor (alphas_cumprod) with a GPU tensor (timestep):

I simply added the following lines to print the type and device of `timestep` and `self.alphas_cumprod` at the beginning of `scheduler.step()`:
```
print("Printing scheduler.step() timestep")
print(type(timestep))
print(isinstance(timestep, torch.Tensor))
print(timestep.device)
print("Printing scheduler.step() self.alphas_cumprod")
print(type(self.alphas_cumprod))
print(isinstance(self.alphas_cumprod, torch.Tensor))
print(self.alphas_cumprod.device)
```
Output when running text-to-image:
```
Printing scheduler.step() timestep
<class 'torch.Tensor'>
True
cuda:0
Printing scheduler.step() self.alphas_cumprod
<class 'torch.Tensor'>
True
cpu
```
**What use case would this enable or better enable? Can you give us a code example?**
We are using a modified LCMScheduler (99% the same as the original) for video generation; it generates frames repeatedly in a loop. Most of the time this step doesn't cause a performance issue, but we did see intermittent high CPU usage and latency for `alpha_prod_t = self.alphas_cumprod[timestep]`, and the torch.profiler tracing output shows high latency for this specific step. We are wondering if this is the performance bottleneck.

|
https://github.com/huggingface/diffusers/issues/9485
|
open
|
[
"stale",
"scheduler",
"performance"
] | 2024-09-20T12:38:16Z
| 2024-12-17T15:04:46Z
| 14
|
xiang9156
|
pytorch/serve
| 3,325
|
Kserve management api for registering new models
|
I have a setup where the Kserve endpoint is mounted to PVC, which reads model files on startup and loads them.
Is it possible to register a new version of the model (after adding it to the PVC) without restarting the whole KServe endpoint, along with the other models, and extending config.properties?
TorchServe supports this use case, but I can't find documentation on how to do it with KServe.
|
https://github.com/pytorch/serve/issues/3325
|
open
|
[
"question"
] | 2024-09-20T10:47:44Z
| 2024-09-20T19:28:03Z
| null |
matej14086
|
huggingface/optimum
| 2,032
|
ONNX support for decision transformers
|
### Feature request
I am training an offline-RL model using a decision transformer and trying to convert it to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "sequence-classification"
# load config
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = model_onnx_config(model.config)
# export
onnx_inputs, onnx_outputs = transformers.onnx.export(
#preprocessor=tokenizer,
model=model,
config=onnx_config,
opset=13,
output=Path("trained_models/DT-model.onnx")
)
```
I get the error below:
```
KeyError: "decision-transformer is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet-v1', 'mobilenet-v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'poolformer', 'rembert', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support decision-transformer please propose a PR or open up an issue."
```
### Motivation
I want to use trained models in Godot-RL-Agents. Currently agents are trained using PPO or imitation learning, and both support the ONNX format. Supporting decision transformers could hugely help with training models that navigate complex scenarios.
### Your contribution
I would be interested in raising a PR, but at this time I have no idea how to go about it. With a little bit of guidance, I can try.
|
https://github.com/huggingface/optimum/issues/2032
|
closed
|
[
"onnx"
] | 2024-09-20T08:45:28Z
| 2024-11-25T13:00:02Z
| 1
|
ra9hur
|
huggingface/setfit
| 558
|
How to improve the accuracy while classifying short text with less context
|
Hi, my use case is to classify job titles into functional areas. I fine-tuned `all-mpnet-base-v2` with the help of SetFit by providing 10+ examples for each class (functional area).
I got `82%` accuracy when running the evaluation on my test set, but I observed that some simple & straightforward job titles are classified into the wrong label with a ~`0.6` score.
For example:
```
Query: SDET
Predicted Label: Big Data / DWH / ETL
Confidence Scores:
Label: Accounting / Finance, Confidence: 0.0111
Label: Backend Development, Confidence: 0.0140
Label: Big Data / DWH / ETL, Confidence: 0.6092
```
Here **SDET** should have been labelled as `QA / SDET`, but it is classified as `Big Data / DWH / ETL` with a `0.62` score. The few-shot examples used for these two classes don't have anything in common that could confuse the model, except one example titled `Data Quality Engineer`, which is under `Big Data / DWH / ETL`.
**Few-shot examples** (only 2 of the classes shown here)
```py
{ "QA / SDET": [
"Quality Assurance Engineer",
"Software Development Engineer in Test (SDET)",
"QA Automation Engineer",
"Test Engineer",
"QA Analyst",
"Manual Tester",
"Automation Tester",
"Performance Test Engineer",
"Security Test Engineer",
"Mobile QA Engineer",
"API Tester",
"Load & Stress Test Engineer",
"Senior QA Engineer",
"Test Automation Architect",
"QA Lead",
"QA Manager",
"End-to-End Tester",
"Game QA Tester",
"UI/UX Tester",
"Integration Test Engineer",
"Quality Control Engineer",
"Test Data Engineer",
"DevOps QA Engineer",
"Continuous Integration (CI) Tester",
"Software Test Consultant"
],
"Big Data / DWH / ETL": [
"Big Data Engineer",
"Data Warehouse Developer",
"ETL Developer",
"Hadoop Developer",
"Spark Developer",
"Data Engineer",
"Data Integration Specialist",
"Data Pipeline Engineer",
"Data Architect",
"Database Administrator",
"ETL Architect",
"Data Lake Engineer",
"Informatica Developer",
"DataOps Engineer",
"BI Developer",
"Data Migration Specialist",
"Data Warehouse Architect",
"ETL Tester",
"Big Data Platform Engineer",
"Apache Kafka Engineer",
"Snowflake Developer",
"Data Quality Engineer",
"Data Ingestion Engineer",
"Big Data Consultant",
"ETL Manager"
]
}
```
**TrainingArgs**
```py
args = TrainingArguments(
batch_size=16,
num_epochs=1,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
)
```
**Here is the complete set of functional areas.**
```py
functional_areas = [
"Accounting / Finance",
"Backend Development",
"Big Data / DWH / ETL",
"Brand Management",
"Content Writing",
"Customer Service",
"Data Analysis / Business Intelligence",
"Data Science / Machine Learning",
"Database Admin / Development",
"DevOps / Cloud",
"Embedded / Kernel Development",
"Event Management",
"Frontend Development",
"Full-Stack Development",
"Functional / Technical Consulting",
"General Management / Strategy",
"IT Management / IT Support",
"IT Security",
"Mobile Development",
"Network Administration",
"Online Marketing",
"Operations Management",
"PR / Communications",
"QA / SDET",
"SEO / SEM",
"Sales / Business Development"
]
```
My guess is that accuracy is low because the texts are short (just a job title). Please suggest a few things I can try to improve the accuracy of the model.
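In case it helps frame suggestions, here is roughly what I'm planning to try next on my own: a stronger contrastive phase (more epochs, bigger batch). A minimal sketch with a tiny made-up subset and integer labels (the real data is the job-title examples above):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny illustrative subset; 0 = "QA / SDET", 1 = "Big Data / DWH / ETL".
train_ds = Dataset.from_dict({
    "text": ["SDET", "QA Analyst", "ETL Developer", "Spark Developer"],
    "label": [0, 0, 1, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

args = TrainingArguments(
    batch_size=32,
    num_epochs=3,   # more contrastive epochs than the single epoch I used above
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
print(model.predict(["SDET"]))
```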
|
https://github.com/huggingface/setfit/issues/558
|
open
|
[] | 2024-09-20T06:09:07Z
| 2024-11-11T11:23:31Z
| null |
29swastik
|
huggingface/safetensors
| 527
|
[Question] Comparison with the zarr format?
|
Hi,
I know that safetensors are widely used nowadays in HF, and the comparisons made in this repo's README file make a lot of sense.
However, I am now surprised to see that there is no comparison with zarr, which is probably the most widely used format for storing tensors in a universal, compressed and scalable way.
Is there any particular reason why safetensors was created instead of just using zarr, which has been around for longer (and has nice benefits such as good performance in object storage reads and writes)?
Thank you!
|
https://github.com/huggingface/safetensors/issues/527
|
open
|
[] | 2024-09-19T13:32:17Z
| 2025-01-13T17:56:46Z
| 13
|
julioasotodv
|
huggingface/transformers
| 33,584
|
How to fine tune Qlora with Custum trainer.
|
My full model fine-tuning code is given below (after the sketch). How can I modify it to train a QLoRA-based model?
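Here is the rough direction I think I need, as far as I understand it (a sketch only, assuming `bitsandbytes` and `peft` are installed; the model id is a placeholder for `model_args.model_name_or_path`). The PEFT-wrapped model would then be passed to the existing custom Trainer unchanged.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization config for QLoRA (values are common defaults, not tuned).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",               # placeholder model id
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```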
```import sys
import os
current_directory = os.path.dirname(os.path.abspath(__file__))
sys.path.append(current_directory)
from src.custom_dataset import RawFileDataset
import copy
import random
from dataclasses import dataclass, field
from typing import Optional, Dict, Sequence
import os
import torch
import torch.distributed
import transformers
from transformers import Trainer
IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
DEFAULT_EOS_TOKEN = "</s>"
DEFAULT_BOS_TOKEN = "</s>"
DEFAULT_UNK_TOKEN = "</s>"
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
train_file: str = field(default=None, metadata={"help": "train file name"})
val_file: str = field(default=None, metadata={"help": "val file name"})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):
"""Collects the state dict and dump to disk."""
state_dict = trainer.model.state_dict()
if trainer.args.should_save:
cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}
del state_dict
trainer._save(output_dir, state_dict=cpu_state_dict) # noqa
def smart_tokenizer_and_embedding_resize(
special_tokens_dict: Dict,
tokenizer: transformers.PreTrainedTokenizer,
model: transformers.PreTrainedModel,
):
"""Resize tokenizer and embedding.
Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
"""
num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
if num_new_tokens > 0:
input_embeddings = model.get_input_embeddings().weight.data
output_embeddings = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings[-num_new_tokens:] = input_embeddings_avg
output_embeddings[-num_new_tokens:] = output_embeddings_avg
def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict:
"""Tokenize a list of strings."""
tokenized_list = [
tokenizer(
text,
return_tensors="pt",
padding="longest",
max_length=tokenizer.model_max_length,
truncation=True,
)
for text in strings
]
input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list]
input_ids_lens = labels_lens = [
tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list
]
return dict(
input_ids=input_ids,
labels=labels,
input_ids_lens=input_ids_lens,
labels_lens=labels_lens,
)
def preprocess(
sources: Sequence[str],
targets: Sequence[str],
tokenizer: transformers.PreTrainedTokenizer,
) -> Dict:
"""Preprocess the data by tokenizing."""
examples = [s + t for s, t in zip(sources, targets)]
examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)]
input_ids = examples_tokenized["input_ids"]
labels = copy.deepcopy(input_ids)
for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]):
label[:source_len] = IGNORE_INDEX
return dict(input_ids=input_ids, labels=labels)
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
### one can customize here, since we set the T for joint loss as 2
batch_input_ids1, batch_input_ids2 = [], []
batch_attention_mask1, batch_attention_mask2 = [], []
batch_labels1, batch_labels2 = [], []
for instance in instances:
instance1, instance2 = instance["instance_1"], instance["instance_2"]
batch_input_ids1.append(instance1["input_ids"])
batch_input_ids2.append(instance2["input_ids"])
batch_attention_mask1.append(instance1["attention_mask"])
batch_attention_mask2.append(instan
|
https://github.com/huggingface/transformers/issues/33584
|
closed
|
[
"trainer",
"Quantization"
] | 2024-09-19T09:40:00Z
| 2024-10-28T08:05:06Z
| null |
ankitprezent
|
huggingface/diffusers
| 9,470
|
Prompt scheduling in Diffusers like A1111
|
Hi everyone, I have a question about how to implement A1111's [prompt scheduling feature](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) with the diffusers library.
**Example prompt:** Official portrait of a smiling world war ii general, `[male:female:0.99]`, cheerful, happy, detailed face, 20th century, highly detailed, cinematic lighting, digital art painting by Greg Rutkowski.

|
https://github.com/huggingface/diffusers/issues/9470
|
closed
|
[] | 2024-09-19T09:07:30Z
| 2024-10-19T17:22:23Z
| 5
|
linhbeige
|
huggingface/chat-ui
| 1,476
|
Update docs to explain how to use `tokenizer` field for chat prompt formats
|
## Bug description
In README.md, it's stated that the prompts used in production for HuggingChat can be found in PROMPTS.md.
However, PROMPTS.md has not been updated for 7 months and there are several prompts missing for newer models.
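For context, what I would expect the docs to show is something along these lines (my own guess at the intended usage of the `tokenizer` field, not verified against the code):
```env
MODELS=`[
  {
    "name": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "tokenizer": "meta-llama/Meta-Llama-3.1-8B-Instruct"
  }
]`
```
i.e. pointing `tokenizer` at a Hub repo so the chat template is pulled from its tokenizer config instead of hand-writing `chatPromptTemplate`.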
|
https://github.com/huggingface/chat-ui/issues/1476
|
open
|
[
"bug",
"documentation"
] | 2024-09-18T22:49:53Z
| 2024-09-20T18:05:05Z
| null |
horsten
|
huggingface/transformers.js
| 935
|
Is converting a Gemma 2B quantized compatible with transformers.js/onnx?
|
### Question
I'm new to development and wanted to know whether converting a Gemma 2B model using the Optimum converter would work with Transformers.js.
|
https://github.com/huggingface/transformers.js/issues/935
|
open
|
[
"question"
] | 2024-09-18T15:57:55Z
| 2024-09-24T20:26:53Z
| null |
iamhenry
|
huggingface/dataset-viewer
| 3,063
|
Simplify test code where a dataset is set as gated
|
[huggingface_hub@0.25.0](https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0) provides an API to set a repository as gated.
We had included a custom version of `update_repo_settings` because it lacked a `gated` parameter. Now we can switch back to the `huggingface_hub` method
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/jobs/cache_maintenance/tests/utils.py#L41
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/admin/tests/fixtures/hub.py#L24
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/worker/tests/fixtures/hub.py#L35
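For reference, the replacement should look roughly like this (sketch; the repo id and token are placeholders):
```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # placeholder token with write access to the test repo

# huggingface_hub >= 0.25.0 exposes `gated` directly on update_repo_settings.
api.update_repo_settings(
    repo_id="user/test-dataset",  # placeholder repo
    repo_type="dataset",
    gated="auto",                 # or "manual" / False
)
```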
|
https://github.com/huggingface/dataset-viewer/issues/3063
|
closed
|
[
"good first issue",
"tests",
"refactoring / architecture",
"dependencies"
] | 2024-09-18T09:08:14Z
| 2025-07-17T15:00:40Z
| null |
severo
|
huggingface/transformers.js
| 934
|
Repeating tokens in TextStreamer
|
### Question
```
import {
AutoTokenizer,
AutoModelForCausalLM,
TextStreamer,
InterruptableStoppingCriteria,
} from "@huggingface/transformers";
class TextGenerationPipeline {
static model = null;
static tokenizer = null;
static streamer = null;
static async getInstance(
progress_callback = null,
model_id = "onnx-community/Phi-3.5-mini-instruct-onnx-web",
) {
this.tokenizer = AutoTokenizer.from_pretrained(model_id, {
progress_callback,
});
this.model = AutoModelForCausalLM.from_pretrained(model_id, {
// dtype: "q4",
dtype: "q4f16",
device: "webgpu",
use_external_data_format: true,
progress_callback,
});
return Promise.all([this.tokenizer, this.model]);
}
}
const stopping_criteria = new InterruptableStoppingCriteria();
let past_key_values_cache = null;
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
if (request.action === "initializeLlmModel") {
console.log("setting up llm");
const initialize = async () => {
const [tokenizer, model] = await TextGenerationPipeline.getInstance(
(x) => {
console.log(x);
},
request.model_id,
);
const inputs = tokenizer("a");
const generatedOutput = await model.generate({
...inputs,
max_new_tokens: 1,
});
console.log(generatedOutput);
sendResponse({ status: "success" });
};
initialize();
return true;
}
if (request.action === "generateText") {
console.log("generating text");
async function generateText() {
const [tokenizer, model] = await TextGenerationPipeline.getInstance();
const text_callback_function = (output) => {
console.log(output);
if (output) {
chrome.runtime.sendMessage({
action: "chatMessageChunk",
chunk: output,
});
}
};
const streamer = new TextStreamer(tokenizer, {
skip_prompt: true,
skip_special_tokens: true,
callback_function: text_callback_function,
});
const inputs = tokenizer.apply_chat_template(request.messages, {
add_generation_prompt: true,
return_dict: true,
});
const { past_key_values, sequences } = await model.generate({
...inputs,
past_key_values: past_key_values_cache,
// Sampling
// do_sample: true,
// top_k: 3,
// temperature: 0.2,
max_new_tokens: 1024,
stopping_criteria,
return_dict_in_generate: true,
streamer,
});
past_key_values_cache = past_key_values;
const decoded = tokenizer.batch_decode(sequences, {
skip_special_tokens: false,
});
console.log(decoded);
sendResponse({ generatedOutput: decoded, status: "success" });
}
generateText();
return true;
}
});
```
In the `text_callback_function`, the same token is sent multiple times. I am handling it on the frontend for the time being, but I was wondering what the reason could be. What am I doing wrong here?
Thank you so much for the help in advance!
|
https://github.com/huggingface/transformers.js/issues/934
|
closed
|
[
"question"
] | 2024-09-18T02:53:36Z
| 2025-10-13T04:50:11Z
| null |
chandeldivyam
|
huggingface/transformers.js
| 933
|
Uncaught (in promise) TypeError: r.logits is not iterable
|
### Question
Hey guys,
I have been trying to train a model for text classification and then convert it to an ONNX file for use in Transformers.js, following this video:
https://www.youtube.com/watch?v=W_lUGPMW_Eg
I keep getting the error `Uncaught (in promise) TypeError: r.logits is not iterable`.
Any ideas on where I might be going wrong, or whether something has changed since the video was released?
This is my basic code; I have Python hosting the files locally:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TinyBERT Model in Vanilla JS</title>
</head>
<body>
<h1>TinyBERT Model Inference</h1>
<p>Enter text for classification:</p>
<input type="text" id="inputText" placeholder="Enter your text here" size="50"/>
<button id="runModel">Run Model</button>
<p><strong>Prediction:</strong> <span id="prediction"></span></p>
<script type="module">
import { pipeline, env } from "https://cdn.jsdelivr.net/npm/@xenova/transformers";
document.getElementById('runModel').addEventListener('click', async function () {
const inputText = document.getElementById('inputText').value;
// Load the TinyBERT model for sequence classification from local files
const classifier = await pipeline('text-classification', './finalModel/');
// Run the model to get the prediction
const result = await classifier(inputText);
// Display the result
document.getElementById('prediction').innerText = JSON.stringify(result);
});
</script>
</body>
</html>
```
|
https://github.com/huggingface/transformers.js/issues/933
|
open
|
[
"question"
] | 2024-09-16T20:26:02Z
| 2024-09-17T19:35:26Z
| null |
Joseff-Evans
|
huggingface/chat-ui
| 1,472
|
Mistral api configuration without Cloudflare
|
I'd like to set up a local deployment using **only the Mistral API**: https://docs.mistral.ai/api.
Can I use Chat UI without an HF deployment and a Cloudflare account?
I leave the .env unchanged and overwrite .env.local with the following:
```yml
AGENT_ID=<my_agent_id_from_mistral>
MISTRAL_API_KEY=<mytoken>
MODELS='[
{
"name": "mistral-large",
"displayName": "mistralai",
"description": "Mistral standard",
"websiteUrl": "https://docs.mistral.ai/",
"preprompt": "",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"top_k": 5,
"stream": true,
"agent_id": "{AGENT_ID}",
"tool_choice": "auto",
"max_new_tokens": 4096
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://api.mistral.ai/v1",
"defaultHeaders": {
"Authorization": "Bearer {MISTRAL_API_KEY}"
}
}
]
},
{
"name": "mistral-embed",
"displayName": "Mistral-embedbedings",
"description": "Mistral embedding model.",
"chunkCharLength": 1024,
"endpoints": [
{
"type": "openai",
"baseURL": "https://api.mistral.ai/v1",
"defaultHeaders": {
"Authorization": "Bearer {MISTRAL_API_KEY}"
}
}
]
}
]'
MONGODB_URL=mongodb://localhost:27017/
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_COLOR=blue
PUBLIC_APP_NAME="Mistral Local"
```
Not quite sure, though, whether the agent_id gets overridden by the "name".
|
https://github.com/huggingface/chat-ui/issues/1472
|
open
|
[
"support"
] | 2024-09-16T18:51:09Z
| 2024-09-17T08:43:40Z
| 0
|
JonasMedu
|
huggingface/transformers.js
| 932
|
Best small model for text generation?
|
### Question
I'm looking to build an AI journaling app that helps you reflect on your journal entries.
I'm looking for a model (like GPT or Claude) that will take the selected text and provide insights based on a prompt I provide.
In this case the prompt will give suggestions based on psychology techniques like CBT and ACT to help you with your life.
Any ideas on which small model would be able to accomplish this? I've tried GPT-2 and t5-small, and I couldn't get Phi-3 to work.
|
https://github.com/huggingface/transformers.js/issues/932
|
open
|
[
"question"
] | 2024-09-16T18:06:23Z
| 2024-09-26T08:06:35Z
| null |
iamhenry
|
pytorch/xla
| 8,022
|
Add documentation for `pip install[pallas]`
|
## 📚 Documentation
Please add installation documentation for `pip install[pallas]` to the landing page README instructions: https://github.com/pytorch/xla/blob/master/setup.py#L318
Accordingly, this documentation should clearly explain how users choose between the two: https://pypi.org/project/torch-xla/
cc @JackCaoG @ManfeiBai @jiawenliu64 @zpcore
|
https://github.com/pytorch/xla/issues/8022
|
open
|
[
"documentation"
] | 2024-09-16T15:50:14Z
| 2024-09-16T15:50:15Z
| 0
|
miladm
|
huggingface/distil-whisper
| 149
|
How to load using openai-whisper package to load the model?
|
How can I load the model using the openai-whisper package?
|
https://github.com/huggingface/distil-whisper/issues/149
|
open
|
[] | 2024-09-15T15:08:46Z
| 2024-09-15T15:08:46Z
| null |
lucasjinreal
|
huggingface/competitions
| 40
|
How to modify the competition
|
Hi! I created a new competition using the [tool given here](https://huggingface.co/spaces/competitions/create). All good up to this point.
Then I had the space automatically running. To modify the competition, I cloned the repository of the space locally with the command given on the UI
```
git clone https://huggingface.co/spaces/cmdgentest/commandgen
```
When I inspected the contents, it had only two files - `Dockerfile` and `README.md`. This was surprising, as I expected the files mentioned [here](https://huggingface.co/docs/competitions/en/competition_repo).
However, I still created these files myself and pushed the changes to the Space's repo. Once the Space was restarted and running, I still wasn't able to see the changes I made.
At this point I am confused about where exactly I should put files like `conf.json` in my case.
|
https://github.com/huggingface/competitions/issues/40
|
closed
|
[
"stale"
] | 2024-09-15T13:45:26Z
| 2024-10-08T15:06:28Z
| null |
dakshvar22
|
huggingface/speech-to-speech
| 101
|
I am really curious about how to set up this project on a server to serve multiple users. I have been trying for a long time but haven't come up with a very good solution.
|
https://github.com/huggingface/speech-to-speech/issues/101
|
open
|
[] | 2024-09-15T13:42:18Z
| 2025-02-04T15:44:31Z
| null |
demoBBB
|
|
pytorch/torchchat
| 1,147
|
[distributed][perf] ensure that all decoding ops are happening on gpu with no cpu sync
|
### 🐛 Describe the bug
per @kwen2501, when we are doing the decoding step:
~~~
next_token = torch.tensor([decode_results[0][0]], device=device)
~~~
"nit: I am not sure if the use of torch.tensor here would cause a sync from GPU to CPU (to get the scalar) then move to the GPU again (to create the tensor).
If there is no use of next_token in CPU domain, better to just use index op here.
Or, is decode_results already on CPU? Hmm, then we'd need to think about how to arrange these CPU ops and GPU ops. Ideally, you would like to fire the send right after step()."
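A rough sketch of what I read the suggestion to mean (stand-in tensors; the real layout of `decode_results` is an assumption here):
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for decode_results; assumption: the real object already holds GPU tensors.
decode_results = [(torch.tensor(42, device=device), torch.tensor(0.9, device=device))]

# Current pattern: reads the scalar on the host, then copies it back to the device.
next_token_sync = torch.tensor([decode_results[0][0]], device=device)

# Suggested pattern: keep everything on-device by reshaping the existing tensor.
next_token = decode_results[0][0].reshape(1)
print(next_token.device, next_token)
```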
### Versions
n/a
|
https://github.com/pytorch/torchchat/issues/1147
|
open
|
[
"performance",
"Distributed"
] | 2024-09-15T00:09:56Z
| 2024-09-17T22:57:11Z
| 0
|
lessw2020
|