| repo (string, 147 classes) | number (int64, 1-172k) | title (string, length 2-476) | body (string, length 0-5k) | url (string, length 39-70) | state (string, 2 classes) | labels (list, length 0-9) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0-58, nullable) | user (string, length 2-28) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/accelerate
| 2,474
|
how to turn off fp16 auto_cast?
|
I notice that the DeepSpeed config always sets `auto_cast=True`. This is my Accelerate config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_multinode_launcher: standard
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_offload_param_pin_memory: true
  zero3_offload_optimizer_pin_memory: true
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
  max_live_parameters: 1e9
  max_reuse_distance: 1e9
  round_robin_gradients: true
  deepspeed_hostfile: /opt/tiger/hostfile
distributed_type: DEEPSPEED
fsdp_config: {}
main_training_function: main
mixed_precision: fp16
use_cpu: false
```
This is my DeepSpeed log:
```
[2024-02-21 19:35:40,143] [INFO] [config.py:958:print_user_config] json = {
"train_batch_size": 512,
"train_micro_batch_size_per_gpu": 64,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"nvme_path": null
},
"offload_param": {
"device": "cpu",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": inf,
"fp16": {
"enabled": true,
"auto_cast": true
},
"bf16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
}
```
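For reference, a minimal sketch of one way to take control of the generated JSON, so the `fp16` block (including `auto_cast`) comes from a hand-written DeepSpeed config instead of being auto-filled. This assumes Accelerate's `DeepSpeedPlugin` with `hf_ds_config` and a DeepSpeed-capable environment; treat it as an illustration, not a confirmed fix:
```python
import json
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# Hand-written DeepSpeed config: fp16 stays enabled, but auto_cast is off.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "zero_optimization": {"stage": 3},
    "fp16": {"enabled": True, "auto_cast": False},
}
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f)

# Pass the file instead of letting `accelerate config` generate the JSON.
plugin = DeepSpeedPlugin(hf_ds_config="ds_config.json")
accelerator = Accelerator(deepspeed_plugin=plugin)
```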
|
https://github.com/huggingface/accelerate/issues/2474
|
closed
|
[] | 2024-02-21T11:54:51Z
| 2025-02-18T08:53:20Z
| null |
haorannlp
|
huggingface/chat-ui
| 852
|
what is the difference between "chat-ui-db" docker image and "chat-ui" docker image?
|
I found there are two packages in the chat-ui repository: one is chat-ui and the other is chat-ui-db. What is the difference between the "chat-ui-db" Docker image and the "chat-ui" Docker image?
I've pulled two images from the mirror site: huggingface/text-generation-inference:1.4 and mongo:latest.
I hope to use these two images together with the chat-ui or chat-ui-db image to implement a local large-model Q&A service. What should I do, and should I use the "chat-ui-db" image or the "chat-ui" image?
What should I do to complete my task of a local large-model Q&A service? Can anyone give detailed help?
|
https://github.com/huggingface/chat-ui/issues/852
|
closed
|
[] | 2024-02-21T09:31:07Z
| 2024-02-23T02:58:03Z
| null |
majestichou
|
huggingface/instruction-tuned-sd
| 22
|
How to use a custom image for validation
|
Hello,
I tried using a custom image for validation since I'm training on a custom style. I uploaded my validation image to the Hub as mountain.png, but it always gives me an unidentified-image error; also, for mountain.png it shows a validation summary on wandb, but for my validation image it shows nothing.
Do I need to change something somewhere? Also, how does it compare the validation images for the loss? Do I need to put the style image or the original image somewhere?
|
https://github.com/huggingface/instruction-tuned-sd/issues/22
|
closed
|
[] | 2024-02-21T08:15:30Z
| 2024-02-22T05:49:11Z
| null |
roshan2024nar
|
huggingface/gsplat.js
| 67
|
How to set the background color of the scene
|
Hi,
I want to know how to set the background color of the scene; right now it's black.
|
https://github.com/huggingface/gsplat.js/issues/67
|
open
|
[] | 2024-02-21T05:49:33Z
| 2024-02-26T09:32:25Z
| null |
jamess922
|
huggingface/gsplat.js
| 66
|
How to adjust the axis of rotation?
|
When the model's z-axis is not perpendicular to the ground plane, the rotation effect may feel unnatural, as is the case with this model: testmodel.splat.
[testmodel.zip](https://github.com/huggingface/gsplat.js/files/14353919/testmodel.zip)
I would like to rotate the model along an axis that is perpendicular to the ground. Are there any parameters available to adjust the axis of rotation?
|
https://github.com/huggingface/gsplat.js/issues/66
|
closed
|
[] | 2024-02-21T04:13:01Z
| 2024-02-23T02:37:59Z
| null |
gotoeasy
|
huggingface/sentence-transformers
| 2,494
|
How to get embedding vector when input is tokenized already
|
First, thank you so much for sentence-transformers.
How can I get an embedding vector when the input is already tokenized?
I guess sentence-transformers can do `.encode(original text)`.
But I want to know whether there is a way like `.encode(token_ids)` or `.encode(token_ids, attention_masks)`.
This is my background below:
>
> I trained a model using sentence-transformers, and I added a few layers to this model for classification.
>
> And then I want to train the model to update all of the parameters (including the added layers).
>
> But DataLoader cuda() supports only token ids, not text, so I first tokenized the text using `model.tokenizer()`.
>
> So it is already tokenized, and I need to know how to get embeddings if I have token_ids.

Regards
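For context, here is a minimal sketch of one way to push pre-tokenized features through a `SentenceTransformer`. It assumes the model's `forward` takes the usual features dict and returns a dict with a `sentence_embedding` key (that is how the module pipeline is wired internally); treat it as an illustration rather than a documented API:
```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# This is roughly what .encode() does internally before running the modules.
features = model.tokenize(["An example sentence to embed."])
features = {k: v.to(model.device) for k, v in features.items()}

with torch.no_grad():
    out = model(features)  # runs the Transformer + Pooling modules

embedding = out["sentence_embedding"]
print(embedding.shape)
```
If the token_ids and attention_masks already come from `model.tokenizer()`, the same dict can be built by hand, and gradients can be left enabled when fine-tuning the added layers.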
|
https://github.com/huggingface/sentence-transformers/issues/2494
|
open
|
[] | 2024-02-20T22:38:18Z
| 2024-02-23T10:01:07Z
| null |
sogmgm
|
huggingface/optimum
| 1,703
|
How can I export onnx-model for Qwen/Qwen-7B?
|
### Feature request
I need to export the Qwen model to ONNX to accelerate inference.
```optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code```
### Motivation
I want to export the Qwen model so I can use ONNX Runtime.
### Your contribution
I can give the input and output.
|
https://github.com/huggingface/optimum/issues/1703
|
open
|
[
"onnx"
] | 2024-02-20T13:22:08Z
| 2024-02-26T13:19:19Z
| 1
|
smile2game
|
huggingface/accelerate
| 2,463
|
How to initialize Accelerator twice but with different setup within the same code ?
|
### System Info
```Shell
Hello, I want to initialize Accelerate once for training and another time for inference.
It looks like this does not work, and the error message is not clear. Is there a way to reset the previously initialized Accelerator and then initialize it again with the inference setup?
For training I am doing:
accelerator = Accelerator(kwargs_handlers=[process_group_kwargs])
model, test_loader, valid_loader, optimizer, scheduler = accelerator.prepare(
    model, test_loader, valid_loader, optimizer, scheduler)
For inference I want to do:
accelerator = Accelerator()
model, valid_loader, optimizer = eval_accelerator.prepare(model, valid_loader, optimizer)
For inference I do not want to use the optimizer, but I get an error because I am using zero_stage: 1, so I reused the optimizer from training. Then I was getting a batch-size error for the validation set, so I prepared the valid loader one more time after initializing the Accelerator. Still, during inference I am getting an error in the preparation.
Any idea how to fix this?
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
1. Initialize Accelerator for training
2. Once the training is done, initialize again for the inference.
### Expected behavior
I just want to prepare Accelerate for the inference task once training is done.
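For what it's worth, a minimal sketch of the single-Accelerator pattern, using `accelerator.free_memory()` to drop the prepared training objects before preparing again for inference. This is plain multi-GPU Accelerate without DeepSpeed; whether it also resolves the ZeRO stage 1 case described above is not something this sketch can confirm:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_loader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=2)
valid_loader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=4)

accelerator = Accelerator()
model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)
# ... training loop ...

# Release the references the Accelerator keeps to the prepared objects.
accelerator.free_memory()

# Re-prepare only what inference needs (no optimizer, no scheduler).
model, valid_loader = accelerator.prepare(model, valid_loader)
```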
|
https://github.com/huggingface/accelerate/issues/2463
|
closed
|
[] | 2024-02-20T13:17:26Z
| 2024-03-30T15:06:15Z
| null |
soneyahossain
|
pytorch/TensorRT
| 2,648
|
❓ Debugger deactivate
|
## ❓ Question
How can I deactivate the debugger?
## What you have already tried
When I run any executable that uses Torch-TensorRT, I get a lot of debugger messages:
```log
...
DEBUG: [Torch-TensorRT - Debug Build] - Attempting to run engine (ID: __torch___torchvision_models_resnet_ResNet_trt_engine_)
INFO: [Torch-TensorRT - Debug Build] - Execution profiling is enabled, find results here:
Device selection profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__device_config_profile.trace
Input packing profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__input_profile.trace
Output packing profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__output_profile.trace
TRT enqueue profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__enqueue_profile.trace
Engine execution profile: /tmp/__torch___torchvision_models_resnet_ResNet_trt_engine__engine_exectuion_profile.trace
DEBUG: [Torch-TensorRT - Debug Build] - Current Device: Device(ID: 0, Name: Xavier, SM Capability: 7.2, Type: GPU)
DEBUG: [Torch-TensorRT - Debug Build] - Requested padding of dimensions to 1 but found 4 dimensions, not going to pad
DEBUG: [Torch-TensorRT - Debug Build] - Input Name: input_0 Shape: [1, 3, 224, 224]
DEBUG: [Torch-TensorRT - Debug Build] - Output Name: output_0 Shape: [1, 1000]
INFO: [Torch-TensorRT - Debug Build] -
...
```
I think for some reason I am compiling in debug/developer mode (if there is such a thing). I have tried compiling Torch-TensorRT using:
```bash
bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0 --linkopt=-Wl,--strip-all --copt=-O3
```
I hoped that the `--linkopt=-Wl,--strip-all` option would solve my problem. Is there any way to deactivate the debugger? I am using the C++ API. Is there anything in the compilation stage, or any routine to integrate into my code, that can help me run my code with the logger disabled?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version: 2.0
- CPU Architecture: x64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): source
- Build command you used (if compiling from source): [build tutorial](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048)
- Are you using local sources or building from archives: [ref tutorial](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048)
- Python version: 3.8
- CUDA version: 11.4
- GPU models and configuration:
- Any other relevant information: Jetson AGX Xavier
## Additional context
TensorRT version: 1.4 Release version
|
https://github.com/pytorch/TensorRT/issues/2648
|
closed
|
[
"question"
] | 2024-02-20T05:56:41Z
| 2024-02-20T06:15:13Z
| null |
AndreasKaratzas
|
huggingface/chat-ui
| 840
|
LLama.cpp error - "String must contain at least 1 character(s)"
|
I keep getting this error after adding LLAMA-CPP inference endpoint locally. Adding this line causes this error.
```
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
```
Not sure how to fix it.
```
[
{
"code": "too_small",
"minimum": 1,
"type": "string",
"inclusive": true,
"exact": false,
"message": "String must contain at least 1 character(s)",
"path": [
0,
"endpoints",
0,
"accessToken"
]
}
]
ZodError: [
{
"code": "too_small",
"minimum": 1,
"type": "string",
"inclusive": true,
"exact": false,
"message": "String must contain at least 1 character(s)",
"path": [
0,
"endpoints",
0,
"accessToken"
]
}
]
at get error [as error] (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:538:31)
at ZodArray.parse (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:638:22)
at C:\Users\SRU\Desktop\chatui\src\lib\server\models.ts:75:40
at async instantiateModule (file:///C:/Users/SRU/Desktop/chatui/node_modules/vite/dist/node/chunks/dep-529
```
Full Config:
```
# Use .env.local to change these variables
# DO NOT EDIT THIS FILE WITH SENSITIVE DATA
MONGODB_URL=mongodb://localhost:27017/
MONGODB_DB_NAME=chat-ui
MONGODB_DIRECT_CONNECTION=false
COOKIE_NAME=hf-chat
HF_TOKEN=#hf_<token> from from https://huggingface.co/settings/token
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENAI_API_KEY=#your openai api key here
HF_ACCESS_TOKEN=#LEGACY! Use HF_TOKEN instead
# used to activate search with web functionality. disabled if none are defined. choose one of the following:
YDC_API_KEY=#your docs.you.com api key here
SERPER_API_KEY=#your serper.dev api key here
SERPAPI_KEY=#your serpapi key here
SERPSTACK_API_KEY=#your serpstack api key here
USE_LOCAL_WEBSEARCH=#set to true to parse google results yourself, overrides other API keys
SEARXNG_QUERY_URL=# where '<query>' will be replaced with query keywords see https://docs.searxng.org/dev/search_api.html eg https://searxng.yourdomain.com/search?q=<query>&engines=duckduckgo,google&format=json
WEBSEARCH_ALLOWLIST=`[]` # if it's defined, allow websites from only this list.
WEBSEARCH_BLOCKLIST=`[]` # if it's defined, block websites from this list.
# Parameters to enable open id login
OPENID_CONFIG=`{
"PROVIDER_URL": "",
"CLIENT_ID": "",
"CLIENT_SECRET": "",
"SCOPES": ""
}`
# /!\ legacy openid settings, prefer the config above
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_SCOPES="openid profile" # Add "email" for some providers like Google that do not provide preferred_username
OPENID_PROVIDER_URL=https://huggingface.co # for Google, use https://accounts.google.com
OPENID_TOLERANCE=
OPENID_RESOURCE=
# Parameters to enable a global mTLS context for client fetch requests
USE_CLIENT_CERTIFICATE=false
CERT_PATH=#
KEY_PATH=#
CA_PATH=#
CLIENT_KEY_PASSWORD=#
REJECT_UNAUTHORIZED=true
MODELS=`[
{
"name": "mistralai/Mistral-7B-Instruct-v0.1",
"displayName": "mistralai/Mistral-7B-Instruct-v0.1",
"description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["</s>"]
},
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
}
]`
OLD_MODELS=`[]`
PUBLIC_ORIGIN=#https://huggingface.co
PUBLIC_SHARE_PREFIX=#https://hf.co/chat
PUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable
PUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable
PUBLIC_ANNOUNCEMENT_BANNERS=`[
{
"title": "Code Llama 70B is available! 🦙",
"linkTitle": "try it",
"linkHref": "https://huggingface.co/chat?model=codellama/CodeLlama-70b-Instruct-hf"
}
]`
PARQUET_EXPORT_DATASET=
PARQUET_EXP
```
|
https://github.com/huggingface/chat-ui/issues/840
|
open
|
[
"bug",
"models"
] | 2024-02-19T13:33:24Z
| 2024-02-22T14:51:48Z
| 2
|
szymonrucinski
|
huggingface/datatrove
| 93
|
Tokenization for Non English data
|
Hi HF team
I want to thank you for this incredible work.
And I have a question: I want to apply the deduplication pipeline to Arabic data.
For this I think I should change the tokenizer. If so, is there a tip for this?
Should I just edit the tokenizer here?
```python
class SentenceDedupFilter(PipelineStep):
    type = "🫂 - DEDUPS"
    name = "💥 sentence-deduplication stage 3"

    def __init__(
        self,
        data_folder: DataFolderLike,
        n_sentences: int = 3,
        min_doc_words: int = 50,
        exclusion_writer: DiskWriter = None,
    ):
        """Args:
        data_folder: data folder to get duplicate files.
        min_doc_words: min amount of words for each document
        """
        from nltk import load

        super().__init__()
        self.data_folder = get_datafolder(data_folder)
        self.n_sentences = n_sentences
        self.min_doc_words = min_doc_words
        self._tokenizer = load("tokenizers/punkt/english.pickle")  # <- the tokenizer in question
        self.exclusion_writer = exclusion_writer
```
any recommendations please?
Thanks
|
https://github.com/huggingface/datatrove/issues/93
|
closed
|
[
"question"
] | 2024-02-19T11:02:04Z
| 2024-04-11T12:47:24Z
| null |
Manel-Hik
|
pytorch/pytorch
| 120,194
|
model loaded with torch._export.aot_load does not report which file is not found during inference, only a CUDA driver error
|
### 🐛 Describe the bug
when I load a pt2 model exported with torch._export in one Docker container from the image `ghcr.io/pytorch/pytorch-nightly:2.3.0.dev20240211-cuda12.1-cudnn8-devel` I get a working inference.
But when I run it in another container derived from the same base image, I get a CUDA driver error. I can't track down the error because the error message doesn't give me anything to go on. I've confirmed that nvidia-smi, nvcc --version, the torch version and all environment variables from `docker inspect` are the same between the two running containers. I can't identify anywhere that another torch version is installed and I can't see any other cuda versions installed in `/usr/local` that might cause a conflict.
```
import torch
model = torch._export.aot_load("./compiled_model_satlas/satlas_pt2.so", device="cuda")
device = torch.device("cuda:" + str(torch.cuda.current_device()))
torch.cuda.set_device(device)
print("Current device:", device)
test_im_ts = torch.randn((9*4, 256, 256)).to(device)
x = torch.stack(6*[test_im_ts], dim=0)
outputs_aot, _ = model(x)
```
the error is below
```
Error: CUDA driver error: file not found
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[10], line 3
1 test_im_ts = torch.randn((9*4,256,256)).to(device)
2 x = torch.stack(6*[test_im_ts], dim=0)
----> 3 outputs_aot, _ = model(x)
File /opt/conda/lib/python3.10/site-packages/torch/_export/__init__.py:421, in aot_load.<locals>.optimized(*args, **kwargs)
419 out_spec = pytree.treespec_loads(call_spec[1])
420 flat_inputs = pytree.tree_flatten((args, reorder_kwargs(kwargs, in_spec)))[0]
--> 421 flat_outputs = runner.run(flat_inputs) # type: ignore[attr-defined]
422 return pytree.tree_unflatten(flat_outputs, out_spec)
RuntimeError: run_func_( container_handle_, input_handles.data(), input_handles.size(), output_handles.data(), output_handles.size(), cuda_stream_handle, proxy_executor_handle_) API call failed at ../torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 75
```
### Versions
Details for the container where inference fails
```
PyTorch version: 2.3.0.dev20240210
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 545.23.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 6399.96
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch
|
https://github.com/pytorch/pytorch/issues/120194
|
closed
|
[
"triaged",
"oncall: pt2",
"module: aotinductor"
] | 2024-02-19T07:12:30Z
| 2025-02-07T08:44:15Z
| null |
rbavery
|
huggingface/safetensors
| 443
|
Efficient key-wise streaming
|
### Feature request
I'm interested in streaming the tensors in a model key by key without having to hold all keys at the same time in memory. Something like this:
```python
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        tensor = f.get_tensor(key, stream=True)
        # `tensor` will be garbage collected in the next GC pass
        # as soon as the next iteration removes the only reference to it
```
### Motivation
When I use `safetensors.safe_open` to load multiple models, the memory usage does not drop down even when the deserialized tensors do not have a reference held to them. This is a key by key streamed merge of 5 stable diffusion 1.5 checkpoints using a weighted sum:
(each vertical gray line is ~8GB)

For reference, this is my successful attempt at reading keys memory efficient in python:
https://github.com/ljleb/sd-mecha/blob/9548ef83dd5d3fccdaf09c8b22dee7a0a7727613/sd_mecha/streaming.py#L12
And this is my successful attempt at making writing keys memory efficient:
https://github.com/ljleb/sd-mecha/blob/9548ef83dd5d3fccdaf09c8b22dee7a0a7727613/sd_mecha/streaming.py#L156
Which looks like this:

Note that my implementation is relatively slow compared to simply using safetensors directly (approximately 1.1x to 1.3x slower according to some quick test I made). Is there any way the same could be achieved but in a more computationally efficient way using the rust bindings? Specifically, I need to stream the keys and the tensors without them being held somewhere else in memory.
### Your contribution
I don't really know Rust but if nobody has time for this and there isn't a problem with my suggested approach to the API above, I will eventually have to implement this efficiently in one way or another for my merging lib.
|
https://github.com/huggingface/safetensors/issues/443
|
closed
|
[
"Stale"
] | 2024-02-18T23:22:09Z
| 2024-04-17T01:47:28Z
| 4
|
ljleb
|
huggingface/community-events
| 200
|
How to prepare audio dataset for whisper fine-tuning with timestamps?
|
I am trying to prepare a dataset for Whisper fine-tuning, and I have a lot of small segment clips, most of them less than 6 seconds. I read the paper, but didn't understand this paragraph:
“ When a final transcript segment is only partially included in the current 30- second audio chunk, we predict only its start time token for the segment when in timestamp mode, to indicate that the subsequent decoding should be performed on an audio window aligned with that time, otherwise we truncate the audio to not include the segment”
So when should I add the final segment if it is partially included in the current 30-second chunk, and when should I truncate the chunk without it? And if I add it, how do I extract only the relevant transcription?
To make it clear:
```
| window | window |
|segment|-----segment---|--segment--|
```
Assume that every window is 30 seconds: how do I get the correct relevant transcription of the partially included segments?
Anyone could help?
|
https://github.com/huggingface/community-events/issues/200
|
open
|
[] | 2024-02-18T19:50:33Z
| 2024-02-18T19:55:06Z
| null |
omarabb315
|
huggingface/diffusers
| 7,010
|
How to set export HF_HOME on Kaggle?
|
Kaggle's temporary disk is slow once again, and I want models to be downloaded into the working directory.
I have used the command below, but it didn't work. Which command do I need?
`!export HF_HOME="/kaggle/working"`
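For reference, a minimal sketch of one approach, under the assumption that the variable has to be set inside the Python process before any Hugging Face library is imported (a `!export ...` line in a notebook only affects that one shell subprocess, so it doesn't persist); the model id is only an example:
```python
import os

# Must be set before importing diffusers / transformers / huggingface_hub,
# because the default cache location is resolved at import time.
os.environ["HF_HOME"] = "/kaggle/working/hf_home"

from diffusers import DiffusionPipeline  # noqa: E402

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```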
|
https://github.com/huggingface/diffusers/issues/7010
|
closed
|
[
"bug"
] | 2024-02-18T11:15:21Z
| 2024-02-18T14:39:08Z
| null |
FurkanGozukara
|
huggingface/optimum-benchmark
| 126
|
How to obtain the data from the 'forward' and 'generate' stages?
|
I used the same configuration file to test the model, but the results obtained are different from those of a month ago. In the result files from a month ago, data from both the forward and generate stages were included; however, the current generated result files only contain information from the prefill and decode stages. Here is the configuration file:
defaults:
  - backend: pytorch # default backend
  - launcher: process # default launcher
  - benchmark: inference # default benchmark
  - experiment # inheriting experiment schema
  - _self_ # for hydra 1.1 compatibility
  - override hydra/job_logging: colorlog # colorful logging
  - override hydra/hydra_logging: colorlog # colorful logging

experiment_name: pytorch_qwen7b
model: Qwen/Qwen-7B
device: cpu

launcher:
  device_isolation: true

benchmark:
  memory: true
  input_shapes:
    batch_size: 1
    sequence_length: 256
  new_tokens: 1000

hub_kwargs:
  trust_remote_code: true

hydra:
  run:
    dir: runs/${experiment_name}
  sweep:
    dir: sweeps/${experiment_name}
  job:
    chdir: true
    env_set:
      OVERRIDE_BENCHMARKS: 1
      CUDA_VISIBLE_DEVICES: 0
      CUDA_DEVICE_ORDER: PCI_BUS_ID
|
https://github.com/huggingface/optimum-benchmark/issues/126
|
closed
|
[] | 2024-02-18T09:48:44Z
| 2024-02-19T16:06:24Z
| null |
WCSY-YG
|
huggingface/chat-ui
| 838
|
Explore the possibility for chat-ui to use OpenAI assistants API structure.
|
Hi @nsarrazin, I wanted to explore how we could collaborate on making chat-ui work more with OpenAI standards, to make it less opinionated about the hosted inference provider. I need this because I am part of a team open-sourcing the GPTs platform https://github.com/OpenGPTs-platform, and we will be leveraging chat-ui as the client. So I was hoping we could align our objectives so that we can have a healthy collaboration instead of just diverging. The main point I wanted to touch on is as follows.
Is there any interest in transforming the backend into one that follows the OpenAI Assistants API structure, so that we may better align ourselves with the OpenAI standard? Based on the Discord announcement "...Message API with OpenAI compatibility for HF...", HF seems to signal that they are pushing in that direction, so it would make sense to support that in chat-ui. I haven't looked too deeply into the codebase, but I imagine we will need to refactor the backend endpoints to support Assistants API endpoints and then use the openai client to make the requests.
I am more than open to suggestions, and I look forward to exploring how we could collab!
|
https://github.com/huggingface/chat-ui/issues/838
|
open
|
[
"enhancement",
"good first issue",
"back"
] | 2024-02-17T21:39:49Z
| 2024-12-26T05:55:47Z
| 4
|
CakeCrusher
|
huggingface/candle
| 1,720
|
How to define custom ops with arbitrary number of tensors ?
|
I dug into the issues and the repo on this subject, because I wanted to be able to call CUDA kernels for 3D Gaussian splatting, and the way to invoke those kernels seems to be custom ops. But right now, we only have:
```
CustomOp1(Tensor, std::sync::Arc<Box<dyn CustomOp1 + Send + Sync>>),
CustomOp2(
    Tensor,
    Tensor,
    std::sync::Arc<Box<dyn CustomOp2 + Send + Sync>>,
),
CustomOp3(
    Tensor,
    Tensor,
    Tensor,
    std::sync::Arc<Box<dyn CustomOp3 + Send + Sync>>,
)
```
And those gsplat kernels have way more in and/or out tensors depending on the operation.
I can think of ways to do it, but I was wondering if there was a _**good**_ way to do it?
|
https://github.com/huggingface/candle/issues/1720
|
open
|
[] | 2024-02-16T21:38:16Z
| 2024-03-13T13:44:17Z
| null |
jeanfelixM
|
huggingface/chat-ui
| 837
|
Cannot find assistants UI in the repo
|
Hi @nsarrazin, I recently cloned chat-ui and I noticed that the new assistants UI is missing, at the very least from the main branch.
Is the assistants UI somewhere in the repo?
If not, are there any plans to make it open source?
If so, when?
|
https://github.com/huggingface/chat-ui/issues/837
|
closed
|
[] | 2024-02-16T20:13:39Z
| 2024-02-17T21:29:08Z
| 4
|
CakeCrusher
|
pytorch/pytorch
| 120,079
|
Use sys.settrace or torch function mode to compute how much of a model was not covered by Dynamo
|
### 🐛 Describe the bug
Suppose you have a model with a bunch of graph breaks / WON'T CONVERT. How much of the model have you managed to capture versus not capture? There are two metrics you could use to figure this out:
* When you run the model in eager mode, it will have run some number of calls to torch functions. You can count how many of these calls occur outside of Dynamo-compiled regions, compared to those captured in Dynamo regions. This gives you "missing torch function call captures / total number of torch function calls in the torch.compile region"
* When you run the model in eager mode, you will run some number of bytecodes. You can use sys.settrace to count how many bytecodes are processed in the eager region, and get "number of bytecodes evaluated outside of the Dynamo region / total number of bytecodes"
This can give you a much better idea of how much of the model you've managed to capture, as opposed to just the number of graph breaks.
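For reference, a minimal sketch of the bytecode-counting half of this idea, using standard `sys.settrace` opcode tracing (CPython 3.7+); the helper names and the toy `eager_region` function are just illustrative:
```python
import sys
from collections import Counter

counts = Counter()

def tracer(frame, event, arg):
    # Ask CPython to emit per-opcode events for every frame we trace.
    frame.f_trace_opcodes = True
    if event == "opcode":
        counts["bytecodes"] += 1
    return tracer

def run_counted(fn, *args, **kwargs):
    """Run fn while counting bytecodes executed in Python frames."""
    sys.settrace(tracer)
    try:
        return fn(*args, **kwargs)
    finally:
        sys.settrace(None)

def eager_region(x):
    return sum(i * x for i in range(10))

run_counted(eager_region, 3)
print(counts["bytecodes"], "bytecodes evaluated in the eager region")
```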
### Versions
main
cc @chauhang @penguinwu @msaroufim @bdhirsh @anijain2305 @zou3519
|
https://github.com/pytorch/pytorch/issues/120079
|
open
|
[
"feature",
"low priority",
"module: logging",
"triaged",
"oncall: pt2"
] | 2024-02-16T14:54:04Z
| 2025-07-11T18:03:17Z
| null |
ezyang
|
huggingface/dataset-viewer
| 2,456
|
Link to the endpoint doc page in case of error?
|
e.g. https://datasets-server.huggingface.co/parquet
could return
```json
{"error":"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet"}
```
or
```json
{"error":"Parameter 'dataset' is required.", "docs": "https://huggingface.co/docs/datasets-server/parquet"}
```
instead of
```json
{"error":"Parameter 'dataset' is required"}
```
|
https://github.com/huggingface/dataset-viewer/issues/2456
|
open
|
[
"documentation",
"question",
"api",
"P2"
] | 2024-02-15T11:11:44Z
| 2024-02-15T11:12:12Z
| null |
severo
|
pytorch/text
| 2,230
|
how to install libtorchtext for use in a C++ project? Please give some instructions, thanks
|
## 🐛 Bug
**Describe the bug** A clear and concise description of what the bug is.
**To Reproduce** Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior** A clear and concise description of what you expected to happen.
**Screenshots** If applicable, add screenshots to help explain your problem.
**Environment**
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or
fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
python -c "import torchtext; print(\"torchtext version is \", torchtext.__version__)"
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
**Additional context** Add any other context about the problem here.
|
https://github.com/pytorch/text/issues/2230
|
open
|
[] | 2024-02-15T04:01:32Z
| 2024-02-15T04:01:32Z
| null |
mullerhai
|
pytorch/audio
| 3,746
|
how to install libtorchaudio for cpp project ?
|
### 🐛 Describe the bug
Hi, I git cloned the audio project, added the libtorch path to the audio CMakeLists.txt, and tried make && make install. Everything finished, but I cannot find a libtorchaudio.dylib file on my Intel macOS machine, only libtorchaudio.so and libtorchaudio_sox.so in /usr/local/torchaudio.
### Versions
latest
|
https://github.com/pytorch/audio/issues/3746
|
open
|
[] | 2024-02-15T02:28:30Z
| 2024-02-15T02:28:30Z
| null |
mullerhai
|
pytorch/torchx
| 824
|
Determine scheduler from component level
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
<!-- your question here -->
Is it possible to tell, or fill in at runtime, which scheduler gets used in component logic? For example, if I have a ddp component, can I set within the component, before I return specs.AppDef, a macro that would tell me which scheduler this component gets run with?
For example, I want to set some environment variables but differentiate based on which scheduler gets used.
|
https://github.com/meta-pytorch/torchx/issues/824
|
open
|
[] | 2024-02-14T23:01:27Z
| 2024-02-16T01:56:46Z
| 1
|
ryxli
|
huggingface/gsplat.js
| 64
|
How to render from a set of camera position?
|
Hi, I am trying to render the scene from a set of camera position/rotation that I load from a JSON file.
I think the right way is first to disable the "orbitControls" (engine.orbitControls.enabled = false;) and then set the camera position/rotation manually like this: 'camera.data.update(position, rotation);'. Am I right?
Any suggestion/recommendation is welcome!
|
https://github.com/huggingface/gsplat.js/issues/64
|
closed
|
[] | 2024-02-14T16:11:28Z
| 2024-02-19T18:13:38Z
| null |
vahidEtt
|
huggingface/chat-ui
| 824
|
what port is used by the websearch?
|
I put the chat in a container in a cluster with my MongoDB.
The web search stopped working. I think it might be related to me not opening a port for the web search to access the web, and I could not find a doc that describes how the web search works.
I would love to know which port(s) I should open, and a bit more detail in general.
Thanks in advance.
|
https://github.com/huggingface/chat-ui/issues/824
|
open
|
[
"support",
"websearch"
] | 2024-02-14T11:15:22Z
| 2024-02-14T12:52:25Z
| null |
kaplanyaniv
|
huggingface/transformers.js
| 586
|
Does `WEBGPU` Truly Enhance Inference Time Acceleration?
|
### Question
Recently, I've been extensively utilizing transformers.js to load transformer models, and Kudos to the team for this wonderful library ...
Specifically, I've been experimenting with version 2.15.0 of transformers.js.
Despite the fact that the model runs on the `web-assembly backend`, I've noticed some slowness in inference. In an attempt to address this issue, I experimented with `webgpu inference` using the `v3` branch. However, the inference time did not meet my expectations.
Is it possible for webgpu to significantly accelerate the inference time?
|
https://github.com/huggingface/transformers.js/issues/586
|
closed
|
[
"question"
] | 2024-02-14T09:23:52Z
| 2024-10-18T13:30:13Z
| null |
kishorekaruppusamy
|
huggingface/chat-ui
| 823
|
WebSearch uses the default model instead of current model selected
|
I have multiple models in my .env.local and it seems the WebSearch uses the default model to perform its search content extraction instead of the currently selected model (the one that I'm asking the question to...) Is it possible to add a config option to use same model for everything?
|
https://github.com/huggingface/chat-ui/issues/823
|
open
|
[
"enhancement",
"back",
"models"
] | 2024-02-14T07:52:59Z
| 2024-02-14T13:07:20Z
| 4
|
ihubanov
|
huggingface/trl
| 1,327
|
how to save/load model?
|
I've tried saving the model via:
ppo_trainer.save_pretrained("./model_after_rl")
and loading the model via:
model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")
But the performance is the same as without any reinforcement learning when I add the loaded model to a new PPO trainer, freeze the model, and test again.
|
https://github.com/huggingface/trl/issues/1327
|
closed
|
[] | 2024-02-14T06:56:07Z
| 2024-04-24T15:05:14Z
| null |
ADoublLEN
|
huggingface/accelerate
| 2,440
|
How to properly gather results of PartialState for inference on 4xGPUs
|
### System Info
```Shell
torch==2.2.0
transformers==4.37.2
accelerate==0.27.0
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, my question may look stupid, but I want to ask for clarification because I didn't find it in the [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/distributed_inference#sending-chunks-of-a-batch-automatically-to-each-loaded-model).
I have 2 million documents to process with an NER model, and I have 4 GPUs. I don't want to write a script with multiprocessing and manually handle each GPU, so I decided to try Accelerate.
```python
# Assume there are two processes
from accelerate import PartialState
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained('ner')
tokenizer = AutoTokenizer.from_pretrained('ner')
state = PartialState()
# pipelines have no .to(); pass the process device when building the pipeline
ner = pipeline('token-classification', model=model, tokenizer=tokenizer,
               aggregation_strategy="simple", device=state.device)
# here the list of lists, which I want to treat as a list of batches
data = [[{'text': 'text1', 'id': 1}, {'text': 'text2', 'id': 2}], [{'text': 'text3', 'id': 3}, {'text': 'text4', 'id': 4}]]
results = []
with state.split_between_processes(data) as inputs:
    outputs = ner([i['text'] for i in inputs], max_length=128)
    for i, o in zip(inputs, outputs):
        i['annotation'] = o
        results.append(i)
```
And my question is: am I gathering results properly, or could there be problems because the work is distributed between different processes?
How do I properly gather results when using `split_between_processes`?
### Expected behavior
The documentation should have more examples of how to gather data.
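For what it's worth, a minimal sketch of collecting the per-process `results` with Accelerate's object gathering, assuming `accelerate.utils.gather_object` (which accepts arbitrary picklable objects); the fake data and the stand-in annotation are only there to keep the example self-contained:
```python
from accelerate import PartialState
from accelerate.utils import gather_object

state = PartialState()
data = [{"text": f"text{i}", "id": i} for i in range(8)]

results = []
with state.split_between_processes(data) as inputs:
    for item in inputs:
        item["annotation"] = len(item["text"])  # stand-in for the NER call
        results.append(item)

# Flatten the per-process lists into one list, identical on every rank.
gathered = gather_object(results)
if state.is_main_process:
    print(len(gathered), "items gathered")
```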
|
https://github.com/huggingface/accelerate/issues/2440
|
closed
|
[] | 2024-02-13T14:00:13Z
| 2024-03-23T15:07:26Z
| null |
ZeusFSX
|
huggingface/chat-ui
| 818
|
Settings Page Freezes
|
When I go to settings to change the model (after I ran a conversation with a model), the settings page can't be closed; it freezes. Right now I have to keep reloading the page to use it.
|
https://github.com/huggingface/chat-ui/issues/818
|
closed
|
[
"question",
"support"
] | 2024-02-13T13:30:01Z
| 2024-02-16T09:41:23Z
| null |
lordsoffallen
|
huggingface/candle
| 1,701
|
How to train my own YOLOv8 model?
|
Candle provides an example of YOLOv8, which is very useful to use.
But I don't know how to train it on my own dataset. Can Candle directly load a model trained with PyTorch?
|
https://github.com/huggingface/candle/issues/1701
|
open
|
[] | 2024-02-13T01:56:49Z
| 2024-03-18T13:45:07Z
| null |
mzdk100
|
huggingface/transformers.js
| 585
|
Using a server backend to generate masks - doublelotus
|
### Question
Hi there, just continuing on from my question on - https://huggingface.co/posts/Xenova/240458016943176#65ca9d9c8e0d94e48742fad7.
I've just been reading through your response, and initially I was trying it using a Python backend and attempted to mimic the worker.js code like so:
```py
from transformers import SamModel, SamProcessor, AutoProcessor
import numpy as np
model = SamModel.from_pretrained("Xenova/sam-vit-large")
processor = AutoProcessor.from_pretrained("Xenova/sam-vit-large")
```
but I was running into this error (I'm assuming that model isn't supported by a Python backend):
OSError: Xenova/sam-vit-large does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
The main reason for trying this is that sam-vit-base was quite slow at generating image embeddings in the web app. Would using a Node.js server with the ONNX runtime, as you suggested, be much faster, or is there a better way to achieve that?
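For reference, a minimal sketch of the PyTorch-backend route, assuming the original `facebook/sam-vit-large` checkpoint is used instead of the ONNX-only `Xenova/sam-vit-large` export (the missing `pytorch_model.bin` error above is consistent with the Xenova repo shipping only ONNX weights); the blank image and the point prompt are placeholders:
```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

model = SamModel.from_pretrained("facebook/sam-vit-large")
processor = SamProcessor.from_pretrained("facebook/sam-vit-large")

image = Image.new("RGB", (512, 512))  # placeholder; use a real image here
inputs = processor(image, input_points=[[[256, 256]]], return_tensors="pt")

# Keep the resize metadata for post-processing, pass the rest to the model.
original_sizes = inputs.pop("original_sizes")
reshaped_sizes = inputs.pop("reshaped_input_sizes")

with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, original_sizes, reshaped_sizes
)
print(masks[0].shape)
```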
|
https://github.com/huggingface/transformers.js/issues/585
|
open
|
[
"question"
] | 2024-02-13T00:06:20Z
| 2024-02-28T19:29:26Z
| null |
jeremiahmark
|
huggingface/chat-ui
| 817
|
Question: Can someone explain "public app data sharing with model authors" please?
|
I am struggling to understand in which way data can be, or actually is, shared, and with whom, when the setting `shareConversationsWithModelAuthors` is activated (which it is by default).
```javascript
{#if PUBLIC_APP_DATA_SHARING === "1"}
<!-- svelte-ignore a11y-label-has-associated-control -->
<label class="flex items-center">
<Switch
name="shareConversationsWithModelAuthors"
bind:checked={$settings.shareConversationsWithModelAuthors}
/>
<div class="inline cursor-pointer select-none items-center gap-2 pl-2">
Share conversations with model authors
</div>
</label>
<p class="text-sm text-gray-500">
Sharing your data will help improve the training data and make open models better over time.
</p>
{/if}
```
What exactly will or can happen when this is activated?
Thanks!
|
https://github.com/huggingface/chat-ui/issues/817
|
closed
|
[
"question"
] | 2024-02-12T19:18:03Z
| 2024-02-16T14:32:18Z
| null |
TomTom101
|
pytorch/pytorch
| 119,604
|
How to deal with mypy checking fx_node.args[i].meta?
|
# Issue
It's common in Inductor FX passes to do something like this
```
node: torch.fx.Node = ...
arg1: torch.fx.Argument = node.args[0]
arg2: torch.fx.Argument = node.args[1]
a, b = arg1.meta, arg2.meta
# do something with a & b
```
However, mypy will call this out ([see](https://mypy.readthedocs.io/en/stable/error_code_list.html#check-that-attribute-exists-in-each-union-item-union-attr)). It's checking that the attribute (i.e. `meta`) exists in each of the types listed for [fx.node.Argument](https://github.com/pytorch/pytorch/blob/a7f82b7d628eb2b966bc53e593dcf32049b2b10e/torch/fx/node.py#L26-L34).
```
Item ... of "tuple[Any, ...] | list[Any] | dict[str, Any] | slice | range | Node | str | int | float | bool | complex | dtype | Tensor | device | memory_format | layout | OpOverload | None" has no attribute "meta" [union-attr]
```
# Workarounds
1. Do some runtime checks to assure mypy that it's okay (see the sketch after the code below), e.g.:
- `isinstance(arg1, torch.fx.Node)`
- `if hasattr(arg1, 'meta'):`
2. Slap on a # type: ignore[union-attr]
3. Surface a `node.args` getter method in fx/node.py. Illustrated in code below.
```
def get_arg(arg_type: Type[T], i: int) -> T:
    assert isinstance(self.args[i], arg_type)
    return self.args[i]
```
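For reference, a minimal sketch of workaround (1), the runtime-narrowing approach (the helper name and the demo graph are just illustrative):
```python
import torch
import torch.fx

def get_arg_meta(node: torch.fx.Node, i: int):
    arg = node.args[i]
    # The runtime check narrows the fx.Argument union, so mypy knows `.meta` exists.
    assert isinstance(arg, torch.fx.Node), f"expected a Node, got {type(arg)}"
    return arg.meta

# Tiny demo graph.
def f(x):
    return torch.relu(x) + 1

gm = torch.fx.symbolic_trace(f)
add_node = [n for n in gm.graph.nodes if n.op == "call_function"][-1]
print(get_arg_meta(add_node, 0))  # meta dict of the relu node feeding the add
```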
# Thoughts
At the moment, (2), the slap on approach, seems to be present in quite a few places. Here's two examples ([1](https://github.com/pytorch/pytorch/pull/119085/files#diff-800bd8ca3e84db0b1988eb1c289bbe892b2acfcd013c2ff04117ce9bd5615480L346), [2](https://github.com/pytorch/pytorch/pull/119422/files#diff-118f7e6a8110f30c6894a530eea254b6cff4338add31d83825365b6cac47bdc5R368-R374)).
```
$ grep -P -rn "meta.* # type: ignore\[union-attr\]" torch/_inductor | wc -l
22
```
I think we could follow (1) every time we want to call `node.args[0].meta`, or handle this at a lower level like (3), surfacing a getter method. Or maybe there's a 4th option?
|
https://github.com/pytorch/pytorch/issues/119604
|
closed
|
[] | 2024-02-09T22:42:44Z
| 2024-02-10T00:01:10Z
| null |
ColinPeppler
|
pytorch/pytorch
| 119,590
|
Decide whether / how to ban SAC + inplace ops in eager
|
SAC exists as an API today (see [code](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L1256)), but:
(1) it "context" fn has a pt2-specific name
(1) We have a warning in the docs that it should only be used with `torch.compile`
(2) We have no warning or error that gets emitted at runtime if you actually use SAC with eager mode.
My understanding is that the main issue with always-allowing SAC to be used in eager has to do with handling for inplace ops. More diagnosis was in this issue: https://github.com/pytorch/pytorch/issues/113737
I think it can be summarized by this repro, where eager mode vs. "eager mode + SAC" produce different outputs, when an inplace op is involved:
```
import torch
from torch._custom_op.functional import register_functional_op
import torch.utils.checkpoint
from torch.utils.checkpoint import checkpoint, _pt2_selective_checkpoint_context_fn_gen
def custom_policy(mode, func, *args, **kwargs):
    return func in [torch.ops.aten.mm.default]

def selective_checkpointing_context_fn():
    return _pt2_selective_checkpoint_context_fn_gen(custom_policy)

def gn(x, y):
    return torch.selu_(torch.matmul(x, y))

def fn(x, y):
    return torch.utils.checkpoint.checkpoint(
        gn,
        x,
        y,
        use_reentrant=False,
        context_fn=selective_checkpointing_context_fn,
    )
x = torch.arange(16, dtype=torch.float32, requires_grad=True).reshape(4, 4).detach().requires_grad_(True)
y = torch.arange(16, dtype=torch.float32, requires_grad=True).reshape(4, 4).detach().requires_grad_(True)
out1 = gn(x, y)
print(out1)
out1.sum().backward()
print(out1)
out2 = fn(x, y)
print(out2)
# With SAC + eager mode:
# (1) "out" is an activation saved for backward
# (2) selu_() is part of the recompute, which mutates out **again**, during the backward pass!
# Invoking the backward will mutate out!
out2.sum().backward()
print(out2)
# False
print(torch.allclose(out1, out2))
```
Just to collect some possible options:
(1) [easiest] Ban SAC completely in eager
(2) [medium] Ban SAC in eager whenever there are any inplace ops
(3) [hard?] figure out how to detect exactly the case when outputs/gradients would diverge without SAC, and ban those cases
(4) [hard?] figure out how to functionalize away any mutations in an SAC region that would have changed numerics.
cc @soulitzer @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7
|
https://github.com/pytorch/pytorch/issues/119590
|
closed
|
[
"module: activation checkpointing",
"module: autograd",
"triaged",
"needs design"
] | 2024-02-09T20:33:05Z
| 2024-06-27T20:13:20Z
| null |
bdhirsh
|
huggingface/transformers.js
| 581
|
How can we use the sam-vit-huge in the production?
|
### Question
The size of ONNX files for sam-vit-huge is around 600MB. If I am using the implementation mentioned in the documentation, it downloads these files first before performing the image segmentation. Is there a better way to avoid downloading these files and reduce the time it takes? Additionally, the model is taking too much time to generate embeddings when using sam-vit-huge or sam-vit-large.
|
https://github.com/huggingface/transformers.js/issues/581
|
open
|
[
"question"
] | 2024-02-09T17:54:43Z
| 2024-02-09T17:54:43Z
| null |
moneyhotspring
|
huggingface/dataset-viewer
| 2,434
|
Create a new step: `config-features`?
|
See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.
We could:
- add a new /features endpoint
- or add a `features: bool` parameter to all the endpoints that return rows to include the features in the response.
The only exception is when a new commit happens, and the features have changed. But the Hub could check the `X-Revision` value and reload the page in case of a mismatch.
|
https://github.com/huggingface/dataset-viewer/issues/2434
|
open
|
[
"question",
"refactoring / architecture",
"P2"
] | 2024-02-09T14:13:10Z
| 2024-02-15T10:26:35Z
| null |
severo
|
huggingface/diffusers
| 6,920
|
How to merge a lot of embedding into a single file
|
I created a lot of embeddings through textual inversion, but I couldn't find a way to merge these checkpoints into a single file.
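For what it's worth, a minimal sketch of the merge mechanics, assuming each textual-inversion output is a `{placeholder_token: embedding_tensor}` dict (the file names and tokens here are made up):
```python
import torch

# Stand-ins for two textual-inversion outputs.
torch.save({"<cat-toy>": torch.randn(768)}, "cat_learned_embeds.bin")
torch.save({"<my-style>": torch.randn(768)}, "style_learned_embeds.bin")

# Merge all token -> embedding entries into one dict and save it once.
merged = {}
for path in ["cat_learned_embeds.bin", "style_learned_embeds.bin"]:
    merged.update(torch.load(path, map_location="cpu"))

torch.save(merged, "merged_learned_embeds.bin")
print(list(merged.keys()))
```
Whether a merged file in this shape loads cleanly with `pipe.load_textual_inversion(...)` depends on the exact checkpoint format, so that part is worth verifying.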
|
https://github.com/huggingface/diffusers/issues/6920
|
open
|
[
"stale"
] | 2024-02-09T08:18:42Z
| 2024-03-13T15:02:51Z
| null |
Eggwardhan
|
pytorch/pytorch
| 119,479
|
torch._constrain_as_value and related APIs accept Tensor, but this is typically not what you want
|
### 🐛 Describe the bug
Internal xref: https://fb.workplace.com/groups/6829516587176185/posts/6829896033804907/
Because we are willing to call item() on scalar Tensor, these APIs will "work" but they will keep generating fresh unbacked symbols, so the value range ends up not getting used by anything. Would be good to warn or error if you try to pass in a Tensor to these APIs.
### Versions
main
cc @msaroufim @bdhirsh @anijain2305 @zou3519
|
https://github.com/pytorch/pytorch/issues/119479
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2024-02-08T20:13:23Z
| 2024-09-13T03:10:12Z
| null |
ezyang
|
pytorch/pytorch
| 119,473
|
Document how to override autocast rules properly
|
Autocast is implemented as a dispatcher feature, and each rule is a relatively simple kernel registered on the right dispatch key for the right operator.
Overriding these rules can be done today by replacing the kernel registered by default with a custom one that does the appropriate casting before redispatching down, in a similar way as is done in the generic kernel we use: https://github.com/pytorch/pytorch/blob/def572929b2311b769ef79e66aebc70384b0f456/aten/src/ATen/autocast_mode.h#L467-L473.
All the tools are available to do this from a C++ extension via TORCH_LIBRARY* macros
Missing pieces for python when using torch.library:
- A way to call cached_cast from python https://github.com/pytorch/pytorch/blob/def572929b2311b769ef79e66aebc70384b0f456/aten/src/ATen/autocast_mode.cpp#L200C8-L200C19
- A public way to disable keys in python to enable calling down
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
|
https://github.com/pytorch/pytorch/issues/119473
|
open
|
[
"triaged",
"module: amp (automated mixed precision)"
] | 2024-02-08T19:02:00Z
| 2024-02-08T20:43:22Z
| null |
albanD
|
pytorch/serve
| 2,933
|
https://github.com/pytorch/serve/issues/2870 - New Release Required for this Fix
|
### 🐛 Describe the bug
Team,
It seems the worker auto-recovery fix is in this PR. Can we create a patch release so that we can proceed with the production update?
Thanks
Regards,
Deepak Kumar A
### Error logs
NA
### Installation instructions
NA
### Model Packaing
NA
### config.properties
_No response_
### Versions
0.8.1
### Repro instructions
0.8.1
### Possible Solution
_No response_
|
https://github.com/pytorch/serve/issues/2933
|
closed
|
[] | 2024-02-08T14:23:49Z
| 2024-03-20T21:51:41Z
| 2
|
DeepakkumarArumugam
|
huggingface/transformers
| 28,924
|
How to disable log history from getting printed every logging_steps
|
I'm writing a custom ProgressCallback that modifies the original ProgressCallback transformers implementation and adds some additional information/data to the tqdm progress bar. Here's what I have so far, and it works nicely and as intended.
```python
class ProgressCallback(TrainerCallback):
    """A [`TrainerCallback`] that displays the progress of training or evaluation.
    Specifically, it shows:
    1. Time spent so far in training or evaluation.
    2. Estimated time remaining for training or evaluation.
    3. Iterations per second.
    4. Loss.
    5. Number of input tokens seen so far.
    """

    def __init__(self):
        self.training_bar = None
        self.prediction_bar = None
        self.current_step: int = 0
        self.loss: float = math.nan
        self.num_input_tokens_seen = format_number_suffix(0)

    def on_train_begin(self, args, state, control, **kwargs):
        if state.is_world_process_zero:
            self.training_bar = tqdm(total=state.max_steps, dynamic_ncols=True)

    def on_step_end(self, args, state, control, **kwargs):
        if state.is_world_process_zero:
            self.training_bar.update(state.global_step - self.current_step)
            self.current_step = state.global_step

    def on_prediction_step(self, args, state, control, eval_dataloader=None, **kwargs):
        if state.is_world_process_zero and has_length(eval_dataloader):
            if self.prediction_bar is None:
                self.prediction_bar = tqdm(
                    total=len(eval_dataloader),
                    leave=self.training_bar is None,
                    dynamic_ncols=True,
                )
            self.prediction_bar.update(1)

    def on_evaluate(self, args, state, control, **kwargs):
        if state.is_world_process_zero:
            if self.prediction_bar is not None:
                self.prediction_bar.close()
            self.prediction_bar = None

    def on_predict(self, args, state, control, **kwargs):
        if state.is_world_process_zero:
            if self.prediction_bar is not None:
                self.prediction_bar.close()
            self.prediction_bar = None

    def on_log(self, args, state, control, logs=None, **kwargs):
        if state.is_world_process_zero and self.training_bar is not None:
            # The last callback_handler.on_log() call in the training loop logs `train_loss` as opposed to `loss`.
            # From some digging through transformers code, the `train_loss` is the average training loss
            # during training.
            # See: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2025-L2026
            self.loss = (
                state.log_history[-1]["loss"]
                if state.log_history and "loss" in state.log_history[-1]
                else state.log_history[-1]["train_loss"]
            )
            self.num_input_tokens_seen = format_number_suffix(state.num_input_tokens_seen)
            self.training_bar.set_postfix_str(
                f"loss: {self.loss:.4f}, tokens: {self.num_input_tokens_seen}",
            )

    def on_train_end(self, args, state, control, **kwargs):
        if state.is_world_process_zero:
            self.training_bar.close()
            self.training_bar = None
```
In my trainer arguments, I explicitly set `disable_tqdm` so I can pass this as a custom callback in place of the original ProgressCallback. I also set `logging_steps` to 1 so that I can get metrics back from every step through the `log_history` attribute in the TrainerState object.
The challenge I'm having is that it logs the metrics to stdout, but I am not sure where that actually comes from in the code. I don't want that behavior, since I want to surface the relevant information directly in my tqdm progress bar through my callback. Looking at the transformers trainer, I've narrowed down that metrics get passed to `on_log` in the callback, and that seems to happen from within this function at the end of each step of training and then again at the end of training: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2224
When I set a breakpoint at the end of `on_log` in my callback, I can confirm that the logs object doesn't get printed to stdout there. So it happens somewhere between that and the loop getting to the next training step, but I'm not sure if I am missing something obvious since I'm still new to the transformers codebase.
Here's what I see in my output:
```
***** Running training *****
Num examples = 183
Num Epochs = 3
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 16
Total optimization steps = 33
Number of trainable parameters = 256
3%|██▍ | 1/33 [00:01<00:34, 1.07s/it, loss
```
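If the goal is simply to stop those printed dicts, one possibility to try (assuming the stdout lines come from `PrinterCallback`, which `Trainer` falls back to when tqdm is disabled) is to remove that callback; `model`, `train_dataset`, and `ProgressCallback` below stand in for the objects defined earlier:
```python
from transformers import Trainer, TrainingArguments
from transformers.trainer_callback import PrinterCallback

args = TrainingArguments(output_dir="out", disable_tqdm=True, logging_steps=1)
trainer = Trainer(
    model=model,                      # your model
    args=args,
    train_dataset=train_dataset,      # your dataset
    callbacks=[ProgressCallback()],   # the custom callback defined above
)

# Drop the default stdout printer so only the custom tqdm postfix shows logs.
trainer.remove_callback(PrinterCallback)
trainer.train()
```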
|
https://github.com/huggingface/transformers/issues/28924
|
closed
|
[] | 2024-02-08T10:23:28Z
| 2024-02-08T17:26:02Z
| null |
arnavgarg1
|
huggingface/alignment-handbook
| 120
|
(QLoRA) DPO without previous SFT
|
Because of the following LLM-Leaderboard measurements, I want to perform QLoRA DPO without previous QLoRA SFT:
```
alignment-handbook/zephyr-7b-dpo-qlora: +Average: 63.51; +ARC 63.65; +HSwag 85.35; -+MMLU 63.82; ++TQA: 47.14; (+)Win 79.01; +GSM8K 42.08;
alignment-handbook/zephyr-7b-sft-qlora: -Average: 59; (+)ARC 60.07; (-)HSwag 82.36; -MMLU 61.65; -TQA: 38.88; -Win 76.8; -GSM8K 34.27;
mistralai/Mistral-7B-v0.1: Average: 60.97; ARC 59.98; HSwag 83.31; MMLU 64.16; TQA: 42.15; Win 78.37; GSM8K 37.83;
```
As you can see, there is catastrophic forgetting in `zephyr-7b-sft-qlora` in almost all tasks, especially in MMLU, TruthfulQA, and GSM8K. Thus I wonder why do SFT at all?
In more detail
============
Q1: Why is there so much catastrophic forgetting in `zephyr-7b-sft-qlora` ? Due to the following improvements by DPO, the dataset seems to be apt.
Q2: Why is SFT performed before DPO at all? Is it some prerequisite, like SFT training the model to follow instructions at all, before DPO aligning the responses to instructions with human preferences?
Q3: I tried the following for DPO without previous SFT:
Modify `recipes/zephyr-7b-beta/dpo/config_qlora.yaml` by using `model_name_or_path: mistralai/Mistral-7B-v0.1` and then calling `scripts/run_dpo.py` on it:
```
echo -e "2,3c2\n< model_name_or_path: mistralai/Mistral-7B-v0.1\n< model_revision: main\n---\n> model_name_or_path: alignment-handbook/zephyr-7b-sft-qlora\n36c35\n< gradient_accumulation_steps: 8\n---\n> gradient_accumulation_steps: 2\n40c39\n< hub_model_id: zephyr-7b-dpo-qlora-no-sft\n---\n> hub_model_id: zephyr-7b-dpo-qlora\n49,51c48,50\n< output_dir: data/zephyr-7b-dpo-qlora-no-sft # It is handy to append `hub_model_revision` to keep track of your local experiments\n< per_device_train_batch_size: 1\n< per_device_eval_batch_size: 2\n---\n> output_dir: data/zephyr-7b-dpo-qlora # It is handy to append `hub_model_revision` to keep track of your local experiments\n> per_device_train_batch_size: 4\n> per_device_eval_batch_size: 8\n53,55d51\n< report_to:\n< - tensorboard\n< - wandb" | patch recipes/zephyr-7b-beta/dpo/config_qlora.yaml
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_qlora.yaml
```
However, I get the error described at https://github.com/huggingface/alignment-handbook/issues/93. The solution there inspired me to do the following (so I don't have to go into the cache to replace tokenizer configs): add this at line 77 of `src/alignment/data.py`:
```
tokenizer.chat_template = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
```
But Mistral's `default_chat_template` already allows system messages, so the problem seems to be that the dialogs in the dataset really do not alternate between user and assistant messages. Right? What is the reason for this?
Mistral's `default_chat_template`, which causes the error message:
```
{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% elif false == true and not '<<SYS>>' in messages[0]['content'] %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don\'t know the answer to a question, please don\'t share false information.' %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\n' + system_message + '\n<</SYS>>\n\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'system' %}{{ '<<SYS>>\n' + content.strip() + '\n<</SYS>>\n\n' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' ' + eos_token }}{% endif %}{%
|
https://github.com/huggingface/alignment-handbook/issues/120
|
open
|
[] | 2024-02-08T09:56:50Z
| 2024-02-09T22:15:10Z
| 1
|
DavidFarago
|
huggingface/transformers.js
| 577
|
Getting 'fs is not defined' when trying the latest "background removal" functionality in the browser?
|
### Question
I copied the code from https://github.com/xenova/transformers.js/blob/main/examples/remove-background-client/main.js to here, but I'm getting this error with v2.15.0 of @xenova/transformers.js:
```
Uncaught ReferenceError: fs is not defined
at env.js:36:31
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/env.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:258:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at hub.js:6:2
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/utils/hub.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:783:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at tokenizers.js:21:2
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/tokenizers.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:6729:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at pipelines.js:14:2
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/pipelines.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:17183:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at 8484b_@xenova_transformers_src_5fe153._.js:17215:237
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/transformers.js [app-client] (ecmascript) {module evaluation} (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:17228:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at _b29e97._.js:19146:268
at [project]/app/remove/background/page.tsx [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/_b29e97._.js:19389:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at commonJsRequire (runtime-utils.ts:230:18)
at requireModule (react-server-dom-turbopack-client.browser.development.js:154:23)
at initializeModuleChunk (react-server-dom-turbopack-client.browser.development.js:1336:17)
at readChunk (react-server-dom-turbopack-client.browser.development.js:1146:7)
at mountLazyComponent (react-dom.development.js:16652:19)
at beginWork$1 (react-dom.development.js:18388:16)
at beginWork (react-dom.development.js:26791:14)
at performUnitOfWork (react-dom.development.js:25637:12)
at workLoopSync (react-dom.development.js:25353:5)
```
Any idea what is wrong and how to fix it? Here is my code, which is basically a direct React.js port of the background removal example you all shared:
```tsx
'use client'
import {
AutoModel,
AutoProcessor,
env,
PreTrainedModel,
Processor,
RawImage,
} from '@xenova/transformers'
import React, {
MouseEvent,
useCallback,
useEffect,
useRef,
useState,
} from 'react'
import _ from 'lodash'
import FileDropzone from '~/components/FileDropzone'
// Since we will download the model from the Hugging Face Hub, we can skip the local model check
env.allowLocalModels = false
// Proxy the WASM backend to prevent the UI from freezing
env.backends.onnx.wasm.proxy = true
function useModel(): {
model?: PreTrainedModel
processor?: Processor
} {
const [model, setModel] = useState<PreTrainedModel>()
const [processor, setProcessor] = useState<Processor>()
useEffect(() => {
AutoModel.from_pretrained('briaai/RMBG-1.4', {
config: { model_type: 'custom' },
}).then(m => {
setModel(m)
})
AutoProcessor.from_pretrained('briaai/RMBG-1.4', {
config: {
|
https://github.com/huggingface/transformers.js/issues/577
|
open
|
[
"question"
] | 2024-02-08T04:34:59Z
| 2024-11-26T05:20:22Z
| null |
lancejpollard
|
pytorch/serve
| 2,930
|
How would you deploy a new model on a torch server running within a container?
|
I am looking for options to use torchserve to deploy multiple models at once. However, in the documentation and guides I cannot find examples where it is done. The examples usually describe a scenario of starting a torchserve container for a given model.
My question is if I have a torchserve container running, is there a way to deploy a new model to it without restarting the container and without downtime for the models already running on the server?
I assume I need to copy the model archive to the proper place within the container and register it via the API, although I am not sure whether this is possible and OK to do.
What would you advise me?
Perhaps, it would be nice to have some documentation on this.
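For concreteness, is something like the following the intended flow (a sketch using the management API on its default port 8081; the archive name is hypothetical and it assumes the `.mar` file is already reachable by the container, e.g. copied into its model store)?
```python
import requests

# Register a new model with a running TorchServe instance without restarting it.
resp = requests.post(
    "http://localhost:8081/models",
    params={"url": "my_model.mar", "initial_workers": 1, "synchronous": "true"},
)
print(resp.status_code, resp.json())
```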
|
https://github.com/pytorch/serve/issues/2930
|
closed
|
[] | 2024-02-07T14:51:06Z
| 2024-02-07T16:33:20Z
| 1
|
mihailyanchev
|
huggingface/transformers.js
| 575
|
Can GPU acceleration be used when using this library in a node.js environment?
|
### Question
Hello, I have looked into the GPU support related issue, but all mentioned content is related to webGPU. May I ask if GPU acceleration in the node.js environment is already supported? Refer: https://github.com/microsoft/onnxruntime/tree/main/js/node
|
https://github.com/huggingface/transformers.js/issues/575
|
closed
|
[
"question"
] | 2024-02-07T03:37:50Z
| 2025-01-20T15:05:00Z
| null |
SchneeHertz
|
pytorch/vision
| 8,259
|
support for convnextv2
|
### 🚀 The feature
Is there any plan for adding ConvNeXt-V2?
### Motivation, pitch
ConvNeXt-V2 introduces FCMAE self-supervised pretraining and gains 0.5-1.5% top-1 accuracy.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/vision/issues/8259
|
open
|
[] | 2024-02-07T01:45:29Z
| 2024-02-07T01:45:29Z
| 0
|
chaoer
|
huggingface/dataset-viewer
| 2,408
|
Add task tags in /hub-cache?
|
Following the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags with a dataset to describe the tasks it can be used for.
Previously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425
|
https://github.com/huggingface/dataset-viewer/issues/2408
|
closed
|
[
"question",
"feature request",
"P2"
] | 2024-02-06T11:17:19Z
| 2024-06-19T15:43:15Z
| null |
severo
|
huggingface/dataset-viewer
| 2,407
|
Remove env var HF_ENDPOINT?
|
Is it still required to set HF_ENDPOINT as an environment variable?
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45
|
https://github.com/huggingface/dataset-viewer/issues/2407
|
closed
|
[
"duplicate",
"question",
"refactoring / architecture",
"P2"
] | 2024-02-06T11:11:24Z
| 2024-02-06T14:53:12Z
| null |
severo
|
huggingface/chat-ui
| 786
|
Can't get Mixtral to work with web-search
|
I have been following this project for a while and recently tried setting up oobabooga Mixtral-8x7b.
I used the official prompt template from huggingface.co:
```
<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}}</s> {{/ifAssistant}}{{/each}}
```
Normal chat works, and summarization for the title works, but web-search does not.
It always gives the full answer instead of a search term.

Here is my local.env:
```
MONGODB_URL=mongodb://localhost:27017
USE_LOCAL_WEBSEARCH=true
PUBLIC_APP_ASSETS=chatui
HF_ACCESS_TOKEN=hf_none
PUBLIC_APP_DESCRIPTION="ChatGPT But Open Source!"
PUBLIC_APP_NAME=ChatGPT
MODELS=`[
{
"name": "LocalGPT",
"description": "Mixtral is a great overall model",
"chatPromptTemplate" : "<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}}</s> {{/ifAssistant}}{{/each}}",
"preprompt": "",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python and give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.3,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 2048,
"stop": ["</s>"]
},
"endpoints": [{
"type" : "openai",
"baseURL": "http://127.0.0.1:5000/v1"
}]
}
]`
```
|
https://github.com/huggingface/chat-ui/issues/786
|
open
|
[] | 2024-02-06T07:14:08Z
| 2024-02-16T10:45:40Z
| 2
|
iChristGit
|
pytorch/kineto
| 864
|
Question about how to run "make test" correctly?
|
Hi guys,
Following the steps in [README.md](https://github.com/pytorch/kineto/tree/main/libkineto), I succeeded in building libkineto. Then I tried to run the tests with the command "make test", but it doesn't do anything. In this [CMakeLists.txt](https://github.com/pytorch/kineto/blob/main/libkineto/CMakeLists.txt) file, it seems that you just add the test folder to the project but do not build anything, so I am very confused about how to "make test" and what is the meaning of
> (if tests are built)
Anyway... Could somebody tell me how to build and run the code in the test folder? Thanks
|
https://github.com/pytorch/kineto/issues/864
|
open
|
[
"bug"
] | 2024-02-06T06:01:11Z
| 2024-04-23T15:45:46Z
| null |
PriscillaJCorn
|
huggingface/dataset-viewer
| 2,402
|
Reduce resources for /filter and /search?
|
They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now
Should we reduce the number of pods? How to configure the right level?
|
https://github.com/huggingface/dataset-viewer/issues/2402
|
closed
|
[
"question",
"infra",
"P2",
"prod"
] | 2024-02-05T21:44:56Z
| 2024-02-28T17:55:50Z
| null |
severo
|
pytorch/examples
| 1,229
|
If I am training on a SINGLE GPU, should this "--dist-backend 'gloo'" argument be added to the command?
|
@Jaiaid
Is this **"--dist-backend 'gloo'"** be included in the terminal command if using a **SINGLE GPU** or having just one GPU on the machine?
Is the following example command correct for SINGLE GPU?
python main.py **--dist-backend 'gloo'** -a resnet18 [imagenet-folder with train and val folders]
Is that what your new committed warning implies?
|
https://github.com/pytorch/examples/issues/1229
|
closed
|
[] | 2024-02-05T17:11:50Z
| 2024-02-07T08:01:12Z
| 10
|
HassanBinHaroon
|
huggingface/dataset-viewer
| 2,390
|
Store the repo visibility (public/private) to filter webhooks
|
See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050
Not sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets.
|
https://github.com/huggingface/dataset-viewer/issues/2390
|
closed
|
[
"question",
"P2"
] | 2024-02-05T12:37:30Z
| 2024-06-19T15:37:36Z
| null |
severo
|
huggingface/transformers.js
| 567
|
Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order.
|
### Question
Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order.
|
https://github.com/huggingface/transformers.js/issues/567
|
open
|
[
"question"
] | 2024-02-05T11:12:34Z
| 2024-02-05T11:12:34Z
| null |
a414166402
|
huggingface/transformers.js
| 565
|
How can i use this Model for image matting?
|
### Question
https://github.com/ZHKKKe/MODNet?tab=readme-ov-file
They have an ONNX file and the Python CLI usage looks simple, but I can't figure out how to use it with transformers.js.
```
!python -m demo.image_matting.colab.inference \
--input-path demo/image_matting/colab/input \
--output-path demo/image_matting/colab/output \
--ckpt-path ./pretrained/modnet_photographic_portrait_matting.ckpt
```
|
https://github.com/huggingface/transformers.js/issues/565
|
closed
|
[
"question"
] | 2024-02-05T09:28:28Z
| 2024-02-07T11:33:26Z
| null |
cyio
|
huggingface/transformers.js
| 564
|
Can models from user disks load and run in my HF space?
|
### Question
Im fiddling around with the react-translator template.
What I have accomplished so far:
- Run local (on disk in public folder) model in localhost webapp.
- Run hosted (on HF) model in localhost webapp.
- Run hosted (on HF) model in HF Space webapp.
What I want to accomplish but can't figure out:
- Use local (on disk in any folder) model in HF Space webapp.
Is this possible?
From what I understand so far, local models have to be in the public folder of the webapp, but that defeats the purpose of my webapp, which would be to allow users to benchmark models from any folder of their disk in my HF Space.
Preferably the user would provide a path or use drag'n'drop to provide their model folder location on the disk, and the webapp would then proceed to load the model from the provided location into the application cache.
The reason I need this specific setup is that I work on a benchmarking tool and I don't want to force users to host their models on HF in order to be able to benchmark them.
|
https://github.com/huggingface/transformers.js/issues/564
|
closed
|
[
"question"
] | 2024-02-05T08:00:55Z
| 2024-06-07T01:17:24Z
| null |
saferugdev
|
huggingface/transformers
| 28,860
|
Question: How do LLMs learn to be "Generative", as we often describe them?
|
(Please forgive me and let me know if I'm not allowed to ask this kind of question here. I'm so sorry if I'm bothering everyone.)
AFAIK to be called "generative", a model should have the ability to learn the joint probability over the training data. In the case of LLMs, we apply the chain rule of Bayes' formula to achieve this by leveraging the autoregressive method for every token of each input text sequence. For example, with a text sequence of 4 tokens, it can be written as:
```
p(x4,x3,x2,x1) = p(x4|x3,x2,x1) * p(x3|x2,x1) * p(x2|x1) * p(x1)
```
where `x1` denotes the 1st token, `x2` denotes the 2nd token and so on, respectively.
I understand the conditional terms `p(x_n|...)` where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token `p(x1)`. How is it calculated? Is it in some configurations of the training process, or in the model architecture, or in the loss function?
IMHO, if the model doesn't learn `p(x1)` properly, the entire formula for Bayes' rule cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?
I asked the [same question on the `nanoGPT` repo](https://github.com/karpathy/nanoGPT/issues/432) and [on HN](https://news.ycombinator.com/item?id=39249301). I'm also reading the Transformer code in this repo, but I haven't found the answer I'm looking for yet. Could someone please enlighten me? Thanks in advance!
|
https://github.com/huggingface/transformers/issues/28860
|
closed
|
[] | 2024-02-05T07:10:23Z
| 2024-02-05T12:22:27Z
| null |
metalwhale
|
huggingface/sentence-transformers
| 2,470
|
BGE Reranker / BERT Crossencoder Onnx model latency issue
|
I am using the Int8 quantized version of BGE-reranker-base model converted to the Onnx model. I am processing the inputs in batches. Now the scenario is that I am experiencing a latency of 20-30 secs with the original model. With the int8 quantized and onnx optimized model, the latency was reduced to 8-15 secs keeping all the configurations the same like hardware, batch processing, and everything I used with the original torch model.
I am using Flask as an API server, on a quad-core machine.
I want to further reduce the latency of the ONNX model. How can I do so?
Also, please suggest anything more I can do at deployment time.
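For context, would tuning the session options be the right direction? A minimal sketch, assuming onnxruntime on a quad-core CPU (the model file name is hypothetical):
```python
import onnxruntime as ort

# Session-level knobs that often matter on a small CPU box.
so = ort.SessionOptions()
so.intra_op_num_threads = 4                                    # match physical cores
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
so.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL

session = ort.InferenceSession(
    "bge-reranker-base-int8.onnx", so, providers=["CPUExecutionProvider"]
)
```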
|
https://github.com/huggingface/sentence-transformers/issues/2470
|
open
|
[
"question"
] | 2024-02-05T05:54:18Z
| 2024-02-09T06:59:51Z
| null |
ojasDM
|
pytorch/xla
| 6,464
|
How to benchmark PyTorch XLA code properly
|
## ❓ Questions and Help
Hi! I'm trying to benchmark some PyTorch XLA code and can't figure out how to do it correctly.
For simplicity, what I'm benchmarking is `torch.matmul(a, b)`. First I created the most straightforward version of the benchmark, inspired by CUDA & Triton benchmarking code:
```
# create tensors
a = torch.randn((N, K), device=device, dtype=dtype)
b = torch.randn((K, M), device=device, dtype=dtype)
def fn():
torch.matmul(a, b)
benchmark(fn) # here I'm doing warmup runs/multiple fn runs
```
This didn't work: the benchmark appeared to complete immediately.
I realized that no work is actually happening since the tensors are lazy, so I added `xm.unlazy` calls on the `matmul` result tensor after running `fn`. However, I was still getting numbers that look like no work is being done.
My theory was that, since the structure of the computation is not changing, the backend is reusing results. So I tried to regenerate the inputs on each iteration. I tried different approaches, from fully regenerating them to tweaks that make the prepare step faster, such as:
```
def prepare():
a[0, 0] += 1
b[0, 0] += 1
return [a.clone().detach(), b.clone().detach()]
```
But with none of my attempts was I able to get a proper measurement of the `matmul` function. I feel like I'm either measuring compilation speed or no-op speed. Any tips on how to write this benchmark / establish a better mental model of when and how to avoid recompilation of the code while still executing it?
Thanks in advance!
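For reference, is something like the following the right pattern? A sketch, assuming `xm.mark_step()` dispatches the pending lazy graph and `xm.wait_device_ops()` blocks until the device has finished:
```python
import time
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# Warm-up: trigger compilation once, outside the timed region.
c = torch.matmul(a, b)
xm.mark_step()
xm.wait_device_ops()

start = time.perf_counter()
for _ in range(10):
    c = torch.matmul(a, b)
    xm.mark_step()          # cut the lazy graph and dispatch it
xm.wait_device_ops()        # block until the device has actually finished
elapsed = (time.perf_counter() - start) / 10
print(f"{elapsed * 1e3:.2f} ms per matmul (dispatch + execution)")
```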
|
https://github.com/pytorch/xla/issues/6464
|
closed
|
[
"question"
] | 2024-02-05T00:55:57Z
| 2025-04-21T13:15:33Z
| null |
ttim
|
huggingface/chat-ui
| 774
|
Where are the image and pdf upload features when running on locally using this repo?
|
I see there are issues and features being discussed and added for image upload, parsing PDFs as markdown, etc. However, I don't see these features when I clone this repo and start chat-ui locally with `npm run dev`.
Am I missing something?
#641 describes the features I am talking about.
|
https://github.com/huggingface/chat-ui/issues/774
|
closed
|
[] | 2024-02-05T00:41:05Z
| 2024-02-05T08:48:29Z
| 1
|
zubu007
|
huggingface/chat-ui
| 771
|
using openai api key for coporate
|
Hi
We are working with an OpenAI key for our corporation (it has a corporate endpoint).
This is how we added the model to `.env.local`:
```
MODELS=`[
{
"name": "Corporate local instance of GPT 3.5 Model",
"endpoints": [{
"type": "openai",
"url": "corporate url"
}],
"userMessageToken": "User: ",
"assistantMessageToken": "Assistant: ",
"messageEndToken": "</s>",
"preprompt": " ",
"prepromptUrl": "http://127.0.0.1:8000/preprompt.txt",
"parameters": {
"temperature": 0.9,
"max_new_tokens": 1024,
"truncate": 31000
},
```
The problem: I can't connect to the model; there are authentication issues. This is what we get:
Has anyone else tried to connect with a corporate OpenAI API key?
How can we solve this?
We can connect to the model using Python, so this is not an issue with the credentials.
|
https://github.com/huggingface/chat-ui/issues/771
|
open
|
[
"models"
] | 2024-02-04T11:23:59Z
| 2024-02-06T15:01:50Z
| 1
|
RachelShalom
|
huggingface/optimum-neuron
| 460
|
[QUESTION] What is the difference between optimum-neuron and transformers-neuronx?
|
I would like to understand the differences between this optimum-neuron and [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx).
|
https://github.com/huggingface/optimum-neuron/issues/460
|
closed
|
[] | 2024-02-02T18:27:46Z
| 2024-03-27T11:04:52Z
| null |
leoribeiro
|
pytorch/tensordict
| 656
|
[Feature Request] Docs don't mention how to install tensordict / that it's a seperate package from torch
|
## Motivation
As a user, the first thing I'd want to see when looking at the docs for a package is something like:
```
pip install <package>
```
Or
```
conda install <package>
```
This seems like it's currently missing from the docs [here](https://pytorch.org/tensordict). It is included in the Github readme, but when googling "tensordict" the docs on pytorch.org come up first.
Reason it would be good to include is that the docs seem to be sub-docs of pytorch, and therefore at first glance it isn't clear that `tensordict` is not included in the `pytorch` package distribution, and needs to be installed separately.
## Solution
This to be added to the docs.
## Alternatives
## Additional context
## Checklist
- [x] I have checked that there is no similar issue in the repo (**required**)
(Happy to add this if it seems like a good idea.)
|
https://github.com/pytorch/tensordict/issues/656
|
closed
|
[
"enhancement"
] | 2024-02-02T17:50:58Z
| 2024-02-05T13:49:01Z
| null |
sradc
|
pytorch/tutorials
| 2,858
|
Better specify `torch.compile behaviour` on nested function/module
|
### 📚 The doc issue
Can we better specify the behavior, and possibly the best practices, when decorating a function or compiling a module, and the effect on nested modules and nested function calls?
https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html
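For concreteness, a tiny sketch of the case in question (hypothetical helper names), assuming that `torch.compile` traces nested calls into the outer graph unless a graph break occurs:
```python
import torch

def inner(x):
    # plain helper, not decorated
    return torch.sin(x) + 1

@torch.compile
def outer(x):
    # inner() is traced and inlined into outer's compiled graph,
    # unless something inside it forces a graph break
    return inner(x) * 2

print(outer(torch.randn(8)))
```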
### Suggest a potential alternative/fix
_No response_
cc @sekyondaMeta @svekars @kit1980 @williamwen42 @msaroufim @ezyang @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
|
https://github.com/pytorch/tutorials/issues/2858
|
closed
|
[
"medium",
"docathon-h1-2024"
] | 2024-02-02T12:22:05Z
| 2024-08-30T21:40:03Z
| 10
|
bhack
|
huggingface/dataset-viewer
| 2,376
|
Should we increment "failed_runs" when error is "ResponseAlreadyComputedError"?
|
Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error?
|
https://github.com/huggingface/dataset-viewer/issues/2376
|
closed
|
[
"question",
"P2"
] | 2024-02-02T12:08:31Z
| 2024-02-22T21:16:12Z
| null |
severo
|
huggingface/autotrain-advanced
| 484
|
How to ask question AutoTrained LLM , If I ask question dosn't return any answer
|
Hi,
LLM training was successful, but when I ask any question from my training context it is not answered. How do I ask a proper question?
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "bert-base-uncased_finetuning"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
```
Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
example
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1128: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1136: UserWarning: Input length of input_ids is 24, but `max_length` is set to 20. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
  warnings.warn(
```
|
https://github.com/huggingface/autotrain-advanced/issues/484
|
closed
|
[
"stale"
] | 2024-02-02T09:29:07Z
| 2024-03-04T15:01:36Z
| null |
charles-123456
|
huggingface/chat-ui
| 761
|
Does chat-ui support offline deployment? I have downloaded the weights to my local computer.
|
I have downloaded the weights to my local computer. Due to network issues, I am unable to interact with the huggingface website. Can I do an offline deployment based on chat-ui and the weights downloaded from huggingface? Does that mean I don't need to set HF_TOKEN=<your access token> in the `.env.local` file?
|
https://github.com/huggingface/chat-ui/issues/761
|
closed
|
[
"support"
] | 2024-02-02T07:57:19Z
| 2024-02-04T03:23:25Z
| 2
|
majestichou
|
huggingface/transformers.js
| 557
|
how to cast types?
|
### Question
I have the following code:
```
const pipe = await pipeline('embeddings');
const output = await pipe([
'The quick brown fox jumps over the lazy dog',
]);
const embedding = output[0][0];
```
`output[0][0]` causes a typescript error:
<img width="748" alt="CleanShot 2024-02-01 at 23 38 04@2x" src="https://github.com/xenova/transformers.js/assets/2908721/6e7a1e58-bfbf-4a9d-96e3-83b771c7be99">
|
https://github.com/huggingface/transformers.js/issues/557
|
open
|
[
"question"
] | 2024-02-02T04:38:20Z
| 2024-02-08T19:01:06Z
| null |
pthieu
|
huggingface/diffusers
| 6,819
|
How to let diffusers use local code for pipelineinstead of download it online everytime We use it?
|
I tried to use the InstaFlow pipeline from `examples/community` to run my test. However, even after I git-cloned the repository to my environment, it still keeps trying to download the latest version of the InstaFlow pipeline code. Unfortunately, in my region it is hard for the environment to download it directly from raw GitHub. I tried to change the downloaded code to make it use the code already in my environment, but I find it hard to change the path/URL.
I would appreciate it if someone could provide a proper answer. Thank you for your time, and happy Lunar New Year!
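For reference, what I would like to end up with is roughly this (a sketch, assuming `custom_pipeline` accepts a local directory containing a `pipeline.py`; the model id and paths are hypothetical):
```python
import torch
from diffusers import DiffusionPipeline

# Assumption: examples/community/instaflow_one_step.py from the cloned repo has been
# copied to ./instaflow_pipeline/pipeline.py, so no download from GitHub is needed.
pipe = DiffusionPipeline.from_pretrained(
    "XCLiu/instaflow_0_9B_from_sd_1_5",      # hypothetical model id
    custom_pipeline="./instaflow_pipeline",  # local directory with pipeline.py
    torch_dtype=torch.float16,
)
```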
|
https://github.com/huggingface/diffusers/issues/6819
|
closed
|
[] | 2024-02-02T02:53:48Z
| 2024-11-28T05:44:10Z
| null |
Kevin-shihello-world
|
huggingface/diffusers
| 6,817
|
How to use class_labels in the Unet2DConditionalModel or Unet2DModel when forward?
|
Hi, I want to know what the shape or format of the class condition should be if I want to add class conditioning to the UNet. Do I just set **class_labels** to 0, 1, 2, 3?
Unet2DModel: **class_labels** (torch.FloatTensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
Unet2DConditionalModel: **class_labels** (torch.Tensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond — (torch.Tensor, optional, defaults to None): Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed through the self.time_embedding layer to obtain the timestep embeddings.
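For concreteness, is something like the following correct? A sketch, assuming `num_class_embeds` is set so the UNet builds a class-embedding table and `class_labels` are plain integer ids of shape `(batch_size,)`:
```python
import torch
from diffusers import UNet2DModel

# Class-conditional UNet2DModel with 10 classes.
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    num_class_embeds=10,
)

sample = torch.randn(4, 3, 32, 32)
timestep = torch.tensor([10])
class_labels = torch.tensor([0, 1, 2, 3])   # shape (batch_size,), dtype long

out = model(sample, timestep, class_labels=class_labels).sample
```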
|
https://github.com/huggingface/diffusers/issues/6817
|
closed
|
[] | 2024-02-02T02:17:40Z
| 2024-02-07T07:31:35Z
| null |
boqian-li
|
huggingface/sentence-transformers
| 2,465
|
How to load lora model to sentencetransformer model?
|
Dear UKPlab team,
My team and I are working on a RAG project, and right now we are fine-tuning a retrieval model using the peft library. The issue is that once the model is fine-tuned, we can't load the local config and checkpoints using `SentenceTransformer`.
Here is the hierarchy of the local path of the peft model:
- adapter_config.json
- adapter_model.safetensors
- ....
When I look into the `sentence-transformers` package, the issue comes from the `Transformer.py` class, which doesn't handle the case where the model path is a `PeftModel` path:
` config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)`
So we have to comment out this line, drop the `config` attribute entirely, and in the `_load_model` method keep only this code:
`self.auto_model = AutoModel.from_pretrained(model_name_or_path, cache_dir=cache_dir)`
Could you please fix this issue, or tell us the correct way to load a peft model using the `SentenceTransformer` class?
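For reference, the workaround we are considering is to merge the adapter and load the merged checkpoint (a sketch; the base model name and local paths are hypothetical, and it assumes the adapter is a LoRA so `merge_and_unload()` applies):
```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer
from sentence_transformers import SentenceTransformer

base = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")            # hypothetical base model
peft_model = PeftModel.from_pretrained(base, "./my-lora-checkpoint") # local adapter dir
merged = peft_model.merge_and_unload()                               # fold LoRA into the base weights

merged.save_pretrained("./merged-retriever")
AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5").save_pretrained("./merged-retriever")

# SentenceTransformer can then load the plain transformers checkpoint
# (it will add a default mean-pooling module).
model = SentenceTransformer("./merged-retriever")
```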
|
https://github.com/huggingface/sentence-transformers/issues/2465
|
closed
|
[] | 2024-02-02T00:18:04Z
| 2024-11-08T12:32:36Z
| null |
Shengyun-Si
|
huggingface/amused
| 3
|
How to generate multiple images?
|
Thank you for your amazing work! Could you kindly explain how to generate multiple images at a time? Thank you!
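For reference, is something like this the intended way? A sketch, assuming the pipeline follows the usual diffusers conventions of a `num_images_per_prompt` argument and list-of-prompts inputs:
```python
import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained("amused/amused-512", torch_dtype=torch.float16).to("cuda")

# Either ask for several images for one prompt...
images = pipe("a photo of a red fox", num_images_per_prompt=4).images

# ...or pass a list of prompts, one image each.
images = pipe(["a red fox", "a blue jay", "a green frog"]).images
```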
|
https://github.com/huggingface/amused/issues/3
|
closed
|
[] | 2024-02-01T18:03:30Z
| 2024-02-02T10:36:09Z
| null |
aishu194
|
huggingface/alignment-handbook
| 110
|
DPO loss on different datasets
|
In parallel with #38, though I am referring to full training instead of LoRA.
When I use a different set of prefs (i.e. chosen and rejected) but still the same instructions (UltraFeedback), I get extremely low eval/train loss, which drops sharply at the beginning, in contrast to training on the original prefs as in the case of ultrafeedback_binarised.
Plots (wandb screenshots; images not preserved): eval loss on my pref dataset, eval loss on the original pref dataset, train loss (mine), train loss (original), reward margin (mine), reward margin (original).
This huge difference in scale seems to occur when I use pref datasets that are sampled from the reference policy, unlike the case of UltraFeedback, where the prefs are sampled from various policies.
Moreover, this huge decrease in loss actually causes the DPO-ed model to perform worse across various benchmarks. Is there any intuition regarding this?
|
https://github.com/huggingface/alignment-handbook/issues/110
|
open
|
[] | 2024-02-01T15:49:29Z
| 2024-02-01T15:49:29Z
| 0
|
wj210
|
huggingface/chat-ui
| 757
|
Which (temperature) configurations for Zephyr chat interface?
|
Hi, I apologise for what is maybe an obvious question, but where can I find the exact configuration for the model offered on the HF Zephyr Chat interface at https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat for Zephyr 7B beta? I'm especially interested in the temperature settings and wasn't able to find this information.
|
https://github.com/huggingface/chat-ui/issues/757
|
closed
|
[
"support"
] | 2024-02-01T14:27:12Z
| 2024-02-01T14:47:13Z
| 3
|
AylaRT
|
huggingface/diffusers
| 6,804
|
How to only offload some parts but not whole model into cpu?
|
Using `enable_cpu_offload()` offloads the whole model to the CPU, which can occupy a large amount of CPU memory. How can I offload only part of the model to the CPU?
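For concreteness, would something like this be a reasonable way to offload only selected components? A sketch, assuming accelerate's `cpu_offload` hook can be applied per component (the checkpoint id is just an example):
```python
import torch
from diffusers import StableDiffusionPipeline
from accelerate import cpu_offload

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Keep the UNet resident on the GPU, offload only the text encoder and VAE.
pipe.unet.to("cuda")
cpu_offload(pipe.text_encoder, execution_device=torch.device("cuda"))
cpu_offload(pipe.vae, execution_device=torch.device("cuda"))
```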
|
https://github.com/huggingface/diffusers/issues/6804
|
closed
|
[] | 2024-02-01T07:43:04Z
| 2024-02-02T04:59:43Z
| null |
blx0102
|
huggingface/transformers.js
| 553
|
How to convert BAAI/bge-m3 for Transformers.js?
|
### Question
I tried to convert https://huggingface.co/BAAI/bge-m3 to ONNX using the instructions at https://github.com/xenova/transformers.js?tab=readme-ov-file#convert-your-models-to-onnx but I'm getting errors.
```shell
$ python -m scripts.convert --model_id BAAI/bge-m3
Framework not specified. Using pt to export to ONNX.
Automatic task detection to feature-extraction (possible synonyms are: default, mask-generation, sentence-similarity).
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Using framework PyTorch: 2.0.1
Overriding 1 configuration item(s)
- use_cache -> False
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Saving external data to one file...
Post-processing the exported models...
Deduplicating shared (tied) weights...
Validating ONNX model models/BAAI/bge-m3/model.onnx...
-[✓] ONNX model output names match reference model (last_hidden_state)
- Validating ONNX Model output "last_hidden_state":
-[✓] (2, 16, 1024) matches (2, 16, 1024)
-[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: models/BAAI/bge-m3
```
```shell
cat test.js
```
```js
import { pipeline } from './src/transformers.js'
const extractor = await pipeline('feature-extraction', 'BAAI/bge-m3', {
quantized: false,
cache_dir: './models',
local_files_only: true,
})
const embedding = await extractor('hello there', { pooling: 'mean', normalize: true })
console.log(JSON.stringify(Array.from(embedding.data), null, 2))
```
```shell
2024-01-31 20:35:16.548 node[64946:11650151] 2024-01-31 20:35:16.548343 [E:onnxruntime:, inference_session.cc:1532 operator()] Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/optimizer/initializer.cc:31 onnxruntime::Initializer::Initializer(const onnx::TensorProto &, const onnxruntime::Path &) !model_path.IsEmpty() was false. model_path must not be empty. Ensure that a path is provided when the model is created or loaded.
Error: Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/optimizer/initializer.cc:31 onnxruntime::Initializer::Initializer(const onnx::TensorProto &, const onnxruntime::Path &) !model_path.IsEmpty() was false. model_path must not be empty. Ensure that a path is provided when the model is created or loaded.
at new OnnxruntimeSessionHandler (***/transformers.js/node_modules/onnxruntime-node/dist/backend.js:27:92)
at ***/transformers.js/node_modules/onnxruntime-node/dist/backend.js:64:29
at process.processTicksAndRejections (node:internal/process/task_queues:77:11)
Something went wrong during model construction (most likely a missing operation). Using `wasm` as a fallback.
Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm')
failed to asynchronously prepare wasm: RuntimeError: Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm'). Build with -sASSERTIONS for more info.
Aborted(RuntimeError: Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm'). Build with -sASSERTIONS for more info.)
***/transformers.js/node_modules/onnxruntime-web/dist/ort-web.node.js:6
...
...
...
Error: no available backend found. ERR: [wasm] RuntimeError: Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm'). Build with -sASSERTIONS for more info.
at ***/transformers.js/node_modules/onnxruntime-common/dist/ort-common.node.js:6:11822
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async m.create (***/transformers.js/node_modules/onnxruntime-common/dist/ort-common.node.js:6:11480)
at async constructSession (file://***/transformers.js/src/models.js:140:16)
at async Promise.all (index 1)
at async XLMRobertaModel.from_pretrained (file://***/transformers.js/src/models.js:793:20)
at async AutoModel.from_pretrained (file://***/transformers.js/src/models.js:5166:20)
at async Promise.all (index 1)
at async loadItems (file://***/transformers.js/src/pipelines.js:3116:5)
at async pipeline (file://***/transformers.js/src/pipelines.js:3056:21)
Node.js v20.9.0
```
|
https://github.com/huggingface/transformers.js/issues/553
|
closed
|
[
"question"
] | 2024-02-01T01:40:02Z
| 2024-02-08T22:17:29Z
| null |
devfacet
|
pytorch/torchx
| 813
|
Docker build verbosity
|
## Description
Change the docker image build to use its low-level implementation so it can be more verbose.
## Motivation/Background
Building the docker image can take quite some time, and for new users this makes it seem like the program is stuck (especially since the default base image that includes torchx is so big). Making it more verbose is not only a quality-of-life improvement for all users of the docker workspace, it also gives better visibility into the build process, potentially allowing optimization of the Dockerfile.
On a side note, what is the rationale for the name Dockerfile.torchx instead of just using a normal Dockerfile? Is there a difference in the format?
## Detailed Proposal
Replace the current docker build API call with its low-level implementation. This would require instantiating a low-level client and processing the build event stream so the docker build output can be shown in real time, plus a processing function so the stream prints to the screen correctly.
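For concreteness, a minimal sketch of what this could look like with docker-py's low-level client (names and paths are illustrative):
```python
import docker

# Low-level client so the build output can be streamed line by line.
client = docker.APIClient(base_url="unix://var/run/docker.sock")

for event in client.build(path=".", dockerfile="Dockerfile.torchx", tag="torchx:dev", decode=True):
    # Each event is a dict; 'stream' carries the classic `docker build` output.
    if "stream" in event:
        print(event["stream"], end="")
    elif "error" in event:
        raise RuntimeError(event["error"])
```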
## Alternatives
## Additional context/links
https://github.com/pytorch/torchx/blob/19497eb1d2649f66cd12ca1eeed77353085f07e0/torchx/workspace/docker_workspace.py#L118
|
https://github.com/meta-pytorch/torchx/issues/813
|
closed
|
[] | 2024-01-31T18:49:35Z
| 2024-04-11T17:42:34Z
| 3
|
ccharest93
|
pytorch/tutorials
| 2,859
|
Correctness of when to call `set_device` in the docs for DDP
|
### 📚 The doc issue
In the docs tutorial on [how to set up Multi-GPU training](https://pytorch.org/tutorials/beginner/ddp_series_multigpu.html), it is suggested that the following is the proper way to set up each process (initializing the process group, e.g. NCCL, and then calling `torch.cuda.set_device(rank)`):
```python
def ddp_setup(rank: int, world_size: int):
"""
Args:
rank: Unique identifier of each process
world_size: Total number of processes
"""
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355"
init_process_group(backend="nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
```
However, these issues suggest that the proper way is to call `set_device` before initializing the process group:
- https://github.com/pytorch/pytorch/issues/54550#issuecomment-808703316
- https://github.com/pytorch/pytorch/issues/18689#issuecomment-479042701
Which is the correct order? Are there pauses or slowdowns if the order changes?
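For concreteness, the ordering suggested in those issues would look like this (a sketch of the same setup with the two calls swapped):
```python
import os
import torch
from torch.distributed import init_process_group

def ddp_setup(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    torch.cuda.set_device(rank)   # bind this process to its GPU first
    init_process_group(backend="nccl", rank=rank, world_size=world_size)
```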
### Suggest a potential alternative/fix
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225
|
https://github.com/pytorch/tutorials/issues/2859
|
closed
|
[] | 2024-01-31T18:06:42Z
| 2024-05-07T17:10:56Z
| 5
|
craymichael
|
huggingface/diffusers
| 6,785
|
How to finetune stable diffusion img2img(like instructpix2pix or controlnet) model with only one input channel?
|
Hello, experts!
I want to fine-tune a Stable Diffusion img2img model (like InstructPix2Pix or ControlNet) with only one input channel, i.e. a greyscale image. I saw the official docs say it is OK to increase the input channels from 4 to 9, but I want to know: is it OK to decrease the input channels to one for fine-tuning?
Thanks in advance!
|
https://github.com/huggingface/diffusers/issues/6785
|
closed
|
[] | 2024-01-31T09:17:56Z
| 2024-01-31T09:27:43Z
| null |
sapkun
|
huggingface/accelerate
| 2,399
|
How to use vscode to debug the acceleration program with breakpoints? I checked a lot of information, but still didn't find a solution
|
How do I use VS Code to debug an accelerate program with breakpoints? I've looked through a lot of material but still haven't found a solution.

|
https://github.com/huggingface/accelerate/issues/2399
|
closed
|
[] | 2024-01-31T09:00:32Z
| 2024-03-10T15:05:56Z
| null |
kejia1
|
huggingface/datatrove
| 72
|
Tokenization in Minhash deduplication
|
Hi,
I have noticed that the tokenization is different from what previous papers adopted.
For example, this [paper](https://arxiv.org/abs/2107.06499) uses space tokenization, [refinedweb](https://arxiv.org/abs/2306.01116) states that they used GPT-2 tokenizer, while datatrove adopts nltk to extract n-grams.
I'm wondering whether the results obtained by different tokenization methods are consistent.
|
https://github.com/huggingface/datatrove/issues/72
|
closed
|
[
"question"
] | 2024-01-31T02:33:17Z
| 2024-02-01T15:36:24Z
| null |
jordane95
|
huggingface/peft
| 1,419
|
How to torch.jit.trace a peft model
|
### Feature request
Need an example of how to trace a peft model.
### Motivation
Hi, I'm trying to deploy a LoRA-finetuned llama model on an Nvidia Triton server. For that I need to run `traced_model = torch.jit.trace(model, model_input_dict, strict=False)`; however, I encountered issues like `Tracing failed sanity checks! ERROR: Graphs differed across invocations!`,
and the terminal output was:
```
/python3.10/site-packages/transformers/models/llama/modeling_llama.py:598: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
/python3.10/site-packages/bitsandbytes/autograd/_functions.py:300: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if prod(A.shape) == 0:
/python3.10/site-packages/bitsandbytes/autograd/_functions.py:322: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
/python3.10/site-packages/bitsandbytes/functional.py:2016: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
nnz = nnz_row_ptr[-1].item()
/python3.10/site-packages/bitsandbytes/functional.py:1714: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert prod(list(shapeA)) > 0, f'Input tensor dimensions need to be > 0: {shapeA}'
/python3.10/site-packages/bitsandbytes/functional.py:1717: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if shapeA[0] == 0 and dimsA == 2:
/python3.10/site-packages/bitsandbytes/functional.py:1719: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
elif shapeA[1] == 0 and dimsA == 3:
/python3.10/site-packages/bitsandbytes/functional.py:1741: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
shapeA[-1] == shapeB[-1]
/python3.10/site-packages/bitsandbytes/functional.py:1826: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
new_row_stats.shape[0] == row_stats.shape[0]
/python3.10/site-packages/bitsandbytes/functional.py:1829: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
new_col_stats.shape[0] == col_stats.shape[0]
/python3.10/site-packages/transformers/models/llama/modeling_llama.py:120: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if seq_len > self.max_seq_len_cached:
/python3.10/site-packages/transformers/models/llama/modeling_llama.py:350: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
/python3.10/site-packages/transformers/models/llama/modeling_llama.py:357: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python value
|
https://github.com/huggingface/peft/issues/1419
|
closed
|
[] | 2024-01-30T22:56:10Z
| 2024-02-06T09:16:07Z
| null |
dcy0577
|
huggingface/gsplat.js
| 56
|
how to change the camera clipping - and a feature request: add rotate control
|
Hello and thank you for your great work!
I am a coding noob but managed to use the jsfiddle example to set up a page on which I can display my splats.
Is it possible to change the clipping (and other) settings for the camera? If so, where should I look?
As for the request: never mind, I was not paying attention.
Thanks again!!
|
https://github.com/huggingface/gsplat.js/issues/56
|
closed
|
[] | 2024-01-30T19:20:35Z
| 2024-01-31T16:51:30Z
| null |
murcje
|
huggingface/accelerate
| 2,395
|
Question: how to apply device map to a paired model
|
Hello everybody,
I have been experimenting with Mistral models and have written a small second model to be paired with it. However, I have a machine with 2 GPUs and would like to use both. I am aware that the parallelization `accelerate` uses is based on splitting the data by batches. How can I apply the device map from the Mistral model to my small second model?
## Additional information
The second model, which I have written, injects a signal into the Mistral model at a strategic layer. However, this is done in a way that rules out inlining, as I do not want to rewrite the model. How can I apply the same device map as the Mistral model?
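For reference, the approach I'm considering looks roughly like this (a sketch; `mistral_model` and `small_model` come from my setup, the layer name is hypothetical, and it assumes the Mistral model was loaded with `device_map="auto"` so it exposes `hf_device_map`):
```python
import torch
from accelerate import dispatch_model

# hf_device_map values are typically GPU indices, e.g. {"model.layers.20": 1, ...}
injection_device = mistral_model.hf_device_map["model.layers.20"]

# If the small model fits on one GPU, co-locating it is enough:
small_model.to(torch.device("cuda", injection_device))

# If it needs its own placement, accelerate can dispatch it with an explicit map:
dispatch_model(small_model, device_map={"": injection_device})
```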
|
https://github.com/huggingface/accelerate/issues/2395
|
closed
|
[] | 2024-01-30T19:17:52Z
| 2024-02-01T19:18:08Z
| null |
EricLBuehler
|
pytorch/cpuinfo
| 221
|
How to obtain information of CPU frequency?
|
```c
if (core->processor_count == 1) {
    printf("\t%" PRIu32 ": 1 processor (%" PRIu32 "), Frequency: %" PRIu64 " Hz\n",
        i,
        core->processor_start,
        core->frequency);
}
```
The frequency is printed as 0.
|
https://github.com/pytorch/cpuinfo/issues/221
|
open
|
[
"enhancement"
] | 2024-01-30T03:21:26Z
| 2025-12-30T22:59:44Z
| null |
yichenchenyi
|
pytorch/text
| 2,227
|
Fail to import torchtext KeyError: 'SP_DIR'
|
## ❓ Questions and Help
**Description**
I failed to import torchtext with the following error. I tried it with a fresh conda env install (under a different python version) and still got the same issue.
Originally I was able to use torchtext (I remember installing it from pip) in a Python 3.11 env, but then it raised an error with the dataset module, so I updated torchtext with pip and started getting a kernel crash on the pytorch import. So I uninstalled and reinstalled the pytorch and torchtext packages from different sources (conda or pip) and couldn't fix the issue. Even a new conda env using Python 3.10 raised the same error. I don't know what is messed up.
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[3], line 1
----> 1 import torchtext
File ~/miniconda3/envs/ml2/lib/python3.10/site-packages/torchtext/__init__.py:6
3 from torch.hub import _get_torch_home
5 # the following import has to happen first in order to load the torchtext C++ library
----> 6 from torchtext import _extension # noqa: F401
8 _TEXT_BUCKET = "https://download.pytorch.org/models/text/"
10 _CACHE_DIR = os.path.expanduser(os.path.join(_get_torch_home(), "text"))
File ~/miniconda3/envs/ml2/lib/python3.10/site-packages/torchtext/_extension.py:7
4 import torch
5 from torchtext._internal import module_utils as _mod_utils
----> 7 _LIB_DIR = Path(os.environ["SP_DIR"]) / "torch" / "lib"
10 def _get_lib_path(lib: str):
11 suffix = "pyd" if os.name == "nt" else "so"
File ~/miniconda3/envs/ml2/lib/python3.10/os.py:680, in _Environ.__getitem__(self, key)
677 value = self._data[self.encodekey(key)]
678 except KeyError:
679 # raise KeyError with the original key value
--> 680 raise KeyError(key) from None
681 return self.decodevalue(value)
KeyError: 'SP_DIR'
```
```
# packages in environment at /Users/cecilia/miniconda3/envs/ml2:
#
# Name Version Build Channel
annotated-types 0.6.0 pyhd8ed1ab_0 conda-forge
appnope 0.1.3 pyhd8ed1ab_0 conda-forge
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
brotli-python 1.1.0 py310h9e9d8ca_1 conda-forge
bzip2 1.0.8 h10d778d_5 conda-forge
ca-certificates 2023.11.17 h8857fd0_0 conda-forge
catalogue 2.0.10 py310h2ec42d9_0 conda-forge
certifi 2023.11.17 pyhd8ed1ab_0 conda-forge
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 unix_pyh707e725_0 conda-forge
cloudpathlib 0.16.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
comm 0.2.1 pyhd8ed1ab_0 conda-forge
confection 0.1.4 py310h1cef2ca_0 conda-forge
cymem 2.0.8 py310h9e9d8ca_1 conda-forge
cython-blis 0.7.10 py310hf0b6da5_2 conda-forge
debugpy 1.8.0 py310h9e9d8ca_1 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
double-conversion 3.3.0 he965462_0 conda-forge
exceptiongroup 1.2.0 pyhd8ed1ab_2 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
filelock 3.13.1 pyhd8ed1ab_0 conda-forge
fsspec 2023.12.2 pyhca7485f_0 conda-forge
gmp 6.3.0 h93d8f39_0 conda-forge
gmpy2 2.1.2 py310hb691cb2_1 conda-forge
icu 73.2 hf5e326d_0 conda-forge
idna 3.6 pyhd8ed1ab_0 conda-forge
importlib-metadata 7.0.1 pyha770c72_0 conda-forge
importlib_metadata 7.0.1 hd8ed1ab_0 conda-forge
ipykernel 6.29.0 pyh3cd1d5f_0 conda-forge
ipython 8.20.0 pyh707e725_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.3 pyhd8ed1ab_0 conda-forge
joblib 1.3.2 pyhd8ed1ab_0 conda-forge
jupyter_client 8.6.0 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.1 py310h2ec42d9_0 conda-forge
langcodes 3.3.0 pyhd8ed1ab_0 conda-forge
libabseil 20230802.1 cxx17_h048a20a_0 conda-forge
libblas 3.9.0
|
https://github.com/pytorch/text/issues/2227
|
closed
|
[] | 2024-01-30T02:50:25Z
| 2024-02-08T02:04:18Z
| 1
|
cecilialee
|
pytorch/xla
| 6,411
|
SPMD Global Batch size vs. --per_device_train_batch_size
|
## ❓ Questions and Help
Hey all,
I am looking to solidify my understanding and am seeking clarification on the SPMD user guide: https://github.com/pytorch-tpu/transformers/blob/llama2-google-next-training/SPMD_USER_GUIDE.md
I see it says:
_global_batch_size: The global batch size to use. Note that this value is supplied to the per_device_train_batch_size flag, since currently HuggingFace treats SPMD as a single-device program. This will change in future releases._
I'd like to ask 2 questions here, to ensure my understanding is correct:
1) With respect to the blog https://pytorch.org/blog/high-performance-llama-2/ and Figure 2, where it says, notably for the V4-32 use-case: "per device batch" = 16, Global Batch = 256, what was the argument to run_clm.py ? Was it
--per_device_train_batch_size 256 ?
If it was indeed "--per_device_train_batch_size 256 " , is the "Per Device Batch" in Figure 2 just a simple calculation of 256/16 TPUv4-32 chips, and NOT an actual argument to run_clm.py ?
2) Relatedly, I am looking to understand which (future-release) project is tracking a refinement of how the global batch size is specified for a multi-device configuration.
Many thanks,
Isaac
|
https://github.com/pytorch/xla/issues/6411
|
closed
|
[
"question",
"distributed"
] | 2024-01-30T00:16:54Z
| 2025-04-21T13:20:54Z
| null |
isaacr
|
huggingface/diffusers
| 6,755
|
how to train a lora in inpainting model?
|
Is there a script to train a LoRA on the SD 1.5 inpainting model that actually works?
I tried this:
https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint
but it gives an error:
`RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`
@thedarkzeno @patil-suraj
|
https://github.com/huggingface/diffusers/issues/6755
|
closed
|
[
"stale"
] | 2024-01-29T21:14:57Z
| 2024-11-22T01:39:54Z
| null |
loboere
|
pytorch/TensorRT
| 2,624
|
❓ undefined reference when Building Torch-TensorRT
|
## ❓ Question
<!-- Your question -->
## What you have already tried
I'm trying to build **Torch-TensorRT version 2.3.0a0**.
I successfully built **Torch 2.3.0.dev**.
When building Torch-TensorRT, if I comment out **http_archive** for **libtorch** and **libtorch_pre_cxx11_abi** and use **new_local_repository** for both of them, I get an undefined reference error when running **sudo PYTHONPATH=$PYTHONPATH python3 setup.py install**.
Now, if I leave http_archive for libtorch and libtorch_pre_cxx11_abi as the default, I can "successfully" build Torch-TensorRT, but when trying to import it into any Python code I get:
ImportError: /home/nick/.local/lib/python3.8/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKSs
In the pyproject.toml file I can see that Torch 2.3.0 is mandatory for building Torch-TensorRT, and that is the version of torch installed and running in my environment.
Not sure how to proceed, since it seems I have all the required packages installed.
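One check that might narrow this down (a sketch, not a definitive diagnosis): the `RKSs` at the end of the missing symbol suggests a pre-cxx11-ABI `std::string`, so the installed torch wheel and the libtorch pointed to by the Bazel WORKSPACE may disagree on the C++ ABI. Something like:
```python
# Quick ABI sanity check, run in the same Python environment used for the build.
import torch

print(torch.__version__)
print(torch.compiled_with_cxx11_abi())  # True -> cxx11 ABI, False -> pre-cxx11 ABI
# Whichever value this prints, the libtorch used by Bazel (http_archive or
# new_local_repository) should match it; a mismatch leaves std::string-typed
# symbols (the "RKSs" mangling) unresolved at import time.
```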
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.3.0a0+git4aa1f99
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): source
- Build command you used (if compiling from source): sudo python3 setup.py build develop
- Are you using local sources or building from archives: local
- Python version: 3.8
- CUDA version: 12.1
- GPU models and configuration: 2080 ti
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2624
|
open
|
[
"question"
] | 2024-01-29T18:26:34Z
| 2024-11-19T08:23:07Z
| null |
nicholasguimaraes
|
huggingface/optimum-benchmark
| 116
|
How to use optimum-benchmark for custom testing of my model
|
I am currently using Intel® Extension for Transformers to quantize a model, and I wonder if it is possible to utilize optimum-benchmark for testing the model. Alternatively, if there are other methods to load large models, could I conduct tests using optimum-benchmark after loading the model? Many thanks; this has been a real challenge for me, as I'm unsure how to properly test an optimized large-scale model.
|
https://github.com/huggingface/optimum-benchmark/issues/116
|
closed
|
[] | 2024-01-29T04:07:36Z
| 2024-02-19T16:07:06Z
| null |
WCSY-YG
|
pytorch/vision
| 8,236
|
segmentation fault when importing torchvision
|
### 🐛 Describe the bug
I get a segmentation fault when importing torchvision.
## Platform:
Macbook Pro 2018 13.3' with macOS 14.3
## Pytorch Version
2.1.2
## Torchvision Version:
0.16.2
## How to Reproduce
Run the command below in a shell terminal:
```sh
python -c 'import torchvision'
```
then the output is
```sh
zsh: segmentation fault python -c 'import torchvision'
```
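A small diagnostic sketch (not part of the original reproduction, standard library only) that can show which Python-level import the crash happens in:
```python
# Dump the Python traceback if the import crashes with SIGSEGV.
import faulthandler

faulthandler.enable()   # installs handlers for SIGSEGV/SIGBUS/etc.
import torchvision      # the crash location is printed before the process dies
print(torchvision.__version__)
```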
### Versions
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.3 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: version 3.28.1
Libc version: N/A
Python version: 3.11.7 (main, Dec 15 2023, 12:09:04) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.1.2
[pip3] torchaudio==2.1.2
[pip3] torchdata==0.7.1
[pip3] torchtext==0.16.2
[pip3] torchvision==0.16.2
[conda] blas 1.0 mkl https://repo.anaconda.com/pkgs/main
[conda] mkl 2023.1.0 h8e150cf_43560 https://repo.anaconda.com/pkgs/main
[conda] mkl-service 2.4.0 py311h6c40b1e_1 https://repo.anaconda.com/pkgs/main
[conda] mkl_fft 1.3.8 py311h6c40b1e_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_random 1.2.4 py311ha357a0b_0 https://repo.anaconda.com/pkgs/main
[conda] numpy 1.26.3 py311h728a8a3_0 https://repo.anaconda.com/pkgs/main
[conda] numpy-base 1.26.3 py311h53bf9ac_0 https://repo.anaconda.com/pkgs/main
[conda] torch 2.1.2 pypi_0 pypi
[conda] torchaudio 2.1.2 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchtext 0.16.2 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
|
https://github.com/pytorch/vision/issues/8236
|
closed
|
[] | 2024-01-29T01:02:48Z
| 2024-01-31T17:17:50Z
| 9
|
Romeo-CC
|
huggingface/chat-ui
| 747
|
.env.local config for llama-2-7b.Q4_K_S.gguf with llama.cpp server
|
I am using the following .env.local with llama-2-7b.Q4_K_S.gguf and the Llama prompt template:
```
MODELS=`[
{
"name": "llama-2-7b.Q4_K_S.gguf",
"chatPromptTemplate": "<s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 2048,
"stop": ["</s>"]
},
"endpoints": [
{
"url": "http://127.0.0.1:8080",
"type": "llamacpp"
}
]
}
]`
```
I am trying to get this to work with chat-ui, but it doesn't work and chat-ui is frozen. However, the server is receiving requests from the client.
<img width="1171" alt="image" src="https://github.com/huggingface/chat-ui/assets/106691906/e15147c5-5178-46b4-bc8c-d66bf4cfe1e3">
|
https://github.com/huggingface/chat-ui/issues/747
|
open
|
[
"support"
] | 2024-01-29T00:54:19Z
| 2024-02-22T14:54:08Z
| 3
|
smamindl
|
huggingface/chat-ui
| 746
|
settings page does not reflect selected Theme
|
The settings page is always light/white regardless of the theme selected (Dark or Light).
Is this intentional, or did we just not have time to respect the selected theme?
If we need to fix this, how much work do you expect? Just a small change on the main settings page (settings/+layout.svelte), or do we need to change every UI piece in settings? I might want to fix this myself if it is not huge.
thanks
|
https://github.com/huggingface/chat-ui/issues/746
|
open
|
[
"question",
"front"
] | 2024-01-28T23:09:38Z
| 2024-01-29T11:48:59Z
| null |
hungryalgo
|
huggingface/transformers.js
| 547
|
Text to speech generation using Xenova/mms-tts-por
|
### Question
Hi! First of all, thank you for the awesome library, it's been handy so far!
I've got 2 questions regarding TTS:
- I'm using the model above to create Brazilian Portuguese spoken audio and would like to know if there are options for this model, e.g. changing the voice from male to female, or adjusting the intonation.
- I discovered another model, `facebook/mms-tts-por`, in the compatible-languages list, but I'm getting the following error: `Could not locate file: "https://huggingface.co/facebook/mms-tts-por/resolve/main/tokenizer.json"`. Is transformers.js compatible with it?
Thanks in advance
|
https://github.com/huggingface/transformers.js/issues/547
|
closed
|
[
"question"
] | 2024-01-28T13:51:21Z
| 2025-01-13T22:15:35Z
| null |
Darksoulsong
|
huggingface/diffusers
| 6,739
|
How to generate images based on the text token embeddings output by CLIP's token_embedding module?
|
How can I generate images based on the text token embeddings output by CLIP's `token_embedding` module?
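For what it's worth, a minimal sketch of the closest supported path: pass precomputed text-encoder outputs through `prompt_embeds`. Note this conditions on the encoder's final hidden states rather than the raw `token_embedding` output (which the pipeline does not accept directly); the model ID is just an example:
```python
# Sketch: generate from precomputed text embeddings instead of a prompt string.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

text = "a photo of an astronaut riding a horse"
tokens = pipe.tokenizer(
    text,
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(tokens.input_ids)[0]  # final hidden states, (1, 77, 768)

image = pipe(prompt_embeds=prompt_embeds).images[0]
image.save("astronaut.png")
```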
|
https://github.com/huggingface/diffusers/issues/6739
|
closed
|
[
"stale",
"should-move-to-discussion"
] | 2024-01-28T08:51:45Z
| 2024-11-19T09:27:00Z
| null |
FlyGreyWolf
|
huggingface/transformers.js
| 546
|
header is not defined
|
### Question

|
https://github.com/huggingface/transformers.js/issues/546
|
closed
|
[
"question"
] | 2024-01-28T07:59:10Z
| 2024-01-28T09:28:27Z
| null |
BipulRahi
|
huggingface/datasets
| 6,624
|
How to download the laion-coco dataset
|
The LAION-COCO dataset is not available now. How can I download it?
https://huggingface.co/datasets/laion/laion-coco
|
https://github.com/huggingface/datasets/issues/6624
|
closed
|
[] | 2024-01-28T03:56:05Z
| 2024-02-06T09:43:31Z
| null |
vanpersie32
|
huggingface/datasets
| 6,623
|
streaming datasets doesn't work properly with multi-node
|
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already split, I don’t have to use `DistributedSampler` (they don't work with iterable datasets anyway)?
But in this case I noticed the following:
First iteration:
first GPU will get → [1, 2]
second GPU will get → [3, 4]
Second iteration:
first GPU will get → [5]
second GPU will get → Nothing
which actually creates an issue, since in the case of `DistributedSampler` the samples are repeated internally to ensure none of the GPUs is missing data at any iteration for gradient sync. (A minimal code sketch of this setup follows after the questions.)
So my questions are:
1. Since the splitting happens beforehand, how do I make sure each GPU gets a batch at each iteration, to avoid gradient sync issues?
2. Do we need to use `DistributedSampler`? If yes, how?
3. in the docstrings of `split_dataset_by_node`, this is mentioned: *"If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples."* Can you explain the last part here?
4. If `dataset.n_shards % world_size != 0`, is it possible to shard the streaming dataset on the fly to avoid the case where data is missing?
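For context, a minimal sketch of the setup being described (streaming + `split_dataset_by_node`, no `DistributedSampler`); the dataset name and the rank/world-size plumbing are placeholders:
```python
import os
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# Placeholder dataset; any streaming dataset yields an IterableDataset here.
ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)

# No DistributedSampler: samplers don't apply to IterableDataset, so nothing pads or
# repeats samples -- on the last iteration some ranks can end up without a batch,
# which is exactly the gradient-sync concern described above.
loader = DataLoader(ds, batch_size=2)
for batch in loader:
    ...
```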
### Motivation
Streaming datasets should work with DDP: big LLMs require a lot of data, DDP/multi-node is mostly what is used to train such models, and streaming can actually help solve the data side of that.
### Your contribution
Yes, I can help in submitting the PR once we get mutual understanding on how it should behave.
|
https://github.com/huggingface/datasets/issues/6623
|
open
|
[
"enhancement"
] | 2024-01-27T23:46:13Z
| 2025-12-08T12:26:20Z
| 29
|
rohitgr7
|