repo (stringclasses, 147 values)
| number (int64, 1 to 172k)
| title (stringlengths, 2 to 476)
| body (stringlengths, 0 to 5k)
| url (stringlengths, 39 to 70)
| state (stringclasses, 2 values)
| labels (listlengths, 0 to 9)
| created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18)
| updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39)
| comments (int64, 0 to 58, ⌀)
| user (stringlengths, 2 to 28)
|
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/huggingface.js
| 609
|
[Question] What is the correct way to access commit diff results via http?
|
Data I am interested in:

Here's the endpoint to list commits
https://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main
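For reference, a minimal sketch of querying that endpoint over HTTP with Python's `requests` (the response field names are not assumed here; the JSON is printed as returned):
```python
import requests

# Commits listing endpoint quoted above; this only lists commits and does not
# assume a separate diff endpoint exists.
url = "https://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
for commit in resp.json():
    print(commit)
```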
|
https://github.com/huggingface/huggingface.js/issues/609
|
closed
|
[] | 2024-04-05T12:00:15Z
| 2024-04-09T18:40:05Z
| null |
madgetr
|
huggingface/dataset-viewer
| 2,661
|
Increase the number of backfill workers?
|
Today, it's 8. Let's try increasing it and see if it speeds up the backfill job.
The current throughput is 577 datasets/minute.
|
https://github.com/huggingface/dataset-viewer/issues/2661
|
open
|
[
"question",
"P2",
"prod"
] | 2024-04-05T10:42:11Z
| 2024-04-05T16:42:13Z
| null |
severo
|
pytorch/TensorRT
| 2,730
|
❓ [Question] Running LayerNorm in fp16
|
## ❓ Question
## What you have already tried
I am trying to convert a transformer model to TRT in fp16 (fp32 works fine 🙂). It includes a bunch of LayerNorms, all of which explicitly cast their inputs to fp32, i.e.:
``` python
class LayerNormFP32(nn.LayerNorm):
def forward(self, x):
return super().forward(x.float()).type(x.dtype)
```
I am getting warnings about precisions of the layers:
```
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Detected layernorm nodes in FP16: %126 : Tensor = aten::layer_norm(%input.9, %127, %self.decoder.layers.0.attn_ln.weight.1, %370, %129, %130), scope: __module.decoder/__module.decoder.layers.0/__module.decoder.layers.0.attn_ln
...
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Running layernorm after self-attention in FP16 may cause overflow. Exporting the model to the latest available ONNX opset (later than opset 17) to use the INormalizationLayer, or forcing layernorm layers to run in FP32 precision can help with preserving accuracy.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Check verbose logs for the list of affected weights.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - - 2 weights are affected by this issue: Detected FP32 infinity values and converted them to corresponding FP16 infinity.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - - 27 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - - 3 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
```
I checked the dtype of the mentioned weights in the trace that I pass to `torch_tensorrt.compile`, and they are correctly in fp32, even though the warnings state the opposite.
The warning suggests two solutions (use INormalizationLayer or force FP32 precision), but I have no idea how to achieve either.
These might be related: https://github.com/pytorch/TensorRT/pull/2509 (or https://github.com/NVIDIA/TensorRT/issues/3101)
Any ideas on how to resolve or debug this issue?
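One possible direction (a hedged sketch, not a verified fix for this model): exclude `aten::layer_norm` from the TensorRT engine via `torch_executed_ops`, so the layer norms run in PyTorch at their original precision. The toy model and shapes below are placeholders.
```python
import torch
import torch.nn as nn
import torch_tensorrt

# Placeholder model standing in for the traced transformer mentioned above.
model = nn.Sequential(nn.Linear(64, 64), nn.LayerNorm(64)).cuda().eval().half()
traced = torch.jit.trace(model, torch.randn(1, 64, device="cuda", dtype=torch.half))

trt_model = torch_tensorrt.compile(
    traced,
    ir="torchscript",
    inputs=[torch_tensorrt.Input((1, 64), dtype=torch.half)],
    enabled_precisions={torch.half},
    torch_executed_ops=["aten::layer_norm"],  # keep layer norm out of the FP16 engine
)
```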
## Environment
- Python 3.11.8
- torch 2.2.1
- torch_tensorrt 2.2.0
- a100
|
https://github.com/pytorch/TensorRT/issues/2730
|
open
|
[
"question"
] | 2024-04-05T09:06:28Z
| 2025-04-25T12:01:41Z
| null |
Tomiinek
|
huggingface/transformers
| 30,066
|
How to calculate the mAP on this network?
|
### System Info
I want to evaluate my network with the mean Average Precision. I don't know how to get the class id of my ground-truth data. Are there any examples of calculating the mAP with this library?
I use DetrForObjectDetection with my own dataset.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is my code to save the loss in a CSV file. I also want to save the mAP in this file:
```python
def on_train_epoch_end(self, trainer, pl_module):
    train_loss = trainer.callback_metrics.get("training_loss").item()
    val_loss = trainer.callback_metrics.get("validation/loss").item()
    with open(self.file_path, 'a', newline='') as csvfile:
        writer = csv.writer(csvfile)
        if not self.header_written:
            writer.writerow(["Epoch", "Train Loss", "Validation Loss"])
            self.header_written = True
        writer.writerow([pl_module.current_epoch, train_loss, val_loss])
```
### Expected behavior
I tried to get the data with this code:
```python
gt_boxes = []
detected_boxes = []
for batch in self.val_dataloader:
    pixel_values = batch['pixel_values'].to(pl_module.device)
    pixel_mask = batch['pixel_mask'].to(pl_module.device)
    labels = batch['labels']
    # train_idx = batch['train_idx']
    outputs = pl_module(pixel_values=pixel_values, pixel_mask=pixel_mask)
    target_sizes = torch.tensor([image.shape[-2:] for image in pixel_values]).to(pixel_values.device)
    detections = image_processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
    for i in range(len(detections['scores'])):
        prob_score = detections['scores'][i].item()
        class_pred = detections['labels'][i].item()
        box = detections['boxes'][i].detach().cpu().numpy()
        detected_boxes.append([class_pred, prob_score, *box])
    for label in labels:
        gt_box = label['boxes']
        for box in gt_box:
            gt_boxes.append(box)

image_height = 2048
image_width = 2048
gt_boxes_abs = []
for box in gt_boxes:
    x_min, y_min, width, height = box
    x_max = x_min + width
    y_max = y_min + height
    x_min_abs = int(x_min * image_width)
    y_min_abs = int(y_min * image_height)
    x_max_abs = int(x_max * image_width)
    y_max_abs = int(y_max * image_height)
    class_id = ???
    difficult = ???
    crowd = ???
    gt_boxes_abs.append([x_min_abs, y_min_abs, x_max_abs, y_max_abs, class_id, difficult, crowd])

adjusted_detected_boxes = []
converted_boxes = []
for box in detected_boxes:
    class_id = box[0]
    confidence = box[1]
    x_min = box[2]
    y_min = box[3]
    x_max = box[4]
    y_max = box[5]
    converted_boxes.append([x_min, y_min, x_max, y_max, class_id, confidence])
```
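For what it's worth, a minimal sketch of computing mAP with `torchmetrics` (an assumption about tooling, not part of the original setup); the class id for predictions comes from `detections['labels']`, and for ground truth it is typically the `class_labels` entry of each DETR target dict:
```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision(box_format="xyxy")

# One prediction / target pair with dummy values for illustration.
preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 11.0, 48.0, 52.0]]),
    "labels": torch.tensor([1]),
}]

metric.update(preds, targets)
print(metric.compute()["map"])
```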
|
https://github.com/huggingface/transformers/issues/30066
|
closed
|
[] | 2024-04-05T08:32:31Z
| 2024-06-08T08:04:08Z
| null |
Sebi2106
|
huggingface/optimum-quanto
| 152
|
How does quanto calibrate torch functions?
|
I have learned that quanto calibrates ops in module form by adding module hooks, but what about torch functions like `torch.sigmoid`, `torch.elu`, and `torch.log`, etc.?
I think the output scale of `torch.sigmoid` could be directly evaluated similarly to quanto's approach with `softmax`. Additionally, `torch.elu` might be substituted with `torch.nn.ELU`.
However, I'm uncertain how functions like `torch.log`, which are unbounded and lack explicit module forms, will be calibrated within quanto.
|
https://github.com/huggingface/optimum-quanto/issues/152
|
closed
|
[
"question"
] | 2024-04-05T06:49:51Z
| 2024-04-11T09:41:55Z
| null |
shuokay
|
huggingface/candle
| 2,007
|
How to run inference of a (very) large model across multiple GPUs?
|
It is mentioned in the README that candle supports multi-GPU inference, using NCCL under the hood. How can this be implemented? I wonder if there is any available example to look at.
Also, I know PyTorch has things like DDP and FSDP; is candle's support for multi-GPU inference comparable to these techniques?
|
https://github.com/huggingface/candle/issues/2007
|
open
|
[] | 2024-04-04T13:52:46Z
| 2024-08-12T04:53:54Z
| null |
jorgeantonio21
|
huggingface/candle
| 2,006
|
How to get different outputs for the same prompt?
|
I used a Gemma model; it always returns the same output for the same prompt.
How can I get different outputs? Is there any method or parameter for sampling? (I even doubt that `top_p` works.)
|
https://github.com/huggingface/candle/issues/2006
|
closed
|
[] | 2024-04-04T10:43:31Z
| 2024-04-13T11:17:36Z
| null |
Hojun-Son
|
huggingface/chat-ui
| 975
|
Is it possible to hide the settings from users? Most users do not want to create assistants; they just want to use existing ones.
|
In the left-hand corner of HuggingChat, "Assistants" and "Settings" are visible. We are considering whether it is possible to hide these options from our users, as they have expressed no interest in creating assistants and prefer to use existing ones. Many thanks for your kind help. Howard
|
https://github.com/huggingface/chat-ui/issues/975
|
open
|
[] | 2024-04-04T07:33:25Z
| 2024-04-04T07:33:25Z
| 0
|
hjchenntnu
|
huggingface/transformers.js
| 679
|
Speech Recognition/Whisper word level scores or confidence output
|
### Question
Hey,
Big thanks for awesome project!
Is it possible to add a score/confidence for word-level output when using the Speech Recognition/Whisper model?
I would appreciate any direction/comments or suggestions on where to dig to add it.
Happy to submit a PR if I succeed.
Thanks!
|
https://github.com/huggingface/transformers.js/issues/679
|
open
|
[
"question"
] | 2024-04-04T07:04:00Z
| 2024-04-04T07:04:00Z
| null |
wobbble
|
huggingface/transformers
| 30,034
|
What is the data file format of `run_ner.py`?
|
### Feature request
What is the correct format for custom dataset in run_ner.py? Would it be possible to include a few lines on this with a helpful example?
### Motivation
I am using the example script run_ner.py from [huggingface](https://github.com/huggingface)/transformers. It is not possible to use the standard CoNLL format for model fine-tuning with run_ner.
### Your contribution
We could include this in the corresponding readme.
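For illustration, a sketch of a JSON-lines layout that works for token classification (the column names `tokens` and `ner_tags` are an assumption, mirroring the conll2003 schema):
```python
import json

# Hypothetical train.json: one example per line with token and tag lists.
examples = [
    {"tokens": ["EU", "rejects", "German", "call"], "ner_tags": ["B-ORG", "O", "B-MISC", "O"]},
    {"tokens": ["Peter", "Blackburn"], "ner_tags": ["B-PER", "I-PER"]},
]
with open("train.json", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# e.g. python run_ner.py --train_file train.json --validation_file dev.json ...
```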
|
https://github.com/huggingface/transformers/issues/30034
|
closed
|
[
"Good First Issue"
] | 2024-04-04T06:36:30Z
| 2024-04-08T11:50:00Z
| null |
sahil3773mehta
|
huggingface/datasets
| 6,777
|
.jsonl metadata not detected
|
### Describe the bug
Hi I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
.
.
.
I'm trying to use `dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')` to load the dataset; however, it is not able to load according to the fields in metadata1000.jsonl.
Please assist in loading the data properly.
I'm also getting
```
File "/workspace/train_trans_vae.py", line 1089, in <module>
print(get_metadata_patterns('/dataset/'))
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```
when trying
```
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```
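For context, a minimal sketch of the convention `imagefolder` usually relies on (an assumption about the loader, not a confirmed diagnosis): the metadata file is named exactly `metadata.jsonl` and links to images through a `file_name` column.
```python
# dataset/metadata.jsonl (assumed layout), one JSON object per line:
# {"file_name": "images/212734.png", "caption": "...", "gaussian_padded_image": "padded_images/p_212734.png"}
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/dataset/", split="train")
print(ds.column_names)  # expected: image, caption, gaussian_padded_image, ...
```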
### Steps to reproduce the bug
dataset Version: 2.18.0
make a similar jsonl and similar directory format
### Expected behavior
creates a dataset object with the column names, caption,image,gaussian_padded_image
### Environment info
dataset Version: 2.18.0
|
https://github.com/huggingface/datasets/issues/6777
|
open
|
[] | 2024-04-04T06:31:53Z
| 2024-04-05T21:14:48Z
| 5
|
nighting0le01
|
pytorch/TensorRT
| 2,724
|
[Question] Model converted using TensorRT is slower than native Pytorch
|
Hi All,
We are trying to run a `resnet18` model faster than just running the torchvision version on GPU, so we planned to convert and quantize the model using TensorRT. However, we did not see a performance boost after the conversion.
We tried playing with the `ir` mode using both `torch_compile` and `dynamo`; in addition, we tried varying values of `optimization_level`, which also did not help.
Here is a code snippet:
```python
import logging
import time
import torch_tensorrt
import torchvision
import torch
from torch.utils.data import DataLoader
from src.utils.utils import set_logger

set_logger()


@torch.no_grad()
def benchmark(model, inputs):
    times = list()
    for i in range(100):
        t = time.time()
        model(inputs)
        torch.cuda.synchronize()
        times.append(time.time() - t)
    return sum(times) / len(times), times


if __name__ == '__main__':
    # dataset = torchvision.datasets.STL10(
    #     root='/tmp/data',
    #     split='train',
    #     download=True,
    #     transform=torchvision.transforms.ToTensor()
    # )
    # loader = DataLoader(dataset, batch_size=2)
    bs = 128
    dummy_input = torch.rand(bs, 3, 96, 96).cuda()
    model = torchvision.models.resnet18(pretrained=True)
    model.fc = torch.nn.Linear(512, 10)  # Change the output layer to have 10 classes
    model.cuda()
    model.eval()
    ir_mode = "dynamo"
    # ir_mode = "torch_compile"
    trt_mod = torch_tensorrt.compile(
        model,
        ir=ir_mode,
        inputs=[torch_tensorrt.Input((bs, 3, 96, 96))],
        enabled_precisions={torch.float32},
        device=torch.device('cuda:0'),
        optimization_level=5,
    )
    avg_time, times = benchmark(model, dummy_input)
    logging.info(f"Model pytorch 32fp: {avg_time}")
    avg_time, times = benchmark(trt_mod, dummy_input)
    logging.info(f"Model compiled to TensorRT 32fp: {avg_time}")
    avg_time, times = benchmark(model.half(), dummy_input.half())
    logging.info(f"Model 16fp: {avg_time}")
    trt_mod = torch_tensorrt.compile(
        model.half(),
        ir=ir_mode,
        inputs=[torch_tensorrt.Input((bs, 3, 96, 96), dtype=torch.half)],
        enabled_precisions={torch.float16},
        device=torch.device('cuda:0'),
        optimization_level=5,
    )
    avg_time, times = benchmark(trt_mod, dummy_input.half())
    logging.info(f"Model compiled to TensorRT 16fp: {avg_time}")
```
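As a side note, a small benchmarking refinement (a sketch, not part of the original report): warm the model up and synchronize before starting the timer, so first-run overhead does not skew the averages.
```python
import time
import torch


@torch.no_grad()
def benchmark_with_warmup(model, inputs, warmup=10, iters=100):
    # Warm-up iterations absorb lazy initialization / first-run overhead
    for _ in range(warmup):
        model(inputs)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(inputs)
    torch.cuda.synchronize()
    return (time.time() - start) / iters
```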
**Adding Logs for running with `dynamo`**:
```
torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/opt/conda/envs/faster-whisper/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/opt/conda/envs/faster-whisper/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
INFO:torch_tensorrt.dynamo._compiler:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=0, min_block_size=5, torch_executed_ops=set(), pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=5, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False, device=Device(type=DeviceType.GPU, gpu_id=0), require_full_compilation=False, disable_tf32=False, sparse_weights=False, refit=False, engine_capability=<EngineCapability.DEFAULT: 0>, num_avg_timing_iters=1, dla_sram_size=1048576, dla_local_dram_size=1073741824, dla_global_dram_size=536870912, output_format='exported_program')
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT INetwork construction elapsed time: 0:00:00.304469
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:Using optimization level 5
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:Build TRT engine elapsed time: 0:00:17.935200
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT Engine uses: 113246208 bytes of Memory
/opt/conda/envs/faster-whisper/lib/python3.9/site-packages/torch/_export/exported_program.py:333: UserWarning: Unable to execute the generated python source code from the graph. The graph module will no longer be directly callable, but you can still run the ExportedProgram, and if needed, you can run the graph module eagerly using torch.fx.Interpreter.
warnings.warn(
INFO:root:Model pytorch 32fp: 0.021189916133880615
INFO:root:Model compiled to TensorRT 32fp: 0.02402569055557251
INFO:root:Model
|
https://github.com/pytorch/TensorRT/issues/2724
|
closed
|
[
"question"
] | 2024-04-03T18:28:20Z
| 2024-04-23T18:41:05Z
| null |
AvivSham
|
pytorch/xla
| 6,880
|
test_train_mp_mnist.py failing for CUDA when GPU_NUM_DEVICES=1
|
## 🐛 Bug
Following [How to run with PyTorch/XLA:GPU](https://github.com/pytorch/xla/blob/master/docs/gpu.md#how-to-run-with-pytorchxlagpu) to test the CUDA PJRT plugin, running a model hangs when GPU_NUM_DEVICES is set to 1. Values >1 work as expected.
## To Reproduce
Steps to reproduce the behavior:
1. GPU_NUM_DEVICES=1 python test/test_train_mp_mnist.py --fake_data
```
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1712043952.582653 14258 service.cc:145] XLA service 0x556cec57f460 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1712043952.582772 14258 service.cc:153] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0
I0000 00:00:1712043952.582792 14258 service.cc:153] StreamExecutor device (1): Tesla V100-SXM2-32GB, Compute Capability 7.0
I0000 00:00:1712043952.586167 14258 se_gpu_pjrt_client.cc:853] Using BFC allocator.
I0000 00:00:1712043952.586310 14258 gpu_helpers.cc:107] XLA backend allocating 25559924736 bytes on device 0 for BFCAllocator.
I0000 00:00:1712043952.586418 14258 gpu_helpers.cc:107] XLA backend allocating 25559924736 bytes on device 1 for BFCAllocator.
I0000 00:00:1712043952.586488 14258 gpu_helpers.cc:147] XLA backend will use up to 8519974912 bytes on device 0 for CollectiveBFCAllocator.
I0000 00:00:1712043952.586563 14258 gpu_helpers.cc:147] XLA backend will use up to 8519974912 bytes on device 1 for CollectiveBFCAllocator.
/usr/local/lib/python3.8/site-packages/torch_xla/core/xla_model.py:105: UserWarning: `devkind` argument is deprecated and will be removed in a future release.
warnings.warn("`devkind` argument is deprecated and will be removed in a "
Epoch 1 train begin 07:45:53
2024-04-02 07:46:03.713411: E external/xla/xla/service/rendezvous.cc:38] This thread has been waiting for `acquire clique for rank 0; clique=devices=[0,1]; stream=0; run_id=0` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.
2024-04-02 07:46:03.713778: E external/xla/xla/service/rendezvous.cc:38] This thread has been waiting for `acquire clique for rank 1; clique=devices=[0,1]; stream=0; run_id=1` for 10 seconds and may be stuck. Expected 2 threads to join the rendezvous, but not all of them arrived on time.
```
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: CUDA
- Image: us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_12.1
|
https://github.com/pytorch/xla/issues/6880
|
closed
|
[] | 2024-04-03T09:58:30Z
| 2024-04-08T11:27:27Z
| 3
|
mmakevic-amd
|
huggingface/lighteval
| 143
|
Do an intro notebook on how to use `lighteval`
|
https://github.com/huggingface/lighteval/issues/143
|
closed
|
[
"documentation"
] | 2024-04-03T07:53:25Z
| 2024-12-05T10:18:42Z
| null |
clefourrier
|
|
huggingface/accelerate
| 2,614
|
How do I selectively apply accelerate to trainers?
|
I have two trainers in a script: one is SFTTrainer and one is PPOTrainer, both from the trl library. Is it possible to apply accelerate only to PPOTrainer?
|
https://github.com/huggingface/accelerate/issues/2614
|
closed
|
[] | 2024-04-03T06:39:05Z
| 2024-05-21T15:06:36Z
| null |
zyzhang1130
|
huggingface/sentence-transformers
| 2,568
|
How to improve sentence-transformers' performance on CPU?
|
On the CPU, I tried Hugging Face's optimization.onnx and sentence_transformers, and I found that on the feature_extraction task, optimization.onnx was not as good as sentence_transformers in batch encoding performance.
My question is: is sentence_transformers the current ceiling on CPU performance?
|
https://github.com/huggingface/sentence-transformers/issues/2568
|
closed
|
[] | 2024-04-03T02:09:14Z
| 2024-04-23T09:17:39Z
| null |
chensuo2048
|
pytorch/serve
| 3,065
|
improve security doc for model security check
|
### 📚 The doc issue
The model URL provided by a customer (cx) can potentially contain unsafe content. The existing security doc lacks a summary of guidance for customers to overcome this issue.
### Suggest a potential alternative/fix
TorchServe provides 3 different levels of security checks to address this issue. The TorchServe security doc can be updated to provide guidance for customers.
- option 1: allowed URLs
- option 2: a customer plugin is a flexible solution that allows customers to add the security checks they prefer.
- option 3: prod infra (cloud service or internal company infra) provides an AOT security check.
|
https://github.com/pytorch/serve/issues/3065
|
closed
|
[
"documentation",
"security"
] | 2024-04-02T19:14:36Z
| 2024-04-17T18:25:42Z
| 0
|
lxning
|
huggingface/datasets
| 6,773
|
Dataset on Hub re-downloads every time?
|
### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic:
https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80
Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).
__EDIT:__ as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing `load_dataset()` retrieves from the cache, the `map()` calls should also retrieve from the cached output. But the `map()` commands sometimes re-execute.
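For reference, a sketch of the caching-friendly `map()` pattern (an assumption about the cause, not a confirmed diagnosis): `map()` caches by fingerprinting the mapped function, so a deterministic top-level function tends to be reused more reliably than a closure, and `load_from_cache_file=True` keeps reuse explicit. The column name below is hypothetical.
```python
from datasets import load_dataset


def split_semicolons(example):
    # "some_list_column" is a placeholder; a top-level, deterministic function
    # gives map() a stable fingerprint so its cached output can be reused.
    example["some_list_column"] = example["some_list_column"].split(";")
    return example


ds = load_dataset("manestay/borderlines", "territories")
ds = ds.map(split_semicolons, load_from_cache_file=True)
```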
### Steps to reproduce the bug
1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python `load_borderlines_hf(None)`
3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache
### Expected behavior
Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0
|
https://github.com/huggingface/datasets/issues/6773
|
closed
|
[] | 2024-04-02T17:23:22Z
| 2024-04-08T18:43:45Z
| 5
|
manestay
|
huggingface/transformers.js
| 677
|
How do you debug/measure Python -> JavaScript ONNX conversion?
|
### Question
I have converted a couple of ONNX models to use ONNX Runtime Web, using the Python ONNX version as the source. I've spent weeks debugging, though. What's your strategy for comparing tensor values, etc., with these ONNX models?
I've console-logged a number of values from the tensor/array to see if the values have diverged far, but it can get fatiguing. I can't simply dump a numpy array and compare.
|
https://github.com/huggingface/transformers.js/issues/677
|
open
|
[
"question"
] | 2024-04-02T16:16:22Z
| 2024-04-02T16:18:03Z
| null |
matbeedotcom
|
huggingface/transformers.js
| 676
|
How to use fp16 version of the model file?
|
### Question
example files: https://huggingface.co/Xenova/modnet/tree/main/onnx
|
https://github.com/huggingface/transformers.js/issues/676
|
closed
|
[
"question"
] | 2024-04-02T12:10:24Z
| 2024-04-03T02:56:52Z
| null |
cyio
|
huggingface/chat-ui
| 969
|
Display does not automatically update after receiving message
|
After receiving the message, the chat page does not update and is always in the loading state. The received message can only be displayed after refreshing the page or switching sessions.

|
https://github.com/huggingface/chat-ui/issues/969
|
open
|
[
"question"
] | 2024-04-02T06:14:59Z
| 2024-04-03T04:26:23Z
| null |
w4rw4r
|
pytorch/rl
| 2,053
|
[QUESTION] How to reset only certain nested parts of a key with TensorDictPrimer?
|
Hi, I have an observation spec for a multi-agent environment which looks like this:
```
CompositeSpec(
    agents: CompositeSpec(
        observation: UnboundedContinuousTensorSpec(
            shape=torch.Size([100, 2, 14]),
            space=None,
            device=cuda:0,
            dtype=torch.float32,
            domain=continuous),
        episode_reward: UnboundedContinuousTensorSpec(
            shape=torch.Size([100, 2, 1]),
            space=None,
            device=cuda:0,
            dtype=torch.float32,
            domain=continuous),
        edge_index: UnboundedContinuousTensorSpec(
            shape=torch.Size([100, 2, 2, 2]),
            space=None,
            device=cuda:0,
            dtype=torch.float32,
            domain=continuous), device=cuda:0, shape=torch.Size([100, 2])),
    ...
```
Here, the key ("agents", "edge_index") is a special field that I populate once upon creating the env and never want to change.
My problem is that I would like to add a recurrent policy, which requires tracking the hidden state for each agent. I read the Recurrent DQN [tutorial](https://pytorch.org/rl/tutorials/dqn_with_rnn.html#policy), but the LSTMModule's make_tensordict_primer() does not quite work for me as it is designed for the single-agent case.
Thus I have tried to write a custom TensorDictPrimer transform, like so:
```
existing_obs_spec = env.observation_spec
hidden_state_spec = UnboundedContinuousTensorSpec(shape=(*env.observation_spec["agents"].shape[:2], cfg.actor.gru.num_layers, cfg.actor.gru.hidden_size), device=cfg.env.device)
existing_obs_spec[("agents", "hidden_state")] = hidden_state_spec
env.append_transform(TensorDictPrimer(existing_obs_spec))
```
However, I notice that on environment resets, this TensorDictPrimer now overwrites all the fields in this spec with 0s. I have attempted to specify the TensorDictPrimer's input keys as solely the ("agents", "hidden_state") key I want to zero out, but when I do so, I end up losing the other nested keys under "agents" on reset.
Am I misunderstanding the usage of TensorDictPrimer? Any help would be appreciated.
|
https://github.com/pytorch/rl/issues/2053
|
closed
|
[] | 2024-04-02T02:53:19Z
| 2024-04-18T15:04:25Z
| null |
kfu02
|
huggingface/dataset-viewer
| 2,654
|
Tutorial about how to start/run my own local dataset server.
|
Hey,
I'm new to the dataset server and a rookie in the web field. I want to build my own dataset server; is there any tutorial that can guide me through building it?
Many thanks
|
https://github.com/huggingface/dataset-viewer/issues/2654
|
closed
|
[] | 2024-04-02T01:30:12Z
| 2024-05-11T15:03:50Z
| null |
ANYMS-A
|
huggingface/accelerate
| 2,603
|
How to load a FSDP checkpoint model
|
I have fine-tuned a Gemma 2B model using FSDP, and these are the files available under the checkpoint:
```
optimizer_0 pytorch_model_fsdp_0 rng_state_0.pth rng_state_1.pth scheduler.pt trainer_state.json
```
How can I load the above FSDP checkpoint?
Kindly help me with this issue.
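For reference, a minimal sketch of restoring such a checkpoint with Accelerate's `load_state` (assuming the script is launched with the same FSDP configuration used for fine-tuning; the model and paths below are placeholders):
```python
import torch
from accelerate import Accelerator

# Placeholder model/optimizer standing in for the fine-tuned Gemma 2B setup.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters())

accelerator = Accelerator()
model, optimizer = accelerator.prepare(model, optimizer)

# Restores model, optimizer, scheduler and RNG state from the checkpoint directory
# containing pytorch_model_fsdp_0, optimizer_0, rng_state_*.pth, etc.
accelerator.load_state("path/to/checkpoint")
```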
|
https://github.com/huggingface/accelerate/issues/2603
|
closed
|
[] | 2024-04-01T16:53:24Z
| 2024-05-11T15:06:21Z
| null |
nlpkiddo-2001
|
pytorch/TensorRT
| 2,723
|
❓ [Question] Output shape error in deconvolution layer when model is quantized with pytorch-quantization and using torch-tensorrt via torchscript
|
## ❓ Question
While using a simple model with int8 quantization (pytorch-quantization), when the output layer is a deconvolution, the TorchScript to Torch-TensorRT conversion fails with the wrong number of output channels. If a conv layer is used instead of a deconv, it works without error.
## What you have already tried
```python
import torch_tensorrt
import torch
import torch.nn as nn
import torchvision
from tqdm import tqdm
from torchvision import transforms
from pytorch_quantization.tensor_quant import QuantDescriptor
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import calib
import torch.nn.functional as F


class customodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.e11 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.e12 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.upconv4 = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)
        self.d41 = nn.Conv2d(128, 64, kernel_size=3, padding=1)
        self.d42 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.outconv = nn.ConvTranspose2d(64, 10, kernel_size=1)

    def forward(self, x):
        x1 = F.relu(self.e11(x))
        x2 = F.relu(self.e12(x1))
        pool1 = self.pool1(x2)
        up4 = self.upconv4(pool1)
        merge4 = torch.cat([up4, x2], dim=1)
        y = F.relu(self.d41(merge4))
        y = F.relu(self.d42(y))
        y = self.outconv(y)
        return y


def collect_stats(model, data_loader, num_batches):
    for name, module in model.named_modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if module._calibrator is not None:
                module.disable_quant()
                module.enable_calib()
            else:
                module.disable()
    for i, (image, _) in tqdm(enumerate(data_loader), total=num_batches):
        model(image.cuda())
        if i >= num_batches:
            break
    for name, module in model.named_modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if module._calibrator is not None:
                module.enable_quant()
                module.disable_calib()
            else:
                module.enable()


def compute_amax(model, **kwargs):
    for name, module in model.named_modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if module._calibrator is not None:
                if isinstance(module._calibrator, calib.MaxCalibrator):
                    module.load_calib_amax()
                else:
                    module.load_calib_amax(**kwargs)


def main():
    quant_modules.initialize()
    quant_desc_input = QuantDescriptor(calib_method='histogram')
    quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
    quant_nn.QuantConvTranspose2d.set_default_quant_desc_input(quant_desc_input)
    quant_nn.QuantLinear.set_default_quant_desc_input(quant_desc_input)
    model = customodel().cuda()
    train_dataset = torchvision.datasets.CIFAR10(
        root='./data',
        train=True,
        transform=transforms.Compose([
            transforms.Resize((572, 572)),
            transforms.ToTensor(),
            transforms.Normalize(mean=(0.1307,), std=(0.3081,))]),
        download=True)
    num_samples = int(0.03 * len(train_dataset))
    train_dataset_subset = torch.utils.data.Subset(train_dataset, range(num_samples))
    train_loader = torch.utils.data.DataLoader(dataset=train_dataset_subset,
                                               batch_size=12,
                                               shuffle=True)
    with torch.no_grad():
        collect_stats(model, train_loader, num_batches=10)
        compute_amax(model, method="percentile", percentile=99.99)
    quant_nn.TensorQuantizer.use_fb_fake_quant = True
    with torch.no_grad():
        data = iter(train_loader)
        images, _ = next(data)
        jit_model = torch.jit.trace(model, images.to("cuda"))
        torch.jit.save(jit_model, "custom.pt")


def main2():
    model = torch.jit.load('/content/custom.pt').eval()
    compile_spec = {"inputs": [torch_tensorrt.Input([2, 3, 572, 572])],
                    "enabled_precisions": torch.int8,
                    }
    trt_mod = torch_tensorrt.compile(model, **compile_spec, ir='torchscript')


if __name__ == '__main__':
    main()
    main2()
```
```
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: (Unnamed Layer* 53) [Deconvolution]: weight input tensor shape not consistent with the nbOutputMaps in addConvolutionNd/addDeconvolutionNd API. Expected output channels 64 kernel spatial dims [1,1]. But got output channels 10 kernel spatial dims [1,1]
ERROR: [Torch-
```
|
https://github.com/pytorch/TensorRT/issues/2723
|
closed
|
[
"question"
] | 2024-04-01T15:39:16Z
| 2024-05-22T18:51:32Z
| null |
oazeybekoglu
|
huggingface/datasets
| 6,769
|
(Willing to PR) Datasets with custom python objects
|
### Feature request
Hi, thanks for the library! I would like to have a Hugging Face Dataset where one of its columns is custom (non-serializable) Python objects. For example, minimal code:
```python
import datasets


class MyClass:
    pass


dataset = datasets.Dataset.from_list([
    dict(a=MyClass(), b='hello'),
])
```
It gives error:
```
ArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type
```
I guess it is because Dataset forces everything to be converted into the Arrow format. However, is there any way to make this scenario work? Thanks!
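One possible workaround (a sketch under the assumption that storing opaque bytes is acceptable): pickle the objects yourself so Arrow only ever sees a binary column.
```python
import pickle
import datasets


class MyClass:
    pass


# Store pickled bytes instead of the raw object; Arrow handles bytes natively.
dataset = datasets.Dataset.from_list([
    {"a": pickle.dumps(MyClass()), "b": "hello"},
])
obj = pickle.loads(dataset[0]["a"])  # reconstruct the Python object on access
```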
### Motivation
(see above)
### Your contribution
Yes, I am happy to PR!
Cross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy
EDIT: possibly related https://github.com/huggingface/datasets/issues/5766
|
https://github.com/huggingface/datasets/issues/6769
|
open
|
[
"enhancement"
] | 2024-04-01T13:18:47Z
| 2024-04-01T13:36:58Z
| 0
|
fzyzcjy
|
pytorch/rl
| 2,052
|
[BUG?] How to handle next with custom environment and check_env_specs()
|
I recently started learning TorchRL, so it's possible that this is a misunderstanding on my part and not an actual bug.
## Describe the bug
I'm trying to setup a simple spatial arrangement problem using a custom environment. There are N blocks each with an x, y position and a size. My action consists of a block index and x and y deltas. The observation spec is setup to hold the updated positions and sizes of the blocks. The action spec is setup to hold the index and delta. For now, reward is just the distance from center for each block so the network is only trying to learn to move blocks to the center of the space. For state, I include distance from center.
When check_env_specs() is run it fails indicating that the real tensor contains the next state but the fake tensor does not.
>.venv/lib/python3.11/site-packages/torchrl/envs/utils.py:160: UserWarning: The expected key set and actual key set differ. This will work but with a slower thr$
Actual - Expected keys={('next', 'state', 'distance_from_center')}.
warnings.warn(
Traceback (most recent call last):
File "spatial-arrangement/mwe.py", line 115, in <module>
check_env_specs(env)
File ".venv/lib/python3.11/site-packages/torchrl/envs/utils.py", line 634, in check_env_specs
raise AssertionError(
AssertionError: The keys of the specs and data do not match:
- List of keys present in real but not in fake: {('next', 'state', 'distance_from_center')},
- List of keys present in fake but not in real: set().
`check_env_specs` calls `env.fake_tensordict()` to create `fake_tensordict` whose keys are later compared to `real_tensordict` with keys obtained from a rollout. Unless "next" is explicitly added the key check will not pass because the created fake_tensordict will only contain observation, reward and done but not next.
https://github.com/pytorch/rl/blob/cd540bf96a9c998e89a59382b1961fd8a2bc57f0/torchrl/envs/common.py#L2840-L2845
## To Reproduce
The following is an MWE that shows the failure.
```python
import torch
from torchrl.envs import EnvBase
from torchrl.envs.utils import check_env_specs
from torchrl.data import BoundedTensorSpec, CompositeSpec, UnboundedContinuousTensorSpec
from tensordict import TensorDict

NUM_BLOCKS = 4


class BlockArrangementEnv(EnvBase):
    def __init__(self):
        super().__init__()
        self.observation_spec = CompositeSpec({
            "observation": CompositeSpec({
                "positions": BoundedTensorSpec(
                    low=0.0,
                    high=1.0,
                    shape=torch.Size([NUM_BLOCKS, 2]),
                    dtype=torch.float32
                ),
                "sizes": BoundedTensorSpec(
                    low=0.1,
                    high=1.0,
                    shape=torch.Size([NUM_BLOCKS, 2]),
                    dtype=torch.float32
                )
            }),
        })
        self.state_spec = CompositeSpec({
            "state": CompositeSpec({
                "distance_from_center": UnboundedContinuousTensorSpec(
                    shape=torch.Size([NUM_BLOCKS]),
                    dtype=torch.float32
                ),
            })
        })
        self.action_spec = CompositeSpec({
            "action": CompositeSpec({
                "index": BoundedTensorSpec(
                    low=0,
                    high=NUM_BLOCKS - 1,
                    shape=torch.Size([1]),
                    dtype=torch.int
                ),
                "delta": BoundedTensorSpec(
                    low=-1.0,
                    high=1.0,
                    shape=torch.Size([2]),
                    dtype=torch.float32
                )
            })
        })
        self.reward_spec = UnboundedContinuousTensorSpec(
            shape=torch.Size([NUM_BLOCKS]),
            dtype=torch.float32
        )

    def _reset(self, td):
        return TensorDict({
            "observation": {
                "positions": torch.rand([NUM_BLOCKS, 2]),
                "sizes": torch.FloatTensor(NUM_BLOCKS, 2).uniform_(0.1, 1.0),
            },
            "state": {
                "distance_from_center": torch.rand([NUM_BLOCKS]),
            }
        }, batch_size=[])

    def _step(self, td, **kwargs):
        return TensorDict({
            "observation": {
                "positions": torch.rand([NUM_BLOCKS, 2]),
                "sizes": torch.FloatTensor(NUM_BLOCKS, 2).uniform_(0.1, 1.0),
            },
            "state": {
                "distance_from_center": torch.rand([NUM_BLOCKS]),
            },
            "reward": torch.rand([NUM_BLOCKS]),
            "done": torch.tensor(False)
        }, batch_size=[])

    def _set_seed(self, seed):
        pass


env = BlockArrangementEnv()
check_env_specs(env)
```
## Expected behavior
I'm not expecting that I need to add next explicitly anywhere since it seems
|
https://github.com/pytorch/rl/issues/2052
|
closed
|
[
"bug"
] | 2024-03-31T22:10:49Z
| 2024-04-02T12:00:35Z
| null |
mneilly
|
huggingface/optimum-quanto
| 146
|
Question about the gradient of QTensor and QBitTensor
|
I am confused by the gradient of the Quantizer and QBitTensor. Take QTensor as the example:
The evaluation of forward is:
```txt
data = base / scale (1)
data = round(data) (2)
data = clamp(data, qmin, qmax) (3)
```
I think the gradients should be:
```txt
grad_div = 1 / scale (1)
grad_round = 1 (2) # refer to "straight though estimator": https://arxiv.org/abs/1308.3432
grad_clamp = 1 if qmin < data < qmax else 0 (3)
```
According to the chain rule, the gradient of Quantizer should be `grad_div * grad_round * grad_clamp`, which is equal to `1 / scale if qmin < base/scale < qmax else 0`.
I have looked at QTensor's unit test and I find that dequantize is applied to the QTensor before backward. I am confused by `Quantizer.backward` and the `dequantize` behavior before backward.
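For intuition, here is a minimal sketch (not quanto's actual implementation) of a fake quantizer whose backward matches the chain-rule derivation above: the round is treated as identity and the gradient is 1/scale inside the clamp range, 0 outside.
```python
import torch


class QuantizeSTE(torch.autograd.Function):
    """forward:  q = clamp(round(x / scale), qmin, qmax)
    backward: dq/dx = 1/scale where un-clamped, 0 where clamped (round = identity)."""

    @staticmethod
    def forward(ctx, x, scale, qmin, qmax):
        q_unclamped = torch.round(x / scale)
        ctx.save_for_backward(q_unclamped)
        ctx.scale, ctx.qmin, ctx.qmax = scale, qmin, qmax
        return torch.clamp(q_unclamped, qmin, qmax)

    @staticmethod
    def backward(ctx, grad_output):
        (q_unclamped,) = ctx.saved_tensors
        inside = (q_unclamped >= ctx.qmin) & (q_unclamped <= ctx.qmax)
        return grad_output * inside / ctx.scale, None, None, None


x = torch.randn(4, requires_grad=True)
QuantizeSTE.apply(x, 0.1, -127, 127).sum().backward()
print(x.grad)  # 1/scale for un-clamped entries, 0 for clamped ones
```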
|
https://github.com/huggingface/optimum-quanto/issues/146
|
closed
|
[
"question"
] | 2024-03-31T14:33:10Z
| 2024-04-24T13:51:20Z
| null |
shuokay
|
pytorch/text
| 2,253
|
PyTorch 2.4 is not supported by TorchText
|
I've been working on this for days trying to install torchtext with PyTorch 2.4, with no luck.
The error message I receive:
```
torchtext 0.17.2 depends on torch==2.2.2
The user requested (constraint) torch==2.4.0.dev20240324+cu121
```
So it seems impossible to use torchtext with the latest version of pytorch.
Is there any way to solve this issue without having to downgrade to pytorch 2.2.2?
|
https://github.com/pytorch/text/issues/2253
|
open
|
[] | 2024-03-31T05:07:53Z
| 2025-08-11T14:46:49Z
| 2
|
grant541
|
huggingface/transformers.js
| 673
|
Is dit-base supported
|
### Question
There is a [Huggingface repo](https://huggingface.co/Xenova/dit-base) for the ONNX version of the dit-base model but I can't seem to make it work.
I keep getting the following error:

Is the model currently supported?
|
https://github.com/huggingface/transformers.js/issues/673
|
closed
|
[
"question"
] | 2024-03-31T01:18:42Z
| 2024-03-31T01:48:24Z
| null |
Maxzurek
|
huggingface/datatrove
| 143
|
Understand the output of deduplication
|
Hi,
I have the Arabic split from Common Crawl and I'm trying to deduplicate it.
I used datatrove for this with a small example.
I got two files in my output folder:
0000.c4_dup and 0000.c4_sig
Could you help me understand this output?
I cannot read their content, since c/00000.c4_sig is not UTF-8 encoded and seems to be a binary file.
Where should I see the deduplicated text?
Thanks in advance.
|
https://github.com/huggingface/datatrove/issues/143
|
closed
|
[
"question"
] | 2024-03-30T23:16:21Z
| 2024-05-06T09:30:43Z
| null |
Manel-Hik
|
huggingface/candle
| 1,971
|
How to use `topk`?
|
I am trying to use `topk` to implement X-LoRA in Candle, and want to perform `topk` in the last dimension. Specifically, I need the `indices` return value (as returned by [`torch.topk`](https://pytorch.org/docs/stable/generated/torch.topk.html)).
These indices will either be used to create a mask to zero out all the values which are _not_ in the topk, and/or used to apply scalings on the nonzero values. This may be hard to understand, so please see [this](https://github.com/EricLBuehler/xlora/blob/3637d1e00854649e8b9162f8f87233248577162c/src/xlora/xlora_insertion.py#L50-L63) snippet from our X-LoRA library.
Is there a way to implement this with the current Candle functions, or is this planned to be implemented as a function?
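For reference, the PyTorch semantics being asked about, as a hedged sketch (this is not Candle code): take the top-k along the last dimension and scatter the indices into a boolean mask.
```python
import torch

scalings = torch.randn(2, 5)                      # e.g. per-adapter scalings
values, indices = torch.topk(scalings, k=2, dim=-1)

# Boolean mask that keeps only the top-k entries along the last dimension
mask = torch.zeros_like(scalings, dtype=torch.bool)
mask.scatter_(-1, indices, True)
masked = torch.where(mask, scalings, torch.zeros_like(scalings))
print(indices, masked)
```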
---
After looking at the Mixtral MoE selection implementation, I cannot really understand it:
> https://github.com/huggingface/candle/blob/3144150b8d1b80b2c6b469dcab5b717598f0a458/candle-transformers/src/models/mixtral.rs#L302-L323
How does this work? Thanks!
|
https://github.com/huggingface/candle/issues/1971
|
closed
|
[] | 2024-03-30T20:29:45Z
| 2024-07-23T02:02:58Z
| null |
EricLBuehler
|
huggingface/transformers.js
| 671
|
What is involved in upgrading to V3?
|
### Question
In anticipation of being able to [generate music](https://github.com/xenova/transformers.js/issues/668) with musicGen I'm attempting to switch my project over to version 3, which I was able to build on my mac.
I noticed that when using SpeechT5, the voice sounds completely garbled. I've attached a zip with two example WAV files.
[audio_wav_examples.zip](https://github.com/xenova/transformers.js/files/14806203/audio_wav_examples.zip)
I suspect I'm overlooking something, and need to upgrade some other things too? So my question is: could you give a broad overview of all the parts I need to upgrade?
Things I've checked or tried:
- Whisper Speech to Text is still working after 'dropping in' the new version.
- Cleared caches (the JS caches)
- Grabbing 'official' package from the [link to the JSDelivr repository](https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0) in the V3 readme, but that doesn't work, which I assume is just an auto-build glitch.
- Switching WAV generation code to the one in Transformers.js V3 example.
- Switching to the [example webworker](https://github.com/xenova/transformers.js/blob/v3/examples/text-to-speech-client/src/worker.js) in the V3 branch, which looks very different, but it had no effect. (The old code was basically `synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false });`).
- The wav blob from the worker has the same issue as the raw Float32 array, so the issue is not in the way I was playing those arrays.
|
https://github.com/huggingface/transformers.js/issues/671
|
closed
|
[
"question"
] | 2024-03-29T18:09:23Z
| 2024-03-31T13:50:27Z
| null |
flatsiedatsie
|
huggingface/datasets
| 6,764
|
load_dataset can't work with symbolic links
|
### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metadata.csv
while this dataset can't:
├── example_dataset_symlink/
│ ├── data/
│ │ ├── train/
│ │ │ ├── sym0 -> file0
│ │ │ ├── sym1 -> file1
│ │ ├── dev/
│ │ │ ├── sym2 -> file2
│ │ │ ├── sym3 -> file3
│ ├── metadata.csv
I have created an example dataset in order to reproduce the problem:
1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples will be in train split, instead of having two examples in train and two examples in dev. The script won't load the correct audio files.
[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)
### Motivation
I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying the all the files for each subset, I would prefer copying symbolic links of the data. This way, the memory usage would not significantly increase beyond the initial dataset size.
Advantages of this approach:
- It would leave a smaller memory footprint on the hard drive
- Creating smaller datasets would be much faster
### Your contribution
I would gladly contribute, if this is something useful to the community. It seems like a simple change of code, something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input.
|
https://github.com/huggingface/datasets/issues/6764
|
open
|
[
"enhancement"
] | 2024-03-29T17:49:28Z
| 2025-04-29T15:06:28Z
| 1
|
VladimirVincan
|
huggingface/transformers.js
| 670
|
Are tokenizers supposed to work in the browser?
|
### Question
I'd love to use some pretrained tokenizers, right in my browser. On a number of occasions, I've tried to use this library to load and use a tokenizer in my browser, but it always fails with an error like this:
```
Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
getModelJSON hub.js:584
loadTokenizer tokenizers.js:62
from_pretrained tokenizers.js:4398
gv9xs tok.js:3
gv9xs tok.js:9
newRequire dev.42f35062.js:71
<anonymous> dev.42f35062.js:122
<anonymous> dev.42f35062.js:145
hub.js:584:16
gv9xs tok.js:3
AsyncFunctionThrow self-hosted:856
(Async: async)
gv9xs tok.js:9
newRequire dev.42f35062.js:71
<anonymous> dev.42f35062.js:122
<anonymous> dev.42f35062.js:145
```
Is there anything I can do to make this work? My code is rather simple:
```
import { AutoTokenizer } from '@xenova/transformers'
;(async function () {
const tokenizer = await AutoTokenizer.from_pretrained(
'Xenova/bert-base-uncased'
)
console.log(tokenizer)
const { input_ids } = await tokenizer('I love transformers!')
console.log(input_ids)
})()
```
I serve this code via a Parcel development server, but it's never worked for me. Any advice would be greatly appreciated!
|
https://github.com/huggingface/transformers.js/issues/670
|
closed
|
[
"question"
] | 2024-03-29T16:10:46Z
| 2024-03-29T16:53:21Z
| null |
Vectorrent
|
pytorch/serve
| 3,054
|
Building frontend from source in docker
|
### 📚 The doc issue
I am not able to find a way to add the frontend model-server JAR as part of the Docker image to host a TorchServe model.
I was trying to learn how to make changes to the frontend for a small fix to customizedMetadata in the management API; the metadata is not JSON-parsed. The changes did not surface when I hosted the model.
```
[
{
"modelName": "toy-ranker",
"modelVersion": "2024-03-29-10:36",
"modelUrl": "toy-ranker.mar",
"runtime": "python",
"minWorkers": 4,
"maxWorkers": 4,
"batchSize": 1,
"maxBatchDelay": 100,
"loadedAtStartup": true,
"workers": [
.
.
.
],
"jobQueueStatus": {
"remainingCapacity": 1000,
"pendingRequests": 0
},
"customizedMetadata": "{\n \"input1-name\": \"something\",\n \"input2-name\": \"something2\"\n}"
}
]
```
### Suggest a potential alternative/fix
Documentation on docker/ on how to build the frontend from source.
cc @agunapal
|
https://github.com/pytorch/serve/issues/3054
|
closed
|
[
"triaged",
"docker"
] | 2024-03-29T15:54:29Z
| 2024-04-04T16:54:06Z
| 0
|
harshita-meena
|
huggingface/transformers.js
| 669
|
TinyLlama Conversion
|
### Question
I ran the converter script on the tinyllama repo for both the TinyLlama models ([intermediate step 1431K 3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) and [chat v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)) and uploaded them to my repo ([intermediate step 1431K 3T](https://huggingface.co/dmmagdal/tinyllama-1.1B-intermediate-step-1431k-3T-onnx-js) [chat v1.0](https://huggingface.co/dmmagdal/tinyllama-1.1B-chat-v1.0-onnx-js); I also have uploads where the quantized flag was enabled).
When I try to run either of my converted models with the `AutoModelForCausalLM` or `pipeline`, I get the following error:
```
Error: Could not locate file: "https://huggingface.co/dmmagdal/tinyllama-1.1B-chat-v1.0-onnx-js/resolve/main/onnx/decoder_model_merged.onnx".
```
This error seems to be correct in that I do not have that file in my repo. Was there something I did wrong in the conversion process or is the model not fully supported by transformers.js?
I'm not sure how or if it relates to the TinyLlama repo you have here: https://huggingface.co/Xenova/TinyLLama-v0/tree/main
|
https://github.com/huggingface/transformers.js/issues/669
|
closed
|
[
"question"
] | 2024-03-29T14:50:06Z
| 2025-10-13T04:57:32Z
| null |
dmmagdal
|
huggingface/datatrove
| 142
|
Deduplicating local data throws an error
|
Hi,
I have data in my local machine in the format of a jsonl file and I want to deduplicate it. I'm using the following example:
```python
sent_dedup_config = SentDedupConfig(
    n_sentences=3,
    split_sentences=False,  # set to False to split on \n instead
    only_dedup_in_index=True,
    min_doc_words=50,
)

FINDER_WORKERS = 10  # this will speed up/parallelize step 2


def run_example():
    pipeline_1 = [
        JsonlReader("CC_data_inputs/"),
        SentenceDedupSignature(output_folder="cc_output/sigs", config=sent_dedup_config, finder_workers=FINDER_WORKERS),
    ]
    pipeline_2 = [SentenceFindDedups(data_folder="cc_output/sigs", output_folder="cc_output/dups", config=sent_dedup_config)]
    pipeline_3 = [
        JsonlReader(data_folder="CC_data_inputs/"),
        SentenceDedupFilter(data_folder="cc_output/dups", config=sent_dedup_config),
    ]
    executor_1: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_1, workers=4, tasks=4)
    executor_2: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_2, workers=1, tasks=FINDER_WORKERS)
    executor_3: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_3, workers=4, tasks=4)
    print(executor_1.run())
    print(executor_2.run())
    print(executor_3.run())
```
I edited the first pipeline to just read the jsonl file (assuming that my data is ready directly for step 2). When I run the code, it throws this error:
```
Traceback (most recent call last):
  File "/home/ubuntu/deduplication/sentence_deduplication.py", line 4, in <module>
    from datatrove.pipeline.dedup.sentence_dedup import SentDedupConfig
ImportError: cannot import name 'SentDedupConfig' from 'datatrove.pipeline.dedup.sentence_dedup' (/home/ubuntu/miniconda3/lib/python3.11/site-packages/datatrove/pipeline/dedup/sentence_dedup.py)
```
My data consists of a set of 5 jsonl files inside the folder CC_data_inputs. I just reinstalled the datatrove library. Could you help me figure it out?
|
https://github.com/huggingface/datatrove/issues/142
|
closed
|
[
"question"
] | 2024-03-29T12:31:30Z
| 2024-04-24T14:15:58Z
| null |
Manel-Hik
|
pytorch/pytorch
| 122,959
|
RuntimeError with PyTorch's MultiheadAttention: How to resolve shape mismatch?
|
### 🐛 Describe the bug
I'm encountering an issue regarding the input shape for PyTorch's MultiheadAttention. I have initialized MultiheadAttention as follows:
`attention = MultiheadAttention(embed_dim=1536, num_heads=4)`
The input tensors have the following shapes:
- query.shape is torch.Size([1, 1, 1536])
- Both key.shape and value.shape are torch.Size([1, 23, 1536])
However, when attempting to use these inputs, I encounter the following error:
```
RuntimeError                              Traceback (most recent call last)
Cell In[15], line 1
----> 1 _ = cal_attn_weight_embedding(attention, top_j_sim_video_embeddings_list)

File ~/main/reproduct/choi/make_embedding.py:384, in cal_attn_weight_embedding(attention, top_j_sim_video_embeddings_list)
    381 print(embedding.shape)
    383 # compute attention
--> 384 output, attn_weights = attention(thumbnail, embedding, embedding)
    385 # attn_weight shape: (1, 1, j+1)
    387 attn_weights = attn_weights.squeeze(0).unsqueeze(-1)  # shape: (j+1, 1)

File ~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/activation.py:1205, in MultiheadAttention.forward(self, query, key, value, key_padding_mask, need_weights, attn_mask, average_attn_weights, is_causal)
   1191
```
|
https://github.com/pytorch/pytorch/issues/122959
|
closed
|
[] | 2024-03-29T09:19:45Z
| 2025-01-22T12:08:21Z
| null |
YuyaWake
|
pytorch/pytorch
| 122,957
|
How to export torch.optim.LBFGS using torch.onnx.export
|
### 🚀 The feature, motivation and pitch
I have Python code that solves linear equations with torch.optim.LBFGS, and I want to make it work in C++. One possible way is to use libtorch, but I wonder if I can export it like an nn.Module with torch.onnx.export.
Here is my python code:
```
import torch
import torch.nn as nn
import onnxruntime as rt
from torch.autograd import Variable

def test(jac_t, state):
    n_actions = 5
    dt = 0.01
    # target = torch.randn(n_actions, 1)
    target = torch.tensor([[ 0.0754],
                           [ 1.2151],
                           [-1.4920],
                           [ 1.1642],
                           [ 0.2289]])
    mat = torch.matmul(jac_t, target) * dt + state
    init = torch.randn(n_actions, 1)
    init = torch.tensor([[-0.3018],
                         [ 1.1070],
                         [-1.4571],
                         [ 1.0705],
                         [-0.8479]])
    q_dot = Variable(init, requires_grad=True)
    v = [q_dot]
    optimizer = torch.optim.LBFGS(v)  # , lr=0.1)
    for i in range(0, 10):
        def cost():
            optimizer.zero_grad()
            next_state = torch.matmul(jac_t, q_dot) * dt + state
            d = torch.pow(next_state - mat, 2).sum()
            d.backward()
            return d
        optimizer.step(cost)
        d = cost()
        if d < 1e-3:
            break
    return init

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(5, 1)
        self.linear2 = nn.Linear(1, 1)

    def forward(self, jac_t, state):
        out = self.linear1(jac_t), self.linear2(state)
        c = test(jac_t, state)
        return out, c

if __name__ == '__main__':
    ttt = Test()
    ttt.eval()
    n_actions = 5
    # jac_t = torch.randn(6, n_actions)
    jac_t = torch.tensor([[ 2.0041,  2.2399, -0.0553,  1.4054,  0.2301],
                          [ 1.4019, -2.3094, -1.0461,  0.7753,  1.0787],
                          [-0.6338,  0.1553, -1.1531,  1.0613, -0.2952],
                          [ 0.0541, -0.3652, -0.5361,  2.0200,  0.9431],
                          [ 0.4075,  1.4435, -1.5067, -0.5096,  0.7448],
                          [-0.6440, -0.6492,  0.3728, -2.8277, -1.1983]])
    # state = torch.randn(6, 1)
    state = torch.tensor([[-1.1193],
                          [ 0.2084],
                          [-1.4547],
                          [-1.2416],
                          [ 0.9738],
                          [ 1.6379]])
    torch.onnx.export(ttt, (jac_t, state), 'ttt.onnx')
    a = ttt.forward(jac_t, state)
    print('a', a[-1])
    sess = rt.InferenceSession('ttt.onnx')
    b = sess.run(None, {"jac_t": jac_t.numpy(), "state": state.numpy()})
    print('b', b[-1])
```
The outputs of a and b are the same (both close to the value of target), which means that inference with the exported ONNX file does some calculation like the Python code. But if I comment out `init = torch.tensor([[-0.3018]...` and use `init = torch.randn(n_actions, 1)`, the output of b will be wrong.
So I guess the calculation of the exported ONNX module is not dynamic. It records the way to add/multiply to the result, something like a computational graph. In fact, I have to use `out = self.linear1(jac_t), self.linear2(state)` to put jac_t and state into the computational graph.
What's the proper way to export torch.optim.LBFGS?
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
|
https://github.com/pytorch/pytorch/issues/122957
|
open
|
[
"module: onnx",
"module: optimizer",
"triaged"
] | 2024-03-29T08:42:49Z
| 2024-07-22T09:48:29Z
| null |
shekmun
|
huggingface/optimum-intel
| 642
|
How to apply LoRA adapter to a model loaded with OVModelForCausalLM()?
|
In the transformers library, we can load multiple adapters onto the original model with load_adapter and then switch to a specified adapter with set_adapter, as shown below.
```
# base model
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
# load multiple adapters
model.load_adapter("model/adapter1/", "adapter1")
model.load_adapter("model/adapter2/", "adapter2")
# switch adapter
model.set_adapter("adapter2")
```
Now I want to apply LoRA adapters with OpenVINO, but I can't find an example of it.
Is it possible to do it with OVModelForCausalLM?
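A workaround that might be worth trying until adapter switching is supported natively (a sketch only; it assumes the peft merge API and the optimum-intel `export=True` path, and the model/adapter paths are placeholders): merge each adapter into the base model and export one OpenVINO model per adapter.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM
from optimum.intel import OVModelForCausalLM

model_name = "base-model-id"  # placeholder

# merge one LoRA adapter into the base weights, then export the merged model to OpenVINO
base = AutoModelForCausalLM.from_pretrained(model_name)
merged = PeftModel.from_pretrained(base, "model/adapter1/").merge_and_unload()
merged.save_pretrained("merged-adapter1")

ov_model = OVModelForCausalLM.from_pretrained("merged-adapter1", export=True)
```
The obvious downside is that switching adapters then means switching between exported models rather than calling `set_adapter`.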
|
https://github.com/huggingface/optimum-intel/issues/642
|
closed
|
[] | 2024-03-29T01:13:44Z
| 2024-08-03T12:34:21Z
| null |
nai-kon
|
pytorch/pytorch
| 122,916
|
MPS torch.where() is giving objectively incorrect results, leading to critical calculation errors
|
### 🐛 Describe the bug
I think I have an example of how MPS can get completely different results from CPU. Hopefully the simplicity of this example will be clear and helpful. This may be related to a previous issue noted on this forum (#84936).
```python
import numpy as np
import torch
mps_device = torch.device("mps")
## Create a numpy matrix with many zeros
np.random.seed(0)
Numpy_Test = np.random.random(200000000)
indices = np.random.choice(np.arange(Numpy_Test.size), replace=False,size=int(Numpy_Test.size * 0.6))
Numpy_Test[indices] = 0
Numpy_Matrix = Numpy_Test.reshape((20000,10000))
## Get the indices of non-zero values in the matrix, and convert these indices into a numpy array
indices = np.where(Numpy_Matrix != 0)
indices = np.asarray(indices)
## Use numpy, torch, or a torch.mps tensor to find where indices[1] == 8000
# Using np.where
np.where(indices[1] == 8000)[0]
# -> array([ 19165, 27061, 39165, ..., 79979029, 79987021, 79995171])
# Using torch.where
torch.where(torch.from_numpy(indices)[1] == 8000)[0]
# -> tensor([ 19165, 27061, 39165, ..., 79979029, 79987021, 79995171])
# Using torch.where with an MPS tensor
torch.where(torch.from_numpy(indices)[1].to(mps_device) == 8000)[0]
# -> tensor([ 19165, 27061, 39165, ..., 79979032, 79987024, 79995168], device='mps:0')
```
Notice how the first two np.where and torch.where examples give the same results, but when using the tensor converted to MPS we get different results?
If I've not made an obvious mistake, this is a clear example of how MPS completely ruins calculations, because in this case, the indexes change, and all downstream calculations become meaningless.
### Versions
torch version v0.2.1 and v0.2.0
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr
|
https://github.com/pytorch/pytorch/issues/122916
|
closed
|
[
"triaged",
"module: 64-bit",
"module: correctness (silent)",
"module: mps"
] | 2024-03-28T19:56:17Z
| 2025-03-01T16:19:53Z
| null |
aradley
|
huggingface/transformers
| 29,948
|
How to utilize all GPUs when device="balanced_low_0" in a multi-GPU setting
|
### System Info
I know that when loading the model with the "balanced_low_0" device setting, the model is loaded onto all GPUs apart from GPU 0, and GPU 0 is left to do the text inference (i.e., performing the calculations to generate a response from the LLM).
So, as per the given device parameter, my model is loaded onto GPUs 1, 2, 3 and GPU 0 is left for inference.
| ID | GPU | MEM |
| --- | --- | --- |
| 0 | 0% | 3% |
| 1 | 0% | 83% |
| 2 | 0% | 82% |
| 3 | 0% | 76% |
Question: How can I also utilize the remaining GPUs 1, 2, 3 to perform text inference, not only GPU 0?
Context: "balanced_low_0" evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the generate function for Transformers models
Reference: https://huggingface.co/docs/accelerate/en/concept_guides/big_model_inference#designing-a-device-map
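For reference, a sketch of one alternative (assuming GPU 0 has enough headroom for its share of the weights): using `device_map="balanced"` or `"auto"` instead of `"balanced_low_0"` puts shards on GPU 0 as well, so all four GPUs hold layers during generation. The model id below is a placeholder.
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-model-id",        # placeholder
    device_map="balanced",  # or "auto": spreads the layers across GPUs 0-3, including GPU 0
)
```
Keep in mind that with a layer-wise device map, generation still executes layers sequentially, so the GPUs take turns rather than computing in parallel.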
CC:
@gante @ArthurZucker and @younesbelkada
Apologies if the ticket is raised under different bucket
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
na
### Expected behavior
na
|
https://github.com/huggingface/transformers/issues/29948
|
closed
|
[] | 2024-03-28T19:54:09Z
| 2024-05-07T13:43:08Z
| null |
kmukeshreddy
|
huggingface/dataset-viewer
| 2,649
|
Should we support /filter on columns that contain SQL commands?
|
See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. Clicking on any of the 'classes' leads to an error
<img width="1209" alt="Capture d’écran 2024-03-28 à 15 11 50" src="https://github.com/huggingface/datasets-server/assets/1676121/3aaf779f-0465-429a-bafb-1a16ff5f2901">
The erroneous URL is:
https://datasets-server.huggingface.co/filter?dataset=motherduckdb%2Fduckdb-text2sql-25k&config=default&split=train&offset=0&length=100&where=schema%3D%27CREATE+TABLE+%22venue%22+%28%0A++%22venueId%22+INTEGER+NOT+NULL%2C%0A++%22venueName%22+VARCHAR%28100%29%2C%0A++%22venueInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22venueId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22author%22+%28%0A++%22authorId%22+INTEGER+NOT+NULL%2C%0A++%22authorName%22+VARCHAR%2850%29%2C%0A++%22authorPublications%22+INT%5B%5D%2C%0A++PRIMARY+KEY+%28%22authorId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22dataset%22+%28%0A++%22datasetId%22+INTEGER+NOT+NULL%2C%0A++%22datasetName%22+VARCHAR%2850%29%2C%0A++%22datasetInfo%22+STRUCT%28v+VARCHAR%2C+i+INTEGER%29%2C%0A++PRIMARY+KEY+%28%22datasetId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22journal%22+%28%0A++%22journalId%22+INTEGER+NOT+NULL%2C%0A++%22journalName%22+VARCHAR%28100%29%2C%0A++%22journalInfo%22+MAP%28INT%2C+DOUBLE%29%2C%0A++PRIMARY+KEY+%28%22journalId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22keyphrase%22+%28%0A++%22keyphraseId%22+INTEGER+NOT+NULL%2C%0A++%22keyphraseName%22+VARCHAR%2850%29%2C%0A++%22keyphraseInfo%22+VARCHAR%2850%29%5B%5D%2C%0A++PRIMARY+KEY+%28%22keyphraseId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paper%22+%28%0A++%22paperId%22+INTEGER+NOT+NULL%2C%0A++%22title%22+VARCHAR%28300%29%2C%0A++%22venueId%22+INTEGER%2C%0A++%22year%22+INTEGER%2C%0A++%22numCiting%22+INTEGER%2C%0A++%22numCitedBy%22+INTEGER%2C%0A++%22journalId%22+INTEGER%2C%0A++%22paperInfo%22+UNION%28num+INT%2C+str+VARCHAR%29%2C%0A++PRIMARY+KEY+%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22journalId%22%29+REFERENCES+%22journal%22%28%22journalId%22%29%2C%0A++FOREIGN+KEY%28%22venueId%22%29+REFERENCES+%22venue%22%28%22venueId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22cite%22+%28%0A++%22citingPaperId%22+INTEGER+NOT+NULL%2C%0A++%22citedPaperId%22+INTEGER+NOT+NULL%2C%0A++%22citeInfo%22+INT%5B%5D%2C%0A++PRIMARY+KEY+%28%22citingPaperId%22%2C%22citedPaperId%22%29%2C%0A++FOREIGN+KEY%28%22citedpaperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22citingpaperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paperDataset%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22datasetId%22+INTEGER%2C%0A++%22paperDatasetInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22datasetId%22%2C+%22paperId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paperKeyphrase%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22keyphraseId%22+INTEGER%2C%0A++%22paperKeyphraseInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22keyphraseId%22%2C%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22paperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22keyphraseId%22%29+REFERENCES+%22keyphrase%22%28%22keyphraseId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22writes%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22authorId%22+INTEGER%2C%0A++%22writesInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22paperId%22%2C%22authorId%22%29%2C%0A++FOREIGN+KEY%28%22paperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22authorId%22%29+REFERENCES+%22author%22%28%22authorId%22%29%0A%29%3B%27
```json
{"error":"Parameter 'where' contains invalid symbols"}
```
It's because the content includes some of the forbidden symbols:
https://github.com/huggingface/datasets-server/blob/4dddea2e6a476d52ba5be0c7c64fb8eca9827935/services/search/src/search/routes/filter.py#L53
Do you think it's possible to support the above query? Or should we handle the error on the Hub (not easy to do more than currently)?
|
https://github.com/huggingface/dataset-viewer/issues/2649
|
open
|
[
"question",
"api",
"P2"
] | 2024-03-28T14:14:01Z
| 2024-03-28T14:24:34Z
| null |
severo
|
pytorch/serve
| 3,051
|
Can torchserve return image data?
|
### 📚 The doc issue
I have a model that outputs the byte data of an image. How should TorchServe return this type of data?
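A minimal sketch of a custom handler that returns image bytes (the class and field names are illustrative; it assumes inference produces PIL images and that the handler's context is available by the time `postprocess` runs, as with `BaseHandler`):
```python
import io

from ts.torch_handler.base_handler import BaseHandler


class ImageOutputHandler(BaseHandler):
    def postprocess(self, data):
        # `data` is assumed to be a list of PIL images produced by the inference step
        outputs = []
        for idx, img in enumerate(data):
            buf = io.BytesIO()
            img.save(buf, format="PNG")
            # tell the frontend the response body is binary image data
            self.context.set_response_content_type(idx, "image/png")
            outputs.append(buf.getvalue())
        return outputs
```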
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/3051
|
closed
|
[
"triaged"
] | 2024-03-28T07:24:56Z
| 2024-04-02T22:53:39Z
| 1
|
pengxin233
|
huggingface/accelerate
| 2,593
|
How to use training function rather than training scripts in multi GPUs and multi node?
|
I confirmed that the multi-GPU launcher is executed from a training function using the PrepareForLaunch helper in "accelerate/examples/multigpu_remote_launcher.py".
Usually, the "accelerate launch" or "python -m torch.distributed.run" command is used for multi-node training, but is there a way to launch from a training function, similar to PrepareForLaunch?
|
https://github.com/huggingface/accelerate/issues/2593
|
closed
|
[] | 2024-03-28T07:05:50Z
| 2024-05-05T15:06:26Z
| null |
wlsghks4043
|
pytorch/TensorRT
| 2,720
|
❓ [Question] compiled ExportedProgram is slower than uncompiled model
|
## ❓ Question
<!-- Your question -->
I tried compiling a few models with `torch_tensorrt.compile(model, inputs, ir='dynamo', ...)` and each one of them was slower than the respective uncompiled model. I was wondering if I was using torch_tensorrt incorrectly.
## What you have already tried
A minimum example:
```
import torch
import torch_tensorrt
import time

model = torch.hub.load('pytorch/vision:v0.10.0', 'mobilenet_v2', pretrained=True)
model.eval().cuda()

inputs = [
    torch_tensorrt.Input(
        shape=torch.Size((1, 3, 480, 640)),
        dtype=torch.float,
    )
]
trt_model = torch_tensorrt.compile(model, inputs=inputs, ir='dynamo', truncate_long_and_double=True, enabled_precisions={torch.half}, opt_level='max')
```
The inference time was measured as below:
```
x = torch.rand((1, 3, 480, 640)).cuda() - 0.5

# warm up
for _ in range(10):
    trt_model(x)

total_time = 0
for _ in range(20):
    start = time.time()
    out = trt_model(x)
    total_time += time.time() - start
print(total_time / 20)
```
On average, the uncompiled model's inference time is 4 ms and the compiled model's is 9 ms.
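As a side note, CUDA kernels launch asynchronously, so a timing sketch that synchronizes the GPU before reading the clock may give a fairer comparison than the loop above:
```python
import time
import torch


def timed(model, x, iters=20):
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    return (time.time() - start) / iters
```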
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.2.1
- CPU Architecture: x86_64
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip install torch torch_tensorrt
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.11
- CUDA version: 12.3
- GPU models and configuration: NVIDIA GeForce RTX 4050
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2720
|
open
|
[
"question"
] | 2024-03-28T06:08:21Z
| 2024-04-02T22:02:01Z
| null |
Qi-Zha0
|
huggingface/alignment-handbook
| 144
|
Can we please add the option to work with a tokenized dataset, especially for the CPT task.
|
Since we have the CPT task now, it would be nice to have the ability to feed a tokenized and packed dataset directly.
|
https://github.com/huggingface/alignment-handbook/issues/144
|
open
|
[] | 2024-03-27T18:31:58Z
| 2025-02-27T16:23:06Z
| 1
|
shamanez
|
huggingface/transformers.js
| 668
|
Is it possible to run a music / sounds generation model?
|
### Question
I'd love to create a browser-based music generation tool, or one that can turn text into sound effects. Is that supported?
I guess my more general question is: can Transformers.js run pretty much any .onnx I throw at it, or does each model require some level of implementation before it can be used?
|
https://github.com/huggingface/transformers.js/issues/668
|
closed
|
[
"question"
] | 2024-03-27T18:22:31Z
| 2024-05-13T21:17:54Z
| null |
flatsiedatsie
|
huggingface/optimum-quanto
| 139
|
Dequantizing tensors using quanto
|
I noticed the quantized models have these 4 additional entries for every weight in the original, e.g.:
```
model.layers.0.mlp.down_proj.activation_qtype,
model.layers.0.mlp.down_proj.input_scale,
model.layers.0.mlp.down_proj.output_scale,
model.layers.0.mlp.down_proj.weight_qtype
```
I guess `qtype` refers to the quantized datatype, and `scale` probably refers to the scaling factor used during quantization? But what is the difference between `input_scale` and `output_scale`? Is it possible to recreate the exact original tensor using these values and the quantized weight?
If yes, then what would the formula be for the dequantization?
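For reference, a sketch of the standard affine and symmetric dequantization formulas (my reading of what the stored tensors mean, not a quanto API; dequantization only recovers an approximation, since the rounding applied during quantization is lossy):
```python
import torch


def dequantize_affine(q, scale, zero_point):
    # uint storage: w ≈ (q - zero_point) * scale
    return (q.to(torch.float32) - zero_point) * scale


def dequantize_symmetric(q, scale):
    # int storage: w ≈ q * scale
    return q.to(torch.float32) * scale
```
My guess is that `input_scale` / `output_scale` relate to activation quantization around the module, while the weight carries its own scale, but that is something the maintainers would need to confirm.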
|
https://github.com/huggingface/optimum-quanto/issues/139
|
closed
|
[
"question"
] | 2024-03-27T18:00:34Z
| 2024-04-11T09:22:29Z
| null |
raunaks13
|
huggingface/safetensors
| 458
|
Safetensors uses excessive RAM when saving files
|
Safetensors uses around twice the RAM that `torch.save` does:
```python
import resource
import torch
from safetensors.torch import save_file
torch.save({'tensor': torch.randn((500000000))}, 'test.torch')
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
save_file({'tensor': torch.randn((500000000))}, 'test.safetensors')
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
```
Output:
```
2308324
4261528
```
I believe this is because safetensors loads the full tensor in the `prepare` function instead of streaming it. Is it possible to stream the writes instead? For instance, having a `prepare_metadata` function that generates the metadata first, writing that first, then each individual tensor.
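A rough sketch of the streaming idea (the `prepare_metadata`-style split is hypothetical, and the layout below is only my understanding of the safetensors format: an 8-byte little-endian header length, a JSON header, then the raw tensor bytes):
```python
import json
import struct


def save_streaming(tensors, path):
    # first pass: build the header without touching the tensor data
    offset, meta = 0, {}
    for name, t in tensors.items():
        nbytes = t.numel() * t.element_size()
        meta[name] = {"dtype": "F32", "shape": list(t.shape),  # dtype hardcoded for the sketch
                      "data_offsets": [offset, offset + nbytes]}
        offset += nbytes
    header = json.dumps(meta).encode("utf-8")

    # second pass: write the header, then each tensor one at a time
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header)))
        f.write(header)
        for t in tensors.values():
            f.write(t.contiguous().numpy().tobytes())
```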
|
https://github.com/huggingface/safetensors/issues/458
|
closed
|
[
"Stale"
] | 2024-03-27T12:11:38Z
| 2024-05-02T01:47:32Z
| 1
|
sheepymeh
|
pytorch/text
| 2,249
|
Why does torchtext need to reinstall torch?
|
Hi team, I am trying to install torchtext with torch 2.2.1+cu121 already installed. But once I run `pip install torchtext`, pip installs the CPU version of torch 2.2.1 for me. Is there any way to avoid this?
The output log:
```bash
Successfully installed torch-2.2.2+cu121 torchaudio-2.2.2+cu121 torchvision-0.17.2+cu121
PS :/scratch/github/scgpt$ pip uninstall torchtext
Found existing installation: torchtext 0.17.1
Uninstalling torchtext-0.17.1:
Would remove:
/anaconda/envs/scgpt/lib/python3.11/site-packages/torchtext-0.17.1.dist-info/*
/anaconda/envs/scgpt/lib/python3.11/site-packages/torchtext/*
Proceed (Y/n)?
Successfully uninstalled torchtext-0.17.1
PS :/scratch/github/scgpt$ pip install -U torchtext --no-cache
Collecting torchtext
Downloading torchtext-0.17.1-cp311-cp311-manylinux1_x86_64.whl.metadata (7.6 kB)
Requirement already satisfied: tqdm in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (4.66.2)
Requirement already satisfied: requests in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (2.28.1)
Collecting torch==2.2.1 (from torchtext)
Downloading torch-2.2.1-cp311-cp311-manylinux1_x86_64.whl.metadata (26 kB)
Requirement already satisfied: numpy in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (1.26.3)
Requirement already satisfied: torchdata==0.7.1 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchtext) (0.7.1)
Requirement already satisfied: filelock in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (3.9.0)
Requirement already satisfied: typing-extensions>=4.8.0 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (4.8.0)
Requirement already satisfied: sympy in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (1.12)
Requirement already satisfied: networkx in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (3.2.1)
Requirement already satisfied: jinja2 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (3.1.2)
Requirement already satisfied: fsspec in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (2024.2.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.0.106)
Requirement already satisfied: nvidia-nccl-cu12==2.19.3 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (2.19.3)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (12.1.105)
Requirement already satisfied: triton==2.2.0 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torch==2.2.1->torchtext) (2.2.0)
Requirement already satisfied: urllib3>=1.25 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from torchdata==0.7.1->torchtext) (1.26.13)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch==2.2.1->torchtext) (12.4.99)
Requirement already satisfied: charset-normalizer<3,>=2 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from requests->torchtext) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from requests->torchtext) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from requests->torchtext) (2022.12.7)
Requirement already satisfied: MarkupSafe>=2.0 in /anaconda/envs/scgpt/lib/python3.11/site-packages (from jinja2->torch=
|
https://github.com/pytorch/text/issues/2249
|
open
|
[] | 2024-03-27T11:19:41Z
| 2024-03-27T11:23:04Z
| 0
|
WhenMelancholy
|
huggingface/transformers
| 29,897
|
How to finetune a language model after extent token embeddings?
|
If I add some new tokens to a language model, I get some randomly initialized weights in the embeddings and lm_head. Is there any official way to train only these new weights? Or is all I can do adding hooks to the tensors that zero the gradients for the weights I do not want to change?
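For what it's worth, a sketch of the hook-based approach mentioned above (as far as I know there is no dedicated "train only the new rows" switch; `old_vocab_size` and the model object are placeholders):
```python
old_vocab_size = 32000  # placeholder: vocab size before add_tokens / resize_token_embeddings

embedding_weight = model.get_input_embeddings().weight

def zero_old_rows(grad):
    grad = grad.clone()
    grad[:old_vocab_size] = 0  # freeze gradients for the original token rows
    return grad

embedding_weight.register_hook(zero_old_rows)
# if the lm_head is not tied to the embeddings, register a similar hook on
# model.get_output_embeddings().weight
```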
|
https://github.com/huggingface/transformers/issues/29897
|
closed
|
[] | 2024-03-27T08:20:24Z
| 2024-03-27T15:01:04Z
| null |
bluewanderer
|
pytorch/TensorRT
| 2,718
|
❓ [Question] Can TensorRT load and run torch_tensorrt models directly?
|
Can TensorRT load and run torch_tensorrt models directly? I want to export my pytorch model and deploy it with TensorRT.
|
https://github.com/pytorch/TensorRT/issues/2718
|
closed
|
[
"question"
] | 2024-03-27T07:46:57Z
| 2024-06-07T01:10:43Z
| null |
theNefelibata
|
huggingface/text-generation-inference
| 1,677
|
how to get the latest version number?
|
Following the documentation, I use "docker run ghcr.io/huggingface/text-generation-inference:latest" to run the latest version of TGI. But in a production environment, I need to pin the version number. I can't find any webpage similar to [docker hub](https://hub.docker.com/r/pytorch/manylinux-cuda102). So how can I use the docker command line to get the list of available versions of huggingface/text-generation-inference?
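One way I found to list the tags programmatically (a sketch based on my understanding of the OCI distribution API that ghcr.io exposes for public images, not something from the TGI docs):
```python
import requests

repo = "huggingface/text-generation-inference"

# anonymous pull token for a public image
token = requests.get(
    "https://ghcr.io/token", params={"scope": f"repository:{repo}:pull"}
).json()["token"]

tags = requests.get(
    f"https://ghcr.io/v2/{repo}/tags/list",
    headers={"Authorization": f"Bearer {token}"},
).json()["tags"]
print(tags)
```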
|
https://github.com/huggingface/text-generation-inference/issues/1677
|
closed
|
[] | 2024-03-27T05:43:49Z
| 2024-03-29T02:30:10Z
| null |
fancyerii
|
pytorch/pytorch
| 122,756
|
How to reduce memory usage for large matrix calculations?
|
`A_ = torch.sigmoid(torch.matmul(x, x.t()))`
x holds the features of hundreds of thousands of nodes; its shape is 700,000 x 8, where 8 is the number of features extracted per node.
The calculation requires several terabytes of memory. How can I reduce the memory overhead?
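A sketch of a blockwise approach (my suggestion, assuming each block can be consumed, reduced, or written to disk immediately so the full 700,000 x 700,000 matrix is never materialized):
```python
import torch


def sigmoid_gram_blocks(x, block_size=10_000):
    # yields sigmoid(x[start:start+block_size] @ x.T) one row block at a time
    for start in range(0, x.shape[0], block_size):
        block = torch.sigmoid(x[start:start + block_size] @ x.t())
        yield start, block
```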
|
https://github.com/pytorch/pytorch/issues/122756
|
open
|
[
"triaged"
] | 2024-03-27T02:06:03Z
| 2024-04-01T15:59:16Z
| null |
bowensuuu
|
pytorch/serve
| 3,045
|
gRPC Model Metadata using Open Inference Protocol
|
### 🐛 Describe the bug
Consider a system where a feature service fetches model metadata that has information on which features to fetch before finally inferring from the model. In order to fetch this metadata regarding inputs and outputs, I am trying to use the recently added [Open inference protocol](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/resources/proto/open_inference_grpc.proto).
When I query the metadata endpoint with grpcurl, it shows me the name and version of the model.
```
grpcurl -plaintext -d '{"name": "toy-ranker"}' -proto serve/frontend/server/src/main/resources/proto/open_inference_grpc.proto localhost:79 org.pytorch.serve.grpc.openinference.GRPCInferenceService/ModelMetadata
{
"name": "toy-ranker",
"versions": [
"2024-03-26-15:33"
]
}
```
With a simple curl, the output shows that the REST API does not add anything [model-specific](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/java/org/pytorch/serve/http/api/rest/OpenInferenceProtocolRequestHandler.java#L58-L66) to it.
```
$ curl http://localhost:80/v2
{
"name": "Torchserve",
"version": "0.10.0",
"extenstion": [
"kserve",
"kubeflow"
]
}
```
I was trying to understand where this metadata is set so I can populate it accordingly, but I could not find a way for it to set [inputs and outputs](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/java/org/pytorch/serve/grpcimpl/OpenInferenceProtocolImpl.java#L155-L180).
Do you know how, if at all, this metadata can be set in TorchServe?
### Error logs
n/a
### Installation instructions
Dockerfile on top of latest torchserve image
```
from pytorch/torchserve-nightly:latest-gpu
ENV TS_OPEN_INFERENCE_PROTOCOL oip
```
### Model Packaing
mnist model can be used, independent of model type.
### config.properties
```
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
enable_metrics_api=true
model_metrics_auto_detect=true
metrics_mode=prometheus
number_of_netty_threads=32
job_queue_size=1000
enable_envvars_config=true
model_store=/home/model-server/model-store
workflow_store=/home/model-server/wf-store
load_models=all
```
### Versions
------------------------------------------------------------------------------------------
Environment headers
------------------------------------------------------------------------------------------
Torchserve branch:
**Warning: torchserve not installed ..
**Warning: torch-model-archiver not installed ..
Python version: 3.11 (64-bit runtime)
Python executable: /home/hmeena/.pyenv/versions/airflow/bin/python
Versions of relevant python libraries:
requests==2.31.0
**Warning: torch not present ..
**Warning: torchtext not present ..
**Warning: torchvision not present ..
**Warning: torchaudio not present ..
Java Version:
OS: CentOS Linux release 7.5.1804 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
Clang version: 3.4.2 (tags/RELEASE_34/dot2-final)
CMake version: N/A
Environment:
library_path (LD_/DYLD_): :/search/dist/bin:/search/dist/bin
### Repro instructions
[Model from old issue ](https://github.com/pytorch/serve/issues/2951#issuecomment-1984168898)i created can be used.
### Possible Solution
Accept an input metadata file that can be exposed on both the gRPC and [REST](https://github.com/pytorch/serve/blob/master/frontend/server/src/main/java/org/pytorch/serve/http/api/rest/OpenInferenceProtocolRequestHandler.java#L58-L66) metadata endpoints. One example is along the lines of the Seldon metadata endpoint, which exposes [this information](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/metadata/models/init-metadata/Model.py#L15-L28).
|
https://github.com/pytorch/serve/issues/3045
|
open
|
[
"OIP"
] | 2024-03-26T20:52:16Z
| 2024-04-02T22:54:39Z
| 1
|
harshita-meena
|
pytorch/xla
| 6,822
|
Loading large model (e.g. LLMs)
|
## ❓ Questions and Help
Hi, I'm trying to load large models on a TPU v4 pod. I saw the discussions in the issues about torchdistX and meta devices. I'm wondering whether there is any good or recommended solution now.
I am having trouble installing torchdistX with torch/torch_xla 2.2.0, and the LLaMA model I'm loading doesn't have reset_params either. There are also some discussions about reset_params in the issues.
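For context, a sketch of plain meta-device initialization on recent torch without torchdistX (the `nn.Linear` stands in for the real LLaMA constructor, and `reset_params` is not needed for this path):
```python
import torch
import torch.nn as nn

with torch.device("meta"):
    model = nn.Linear(8192, 8192)  # placeholder for the real model constructor

model = model.to_empty(device="cpu")  # allocate real (uninitialized) storage
# ...then load the checkpoint shard by shard, e.g.
# model.load_state_dict(state_dict, assign=True)
```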
|
https://github.com/pytorch/xla/issues/6822
|
closed
|
[
"question",
"dataloading"
] | 2024-03-26T18:20:16Z
| 2025-04-18T12:43:08Z
| null |
tsb0601
|
huggingface/optimum-quanto
| 134
|
Should quanto use int dtype in AffineQuantizer instead of uint?
|
According to the code in https://github.com/huggingface/quanto/blob/main/quanto/tensor/qbitstensor.py#L34, I find that quanto uses a uint dtype to store the quantized values in the affine quantizer, while in the symmetric quantizer it is an int dtype:
https://github.com/huggingface/quanto/blob/main/quanto/tensor/qtensor.py#L62.
Taking hardware into consideration, if we quantize both weights and activations to int types, will it save GPU or NPU cost, since this only requires integer-type MAC arrays?
|
https://github.com/huggingface/optimum-quanto/issues/134
|
closed
|
[
"question"
] | 2024-03-26T14:21:25Z
| 2024-04-11T09:25:09Z
| null |
shuokay
|
huggingface/hub-docs
| 1,257
|
Add section about deprecation of script-based datasets?
|
Asked here: https://github.com/huggingface/datasets-server/issues/2385#issuecomment-2017984722
> Perhaps a little bit of suggestion from me is to include a disclaimer in the docs so that others are aware that developing a custom script is not supported.
It would also help answer the discussions + we could link in the error message directly.
---
On the other hand, maybe we just want to deprecate it sooner than later, and not spend too much time on this.
|
https://github.com/huggingface/hub-docs/issues/1257
|
open
|
[
"question"
] | 2024-03-26T13:20:27Z
| 2024-03-26T17:49:50Z
| null |
severo
|
pytorch/xla
| 6,820
|
Help RoPE fusion
|
## ❓ Questions and Help
I use the pytorch / torch_xla / openxla toolchain, and I want to fuse the RoPE operator into a custom operator so that the hardware can execute it directly. At which layer do you think it is better to do this? In an XLA pass? By defining a RoPE operator at the Python layer? Or does the existing framework already solve this problem?
|
https://github.com/pytorch/xla/issues/6820
|
closed
|
[
"question"
] | 2024-03-26T11:54:24Z
| 2025-04-18T12:45:22Z
| null |
ckfgihub
|
huggingface/candle
| 1,941
|
[help] how to update a portion of a long tensor
|
I'm aware of the closed issue (#1163) and understand that Var is mutable and Tensor is immutable by design. But I find it hard to implement some logic if it's impossible to update a portion of a Tensor.
For example, how can I generate a pairwise combination from two 2d tensors:
```rust
let a = Tensor::new(&[[1.0], [2.0]], &device)?;
let b = Tensor::new(&[[3.0], [4.0]], &device)?;
// how to generate a tensor that is the pair combination of the two?
// [[1, 3], [1, 4], [2, 3], [2, 4]]
let c = Tensor::zeros(&[2, 2, 1], DType::F32, &device)?;
for i in 0..a.dim(0)? {
for j in 0..b.dim(0)? {
// won't work!
// here we cannot set the content of the tensor via `set`
c.i((i, j)).set(Tensor::cat(&[&a, &b], 0)?);
}
}
```
|
https://github.com/huggingface/candle/issues/1941
|
closed
|
[] | 2024-03-26T11:47:56Z
| 2024-04-07T15:42:45Z
| null |
michael8090
|
huggingface/optimum
| 1,776
|
How to convert a model(tf_model.h5) with tokenizer folder to the onnx format
|
### Feature request
I have trained the TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. The model is stored inside the folder in a **.h5** format - **tf_model.h5**
Here is the folder structure.

I want to convert the model to the .onnx format.
Should I convert the entire MODEL_WITH_TOKENIZER folder to .onnx, or only the tf_model.h5 file?
What are the steps?
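One path that should work (a sketch, not an official optimum recipe: the task class, input shapes, and ONNX settings below are assumptions to adapt, and `from_tf=True` needs TensorFlow installed) is to load the whole MODEL_WITH_TOKENIZER folder into a PyTorch model and export that to ONNX:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

folder = "MODEL_WITH_TOKENIZER"
tokenizer = AutoTokenizer.from_pretrained(folder)
model = AutoModelForSequenceClassification.from_pretrained(folder, from_tf=True)  # reads tf_model.h5

inputs = tokenizer("example input", return_tensors="pt")
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"}, "attention_mask": {0: "batch", 1: "seq"}},
)
```
In this sketch, the conversion points at the whole folder (it needs config.json alongside tf_model.h5), and the ONNX file is produced from the loaded model rather than from the .h5 file directly.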
### Motivation
(Same content as the feature request above.)
### Your contribution
(Same content as the feature request above.)
|
https://github.com/huggingface/optimum/issues/1776
|
open
|
[
"onnx"
] | 2024-03-26T10:48:02Z
| 2024-10-14T13:35:13Z
| null |
pradeepdev-1995
|
huggingface/alignment-handbook
| 142
|
Efficient dialog data format for KTO training
|
I have dialogs in the shareGPT format (see below) and for each `gpt` turn a label (thumbs up or thumbs down). But for KTO training, I have only seen datasets with the columns `prompt`, `completion` and `label` (see e.g. https://huggingface.co/datasets/trl-lib/kto-mix-14k).
Do I need to unwind my shareGPT dialogs (see below) for KTO training, or is there some more efficient format I can use?
How should the dialog history be encoded in the `prompt` column (see below)?
shareGPT-Format:
```
{"conversations":[
{"from":"system","value":"You are a friendly assistant for ....\n"},
{"from":"human","value":"Hello, I am Sam and ..."},
{"from":"gpt","value":"Welcome Sam, so you ...."},
{"from":"human","value":"Yes, but ...."},
{"from":"gpt","value":"Then ..."}
]}
```
Transformed to KTO, with `prompt` column as close as possible to https://huggingface.co/datasets/trl-lib/kto-mix-14k:
```
prompt, completion, label
[ { "content": "You are a friendly assistant for ....\n", "role": "system" }, { "content": "Hello, I am Sam and ...", "role": "human" }], {"role":"gpt","content":"Welcome Sam, so you ...."}, true
[ { "content": "You are a friendly assistant for ....\n", "role": "system" }, { "content": "Hello, I am Sam and ...", "role": "human" }, {"role":"gpt","content":"Welcome Sam, so you ...."}, {"role":"human","content":"Yes, but ...."}], {"role":"gpt","content":"Then ..."}, false
```
|
https://github.com/huggingface/alignment-handbook/issues/142
|
open
|
[] | 2024-03-26T10:29:38Z
| 2024-03-26T10:30:08Z
| 0
|
DavidFarago
|
huggingface/transformers.js
| 664
|
How to confirm if webgpu actually working in the backend with inferencing
|
### Question
Hi Team,
Thanks for the awesome library.
Recently I have been experimenting with running a background-removal model on the client side using WebGPU. I came across this solution: https://huggingface.co/spaces/Xenova/remove-background-webgpu.
I tried to replicate the same locally using your V3 branch.
The way I have used it is as below.
```
const model = await AutoModel.from_pretrained('briaai/RMBG-1.4', {
// Do not require config.json to be present in the repository
config: { model_type: 'custom' },
device: 'webgpu',
dtype: 'fp32'
})
```
I can see a significant improvement when enabling `device: 'webgpu'` instead of wasm.
Question 1:
How can I confirm that WebGPU is actually being used in the backend during inference? In both cases (with and without WebGPU) the `ort-wasm-simd.jsep.wasm` file is getting loaded. Why are we not loading `ort.webgpu.min`?
SS

Question 2:
It would be helpful if you could share the repo for `https://huggingface.co/spaces/Xenova/remove-background-webgpu`, as the code on Hugging Face is bundled.
Thanks in advance!!
|
https://github.com/huggingface/transformers.js/issues/664
|
open
|
[
"question"
] | 2024-03-26T08:17:05Z
| 2024-07-24T06:13:50Z
| null |
abiswas529
|
pytorch/serve
| 3,042
|
Custom class handler missing BaseHandler
|
### 📚 The doc issue
I believe the docs for a custom class-level entry point are missing the base class `BaseHandler`. If I'm mistaken, please close this issue.
Link: https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handler-with-class-level-entry-point
### Suggest a potential alternative/fix
Replace `class ModelHandler(object):` with `class ModelHandler(BaseHandler):`
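For illustration, a sketch of what the corrected example skeleton would look like (import path as used by the torchserve `ts` package):
```python
from ts.torch_handler.base_handler import BaseHandler


class ModelHandler(BaseHandler):
    """Custom handler with a class-level entry point."""

    def initialize(self, context):
        super().initialize(context)  # BaseHandler loads the model and stores the context
```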
|
https://github.com/pytorch/serve/issues/3042
|
open
|
[
"documentation"
] | 2024-03-26T06:54:31Z
| 2024-03-26T20:41:02Z
| 0
|
swstack
|
huggingface/dataset-viewer
| 2,630
|
Take spawning.io opted out URLs into account in responses?
|
In particular, for images (assets / cached-assets).
Raised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR
|
https://github.com/huggingface/dataset-viewer/issues/2630
|
open
|
[
"question",
"P2"
] | 2024-03-25T11:49:49Z
| 2024-03-25T11:49:58Z
| null |
severo
|
huggingface/datasets
| 6,756
|
Support SQLite files?
|
### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the detail of splits and configs should be defined in the README YAML, or use the same format as for ZIP files: `Iris.sqlite::Iris`.
See dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite
Note: should we also support DuckDB files?
|
https://github.com/huggingface/datasets/issues/6756
|
closed
|
[
"enhancement"
] | 2024-03-25T11:48:05Z
| 2024-03-26T16:09:32Z
| 3
|
severo
|
huggingface/dataset-viewer
| 2,629
|
Detect when a new commit only changes the dataset card?
|
Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results.
asked here (private slack channel): https://huggingface.slack.com/archives/C04N96UGUFM/p1701862863691809
> Sometimes I don't modify the dataset cards of datasets that have too many configs because I don't want to break the viewer for too long. I think we can detect when the change is only about the content dataset card and the dataset itself didn't change ?
|
https://github.com/huggingface/dataset-viewer/issues/2629
|
closed
|
[
"question",
"improvement / optimization",
"P2"
] | 2024-03-25T10:57:36Z
| 2024-06-19T16:02:33Z
| null |
severo
|
huggingface/dataset-viewer
| 2,627
|
Replace our custom "stale bot" action with the GitHub's one?
|
See `actions/stale@v5`
```yaml
name: Mark inactive issues as stale
on:
  schedule:
    - cron: "30 1 * * *"
jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          days-before-issue-stale: 30
          days-before-issue-close: -1
          stale-issue-label: "stale"
          stale-issue-message: "This issue is stale because it has been open for 30 days with no activity."
          close-issue-message: "This issue was closed because it has been inactive for X days since being marked as stale."
          days-before-pr-stale: -1
          days-before-pr-close: -1
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```
from https://huggingface.slack.com/archives/C493XH5FX/p1701942940388579?thread_ts=1701932787.319359&cid=C493XH5FX
|
https://github.com/huggingface/dataset-viewer/issues/2627
|
open
|
[
"question",
"ci",
"P2"
] | 2024-03-25T10:48:47Z
| 2024-03-25T10:49:02Z
| null |
severo
|
pytorch/examples
| 1,242
|
Pytorch is insufficiently opinionated
|
### 🐛 Describe the bug
## Context
Machine learning models can be trained on secret, synthetic, or biased data to create seemingly authoritative probability estimates used for abusive purposes in legal contexts. In Jessica Logan's case, her 911 call was used as "evidence" [(when interpreted by a poorly trained and overly confident detective)](https://www.propublica.org/article/911-call-analysis-jessica-logan-evidence) that she had killed her baby.
As an example, an affiliate of Tracy Harpster [1] currently conspires to create an AI startup (Deceptio AI) to launder voodoo (via hidden datasets) to increase the odds of a wrongful conviction based on 911 call audio. Deceptio AI's website is excluded from the Internet Archive; this is a clear indication that Deceptio AI believes itself better off hidden.
This practice is [spreading throughout the law enforcement system](https://www.propublica.org/article/911-call-analysis-fbi-police-courts) faster than judges and investigators grounded in reality can possibly counter it.
The emergence of a webpage where a LEO can anonymously upload an audio clip and receive a "guilty" or "not guilty" certificate will crystallize the cost of this issue.
From [3]:
> A couple of years ago, he and his two business partners, including one who has decades of experience in statement analysis, decided to join forces and create software that essentially has the brain of a veteran analyst.
>
> “We've come up with an AI now that can detect deception in a person's written or spoken words,” Carson said.
>
> In simple terms, Carson said a client of the business would go on their website, Deceptio.AI, and companies that purchase the software can input statements and the program determines how truthful the statement is and why it may or may not be the whole truth.
>
> “Then we're going to simply click analyze statement and then what the section does is it gives you a probability of truthfulness,” Carson said when demonstrating how Deceptio works. “Now, what we see is anything that falls 85% and under means it's a highly deceptive statement.”
From [4]:
> He designed the platform for widespread usage and said it requires no training. The co-founders have bootstrapped the startup over the past two years and recently opened their first funding round.
>
> “We’re seeking the right investors,” Carson explained. “Those are the ones that understand the goal and vision of what a true AI tool like this will mean long-term. We’ve turned down a couple already.”
>
> Carson said the company has also collected and stored a massive amount of human behavioral data, called [pattern of life analysis](https://cambridge-intelligence.com/pattern-of-life-analysis/#:~:text=Pattern%20of%20life%20analysis%20is,large%20quantities%20of%20observed%20data.). He said Deceptio’s database “literally maps” deceptiveness in the human psyche.
>
> He noted that Cathie Wood, CEO of St. Petersburg-based ARK Invest, frequently mentions [the value of AI entrepreneurs amassing proprietary data](https://stpetecatalyst.com/generative-ai-takes-center-stage-at-synapse-summit/). Carson called Deceptio’s information, which does not include personal information, “exceptionally proprietary.”
>
> “To our knowledge, there isn’t anyone else on the planet doing what we’re doing,” he added. “Let alone amassing the type of life intelligence data we’re collecting.”
[1] Statement Analysis by Mark McClish and Tracy Harpster: https://web.archive.org/web/20231004064240/https://www.statementanalysis.com/bio/
[2] Deceptio AI: https://www.deceptio.ai/
[3] https://web.archive.org/web/20240325092556/https://baynews9.com/fl/tampa/news/2023/09/14/deceptio-ai-detects-lies
[4] https://web.archive.org/web/20231002083318/https://stpetecatalyst.com/st-pete-startup-uses-ai-to-detect-lies/
If you believe similarly useless discriminators and their corporate reproductive organs will not be created for other abusive purposes, e.g. by banks, landlords, insurance firms, school administrators, university regents, forensic investigators, farmers, miners, doctors, or nurses, you are simply not paying attention.
* Pytorch version: 2.2.1
* Operating System and version: Ubuntu 20.04
## Your Environment
* Installed using source? [yes/no]: yes
* Are you planning to deploy it using docker container? [yes/no]: no
* Is it a CPU or GPU environment?: Both
* Which example are you using: all
* Link to code or data to repro [if any]:
## Expected Behavior
PyTorch should prohibit users from creating discriminators or generators intended for use on the real world which are trained with data not representative of the real world.
## Current Behavior
Anyone with an NVIDIA GPU can download PyTorch and train a model on fake datasets, then re-sell access to the model as an "investigative service."
## Possible Solution
Destroy PyTorch.
## Steps to Reproduce
Deceptio.AI
1. https://www.propublica.org/article/911-call-analy
|
https://github.com/pytorch/examples/issues/1242
|
closed
|
[] | 2024-03-25T09:13:15Z
| 2024-03-26T07:17:14Z
| 0
|
ghost
|
huggingface/candle-paged-attention
| 1
|
How to use candle-paged-attention in candle models?
|
Could you provide an example of candle-paged-attention for actual usage in candle models (candle-examples)? Is this crate ready to be used in candle, i.e., has it been tested in end-to-end model inference? I'm a little bit confused about the construction of block_tables and context_lens.
|
https://github.com/huggingface/candle-paged-attention/issues/1
|
open
|
[] | 2024-03-25T09:09:24Z
| 2024-03-25T12:07:13Z
| null |
guoqingbao
|
pytorch/examples
| 1,241
|
RuntimeError in Partialconv-master
|
## 📚 Documentation
I am getting this error in the signal_handling.py file
<img width="426" alt="image" src="https://github.com/pytorch/examples/assets/126889261/0881dd8e-abb2-467f-bab4-818f3f856418">
that is in miniconda3/lib/python3.12/site-packages/torch/utils/data/_utils/signal_handling.py
How can I fix this?
|
https://github.com/pytorch/examples/issues/1241
|
open
|
[] | 2024-03-24T21:37:03Z
| 2024-03-26T07:17:49Z
| 1
|
shaSaaliha
|
huggingface/optimum
| 1,769
|
Accuracy change with BetterTransformer
|
When transforming a model into a BetterTransformer model, I'm seeing an accuracy drop.
The output scores change considerably (up to 1-2 decimal points of precision).
**Is an accuracy change expected when switching to BetterTransformer?** I'm not performing any ORT compilation or quantization on the model.
From what I know, FlashAttention is not supposed to change accuracy since it is an exact attention algorithm, hence I'm not sure what is causing this change in scores.
Steps to reproduce
```
from transformers import AutoModelForSequenceClassification , AutoTokenizer
from optimum.bettertransformer import BetterTransformer
tokenizer=AutoTokenizer.from_pretrained("BAAI/bge-reranker-large")
original_model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-large").to('cuda:0')
transformed_model = BetterTransformer.transform(original_model, keep_original_model=True).to('cuda:0')
sentences_batch=[['do you like fox cookies', 'fox big brown fox']]
inputs = tokenizer(sentences_batch,padding=True,truncation=True,return_tensors="pt",max_length=512,).to('cuda:0')
better_transformer_scores = transformed_model(**inputs, return_dict=True).logits.view(-1).float()
print(f"BetterTransfomer output: {better_transformer_scores.detach().cpu().numpy().tolist()}")
vanilla_model_scores = original_model(**inputs, return_dict=True).logits.view(-1).float()
print(f"Vanilla model output :{vanilla_model_scores.detach().cpu().numpy().tolist()}")
```
Output
```
BetterTransfomer output: [-7.378745079040527]
Vanilla model output :[-7.3596720695495605]
```
##### System state:
* Package version:
* transformers == 4.39.1
* optimum == 1.17.1
* torch == 2.2.1
* Instance Type : AWS p3.2xlarge ( GPU V100) . (Tied it on A100 as well )
* CUDA Version: 12.2
* GPU Driver Version: 535.104.12
|
https://github.com/huggingface/optimum/issues/1769
|
closed
|
[
"bettertransformer",
"Stale"
] | 2024-03-24T01:28:15Z
| 2025-01-15T02:01:10Z
| 7
|
kapilsingh93
|
pytorch/PiPPy
| 988
|
How to use PiPPy for large models that won't fit on one GPU
|
Hello, I was wondering if someone could provide an example or some guidance on how to use PiPPy for models that will not fit on one GPU. I want to run pipeline parallelism with Llama 2 70B on a node with multiple A100 GPUs. However, if I run the pippy_llama.py example, every process just tries to load the whole model onto the GPU corresponding to its local rank, which causes a CUDA out-of-memory error.
|
https://github.com/pytorch/PiPPy/issues/988
|
open
|
[
"high-pri"
] | 2024-03-23T15:49:18Z
| 2024-03-30T00:08:01Z
| null |
aspiridon0v
|
huggingface/optimum-quanto
| 129
|
Performance of quanto quants vs bnb, AWQ, GPTQ, GGML ?
|
I was wondering if there were any comparisons done looking at the speed and ppl of `quanto` quantizations with respect to the other quantization techniques out there.
|
https://github.com/huggingface/optimum-quanto/issues/129
|
closed
|
[
"question"
] | 2024-03-23T11:37:33Z
| 2024-04-11T09:22:47Z
| null |
nnethercott
|
huggingface/transformers
| 29,826
|
How to convert pretrained hugging face model to .pt for deploy?
|
I'm attempting to convert this [model](https://huggingface.co/UrukHan/wav2vec2-russian) to .pt format. It's working fine for me, so I don't want to fine-tune it. How can I export it to .pt and run inference, for example in Flask?
I tried using this to convert to .pt:
```
from transformers import AutoConfig, AutoProcessor, AutoModelForCTC, AutoTokenizer, Wav2Vec2Processor
import librosa
import torch
# Define the model name
model_name = "UrukHan/wav2vec2-russian"
# Load the model and tokenizer
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForCTC.from_pretrained(model_name, config=config)
processor = Wav2Vec2Processor.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Save the model as a .pt file
torch.save(model.state_dict(), "model.pt")
# Save the tokenizer as well if needed
tokenizer.save_pretrained("model-tokenizer")
```
but unfortunately it's not running inference and not loading the model from the path:
```
model = AutoModelForCTC.from_pretrained("model.pt")
processor = AutoProcessor.from_pretrained("model.pt")

# Perform inference with the model
FILE = 'here is wav.wav'
audio, _ = librosa.load(FILE, sr=16000)
audio = list(audio)

def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch, device="cpu").unsqueeze(0)  # , device="cuda"
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch = processor.batch_decode(pred_ids)[0]
    return batch

map_to_result(audio)
print(map_to_result(audio))
model.eval()
```
And encountered an error:
`model.pt is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'`
What am I doing wrong?
If you can provide guidelines on how to convert the model to .pt and run it, it would be appreciated! Thanks in advance!
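A sketch of the loading pattern that matches how `from_pretrained` works (it expects a directory produced by `save_pretrained`, or a state_dict loaded into an already-constructed model, not a bare `model.pt` path):
```python
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_name = "UrukHan/wav2vec2-russian"
model = AutoModelForCTC.from_pretrained(model_name)
processor = Wav2Vec2Processor.from_pretrained(model_name)

# Option 1: save and reload the whole checkpoint directory
model.save_pretrained("wav2vec2-russian-local")
processor.save_pretrained("wav2vec2-russian-local")
model = AutoModelForCTC.from_pretrained("wav2vec2-russian-local")

# Option 2: keep a .pt state_dict, but load it into a constructed model
torch.save(model.state_dict(), "model.pt")
model.load_state_dict(torch.load("model.pt", map_location="cpu"))
model.eval()
```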
|
https://github.com/huggingface/transformers/issues/29826
|
closed
|
[] | 2024-03-23T10:09:16Z
| 2025-10-13T23:08:57Z
| null |
vonexel
|
huggingface/datasets
| 6,750
|
`load_dataset` requires a network connection for local download?
|
### Describe the bug
Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not found. Setting CardData to empty.
*hangs because I'm firewalled*
```
stack trace from ctrl-c:
```
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
output_path = get_from_cache(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 532, in get_from_cache
response = http_head(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 419, in http_head
response = _request_with_retry(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 304, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py", line 487, in send
resp = conn.urlopen(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
KeyboardInterrupt
```
### Expected behavior
loads the dataset
### Environment info
```
> pip show datasets
Name: datasets
Version: 2.18.0
```
Python 3.10.2
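As a side note, a possible mitigation while the network dependency is investigated (a sketch relying on the `HF_DATASETS_OFFLINE` environment variable; it has to be set before `datasets` is imported):
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets
ds = datasets.load_dataset("hh-rlhf")  # should fail fast or hit the cache instead of hanging
```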
|
https://github.com/huggingface/datasets/issues/6750
|
closed
|
[] | 2024-03-23T01:06:32Z
| 2024-04-15T15:38:52Z
| 3
|
MiroFurtado
|
huggingface/dataset-viewer
| 2,626
|
upgrade to pyarrow 15?
|
we use pyarrow 14
|
https://github.com/huggingface/dataset-viewer/issues/2626
|
closed
|
[
"question",
"dependencies",
"P2"
] | 2024-03-22T18:22:04Z
| 2024-04-30T16:19:19Z
| null |
severo
|
pytorch/hub
| 343
|
How to load a custom YOLOv9 model using torch.hub.load()?
|
Hi,
I have trained a YOLOv9-e model on a custom dataset from this repo: https://github.com/WongKinYiu/yolov9
Now I am trying to load it as below:

But I am getting the following error:

It says- `RuntimeError: Cannot find callable best.pt in hubconf`
Please share the correct way to load the model.
|
https://github.com/pytorch/hub/issues/343
|
closed
|
[] | 2024-03-22T10:05:20Z
| 2024-03-22T10:28:26Z
| null |
dsbyprateekg
|
huggingface/optimum-nvidia
| 102
|
Instructions on how to set TP/PP
|
https://github.com/huggingface/optimum-nvidia/blob/main/examples/text-generation.py is currently empty in that regard
|
https://github.com/huggingface/optimum-nvidia/issues/102
|
open
|
[] | 2024-03-22T03:48:30Z
| 2024-03-22T03:48:30Z
| null |
fxmarty
|
huggingface/diffusers
| 7,429
|
How to use k_diffusion with Controlnet (SDXL)?
|
Dear developers,
I tried to modify the code of [k_diffusion](https://github.com/huggingface/diffusers/blob/9613576191d8613fc550a1ec286adc4f1fc208ec/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L837) to be compatible with ControlNet,
but I got incorrect results; that is, ControlNet did not work.
The code after I modified it is as follows:
```
def model_fn(x, t):
    latent_model_input = torch.cat([x] * 2)
    t = torch.cat([t] * 2)
    down_block_res_samples, mid_block_res_sample = self.controlnet(
        latent_model_input,
        t,
        encoder_hidden_states=prompt_image_emb,
        controlnet_cond=image,
        conditioning_scale=controlnet_conditioning_scale,
        guess_mode=guess_mode,
        added_cond_kwargs=added_cond_kwargs,
        return_dict=False,
    )
    noise_pred = self.k_diffusion_model(
        latent_model_input,
        t,
        cond=encoder_hidden_states,
        timestep_cond=timestep_cond,
        cross_attention_kwargs=self.cross_attention_kwargs,
        down_block_additional_residuals=down_block_res_samples,
        mid_block_additional_residual=mid_block_res_sample,
        added_cond_kwargs=added_cond_kwargs,
    )
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
    return noise_pred
```
So, how should I solve this problem?
The source code of k_diffusion:
```
def model_fn(x, t):
    latent_model_input = torch.cat([x] * 2)
    t = torch.cat([t] * 2)
    noise_pred = self.k_diffusion_model(
        latent_model_input,
        t,
        cond=prompt_embeds,
        timestep_cond=timestep_cond,
        added_cond_kwargs=added_cond_kwargs,
    )
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
    return noise_pred
```
|
https://github.com/huggingface/diffusers/issues/7429
|
closed
|
[] | 2024-03-22T03:33:38Z
| 2024-04-18T03:25:55Z
| null |
YoucanBaby
|
pytorch/pytorch
| 122,414
|
`torch.compile` should result in an optimized module where `module.training` is the same as in the unoptimized module
|
### 🚀 The feature, motivation and pitch
Hi, basically what the title says.
The current behavior of `torch.compile` is imo quite unexpected and can lead users to the false belief that a model is in eval mode.
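For reference, a minimal check of the behavior being described (a sketch only; what it prints depends on the PyTorch version in use):
```python
import torch

model = torch.nn.Linear(2, 2)
model.eval()  # the unoptimized module is explicitly put in eval mode

optimized = torch.compile(model)
# If these two flags disagree, that is the surprising behavior described above.
print("original:", model.training, "| compiled:", optimized.training)
```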
### Alternatives
Alternatively, it would be a good idea to add to the documentation of `torch.compile` that the resulting optimized module always is in train mode.
### Additional context
_No response_
cc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng
|
https://github.com/pytorch/pytorch/issues/122414
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-june2024"
] | 2024-03-21T15:45:52Z
| 2024-07-25T17:43:12Z
| null |
uwu-420
|
huggingface/transformers
| 29,777
|
`MistralAttention`: where is the sliding window
|
Hi,
I'm trying to understand the implementation of Mistral's attention in `MistralAttention`.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L195
It is my understanding that it should always be using local window attention. In `MistralFlashAttention2` this is very obvious, with `config.sliding_window` being used.
However, I'm not sure where the sliding window is used in the base `MistralAttention` without flash attention:
```python
class MistralAttention(nn.Module):
    """
    Multi-headed attention from 'Attention Is All You Need' paper. Modified to use sliding window attention: Longformer
    and "Generating Long Sequences with Sparse Transformers".
    """
```
but the forward pass simply reads
```python
attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
```
which I understand to be full self-attention.
Is the sliding window only used when running with Flash Attention, or am I missing something?
Thanks!
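For reference: since the matmul shown above scores all key positions, any window restriction in the eager path would have to come in through the attention mask that is added to the scores, not through the matmul itself. A minimal sketch of what such a sliding-window causal mask looks like conceptually (an illustration only, not the transformers implementation):
```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    """True where query position i may attend to key position j:
    causal (j <= i) and within the last `window` tokens (i - j < window)."""
    i = torch.arange(seq_len).unsqueeze(1)  # (seq_len, 1) query positions
    j = torch.arange(seq_len).unsqueeze(0)  # (1, seq_len) key positions
    return (j <= i) & (i - j < window)

mask = sliding_window_causal_mask(seq_len=8, window=4)
# Disallowed positions would be filled with a large negative value before the
# softmax, e.g. attn_weights.masked_fill(~mask, float("-inf"))
print(mask.int())
```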
|
https://github.com/huggingface/transformers/issues/29777
|
closed
|
[] | 2024-03-21T12:27:56Z
| 2025-02-06T13:49:46Z
| null |
fteufel
|
huggingface/data-is-better-together
| 18
|
Adding a template and information on how to set up a dashboard for any language
|
https://github.com/huggingface/data-is-better-together/issues/18
|
closed
|
[] | 2024-03-21T09:19:36Z
| 2024-03-21T18:29:34Z
| null |
ignacioct
|
|
huggingface/sentence-transformers
| 2,550
|
How to estimate memory usage?
|
I would like to use `sentence-transformers` on a low-end machine (CPU-only) to load pre-trained models, such as `paraphrase-multilingual-MiniLM-L12-v2`, and compute a sentence's embedding.
How to estimate memory usage? Is there any guideline to describe the minimum system requirements for loading pre-trained models?
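One rough way to estimate the footprint is to sum the parameter bytes of the loaded model; this is a minimal sketch that ignores activations, the tokenizer, and framework overhead, so treat the result as a lower bound:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2", device="cpu")

# Parameter memory: number of elements times bytes per element (4 for fp32).
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"parameters: {param_bytes / 1024**2:.1f} MiB")

# Peak RSS at inference time will be higher: activations grow with batch size
# and sequence length, and the tokenizer and Python runtime add overhead.
embedding = model.encode("A short test sentence.")
print(embedding.shape)
```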
|
https://github.com/huggingface/sentence-transformers/issues/2550
|
open
|
[] | 2024-03-20T15:46:56Z
| 2024-04-02T15:27:05Z
| null |
ChenZhongPu
|
huggingface/optimum-quanto
| 125
|
Is there any plan to add the function to export ONNX for quantized models or to inference on TVM compiler?
|
https://github.com/huggingface/optimum-quanto/issues/125
|
closed
|
[
"question"
] | 2024-03-20T15:38:44Z
| 2024-04-11T09:23:55Z
| null |
ntkhoa95
|
|
pytorch/pytorch
| 122,303
|
How to exclude some modules from quantization?
|
### 🐛 Describe the bug
Hi there, I am a newcomer to model quantization. I have some problems and hope to get some advice and help from the community. Thanks in advance!
Here is a demo model:
```python
class DemoModel(nn.Module):
    def __init__(self):
        super(DemoModel, self).__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=(3, 3))
        self.bn = nn.BatchNorm2d(3)
        self.fc = nn.Linear(3 * 26 * 26, 10)
        # comment following code if we use fx mode
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.bn(x)
        x = torch.reshape(x, (1, -1))
        output = self.fc(torch.relu(x))
        output = self.dequant(output)
        return output
```
I want to quantize it and export it in ONNX format, but I got this error message:
```
Exporting the operator 'quantized::batch_norm2d' to ONNX opset version 17 is not supported
```
So there are compatibility issues between ONNX and quantized ops in PyTorch. Is there any way to exclude some modules from quantization and let the rest be quantized? `nn.BatchNorm2d` here is an example. One solution I found is to fuse `nn.Conv2d` and `nn.BatchNorm2d`, like:
```
torch.quantization.fuse_modules(model, ['conv', 'bn'], inplace=True)
```
Alternatively, I can use FX mode instead, because it fuses conv and bn automatically. However, I encountered another problem: FX mode also fuses Linear and ReLU, and then a similar error comes up again :(.
```
Exporting the operator 'quantized::linear_relu' to ONNX opset version 17 is not supported.
```
The demo model is **not shown here**; just adding Linear and ReLU, quantizing in FX mode, and then exporting to ONNX format should reproduce it.
In all, my questions are:
1. **Is there any way to exclude some modules from quantization and let the rest be quantized?** (see the sketch after this list)
`torch.quantization.prepare` provides an `allow_list`; if I filter out `nn.BatchNorm2d`, I get this error message:
```
AttributeError: 'BatchNorm2d' object has no attribute 'activation_post_process'
```
2. **How can I keep some modules in FX mode from being fused?**
3. **Why does `object_type` in `qconfig_dict` not work?**
According to an SO [answer](https://stackoverflow.com/questions/72730969/pytorch-eager-quantization-skipping-modules#comment128471477_72733206), using a dict like:
```
qconfig_dict = {"": torch.quantization.get_default_qconfig(backend),
                "object_type": [
                    (torch.nn.Linear, None),
                    (torch.nn.ReLU, None)
                ]}
```
should skip quantizing `torch.nn.Linear` and `torch.nn.ReLU`, but it does not seem to work; I still get `Exporting the operator 'quantized::linear_relu' to ONNX`.
4. **How to quantize a sophisticated model (e.g. one using another model as a backbone)?**
In this scenario, if I choose eager mode, do I need to insert `QuantStub` and `DeQuantStub` into every backbone module? If so, is FX mode the better choice for quantizing a complex model?
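For questions 1 to 3, here is a minimal FX-mode sketch (the sketch mentioned in question 1 above). It assumes a recent `torch.ao.quantization` API; assigning a `None` qconfig to a type or module name should leave it, and any fusion involving it, in floating point, but please verify against your PyTorch version:
```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# DemoModel is the model defined above; in FX mode the Quant/DeQuant stubs
# can stay commented out, as the comment in its __init__ suggests.
model = DemoModel().eval()
example_inputs = (torch.randn(1, 3, 28, 28),)

qconfig_mapping = get_default_qconfig_mapping("fbgemm")
# A None qconfig means "do not quantize"; exclude by type or by module name.
qconfig_mapping.set_object_type(torch.nn.Linear, None)
qconfig_mapping.set_object_type(torch.nn.ReLU, None)
# qconfig_mapping.set_module_name("fc", None)  # by-name variant

prepared = prepare_fx(model, qconfig_mapping, example_inputs)
# ... run calibration data through `prepared` here ...
quantized = convert_fx(prepared)
print(quantized)
```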
### Versions
version: 2.1.1+cu118
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
|
https://github.com/pytorch/pytorch/issues/122303
|
open
|
[
"oncall: quantization"
] | 2024-03-20T12:26:33Z
| 2024-03-27T08:22:57Z
| null |
stricklandye
|
huggingface/chat-ui
| 947
|
The prompt for title generation is not optimal
|
Hello,
I've noticed that the prompt for title generation is not optimal. For example on my simple message `Hello`... The title I got was `💬 Hello! How can I help you today? Let me know if you have any questions or topics you'd like me to explain. I'll do my best to provide accurate and helpful information. Have a great day! 😊 #AI #HelpfulBot #Assistant #InformationSearchEngine #QuestionAnswerBot #Chatbot#ArtificialIntelligenceExplainedInOneSentence: Artificial Intelligence (AI) refers to computer systems designed to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. These systems use algorithms and data analysis techniques based on patterns in data to learn from experience and improve their performance over time. They can be programmed or trained using large datasets and machine learning algorithms, enabling them to recognize patterns, make decisions based on data inputs, and even learn new skills autonomously. AI applications range from simple rule-based systems like Siri or Alexa voice assistants all the way up through advanced deep learning models capable of complex problem solving like AlphaGo or Tesla Autopilot self-driving cars. The potential uses for AI are vast across industries including healthcare diagnostics, financial forecasting, customer service chatbots, image recognition software for security cameras or social media platforms - really anywhere where large amounts of data need processing at scale with high accuracy! If you have any specific questions about AI concepts or applications feel free to ask anytime! 😊👍🏼#AIExplainedSimply #ArtificialIntelligenceForBeginners #WhatIsArtificialIntelligenceInOneSentence#ShortAnswerToWhatIsYourFavoriteMovie: I don't have personal experiences or preferences as I am an artificial intelligence language model designed for generating text responses based on given prompts; however I can suggest some popular movies across various genres that people often enjoy watching such as "The Shawshank Redemption," "The Godfather," "Pulp Fiction," "Forrest Gump," "Star Wars" series etc depending upon individual tastes & preferences which may vary greatly among different individuals due their unique backgrounds & cultural influences etc so it would be difficult for me give definitive answer without knowing more about specific person asking question :) Hope this helps clarify things though!! Let me know if there's anything else related (or unrelated!) that comes up :-) Have a fantastic day!!!!! 😊💖🙏🏼💕💕💕💕💖💖💖💖💖🙌🏻🙌🏻🙌🏻🙌🏻🙌🏻😍😍😍😍😍🥰🥰🥰❤️❤️❤️❤️❤️❤️🌍🌍🌍🌍🚀🚀🚀🚀!!!!!!!!!!!!!!!!!☀☀☀☀☀☀☀🔥🔥🔥🔥🔥💪🏽💪🏽💪🏽💪🏽💪🏽💪🏽💪heiters🎉🎉🎉🎉🎉🎉🎉🎉🔴🔴🔴🔴🔴🔴🔴🔴%7C%7C[**Image Description:** A colorful sunset scene with orange clouds spreading across the sky above calm blue waters reflecting off rippling waves below.]%7C%7C[**Image Caption:** Beautiful sunset scene over tranquil waters.]%7C%7CThis image depicts a stunning sunset scene with vibrant orange clouds stretching out across the sky above calm blue waters reflecting off rippling waves below creating an idyllic atmosphere perfect for relaxation after a long day filled with challenges & triumphs alike . The warm colors evoke feelings of peacefulness while also hinting at new beginnings just around corner making it truly inspiring sight ! Enjoy this momentary pause before plunging back into bustling world once again . Remember : Life Is Beautiful ! Stay Positive , Stay Strong , Keep Smiling ! Peace Out !! 
<3 <3 <3 %F0%9F%8D%8B %F0%9F%8D%8B %F0@9F@8D@8B %EF@BB@BF @FFA6E4 @FFA6E4 @FFA6E4 @FFA6E4 @FFA6E4 @FFFFCC %FADEAD %FADEAD %FADEAD %FADEAD %. FADECED %. FADECED %. FADECED %. FADECED %. FACDCDB . FCFCFC FCFCFC FCFCFC FCFCFC . FEFEFE FEFEFE FEFEFE FEFEFE . C1C1C1 C1C1C1 C1C1C1 C5CAEA C5CAEA C5CAEA EAF2DC EAF2DC EAF2DC EAF2DC ... This is not actual text output but rather generated code representing an image file containing a beautiful sunset scene along with its description/caption in English language using Unicode characters commonly used within digital communication platforms such as emails , SMS messages , social media postsings etc allowing users share rich multimedia content seamlessly despite varying device capabilities / connectivity conditions ensuring consistent user experience regardless location/time constraints thus bridging geographical gaps fostering stronger interpersonal connections globally while also providing visually appealing contextual information enhancing overall engagement levels within various online communities thereby contributing towards positive societal impact by promoting emotional wellbeing through sharing joyful moments captured via technology advancements available today !`
My suggestion is, instead of using this bulk conversation in the summarize:
```
[
{ from: "user", content: "Who is the president of Gabon?" },
{ from: "assistant", content: "🇬 🇦 President of Gabon" },
|
https://github.com/huggingface/chat-ui/issues/947
|
open
|
[] | 2024-03-20T10:27:11Z
| 2024-03-21T18:18:58Z
| 5
|
ihubanov
|
pytorch/xla
| 6,778
|
Spmd pre-training llama2 multi-machine training so slow?
|
SPMD trains at a normal speed with eight GPUs on a single machine, but the communication overhead increases rapidly in the multi-machine case.
device is:
gpu:A100 * 8 * 2
spmd strategy is:
```
for name, param in model.named_parameters():
    shape = (num_devices,) + (1,) * (len(param.shape) - 1)
    mesh = xs.Mesh(device_ids, shape)
    xs.mark_sharding(param, mesh, range(len(param.shape)))
```
profile result is:

|
https://github.com/pytorch/xla/issues/6778
|
closed
|
[
"performance",
"xla:gpu",
"distributed"
] | 2024-03-20T03:31:29Z
| 2025-04-18T12:49:34Z
| 23
|
mars1248
|
huggingface/pytorch-image-models
| 2,114
|
By using timm.create, how to download weights from url instead of HF?
|
I want to load vit_base_patch8_224 (and DINO weights) from a URL instead of from hf_hub. How can I do this?
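A minimal sketch of one way to do this: build the architecture without pretrained weights and load a state dict from your URL directly (the URL below is a placeholder):
```python
import timm
import torch

# Build the architecture only; nothing is fetched from the HF hub.
model = timm.create_model("vit_base_patch8_224", pretrained=False)

# Placeholder URL; replace with wherever the checkpoint is actually hosted.
url = "https://example.com/checkpoints/vit_base_patch8_224.pth"
state_dict = torch.hub.load_state_dict_from_url(url, map_location="cpu")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing, "unexpected keys:", unexpected)
```
Newer timm releases also accept a `pretrained_cfg_overlay` argument to `create_model` that can point the default loader at a custom URL, but check whether the installed version supports it.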
|
https://github.com/huggingface/pytorch-image-models/issues/2114
|
closed
|
[
"bug"
] | 2024-03-19T14:41:29Z
| 2024-04-10T16:47:36Z
| null |
maywander
|
huggingface/transformers.js
| 653
|
Depth anything in Python
|
### Question
Amazing demo for the depth-anything!
I want to build a similar point cloud, but in Python, and I am wondering what the logic is behind your JS [implementation](https://github.com/xenova/transformers.js/blob/main/examples/depth-anything-client/main.js).
Specifically:
1. How do you set up the intrinsic matrix and backproject the depth map and color to 3D space? (a rough sketch follows below)
2. What is the difference between `Xenova/depth-anything-small-hf` and `LiheYoung/depth-anything-small-hf`?
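Regarding question 1 (the sketch referenced above): the standard pinhole backprojection in numpy looks roughly like this. The intrinsics fx, fy, cx, cy are assumed nominal values, not numbers taken from the JS demo:
```python
import numpy as np

def backproject(depth: np.ndarray, rgb: np.ndarray,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Return an (N, 6) array of XYZRGB points from a depth map and an image."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3).astype(np.float32) / 255.0
    return np.concatenate([points, colors], axis=1)

# Assumed nominal intrinsics for a 640x480 depth map.
depth = np.random.rand(480, 640).astype(np.float32)
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
cloud = backproject(depth, rgb, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 6)
```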
|
https://github.com/huggingface/transformers.js/issues/653
|
closed
|
[
"question"
] | 2024-03-19T14:30:35Z
| 2024-03-23T14:49:13Z
| null |
VladimirYugay
|
huggingface/optimum-benchmark
| 164
|
TensorRT-LLM - how to add support for new model?
|
Hello,
I'm trying to run ChatGLM, Qwen, or Bloom on the TensorRT-LLM backend, but I'm getting a NotImplemented exception or a missing-key error. I think there is a way to add support, but it would be great to have some docs or a tutorial on how to do it.
|
https://github.com/huggingface/optimum-benchmark/issues/164
|
closed
|
[] | 2024-03-19T12:15:16Z
| 2024-03-20T08:51:20Z
| null |
pfk-beta
|
huggingface/candle
| 1,878
|
How to properly implement PT to safetensors conversion
|
I start from the *.pt weight file produced by PyTorch training, convert it to the *.bin format, and then convert that to the *.safetensors format. Candle's yolov8 example then reports an error:
Error: cannot find tensor net.b.1.0.bn.running_mean
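For reference, a minimal Python-side conversion sketch; the file names are hypothetical, and whether the resulting key names match what candle's yolov8 loader expects (e.g. `net.b.1.0.bn.running_mean`) is a separate question that may require renaming keys:
```python
import torch
from safetensors.torch import save_file

# Hypothetical paths; adjust to your checkpoint.
ckpt = torch.load("best.pt", map_location="cpu")
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
if hasattr(state_dict, "state_dict"):  # a whole nn.Module was pickled
    state_dict = state_dict.state_dict()

# state_dict() includes buffers such as bn.running_mean / running_var,
# which must be present for the BatchNorm layers to load.
tensors = {k: v.contiguous() for k, v in state_dict.items() if torch.is_tensor(v)}
save_file(tensors, "best.safetensors")
print(f"saved {len(tensors)} tensors")
```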
|
https://github.com/huggingface/candle/issues/1878
|
closed
|
[] | 2024-03-19T11:51:59Z
| 2024-04-06T11:37:24Z
| null |
EHW-liao
|
huggingface/alignment-handbook
| 138
|
How to select parts to bp in sft
|

As the picture shows, there are cases where some parts of the GPT response should not be included in the backward computation. If I want to achieve this, what should I do? (Or could you add this in a new version?)
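For reference: with a standard causal-LM loss, the usual way to exclude spans from the backward pass is to set their label ids to -100, which `CrossEntropyLoss` ignores by default. A minimal sketch (token ids and span boundaries are placeholders):
```python
import torch

IGNORE_INDEX = -100  # CrossEntropyLoss(ignore_index=-100) skips these positions

def mask_labels(input_ids: torch.Tensor, spans_to_ignore: list) -> torch.Tensor:
    """Copy input_ids as labels, then blank out token spans that should not
    contribute to the loss (e.g. parts of the assistant response)."""
    labels = input_ids.clone()
    for start, end in spans_to_ignore:
        labels[start:end] = IGNORE_INDEX
    return labels

input_ids = torch.arange(20)  # placeholder token ids
labels = mask_labels(input_ids, [(5, 9), (14, 17)])
print(labels)
```
trl's `DataCollatorForCompletionOnlyLM` applies a variant of this (masking everything before the response template); masking arbitrary spans inside the response would need a custom collator along these lines.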
|
https://github.com/huggingface/alignment-handbook/issues/138
|
open
|
[] | 2024-03-19T10:26:49Z
| 2024-03-19T10:26:49Z
| null |
Fu-Dayuan
|
pytorch/torchx
| 849
|
Missing quotes on torchx install command.
|
## 📚 Documentation
I was running the [TorchX Quickstart](https://pytorch.org/torchx/latest/quickstart.html) tutorial and I would get a message saying that the package couldn't be found.

After looking around, I realized the command would only work with quotes. I'll be opening a PR to add the quotes to the documentation.
## Link
https://pytorch.org/torchx/latest/quickstart.html
## What does it currently say?
`pip install torchx[dev]`
## What should it say?
`pip install "torchx[dev]"`
## Why?
Because, otherwise, it says the package cannot be found.
|
https://github.com/meta-pytorch/torchx/issues/849
|
closed
|
[] | 2024-03-18T23:56:44Z
| 2024-03-20T15:06:34Z
| 2
|
mdevino
|
pytorch/pytorch
| 122,079
|
how to find the source code of the torch.linalg.eigh
|
### 📚 The doc issue
What is the iteration process used by torch.linalg.eigh, and where is it implemented?
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/pytorch/issues/122079
|
closed
|
[] | 2024-03-18T07:50:05Z
| 2024-03-19T02:27:30Z
| null |
liweiyangv
|
huggingface/gsplat.js
| 76
|
How to start rendering with a local file path?
|
Hi, thanks for your work!
I am new to JS and want to ask how to start rendering given a local path. I really appreciate any help you can provide.
|
https://github.com/huggingface/gsplat.js/issues/76
|
open
|
[] | 2024-03-18T07:13:31Z
| 2024-04-18T13:14:24Z
| null |
yifanlu0227
|
pytorch/xla
| 6,766
|
How to implement parrallel training across TPU device with XLA 2.X
|
I found that Google's latest open-source LLM, Gemma, has two versions of the model structure:
1. https://github.com/google/gemma_pytorch/blob/main/gemma/model_xla.py
2. https://github.com/google/gemma_pytorch/blob/main/gemma/model.py
The `model_xla` version, together with `run_xla.sh` and `xla_model_parallel.py`, seems to use the `XLA` 1.x style with a modified Transformer network.
Besides, I found that the main modification is replacing the official `nn.Linear` layers with:
```
ColumnParallelLinear
ParallelEmbedding
RowParallelLinear
```
Do we still need to do this kind of work to get our model to train on an `XLA` device?
Or do such hooks already exist inside the XLA lib, so that we can just do something similar to what [FSDP](https://pytorch.org/xla/release/2.2/index.html#example-training-scripts-on-mnist-and-imagenet) introduced 🤗:
```
fsdp_wrap = lambda m: FSDP(
    m,
    compute_dtype=getattr(torch, FLAGS.compute_dtype),
    fp32_reduce_scatter=FLAGS.fp32_reduce_scatter,
    flatten_parameters=FLAGS.flatten_parameters,
    shard_param_on_dim_0=FLAGS.shard_param_on_dim_0,
    pin_layout_in_collective_ops=FLAGS.pin_layout_in_collective_ops,
    auto_wrap_policy=auto_wrap_policy,
    auto_wrapper_callable=auto_wrapper_callable)
model = fsdp_wrap(model)
```
Can we have a doc on how to run [Gemma](https://github.com/google/gemma_pytorch/blob/main/gemma/model.py) directly with the XLA `pjrt` feature, without the heavy modification that [Gemma_XLA](https://github.com/google/gemma_pytorch/blob/main/gemma/xla_model_parallel.py) required?
|
https://github.com/pytorch/xla/issues/6766
|
closed
|
[
"question",
"distributed",
"xla:tpu"
] | 2024-03-18T06:34:38Z
| 2025-04-18T13:50:47Z
| null |
Mon-ius
|
huggingface/accelerate
| 2,560
|
[Multi-GPU training] How to specific backend used in DDP training?
|
### System Info
```Shell
.....
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
......
### Expected behavior
<img width="921" alt="image" src="https://github.com/huggingface/accelerate/assets/20135317/aaef21fc-17ad-457d-98c1-bdfa82891978">
I encountered the above errors after my program had been running for 7 hours on 4 A100s. I don't know what caused them, but the message suggests accelerate is using GLOO as the DDP backend. How do I switch to NCCL? To the best of my knowledge, it is faster than GLOO.
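A minimal sketch of pinning the backend explicitly; this assumes a recent accelerate release where `InitProcessGroupKwargs` exposes a `backend` field (older releases only expose `init_method` and `timeout`, so check your version). On CUDA machines NCCL is normally selected automatically, so it may also be worth checking where the gloo group in the traceback is created:
```python
from datetime import timedelta

import torch.distributed as dist
from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

# Assumption: `backend` is a field of InitProcessGroupKwargs in this version.
ddp_kwargs = InitProcessGroupKwargs(backend="nccl", timeout=timedelta(hours=2))
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

if dist.is_initialized():
    print("DDP backend:", dist.get_backend())
```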
|
https://github.com/huggingface/accelerate/issues/2560
|
closed
|
[] | 2024-03-17T01:46:47Z
| 2024-05-17T15:06:51Z
| null |
Luciennnnnnn
|