| repo (stringclasses, 147 values) | number (int64, 1 to 172k) | title (stringlengths, 2 to 476) | body (stringlengths, 0 to 5k) | url (stringlengths, 39 to 70) | state (stringclasses, 2 values) | labels (listlengths, 0 to 9) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58) | user (stringlengths, 2 to 28) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/accelerate
| 2,164
|
how to get same timestamp in different subprocesses while using accelerate launch
|
I would like to get a unique timestamp to name my result folder like below
```
def get_time_string() -> str:
    x = datetime.datetime.now()
    return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}"
```
However, it sometimes produces a different timestamp in different subprocesses. Is there any way to get a single, shared timestamp across processes?
Thanks very much for your time!
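One possible approach (an untested sketch, assuming `accelerate.utils.broadcast_object_list` is available in your version of accelerate): compute the timestamp only on the main process and broadcast it, so every rank names its result folder identically.
```python
import datetime

from accelerate import Accelerator
from accelerate.utils import broadcast_object_list

accelerator = Accelerator()

def get_shared_time_string() -> str:
    # Only the main process computes the timestamp; the other ranks receive it.
    payload = [datetime.datetime.now().strftime("%y%m%d-%H%M%S")
               if accelerator.is_main_process else None]
    broadcast_object_list(payload, from_process=0)
    return payload[0]
```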
|
https://github.com/huggingface/accelerate/issues/2164
|
closed
|
[] | 2023-11-17T06:36:00Z
| 2023-11-29T07:30:04Z
| null |
shliu0
|
huggingface/open_asr_leaderboard
| 14
|
How to run calc_rtf.py? Cannot reproduce rtf results.
|
There is no guide on how to execute calc_rtf.py. For example, this one https://github.com/huggingface/open_asr_leaderboard/blob/main/transformers/calc_rtf.py references 4469669.mp3. But there is no such file in the repo from what I see.
So the results are not reproducible.
Same for https://github.com/huggingface/open_asr_leaderboard/blob/main/nemo_asr/calc_rtf.py What is /disk3/datasets/speech-datasets/earnings22/media/4469669.wav?
BTW, I don't recommend simply copying the same sample multiple times for an evaluation. It can produce performance that looks too good compared to running in production. While the data won't be cached, the same chunks of external language models will get hit multiple times, giving better-than-reality results, as one example. What that means is that, for example, the Whisper models never diverge across elements in the batch in the sequences they are producing, which can make the embedding lookup perform better than it really should.
I got my RTFx results in https://arxiv.org/abs/2311.04996 by caching the entire dataset in memory (https://github.com/nvidia-riva/riva-asrlib-decoder/blob/8282368816552a7ee22c9340dce7b9c3c8d1f193/src/riva/asrlib/decoder/test_graph_construction.py#L77-L89). This is what we do for the MLPerf Inference benchmarks as well, which are the gold standard for benchmarking.
|
https://github.com/huggingface/open_asr_leaderboard/issues/14
|
open
|
[] | 2023-11-16T21:14:31Z
| 2023-11-16T21:14:31Z
| null |
galv
|
huggingface/transformers.js
| 397
|
[Question] Tokenizing a base64 string is very slow?
|
Hi! I happened to be encoding some files using transformers.js, and one of the files happened to have some base64 in it. What I noticed is that base64 takes an enormously long time to tokenize, relative to the number of tokens produced. Tokenizing a string of English text to the same number of tokens is far quicker.
For example:
```javascript
const testBase64 =
"VGhlIFNwYW5pc2ggQ2l2aWwgV2FyIChTcGFuaXNoOiBHdWVycmEgQ2l2aWwgRXNwYcOxb2xhKVtub3RlIDJdIHdhcyBmb3VnaHQgZnJvbSAxOTM2IHRvIDE5MzkgYmV0d2VlbiB0aGUgUmVwdWJsaWNhbnMgYW5kIHRoZSBOYXRpb25hbGlzdHMuIFJlcHVibGljYW5zIHdlcmUgbG95YWwgdG8gdGhlIGxlZnQtbGVhbmluZyBQb3B1bGFyIEZyb250IGdvdmVybm1lbnQgb2YgdGhlIFNlY29uZCBTcGFuaXNoIFJlcHVibGljLCBhbmQgY29uc2lzdGVkIG9mIHZhcmlvdXMgc29jaWFsaXN0LCBjb21tdW5pc3QsIHNlcGFyYXRpc3QsIGFuYXJjaGlzdCwgYW5kIHJlcHVibGljYW4gcGFydGllcywgc29tZSBvZiB3aGljaCBoYWQgb3Bwb3NlZCB0aGUgZ292ZXJubWVudCBpbiB0aGUgcHJlLXdhciBwZXJpb2QuWzEyXSBUaGUgb3Bwb3NpbmcgTmF0aW9uYWxpc3RzIHdlcmUgYW4gYWxsaWFuY2Ugb2YgRmFsYW5naXN0cywgbW9uYXJjaGlzdHMsIGNvbnNlcnZhdGl2ZXMsIGFuZCB0cmFkaXRpb25hbGlzdHMgbGVkIGJ5IGEgbWlsaXRhcnkganVudGEgYW1vbmcgd2hvbSBHZW5lcmFsIEZyYW5jaXNjbyBGcmFuY28gcXVpY2tseSBhY2hpZXZlZCBhIHByZXBvbmRlcmFudCByb2xlLiBEdWUgdG8gdGhlIGludGVybmF0aW9uYWwgcG9saXRpY2FsIGNsaW1hdGUgYXQgdGhlIHRpbWUsIHRoZSB3YXIgaGFkIG1hbnkgZmFjZXRzIGFuZCB3YXMgdmFyaW91c2x5IHZpZXdlZCBhcyBjbGFzcyBzdHJ1Z2dsZSwgYSByZWxpZ2lvdXMgc3RydWdnbGUsIGEgc3RydWdnbGUgYmV0d2VlbiBkaWN0YXRvcnNoaXAgYW5kIHJlcHVibGljYW4gZGVtb2NyYWN5LCBiZXR3ZWVuIHJldm9sdXRpb24gYW5kIGNvdW50ZXJyZXZvbHV0aW9uLCBhbmQgYmV0d2VlbiBmYXNjaXNtIGFuZCBjb21tdW5pc20uWzEzXSBBY2NvcmRpbmcgdG8gQ2xhdWRlIEJvd2VycywgVS5TLiBhbWJhc3NhZG9yIHRvIFNwYWluIGR1cmluZyB0aGUgd2FyLCBpdCB3YXMgdGhlICJkcmVzcyByZWhlYXJzYWwiIGZvciBXb3JsZCBXYXIgSUkuWzE0XSBUaGUgTmF0aW9uYWxpc3RzIHdvbiB0aGUgd2FyLCB3aGljaCBlbmRlZCBpbiBlYXJseSAxOTM5LCBhbmQgcnVsZWQgU3BhaW4gdW50aWwgRnJhbmNvJ3MgZGVhdGggaW4gTm92ZW1iZXIgMTk3NS4KClRoZSB3YXIgYmVnYW4gYWZ0ZXIgdGhlIHBhcnRpYWwgZmFpbHVyZSBvZiB0aGUgY291cCBkJ8OpdGF0IG9mIEp1bHkgMTkzNiBhZ2FpbnN0IHRoZSBSZXB1YmxpY2FuIGdvdmVybm1lbnQgYnkgYSBncm91cCBvZiBnZW5lcmFscyBvZiB0aGUgU3BhbmlzaCBSZXB1YmxpY2FuIEFybWVkIEZvcmNlcywgd2l0aCBHZW5lcmFsIEVtaWxpbyBNb2xhIGFzIHRoZSBwcmltYXJ5IHBsYW5uZXIgYW5kIGxlYWRlciBhbmQgaGF2aW5nIEdlbmVyYWwgSm9zw6kgU2FuanVyam8gYXMgYSBmaWd1cmVoZWFkLiBUaGUgZ292ZXJubWVudCBhdCB0aGUgdGltZSB3YXMgYSBjb2FsaXRpb24gb2YgUmVwdWJsaWNhbnMsIHN1cHBvcnRlZCBpbiB0aGUgQ29ydGVzIGJ5IGNvbW11bmlzdCBhbmQgc29jaWFsaXN0IHBhcnRpZXMsIHVuZGVyIHRoZSBsZWFkZXJzaGlwIG9mIGNlbnRyZS1sZWZ0IFByZXNpZGVudCBNYW51ZWwgQXphw7FhLlsxNV1bMTZdIFRoZSBOYXRpb25hbGlzdCBmYWN0aW9uIHdhcyBzdXBwb3J0ZWQgYnkgYSBudW1iZXIgb2YgY29uc2VydmF0aXZlIGdyb3VwcywgaW5jbHVkaW5nIENFREEsIG1vbmFyY2hpc3RzLCBpbmNsdWRpbmcgYm90aCB0aGUgb3Bwb3NpbmcgQWxmb25zaXN0cyBhbmQgdGhlIHJlbGlnaW91cyBjb25zZXJ2YXRpdmUgQ2FybGlzdHMsIGFuZCB0aGUgRmFsYW5nZSBFc3Bhw7FvbGEgZGUgbGFzIEpPTlMsIGEgZmFzY2lzdCBwb2xpdGljYWwgcGFydHkuWzE3XSBBZnRlciB0aGUgZGVhdGhzIG9mIFNhbmp1cmpvLCBFbWlsaW8gTW9sYSBhbmQgTWFudWVsIEdvZGVkIExsb3BpcywgRnJhbmNvIGVtZXJnZWQgYXMgdGhlIHJlbWFpbmluZyBsZWFkZXIgb2YgdGhlIE5hdGlvbmFsaXN0IHNpZGUuCgpUaGUgY291cCB3YXMgc3VwcG9ydGVkIGJ5IG1pbGl0YXJ5IHVuaXRzIGluIE1vcm9jY28sIFBhbXBsb25hLCBCdXJnb3MsIFphcmFnb3phLCBWYWxsYWRvbGlkLCBDw6FkaXosIEPDs3Jkb2JhLCBhbmQgU2V2aWxsZS4gSG93ZXZlciwgcmViZWxsaW5nIHVuaXRzIGluIGFsbW9zdCBhbGwgaW1wb3J0YW50IGNpdGllc+KAlHN1Y2ggYXMgTWFkcmlkLCBCYXJjZWxvbmEsIFZhbGVuY2lhLCBCaWxiYW8sIGFuZCBNw6FsYWdh4oCUZGlkIG5vdCBnYWluIGNvbnRyb2wsIGFuZCB0aG9zZSBjaXRpZXMgcmVtYWluZWQgdW5kZXIgdGhlIGNvbnRyb2wgb2YgdGhlIGdvdmVybm1lbnQuIFRoaXMgbGVmdCBTcGFpbiBtaWxpdGFyaWx5IGFuZCBwb2xpdGljYWxseSBkaXZpZGVkLiBUaGUgTmF0aW9uYWxpc3RzIGFuZCB0aGUgUmVwdWJsaWNhbiBnb3Zlcm5tZW50IGZvdWdodCBmb3IgY29udHJvbCBvZiB0aGUgY291bnRyeS4gVGhlIE5hdGlvbmFsaXN0IGZvcmNlcyByZWNlaXZlZCBtdW5pdGlvbnMsIHNvbGRpZXJzLCBhbmQgYWlyIHN1cHBvcnQgZnJvbSBGYXNjaXN0IEl0YWx5LCBOYXppIEdlcm1hbnkgYW5kIFBvcnR1Z2FsLCB3aGlsZSB0aGUgUmVwdWJsaWNhbiBzaWRlIHJlY2VpdmVkIHN1cHBvcnQgZnJvbSB0aGUgU292aWV0IFVuaW9uIGFuZCBNZXhpY28uIE90aGVyIGNvdW50cmllcywgc3VjaCBhcyB0aGUgVW5pdGVkIE
tpbmdkb20sIEZyYW5jZSwgYW5kIHRoZSBVbml0ZWQgU3RhdGVzLCBjb250aW51ZWQgdG8gcmVjb2duaXNlIHRoZSBSZXB1YmxpY2FuIGdvdmVybm1lbnQgYnV0IGZvbGxvd2VkIGFuIG9mZmljaWFsIHBvbGljeSBvZiBub24taW50ZXJ2ZW50aW9uLiBEZXNwaXRlIHRoaXMgcG9saWN5LCB0ZW5zIG9mIHRob3VzYW5kcyBvZiBjaXRpemVucyBmcm9tIG5vbi1pbnRlcnZlbnRpb25pc3QgY291bnRyaWVzIGRpcmVjdGx5IHBhcnRpY2lwYXRlZCBpbiB0aGUgY29uZmxpY3QuIFRoZXkgZm91Z2h0IG1vc3RseSBpbiB0aGUgcHJvLVJlcHVibGljYW4gSW50ZXJuYXRpb25hbCBCcmlnYWRlcywgd2hpY2ggYWxzbyBpbmNsdWRlZCBzZXZlcmFsIHRob3VzYW5kIGV4aWxlcyBmcm9tIHByby1OYXRpb25hbGlzdCByZWdpbWVzLg==";
const { AutoTokenizer } = await import("@xenova/transformers");
const tokenizer = await AutoTokenizer.from_pretrained(
"Xenova/all-MiniLM-L6-v2"
);
const startTime = Date.now();
const tokenized = tokenizer.encode(testBase64);
const endTime = Date.now();
console.log("It took ", endTime - startTime, "ms to tokenize");
const decoded = tokenizer.decode(tokenized);
console.log("Decoded: ", decoded);
```
It takes 56 seconds to tokenize, and when decoded it returns the same input string.
Interestingly, similar logic
|
https://github.com/huggingface/transformers.js/issues/397
|
closed
|
[
"question"
] | 2023-11-16T20:27:51Z
| 2023-11-17T19:48:57Z
| null |
samlhuillier
|
huggingface/transformers.js
| 396
|
[Question] How to use Transformers.js in LangChain
|
Hi all, I'm writing a custom LLM class to use Transformers.js with LangChain. Does a structure like this make sense? Any advice for optimizing it or best practices to apply?
Any suggestions or feedback would be greatly appreciated!
```
import { pipeline } from "@xenova/transformers";
import { LLM } from "langchain/llms/base";

class MyHF extends LLM {
  static instance = null;

  constructor(modelTask = "text2text-generation", modelName = "Xenova/LaMini-Flan-T5-783M") {
    super({ maxConcurrency: 1 });
    this.modelTask = modelTask;
    this.modelName = modelName;
    this.llmModel = MyHF.getInstance(this.modelTask, this.modelName);
  }

  static async getInstance(modelTask, modelName, progress_callback = null) {
    if (this.instance === null) {
      this.instance = pipeline(modelTask, modelName, { progress_callback });
    }
    return this.instance;
  }

  _llmType() {
    return "hf";
  }

  async _call(prompt, options = { topk: 1 }) {
    const executor = await MyHF.getInstance(this.modelTask, this.modelName);
    const { generated_text } = await executor(prompt, options);
    return generated_text;
  }
}

export default MyHF;
```
|
https://github.com/huggingface/transformers.js/issues/396
|
open
|
[
"question"
] | 2023-11-16T17:27:52Z
| 2023-12-21T16:27:28Z
| null |
mrddter
|
huggingface/autotrain-advanced
| 349
|
How to reload the checkpoints for LLM finetuning?
|
May I ask how to resume from the latest checkpoint using `autotrain llm` if it crashed? I only found a `resume_from_checkpoint` option in the `dreambooth` trainer, but I cannot find it anywhere else.
I was wondering whether this feature is not fully supported yet, or whether I am missing something. It would be super helpful if anyone could kindly point out how to do that using autotrain.
Many thanks!
|
https://github.com/huggingface/autotrain-advanced/issues/349
|
closed
|
[
"stale"
] | 2023-11-16T11:51:25Z
| 2024-02-02T08:58:47Z
| null |
xihajun
|
huggingface/trl
| 1,004
|
Guidance on how to fix the scheduler and ConstantLengthDataset
|
Hello,
I want to fix the issue related to the `ConstantLengthDataset` not knowing the dataset's length in advance.
Besides having a broken progressbar and a wrong epoch count, the only problem I see is related to the scheduler, as most of us are training using cosine with warmup; if we want a complete cycle, the scheduler needs the total number of steps to adjust the ratios accordingly.
One solution would be to "guess" how many batches/iterations of packed data we will see by grabbing some samples and estimating the total length. A helper function already tries to do something like this by computing a char/token ratio.
Do you have any advice so I can draft a PR?
Ohh I just saw that @lvwerra has a [PR](https://github.com/huggingface/trl/pull/979) in the works, but only for "finite" dataset.
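For reference, a rough sketch of the "guess the length" idea described above (illustrative only; the `"text"` field name and the helper name are assumptions):
```python
def estimate_packed_steps(dataset, tokenizer, seq_length=1024, batch_size=8, n_probe=200):
    # Estimate chars-per-token from a small probe of samples.
    probe = [dataset[i]["text"] for i in range(min(n_probe, len(dataset)))]
    chars = sum(len(t) for t in probe)
    tokens = sum(len(tokenizer(t).input_ids) for t in probe)
    chars_per_token = chars / max(tokens, 1)

    # Estimate how many packed sequences of seq_length the whole dataset will yield.
    total_chars = sum(len(dataset[i]["text"]) for i in range(len(dataset)))
    est_tokens = total_chars / chars_per_token
    est_sequences = int(est_tokens // seq_length)
    return max(est_sequences // batch_size, 1)  # approx. optimizer steps per epoch
```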
|
https://github.com/huggingface/trl/issues/1004
|
closed
|
[] | 2023-11-16T10:58:30Z
| 2024-01-05T15:05:18Z
| null |
tcapelle
|
huggingface/diffusers
| 5,816
|
low attention to prompt in SDXL
|
Hi,
One of the differences between DALL·E 3 and SDXL is that SDXL pays less attention to the prompt.
Is there a way to solve this problem? For example, could changing the text encoder to another one help?
Thanks
|
https://github.com/huggingface/diffusers/issues/5816
|
closed
|
[
"question",
"stale"
] | 2023-11-16T07:24:15Z
| 2024-01-09T15:06:55Z
| null |
saeedkhanehgir
|
huggingface/transformers
| 27,526
|
How to pre-upgrade the Transformers cache and build the upgraded cache into a Docker image?
|
### System Info
Linux ubuntu 22.04
Docker 24.05
I am not sure if this is the right place for this issue. Apologies if it isn't, and please direct me to the right place.
I have been using Transformers in Docker images that are deployed at RunPod/Replicate. The containers of the images can go cold and be relaunched again and again. Each time, the container wastes 20 to 40 seconds on the cache upgrade below.
```
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
```
It would take around 20 to 40 seconds, which is a significant waste of our GPU time and container startup time.
I have tried to find out how to pre-upgrade the cache and build the upgraded cache into the Docker image by searching online, but I couldn't find a way to do it.
Please advise how to pre-upgrade the cache and build the upgraded cache into the Docker image.
Many thanks.
### Expected behavior
The cache for model files is pre-upgraded and built into the container image to avoid the upgrade each time a container is launched.
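One possible workaround (a sketch, not verified): run the one-time migration while building the image, e.g. in a `RUN` step after the model files have been downloaded, so the migrated cache is baked into an image layer instead of being redone at container start.
```python
# Run once at image build time. move_cache() is the utility named in the
# warning above; after it has run, the relaunched containers should not
# need to migrate the cache again.
import transformers

transformers.utils.move_cache()
```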
|
https://github.com/huggingface/transformers/issues/27526
|
closed
|
[] | 2023-11-16T02:53:54Z
| 2023-12-24T08:03:44Z
| null |
lanyusan
|
pytorch/benchmark
| 2,040
|
How to run test_bench.py with ROCM?
|
Hi @xuzhao9,
I don't know how to create a Dockerfile for AMD ROCm; is there any example?
Best Regards
|
https://github.com/pytorch/benchmark/issues/2040
|
closed
|
[
"module: rocm",
"ciflow/rocm"
] | 2023-11-15T14:16:59Z
| 2024-03-18T22:00:08Z
| null |
jinsong-mao
|
pytorch/TensorRT
| 2,471
|
❓ [Question] How to compile a model when the input is a list of tensors
|
## ❓ Question
I am trying to follow the tutorial [here](https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html) and am stuck at compiling the model with Torch-TensorRT. The model I am using takes a list of tensors as input, and hence I could not get the following compile code to work, as I cannot get the shape of a list:
```
trt_model = torch_tensorrt.compile(self.model,
    inputs=[torch_tensorrt.Input(inputs.shape)],
    enabled_precisions={torch.half}  # Run with FP32
)
```
Inputs have the following tensors:
> ic| i.shape: torch.Size([1, 3, 256, 256])
> ic| i.shape: torch.Size([1, 98, 3])
> ic| i.shape: torch.Size([1, 3, 3])
## What you have already tried
I have tried using `(3,)`, but I am getting the following error:
```
File "/home/default/anaconda3/envs/driverstate_ttrt/lib/python3.10/site-packages/torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
forward(__torch__.spiga.models.cnn.layers.___torch_mangle_24.Residual self, Tensor x) -> Tensor:
Keyword argument core unknown.
:
File "/home/default/driver-state-detection/Fabian/headpose/SPIGA/spiga/models/cnn/hourglass.py", line 45
low1 = self.low1(pool1)
if self.n > 1:
low2, core = self.low2(low1, core=core)
~~~~~~~~~ <--- HERE
else:
low2 = self.low2(low1)
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0.1+cu118
- OS (e.g., Linux): WSL2 on Windows11
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Python version: 3.10.12
- GPU models and configuration: 2070Super
- Any other relevant information: torch-tensorrt 1.4.0
## Additional context
Basically, I am asking what I should use as the input shape if it is a list of tensors. Should I instead look at [this](https://github.com/pytorch/TensorRT/tree/main/examples/dynamo)?
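A sketch of one possible call (untested): pass one `torch_tensorrt.Input` per tensor in the list, with the shapes taken from the `ic|` output above.
```python
import torch
import torch_tensorrt

# Mirrors the snippet above; self.model is the model referenced there.
trt_model = torch_tensorrt.compile(
    self.model,
    inputs=[
        torch_tensorrt.Input((1, 3, 256, 256)),  # image tensor
        torch_tensorrt.Input((1, 98, 3)),
        torch_tensorrt.Input((1, 3, 3)),
    ],
    enabled_precisions={torch.half},
)
```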
|
https://github.com/pytorch/TensorRT/issues/2471
|
closed
|
[
"question"
] | 2023-11-15T09:50:36Z
| 2025-11-24T17:44:36Z
| null |
HeChengHui
|
pytorch/vision
| 8,118
|
missing labels in FER2013 test data
|
### 🐛 Describe the bug
The file **test.csv** has no label column, so the labels in the test split all have value None:
```
from torchvision.datasets import FER2013
dat = FER2013(root='./', split='test')
print(dat[0][1])
```
Adding labels to the file raises a RuntimeError, presumably because of a resulting different md5 hash. The code above assumes the data has been downloaded from kaggle, as described in the [source code](https://github.com/pytorch/vision/blob/main/torchvision/datasets/fer2013.py).
### Versions
PyTorch version: 2.1.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.0-26-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 Ti
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
Stepping: 2
CPU MHz: 1944.273
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.1.1
[pip3] torchaudio==2.1.1
[pip3] torchvision==0.16.1
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.0 py311h08b1b3b_0
[conda] numpy-base 1.26.0 py311hf175353_0
[conda] pytorch
|
https://github.com/pytorch/vision/issues/8118
|
closed
|
[
"enhancement",
"help wanted",
"module: datasets"
] | 2023-11-15T09:01:24Z
| 2024-06-04T10:21:51Z
| 8
|
dtafler
|
huggingface/optimum
| 1,538
|
Does Optimum support AMD GPUs?
|
### Feature request
ONNX Runtime supports AMD ROCm. How can models be compiled/exported for it with Optimum?
### Motivation
Our company is currently testing AMD GPUs and has learned that Optimum can accelerate inference on CUDA. We are not sure whether it will support ROCm in the future.
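For reference, a sketch of what using ONNX Runtime's ROCm execution provider through Optimum might look like (untested; it assumes an onnxruntime build with ROCm support and that the `provider` argument is accepted by `ORTModelForSequenceClassification.from_pretrained`):
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(
    model_id,
    export=True,                       # export the PyTorch checkpoint to ONNX
    provider="ROCMExecutionProvider",  # run on AMD GPUs via ROCm
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Optimum on ROCm", return_tensors="pt")
outputs = model(**inputs)
```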
### Your contribution
none
|
https://github.com/huggingface/optimum/issues/1538
|
closed
|
[] | 2023-11-15T04:15:21Z
| 2024-01-09T16:10:39Z
| 1
|
taikai-zz
|
huggingface/tokenizers
| 1,391
|
How to split special tokens in encode?
|
I have converted a slow tokenizer into a PreTrainedTokenizerFast and got a tokenizer.json file, but I found that this tokenizer does not split special tokens. Here is my add_special_tokens call:
```python
tokenizer.add_special_tokens(
    [
        AddedToken("[gMASK]", normalized=True, single_word=False),
        AddedToken("sop", normalized=True, single_word=False),
    ]
)
```
|
https://github.com/huggingface/tokenizers/issues/1391
|
closed
|
[] | 2023-11-15T03:41:22Z
| 2024-01-04T06:26:38Z
| null |
leizhao1234
|
pytorch/TensorRT
| 2,468
|
❓ [Question] New release of Torch-TensorRT with PyTorch 2.1
|
## ❓ Question
Will there be a new release of Torch-TensorRT with PyTorch 2.1?
## What you have already tried
Is there going to be a new release, or is this supported now only through torch.compile?
|
https://github.com/pytorch/TensorRT/issues/2468
|
closed
|
[
"question"
] | 2023-11-14T23:42:50Z
| 2025-01-21T17:21:34Z
| null |
agunapal
|
pytorch/TensorRT
| 2,465
|
ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin Mod version 1
|
I want to use TensorRT to accelerate a VisionEncoderDecoderModel. I used the following code to convert it to ONNX, and it succeeded.
```
from transformers import VisionEncoderDecoderModel

def model_converter():
    model = VisionEncoderDecoderModel.from_pretrained("./examples/data")
    model.to(device)
    model.eval()

    tokenizer = NougatTokenizerFast.from_pretrained(r'./examples/data')
    latex_processor = NougatLaTexProcessor.from_pretrained(r'./examples/data')
    task_prompt = tokenizer.bos_token
    decoder_input_ids = tokenizer(task_prompt, add_special_tokens=False,
                                  return_tensors="pt").input_ids.to(device)

    # Create dummy inputs with the correct shapes for both inputs
    dummy_pixel_values = torch.randn(1, 3, 224, 560, device=device)

    # Provide names for the inputs
    input_names = ['pixel_values', 'decoder_input_ids']
    output_names = ['output']

    # Export the model to ONNX
    torch.onnx.export(
        model,
        (dummy_pixel_values, decoder_input_ids),
        './examples/test2.onnx',
        export_params=True,
        verbose=True,
        input_names=input_names,
        output_names=output_names
    )
```
Then, when converting the ONNX model to a TRT engine, an error occurred:
> Loading ONNX file from path ./examples/test.onnx...
Beginning ONNX file parsing
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 1400793072
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin Mod version 1
ERROR: Failed to parse the ONNX file.
In node -1 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Completed parsing of ONNX file
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
Traceback (most recent call last):
File "create_onnx.py", line 350, in <module>
f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
The code is:
```
import os
import tensorrt as trt

TRT_LOGGER = trt.Logger()
model_path = './examples/test.onnx'
engine_file_path = "./examples/test.trt"
EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)  # batchsize=1

with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) \
        as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 28
    builder.max_batch_size = 1
    if not os.path.exists(model_path):
        print('ONNX file {} not found.'.format(model_path))
        exit(0)
    print('Loading ONNX file from path {}...'.format(model_path))
    with open(model_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        if not parser.parse(model.read()):
            print('ERROR: Failed to parse the ONNX file.')
            for error in range(parser.num_errors):
                print(parser.get_error(error))
    network.get_input(0).shape = [1, 3, 224, 560]
    network.get_input(1).shape = [1, 1]
    print('Completed parsing of ONNX file')
    engine = builder.build_cuda_engine(network)
    with open(engine_file_path, "wb") as f:
        f.write(engine.serialize())
```
> TensorRT-7.2.3.4
|
https://github.com/pytorch/TensorRT/issues/2465
|
closed
|
[
"question"
] | 2023-11-14T09:37:12Z
| 2023-11-15T01:34:35Z
| null |
lin-lcx
|
pytorch/executorch
| 1,203
|
How to load original images for model inference
|
Hi, I am investigating `examples/portable/executor_runner/executor_runner.cpp`.
In [PrepareInputTensors](https://github.com/pytorch/executorch/blob/47900c96388453c83d9a6706151c0c2157fbfabd/examples/portable/executor_runner/executor_runner.cpp#L154), the [PrepareInputTensors method](https://github.com/pytorch/executorch/blob/9682172576d5d9a10f3162ad91e0a32b384a3b7c/util/util.h#L65-L137) generates just ones-initialized inputs.
So I would like to know how to load original dataset images and set them as ATen tensors.
Should I use OpenCV or some other way?
Thanks
|
https://github.com/pytorch/executorch/issues/1203
|
closed
|
[
"need-user-input",
"triaged"
] | 2023-11-14T05:11:42Z
| 2024-01-15T07:12:37Z
| null |
EarthMu
|
huggingface/diffusers
| 5,786
|
How to load a precomputed dataset in the cache folder on a different machine?
|
**Is your feature request related to a problem? Please describe.**
Some Slurm clusters have a limit on time allocation, so I'd like to precompute the dataset on my local machine and then move it to a location on the cluster to reuse it directly.
**Describe the solution you'd like**
I saw that load_dataset automatically creates Arrow files inside ~/.cache/imagefolder, and the dataset folder path is translated into a hash. So I hope I can copy that cached dataset over and pass it to --dataset_name when training the SDXL UNet, or that there is some other way to let me reuse the precomputed cached dataset on a different machine.
**Describe alternatives you've considered**
please see above.
**Additional context**
please see above
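One possible workaround (a sketch, untested; paths are placeholders): materialize the processed dataset explicitly with `save_to_disk` instead of relying on the hashed `~/.cache` folder, copy the saved directory to the cluster, and load it with `load_from_disk`.
```python
from datasets import load_dataset, load_from_disk

# On the local machine:
ds = load_dataset("imagefolder", data_dir="path/to/images")  # placeholder path
ds.save_to_disk("precomputed_dataset")

# On the cluster, after copying the folder over:
ds = load_from_disk("precomputed_dataset")
```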
|
https://github.com/huggingface/diffusers/issues/5786
|
closed
|
[
"question",
"stale"
] | 2023-11-14T02:26:00Z
| 2024-01-09T15:07:14Z
| null |
linnanwang
|
huggingface/alignment-handbook
| 22
|
How to perform full parameter finetuning without A100 GPUs
|
Hi, thank you for your great work! I'd like to reproduce the full-parameter fine-tuning of DPO training. However, I only have 10 Nvidia A40 GPUs (46 GB of memory each).
I tried the command
`CUDA_VISIBLE_DEVICES=2,3,4,5,6,7,8,9 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --main_process_port 6000 scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_full.yaml`
and it reported an OOM error, even when I set the batch size to 1.
I don't mind if the program runs a bit slower (e.g., using a smaller batch size and more gradient accumulation steps). However, I don't know if there is a way to successfully run the full-DPO code.
Can you help me, please?
Also, I'm wondering how large the performance gap is between LoRA and full-parameter fine-tuning.
|
https://github.com/huggingface/alignment-handbook/issues/22
|
open
|
[] | 2023-11-14T01:33:41Z
| 2024-02-14T13:47:16Z
| null |
ChenDRAG
|
huggingface/controlnet_aux
| 83
|
How to get a keypoints output .json file like the original OpenPose?
|
https://github.com/huggingface/controlnet_aux/issues/83
|
open
|
[] | 2023-11-13T21:55:35Z
| 2023-11-17T21:04:49Z
| null |
mayank64ce
|
|
huggingface/chat-ui
| 550
|
Can this ui be run on a colab?
|
I am wondering if this ui can be used inside a colab.
|
https://github.com/huggingface/chat-ui/issues/550
|
closed
|
[
"question"
] | 2023-11-13T16:58:35Z
| 2023-11-15T16:17:10Z
| null |
amida47
|
huggingface/text-generation-inference
| 1,258
|
How to deal with a bias=True model
|
### Feature request
How to deploy a model with bias=True. Example: vinai/PhoGPT-7B5-Instruct
### Motivation
.
### Your contribution
.
|
https://github.com/huggingface/text-generation-inference/issues/1258
|
closed
|
[
"Stale"
] | 2023-11-13T09:20:08Z
| 2024-01-20T01:46:38Z
| null |
anhnh2002
|
huggingface/trl
| 985
|
How to set the epoch number in SFTTrainer?
|
Here is my example code:
```python
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "sshleifer/tiny-gpt2",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```
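A sketch of one way this might be done (assuming `SFTTrainer` accepts a `transformers.TrainingArguments` via its `args` parameter; `num_train_epochs` is the relevant setting there):
```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

training_args = TrainingArguments(
    output_dir="./sft-tiny-gpt2",
    num_train_epochs=3,  # <- epoch count
    per_device_train_batch_size=8,
)

trainer = SFTTrainer(
    "sshleifer/tiny-gpt2",
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```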
|
https://github.com/huggingface/trl/issues/985
|
closed
|
[] | 2023-11-12T20:02:31Z
| 2023-11-14T18:29:53Z
| null |
KlausikPL
|
huggingface/diffusers
| 5,774
|
How to fine-tune Stable Diffusion on a custom dataset {caption, image}?
|
I need to fine-tune SD on a custom dataset {caption, image} with a custom image size. Could you please give me a tutorial for this task?
|
https://github.com/huggingface/diffusers/issues/5774
|
closed
|
[
"stale"
] | 2023-11-12T14:52:23Z
| 2024-01-09T15:07:21Z
| null |
npk7264
|
huggingface/diffusers
| 5,772
|
Is webdataset faster than the default Hugging Face datasets?
|
### Describe the bug
Hi, I see there is a large-scale training example https://github.com/huggingface/diffusers/blob/controlnet_webdatasets/examples/controlnet/train_controlnet_webdatasets.py using webdataset, which suggests that webdataset may have better data-loading performance than Hugging Face datasets organized with Apache Arrow.
I'm wondering whether webdataset is a good choice for me. I have an image dataset with 350k images of size 768 * 768, and I use a batch size of 64 or 192. Is webdataset for me? Any help would be appreciated!
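If you do end up trying webdataset, a minimal loading sketch might look like this (untested; it assumes the 350k images have already been packed into .tar shards, and the shard pattern and keys are placeholders):
```python
import webdataset as wds
from torch.utils.data import DataLoader

# Placeholder shard pattern; each sample stored as a .jpg plus a .json with metadata.
dataset = (
    wds.WebDataset("shards/data-{000000..000349}.tar")
    .decode("pil")
    .to_tuple("jpg", "json")
)
loader = DataLoader(dataset.batched(64), batch_size=None, num_workers=8)
```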
### Reproduction
.
### Logs
_No response_
### System Info
.
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/5772
|
closed
|
[
"question",
"stale"
] | 2023-11-12T08:40:22Z
| 2024-01-09T15:07:23Z
| null |
Luciennnnnnn
|
huggingface/chat-ui
| 549
|
How can I use this offline with local models?
|
I really like the web_search feature; can I somehow use it with local models? I tried, but I don't see any .bat files to launch it.
|
https://github.com/huggingface/chat-ui/issues/549
|
closed
|
[
"support"
] | 2023-11-11T23:59:09Z
| 2023-11-20T21:38:27Z
| 9
|
iChristGit
|
huggingface/diffusers
| 5,766
|
Image+Image+Text to Image
|
Maybe a dumb question, but I can't seem to find a good way to do multiple-image-to-image modeling. I looked into Multi-ControlNet, but I can't tell how to use it. I'm trying to train a model that takes in 2 images and a prompt:
1. a template base image (e.g. a photo of a room in someone's house with a painting on the wall)
2. a photo of a painting someone made (e.g. not a famous one like a Van Gogh, just someone's painting)
3. an optional text prompt describing the 2nd image...may not be necessary but curious what people here say
And I want to place image2 in image1 to replace the painting on the wall with the new one. Is this the right forum / model to use? I thought maybe creating a custom dataset and then simply feeding 2 image controls in would do the job but really could use some experts' guidance here.
|
https://github.com/huggingface/diffusers/issues/5766
|
closed
|
[
"question",
"stale"
] | 2023-11-11T20:15:27Z
| 2024-01-09T15:07:25Z
| null |
tval2
|
huggingface/optimum
| 1,531
|
Pytorch + TensorRT support
|
### Feature request
Is it possible to start supporting Pytorch and TensorRT inference optimizations? There are a lot of use cases where it could be useful, and optimum seems to already have a lot of good tooling to enable this.
### Motivation
Using Pytorch or TensorRT in production is painful today, and requires a lot of custom optimizations.
### Your contribution
I could help with a PR.
|
https://github.com/huggingface/optimum/issues/1531
|
closed
|
[
"feature-request",
"Stale"
] | 2023-11-11T17:27:47Z
| 2025-02-27T02:04:37Z
| 2
|
youssefadr
|
huggingface/optimum
| 1,530
|
AnimateDiff support?
|
### Feature request
Hi!
Could you please support AnimateDiff for ONNX in the future? It would be great for both GPU (DirectML) and CPU users.
kind regards
### Motivation
Not a bug, just a feature that I would really like to see for DirectML and CPU users of ONNX.
### Your contribution
I would, but I don't know anything about coding; I'm just a casual user.
|
https://github.com/huggingface/optimum/issues/1530
|
closed
|
[
"feature-request",
"Stale"
] | 2023-11-11T14:21:25Z
| 2025-03-01T02:08:38Z
| 1
|
Amin456789
|
huggingface/autotrain-advanced
| 338
|
How to run inference after training (merge the adapter with the base model)?
|
I successfully trained the Mistral 7B sharded model on Google Colab using AutoTrain.
Now, how can I do inference? I am unable to merge the adapter with the base model. Can someone please share the code for inference with me? Please help.
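A sketch of merging a LoRA adapter into its base model with `peft` (untested; the adapter path and base model id are placeholders that would need to match what AutoTrain produced):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/autotrain-adapter")  # placeholder path
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.save_pretrained("merged-model")
```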
|
https://github.com/huggingface/autotrain-advanced/issues/338
|
closed
|
[
"stale"
] | 2023-11-11T12:58:24Z
| 2024-05-06T13:35:52Z
| null |
eviIgenius
|
huggingface/diffusers
| 5,761
|
The cost of consistency decoder
|
### Describe the bug
I replaced the original VAE decoder of a Stable Diffusion model with the Consistency Decoder, and then a CUDA out-of-memory error occurred. My question is: how large is the Consistency Decoder compared to the original VAE decoder?
- `diffusers` version: 0.23.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Huggingface_hub version: 0.17.3
- Transformers version: 4.34.0
- Accelerate version: 0.23.0
- xFormers version: 0.0.18
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Reproduction
Decode a large latent
### Logs
_No response_
### System Info
..
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/5761
|
closed
|
[
"question",
"stale"
] | 2023-11-11T03:54:20Z
| 2024-01-09T15:07:30Z
| null |
Luciennnnnnn
|
pytorch/serve
| 2,785
|
How to batch process in an intermediate node in a TorchServe workflow
|
Hi, I need some help with the TorchServe workflow. Currently, I use TorchServe to orchestrate my served models and logic to work together, which can be represented in the graph below.
```mermaid
stateDiagram-v2
[*] --> PreProcess
PreProcess --> Model_A
Model_A --> IntermediaProcess
PreProcess --> IntermediaProcess
IntermediaProcess --> Model_B
Model_B --> PostProcess
PostProcess --> [*]
```
My problem is that the result from **IntermediaProcess** is a batch output. When I try to send a batch output to **Model_B**, it raises an error about `one input cannot have multiple output`. I worked around this by packing the result from **IntermediaProcess** into one payload, sending it to **Model_B**, and doing the batch processing inside **Model_B**, which solves the problem. However, it affects the performance of the overall pipeline, because **Model_B** has to handle many inferences per request for a single pipeline request.
My question is: is there an alternative way to configure the pipeline so that batch processing happens at the node level on **Model_B**? I think it might increase the concurrency of the **Model_B** node, like this:
```mermaid
stateDiagram-v2
[*] --> PreProcess
PreProcess --> Model_A
Model_A --> IntermediaProcess
PreProcess --> IntermediaProcess
IntermediaProcess --> Model_B
IntermediaProcess --> Model_B
IntermediaProcess --> Model_B
Model_B --> PostProcess
PostProcess --> [*]
```
|
https://github.com/pytorch/serve/issues/2785
|
closed
|
[] | 2023-11-11T02:47:56Z
| 2023-11-27T08:18:12Z
| null |
RTae
|
huggingface/candle
| 1,319
|
Question: How to edit specific indices of a tensor?
|
Hello everybody,
While developing beam search for candle-sampling, I have run into a small issue: it appears there is no way to edit specific indices of a tensor after creation. For example, in Python the following works for lists (and something very similar works for PyTorch tensors):
```python
values = [[1,2,3],[4,5,6]]
values[0][0] = 0
print(values) #[[0,2,3],[4,5,6]]
```
Is there an equivalent in `Candle` which I can use to edit specific indices of a tensor without creating a new tensor?
|
https://github.com/huggingface/candle/issues/1319
|
closed
|
[] | 2023-11-11T01:10:42Z
| 2023-11-26T15:53:19Z
| null |
EricLBuehler
|
huggingface/datasets
| 6,400
|
Safely load datasets by disabling execution of dataset loading script
|
### Feature request
Is there a way to disable execution of a dataset loading script when using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code execution.
### Your contribution
n/a
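For what it's worth, a sketch assuming the installed `datasets` version exposes a `trust_remote_code` parameter on `load_dataset` (the dataset id below is a placeholder): with the flag set to False, loading a dataset that requires running a loading script should be refused instead of executed.
```python
from datasets import load_dataset

# Placeholder dataset id; with trust_remote_code=False, datasets that ship a
# loading script should raise instead of executing the script.
ds = load_dataset("some-org/some-script-dataset", trust_remote_code=False)
```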
|
https://github.com/huggingface/datasets/issues/6400
|
closed
|
[
"enhancement"
] | 2023-11-10T23:48:29Z
| 2024-06-13T15:56:13Z
| 4
|
irenedea
|
huggingface/diffusers
| 5,758
|
how to run huggingface model in replicate
|
### Describe the bug
I am trying to run the code from https://medium.com/ai-artistry/streamlining-ai-agent-development-with-autogen-and-llava-b84fb0d25262, using https://huggingface.co/LLaVA-VL/llava_plus_v0_7b instead of the Replicate code.
My question is: what are the challenges of running the Hugging Face model through Replicate?
Something like this:
```
response = replicate.run(
"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591",
input={"image": img, "prompt": prompt.replace("<image>", " ")}
)
```
I tried
```
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/LLaVA-VL/llava_plus_v0_7b", additional_tools={"prompt": "Show me a tree"})
agent.run(return_code=True)
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[15], line 4
1 from transformers import HfAgent
2 agent = HfAgent("https://api-inference.huggingface.co/models/LLaVA-VL/llava_plus_v0_7b", additional_tools={"prompt": "Show me a tree"})
----> 4 agent.run( return_code=True)
TypeError: Agent.run() missing 1 required positional argument: 'task'
```
### Reproduction
Challenges running the Hugging Face model using Replicate,
something like this:
```
response = replicate.run(
"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591",
input={"image": img, "prompt": prompt.replace("<image>", " ")}
)
```
### Logs
_No response_
### System Info
RTX 3090
### Who can help?
@patrickvonplaten @sayakpaul @williamberman
|
https://github.com/huggingface/diffusers/issues/5758
|
closed
|
[
"bug"
] | 2023-11-10T20:31:04Z
| 2023-11-11T03:33:51Z
| null |
andysingal
|
pytorch/tutorials
| 2,670
|
💡 [REQUEST] - Tutorial of USB for Semi-Supervised Learning
|
### 🚀 Describe the improvement or the new tutorial
This tutorial helps people get a basic understanding of how to use the semi-supervised learning codebase/benchmark [USB](https://github.com/microsoft/Semi-supervised-learning). We will show how to use the API provided in USB to train semi-supervised algorithms, e.g., FixMatch, on different data.
### Existing tutorials on this topic
Category: Extending PyTorch
Category: Image and Video
### Additional context
Invited by @carljparker as part of the PyTorch Docathon H2 2023.
Label: docathon-h2-2023
|
https://github.com/pytorch/tutorials/issues/2670
|
closed
|
[] | 2023-11-10T16:02:32Z
| 2023-12-07T15:57:32Z
| 0
|
Hhhhhhao
|
huggingface/diffusers
| 5,756
|
How do we generate an LCM LoRA for an existing model?
|
I generated a DreamBooth model from SDXL base 1.0
To get the speed boost of LCM, I need to generate an LCM LoRA from this model.
How do we do it? I don't see documentation.
|
https://github.com/huggingface/diffusers/issues/5756
|
closed
|
[
"stale"
] | 2023-11-10T15:44:52Z
| 2023-12-27T13:28:38Z
| null |
FurkanGozukara
|
pytorch/tutorials
| 2,669
|
💡 [REQUEST] - A Tutorial on Whole Slide Image Classification using PyTorch and TIAToolbox
|
### 🚀 Describe the improvement or the new tutorial
Whole Slide Images are the digital data format from which pathologists and computational pathology researchers investigate cancer growth. Due to their enormous image resolutions and file sizes (on the order of several gigabytes), conventional image processing methods do not work effectively. This is why we propose writing this tutorial: to (a) explain how to load WSIs using TIAToolbox, which helps process such slides with speed and efficiency using its pyramid stack structure, and (b) show how `torchvision` models can be used to analyse WSIs. We believe this tutorial will be useful to the PyTorch community, especially anyone interested in using PyTorch models to tackle cancer tissue research.
### Existing tutorials on this topic
The tutorial will be adapted from our [WSI classification example](https://tia-toolbox.readthedocs.io/en/latest/_notebooks/jnb/05-patch-prediction.html).
### Additional context
**Category: Image and Video**
Written by Tissue Image Analytics Centre (TIA) and invited by @carljparker as part of the PyTorch Docathon H2 2023.
cc @datumbox @nairbv @fmassa @NicolasHug @YosuaMichael @sekyondaMeta @svekars @carljparker @kit1980 @subramen @measty @behnazelhaminia @DavidBAEpstein @shaneahmed @msaroufim
|
https://github.com/pytorch/tutorials/issues/2669
|
closed
|
[
"module: vision",
"docathon-h2-2023"
] | 2023-11-10T14:32:47Z
| 2023-12-19T06:57:38Z
| 1
|
Abdol
|
huggingface/chat-ui
| 548
|
MaxListenersExceededWarning: Possible EventEmitter memory leak detected.
|
Running dev, there are no errors until I try to write into the chat interface on the website, locally hosted in WSL2 (Windows 11).
It worked before I updated to version v0.6.0.
Error message in the web UI:

Error message in terminal:
> root@xxxxxxxxx:/mnt/c/WSL/HuggingChat test/AI# npm run dev-chat-ui
>
> > ai@1.0.0 dev-chat-ui
> > cd ../chat-ui && npm run dev -- --host 0.0.0.0
>
>
> > chat-ui@0.6.0 dev
> > vite dev --host 0.0.0.0
>
>
>
> VITE v4.3.9 ready in 15775 ms
>
> ➜ Local: http://localhost:5173/
> ➜ Network: http://172.xx.142.227:5173/
> ➜ press h to show help
> (node:80446) **MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit**
> (Use `node --trace-warnings ...` to show where the warning was created)
> 2:44:12 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:
> |- TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Promise.all (index 0)
> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
>
> 2:44:12 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.ts: failed to import "/src/lib/server/websearch/sentenceSimilarity.ts"
> |- TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Promise.all (index 0)
> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
>
> 2:44:12 PM [vite] Error when evaluating SSR module /mnt/c/WSL/HuggingChat test/chat-ui/src/routes/conversation/[id]/+server.ts: failed to import "/src/lib/server/websearch/runWebSearch.ts"
> |- TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Promise.all (index 0)
> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
>
> TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Pr
|
https://github.com/huggingface/chat-ui/issues/548
|
closed
|
[
"support"
] | 2023-11-10T13:56:03Z
| 2023-11-16T20:02:07Z
| 7
|
patchie
|
huggingface/sentence-transformers
| 2,355
|
How to Fine-tune a CLIP Model with Custom Data
|
I want to train on my own custom data to get high-accuracy embeddings of my image data.
Are there any scripts or documentation that would be helpful?
Thank you.
|
https://github.com/huggingface/sentence-transformers/issues/2355
|
closed
|
[] | 2023-11-10T07:27:23Z
| 2023-12-25T03:23:20Z
| null |
unmo
|
huggingface/diffusers
| 5,742
|
Where is the parameter description?
|
https://github.com/huggingface/diffusers/issues/5742
|
closed
|
[] | 2023-11-10T07:07:03Z
| 2023-11-13T18:01:56Z
| null |
MRG-DOT
|
|
pytorch/vision
| 8,107
|
cannot install torch==2.0.0 torchvision==0.15.2
|
### 🐛 Describe the bug
For some reason, I cannot do:
```
pip install torch==2.0.0 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
```
But I can install them separately with `--no-deps` and `torchvision` seems to work just fine. Why is this the case? Isn't `torchvision==0.15` supposed to be compatible with `torch==2.0`?
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.18 | packaged by conda-forge | (default, Oct 10 2023, 15:44:36) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2299.882
CPU max MHz: 2300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4599.76
Virtualization: VT-x
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 6 MiB
L3 cache: 60 MiB
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi
mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchaudio 2.0.1+cu118 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
[conda] triton
|
https://github.com/pytorch/vision/issues/8107
|
closed
|
[] | 2023-11-10T02:13:06Z
| 2023-11-10T14:26:40Z
| 1
|
wemoveon2
|
huggingface/setfit
| 436
|
[question] Could you tell me the latest embedding model that is usable by SetFit?
|
Hi!
This is not a bug report but a question.
From my understanding, when we use SetFit, we have to choose an embedding model from sentence-transformers.
But now I feel those models are kind of old, and I would like to know the latest embedding model that can be used with SetFit.
Thank you in advance.
|
https://github.com/huggingface/setfit/issues/436
|
closed
|
[
"question"
] | 2023-11-10T02:10:01Z
| 2023-11-12T01:02:24Z
| null |
Yongtae723
|
pytorch/serve
| 2,780
|
example of integrating deepspeed fastgen into TorchServe
|
### 🚀 The feature
Provide an example of integrating deepspeed fastgen in TorchServe.
### Motivation, pitch
DeepSpeed-FastGen was published in DeepSpeed-MII.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/serve/issues/2780
|
open
|
[
"future",
"example"
] | 2023-11-09T19:32:46Z
| 2023-11-09T19:32:46Z
| 0
|
lxning
|
pytorch/xla
| 5,784
|
Is there a bug in the AllGather backprop algorithm?
|
https://github.com/pytorch/xla/blob/d5d023063bfa8ecb4629f621f9b5890bc8396f58/torch_xla/core/functions.py#L66C1-L66C1
In the aforementioned line, we see the class
```
class AllGather(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input, dim):
        ctx.dim = dim
        ctx.ordinal = xm.get_ordinal()
        ctx.world_size = xm.xrt_world_size()
        return xm.all_gather(input, dim=dim)

    @staticmethod
    def backward(ctx, grad_output):
        slice_size = grad_output.size(ctx.dim) // ctx.world_size
        return torch.narrow(grad_output.clone(), ctx.dim, ctx.ordinal * slice_size,
                            slice_size), None
```
I went to test this method with the following:
```
import torch
import os
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_backend

if __name__ == "__main__":
    dist.init_process_group('xla')
    device = xm.xla_device()
    rank = xm.get_ordinal()
    xla_ = True

    t = torch.arange(0, 8, 1, dtype=torch.float, requires_grad=True, device=device).view(2, 4).contiguous()
    t.retain_grad()
    # t1 = torch.narrow(t, 0, rank, 1).contiguous()
    # t2 = torch.narrow(t, 0, 1, 1).contiguous()
    t2 = torch.arange(0, 8, 1, dtype=torch.float, requires_grad=True, device=device).view(2, 4).contiguous()
    t2.retain_grad()
    tout = torch.matmul(t, t2.T)
    loss = tout.sum()
    loss.backward()
    res_t = t.grad.detach().cpu()

    tnew = torch.arange(0, 8, 1, dtype=torch.float, requires_grad=True, device=device).view(2, 4).contiguous()
    tnew = torch.narrow(tnew, 0, rank, 1)
    tnew = tnew.clone()
    tnew.retain_grad()
    t2n = torch.arange(4 * rank, (rank + 1) * 4, device=device, requires_grad=True, dtype=torch.float).contiguous()
    t2n.retain_grad()
    tnew2 = AllGather.apply(tnew)
    ton = torch.matmul(tnew2, t2n.T)
    loss = ton.sum()
    loss.backward()
    rest_tn = tnew.grad.detach().cpu()

    xm.rendezvous('completed')
    print(res_t)
    print(rest_tn)
```
I noticed that the results are not the same,
However, if I run
```
class AllGather(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input, dim):
        ctx.dim = dim
        ctx.ordinal = xm.get_ordinal()
        ctx.world_size = xm.xrt_world_size()
        return xm.all_gather(input, dim=dim)

    @staticmethod
    def backward(ctx, grad_output):
        slice_size = grad_output.size(ctx.dim) // ctx.world_size
        xm.reduce(xm.REDUCE_SUM, grad_output.contiguous())
        return torch.narrow(grad_output.clone(), ctx.dim, ctx.ordinal * slice_size,
                            slice_size), None
```
Then they are the same. Is there an issue with my code? I am trying to confirm that backprop is working properly?
|
https://github.com/pytorch/xla/issues/5784
|
open
|
[
"question",
"distributed"
] | 2023-11-09T18:30:24Z
| 2025-04-28T12:21:19Z
| null |
mathephysicist
|
pytorch/pytorch
| 113,370
|
Incorrect stride when permuting shapes where a zero dimension is present.
|
### 🐛 Describe the bug
I ran into a problem while permuting the following tensor (to convert into a complex dtype):
```python
>>> torch.view_as_complex(torch.empty(1,0,2,100,100).permute(0,1,3,4,2).contiguous())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Tensor must have a last dimension with stride 1
```
Upon further investigation I found that strides behave oddly when permuting with a zero dimension present.
Contrast the difference when `tensor.size(1) == 0` and `tensor.size(1) == 99`:
```python
>>> torch.empty(1,0,2,100,100).stride()
(20000, 20000, 10000, 100, 1)
>>> torch.empty(1,0,2,100,100).permute(0,1,3,4,2).contiguous().stride()
(20000, 20000, 100, 1, 10000)
>>> torch.empty(1,99,2,100,100).permute(0,1,3,4,2).contiguous().stride()
(1980000, 20000, 200, 2, 1)
```
Is this expected behavior?
**Notes:**
I am aware that there is no data at all if a dim is 0; I wouldn't have been surprised to observe a stride tuple containing all 0's or 1's. The latter, which would work with `view_as_complex`, would obviously be most convenient for me.
(My motivation for using 0 sized tensors is that it's often easier to work with an empty array `[]` value than working with a `None` value which requires null-checks all over the place.)
### Versions
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: NixOS 22.11 (Raccoon) (x86_64)
GCC version: (GCC) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Aug 1 2022, 20:38:21) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.114-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 520.56.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 72%
CPU max MHz: 5083.3979
CPU min MHz: 2200.0000
BogoMIPS: 6787.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant librarie
|
https://github.com/pytorch/pytorch/issues/113370
|
open
|
[
"triaged",
"module: edge cases",
"module: empty tensor"
] | 2023-11-09T17:16:14Z
| 2024-02-23T18:06:34Z
| null |
rehno-lindeque
|
huggingface/datasets
| 6,394
|
TorchFormatter images (H, W, C) instead of (C, H, W) format
|
### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something, but this makes the format a lot less useful, as I then have to permute it anyway.
Without the format I could apply torchvision transforms directly, but then any non-transformed column is not returned as a tensor.
Is there a reason for this choice?
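For now I work around it by permuting after retrieval; a minimal sketch of what I currently do:
```python
from datasets import Dataset, Features, Image

# same toy setup as in the reproduction below
images = ["path/to/image.png"] * 10
ds = Dataset.from_dict({"image": images}, features=Features({"image": Image()}))
ds = ds.with_format("torch")

# Workaround: move channels first manually after retrieval.
img = ds[0]["image"].permute(2, 0, 1)  # (C, H, W)
print(img.shape)
```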
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([512, 512, 4])
```
### Expected behavior
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([4, 512, 512])
```
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.18.0
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
|
https://github.com/huggingface/datasets/issues/6394
|
closed
|
[] | 2023-11-09T16:02:15Z
| 2024-04-11T12:40:16Z
| 9
|
Modexus
|
huggingface/transformers.js
| 386
|
[Question] Any plan to rewrite js in typescript ?
|
I'm doing it for my own usage, although I'm losing the benefit of upgrades.
Typings are useful, you know :)
While doing it I found this in models.js, line 1027:
```javascript
let sampledTokens = sampler(logits);
```
should be
```javascript
let sampledTokens = sampler.sample(logits);
```
|
https://github.com/huggingface/transformers.js/issues/386
|
closed
|
[
"question"
] | 2023-11-09T13:41:10Z
| 2023-11-15T18:18:39Z
| null |
pnocera
|
huggingface/candle
| 1,304
|
How to repeat_interleave on Tensor?
|
There is a [repeat_interleave](https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html) function in PyTorch, but I can't find an analog in candle.
I need to convert `tensor([[6110, 1]])` to `tensor([[6110, 1], [6110, 1], [6110, 1]])`.
I found some examples, [like this](https://github.com/huggingface/candle/blob/f772213e844fdfcc8dbaf662fc11819f4028dc78/candle-transformers/src/models/segment_anything/mask_decoder.rs#L234) and [this](https://github.com/huggingface/candle/blob/73d02f4f57c788c43f3e11991635bc15701c25c0/candle-transformers/src/models/mpt.rs#L137), but in my case the result is `tensor([6110, 6110, 6110, 1, 1, 1])`.
Looks like I'm doing something wrong :-D I expect the same result as the Python code in https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L3090C31-L3090C31
How can I reproduce the Python example in the current candle version?
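For reference, this is the PyTorch behaviour I'm trying to match (repeating along dim 0 rather than flattening):
```python
import torch

t = torch.tensor([[6110, 1]])
# Repeat each row 3 times along dim 0.
print(t.repeat_interleave(3, dim=0))
# tensor([[6110,    1],
#         [6110,    1],
#         [6110,    1]])
```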
|
https://github.com/huggingface/candle/issues/1304
|
closed
|
[] | 2023-11-09T06:31:04Z
| 2023-11-09T08:16:19Z
| null |
bragovo
|
huggingface/diffusers
| 5,709
|
How to run stable diffusion pipeline using multithreading in fastapi ?
|
Hi.. I have created a Stable Diffusion API using FastAPI and it works perfectly fine when sequential requests are made. I have tried to implement multithreading in the API to run multiple requests concurrently, but the problem is that every request's generation time depends on the total number of requests made. For example, if one request takes 5 secs to run, and 5 requests are made simultaneously, then it takes 5*5 = 25 secs for every request to get its output. After researching this problem, I learned that the GIL (Global Interpreter Lock) in Python allows only one thread to execute per process, so with multithreading we get the same throughput as a single thread for this workload. I have also tried multiprocessing to overcome this issue, but it loads a separate instance of the same model for each process, and it becomes very hard to fit all the models in 16 GB of RAM.
Do you know how to get the output in the same time for every request that is made? If 5 requests are made concurrently, then every request should get its output in about 5 seconds. Also, does the GPU configuration matter for getting results quickly as the number of requests grows?
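For context, here is a rough sketch of the micro-batching approach I'm considering: a single pipeline instance and a background worker that groups pending prompts into one batched call (the endpoint shape and names like `MAX_BATCH` are illustrative, and I'm not sure this is the recommended pattern):
```python
import asyncio
import torch
from fastapi import FastAPI
from diffusers import StableDiffusionPipeline

app = FastAPI()
queue: asyncio.Queue = asyncio.Queue()
MAX_BATCH = 4  # illustrative batch size
pipe = None

async def batch_worker():
    # Collect up to MAX_BATCH pending prompts and run them in one batched call.
    while True:
        prompt, future = await queue.get()
        prompts, futures = [prompt], [future]
        while len(prompts) < MAX_BATCH and not queue.empty():
            p, f = queue.get_nowait()
            prompts.append(p)
            futures.append(f)
        # Diffusers pipelines accept a list of prompts; run off the event loop.
        output = await asyncio.to_thread(pipe, prompts)
        for fut, image in zip(futures, output.images):
            fut.set_result(image)

@app.on_event("startup")
async def startup():
    global pipe
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    asyncio.create_task(batch_worker())

@app.post("/generate")
async def generate(prompt: str):
    future = asyncio.get_running_loop().create_future()
    await queue.put((prompt, future))
    image = await future
    return {"width": image.width, "height": image.height}
```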
GPU Configuration:
Nvidia 3050 8GB RAM
@sayakpaul @patrickvonplaten
|
https://github.com/huggingface/diffusers/issues/5709
|
closed
|
[
"stale"
] | 2023-11-08T16:19:45Z
| 2024-01-09T15:07:46Z
| null |
minkvirparia
|
huggingface/gsplat.js
| 23
|
How do you set up initial camera position?
|
When loading a splat file, I'd like to set the initial camera position to a specific location. How can this be achieved?
|
https://github.com/huggingface/gsplat.js/issues/23
|
closed
|
[
"enhancement",
"question"
] | 2023-11-08T16:04:04Z
| 2023-11-11T16:35:57Z
| null |
reconlabs-chris
|
huggingface/safetensors
| 381
|
Would a CLI to perform convert operation be useful?
|
### Feature request
Would it be possible to add to this repo a CLI tool that uses the library to take model files stored in different formats and convert them to safetensors?
It would also be useful to have a way to introspect a model from the command line and find some properties about it (layers, metadata, ...).
### Motivation
I'm frustrated when I have a lot of example models on my disk that I'm not too sure about, and I would like a quick and easy way to inspect them, convert them, compress them, and do all the tasks I need to perform straight from the command line, with completion support.
### Your contribution
I could contribute design suggestions about the interface but I have no particular knowledge of Rust and I'm learning transformers and ML in general.
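To make the idea concrete, here is a rough sketch of the kind of CLI I have in mind for the PyTorch case (the command name and flags are just suggestions, not an existing tool):
```python
import argparse

import torch
from safetensors.torch import save_file

def main():
    parser = argparse.ArgumentParser(description="Convert a torch checkpoint to safetensors")
    parser.add_argument("input", help="path to a .bin/.pt state dict")
    parser.add_argument("output", help="path of the .safetensors file to write")
    args = parser.parse_args()

    state_dict = torch.load(args.input, map_location="cpu")
    # safetensors requires contiguous tensors
    state_dict = {k: v.contiguous() for k, v in state_dict.items()}
    save_file(state_dict, args.output)

if __name__ == "__main__":
    main()
```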
|
https://github.com/huggingface/safetensors/issues/381
|
closed
|
[
"Stale"
] | 2023-11-08T15:39:02Z
| 2024-01-02T01:48:28Z
| 2
|
remyleone
|
huggingface/transformers
| 27,361
|
Add how to preprocess mask for finetuning with SAM
|
### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size expected as input for the SAM model.
For inference this works fine, as only the images need resizing, but for fine-tuning as per [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) you need to resize both your images and your masks, because the SAM model produces `pred_masks` of size 256x256. If I don't resize my masks I get `ground truth has different shape (torch.Size([2, 1, 768, 1024])) from input (torch.Size([2, 1, 256, 256]))` when trying to calculate the loss.
To fix this, I've currently written a resize and pad function into my code:
```
from PIL import Image
import numpy as np

def resize_mask(image):
    longest_edge = 256
    # get new size
    w, h = image.size
    scale = longest_edge * 1.0 / max(h, w)
    new_h, new_w = h * scale, w * scale
    new_h = int(new_h + 0.5)
    new_w = int(new_w + 0.5)
    resized_image = image.resize((new_w, new_h), resample=Image.Resampling.BILINEAR)
    return resized_image

def pad_mask(image):
    pad_height = 256 - image.height
    pad_width = 256 - image.width
    padding = ((0, pad_height), (0, pad_width))
    padded_image = np.pad(image, padding, mode="constant")
    return padded_image

def process_mask(image):
    resized_mask = resize_mask(image)
    padded_mask = pad_mask(resized_mask)
    return padded_mask
```
and then have added this to my definition of SAMDataset:
```
class SAMDataset(Dataset):
    def __init__(self, dataset, processor, transform=None):
        self.dataset = dataset
        self.processor = processor
        self.transform = transform

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        item = self.dataset[idx]
        if self.transform:
            image = self.transform(item["pixel_values"])
        else:
            image = item["pixel_values"]
        # get bounding box prompt
        padded_mask = process_mask(item["label"])
        prompt = get_bounding_box(padded_mask)
        # prepare image and prompt for the model
        inputs = self.processor(image, input_boxes=[[prompt]], return_tensors="pt")
        # remove batch dimension which the processor adds by default
        inputs = {k: v.squeeze(0) for k, v in inputs.items()}
        # add ground truth segmentation
        inputs["ground_truth_mask"] = padded_mask
        return inputs
```
This seems to work fine.
What I think would be good is to allow input of masks in the SAM image processor. For example, the [Segformer image processor](https://github.com/huggingface/transformers/blob/v4.35.0/src/transformers/models/segformer/image_processing_segformer.py#L305) takes images and masks as inputs and resizes both to the size expected by the Segformer model.
I have also seen there is a 'post_process_mask' method in the SAM image processor but I am unsure how to implement this in the tutorial I'm following. If you think this is a better way vs. what I am suggesting then please could you explain where I would add this in the code from the tutorial notebook.
### Motivation
Easier fine tuning of SAM model.
### Your contribution
I could try to write a PR for this and/or make a PR to update the [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) instead.
|
https://github.com/huggingface/transformers/issues/27361
|
closed
|
[
"Feature request",
"Vision"
] | 2023-11-08T11:53:31Z
| 2024-01-08T16:40:38Z
| null |
rwood-97
|
huggingface/chat-ui
| 546
|
Custom Theme
|
I want to change the UI layout yet still be able to update the code in order to enjoy the new features as they are released.
Is there a way to add my changes in a way similar to a theme, or as an external add-on?
|
https://github.com/huggingface/chat-ui/issues/546
|
closed
|
[] | 2023-11-08T08:26:43Z
| 2023-11-15T09:32:22Z
| 2
|
kaplanyaniv
|
pytorch/executorch
| 1,162
|
How to deploy llama2 on Qualcomm Snapdragon chips through ExecuTorch๏ผ
|
Excuse me, if I need to deploy Llama 2 on a Qualcomm Snapdragon chip through ExecuTorch and want to use the NPU as the inference compute unit, what do I need to do?
The chip spec I'm currently using is SG885G-WF: https://www.quectel.com/product/wi-fi-bt-sg885g-wf-smart-module
|
https://github.com/pytorch/executorch/issues/1162
|
closed
|
[
"need-user-input",
"partner: qualcomm",
"triaged"
] | 2023-11-07T12:32:59Z
| 2025-02-03T18:21:13Z
| null |
tensorflowt
|
huggingface/datasets
| 6,388
|
How to create 3d medical imgae dataset?
|
### Feature request
I am new to Hugging Face. After looking through the `datasets` docs, I can't find how to create a dataset that contains 3D medical images (files ending with '.mhd', '.dcm', '.nii').
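To show what I'm after, here is a rough sketch of what I've been trying: reading a NIfTI volume with `nibabel` (my assumption, not an official recommendation) into an `Array3D` feature. The file names and the 512x512 slice shape are just placeholders:
```python
import numpy as np
import nibabel as nib  # assumed third-party reader for .nii files
from datasets import Dataset, Features, Array3D

def load_volume(path):
    # Load a NIfTI file and return it as a float32 numpy volume (D, H, W).
    return np.asarray(nib.load(path).get_fdata(), dtype=np.float32)

paths = ["scan_001.nii", "scan_002.nii"]  # illustrative file names
features = Features({"volume": Array3D(shape=(None, 512, 512), dtype="float32")})
ds = Dataset.from_dict({"volume": [load_volume(p) for p in paths]}, features=features)
print(ds)
```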
### Motivation
Help us upload 3D medical datasets to Hugging Face!
### Your contribution
I'll submit a PR if I find a way to add this feature
|
https://github.com/huggingface/datasets/issues/6388
|
open
|
[
"enhancement"
] | 2023-11-07T11:27:36Z
| 2023-11-07T11:28:53Z
| null |
QingYunA
|
huggingface/datasets
| 6,387
|
How to load existing downloaded dataset ?
|
Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
- data
  |- data_name
     |- test-00000-of-00001-bf4c733542e35fcb.parquet
     |- train-00000-of-00001-2a1df75c6bce91ab.parquet
```
Then I use SCP to clone this dataset into another machine, and then try:
```
from datasets import load_dataset
dataset = load_dataset('data/data_name') # load from local path
```
This regenerates the training and validation splits every time, and the disk usage gets duplicated.
How can I just load the dataset without generating and saving these splits again?
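For reference, the workaround I'm currently using is to point the parquet builder at the files directly, which avoids regenerating the splits (as far as I can tell):
```python
from datasets import load_dataset

data_files = {
    "train": "data/data_name/train-00000-of-00001-2a1df75c6bce91ab.parquet",
    "test": "data/data_name/test-00000-of-00001-bf4c733542e35fcb.parquet",
}
dataset = load_dataset("parquet", data_files=data_files)
```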
### Motivation
I do not want to download the same dataset on two machines; scp is much faster and better than the HuggingFace API. I hope we can directly load the already-downloaded datasets (.parquet).
### Your contribution
Please refer to the feature
|
https://github.com/huggingface/datasets/issues/6387
|
closed
|
[
"enhancement"
] | 2023-11-06T22:51:44Z
| 2023-11-16T18:07:01Z
| null |
liming-ai
|
huggingface/gsplat.js
| 15
|
Does it work with polycam models?
|
Hello! Thank you for your work, it looks very promising. Got it working with the README file... Just tried it with a .ply object out of polycam and got error
```
Uncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4
at new Float32Array (<anonymous>)
at R.setData (Scene.ts:43:25)
at W.LoadAsync (Loader.ts:31:15)
at async main (main.ts:11:5)
```
With what file types is it compatible? Thanks!
|
https://github.com/huggingface/gsplat.js/issues/15
|
closed
|
[
"question"
] | 2023-11-06T21:15:51Z
| 2023-11-10T18:26:55Z
| null |
karen-pal
|
pytorch/tutorials
| 2,655
|
Why multiply sqrt(d_model) before TransformerEncoderLayer?
|
Hi,
Thank you so much for the tutorial! I notice that in https://github.com/pytorch/tutorials/blob/main/beginner_source/transformer_tutorial.py#L92, you multiply sqrt(d_model) before TransformerEncoderLayer. May I ask why we need to do this?
Thanks!
|
https://github.com/pytorch/tutorials/issues/2655
|
closed
|
[
"question"
] | 2023-11-06T19:48:45Z
| 2023-11-06T20:13:28Z
| null |
yuzhenmao
|
huggingface/chat-ui
| 545
|
Chat-UI throws an 403 forbidden when access settings
|
When viewing the settings page after first setup, the settings page gives the error ```Failed to load resource: the server responded with a status of 403 (Forbidden) settings:1``` in the console, without any explanation of what went wrong or why.
Setup:
```yaml
services:
  # Chat ui webserver
  chat-ui:
    container_name: chat
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8080:3000
    networks:
      default:
        ipv4_address: 172.25.0.2
  # Mongo database
  database:
    container_name: mongo-chatui
    image: "mongo:latest"
    ports:
      - 27017:27017
    restart: always
    environment:
      - MONGO_INITDB_DATABASE=chat-ui
    networks:
      default:
        ipv4_address: 172.25.0.3
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.25.0.0/28
          gateway: 172.25.0.1
```
And my .env.local:
```
MONGODB_URL=mongodb://172.25.0.3:27017
PUBLIC_ORIGIN=http://localhost:3030
HF_ACCESS_TOKEN=recacted
MODELS=recated
```
What are the steps to take here?
The database connection gets accepted according to the MongoDB instance.
|
https://github.com/huggingface/chat-ui/issues/545
|
closed
|
[
"support"
] | 2023-11-06T15:09:33Z
| 2024-02-15T21:03:04Z
| 5
|
IT-Guy007
|
pytorch/audio
| 3,688
|
Why does `transforms.TimeStretch` return of type `complex64`?
|
### 🐛 Describe the bug
Good day!
https://pytorch.org/audio/2.1.0/generated/torchaudio.transforms.TimeStretch.html#torchaudio.transforms.TimeStretch.forward:
> Stretched spectrogram. The resulting tensor is of the same dtype as the input spectrogram, but the number of frames is changed to `ceil(num_frame / rate)`.
But:
```
s = torchaudio.transforms.Spectrogram()(x)
s.dtype # => torch.float32
t = torchaudio.transforms.TimeStretch(fixed_rate=0.9)(s)
t.dtype # => torch.complex64
```
Should I file a bug report, or am I misunderstanding time stretching?
(previously posted [at the forum](https://discuss.pytorch.org/t/why-does-transforms-timestretch-return-complex64/191208))
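For what it's worth, the dtypes line up for me if I build a complex spectrogram explicitly with `power=None` (assuming that is the intended usage, which is part of my question):
```python
import torch
import torchaudio

x = torch.randn(1, 16000)

# power=None keeps the complex STFT, which is what TimeStretch operates on.
spec = torchaudio.transforms.Spectrogram(power=None)(x)
print(spec.dtype)  # torch.complex64

stretched = torchaudio.transforms.TimeStretch(fixed_rate=0.9)(spec)
print(stretched.dtype)  # torch.complex64

magnitude = stretched.abs()  # back to a real-valued magnitude spectrogram
print(magnitude.dtype)  # torch.float32
```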
### Versions
torchaudio 2.1.1 from Google Colab
|
https://github.com/pytorch/audio/issues/3688
|
closed
|
[] | 2023-11-05T12:02:57Z
| 2023-11-10T10:25:51Z
| 4
|
kuraga
|
huggingface/alignment-handbook
| 9
|
How to finetune or lora on custom dataset
|
How can I fine-tune or apply LoRA on a custom dataset?
|
https://github.com/huggingface/alignment-handbook/issues/9
|
open
|
[] | 2023-11-05T02:38:33Z
| 2024-11-11T07:52:57Z
| null |
universewill
|
huggingface/peft
| 1,080
|
Add docs on how to merge adapters after 4bit QLoRA with PEFT 0.6
|
### Feature request
There has been some controversy on how to correctly **merge the adapters with the base model after 4-bit LoRA** training.
To me it seems there are two ways to merge and save:
- ChrisHayduk https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930
- TheBloke https://github.com/TheBlokeAI/AIScripts/blob/main/merge_peft_adapters.py
What is the correct way to merge the adapters now (with PEFT 0.6 and [PR 851](https://github.com/huggingface/peft/pull/851) merged) after training a 4-bit quantized model ?
### Motivation
No docs, at least I haven't found any.
### Your contribution
example:
**quantize and train**
```
modelpath="models/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
load_in_4bit=True,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
),
torch_dtype=torch.bfloat16,
)
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r=64,
lora_alpha=16,
target_modules =
['q_proj',
'k_proj',
'down_proj',
'v_proj',
'gate_proj',
'o_proj',
'up_proj'],
lora_dropout=0.1,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
train ...
```
**merge and save**
```
base_model = AutoModelForCausalLM.from_pretrained(
"models/Mistral-7B-v0.1",
return_dict=True,
torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base_model, "some-checkpoint")
model = model.merge_and_unload()
model.save_pretrained(args.out, safe_serialization=True)
```
Is this the proper way to do it? If yes/no, it would be nice to have this documented somewhere! 🤗
|
https://github.com/huggingface/peft/issues/1080
|
closed
|
[] | 2023-11-04T10:07:16Z
| 2023-11-17T22:22:06Z
| null |
geronimi73
|
huggingface/huggingface_hub
| 1,801
|
Entire operation get cancelled when 1 file fails when using api.upload_folder - how to make it iterative
|
I am using the code below. I uploaded roughly 80 GB of files and the entire operation failed just because one PNG failed to upload for some reason.
I see the uploaded repo has 0 changes.
How can I make it iterative, so that after each file upload it is committed to the repo?
I don't need commit or file history. Just upload newer files and overwrite if newer.
```
from huggingface_hub import HfApi
api = HfApi()
# Upload all the content from the local folder to your remote Space.
# By default, files are uploaded at the root of the repo
api.upload_folder(
folder_path="/workspace/path",
repo_id="username/repo",
repo_type="model",
)
```
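The workaround I'm considering is uploading file by file, so one failure doesn't discard everything already transferred (a rough sketch, not sure it's the recommended approach):
```python
import os
from huggingface_hub import HfApi

api = HfApi()
folder = "/workspace/path"

for root, _, files in os.walk(folder):
    for name in files:
        local_path = os.path.join(root, name)
        path_in_repo = os.path.relpath(local_path, folder)
        try:
            # One commit per file, so progress survives individual failures.
            api.upload_file(
                path_or_fileobj=local_path,
                path_in_repo=path_in_repo,
                repo_id="username/repo",
                repo_type="model",
            )
        except Exception as err:
            print(f"failed to upload {path_in_repo}: {err}")
```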
### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.16.4
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: ME
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1+cu118
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.5.0
- hf_transfer: N/A
- gradio: 3.41.2
- tensorboard: N/A
- numpy: 1.23.5
- pydantic: 1.10.12
- aiohttp: 3.8.5
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
|
https://github.com/huggingface/huggingface_hub/issues/1801
|
closed
|
[
"bug"
] | 2023-11-04T00:20:00Z
| 2023-11-26T09:09:35Z
| null |
FurkanGozukara
|
pytorch/xla
| 5,768
|
How to provide sharding annotation for MpDeviceLoader when data has different dimensions
|
## ❓ Questions and Help
Let's say my dataloader yields a dict when iterated over, and the members of this dict have different dimensions:
```python
{
"input_ids": shape = (batch, seq),
"masks": shape = (batch, seq, seq),
}
```
`pl.MpDeviceLoader` appears to only be able to provide one sharding annotation. I'm currently using it like this:
```python
data_loader = pl.MpDeviceLoader(
data_loader,
dev,
input_sharding=xs.ShardingSpec(mesh, ('data', None, None)))
```
Obviously, ('data', None, None) is not valid for `input_ids` which has only 2 dimensions. But this seems to work. I wonder what's the proper way of using `MpDeviceLoader` in this case.
|
https://github.com/pytorch/xla/issues/5768
|
closed
|
[
"question",
"distributed"
] | 2023-11-03T20:43:19Z
| 2025-04-28T12:30:11Z
| null |
hanzhi713
|
pytorch/pytorch
| 112,876
|
How to handle CVE vulnerabilities in underlying operating system?
|
Hello,
The base images for CUDA are pretty old (2.1.0-cuda11.8 was pushed more than a month ago). How should we proceed to get the latest security updates from the Ubuntu base image?
|
https://github.com/pytorch/pytorch/issues/112876
|
open
|
[
"triaged",
"module: docker",
"security"
] | 2023-11-03T17:32:14Z
| 2023-11-06T22:34:04Z
| null |
bjorn-ali-goransson
|
huggingface/transformers.js
| 378
|
Security issue - content security policy - script unsafe-eval
|
Context:
I use the @xenova/transformers 2.6.2 npm package from a web application to do image classification. Here is the gist of my setup:
```js
const modelPath = 'own-domain/models-and-wasm/'
env.localModelPath = "/";
env.useBrowserCache = true;
env.backends.onnx.wasm.wasmPaths = modelPath;
const classifier = await pipeline("image-classification", modelPath, { quantized: true });
const output = await classifier(imagePath, { topk: 5 });
```
Everything works code-wise but when I remove unsafe-inline in CSP, it fails with this warning in the browser console:
```js
Failed to asynchronously prepare wasm:
CompileError: WebAssembly.instantiate(): Refused to compile or instantiate WebAssembly module because 'unsafe-eval' is not an allowed source of script in the following Content Security Policy directive
```
I **cannot** allow script-src: unsafe-eval in my web application (corporate rules). Do I have any alternatives?
|
https://github.com/huggingface/transformers.js/issues/378
|
open
|
[
"question"
] | 2023-11-03T13:50:30Z
| 2023-11-06T13:44:57Z
| null |
stiano
|
huggingface/diffusers
| 5,643
|
How to use the ip adapter controlnet?
|
Hi, I can't use this specific controlnet because it's from here: https://huggingface.co/lllyasviel/sd_control_collection/tree/main
and the format doesn't allow from_pretrained. When I use from_single_file, I get:
```
stable_diffusion/convert_from_ckpt.py", line 422, in convert_ldm_unet_checkpoint
new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
```
I used this to get the error:
`ControlNetModel.from_single_file("./ip-adapter_sd15_plus.pth", torch_dtype=torch.float32,local_files_only=True).to('cuda')`
A similar error was raised before and the response was: "just don't use from_single_file" https://github.com/huggingface/diffusers/issues/5577
|
https://github.com/huggingface/diffusers/issues/5643
|
closed
|
[] | 2023-11-03T13:34:44Z
| 2023-11-13T15:12:29Z
| null |
alexblattner
|
huggingface/dataset-viewer
| 2,050
|
Should we support video datasets?
|
Like https://huggingface.co/datasets/commaai/commavq
There was a previous intent in datasets: https://github.com/huggingface/datasets/pull/5339
|
https://github.com/huggingface/dataset-viewer/issues/2050
|
closed
|
[
"question",
"feature request"
] | 2023-11-03T13:33:00Z
| 2023-12-11T15:04:08Z
| null |
severo
|
huggingface/distil-whisper
| 16
|
How to use ONNX model?
|
Hello there,
I'm interested in using the ONNX model, as I saw that you are providing the weights for it.
I tried to use it with `optimum` library, but didn't manage to make it work.
Could someone point me in the right direction?
Thank you so much for this repository and the work you put into it. It really helps!!
### Note:
here is what I tried
```
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "distil-whisper/distil-large-v2"
model = ORTModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, encoder_file_name=f"encoder_model.onnx"
)
```
Here is the error:
```
RuntimeError: Too many ONNX model files were found in distil-whisper/distil-large-v2, specify which one to load by using the encoder_file_name argument.
```
|
https://github.com/huggingface/distil-whisper/issues/16
|
open
|
[] | 2023-11-03T11:51:44Z
| 2023-11-07T07:36:50Z
| null |
H-G-11
|
huggingface/dataset-viewer
| 2,049
|
Retry jobs that finish with `ClientConnection` error?
|
Maybe here: https://github.com/huggingface/datasets-server/blob/f311a9212aaa91dd0373e5c2d4f5da9b6bdabcb5/chart/env/prod.yaml#L209
Internal conversation on Slack: https://huggingface.slack.com/archives/C0311GZ7R6K/p1698224875005729
Anyway: I'm wondering if we can have the error now that the dataset scripts are disabled by default.
|
https://github.com/huggingface/dataset-viewer/issues/2049
|
closed
|
[
"question",
"improvement / optimization",
"P2"
] | 2023-11-03T11:28:19Z
| 2024-02-06T17:29:45Z
| null |
severo
|
huggingface/transformers.js
| 377
|
GPU Acceleration to increase performance
|
Do we have any option to use the GPU to increase the performance of model loading and detection?
Currently, object detection takes around 10 seconds. If we want to do this on the GPU, can we do that?
Running the lines below through a web worker improves the overall UI experience but does not increase performance.
```
const model = await pipeline("object-detection", "Xenova/detr-resnet-50");
const result = await model(img, { threshold: 0.9 });
```
Can we use GPU for that?
|
https://github.com/huggingface/transformers.js/issues/377
|
closed
|
[
"question"
] | 2023-11-03T07:44:05Z
| 2024-10-18T13:30:08Z
| null |
milind-yadav
|
pytorch/serve
| 2,766
|
How to auto-scale model replicas in a single GPU based EC2 instance based on number-of-requests-in-queue ?
|
Hi team, I mainly had 1 question and 1 observation:
---
### **Question:**
- **I was not able to locate any resource explaining ways to auto-scale ML model in torch-serve on single GPU instance.**
- I had a look at the model configuration documentation, which explains the two parameters min-workers and max-workers, where each worker has one model loaded. I also had a look at this issue: https://github.com/pytorch/serve/issues/714 where the **ts_queue_latency_microseconds** flag was explained for auto-scaling in a Kubernetes cluster.
#### **_But what I need is:_**
> A way to load more replicas of the model in the same instance based on certain conditions like: number-of-requests-in-queue or something similar.
> **Assumption**: There is sufficient amount of GPU memory remaining and GPU utilization is not 100%
---
### **Observation:** The Problem I faced:
- I hosted the simple **MNIST classifier example** provided in torch-serve tutorials on a **T4 GPU (G4dn EC2 Instance**) and load tested it using the **Locust Application**
- I had set the max-workers to 2 and min-workers to 1. The batch-size was set to 1.
- With the help of Locust Application, I gradually sent 1000 requests per sec to the model server.
- I observed the GPU and CPU memory and compute utilization:
- GPU memory utilization was less than 3% because the ML model is very small. The compute utilization was also less than 5%.
- Even CPU memory and compute was not utilized at max (was higher than GPU but less than 20% of total availability)
- **Problem**:
- **The model server did not process all the requests. At any given point, it only responded to ~700-750 requests and the remaining requests were discarded/dropped**
- I don't think the model got replicated as a 2nd worker, because the GPU memory and compute utilization stayed very low.
---
Please let me know if there are any good resources to refer to, and how to auto-scale based on the **ts_queue_latency_microseconds** flag on a single-GPU instance.
|
https://github.com/pytorch/serve/issues/2766
|
closed
|
[
"triaged"
] | 2023-11-02T16:03:49Z
| 2023-11-26T18:39:03Z
| null |
yogendra-yatnalkar
|
pytorch/serve
| 2,765
|
How to auto-scale model replicas in a single GPU based EC2 instance based on time_of_request_in_queue
|
https://github.com/pytorch/serve/issues/2765
|
closed
|
[] | 2023-11-02T15:39:16Z
| 2023-11-02T17:46:34Z
| null |
yogendra-yatnalkar
|
|
huggingface/distil-whisper
| 11
|
[Speculative Decoding] How to run speculative decoding for batch_size > 1?
|
Transformers 4.35 only supports speculative decoding for batch size == 1. In order to use speculative decoding for batch size > 1, please make sure to use this branch: https://github.com/huggingface/transformers/pull/26875
To do so, you need to install transformers as follows:
```
pip install git+https://github.com/huggingface/transformers.git@assistant_decoding_batch
```
and then you can run:
```py
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
assistant_model_id = "distil-whisper/distil-large-v2"
assistant_model = AutoModelForCausalLM.from_pretrained(
assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)
model_id = "openai/whisper-large-v2"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
generate_kwargs={"assistant_model": assistant_model},
torch_dtype=torch_dtype,
chunk_length_s=15,
batch_size=4,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "default", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
The PR will be merged to Transformers soon.
**Note**: Given the "speculative" nature of assistant decoding (*a.k.a* speculative decoding), it is not recommended to make use of speculative decoding for batch sizes higher than 4 as this might actually lead to the transcription pipeline being slower compared to just using the teacher model.
Confer with Table 22 of [the paper](https://arxiv.org/pdf/2311.00430.pdf).
|
https://github.com/huggingface/distil-whisper/issues/11
|
open
|
[] | 2023-11-02T14:19:55Z
| 2024-10-03T13:12:22Z
| null |
patrickvonplaten
|
pytorch/vision
| 8,090
|
to_pil_image different results depending on numpy/torch input
|
### 🐛 Describe the bug
to_pil_image has different behaviour depending on torch or numpy input. This is not documented as far as I can see. There is a note that numpy is expected to be HWC, whereas torch is expected to be CHW, but that's not relevant here.
```python
import torch
from torchvision.transforms.functional import to_pil_image
a = torch.rand((100, 101))
print(to_pil_image(a).mode)
# L
print(to_pil_image(a.numpy()).mode)
# F
```
This is not documented, nor is there any warning, so errors due to this are hard to track down. The problematic code is this section:
```python
if isinstance(pic, torch.Tensor):
    if pic.is_floating_point() and mode != "F":
        pic = pic.mul(255).byte()
```
in which the torch.tensor is rescaled. Can we mirror functionality for numpy arrays? `(pic * 255).round().astype(np.uint8)`
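As a caller-side workaround, I currently convert numpy floats to uint8 myself before calling `to_pil_image`; a small sketch of what mirroring the tensor path would look like:
```python
import numpy as np
from torchvision.transforms.functional import to_pil_image

a = np.random.rand(100, 101).astype(np.float32)
# Mirror the tensor path: scale to [0, 255] and cast to uint8, which yields mode "L".
img = to_pil_image((a * 255).round().astype(np.uint8))
print(img.mode)  # L
```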
### Versions
all versions
|
https://github.com/pytorch/vision/issues/8090
|
closed
|
[] | 2023-11-02T12:46:29Z
| 2023-11-08T08:51:45Z
| 5
|
rb-synth
|
huggingface/chat-ui
| 542
|
Request: more clarity on JSON response from custom models
|
Note: duplicate from https://huggingface.co/spaces/huggingchat/chat-ui/discussions/309, not sure which is the proper place to post.
I followed the guide chat-ui to deploy a version in gcp, and I love the chat interface.
I would love to hook it up to one of my custom models, so I specified
```
"endpoints": [{"url": "[http://127.0.0.1:8000"}]](http://127.0.0.1:8000"%7D%5D/)
}
]`
```
for MODELS as suggested.
I receive the message that has been posted in the web interface at my endpoint, but I am unable to send back the proper json response. So far, in python, I do:
```
response_content = [
    {
        "generated_text": "Please show this response."
    }
]
response = make_response(jsonify(response_content))
return response
```
It is received in the chat-ui code (confirmed by injecting console.log statements), but it doesn't show in the browser conversation.
Can someone please clarify what json (content, headers, whatever is needed) I need to send from my custom model endpoint as a response to the chat-ui interface? Or if this is the wrong place to ask, tell me where I should ask?
|
https://github.com/huggingface/chat-ui/issues/542
|
open
|
[
"support"
] | 2023-11-02T10:31:53Z
| 2023-11-03T19:44:02Z
| 1
|
thubreg
|
huggingface/distil-whisper
| 8
|
Where is the model?
|
Link to HF leads to empty files section.
|
https://github.com/huggingface/distil-whisper/issues/8
|
closed
|
[] | 2023-11-02T08:47:23Z
| 2023-11-02T17:31:08Z
| null |
lkmdhertg
|
pytorch/xla
| 5,762
|
how to use torch-xla with huggingface transformers
|
## ❓ Questions and Help
I am fine-tuning a model provided by Hugging Face. I modified the model from plain PyTorch to torch-xla and ran it, but it freezes when running. Is there something wrong here?
The dataset is as follows:
https://github.com/zyds/transformers-code/blob/master/01-Getting%20Started/04-model/ChnSentiCorp_htl_all.csv
The PyTorch code is as follows:
```
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from torch.optim import Adam
from torch.utils.data import Dataset
from torch.utils.data import random_split
from torch.utils.data import DataLoader

class MyDataset(Dataset):
    def __init__(self, data_path) -> None:
        super().__init__()
        self.data = pd.read_csv(data_path)
        self.data = self.data.dropna()

    def __getitem__(self, index):
        return self.data.iloc[index]["review"], self.data.iloc[index]["label"]

    def __len__(self):
        return len(self.data)

if __name__ == "__main__":
    dataset = MyDataset('./ChnSentiCorp_htl_all.csv')
    trainset, validset = random_split(dataset, lengths=[0.9, 0.1])
    tokenizer = AutoTokenizer.from_pretrained("rbt3")

    def collate_func(batch):
        texts, labels = [], []
        for item in batch:
            texts.append(item[0])
            labels.append(item[1])
        inputs = tokenizer(texts, max_length=128, padding="max_length", truncation=True, return_tensors="pt")
        inputs["labels"] = torch.tensor(labels)
        return inputs

    trainloader = DataLoader(trainset, batch_size=32, shuffle=True, collate_fn=collate_func)
    validloader = DataLoader(validset, batch_size=64, shuffle=False, collate_fn=collate_func)
    model = AutoModelForSequenceClassification.from_pretrained("./rbt3/")
    if torch.cuda.is_available():
        model = model.cuda()
    optimizer = Adam(model.parameters(), lr=2e-5)

    def evaluate():
        model.eval()
        acc_num = 0
        with torch.inference_mode():
            for batch in validloader:
                if torch.cuda.is_available():
                    batch = {k: v.cuda() for k, v in batch.items()}
                output = model(**batch)
                pred = torch.argmax(output.logits, dim=-1)
                acc_num += (pred.long() == batch["labels"].long()).float().sum()
        return acc_num / len(validset)

    def train(epoch=3, log_step=100):
        global_step = 0
        for ep in range(epoch):
            model.train()
            for batch in trainloader:
                if torch.cuda.is_available():
                    batch = {k: v.cuda() for k, v in batch.items()}
                optimizer.zero_grad()
                output = model(**batch)
                output.loss.backward()
                optimizer.step()
                if global_step % log_step == 0:
                    print(f"ep: {ep}, global_step: {global_step}, loss: {output.loss.item()}")
                global_step += 1
            acc = evaluate()
            print(f"ep: {ep}, acc: {acc}")

    train()
```
The torch-xla version of the model is as follows; the run command is 'PJRT_DEVICE=CUDA python classification_demo_xla.py':
```
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from torch.optim import Adam
from torch.utils.data import Dataset
from torch.utils.data import random_split
from torch.utils.data import DataLoader
import torch_xla
from torch_xla import runtime as xr
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.distributed.parallel_loader as pl

class MyDataset(Dataset):
    def __init__(self, data_path) -> None:
        super().__init__()
        self.data = pd.read_csv(data_path)
        self.data = self.data.dropna()

    def __getitem__(self, index):
        return self.data.iloc[index]["review"], self.data.iloc[index]["label"]

    def __len__(self):
        return len(self.data)

if __name__ == "__main__":
    dataset = MyDataset('./ChnSentiCorp_htl_all.csv')
    trainset, validset = random_split(dataset, lengths=[0.9, 0.1])
    tokenizer = AutoTokenizer.from_pretrained("rbt3")

    def collate_func(batch):
        texts, labels = [], []
        for item in batch:
            texts.append(item[0])
            labels.append(item[1])
        inputs = tokenizer(texts, max_length=128, padding="max_length", truncation=True, return_tensors="pt")
        inputs["labels"] = torch.tensor(labels)
        return inputs

    train_loader = DataLoader(trainset, batch_size=32, shuffle=True, collate_fn=collate_func)
    valid_loader = DataLoader(validset, batch_size=64, shuffle=False, collate_fn=collate_func)
    model = AutoModelForSequenceClassification.from_pretrained("./rbt3/")
    device = xm.xla_device()
    model = model.to(device)
    print('model device:', model.device)
    optimizer = Ada
|
https://github.com/pytorch/xla/issues/5762
|
closed
|
[] | 2023-11-02T08:43:46Z
| 2023-11-03T01:34:51Z
| null |
markc-614
|
huggingface/candle
| 1,241
|
How to reduce memory usage of backpropagation?
|
I implemented the [tiny NeRF example](https://github.com/bmild/nerf/blob/master/tiny_nerf.ipynb) using `candle` here: https://github.com/laptou/nerfy/blob/fc50dbd61c4012d1f12f556a72474b59a8b3c158/examples/tiny_nerf.rs
The example, which is written using TensorFlow, runs fine on my laptop. My `candle` implementation consumes all available memory on my laptop, which crashes my desktop session if I use CPU and errors out with a CUDA memory allocation error if I use the GPU. I'm running on a laptop with 32 GB of RAM, 32 GB of swap, and an RTX A3000 w/ 12 GB of VRAM.
I'm barely able to run it on CPU if I decrease the hidden layer size from 256 to 64.

I tracked the memory allocations using `heaptrack`, and it seems like most of them are related to keeping track of the operations for backpropagation.
Can you spot any obvious issues in my implementation that are causing it to consume so much memory? Is there a way that I can disable or reduce this behavior in some parts of the code to reduce the amount of memory that it uses?
|
https://github.com/huggingface/candle/issues/1241
|
open
|
[] | 2023-11-02T03:38:32Z
| 2025-09-10T05:14:01Z
| null |
laptou
|
huggingface/candle
| 1,240
|
Demo showing how to load in candle computer vision model using webcam
|
```
use anyhow::Result; // Automatically handle the error types
use opencv::{
    prelude::*,
    videoio,
    highgui
}; // Note, the namespace of OpenCV is changed (to better or worse). It is no longer one enormous.

fn main() -> Result<()> { // Note, this is anyhow::Result
    // Open a GUI window
    highgui::named_window("window", highgui::WINDOW_FULLSCREEN)?;
    // Open the web-camera (assuming you have one)
    let mut cam = videoio::VideoCapture::new(0, videoio::CAP_ANY)?;
    let mut frame = Mat::default(); // This array will store the web-cam data
    // Read the camera
    // and display in the window
    loop {
        cam.read(&mut frame)?;
        highgui::imshow("window", &frame)?;
        let key = highgui::wait_key(1)?;
        if key == 113 { // quit with q
            break;
        }
    }
    Ok(())
}
```
Here is a basic example of opening Qt using Opencv-rust.
It would be great to have a working example using this alongside candle!
Open to submitting this as a pr in any of the example folders.
|
https://github.com/huggingface/candle/issues/1240
|
open
|
[] | 2023-11-02T03:38:19Z
| 2023-11-02T06:24:11Z
| null |
bazylhorsey
|
huggingface/candle
| 1,239
|
How inference on a new model, have to hand written model.rs manually?
|
Just wondering if there are scripts to convert a .pth or ONNX model to a candle format, maybe?
|
https://github.com/huggingface/candle/issues/1239
|
closed
|
[] | 2023-11-02T03:32:11Z
| 2023-11-02T07:03:54Z
| null |
lucasjinreal
|
huggingface/safetensors
| 375
|
How do I load the tensors in Rust?
|
Hi,
I am unable to find good documentation for reading the weights in Rust. I want to write GPT-2 from scratch and be able to load the HF weights. Since I only plan to use the ndarray library, I want to be able to load the FP32 tensors somehow. Please help.
In python I do:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
import safetensors.torch
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
safetensors.torch.save_model(model, 'gpt2_weights.st')
```
I want to use some code like this in rust (which is currently incorrect because safetensors doesn't have a Reader) and I am unable to figure out the API.
```rust
use safetensors::Reader;
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let reader = Reader::from_file("gpt2_weights.st")?;
    for (name, tensor) in reader.tensors() {
        println!("Tensor name: {}", name);
        let tensor = tensor?;
        println!("Shape: {:?}", tensor.shape());
    }
    Ok(())
}
```
|
https://github.com/huggingface/safetensors/issues/375
|
closed
|
[
"Stale"
] | 2023-11-02T02:11:11Z
| 2024-01-02T01:48:31Z
| 5
|
arunpatro
|
huggingface/safetensors
| 374
|
safetensor.*.save_file the parameter name to set the incoming tensors change from "tensors" to "tensor_dict"
|
### Feature request
In Jax, torch, and paddle is:
> tensors (Dict[str, torch.Tensor]) โ The incoming tensors. Tensors need to be contiguous and dense.
Check: https://huggingface.co/docs/safetensors/api/torch#safetensors.torch.save
In Numpy:
> tensor_dict (Dict[str, np.ndarray]) โ The incoming tensors. Tensors need to be contiguous and dense.
Check: https://huggingface.co/docs/safetensors/api/numpy#safetensors.numpy.save_file
Is there a reason to change the name between frameworks?
### Motivation
Improve the documentation.
### Your contribution
I can submit a PR if that helps!
|
https://github.com/huggingface/safetensors/issues/374
|
closed
|
[
"Stale"
] | 2023-11-02T00:41:14Z
| 2024-01-02T01:48:32Z
| 2
|
csaybar
|
huggingface/safetensors
| 373
|
Stream load models (load model larger than system memory)
|
### Feature request
I'm not very familiar with the details, but I'd like to load a 20GB model while having only 8 GB system memory.
Currently, safetensors loads the entire model into system memory.
Is it possible to load models incrementally/as a stream?
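For what it's worth, the closest thing I've found is `safe_open`, which materializes one tensor at a time instead of the whole file, though I'm not sure it fully covers the streaming case I'm asking about:
```python
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)  # only this tensor is loaded into RAM here
        print(name, tuple(tensor.shape))
```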
Related:
https://github.com/turboderp/exllama/issues/245
https://github.com/huggingface/safetensors/issues/67
Possibly related (writing is different from reading):
https://github.com/huggingface/safetensors/issues/291
### Motivation
Using swap requires unnecessary wear on SSDs. And it's silly to read a model from disk, just to write it back to disk as a swap, and then read it again from disk.
Alternatively, the model should be saved in a format that can be streamed directly to memory?
Similarly, it's silly to require X amount of system memory to be available for just a few seconds while loading a large model.
### Your contribution
Unqualified to contribute.
|
https://github.com/huggingface/safetensors/issues/373
|
closed
|
[
"Stale"
] | 2023-11-01T16:14:18Z
| 2024-01-03T01:48:07Z
| 6
|
erikschul
|
huggingface/text-embeddings-inference
| 59
|
how to resolve this compile error?
|
### System Info
cargo 1.73.0 (9c4383fb5 2023-08-26)
gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
cuda 11.8
v100
```
"-Wl,-Bdynamic" "-llayernorm" "-lcudart" "-lstdc++" "-lcuda" "-lnvrtc" "-lcurand" "-lcublas" "-lcublasLt" "-lssl" "-lcrypto" "-lgcc_s" "-lutil" "-lrt" "-lpthread" "-lm" "-ldl" "-lc" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-L" "/home/luoweichao/.rustup/toolchains/1.73.0-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-o" "/home/luoweichao/text-embeddings-inference/target/release/deps/text_embeddings_router-0345b2604448f561" "-Wl,--gc-sections" "-pie" "-Wl,-z,relro,-z,now" "-Wl,-O1" "-nodefaultlibs"
= note: /opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: /home/luoweichao/text-embeddings-inference/target/release/build/candle-layer-norm-3b4dbfa3d047ac72/out/liblayernorm.a(ln_api.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: final link failed: nonrepresentable section on output
collect2: error: ld returned 1 exit status
error: could not compile `text-embeddings-router` (bin "text-embeddings-router") due to previous error
error: failed to compile `text-embeddings-router v0.3.0 (/home/luoweichao/text-embeddings-inference/router)`, intermediate artifacts can be found at `/home/luoweichao/text-embeddings-inference/target`.
```
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
cargo install --path router -F candle-cuda-volta --no-default-features
### Expected behavior
build successfully!
|
https://github.com/huggingface/text-embeddings-inference/issues/59
|
closed
|
[] | 2023-10-31T11:35:02Z
| 2023-11-02T07:52:18Z
| null |
kingder
|
pytorch/tutorials
| 2,630
|
💡 [REQUEST] - An inbuilt function to retrieve a list of datasets categorised by problem type (e.g., classification, regression, clustering).
|
### Describe the improvement or the new tutorial
PyTorch has an inbuilt way to list all datasets:
```python
import torchvision.datasets as datasets

# Get a list of all datasets
all_datasets = datasets.__all__

# Print the list of datasets
print(all_datasets)
```
Rather than only returning all datasets, we could include a parameter. The parameter would take the type of task the person wants to solve, e.g. clustering, regression, classification. After passing the parameter, all the datasets related to that task would be shown.
Overall, a built-in function to retrieve a list of datasets categorised by problem type would be a valuable addition to PyTorch. It would make it easier for users to find, discover, use, and share datasets.
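For illustration, a rough sketch of the kind of helper I have in mind; the `TASKS` mapping below is hypothetical and not an existing torchvision API:
```python
# Hypothetical helper: the TASKS mapping is illustrative, not part of torchvision.
TASKS = {
    "classification": ["MNIST", "CIFAR10", "ImageNet"],
    "detection": ["CocoDetection", "VOCDetection"],
    "segmentation": ["VOCSegmentation", "Cityscapes"],
}

def list_datasets(task: str) -> list:
    """Return the names of datasets suited to the given problem type."""
    return sorted(TASKS.get(task.lower(), []))

print(list_datasets("classification"))
```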
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/2630
|
open
|
[] | 2023-10-31T09:08:51Z
| 2023-11-01T16:06:56Z
| 1
|
xd932
|
huggingface/optimum
| 1,497
|
about LCM onnx model
|
Hi!
Can someone please tell us how we can use the LCM model in ONNX? I see you made a script to run it in ONNX, but what about the model? Can we simply use the normal Stable Diffusion ONNX conversion script for the LCM model too, or do we have to wait for someone to make a conversion script?
Or could someone upload an ONNX-converted LCM model to Hugging Face and share it with us, please?
kind regards
### Who can help?
@echarlaix
|
https://github.com/huggingface/optimum/issues/1497
|
closed
|
[
"bug"
] | 2023-10-31T08:57:16Z
| 2024-01-04T14:21:54Z
| 6
|
Amin456789
|
pytorch/executorch
| 1,117
|
[build Error initializing DaemonStateData] how to fix it
|
hi,
I followed [the tutorial](https://pytorch.org/executorch/stable/getting-started-setup.html#building-a-runtime) to install buck2-x86_64-unknown-linux-musl.zst on my PC.
I want to build
```
/tmp/buck2 build //examples/portable/executor_runner:executor_runner --show-output
```
and the build failed.
I tried using `killall`, as suggested in https://stackoverflow.com/questions/76771689/buck2-cant-create-inotify-watchers,
and tried to build again.
But the build still fails. Could somebody help me?

OS: Linux Ubuntu 20.04.4 LTS x86_64
buck2 version: 2023-07-18
Thanks,
Kris
|
https://github.com/pytorch/executorch/issues/1117
|
closed
|
[] | 2023-10-31T05:49:12Z
| 2024-01-23T10:08:57Z
| null |
kris-himax
|
pytorch/pytorch
| 112,454
|
Inductor chooses too large of a block size in cases where the `YBLOCK` dimension is too large.
|
### 🐛 Describe the bug
```python
import torch

torch.set_default_device('cuda')

@torch.compile
def f(x, y):
    return x.t() + y

f(torch.randn(2**25, 128), torch.randn(128, 2**25))
```
The concrete issue is that this results in us potentially choosing a config like `XBLOCK=256, YBLOCK=512`, which requires too much shared memory.
The reason we end up in this situation is: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/triton_heuristics.py#L810
Basically, because we are limited to launching 65536 blocks on the second/third dim, `triton_config` will elect to scale up `YBLOCK` until we "fit" within the limit.
In this case, we start with a config like `XBLOCK=256, YBLOCK=32`, but we end up scaling `YBLOCK` to 512.
Possible solutions are:
1. Stop launching 2d configs, and just flatten it down to one axis of threadblocks.
2. Choose the XBLOCK axis to be the "large" one.
3. Scale down XBLOCK if `XBLOCK * RBLOCK` is too large.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
|
https://github.com/pytorch/pytorch/issues/112454
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2023-10-31T00:18:12Z
| 2023-11-07T01:48:02Z
| null |
Chillee
|
huggingface/dataset-viewer
| 2,038
|
How to pass single quote in /filter endpoint "where" parameter?
|
See `https://huggingface.co/datasets/albertvillanova/lm_en_dummy2/viewer/default/train?f[meta][value]='{'file': 'file_4.txt'}'`
From `https://datasets-server.huggingface.co/filter?dataset=albertvillanova/lm_en_dummy2&config=default&split=train&where=meta='{'file': 'file_4.txt'}'`, we get:
```
{"error":"Parameter 'where' is invalid"}
```
We want to search for the value `{'file': 'file_4.txt'}` in the column `meta`.
|
https://github.com/huggingface/dataset-viewer/issues/2038
|
closed
|
[
"bug",
"documentation",
"P1"
] | 2023-10-30T22:21:24Z
| 2023-11-02T17:22:54Z
| null |
severo
|
huggingface/datasets
| 6,364
|
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
|
Hi,
I am trying to load a local CSV dataset (similar to explodinggradients/fiqa) using load_dataset. When I try to pass features, I am facing the issue below.
CSV data sample (golden_dataset.csv):

| Question | Context | answer | groundtruth |
| --- | --- | --- | --- |
| "what is abc?" | "abc is this and that" | "abc is this " | "abc is this and that" |
```
import csv

# built it based on https://huggingface.co/datasets/explodinggradients/fiqa/viewer/ragas_eval?row=0
mydict = [
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this ", 'groundtruth': ["abc is this and that"]},
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this ", 'groundtruth': ["abc is this and that"]},
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this ", 'groundtruth': ["abc is this and that"]}
]
fields = ['question', 'contexts', 'answer', 'ground_truths']

with open('golden_dataset.csv', 'w', newline='\n') as file:
    writer = csv.DictWriter(file, fieldnames=fields)
    writer.writeheader()
    for row in mydict:
        writer.writerow(row)
```
Retrieved dataset:
```
DatasetDict({
    train: Dataset({
        features: ['question', 'contexts', 'answer', 'ground_truths'],
        num_rows: 1
    })
})
```
Code to reproduce issue:
```
from datasets import load_dataset, Features, Sequence, Value
encode_features = Features(
{
"question": Value(dtype='string', id=0),
"contexts": Sequence(feature=Value(dtype='string', id=1)),
"answer": Value(dtype='string', id=2),
"ground_truths": Sequence(feature=Value(dtype='string',id=3)),
}
)
eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features = encode_features )
```
Error trace:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1925, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1924 _time = time.time()
-> 1925 for _, table in generator:
1926 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:192, in Csv._generate_tables(self, files)
189 # Uncomment for debugging (will print the Arrow table size and elements)
190 # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
191 # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
--> 192 yield (file_idx, batch_idx), self._cast_table(pa_table)
193 except ValueError as e:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:167, in Csv._cast_table(self, pa_table)
165 if all(not require_storage_cast(feature) for feature in self.config.features.values()):
166 # cheaper cast
--> 167 pa_table = pa.Table.from_arrays([pa_table[field.name] for field in schema], schema=schema)
168 else:
169 # more expensive cast; allows str <-> int/float or str to Audio for example
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:3781, in pyarrow.lib.Table.from_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:1449, in pyarrow.lib._sanitize_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/array.pxi:354, in pyarrow.lib.asarray()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:551, in pyarrow.lib.ChunkedArray.cast()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/compute.py:400, in cast(arr, target_type, safe, options, memory_pool)
399 options = CastOptions.safe(target_type)
--> 400 return call_function("cast", [arr], options, memory_pool)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:572, in pyarrow._compute.call_function()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:367, in pyarrow._compute.Function.call()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[57], line 1
----> 1 eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv
|
https://github.com/huggingface/datasets/issues/6364
|
closed
|
[] | 2023-10-30T20:14:01Z
| 2023-10-31T19:21:23Z
| 2
|
divyakrishna-devisetty
|
pytorch/pytorch
| 112,369
|
In the func Tensor.to, how can I make privateuse lazy init
|
### 🐛 Describe the bug
I'm using privateuse1 to add our backend. My customer found that the following code works on CUDA, but not on my backend.
The case is calling `Tensor.to()` with a device string that has no index, for example "cuda":
```
import torch
tensor_a = torch.rand(2).to("cuda")
```
Privateuse1 uses the same logic but fails.
```
import torch
# assumption my device is privateuseone
import torch_privateuseone
tensor_a = torch.rand(2).to("privateuseone")
```
The above code fails at `impl->getDevice()` because there is no lazy init for the privateuseone device.
https://github.com/pytorch/pytorch/blob/bbd5b935e49a54578ac88cb23ca962ab896a8c7a/aten/src/ATen/native/TensorConversions.cpp#L210-L216
CUDA, by contrast, is initialized in `THPVariable_to`:
https://github.com/pytorch/pytorch/blob/bbd5b935e49a54578ac88cb23ca962ab896a8c7a/tools/autograd/templates/python_variable_methods.cpp#L958-L980
The only place where I can add a `privateuseone_init` call is in my own `to_impl` after the Dispatcher, but by then it is too late. Any advice for this case?
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git7bcf7da
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (aarch64)
GCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.27
Python version: 3.8.17 (default, Jul 5 2023, 20:40:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-29-generic-aarch64-with-glibc2.26
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 4
NUMA node(s): 4
Vendor ID: 0x48
Model: 0
Stepping: 0x1
BogoMIPS: 200.00
L1d cache: 64K
L1i cache: 64K
L2 cache: 512K
L3 cache: 24576K
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
NUMA node2 CPU(s): 96-143
NUMA node3 CPU(s): 144-191
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==2.1.0a0+git7bcf7da
[pip3] torch-npu==2.1.0+gitf565f75
[pip3] torchair==0.1
[pip3] torchvision==0.15.2
[conda] numpy 1.23.4 pypi_0 pypi
[conda] torch 2.1.0a0+git7bcf7da pypi_0 pypi
[conda] torch-npu 2.1.0+gitf565f75 pypi_0 pypi
[conda] torchair 0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
|
https://github.com/pytorch/pytorch/issues/112369
|
closed
|
[
"module: internals",
"triaged"
] | 2023-10-30T06:39:18Z
| 2024-01-09T20:12:12Z
| null |
huihoaan
|
huggingface/diffusers
| 5,575
|
How to set the "transformer_in" layer's hidden size in LoRA training?
|
### Describe the bug
I modified the code for text-to-image [lora](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) as shown in Figure 1:
<img width="908" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/0639998b-8106-49d9-8761-c58014095e7e">
However, the 3D UNet has a "transformer_in" layer that does not exist in the 2D UNet, so I added handling for "transformer_in" in the code and set its "hidden_size" to "unet.config.block_out_channels[0]", following the 3D UNet's definition at [this link](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_3d_condition.py) (Figure 2):
<img width="641" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/c23efadc-e22d-4bd9-aa3f-7e69bd83a7c2">
But there is a shape error, as shown in Figure 3:

### Reproduction
Load a 3D UNet and adapt the LoRA code as in Figure 1; a sketch of the adapted processor setup is included below.
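For concreteness, here is a minimal sketch of how the processor dict could be built for a 3D UNet. The checkpoint name and rank are placeholders, and the `transformer_in` hidden size below uses the attention inner dimension (`num_attention_heads * attention_head_dim`, with 8 heads in the reference implementation) rather than `block_out_channels[0]` — this is only an assumption about the source of the shape mismatch and needs to be verified against the actual config.
```python
from diffusers import UNet3DConditionModel
from diffusers.models.attention_processor import LoRAAttnProcessor

# Checkpoint name and LoRA rank are placeholders for this sketch.
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet"
)

lora_attn_procs = {}
for name in unet.attn_processors.keys():
    # Self-attention (attn1) has no cross-attention dim; attn2 attends to the text embeddings.
    cross_attention_dim = (
        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    )
    if name.startswith("transformer_in"):
        # Assumption: the input temporal transformer runs its attention at
        # num_attention_heads * attention_head_dim (8 heads in the reference
        # implementation), which can differ from block_out_channels[0].
        hidden_size = 8 * unet.config.attention_head_dim
    elif name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    elif name.startswith("down_blocks"):
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    lora_attn_procs[name] = LoRAAttnProcessor(
        hidden_size=hidden_size,
        cross_attention_dim=cross_attention_dim,
        rank=4,
    )

unet.set_attn_processor(lora_attn_procs)
```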
### Logs
_No response_
### System Info
- `diffusers` version: 0.21.4
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyTorch version (GPU?): 2.0.1 (True)
- Huggingface_hub version: 0.18.0
- Transformers version: 4.26.0
- Accelerate version: 0.23.0
- xFormers version: 0.0.22.post7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sayakpaul @patrickvonplaten @DN6 @yiyi
|
https://github.com/huggingface/diffusers/issues/5575
|
closed
|
[
"bug",
"stale"
] | 2023-10-30T03:44:32Z
| 2024-01-10T15:07:20Z
| null |
lxycopper
|
huggingface/diffusers
| 5,574
|
How to train a part of UNet attention parameters with LoRA
|
### Describe the bug
I adapted the LoRA training code in # to train my model.
I only want to update the parameters in the "down blocks", so I commented out the code for the other attention blocks:
<img width="909" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/6b204ad8-e201-43b0-ab97-5d29a936e3c8">
However, I got an error at the line `unet.set_attn_processor(lora_attn_procs)`, as shown in the screenshot:
<img width="1009" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/0d914626-fcbc-40a5-a254-8bc5f258fbdf">
### Reproduction
Comment out the code for the other attention blocks as shown in the first figure.
### Logs
_No response_
### System Info
diffusers 0.21.4
python 3.10.13
Ubuntu 18
### Who can help?
@sayakpaul @patr
|
https://github.com/huggingface/diffusers/issues/5574
|
closed
|
[
"bug",
"stale"
] | 2023-10-30T02:58:07Z
| 2023-12-08T15:05:16Z
| null |
lxycopper
|
pytorch/TensorRT
| 2,419
|
โ [Question] How do the dtypes work with torch.compile(backend="torch_tensorrt"). Getting error.
|
## โ Question
I tried the following script to load a resnet50 model and test a sample input -
```python
import torch_tensorrt
import torch
# Load a pre-trained ResNet50 model
x = torch.randn(1, 3, 224, 224, device='cuda').half()
model = torch.hub.load(
'pytorch/vision:v0.6.0', 'resnet50', pretrained=True
).cuda().half().eval()
model_opt = torch.compile(model, backend="torch_tensorrt", dynamic=False, options={"debug": True, "min_block_size": 1, "enabled_precisions": {torch.half}})
# Check correctness
torch.testing.assert_close(actual=model_opt(x), expected=model(x), rtol=1e-2, atol=1e-2)
```
and I am getting the following error -
```
Using cache found in /home/shreyansh/.cache/torch/hub/pytorch_vision_v0.6.0
/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
[2023-10-28 09:37:06,703] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-10-28 09:37:08,530] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-10-28 09:37:08,552] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function torch_tensorrt_backend
[10/28/2023-09:37:36] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
INFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.008624
INFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:01:02.300433
[10/28/2023-09:38:38] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[10/28/2023-09:38:38] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
INFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.004251
INFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:00:01.587664
[10/28/2023-09:38:40] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[10/28/2023-09:38:40] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
INFO:torch_tensorrt.fx.fx2trt:TRT INetwork construction elapsed time: 0:00:00.004451
INFO:torch_tensorrt.fx.fx2trt:Build TRT engine elapsed time: 0:00:01.805693
[10/28/2023-09:38:42] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
ERROR:torch_tensorrt.dynamo.backend.backends:FX2TRT conversion failed on the subgraph. See trace above. Returning GraphModule forward instead.
Traceback (most recent call last):
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/backends.py", line 74, in _pretraced_backend
trt_compiled = _compile_module(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/backends.py", line 129, in _compile_module
submodule_inputs = get_submod_inputs(
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/lowering/_partition.py", line 207, in get_submod_inputs
mod(*inputs)
File "/home/shreyansh/miniconda3/envs/shreyansh-env-py10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped
return self._wrapped_call(self, *args, **k
|
https://github.com/pytorch/TensorRT/issues/2419
|
closed
|
[
"question"
] | 2023-10-28T16:48:28Z
| 2023-10-30T17:24:55Z
| null |
shreyansh26
|
huggingface/transformers.js
| 372
|
[Question] onnxruntime_binding.node issue on mac electron app
|
Hi,
I'm getting this error on an Intel MacBook running an Electron Forge app:
```
(node:63267) UnhandledPromiseRejectionWarning: Error: Cannot find module '../bin/napi-v3/darwin/x64/onnxruntime_binding.node'
Require stack:
- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js
- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/node_modules/electron/dist/Electron.app/Contents/Resources/default_app.asar/main.js
-
at Module._resolveFilename (node:internal/modules/cjs/loader:963:15)
at n._resolveFilename (node:electron/js2c/browser_init:2:109411)
at Module._load (node:internal/modules/cjs/loader:811:27)
at f._load (node:electron/js2c/asar_bundle:2:13330)
at Module.require (node:internal/modules/cjs/loader:1035:19)
at require (node:internal/modules/cjs/helpers:102:18)
at ./node_modules/@xenova/transformers/node_modules/onnxruntime-node/dist/binding.js (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:229:1)
at __webpack_require__ (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:83093:42)
at ./node_modules/@xenova/transformers/node_modules/onnxruntime-node/dist/backend.js (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:153:19)
```
I checked the path ```../bin/napi-v3/darwin/x64/onnxruntime_binding.node``` and it does exist in node_modules, so I'm not sure what's going on or whether this is a bug.
|
https://github.com/huggingface/transformers.js/issues/372
|
closed
|
[
"question"
] | 2023-10-28T00:34:05Z
| 2023-11-01T21:56:19Z
| null |
samlhuillier
|
huggingface/transformers
| 27,107
|
How to export a Marian model in rust ?
|
Most models based on Marian are also available in Rust, such as Helsinki-NLP/opus-mt-en-roa.
Is it possible to do this using transformers?
Did you assist Helsinki-NLP in exporting the models to Rust?
|
https://github.com/huggingface/transformers/issues/27107
|
closed
|
[] | 2023-10-27T13:01:13Z
| 2023-12-05T08:03:53Z
| null |
flutter-painter
|
pytorch/vision
| 8,071
|
How to tell if Faster RCNN Detection model is overfitting
|
I'm confused about how I can tell whether the Faster RCNN detection model I'm training is overfitting, given that the validation loss is not computed in the `evaluate` function seen [here](https://github.com/pytorch/vision/blob/main/references/detection/engine.py#L75C1-L115C26) and below; a sketch of one workaround follows after the code.
Any help would be greatly appreciated.
```
@torch.inference_mode()
def evaluate(model, data_loader, device):
n_threads = torch.get_num_threads()
# FIXME remove this and make paste_masks_in_image run on the GPU
torch.set_num_threads(1)
cpu_device = torch.device("cpu")
model.eval()
metric_logger = utils.MetricLogger(delimiter=" ")
header = "Test:"
coco = get_coco_api_from_dataset(data_loader.dataset)
iou_types = _get_iou_types(model)
coco_evaluator = CocoEvaluator(coco, iou_types)
for images, targets in metric_logger.log_every(data_loader, 100, header):
images = list(img.to(device) for img in images)
if torch.cuda.is_available():
torch.cuda.synchronize()
model_time = time.time()
outputs = model(images)
outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]
model_time = time.time() - model_time
res = {target["image_id"]: output for target, output in zip(targets, outputs)}
evaluator_time = time.time()
coco_evaluator.update(res)
evaluator_time = time.time() - evaluator_time
metric_logger.update(model_time=model_time, evaluator_time=evaluator_time)
# gather the stats from all processes
metric_logger.synchronize_between_processes()
print("Averaged stats:", metric_logger)
coco_evaluator.synchronize_between_processes()
# accumulate predictions from all images
coco_evaluator.accumulate()
coco_evaluator.summarize()
torch.set_num_threads(n_threads)
return coco_evaluator
```
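For reference, a hedged sketch of one common workaround: torchvision detection models return their loss dict only when called with targets in training mode, so a validation loss can be computed by switching to `model.train()` under `torch.no_grad()`. Note that this leaves batch-norm layers in training mode, so their running statistics update on validation batches; whether that matters for your setup is worth checking.
```python
import math
import torch


@torch.no_grad()
def validation_loss(model, data_loader, device):
    # Sketch: average the summed loss dict over the validation loader.
    was_training = model.training
    model.train()  # the loss dict is only returned in train mode
    total, num_batches = 0.0, 0
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)
        loss = sum(loss_dict.values()).item()
        if math.isfinite(loss):
            total += loss
            num_batches += 1
    if not was_training:
        model.eval()
    return total / max(num_batches, 1)
```
Tracking this per-epoch validation loss against the training loss (or watching the COCO mAP from `evaluate` plateau or drop) is then the usual overfitting signal.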
|
https://github.com/pytorch/vision/issues/8071
|
open
|
[] | 2023-10-27T00:03:39Z
| 2025-12-22T11:12:36Z
| null |
1andDone
|
huggingface/chat-ui
| 535
|
API format?
|
OK, so this may be a dumb question, but I am not sure where else to ask it. If we use this repo to deploy our app on HF, what is the format of the API parameters for calling our Space?
|
https://github.com/huggingface/chat-ui/issues/535
|
closed
|
[] | 2023-10-26T21:56:22Z
| 2023-10-27T15:01:57Z
| 3
|
silvacarl2
|
pytorch/tutorials
| 2,624
|
~ PyTorch Docathon H2 2023 ~
|
# ~ PyTorch Docathon H2 2023 ~
We have a large backlog of issues that we want to address and it's a great opportunity for you to start contributing to PyTorch. We have limited this docathon to the [pytorch/tutorials](https://github.com/pytorch/tutorials/pulls?q=is%3Apr+is%3Aopen+label%3Adocathon-h2-2023+) and [pytorch/pytorch](https://github.com/pytorch/pytorch/pulls?q=is%3Apr+is%3Aopen+label%3Adocathon-h2-2023+) repositories, so please work on the issues from these two repositories.
**NOTE**: This issue outlines the work in the pytorch/tutorials repo. If you would prefer to work on the PyTorch docstrings issues, please go to the [pytorch/pytorch Docathon issue](https://github.com/pytorch/pytorch/issues/112176).
# Date and location
**WHEN:** The docathon starts on November 1st 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on November 12th.
**WHERE:** Virtual
**WHAT:** Issues with the **docathon-h2-2023** label; these will be posted on November 1st.
Watch our intro video to learn more details about the event.
[](https://youtu.be/IhTjsRKqjtA?si=OdRvcjDj_82axD2I)
# Can everyone participate?
We encourage everyone to consider participating in the docathon but there are a few things we expect from the participants:
- You must have a GitHub account and know how to use Git and GitHub, how to submit or rebase your PR on the latest main branch, how to fork or clone the repo. We reserve the right to reject incorrectly submitted PRs.
- You must be familiar with Python, the basics of Machine Learning, and have at least a basic knowledge of PyTorch. Familiarity with Sphinx, sphinx-gallery, and reStructuredText is a plus.
Before you start contributing make sure to read [Linux Foundation Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/).
# What contributions are we looking for?
All issues for this docathon are tagged with the **docathon-h2-2023** label. Please note that contributions that address other issues won't be counted. We are primarily looking for the following contributions:
**NOTE:** Please avoid working on issues with **intel**, **amd**, and **nvidia** labels which are reserved for our partners.
- Bug fixes in the [pytorch/tutorials](https://github.com/pytorch/tutorials) repo tagged with the docathon-h2-2023 label - see [the list](https://github.com/pytorch/tutorials/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h2-2023).
- Docstring fixes in the [pytorch/pytorch](https://github.com/pytorch/pytorch) repo tagged with the docathon-h2-2023 label - see [this list](https://github.com/pytorch/pytorch/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h2-2023).
**NOTE:** Due to the large number of RSVPs, the tasks are provided on a first-come, first-served basis; please don't hoard the tasks!
# Difficulty Levels
The issues have three levels of difficulty: **easy**, **medium**, and **advanced**. If this is your first time contributing to PyTorch, we recommend that you start with an issue that is tagged as **easy** or **medium**.
# How to contribute to tutorials?
1. Read [pytorch/tutorials/CONTRIBUTING.md](https://github.com/pytorch/tutorials/blob/main/CONTRIBUTING.md) for general guidelines on how the submission process works and overall style and voice.
2. Pick an issue that is labeled as **docathon-h2-2023**.
3. In the issue, add a comment with the text **/assigntome**. If the issue is already assigned, please find another issue to work on. We ask that you assign one issue at a time - we want to give everyone a fair chance to participate. When you are done with one issue and get it approved, you can assign another one to yourself and start working on it.
4. If you are submitting a new tutorial, use [this template](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py).
5. Fork or clone the PyTorch repository to your computer. For simple fixes, like incorrect URLs, you could use the GitHub UI as well.
6. Create a branch and work on the fix.
7. Test your fix by running the single tutorial locally. Don't run the whole build as it takes hours and requires a GPU. You can run one tutorial as a script with `python3 <tutorial-name.py>` or with `GALLERY_PATTERN="neural_style_transfer_tutorial.py" make html`.
8. After you fix all the issues, you are ready to submit your PR.
# Submit Your PR
1. Submit your PR referencing the issue you've picked. For example:
<img width="1058" alt="docathonsubmission" src="https://github.com/pytorch/tutorials/assets/127536312/3096037c-14d8-46ba-bb48-4a7314b463eb">
2. Pick an issue that is labeled as **docathon-h2-2023**.
3. If you have not yet, sign the Contributor License Agreement (CLA) - prompted as a check in the PR. We can't accept any PRs without a signed CLA.
4
|
https://github.com/pytorch/tutorials/issues/2624
|
open
|
[
"docathon-h2-2023"
] | 2023-10-26T16:14:39Z
| 2023-11-06T17:50:19Z
| 3
|
sekyondaMeta
|