repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string)
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/chat-ui
| 297
|
Is there a way to deploy without the HF token?
|
I'm trying to use chat-ui with my own endpoints, and I would like to know if I can get rid of the HF_ACCESS_TOKEN variable and also run any model I want.
I tried to modify the TypeScript in modelEndpoint.ts and model.ts, but I can't figure out how to run it independently of HF (I want it offline). Here are the parts I suspect are preventing me from doing it:
modelEndpoint.ts :
```
if (!model.endpoints) {
return {
url: `https://api-inference.huggingface.co/models/${model.name}`,
authorization: `Bearer ${HF_ACCESS_TOKEN}`,
weight: 1,
};
}
```
model.ts :
```
endpoints: z
.array(
z.object({
url: z.string().url(),
authorization: z.string().min(1).default(`Bearer ${HF_ACCESS_TOKEN}`),
weight: z.number().int().positive().default(1),
})
)
```
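A hedged guess from reading these snippets: since the HF fallback in modelEndpoint.ts only triggers when `model.endpoints` is missing, defining each model in `MODELS` (in `.env.local`) with an explicit `endpoints: [{"url": "<my own endpoint>"}]` entry might avoid the token path entirely, but I haven't verified whether the `HF_ACCESS_TOKEN` default in model.ts still requires the variable to be set.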
Any thoughts about this?
|
https://github.com/huggingface/chat-ui/issues/297
|
closed
|
[
"support"
] | 2023-06-14T12:11:04Z
| 2023-06-15T09:52:39Z
| 2
|
samichaignonmejai
|
huggingface/chat-ui
| 296
|
Issue when deploying model: Error in 'stream': 'stream' is not supported for this model
|
I'm trying to use bigscience/bloom-560m with chat-ui
I already have an API for the model and it's working well, and chat-ui also works when I use my HF token, but I get the following error message when I send a request to my bloom-560m API from chat-ui:
```
Could not parse last message {"error":["Error in `stream`: `stream` is not supported for this model"]}
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:180:32)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async saveMessage (/src/routes/conversation/[id]/+server.ts:95:26)
```
I tried modifying the URL of my API from http://xxx.xxx.x.xxx:8080/generate_stream to http://xxx.xxx.x.xxx:8080/generate, but that does not work either... any thoughts about this?
|
https://github.com/huggingface/chat-ui/issues/296
|
closed
|
[
"support",
"models"
] | 2023-06-14T09:04:07Z
| 2023-06-19T10:57:01Z
| 2
|
samichaignonmejai
|
huggingface/datasets
| 5,951
|
What is the right way to use the DiscoFuse dataset?
|
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is my understanding. Is it correct?**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:
1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**
The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**
The **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.
Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?
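For a concrete starting point, here is a rough sketch of one plausible setup (my assumptions: a T5-style seq2seq model, and the task framed as "incoherent pair in, coherent/fused text out"; the column handling is my own reading of the dataset, not an official recipe):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("discofuse", "discofuse-wikipedia", split="train[:1000]")
tok = AutoTokenizer.from_pretrained("t5-small")

def preprocess(ex):
    # encoder input: the two incoherent sentences, concatenated
    source = (ex["incoherent_first_sentence"] + " " + ex["incoherent_second_sentence"]).strip()
    # decoder target: the fused / coherent text
    target = (ex["coherent_first_sentence"] + " " + ex["coherent_second_sentence"]).strip()
    model_inputs = tok(source, truncation=True)
    model_inputs["labels"] = tok(text_target=target, truncation=True)["input_ids"]
    return model_inputs

tokenized = ds.map(preprocess, remove_columns=ds.column_names)
```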
|
https://github.com/huggingface/datasets/issues/5951
|
closed
|
[] | 2023-06-14T08:38:39Z
| 2023-06-14T13:25:06Z
| null |
akesh1235
|
huggingface/chat-ui
| 295
|
Facing issue for using custom model deployed locally on flask
|
I have a chat model which responds on
```
from flask import Flask, request

app = Flask(__name__)

@app.route("/get")
# function for the bot response
def get_bot_response():
    userText = request.args.get('msg')
    data = T.getResponse(userText)  # T is my chatbot backend, defined elsewhere
    return str(data)
```
I'm not sure about the configuration, but I have added `MODELS=[{"name": "mymodel", "endpoints": [{"url": "http://127.0.0.1:5000/get"}]}]` in the `.env.local` file.
I'm getting the following error:

Can someone please help me configure my local model with the HuggingChat chat-ui?
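In case it helps the discussion, here is a very rough, untested sketch of what I understand chat-ui expects: a text-generation-inference style endpoint that takes a JSON payload with an `inputs` field and replies with JSON containing `generated_text`, rather than a plain-text Flask route (`run_model` below is a placeholder for my own inference code, and the streaming variant is not covered):
```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_model(prompt: str) -> str:
    # placeholder for the actual chat model call (T.getResponse in my code above)
    return "..."

@app.route("/generate", methods=["POST"])
def generate():
    payload = request.get_json()
    prompt = payload["inputs"]  # TGI-style request body
    return jsonify({"generated_text": run_model(prompt)})

if __name__ == "__main__":
    app.run(port=5000)
```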
|
https://github.com/huggingface/chat-ui/issues/295
|
closed
|
[
"support"
] | 2023-06-14T08:20:41Z
| 2023-07-24T10:53:41Z
| 6
|
awsum0225
|
pytorch/data
| 1,184
|
Roadmap for mixed chain of multithread and multiprocessing pipelines?
|
### 🚀 The feature
[pypeln](https://cgarciae.github.io/pypeln/#mixed-pipelines) has a nice feature to chain pipelines which may run on different kind of workers including process, thread or asyncio.
```python
data = (
range(10)
| pl.process.map(slow_add1, workers=3, maxsize=4)
| pl.thread.filter(slow_gt3, workers=2)
| pl.sync.map(lambda x: print(x))
| list
)
```

I remember that the first proposal of pytorch/data claimed to support something similar. I'd like to ask whether this is still planned, and what the concrete roadmap is.
### Motivation, pitch
Initial proposed
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/meta-pytorch/data/issues/1184
|
open
|
[] | 2023-06-14T07:12:36Z
| 2023-06-15T17:32:46Z
| 2
|
npuichigo
|
pytorch/serve
| 2,412
|
How to identify "full" torchserve instances on Google Kubernetes Engine
|
We're currently trying to deploy torchserve on scale on Kubernetes. We have highly fluctuating requests, basically every 5 minutes some requests come in with nothing in-between, and sometimes there'll be huge spikes. Therefore we want small pods that scale aggressively as soon as load comes in.
Here come the issues: based on what metric can we scale, and is there a way to identify pods that are at their limit?
For scaling we currently just use cpu usage, `queueLength` would be ideal. For that we probably have to wait on #2101, right?
Once scaling has happened, k8s has no way of knowing which pods can actually serve requests (one request can take up to 10 seconds, so a full queue will stay full for a while). Again, readiness probe on `queueLength` would be ideal. `queueTime` will only tell us that we should have scaled x seconds ago.
We've come up with the solution of using the `readinessProbe` to send a dummy request to the handler to check whether it gets denied immediately. But that can't be it, right? Surely, this problem can't be so unique that there is no better solution.
I apologize in advance if this is not the right place to ask this question, I couldn't find anything better.
|
https://github.com/pytorch/serve/issues/2412
|
open
|
[
"triaged",
"kubernetes"
] | 2023-06-13T20:06:20Z
| 2023-06-26T17:16:00Z
| null |
tsteffek
|
huggingface/optimum
| 1,106
|
Onnxruntime support for multiple modalities model types
|
### Feature request
Add support for layout and multi-modal models (e.g. LayoutLM, LayoutLMv3, LILT) to the ORTModels.
### Motivation
ORTModels allow interacting with onnxruntime models in the same way as the transformers API, which is very convenient, as optimum is part of the huggingface ecosystem and compatibility between all the components is crucial. But unfortunately, ORTModels currently do not support models that accept multiple modalities, e.g. text+layout or text+layout+image. As of now only _input_ids, attention_mask and token_type_ids_ are processed in _**ORTModelForFeatureExtraction, ORTModelForQuestionAnswering, ORTModelForSequenceClassification, ORTModelForTokenClassification**_ in modeling_ort.py.
### Your contribution
I can submit a PR, but since there are many ways this could be implemented, I would like to agree on the best approach first.
For example:
**first way:**
* Implement it similarly to how AutoModel* works in transformers: keep a mapping from the model type to the ORT model class that suits it.
`{ "bert": ORTModelForTokenClassification,
"roberta": ORTModelForTokenClassification,
"layoutlm": ORTLayoutLMForTokenClassification
}`
For models that are not text-only, we would need to add a separate class and substitute it when initializing the ORTModel* with the corresponding model, while nothing would change for the models that are already supported.
**second way:**
* Add a mapping with the model type as the key and the input attributes as the value:
`{ "bert": ["input_ids", "attention_mask", "token_type_ids"],
"layoutlm": ["input_ids", "attention_mask", "token_type_ids", "bbox"]
}`
* Substitute the model inputs with the given model map in modeling_ort.py.
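To make the second option more concrete, here is an illustrative sketch (hypothetical helper names, not optimum's actual API) of how such a map could be used to build the ONNX Runtime feed in modeling_ort.py:
```python
import torch

# hypothetical per-architecture input map (the second option above)
MODEL_INPUTS = {
    "bert": ["input_ids", "attention_mask", "token_type_ids"],
    "layoutlm": ["input_ids", "attention_mask", "token_type_ids", "bbox"],
}

def build_onnx_inputs(model_type: str, encoded: dict) -> dict:
    # keep only the tensors this architecture expects, converted to numpy for onnxruntime
    return {
        name: tensor.cpu().numpy()
        for name, tensor in encoded.items()
        if name in MODEL_INPUTS[model_type] and isinstance(tensor, torch.Tensor)
    }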
|
https://github.com/huggingface/optimum/issues/1106
|
open
|
[
"feature-request",
"onnxruntime"
] | 2023-06-13T14:30:10Z
| 2023-06-14T11:10:49Z
| 0
|
mariababich
|
huggingface/optimum
| 1,105
|
IO Binding for ONNX Non-CUDAExecutionProviders
|
### Feature request
When using use_io_binding=True with TensorrtExecutionProvider, a warning appears :
```
No need to enable IO Binding if the provider used is not CUDAExecutionProvider. IO Binding will be turned off.
```
I don't understand the reason for this, as data movement optimization should also work for TensorrtExecutionProvider at least. If this is not possible, can someone explain the reason? Thank you.
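For reference, a minimal sketch of the call that triggers the warning (the model path is a placeholder for an already exported ONNX model directory):
```python
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained(
    "path/to/onnx_model",                  # placeholder: a locally exported ONNX model
    provider="TensorrtExecutionProvider",
    use_io_binding=True,                   # currently forced back to False for non-CUDA providers
)
```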
### Motivation
Being able to decouple data movement between CPU DRAM and GPU DRAM from computation makes it possible to overlap computation with communication.
### Your contribution
Theoretically, the iobinding implementation for CUDAExecutionProvider should work for TensorrtExecutionProvider too.
|
https://github.com/huggingface/optimum/issues/1105
|
open
|
[
"help wanted",
"onnxruntime"
] | 2023-06-13T14:11:31Z
| 2023-09-26T11:47:17Z
| 5
|
cyang49
|
pytorch/pytorch
| 103,506
|
How to add testing capabilities for third party devices
|
### 🚀 The feature, motivation and pitch
The current community test cases are all CPU- and CUDA-based; there is no way to cover third-party devices. For example, many test cases use the @onlycuda decorator. Any suggestions for improvements for the privateuse1 device?
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/103506
|
closed
|
[
"triaged",
"module: third_party",
"module: testing"
] | 2023-06-13T12:37:13Z
| 2023-06-26T17:07:54Z
| null |
Bin1024
|
huggingface/datasets
| 5,946
|
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
|
### Describe the bug
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop │
│ │
│ 1786 │ │ │ │ rng_to_sync = True │
│ 1787 │ │ │ │
│ 1788 │ │ │ step = -1 │
│ ❱ 1789 │ │ │ for step, inputs in enumerate(epoch_iterator): │
│ 1790 │ │ │ │ total_batched_samples += 1 │
│ 1791 │ │ │ │ if rng_to_sync: │
│ 1792 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ │
│ │
│ 374 │ │ dataloader_iter = super().__iter__() │
│ 375 │ │ # We iterate one batch ahead to check when we are at the end │
│ 376 │ │ try: │
│ ❱ 377 │ │ │ current_batch = next(dataloader_iter) │
│ 378 │ │ except StopIteration: │
│ 379 │ │ │ yield │
│ 380 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │
│ │
│ 630 │ │ │ if self._sampler_iter is None: │
│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │
│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 633 │ │ │ data = self._next_data() │
│ 634 │ │ │ self._num_yielded += 1 │
│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │
│ │
│ 674 │ │
│ 675 │ def _next_data(self): │
│ 676 │ │ index = self._next_index() # may raise StopIteration │
│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │
│ 678 │ │ if self._pin_memory:
|
https://github.com/huggingface/datasets/issues/5946
|
open
|
[] | 2023-06-13T07:34:15Z
| 2023-07-14T12:04:48Z
| 6
|
syngokhan
|
huggingface/safetensors
| 273
|
Issue with Loading Model in safetensors Format
|
### System Info
- `transformers` version: 4.30.1
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
I'm trying to load a model saved in safetensors format using the Transformers library. Here's the code I'm using:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("path/to/model")
model = LlamaForCausalLM.from_pretrained("path/to/model", use_safetensors=True)
```
However, I'm running into this error:
```
Traceback (most recent call last):
File "/Users/maxhager/Projects2023/nsfw/model_run.py", line 4, in <module>
model = LlamaForCausalLM.from_pretrained("path/to/model", use_safetensors=True)
File "/Users/maxhager/.virtualenvs/nsfw/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2449, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory path/to/model.
```
In my model directory, I have the following files (its [this](https://huggingface.co/notstoic/pygmalion-13b-4bit-128g) model locally):
- 4bit-128g.safetensors
- config.json
- generation_config.json
- pytorch_model.bin.index.json
- special_tokens_map.json
- tokenizer.json
- tokenizer.model
- tokenizer_config.json
### Expected behavior
I would expect that setting use_safetensors=True would inform the from_pretrained method to load the model from the safetensors format. However, it appears the method is looking for the usual model file formats (pytorch_model.bin, tf_model.h5, etc) instead of recognizing the safetensors format.
I'm looking for a solution or guidance on how to successfully load a model stored in the safetensors format using the Transformers library.
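As a side note, a quick sanity check of what the `.safetensors` file actually contains (tensor names and header metadata); this is only an inspection sketch, not a fix:
```python
from safetensors import safe_open

path = "path/to/model/4bit-128g.safetensors"
with safe_open(path, framework="pt") as f:
    print(list(f.keys())[:10])  # names of the tensors stored in the file
    print(f.metadata())         # optional header metadata, may be None
```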
|
https://github.com/huggingface/safetensors/issues/273
|
closed
|
[
"Stale"
] | 2023-06-12T21:25:33Z
| 2024-03-08T13:28:30Z
| 11
|
yachty66
|
pytorch/data
| 1,181
|
Does Collator need to exist?
|
### 📚 The doc issue
Docs for [Collator](https://pytorch.org/data/0.6/generated/torchdata.datapipes.iter.Collator.html#torchdata.datapipes.iter.Collator) leave a lot of questions.
> Collates samples from DataPipe to Tensor(s) by a custom collate function
What does collate mean in this context? What is the collate function applied to? In the torch Dataloader docs, it's clear that collate_fn is meant to be applied to a batch of data, but that's not explained here at all. Looking at the implementation I think the input datapipe is supposed to be batched here too, but that's not clear.
What's the difference between this and Mapper? Sort of seems like the only difference is that the output of `collate_fn` is supposed to be tensors? Or collections of Tensors? I have used it with a function that returns a list of ints though, so there doesn't seem to be anything enforcing that the output is Tensors.
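For concreteness, a small toy comparison of the two (my own example, not from the docs):
```python
import torch
from torchdata.datapipes.iter import IterableWrapper

dp = IterableWrapper(range(8)).batch(4)                                  # Collator expects batched input
via_map = dp.map(lambda batch: torch.tensor(batch))                      # Mapper with an arbitrary function
via_collate = dp.collate(collate_fn=lambda batch: torch.tensor(batch))   # Collator with a custom collate_fn

print(list(via_map))      # [tensor([0, 1, 2, 3]), tensor([4, 5, 6, 7])]
print(list(via_collate))  # same output, which is exactly why the difference is unclear
```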
### Suggest a potential alternative/fix
Get rid of Collator if it doesn't add anything over Mapper, it's confusing
If keeping it:
* If it's basically Mapper with a default mapping function that converts things to tensors, don't allow specifying the function.
* Or explain why this is different than mapper.
* State that input should be batched
* Document the `conversion` argument
|
https://github.com/meta-pytorch/data/issues/1181
|
open
|
[] | 2023-06-12T15:02:52Z
| 2023-07-18T00:38:02Z
| 1
|
lendle
|
huggingface/transformers.js
| 144
|
Question-Answer Examples
|
Can you please send us an example of question answering?
|
https://github.com/huggingface/transformers.js/issues/144
|
closed
|
[
"question"
] | 2023-06-09T21:54:37Z
| 2023-06-09T22:59:17Z
| null |
Zenyker
|
huggingface/optimum
| 1,095
|
Installation issue on Openvino NNcf
|
### System Info
```shell
LINUX WSL 2
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
OPTIMUM
Name: optimum
Version: 1.8.6
Summary: Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality.
Home-page: https://github.com/huggingface/optimum
Author: HuggingFace Inc. Special Ops Team
Author-email: hardware@huggingface.co
License: Apache
Location: /home/debayan/CT_with_LLM/opvino/lib/python3.11/site-packages
Requires: coloredlogs, datasets, huggingface-hub, numpy, packaging, sympy, torch, torchvision, transformers
PYTHON
3.11.3
```
### Who can help?
@echarlaix, while trying to install openvino nncf, I am getting this issue and cannot figure out how to fix it.
The hardware is Intel, hence this approach. I am trying to optimize a BLIP model for image captioning.
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-f8b_3uou/onnx_22d50665ccb74d03a417ba4977874f9c/setup.py", line 318, in <module>
raise FileNotFoundError("Unable to find " + requirements_file)
FileNotFoundError: Unable to find requirements.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`python -m pip install optimum[openvino,nncf]` is failing with this error:
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-_g6qzuag/onnx_aebd33cd3ee44e7daf5f0a07afd43101/setup.py", line 318, in <module>
raise FileNotFoundError("Unable to find " + requirements_file)
FileNotFoundError: Unable to find requirements.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
### Expected behavior
installation should be successful
|
https://github.com/huggingface/optimum/issues/1095
|
closed
|
[
"bug"
] | 2023-06-09T09:55:45Z
| 2024-01-05T11:10:06Z
| 5
|
DebayanChakraborty
|
pytorch/tutorials
| 2,453
|
💡 [REQUEST] - Add ABI=1 compilation instruction to README
|
### 🚀 Describe the improvement or the new tutorial
Under certain usage circumstances, PyTorch needs to have the C++11 ABI enabled. Currently there are no docs in the README explaining how to enable it.
Link https://github.com/pytorch/pytorch/pull/95177 to enable this request.
### Existing tutorials on this topic
https://github.com/pytorch/pytorch
### Additional context
We aim to complete the document as part of PyTorch Docathon 2023. cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233 @CaoE
|
https://github.com/pytorch/tutorials/issues/2453
|
closed
|
[] | 2023-06-09T07:53:48Z
| 2023-06-15T07:13:34Z
| 1
|
jingxu10
|
huggingface/transformers.js
| 140
|
[Question] OrtRun error code 6 with a longer string for question-answering
|
Why do I keep running into an OrtRun error code 6 with a longer string for question-answering task:
`const result = await model(question, context, {
padding: true,
truncation: true,
});
`
Error:
`
models.js:158 An error occurred during model execution: "Error: failed to call OrtRun(). error code = 6.".
models.js:159 Inputs given to model:
{input_ids: Proxy(Tensor), attention_mask: Proxy(Tensor), token_type_ids: Proxy(Tensor)}
attention_mask
:
Proxy(Tensor) {dims: Array(2), type: 'int64', data: BigInt64Array(550), size: 550}
input_ids
:
Proxy(Tensor) {dims: Array(2), type: 'int64', data: BigInt64Array(550), size: 550}
token_type_ids
:
Proxy(Tensor) {dims: Array(2), type: 'int64', data: BigInt64Array(550), size: 550}
[[Prototype]]
:
Object
ort-web.min.js:6 Uncaught (in promise) Error: failed to call OrtRun(). error code = 6.
at Object.run (ort-web.min.js:6:454854)
at ort-web.min.js:6:444202
at Object.run (ort-web.min.js:6:447121)
at InferenceSession.run (inference-session-impl.js:91:1)
at sessionRun (models.js:153:1)
at Function._call (models.js:639:1)
at Function._call (models.js:1091:1)
at Function.closure [as model] (core.js:62:1)
at Function._call (pipelines.js:253:1)
at closure (core.js:62:1)
(anonymous) @ ort-web.min.js:6
(anonymous) @ ort-web.min.js:6
run @ ort-web.min.js:6
run @ inference-session-impl.js:91
sessionRun @ models.js:153
_call @ models.js:639
_call @ models.js:1091
closure @ core.js:62
_call @ pipelines.js:253
closure @ core.js:62
(anonymous) @ background.js:146
await in (anonymous) (async)
`
|
https://github.com/huggingface/transformers.js/issues/140
|
closed
|
[
"bug",
"question"
] | 2023-06-09T04:07:28Z
| 2023-07-11T11:07:26Z
| null |
iamfiscus
|
huggingface/datasets
| 5,931
|
`datasets.map` not reusing cached copy by default
|
### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the transform is applied again and the cached copy is not picked up. Is there any way to use the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions?
One more thing: my dataset occupies 6GB of storage after I use `map`. Is there any way I can reduce that memory usage?
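For what it's worth, here is the minimal pattern I'd expect to hit the cache (a toy sketch; the key assumptions are that the mapped function is a top-level, deterministic function so its fingerprint is stable, and that `load_from_cache_file` is left enabled):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar", "baz"]})

def add_length(batch):  # top-level function -> stable fingerprint
    batch["length"] = [len(t) for t in batch["text"]]
    return batch

ds2 = ds.map(add_length, batched=True, load_from_cache_file=True)
# Re-running the exact same map (same function, same arguments) should reuse the cache
# instead of recomputing; closures/lambdas or changed arguments invalidate the fingerprint.
ds3 = ds.map(add_length, batched=True, load_from_cache_file=True)
```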
### Steps to reproduce the bug
```
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
self.raw_datasets = self.raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
)
vectorized_datasets = self.raw_datasets.map(
self.prepare_dataset,
remove_columns=next(iter(self.raw_datasets.values())).column_names,
num_proc=self.num_workers,
desc="preprocess datasets",
)
# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
self.is_audio_in_length_range,
num_proc=self.num_workers,
input_columns=["input_length"],
)
def prepare_dataset(self, batch):
# load audio
sample = batch["audio"]
inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
return batch
```
### Expected behavior
`map` should use the cached copy, and, if possible, there should be an alternative technique to reduce memory usage after using `map`.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
https://github.com/huggingface/datasets/issues/5931
|
closed
|
[] | 2023-06-07T09:03:33Z
| 2023-06-21T16:15:40Z
| 1
|
bhavitvyamalik
|
huggingface/chat-ui
| 282
|
OpenID login
|
How do I get the providerURL, client ID and client token to create an Azure OpenID login?
|
https://github.com/huggingface/chat-ui/issues/282
|
closed
|
[
"support"
] | 2023-06-06T10:45:46Z
| 2023-06-19T09:38:34Z
| 1
|
sankethgadadinni
|
pytorch/tutorials
| 2,435
|
How can we contribute with videos
|
How can we contribute videos to PyTorch on GitHub? The video will likely be long; is submitting a link enough, or should I send it some other way?
|
https://github.com/pytorch/tutorials/issues/2435
|
closed
|
[
"question"
] | 2023-06-06T09:09:59Z
| 2023-06-12T16:19:56Z
| null |
Killpit
|
huggingface/transformers.js
| 137
|
[Question] Failed to fetch onnx model when to use AutoModel.from_pretrained
|
**The code here:**
```
import { AutoModel, AutoTokenizer } from '@xenova/transformers';
const modelPath = 'Xenova/distilgpt2'
let tokenizer = await AutoTokenizer.from_pretrained(modelPath); // **successful to fetch model**
let model = await AutoModel.from_pretrained(modelPath); // **failed to fetch model**
let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
```
**Error information:**
file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/utils/hub.js:223
throw Error(`Could not locate file: "${remoteURL}".`)
^
Error: Could not locate file: "https://huggingface.co/Xenova/distilgpt2/resolve/main/onnx/model_quantized.onnx".
at handleError (file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/utils/hub.js:223:19)
at getModelFile (file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/utils/hub.js:412:24)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async constructSession (file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/models.js:88:18)
transformers.js version: 2.1.1
|
https://github.com/huggingface/transformers.js/issues/137
|
closed
|
[
"question"
] | 2023-06-06T02:03:41Z
| 2023-06-20T13:24:37Z
| null |
peter-up
|
huggingface/transformers.js
| 136
|
[Question] Using CLIP for simple image-text similarity
|
I'm trying to get a simple image-text similarity thing working with CLIP, and I'm not sure how to do it, or whether it's currently supported with Transformers.js outside of the zero-shot image classification pipeline.
Is there a code example somewhere to get me started? Here's what I have so far:
```js
import { AutoModel, AutoTokenizer } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.1.1';
let tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
let model = await AutoModel.from_pretrained('Xenova/clip-vit-base-patch16');
let inputIds = await tokenizer(["cat", "astronaut"]);
let image = await fetch("https://i.imgur.com/fYhUGoY.jpg").then(r => r.blob());
// how to process the image, and how to pass the image and inputIds to `model`?
```
Here's what I see if I inspect the `model` function in DevTools:

I also tried this:
```js
import { AutoModel, AutoTokenizer, AutoProcessor } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.1.1';
let model = await AutoModel.from_pretrained('Xenova/clip-vit-base-patch16');
let processor = await AutoProcessor.from_pretrained("Xenova/clip-vit-base-patch16");
let inputs = await processor({text:["a photo of a cat", "a photo of an astronaut"], images:["https://i.imgur.com/fYhUGoY.jpg"]});
let outputs = await model(inputs);
```
But it seems that `processor` expects an array of images, or something? The above code throws an error saying that an `.rgb()` method should exist on the input.
|
https://github.com/huggingface/transformers.js/issues/136
|
closed
|
[
"question"
] | 2023-06-05T14:24:56Z
| 2023-06-06T13:35:45Z
| null |
josephrocca
|
pytorch/pytorch
| 102,966
|
How to work around the error "don't have an op for vulkan_prepack::create_linear_context"?
|
### 🐛 Describe the bug
I have a modified resnet-50 network, which I want to run on android using vulkan backend.
The custom build of pytorch with USE_VULKAN=1 works fine, but I got the error message "We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case." during "optimize_for_mobile" API invocation.
What's the problem here, and how to deal with it?
(I tried on both release 1.13 and release v2.0.1 tags, but got the same error message above).
```
git clone -b release/1.13 --recursive https://github.com/pytorch/pytorch
cd pytorch
git submodule sync
git submodule update --init --recursive
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build # or cmake-gui build
BUILD_LITE_INTERPRETER=0 USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python setup.py develop
BUILD_LITE_INTERPRETER=0 ANDROID_ABI=arm64-v8a USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 bash ./scripts/build_android.sh
BUILD_LITE_INTERPRETER=0 ANDROID_ABI=arm64-v8a USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 bash ./scripts/build_pytorch_android.sh
```
```
>>> import torch
>>> import os
>>>
>>> from torch.utils.mobile_optimizer import optimize_for_mobile
>>>
>>> #file_dir = '.'
>>> file_dir = '../pytorch-script/'
>>> model = torch.jit.load(file_dir + '/modified-resnet50-image.pt')
>>> model.eval()
RecursiveScriptModule(original_name=ImageModel)
>>> script_model = torch.jit.script(model)
>>> script_model_vulkan = optimize_for_mobile(script_model, backend='vulkan')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/DataExt/devroot/src/pytorch/torch/utils/mobile_optimizer.py", line 67, in optimize_for_mobile
optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: 0 INTERNAL ASSERT FAILED at "/mnt/DataExt/devroot/src/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case. Argument types: Tensor, Tensor,
Candidates:
>>> exit()
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0a0+gite9ebda2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.11.3 (main, Apr 19 2023, 23:54:32) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A4000
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 45 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218N CPU @ 2.30GHz
Stepping: 7
CPU MHz: 2294.609
BogoMIPS: 4589.21
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 16 MiB
L3 cache: 22 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation
|
https://github.com/pytorch/pytorch/issues/102966
|
open
|
[
"module: build",
"triaged",
"module: vulkan",
"ciflow/periodic"
] | 2023-06-05T09:53:28Z
| 2023-09-12T00:19:52Z
| null |
ldfandian
|
huggingface/diffusers
| 3,669
|
General question: what are the steps to debug if the image produced is just wrong?
|
I have a LoRA (LyCORIS) that I have tested with A1111's webui, and I'm pretty happy with the result. When I tried to use it with `diffusers`, it just gives me a corrupted image. The LoRA brings some of the desired effect (like a white background), but the overall image is just not right.
I have included some personal code to use LyCORIS (AFAIK diffusers currently doesn't support LyCORIS, correct me if I'm wrong). But the question is more general: what should I do in a case like this, what experiments should I run, and where should I check? I printed the sum of the weights for each layer and confirmed they match A1111's version.
Thank you.
|
https://github.com/huggingface/diffusers/issues/3669
|
closed
|
[
"stale"
] | 2023-06-05T01:44:49Z
| 2023-07-13T15:03:51Z
| null |
wangdong2023
|
pytorch/pytorch
| 102,939
|
Not sure what is wrong,
|
### 🐛 Describe the bug
It was working the last time I ran it. I ran an update and now I'm getting this when trying to train a LoRA:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: Loading binary C:\Users\newpc_53bcer\Documents\Lora\kohya_ss\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll...
use 8-bit AdamW optimizer | {}
running training / 学習開始
num train images * repeats / 学習画像の数×繰り返し回数: 4700
num reg images / 正則化画像の数: 0
num batches per epoch / 1epochのバッチ数: 2350
num epochs / epoch数: 1
batch size per device / バッチサイズ: 2
total train batch size (with parallel & distributed & accumulation) / 総バッチサイズ(並列学習、勾配合計含む): 2
gradient ccumulation steps / 勾配を合計するステップ数 = 1
total optimization steps / 学習ステップ数: 2350
steps: 0%| | 0/2350 [00:00<?, ?it/s]╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\newpc_53bcer\Documents\Lora\kohya_ss\train_db.py:477 in <module> │
│ │
│ 474 │ args = parser.parse_args() │
│ 475 │ args = train_util.read_config_from_file(args, parser) │
│ 476 │ │
│ ❱ 477 │ train(args) │
│ 478 │
│ │
│ C:\Users\newpc_53bcer\Documents\Lora\kohya_ss\train_db.py:245 in train │
│ │
│ 242 │ ) │
│ 243 │ │
│ 244 │ if accelerator.is_main_process: │
│ ❱ 245 │ │ accelerator.init_trackers("dreambooth" if args.log_tracker_name is None else arg │
│ 246 │ │
│ 247 │ loss_list = [] │
│ 248 │ loss_total = 0.0 │
│ │
│ C:\Users\newpc_53bcer\Documents\Lora\kohya_ss\venv\lib\site-packages\accelerate\accelerator.py:5 │
│ 48 in _inner │
│ │
│ 545 │ │ │ │ ) │
│ 546 │ │ │
│ 547 │ │ def _inner(*args, **kwargs): │
│ ❱ 548 │ │ │ return PartialState().on_main_process(function)(*args, **kwargs) │
│ 549 │ │ │
│ 550 │ │ return _inner │
│ 551 │
│ │
│ C:\Users\newpc_53bcer\Documents\Lora\kohya_ss\venv\lib\site-packages\accelerate\accelerator.py:2 │
│ 031 in init_trackers │
│ │
│ 2028 │ │ │ │ if getattr(tracker_init, "requires_logging_directory"): │
│ 2029 │ │ │ │ │ # We can skip this check since it was done in `__init__` │
│ 2030 │ │ │ │ │ self.trackers.append( │
│ ❱ 2031 │ │ │ │
|
https://github.com/pytorch/pytorch/issues/102939
|
closed
|
[] | 2023-06-04T23:13:41Z
| 2023-06-05T15:28:14Z
| null |
NeVeREire
|
huggingface/chat-ui
| 275
|
web search hallucination and prompt results
|
Hello, great job building the web search module. Just a few things I noticed using it over the past hours.
1- It does connect to the web perfectly.
2- It tends to take only the first page result and doesn't contextualize the data enough; it tries to mix it with the model's data and ends up destroying the final output. So maybe it should take the first 3 results and summarize them.
3- It takes time. Maybe that's OK, but making sure it takes less time would be good; it's not critical at this stage.
4- Various outputs from the SERP API: since the SERP API returns not only text results but also video and maps, it would be cool to let the end user prompt, for example, "give me the best yoga video tutorials" and get a reply with shortcuts and/or small previews of maybe 3 YouTube videos. The best real-world example of this is Perplexity AI; you can check with a request.
5- Maps could work too: "what is the best itinerary from x to y location" answered using a Google Maps query, and the same for air tickets with Google Flights.
Just a few options and recommendations from a fan. Great job again, I know you've already done a lot.
|
https://github.com/huggingface/chat-ui/issues/275
|
open
|
[] | 2023-06-02T23:09:11Z
| 2023-06-05T08:36:41Z
| 1
|
Billyroot
|
huggingface/peft
| 537
|
Where are the PeftModel weights stored?
|
## Expected behavior
I want to check whether the model (mt0-xxl [13B](https://huggingface.co/bigscience/mt0-xxl)) weights have been updated.
Could you tell me how to check the original model weights before using peft?
And how do I check the loaded LoRA module weights when using peft?
## script
modified from [this file](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py#L71)
```python
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model.enable_input_require_grads()
model.gradient_checkpointing_enable()
.....
for epoch in range(num_epochs):
with TorchTracemalloc() as tracemalloc:
model.train()
accelerator.print('train epoch{}'.format(epoch))
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
outputs = model(**batch, use_cache=False) # dsj
# outputs = model(**batch) # dsj
loss = outputs.loss
# loss.requires_grad=True # dsj
total_loss += loss.detach().float()
pdb.set_trace()  # <<========= where I drop into pdb
```
## debug process
```
(Pdb) model.module.base_model.model.encoder.block[0].layer[0].SelfAttention.q
Linear(
in_features=4096, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=8, out_features=4096, bias=False)
)
)
(Pdb) model.module.base_model.model.encoder.block[0].layer[0].SelfAttention.q.weight
Parameter containing:
tensor([], device='cuda:0', dtype=torch.bfloat16)
(Pdb) model.module.base_model.model.encoder.block[0].layer[0].SelfAttention.q.lora_A.default.weight
Parameter containing:
tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)
```
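My current workaround attempt (a hedged sketch, assuming the empty tensors come from DeepSpeed ZeRO-3 partitioning; the parameter name is the one from the pdb session above):
```python
import torch
import deepspeed

# `model` is the accelerate-prepared PeftModel from the script above
name = "base_model.model.encoder.block.0.layer.0.SelfAttention.q.lora_A.default.weight"
param = dict(model.module.named_parameters())[name]

# Gather the full (un-partitioned) parameter just to inspect it
with deepspeed.zero.GatheredParameters([param], modifier_rank=None):
    before = param.detach().clone()
    print(before.shape)  # full shape instead of torch.Size([0])

# ... run one optimizer step here ...

with deepspeed.zero.GatheredParameters([param], modifier_rank=None):
    print(torch.allclose(before, param.detach()))  # False once the LoRA weight was updated
```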
|
https://github.com/huggingface/peft/issues/537
|
closed
|
[] | 2023-06-02T09:10:09Z
| 2023-07-10T15:03:40Z
| null |
dsj96
|
pytorch/data
| 1,177
|
What is the right way to serialize DataLoader2 so that a pipeline with shuffle can resume from the right place?
|
### 🐛 Describe the bug
I tried all these versions, the only version that worked was the last one, but it's too hacky. Is there a better way?
```py
dp = IterableWrapper(list(range(20)))
dp = dp.shuffle()
items = []
rs = InProcessReadingService()
dl = DataLoader2(dp, reading_service=rs)
iter1 = iter(dl)
for _ in range(4):
next(iter1)
# 16 elements left in dl
state = dl.state_dict()
dl2 = DataLoader2.from_state(state, reading_service=rs)
# assert len(list(dl2)) == 20 - 4 # got 20
dp2 = deserialize_datapipe(serialize_datapipe(dl.datapipe))
# assert len(list(dp2)) == 20 - 4 # got 20
dp3 = deserialize_datapipe(serialize_datapipe(dl.datapipe))
_simple_graph_snapshot_restoration(dp3, dp3._number_of_samples_yielded)
ret3 = list(dp3)
assert len(ret3) == 20 - 4
# but content is not the same
dl4 = DataLoader2.from_state(state, reading_service=rs)
_simple_graph_snapshot_restoration(dl4.datapipe, dl.datapipe._number_of_samples_yielded)
ret4 = list(dl4)
assert len(ret4) == 20 - 4
# but content is not the same
dp5 = deserialize_datapipe(serialize_datapipe(dl.datapipe))
pipes = get_all_pipes(dp5)
for pipe in pipes:
if isinstance(pipe, ShufflerIterDataPipe):
buffer_cache = pipe._buffer[:]
assert len(buffer_cache) == 20 - 4
rng_state = pipe._rng.getstate()
_simple_graph_snapshot_restoration(dp5, dl.datapipe._number_of_samples_yielded)
dp5._buffer = buffer_cache[:]
dp5._rng.setstate(rng_state)
it5 = iter(dp5)
ret5 = list(it5)
assert len(ret5) == 20 - 4
expected = list(iter1)
# ret5 is the only method that worked
# assert ret3 == expected
# assert ret4 == expected
assert ret5 == expected
```
### Versions
```
PyTorch version: 2.0.0a0+gite9ebda2
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 12.0.1 (https://github.com/conda-forge/clangdev-feedstock d44358f44aef33e9fa7c5f93e2481ee8f1a04ab6)
CMake version: version 3.19.1
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-64-generic-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: 12.0.140
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.3.0
[pip3] numpy==1.23.5
[pip3] pytorch3d==0.6.2
[pip3] torch==2.0.1+1684801906.cuda120.cudnn891.nccl218.ap
[pip3] torch-mlir==1684442443
[pip3] torch-scatter==2.1.0
[pip3] torch-tb-profiler==0.4.1
[pip3] torchdata==0.7.0.dev20230601
[pip3] torchfile==0.1.0
[pip3] torchvision==0.15.1a0+42759b1
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2023.1.0 h84fe81f_48680 conda-forge
[conda] numpy 1.23.5 py38h7042d01_0 conda-forge
[conda] pytorch3d 0.6.2 pypi_0 pypi
[conda] torch 2.0.1+1684801906.cuda120.cudnn891.nccl218.ap pypi_0 pypi
[conda] torch-mlir 1684442443 pypi_0 pypi
[conda] torch-scatter 2.1.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.15.1a0+42759b1 pypi_0 pypi
```
|
https://github.com/meta-pytorch/data/issues/1177
|
open
|
[] | 2023-06-02T06:52:14Z
| 2023-06-08T17:31:18Z
| 2
|
zhengwy888
|
huggingface/chat-ui
| 273
|
Documentation about how to configure custom model endpoints is missing
|
It seems it has been removed in https://github.com/huggingface/chat-ui/commit/fae93d9fc3be9a39d8efd9ab9993dea13f0ae844.
|
https://github.com/huggingface/chat-ui/issues/273
|
closed
|
[
"documentation"
] | 2023-06-01T19:37:44Z
| 2023-06-19T08:59:15Z
| 4
|
djmaze
|
pytorch/pytorch
| 102,718
|
How to support AMD GPU on Mac
|
### 🚀 The feature, motivation and pitch
My computer is running macOS, with an Intel 9900K CPU and an AMD RX 6600 XT GPU.
Can I build PyTorch to support this GPU?
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/102718
|
closed
|
[] | 2023-06-01T09:03:42Z
| 2024-06-21T14:05:02Z
| null |
Aiden-Dong
|
pytorch/benchmark
| 1,707
|
How to execute with docker?
|
I'm using ARG BASE_IMAGE=ghcr.io/pytorch/torchbench:latest, but I am having problems with this container.
Or should I use ghcr.io/pytorch:pytorch-nightly or [ghcr.io/pytorch:pytorch-nightly](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)?
|
https://github.com/pytorch/benchmark/issues/1707
|
closed
|
[] | 2023-06-01T07:43:32Z
| 2023-06-13T03:31:41Z
| null |
johnnynunez
|
huggingface/optimum
| 1,078
|
[SAM] Split encoder and mask decoder into separate .onnx files
|
### Feature request
Currently, exporting SAM models with optimum results in a single .onnx file (https://huggingface.co/Xenova/sam-vit-base/tree/main/onnx). It would be great if we could add an option to separate the encoder and decoder into separate onnx files (like traditional seq2seq models).
Example SAM exports for which this has been done:
- https://huggingface.co/visheratin/segment-anything-vit-b/tree/main
- https://huggingface.co/visheratin/segment-anything-vit-l/tree/main
- https://huggingface.co/visheratin/segment-anything-vit-h/tree/main
### Motivation
The primary motivation for this feature request is to reuse the encoded image (which should only be computed once), and then use the decoder for querying. At the moment, users would have to encode the image each time they wish to perform a query.
This would be great for Transformers.js.
### Your contribution
I can integrate this into Transformers.js once it's available.
|
https://github.com/huggingface/optimum/issues/1078
|
closed
|
[] | 2023-05-31T10:47:19Z
| 2023-08-24T16:05:39Z
| 8
|
xenova
|
pytorch/data
| 1,175
|
Mux with MPRS causes operations after sharding_round_robin_dispatcher to run on the same worker
|
### 📚 The doc issue
This doesn't seem to be mentioned in the docs, but if you have two datapipes that use `sharding_round_robin_dispatcher` and then `mux` them together:
1. Any steps between `sharding_round_robin_dispatcher` and `mux` will take place on the same worker process.
2. Only the steps after the `mux` will take place on separate workers.
For example, with the below graph, the `Mapper` nodes in between the `ShardingRoundRobinDispatcher` nodes and `Multiplexer` run on the same worker process. The `Mapper` node after `Multiplexer` will run across multiple processes as they're fed data in a round-robin fashion.

My incorrect expectation was that the dispatching process would distribute data to worker processes immediately after `sharding_round_robin_dispatch` as usual, and then everything after `mux` would take place on either one or multiple worker processes.
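For concreteness, the pipeline I'm describing looks roughly like this (a sketch from memory; import locations and the placeholder functions `f`/`g` may need adjusting):
```python
from torch.utils.data.datapipes.iter.sharding import SHARDING_PRIORITIES
from torchdata.datapipes.iter import IterableWrapper

f = lambda x: x  # placeholder per-branch work
g = lambda x: x  # placeholder post-mux work

a = (IterableWrapper(range(10))
     .sharding_round_robin_dispatch(SHARDING_PRIORITIES.MULTIPROCESSING)
     .map(f))   # observed: runs on a single worker process
b = (IterableWrapper(range(10, 20))
     .sharding_round_robin_dispatch(SHARDING_PRIORITIES.MULTIPROCESSING)
     .map(f))   # observed: same worker as above
pipe = a.mux(b).map(g)  # only this map is spread across workers round-robin
```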
### Suggest a potential alternative/fix
The documentation for `Multiplexer`, `ShardingRoundRobinDispatcher`, and/or `MultiProcessingReadingService` should be updated to clarify what the intended behavior is here.
|
https://github.com/meta-pytorch/data/issues/1175
|
open
|
[] | 2023-05-30T20:36:43Z
| 2023-05-31T07:48:21Z
| 3
|
JohnHBrock
|
pytorch/data
| 1,174
|
Support for proper Distributed & Multiprocessing Sharding
|
### 🚀 The feature
In MPI-based training, each process is independent from each other. Each training process might want to speed up dataloading using multiprocessing (MP). This requires data sharding to take place on two levels:
A. On a distributed level, usually resulting in big(ger) shards.
B. On a MP level later on, further splitting those big shards among worker processes.
While (A.) might potentially shard on a coarser, logical scale (e.g. on years or months if working with climatological data), (B.) might potentially shard directly on already loaded data (e.g. on indices of the previous shards).
Right now, combining distributed & MP sharding in torchdata faces two hurdles that need addressing:
1. Due to an optional check in , there can only be a single `sharding_pipe()`. This check, however, does not take into account whether a sharding pipe only operates on a specific sharding group / priority. This issue is already tracked by https://github.com/pytorch/data/issues/1082. A simple fix is to drop the check altogether.
2. torchdata assumes a single sharding (and distribution) model: Namely that distributed & MP shards are on the same logical level and that those are distributed in a round-robin fashion to worker processes. This is enforced in https://github.com/pytorch/data/blame/main/torchdata/dataloader2/utils/worker.py#L82 which prevents more general sharding strategies.
Overall, these two hurdles need addressing via monkey patching at the moment to enable more general sharding strategies (see motivation for an use case and example of such a strategy). https://github.com/sehoffmann/atmodata/blob/6a7c2974a5de1354a7156d427bf53899fc6c0177/atmodata/patching.py shows what patches need to be done.
Specifically:
- The check in `apply_sharding()` needs to be removed
- `process_init_fn()` should call `apply_sharding()` on the whole pipe, not only on non-dispatching branches.
- `pipe.repeat(n_workers).sharding_round_robin_dispatch()` needs to be used as a workaround to distribute the same shard to all workers. For this, an additional pipe should be introduced (just `dispatch()`).
Instead of having to monkey-patch, torchdata should be less restrictive wrt. sharding and distribution strategies.
### Motivation, pitch
I'm working with climatological timeseries data on the terabyte scale. The sharding strategy and MP strategy that, in my humble opinion, makes the most sense for this use case looks like this:
1. Shard (distributed) across the time-dimension on a logical level. Single shards could e.g. represent a single month, be contained in a single file, and be multiple gigabytes in size. These shards are pre loaded by the main process via network and in parallel.
2. The **same** shard is distributed to each worker process via shared memory (to reduce memory overhead). E.g. each worker process sees the same shard/month. Now this "super-shard" is sharded further among worker processes by accessing only a subset of the indices. The time-resolution could e.g. be 1h.
3. Batches from individual workers are aggregated by the main thread again.
Overall, this pipelines roughly looks like this:
```
# Main Thread - Pre-loading
months = IterableWrapper(["1979-Jan", "1979-Feb", ..., "2020-Dec"])
pipe = months.shuffle().sharding_filter(DISTRIBUTED)
pipe = pipe.load_data().prefetch()
pipe = pipe.repeat(n_workers).round_robin_dispatch()
# Worker Process
pipe = pipe.unroll_indices() # -> yields (idx, data) tuples where data is the whole shard and idx are akin to enumerate()
pipe = pipe.shuffle().sharding_filter(MULTIPROCESSING)
pipe = pipe.do_work_on_sample()
pipe = pipe.batch()
# Main Thread - Post-process
pipe = pipe.non_replicable() # non-replicable No-Op pipeline to force transfer to main thread
pipe = pipe.post_process()
```
#### Why can't individual worker processes operate independently on the same shards as in (1.), i.e. months?
Shards can be fairly big in size. If every worker operated on independent shards, memory consumption might explode. Furthermore, worker processes might compete for shared network IO bandwidth. Also, depending on the shard size, there are potentially not that many shards in the dataset. This would then impose a maximum on the number of GPUs for training.
#### Why can't you reduce the shard size then? E.g. weeks instead of months
We are cropping timeseries from those shards. We thus always have some data waste at the end (or start) of each shard from which we can't crop. Reducing the shard size would increase the amount of data we would need to throw away. Furthermore, loading a few big shards via network is much more efficient than loading many small shards, and we want to utilize our network interface as much as possible for maximum throughput.
#### Why can't you shard directly on the index level and then distribute in a round-robin fashion?
This would be horrendously slow.
Overall, the difficulties with this kind
|
https://github.com/meta-pytorch/data/issues/1174
|
open
|
[] | 2023-05-30T16:33:59Z
| 2023-05-30T16:40:35Z
| 0
|
sehoffmann
|
pytorch/tutorials
| 2,355
|
💡 [REQUEST] - Write a tutorial about how to leverage AMX with PyTorch on the 4th Gen of Xeon
|
### 🚀 Describe the improvement or the new tutorial
The 4th Generation Intel® Xeon® Scalable Processor platform is a unique, scalable platform optimized for accelerating different AI workloads. The new built-in AI acceleration engine, Intel® Advanced Matrix Extensions (AMX), is able to accelerate a variety of AI inference and training workloads (NLP, recommendation systems, image recognition…) with the BF16 and INT8 datatypes.
PyTorch has enabled AMX support for computation-intensive operators, e.g. Conv2d, ConvTranspose2d, Linear, MatMul, bmm, with the `torch.bfloat16` datatype and int8 on the quantization backend. It would be good to write a tutorial telling users how to leverage AMX with PyTorch.
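As a candidate snippet for such a tutorial, the minimal user-facing code is just bf16 autocast on CPU (AMX itself is picked automatically by oneDNN on supported Xeons; this is a sketch, not the final tutorial text):
```python
import torch

model = torch.nn.Linear(1024, 1024).eval()
x = torch.randn(8, 1024)

# On 4th-gen Xeon, bf16 matmul/conv dispatched through oneDNN can use AMX under the hood.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)
print(y.dtype)  # torch.bfloat16
```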
### Existing tutorials on this topic
_No response_
### Additional context
We aim to complete the document as part of PyTorch Docathon 2023. cc @jgong5 @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233 @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen @caoe
|
https://github.com/pytorch/tutorials/issues/2355
|
closed
|
[
"docathon-h1-2023",
"advanced",
"intel"
] | 2023-05-30T03:02:23Z
| 2023-11-02T19:30:05Z
| null |
mingfeima
|
huggingface/diffusers
| 3,602
|
What is the default for VAE option?
|
If "VAE" is not specified for "Stable Diffusion," what is the default applied?
|
https://github.com/huggingface/diffusers/issues/3602
|
closed
|
[] | 2023-05-29T15:42:19Z
| 2023-06-08T10:30:27Z
| null |
Michi-123
|
pytorch/android-demo-app
| 322
|
I have a Whisper-based model. How can I convert it to fairseq.dict format ?
|
model https://huggingface.co/openai/whisper-large-v2
|
https://github.com/pytorch/android-demo-app/issues/322
|
open
|
[] | 2023-05-29T08:52:30Z
| 2023-05-29T09:00:13Z
| null |
Roland-Du
|
huggingface/transformers.js
| 125
|
[Question] Why is running a transformer in JS faster than in Python?
|
I created a repo to test how to use transformers.
https://github.com/pitieu/huggingface-transformers
I was wondering why running the same models in JavaScript is faster than running them in Python.
Is `Xenova/vit-gpt2-image-captioning` optimized somehow compared to `nlpconnect/vit-gpt2-image-captioning`?
I ran it on my Mac M1.
|
https://github.com/huggingface/transformers.js/issues/125
|
closed
|
[
"question"
] | 2023-05-28T05:23:05Z
| 2023-07-16T17:21:39Z
| null |
pitieu
|
huggingface/safetensors
| 258
|
ONNX has just become twice as fast as before. Can SafeTensors also achieve that?
|
Here are some announcements and technical details. It's nice to see that they are making significant improvements. Could some of that be useful and implemented for SafeTensors?
https://devblogs.microsoft.com/directx/dml-stable-diffusion/
https://www.tomshardware.com/news/nvidia-geforce-driver-promises-doubled-stable-diffusion-performance
https://build.microsoft.com/en-US/sessions/47fe414f-97b8-4b71-ae9e-be9602713667

|
https://github.com/huggingface/safetensors/issues/258
|
closed
|
[] | 2023-05-27T12:23:01Z
| 2023-06-07T09:26:24Z
| 2
|
WEBPerformace
|
huggingface/datasets
| 5,906
|
Could you unpin responses version?
|
### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to the test requirements? This is a testing library, and we use it for our own tests as well. We do not want to be stuck on a very outdated version.
### Steps to reproduce the bug
could not install this library due to dependency conflict.
### Expected behavior
can install datasets
### Environment info
linux 64
|
https://github.com/huggingface/datasets/issues/5906
|
closed
|
[] | 2023-05-26T20:02:14Z
| 2023-05-30T17:53:31Z
| 0
|
kenimou
|
pytorch/tutorials
| 2,352
|
💡 [REQUEST] - Port TorchRL `Pendulum` tutorial from pytorch.org/rl to pytorch.org/tutorials
|
### 🚀 Describe the improvement or the new tutorial
For historical reasons, TorchRL privately hosts a bunch of tutorials.
We'd like to bring the most significant ones to pytorch tutorials for more visibility.
Here is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/pendulum.py).
Environments (or simulators) are a core part of many RL algorithms. The OpenAI Gym API has had great success in the past years and paved the way for RL researchers to quickly test ideas with an easy-to-use tool.
As a PyTorch-first library, torchrl aims to be (1) oblivious to the simulator (Gym or other), (2) reliant on PyTorch for anything we can in the simulation process, (3) well integrated within the library, and (4) able to cover many different types of environments (simulators, real-life hardware, model-based, RLHF, etc.). For these reasons, TorchRL proposes its own class of environments. We have a dedicated tutorial that covers their design and usage: you can help us port it where it belongs!
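For a flavour of the API the tutorial covers, a minimal usage sketch (assuming `torchrl` and `gym` with the Pendulum environment are installed; this snippet is illustrative, not part of the tutorial itself):
```python
from torchrl.envs.libs.gym import GymEnv

# wrap a Gym environment in TorchRL's environment API
env = GymEnv("Pendulum-v1")

# reset/step exchange TensorDicts instead of raw arrays
td = env.reset()
rollout = env.rollout(max_steps=3)  # random actions when no policy is passed
print(rollout)  # a TensorDict holding the collected trajectory
```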
Steps:
1. Port the tutorial from the RL repo to the tutorials repo.
2. Fix any formatting issues or typos.
3. Make sure the tutorial follows the tutorial template ([template_tutorial.py](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py))
4. Preserve the original author
### Existing tutorials on this topic
_No response_
### Additional context
The tutorial should not require extra dependencies beyond those already present in requirements.txt
cc @nairbv @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233
|
https://github.com/pytorch/tutorials/issues/2352
|
closed
|
[
"medium",
"docathon-h2-2023"
] | 2023-05-26T19:50:31Z
| 2023-11-09T20:47:06Z
| 4
|
vmoens
|
pytorch/tutorials
| 2,351
|
💡 [REQUEST] - Port TorchRL "Coding a DDPG loss" from pytorch.org/rl to pytorch.org/tutorials
|
### 🚀 Describe the improvement or the new tutorial
For historical reasons, TorchRL privately hosts a bunch of tutorials.
We'd like to bring the most significant ones to pytorch tutorials for more visibility.
Here is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/coding_ddpg.py).
TorchRL splits what is commonly referred to as an Agent in other frameworks into various pieces that echo what can be found in other domains: data collection, datasets, transforms, and losses. A dedicated class named LossModule covers this last functionality. We have a tutorial that instructs users on how to build and use such classes; you can help us port it to pytorch tutorials!
Steps:
1. Port the tutorial from the RL repo to the tutorials repo.
2. Fix any formatting issues or typos.
3. Make sure the tutorial follows the tutorial template ([template_tutorial.py](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py))
4. Preserve the original author
### Existing tutorials on this topic
_No response_
### Additional context
The tutorial should not require extra dependencies beyond those already present in requirements.txt
cc @nairbv
|
https://github.com/pytorch/tutorials/issues/2351
|
closed
|
[
"docathon-h1-2023",
"medium"
] | 2023-05-26T19:45:04Z
| 2023-06-13T16:15:45Z
| 2
|
vmoens
|
pytorch/tutorials
| 2,350
|
~PyTorch Docathon H1 2023~
|
# 🎉 It's a wrap! 🎉
See our [leaderboard](https://github.com/pytorch/tutorials/blob/main/docathon-leaderboard.md) and [blog post](https://pytorch.org/blog/docathon-h1-2023-wrap-up/). Thank you to everyone who contributed and congrats to the winners!
We have a large backlog of issues that we want to address and it's a great opportunity for you to start contributing to PyTorch. We have limited this docathon to the [pytorch/tutorials](https://github.com/pytorch/tutorials) and [pytorch/examples](https://github.com/pytorch/examples) repositories, so please work on the issues from these two repositories.
# Date and location
**WHEN:** The docathon starts on May 31st 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on June 11th.
**WHERE:** Virtual
**WHAT:** Issues with the **docathon-h1-2023** label - will be posted on May 31.
Watch our intro video to learn more details about the event.
[](https://youtu.be/qNAZtYowAM0)
# Can everyone participate?
We encourage everyone to consider participating in the docathon but there are a few things we expect from the participants:
- You must have a GitHub account and know how to use Git and GitHub, how to submit or rebase your PR on the latest main branch, how to fork or clone the repo, how to view errors in the CI and troubleshoot. We reserve the right to reject incorrectly submitted PRs.
- You must be familiar with Python, the basics of Machine Learning, and have at least a basic knowledge of PyTorch. Familiarity with Sphinx, sphinx-gallery, and reStructuredText is a plus.
Before you start contributing make sure to read [Linux Foundation Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/).
# What contributions are we looking for?
All issues for this docathon are tagged with the **docathon-h1-2023** label. Please note that contributions that address other issues won't be counted. We are primarily looking for the following contributions:
**NOTE:** Please avoid working on issues with **intel**, **amd**, and **nvidia** labels which are reserved for our partners.
- Bug fixes in the [pytorch/tutorials](https://github.com/pytorch/tutorials) repo tagged with the docathon-h1-2023 label - see [the list](https://github.com/pytorch/tutorials/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h1-2023).
- New examples in the [pytorch/examples](https://github.com/pytorch/examples) repo tagged with the docathon-h1-2023 label - see [the issue](https://github.com/pytorch/examples/issues?q=is%3Aopen+is%3Aissue+label%3Adocathon-h1-2023).
**NOTE:** Due to the large number of RSVPs, the tasks are provided on a first-come, first-served basis; please don't hoard the tasks!
# Difficulty Levels
The issues have three levels of difficulty: **easy**, **medium**, and **advanced**. If this is your first time contributing to PyTorch, we recommend that you start with an issue that is tagged as **easy**.
# How to contribute to tutorials?
1. Read [pytorch/tutorials/CONTRIBUTING.md](https://github.com/pytorch/tutorials/blob/main/CONTRIBUTING.md) for general guidelines on how the submission process works and overall style and voice.
2. Pick an issue that is labeled as **docathon-h1-2023**.
3. In the issue, add a comment with the text /assigntome. If the issue is already assigned, please find another issue to work on. We ask that you assign one issue at a time - we want to give everyone a fair chance to participate. When you are done with one issue and get it approved, you can assign another one to yourself and start working on it.
4. If you are submitting a new tutorial, use [this template](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py).
5. Fork or clone the PyTorch repository to your computer. For simple fixes, like incorrect URLs, you could use the GitHub UI as well.
6. Create a branch and work on the fix.
7. Test your fix by running the single tutorial locally. Don't run the whole build as it takes hours and requires a GPU. You can run one tutorial as a script python3 <tutorial-name.py> or GALLERY_PATTERN="neural_style_transfer_tutorial.py" make html
8. After you fix all the issues, you are ready to submit your PR.
# Submit Your PR
1. Submit your PR referencing the issue you've picked. For example:
<img width="1058" alt="s_pytorch_pr_example" src="https://github.com/pytorch/tutorials/assets/5317992/f838571a-83d0-4908-94b6-3f7e3b200825">
3. If you have not yet, sign the Contributor License Agreement (CLA) - prompted as a check in the PR. We can't accept any PRs without a signed CLA.
4. Watch for any CI errors and fix as needed - all checks must pass successfully.
5. There are two ways to check the resulting HTML. For simple fixes and .rst files, you can check the
|
https://github.com/pytorch/tutorials/issues/2350
|
closed
|
[
"docathon-h1-2023"
] | 2023-05-26T19:09:32Z
| 2023-06-20T18:59:49Z
| 14
|
svekars
|
pytorch/tutorials
| 2,349
|
💡 [REQUEST] - Port TorchRL `Recurrent DQN` tutorial from pytorch.org/rl to pytorch.org/tutorials
|
### 🚀 Describe the improvement or the new tutorial
For historical reasons, TorchRL privately hosts a bunch of tutorials.
We'd like to bring the most significant ones to pytorch tutorials for more visibility.
Here is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/dqn_with_rnn.py).
In RL, we often add an RNN to a model to account for past observations when executing a policy. Think of it this way: if your policy just sees a single image when playing a computer game, it will have little context about what is really happening there. If you keep a memory of past events, your performance will drastically improve.
This is useful not only in the context of Partially Observable MDPs but more broadly than that.
Storing recurrent values can be tricky, and TorchRL brings its own solution to this problem. This tutorial explains it.
Steps:
1. Port the tutorial from the RL repo to the tutorials repo.
2. Fix any formatting issues or typos.
3. Make sure the tutorial follows the tutorial template ([template_tutorial.py](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py))
4. Preserve the original author
### Existing tutorials on this topic
_No response_
### Additional context
The tutorial should not require extra dependencies beyond those already present in requirements.txt.
cc @nairbv @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2349
|
closed
|
[
"medium",
"docathon-h2-2023"
] | 2023-05-26T16:27:51Z
| 2023-11-08T16:40:10Z
| 4
|
vmoens
|
huggingface/datasets
| 5,905
|
Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently
|
### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally intensive audio processing to do. As a result I want to load data from my remote when it is needed and perform all processing on the fly.
I am currently using the iterable dataset feature of _datasets_. It does everything I need, with one exception: when resuming training at a step n, we have to download all the data and perform the processing of steps < n just to get the iterable to the right step. In my case this takes almost as long as training for the same number of steps, which makes resuming training from a checkpoint useless in practice.
I understand that the nature of iterators makes it probably nearly impossible to quickly resume training.
I thought about a possible solution nonetheless :
I could in fact index my large dataset and make it a mapped dataset. Then I could use set_transform to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows [skipping steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset.
Is it possible to lazily load samples of a mapped dataset? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script); maybe something can be done there.
If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it into a _datasets_ dataset to get all the features of _datasets_. Is that possible?
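To make the request concrete, here is a minimal sketch of the lazy, on-access processing I have in mind with a mapped dataset and `set_transform` (the column names and transform are made up for illustration):
```python
from datasets import Dataset

# a mapped (indexable) dataset holding only lightweight references
ds = Dataset.from_dict({"audio_path": ["a.wav", "b.wav", "c.wav"]})

def preprocess(batch):
    # the expensive work (download, audio processing, ...) would happen here
    batch["n_chars"] = [len(p) for p in batch["audio_path"]]
    return batch

# the transform only runs when rows are actually accessed
ds.set_transform(preprocess)
print(ds[0])  # processing happens lazily, at access time
```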
### Your contribution
I could provide a PR to allow lazy loading of mapped datasets, or the conversion of a mapped _PyTorch_ dataset into a _Datasets_ dataset, if you think it is a useful new feature.
|
https://github.com/huggingface/datasets/issues/5905
|
open
|
[
"enhancement"
] | 2023-05-26T12:33:02Z
| 2023-06-15T13:34:18Z
| 1
|
bruno-hays
|
pytorch/tutorials
| 2,347
|
💡 [REQUEST] - Tutorial on extending TorchX
|
### 🚀 Describe the improvement or the new tutorial
Create a better tutorial showing how to extend torchx.
### Existing tutorials on this topic
https://pytorch.org/torchx/latest/custom_components.html
### Additional context
_No response_
cc @msaroufim @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2347
|
open
|
[
"advanced",
"module: torchx",
"docathon-h2-2023"
] | 2023-05-25T22:32:28Z
| 2023-11-19T17:51:58Z
| 12
|
sekyondaMeta
|
pytorch/tutorials
| 2,346
|
💡 [REQUEST] - How to use TorchServe on Vertex
|
### 🚀 Describe the improvement or the new tutorial
Create a tutorial on how to use TorchServe on Vertex AI
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
cc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2346
|
closed
|
[
"torchserve",
"advanced",
"docathon-h2-2023"
] | 2023-05-25T19:54:42Z
| 2023-11-15T00:29:15Z
| null |
sekyondaMeta
|
pytorch/tutorials
| 2,345
|
💡 [REQUEST] - How to use TorchServe on AWS SageMaker
|
### 🚀 Describe the improvement or the new tutorial
Create a tutorial on how to use TorchServe on AWS SageMaker
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
cc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2345
|
open
|
[
"torchserve",
"advanced",
"docathon-h2-2023"
] | 2023-05-25T19:53:36Z
| 2023-11-09T23:01:20Z
| null |
sekyondaMeta
|
pytorch/tutorials
| 2,341
|
💡 [REQUEST] - How to use TorchServe Large Model Inference: walk through an example
|
### 🚀 Describe the improvement or the new tutorial
Create a new tutorial showing a walk through example of TorchServe Large Model Inference
### Additional context
You can find some content to use here:
https://github.com/pytorch/serve/blob/master/docs/large_model_inference.md
https://github.com/pytorch/serve/tree/master/examples/large_models/Huggingface_pippy
cc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2341
|
open
|
[
"torchserve",
"advanced",
"docathon-h2-2023"
] | 2023-05-24T20:39:18Z
| 2023-11-01T16:48:43Z
| null |
sekyondaMeta
|
pytorch/tutorials
| 2,340
|
💡 [REQUEST] - How to use TorchServe: Walk through an example
|
### 🚀 Describe the improvement or the new tutorial
We could use an updated tutorial/walk-through example on how to use TorchServe. The closest thing we have is the TorchServe Getting Started page located [here](https://github.com/pytorch/serve/blob/master/docs/getting_started.md).
### Existing tutorials on this topic
TorchServe Getting started: https://github.com/pytorch/serve/blob/master/docs/getting_started.md
### Additional context
_No response_
cc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2340
|
open
|
[
"torchserve",
"advanced",
"docathon-h2-2023"
] | 2023-05-24T20:20:52Z
| 2023-11-06T20:14:07Z
| null |
sekyondaMeta
|
huggingface/chat-ui
| 263
|
[question] Where should we discuss chat-ui roadmap?
|
Is there a forum to discuss future features?
I need to implement some sort of UI component for answer references. Something like perplexity.ai "pills" under the answer.
I guess this is useful for others, and I would like to discuss how I should implement such a thing beforehand.
- should I use pills?
- should I create a special message component?
- maybe horizontal scrolling on "facts"/references?
Is there a place for this kind of discussion? Am I the only one with this demand?
|
https://github.com/huggingface/chat-ui/issues/263
|
closed
|
[] | 2023-05-24T13:17:47Z
| 2023-05-26T02:22:29Z
| 1
|
fredguth
|
pytorch/xla
| 5,063
|
How can I use the flash attention in pytorch/xla GPU mode?
|
## ❓ Questions and Help
Hello, [Flash Attention](https://arxiv.org/abs/2205.14135) is a method to produce tiled and fused kernels such that the tiled parameters can fit onto the device SRAM.
May I ask to what degree this technique has been applied to pytorch/XLA?
And how do I use the `flash attention` library in PyTorch/XLA GPU mode?
And how do I use similar third_party custom operator libraries?
Thanks.
Resources
Triton [example implementation](https://github.com/openai/triton/blob/main/python/tutorials/06-fused-attention.py)
https://github.com/HazyResearch/flash-attention
https://github.com/lucidrains/flash-attention-jax
|
https://github.com/pytorch/xla/issues/5063
|
closed
|
[
"question"
] | 2023-05-24T08:42:40Z
| 2025-04-30T13:04:03Z
| null |
wbmc
|
huggingface/optimum
| 1,069
|
llama-7b inference reports `Failed to allocate memory for requested buffer of size 180355072`
|
### System Info
```shell
optimum 1.8.5, 32g v100
```
### Who can help?
@JingyaHuang
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
model_id = "my finetund llama-7b"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
# Load the optimization configuration detailing the optimization we wish to apply
optimization_config = AutoOptimizationConfig.O3(for_gpu=True)
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
model = ORTModelForCausalLM.from_pretrained(save_dir,provider="CUDAExecutionProvider")
```
### Expected behavior
Successfully loaded and ready for generation.
But it gives
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization:
/onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void*
onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool,
onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 180355072
```
I guess this is actually OOM? It seems that the fp16 ONNX conversion has some issue. But with fp32, the two llama-7b models (normal and with_past) are too big for a single card. Is there any solution for this? I don't see any multi-GPU inference in Optimum's docs.
`model = ORTModelForCausalLM.from_pretrained(model_id, export=True)`
I think this model is fp32? Is there a way to make this model fp16? Then maybe I don't need ONNX to convert to fp16.
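For reference, this is the direction I am considering; I am assuming here that `AutoOptimizationConfig.O4` adds fp16 mixed precision on top of O3 for GPU, please correct me if that is wrong:
```python
from optimum.onnxruntime import ORTModelForCausalLM, ORTOptimizer
from optimum.onnxruntime.configuration import AutoOptimizationConfig

model_id = "my-finetuned-llama-7b"  # placeholder
save_dir = "llama7b-onnx-o4"        # placeholder output directory

model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
optimizer = ORTOptimizer.from_pretrained(model)

# assumption: O4 = O3 optimizations + fp16 (GPU only)
optimization_config = AutoOptimizationConfig.O4(for_gpu=True)
optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)

model = ORTModelForCausalLM.from_pretrained(save_dir, provider="CUDAExecutionProvider")
```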
Thank you!
|
https://github.com/huggingface/optimum/issues/1069
|
closed
|
[
"bug",
"onnxruntime"
] | 2023-05-23T09:50:36Z
| 2023-06-19T05:05:01Z
| 6
|
drxmy
|
huggingface/chat-ui
| 258
|
Language change during chat
|
While writing in German, it answers in English. Before it always used to work...
Photo:

|
https://github.com/huggingface/chat-ui/issues/258
|
closed
|
[
"support"
] | 2023-05-23T08:41:44Z
| 2023-07-24T11:46:33Z
| 2
|
Mbuni21
|
huggingface/transformers.js
| 122
|
[Question] Basic Whisper Inference vs Speed of Demo Site
|
Hello, I love the library~ thanks for making it!
I am trying to use the Whisper inference method displayed on the demo site, but it's running really slowly.
It's taking me about 20 seconds to run it locally vs a few seconds on the demo site.
Is there some magic behind the scenes that I'm missing?
I'm just running a simple post message and listening for the updates:
```
worker.postMessage({
task: 'automatic-speech-recognition',
audio: file,
generation: {
do_sample: false,
max_new_tokens: 50,
num_beams: 1,
temperature: 1,
top_k: 0
}
});
worker.addEventListener('message', event => {
const data = event.data;
if(data.type === 'update') {
let elem = document.getElementById("whisper");
elem.value = data.data
}
});
```
|
https://github.com/huggingface/transformers.js/issues/122
|
closed
|
[
"question"
] | 2023-05-23T05:55:40Z
| 2023-06-10T22:41:15Z
| null |
jpg-gamepad
|
pytorch/tutorials
| 2,336
|
💡 [REQUEST] - Write a Tutorial for PyTorch 2.0 Export Quantization Frontend (Quantizer and Annotation API)
|
### 🚀 Describe the improvement or the new tutorial
In PyTorch 2.0, we have a new quantization path that is built on top of the graph captured by torchdynamo.export, see an example flow here: https://github.com/pytorch/pytorch/blob/main/test/quantization/pt2e/test_quantize_pt2e.py#L907, it requires backend developers to write a quantizer, we have an existing quantizer object defined for QNNPack/XNNPack here: https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/_pt2e/quantizer/qnnpack_quantizer.py#L176.
The API that quantizer is interfacing with is called Annotation API, and we just finished design and implementation (WIP as of 05/22, but should be done this week) of this API, and would like to have a tutorial that walks through how to annotate nodes using this API.
Design Doc for Annotation API: https://docs.google.com/document/d/1tjIsL7-uVgm_1bv_kUK7iovP6G1D5zcbzwEcmYEG2Js/edit# please ping @jerryzh168 for access.
General Design Doc for the quantization path in pytorch 2.0: https://docs.google.com/document/d/1_jjXrdaPbkmy7Fzmo35-r1GnNKL7anYoAnqozjyY-XI/edit#
What should the tutorial contain:
1. overall introduction for pytorch 2.0 export flow, quantizer and annotation API
2. how to annotate common operator patterns (https://docs.google.com/document/d/1tjIsL7-uVgm_1bv_kUK7iovP6G1D5zcbzwEcmYEG2Js/edit#heading=h.it9h4gjr7m9g), maybe use add as an example instead since bias is not properly handled in the example
3. how to annotate sharing qparams operators, e.g. cat or add with two inputs sharing quantization parameters
4. how to annotate fixed qparams operators, e.g. sigmoid (https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/backend_config/_common_operator_config_utils.py#L74)
5. how to annotate bias for linear (DerivedQuantizationSpec)
6. put everything together and play around with a toy model and check the output quantized model (after convert_pt2e)
### Existing tutorials on this topic
The most relevant tutorial that we have written (by @andrewor14 ) is this:
* https://pytorch.org/tutorials/prototype/backend_config_tutorial.html?highlight=fx%20graph%20mode%20quantization
### Additional context
_No response_
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233
|
https://github.com/pytorch/tutorials/issues/2336
|
closed
|
[
"docathon-h1-2023",
"advanced",
"intel"
] | 2023-05-22T23:14:04Z
| 2023-06-09T23:16:37Z
| 2
|
jerryzh168
|
pytorch/xla
| 5,043
|
graceful shutdown on TPU, the proper way to handle SIGINT / SIGTERM in TPU code (using PJRT runtime)?
|
## ❓ Questions and Help
Hi,
I would like some cleanup code (writing a final checkpoint, flushing a logger, etc.) to run in the process that has `xm.is_master_ordinal() == True`. I am using the pjrt backend. I attempted this:
```python
if xm.is_master_ordinal():
signal.signal(signal.SIGINT, my_handler)
```
or to register it for all processes but have the `xm.is_master_ordinal()` test inside the handler.
Unfortunately, I see the error that a signal handler cannot be registered except on the main thread.
Is there a recommended way to accomplish graceful shutdown of a training run on TPU?
```
File "aiayn/train.py", line 325, in main
xmp.spawn(_mp_fn, args=(resume_ckpt, hps_overrides), nprocs=None)
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 386, in spawn
return pjrt.spawn(fn, nprocs, start_method, args)
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py", line 365, in spawn
_run_multiprocess(spawn_fn, start_method=start_method)
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py", line 92, in wrapper
return fn(*args, **kwargs)
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py", line 322, in _run_multiprocess
replica_results = list(
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py", line 323, in <genexpr>
itertools.chain.from_iterable(
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/process.py", line 484, in _chain_from_iterable_of_lists
for element in iterable:
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
yield fs.pop().result()
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/home/henry/miniconda3/envs/aiayn/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
ValueError: signal only works in main thread
```
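As a possible fallback (not something I have found documented, just what I am considering), I could wrap the spawned function in a try/finally instead of relying on signal handlers; `train` and `cleanup` below are placeholders for my own code:
```python
import torch_xla.core.xla_model as xm

def train():    # placeholder for the real training loop
    pass

def cleanup():  # placeholder: write a final checkpoint, flush the logger, ...
    pass

def _mp_fn(index, *args):
    try:
        train()
    finally:
        # runs when train() returns or raises (e.g. KeyboardInterrupt in this process)
        if xm.is_master_ordinal():
            cleanup()
```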
|
https://github.com/pytorch/xla/issues/5043
|
open
|
[
"question",
"needs reproduction"
] | 2023-05-22T19:18:43Z
| 2025-04-30T13:13:59Z
| null |
hrbigelow
|
huggingface/datasets
| 5,880
|
load_dataset from s3 file system through streaming can't not iterate data
|
### Describe the bug
I have a JSON file in my s3 file system (MinIO). I can use load_dataset to get the file link, but I can't iterate over it.
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1">
We can change 4 lines to fix this bug; could you check whether that is OK?
<img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3">
### Steps to reproduce the bug
1. storage a file in you s3 file system
2. use load_dataset to read it through streaming
3. iterate it
### Expected behavior
can iterate it successfully
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
https://github.com/huggingface/datasets/issues/5880
|
open
|
[] | 2023-05-22T07:40:27Z
| 2023-05-26T12:52:08Z
| 4
|
janineguo
|
huggingface/chat-ui
| 256
|
changing model to 30B in the .env file
|
Here is the model I am using, which is 12B; I want to change it to 30B.
The default one:
```
MODELS=`[
  {
    "name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
    "datasetName": "OpenAssistant/oasst1",
    "description": "A good alternative to ChatGPT",
    "websiteUrl": "https://open-assistant.io",
    "userMessageToken": "<|prompter|>",
    "assistantMessageToken": "<|assistant|>",
    "messageEndToken": "</s>",
```
This is what I changed it to:
```
    "name": "OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor",
    "datasetName": "OpenAssistant/oasst1",
    "description": "A good alternative to ChatGPT",
    "websiteUrl": "https://open-assistant.io",
    "userMessageToken": "<|prompter|>",
    "assistantMessageToken": "<|assistant|>",
    "messageEndToken": "</s>",
```
I get an error when I run the model/chat-ui:
```
Model not found & Could not parse last message {"error":"Task not found for this model"}
SyntaxError: Unexpected end of JSON input
    at JSON.parse (<anonymous>)
    at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:178:32)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async saveMessage (/src/routes/conversation/[id]/+server.ts:94:26)
```
Please help if you know how to change the model to the 30B OpenAssistant one.
|
https://github.com/huggingface/chat-ui/issues/256
|
closed
|
[
"support"
] | 2023-05-21T18:30:04Z
| 2023-06-19T09:34:10Z
| 5
|
C0deXG
|
pytorch/xla
| 5,039
|
nightly version/ kaggle tpu
|
## ❓ Questions and Help
Hi, I installed the PyTorch/XLA nightly on a Kaggle notebook TPU. It was working fine, but a week ago it started giving this error:
`FileNotFoundError: [Errno 2] No such file or directory: 'gsutil'`

|
https://github.com/pytorch/xla/issues/5039
|
open
|
[
"question"
] | 2023-05-21T09:31:40Z
| 2025-04-30T13:17:50Z
| null |
dina-fahim103
|
huggingface/transformers.js
| 119
|
[Question] A WebGPU-accelerated ONNX inference run-time
|
Is it possible to use https://github.com/webonnx/wonnx with transformers.js?
|
https://github.com/huggingface/transformers.js/issues/119
|
closed
|
[
"question"
] | 2023-05-21T06:11:20Z
| 2024-10-18T13:30:07Z
| null |
ansarizafar
|
huggingface/chat-ui
| 255
|
how to prompt it
|
How can I prompt this model to act a certain way, like `your food assistant and you will provide the best food assistant`? How can I prompt it? Because it is all over the place when I run this model :(
|
https://github.com/huggingface/chat-ui/issues/255
|
closed
|
[
"support"
] | 2023-05-20T21:41:46Z
| 2023-06-01T13:00:48Z
| 1
|
C0deXG
|
huggingface/setfit
| 376
|
How to get the number of parameters in a SetFitModel object?
|
The context is I would like to compare the parameter sizes of different models. Is there a way to count the model parameters in a SetFitModel object? Something like model.count_params() in keras. Thanks!
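Something like the following generic parameter count is what I am after; I am assuming the sentence-transformer body is exposed as `model_body` and the head as `model_head` (the default head may be an sklearn LogisticRegression rather than a torch module):
```python
import torch
from setfit import SetFitModel

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

def count_params(module: torch.nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

n_body = count_params(model.model_body)  # the sentence-transformer backbone
head = model.model_head
# differentiable heads are torch modules; sklearn heads have no .parameters()
n_head = count_params(head) if isinstance(head, torch.nn.Module) else 0
print(n_body + n_head)
```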
|
https://github.com/huggingface/setfit/issues/376
|
closed
|
[
"question"
] | 2023-05-19T23:58:53Z
| 2023-12-05T14:47:55Z
| null |
yihangit
|
pytorch/examples
| 1,153
|
Just get a low accuracy of 75.8 with resnet50 on ImageNet
|
I train resnet50 on ImageNet with GPUs=8, batchsize=256, learning-rate=0.1, epochs=90, and momentum=0.90.
The attained top1 accuracy is 75.80, lower than the reported 76.15. The gap is not marginal on the large-scale ImageNet.
Why does the difference exist?
|
https://github.com/pytorch/examples/issues/1153
|
open
|
[] | 2023-05-19T22:45:33Z
| 2023-12-12T04:19:09Z
| 2
|
mountain111
|
huggingface/chat-ui
| 252
|
Users can't get past "Start Chatting" modal - ethicsModelAcceptedAt not getting set?
|
<img width="836" alt="image" src="https://github.com/huggingface/chat-ui/assets/1438064/28a3d7f1-65e4-4b61-a82b-ffc78eb3e074">
Let me know what more info you need to debug. It just keeps redirecting back to home and never clears the modal.
|
https://github.com/huggingface/chat-ui/issues/252
|
open
|
[
"support",
"p2"
] | 2023-05-19T19:33:33Z
| 2024-01-26T08:44:39Z
| 7
|
cfregly
|
pytorch/tutorials
| 2,326
|
TorchVision Instance Segmentation Finetuning Tutorial - No module named 'torch._six'
|
### 🚀 Describe the improvement or the new tutorial
The torch._six module was deprecated and removed from PyTorch starting from version 1.7.0. The code is not working because of that. How can I adjust it to make it work?
### Existing tutorials on this topic
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/2326
|
closed
|
[] | 2023-05-19T14:41:15Z
| 2023-08-04T12:00:23Z
| 3
|
weronikawiera
|
huggingface/optimum
| 1,061
|
mpt model support?
|
### Feature request
Can you please add mpt model support to this library?
### Motivation
Just testing things, and MPT seems to be unsupported by multiple Hugging Face libraries.
### Your contribution
I'm just getting started, so I'm not sure if I'll be of any help.
|
https://github.com/huggingface/optimum/issues/1061
|
closed
|
[] | 2023-05-19T09:28:28Z
| 2023-07-06T16:37:01Z
| 7
|
sail1369
|
huggingface/datasets
| 5,875
|
Why split slicing doesn't behave like list slicing ?
|
### Describe the bug
If I want to get the first 10 samples of my dataset, I can do :
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like Python lists (no exception raised, the whole list is kept):
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
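For comparison, a clamped selection gives the list-like behaviour I expect (a sketch):
```python
import datasets

ds = datasets.load_dataset("mnist", split="train")
n = 999_999_999
ds_head = ds.select(range(min(n, len(ds))))  # clamp the slice like list slicing would
print(len(ds_head))  # 60000
```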
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
https://github.com/huggingface/datasets/issues/5875
|
closed
|
[
"duplicate"
] | 2023-05-19T07:21:10Z
| 2024-01-31T15:54:18Z
| 1
|
astariul
|
pytorch/pytorch
| 101,860
|
How to add/save parameters (metadata) to pytorch model
|
### 🚀 The feature, motivation and pitch
When I am working on a PyTorch model, it is difficult for me to keep track of the variables required to run the model.
If I could add metadata to my model, I would not need to save those parameters separately.
So, does anyone know how to add metadata to a PyTorch model?
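For illustration, a minimal sketch of the workaround I have in mind today, bundling a metadata dict into the checkpoint file (the metadata keys are made up):
```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)

checkpoint = {
    "state_dict": model.state_dict(),
    "metadata": {"input_size": 10, "num_classes": 2, "preprocessing": "standardize"},
}
torch.save(checkpoint, "model.pt")

loaded = torch.load("model.pt")
model.load_state_dict(loaded["state_dict"])
print(loaded["metadata"])  # the metadata travels with the weights in one file
```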
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/101860
|
closed
|
[] | 2023-05-19T07:20:06Z
| 2023-05-20T05:03:08Z
| null |
naseemap47
|
huggingface/chat-ui
| 246
|
Documentation Request - Clarity around login flow outside of HuggingFace context
|
Could the docs (if not the code) be improved to make it clear how to:
- run this without requiring users to authenticate
- handle authentication via a 3rd party cloud (Azure, AWS, GCP, etc)
- run this with an arbitrary 3rd party model (OpenAI, Rasa, etc)
I originally thought this was the purpose of `OPENID_CLIENT_ID` and `OPENID_CLIENT_SECRET`, but it seems not... (?).
|
https://github.com/huggingface/chat-ui/issues/246
|
closed
|
[
"documentation",
"enhancement"
] | 2023-05-19T02:57:56Z
| 2023-06-01T06:26:49Z
| 3
|
hack-r
|
pytorch/xla
| 5,034
|
How to recover from 'Exception in device=TPU:0' sickness without terminating session?
|
I ran all cells in the [mnist-training.ipynb](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/mnist-training.ipynb) colab successfully. However, during execution of the last cell:
```python
def _mp_fn(rank, flags):
global FLAGS
FLAGS = flags
torch.set_default_tensor_type('torch.FloatTensor')
accuracy, data, pred, target = train_mnist()
if rank == 0:
# Retrieve tensors that are on TPU core 0 and plot.
plot_results(data.cpu(), pred.cpu(), target.cpu())
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=FLAGS['num_cores'],
start_method='fork')
```
I interrupted execution before it was finished. On trying to restart that cell, I see the following exception. On further experimentation, the only way to recover from this situation is through:
Runtime -> Manage Sessions -> Terminate Current Session
and then restart the whole thing.
The 'Restart runtime' option does not work, nor does the 'Disconnect and Delete Runtime' option.
Would anyone know of a faster way to recover from this sick state without completely restarting from scratch? I've seen several issues posted about this but haven't seen a resolution.
```
Exception in device=TPU:0: INTERNAL: From /job:tpu_worker/replica:0/task:0:
2 root error(s) found.
(0) INTERNAL: stream did not block host until done; was already in an error state
[[{{node XRTExecute}}]]
[[XRTExecute_G12]]
(1) INTERNAL: stream did not block host until done; was already in an error state
[[{{node XRTExecute}}]]
0 successful operations.
0 derived errors ignored.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn
fn(gindex, *args)
File "<ipython-input-5-8e919fc51ff8>", line 6, in _mp_fn
accuracy, data, pred, target = train_mnist()
File "<ipython-input-4-0bb5e5cb92ef>", line 130, in train_mnist
train_loop_fn(para_loader.per_device_loader(device))
File "<ipython-input-4-0bb5e5cb92ef>", line 106, in train_loop_fn
xm.get_ordinal(), x, loss.item(), tracker.rate(),
RuntimeError: INTERNAL: From /job:tpu_worker/replica:0/task:0:
2 root error(s) found.
(0) INTERNAL: stream did not block host until done; was already in an error state
[[{{node XRTExecute}}]]
[[XRTExecute_G12]]
(1) INTERNAL: stream did not block host until done; was already in an error state
[[{{node XRTExecute}}]]
0 successful operations.
0 derived errors ignored.
---------------------------------------------------------------------------
ProcessExitedException Traceback (most recent call last)
[<ipython-input-5-8e919fc51ff8>](https://localhost:8080/#) in <cell line: 11>()
9 plot_results(data.cpu(), pred.cpu(), target.cpu())
10
---> 11 xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=FLAGS['num_cores'],
12 start_method='fork')
2 frames
[/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py](https://localhost:8080/#) in join(self, timeout)
147 )
148 else:
--> 149 raise ProcessExitedException(
150 "process %d terminated with exit code %d" %
151 (error_index, exitcode),
ProcessExitedException: process 0 terminated with exit code 17
```
|
https://github.com/pytorch/xla/issues/5034
|
closed
|
[] | 2023-05-19T01:32:17Z
| 2023-05-19T19:52:59Z
| null |
hrbigelow
|
huggingface/chat-ui
| 245
|
Strange DNS Behavior
|
Apparently some part of this leverages DNS right away when you run it, but it doesn't work on any privacy-respecting DNS resolvers. I can demonstrate this via toggling firewall options, resolv.conf, or packet inspection, but I'm not sure what in the code is related to this or how to fix it.
|
https://github.com/huggingface/chat-ui/issues/245
|
closed
|
[] | 2023-05-19T01:19:11Z
| 2023-05-19T02:53:11Z
| 1
|
hack-r
|
pytorch/examples
| 1,151
|
How to run rpc/pipeline /main.py on two physical machines?
|
I want to run ResNet on two different machines. How do I run main.py?
I changed the code by adding the following:
```python
# on rank 0
dist.init_process_group(
    backend="gloo",
    init_method="tcp://172.16.8.196:8864",
    rank=0,
    world_size=2,
)

# on rank 1
dist.init_process_group(
    backend="gloo",
    init_method="tcp://172.16.8.196:8864",
    rank=1,
    world_size=2,
)
```
On machine 1/2 the command is `python main.py`.
Then an error occurs: RuntimeError: Socket Timeout.
How do I fix it?
|
https://github.com/pytorch/examples/issues/1151
|
open
|
[] | 2023-05-18T10:54:52Z
| 2023-05-18T10:54:52Z
| null |
Unknown-Body
|
pytorch/examples
| 1,150
|
input and output
|
I really want to know how to format the dataset. I have 30-dimensional variables as input and a 0/1 class as output. How can I put this into the SAC model?
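To make the question concrete, here is a sketch of how I package the data with a plain torch `Dataset` (this is just supervised-style packaging with random placeholder data, nothing SAC-specific):
```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyData(Dataset):
    def __init__(self, features, labels):
        self.x = torch.as_tensor(features, dtype=torch.float32)  # shape (N, 30)
        self.y = torch.as_tensor(labels, dtype=torch.long)       # shape (N,), values 0/1

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

loader = DataLoader(MyData(torch.randn(100, 30), torch.randint(0, 2, (100,))), batch_size=16)
```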
|
https://github.com/pytorch/examples/issues/1150
|
open
|
[] | 2023-05-18T10:18:59Z
| 2023-05-18T10:18:59Z
| 0
|
luzi560
|
pytorch/xla
| 5,022
|
torch.distributed.reduce vs torch_xla.core.xla_model.all_reduce
|
## ❓ Questions and Help
I am a bit confused here. Can we use torch_xla.core.xla_model.all_reduce in place of torch.distributed.reduce? If yes:
torch.distributed.reduce needs a destination rank; how do we handle that if we use torch_xla.core.xla_model.all_reduce?
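For concreteness, this is the call I am looking at (a sketch; I assume `xm.REDUCE_SUM` is the right reduce type, and my understanding is that all_reduce leaves the result on every ordinal rather than on a single destination rank):
```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
t = torch.ones(4, device=device)

# in-place sum across all ordinals; every ordinal ends up with the same result
xm.all_reduce(xm.REDUCE_SUM, [t])
xm.mark_step()
```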
|
https://github.com/pytorch/xla/issues/5022
|
closed
|
[
"question",
"distributed"
] | 2023-05-17T13:26:02Z
| 2025-05-05T12:42:24Z
| null |
RishabhPandit-00
|
huggingface/optimum
| 1,057
|
owlvit is not supported
|
### Feature request
The conversion is supported in transformers[onnx], but not yet supported in optimum.
### Motivation
Convert the open-world vocabulary model to an ONNX model for faster inference.
### Your contribution
If there is a guideline on how to do it, I think I can help
|
https://github.com/huggingface/optimum/issues/1057
|
closed
|
[] | 2023-05-17T07:01:39Z
| 2023-07-12T13:20:52Z
| 11
|
darwinharianto
|
huggingface/datasets
| 5,870
|
Behaviour difference between datasets.map and IterableDatasets.map
|
### Describe the bug
All the examples throughout the huggingface datasets docs correspond to the Dataset object, not the IterableDataset object. At one point in time they might have been in sync, but the code for datasets version >=2.9.0 is very different from the docs.
I basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config.
This works very well on map-style datasets, but .map() fails on IterableDatasets, showing behaviour as such:
"pixel_values" key not found: a KeyError in the examples object/dict passed into the transform function for map, which works fine with map style, even as a batch.
In iterable style, the object/dict passed into the map() callable-function parameter is completely different from what is mentioned in all the examples.
Please look into this. Thank you
My databuilder class is inherited as such:
```python
def _info(self):
    print ("Config: ",self.config.__dict__.keys())
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "labels": datasets.Sequence(datasets.Value("uint16")),
                # "labels_name": datasets.Value("string"),
                # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"),
                "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"),
                "image_s3_path": datasets.Value("string"),
            }
        ),
        supervised_keys=None,
        homepage="none",
        citation="",
    )

def _split_generators(self, dl_manager):
    records_train = list(db.mini_set.find({'split':'train'},{'image_s3_path':1, 'ocwen_template_name':1}))[:10000]
    records_val = list(db.mini_set.find({'split':'val'},{'image_s3_path':1, 'ocwen_template_name':1}))[:1000]
    # print (len(records),self.config.num_shards)
    # shard_size_train = len(records_train)//self.config.num_shards
    # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0,len(records_train),shard_size_train)]
    # shard_size_val = len(records_val)//self.config.num_shards
    # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0,len(records_val),shard_size_val)]
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"records":records_train}  # passing list of records, for sharding to take over
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION, gen_kwargs={"records":records_val}  # passing list of records, for sharding to take over
        ),
    ]

def _generate_examples(self, records):
    # print ("Generating examples for [{}] shards".format(len(shards)))
    # initiate_db_connection()
    # records = list(db.mini_set.find({'split':split},{'image_s3_path':1, 'ocwen_template_name':1}))[:10]
    id_ = 0
    # for records in shards:
    for i,rec in enumerate(records):
        img_local_path = fetch_file(rec['image_s3_path'],self.config.buffer_dir)
        # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze()
        # print (t.shape, type(t),type(t[0][0][0]))
        # sys.exit()
        pvs = np.array(Image.open(img_local_path).resize((1280,960)))  # image object is wxh, so resize as per that, numpy array of it is hxwxc, transposing to cxwxh
        # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze()
        # print (type(pvs[0][0][0]))
        lblids = self.config.processor.tokenizer('<s_class>'+rec['ocwen_template_name']+'</s_class>'+'</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0)  # take padding later, as per batch collating
        # print (len(lblids),type(lblids[0]))
        # print (type(pvs),pvs.shape,type(pvs[0][0][0]), type(lblids))
        yield id_, {"labels":lblids,"pixel_values":pvs,"image_s3_path":rec['image_s3_path']}
        id_+=1
        os.remove(img_local_path)
```
and I load it inside my trainer script like this:
`ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() fails`
or also as
`ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset`
Thank you to the team for having such a great library, and for this bug fix in advance!
### Steps to reproduce the bug
Above config can allow one to reproduce the said bug
### Expected behavior
.map() should show some consistency between map-style and iterable-style datasets, or at least the docs should address iterable-style dataset behaviour and examples. I honestly do not figur
|
https://github.com/huggingface/datasets/issues/5870
|
open
|
[] | 2023-05-16T14:32:57Z
| 2023-05-16T14:36:05Z
| 1
|
llStringll
|
pytorch/PiPPy
| 801
|
How to run the gpt2 example on a single node with four GPU?
|
I am trying to reproduce the [gpt2 example](https://github.com/pytorch/PiPPy/tree/main/examples/hf/gpt2) on a single node without Slurm to collect some performance metrics, but the example only provides Slurm scripts. How should I modify the code to run this example on a single node?
|
https://github.com/pytorch/PiPPy/issues/801
|
open
|
[] | 2023-05-16T11:49:37Z
| 2023-05-16T11:49:37Z
| null |
lsder
|
huggingface/chat-ui
| 232
|
Possible performance regression in the production model?
|
I have been using it for 5 days; it could write simple code for me, but now it can't ;/
|
https://github.com/huggingface/chat-ui/issues/232
|
closed
|
[
"bug",
"question"
] | 2023-05-16T08:39:19Z
| 2023-09-11T09:30:26Z
| null |
overvalue
|
huggingface/chat-ui
| 230
|
Task not found for this model
|
I tried running code on my local system and updated the model name in the .env file from "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5" to "OpenAssistant/oasst-sft-6-llama-30b-xor" and now for every prompt I am getting "Task not found for this model"
|
https://github.com/huggingface/chat-ui/issues/230
|
closed
|
[
"support"
] | 2023-05-16T05:18:25Z
| 2024-12-13T01:28:06Z
| 4
|
newway-anshul
|
huggingface/datasets
| 5,868
|
Is it possible to change a cached file and 're-cache' it instead of re-generating?
|
### Feature request
Hi,
I have a huge file cached using `map` (over 500 GB), and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating, because `map` takes over 24 hours?
### Motivation
For large datasets, I think this is very important because we always face the problem of changing something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry.
|
https://github.com/huggingface/datasets/issues/5868
|
closed
|
[
"enhancement"
] | 2023-05-16T03:45:42Z
| 2023-05-17T11:21:36Z
| 2
|
zyh3826
|
pytorch/TensorRT
| 1,920
|
how to convert itensor to pytorch tensor in torch-tensorrt fx mode?
|
Hi,
I'm trying to create an engine with a custom plugin using Torch-TensorRT FX mode. How do I convert an ITensor to a PyTorch tensor?
|
https://github.com/pytorch/TensorRT/issues/1920
|
closed
|
[
"No Activity"
] | 2023-05-15T11:52:46Z
| 2023-11-24T00:02:13Z
| null |
shuyuan-wang
|
huggingface/chat-ui
| 225
|
Special tokens for user and assistant turns?
|
Hi,
I've been checking the example that used `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5` model. This model uses the following tokens to specify the beginning of the user and assistant:
```
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>"
```
I'm trying to run the `bigcode/starcoder` model along with the `bigcode/the-stack-dedup` dataset, but I'm not sure which values those variables need for this particular model, or how they influence the model's answer generation.
Could you please briefly guide me into this? I'm kinda new to this.
|
https://github.com/huggingface/chat-ui/issues/225
|
closed
|
[] | 2023-05-15T10:32:06Z
| 2023-05-15T11:06:23Z
| 3
|
frandominguezl
|
huggingface/chat-ui
| 218
|
Support for Contrastive Search?
|
Context: https://huggingface.co/blog/introducing-csearch
Passing only:
"penalty_alpha":0.6,
"top_k": 4,
Does not seem to work, as truncate and temperature are still required. When passing this:
<pre>
"parameters": {
"temperature": 0.9,
"penalty_alpha":0.6,
"top_k": 4,
"truncate": 512,
"max_new_tokens": 512
}
</pre>
penalty_alpha seems to be ignored:
GenerateParameters { best_of: None, temperature: Some(0.9), repetition_penalty: None, top_k: Some(4), top_p: None, typical_p: None, do_sample: false, max_new_tokens: 512, return_full_text: Some(false), stop: [], truncate: Some(512), watermark: false, details: false, seed: None } })
|
https://github.com/huggingface/chat-ui/issues/218
|
closed
|
[] | 2023-05-13T22:02:37Z
| 2023-09-18T13:27:20Z
| 2
|
PhNyx
|
huggingface/setfit
| 374
|
Resolving confusion between fine-grained classes
|
My dataset has 131 classes. Some of them are fine-grained, for example:
- Flag fraud on the account -> **Open Dispute**
- Find out if there is a fraud hold on my debit card -> **Dispute Inquiry**
The model is getting confused between such classes. I have roughly 20 samples per class in my dataset and I am using `mpnet-base-v2` with `num_iterations=25`. Is there a way to specify which classes to draw the negative samples from given a positive class? Should I just add more data into the confusing classes?
|
https://github.com/huggingface/setfit/issues/374
|
closed
|
[
"question"
] | 2023-05-13T10:13:15Z
| 2023-11-24T15:09:55Z
| null |
vahuja4
|
huggingface/transformers.js
| 108
|
[Question] Problem when converting an embedding model.
|
First, I would like to thank everyone for providing and maintaining this library. It makes working with ML in JavaScript a breeze.
I was working with the embedding models and tried to convert a multilingual model [("paraphrase-multilingual-MiniLM-L12-v2")](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) for use with transformers.js. I used the following command to do the conversion:
```
python -m scripts.convert --quantize --model_id sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 --task semantic-segmentation --from_hub
```
But I got the following error back:
```
File "/opt/saturncloud/envs/saturn/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 470, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.bert.configuration_bert.BertConfig'> for this kind of AutoModel: AutoModelForSemanticSegmentation.
Model type should be one of BeitConfig, Data2VecVisionConfig, DPTConfig, MobileNetV2Config, MobileViTConfig, SegformerConfig, UperNetConfig.
```
I think I am using the wrong type of task, but I am not sure. Can anyone help me with this problem?
Thanks in advance. Falcon
|
https://github.com/huggingface/transformers.js/issues/108
|
closed
|
[
"question"
] | 2023-05-13T09:54:12Z
| 2023-05-15T17:24:16Z
| null |
falcon027
|
huggingface/setfit
| 372
|
Update Previous Model with New Categories
|
Is there a way to add categories based on new data?
For example - Initially I trained a model with 5 categories and saved the model. I now have new data that I want to feed into the model but this new data has 8 categories. Would I have to start from scratch or can I use the original model I trained?
Thank you!
|
https://github.com/huggingface/setfit/issues/372
|
closed
|
[
"question"
] | 2023-05-12T21:22:12Z
| 2023-11-24T15:10:46Z
| null |
ronils428
|
huggingface/dataset-viewer
| 1,174
|
Add a field, and rename another one, in /opt-in-out-urls
|
The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- rename `num_urls` into `num_scanned_urls`
- add `num_rows` with the total number of rows in the dataset/config/split. It would help to understand what proportion of the dataset has been scanned. Note that the information is already available in `/size`, but I think it would be handy to have this information here. WDYT?
|
https://github.com/huggingface/dataset-viewer/issues/1174
|
closed
|
[
"question"
] | 2023-05-12T13:15:40Z
| 2023-05-12T13:54:14Z
| null |
severo
|
huggingface/chat-ui
| 207
|
MongoParseError: Invalid scheme
|
I tried to run chat-ui on my Mac (Intel 2020, macOS Ventura 13.3.1), and I get the following error:
```bash
(base) thibo@mac-M:~/Documents/chat-ui$ npm install
added 339 packages, and audited 340 packages in 39s
72 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
(base) thibo@mac:~/Documents/chat-ui$ npm run dev
> chat-ui@0.1.0 dev
> vite dev
(node:3340) ExperimentalWarning: Import assertions are not a stable feature of the JavaScript language. Avoid relying on their current behavior and syntax as those might change in a future version of Node.js.
(Use `node --trace-warnings ...` to show where the warning was created)
(node:3340) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
Forced re-optimization of dependencies
VITE v4.3.5 ready in 2136 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h to show help
9:25:43 AM [vite] Error when evaluating SSR module /src/lib/server/database.ts:
|- MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
9:25:43 AM [vite] Error when evaluating SSR module /src/hooks.server.ts: failed to import "/src/lib/server/database.ts"
|- MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/cha
|
https://github.com/huggingface/chat-ui/issues/207
|
closed
|
[] | 2023-05-12T07:32:22Z
| 2023-05-12T08:26:39Z
| 1
|
thiborose
|
pytorch/pytorch
| 101,246
|
Tool for identifying where in eager model an operation is nondeterministic
|
### 🐛 Describe the bug
Let's say you have model code, and when you run it twice you get bitwise-different results. Where did it diverge? We can use TorchFunctionMode/TorchDispatchMode to localize where the first divergence occurred.
### Versions
master
cc @mruberry @kurtamohler
|
https://github.com/pytorch/pytorch/issues/101246
|
open
|
[
"triaged",
"module: determinism"
] | 2023-05-12T02:50:04Z
| 2023-05-12T14:21:45Z
| null |
ezyang
|
pytorch/TensorRT
| 1,912
|
❓ [Question] How to correctly convert model by using torch-tensorrt
|
## ❓ Question
Hi, I am trying to convert a resnet_rmac_fpn model used for image retrieval, but I am unable to convert it to a TensorRT model with torch-tensorrt. According to the debug information, some of the operators are not supported by Torch-TensorRT.
However, if I export the model to ONNX and then convert it with the `trtexec` command, the conversion works. So I was wondering whether there is any way to make the torch-tensorrt conversion work. Here is the error output:
```
INFO: [Torch-TensorRT] - Method requested cannot be compiled end to end by Torch-TensorRT.TorchScript.
Unsupported operators listed below:
- profiler::_record_function_exit._RecordFunction(__torch__.torch.classes.profiler._RecordFunction _0) -> ()
- aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
- prim::PythonOp(...) -> ...
- profiler::_record_function_enter_new(str name, str? args=None) -> __torch__.torch.classes.profiler._RecordFunction
DEBUG: [Torch-TensorRT] - Unsupported operator: aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
/usr/local/lib/python3.8/dist-packages/torch/functional.py(1519): norm
/usr/local/lib/python3.8/dist-packages/torch/_tensor.py(647): norm
/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py(4665): normalize
/codebase/Deep_Image_Retrieval/dirtorch/nets/rmac_resnet.py(8): l2_normalize
/codebase/Deep_Image_Retrieval/dirtorch/nets/rmac_resnet.py(68): forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(169): forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(794): trace
benchmark.py(71): create_torchtrt_model
benchmark.py(110): benchmark_torchtrt_model
benchmark.py(132): <module>
DEBUG: [Torch-TensorRT] - Unsupported operator: prim::PythonOp(...) -> ...
/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py(506): apply
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(27): scatter_map
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(31): scatter_map
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(44): scatter
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/scatter_gather.py(52): scatter_kwargs
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(178): scatter
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(161): forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(794): trace
benchmark.py(71): create_torchtrt_model
b
enchmark.py(110): benchmark_torchtrt_model
benchmark.py(132): <module>
DEBUG: [Torch-TensorRT] - Unsupported operator: profiler::_record_function_exit._RecordFunction(__torch__.torch.classes.profiler._RecordFunction _0) -> ()
/usr/local/lib/python3.8/dist-packages/torch/_ops.py(316): __call__
/usr/local/lib/python3.8/dist-packages/torch/autograd/profiler.py(507): __exit__
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(169): forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(794): trace
benchmark.py(71): create_torchtrt_model
benchmark.py(110): benchmark_torchtrt_model
benchmark.py(132): <module>
DEBUG: [Torch-TensorRT] - Unsupported operator: profiler::_record_function_enter_new(str name, str? args=None) -> __torch__.torch.classes.profiler._RecordFunction
/usr/local/lib/python3.8/dist-packages/torch/_ops.py(504): __call__
/usr/local/lib/python3.8/dist-packages/torch/autograd/profiler.py(492): __enter__
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py(151): forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1520): _slow_forward
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py(1533): _call_impl
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py(1056): trace_module
/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.p
|
https://github.com/pytorch/TensorRT/issues/1912
|
closed
|
[
"question",
"No Activity"
] | 2023-05-11T18:40:58Z
| 2023-08-21T00:02:10Z
| null |
HtutLynn
|
huggingface/chat-ui
| 202
|
Help wanted: Installing `@huggingface` package from NPM registry
|
👋🏻
Sorry if I am opening a dumb issue, but I was looking into fixing some UI issues and I'm not entirely sure how to run this project locally. I've created a `.env.local` with:
```
MONGODB_URL=
HF_ACCESS_TOKEN=XXX
```
Haven't actually set the `MONGODB_URL` but did create an access token for HF.
Running into the following error when running `yarn`
```
yarn install v1.22.11
info No lockfile found.
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] 🔍 Resolving packages...
error Couldn't find package "@huggingface/shared@*" required by "@huggingface/inference@^2.2.0" on the "npm" registry.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
```
I suppose I need a secret or something for Yarn to be able to fetch that package from a different registry than NPM?
**use NPM instead of Yarn?**
Yes, I've also tried using NPM, ran into the same issue.
Again, sorry if I am misreading the readme and doing things wrong.
Thanks! 👋🏻
|
https://github.com/huggingface/chat-ui/issues/202
|
closed
|
[] | 2023-05-11T17:38:24Z
| 2023-05-12T11:07:10Z
| 5
|
eertmanhidde
|
huggingface/datasets
| 5,841
|
Absurdly slow iteration
|
### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of a to an image shape, like:
```python
a=torch.randn(3,224,224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
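One possible way to sidestep the formatting overhead (a sketch, assuming the slowdown comes from the per-item torch conversion rather than from Arrow decoding) is to keep the dataset in numpy format and wrap each item in a tensor yourself:
```python
import torch
from datasets import Dataset
from tqdm import tqdm

a = torch.randn(10000, 100, 224)
ds = Dataset.from_dict({"tensor": a})

# Iterate with the numpy formatter and convert manually; torch.as_tensor
# shares memory with the numpy array when it can, so the wrap is cheap.
for item in tqdm(ds.with_format("numpy")):
    t = torch.as_tensor(item["tensor"])
```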
### Steps to reproduce the bug
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
### Expected behavior
iteration faster
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
https://github.com/huggingface/datasets/issues/5841
|
closed
|
[] | 2023-05-11T08:04:09Z
| 2023-05-15T15:38:13Z
| 4
|
fecet
|
huggingface/optimum
| 1,046
|
Make torchvision optional?
|
### Feature request
Currently torchvision is a required dependency
https://github.com/huggingface/optimum/blob/22e4fd6de3ac5e7780571570f962947bd8777fd4/setup.py#L20
### Motivation
I only work on text so I don't need vision support
### Your contribution
I am sure the change is more involved than just removing that line from setup.py, but if you have other suggestions for how to tackle the removal, I am happy to help.
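For illustration, a generic sketch of the usual extras pattern (hypothetical package metadata, not Optimum's actual setup.py): the vision stack moves into an optional extra that users request with `pip install my-package[vision]`.
```python
# setup.py sketch: torchvision is only pulled in when the "vision" extra is requested.
from setuptools import find_packages, setup

setup(
    name="my-package",                        # hypothetical name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["torch>=1.11"],         # core, text-only dependencies
    extras_require={"vision": ["torchvision"]},
)
```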
|
https://github.com/huggingface/optimum/issues/1046
|
closed
|
[] | 2023-05-10T10:49:18Z
| 2023-05-12T23:05:46Z
| 4
|
BramVanroy
|
huggingface/datasets
| 5,838
|
Streaming support for `load_from_disk`
|
### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get the guidance on how to do so.
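Until `load_from_disk` gains streaming, a rough sketch of streaming from a bucket today, assuming the dataset is exported as parquet rather than the `save_to_disk` Arrow layout and a `datasets` version whose `load_dataset` accepts `storage_options` (bucket path and credentials are placeholders):
```python
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files="s3://my-bucket/my-dataset/*.parquet",  # hypothetical bucket path
    streaming=True,
    storage_options={"key": "...", "secret": "..."},   # forwarded to fsspec/s3fs
)

# Stream a few examples without downloading the whole dataset.
for example in ds["train"].take(5):
    print(example)
```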
|
https://github.com/huggingface/datasets/issues/5838
|
closed
|
[
"enhancement"
] | 2023-05-10T06:25:22Z
| 2024-10-28T14:19:44Z
| 12
|
Nilabhra
|
pytorch/TensorRT
| 1,898
|
❓ [Question] Is there any example of how to convert a T5 model that is compatible with Hugging Face's generate function?
|
## ❓ Question
Is there any example of how to convert a T5 model so that it remains compatible with Hugging Face's `generate` function and can handle dynamic shapes?
|
https://github.com/pytorch/TensorRT/issues/1898
|
closed
|
[
"question",
"No Activity"
] | 2023-05-09T18:51:06Z
| 2023-08-20T00:02:15Z
| null |
dathudeptrai
|
huggingface/datasets
| 5,834
|
Is uint8 supported?
|
### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
### Expected behavior
Expected: `uint8`
Actual: `int64`
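A quick way to see what is actually stored versus what the formatter hands back (a sketch; the exact numpy-formatter behavior may vary across `datasets` versions):
```python
import numpy as np
from datasets import Dataset, Features, Sequence, Value

dataset = Dataset.from_dict(
    {"vector": [np.array([0, 1, 2], dtype=np.uint8)]},
    features=Features({"vector": Sequence(Value("uint8"))}),
)

# The backing Arrow table keeps the declared uint8 type...
print(dataset.data.schema.field("vector").type)   # list<item: uint8>
# ...while the numpy formatter may upcast the values on access.
print(dataset.with_format("numpy")[0]["vector"].dtype)
```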
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
https://github.com/huggingface/datasets/issues/5834
|
closed
|
[] | 2023-05-09T17:31:13Z
| 2023-05-13T05:04:21Z
| 5
|
ryokan0123
|
pytorch/xla
| 4,994
|
Different Graph generations
|
## 🐛 Bug
This code snippet is extracted from the AdamW optimizer. For different ranges of learning rate and weight decay, this optimizer generates different graphs, which causes unexpected compilations while the application is running. A fix is also shown below (the func3/func4 form), but such scenarios can occur anywhere, and we need a generic mechanism to make sure the same graph is generated.
## To Reproduce
```
import torch
import torch, random, os
import numpy as np
import torch_xla.core.xla_model as xm
os.environ["NEURON_FRAMEWORK_DEBUG"] = "1"
os.environ["XLA_IR_DEBUG"] = "1"
os.environ["XLA_HLO_DEBUG"]="1"
os.environ['XLA_USE_BF16']="1"
os.environ['XLA_NO_SPECIAL_SCALARS']="1"
def func1():
param = torch.FloatTensor([0.001]).to(xm.xla_device())
lr = 2.9999999999999997e-06
weight_decay = 0.01
param.mul_(1 - lr * weight_decay)
print(param)
def func2():
param = torch.FloatTensor([0.001]).to(xm.xla_device())
lr = 4.6874999999999995e-08
weight_decay = 0.01
param.mul_(1 - lr * weight_decay)
print(param)
def func3():
param = torch.FloatTensor([0.001]).to(xm.xla_device())
lr = 2.9999999999999997e-06
weight_decay = 0.01
param.sub_(param * lr * weight_decay)
print(param)
def func4():
param1 = torch.FloatTensor([0.001]).to(xm.xla_device())
lr1 = 4.6874999999999995e-08
weight_decay1 = 0.01
param1.sub_(param1 * lr1 * weight_decay1)
print(param1)
func1()
func2()
func3()
func4()
```
## Expected behavior
func1 gives the graph:
```
HloModule SyncTensorsGraph.6, entry_computation_layout={(bf16[],bf16[1]{0})->(bf16[1]{0})}
ENTRY %SyncTensorsGraph.6 (p0: bf16[], p1: bf16[1]) -> (bf16[1]) {
%p1 = bf16[1]{0} parameter(1), frontend_attributes={neff_input_name="input1"}, metadata={op_type="xla__device_data" op_name="xla__device_data"}
%p0 = bf16[] parameter(0), frontend_attributes={neff_input_name="input0"}, metadata={op_type="xla__device_data" op_name="xla__device_data"}
%broadcast = bf16[1]{0} broadcast(bf16[] %p0), dimensions={}, metadata={op_type="aten__mul" op_name="aten__mul"}
%multiply = bf16[1]{0} multiply(bf16[1]{0} %p1, bf16[1]{0} %broadcast), metadata={op_type="aten__mul" op_name="aten__mul"}
ROOT %tuple = (bf16[1]{0}) tuple(bf16[1]{0} %multiply), frontend_attributes={neff_output_names="output0"}
}
```
func2 gives a different graph:
```
HloModule SyncTensorsGraph.6, entry_computation_layout={(bf16[1]{0})->(bf16[1]{0})}
ENTRY %SyncTensorsGraph.6 (p0: bf16[1]) -> (bf16[1]) {
%p0 = bf16[1]{0} parameter(0), frontend_attributes={neff_input_name="input0"}, metadata={op_type="xla__device_data" op_name="xla__device_data"}
%constant = bf16[] constant(1), metadata={op_type="prim__Constant" op_name="prim__Constant"}
%broadcast = bf16[1]{0} broadcast(bf16[] %constant), dimensions={}, metadata={op_type="aten__mul" op_name="aten__mul"}
%multiply = bf16[1]{0} multiply(bf16[1]{0} %p0, bf16[1]{0} %broadcast), metadata={op_type="aten__mul" op_name="aten__mul"}
ROOT %tuple = (bf16[1]{0}) tuple(bf16[1]{0} %multiply), frontend_attributes={neff_output_names="output0"}
}
```
func3 and func4 give the same graphs:
```
HloModule SyncTensorsGraph.14, entry_computation_layout={(bf16[],bf16[],bf16[1]{0})->(bf16[1]{0})}
ENTRY %SyncTensorsGraph.14 (p0: bf16[], p1: bf16[], p2: bf16[1]) -> (bf16[1]) {
%p2 = bf16[1]{0} parameter(2), frontend_attributes={neff_input_name="input2"}, metadata={op_type="xla__device_data" op_name="xla__device_data"}
%p1 = bf16[] parameter(1), frontend_attributes={neff_input_name="input1"}, metadata={op_type="xla__device_data" op_name="xla__device_data"}
%broadcast.1 = bf16[1]{0} broadcast(bf16[] %p1), dimensions={}, metadata={op_ty
|
https://github.com/pytorch/xla/issues/4994
|
closed
|
[
"question",
"lowering"
] | 2023-05-09T07:18:12Z
| 2025-05-05T12:57:35Z
| null |
amithrm
|
pytorch/pytorch
| 100,859
|
How to calculate the MACs after pruning?
|
### 🚀 The feature, motivation and pitch
I use torch.nn.utils.prune to prune the model and then torchprofile.profile_macs() to calculate the MACs of the pruned model. I find that the MACs increase before prune.remove() is called to make the pruning permanent, which is expected because of the additional weight * mask computation.
But after I use prune.remove() to make the pruning permanent, the MACs calculated by torchprofile.profile_macs() are still the same as for the model before pruning.

### Alternatives
_No response_
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
|
https://github.com/pytorch/pytorch/issues/100859
|
closed
|
[
"oncall: quantization",
"triaged"
] | 2023-05-08T08:06:34Z
| 2023-10-05T23:32:18Z
| null |
machengjie321
|
pytorch/tutorials
| 2,313
|
How to calculate the MACs after pruning?
|
### 🚀 Describe the improvement or the new tutorial
I use torch.nn.utils.prune to prune the model and then torchprofile.profile_macs() to calculate the MACs of the pruned model. I find that the MACs increase before prune.remove() is called to make the pruning permanent, which is expected because of the additional weight * mask computation.
But after I use prune.remove() to make the pruning permanent, the MACs calculated by torchprofile.profile_macs() are still the same as for the model before pruning.

### Existing tutorials on this topic
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/tutorials/issues/2313
|
open
|
[
"question"
] | 2023-05-08T08:02:31Z
| 2023-05-26T20:02:13Z
| null |
machengjie321
|
pytorch/pytorch
| 100,827
|
How to install standalone TorchDynamo with PyTorch 1.x
|
### 🐛 Describe the bug
For many reasons, my environment is not compatible with PyTorch 2.0. For example, Megatron-LM compiles its transformer operators written in C++ against the torch 1.x C++ extension API, and building them against 2.0 produces many compile errors. As another example, DeepSpeed's distributed trainer has components that build against triton 1 but not triton 2.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Therefore, could you guide me on how to install TorchDynamo independently, without torch 2.0?
Or are there other ways to compile models on torch 1.x? I have heard of torch.jit, but I was told it cannot speed up training.
I would appreciate any method that speeds up torch 1.x code for fast large language model training.
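For the torch 1.x-only route, a minimal sketch of the torch.jit path mentioned above (the toy module is illustrative; TorchScript mainly removes Python overhead and, as noted, is not a guaranteed training speedup):
```python
import torch

class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
        )

    def forward(self, x):
        return self.net(x)

scripted = torch.jit.script(MLP())      # compiles to TorchScript on torch 1.x
out = scripted(torch.randn(8, 1024))    # used like a regular module
```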
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
|
https://github.com/pytorch/pytorch/issues/100827
|
closed
|
[
"dependency issue",
"oncall: pt2"
] | 2023-05-07T09:55:43Z
| 2023-05-07T21:50:41Z
| null |
2catycm
|