repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp[ns, UTC]) | updated_at (timestamp[ns, UTC]) | comments (int64) | user (string)
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/executorch
| 1,101
|
How to visualize the qte model?
|
Hi,
I am now working on executorch. I want to see the model architecture of the qte model, which would make it easier for us to debug.
However, I cannot find a visualization tool; Netron does not support the qte format at the moment.
Could executorch support visualizing models in the qte format?
Besides, I wonder whether the export function will translate the ops in the PyTorch model into specific ops in the qte format?
Thanks!!!
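On the second question, here is a minimal sketch of one way to inspect the ops (an assumption on my part, not an ExecuTorch-provided visualizer): printing the `torch.export` graph before lowering shows which ATen ops the PyTorch model is translated into. `MyModel` is a hypothetical stand-in for the real model.
```python
import torch
from torch.export import export

class MyModel(torch.nn.Module):
    def forward(self, x):
        # toy computation standing in for the real model
        return torch.nn.functional.relu(x) * 2

exported = export(MyModel(), (torch.randn(2, 3),))
# Lists the ATen ops produced by export, before any further lowering/serialization.
print(exported.graph_module.graph)
```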
|
https://github.com/pytorch/executorch/issues/1101
|
closed
|
[
"need-user-input"
] | 2023-10-26T12:52:41Z
| 2023-10-27T13:48:47Z
| null |
liang1232018
|
huggingface/diffusers
| 5,538
|
Why is the pipeline_stable_diffusion_upscale.py file not using the encoder-decoder latent?
|
### Describe the bug
There is no training script for pipeline_stable_diffusion_upscale.py because the authors chose not to utilize the latent domain for the Super-resolution task. Additionally, the U-Net implemented in pipeline_stable_diffusion_upscale.py only accepts 7 channels. How is this achieved?
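On the 7-channel point, here is a rough sketch of the shapes involved, under the assumption (based on the x4-upscaler design) that the noisy 4-channel latent is concatenated channel-wise with the 3-channel low-resolution image before entering the U-Net:
```python
import torch

noisy_latents = torch.randn(1, 4, 128, 128)   # latent being denoised
low_res_image = torch.randn(1, 3, 128, 128)   # low-resolution RGB conditioning image
unet_input = torch.cat([noisy_latents, low_res_image], dim=1)
print(unet_input.shape)  # torch.Size([1, 7, 128, 128]) -> matches in_channels=7
```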
### Reproduction
None
### Logs
_No response_
### System Info
None
### Who can help?
[AnasHXH](https://github.com/AnasHXH)
|
https://github.com/huggingface/diffusers/issues/5538
|
closed
|
[
"question",
"stale"
] | 2023-10-26T10:47:10Z
| 2023-12-08T15:05:44Z
| null |
AnasHXH
|
huggingface/chat-ui
| 534
|
Login issue with Google OpenID
|
I set up Google OpenID for my chat-ui. I have set the scope to openid and ./auth/userinfo.profile in the OAuth Consent Screen. I tried to log the data shared by Google with the app, and it was the following:
{
sub: '****',
picture: 'https://lh3.googleusercontent.com/****',
email: 'shagun@****',
email_verified: true,
hd: '*****'
}
As you can see, the name is not being shared, and hence I am getting an error saying that Name is a required field. How can I fix this?
Note: Google shares the name for some accounts, and for others it does not. This is my first time working with OpenID, so any help will be appreciated.
|
https://github.com/huggingface/chat-ui/issues/534
|
closed
|
[] | 2023-10-26T10:00:05Z
| 2023-10-26T10:49:36Z
| 3
|
shagunhexo
|
pytorch/TensorRT
| 2,415
|
❓ [Question] Examples not working in nvcr.io/nvidia/pytorch:23.09-py3.
|
## ❓ Question
I am within the `nvcr.io/nvidia/pytorch:23.09-py3` container. Trying out some snippets from:
https://youtu.be/eGDMJ3MY4zk?si=MhkbgwAPVQSFZEha.
Both JIT and AoT examples failed. For JIT, it complained that "tensorrt" backend isn't available, for AoT, it complained that "The user code is using a feature we don't support. Please try torchdynamo.explain() to get possible the reasons".
I am on an A100. What's going on?
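One possible cause (an assumption, not a confirmed diagnosis): the Dynamo/`torch.compile` backend is only registered once `torch_tensorrt` has been imported. A minimal sketch, assuming the NGC container ships the `torch_tensorrt` package; the backend name may be `"tensorrt"` or `"torch_tensorrt"` depending on the version.
```python
import torch
import torch_tensorrt  # noqa: F401 -- importing registers the torch.compile backend

model = torch.nn.Linear(16, 16).eval().cuda()
compiled = torch.compile(model, backend="torch_tensorrt")
out = compiled(torch.randn(4, 16, device="cuda"))
print(out.shape)
```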
|
https://github.com/pytorch/TensorRT/issues/2415
|
closed
|
[
"question"
] | 2023-10-26T09:53:16Z
| 2025-11-24T17:42:35Z
| null |
sayakpaul
|
huggingface/candle
| 1,185
|
Question: How to create a Var from MmapedSafetensors
|
Hello everybody,
I was wondering how to create a Var instance from an `MMapedSafetensors` `TensorView`. I have tried using `candle_core::Var::from_slice(tensor.data(), tensor.shape(), &device)?`, but I get the error:
`Error: Shape mismatch, got buffer of size 90177536 which is compatible with shape [11008, 4096]`.
Is there a better way to do this?
In addition, I notice the buffer is of type `u8`, which is definitely not the data type the safetensors should be decoded as. Where can I find how `VarBuilder` does this?
**In summary, I have 2 questions:**
- How to decode a `TensorView` into a `Var`?
- Or, if the above is not feasible, how does `VarBuilder` do this?
|
https://github.com/huggingface/candle/issues/1185
|
closed
|
[] | 2023-10-26T09:41:37Z
| 2023-10-26T11:26:29Z
| null |
EricLBuehler
|
huggingface/datasets
| 6,353
|
load_dataset save_to_disk load_from_disk error
|
### Describe the bug
datasets version: 2.10.1
I ran `load_dataset` and `save_to_disk` successfully on Windows 10 (**and `load_from_disk(/LLM/data/wiki)` also works on Windows 10**), and then I copied the dataset `/LLM/data/wiki`
to an Ubuntu system, but when I run `load_from_disk(/LLM/data/wiki)` on Ubuntu, something weird happens:
```
load_from_disk('/LLM/data/wiki')
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1874, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1309, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1543, in load_from_disk
fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 610, in get_fs_token_paths
chain = _un_chain(urlpath0, storage_options or {})
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 325, in _un_chain
cls = get_filesystem_class(protocol)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/registry.py", line 232, in get_filesystem_class
raise ValueError(f"Protocol not known: {protocol}")
ValueError: Protocol not known: /LLM/data/wiki
```
It seems that something went wrong with the Arrow file?
How can I solve this, since currently I cannot `save_to_disk` on the Ubuntu system.
### Steps to reproduce the bug
datasets version: 2.10.1
### Expected behavior
datasets version: 2.10.1
### Environment info
datasets version: 2.10.1
|
https://github.com/huggingface/datasets/issues/6353
|
closed
|
[] | 2023-10-26T03:47:06Z
| 2024-04-03T05:31:01Z
| 5
|
brisker
|
huggingface/text-embeddings-inference
| 43
|
How to add custom python file for pretrained model on TEI server?
|
### System Info
I am pretty new to this space. Please help.
I have made a Python file with a pre-trained model, which generates embeddings. What I want to do is:
1. Create a Docker image of the Python file
2. Run it on a TEI server
How can we do this?
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [X] My own modifications
### Reproduction
Need to host a custom Python file (which runs a sentence embedding model) on a TEI server
### Expected behavior
NA
|
https://github.com/huggingface/text-embeddings-inference/issues/43
|
open
|
[] | 2023-10-25T16:09:52Z
| 2023-10-25T17:57:46Z
| null |
cken21
|
huggingface/llm-vscode
| 100
|
How to generate the response from locally hosted end point in vscode?
|
Hi,
I managed to configure the llm-vscode extension to point to the locally running endpoint. Now when I select content like the one below:
# function to sum 2 numbers in python
then Cmd+Shift+A > llm: show code attribution
my local endpoint is invoked and gives the relevant response in the format below:
`{
"details": {
"best_of_sequences": [
{
"finish_reason": "length",
"generated_text": "test",
"generated_tokens": 1,
"prefill": [
{
"id": 0,
"logprob": -0.34,
"text": "test"
}
],
"seed": 42,
"tokens": [
{
"id": 0,
"logprob": -0.34,
"special": false,
"text": "test"
}
],
"top_tokens": [
[
{
"id": 0,
"logprob": -0.34,
"special": false,
"text": "test"
}
]
]
}
],
"finish_reason": "length",
"generated_tokens": 1,
"prefill": [
{
"id": 0,
"logprob": -0.34,
"text": "test"
}
],
"seed": 42,
"tokens": [
{
"id": 0,
"logprob": -0.34,
"special": false,
"text": "test"
}
],
"top_tokens": [
[
{
"id": 0,
"logprob": -0.34,
"special": false,
"text": "test"
}
]
]
},
"generated_text": "test"
}`
"generated_text": value is replaced with actual response with python sum function
After 200, I can see the anything related to generated code in vscode.
Please suggest to how to I can get generated response in vscode itself.
|
https://github.com/huggingface/llm-vscode/issues/100
|
open
|
[
"stale"
] | 2023-10-25T15:55:40Z
| 2023-11-25T01:46:01Z
| null |
dkaus1
|
huggingface/tokenizers
| 1,375
|
Question: what is the add_special_tokens parameter of Tokenizer::encode?
|
As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks!
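For what it's worth, here is a quick illustration via the Python binding, which exposes the same flag as the Rust `Tokenizer::encode`; it controls whether the post-processor inserts the model's special tokens ([CLS]/[SEP] for BERT, bos/eos for models whose post-processor adds them). This sketch assumes `bert-base-uncased` is reachable on the Hub.
```python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("bert-base-uncased")
print(tok.encode("hello world", add_special_tokens=True).tokens)   # ['[CLS]', 'hello', 'world', '[SEP]']
print(tok.encode("hello world", add_special_tokens=False).tokens)  # ['hello', 'world']
```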
|
https://github.com/huggingface/tokenizers/issues/1375
|
closed
|
[] | 2023-10-25T09:55:55Z
| 2023-10-25T18:43:54Z
| null |
EricLBuehler
|
huggingface/candle
| 1,173
|
Question: what is the add_special_tokens parameter of Tokenizer::encode?
|
As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks!
|
https://github.com/huggingface/candle/issues/1173
|
closed
|
[] | 2023-10-25T09:30:01Z
| 2023-10-25T09:55:42Z
| null |
EricLBuehler
|
huggingface/dataset-viewer
| 2,009
|
Are URLs in rows response sanitized?
|
see https://github.com/huggingface/moon-landing/pull/7798#discussion_r1369813236 (internal)
> Is "src" validated / sanitized?
> if not there is a potential XSS exploit here (you can inject javascript code in an image src)
> Are S3 object names sanitized? If no, it should be the case in dataset-server side
|
https://github.com/huggingface/dataset-viewer/issues/2009
|
closed
|
[
"question",
"security",
"P1"
] | 2023-10-24T15:10:29Z
| 2023-11-21T15:39:13Z
| null |
severo
|
huggingface/chat-ui
| 528
|
Websearch error in proxy
|
I'm developing in a proxy environment, and I'm guessing it's because the **websearch module can't download the model (Xenova/gte-small) from Hugging Face.**
I don't want to use websearch, but it tries to load the gte-small model anyway, and I get an error.
```
11:36:36 AM [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:
|- TypeError: fetch failed
at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
at async Promise.all (index 1)
at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
at async Promise.all (index 0)
at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5)
at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19)
11:36:36 AM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.ts: failed to import "/src/lib/server/websearch/sentenceSimilarity.ts"
|- TypeError: fetch failed
at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
at async Promise.all (index 1)
at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
at async Promise.all (index 0)
at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5)
at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19)
11:36:36 AM [vite] Error when evaluating SSR module /home/dev/chat-ui/src/routes/conversation/[id]/+server.ts: failed to import "/src/lib/server/websearch/runWebSearch.ts"
|- TypeError: fetch failed
at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
at async Promise.all (index 1)
at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
at async Promise.all (index 0)
at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5)
at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19)
```
1. Is there a workaround, such as downloading the model directly?
2. Needs improvement: the proxy-related code.
3. Needs improvement: add an option to turn off websearch initialization.
|
https://github.com/huggingface/chat-ui/issues/528
|
closed
|
[
"enhancement",
"support",
"websearch"
] | 2023-10-24T03:53:25Z
| 2023-11-15T15:44:01Z
| 6
|
calycekr
|
huggingface/candle
| 1,165
|
How do I raise 2 to the power of a tensor?
|
How do I write:
```python
x = 2 ** (y * z)
```
Where `y` is an integer and `z` is a tensor?
I tried to use `powf`, but it only works with float arguments.
|
https://github.com/huggingface/candle/issues/1165
|
closed
|
[] | 2023-10-23T22:13:28Z
| 2023-10-24T04:28:23Z
| null |
laptou
|
huggingface/candle
| 1,163
|
how to modify the contents of a Tensor?
|
what is the `candle` equivalent of this?
```python
t[2, :] *= 2;
```
|
https://github.com/huggingface/candle/issues/1163
|
closed
|
[] | 2023-10-23T19:58:50Z
| 2023-10-24T04:28:10Z
| null |
laptou
|
huggingface/transformers.js
| 367
|
[Question] How to include ort-wasm-simd.wasm with the bundle?
|
How can I include ort-wasm-simd.wasm with the bundle? I'm using this on an app that needs to be able to run offline, so I'd like to package this with the lib. I'm also running this on web worker, so that file gets requested 1+n times per user session when the worker starts.
<img width="725" alt="image" src="https://github.com/xenova/transformers.js/assets/1594723/39f7fc6e-0914-4b40-a3bc-aa17ed53851c">
|
https://github.com/huggingface/transformers.js/issues/367
|
closed
|
[
"question"
] | 2023-10-23T04:54:16Z
| 2023-10-26T08:27:28Z
| null |
mjp0
|
pytorch/torchx
| 782
|
Workspace patch is applied only on role[0] image
|
## ❓ Questions and Help
Per https://github.com/pytorch/torchx/blob/main/torchx/runner/api.py#L362-L370, we assume that patch needs to be applied only for a single role. Effectively assumes that:
1. role0 is the only image that needs to be updated
2. workspace is mapped to image of role0.
This issue has surfaced for an internal Meta user.
### Question
Should we treat this as a bug and apply the patch to all the roles, or introduce a proper mapping between workspaces and roles? This hasn't surfaced before since most of our customers use a single role, but it looks like it is broken. My personal preference is to first warn users when multiple roles are defined, then add a non-default option to specify a workspace for each role name.
|
https://github.com/meta-pytorch/torchx/issues/782
|
open
|
[
"enhancement",
"question"
] | 2023-10-22T23:26:32Z
| 2023-10-23T19:56:21Z
| 5
|
kurman
|
huggingface/autotrain-advanced
| 310
|
How to determine the LMTrainingType ? chat or generic mode?
|
It is said that there are two modes (chat and generic), but I cannot find a way to determine it.
|
https://github.com/huggingface/autotrain-advanced/issues/310
|
closed
|
[] | 2023-10-21T14:28:59Z
| 2023-11-26T04:31:08Z
| null |
qiaoqiaoLF
|
huggingface/datasets
| 6,324
|
Conversion to Arrow fails due to wrong type heuristic
|
### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowInvalid: Could not convert '1' with type str: tried to convert to int64`, presumably because pyarrow tries to convert the keys to integers.
Is there any way to circumvent this and fix dtypes? I didn't find anything in the documentation.
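As a workaround sketch (the column names below are hypothetical stand-ins for the real keys): passing an explicit `Features` spec to `Dataset.from_list` stops pyarrow from guessing the type, so the mixed '1' / '1a' values stay strings.
```python
from datasets import Dataset, Features, Value

rows = [
    {"denominator": "1", "text": "first paragraph"},
    {"denominator": "1a", "text": "second paragraph"},
]
features = Features({"denominator": Value("string"), "text": Value("string")})
ds = Dataset.from_list(rows, features=features)
print(ds.features)  # denominator stays a string, no int64 conversion attempted
```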
### Steps to reproduce the bug
* create a list of dicts with one key being a string of an integer for the first few thousand occurrences, and try to convert it to a dataset.
### Expected behavior
There shouldn't be an error (e.g. some flag to turn off automatic str to numeric conversion).
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1
|
https://github.com/huggingface/datasets/issues/6324
|
closed
|
[] | 2023-10-20T23:20:58Z
| 2023-10-23T20:52:57Z
| 2
|
jphme
|
huggingface/transformers.js
| 365
|
[Question] Headers not defined
|
Hi friends!
Neither Headers nor fetch seems to be getting resolved... I'm trying to run this in a Node.js application.
file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201
return fetch(urlOrPath, { headers });
^
TypeError: fetch is not a function
at getFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201:16)
at getModelFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:468:30)
at async getModelJSON (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
at async Promise.all (index 0)
at async loadTokenizer (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:52:16)
at async Function.from_pretrained (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:3826:48)
at async Promise.all (index 0)
at async loadItems (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2193:5)
at async pipeline (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2139:19)
at async Server.<anonymous> (/home/rajesh/code/ai/js/invoice/inv.js:65:24)
-------
Unable to load from local path "/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/tokenizer.json": "ReferenceError: Headers is not defined"
Unable to load from local path "/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/tokenizer_config.json": "ReferenceError: Headers is not defined"
Unable to load from local path "/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/config.json": "ReferenceError: Headers is not defined"
file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:188
const headers = new Headers();
^
ReferenceError: Headers is not defined
at getFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:188:25)
at getModelFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:468:30)
at async getModelJSON (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
at async Promise.all (index 0)
at async loadTokenizer (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:52:16)
at async Function.from_pretrained (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:3826:48)
at async Promise.all (index 0)
at async loadItems (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2193:5)
at async pipeline (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2139:19)
|
https://github.com/huggingface/transformers.js/issues/365
|
closed
|
[
"question"
] | 2023-10-20T16:29:28Z
| 2023-11-22T06:15:35Z
| null |
trilloc
|
huggingface/sentence-transformers
| 2,335
|
How to get individual token embeddings of a sentence from sentence transformers
|
How to get individual token embeddings of a sentence from sentence transformers
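A minimal sketch (the model name is just an example): `SentenceTransformer.encode` can return per-token embeddings instead of the pooled sentence embedding via `output_value="token_embeddings"`.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
token_embeddings = model.encode(
    "How to get individual token embeddings of a sentence",
    output_value="token_embeddings",
)
print(token_embeddings.shape)  # (num_tokens, hidden_size), includes special tokens
```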
|
https://github.com/huggingface/sentence-transformers/issues/2335
|
closed
|
[] | 2023-10-20T06:49:00Z
| 2023-12-18T16:21:32Z
| null |
pradeepdev-1995
|
huggingface/safetensors
| 371
|
Non-blocking `save_file`
|
### Feature request
Add the option to make calls to `safetensors.*.save_file` non-blocking to allow execution to continue while large tensors / models are being saved.
### Motivation
I'm writing a script to bulk-compute embeddings; however, I am getting poor GPU utilisation due to time spent saving to disk with `safetensors`. It would be nice if saving were non-blocking to allow execution to continue.
### Your contribution
I am unsure how this would work, but could give it a try if someone pointed me to the relevant code and some high level steps. Happy to defer to more experienced developers~
One issue I can see with this feature is how to deal with tensors being changed after the call to `save_file` but before saving is actually complete. A copy would work, but maybe not appropriate for large models / tensors.
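A user-side workaround sketch (not a safetensors feature): snapshot the tensors on CPU, then run the actual `save_file` call in a background thread so the GPU loop can continue. The copy also sidesteps the mutation-after-call problem mentioned above, at the cost of extra memory.
```python
import threading
from safetensors.torch import save_file

def save_file_async(tensors, path):
    # Copy to CPU first so later in-place updates don't race with the write.
    snapshot = {name: t.detach().to("cpu", copy=True) for name, t in tensors.items()}
    thread = threading.Thread(target=save_file, args=(snapshot, path), daemon=True)
    thread.start()
    return thread  # join() before exiting if the file must be complete
```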
|
https://github.com/huggingface/safetensors/issues/371
|
closed
|
[
"Stale"
] | 2023-10-20T05:42:47Z
| 2023-12-11T01:48:39Z
| 1
|
vvvm23
|
huggingface/huggingface_hub
| 1,767
|
Request: discerning what the default model is when using `InferenceClient` without a `model`
|
When doing something like the below:
```python
client = InferenceClient() # NOTE: no model specified
client.feature_extraction("hi")
```
It would be cool to know what model is being used behind the scenes. How can one figure this out programmatically?
I am thinking there may be a need for a new `InferenceClient` method resembling the following:
```python
def get_default_model(task: str) -> str:
"""Get the model's name used by default for the input task."""
```
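Not the Inference API's actual default, but a possible shape for the helper proposed above (an assumption: approximating the default with the most-downloaded model carrying the task tag):
```python
from huggingface_hub import HfApi

def get_default_model(task: str) -> str:
    """Guess a model for `task` by taking the most-downloaded one tagged with it."""
    api = HfApi()
    model = next(iter(api.list_models(filter=task, sort="downloads", direction=-1, limit=1)))
    return model.id

print(get_default_model("feature-extraction"))
```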
|
https://github.com/huggingface/huggingface_hub/issues/1767
|
closed
|
[
"enhancement",
"good first issue"
] | 2023-10-19T20:56:53Z
| 2023-11-08T13:47:14Z
| null |
jamesbraza
|
huggingface/diffusers
| 5,457
|
What is function of `attention_mask` in `get_attention_scores`?
|
What is the function of `attention_mask` in `get_attention_scores`? I guess it is used to ignore some values when calculating the attention map.
I cannot find an example in the diffusers library that actually uses this `attention_mask`. Could you provide an example of how to use it?
https://github.com/huggingface/diffusers/blob/e5168588864d72a4dca37e90318c6b11da0eaaf1/src/diffusers/models/attention_processor.py#L454
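Generically (a sketch of the idea, not the diffusers internals verbatim): the mask is added to the raw scores before the softmax, with large negative values at key positions that should be ignored, so those positions end up with roughly zero attention weight.
```python
import torch

q = torch.randn(1, 4, 8)              # (batch, query_len, head_dim)
k = torch.randn(1, 6, 8)              # (batch, key_len, head_dim)
attention_mask = torch.zeros(1, 4, 6)
attention_mask[..., -2:] = -1e9       # ignore the last two key positions

scores = q @ k.transpose(-1, -2) / 8 ** 0.5 + attention_mask
probs = scores.softmax(dim=-1)
print(probs[0, 0, -2:])               # ~0 for the masked positions
```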
|
https://github.com/huggingface/diffusers/issues/5457
|
closed
|
[
"stale"
] | 2023-10-19T18:14:38Z
| 2023-11-28T15:05:41Z
| null |
g-jing
|
pytorch/tutorials
| 2,610
|
[BUG] - When I use FSDP, because of the flattened parameters, I always run into errors
|
### Add Link
When I use FSDP, because of the flattened parameters, I always run into errors,
for example:
`RuntimeError: mat2 must be a matrix, got 1-D tensor`
and
`RuntimeError: weight should have at least three dimensions`
These always occur for flattened model weights, such as conv, linear, etc.
How can I solve this problem?
### Describe the bug
When I use FSDP, because of the flattened parameters, I always run into errors,
for example:
`RuntimeError: mat2 must be a matrix, got 1-D tensor`
and
`RuntimeError: weight should have at least three dimensions`
These always occur for flattened model weights, such as conv, linear, etc.
How can I solve this problem?
### Describe your environment
Pytorch 2.1.0
cc @osalpekar @H-Huang @kwen2501
|
https://github.com/pytorch/tutorials/issues/2610
|
closed
|
[
"bug",
"distributed"
] | 2023-10-19T14:18:09Z
| 2025-05-12T15:33:13Z
| 4
|
sqzhang-lazy
|
huggingface/accelerate
| 2,068
|
How to use cpu_offload function, attach_align_device_hook function,
|
attach_align_device_hook is called in the cpu_offload function. How is skip_keys used in attach_align_device_hook ?
def attach_align_device_hook(
module: torch.nn.Module,
execution_device: Optional[torch.device] = None,
offload: bool = False,
weights_map: Optional[Mapping] = None,
offload_buffers: bool = False,
module_name: str = "",
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
):
I wonder what the role of `skip_keys` is? I see this function used in diffusers when running Stable Diffusion inference with enable_sequential_cpu_offload.
What I want to achieve is to keep some of the Stable Diffusion submodules running on the GPU, so that the VRAM occupancy can be controlled.
|
https://github.com/huggingface/accelerate/issues/2068
|
closed
|
[] | 2023-10-19T10:25:07Z
| 2023-11-26T15:06:04Z
| null |
LeonNerd
|
huggingface/accelerate
| 2,067
|
how to automatically load state dict from memory to a multi-gpu device?
|
``` Python
config_dict = AutoConfig.from_pretrained(model_config, device_map="auto")
model = AutoModelForCausalLM.from_config(config_dict)
raw_state_dict = torch.load(args.model_path, map_location="cpu")
state_dict = convert_ckpt(raw_state_dict)
model.load_state_dict(state_dict, strict=False)
```
`model.load_state_dict(state_dict, strict=False)` only loads state dict on a single gpu, even when `device_map="auto"` is set by `AutoConfig`. Additionally, the `load_checkpoint_and_dispatch` func only accepts a file path as the `checkpoint` parameter.
Is there any way to automatically load state dict from memory to a multi-gpu device?
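A workaround sketch (an assumption on my part, not an official in-memory API): write the converted state dict to a temporary file and let `load_checkpoint_and_dispatch` place the weights according to `device_map="auto"` (reusing `config_dict`, `convert_ckpt` and `raw_state_dict` from the snippet above).
```python
import os
import tempfile
import torch
from transformers import AutoModelForCausalLM
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():  # skip allocating random weights that would be overwritten
    model = AutoModelForCausalLM.from_config(config_dict)

with tempfile.TemporaryDirectory() as tmp:
    ckpt_path = os.path.join(tmp, "converted.bin")
    torch.save(convert_ckpt(raw_state_dict), ckpt_path)
    model = load_checkpoint_and_dispatch(model, checkpoint=ckpt_path, device_map="auto")
```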
|
https://github.com/huggingface/accelerate/issues/2067
|
closed
|
[] | 2023-10-19T05:57:39Z
| 2023-12-22T15:06:31Z
| null |
tlogn
|
huggingface/accelerate
| 2,064
|
How to use `gather_for_metrics()` with decoder-generated strings to compute rouge score?
|
I am fine-tuning an encoder-decoder model and during the validation step, using the `.generate` method to generate tokens from the decoder that are subsequently decoded into strings (in this case classes). These generations are occurring across 8 GPUs and I am using Accelerate to manage the distribution.
My hope was to append these strings to lists, and pass the lists to `gather_for_metrics()` on each GPU to get a "master list" of predictions and references, added to the rouge metric and then computed:
```python
predictions, references = accelerator.gather_for_metrics(
(predictions, references)
)
rouge_metric.add_batch(
predictions=predictions,
references=references,
)
rouge_score = rouge_metric.compute(rouge_types=["rougeL"], use_aggregator=True)["rougeL"]
```
After encountering some strange errors, I noticed that `gather_for_metrics()` will [only interact with tensors](https://huggingface.co/docs/accelerate/v0.19.0/en/package_reference/accelerator#accelerate.Accelerator.gather_for_metrics).
And from what I can tell, you cannot create a torch.Tensor with string members.
How do the accelerate folks recommend using `gather_for_metrics()` with decoder-generated strings?
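A sketch of one way around this, assuming a recent accelerate version that provides `gather_object`, which handles arbitrary picklable Python objects such as strings (reusing `predictions`, `references` and `rouge_metric` from the snippet above):
```python
from accelerate.utils import gather_object

all_predictions = gather_object(predictions)  # flat list of strings from every process
all_references = gather_object(references)

rouge_metric.add_batch(predictions=all_predictions, references=all_references)
rouge_score = rouge_metric.compute(rouge_types=["rougeL"], use_aggregator=True)["rougeL"]
```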
|
https://github.com/huggingface/accelerate/issues/2064
|
closed
|
[
"solved"
] | 2023-10-18T19:25:29Z
| 2023-12-25T15:07:03Z
| null |
plamb-viso
|
huggingface/transformers.js
| 364
|
[Question] Error in getModelJSON with React
|
Hey, I am trying to transcribe audio to text using transformers.js. I tried two ways:
1. https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesautomaticspeechrecognitionpipeline
2. https://huggingface.co/docs/transformers.js/tutorials/react
But I seem to get an error like this:

Files for your reference: https://filebin.net/88munmsfk4u0127m
Please do let me know if I am doing something wrong or what is the best way using ReactJS
|
https://github.com/huggingface/transformers.js/issues/364
|
closed
|
[
"question"
] | 2023-10-18T16:57:20Z
| 2024-01-24T19:54:17Z
| null |
ajaykrupalk
|
pytorch/vision
| 8,053
|
When will torch and torchvision support Python 3.12?
|
### 🚀 The feature
Python 3.11 is the latest version that is supported by torch and torchvision. Python 3.12 was released this month and I'd like to know when we'll be able to use torch & torchvision packages with Python 3.12.
### Motivation, pitch
I'm not specifically having any troubles, it's just that I personally like to stay up to date, and since ya'll have an astonishing library, I'm trying to contribute to it from my side by first raising an issue, and see if there's any other technical contribution that I would be able to make in this specific goal for your package 🤝
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/vision/issues/8053
|
closed
|
[] | 2023-10-18T10:44:17Z
| 2023-10-18T12:10:37Z
| 1
|
AlirezaShk
|
huggingface/transformers.js
| 363
|
[Question] Build step process for Vercel
|
Hi, I am currently in the process of trying to deploy to Vercel using Nextjs.
I am using pnpm as my package manager and have put the model in the public folder.
I hit this error when building; is there something necessary post-install, as was done in #295? I don't understand why this step is necessary.
```
An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/var/task/node_modules/.pnpm/@xenova+transformers@2.6.2/node_modules/@xenova/transformers/.cache'
```
|
https://github.com/huggingface/transformers.js/issues/363
|
open
|
[
"question"
] | 2023-10-18T00:27:18Z
| 2024-04-06T06:23:06Z
| null |
kyeshmz
|
huggingface/setfit
| 432
|
[Q] How to ensure reproducibility
|
Can someone explain how to ensure reproducibility of a pre-trained model ("sentence-transformers/paraphrase-mpnet-base-v2")?
I thought that the result would be reproducible because SetFitTrainer() has a default random seed in its constructor, but found that it was not the case. SetFitTrainer source code indicates that "to ensure reproducibility across runs, I need to use [`~SetTrainer.model_init`] function to instantiate the model". But, I don't understand what it entails.
Is there an example that I can follow?
Any help would be highly appreciated.
Thanks,
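A minimal sketch of what `model_init` might look like, assuming a SetFit version whose `SetFitTrainer` accepts `model_init` and `seed` (the datasets below are placeholders):
```python
from setfit import SetFitModel, SetFitTrainer

def model_init():
    # Re-instantiated (with the trainer's seed applied) at the start of each run.
    return SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model_init=model_init,
    train_dataset=train_ds,  # placeholder
    eval_dataset=eval_ds,    # placeholder
    seed=42,
)
trainer.train()
```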
|
https://github.com/huggingface/setfit/issues/432
|
closed
|
[] | 2023-10-17T23:47:46Z
| 2023-12-06T13:19:54Z
| null |
youngjin-lee
|
huggingface/chat-ui
| 519
|
.env.local preprompt env variable with multiple lines
|
Hi
I have a preprompt which is basically a 2-shot inference prompt: a very long text (around 1200 lines) that I want to add as a preprompt, but the .env file does not allow multi-line text as a variable.
Any idea how to handle this?
|
https://github.com/huggingface/chat-ui/issues/519
|
open
|
[] | 2023-10-17T18:34:30Z
| 2023-11-07T13:11:21Z
| 6
|
RachelShalom
|
pytorch/xla
| 5,709
|
How can I debug the OpenXLA source code when working in PyTorch/XLA?
|
I built PyTorch and PyTorch/XLA and installed them on my computer, and I can debug PyTorch/XLA, but I don't know how to debug the OpenXLA source code.
The compilation of PyTorch/XLA depends on OpenXLA. The compiled OpenXLA source code can be seen here: xla/build/temp.linux-x86_64-cpython-310/bazel-xla/external. How should I set things up so that the debug build in VS Code can hit breakpoints in OpenXLA? And where is the XLA source code within it?
|
https://github.com/pytorch/xla/issues/5709
|
closed
|
[
"question",
"openxla"
] | 2023-10-17T12:02:34Z
| 2025-04-29T13:07:15Z
| null |
ckfgihub
|
huggingface/optimum
| 1,459
|
nougat to onnx
|
### Feature request
I would like to do the transformation of the [nougat](https://huggingface.co/facebook/nougat-base) model to onnx, is it possible to do it through optimum?
### Motivation
Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use markdown format.
|
https://github.com/huggingface/optimum/issues/1459
|
closed
|
[] | 2023-10-17T10:03:15Z
| 2024-08-27T06:16:17Z
| 3
|
arvisioncode
|
pytorch/vision
| 8,050
|
Any plans to implement the functions in opencv?
|
### 🚀 The feature
Expect an implementation of some of the apis available in opencv (e.g. cv2.findContours(), cv2.connectedComponents(), ...)
### Motivation, pitch
Just want torchvision to be able to do these things faster using gpus, and make these api faster.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/vision/issues/8050
|
open
|
[] | 2023-10-17T07:59:40Z
| 2023-10-18T18:24:54Z
| 1
|
mortal-Zero
|
huggingface/diffusers
| 5,416
|
How to correctly implement a class-conditional model
|
Hi, I'd like to implement a DDPM that is class-conditioned, but not conditioned on anything else (no text), using `UNet2DConditionModel`. I'm training from scratch.
I'm calling the model with `noise_pred = model(noisy_images, timesteps, class_labels=class_labels, return_dict=False)[0]`, but I get the error `UNet2DConditionModel.forward() missing 1 required positional argument: 'encoder_hidden_states'`. However, when I set `encoder_hidden_states` to `None`, I get `TypeError: AttnDownBlock2D.forward() got an unexpected keyword argument 'scale'`. I'm not sure what `encoder_hidden_states` should be set to since I'm only using class conditioning.
Thanks!
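One alternative sketch (an assumption on my part, not necessarily the only fix): for class-only conditioning, `UNet2DModel` with `num_class_embeds` avoids `encoder_hidden_states` entirely.
```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    num_class_embeds=10,  # number of classes
)
noisy_images = torch.randn(4, 3, 32, 32)
timesteps = torch.randint(0, 1000, (4,))
class_labels = torch.randint(0, 10, (4,))
noise_pred = model(noisy_images, timesteps, class_labels=class_labels).sample
print(noise_pred.shape)
```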
|
https://github.com/huggingface/diffusers/issues/5416
|
closed
|
[] | 2023-10-16T20:53:41Z
| 2023-10-16T21:02:39Z
| null |
nickk124
|
huggingface/chat-ui
| 511
|
ChatUI on HuggingFace Spaces errors out with PermissionError: [Errno 13] Permission denied
|
When I try following the below two tutorials I hit the same error, where the container code tries to create a directory and fails due to permission issues on the host
tutorials:
1. https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces
2. https://huggingface.co/blog/Llama2-for-non-engineers
Note: I have set the env vars `HUGGING_FACE_HUB_TOKEN` and in a prior attempt `HF_TOKEN` as well.
<img width="1252" alt="Screenshot 2023-10-16 at 1 25 04 AM" src="https://github.com/huggingface/chat-ui/assets/9070365/6a54c653-ed30-4bcf-af83-80dc04ce2bc1">
stack trace on hugging face space
```
Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 131, in download_weights
utils.download_and_unload_peft(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/peft.py", line 38, in download_and_unload_peft
os.makedirs(model_id, exist_ok=True)
File "/opt/conda/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/opt/conda/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: 'skrelan'
```
|
https://github.com/huggingface/chat-ui/issues/511
|
open
|
[
"support",
"spaces"
] | 2023-10-16T08:29:06Z
| 2023-12-17T02:58:52Z
| 3
|
Skrelan
|
huggingface/candle
| 1,105
|
How to run a model in Fp16?
|
EDIT: Never mind, see below comment
|
https://github.com/huggingface/candle/issues/1105
|
closed
|
[] | 2023-10-16T03:32:16Z
| 2023-10-18T19:40:54Z
| null |
joeyballentine
|
huggingface/candle
| 1,104
|
How to load .pth file weights?
|
I've been experimenting with candle and re-implementing ESRGAN in it. I ended up needing to convert a couple .pth files I have into .safetensors format in python in order to load them into the VarBuilder. I saw on the docs you say this supports loading pytorch weights directly though, but there does not seem to be an example on how to do that. I looked into the pickle module included in the library and got as far as being able to read the weights into a pickle format with TensorInfo, but then I got stuck trying to convert those to tensors and get it in a format VarBuilder would accept.
An example on how to either load these weights or convert them to safetensors format in rust would be great, thanks!
|
https://github.com/huggingface/candle/issues/1104
|
open
|
[] | 2023-10-16T03:29:53Z
| 2023-10-19T22:01:42Z
| null |
joeyballentine
|
huggingface/datasets
| 6,303
|
Parquet uploads off-by-one naming scheme
|
### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71e7ce">
The `-SSSSS-of-NNNNN` seems to be used widely across the codebase. The section that creates the part in my screenshot is here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5287
There are also some edits to this section in the single commit branch.
### Steps to reproduce the bug
1. Upload a dataset that requires at least two parquet files in it
2. Observe the naming scheme
### Expected behavior
The couple options here are of course **1. keeping it as is**
**2. Starting the index at 1:**
train-00001-of-00002-{hash}.parquet
train-00002-of-00002-{hash}.parquet
**3. My preferred option** (which would solve my specific issue), dropping the total entirely:
train-00000-{hash}.parquet
train-00001-{hash}.parquet
This also solves an issue that will occur with an `append` variable for `push_to_hub` (see https://github.com/huggingface/datasets/issues/6290) where as you add a new parquet file, you need to rename everything in the repo as well.
However, I know there are parts of the repo that use 0 as the starting file or may require the total, so raising the question for discussion.
### Environment info
- `datasets` version: 2.14.6.dev0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.18.0
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
|
https://github.com/huggingface/datasets/issues/6303
|
open
|
[] | 2023-10-14T18:31:03Z
| 2023-10-16T16:33:21Z
| 4
|
ZachNagengast
|
huggingface/diffusers
| 5,392
|
How to train an unconditional latent diffusion model ?
|
It seems that there is only one available unconditional LDM model (CompVis/ldm-celebahq-256).
```python
pipeline = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")
```
How can I train this unconditional model on my own dataset? The LDM model includes the training of both `VQModel` and `UNet2DModel`, but the [official training examples](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) seem not to be fully applicable.
|
https://github.com/huggingface/diffusers/issues/5392
|
closed
|
[] | 2023-10-14T03:32:34Z
| 2024-02-16T08:59:49Z
| null |
Rashfu
|
huggingface/safetensors
| 368
|
Streaming weights into a model directly?
|
### Feature request
Hi! I'm curious whether there is a way to stream model weights from disk into the on-GPU model directly?
That is, [I see](https://huggingface.co/docs/safetensors/speed#gpu-benchmark) that by settings `os.environ["SAFETENSORS_FAST_GPU"] = "1"` and using `load_file`, you can stream the weights themselves from disk to GPU. But if I understand correctly, one still has to wait for all of the weights to be moved to GPU before they can subsequently be loaded into the model itself: first load the weights to GPU by some means (possibly streaming), then `model.load(weights)`, schematically.
Is there a way to overlap the loading-into-model step with the streaming from disk?
Is something like that possible? Or already implemented somewhere?
### Motivation
Faster model loading.
### Your contribution
I don't know `rust`, but would be happy to contribute `python`-side. Just not sure if the request is feasible.
|
https://github.com/huggingface/safetensors/issues/368
|
closed
|
[
"Stale"
] | 2023-10-13T15:21:33Z
| 2023-12-11T01:48:41Z
| 1
|
garrett361
|
huggingface/huggingface_hub
| 1,734
|
Docs request: what is loaded/loadable?
|
When working with `get_model_status`: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.get_model_status
It tells you if the model is loadable and/or loaded. The question is, what does this mean?
- What does "loaded" mean... what is it loaded into?
- If something isn't loaded, but is loadable, how can one load it?
|
https://github.com/huggingface/huggingface_hub/issues/1734
|
closed
|
[] | 2023-10-13T04:59:47Z
| 2023-10-17T14:18:11Z
| null |
jamesbraza
|
huggingface/trl
| 868
|
What is the difference of these two saved checkpoints in sft_llama2 example?
|
I am trying to understand this
https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/sft_llama2.py#L206C1-L206C1
`trainer.model.save_pretrained(output_dir)` seems already saves the base+lora model to the "final_checkpoint".
Then what is doing here `model = model.merge_and_unload()` and save it again to "final_merged_checkpoint"?
```
trainer.save_model(script_args.output_dir)
output_dir = os.path.join(script_args.output_dir, "final_checkpoint")
trainer.model.save_pretrained(output_dir)
# Free memory for merging weights
del base_model
torch.cuda.empty_cache()
model = AutoPeftModelForCausalLM.from_pretrained(output_dir, device_map="auto", torch_dtype=torch.bfloat16)
model = model.merge_and_unload()
output_merged_dir = os.path.join(script_args.output_dir, "final_merged_checkpoint")
model.save_pretrained(output_merged_dir, safe_serialization=True)
```
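A sketch of how the two checkpoints differ at load time (the paths below are placeholders relative to `script_args.output_dir`): the first holds only the LoRA adapter, the second holds the merged full weights.
```python
from transformers import AutoModelForCausalLM
from peft import AutoPeftModelForCausalLM

# "final_checkpoint": adapter weights only; PEFT re-attaches them to the base model.
adapter_model = AutoPeftModelForCausalLM.from_pretrained("output/final_checkpoint")

# "final_merged_checkpoint": merge_and_unload() folded the adapter into the base
# weights, so this loads as a plain causal LM with no PEFT dependency.
merged_model = AutoModelForCausalLM.from_pretrained("output/final_merged_checkpoint")
```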
|
https://github.com/huggingface/trl/issues/868
|
closed
|
[] | 2023-10-13T04:31:57Z
| 2023-10-30T17:15:35Z
| null |
Emerald01
|
huggingface/blog
| 1,577
|
How to use mAP metric for object detection task?
|
I use the pretrained checkpoint `facebook/detr-resnet-50`.
How can I use mAP as the evaluation metric?
```
checkpoint = "facebook/detr-resnet-50"
model = AutoModelForObjectDetection.from_pretrained(
checkpoint, ..., ignore_mismatched_sizes=True,
)
metric = evaluate.load('repllabs/mean_average_precision')
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
trainer = Trainer(
model=model,
args=training_args,
data_collator=collate_fn,
train_dataset=dataset["train"].with_transform(transform_aug_ann),
eval_dataset=dataset["test"].with_transform(transform_aug_ann),
compute_metrics=compute_metrics,
tokenizer=image_processor,
)
```
I tried this way, but I have some errors here
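One sketch of a different route (an assumption about the setup: detection metrics need post-processed boxes/scores/labels, which the argmax-over-logits `compute_metrics` above does not produce), using torchmetrics' mAP implementation (requires `pycocotools`):
```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision(iou_type="bbox")
preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 50.0, 60.0]]),  # xyxy format
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 22.0, 48.0, 58.0]]),
    "labels": torch.tensor([1]),
}]
metric.update(preds, targets)
print(metric.compute()["map"])
```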
|
https://github.com/huggingface/blog/issues/1577
|
open
|
[] | 2023-10-12T13:58:52Z
| 2023-12-04T12:01:33Z
| null |
IamSVP94
|
huggingface/accelerate
| 2,051
|
Accelerate Examples: What is expected to print on terminal?
|
### System Info
```Shell
- `Accelerate` version: 0.23.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Numpy version: 1.26.0
- PyTorch version (GPU?): 1.13.1 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 1007.69 GB
- GPU type: NVIDIA A100-SXM4-40GB
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 3,4
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to run a simple example (`nlp_example.py`) to kind of perform the equivalent of a hello world task in accelerate, but unfortunately, I'm uncertain as to whether it's working correctly, and I'm somewhat embarrassed to have to post this issue ticket to seek assistance. 😅
I ran `$ python examples/nlp_example.py --cpu ` and got this output:
```bash
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```
I believe the program continues to run after the above message is printed because control of the terminal's prompt isn't returned to me.
There isn't a tqdm bar, progress bar, or signs of life of some sort to indicate that the example was running.
Would be great if someone who has some success at running any basic accelerate example scripts to chime in 🙂
### Expected behavior
Signs of life of some sort to indicate that the example is running fine.
|
https://github.com/huggingface/accelerate/issues/2051
|
closed
|
[] | 2023-10-12T13:50:40Z
| 2023-10-12T15:06:44Z
| null |
davidleejy
|
pytorch/examples
| 1,194
|
resume train
|
When I try to resume training on ImageNet, this happens. How do I solve this problem?


|
https://github.com/pytorch/examples/issues/1194
|
open
|
[] | 2023-10-12T11:39:48Z
| 2024-05-31T06:03:55Z
| 2
|
hefangnan
|
huggingface/text-generation-inference
| 1,137
|
When I start the model, I get a warning message. I want to know why and how to solve it.
|
### System Info
- OS version: Debian GNU/Linux 11 (bullseye)
- Commit sha: 00b8f36fba62e457ff143cce35564ac6704db860
- Cargo version: 1.70.0
- model: Starcoder
- nvidia-smi:
```
Thu Oct 12 18:23:03 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A800-SXM4-80GB On | 00000000:4B:00.0 Off | 0 |
| N/A 29C P0 73W / 400W | 36679MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA A800-SXM4-80GB On | 00000000:51:00.0 Off | 0 |
| N/A 31C P0 62W / 400W | 5MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA A800-SXM4-80GB On | 00000000:6A:00.0 Off | 0 |
| N/A 31C P0 61W / 400W | 5MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA A800-SXM4-80GB On | 00000000:6F:00.0 Off | 0 |
| N/A 29C P0 61W / 400W | 5MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA A800-SXM4-80GB On | 00000000:8D:00.0 Off | 0 |
| N/A 28C P0 61W / 400W | 5MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA A800-SXM4-80GB On | 00000000:92:00.0 Off | 0 |
| N/A 30C P0 62W / 400W | 5MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA A800-SXM4-80GB On | 00000000:C9:00.0 Off | 0 |
| N/A 32C P0 67W / 400W | 78233MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 7 NVIDIA A800-SXM4-80GB On | 00000000:CF:00.0 Off | 0 |
| N/A 29C P0 58W / 400W | 5MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [ ] An officially supported command
- [X] My own modifications
### Reproduction
My execution command is:
```
CUDA_VISIBLE_DEVICES=0 /workspace/xieshijie/text-generation-inference/target/release/deps/text_generation_launcher-b64a71565ded74a5 --model-id /workspace/xieshijie/huggingface-models/starcoder2/models--bigcode--starcoder/snapshots/e117ab3b3d0769fd962bd48b099de711757a3d60 --port 6006 --max-input-length 8000 --max-total-tokens 8192 --max-batch-prefill
```
|
https://github.com/huggingface/text-generation-inference/issues/1137
|
closed
|
[] | 2023-10-12T10:33:38Z
| 2023-10-19T07:02:58Z
| null |
coder-xieshijie
|
huggingface/datasets
| 6,299
|
Support for newer versions of JAX
|
### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX (<= 0.3)... It is very cumbersome!
What is the rationale for such a limitation? Can you remove it, please?
Thanks,
### Motivation
This library is unusable with new versions of JAX ?
### Your contribution
Yes.
|
https://github.com/huggingface/datasets/issues/6299
|
closed
|
[
"enhancement"
] | 2023-10-12T10:03:46Z
| 2023-10-12T16:28:59Z
| 0
|
ddrous
|
huggingface/diffusers
| 5,372
|
How to use safety_checker in StableDiffusionXLPipeline?
|
### Describe the bug
I want to use safety_checker in StableDiffusionXLPipeline, but it seems that `safety_checker` keyword does not take effect
### Reproduction
```python
pipe = StableDiffusionXLPipeline.from_pretrained(
"nyxia/mysterious-xl",
torch_dtype=torch.float16,
safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"),
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result = pipe(
prompt="1girl",
)
```
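A workaround sketch, under the assumption that `StableDiffusionXLPipeline` simply has no `safety_checker` component (which is why the kwarg is ignored): run the checker manually on the generated images. The feature extractor is borrowed from an SD 1.x repo here, and `pipe` is the pipeline from the snippet above.
```python
import numpy as np
from transformers import CLIPImageProcessor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker")
feature_extractor = CLIPImageProcessor.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="feature_extractor")

images = pipe(prompt="1girl").images                               # PIL images from the SDXL pipeline
clip_input = feature_extractor(images, return_tensors="pt").pixel_values
np_images = np.stack([np.array(img) / 255.0 for img in images])
checked_images, has_nsfw = safety_checker(images=np_images, clip_input=clip_input)
print(has_nsfw)  # list of booleans; flagged images are blacked out in checked_images
```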
### Logs
I got the following error
```shell
Keyword arguments {'safety_checker': StableDiffusionSafetyChecker(
(vision_model): CLIPVisionModel(
(vision_model): CLIPVisionTransformer(
(embeddings): CLIPVisionEmbeddings(
(patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
(position_embedding): Embedding(257, 1024)
)
(pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(encoder): CLIPEncoder(
(layers): ModuleList(
(0-23): 24 x CLIPEncoderLayer(
(self_attn): CLIPAttention(
(k_proj): Linear(in_features=1024, out_features=1024, bias=True)
(v_proj): Linear(in_features=1024, out_features=1024, bias=True)
(q_proj): Linear(in_features=1024, out_features=1024, bias=True)
(out_proj): Linear(in_features=1024, out_features=1024, bias=True)
)
(layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(mlp): CLIPMLP(
(activation_fn): QuickGELUActivation()
(fc1): Linear(in_features=1024, out_features=4096, bias=True)
(fc2): Linear(in_features=4096, out_features=1024, bias=True)
)
(layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
)
)
)
(post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
)
)
(visual_projection): Linear(in_features=1024, out_features=768, bias=False)
)} are not expected by StableDiffusionXLPipeline and will be ignored.
```
### System Info
- `diffusers` version: 0.20.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Huggingface_hub version: 0.17.3
- Transformers version: 4.34.0
- Accelerate version: 0.23.0
- xFormers version: 0.0.22
- Using GPU in script?: yes
### Who can help?
@yiyixuxu @sayakpaul @DN6 @patrickvonplaten
thanks for your kindly help
|
https://github.com/huggingface/diffusers/issues/5372
|
closed
|
[
"bug"
] | 2023-10-12T03:39:23Z
| 2023-10-12T08:13:28Z
| null |
hundredwz
|
huggingface/transformers.js
| 354
|
[Question] Whisper Progress
|
Is it possible to obtain the transcription progress of Whisper's model, ranging from 0 to 100%?
|
https://github.com/huggingface/transformers.js/issues/354
|
open
|
[
"question"
] | 2023-10-11T20:41:01Z
| 2025-05-23T10:12:13Z
| null |
FelippeChemello
|
huggingface/text-generation-inference
| 1,131
|
How to send a request with system, user and assistant prompt?
|
How do I send a request with a prompt split into system, user and assistant parts, like ChatGPT, where we can specify which of the 3 categories the prompt belongs to?
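A sketch (with an assumption about the prompt format): TGI's `/generate` endpoint takes a single `inputs` string, so the roles have to be flattened into the model's own chat template first. The tags below follow the Llama-2-chat convention; other models use different templates.
```python
import requests

system = "You are a helpful assistant."
user = "How do I send a request with system, user and assistant parts?"
prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

resp = requests.post(
    "http://localhost:8080/generate",  # placeholder TGI address
    json={"inputs": prompt, "parameters": {"max_new_tokens": 200}},
)
print(resp.json()["generated_text"])
```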
|
https://github.com/huggingface/text-generation-inference/issues/1131
|
closed
|
[
"Stale"
] | 2023-10-11T09:21:14Z
| 2024-01-10T17:26:12Z
| null |
ShRajSh
|
huggingface/dataset-viewer
| 1,962
|
Install dependency `music_tag`?
|
Requested here: https://huggingface.co/datasets/zeio/baneks-speech/discussions/1
|
https://github.com/huggingface/dataset-viewer/issues/1962
|
closed
|
[
"question",
"custom package install",
"P2"
] | 2023-10-11T08:07:53Z
| 2024-02-02T17:18:50Z
| null |
severo
|
huggingface/datasets
| 6,292
|
how to load the image of dtype float32 or float64
|
_FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
The datasets builder seems to only support uint8 data. How can I load float dtype data?
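A sketch of one alternative (an assumption about the use case): store float images as fixed-shape arrays with `Array3D` instead of the `Image()` feature, which is backed by encoded uint8 images.
```python
import numpy as np
import datasets

_FEATURES = datasets.Features(
    {
        "image": datasets.Array3D(shape=(256, 256, 3), dtype="float32"),
        "text": datasets.Value("string"),
    }
)
ds = datasets.Dataset.from_dict(
    {"image": [np.random.rand(256, 256, 3).astype("float32")], "text": ["example"]},
    features=_FEATURES,
)
print(ds.features)
```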
|
https://github.com/huggingface/datasets/issues/6292
|
open
|
[] | 2023-10-11T07:27:16Z
| 2023-10-11T13:19:11Z
| null |
wanglaofei
|
huggingface/optimum
| 1,442
|
Steps to quantize Llama 2 models for CPU inference
|
Team,
could you please share the steps to quantize the Llama 2 models for CPU inference.
When I followed ORTModelForCausalLM, I faced challenges: the request failed with 401 Forbidden even though the token was passed.
For the offline model, I faced an issue related to not being able to load from a local directory.
Please share the steps.
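A rough sketch of one path (dynamic int8 quantization via ONNX Runtime), with several assumptions: the gated repo is accessible with a valid token, the architecture is supported by the exporter, and the exported decoder file is named `decoder_model.onnx` (adjust to whatever the export actually produces).
```python
from optimum.onnxruntime import ORTModelForCausalLM, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export to ONNX (requires access to the gated meta-llama repo, or a local copy).
model = ORTModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", export=True)
model.save_pretrained("llama2-onnx")

# Dynamic int8 quantization for CPU.
quantizer = ORTQuantizer.from_pretrained("llama2-onnx", file_name="decoder_model.onnx")
qconfig = AutoQuantizationConfig.avx2(is_static=False, per_channel=False)
quantizer.quantize(save_dir="llama2-onnx-int8", quantization_config=qconfig)
```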
|
https://github.com/huggingface/optimum/issues/1442
|
open
|
[
"question",
"quantization"
] | 2023-10-11T05:32:58Z
| 2024-10-15T16:19:59Z
| null |
eswarthammana
|
huggingface/dataset-viewer
| 1,956
|
upgrade hfh to 0.18.0?
|
https://github.com/huggingface/huggingface_hub/releases/tag/v0.18.0
|
https://github.com/huggingface/dataset-viewer/issues/1956
|
closed
|
[
"question",
"blocked-by-upstream",
"dependencies",
"P2"
] | 2023-10-10T12:33:04Z
| 2023-11-16T11:47:04Z
| null |
severo
|
huggingface/diffusers
| 5,353
|
How to use FreeU in SimpleCrossAttnUpBlock2D?
|
I've tried to change your code in order to handle SimpleCrossAttnUpBlock2D, however it seems that the shapes don't match up. How can I do it? Thanks!
```Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 523, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1437, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 865, in wrapper
response = f(*args, **kwargs)
File "/home/ubuntu/mimesis-ml-gan-backend/app.py", line 128, in generate
image = pipe(image=input_image,
File "/usr/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/mimesis-ml-gan-backend/src/diffusions/kandinsky/pipeline_kandinsky_img2img_scheduler.py", line 125, in __call__
noise_pred = self.unet(
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 1020, in forward
sample = upsample_block(
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/mimesis-ml-gan-backend/free_lunch_utils.py", line 166, in forward
hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
RuntimeError: Tensors must have same number of dimensions: got 3 and 4 ```
|
https://github.com/huggingface/diffusers/issues/5353
|
closed
|
[] | 2023-10-10T09:13:22Z
| 2023-10-11T05:11:38Z
| null |
americanexplorer13
|
huggingface/computer-vision-course
| 25
|
Should we use safetensors?
|
I wondered if we should add an official recommendation to use the `safetensors` saving format wherever possible.
But I have to admit, that I'm not that familiar with it, so I don't know how much overhead it would be in cases where we cannot use a HF library like `transformers`.
|
https://github.com/huggingface/computer-vision-course/issues/25
|
closed
|
[
"question"
] | 2023-10-09T19:38:39Z
| 2023-10-11T20:50:32Z
| null |
johko
|
huggingface/tokenizers
| 1,362
|
When decoding an English sentence with the 'add_prefix_space' parameter set to 'False,' how can I add spaces?
|
I trained a tokenizer with 'add_prefix_space' set to 'False'. How can I ensure that BBPE tokenizers correctly restore spaces when decoding a sequence?
```
normalizer = normalizers.Sequence([NFC(), StripAccents()])
tokenizer.normalizer = normalizer
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
[Whitespace(), Punctuation(), Digits(individual_digits=True), UnicodeScripts(),
ByteLevel(add_prefix_space=False, use_regex=True), ])
tokenizer.decoder = decoders.ByteLevel(add_prefix_space=False, use_regex=True)
tokenizer.post_processor = tokenizers.processors.ByteLevel()
```
|
https://github.com/huggingface/tokenizers/issues/1362
|
closed
|
[] | 2023-10-09T16:19:43Z
| 2023-10-30T14:25:24Z
| null |
enze5088
|
huggingface/dataset-viewer
| 1,952
|
filter parameter should accept any character?
|
https://datasets-server.huggingface.co/filter?dataset=polinaeterna/delays_nans&config=default&split=train&where=string_col=йопта&offset=0&limit=100
gives an error
```
{"error":"Parameter 'where' is invalid"}
```
|
https://github.com/huggingface/dataset-viewer/issues/1952
|
closed
|
[
"bug",
"question",
"P1"
] | 2023-10-09T13:59:20Z
| 2023-10-09T17:26:15Z
| null |
severo
|
huggingface/chat-ui
| 495
|
Make the description customizable in the .env
|
I'd like to customize the description of chat-ui as marked below. But I can't find how to do it in your tutorial, README.md.
It would be highly appreciated if you could assist.

|
https://github.com/huggingface/chat-ui/issues/495
|
closed
|
[
"enhancement",
"good first issue",
"front",
"hacktoberfest"
] | 2023-10-09T13:57:32Z
| 2023-10-13T13:49:47Z
| 7
|
sjbpsh
|
huggingface/datasets
| 6,287
|
map() not recognizing "text"
|
### Describe the bug
The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:
`
ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`
I have been trying to reproduce it in my code as:
`tokenizedDataset = dataset.map(lambda x: tokenizer(x['text']), batched=True)`
But it doesn't work as it throws the error:
> KeyError: 'text'
Can you please guide me on how to fix it?
### Steps to reproduce the bug
1. `from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi")`
2. Then this code: `from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")`
3. The line I quoted above (which I have been trying)
### Expected behavior
As mentioned in the documentation, it should run without any error and map the tokenization on the whole dataset.
### Environment info
Python 3.10.2
|
https://github.com/huggingface/datasets/issues/6287
|
closed
|
[] | 2023-10-09T10:27:30Z
| 2023-10-11T20:28:45Z
| 1
|
EngineerKhan
|
pytorch/xla
| 5,687
|
Through step_trace api profile xla program, but the result cannot be opened using Tensorboard
|
## ❓ Questions and Help
TensorBoard will report this error: Failed to load libcupti (is it installed and accessible?)
But I think loading libcupti succeeds. When I use the command below, I get the correct load info:
lsof -p 430621 | grep cup
python 430621 root mem REG 253,17 7199856 104860301 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcupti.so.2022.3.0
|
https://github.com/pytorch/xla/issues/5687
|
open
|
[
"question"
] | 2023-10-09T08:06:30Z
| 2025-04-29T13:11:27Z
| null |
mars1248
|
huggingface/diffusers
| 5,337
|
What is the function of `callback` in stable diffusion?
|
I am reading the source code for the stable diffusion pipeline. I wonder what the function of `callback` is. How do I use it? Is there an example?
https://github.com/huggingface/diffusers/blob/29f15673ed5c14e4843d7c837890910207f72129/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L585C13-L585C21
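For reference, a minimal sketch of how `callback` can be used (assuming a diffusers version whose `__call__` still accepts `callback`/`callback_steps`, and a CUDA GPU; the model id is just an example). The function is invoked every `callback_steps` denoising steps with the current latents, which is useful for progress reporting or inspecting intermediates.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def on_step(step: int, timestep: int, latents: torch.FloatTensor):
    # called every `callback_steps` denoising steps
    print(f"step={step} timestep={timestep} latents_norm={latents.norm().item():.2f}")

image = pipe("a photo of an astronaut", callback=on_step, callback_steps=5).images[0]
```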
|
https://github.com/huggingface/diffusers/issues/5337
|
closed
|
[
"stale"
] | 2023-10-09T06:02:13Z
| 2023-11-16T15:05:20Z
| null |
g-jing
|
huggingface/open-muse
| 122
|
How to finetune the muse-512?
|
Thank you for your contributions to the open-source community. After testing your weights, we found that the fine-tuned muse-512 has made significant improvements in image quality. We are very interested in this and would like to know how you performed the fine-tuning on the model. For example, what dataset did you use for fine-tuning? Is it open-source? What are its characteristics? Once again, we appreciate your contributions to the open-source community.
|
https://github.com/huggingface/open-muse/issues/122
|
open
|
[] | 2023-10-09T05:00:54Z
| 2023-10-09T05:00:54Z
| null |
jiaxiangc
|
huggingface/diffusers
| 5,335
|
How to deploy locally, as the Chinese government has blocked Hugging Face?
|
### Describe the bug
I have all the model ckpt/safetensors files locally, but it still tries to connect to /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-infer
### Reproduction
pipe = diffusers.StableDiffusionPipeline.from_single_file(base_model,
torch_dtype=torch.float16,
use_safetensors=True,
safety_checker=None,)
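For reference, a hedged sketch (exact kwargs depend on the diffusers version): pointing the loader at a locally saved copy of the v1 inference YAML should stop it from reaching out to GitHub. `base_model` is the local file path from the snippet above, and the YAML filename is an assumption for a file downloaded beforehand.
```python
import torch
import diffusers

pipe = diffusers.StableDiffusionPipeline.from_single_file(
    base_model,                                  # local .safetensors path from above
    original_config_file="v1-inference.yaml",    # assumption: YAML saved locally beforehand
    torch_dtype=torch.float16,
    use_safetensors=True,
    local_files_only=True,                       # never try the network
    safety_checker=None,
)
```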
### Logs
_No response_
### System Info
Platform: Win10
Python version: 3.10.11
PyTorch version (GPU?): 2.0.1+cu118
diffusers version: 0.16.1
Transformers version: 4.26.0
Accelerate version: 0.15.0
xFormers version: not installed
Using GPU in script?: 3070
Using distributed or parallel set-up in script?: No
### Who can help?
@yiyixuxu @DN6 @patrickvonplaten @sayakpaul @patrickvonplaten
|
https://github.com/huggingface/diffusers/issues/5335
|
closed
|
[
"bug",
"stale"
] | 2023-10-09T01:55:44Z
| 2024-01-17T10:44:31Z
| null |
Louis24
|
huggingface/chat-ui
| 485
|
chat-ui and TGI Connect Timeout Error
|
Hi, I used TGI as a backend for llama2. When I put the TGI endpoint in chat-ui, it cannot connect, even though TGI and chat-ui are on the same machine. Would you give me some suggestions? Thank you!
TGI itself works well:
```shell
curl http://127.0.0.1:8081/generate_stream \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
data:{"token":{"id":13,"text":"\n","logprob":-0.45239258,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":13,"text":"\n","logprob":-0.5541992,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":2772,"text":"De","logprob":-0.016738892,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":1022,"text":"ep","logprob":-0.000002503395,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":6509,"text":" learning","logprob":-0.026168823,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":30081,"text":" ","logprob":-0.08898926,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":29898,"text":"(","logprob":-0.0023441315,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":15189,"text":"also","logprob":-0.0006175041,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":2998,"text":" known","logprob":-0.000029087067,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":408,"text":" as","logprob":-7.1525574e-7,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":30081,"text":" ","logprob":-0.0052261353,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":24535,"text":"deep","logprob":-0.0019664764,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":2281,"text":" struct","logprob":-0.0007429123,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":2955,"text":"ured","logprob":-0.000027537346,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":6509,"text":" learning","logprob":-0.000081300735,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":29897,"text":")","logprob":-0.00006067753,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":338,"text":" is","logprob":-0.00009846687,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":760,"text":" part","logprob":-0.000022292137,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":310,"text":" of","logprob":-3.5762787e-7,"special":false},"generated_text":null,"details":null}
data:{"token":{"id":263,"text":" a","logprob":-0.00013446808,"special":false},"generated_text":"\n\nDeep learning (also known as deep structured learning) is part of a","details":null}
```
chat-ui **.env.local** MODELS config:
```shell
MODELS=`[
{
"name": "Trelis/Llama-2-7b-chat-hf-function-calling",
"datasetName": "Trelis/function_calling_extended",
"description": "function calling Llama-7B-chat",
"websiteUrl": "https://research.Trelis.com",
"userMessageToken": "",
"userMessageEndToken": " [/INST] ",
"assistantMessageToken": "",
"assistantMessageEndToken": " </s><s>[INST] ",
"chatPromptTemplate" : "<s>[INST] <<SYS>>\nRespond in French to all questions\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.01,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024
},
"endpoints": [{
"url": "http://127.0.0.1:8081/generate_stream"
}]
}
]`
```
error message:
```shell
[vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:
|- TypeError: fetch failed
at fetch (/root/chat-ui/node_modules/undici/index.js:109:13)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at runNextTicks (node:internal/process/task_queues:64:3)
at process.processImmediate (node:internal/timers:447:9)
at async getModelFile (file:///root/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
at async getModelJSON (file:///root/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
at async Promise.all (index 0)
at async loadTokenizer (file:///root/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
at async AutoTokenizer.from_pretrained (file:///root/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
at async Promise.all (index 0)
2:32:29 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.
|
https://github.com/huggingface/chat-ui/issues/485
|
closed
|
[
"support"
] | 2023-10-08T06:36:26Z
| 2025-01-16T23:13:34Z
| 8
|
ViokingTung
|
huggingface/transformers
| 26,665
|
How to resume training from a checkpoint when training LoRA using deepspeed?
|
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_config_file': 'none', 'zero3_init_flag': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR', 'dynamo_mode': 'default', 'dynamo_use_dynamic': False, 'dynamo_use_fullgraph': False}
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pacman100 @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using deepspeed to train LoRA, I want to use the resume function of the trainer. The sample code is as follows:
```python
causal_model = AutoModelForCausalLM.from_pretrained(model_pretrained_path_,
config=config,
trust_remote_code=True,
low_cpu_mem_usage=self.params["low_cpu_mem_usage"])
peft = PEFT(config_path_or_data=peft_params)
causal_model = peft.get_peft_model(model=causal_model)
trainer = Seq2SeqTrainer(
params=trainer_params,
model=causal_model,
tokenizer=tokenizer,
train_dataset=train_dataset,
data_collator=data_collator,
eval_dataset=eval_dataset,
compute_metrics=dataset_t.metric,
)
trainer.train(resume_from_checkpoint=True)
```
deepspeed config as follows:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"cpu_offload": false,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 50,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
### Expected behavior
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)
|
https://github.com/huggingface/transformers/issues/26665
|
closed
|
[] | 2023-10-08T03:51:00Z
| 2024-01-06T08:06:06Z
| null |
Sakurakdx
|
huggingface/chat-ui
| 484
|
Rich text input for the chat bar?
|
Taking a nifty feature from the Claude API here: models on HuggingChat, and most models used with Chat UI, can process and fluently speak Markdown.
It's pretty easy to take something like Remarkable and render rich text, like titles, bold text and lists.
It's helpful for users to organize content, to be able to highlight things, or to put items in lists.
Hoping for a feature like this.
|
https://github.com/huggingface/chat-ui/issues/484
|
open
|
[
"enhancement",
"front"
] | 2023-10-07T19:25:45Z
| 2023-10-09T00:20:09Z
| 2
|
VatsaDev
|
pytorch/vision
| 8,026
|
How to make the RegionProposalNetwork generate more proposals in FasterRCNN?
|
I'm trying to update the proposal loss function of MaskRCNN to increase the recall. I'm trying to do this by adding a positive weight to the BCE function.
Here is how I create my proposal loss function:
```
CLASS_WEIGHTS = torch.tensor([50])
def compute_loss(
objectness: Tensor, pred_bbox_deltas: Tensor, labels: List[Tensor], regression_targets: List[Tensor]
) -> Tuple[Tensor, Tensor]:
"""
Args:
objectness (Tensor)
pred_bbox_deltas (Tensor)
labels (List[Tensor])
regression_targets (List[Tensor])
Returns:
objectness_loss (Tensor)
box_loss (Tensor)
"""
sampled_pos_inds, sampled_neg_inds = model.rpn.fg_bg_sampler(labels)
sampled_pos_inds = torch.where(torch.cat(sampled_pos_inds, dim=0))[0]
sampled_neg_inds = torch.where(torch.cat(sampled_neg_inds, dim=0))[0]
sampled_inds = torch.cat([sampled_pos_inds, sampled_neg_inds], dim=0)
objectness = objectness.flatten()
labels = torch.cat(labels, dim=0)
regression_targets = torch.cat(regression_targets, dim=0)
box_loss = F.smooth_l1_loss(
pred_bbox_deltas[sampled_pos_inds],
regression_targets[sampled_pos_inds],
beta=1 / 9,
reduction="sum",
) / (sampled_inds.numel())
objectness_loss = F.binary_cross_entropy_with_logits(objectness[sampled_inds], labels[sampled_inds],
pos_weight=CLASS_WEIGHTS # USE CLASS WEIGHT HERE
)
return objectness_loss, box_loss
```
Then how I set the model to use this proposal losses function:
```
model = maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT)
model.rpn.compute_loss = compute_loss
```
When I train the model now:
- the **loss** increases significantly (e.g. before it was 1, now it is like 50, which is expected)
- BUT the **recall** stays around the same (e.g. stagnates around 0.55 after training for several epochs)
Why is this the case? How do I get the recall to improve (i.e. how do I generate more proposals)?
*FYI: I already tried setting the score threshold to 0, this didn't do anything either…*
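For reference, a hedged sketch of a different knob than the loss weight: the torchvision detection constructors expose the RPN proposal counts and score threshold directly, which is one way to get more proposals out of the RPN. The values below are illustrative, not tuned.
```python
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

model = maskrcnn_resnet50_fpn(
    weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT,
    rpn_pre_nms_top_n_train=4000,   # default 2000
    rpn_pre_nms_top_n_test=2000,    # default 1000
    rpn_post_nms_top_n_train=4000,  # default 2000
    rpn_post_nms_top_n_test=2000,   # default 1000
    rpn_score_thresh=0.0,           # keep low-scoring proposals
    box_detections_per_img=300,     # default 100, affects final detections
)
```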
|
https://github.com/pytorch/vision/issues/8026
|
open
|
[] | 2023-10-07T00:06:53Z
| 2023-10-08T08:36:19Z
| null |
darian69
|
huggingface/chat-ui
| 480
|
Porting through nginx on aws
|
I have this up and running on AWS, but it only works on localhost on my machine. How can I use Nginx to expose this at some address?
|
https://github.com/huggingface/chat-ui/issues/480
|
open
|
[
"support"
] | 2023-10-06T10:39:52Z
| 2023-10-08T21:13:10Z
| 0
|
Mr-Nobody1
|
huggingface/sentence-transformers
| 2,330
|
How to make prediction in NLI
|
I can't make predictions for the NLI task after training with the training_nli example file. Can you help me?
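For reference, a hedged sketch: the bi-encoder trained by training_nli.py keeps only the sentence embeddings, not the softmax classification head, so for actual NLI label prediction a CrossEncoder checkpoint is the simpler route. The checkpoint name and label order below follow that model's card and are assumptions here.
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/nli-deberta-v3-base")  # example NLI checkpoint
scores = model.predict([("A man is eating food.", "A man is eating something.")])
labels = ["contradiction", "entailment", "neutral"]  # order per that model's card
print(labels[scores[0].argmax()])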
|
https://github.com/huggingface/sentence-transformers/issues/2330
|
closed
|
[] | 2023-10-06T08:52:59Z
| 2024-01-31T16:18:18Z
| null |
trthminh
|
pytorch/pytorch
| 110,630
|
Memory efficient attention for tensors where the last dimension is not divisible by 8
|
### 🚀 The feature, motivation and pitch
Currently, using `scaled_dot_product_attention` and the memory efficient kernel requires that the last dimension of the inputs is divisible by 8. Typically, this corresponds to the dimension per head in multihead attention, for example when using the `[batch, head, seq, dim]` convention.
Using inputs that do not conform to this requirement results in a `RuntimeError: No available kernel. Aborting execution.` and a warning: `UserWarning: Mem efficient attention requires last dimension of inputs to be divisible by 8.`
It would be great if this requirement could be relaxed, for example by only being divisible by 2. The [TPU implementation associated with the paper](https://github.com/google-research/google-research/tree/master/memory_efficient_attention) appears to work with arbitrary dimensions, but this might not be the case for GPUs.
It would also be helpful if these requirements would be documented (the [documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) appears to be missing in this regard).
### Alternatives
The Flash attention kernel supports this feature, but it is missing some others, e.g. attention masks.
### Additional context
A minimal example:
```python
import torch
import torch.nn.functional as F
qkv_size = (10, 128, 123, 2)
Q = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
K = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
V = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
```
The output
```
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Memory efficient kernel not used because: (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:350](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:350).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Mem efficient attention requires last dimension of inputs to be divisible by 8. Got Query.size(-1): 2, Key.size(-1): 2, Value.size(-1): 2 instead. (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:128](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:128).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Flash attention kernel not used because: (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:352](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:352).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Flash attention has been runtime disabled. (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/sdp_utils_cpp.h:439](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/sdp_utils_cpp.h:439).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[34], line 2
1 with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
----> 2 O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
RuntimeError: No available kernel. Aborting execution.
```
This is using PyTorch 2.2.0.dev20231001, CUDA 11.8, and an Ampere GPU.
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki
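For reference, one possible workaround (an assumption, not an official fix): zero-pad the head dimension up to a multiple of 8 so the mem-efficient kernel accepts it, and pass an explicit `scale` so the softmax scaling still uses the original dimension (the `scale` kwarg exists in PyTorch 2.1+).
```python
import torch
import torch.nn.functional as F

qkv_size = (10, 128, 123, 2)
Q = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
K = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
V = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)

pad = (-Q.size(-1)) % 8                                # 6 extra channels here
Qp, Kp, Vp = (F.pad(t, (0, pad)) for t in (Q, K, V))   # zero-pad the last dim

with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    # explicit scale keeps the original 1/sqrt(d) scaling despite the padding
    O = F.scaled_dot_product_attention(Qp, Kp, Vp, attn_mask=None, dropout_p=0,
                                       scale=Q.size(-1) ** -0.5)
O = O[..., :Q.size(-1)]  # drop the padded output channels again
```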
|
https://github.com/pytorch/pytorch/issues/110630
|
open
|
[
"triaged",
"module: sdpa"
] | 2023-10-05T18:23:58Z
| 2024-11-27T20:11:39Z
| null |
davidbuterez
|
huggingface/candle
| 1,036
|
How to fine-tune large models?
|
Hello all,
How should I finetune a large model? Are there implementations like `peft` in Python for Candle? Specifically, how should I train a quantized, LoRA model? I saw [candle-lora](https://github.com/EricLBuehler/candle-lora), and plan to use that but do not know how to quantize a large model.
|
https://github.com/huggingface/candle/issues/1036
|
closed
|
[] | 2023-10-05T16:43:17Z
| 2024-12-03T15:55:53Z
| null |
nullptr2nullptr
|
pytorch/vision
| 8,024
|
How to update RegionProposalNetwork loss function in FasterRCNN to generate MORE proposals?
|
https://github.com/pytorch/vision/issues/8024
|
closed
|
[] | 2023-10-05T14:52:06Z
| 2023-10-07T00:26:21Z
| null |
darian69
|
|
huggingface/trl
| 837
|
What is the loss mask for special tokens in SFTTrainer
|
### System Info
latest transformers
### Who can help?
@muellerzr and @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm training with SFTTrainer and want to ensure that the model is including the loss on predicting an EOS token (< /s >).
What is the default handling of special tokens for the loss computation in SFTTrainer? Can I change this?
```
from transformers import Trainer
from trl import SFTTrainer
trainer = SFTTrainer(
peft_config=config,
dataset_text_field="text",
max_seq_length=context_length,
tokenizer=tokenizer,
model=model,
train_dataset=data["train"],
eval_dataset=data["test"],
args=transformers.TrainingArguments(
max_steps=60, # comment this out after the first time you run. This is for testing!
num_train_epochs=epochs,
output_dir=save_dir,
evaluation_strategy="steps",
do_eval=True,
per_device_train_batch_size=batch_size,
gradient_accumulation_steps=4,
per_device_eval_batch_size=batch_size,
log_level="debug",
optim="paged_adamw_8bit",
save_steps=0.2,
logging_steps=1,
learning_rate=1e-4,
eval_steps=0.2,
fp16=True,
max_grad_norm=0.3,
warmup_ratio=0.03,
lr_scheduler_type="linear",
),
callbacks=[logging_callback], # Add custom callback here
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
```
Note that in my dataset I have included EOS tokens where appropriate
### Expected behavior
The output of my fine-tuning is not emitting EOS tokens, which leads me to believe that the loss mask is zero for special tokens with SFTTrainer, but I'm unsure if that's true.
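For reference, a small diagnostic sketch (assuming `trainer` and `tokenizer` are the objects from the snippet above and the default collator is in use) to check whether EOS positions end up masked to -100 in the labels. Note that `DataCollatorForLanguageModeling` masks pad-token positions, so if `pad_token == eos_token` the EOS id is excluded from the loss.
```python
# Collate two processed samples exactly the way training would.
batch = trainer.data_collator([trainer.train_dataset[0], trainer.train_dataset[1]])
labels, input_ids = batch["labels"], batch["input_ids"]
eos_id = tokenizer.eos_token_id

eos_positions = input_ids == eos_id          # where EOS tokens sit in the batch
masked_eos = labels[eos_positions] == -100   # which of them are ignored by the loss
print(f"EOS tokens in batch: {int(eos_positions.sum())}, "
      f"masked out of loss: {int(masked_eos.sum())}")
```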
|
https://github.com/huggingface/trl/issues/837
|
closed
|
[] | 2023-10-05T13:49:52Z
| 2023-11-13T18:23:54Z
| null |
RonanKMcGovern
|
huggingface/chat-ui
| 476
|
Chat-ui failing on Edge, Chrome and Safari.
|
It seems to be working on Firefox for mac and Safari for iOS.
Stacktrace in console from Chrome:
```
Failed to load resource: the server responded with a status of 404 ()
UrlDependency.4e6706f5.js:1 Failed to load resource: the server responded with a status of 404 ()
stores.6bc4a41f.js:1 Failed to load resource: the server responded with a status of 404 ()
chat.danskgpt.dk/:1 Uncaught (in promise) TypeError: Failed to fetch dynamically imported module: https://chat.danskgpt.dk/_app/immutable/entry/start.59a3223b.js
_layout.svelte.e4398851.js:1 Failed to load resource: the server responded with a status of 404 ()
_page.svelte.e0b7a273.js:1 Failed to load resource: the server responded with a status of 404 ()
LoginModal.fe5c7c4d.js:1 Failed to load resource: the server responded with a status of 404 ()
app.1a92c8bc.js:1 Failed to load resource: the server responded with a status of 404 ()
www.danskgpt.dk/chatui/favicon.png:1 Failed to load resource: the server responded with a status of 404 ()
_error.svelte.00b004c8.js:1 Failed to load resource: the server responded with a status of 404 ()
www.danskgpt.dk/chatui/favicon.svg:1 Failed to load resource: the server responded with a status of 404 ()
```
It's hosted at [here](https://chat.danskgpt.dk).
|
https://github.com/huggingface/chat-ui/issues/476
|
closed
|
[
"support"
] | 2023-10-05T13:03:01Z
| 2023-10-05T13:56:49Z
| 4
|
mhenrichsen
|
huggingface/dataset-viewer
| 1,929
|
Add a "feature" or "column" level for better granularity
|
For example, if we support statistics for a new type of columns, or if we change the way we compute some stats, I think that we don't want to recompute the stats for all the columns, just for one of them.
It's a guess, because maybe it's more efficient to have one job that downloads the data and computes every possible stat than to have N jobs that download the same data and compute only one stat each. To be evaluated.
|
https://github.com/huggingface/dataset-viewer/issues/1929
|
closed
|
[
"question",
"refactoring / architecture",
"P2"
] | 2023-10-05T08:24:50Z
| 2024-02-22T21:24:09Z
| null |
severo
|
huggingface/huggingface.js
| 251
|
How to get SpaceRuntime information?
|
Inside the hub library, I can see that there's `SpaceRuntime`, which specifies the hardware requirements. `SpaceRuntime` is defined inside `ApiSpaceInfo`.
But it seems that it's not being emitted.
```
const items: ApiSpaceInfo[] = await res.json();
for (const item of items) {
yield {
id: item._id,
name: item.id,
sdk: item.sdk,
likes: item.likes,
private: item.private,
updatedAt: new Date(item.lastModified),
};
}
```
So, is there any way I can grab that information?
|
https://github.com/huggingface/huggingface.js/issues/251
|
closed
|
[] | 2023-10-04T18:23:42Z
| 2023-10-05T08:26:07Z
| null |
namchuai
|
huggingface/chat-ui
| 471
|
Custom chatbot which includes sources such as PDFs, databases and a specific website only.
|
I have a chatbot which can query PDFs, a database, and a particular website in Python. How do I include, for example, the quantized models, RAG sources and the retrieval logic in this chat UI?
|
https://github.com/huggingface/chat-ui/issues/471
|
closed
|
[] | 2023-10-04T04:36:23Z
| 2024-07-08T16:22:02Z
| 2
|
pranavbhat12
|
huggingface/huggingface.js
| 250
|
How to apply pagination for listModels?
|
Thanks for the library!
Could you please help me with how I can apply pagination for the `listModels` API from @huggingface/hub?
I don't know how to specify the offset.
|
https://github.com/huggingface/huggingface.js/issues/250
|
closed
|
[] | 2023-10-03T12:39:17Z
| 2023-10-04T01:27:01Z
| null |
namchuai
|
huggingface/transformers.js
| 341
|
[Question] Custom stopping criteria for text generation models
|
Is it possible to pass a custom `stopping_criteria` to `generate()` method? Is there a way to interrupt generation mid-flight?
|
https://github.com/huggingface/transformers.js/issues/341
|
closed
|
[
"question"
] | 2023-10-02T10:35:33Z
| 2025-10-11T10:12:10Z
| null |
krassowski
|
pytorch/TensorRT
| 2,356
|
❓ [Question] How do you find the exact line of python code that triggers a backend compiler error?
|
I was trying to compile the huggingface Llama 2 model using the following code:
```python
import os
import torch
import torch_tensorrt
import torch.backends.cudnn as cudnn
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch._dynamo as dynamo
from optimum.onnxruntime import ORTModelForCausalLM
base_model = 'llama-2-7b'
comp_method = 'magnitude_unstructured'
comp_degree = 0.2
model_path = f'vita-group/{base_model}_{comp_method}'
model = AutoModelForCausalLM.from_pretrained(
model_path,
revision=f's{comp_degree}',
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto")
model.save_pretrained("model_ckpt/")
model.eval()
# setting
# torch._dynamo.config.suppress_errors = True
enabled_precisions = {torch.float, torch.int, torch.long}
debug = False
workspace_size = 20 << 30
min_block_size = 7
torch_executed_ops = {}
compilation_kwargs = {
"enabled_precisions": enabled_precisions,
"debug": debug,
"workspace_size": workspace_size,
"min_block_size": min_block_size,
"torch_executed_ops": torch_executed_ops,
}
with torch.no_grad():
optimized_model = torch.compile(
model.generate,
backend="torch_tensorrt",
dynamic=True,
options=compilation_kwargs,
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.cuda()
#outputs = model.generate(input_ids, max_new_tokens=128)
outputs = optimized_model(input_ids, max_new_tokens=128)
```
And here is the complete log:
```text
INFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)
INFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)
WARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7
INFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)
INFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)
WARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7
INFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)
INFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)
WARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7
INFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)
INFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False)
WARNING:torch_tensorrt.dynamo.compile:0 supported operations detected in subgraph containing 0 computational nodes. Skipping this subgraph, since min_block_size was detected to be 7
INFO:torch_tensorrt.dynamo.utils:Using Default Torch-TRT Runtime (as requested by user)
INFO:torch_tensorrt.dynamo.utils:Compilation Settings: CompilationSettings(precision=torch.float32, debug=False, workspace_size=21474836480, min_block_size=7, torch_executed_ops={}, pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_long_and_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False
|
https://github.com/pytorch/TensorRT/issues/2356
|
open
|
[
"question",
"No Activity"
] | 2023-10-02T01:15:22Z
| 2024-01-02T00:02:08Z
| null |
BDHU
|
huggingface/datasets
| 6,273
|
Broken Link to PubMed Abstracts dataset .
|
### Describe the bug
The link provided for the dataset is broken,
data_files =
[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)
The
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url)
2) In the Section "What is the Pile?", you can see a code snippet that contains the broken link.
### Expected behavior
The link should Redirect to the "PubMed Abstracts dataset" as expected .
### Environment info
.
|
https://github.com/huggingface/datasets/issues/6273
|
open
|
[] | 2023-10-01T19:08:48Z
| 2024-04-28T02:30:42Z
| 5
|
sameemqureshi
|
huggingface/chat-ui
| 466
|
Deploy with Langchain Agent
|
I have built a Langchain agent which interacts with a Vicuna model hosted with TGI, and the web UI is currently hosted with Gradio on Spaces. I'd like the UI to be more polished (like huggingchat/chatgpt) with persistence. I couldn't find any docs related to how to use a Langchain agent with chat-ui. If anyone could shed some light on this or point me towards the relevant resources, I'd appreciate it.
Thank you for your help.
|
https://github.com/huggingface/chat-ui/issues/466
|
closed
|
[] | 2023-09-30T21:29:38Z
| 2023-10-03T09:14:48Z
| 1
|
Tejaswgupta
|
huggingface/accelerate
| 2,018
|
A demo of how to perform multi-GPU parallel inference for transformer LLM is needed
|
In the current demo, "[Distributed inference using Accelerate](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference )", it is still not clear how to perform multi-GPU parallel inference for a transformer LLM. This gap in the demo has hindered not just me, but also many other people, in adopting your solution: https://www.reddit.com/r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/
Also, in the replies, other frameworks have already started competing for this specific use case.
Could you provide a demo for this use case?
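For reference, a minimal sketch of naive data-parallel inference with one model replica per GPU (assumes a recent accelerate that provides `split_between_processes`, launched via `accelerate launch`; the model id is only an example).
```python
from accelerate import PartialState
from transformers import AutoModelForCausalLM, AutoTokenizer

state = PartialState()  # one process per GPU when launched with `accelerate launch`
model_id = "facebook/opt-1.3b"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(state.device)

prompts = ["Hello, my name is", "The capital of France is", "Deep learning is"]
with state.split_between_processes(prompts) as my_prompts:
    # each rank only sees its own slice of the prompts
    for p in my_prompts:
        inputs = tokenizer(p, return_tensors="pt").to(state.device)
        out = model.generate(**inputs, max_new_tokens=20)
        print(f"[rank {state.process_index}] "
              f"{tokenizer.decode(out[0], skip_special_tokens=True)}")
```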
|
https://github.com/huggingface/accelerate/issues/2018
|
closed
|
[] | 2023-09-30T14:10:30Z
| 2025-02-10T00:27:24Z
| null |
KexinFeng
|
huggingface/candle
| 1,006
|
Question: How to use quantized tensors?
|
Hello everybody,
I was looking through Candle's quantized tensor code when I noticed that there is only a matmul_t implemented for QuantizedType, and no other operations. Perhaps other operations could be added?
In addition, is there an example of using quantized tensors/converting them from normal tensors?
Thanks!
|
https://github.com/huggingface/candle/issues/1006
|
closed
|
[] | 2023-09-30T13:35:16Z
| 2024-08-17T15:20:58Z
| null |
EricLBuehler
|
huggingface/transformers.js
| 340
|
question
|
Hi @xenova, is there still any position open as a JS/TS backend developer? Next week, 06 Oct, I will be free after finishing the senlife project I am working on for a UK client. This is the app that I built the backend for:
https://play.google.com/store/apps/details?id=com.senlife.app&hl=en&gl=US
|
https://github.com/huggingface/transformers.js/issues/340
|
closed
|
[
"question"
] | 2023-09-30T11:35:23Z
| 2023-10-02T10:01:20Z
| null |
jedLahrim
|
huggingface/chat-ui
| 465
|
Where to deploy other than HF?
|
Hey,
I've been trying to deploy the chat-ui somewhere I can use a custom domain (such as vercel and azure).
Each of them comes with different problems that I have yet to solve.
Vercel issues described [here](https://github.com/huggingface/chat-ui/issues/212).
It does not seem like I can deploy this as an Azure SWA, as it fails when using the azure-swa-adapter for sveltekit with the following error.
```
Using adapter-azure-swa
✘ [ERROR] Top-level await is currently not supported with the "cjs" output format
.svelte-kit/output/server/chunks/models.js:94:15:
94 │ const models = await Promise.all(
╵ ~~~~~
✘ [ERROR] Top-level await is currently not supported with the "cjs" output format
.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:199:18:
199 │ const extractor = await pipeline("feature-extraction", modelId);
╵ ~~~~~
▲ [WARNING] "./xhr-sync-worker.js" should be marked as external for use with "require.resolve" [require-resolve-not-external]
node_modules/jsdom/lib/jsdom/living/xhr/XMLHttpRequest-impl.js:31:57:
31 │ ... require.resolve ? require.resolve("./xhr-sync-worker.js") : null;
╵ ~~~~~~~~~~~~~~~~~~~~~~
error during build:
Error: Build failed with 2 errors:
.svelte-kit/output/server/chunks/models.js:94:15: ERROR: Top-level await is currently not supported with the "cjs" output format
.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:199:18: ERROR: Top-level await is currently not supported with the "cjs" output format
at failureErrorWithLog (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1575:15)
at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1033:28
at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:978:67
at buildResponseToResult (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1031:7)
at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1143:14
at responseCallbacks.<computed> (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:680:9)
at handleIncomingPacket (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:735:9)
at Socket.readFromStdout (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:656:7)
at Socket.emit (node:events:514:28)
at addChunk (node:internal/streams/readable:324:12)
---End of Oryx build logs---
Oryx has failed to build the solution.
```
Any suggestions on how I can otherwise deploy this?
|
https://github.com/huggingface/chat-ui/issues/465
|
closed
|
[] | 2023-09-29T13:58:42Z
| 2023-12-07T19:10:00Z
| 2
|
mhenrichsen
|
huggingface/dataset-viewer
| 1,892
|
Use swap to avoid OOM?
|
The pods don't have swap. Is it possible to have swap to avoid OOM, even at the expense of longer processing time in workers?
|
https://github.com/huggingface/dataset-viewer/issues/1892
|
closed
|
[
"question",
"infra",
"P2"
] | 2023-09-29T13:48:54Z
| 2024-06-19T14:23:36Z
| null |
severo
|
huggingface/transformers.js
| 337
|
[Question] How do I specify a non-huggingface URL (that doesn't start with `/models/`) in `AutoTokenizer.from_pretrained`?
|
My tokenizer files are hosted within this folder:
```
https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/
```
First I load the lib:
```js
let { AutoTokenizer } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.1');
```
Then I tried what I thought would be the most obvious/intuitive API:
```js
await AutoTokenizer.from_pretrained("/public/models/TheBloke/Llama-2-13B-GPTQ")
// requests: https://example.com/models/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json
```
This is strongly counter-intuitive to me. If I add a `/` at the start of the URL, it shouldn't add anything before that. A path that starts with `/` on the web always means "append this to the origin".
So I read the docs, and it seems to suggest that you need to put at `.` on the end:
```js
await AutoTokenizer.from_pretrained("/public/models/TheBloke/Llama-2-13B-GPTQ/.")
// requests: https://example.com/models/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json
```
Nope. So the next obvious step was to just give it an absolute URL and be done with it:
```js
await AutoTokenizer.from_pretrained("https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ")
// requests: 'https://huggingface.co/https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/resolve/main/tokenizer_config.json
```
Oof.
So I'm a bit confused here 😵💫
Going to keep trying, but I've spent 20 minutes on this so far, so posting here so you can improve the DX around this, even if I do manage to solve it myself soon.
|
https://github.com/huggingface/transformers.js/issues/337
|
closed
|
[
"question"
] | 2023-09-28T21:00:41Z
| 2023-09-28T22:03:05Z
| null |
josephrocca
|
pytorch/TensorRT
| 2,352
|
❓ [Question] How do you build Torch-TensorRT from origin/main with dependence on tensorrt 8.5.2 from Jetpack5.1?
|
## ❓ Question
When compiling the latest version of Torch-TensorRT from `origin/main` (`2.2.0.dev0+76de80d0`) on Jetpack5.1 using the latest locally compiled PyTorch (`2.2.0a0+a683bc5`) (so that I can use the latest v2 transforms in TorchVision (`0.17.0a0+4cb3d80`)), the resulting python package has a dependence on `tensorrt` version `8.6.1`, but Jetpack5.1 only supports version `8.5.2.2-1+cuda11.4` and is thus not installable.
Is it possible to compile the latest Torch-TensorRT with dependence on the installed version of `tensorrt`?
## Environment
<details>
<summary>
Environment details
</summary>
```
br@nx:~/github/torch$ python /tmp/collect_env.py
Collecting environment information...
PyTorch version: 2.2.0a0+a683bc5
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.104-tegra-aarch64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-3
Off-line CPU(s) list: 4,5
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 2
Vendor ID: Nvidia
Model: 0
Model name: ARMv8 Processor rev 0 (v8l)
Stepping: 0x0
CPU max MHz: 1907,2000
CPU min MHz: 115,2000
BogoMIPS: 62.50
L1d cache: 256 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 4 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Branch predictor hardening
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm dcpop
Versions of relevant libraries:
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] numpy-quaternion==2022.4.3
[pip3] pytorch-ranger==0.1.1
[pip3] tensorrt==8.5.2.2
[pip3] torch==2.2.0a0+a683bc5
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==0.11.3
[pip3] torchvision==0.17.0a0+4cb3d80
[conda] Could not collect
```
Torch and TorchVision are built with
```bash
export BUILD_TEST=OFF
export USE_FBGEMM=OFF # Fails to build
export USE_NCCL=OFF # Fails to build
export USE_KINETO=OFF # Fails to build
export BUILD_SPLIT_CUDA=ON # Required so that Torch-TensorRT finds the libraries it needs.
export _GLIBCXX_USE_CXX11_ABI=1 # Use the new C++ ABI
```
```bash
cd ~/github/torch/pytorch
python3 -m build -n
pip install dist/torch-<version>.whl
```
```bash
cd ~/github/torch/vision
python3 setup.py bdist_wheel # Doesn't support the newer build module.
pip install dist/torchvision-<version>.whl
mkdir -p build; cd build
Torch_DIR=~/github/torch/pytorch/torch/share/cmake/Torch cmake -DCMAKE_BUILD_TYPE=Release -Wno-dev -DWITH_CUDA=on -GNinja -DCMAKE_INSTALL_PREFIX=~/.local ..
ninja install
```
</details>
[WORKSPACE](https://github.com/pytorch/TensorRT/files/12749136/WORKSPACE.txt) file used to build Torch-TensorRT on Jetpack5.1. Built with
```bash
cd ~/github/torch/Torch-TensorRT
bazel build //:libtorchtrt -c opt
sudo tar -xvzf bazel-bin/libtorchtrt.tar.gz -C /usr/local/
python3 setup.py bdist_wheel --use-cxx11-abi # Doesn't support the newer build module.
pip install dist/torch_tensorrt-<version>.whl # <-- fails to install due to tensorrt==8.6 dependency
```
|
https://github.com/pytorch/TensorRT/issues/2352
|
open
|
[
"question",
"No Activity"
] | 2023-09-28T20:25:41Z
| 2024-01-01T00:02:42Z
| null |
BrettRyland
|
huggingface/transformers.js
| 334
|
[Question] failed to call OrtRun(). error code = 1. When I try to load Xenova/pygmalion-350m
|
I'm getting an error `failed to call OrtRun(). error code = 1.` When I try to load Xenova/pygmalion-350m. The error is as follows
```
wasm-core-impl.ts:392 Uncaught Error: failed to call OrtRun(). error code = 1.
at e.run (wasm-core-impl.ts:392:19)
at e.run (proxy-wrapper.ts:215:17)
at e.OnnxruntimeWebAssemblySessionHandler.run (session-handler.ts:100:15)
at InferenceSession.run (inference-session-impl.ts:108:40)
at sessionRun (models.js:191:36)
at async Function.decoderForward [as _forward] (models.js:478:26)
at async Function.forward (models.js:743:16)
at async Function.decoderRunBeam [as _runBeam] (models.js:564:18)
at async Function.runBeam (models.js:1284:16)
at async Function.generate (models.js:1009:30)
```
And my Code for running it is this
```
let text = 'Once upon a time, there was';
let generator = await pipeline('text-generation', 'Xenova/pygmalion-350m');
let output = await generator(text, {
temperature: 2,
max_new_tokens: 10,
repetition_penalty: 1.5,
no_repeat_ngram_size: 2,
num_beams: 2,
num_return_sequences: 2,
});
console.log(output);
```
I see that `OrtRun` is something returned by the ONNX Runtime on a failure, but have you had success running the Pygmalion-350m model?
|
https://github.com/huggingface/transformers.js/issues/334
|
open
|
[
"question"
] | 2023-09-28T01:34:36Z
| 2023-12-16T17:14:12Z
| null |
sebinthomas
|
huggingface/datasets
| 6,267
|
Multi label class encoding
|
### Feature request
I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels.
Here's an example of what I'd like to encode:
```
data = {
'text': ['one', 'two', 'three', 'four'],
'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]
}
dataset = Dataset.from_dict(data)
dataset = dataset.class_encode_column('labels')
```
I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base) and from what I noticed the `ClassLabel` feature is still stored as an underlying raw data type of int so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from arrow.
I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented out tests, going for speed of POC here and didn't want to fight IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset correct and the dataset feature is as expected.
After digging more I did notice a few issues
- After loading from disk I noticed type of the `labels` class is `Sequence` not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel` but I couldn't find the encode / decode code paths that handle this.
- I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this does miss the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets. I suspect my simple implementation will need some improvement as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.
### Motivation
See above - would like to support multi label class encodings.
### Your contribution
This would be a big help for us and we're open to contributing but I'll likely need some guidance on how to implement to fit the encode / decode flow. Some suggestions on tests / would be great too, I'm guessing in addition to the class encode tests (that I'll need to expand) we'll need encode / decode tests.
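For reference, a sketch of what already works today as an interim workaround (not the proposed `MultiLabel` feature): casting the column to a `Sequence` of `ClassLabel`.
```python
from datasets import Dataset, Sequence, ClassLabel

data = {
    "text": ["one", "two", "three", "four"],
    "labels": [["a", "b"], ["b"], ["b", "c"], ["a", "d"]],
}
dataset = Dataset.from_dict(data)

# collect the label vocabulary, then cast the string lists to class-label ints
names = sorted({label for labels in data["labels"] for label in labels})
dataset = dataset.cast_column("labels", Sequence(ClassLabel(names=names)))

print(dataset.features["labels"].feature.names)  # ['a', 'b', 'c', 'd']
print(dataset[0]["labels"])                      # [0, 1]
```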
|
https://github.com/huggingface/datasets/issues/6267
|
open
|
[
"enhancement"
] | 2023-09-27T22:48:08Z
| 2023-10-26T18:46:08Z
| 7
|
jmif
|
huggingface/huggingface_hub
| 1,698
|
How to change cache dir?
|
### Describe the bug
By default, all downloaded models are stored in
> cache_path = '/root/.cache/huggingface/hub'
Is there a way to change this dir to something else?
I tried to set "HUGGINGFACE_HUB_CACHE"
```
import os
os.environ['HUGGINGFACE_HUB_CACHE'] = '/my_workspace/models_cache'
```
but it doesn't work.
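For reference, a sketch of two common ways to redirect the cache (assuming the environment variables are set before `huggingface_hub`/`transformers` are imported, since they are read at import time).
```python
import os
os.environ["HF_HOME"] = "/my_workspace/hf_home"                      # moves the whole HF cache
os.environ["HUGGINGFACE_HUB_CACHE"] = "/my_workspace/models_cache"   # hub cache only

from huggingface_hub import hf_hub_download

# or bypass the env vars entirely and pass cache_dir explicitly per call
path = hf_hub_download(
    repo_id="gpt2",
    filename="config.json",
    cache_dir="/my_workspace/models_cache",
)
print(path)
```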
### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.17.2
- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: adhikjoshi
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.2.0.dev20230922+cu118
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.0.1
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: 2.3.0
- aiohttp: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
|
https://github.com/huggingface/huggingface_hub/issues/1698
|
closed
|
[
"bug"
] | 2023-09-27T07:45:30Z
| 2023-09-27T09:08:34Z
| null |
adhikjoshi
|
huggingface/accelerate
| 2,010
|
How to set different seed for DDP data sampler for every epoch
|
Hello there!
I am using the following code to build my data loader.
```python
data_loader_train = DataLoader(
dataset_train,
collate_fn=collate_fn,
batch_size=cfg.data.train_batch_size,
num_workers=cfg.data.num_workers,
pin_memory=cfg.data.pin_memory,
)
data_loader_train = accelerator.prepare(data_loader_train)
```
I am using DDP for training and I want to set a different data sampling seed for every epoch, so that different epochs use different batch orders. How can I do that?
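For reference, a sketch of the classic PyTorch pattern with an explicit `DistributedSampler` reshuffled via `set_epoch` (an alternative approach, under the assumption that you then manage the sampler yourself rather than letting `accelerator.prepare` inject one; `dataset_train`, `collate_fn` and `cfg` are the objects from the snippet above and `num_epochs` is a placeholder).
```python
from torch.utils.data import DataLoader, DistributedSampler

sampler = DistributedSampler(dataset_train, shuffle=True, seed=42)
data_loader_train = DataLoader(
    dataset_train,
    sampler=sampler,
    collate_fn=collate_fn,
    batch_size=cfg.data.train_batch_size,
    num_workers=cfg.data.num_workers,
    pin_memory=cfg.data.pin_memory,
)

num_epochs = 10  # placeholder
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # re-seeds the shuffle so each epoch sees a new order
    for batch in data_loader_train:
        ...
```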
|
https://github.com/huggingface/accelerate/issues/2010
|
closed
|
[] | 2023-09-27T02:46:10Z
| 2023-09-27T11:32:22Z
| null |
Mountchicken
|
huggingface/transformers
| 26,412
|
How to run Trainer + DeepSpeed + Zero3 + PEFT
|
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@ArthurZucker and @younesbelkada and @pacman100 and @muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[This script](https://gist.github.com/BramVanroy/f2abb3940111b73ae8923822ef6096dd) is a modification of the official run_clm script. The only additions are the BNB config and PEFT. Yet, I cannot get it to work with a [deepspeed zero3 config](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_falcon_180b_z3.json).
Requirements to install:
```
accelerate >= 0.12.0
torch >= 1.3
datasets >= 1.8.0
sentencepiece != 0.1.92
protobuf
evaluate
scikit-learn
trl
peft
bitsandbytes
```
In the past I have had issues with low_cpu_mem_usage, but neither a true nor a false value seems to get this to work:
Command 1:
```sh
deepspeed --include="localhost:0,1" run_clm.py \
--model_name_or_path facebook/opt-125m\
--dataset_name wikitext\
--dataset_config_name wikitext-2-raw-v1\
--per_device_train_batch_size 2\
--per_device_eval_batch_size 2\
--do_train\
--do_eval\
--output_dir /tmp/test-clm\
--deepspeed deepspeed_configs/ds_config_zero3.json\
--low_cpu_mem_usage true
```
==> `ValueError: DeepSpeed Zero-3 is not compatible with `low_cpu_mem_usage=True` or with passing a `device_map`.`
Command 2:
```sh
deepspeed --include="localhost:0,1" run_clm.py \
--model_name_or_path facebook/opt-125m\
--dataset_name wikitext\
--dataset_config_name wikitext-2-raw-v1\
--per_device_train_batch_size 2\
--per_device_eval_batch_size 2\
--do_train\
--do_eval\
--output_dir /tmp/test-clm\
--deepspeed deepspeed_configs/ds_config_zero3.json\
--low_cpu_mem_usage false
```
==> `ValueError: weight is on the meta device, we need a `value` to put in on 0.`
### Expected behavior
Any option to make this combination of Trainer + DeepSpeed + Zero3 + PEFT work.
|
https://github.com/huggingface/transformers/issues/26412
|
open
|
[
"WIP"
] | 2023-09-26T10:31:46Z
| 2024-01-11T15:40:02Z
| null |
BramVanroy
|
pytorch/data
| 1,201
|
Loading `.tfrecords` files that require a deserialization method
|
### 🐛 Describe the bug
Hi,
I have a dataset in TFRecords format and am trying to move to TorchData's API for loading tfrecords files.
This is the minimal example:
```python3
datapipe1 = IterableWrapper(['path/to/my/tfrecords/file.tfrecords'])
datapipe2 = FileOpener(datapipe1, mode="b")
tfrecord_loader_dp = datapipe2.load_from_tfrecord()
for d in tfrecord_loader_dp:
pass
```
It fails, as the datapipe does not know how to properly deserialize the tfrecord file.
```
File ~/.conda/envs/bend/lib/python3.10/site-packages/torchdata/datapipes/iter/util/tfrecordloader.py:245, in TFRecordLoaderIterDataPipe.__iter__(self)
243 pathname, data_stream = data
244 try:
--> 245 for example_bytes in iterate_tfrecord_file(data_stream):
246 example = example_pb2.SequenceExample() # type: ignore
247 example.ParseFromString(example_bytes) # type: ignore
File ~/.conda/envs/bend/lib/python3.10/site-packages/torchdata/datapipes/iter/util/tfrecordloader.py:83, in iterate_tfrecord_file(data)
81 (length,) = struct.unpack("<Q", length_bytes)
82 if length > len(data_bytes):
---> 83 data_bytes = data_bytes.zfill(int(length * 1.5))
84 data_bytes_view = memoryview(data_bytes)[:length]
85 if data.readinto(data_bytes_view) != length:
OverflowError: Python int too large to convert to C ssize_t
This exception is thrown by __iter__ of TFRecordLoaderIterDataPipe(datapipe=FileOpenerIterDataPipe, length=-1, spec=None)
```
In the legacy tensorflow codebase, I would have to specify a function to deserialize the tfrecord, by doing
```python3
import json

import tensorflow as tf
import tensorflow_datasets as tfds

# json_file and compression_type are defined elsewhere in my code
dataset = tf.data.Dataset.from_tensor_slices(['path/to/my/tfrecords/file.tfrecords'])
dataset = dataset.interleave(lambda fp: tf.data.TFRecordDataset(fp, compression_type=compression_type), cycle_length=1, block_length=1, num_parallel_calls=tf.data.AUTOTUNE)
features = tfds.features.FeaturesDict.from_json(json.load(json_file))  # this file contains info about the .tfrecords file i'm trying to load
dataset = dataset.map(features.deserialize_example, num_parallel_calls=tf.data.AUTOTUNE)
iterator = dataset.as_numpy_iterator()
for d in iterator:
    pass  # this works, returning a dict of tf tensors
```
The problem is basically that I have to deserialize the tfrecord, but I can't apply anything to the `TFRecordLoaderIterDataPipe` before it fails.
Is there a workaround? I tried just wrapping the tensorflow dataset object in an `IterableWrapper`, but the tensorflow dataset can't be pickled, so it fails in `DataLoader2`.
Thanks!
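(Not from the original issue, but one possible workaround sketch — the class name is hypothetical. The idea is to build the tf.data pipeline lazily inside `__iter__`, so the unpicklable TF dataset object never crosses process boundaries and TorchData's own TFRecord parser is bypassed entirely.)
```python
import json

import tensorflow as tf
import tensorflow_datasets as tfds
from torchdata.datapipes.iter import IterDataPipe


class LazyTFRecordDataPipe(IterDataPipe):
    """Hypothetical helper: only plain paths are stored, so the datapipe itself
    stays picklable; the tf.data objects are constructed inside __iter__."""

    def __init__(self, tfrecord_paths, features_json_path, compression_type=None):
        self.tfrecord_paths = list(tfrecord_paths)
        self.features_json_path = features_json_path
        self.compression_type = compression_type

    def __iter__(self):
        # Build the TF pipeline lazily, per worker.
        with open(self.features_json_path) as f:
            features = tfds.features.FeaturesDict.from_json(json.load(f))
        ds = tf.data.TFRecordDataset(self.tfrecord_paths, compression_type=self.compression_type)
        ds = ds.map(features.deserialize_example, num_parallel_calls=tf.data.AUTOTUNE)
        yield from ds.as_numpy_iterator()  # yields dicts of numpy arrays


# Usage (paths are placeholders):
# dp = LazyTFRecordDataPipe(['path/to/my/tfrecords/file.tfrecords'], 'path/to/features.json')
# for d in dp:
#     ...
```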
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1027-aws-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.994
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 8 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/s
|
https://github.com/meta-pytorch/data/issues/1201
|
open
|
[] | 2023-09-26T09:17:39Z
| 2024-10-21T16:25:37Z
| 1
|
fteufel
|
pytorch/TensorRT
| 2,348
|
❓ [Question] How do you build and use PytorchTRT on Windows 10?
|
## ❓ Question
After even trying MSVC instead of Ninja, I was kind of able to generate some DLL files: torchtrt.dll, torch_plugins.dll, torchtrt_runtimes.dll, and torchtrtc.exe.
Now what do I do with these? I just assumed I should put them in the lib folder "C:\Users\{Username}\AppData\Local\Programs\Python\Python310\Lib\site-packages\torch\lib", but this does not work.
## What you have already tried
I have read everything and tried literally everything, and the build process is literally broken.
https://pytorch.org/TensorRT/getting_started/getting_started_with_windows.html
and then tried a few different things and somehow was able to do this.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (torch-2.0.1+cu118.dist-info):
- CPU Architecture: Intel x86 10500H
- OS (e.g., Linux): Windows
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): Pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.10
- CUDA version:11.8
- GPU models and configuration: RTX 3060 Laptop
- Any other relevant information:
## Additional context
Building and then actually using Torch-TensorRT on Windows is not easy; the process seems very problematic and outdated.
And even if you manage to get it to build somehow, you do not know what to expect. Are those 4 DLL files enough, or did I miss something?
What do I do with these DLL files?
Is there a simple "Hello World"-style Python example for Windows that runs on Torch-TensorRT?
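(Not from the original question, but for reference: a minimal "Hello World"-style sketch, assuming the `torch_tensorrt` Python package is installed and importable — raw DLLs alone are not enough for the Python API. The toy model and input shapes are illustrative.)
```python
import torch
import torch_tensorrt  # requires the torch_tensorrt Python package

# A tiny scriptable model as a stand-in for a real network.
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()).eval().cuda()
example_input = torch.randn(1, 8).cuda()

# Compile to a TensorRT-backed module and run it.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[example_input],
    enabled_precisions={torch.float},
)
print(trt_model(example_input))
```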
|
https://github.com/pytorch/TensorRT/issues/2348
|
closed
|
[
"question"
] | 2023-09-26T04:48:04Z
| 2023-09-29T03:15:04Z
| null |
jensdraht1999
|
pytorch/audio
| 3,619
|
torchaudio/compliance/kaldi.py FBank _get_window function can not support multiprocessing?
|
### 🐛 Describe the bug
I use torchaudio 0.13.0+cu117 to compute Fbank features. It works fine in a single thread, but I want to use multiprocessing, like this:
```python
import multiprocessing

p = multiprocessing.Pool(1)
xx = p.apply_async(audio_functiong, args=(audio_in,))  # audio_functiong computes the Fbank features
p.close()
p.join()
emb = xx.get()
```
The code hangs and returns nothing. Debugging shows that the `_get_window` function in `kaldi.py` never runs, so please help fix it. Thanks!
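(Not from the original report, but a hedged sketch of a common workaround: use the "spawn" start method instead of "fork", which often avoids hangs caused by threading state inherited by the child process. The audio path and Fbank parameters are placeholders.)
```python
import multiprocessing as mp

import torch
import torchaudio
import torchaudio.compliance.kaldi as kaldi


def compute_fbank(waveform):
    torch.set_num_threads(1)  # keep the worker single-threaded
    # Parameters are illustrative; sample_frequency must match the input file.
    return kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=16000)


if __name__ == "__main__":
    waveform, sr = torchaudio.load("audio.wav")  # placeholder path
    ctx = mp.get_context("spawn")  # avoid fork-inherited threading state
    with ctx.Pool(1) as pool:
        emb = pool.apply_async(compute_fbank, args=(waveform,)).get()
    print(emb.shape)
```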
### Versions
python 3.8
|
https://github.com/pytorch/audio/issues/3619
|
closed
|
[] | 2023-09-26T02:06:19Z
| 2023-10-09T05:39:47Z
| 1
|
haha010508
|