repo
stringclasses 147
values | number
int64 1
172k
| title
stringlengths 2
476
| body
stringlengths 0
5k
| url
stringlengths 39
70
| state
stringclasses 2
values | labels
listlengths 0
9
| created_at
timestamp[ns, tz=UTC]date 2017-01-18 18:50:08
2026-01-06 07:33:18
| updated_at
timestamp[ns, tz=UTC]date 2017-01-18 19:20:07
2026-01-06 08:03:39
| comments
int64 0
58
⌀ | user
stringlengths 2
28
|
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/serve
| 3,296
|
integrating the Torch Serve hosted model with a third party application
|
I have an application that takes an image converts that image into base64 to create a input request for API call.
The input schema structure created by my application looks something like this,
{
"instances":
[
{
"base64": "base64 string of image",
"mode_type": "some value"
"metadata": "some metadata like timestamp"
}
]
}
Now, I have to use this application to call a torch serve hosted application. From going through the Torch Serve documents I understood that the torch serve hosted API would accept an input in the below structure,
{
"instances":
[
{
"data": [input_data]
}
]
}
where the **input_data** is the data that is directly accepted by the model. For understanding purpose lets say it is Numpy array.
Here is my question:
If I wanted to use my application to call a Torch Serve API, How easy or difficult it would be? Having in account that similar discrepancy is there in the output structure which might require some pre or post processing of the base64 into appropriate format.
How can I integrate my application with Torch Serve API seamlessly?
|
https://github.com/pytorch/serve/issues/3296
|
open
|
[] | 2024-08-22T06:18:20Z
| 2024-08-22T16:27:35Z
| 1
|
tarunsk1998
|
huggingface/sentence-transformers
| 2,900
|
how to keep `encode_multi_process` output on the GPU
|
I saw this [example](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic-search/semantic_search.py) where we can do the following:
`query_embedding = embedder.encode(query, convert_to_tensor=True)`
`hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)`
I read that setting `convert_to_tensor=True` keeps the embedding vectors on the GPU to optimize the similarity calculations. But if I work with multiple CPUs and GPUs, can I do the same? I didn't see a `convert_to_tensor` argument for `encode_multi_process`.
|
https://github.com/huggingface/sentence-transformers/issues/2900
|
open
|
[] | 2024-08-21T21:05:35Z
| 2024-08-21T21:07:39Z
| null |
anshuchen
|
pytorch/TensorRT
| 3,109
|
❓ [Question] how to specify dynamic shape when using torch_tensorrt.save
|
## ❓ Question
<!-- Your question -->
I was following [the documentation](https://pytorch.org/TensorRT/user_guide/dynamic_shapes.html#dynamic-shapes) on compiling a model with dynamic input shape. When saving the compiled graph module (following [this](https://pytorch.org/TensorRT/user_guide/saving_models.html)), the new `torch_tensorrt.save(module, path, inputs)` API requires `inputs` to be all tensors. How do I pass dynamic shapes to `torch_tensorrt.save`? Error: https://github.com/pytorch/TensorRT/blob/77278fe395d6ffdd456fd7a8a94852cd27ee63a9/py/torch_tensorrt/_compile.py#L420
```
import torch
import torch_tensorrt
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True)
model.eval().cuda()
inputs = [torch_tensorrt.Input(min_shape=[1, 3, 224, 224],
opt_shape=[4, 3, 224, 224],
max_shape=[8, 3, 224, 224],
dtype=torch.float32)]
trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
torch_tensorrt.save(trt_gm, "trt_gm.ep", inputs=inputs)
```
```
WARNING:torch_tensorrt.dynamo.conversion.aten_ops_converters:Unable to import quantization op. Please install modelopt library (https://github.com/NVIDIA/TensorRT-Model-Optimizer?tab=readme-ov-file#installation) to add support for compiling quantized models
INFO:torch_tensorrt.dynamo._compiler:Compilation Settings: CompilationSettings(enabled_precisions={<dtype.f32: 7>}, debug=False, workspace_size=0, min_block_size=5, torch_executed_ops=set(), pass_through_build_failures=False, max_aux_streams=None, version_compatible=False, optimization_level=None, use_python_runtime=False, truncate_double=False, use_fast_partitioner=True, enable_experimental_decompositions=False, device=Device(type=DeviceType.GPU, gpu_id=0), require_full_compilation=False, disable_tf32=False, assume_dynamic_shape_support=False, sparse_weights=False, refit=False, engine_capability=<EngineCapability.STANDARD: 1>, num_avg_timing_iters=1, dla_sram_size=1048576, dla_local_dram_size=1073741824, dla_global_dram_size=536870912, dryrun=False, hardware_compatible=False, timing_cache_path='/tmp/timing_cache.bin')
INFO:torch_tensorrt.dynamo._compiler:Partitioning the graph via the fast partitioner
INFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageChange] Init CUDA: CPU +1, GPU +0, now: CPU 449, GPU 1622 (MiB)
INFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageChange] Init builder kernel library: CPU +1622, GPU +288, now: CPU 2218, GPU 1910 (MiB)
WARNING:torch_tensorrt.dynamo.conversion.converter_utils:Detected unparseable type in node formatting: <class 'torch.SymInt'>
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT INetwork construction elapsed time: 0:00:00.609398
INFO:torch_tensorrt [TensorRT Conversion Context]:Global timing cache in use. Profiling results in this builder pass will be stored.
INFO:torch_tensorrt [TensorRT Conversion Context]:Detected 1 inputs and 1 output network tensors.
INFO:torch_tensorrt [TensorRT Conversion Context]:Total Host Persistent Memory: 343968
INFO:torch_tensorrt [TensorRT Conversion Context]:Total Device Persistent Memory: 7168
INFO:torch_tensorrt [TensorRT Conversion Context]:Total Scratch Memory: 6424576
INFO:torch_tensorrt [TensorRT Conversion Context]:[BlockAssignment] Started assigning block shifts. This will take 86 steps to complete.
INFO:torch_tensorrt [TensorRT Conversion Context]:[BlockAssignment] Algorithm ShiftNTopDown took 0.644934ms to assign 4 blocks to 86 nodes requiring 65830912 bytes.
INFO:torch_tensorrt [TensorRT Conversion Context]:Total Activation Memory: 65830912
INFO:torch_tensorrt [TensorRT Conversion Context]:Total Weights Memory: 127383968
INFO:torch_tensorrt [TensorRT Conversion Context]:Engine generation completed in 0.553365 seconds.
INFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 16 MiB, GPU 129 MiB
INFO:torch_tensorrt [TensorRT Conversion Context]:[MemUsageStats] Peak memory usage during Engine building and serialization: CPU: 4064 MiB
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:Build TRT engine elapsed time: 0:00:00.649827
INFO:torch_tensorrt.dynamo.conversion._TRTInterpreter:TRT Engine uses: 129675836 bytes of Memory
INFO:torch_tensorrt [TensorRT Conversion Context]:Serialized 26 bytes of code generator cache.
INFO:torch_tensorrt [TensorRT Conversion Context]:Serialized 292352 bytes of compilation cache.
INFO:torch_tensorrt [TensorRT Conversion Context]:Serialized 3744 timing cache entries
WARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of this as well as workarounds, see the linked documentation (https://pytorch.org/TensorRT/user_guide/runtime.html#multi-device-safe-mode)
Traceback (most recent call last):
File "test.py", line 11, in <module>
|
https://github.com/pytorch/TensorRT/issues/3109
|
closed
|
[
"question"
] | 2024-08-21T18:35:28Z
| 2024-09-26T20:38:44Z
| null |
Qi-Zha0
|
pytorch/ao
| 724
|
What is the difference between WeightNormSparsifier and torch.nn.utils.prune.l1_unstructured ?
|
https://github.com/pytorch/ao/issues/724
|
open
|
[
"question"
] | 2024-08-21T18:14:19Z
| 2024-08-23T15:03:35Z
| null |
mayank64ce
|
|
huggingface/parler-tts
| 116
|
How to use italian language?
|
It is possible use an italian style speaker? I've tried many prompt but all of this are in english style
|
https://github.com/huggingface/parler-tts/issues/116
|
open
|
[] | 2024-08-21T15:24:57Z
| 2025-06-18T13:20:22Z
| null |
piperino11
|
huggingface/chat-ui
| 1,423
|
Generated answers with Llama 3 include <|start_header_id|>assistant<|end_header_id|>
|
## Bug description
I have set up a local endpoint serving Llama 3. All the answers I get from it start with `<|start_header_id|>assistant<|end_header_id|>`.
## Steps to reproduce
Set up Llama 3 in a local endpoint. In my `.env.local`, it is defined as the following:
```
MODELS=`[
{
"name": "llama3",
"displayName": "Llama 3 loaded from GCS",
"chatPromptTemplate": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{{preprompt}}<|eot_id|>{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{/ifUser}}{{#ifAssistant}}{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}",
"preprompt": "You are a helpful AI assistant.",
"parameters": {
"stop": ["<|endoftext|>", "<|eot_id|>"],
"temperature": 0.4,
"max_new_tokens": 1024,
"truncate": 3071
},
"endpoints": [{
"type": "openai",
"baseURL": "http://localhost:8080/openai/v1"
}],
}
]`
```
## Context
I have tried variations of the chat template, also not providing any. The `<|start_header_id|>assistant<|end_header_id|>` is always there.
AFAIK, these tokens should be the last ones in the prompt, so that the model knows that it should continue the prompt with the assistant's answer. It seems they are not properly appended to the prompt, but the model still realizes it should add them itself.
### Logs
This a sample request that my local server receives (running VLLM):
```
INFO 08-21 11:47:18 async_llm_engine.py:529] Received request cmpl-d1482c4eb4ce49c2a259a2f782ee3712-0: prompt: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant. Unless otherwise specified, give concise and straightforward answers.<|eot_id|><|start_header_id|>user<|end_header_id|>
[ChatCompletionRequestMessageContentPartText(type='text', text='Hi, what is pizza?')]<|eot_id|>", sampling_params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.4, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|endoftext|>', '<|eot_id|>'], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1024, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128000, 128006, 9125, 128007, 271, 2675, 527, 264, 11190, 15592, 18328, 13, 11115, 6062, 5300, 11, 3041, 64694, 323, 31439, 11503, 13, 128009, 128006, 882, 128007, 271, 58, 16047, 34290, 1939, 2097, 2831, 5920, 1199, 5930, 1151, 1342, 518, 1495, 1151, 13347, 11, 1148, 374, 23317, 30, 52128, 128009], lora_request: None.
```
### Specs
- **OS**: macOS
- **Browser**: Firefox 129.0.1
- **chat-ui commit**: 28351dfefa581e4494b2047de3c093eaa7a7cdbc
### Config
```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=...
```
## Notes
I'm not sure what the `ChatCompletionRequestMessageContentPartText(...)` in the prompt is supposed to mean. Is it some internal request object rendered as a string?
|
https://github.com/huggingface/chat-ui/issues/1423
|
closed
|
[
"support"
] | 2024-08-21T11:56:47Z
| 2024-08-26T14:31:53Z
| 5
|
erickrf
|
huggingface/trl
| 1,955
|
How to fine-tune LLaVA using PPO
|
Does LLaVA support training with PPO?
If not, what modifications do I need to make to enable this support?
|
https://github.com/huggingface/trl/issues/1955
|
open
|
[
"✨ enhancement",
"👁️ VLM"
] | 2024-08-21T07:34:30Z
| 2024-08-26T11:13:46Z
| null |
Yufang-Liu
|
pytorch/xla
| 7,897
|
Import "torch_xla.core.xla_model" could not be resolved
|
getting issues on torch_xla.core.xla_model. , while installing package also getting errors : "ERROR: Could not find a version that satisfies the requirement torch-xla (from versions: none)
ERROR: No matching distribution found for torch-xla"
I have installed python version is : Python 3.10.0
Any Solution ?
|
https://github.com/pytorch/xla/issues/7897
|
closed
|
[
"question"
] | 2024-08-21T05:25:35Z
| 2025-04-01T12:26:48Z
| null |
hiralU
|
huggingface/diffusers
| 9,235
|
Is there any way to get diffusers-v0.27.0.dev0?
|
Is there any way to get diffusers-v0.27.0.dev0? I want to compare the difference between diffusers-v0.27.0.dev0 and branches that develop on it in another project, but I didn't find it on the releases or tags page.
|
https://github.com/huggingface/diffusers/issues/9235
|
closed
|
[] | 2024-08-21T03:42:11Z
| 2024-08-21T05:10:26Z
| 2
|
D222097
|
huggingface/llm.nvim
| 108
|
How to use proxy env var
|
I am unable to communicate with any http endpoints because I am behind a corporate proxy that uses self-signed certificates. Typically we use the http_proxy and https_proxy environment variables for this purpose, but I can't see any obvious configurations that I can add to my lua config to make this work.
I have tried adding http_proxy = "http://ProxyURL:ProxyPort" to cmd_env in the llm.setup but it still keeps throwing an http error... invalid peer certificate, unknown issuer.
|
https://github.com/huggingface/llm.nvim/issues/108
|
open
|
[] | 2024-08-20T18:52:54Z
| 2024-08-20T18:53:36Z
| null |
SethARhodes
|
huggingface/huggingface_hub
| 2,468
|
How can I modify this repo files downloader jupyter notebook script to improve downloading speed? Perhaps multiple downloads at the same time?
|
This below code works but it is just slow
How can i speed up? Machine has much bigger speed and i really need to download lots of AI models to test
Thank you
```
import os
import requests
import hashlib
from huggingface_hub import list_repo_files, hf_hub_url, hf_hub_download
from huggingface_hub.utils import HfFolder
from tqdm import tqdm
def calculate_file_hash(file_path):
sha256_hash = hashlib.sha256()
with open(file_path, "rb") as f:
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
return sha256_hash.hexdigest()
def download_file(url, target_path, headers, expected_size=None):
response = requests.get(url, headers=headers, stream=True)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
mode = 'ab' if os.path.exists(target_path) else 'wb'
with tqdm(total=total_size, unit='B', unit_scale=True, desc=os.path.basename(target_path), initial=0, ascii=True) as pbar:
with open(target_path, mode) as f:
for chunk in response.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
pbar.update(len(chunk))
if expected_size and os.path.getsize(target_path) != expected_size:
raise ValueError(f"Size mismatch for {target_path}. Expected: {expected_size}, Got: {os.path.getsize(target_path)}")
# Define the repository and target folder
repo_id = "YourUserName/reponame"
target_folder = "/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion"
# Retrieve the token from the .huggingface folder or set it manually
token = HfFolder.get_token()
if not token:
raise ValueError("Hugging Face token not found. Please log in using `huggingface-cli login` or set the token manually.")
headers = {"Authorization": f"Bearer {token}"}
# List all files in the repository
files = list_repo_files(repo_id)
# Ensure the target folder exists
os.makedirs(target_folder, exist_ok=True)
# Download each file directly to the target folder
for file in files:
try:
target_path = os.path.join(target_folder, file)
# Get file metadata
file_info = hf_hub_download(repo_id, filename=file, repo_type='model', token=token, local_dir=target_folder, local_dir_use_symlinks=False)
expected_size = os.path.getsize(file_info)
# Check if the file already exists and has the correct size
if os.path.exists(target_path):
if os.path.getsize(target_path) == expected_size:
print(f"File {file} already exists and is complete. Skipping download.")
continue
else:
print(f"File {file} exists but is incomplete. Resuming download.")
# Get the URL for the file
file_url = hf_hub_url(repo_id, filename=file, repo_type='model')
# Ensure subdirectories exist
os.makedirs(os.path.dirname(target_path), exist_ok=True)
# Download the file with authentication and size verification
download_file(file_url, target_path, headers, expected_size)
# Set the correct permissions for the downloaded file
os.chmod(target_path, 0o644) # Read and write for owner, read for group and others
except Exception as e:
print(f"An error occurred while processing file {file}: {e}")
print(f"All files have been downloaded and verified in {target_folder}")
```
### System info
```shell
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.24.6
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: Yes
- iPython shell: ZMQInteractiveShell
- Running in notebook ?: Yes
- Running in Google Colab ?: No
- Token path ?: /home/Ubuntu/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: MonsterMMORPG
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: N/A
- pydantic: N/A
- aiohttp: 3.10.5
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/Ubuntu/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/Ubuntu/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/Ubuntu/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
{'huggingface_hub version': '0.24.6',
'Platform': 'Linux-6.5.0-45-generic-x86_64-with-glibc2.35',
'Python version': '3.10.12',
'Running in iPython ?': 'Yes',
'iPython shell': 'ZM
|
https://github.com/huggingface/huggingface_hub/issues/2468
|
closed
|
[] | 2024-08-20T15:13:13Z
| 2024-08-27T16:22:14Z
| null |
FurkanGozukara
|
pytorch/xla
| 7,890
|
In spmd training of multiple machines, xp.trace is problematic
|
## ❓ Questions and Help
I printed all the thunk that was executed and found that there were a lot of thunk that didn't appear in my tensorboard. And the order of the front and back is also wrong.
I trace according to this example:https://github.com/pytorch/xla/blob/master/test/spmd/test_train_spmd_imagenet.py#L318-L333
xla_version: latest
device: 2 * 8 A100
|
https://github.com/pytorch/xla/issues/7890
|
open
|
[
"question"
] | 2024-08-20T12:48:39Z
| 2025-04-01T12:28:34Z
| null |
mars1248
|
huggingface/datasets
| 7,116
|
datasets cannot handle nested json if features is given.
|
### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
|
https://github.com/huggingface/datasets/issues/7116
|
closed
|
[] | 2024-08-20T12:27:49Z
| 2024-09-03T10:18:23Z
| 3
|
ljw20180420
|
huggingface/datasets
| 7,113
|
Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
|
### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains.
Please see the code below to reproduce the problem.
The dataset can iterate correctly if we set either streaming=False or drop_last_batch=False.
I have to use drop_last_batch=True since it's for distributed training.
### Steps to reproduce the bug
```python
# datasets==2.21.0
import datasets
def data_prepare(examples):
print(examples["sentence1"][0])
return examples
batch_size = 101
# the size of the dataset is 100
# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False
dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: data_prepare(x),
drop_last_batch=True,
batched=True, batch_size=batch_size)
for ex in dataset:
print(ex)
pass
```
### Expected behavior
The dataset iterates regardless of the batch size.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
|
https://github.com/huggingface/datasets/issues/7113
|
closed
|
[] | 2024-08-20T08:26:40Z
| 2024-08-26T04:24:11Z
| 1
|
memray
|
pytorch/serve
| 3,290
|
model_yaml_config usage is not explained well enough
|
### 📚 The doc issue
### Expected :
The [documentation ](https://github.com/pytorch/serve/blob/master/docs/configuration.md#config-model)about `model_yaml_config` sounds as if we could use it as below in `config.properties` and access it later.
- file name : `config.properties`
- content :
```
inference_address=https://127.0.0.1:8443
management_address=https://127.0.0.1:8444
metrics_address=https://127.0.0.1:8445
model_yaml_config={\
"pippy": {\
"rpc_timeout": <some value>\
}\
}
```
and I can't access the `model_yaml_config` property through `context.model_yaml_config` and actually it throws an error.
---
### Reality :
However, the way we could use the property is as below.
- command : `torch-model-archiver --model-name <something> --serialized-file <some path> ... --config-file <yaml file path>`
and this logic is very confusing when compared with what is written in the documentation
### Suggest a potential alternative/fix
The logic seems like when my handler, having inherited `BaseHandler`, doesn't acutally assign `self.model_yaml_config` in its `initialize` [method.](https://github.com/pytorch/serve/blob/ef196c0f1d5f14bb0e01f65b7b21d43c3c143814/ts/torch_handler/base_handler.py#L151) Actually, it is assigned when `Service` is instantiated with `.__init__` [method](https://github.com/pytorch/serve/blob/ef196c0f1d5f14bb0e01f65b7b21d43c3c143814/ts/service.py#L34)
I suggest either of the two
1. Modify the documentation to use `model_yaml_config` property with `torch-model-archiver --config-file <path>` argument
2. Or modify the code to assign `model_yaml_config` through `config.properties` as it sounds in the current documentation.
|
https://github.com/pytorch/serve/issues/3290
|
open
|
[] | 2024-08-20T00:34:32Z
| 2024-08-26T18:49:27Z
| 1
|
Foundsheep
|
pytorch/torchchat
| 1,041
|
Improve support for and documentation of custom models
|
### 🚀 The feature, motivation and pitch
torchchat supports adding models to the "known_model" list and has CLI support for local models not hosted in torchchat's, but this can be better documented.
### Alternatives
_No response_
### Additional context
Some PR's Related to this theme:
* https://github.com/pytorch/torchchat/issues/1038
* https://github.com/pytorch/torchchat/issues/1040
### RFC (Optional)
_No response_
|
https://github.com/pytorch/torchchat/issues/1041
|
closed
|
[
"documentation",
"enhancement",
"Known Gaps",
"triaged"
] | 2024-08-19T16:43:48Z
| 2025-02-04T18:22:48Z
| 1
|
Jack-Khuu
|
huggingface/diffusers
| 9,216
|
I made a pipeline that lets you use any number of models at once
|
### Model/Pipeline/Scheduler description
Here's how to do it:
from rubberDiffusers import StableDiffusionRubberPipeline
pipe=StableDiffusionRubberPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32,local_files_only=True,safety_checker=None, requires_safety_checker=False,
)
pipe2=StableDiffusionRubberPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32,local_files_only=True,safety_checker=None, requires_safety_checker=False,
)
apply_multiModel(pipe)
pipe.added_model=[pipe2]
image=pipe("your prompt",width=512,height=512,pos=["0:0-512:512"],mask_strengths=[.5],model_kwargs=[{prompt="your prompt for the first loaded model"}]).images[0]
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://github.com/alexblattner/RubberDiffusers
|
https://github.com/huggingface/diffusers/issues/9216
|
open
|
[
"stale"
] | 2024-08-19T11:46:08Z
| 2024-09-21T15:03:31Z
| 3
|
alexblattner
|
pytorch/torchtitan
| 528
|
How to train using bfloat16?
|
Hi! I have a quick question: how to train using bfloat16? I found the default setting using fp32.
I changed ''data_parallel_degree" to 4 (my number of GPUs) but still did not use bfloat16.
Thanks in advance!
|
https://github.com/pytorch/torchtitan/issues/528
|
closed
|
[] | 2024-08-19T07:38:12Z
| 2024-08-20T13:45:47Z
| null |
zyushun
|
pytorch/ao
| 704
|
Question: How to use Float8InferenceLinear with FSDP1/2?
|
Hey Team,
I'm trying to use FSDP1/2 with Float8InferenceLinear but seems have some issues (with torch 2.3.1+cu118). Do you suggestion to bump to higher version of torch and have a try or maybe use the training setup without using the inference layer? I also tried using the Flont8linear layer without using the quantization function to convert to Float8InferenceLinear but seems face some issues when using FSDP1 that when computing the amax, some input x tensors are empty (x.numel()=0) and some are NaN.
Best regards,
QQ
|
https://github.com/pytorch/ao/issues/704
|
open
|
[
"float8",
"inference"
] | 2024-08-19T07:33:07Z
| 2024-08-26T02:40:18Z
| null |
qingquansong
|
huggingface/transformers
| 32,873
|
How to use 【examples/pytorch/contrastive-image-text】 to inter inference
|
### Feature request
I have reviewed the training code for CLIP and successfully executed it. Now, I want to use the obtained model for inference testing.
### Motivation
I would like to test the performance of the model I have trained.
### Your contribution
I hope I can get a example script to inference testing like below script :
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ./clip-roberta \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_train --do_eval \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir \
--push_to_hub
|
https://github.com/huggingface/transformers/issues/32873
|
open
|
[
"Feature request"
] | 2024-08-19T05:54:54Z
| 2024-08-19T08:33:50Z
| null |
rendaoyuan
|
pytorch/TensorRT
| 3,098
|
❓ [Question] When using torch_tensorrt.compile to optimize Mask2Former's multi_scale_deformable_attn layer, an error occurs.
|
## ❓ Question
<!-- Your question -->
I was preparing to export a TRT model for Mask2Former using the command **optimized_model = torch_tensorrt.compile(model, inputs=imgs, enabled_precisions={torch.half})**, where model is a Mask2Former loaded through mmseg.
However, I encountered an error at the line **value_l_ = value_list[0].flatten(2).transpose(1, 2).reshape(4 * 8, 32, 16, 16)**:
The error message was:
`"Failed running call_method reshape(*(FakeTensor(..., device='cuda:0', size=(1, 256, 256),
grad_fn=<TransposeBackward0>), 32, 32, 16, 16), **{}):
shape '[32, 32, 16, 16]' is invalid for input of size 65536"`
The original code was **value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_)**. Even after fixing all variables with constants, **During training, this can be reshaped normally**, but the above error occurs when using torch_tensorrt.compile.
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- pytorch: 2.3.0
- torch_tensorrt: 2.3.0
- OS: ubuntu20:
- mmsegmentation: 1.2.1
## Additional context
The complete code is as follows:
```
value_list = value.split([16*16,32*32,64*64], dim=1)
value_l_ = value_list[0].flatten(2).transpose(1, 2).reshape(4 * 8, 32, 16, 16)
sampling_grid_l_ = sampling_grids[:, :, :,0].transpose(1, 2).flatten(0, 1)
sampling_value_l_ = F.grid_sample(
value_l_,
sampling_grid_l_,
mode='bilinear',
padding_mode='zeros',
align_corners=False)
sampling_value_list.append(sampling_value_l_)
```
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/3098
|
open
|
[
"question"
] | 2024-08-19T03:03:03Z
| 2024-09-24T18:38:56Z
| null |
edition3234
|
huggingface/chat-ui
| 1,415
|
Bad request: Task not found for this model
|
Hi all,
I am facing the following issue when using HuggingFaceEndpoint for my custom finetuned model in my repository "Nithish-2001/RAG-29520hd0-1-chat-finetune" which is public with gradio.
llm_name: Nithish-2001/RAG-29520hd0-1-chat-finetune
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/Nithish-2001/RAG-29520hd0-1-chat-finetune
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 763, in predict
output = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 288, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1931, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1516, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "<ipython-input-7-4e46265a5151>", line 90, in conversation
response = qa_chain.invoke({"question": message, "chat_history": formatted_chat_history})
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/conversational_retrieval/base.py", line 169, in _call
answer = self.combine_docs_chain.run(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 603, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 381, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/base.py", line 138, in _call
output, extra_return_dict = self.combine_docs(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/stuff.py", line 257, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 316, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 381, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 126, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 138, in generate
return self.llm.generate_prompt(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 750, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File
|
https://github.com/huggingface/chat-ui/issues/1415
|
open
|
[
"support"
] | 2024-08-18T09:33:10Z
| 2024-08-25T22:38:00Z
| 1
|
NITHISH-Projects
|
pytorch/TensorRT
| 3,095
|
❓ [Question] Why does the speed (fps) of torch-tensorrt perform so badly in `torch.multiprocessing`?
|
## ❓ Question
Hello, dear developer:
Thank your for your amazing job!
Why does the speed (fps) of torch-tensorrt perform so badly in `torch.multiprocessing`?
Currently I use `torch.multiprocessing` to create and run 3 Process (in 1 GPU) of resnet18, resnet50 and resnet101 at the same time. But I find their speeds of inference are worse than single process.
Here is my single process code:
```
# single process
import time
import torch
import tensorrt
import torch_tensorrt
from torchvision.models import resnet18, resnet50, resnet101
if __name__ == '__main__':
# --------------------------------ResNet18---------------------------------------
model0 = torch.jit.load("res18_trt_fp16.ts")
inputs = [torch.randn((10, 3, 224, 224)).half().cuda()]
print("Warm up ...")
with torch.no_grad():
for _ in range(10):
features = model0(*inputs)
torch.cuda.synchronize()
t0 = time.time()
with torch.no_grad():
_ = model0(*inputs)
torch.cuda.synchronize()
t1 = time.time()
print('res18: ', (t1 - t0) * 1000, 'ms')
# --------------------------------ResNet50---------------------------------------
model1 = torch.jit.load("res50_trt_fp16.ts")
inputs = [torch.randn((10, 3, 224, 224)).half().cuda()]
print("Warm up ...")
with torch.no_grad():
for _ in range(10):
features = model1(*inputs)
torch.cuda.synchronize()
t0 = time.time()
with torch.no_grad():
_ = model1(*inputs)
torch.cuda.synchronize()
t1 = time.time()
print('res50: ', (t1 - t0) * 1000, 'ms')
# --------------------------------ResNet101--------------------------------------
model2 = torch.jit.load("res101_trt_fp16.ts")
inputs = [torch.randn((10, 3, 224, 224)).half().cuda()]
with torch.no_grad():
for _ in range(10):
features = model2(*inputs)
torch.cuda.synchronize()
t0 = time.time()
with torch.no_grad():
res = model2(*inputs)
torch.cuda.synchronize()
t1 = time.time()
print('res101: ', (t1 - t0) * 1000, 'ms')
```
The results are:
```
res18: 1.2104511260986328 ms
res50: 2.7513504028320312 ms
res101: 5.034923553466797 ms
```
And here is my multiprocessing code
```
# multiprocess
import pycuda.driver as cuda
import pycuda.autoinit
import os
import time
import numpy as np
import torch
import torch.multiprocessing as mp
import torch_tensorrt
def Worker1():
print('Worker1 PID:', os.getpid())
net = torch.jit.load("res18_trt_fp16.ts")
x = torch.randn(10, 3, 224, 224).half().cuda()
for i in range(10):
_ = net(x)
with torch.no_grad():
while True:
# infer
torch.cuda.synchronize()
t0 = time.time()
results = net(x)
torch.cuda.synchronize()
t1 = time.time()
print('Res18', (t1 - t0) * 1000, 'ms')
def Worker2():
print('Worker2 PID:', os.getpid())
net = torch.jit.load("res50_trt_fp16.ts")
x = torch.randn(10, 3, 224, 224).half().cuda()
for i in range(10):
_ = net(x)
with torch.no_grad():
while True:
# infer
torch.cuda.synchronize()
t0 = time.time()
results = net(x)
torch.cuda.synchronize()
t1 = time.time()
print('Res50', (t1 - t0) * 1000, 'ms')
def Worker3():
print('Worker3 PID:', os.getpid())
net = torch.jit.load("res101_trt_fp16.ts")
x = torch.randn(10, 3, 224, 224).half().cuda()
for i in range(10):
_ = net(x)
with torch.no_grad():
while True:
# infer
torch.cuda.synchronize()
t0 = time.time()
results = net(x)
torch.cuda.synchronize()
t1 = time.time()
print('Res101', (t1 - t0) * 1000, 'ms')
if __name__ == '__main__':
mp.set_start_method('spawn', force=True)
# create
processes = [
mp.Process(target=Worker1, args=()),
mp.Process(target=Worker2, args=()),
mp.Process(target=Worker3, args=()),
]
# start
for p in processes:
p.start()
# main loop
while True:
continue
```
BUT the results are (average):
```
Res18: 5.539894104003906 ms
Res50: 7.973670959472656 ms
Res101:13.53001594543457 ms
```
The results of multiprocessing are so wired. They are much slower than single process, which confuses me a lot.
Is there any way to fix them up or speed them up?
Thank you in advance!
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.3.0 stable
- PyTorch-Tensorrt Version (e.g., 1.0): 2.3.0
- Tensorrt Version (e.g., 1.0): 10.0.1
- CPU Architecture: x64
- OS (e.g., Linux): ubuntu 22.04
- How y
|
https://github.com/pytorch/TensorRT/issues/3095
|
open
|
[
"question"
] | 2024-08-17T08:32:46Z
| 2025-04-15T13:54:47Z
| null |
zhongqiu1245
|
pytorch/torchx
| 945
|
Using torchx as a SDK
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
The examples on the documentation refer to using torchx via the cli implementation. I was wondering if there was any way that torchx can be used in a sdk format. For instance:
```
class MyCustomComponent
class MyScheduler
runner = torchx.runner()
runner.with_scheduler(MyScheduler())
runner.run_component(MyCustomComponent())
```
If its possible, is there any documentation or a sample project that provides an example of how this can be used, in particular using a custom scheduler and component?
Thank You!
|
https://github.com/meta-pytorch/torchx/issues/945
|
open
|
[] | 2024-08-17T03:21:30Z
| 2024-08-19T14:18:45Z
| 1
|
juinquok
|
huggingface/sentence-transformers
| 2,893
|
how to finetune sentence-transformers with unsupervised methods?
|
how to finetune sentence-transformers with unsupervised methods? for semantic search
|
https://github.com/huggingface/sentence-transformers/issues/2893
|
closed
|
[] | 2024-08-17T02:32:09Z
| 2024-08-18T02:51:29Z
| null |
keyuchen21
|
huggingface/diffusers
| 9,205
|
Can we pass output_attentions=True to DiT model such as pixart to get attention output?
|
Can we pass output_attentions=True to DiT model such as pixart to get attention output? Like using output_attentions=True in transformer?
|
https://github.com/huggingface/diffusers/issues/9205
|
open
|
[
"stale"
] | 2024-08-16T17:26:14Z
| 2024-09-16T15:02:42Z
| 1
|
foreverpiano
|
huggingface/datatrove
| 266
|
How to look into the processed data?
|
Hi,
After running `tokenize_from_hf_to_s3.py`, I would like to inspect the resulting data. But I find that the current data is in a binary file (`.ds`). is there a way to allow me to look into the data?
Thanks!
|
https://github.com/huggingface/datatrove/issues/266
|
open
|
[] | 2024-08-16T16:54:45Z
| 2024-08-29T15:26:35Z
| null |
shizhediao
|
huggingface/trl
| 1,934
|
How to Save the PPOTrainer?
|
The previous issue for this question https://github.com/huggingface/trl/issues/1643#issue-2294886330 is closed but remained unanswered. If I do `ppo_trainer.save_pretrained('path/to/a/folder')` and then `ppo_trainer.from_pretrained('path/to/that/folder')`, I get this error:
ValueError: tokenizer must be a PreTrainedTokenizerBase like a PreTrainedTokenizer or a PreTrainedTokenizerFast, got <class 'NoneType'>
It seems that the `PPOTrainer` object does not implement the two functions from `huggingface_hub.PyTorchModelHubMixin`. How should I save my trainer then?
|
https://github.com/huggingface/trl/issues/1934
|
closed
|
[] | 2024-08-16T09:41:39Z
| 2024-10-07T14:57:51Z
| null |
ThisGuyIsNotAJumpingBear
|
huggingface/parler-tts
| 109
|
How many epoch of training did you do? What is the accuracy?
|
How many epoch of training did you do? What is the accuracy?
|
https://github.com/huggingface/parler-tts/issues/109
|
open
|
[] | 2024-08-16T09:35:31Z
| 2024-08-16T09:35:31Z
| null |
xuezhongfei2008
|
pytorch/torchchat
| 1,038
|
How to deploy a new model by torchchat?
|
I want to use torchchat to load the trained model directly from the local. How to change the torchchat/config/data/models.json? Need to change download _ and _ convert in download.py?And, what other documents may need to be changed?
|
https://github.com/pytorch/torchchat/issues/1038
|
open
|
[
"bug"
] | 2024-08-16T09:33:29Z
| 2024-08-19T18:24:37Z
| null |
liu8060
|
huggingface/diffusers
| 9,195
|
Problem with Flux Schnell bfloat16 multiGPU
|
### Describe the bug
Hello! I set device_map='balanced' and get images generated in 2.5 minutes (expected in 12-20 seconds), while in pipe.hf_device_map it shows that the devices are distributed like this:
```
{
"transformer": "cuda:0",
"text_encoder_2": "cuda:2",
"text_encoder": "cuda:0",
"vae": "cuda:1"
}
```
I have 3 video cards 3090 Ti 24 GB and I can’t run it on them.
I also tried this way:
pipe.transformer.to('cuda:2')
pipe.text_encoder.to('cuda:2')
pipe.text_encoder_2.to('cuda:1')
pipe.vae.to('cuda:0')
What is the best way to launch it so that generation occurs on the GPU and quickly?
### Reproduction
```python
pipe = FluxPipeline.from_pretrained(
path_chkpt,
torch_dtype=torch.bfloat16,
device_map='balanced',
)
```
### Logs
_No response_
### System Info
ubuntu 22.04 3 GPU: 3090 TI 24 GB
accelerate==0.30.1
addict==2.4.0
apscheduler==3.9.1
autocorrect==2.5.0
chardet==4.0.0
cryptography==37.0.2
curl_cffi
diffusers==0.30.0
beautifulsoup4==4.11.2
einops
facexlib>=0.2.5
fastapi==0.92.0
hidiffusion==0.1.6
invisible-watermark>=0.2.0
numpy==1.24.3
opencv-python==4.8.0.74
pandas==2.0.3
pycocotools==2.0.6
pymystem3==0.2.0
pyyaml==6.0
pyjwt==2.6.0
python-multipart==0.0.5
pytrends==4.9.1
psycopg2-binary
realesrgan==0.3.0
redis==4.5.1
sacremoses==0.0.53
selenium==4.2.0
sentencepiece==0.1.97
scipy==1.10.1
scikit-learn==0.24.1
supervision==0.16.0
tb-nightly==2.14.0a20230629
tensorboard>=2.13.0
tomesd
transformers==4.40.1
timm==0.9.16
yapf==0.32.0
uvicorn==0.20.0
spacy==3.7.2
nest_asyncio==1.5.8
httpx==0.25.0
torchvision==0.15.2
insightface==0.7.3
psutil==5.9.6
tk==0.1.0
customtkinter==5.2.1
tensorflow==2.13.0
opennsfw2==0.10.2
protobuf==4.24.4
gfpgan==1.3.8
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9195
|
closed
|
[
"bug"
] | 2024-08-16T06:30:54Z
| 2025-12-05T06:38:14Z
| 26
|
OlegRuban-ai
|
pytorch/TensorRT
| 3,092
|
❓ [Question] Is there any way to deploy on a single machine with multi-gpus?
|
## ❓ Question
<!-- Your question -->
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
As the title, I have a machine with multiple GPUs and I would like to know if there is any way to evenly distribute the model across these GPUs. Is there any way to achieve this?
|
https://github.com/pytorch/TensorRT/issues/3092
|
open
|
[
"question"
] | 2024-08-16T02:01:21Z
| 2024-08-16T17:58:02Z
| null |
SZ-ing
|
pytorch/pytorch
| 133,643
|
How to Manage CPU Memory Usage in PyTorch After Moving Model to CPU?
|
### 📚 The doc issue
Hi everyone,
I'm currently working on a deep learning project using PyTorch, and I've run into some issues with managing CPU memory after transferring a model to the GPU.
In specific, I'm loading a pre-trained model using PyTorch, then moving the model to the GPU. However, I've noticed that after moving the model to the GPU, the CPU memory usage doesn't decrease as much as I expected.
I used 'memory_profiler' to analyze memory usage, and here's what I found:
Before moving to GPU: The model uses a significant amount of CPU memory during the loading and preparation stages.
After moving to GPU: The memory usage on the CPU doesn't drop much. It seems like some data or buffers might still be retained in CPU memory.
I've tried deleting references to the model on CPU using 'del' and forced garbage collection using 'gc.collect()' but this doesn't seem to affect the memory.
So is that because PyTorch inherently keep some CPU memory for caching or other purposes? Is it possible to fully release the CPU memory after moving a model to GPU in PyTorch?
I would appreciate any insights or advice on how to better manage CPU memory in this context. Thanks in advance for your help!Hi everyone,
I'm currently working on a deep learning project using PyTorch, and I've run into some issues with managing CPU memory after transferring a model to the GPU.
In specific, I'm loading a pre-trained model using PyTorch, then moving the model to the GPU. However, I've noticed that after moving the model to the GPU, the CPU memory usage doesn't decrease as much as I expected.
I used **'memory_profiler'** to analyze memory usage, and here's what I found:
Before moving to GPU: The model uses a significant amount of CPU memory during the loading and preparation stages.
After moving to GPU: The memory usage on the CPU doesn't drop much. It seems like some data or buffers might still be retained in CPU memory.
I've tried deleting references to the model on CPU using **'del'** and forced garbage collection using **'gc.collect()'** but this doesn't seem to affect the memory.
So is that because PyTorch inherently keep some CPU memory for caching or other purposes? Is it possible to fully release the CPU memory after moving a model to GPU in PyTorch?
I would appreciate any insights or advice on how to better manage CPU memory in this context. Thanks in advance for your help!
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/pytorch/issues/133643
|
closed
|
[] | 2024-08-15T23:23:15Z
| 2024-08-16T20:43:11Z
| null |
prisnguyen
|
pytorch/xla
| 7,858
|
[Bug] Notebook `Stable Diffusion with PyTorch/XLA 2.0` is outdated
|
## 🐛 Bug
Official Notebook `Stable Diffusion with PyTorch/XLA 2.0` is outdated
## To Reproduce:
Run [Stable Diffusion with PyTorch/XLA 2.0 Notebook](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb) on Kaggle TPU VM v3-8
## Environment
Kaggle TPU VM v3-8
## Expected behavior
Generate and show image.
## Error:
```shel
FutureWarning: `callback` is deprecated and will be removed in version 1.0.0. Passing `callback` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`
deprecate(
2%|▏ | 1/50 [00:00<00:32, 1.51it/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[8], line 4
1 generator = torch.Generator().manual_seed(0)
2 # xm.mark_step compiles and executes the graph after each iteration.
3 # The first few steps will be much slower than the rest.
----> 4 image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]
5 image
File /usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /usr/local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1041, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)
1039 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1040 progress_bar.update()
-> 1041 if callback is not None and i % callback_steps == 0:
1042 step_idx = i // getattr(self.scheduler, "order", 1)
1043 callback(step_idx, t, latents)
TypeError: unsupported operand type(s) for %: 'int' and 'NoneType'
```
|
https://github.com/pytorch/xla/issues/7858
|
open
|
[
"bug",
"documentation",
"xla:tpu"
] | 2024-08-15T11:21:01Z
| 2025-05-02T23:15:34Z
| 2
|
steveepreston
|
pytorch/xla
| 7,857
|
Why do the communication in my spmd training have control-predecessors
|
## ❓ Questions and Help
In my formal training task, there are some control-predecessors in the communication operator, but the single test I constructed cannot reproduce this situation. I would like to know under what circumstances these control-predecessors can be generated.
```
all-gather-start.12 = (f32[256]{0}, f32[4096]{0}) all-gather-start(param.639.0), channel_id=25, replica_groups={{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}}, dimensions={0}, use_global_device_ids=true, control-predecessors={all-gather-done.11}, metadata={op_type="aten__add" op_name="train_loop.1/aten__add.123/aten__add" source_file="/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py" source_line=118}, backend_config={"operation_queue_id":"0","wait_on_operation_queues":[],"collective_backend_config":{"is_sync":false,"no_parallel_custom_call":false},"force_earliest_schedule":false}
all-gather-done.12 = f32[4096]{0} all-gather-done(all-gather-start.12), metadata={op_type="aten__add" op_name="train_loop.1/aten__add.123/aten__add" source_file="/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py" source_line=118}
all-gather-start.13 = (f32[256]{0}, f32[4096]{0}) all-gather-start(param.640.0), channel_id=26, replica_groups={{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}}, dimensions={0}, use_global_device_ids=true, control-predecessors={all-gather-done.12}, metadata={op_type="aten__mul" op_name="train_loop.1/aten__mul.126/aten__mul" source_file="/opt/conda/lib/python3.8/site-packages/torch/nn/modules/normalization.py" source_line=205}, backend_config={"operation_queue_id":"0","wait_on_operation_queues":[],"collective_backend_config":{"is_sync":false,"no_parallel_custom_call":false},"force_earliest_schedule":false}
all-gather-done.13 = f32[4096]{0} all-gather-done(all-gather-start.13), metadata={op_type="aten__mul" op_name="train_loop.1/aten__mul.126/aten__mul" source_file="/opt/conda/lib/python3.8/site-packages/torch/nn/modules/normalization.py" source_line=205}
```
|
https://github.com/pytorch/xla/issues/7857
|
closed
|
[
"question",
"distributed"
] | 2024-08-15T11:17:08Z
| 2025-04-01T12:33:52Z
| null |
mars1248
|
huggingface/diffusers
| 9,184
|
What is the correct way to apply the dictionary with the control strengths (called “scales”) but with blocks?
|
### Describe the bug
I have managed to apply the basic dictionary. as the documentation mentions
```
adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} }
pipe.set_adapters("Lora1", adapter_weight_scales)
```
and it already works for N number of LORAS that I want to load, for example
```
adapter_weight_scales_1 = { "unet": { "down": 0.5, "mid": 0, "up": 0} }
adapter_weight_scales_2 = { "unet": { "down": 0, "mid": 0, "up": 0.5} }
pipe.set_adapters(["Lora1", "Lora2"], [adapter_weight_scales_1, adapter_weight_scales_2])
```
it works for me correctly, and I get very good results in my images
### Reproduction
Now I'm trying to apply the scaling dictionary to LORA but with blocks, for example:
```
adapter_weight_scales_blocks_1 = {
'unet': {
'down': {
'block_0': [0.2, 0.5],
'block_1': [0.5, 0.2]},
'mid': {
'block_0': [0.2, 0.5],
'block_1': [0.5, 0.2]},
'up': {
'block_0': [0.2, 0.5],
'block_1': [0.5, 0.5, 0.2]
}
}
}
adapter_weight_scales_blocks_2 = {
'unet': {
'down': {
'block_0': [0.5, 0.5],
'block_1': [0.5, 0.5]},
'mid': {
'block_0': [0.5, 0.5],
'block_1': [0.5, 0.5]},
'up': {
'block_0': [0.5, 0.5],
'block_1': [0.5, 0.5, 0.5]
}
}
}
pipe.set_adapters(["Lora1", "Lora2"], [ adapter_weight_scales_blocks_1, adapter_weight_scales_blocks_2])
```
### Logs
```shell
but an error like this is getting me:
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora_base.py in set_adapters(self, adapter_names, adapter_weights)
571
572 if issubclass(model.__class__, ModelMixin):
--> 573 model.set_adapters(adapter_names, _component_adapter_weights[component])
574 elif issubclass(model.__class__, PreTrainedModel):
575 set_adapters_for_text_encoder(adapter_names, model, _component_adapter_weights[component])
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/peft.py in set_adapters(self, adapter_names, weights)
107 weights = scale_expansion_fn(self, weights)
108
--> 109 set_weights_and_activate_adapters(self, adapter_names, weights)
110
111 def add_adapter(self, adapter_config, adapter_name: str = "default") -> None:
/usr/local/lib/python3.10/dist-packages/diffusers/utils/peft_utils.py in set_weights_and_activate_adapters(model, adapter_names, weights)
264 else:
265 module.active_adapter = adapter_name
--> 266 module.set_scale(adapter_name, get_module_weight(weight, module_name))
267
268 # set multiple active adapters
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/layer.py in set_scale(self, adapter, scale)
278 # Ignore the case where the adapter is not in the layer
279 return
--> 280 self.scaling[adapter] = scale * self.lora_alpha[adapter] / self.r[adapter]
281
282 def scale_layer(self, scale: float) -> None:
TypeError: unsupported operand type(s) for *: 'dict' and 'float'``
```
What would be the correct way to do it?
```
### System Info
System Info
I am using google colab,
diffusers version: 0.30.0
Python version: 3.10.
### Who can help?
Diffuser masters can help me understand how to use that feature: @sayakpaul, @yiyixuxu @asomoza
|
https://github.com/huggingface/diffusers/issues/9184
|
closed
|
[
"bug"
] | 2024-08-15T06:05:42Z
| 2024-08-17T00:54:28Z
| null |
Eduardishion
|
pytorch/xla
| 7,855
|
How to sync TPUs when using a pod with more than 1 VM in SPMD
|
## ❓ Questions and Help
Generally we feel that since in SPMD most of the work is under the hood its hard to understand what is required from us when using it in order to sync between TPUs on a pod with multiple VMs.
We would like to know the stages of syncing in that case, and how is it different from the regular syncing required on TPUs (a list of stages by name will be nice).
Specifically, If all the VMs run the same command and they all work as they are run alone (global index 0, global count 1) who should log the loss? should we use torch.distributed.get_rank() == 0 to determine the "master" for logging? @JackCaoG
|
https://github.com/pytorch/xla/issues/7855
|
closed
|
[
"question",
"distributed"
] | 2024-08-14T18:51:04Z
| 2025-04-01T12:35:29Z
| null |
dudulightricks
|
pytorch/xla
| 7,854
|
Using mark_sharding vs. MpDeviceLoader with input_sharding=xs.ShardingSpec
|
## ❓ Questions and Help
If we have a few tensors in a batch with different sizes and we use mark_sharding on each of them, we lose something comparing to input_sharding=xs.ShardingSpec in the MpDeviceLoader (which only works for a single size of tensor in the batch)? @JackCaoG
|
https://github.com/pytorch/xla/issues/7854
|
closed
|
[
"question",
"distributed"
] | 2024-08-14T18:41:34Z
| 2025-04-01T12:36:56Z
| null |
dudulightricks
|
pytorch/xla
| 7,850
|
SPMD - how to use different dataloader on each VM of a TPU pod in SPMD
|
## ❓ Questions and Help
While in SPMD mode If we run the train command of a model on all the VMs together (single program multiple machines) each VM has its own dataloader using cpu cores.
Then, when we use mark_sharding on the batch its practically copy the batch of the first VM (rank 0) to all the TPUs and ignore the batches of other VMs (which were loaded with different dataloaders).
In order to solve that (use all the dataloaders on the different VMs to load different data and use it all) we have added torch.distributed.all_gather_object on the batch object to get one huge batch before using mark_sharding.
The problem is that in this case we afraid that the huge batch is held in the memory of one VM before the sharding. The ideal solution for us would have been something like batch.mark_sharding(gather_all=True) in which instead of ignoring the different batches on all the VMs it will gather them all together logically and use mark_sharding on the result huge batch (which is practically splited over the TPUs). This way we will use all the loaded data without exploding the memory of the first VM.
Is there anything like that command? How can we use the data loaded in all the dataloaders on the different VMs? In our case its important because the data is large and it takes time to load it. @JackCaoG
|
https://github.com/pytorch/xla/issues/7850
|
closed
|
[
"question",
"distributed"
] | 2024-08-14T17:50:09Z
| 2025-04-01T12:41:07Z
| null |
dudulightricks
|
huggingface/diffusers
| 9,180
|
Pipeline has no attribute '_execution_device'
|
### Describe the bug
Hello, I implemented my own custom pipeline referring StableDiffusionPipeline (RepDiffusionPipeline), but there are some issues
I called "accelerator.prepare" properly, and mapped the models on device (with "to.(accelerator.device)")
But when I call pipeline and the '__call__' function is called, sometimes I met the error
It is not only problem in using multi-gpu, it occurs when I use single gpu.
For example, I defined my pipeline for my validation in training code like this:
```python
val_pipe = RepDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
unet=accelerator.unwrap_model(unet),
rep_encoder=accelerator.unwrap_model(rep_encoder),
vae=accelerator.unwrap_model(vae),
revision=None, variant=None, torch_dtype=weight_dtype, safety_checker=None
).to(accelerator.device)
```
then, when I called 'val_pipe' like this:
```
model_pred = val_pipe(
image = condition_original_image if args.val_mask_op else data["original_images"],
representation = representation,
prompt = "",
num_inference_steps = 20,
image_guidance_scale = 1.5,
guidance_scale = scale,
generator = generator
).images[0]
```
At that time, the error "RepDiffusionPipeline has no attribute '_execution_device'" occurs. (Not always, just randomly)
How can I solve this issue, or what part of my code can be doubted and fixed?
Thank you for reading:)
### Reproduction
It occurs randomly, so there is no option to reproduce...
But when I call the defined pipeline, it occurs randomly.
### Logs
```shell
RepDiffusionPipeline has no attribute '_execution_device'
```
### System Info
I tried to test in various diffusers & python versions, but the problem still occurs.
In now, I am running my code in diffusers 0.27.2, python 3.10.14.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.2.2+cu121 with CUDA 1201 (you have 2.2.2+cu118)
Python 3.10.14 (you have 3.10.14)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `diffusers` version: 0.27.2
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- Huggingface_hub version: 0.24.3
- Transformers version: 4.43.3
- Accelerate version: 0.33.0
- xFormers version: 0.0.25.post1
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sayakpaul @yiyixuxu
|
https://github.com/huggingface/diffusers/issues/9180
|
open
|
[
"bug",
"stale"
] | 2024-08-14T14:43:15Z
| 2025-11-18T13:22:52Z
| 33
|
choidaedae
|
pytorch/vision
| 8,588
|
size mismatch for rpn
|
### 🐛 Describe the bug
I created a Mask R-CNN model using a set of parameters that I saved in a JSON file. Once the model was trained, I saved the weights using `torch.save(model.state_dict(), "MaskRCNN.pt")`. Later, I recreated the same model and loaded the saved weights `model.load_state_dict(torch.load("MaskRCNN.pt", map_location=Device))`.
On my laptop (MacBook Pro M2) using Torch 2.2.2, TorchVision 0.17.2 (most up to date for this environment), and CPU only, everything works just fine.
However, on a cluster based on Centos with Torch 2.4, TorchVision 0.19 (most up to date for this environment), and Cuda 12.1.1, I get the following error when loading the weights:
File "/home/XXX//MaskRCNN.py", line 84, in Load
model.load_state_dict(torch.load(WeightsPath, map_location=Device))
File "/home/XXX/torch/nn/modules/module.py", line 2215, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for MaskRCNN:
size mismatch for rpn.head.cls_logits.weight: copying a param with shape torch.Size([6, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 256, 1, 1]).
size mismatch for rpn.head.cls_logits.bias: copying a param with shape torch.Size([6]) from checkpoint, the shape in current model is torch.Size([14]).
size mismatch for rpn.head.bbox_pred.weight: copying a param with shape torch.Size([24, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([56, 256, 1, 1]).
size mismatch for rpn.head.bbox_pred.bias: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([56]).
The code is exactly the same on my laptop and on the cluster.
I double checked, and I used exactly the same parameters to create ALL the models.
How can I fix this?
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.10.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7.0.5
/usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 1000.000
CPU max MHz: 2401.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke spec_ctrl intel_stibp flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] numpydoc==1.8.0
[pip3] torch==2.4.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpyd
|
https://github.com/pytorch/vision/issues/8588
|
closed
|
[] | 2024-08-14T11:08:41Z
| 2024-08-15T09:49:41Z
| 4
|
FiReTiTi
|
pytorch/xla
| 7,849
|
Is it possible free TPU memory without restarting in pytorch xla?
|
## 📚 Documentation
I have tried to move a TPU tensor to CPU or delete the tensor. However, the memory is not released.
https://colab.research.google.com/drive/1pTTDu_eJssUwjsrjBDiiyo6tlOEZTjMf?usp=sharing
<!-- A clear and concise description of what content is an issue. -->
|
https://github.com/pytorch/xla/issues/7849
|
closed
|
[] | 2024-08-14T10:48:37Z
| 2024-08-26T01:25:00Z
| 6
|
fengyang0317
|
huggingface/diffusers
| 9,174
|
[Quantization] bring quantization to diffusers core
|
Now that we have a working PoC (#9165) of NF4 quantization through `bitsandbytes` and also [this](https://huggingface.co/blog/quanto-diffusers) through `optimum.quanto`, it's time to bring in quantization more formally in `diffusers` 🎸
In this issue, I want to devise a rough plan to attack the integration. We are going to start with `bitsandbytes` and then slowly increase the list of our supported quantizers based on community interest. This integration will also allow us to do LoRA fine-tuning of large models like [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) through `peft` ([guide](https://huggingface.co/docs/peft/en/developer_guides/quantization)).
Three PRs are expected:
- [ ] Introduce a [base quantization config class](https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/base.py) like we have in `transformers`.
- [ ] Introduce `bitsandbytes` related utilities to handle processing, post-processing of layers for injecting `bitsandbytes` layers. Example is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/bitsandbytes.py).
- [ ] Introduce a `bitsandbytes` config ([example](https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/quantizer_bnb_4bit.py)) and quantization loader mixin aka `QuantizationLoaderMixin`. This loader will enable passing a quantization config to `from_pretrained()` of a `ModelMixin` and will tackle how to modify and prepare the model for the provided quantization config. This will also allow us to serialize the model according to the quantization config.
---
Notes:
* We could have done this with `accelerate` ([guide](https://huggingface.co/docs/accelerate/en/usage_guides/quantization)) but this doesn't yet support NF4 serialization.
* Good example PR: https://github.com/huggingface/transformers/pull/32306
---
@DN6 @SunMarc sounds good?
|
https://github.com/huggingface/diffusers/issues/9174
|
closed
|
[
"quantization"
] | 2024-08-14T08:05:34Z
| 2024-10-21T04:42:46Z
| 15
|
sayakpaul
|
huggingface/diffusers
| 9,172
|
why rebuild a vae in inference stage?
|
Thanks for ur effort for diffusion model.
I want to know why we need to rebuild a vae in inference stage. I think it will introduce extra GPU cost.
https://github.com/huggingface/diffusers/blob/a85b34e7fdc0a5fceb11aa0fa6199bd9afaca396/examples/text_to_image/train_text_to_image_sdxl.py#L1217C16-L1223C24
|
https://github.com/huggingface/diffusers/issues/9172
|
open
|
[
"stale"
] | 2024-08-14T05:52:38Z
| 2024-11-14T15:03:55Z
| 2
|
WilliammmZ
|
huggingface/candle
| 2,413
|
How to load multiple safetensors with json format
|
For such a task:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/transformer
how should safetensors be loaded?
|
https://github.com/huggingface/candle/issues/2413
|
open
|
[] | 2024-08-14T04:50:37Z
| 2025-06-11T19:05:05Z
| null |
oovm
|
pytorch/pytorch
| 133,397
|
Don't know how to explain but here's the error
|
### 🐛 Describe the bug
File "C:\Users\USER\Downloads\pytorch\main.py", line 3, in <module>
import torch
File "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\__init__.py", line 148, in <module>
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.5 (tags/v3.12.5:ff3bc82, Aug 6 2024, 20:45:27) [MSC v.1940 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture=9
CurrentClockSpeed=2419
DeviceID=CPU0
Family=205
L2CacheSize=5120
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2419
Name=11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.18.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
|
https://github.com/pytorch/pytorch/issues/133397
|
closed
|
[
"module: windows"
] | 2024-08-14T02:53:02Z
| 2024-08-15T00:59:59Z
| null |
Nohj9984
|
huggingface/diffusers
| 9,170
|
sdxl and contronet must has a GPU memory more than 36G?
|
### Describe the bug
https://github.com/huggingface/diffusers/blob/15eb77bc4cf2ccb40781cb630b9a734b43cffcb8/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
line73---line113
I run the demo with 24G GPU, then OOM everytime.
so I must run SDXl with 48G?
@yiyixuxu @sayakpaul @DN6 tks
### Reproduction
File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacity of 23.65 GiB of which 7.56 MiB is free. Process 3431486 has 18.91 GiB memory in use. Process 3081991 has 4.72 GiB memory in use. Of the allocated memory 4.09 GiB is allocated by PyTorch, and 171.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
### Logs
_No response_
### System Info
0.28?
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9170
|
closed
|
[
"bug"
] | 2024-08-14T01:46:35Z
| 2024-11-13T08:49:22Z
| 3
|
henbucuoshanghai
|
huggingface/trl
| 1,927
|
how to use kto_pair loss in the latest version ?
|
I can see that kto_pair losstype is no longer available in the latest version of dpo trainer. You suggest to use ktotrainer instead.
But kto_pair loss worked much better than kto_trainer on my dataset, so how do I continue to use kto_pair if I'm using the latest version of the trl library?
thanks a lot!
|
https://github.com/huggingface/trl/issues/1927
|
closed
|
[
"🏋 DPO",
"🏋 KTO"
] | 2024-08-13T15:59:25Z
| 2024-10-20T16:56:21Z
| null |
vincezengqiang
|
pytorch/xla
| 7,846
|
Is pytorch xla spmd working as expected?
|
## 🐛 Bug
I tried to run [test_train_spmd_linear_model.py](https://github.com/pytorch/xla/blob/master/test/spmd/test_train_spmd_linear_model.py) with `sharding='batch'`. The input data sharing is {devices=[8,1]0,1,2,3,4,5,6,7}, which is expected. However, after a linear layer, the fc1 output sharding becomes 'replicated'. I am wonder whether all the following layers are running without sharding?
Print the sharding_spec during forward.
```
def forward(self, x):
print('x', torch_xla._XLAC._get_xla_sharding_spec(x))
fc1 = self.fc1(x)
print('fc1', torch_xla._XLAC._get_xla_sharding_spec(fc1))
y = self.relu(fc1)
print('y', torch_xla._XLAC._get_xla_sharding_spec(y))
z = self.fc2(y)
print('z', torch_xla._XLAC._get_xla_sharding_spec(z))
o = self.fc3(z)
print('o', torch_xla._XLAC._get_xla_sharding_spec(o))
return o
```
Obtained outputs
```
x {devices=[8,1]0,1,2,3,4,5,6,7}
fc1
y
z
o
```
## To Reproduce
https://colab.research.google.com/drive/1508nWHxCthxWBlIeKLF0sLZXcjtsO6Ly#scrollTo=nGTxOOgDDOU3
Steps to reproduce the behavior:
run the colab above.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. Or better use the Colab template: https://github.com/pytorch/xla/blob/master/contrib/colab/issue-report.ipynb -->
## Expected behavior
The fc1, y, z, o should have sharding.
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: TPU
- torch_xla version: nightly
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/xla/issues/7846
|
closed
|
[] | 2024-08-13T14:43:50Z
| 2024-09-01T12:58:48Z
| 3
|
fengyang0317
|
huggingface/autotrain-advanced
| 728
|
[BUG] Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead. How to mitigate this?
|
### Prerequisites
- [X] I have read the [documentation](https://hf.co/docs/autotrain).
- [X] I have checked other issues for similar problems.
### Backend
Local
### Interface Used
CLI
### CLI Command
```
!autotrain --config path-to.yml
```
```
task: llm-sft
base_model: teknium/OpenHermes-2.5-Mistral-7B
project_name: XXX
log: none
backend: local
data:
path: /content
train_split: train
valid_split: null
chat_template: null
column_mapping:
text_column: text
params:
block_size: 256
model_max_length: 512
epochs: 1
batch_size: 2
lr: 3e-5
peft: true
quantization: int4
target_modules: all-linear
padding: right
optimizer: adamw_torch
scheduler: cosine
gradient_accumulation: 1
mixed_precision: none
unsloth: true
lora_r: 16
lora_alpha: 16
lora_dropout: 0
hub:
username: abc
token: hf_XXX
push_to_hub: false
```
### UI Screenshots & Parameters
_No response_
### Error Logs
```
Loading checkpoint shards: 100% 2/2 [01:21<00:00, 40.56s/it]
INFO | 2024-08-13 04:46:20 | autotrain.trainers.clm.utils:get_model:666 - model dtype: torch.float16
INFO | 2024-08-13 04:46:20 | autotrain.trainers.clm.train_clm_sft:train:37 - creating trainer
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py:100: FutureWarning: Deprecated argument(s) used in '__init__': dataset_text_field, max_seq_length, packing. Will not be supported from version '1.0.0'.
Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead.
warnings.warn(message, FutureWarning)
/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:192: UserWarning: You passed a `packing` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:280: UserWarning: You passed a `max_seq_length` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:318: UserWarning: You passed a `dataset_text_field` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
```
### Additional Information
I am not sure why this pops up. I know this is just a UserWarning and model is able to fine-tune ok, but is anything being affected?
|
https://github.com/huggingface/autotrain-advanced/issues/728
|
closed
|
[
"bug"
] | 2024-08-13T05:00:10Z
| 2024-08-13T12:31:19Z
| null |
jackswl
|
huggingface/diffusers
| 9,164
|
the dog example of train_dreambooth_lora_flux.py can not convergence
|
### Describe the bug
```
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--mixed_precision="bf16" \
--instance_prompt="a photo of sks dog" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--learning_rate=1e-5 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--validation_prompt="A photo of sks dog in a bucket" \
--validation_epochs=25 \
--seed="0" \
--push_to_hub
```
I follow this command to train lora of flux-dev and download the dog-example from huggingFace, but this setting could not get better result, the loss is normal

the dog-example look like this:

but my result look like below:

and don't use the lora to generate image of the same prompt look like below:

### Reproduction
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("/opt/ml/volume/default/aigc/project/FLUX.1-dev",torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
pipe.lora_state_dict("/opt/ml/volume/default/aigc/project/diffusers/examples/dreambooth/trained-flux-lora/checkpoint-500")
prompts = []
prompts.append("an sks dog")
index = 0
for prompt in prompts:
image = pipe(
prompt=prompt,
num_inference_steps=20,
guidance_scale=7.5,
max_sequence_length=512,
width=1152,
height=768
).images[0]
save_file = "dog"+str(index)+'.png'
index+=1
image.save(save_file)
```
### Logs
_No response_
### System Info
ubuntu 20.04
### Who can help?
@sayakpaul @linoytsaban
|
https://github.com/huggingface/diffusers/issues/9164
|
closed
|
[
"bug"
] | 2024-08-13T03:08:10Z
| 2024-08-13T10:23:23Z
| 7
|
chongxian
|
pytorch/xla
| 7,837
|
Make `tpu-info` more visible to the community
|
## 📚 Documentation
We highlighted tpu-info in the [PyTorch/XLA 2.4 release](https://cloud.google.com/blog/products/ai-machine-learning/pytorch-xla-2-4-improves-pallas-and-adds-eager-mode?e=13802955). I understand we have a [CoLab demo page](https://colab.sandbox.google.com/drive/1aMYTONPE4f3BtZpRq1_jPcRcIiSKtoY9?usp=drive_open#scrollTo=ZqjPdg3XlTnG) to help users set up and use `tpu-info`.
A quick search on the web, however, shows no pointer to the instructions on how to set up `tpu-info`. I suggest we publish a guide that brings this feature to the forefront. cc @duncantech
Similarly, PyTorchXLA docker images benefit from having this feature built-in. Can we add it to our docker nightly/release setup flow?
Do we have plans to make `tpu-info` a standalong installation package?
|
https://github.com/pytorch/xla/issues/7837
|
closed
|
[
"usability"
] | 2024-08-12T19:17:30Z
| 2024-08-17T06:39:58Z
| 5
|
miladm
|
huggingface/text-embeddings-inference
| 380
|
How do i deploy to vertex ?
|
How do i deploy to vertex ? I think i saw some feature=google setting in code which supports compatibility with vertex . Please guide.
|
https://github.com/huggingface/text-embeddings-inference/issues/380
|
closed
|
[] | 2024-08-12T17:15:30Z
| 2024-10-17T10:19:02Z
| null |
pulkitmehtaworkmetacube
|
pytorch/vision
| 8,585
|
Cant find nms function in code?
|
### 🐛 Describe the bug
I am looking for a method in torch, but for the love of god I can not not find the function definition!
The reason I need to find it is that I need to get rid of the torch dependency and I want to try to convert it into numpy.
I am speaking about torchvision.ops.nms()
This method is located in torchvision/ops/boxes.py and returns torch.ops.torchvision.nms().
This method is generated code which can be found in torch/_ops.py where they initialize ops with ops: _Ops = _Ops().
Thats the point where I am lost, the class is located in the same file, but I cant figure out which library it calls to get the nms() method.
Please help me :frowning:
### Versions
Latest
|
https://github.com/pytorch/vision/issues/8585
|
closed
|
[] | 2024-08-12T12:17:23Z
| 2024-08-12T12:26:58Z
| 1
|
asusdisciple
|
pytorch/xla
| 7,832
|
80B model how to shard restore in spmd training
|
## ❓ Questions and Help
In pytorch we can use `fsdp meta init` shard restore my big model(like have 80B parameters),in torch_xla i only find shard save like use this.https://github.com/pytorch/xla/blob/master/torch_xla/experimental/distributed_checkpoint/manager.py#L257.
Is there a way to recover the original pytorch model parameters in pieces during spmd training, and when saving the model, save it in the original pytorch format for inference
|
https://github.com/pytorch/xla/issues/7832
|
closed
|
[
"question",
"distributed"
] | 2024-08-12T11:52:00Z
| 2025-04-01T12:50:25Z
| null |
mars1248
|
pytorch/pytorch
| 133,205
|
How to use libtorch in a c++11 project?
|
### 🐛 Describe the bug
c++14_warning.h:32:2: 错误:#error This file requires compiler and library support for the forthcoming ISO C++ 2014 standard. This support is currently experimental, and must be enabled with the -std=c++1y or -std=gnu++1y compiler options.
#error This file requires compiler and library support for the forthcoming \
### Versions
PyTorch version: 1.12.0a0+git664058f
OS: CentOS release 6.9 (Final) (x86_64)
GCC version: (GCC) 5.4.0
Clang version: 3.4.2 (tags/RELEASE_34/dot2-final)
CMake version: version 3.21.3
Libc version: glibc-2.10
在使用libtorch构建自己的工程时报错,我的工程是c++11(不可升级) ,有没有办法使用libtorch?
cc @svekars @brycebortree @jbschlosser @seemethere @malfet @osalpekar @atalman
|
https://github.com/pytorch/pytorch/issues/133205
|
closed
|
[
"module: docs",
"module: cpp",
"triaged"
] | 2024-08-12T08:45:32Z
| 2024-09-24T02:03:36Z
| null |
zhb0920
|
huggingface/trl
| 1,916
|
How to Add PEFT to PPO Trainer or PPO Config
|
I am trying to realize RLHF through PPO.
May I ask how can I realize PEFT in RLHF/PPO. I can see this parameter in DPOTrainer. However, I cannot see that in PPOTrainer.
|
https://github.com/huggingface/trl/issues/1916
|
closed
|
[
"✨ enhancement",
"🧒 good second issue",
"🏋 PPO"
] | 2024-08-12T01:02:07Z
| 2024-11-18T10:54:10Z
| null |
ZhichaoWang970201
|
huggingface/trl
| 1,915
|
How to dpo llava?
|
Thank you for great work!
I do dpo llava using raw `/trl/examples/scripts/dpo_visual.py` code by using a command
`CUDA_VISIBLE_DEVICES=0 accelerate launch examples/scripts/dpo_visual.py --dataset_name HuggingFaceH4/rlaif-v_formatted --model_name_or_path llava-hf/llava-1.5-7b-hf --per_device_train_batch_size 1 --gradient_accumulation_steps 64 --dataset_num_proc 32 --output_dir dpo_llava --bf16 --torch_dtype bfloat16 --gradient_checkpointing --use_peft --lora_target_modules=all-linear`
however I got a error such as
> multiprocess.pool.RemoteTraceback:
> """
> Traceback (most recent call last):
> File "/root/anaconda3/lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
> result = (True, func(*args, **kwds))
> ^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
> for i, result in enumerate(func(**kwargs)):
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3522, in _map_single
> example = apply_function_on_filtered_inputs(example, i, offset=offset)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
> processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/trl/trainer/dpo_trainer.py", line 808, in tokenize_row
> prompt_tokens = self.processor(prompt, images=images, add_special_tokens=False)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> TypeError: LlavaProcessor.__call__() got an unexpected keyword argument 'add_special_tokens'
> """
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/trl/examples/scripts/dpo_visual.py", line 178, in <module>
> trainer = DPOTrainer(
> ^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
> return f(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/trl/trainer/dpo_trainer.py", line 529, in __init__
> train_dataset = train_dataset.map(self.tokenize_row, num_proc=self.dataset_num_proc, writer_batch_size=10)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3253, in map
> for rank, done, content in iflatmap_unordered(
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 718, in iflatmap_unordered
> [async_result.get(timeout=0.05) for async_result in async_results]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/multiprocess/pool.py", line 774, in get
> raise self._value
> TypeError: LlavaProcessor.__call__() got an unexpected keyword argument 'add_special_tokens'
> Traceback (most recent call last):
> File "/root/anaconda3/bin/accelerate", line 8, in <module>
> sys.exit(main())
> ^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
> args.func(args)
> File "/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py", line 1106, in launch_command
> simple_launcher(args)
> File "/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py", line 704, in simple_launcher
> raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
> subprocess.CalledProcessError: Command '['/root/anaconda3/bin/python', 'examples/scripts/dpo_visual.py', '--dataset_name', 'HuggingFaceH4/rlaif-v_formatted', '--model_name_or_path', 'llava-hf/llava-1.5-7b-hf', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '64', '--dataset_num_proc', '32', '--output_dir', 'dpo_llava', '--bf16', '--torch_dtype', 'bfloat16', '--gradient_checkpointing', '--use_peft', '--lora_target_modules=all-linear']' returned non-zero exit status 1.
Is there a solution?
|
https://github.com/huggingface/trl/issues/1915
|
closed
|
[] | 2024-08-11T00:57:38Z
| 2024-08-11T01:23:16Z
| null |
ooooohira
|
huggingface/transformers.js
| 887
|
VSCode Interpolation
|
### Question
I'm finding that VSCode is extremely slow when reading type definitions from the `@xenova/transformers` path. Is there anything I might be doing wrong? I've noticed that it uses JS comments to define the types instead of a type definition file, is the issue I am having a known issue with using that type of markup?
|
https://github.com/huggingface/transformers.js/issues/887
|
closed
|
[
"question"
] | 2024-08-11T00:08:30Z
| 2024-08-25T01:55:36Z
| null |
lukemovement
|
huggingface/diffusers
| 9,140
|
Diffusers model not working as good as repo ckpt model
|
Hi,
When I try to run the models stable diffusion v1-5 or Instructpix2pix through the diffusers pipeline and use .from_pretrained() it downloads the models from hugging face and I'm using the code to run inference given in hugging face, the results are not good at all in the sense that there is still noise in the generated images.
But when I run these models using their GitHub repo code and ckpt models given by them the outputs are very good.
Is there any solution to this or any other way to use the diffusers library pipeline.
Also the diffusers.StableDiffusionInstructPix2PixPipeline does not have .from_single_file() option.
Thank you
|
https://github.com/huggingface/diffusers/issues/9140
|
closed
|
[
"stale"
] | 2024-08-09T09:34:30Z
| 2024-12-14T12:13:15Z
| 6
|
kunalkathare
|
pytorch/TensorRT
| 3,075
|
❓ [Question] failed to run the `examples/dynamo/vgg16_fp8_ptq.y` example
|
## ❓ Question
I'm trying to run the `examples/dynamo/vgg16_fp8_ptq.y` example but got following error:
```
Traceback (most recent call last):
File "/home/wh/generative_action/SynHSI/vgg_quat.py", line 232, in <module>
exp_program = torch.export.export(model, (input_tensor,))
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/__init__.py", line 174, in export
return _export(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1066, in wrapper
raise e
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1039, in wrapper
ep = fn(*args, **kwargs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/exported_program.py", line 100, in wrapper
return fn(*args, **kwargs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 2034, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1273, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1412, in _strict_export_lower_to_aten_ir
aten_export_artifact = lower_to_aten_callback(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 633, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1194, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1426, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 429, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(flat_fn, flat_args, aot_config)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 730, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 105, in aot_dispatch_export
graph, _, _ = aot_dispatch_base_graph(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 138, in aot_dispatch_base_graph
fw_module = _create_graph(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 46, in _create_graph
fx_g = make_fx(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1805, in wrapped
return make_fx_tracer.trace(f, *args)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1751, in trace
return self._trace_inner(f, *args)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1737, in _trace_inner
t = dispatch_trace(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_compile.py", line 31, in inner
return disable_fn(*args, **kwargs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 631, in _fn
return fn(*args, **kwargs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 899, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1392, in trace
res = super().trace(root, concrete_args)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 631, in _fn
return fn(*args, **kwargs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 920, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 403, in _functionalized_f_
|
https://github.com/pytorch/TensorRT/issues/3075
|
open
|
[
"question"
] | 2024-08-09T08:01:14Z
| 2024-08-23T22:06:56Z
| null |
broken-dream
|
pytorch/xla
| 7,823
|
[XLA:GPU compile Error] nvcc fatal : Unsupported gpu architecture 'compute_35'
|
detail:
NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4
for Kepler GPUs are removed from CUDA 12.x. how can i compile torch_xla for gpu in CUDA Version 12.X(GPU guide use CUDA12.X). really confused. thanks for reply.

original issue:https://github.com/pytorch/xla/issues/7783
|
https://github.com/pytorch/xla/issues/7823
|
closed
|
[
"bug",
"xla:gpu",
"build"
] | 2024-08-09T07:50:59Z
| 2025-04-01T12:53:16Z
| 3
|
FatJhon
|
huggingface/diffusers
| 9,136
|
IP adapter output on some resolutions suffers in quality?
|
### Describe the bug
I am running IP adapter for 768x1344 which is one of the sdxl listed resolutions. I find that the output quality is much less than say regular 768x768 generations. I've attached sample images and code below. In this experiment 1080x768 seemed to get best output, but its not one of the supported resolutions @asomo





### Reproduction
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, UniPCMultistepScheduler
from diffusers.image_processor import IPAdapterMaskProcessor
from transformers import CLIPVisionModelWithProjection
from controlnet_aux import AnylineDetector
import cv2
import numpy as np
from PIL import Image, ImageOps
from huggingface_hub import hf_hub_download
def create_controlnet_pipes(image_encoder=None)->StableDiffusionXLControlNetPipeline:
## get controlnet
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0",
torch_dtype=torch.float16,
use_safetensors=True,
)
pipe = StableDiffusionXLPipeline.from_single_file(
"sdxl model path",
add_watermarker=False,
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True,
image_encoder=image_encoder,
)
pipe = StableDiffusionXLControlNetPipeline(
controlnet=controlnet,
**pipe.components,
add_watermarker=False,
)
pipe = pipe.to("cuda")
return pipe
def canny(image):
image = np.array(image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
return Image.fromarray(image)
if __name__ == '__main__':
## crop different values like 0,0,1080,768 or 0,0,1280,768
ref_image = Image.open('images/fridge_fg.png').crop((0,0,1344,768))
bg_ref_image = Image.open('images/fridge_bg.png').crop((0,0,1344,768))
mask_new = Image.open('images/fridge_mask.png').convert('L').crop((0,0,1344,768))
inv_mask = Image.open('images/fridge_inv_mask.png').convert('L').crop((0,0,1344,768))
processor = IPAdapterMaskProcessor()
mask_fg = processor.preprocess([mask_new])
mask_fg = mask_fg.reshape(1, mask_fg.shape[0], mask_fg.shape[2], mask_fg.shape[3])
mask_bg = processor.preprocess([inv_mask])
mask_bg = mask_bg.reshape(1, mask_bg.shape[0], mask_bg.shape[2], mask_bg.shape[3])
canny_pil = Image.open('images/fridge_canny.png').crop((0,0,1344,768))
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
"h94/IP-Adapter",
subfolder="models/image_encoder",
torch_dtype=torch.float16
)
pipe = create_controlnet_pipes(image_encoder=image_encoder)
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus_sdxl_vit-h.safetensors"], use_safetensors=True)
scale_config_fg = {'down':1, 'mid':1, 'up':1}
scale_config_bg = {"down":0.7, 'mid':0.7, 'up':0.7}
pipe.set_ip_adapter_scale([scale_config_fg, scale_config_bg])
for idx in range(5):
outputs = pipe(
prompt='kitchen scene',
image=canny_pil,
ip_adapter_image=[ref_image, bg_ref_image],
negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality, fuzzy, blurry",
guidance_scale=5,
num_inference_steps=30,
controlnet_conditioning_scale=0.53,
cross_attention_kwargs={"ip_adapter_masks": [mask_fg, mask_bg]},
num_images_per_prompt=1
# generator=generator,
).images
for image in outputs:
image.save(<path>)
# image.save(f'output_plus/fridge_ar_ctrlnet_1280_plus_{idx}.png')
print('done')
pipe.unload_ip_adapter()
### Logs
_No response_
### System Info
v0.28.2 diffusers
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9136
|
open
|
[
"bug",
"stale"
] | 2024-08-09T06:36:39Z
| 2024-09-14T15:03:17Z
| 2
|
darshats
|
pytorch/text
| 2,270
|
undefined symbol
|
## undefined symbol
PyTorch version 2.1.2
I am looking for a version of torchtext that will work with PyTorch 2.1.2. I have tried every version from 0.16.0 to 0.18.0
Each version of torchtext has some version of undefined symbol.
```
python -c "import torchtext"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/__init__.py", line 6, in <module>
from torchtext import _extension # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/_extension.py", line 64, in <module>
_init_extension()
File "/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/_extension.py", line 58, in _init_extension
_load_lib("libtorchtext")
File "/app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/_extension.py", line 50, in _load_lib
torch.ops.load_library(path)
File "/app/software/PyTorch/2.1.2-foss-2023a/lib/python3.11/site-packages/torch/_ops.py", line 852, in load_library
ctypes.CDLL(path)
File "/app/software/Python/3.11.3-GCCcore-12.3.0/lib/python3.11/ctypes/__init__.py", line 376, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: /app/software/scGPT/0.2.1-foss-2023a/lib/python3.11/site-packages/torchtext/lib/libtorchtext.so: undefined symbol: _ZN5torch6detail10class_baseC2ERKSsS3_SsRKSt9type_infoS6_
```
|
https://github.com/pytorch/text/issues/2270
|
open
|
[] | 2024-08-08T23:25:46Z
| 2024-09-18T09:00:08Z
| 1
|
fizwit
|
pytorch/vision
| 8,570
|
RandomPhotometricDistort has undocumented channel shuffle feature
|
### 🐛 Describe the bug
The documentation for RandomPhotometricDistort neither exposes the channel shuffle behavior as a parameter or lists in the description that this is a possibility.
https://pytorch.org/vision/stable/generated/torchvision.transforms.v2.RandomPhotometricDistort.html#torchvision.transforms.v2.RandomPhotometricDistort
I was trying to use this as convince for randomly brightness and contrast operations, but I got unexpected breaking channel swaps as well.
The best course of action could be to expose a boolean true/false parameter on whether to do channel swaps or not.
### Versions
0.19 stable documentation
|
https://github.com/pytorch/vision/issues/8570
|
closed
|
[] | 2024-08-08T19:14:05Z
| 2024-08-13T02:50:14Z
| 1
|
chadrockey
|
huggingface/transformers.js
| 885
|
TimeSformer on the web
|
### Question
Glad to see this repo! If I want to use TimeSformer on the web, any suggestion or guide for it? Where can I learn from this repo or it's a totally different things? Thanks in advance!
|
https://github.com/huggingface/transformers.js/issues/885
|
open
|
[
"question"
] | 2024-08-08T17:59:13Z
| 2024-08-11T09:02:47Z
| null |
tomhsiao1260
|
pytorch/functorch
| 1,146
|
Strange behaviour of autograd.functional.jacobian when vectorize=True and strategy=‘forward-mode’
|
I calculate the Jacobian of a neural network with respect to its 14 input variables. The network has an output of 9015, meaning I have 126210 gradients. Because I have some complex calculations in my neural network I cannot use jacrev/jacfwd, see [ jacfwd and jacrev are fundamentally broken for complex inputs #94397 ](https://github.com/pytorch/pytorch/issues/94397).
Therefore I am using autograd.functional.jacobian with default settings which works perfectly fine but calculating the Jacobian takes approx. 16 seconds. Since I have to calculate the Jacobian several times during an iteration that I run I have to speed up this process. I do all the calculations on my GPU.
I set vectorize=True and strategy=‘forward-mode’ it works (also with 0.05sec) but after 30 iterations it stops and says ‘torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU’.
I am aware of the performance <-> memory tradeoff as described by @zou3519 in [ Cuda Memory Overflow in Jacobian Computation #1058 ](https://github.com/pytorch/functorch/issues/1058) but this seems a bit drastic as the error only occurs after some iterations. The first few work perfectly fine.
Here is a minimal example of my code:
`x = torch.rand((1, 601, 20)).to(device)
initial_guess = torch.rand((1, 14)).to(device)
y = torch.rand((1, 601, 15)).to(device)
model = ....to(device)
model.load_state_dict(torch.load(model_location + 'model_weights.pth', map_location=device))
model.eval()
def get_jacobian(neural_net, input1, input2):
def partial_forward(diff_inp):
return neural_net(input1, diff_inp)
return autograd.functional.jacobian(partial_forward, input2, strategy='forward-mode', vectorize=True).to(device)
def method(neural_net, input1, input2, result, nb_it):
for i in range(nb_it):
jac_placeholder = torch.zeros(result.shape[0], result.shape[1], result.shape[2],
input2.shape[1]).to(device)
print(torch.cuda.memory_summary(device=None, abbreviated=False))
jac = get_jacobian(neural_net, input1, input2)
diff = neural_net(input1, input2) - result
for j in range(result.shape[0]):
jac_placeholder[j, :, :, :] = jac[j, :, :, j, :]
true_jac = torch.permute(jac_placeholder, (0, 3, 1, 2))
mul = torch.einsum('bptx,btx->bp', true_jac, diff).to(device)
input2 = input2 - mul
torch.cuda.empty_cache()
method(model, x1, x2, y, 3000)`
**Edit 1:**
The error occurs because of the line 'input2 = input2 - mul' but it is unclear to me why this happens.
**Edit 2:**
I was able to find the error. There was a with torch.no_grad() missing around
`jac = get_jacobian(neural_net, input1, input2)
diff = neural_net(input1, input2) - result`
meaning it was also calculating the networks weight gradients...
The RAM usage is now low but the CPU still runs on 100% altough I have everything on the GPU. This still baffles me.
|
https://github.com/pytorch/functorch/issues/1146
|
closed
|
[] | 2024-08-08T12:51:16Z
| 2024-08-09T11:27:07Z
| 0
|
dezenn
|
pytorch/TensorRT
| 3,073
|
❓ Cannot figure out the following error: AttributeError: module 'torch_tensorrt' has no attribute 'ptq'.
|
## ❓ Question
I am encountering an AttributeError when trying to use the ptq module from Torch-TensorRT on google colab.
I am attempting to run this line of code
calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(...)
## Environment
- PyTorch Version (e.g., 1.0): 2.4.0+cu121
- CUDA Version: 12.2
- Python version : 3.10.12
- torch_tensorrt version : 2.4.0
|
https://github.com/pytorch/TensorRT/issues/3073
|
closed
|
[
"question"
] | 2024-08-08T11:37:08Z
| 2024-08-09T06:04:57Z
| null |
ImaanIbrar
|
huggingface/cookbook
| 163
|
Incorrect markdown table rendering in Colab in "How to use Inference Endpoints to Embed Documents"
|
There is an issue with the rendering of the Inference Endpoints table in Colab in [How to use Inference Endpoints to Embed Documents](https://huggingface.co/learn/cookbook/automatic_embedding_tei_inference_endpoints). Although the table correctly renders on HF cookbook webpage:
<img width="610" alt="image" src="https://github.com/user-attachments/assets/e32731fb-31e1-4a5d-8a35-a230b1bea50c">
when opening with Colab with the upper "Open in Colab" button, the rows are rendered incorrectly:
<img width="583" alt="image" src="https://github.com/user-attachments/assets/65d76a12-bd4d-41ce-93d9-4c0b19986bdf">
|
https://github.com/huggingface/cookbook/issues/163
|
closed
|
[] | 2024-08-08T11:16:40Z
| 2024-08-08T16:22:48Z
| null |
sergiopaniego
|
huggingface/alignment-handbook
| 192
|
Constant training loss in the model adapter card
|
Hello,
I could fine-tune a model using a small dataset and I see that the validation loss decreases, while the training loss remains the same in the model card.
I don't think this is normal, even though the new task I try to teach the model is similar to what it already does, I think it should be able to learn from the dataset. I took a look at the trainer_state.json file created during the fine-tuning process and I saw that the training_loss for step 2 is different from the one displayed in the model card.
**Results from model_card:**
|Training Loss | Epoch | Step | Validation Loss|
|-------|-------|-------|-------|
|1.3185 | 1.0 | 1 | 1.4256|
|1.3185 | 1.1429 | 2 | 1.3196|
**Results from the trainer_state.json:**
"log_history": [
{
"epoch": 1.0,
"grad_norm": 1.1992276906967163,
"learning_rate": 0.0002,
"loss": 1.3185,
"step": 1
},
{
"epoch": 1.0,
"eval_loss": 1.4256268739700317,
"eval_runtime": 1.7474,
"eval_samples_per_second": 1.145,
"eval_steps_per_second": 0.572,
"step": 1
},
{
"epoch": 1.1428571428571428,
"eval_loss": 1.3196333646774292,
"eval_runtime": 1.552,
"eval_samples_per_second": 1.289,
"eval_steps_per_second": 0.644,
"step": 2
},
{
"epoch": 1.1428571428571428,
"step": 2,
"total_flos": 823612516859904.0,
"train_loss": 0.7439389228820801,
"train_runtime": 27.974,
"train_samples_per_second": 0.5,
"train_steps_per_second": 0.071
}
Does the training loss remain the same, or is there a problem with the model card generation?
Have a nice day!
|
https://github.com/huggingface/alignment-handbook/issues/192
|
closed
|
[] | 2024-08-08T09:35:40Z
| 2024-08-08T13:29:00Z
| 1
|
Michelet-Gaetan
|
huggingface/optimum
| 1,985
|
Correct example to use TensorRT?
|
### System Info
```shell
optimum: 1.20.0
os: ubuntu 20.04 with RTX 2080TI
python: 3.10.14
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
I followed the doc [here](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/gpu#tensorrtexecutionprovider). The below is my code:
```python
from transformers import AutoProcessor
from optimum.onnxruntime import ORTModelForVision2Seq
model = 'facebook/nougat-small'
ort_model = ORTModelForVision2Seq.from_pretrained(
"facebook/nougat-small",
export=True,
provider="TensorrtExecutionProvider",
)
assert ort_model.providers == ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
processor = AutoProcessor.from_pretrained(model)
ort_model.save_pretrained('./nougat-small-trt')
processor.save_pretrained('./nougat-small-trt')
```
When running the code, the terminal looks like:
```
2024-08-08 16:31:02.881585368 [W:onnxruntime:Default, tensorrt_execution_provider.h:83 log] [2024-08-08 08:31:02 WARNING] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
```
I waited for almost half an hour for exporting the model (RTX 2080TI). However, when I loaded it by the below code, it just repeated the same thing.
```python
session_options = ort.SessionOptions()
session_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
session_options.log_severity_level = 3
trt_engine_cache = './nougat-small-trt-cache'
os.makedirs(trt_engine_cache, exist_ok=True)
provider_options = {
'trt_engine_cache_enable': True,
'trt_engine_cache_path': trt_engine_cache
}
self.model = ORTModelForVision2Seq.from_pretrained(
model,
provider='TensorrtExecutionProvider',
provider_options=provider_options,
session_options=session_options,
)
```
Therefore, I want to know whether Optimum supports TensorRT or not. Or there is something wrong with the official doc to run TensorRT.
### Expected behavior
When loading the converted model by TensorRT, optimum should not repeat the converting process again.
|
https://github.com/huggingface/optimum/issues/1985
|
open
|
[
"bug"
] | 2024-08-08T08:46:14Z
| 2024-08-29T11:24:35Z
| 2
|
sherlcok314159
|
huggingface/diffusers
| 9,127
|
flux.1-dev device_map didn't work
|
I try to use device_map to use multiple gpu's, but it not worked, how can I use all my gpus?
|
https://github.com/huggingface/diffusers/issues/9127
|
closed
|
[] | 2024-08-08T08:30:33Z
| 2024-11-26T02:11:03Z
| 33
|
hznnnnnn
|
pytorch/tutorials
| 2,994
|
[Reinforcement Learning] - help on cartpull tutorial
|
hello im completely new to machine learning and just trying to learn. im getting this warning an none of the figure are showing up (libEGL warning: DRI2: failed to authenticate) does anyone know what i could be missing or what might be the cause? im running this in unraid on a VM with a graphics card passed thru with its drivers installed. using Ubuntu 24.04 LTS fresh install, plz help thanks.
cc @vmoens @nairbv
|
https://github.com/pytorch/tutorials/issues/2994
|
closed
|
[
"question"
] | 2024-08-08T03:33:14Z
| 2024-08-09T03:47:52Z
| null |
Misticfury
|
pytorch/vision
| 8,569
|
Allow ffmpeg-python backend for torchvision.io.write_video?
|
### 🚀 The feature
Create another backend for torchvision.io.write_video which uses ffmpeg-python as a backend, but which otherwise has exactly the same interface/functionality.
### Motivation, pitch
torchvision.io.write_video currently calls PyAV, which in turn is a wrapper for ffmpeg. [PyAV has an issue](https://github.com/PyAV-Org/PyAV/issues/371) which seems still unresolved where setting the CRF (constant rate factor) through the options has no effect. [This issue has been referenced as recently as March of this year](https://github.com/imageio/imageio/issues/1062). As far as I can tell, adjusting CRF is the canonical way to tune a video's level of compression. Adding support for ffmpeg-python as a backend would let users tune CRF, which would allow arbitrary levels of compression.
### Alternatives
If there is some other set of options which can be passed to write_video to alter the level of compression, that would be an acceptable alternative (at least for my use-case). In this case, it would be ideal to include this alternative set of options in the write_video documentation as an example.
### Additional context
I already kind of got it working in a notebook, but it's missing support for audio and such.
```
# Define output video parameters
output_filename = 'output_video.mp4'
fps = 30
codec = 'libx264'
# Create the input process from the NumPy array
process1 = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(video_array.shape[2], video_array.shape[1]))
.output(output_filename, pix_fmt='yuv420p', r=fps, vcodec=codec, crf=10)
.overwrite_output()
.run_async(pipe_stdin=True)
)
# Write the NumPy array to the input pipe
for frame in video_array:
process1.stdin.write(frame.tobytes())
# Close the input pipe
process1.stdin.close()
# Wait for the ffmpeg process to finish
process1.wait()
```
crf=10 produces something good-looking, while crf=50 produces something very compressed-looking as expected.
|
https://github.com/pytorch/vision/issues/8569
|
closed
|
[] | 2024-08-08T01:14:07Z
| 2024-10-11T11:53:49Z
| 1
|
adaGrad1
|
huggingface/diffusers
| 9,120
|
[ar] Translating docs to Arabic (العربية)
|
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about Diffusers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).
Thank you so much for your help! 🤗
|
https://github.com/huggingface/diffusers/issues/9120
|
closed
|
[] | 2024-08-07T21:04:54Z
| 2024-10-29T08:14:24Z
| 2
|
AhmedAlmaghz
|
huggingface/chat-ui
| 1,394
|
I need to reload to get the response
|

i am using LLama 3.1 70B to chat, but it is so slow to get response and i need to reload to get response , is it because the model is overload ?
|
https://github.com/huggingface/chat-ui/issues/1394
|
closed
|
[
"support"
] | 2024-08-07T09:31:03Z
| 2024-08-15T06:56:59Z
| 2
|
renaldy-therry
|
huggingface/chat-ui
| 1,393
|
Generation Error with Ollama - Inconsistent Output Generation
|
Hi,
I'm experiencing issues while running GEMMA2 on Ollama. Specifically, I'm encountering the following problems:
Error on Message Generation:
Whenever a new chat is created, every message results in the error:
Error: Generation failed, in the back end
No output is generated,on the front end.
Inconsistent Message Handling:
After retrying the same message multiple times (ranging from 2 to 15 attempts), the message is eventually processed correctly and the output is displayed on the front end.
Server Responsiveness:
Despite the above issues, the server responds to every query.
Expected Behavior:
Messages should be processed and output generated on the first attempt without errors.
Additional Context:
Ollama Version: 0.3.3
GEMMA2:2b (I've tried others models and the problem is the same)
Operating System: CentOS
Relevant Logs:
error message:
ERROR (537688): Generation failed
err: {
"type": "Error",
"message": "Generation failed",
"stack":
Error: Generation failed
at Module.generateFromDefaultEndpoint (/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:23:9)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async generateTitle (/chat-ui/src/lib/server/textGeneration/title.ts:54:10)
at async Module.generateTitleForConversation (/chat-ui/src/lib/server/textGeneration/title.ts:17:19)
Its something with the title of the conversation but retrying the message finally the conversations name is changed too. And messages after conversations name is changed have the same problem, rarely it works at first attempt.
My env.local:
MONGODB_URL="mongodb://localhost:27017"
HF_TOKEN=Mytoken
OPENAI_API_KEY="ollama"
MODELS=`[
{
"name": "google/gemma-2-2b-it",
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"max_new_tokens": 2048,
"stop": ["<end_of_turn>"]
},
"endpoints": [
{
"type": "ollama",
"baseURL": "http://127.0.0.1:11434",
"ollamaName" : "gemma2:2b"
}
]
},
]`
USE_LOCAL_WEBSEARCH=true
Any assistance in resolving this issue would be greatly appreciated. Thank you!
|
https://github.com/huggingface/chat-ui/issues/1393
|
open
|
[
"support"
] | 2024-08-07T09:02:19Z
| 2024-08-07T11:05:19Z
| 1
|
juanjuanignacio
|
huggingface/chat-ui
| 1,392
|
Cannot send the message and get response in hugging chat
|
I cannot send message and get a response from llm, and i cannot click "activate" to change model in huggingchat (https://huggingface.co/chat/)
|
https://github.com/huggingface/chat-ui/issues/1392
|
closed
|
[
"support",
"huggingchat"
] | 2024-08-07T08:37:01Z
| 2024-08-07T09:06:59Z
| 4
|
renaldy-therry
|
pytorch/executorch
| 4,579
|
how to realize the sliding window of kv cache?
|
hello,
now I want to realize the sliding window of kv cache, so dynamic allocation and reclamation of memory needs to be realized. could you please teach me how to realize the dynamic allocation and reclamation of memory in the transformer?
Thank you in advanced.
|
https://github.com/pytorch/executorch/issues/4579
|
closed
|
[] | 2024-08-07T07:05:42Z
| 2024-08-15T05:04:51Z
| null |
l2002924700
|
huggingface/text-embeddings-inference
| 371
|
how to support a SequenceClassification model
|
### Feature request
I have a model can be run by transformers.AutoModelForSequenceClassification.from_pretrained, how can i serve it in TEI
### Motivation
to support more models
### Your contribution
YES
|
https://github.com/huggingface/text-embeddings-inference/issues/371
|
closed
|
[] | 2024-08-06T10:45:00Z
| 2024-10-17T10:24:09Z
| null |
homily707
|
huggingface/chat-ui
| 1,387
|
CopyToClipBoardBtn in ChatMessage.svelte has a bug?
|
https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/chat/ChatMessage.svelte#L378-L384
When compared to other components, classNames is the only difference here.
When rendered, the icon appears faint in the browser.
Is there a reason for this, or is it a bug?
https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/CopyToClipBoardBtn.svelte#L37-L51
It seems that the classNames of IconCopy is the cause of the faintness.
|
https://github.com/huggingface/chat-ui/issues/1387
|
closed
|
[
"bug",
"good first issue",
"front"
] | 2024-08-06T04:59:45Z
| 2024-08-12T09:35:21Z
| 5
|
calycekr
|
huggingface/diffusers
| 9,092
|
Fluxpipeline report model_index.json not found
|
### Describe the bug
I use the Fluxpipeline and report no file model_index.json.
I read other issue and set the `revision="refs/pr/3"`,but it doesn't work, how can i do to solve this problem and how to use the T5xxl as text encoder? thanks for your help
### Reproduction
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("/opt/ml/volume/default/aigc/project/chanPin/models/flux", revision="refs/pr/3",torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
prompt = "a tiny astronaut hatching from an egg on the moon"
out = pipe(
prompt=prompt,
guidance_scale=3.5,
height=768,
width=1360,
num_inference_steps=50,
).images[0]
out.save("image.png")
```
### Logs
_No response_
### System Info
ubuntu 20.04
### Who can help?
@sayakpaul
|
https://github.com/huggingface/diffusers/issues/9092
|
closed
|
[
"bug"
] | 2024-08-06T01:48:40Z
| 2024-08-06T02:25:03Z
| 3
|
chongxian
|
huggingface/trl
| 1,900
|
How to speed up PPOTrainer .generate()?
|
During PPO, I'm finding that `.generate()` is extremely slow. The following call takes ~3 and a half minutes for batch size of 64 with a 1.4B parameter policy LM:
```
ppo_trainer.generate(
input_token_ids_list,
pad_token_id=policy_model_tokenizer.eos_token_id,
return_prompt=False,
**generation_config_dict,
)
```
How can I accelerate sampling? The same function call with `vllm` takes <30s for setup and execution, so I feel like I am doing something suboptimally.
|
https://github.com/huggingface/trl/issues/1900
|
closed
|
[] | 2024-08-05T18:35:31Z
| 2024-10-01T06:35:50Z
| null |
RylanSchaeffer
|
huggingface/chat-ui
| 1,386
|
System role problem running Gemma 2 on vLLM
|
Hello,
In running chat ui and trying some models, with phi3 and llama i had no problem but when I run gemma2 in vllm Im not able to make any good api request,
in env.local:
{
"name": "google/gemma-2-2b-it",
"id": "google/gemma-2-2b-it",
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 2048,
"stop": ["<end_of_turn>"]
},
"endpoints": [
{
"type": "openai",
"baseURL": "http://127.0.0.1:8000/v1",
}
]
}
and I always have the same response in vllm server:
ERROR 08-05 12:39:06 serving_chat.py:118] Error in applying chat template from request: System role not supported
INFO: 127.0.0.1:42142 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
do someone know if I have to change and how do change the chat template or deactivate system role ? is it a vllm problem or a chat ui problem?
Thank U!
|
https://github.com/huggingface/chat-ui/issues/1386
|
closed
|
[
"support"
] | 2024-08-05T13:22:10Z
| 2024-11-07T21:39:47Z
| 5
|
juanjuanignacio
|
pytorch/TensorRT
| 3,060
|
❓ [Question] function `torch._ops.aten.aten::_to_copy` not currently supported with dynamic input shape
|
## ❓ Question
I'm trying to compile a model with dynamic input shape but told that the `function torch._ops.aten.aten::_to_copy` is not currently supported:
```Traceback (most recent call last):
File "/home/wh/generative_action/SynHSI/test_module.py", line 325, in <module>
model = torch_tensorrt.compile(model, ir="dynamo", inputs=trt_inputs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/_compile.py", line 249, in compile
trt_graph_module = dynamo_compile(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 243, in compile
trt_gm = compile_module(gm, inputs, settings)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 431, in compile_module
trt_module = convert_module(
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 107, in convert_module
interpreter_result = interpret_module_to_result(module, inputs, settings)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 88, in interpret_module_to_result
interpreter_result = interpreter.run()
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 336, in run
self._construct_trt_network_def()
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 317, in _construct_trt_network_def
super().run()
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py", line 147, in run
self.env[node] = self.run_node(node)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 378, in run_node
trt_node: torch.fx.Node = super().run_node(n)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py", line 204, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 480, in call_function
raise UnsupportedOperatorException(
torch_tensorrt.dynamo.conversion._TRTInterpreter.UnsupportedOperatorException: Conversion of function torch._ops.aten.aten::_to_copy not currently supported!
```
the code caused this error is as follow:
`pi = self.positional_encoder.pos_encoding[pi.long()]`
where the `self.positional_encoder` is an instance of a customized implementation of the transformer position encoder:
```
class PositionalEncoding(nn.Module):
def __init__(self, dim_model, dropout_p, max_len):
super().__init__()
# Modified version from: https://pytorch.org/tutorials/beginner/transformer_tutorial.html
# max_len determines how far the position can have an effect on a token (window)
# Info
self.dropout = nn.Dropout(dropout_p)
# Encoding - From formula
pos_encoding = torch.zeros(max_len, dim_model)
positions_list = torch.arange(0, max_len, dtype=torch.float).reshape(-1, 1) # 0, 1, 2, 3, 4, 5
division_term = torch.exp(
torch.arange(0, dim_model, 2).float() * (-math.log(10000.0)) / dim_model) # 1000^(2i/dim_model)
# PE(pos, 2i) = sin(pos/1000^(2i/dim_model))
pos_encoding[:, 0::2] = torch.sin(positions_list * division_term)
# PE(pos, 2i + 1) = cos(pos/1000^(2i/dim_model))
pos_encoding[:, 1::2] = torch.cos(positions_list * division_term)
# Saving buffer (same as parameter without gradients needed)
pos_encoding = pos_encoding.unsqueeze(0).transpose(0, 1)
self.register_buffer("pos_encoding", pos_encoding)
def forward(self, token_embedding: torch.tensor) -> torch.tensor:
# Residual connection + pos encoding
return self.dropout(token_embedding + self.pos_encoding[:token_embedding.size(0), :])
```
## What you have already tried
The complete model is complicated so I have tried to implement a minimal reproducible example, but the compilation of a single `PositionalEncoding` model succeed. I also tried adding more context code but it still succeed. I'm unable to get a minimal reproducible example now.
I found this error only occurs with dynamic input shape. Compiling model with fixed input shape works well.
Besides, I noticed that [#2161](https://github.com/pytorch/TensorRT/pull/2161) had added the `_to_copy` converter, so I'm confused why it told me `_to_copy` is not supported, or maybe I misunderstand something?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch V
|
https://github.com/pytorch/TensorRT/issues/3060
|
open
|
[
"question"
] | 2024-08-05T12:20:32Z
| 2024-12-12T18:33:18Z
| null |
broken-dream
|
huggingface/optimum
| 1,981
|
[GPTQQuantizer] How to use multi-GPU for GPTQQuantizer?
|
### System Info
```shell
hello:
I encountered an out-of-memory error while attempting to quantize a model using GPTQQuantizer. The error seems to be related to the large size of the model weights. Below is the quantization code I used:
from optimum.gptq import GPTQQuantizer
quantizer = GPTQQuantizer(
bits=4,
dataset='wikitext2',
block_name_to_quantize=decoder.layers,
disable_exllama=False,
damp_percent=0.1,
group_size=128
)
The error message I received is as follows:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 784.00 MiB. GPU 0 has a total capacty of 10.90 GiB of which 770.44 MiB is free. Including non-PyTorch memory
Environment:
· Transformers version: 4.43.2
· Optimum version: 1.21.2
· GPU model and memory: 11GiB * 2
· CUDA version: 12.4
Question:How to use multi-GPU for GPTQQuantizer? thank you!
```
### Who can help?
@kashif @srush @danieldk @mausch @dmaniloff How to use multi-GPU for GPTQQuantizer?
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
from optimum.gptq import GPTQQuantizer
```python
quantizer = GPTQQuantizer(
bits=4,
dataset='wikitext2',
block_name_to_quantize=decoder.layers,
disable_exllama=False,
damp_percent=0.1,
group_size=128
)
```
### Expected behavior
use multi-GPU for GPTQQuantizer?
|
https://github.com/huggingface/optimum/issues/1981
|
closed
|
[
"bug"
] | 2024-08-05T07:58:11Z
| 2024-08-08T02:19:18Z
| null |
RunTian1
|
huggingface/datasets
| 7,087
|
Unable to create dataset card for Lushootseed language
|
### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?
### Motivation
I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned collecting Lushootseed documents.
### Your contribution
I can submit a pull request
|
https://github.com/huggingface/datasets/issues/7087
|
closed
|
[
"enhancement"
] | 2024-08-04T14:27:04Z
| 2024-08-06T06:59:23Z
| 2
|
vaishnavsudarshan
|
huggingface/diffusers
| 9,076
|
Add a better version of 'callback_on_step_end' for FluxPipeline
|
**Is your feature request related to a problem? Please describe.**
There is a huge delay before starting the inference and once the 4th step is complete and there is no callback for that and it feels like it is stuck, just want a more responsive version.
```
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
guidance_scale=0.0,
output_type="pil",
num_inference_steps=4,
max_sequence_length=256,
generator=torch.Generator("cuda").manual_seed(0)
).images[0]
print('started saving file')
image.save("flux-schnell.png")
```
If you run the above code, it feels like you are stuck at step 0 and then after 4/4 is done
I am using a 48GB A40
**Describe the solution you'd like.**
Can we get some kind of callback for these two delays as well
|
https://github.com/huggingface/diffusers/issues/9076
|
closed
|
[
"stale"
] | 2024-08-04T10:34:04Z
| 2024-11-23T00:24:14Z
| 3
|
nayan-dhabarde
|
pytorch/data
| 1,309
|
what's the exact plan for torchdata now?
|
hi, as a user of torchdata, i'm very happy to see the resurrection of the project.
i have a question about the development plan. from the README, i see:
> torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader
this is somewhat surprising. although the current Datapipes seem to have various issues underneath the shell, so far, Datapipes ARE torchdata. the current API reference:
> API Reference:
>
> [Stateful DataLoader](https://pytorch.org/data/beta/torchdata.stateful_dataloader.html)
> [Iterable-style DataPipes](https://pytorch.org/data/beta/torchdata.datapipes.iter.html)
> [Map-style DataPipes](https://pytorch.org/data/beta/torchdata.datapipes.map.html)
> [Utility Functions](https://pytorch.org/data/beta/torchdata.datapipes.utils.html)
> [DataLoader2](https://pytorch.org/data/beta/dataloader2.html)
> [ReadingService](https://pytorch.org/data/beta/reading_service.html)
and this is it; i.e., until ver 0.7, torchdata == the datapipes and other necessary utilities (dataloader2 and reading service).
and that's why it is surprising for me, that while the development of torchdata has re-started, it is being done in a way it discards everything it had.
so, can i ask for a bit more details about what the new direction (enhancement of torch.utils.data.DataLoader)? or am i missing something here?
thanks.
|
https://github.com/meta-pytorch/data/issues/1309
|
closed
|
[] | 2024-08-04T00:25:26Z
| 2024-08-04T00:27:17Z
| 1
|
keunwoochoi
|
pytorch/xla
| 7,805
|
Kaggle Notebooks: TPU detected but wont use
|
## ❓ Questions and Help
Hi All,
I Have this code
```
import optuna
from torch.optim.lr_scheduler import ReduceLROnPlateau
# Assuming dataset is already defined
train_size = int(0.8 * len(dataset))
val_size = len(dataset) - train_size
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
def objective(trial):
device = xm.xla_device()
learning_rate = trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True)
dropout_prob = trial.suggest_float('dropout_prob', 0.2, 0.7)
batch_size = trial.suggest_int('batch_size', 2, 32)
optimizer_name = trial.suggest_categorical('optimizer', ['Adam', 'SGD'])
loss_fn_name = trial.suggest_categorical('loss_fn', ['DiceLoss', 'FocalLoss', 'CombinedLoss', 'BCEWithLogitsLoss'])
backbone = "resnet101"
model_name = "DeepLabV3Plus"
model = create_model(model_name, encoder_name=backbone, in_channels=3, classes=1)
model.to(device)
if optimizer_name == 'Adam':
optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=0.0001)
elif optimizer_name == 'SGD':
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=0.0001)
if loss_fn_name == 'DiceLoss':
loss_fn = DiceLoss()
elif loss_fn_name == 'FocalLoss':
loss_fn = FocalLoss()
elif loss_fn_name == 'CombinedLoss':
loss_fn = CombinedLoss()
elif loss_fn_name == 'BCEWithLogitsLoss':
pos_weight = torch.tensor([1.127], device=device)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
for module in model.modules():
if isinstance(module, nn.Conv2d):
module.add_module('dropout', nn.Dropout2d(dropout_prob))
scheduler = ReduceLROnPlateau(optimizer, mode='min', patience=3, factor=0.1)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
num_epochs = 5
best_loss = float('inf')
for epoch in range(num_epochs):
model.train()
train_losses = []
para_loader = pl.ParallelLoader(train_loader, [device])
for inputs, targets in tqdm(para_loader.per_device_loader(device), desc=f"Epoch {epoch+1}/{num_epochs} - Training"):
inputs, targets = inputs.to(device), targets.to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_fn(outputs, targets.float())
loss.backward()
xm.optimizer_step(optimizer)
train_losses.append(loss.item())
model.eval()
val_losses = []
para_loader = pl.ParallelLoader(val_loader, [device])
with torch.no_grad():
for inputs, targets in tqdm(para_loader.per_device_loader(device), desc=f"Epoch {epoch+1}/{num_epochs} - Validation"):
inputs, targets = inputs.to(device), targets.to(device)
outputs = model(inputs)
loss = loss_fn(outputs, targets.float())
val_losses.append(loss.item())
val_loss = np.mean(val_losses)
scheduler.step(val_loss)
if val_loss < best_loss:
best_loss = val_loss
return best_loss
# Save the study to a persistent storage
study_name = "my_study"
storage_name = f"sqlite:///example.db"
study = optuna.create_study(direction='minimize', study_name=study_name, storage=storage_name, load_if_exists=True)
study.optimize(objective, n_trials=15)
# Print the best hyperparameters
print('Best trial:')
trial = study.best_trial
print(f' Value: {trial.value}')
print(' Params: ')
for key, value in trial.params.items():
print(f' {key}: {value}')
```
However the even though the TPU is detected as `Using device: xla:0` It does not show in dashboard, and the TPU deactivates after while due to not been used.
Would anyone be able to help me with this matter please .
Thanks & Best Regards
AMJS
|
https://github.com/pytorch/xla/issues/7805
|
closed
|
[
"question",
"xla:tpu"
] | 2024-08-03T16:32:58Z
| 2025-04-01T12:55:08Z
| null |
MichaelSchroter
|
huggingface/diffusers
| 9,069
|
TypeError: expected np.ndarray (got numpy.ndarray)
|
### Describe the bug
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly vary.
# Refer to the pipeline documentation for more details.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.png")
```
with this code, it report the error as following:
```
(flux) xiangyu@gpu06:~/st/flux$ python gen.py
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
File "/scr/user/xiangyu/flux/gen.py", line 4, in <module>
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
loaded_sub_model = load_sub_model(
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py", line 157, in from_pretrained
return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 260, in from_config
model = cls(**init_dict)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 653, in inner_init
init(self, *args, **init_kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 76, in __init__
timesteps = torch.from_numpy(timesteps).to(dtype=torch.float32)
TypeError: expected np.ndarray (got numpy.ndarray)
```
### Reproduction
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly vary.
# Refer to the pipeline documentation for more details.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.png")
```
with this code, it report the error as following:
(flux) xiangyu@gpu06:~/st/flux$ python gen.py
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
File "/scr/user/xiangyu/flux/gen.py", line 4, in <module>
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
loaded_sub_model = load_sub_model(
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py", line 157, in from_pretrained
return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 260, in from_config
model = cls(**init_dict)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 653, in inner_init
init(self, *args, **init_kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 76, in __init__
ti
|
https://github.com/huggingface/diffusers/issues/9069
|
closed
|
[
"bug"
] | 2024-08-03T12:45:03Z
| 2024-10-27T06:43:32Z
| 11
|
xiangyumou
|
pytorch/torchchat
| 1,001
|
[Raspbian] streamlit GUI interface does not work / no documentation how to install
|
### 🐛 Describe the bug
from #985:
> 2. If you're interested in debugging the browser, feel free to spin up another issue with the error message from this
> > streamlit run torchchat.py -- browser llama3
Thanks, I will. I suspect it's pretty straightforward - there's no streamlit installed on my system. I assumed that your install script would install it, or tell me to install it if I needed that?!
```
$ streamlit
bash: streamlit: command not found
```
I have no idea what to install for / how to install streamlit, and even less so whether it's available for this platform. It wasn't high on my list, and so I moved on when it didn't work. (Was curious to try the GUI just for kicks, in case this was installed by default with the OS.)
Here's the Raspbian version I used:
```
$ uname -a
Linux raspberrypi 6.6.31+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.6.31-1+rpt1 (2024-05-29) aarch64 GNU/Linux
```
opening a separate issue as suggested in #985
### Versions
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
--2024-08-02 20:22:47-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23357 (23K) [text/plain]
Saving to: 'collect_env.py.1'
collect_env.py.1 100%[===================>] 22.81K --.-KB/s in 0.005s
2024-08-02 20:22:47 (4.57 MB/s) - 'collect_env.py.1' saved [23357/23357]
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux trixie/sid (aarch64)
GCC version: (Debian 13.3.0-3) 13.3.0
Clang version: 16.0.6 (27+b1)
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.11.2 (main, May 2 2024, 11:59:08) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.6.31+rpt-rpi-v8-aarch64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: ARM
Model name: Cortex-A76
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 4
Socket(s): -
Cluster(s): 1
Stepping: r4p1
CPU(s) scaling MHz: 100%
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 108.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
L1d cache: 256 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 2 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.10.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] types-flake8-2020==1.8
[pip3] types-flake8-bugbear==23.9.16
[pip3] types-flake8-builtins==2.2
[pip3] types-flake8-docstrings==1.7
[pip3] types-flake8-plugin-utils==1.3
[pip3] types-flake8-rst-docstrings==0.3
[pip3] types-flake8-simplify==0.21
[pip3] types-flake8-typing-imports==1.15
[pip3] types-mypy-extensions==1.0
[con
|
https://github.com/pytorch/torchchat/issues/1001
|
closed
|
[
"bug",
"Browser"
] | 2024-08-03T03:24:10Z
| 2024-08-06T00:32:41Z
| null |
sunshinesfbay
|
pytorch/pytorch
| 132,559
|
How to fix tensor.numpy() not supported for torch.export with strict=False
|
### 🐛 Describe the bug
This is trying to do a BE task to unblock https://github.com/pytorch/pytorch/pull/130977. The problem is very similar to https://github.com/pytorch/pytorch/pull/120261, though that one uses torch.export with strict=True.
# repro:
```
import numpy as np
import torch
class MyNumpyModel(torch.nn.Module):
def __init__(self):
super(MyNumpyModel, self).__init__()
def forward(self, input):
return input.numpy()
with torch._subclasses.FakeTensorMode():
model = MyNumpyModel()
_ = torch.export.export(model, args=(torch.randn(1000),), strict=False)
```
# Error:
```
RuntimeError:.numpy() is not supported for tensor subclasses.
```
# Attempt:
Inside tracing, the tensor is `FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1000,))))`, and applying `torch._numpy.ndarray` would turn it into `FunctionalTensor(_to_functional_torch.ndarray(FakeTensor(..., size=(1000,), dtype=float64)))`.
However, I don't know how to make it into a permanent fix.
### Versions
PyTorch version: 2.5.0a0+git0b7d6b3
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
|
https://github.com/pytorch/pytorch/issues/132559
|
open
|
[
"module: numpy",
"tensor subclass",
"module: functionalization",
"export-triage-review",
"oncall: export"
] | 2024-08-02T23:04:39Z
| 2024-08-06T18:42:05Z
| null |
henrylhtsang
|
pytorch/xla
| 7,803
|
[question] Seeking information on low-level TPU interaction and libtpu.so API
|
I'm looking to build an automatic differentiation library for TPUs without using high-level front-ends like TensorFlow/JAX/PyTorch-XLA, but I'm finding information about lower-level TPU usage is practically non-existent.
Specifically, I'm interested in:
1. How to interact with TPUs at a lower level than what's typically exposed in TensorFlow
2. Information about the libtpu.so library and its API
3. Any resources or documentation on implementing custom TPU operations
Are there any insights or suggestions on how to approach this, particularly regarding TPU support? Any ideas or help would be greatly appreciated.
I understand that some of this information might be proprietary, but any guidance on what is possible or available would be very helpful.
|
https://github.com/pytorch/xla/issues/7803
|
closed
|
[
"question",
"xla:tpu"
] | 2024-08-02T10:16:01Z
| 2025-04-01T12:56:19Z
| null |
notlober
|
huggingface/evaluate
| 611
|
How to customize my own evaluator and metrics?
|
I'm facing a task on VQA, where I need to compute [VQA](https://visualqa.org/evaluation.html) accuracy](https://visualqa.org/evaluation.html) as follows:
```math
\text{Acc}(ans) = \min{ \left\{ \frac{\text{\# humans that said } ans }{3}, 1 \right\} }
```
I have following questions:
1. Do I need to customize my own metric? If so, can I only create `metrics/vqa_accuracy/vqa_accuracy.py` without other operations, such as running `evaluate-cli create "accuracy name" --module_type "metric"`?
2. I found that there is no suitable `evaluator` for my task, and I'm not sure if it is possible to customize my own `evaluator`, since I didn't find any document on creating new `evaluator`.
|
https://github.com/huggingface/evaluate/issues/611
|
closed
|
[] | 2024-08-02T08:37:47Z
| 2024-08-15T02:26:30Z
| null |
Kamichanw
|
huggingface/diffusers
| 9,055
|
ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders'
|
### Describe the bug
I get this error in diffusers versions 25,26,27,28,29, how can I solve it?
### Reproduction
import ast
import gc
import inspect
import math
import warnings
from collections.abc import Iterable
from typing import Any, Callable, Dict, List, Optional, Union
import torch
import torch.nn.functional as F
from packaging import version
from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection
from diffusers.configuration_utils import FrozenDict
from diffusers.image_processor import PipelineImageInput, VaeImageProcessor
from diffusers.loaders import (
FromSingleFileMixin,
IPAdapterMixin,
StableDiffusionLoraLoaderMixin,
TextualInversionLoaderMixin,
)
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.models.attention import Attention, GatedSelfAttentionDense
from diffusers.models.attention_processor import AttnProcessor2_0
from diffusers.models.lora import adjust_lora_scale_text_encoder
from diffusers.pipelines import DiffusionPipeline
from diffusers.pipelines.pipeline_utils import StableDiffusionMixin
from diffusers.pipelines.stable_diffusion.pipeline_output import StableDiffusionPipelineOutput
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.schedulers import KarrasDiffusionSchedulers
from diffusers.utils import (
USE_PEFT_BACKEND,
deprecate,
logging,
replace_example_docstring,
scale_lora_layers,
unscale_lora_layers,
)
from diffusers.utils.torch_utils import randn_tensor
### Logs
```shell
Traceback (most recent call last):
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/workspace/llm_sd.py", line 149, in <module>
llm_sd(args=args)
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/workspace/llm_sd.py", line 10, in llm_sd
pipe = DiffusionPipeline.from_pretrained(
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1147, in from_pretrained
pipeline_class = _get_pipeline_class(
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 380, in _get_pipeline_class
return get_class_from_dynamic_module(
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/utils/dynamic_modules_utils.py", line 452, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/utils/dynamic_modules_utils.py", line 164, in get_class_in_module
module = importlib.import_module(module_path)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/wrusr/.cache/huggingface/modules/diffusers_modules/git/llm_grounded_diffusion.py", line 32, in <module>
from diffusers.loaders import (
ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders' (/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/loaders/__init__.py)
```
### System Info
torch==2.0.1
torchvision==0.15.2
torchaudio==2.0.2
accelerate==0.21.0
transformers==4.39.3
diffusers==0.27.2
peft==0.10.0
numpy==1.25.2
python3.10
### Who can help?
@yiyixuxu @asomoza
|
https://github.com/huggingface/diffusers/issues/9055
|
closed
|
[
"bug"
] | 2024-08-02T07:58:16Z
| 2024-08-02T09:32:12Z
| 2
|
MehmetcanTozlu
|
huggingface/optimum
| 1,980
|
Issue converting moss-moon-003-sft-int4 model to ONNX format
|
### System Info
```shell
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
optimum-cli export onnx --task text-generation -m"/HDD/cz/tools/moss/" --trust-remote-code "HDD/cz/moss_onnx/"
Unfortunately, I'm facing the following error:
Trying to export a moss model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`.
As I am relatively new to this process, I'm unsure about the necessity and usage of custom ONNX configuration. Could you please provide some guidance on how to address this issue? Any assistance or insights would be greatly appreciated.
Thank you for your attention to this matter.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
https://huggingface.co/fnlp/moss-moon-003-sft-int4/tree/main
### Expected behavior
Convert the model to onnx format
|
https://github.com/huggingface/optimum/issues/1980
|
open
|
[
"bug",
"onnx"
] | 2024-08-02T01:18:46Z
| 2024-10-08T15:51:12Z
| 0
|
ZhiChengWHU
|
pytorch/executorch
| 4,510
|
How to link custom ops?
|
Hi!
I'm trying to integrate some of quantized MatMul C++ kernels into Executorch and I'm having a bad time: the documentation is very vague about what exactly I need to include/link for ATen to pick up my ops.
I would greatly appreciate any help in trying to make it work.
### Overview:
Source code for the dynamic library containing the ops consists of 3 files: `lut_kernel.h`, `lut_kernel.cpp`, `lut_kernel_pytorch.cpp`. The files contain roughly this code:
```c++
// lut_kernel.h
#pragma once
#include <executorch/runtime/kernel/kernel_includes.h>
namespace torch {
namespace executor {
namespace native {
Tensor& code2x8_lut_matmat_out(
RuntimeContext& ctx,
const Tensor& input,
const Tensor& codes,
const Tensor& codebooks,
const Tensor& scales,
const optional<Tensor>& bias,
Tensor& out
);
} // namespace native
} // namespace executor
} // namespace torch
```
```c++
// lut_kernel.cpp
#include "lut_kernel.h"
#include <executorch/extension/kernel_util/make_boxed_from_unboxed_functor.h>
namespace torch {
namespace executor {
namespace native {
Tensor& code2x8_lut_matmat_out(
RuntimeContext& ctx,
const Tensor& input,
const Tensor& codes,
const Tensor& codebooks,
const Tensor& scales,
const optional<Tensor>& bias,
Tensor& out
) {
// CALCULATIONS
return out;
}
} // namespace native
} // namespace executor
} // namespace torch
EXECUTORCH_LIBRARY(aqlm, "code2x8_lut_matmat.out", torch::executor::native::code2x8_lut_matmat_out);
```
```c++
// lut_kernel_pytorch.cpp
#include "lut_kernel.h"
#include <executorch/extension/aten_util/make_aten_functor_from_et_functor.h>
#include <executorch/extension/kernel_util/make_boxed_from_unboxed_functor.h>
#include <torch/library.h>
namespace torch {
namespace executor {
namespace native {
Tensor& code2x8_lut_matmat_out_no_context(
...
Tensor& output
) {
void* memory_pool = malloc(10000000 * sizeof(uint8_t));
MemoryAllocator allocator(10000000, (uint8_t*)memory_pool);
exec_aten::RuntimeContext context{nullptr, &allocator};
return torch::executor::native::code2x8_lut_matmat_out(
context,
...,
output
);
}
at::Tensor code2x8_lut_matmat(
...
) {
auto sizes = input.sizes().vec();
sizes[sizes.size() - 1] = codes.size(1) * codebooks.size(2);
auto out = at::empty(sizes,
at::TensorOptions()
.dtype(input.dtype())
.device(input.device())
);
WRAP_TO_ATEN(code2x8_lut_matmat_out_no_context, 5)(
...,
out
);
return out;
}
} // namespace native
} // namespace executor
} // namespace torch
TORCH_LIBRARY(aqlm, m) {
m.def(
"code2x8_lut_matmat(Tensor input, Tensor codes, "
"Tensor codebooks, Tensor scales, Tensor? bias=None) -> Tensor"
);
m.def(
"code2x8_lut_matmat.out(Tensor input, Tensor codes, "
"Tensor codebooks, Tensor scales, Tensor? bias=None, *, Tensor(c!) out) -> Tensor(c!)"
);
}
TORCH_LIBRARY_IMPL(aqlm, CompositeExplicitAutograd, m) {
m.impl(
"code2x8_lut_matmat", torch::executor::native::code2x8_lut_matmat
);
m.impl(
"code2x8_lut_matmat.out",
WRAP_TO_ATEN(torch::executor::native::code2x8_lut_matmat_out_no_context, 5)
);
}
```
, which closely follows the executorch custom sdpa code.
I build it as two standalone dynamic libs: one `lut_kernel.cpp` with dependency only on `executorch` and `lut_kernel_pytorch.cpp` with additional `torch` dependency. I load the latter lib into pytorch as `torch.ops.load_library(f"../libaqlm_bindings.dylib")`.
### The problem:
I wrote a small `nn.Module` that basically just calls the op. In pytorch it works well. `aten_dialect` for it looks like this:
```
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, p_codes: "i8[3072, 128, 2]", p_codebooks: "f32[2, 256, 1, 8]", p_scales: "f32[3072, 1, 1, 1]", p_bias: "f32[3072]", input: "f32[s0, s1, 1024]"):
input_1 = input
# File: [/Users/blacksamorez/reps/AQLM/inference_lib/src/aqlm/inference.py:74](https://file+.vscode-resource.vscode-cdn.net/Users/blacksamorez/reps/AQLM/inference_lib/src/aqlm/inference.py:74) in forward, code: return torch.ops.aqlm.code2x8_lut_matmat(
code2x8_lut_matmat: "f32[s0, s1, 1024]" = torch.ops.aqlm.code2x8_lut_matmat.default(input_1, p_codes, p_codebooks, p_scales, p_bias); input_1 = p_codes
|
https://github.com/pytorch/executorch/issues/4510
|
closed
|
[] | 2024-08-01T21:16:01Z
| 2024-08-21T21:09:03Z
| null |
BlackSamorez
|
huggingface/transformers
| 32,376
|
AutoModel how to modify config?
|
```
config = AutoConfig.from_pretrained(
**self.params, trust_remote_code=True
)
config.vision_config.use_flash_attn = False
print(config.vision_config)
self.model = AutoModel.from_pretrained(
**self.params, trust_remote_code=True, config=config
).eval()
```
I need disable `use_flash_attn ` to False forcely when loading a model from pretrained. But looks like the config set didn't have any effect.
Why and how
|
https://github.com/huggingface/transformers/issues/32376
|
closed
|
[] | 2024-08-01T12:40:44Z
| 2024-08-02T02:30:22Z
| null |
lucasjinreal
|
huggingface/diffusers
| 9,039
|
how to load_lora_weights in FlaxStableDiffusionPipeline
|
### Describe the bug
how to load lora in FlaxStableDiffusionPipeline, there are no load_lora_weights in FlaxStableDiffusionPipeline
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9039
|
closed
|
[
"bug",
"stale"
] | 2024-08-01T11:23:52Z
| 2024-10-15T03:23:54Z
| null |
ghost
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.