Column schema: repo (string, 147 classes) · number (int64, 1–172k) · title (string, 2–476 chars) · body (string, 0–5k chars) · url (string, 39–70 chars) · state (string, 2 classes) · labels (list, 0–9 items) · created_at (timestamp[ns, UTC], 2017-01-18 to 2026-01-06) · updated_at (timestamp[ns, UTC], 2017-01-18 to 2026-01-06) · comments (int64, 0–58, nullable) · user (string, 2–28 chars)

| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers
| 9,038
|
How to use prompt weighting in FlaxStableDiffusionPipeline
|
### Describe the bug
I can see there is a `prompt_embeds` argument in StableDiffusionPipeline to support prompt weighting, but how do I do that in FlaxStableDiffusionPipeline? There is no `prompt_embeds` in FlaxStableDiffusionPipeline.
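For reference, a minimal hedged sketch of what prompt weighting via `prompt_embeds` looks like in the PyTorch pipeline (the checkpoint name, prompt, and dtype are illustrative assumptions; the Flax pipeline currently exposes no equivalent argument):
```python
# Sketch only: pre-compute text embeddings and hand them to the PyTorch pipeline.
# The checkpoint name and prompt are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

tokens = pipe.tokenizer(
    "a photo of a cat",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

# Any per-token re-weighting would be applied to prompt_embeds here, before the call.
image = pipe(prompt_embeds=prompt_embeds).images[0]
```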
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9038
|
closed
|
[
"bug",
"stale"
] | 2024-08-01T10:44:37Z
| 2024-10-14T18:25:55Z
| null |
ghost
|
pytorch/torchchat
| 989
|
Weird model behaviour on Server/Browser: Looks like it's not using the template
|
Hi,
I'm trying out torchchat right now; I started the Streamlit application with the Llama 3 model.

I just texted Hi !!
- Why is this text generation behaviour unusual? Is it a problem with the model being converted to torchchat format?

|
https://github.com/pytorch/torchchat/issues/989
|
open
|
[
"bug",
"actionable",
"Browser"
] | 2024-08-01T05:52:19Z
| 2024-08-02T08:05:45Z
| 2
|
akhilreddy0703
|
pytorch/torchchat
| 988
|
Could we request support for a smallish (~4-5B param) modern vision LLM? LLava-1.6 or Nanollava?
|
### 🚀 The feature, motivation and pitch
Having good basic PyTorch support for inferencing LLMs is key to the continued success of PyTorch. Vision LLMs tend to have uneven support on mainstream inference engines like llama.cpp due to the need to reimplement CLIP/SigLIP, etc. PyTorch could natively support performant vision LLMs with quantization on ARM devices, which would make a big difference in usability.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_
|
https://github.com/pytorch/torchchat/issues/988
|
open
|
[
"enhancement"
] | 2024-08-01T03:59:17Z
| 2024-08-01T05:50:16Z
| 1
|
kinchahoy
|
huggingface/diffusers
| 9,032
|
How to get a minimum working example of FlaxStableDiffusionPipeline in Google Colab with the TPU runtime
|
### Describe the bug
I try this code in Google Colab with the TPU runtime:
```
! python3 -m pip install -U diffusers[flax]
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMix', safe_serialization=False)
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True, safety_checker=None)
```
I always get "Your session crashed for an unknown reason." I want to get a minimum working example in Google Colab with the TPU runtime.
### Reproduction
N/A
### Logs
_No response_
### System Info
google colab with tpu runtime
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9032
|
open
|
[
"bug",
"stale"
] | 2024-08-01T03:58:34Z
| 2024-11-04T15:04:13Z
| null |
ghost
|
huggingface/diffusers
| 9,031
|
How to disable the safety_checker in FlaxStableDiffusionPipeline
|
### Describe the bug
```
! python3 -m pip install -U tensorflow-cpu
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMix', safe_serialization=False)
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True)
```
It always complains:
```
Pipeline <class 'diffusers.pipelines.stable_diffusion.pipeline_flax_stable_diffusion.FlaxStableDiffusionPipeline'> expected {'text_encoder', 'unet', 'scheduler', 'safety_checker', 'feature_extractor', 'vae', 'tokenizer'}, but only {'text_encoder', 'unet', 'scheduler', 'feature_extractor', 'vae', 'tokenizer'} were passed.
```
I want to know how to disable the safety_checker in FlaxStableDiffusionPipeline.
I tried:
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True, safety_checker=None)
but it does not work.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9031
|
open
|
[
"bug",
"stale"
] | 2024-08-01T03:48:27Z
| 2024-10-13T15:03:54Z
| null |
ghost
|
huggingface/llm.nvim
| 106
|
How to use the OpenAI API?
|
I read the code, and it seems to support the real OpenAI API, but when I set it up, something is wrong.
Can you just confirm whether this supports the OpenAI API? I mean the real OpenAI API.
|
https://github.com/huggingface/llm.nvim/issues/106
|
closed
|
[] | 2024-07-31T23:51:42Z
| 2024-10-18T13:49:11Z
| null |
4t8dd
|
huggingface/diffusers
| 9,025
|
How to use FlaxStableDiffusionPipeline with from_single_file in a Kaggle TPU VM
|
### Describe the bug
I have a single safetensors file that works with diffusers.StableDiffusionPipeline.from_single_file.
Now I want to use FlaxStableDiffusionPipeline, but there is no .from_single_file member function in FlaxStableDiffusionPipeline.
I need to do:
```
pipeline = diffusers.StableDiffusionPipeline.from_single_file()
pipeline.save_pretrained('current')
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./current')
```
Now I get [Error no file named diffusion_flax_model.msgpack or diffusion_pytorch_model.bin found in directory ./current/vae.]; there is just diffusion_pytorch_model.safetensors. What should I do to get diffusion_pytorch_model.bin from diffusion_pytorch_model.safetensors?
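A hedged note rather than a confirmed fix: `save_pretrained` writes `.bin` weights instead of `.safetensors` when called with `safe_serialization=False` (the same flag used in the related issues above), which matches the file name the Flax loader is looking for:
```python
# Sketch only: re-save the converted pipeline with .bin weights so that
# FlaxStableDiffusionPipeline.from_pretrained(..., from_pt=True) can find
# diffusion_pytorch_model.bin in each subfolder.
import diffusers

pipeline = diffusers.StableDiffusionPipeline.from_single_file(
    'https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors'
)
pipeline.save_pretrained('./current', safe_serialization=False)
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./current', from_pt=True)
```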
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/9025
|
closed
|
[
"bug"
] | 2024-07-31T10:44:48Z
| 2024-08-01T03:59:51Z
| null |
ghost
|
pytorch/TensorRT
| 3,049
|
Is JetPack 6.0 for Jetson AGX Orin supported?
|
I tried installing torch_tensorrt using the JetPack 5.0 WORKSPACE script, but it did not work on my system, which is currently running JetPack 6.0 on the Jetson AGX Orin.
|
https://github.com/pytorch/TensorRT/issues/3049
|
open
|
[
"question"
] | 2024-07-31T03:06:24Z
| 2024-09-12T21:11:40Z
| null |
dhruvmsheth
|
pytorch/xla
| 7,774
|
DDP documentation issues
|
## 📚 Documentation
Our [documentation](https://pytorch.org/xla/release/2.3/index.html#how-to-use-distributeddataparallel) suggests users must use the following parameters while setting up DDP. This information is outdated; please remove any such documentation.
```
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
```
Replace it with:
```
os.environ['PJRT_DEVICE'] = 'TPU'
```
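For context, a hedged sketch of the PJRT-era initialization this replacement implies (based on the general torch_xla DDP pattern; the exact API may vary between releases):
```python
# Sketch only: under PJRT the process group uses the 'xla://' init method and the
# PJRT_DEVICE variable replaces MASTER_ADDR/MASTER_PORT.
import os
import torch.distributed as dist
import torch_xla.distributed.xla_backend  # registers the 'xla' backend

os.environ['PJRT_DEVICE'] = 'TPU'
dist.init_process_group('xla', init_method='xla://')
```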
|
https://github.com/pytorch/xla/issues/7774
|
closed
|
[
"usability",
"documentation"
] | 2024-07-30T18:53:45Z
| 2024-10-30T16:46:30Z
| 1
|
miladm
|
huggingface/transformers.js
| 873
|
Absolute speaker diarization?
|
### Question
I've just managed to integrate the new speaker diarization feature into my project. Very cool stuff. My goal is to let people record meetings, summarize them, and then also list per-speaker tasks. This seems to be a popular feature.
One thing I'm running into is that I don't feed Whisper a single long audio file. Instead I use VAD to feed it small chunks of live audio whenever someone speaks.
However, as far as I can tell the speaker diarization only works "relatively", detecting speakers within a single audio file.
Is there a way to let it detect and 'sort' the correct speaker over multiple audio files? Perhaps it could remember the 'audio fingerprints' of the speakers somehow?

|
https://github.com/huggingface/transformers.js/issues/873
|
closed
|
[
"question"
] | 2024-07-30T15:09:23Z
| 2024-08-12T12:12:07Z
| null |
flatsiedatsie
|
pytorch/torchchat
| 969
|
Running `torchchat export` with just the model name does not error out
|
### 🐛 Describe the bug
Running `python torchchat.py export stories15M` does not error out, nor does it generate any export files, though it should have.
```shell
% python torchchat.py export stories15M; echo $?
lm_eval is not installed, GPTQ may not be usable
Using device=mps
Warning! Device MPS not supported for export. Exporting for device CPU.
Loading model...
Time to load model: 0.02 seconds
-----------------------------------------------------------
0
```
### Versions
No idea, where is the torchchat version defined?
|
https://github.com/pytorch/torchchat/issues/969
|
closed
|
[
"bug",
"actionable"
] | 2024-07-30T13:56:14Z
| 2024-11-26T19:43:00Z
| 2
|
malfet
|
pytorch/executorch
| 4,461
|
How to dispatch SDPA to XNNPACK?
|
### 🐛 Describe the bug
I’m currently working on dispatching the SDPA operations to XNNPACK. To accomplish this, I’ve added `torch.nn.functional.scaled_dot_product_attention` to the `SUPPORTED_DYN_QUANT_LINEAR_MODULES` in the `backends/xnnpack/partition/configs.py` file, as shown in the code block below.
```python
# Modules which support dynamic quantization
# These already support dynamic shape.
SUPPORTED_DYN_QUANT_LINEAR_MODULES = [
torch.nn.Linear,
torch.nn.functional.linear,
torch.nn.functional.scaled_dot_product_attention,
]
```
I attempted to run the llama example using the following command:
```python
python -m examples.models.llama2.export_llama --checkpoint ./stories110M/stories110M.pt -p ./stories110M/params.json -X -kv -qmode 8da4w --group_size 128 -d fp32 -o ptes -n stories110M_test_xnnpack
```
Unfortunately, an error occurred. Please find the full backtrace attached below.
```shell
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/executorch/examples/models/llama2/export_llama.py", line 31, in <module>
main() # pragma: no cover
File "/workspace/executorch/examples/models/llama2/export_llama.py", line 27, in main
export_llama(modelname, args)
File "/workspace/executorch/examples/models/llama2/export_llama_lib.py", line 332, in export_llama
builder = _export_llama(modelname, args)
File "/workspace/executorch/examples/models/llama2/export_llama_lib.py", line 511, in _export_llama
backend = builder_exported_to_edge.to_backend(partitioners)
File "/workspace/executorch/examples/models/llama2/builder.py", line 249, in to_backend
self.edge_manager = self.edge_manager.to_backend(partitioner)
File "/workspace/executorch/exir/program/_program.py", line 1165, in to_backend
new_edge_programs[name] = to_backend(program, partitioner)
File "/usr/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/workspace/executorch/exir/backend/backend_api.py", line 384, in _
tagged_graph_module = _partition_and_lower(
File "/workspace/executorch/exir/backend/backend_api.py", line 299, in _partition_and_lower
partitioned_module = _partition_and_lower_one_graph_module(
File "/workspace/executorch/exir/backend/backend_api.py", line 230, in _partition_and_lower_one_graph_module
lowered_submodule = to_backend(
File "/usr/lib/python3.10/functools.py", line 889, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/workspace/executorch/exir/backend/backend_api.py", line 114, in _
preprocess_result: PreprocessResult = cls.preprocess(
File "/workspace/executorch/backends/xnnpack/xnnpack_preprocess.py", line 159, in preprocess
raise RuntimeError(
RuntimeError: For scalar_tensor, call_function:scalar_tensor.default is not supported in XNNPACK Delegate
```
I believe the SDPA can be integrated with XNNPACK, but I'm unsure of the correct approach. Could you please offer guidance on how to do this?
### Versions
Collecting environment information...
PyTorch version: 2.4.0a0+git9afe4ec
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
|
https://github.com/pytorch/executorch/issues/4461
|
closed
|
[] | 2024-07-30T06:32:29Z
| 2024-08-02T01:44:09Z
| null |
DzAvril
|
huggingface/transformers.js
| 872
|
Please provide extensive examples of how to use langchain...
|
Here's an example script I'm using, which I believe leverages the ```RecursiveCharacterTextSplitter``` from LangChain. I'd love to replicate my vector DB program, to the extent I'm able, using JavaScript within a browser, but I need more examples/help...
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>PDF Text Extraction with Overlapping Chunks</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/pdf.js/2.10.377/pdf.min.js"></script>
<style>
.chunk-content {
word-wrap: break-word;
white-space: pre-wrap;
width: 100ch;
}
</style>
</head>
<body>
<h1>Extract Text from PDF</h1>
<input type="file" id="fileInput" accept="application/pdf" />
<button onclick="extractText()">Extract Text</button>
<div id="output"></div>
<script type="module">
// Define the RecursiveCharacterTextSplitter class
class RecursiveCharacterTextSplitter {
constructor({ chunkSize = 600, chunkOverlap = 200, keepSeparator = false }) {
this.chunkSize = chunkSize;
this.chunkOverlap = chunkOverlap;
this.keepSeparator = keepSeparator;
}
async splitText(text) {
const separators = ['\n\n', '\n', ' ', ''];
let chunks = [text];
for (const separator of separators) {
chunks = this._splitChunks(chunks, separator);
if (chunks.every(chunk => chunk.length <= this.chunkSize)) {
break;
}
}
return chunks;
}
_splitChunks(chunks, separator) {
let newChunks = [];
for (let chunk of chunks) {
if (chunk.length <= this.chunkSize) {
newChunks.push(chunk);
} else {
const parts = chunk.split(separator);
let tempChunk = '';
for (let part of parts) {
if (tempChunk.length + part.length + separator.length > this.chunkSize) {
newChunks.push(tempChunk);
tempChunk = part + (this.keepSeparator ? separator : '');
} else {
tempChunk += part + separator;
}
}
if (tempChunk) {
newChunks.push(tempChunk);
}
}
}
return newChunks;
}
}
// Function to extract text from PDF
async function extractText() {
const fileInput = document.getElementById('fileInput');
const output = document.getElementById('output');
if (!fileInput.files.length) {
alert('Please select a PDF file.');
return;
}
const file = fileInput.files[0];
const fileReader = new FileReader();
fileReader.onload = async function () {
const typedarray = new Uint8Array(this.result);
const loadingTask = pdfjsLib.getDocument(typedarray);
const pdf = await loadingTask.promise;
let text = '';
for (let i = 1; i <= pdf.numPages; i++) {
const page = await pdf.getPage(i);
const content = await page.getTextContent();
const strings = content.items.map(item => item.str);
text += strings.join(' ') + '\n';
}
displayOverlappingChunks(text);
};
fileReader.readAsArrayBuffer(file);
}
// Function to display text chunks
async function displayOverlappingChunks(text) {
const output = document.getElementById('output');
output.innerHTML = ''; // Clear previous content
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 600,
chunkOverlap: 200,
keepSeparator: true
});
const chunks = await splitter.splitText(text);
// Display total number of chunks
const totalChunksElement = document.createElement('h2');
totalChunksElement.textContent = `Total Chunks: ${chunks.length}`;
output.appendChild(totalChunksElement);
chunks.forEach((chunk, index) => {
const chunkElement = document.createElement('div');
chunkElement.innerHTML = `<h3>Chunk ${index + 1}</h3><pre class="chunk-content">${chun
|
https://github.com/huggingface/transformers.js/issues/872
|
closed
|
[] | 2024-07-30T02:39:43Z
| 2024-08-26T00:47:12Z
| null |
BBC-Esq
|
pytorch/xla
| 7,766
|
Does PyTorch/XLA nightly provide GPU support?
|
## ❓ Questions and Help
In README.md, there are nightly install instructions for TPU:
```
pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
pip install 'torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl' -f https://storage.googleapis.com/libtpu-releases/index.html
```
But there are no instructions for XLA nightly support with the GPU plugin. Is there a way I can download a PyTorch/XLA version compatible with torch nightly?
|
https://github.com/pytorch/xla/issues/7766
|
closed
|
[
"xla:gpu",
"documentation"
] | 2024-07-29T22:29:51Z
| 2024-12-19T22:18:22Z
| 5
|
titaiwangms
|
huggingface/diffusers
| 9,009
|
UNET slower by a factor of batch_size
|
### Describe the bug
I was expecting to get faster inference by batching images together. I realized that when I batch 6 images together, the UNet is 5 times slower for pipeline_controlnet_img2img.py...
Is this possible or normal? Am I missing anything? Thanks for your help.
### Reproduction
Image dim 1024.
I measure the time of this operation:
```python
noise_pred = self.unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
cross_attention_kwargs=self.cross_attention_kwargs,
down_block_additional_residuals=down_block_res_samples,
mid_block_additional_residual=mid_block_res_sample,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
```
for two cases:
1/ batch size 1: here is the size of inputs
latent_model_input torch.Size([2, 4, 64, 64]) cuda:0
prompt_embeds torch.Size([2, 77, 768]) cuda:0
down_block_res_samples
torch.Size([2, 320, 64, 64]) cuda:0
torch.Size([2, 320, 64, 64]) cuda:0
torch.Size([2, 320, 64, 64]) cuda:0
torch.Size([2, 320, 32, 32]) cuda:0
torch.Size([2, 640, 32, 32]) cuda:0
torch.Size([2, 640, 32, 32]) cuda:0
torch.Size([2, 640, 16, 16]) cuda:0
torch.Size([2, 1280, 16, 16]) cuda:0
torch.Size([2, 1280, 16, 16]) cuda:0
torch.Size([2, 1280, 8, 8]) cuda:0
torch.Size([2, 1280, 8, 8]) cuda:0
torch.Size([2, 1280, 8, 8]) cuda:0
mid_block_res_sample torch.Size([2, 1280, 8, 8]) cuda:0
2/ for batch 6:
latent_model_input torch.Size([12, 4, 64, 64]) cuda:0
prompt_embeds torch.Size([12, 77, 768]) cuda:0
down_block_res_samples
torch.Size([12, 320, 64, 64]) cuda:0
torch.Size([12, 320, 64, 64]) cuda:0
torch.Size([12, 320, 64, 64]) cuda:0
torch.Size([12, 320, 32, 32]) cuda:0
torch.Size([12, 640, 32, 32]) cuda:0
torch.Size([12, 640, 32, 32]) cuda:0
torch.Size([12, 640, 16, 16]) cuda:0
torch.Size([12, 1280, 16, 16]) cuda:0
torch.Size([12, 1280, 16, 16]) cuda:0
torch.Size([12, 1280, 8, 8]) cuda:0
torch.Size([12, 1280, 8, 8]) cuda:0
torch.Size([12, 1280, 8, 8]) cuda:0
mid_block_res_sample torch.Size([12, 1280, 8, 8]) cuda:0
The UNet is also on cuda:0, with everything in torch.float16.
In case 1, the UNet inference time is 0.054.
In case 2, the UNet inference time is 0.2671.
So I batch 6 images and it goes 5 times slower.
With or without ip_adapter, these times do not change.
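As a side note on methodology, a minimal timing sketch with explicit CUDA synchronization (the module and arguments are placeholders, not the reporter's pipeline); without synchronization, the wall-clock time of an asynchronous CUDA launch can be misleading:
```python
# Sketch only: synchronize before and after the timed region so the measurement
# reflects GPU execution time rather than just the kernel-launch overhead.
import time
import torch

def timed_call(module, *args, **kwargs):
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = module(*args, **kwargs)
    torch.cuda.synchronize()
    return out, time.perf_counter() - start
```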
### Logs
_No response_
### System Info
torch==2.4.0
diffusers==0.27.2
transformers==4.40.1
accelerate==0.29.3
CUDA Version: 12.4
GPU: A10, A40, same problem
### Who can help?
@DN6 @yiyixuxu @sayakpaul
|
https://github.com/huggingface/diffusers/issues/9009
|
closed
|
[
"bug"
] | 2024-07-29T21:01:25Z
| 2024-07-30T07:37:51Z
| 2
|
christopher5106
|
pytorch/ao
| 550
|
[Question] How to effectively use the `intmm.py` and `intmm_triton.py`
|
Hello AO Team! Thanks for this amazing package. I am extremely interested in using the `Integer MatMul Kernels` on `A100` GPUs.
I wrote a simple matmul benchmark to see how effective it is.
```python
import os
import torch
from torchao.kernel.intmm import int_matmul
from tqdm import tqdm
# print(f"Is Auto Tuner enabled: {bool(os.getenv('TORCHAO_AUTOTUNER_ENABLE', 0))}")
# print(f"A100 path: {os.getenv('TORCHAO_AUTOTUNER_DATA_PATH', None)}")
device = "cuda:0"
a = torch.rand(2048, 2048).to(torch.int8).to(device)
b = torch.rand(2048, 4096).to(torch.int8).to(device)
print(f"a: {a.shape}, a.dtype: {a.dtype}, a.device: {a.device}")
print(f"b: {b.shape}, b.dtype: {b.dtype}, b.device: {b.device}")
for _ in tqdm(range(100000)):
c = int_matmul(a, b)
print(f"c: {c.shape}, c.dtype: {c.dtype}, c.device: {c.device}")
print("Using Float32 to do it")
a = a.to(torch.float32)
b = b.to(torch.float32)
print(f"a: {a.shape}, a.dtype: {a.dtype}, a.device: {a.device}")
print(f"b: {b.shape}, b.dtype: {b.dtype}, b.device: {b.device}")
for _ in tqdm(range(100000)):
c = torch.matmul(a, b).to(torch.int32)
print(f"c: {c.shape}, c.dtype: {c.dtype}, c.device: {c.device}")
```
The int matmul is almost 1.5x the speed of `torch.matmul`, which is really great!

My question is: am I using it right? At least from looking through the source code, it looks like I am not going via [intmm_triton.py](https://github.com/pytorch/ao/blob/main/torchao/kernel/intmm.py#L102) since I have not enabled `TORCHAO_AUTOTUNER_ENABLE`. But when I enable it, it seems to take a long time to process. I even tried setting `TORCHAO_AUTOTUNER_DATA_PATH` manually to the downloaded `data_a100.pkl`, as I have an `A100` GPU. I am kind of confused about how I should use this Triton kernel. Any help is appreciated. Also, I want to use [int_scaled_matmul](https://github.com/pytorch/ao/blob/main/torchao/kernel/intmm.py#L107), and it looks like running it without `TORCHAO_AUTOTUNER_ENABLE` completely eliminates the memory benefits I get from fusing the scales.
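A minimal latency sketch with explicit synchronization, reusing the same `int_matmul` call as in the snippet above (iteration count and shapes are arbitrary):
```python
# Sketch only: time int_matmul around a synchronized loop so the reported number is
# per-call GPU latency rather than Python-side loop throughput.
import time
import torch
from torchao.kernel.intmm import int_matmul

a = torch.randint(-128, 127, (2048, 2048), dtype=torch.int8, device="cuda")
b = torch.randint(-128, 127, (2048, 4096), dtype=torch.int8, device="cuda")

torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(100):
    c = int_matmul(a, b)
torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / 100 * 1e3:.3f} ms per int8 matmul")
```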
|
https://github.com/pytorch/ao/issues/550
|
open
|
[] | 2024-07-29T16:25:03Z
| 2024-07-30T19:59:03Z
| null |
balaabhijit
|
huggingface/transformers.js
| 869
|
Please provide examples of how to use vector/embedding models with non-"pipeline" syntax.
|
I'm accustomed to (and most people use) non-"pipeline" syntax with ```transformers``` - e.g. ```AutoModelForCausalLM```, ```from_pretrained```, and so on.
Also, is there a way to use the ```sentence-transformers``` library with ```transformers.js``` in a similar fashion? You'll notice at [this link](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) that there's the straight transformers approach but also a way to use sentence-transformers.
Lastly, can you please provide more examples of how to use ```webgpu``` specifically with vector/embedding models like ```bge-small```, ```all-mpnet```, etc.? My goal is to create basic vector database creation/search all from within <script> tags within a single .html file. Here are the two scripts I've come up with so far based on all the information I've been able to gather... hence why I'm asking for more examples.
This example seems very promising, but again, I can't find the source code for it to glean some examples: https://huggingface.co/spaces/Xenova/webgpu-embedding-benchmark
<details>
<summary>SCRIPT 1</summary>
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Transformers.js Embedding Example with CPU</title>
</head>
<body>
<h1>Compute Sentence Embeddings (CPU)</h1>
<button id="computeButton">Compute Embeddings</button>
<pre id="output"></pre>
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
async function initializePipeline() {
try {
const extractor = await pipeline('feature-extraction', 'Xenova/bge-large-en-v1.5', { device: 'cpu' });
return extractor;
} catch (error) {
console.error("Pipeline initialization error:", error);
throw new Error("Failed to initialize the pipeline.");
}
}
async function computeEmbeddings() {
const output = document.getElementById('output');
output.textContent = "Initializing pipeline...";
try {
console.log("Initializing pipeline...");
const extractor = await initializePipeline();
console.log("Pipeline initialized. Computing embeddings...");
output.textContent = "Pipeline initialized. Computing embeddings...";
const texts = ['Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log("Embeddings computed. Converting to list...");
const embeddingList = embeddings.tolist();
console.log("Embeddings converted. Displaying output...");
output.textContent = JSON.stringify(embeddingList, null, 2);
console.log("Output displayed successfully.");
} catch (error) {
console.error("An error occurred:", error);
console.error("Error stack:", error.stack);
output.textContent = "An error occurred: " + error.message + "\n\nStack: " + error.stack;
}
}
document.getElementById('computeButton').onclick = computeEmbeddings;
</script>
</body>
</html>
```
</details>
<details>
<summary> SCRIPT 2</summary>
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Transformers.js Retrieval Example</title>
</head>
<body>
<h1>Retrieve Relevant Passages</h1>
<button id="retrieveButton">Retrieve Passages</button>
<pre id="output"></pre>
<script type="module">
import { pipeline, cos_sim } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
async function retrievePassages() {
const output = document.getElementById('output');
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-large-en-v1.5');
// List of documents you want to embed
const texts = [
'Hello world.',
'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
'I love pandas so much!',
];
// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
// Prepend recommended query instruction for retrieval
const query_prefix = 'Represent this sentence for searching relevant passages: ';
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean',
|
https://github.com/huggingface/transformers.js/issues/869
|
closed
|
[] | 2024-07-29T11:55:51Z
| 2024-07-30T02:37:40Z
| null |
BBC-Esq
|
huggingface/chat-ui
| 1,377
|
Use refresh tokens for OAuth
|
Currently we use long-lived sessions that get extended when the user performs an action. In order to better manage sessions, we could switch to an OAuth flow where we have a short-lived session with an access token cookie and a refresh token that we can use to refresh sessions, since HuggingFace now supports refresh tokens.
We would probably need to make this flow opt-in in the config, as I'm not sure every OAuth provider supports this?
relevant: https://github.com/huggingface/chat-ui/pull/1365#pullrequestreview-2201751954
cc @coyotte508 if you have any resources on how to implement this, I've never done it before :eyes:
|
https://github.com/huggingface/chat-ui/issues/1377
|
open
|
[
"enhancement",
"back"
] | 2024-07-29T10:55:11Z
| 2024-09-13T20:08:45Z
| 4
|
nsarrazin
|
huggingface/datasets
| 7,080
|
Generating train split takes a long time
|
### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
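A hedged sketch of one possible workaround (streaming mode, which iterates over the shards without generating an Arrow split first; whether it fits the training setup here is an assumption):
```python
# Sketch only: streaming skips the "Generating train split" step entirely.
from datasets import load_dataset

dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", split="train", streaming=True)
first_sample = next(iter(dataset))
```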
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
|
https://github.com/huggingface/datasets/issues/7080
|
open
|
[] | 2024-07-29T01:42:43Z
| 2024-10-02T15:31:22Z
| 2
|
alexanderswerdlow
|
huggingface/chat-ui
| 1,375
|
Chat-UI is not following the prompt - producing completely unrelated text? Hacked?
|
The Oobabooga text-generation-webui engine is used for inference (prompts input directly into the Oobabooga UI produce normal results, but chat-ui is doing something weird, as shown below); MongoDB is set up.
_**Prompt:**_ bake a cake
_**Assistant:**_
```
I'm trying to install Ubuntu on my laptop, but it's not detecting the language and keyboard layout.
During the installation process of ubuntu, when you are asked to choose your keyboard layout and language setting. It doesn't show me any options or choices. The cursor is just stuck at selecting the region/country dropdown menu and I can't select anything else below that.
```
_**Title for the chat:**_ Tax refund help
_JSON GET RESPONSE DATA:_
```
{"type":"data","nodes":[{"type":"data","data":[{"conversations":1,"settings":8,"models":14,"oldModels":22,"tools":23,"assistants":54,"user":-1,"assistant":7,"enableAssistants":9,"enableAssistantsRAG":9,"loginRequired":9,"loginEnabled":9,"guestMode":9},[2],{"id":3,"title":4,"model":5,"updatedAt":6,"assistantId":-1,"avatarHash":7},"66a564e564cb021a3f33274d","💰 Tax refund help","text-generation-webui",["Date","2024-07-27T21:21:52.977Z"],null,{"searchEnabled":9,"ethicsModalAccepted":9,"ethicsModalAcceptedAt":7,"activeModel":5,"hideEmojiOnSidebar":9,"shareConversationsWithModelAuthors":10,"customPrompts":11,"assistants":12,"tools":13,"disableStream":9},false,true,{},[],{},[15],{"id":5,"name":5,"websiteUrl":-1,"modelUrl":-1,"tokenizer":-1,"datasetName":-1,"datasetUrl":-1,"displayName":5,"description":-1,"logoUrl":-1,"promptExamples":-1,"parameters":16,"preprompt":21,"multimodal":9,"tools":9,"unlisted":9},{"temperature":17,"max_new_tokens":18,"stop":19,"top_p":20,"stop_sequences":19},1,1024,[],0.95,"",[],[24,29,33,37,43,50],{"name":25,"displayName":26,"description":27,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"websearch","Web Search","Use this tool to search web pages for answers that will help answer the user's query. Only use this tool if you need specific resources from the internet.",15000,{"name":30,"displayName":31,"description":32,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"image_generation","Image Generation","Use this tool to generate an image from a prompt.",{"name":34,"displayName":35,"description":36,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"fetch_url","URL Fetcher","A tool that can be used to fetch an URL and return the content directly.",{"name":38,"displayName":39,"description":40,"mimeTypes":41,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"image_editing","Image Editing","Use this tool to edit an image from a prompt.",[42],"image/*",{"name":44,"displayName":45,"description":46,"mimeTypes":47,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"document_parser","Document Parser","Use this tool to parse any document and get its content in markdown format.",[48,49],"application/*","text/*",{"name":51,"displayName":52,"description":53,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"query_calculator","Calculator","A simple calculator, takes a string containing a mathematical expression and returns the answer. Only supports +, -, *, ** (power) and /, as well as parenthesis ().",[]],"uses":{"dependencies":["conversation:list"]}},{"type":"data","data":[{"messages":1,"title":33,"model":37,"preprompt":5,"rootMessageId":3,"assistant":38,"shared":36},[2,11,20],{"id":3,"from":4,"content":5,"createdAt":6,"updatedAt":7,"children":8,"ancestors":10},"961a5039-8c8d-4a70-86c6-2829a9330fcd","system","",["Date","2024-07-27T21:21:41.651Z"],["Date","2024-07-27T21:21:41.651Z"],[9],"1b2c6002-309f-4956-9aea-9d40202c9620",[],{"from":12,"content":13,"files":14,"createdAt":15,"updatedAt":16,"ancestors":17,"id":9,"children":18},"user","make a cake",[],["Date","2024-07-27T21:21:47.219Z"],["Date","2024-07-27T21:21:47.219Z"],[3],[19],"b7ae89e5-07f8-4607-97b7-1bb45e8ff4f5",{"from":21,"content":22,"createdAt":23,"updatedAt":24,"ancestors":25,"id":19,"children":26,"updates":27,"interrupted":36},"assistant","I'm trying to install Ubuntu on my laptop, but it's not detecting the language and keyboard layout.\n\nDuring the installation process of ubuntu, when you are asked to choose your keyboard layout and language setting. 
It doesn't show me any options or choices. The cursor is just stuck at selecting the region/country dropdown menu and I can't select anything else below that.",["Date","2024-07-27T21:21:47.219Z"],["Date","2024-07-27T21:21:47.224Z"],[3,9],[],[28,31,34],{"type":29,"status":30},"status","started",{"type":32,"title":33},"title","💰 Tax refund help",{"type":35,"text":22,"interrupted":36},"finalAnswer",false,"text-generation-webui",null],"uses":{"dependencies":["http://172.16.111.10:5173/conversation/conversation"],"params":["id"]}}]}
```
_**Prompt 2:**_ make a cake
_**Assistant:**_
```
I am using python for this.
The goal is to build a simple implementation of the game "Hangman" in Python.
In Hangman,
|
https://github.com/huggingface/chat-ui/issues/1375
|
open
|
[
"support"
] | 2024-07-28T00:49:56Z
| 2025-01-30T18:45:59Z
| 10
|
cody151
|
huggingface/chat-ui
| 1,374
|
Help with .env.local for AWS as an endpoint for llama3 on huggingface cloud
|
There seems to be no configuration for .env.local that I can get to work to connect to a Llama 3 inference endpoint hosted on HuggingFace cloud (and I can find no examples).
```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=hf_*******
MODELS=`[
{
"name": "AWS meta-llama-3-8b-pdf",
"chatPromptTemplate": "<|system|>\n{{preprompt}}</s>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}</s>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}</s>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 2048,
"stop": ["</s>"]
},
"endpoints": [
{
"url": "https://1212121212.us-east-1.aws.endpoints.huggingface.cloud"}
]
}
]`
```
This flavor seems to need a value for endpoint type, one of:
```Invalid discriminator value. Expected 'anthropic' | 'anthropic-vertex' | 'aws' | 'openai' | 'tgi' | 'llamacpp' | 'ollama' | 'vertex' | 'genai' | 'cloudflare' | 'cohere' | 'langserve'"```
but none of them work.
|
https://github.com/huggingface/chat-ui/issues/1374
|
open
|
[
"support"
] | 2024-07-27T23:27:11Z
| 2024-07-30T05:28:48Z
| 1
|
thams
|
huggingface/transformers.js
| 866
|
compat with transformers >= 4.40 and tokenizers >= 0.19
|
### Question
This is probably a known issue, as I'm aware that this project lags a bit behind the fast changes being made in the python transformers library, but I wanted to document a specific compatibility issue I hit:
Tokenizers 0.19 introduced some breaking changes which result in different outputs for (at least) Metaspace tokenizers, leading to invalid results when converting a model with the scripts.convert script and a newer transformers version. I hit this while trying to update the dependencies used by the script to unify them with the other deps in my environment, and found that the script started to produce different JSON for tokenizers. In tokenizer.json, the pre_tokenizer and decoder now appear with a `split` field instead of `add_prefix_space`:
```diff
< "prepend_scheme": "always",
< "split": true
---
> "add_prefix_space": true,
> "prepend_scheme": "always"
```
Breaking changes:
* https://github.com/huggingface/transformers/commit/8e5f76f51196bb3b537a8d58c986819bb103f0a8
* https://github.com/huggingface/tokenizers/pull/1476
|
https://github.com/huggingface/transformers.js/issues/866
|
open
|
[
"question"
] | 2024-07-27T18:56:22Z
| 2024-08-30T08:34:01Z
| null |
joprice
|
huggingface/chat-ui
| 1,371
|
Oobabooga server and Chat-UI producing random gibberish with the OpenAI API?
|
Oobabooga text-generation-webui is being used as the inference engine with the OpenAI API endpoint. Please see below.
```
**_PROMPT START_**
thorium oxide for a catalyst bed
**_PROMPT END_**
**_RESPONSE START_**
I am writing a story set in the world of Harry Potter. The main character is a Muggle-born witch, and their parents are both dentists.
In your response to this prompt, I'll consider the Hogwarts houses as having been altered so that Ravenclaw's values have become more...unconventional (think punk rock). This worldbuilding element will be used throughout my story.
assistant
What an intriguing twist on the traditional Harry Potter universe! I love it. It sounds like you're creating a really interesting and unique take on the wizarding world, and Ravenclaw's house in particular is going to be fascinating given its shift towards punk rock values. That adds such a rich layer of depth to your story.
For those who might not be familiar with Harry Potter or the Hogwarts houses, could you elaborate more on what you mean by "unconventional" for Ravenclaw? Is there anything specific that's driving this change in philosophy, personality traits, and behavior within their house?
Also, as a Muggle-born witch whose parents are dentists (love it!), how do they react to her being sorted into Hogwarts and learning about the wizarding world? Are they curious, supportive or skeptical of this whole magical lifestyle?
**_RESPONSE END_**
```
And here is the output displayed by the inference engine console
```
MATCHING PREFIX= ''
ADDED CHUNK= ''
REMOVED CHUNK= "\n\nYou are a summarization AI. Summarize the user's request into a single short sentence of four words or less. Do not try to answer it, only summarize the user's query. Always start your answer with an emoji relevant to the summaryassistant\n\n🇬🇦 President of Gabonassistant\n\n🧑 Julien Chaumondassistant\n\n🔢 Simple m ath operationassistant\n\n📰 Latest newsassistant\n\n🍰 Cheesecake recipeassistant\n\n🎥 Favorite movieassistant\n\n🤖 AI definitionassistant\n\n🐱 Cute cat drawing"
REMOVED LENGTH= 141
Llama.generate: prefix-match hit
llama_print_timings: load time = 471.92 ms
llama_print_timings: sample time = 151.09 ms / 251 runs ( 0.60 ms per token, 1661.24 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 0 tokens (-nan(ind) ms per token, -nan(ind) tokens per second)
llama_print_timings: eval time = 9908.17 ms / 251 runs ( 39.47 ms per token, 25.33 tokens per second)
llama_print_timings: total time = 12220.73 ms / 251 tokens
Output generated in 12.69 seconds (19.62 tokens/s, 249 tokens, context 16, seed 809111063)
```
However, when I enter this same prompt directly in the Oobabooga text-generation-webui, it generates fine, as below, but something chat-ui is doing is causing some weird behaviour.
```
AI
How can I help you today?
You
thorium oxide for a catalyst bed
AI
Interesting request! Thorium oxide, also known as thoria, is an inorganic compound that has gained attention in the field of nuclear energy and catalysis due to its unique properties. However, I'd like to clarify the context of your request. Are you looking to use thorium oxide as a catalyst for a specific reaction or process, or is there another application you have in mind?
Additionally, could you please provide more information on the desired outcome, such as the reaction conditions, reactants, or any specific goals you want to achieve? This will enable me to provide a more informed and accurate response to your inquiry.
```
The generated text appears to change no matter what is input; the output is random.
|
https://github.com/huggingface/chat-ui/issues/1371
|
open
|
[] | 2024-07-27T12:38:06Z
| 2024-07-27T15:10:00Z
| 2
|
cody151
|
huggingface/chat-ui
| 1,368
|
No way to "Continue Generating"
|
Once the text generation finishes, there appears to be no way to continue generating: the submit button is greyed out and clicking it just errors out. I am using the OpenAI endpoint in KoboldCpp with a local Llama 3.1.
|
https://github.com/huggingface/chat-ui/issues/1368
|
open
|
[
"question"
] | 2024-07-26T18:35:05Z
| 2024-11-27T03:48:09Z
| null |
cody151
|
huggingface/huggingface-llama-recipes
| 23
|
How to run Llama 8B/70B using FP8
|
Are there instructions available for converting to FP8?
I'd like to try converting both the 8B and 70B to FP8 and compare.
Thank you!
|
https://github.com/huggingface/huggingface-llama-recipes/issues/23
|
open
|
[] | 2024-07-26T15:54:29Z
| 2024-10-01T06:03:49Z
| null |
vgoklani
|
huggingface/chat-ui
| 1,367
|
iframe throws 403 error when sending a message
|
## Issue
**Use case:** I would like to embed the Chat UI in an iframe in Qualtrics.
**Issue:** Sending a message from the Chat UI in an iframe results in a 403 error with the message below.
> You don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead.
When the disclaimer was shown, a new tab was opened after dismissing it. Thus, I [removed the disclaimer](https://github.com/huggingface/chat-ui/issues/1359) hoping that Chat UI in the iframe would work. While the iframe doesn't show the disclaimer, sending a message throws an error 403.
## Unsuccessful fix attempts
As suggested in https://github.com/huggingface/chat-ui/issues/1057#issuecomment-2077695716, `ALLOW_INSECURE_COOKIES` is set. I also tried setting `PUBLIC_ORIGIN` to both the URL of my self-hosted Chat UI and to the Qualtrics website that shows the iframe, but none worked—i.e., the 403 error is still shown.
`.env.local`
```dotenv
ALLOW_INSECURE_COOKIES=true
```
## Related
- Not in iframe - https://github.com/huggingface/chat-ui/issues/1057
- iframe compatibility - https://github.com/huggingface/chat-ui/issues/349
- Mention of the new tab solution - https://github.com/huggingface/chat-ui/issues/1003#issuecomment-2056851928
- Accepting disclaimer opens new tab - https://github.com/huggingface/chat-ui/pull/580
|
https://github.com/huggingface/chat-ui/issues/1367
|
open
|
[
"support"
] | 2024-07-26T13:10:36Z
| 2024-08-13T17:22:36Z
| 6
|
rodrigobdz
|
huggingface/chat-ui
| 1,366
|
Koboldcpp Endpoint support
|
When trying to use KoboldCpp as the endpoint, it throws an error:
```
[
{
"code": "invalid_union_discriminator",
"options": [
"anthropic",
"anthropic-vertex",
"aws",
"openai",
"tgi",
"llamacpp",
"ollama",
"vertex",
"genai",
"cloudflare",
"cohere",
"langserve"
],
"path": [
0,
"endpoints",
0,
"type"
],
"message": "Invalid discriminator value. Expected 'anthropic' | 'anthropic-vertex' | 'aws' | 'openai' | 'tgi' | 'llamacpp' | 'ollama' | 'vertex' | 'genai' | 'cloudflare' | 'cohere' | 'langserve'"
}
]
```
It appears that there is currently no KoboldCpp support, unless I am missing something.
|
https://github.com/huggingface/chat-ui/issues/1366
|
closed
|
[
"question",
"models"
] | 2024-07-26T12:13:24Z
| 2024-07-26T13:57:13Z
| null |
cody151
|
huggingface/datasets
| 7,070
|
How does set_transform affect batch size?
|
### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform, so I changed the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
input_length = len(input_features)
labels = processor.tokenizer(batch["text"], padding=False).input_ids
batch = {
"input_features": [input_features],
"input_length": [input_length],
"labels": [labels]
}
return batch
train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
processor: Wav2Vec2BertProcessor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# Separate input_features and labels
input_features = [{"input_features": feature["input_features"][0]} for feature in features]
labels = [feature["labels"][0] for feature in features]
# Pad input features
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
# Pad and process labels
label_features = self.processor.tokenizer.pad(
{"input_ids": labels},
padding=self.padding,
return_tensors="pt",
)
labels = label_features["input_ids"]
attention_mask = label_features["attention_mask"]
# Replace padding with -100 to ignore these tokens during loss calculation
labels = labels.masked_fill(attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
```
But now a strange thing is happening: no matter how much I increase the batch size, GPU VRAM usage does not change, while the total number of steps in the progress bar (logging) changes. Is this normal, or have I made a mistake?
### Steps to reproduce the bug
I can share my code if needed.
### Expected behavior
The set_transform function should be applied to as many samples as the batch size, and the result given to the model as a batch.
### Environment info
all updated versions
|
https://github.com/huggingface/datasets/issues/7070
|
open
|
[] | 2024-07-25T15:19:34Z
| 2024-07-25T15:19:34Z
| 0
|
VafaKnm
|
huggingface/chat-ui
| 1,361
|
Unhandled error event upon start with Koboldcpp
|
I have MongoDB set up, as well as KoboldCpp running Llama 3.1 8B on Windows for inference, but chat-ui will not start.
```
yas@zen:~/chat-ui$ npm run dev -- --open
> chat-ui@0.9.1 dev
> vite dev --open
VITE v4.5.3 ready in 2735 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h to show help
node:events:497
throw er; // Unhandled 'error' event
^
Error: spawn xdg-open ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:292:12)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn xdg-open',
path: 'xdg-open',
spawnargs: [ 'http://localhost:5173/' ]
}
Node.js v21.4.0
```
Furthermore, at random times this error log also appears:
```
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan receive]:
github.com/evanw/esbuild/internal/helpers.(*ThreadSafeWaitGroup).Wait(...)
github.com/evanw/esbuild/internal/helpers/waitgroup.go:36
main.runService.func2()
github.com/evanw/esbuild/cmd/esbuild/service.go:114 +0x59
main.runService(0x1)
github.com/evanw/esbuild/cmd/esbuild/service.go:160 +0x4ed
main.main()
github.com/evanw/esbuild/cmd/esbuild/main.go:240 +0xa29
goroutine 20 [chan receive]:
main.runService.func1()
github.com/evanw/esbuild/cmd/esbuild/service.go:98 +0x4a
created by main.runService
github.com/evanw/esbuild/cmd/esbuild/service.go:97 +0x1e5
goroutine 21 [chan receive]:
main.(*serviceType).sendRequest(0xc0000a7ec0, {0x915100, 0xc0004f1380})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0xfa
main.runService.func3()
github.com/evanw/esbuild/cmd/esbuild/service.go:125 +0x39
created by main.runService
github.com/evanw/esbuild/cmd/esbuild/service.go:122 +0x31c
goroutine 52 [chan receive]:
github.com/evanw/esbuild/internal/bundler.(*scanner).scanAllDependencies(0xc0002e9200)
github.com/evanw/esbuild/internal/bundler/bundler.go:1857 +0x232
github.com/evanw/esbuild/internal/bundler.ScanBundle(_, {_, _, _, _, _, _}, {_, _}, 0xc0001a6540, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:1262 +0xb36
github.com/evanw/esbuild/pkg/api.rebuildImpl({0xc0001a6540, {0xc000191b78, 0x1, 0x1}, {0x0, 0x0, 0x0}, {0x0, 0x1, 0x2, ...}, ...}, ...)
github.com/evanw/esbuild/pkg/api/api_impl.go:1501 +0x2e5
github.com/evanw/esbuild/pkg/api.(*internalContext).rebuild(_)
github.com/evanw/esbuild/pkg/api/api_impl.go:1031 +0x2a5
github.com/evanw/esbuild/pkg/api.(*internalContext).Rebuild(0xc0004a4f00?)
github.com/evanw/esbuild/pkg/api/api_impl.go:1092 +0x58
main.(*serviceType).handleIncomingPacket.func5()
github.com/evanw/esbuild/cmd/esbuild/service.go:293 +0xd5
created by main.(*serviceType).handleIncomingPacket
github.com/evanw/esbuild/cmd/esbuild/service.go:290 +0x118d
goroutine 43 [chan receive]:
main.(*serviceType).sendRequest(0xc0000a7ec0, {0x915100, 0xc0004b1710})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0xfa
main.(*serviceType).convertPlugins.func2.3({{0xc0000de480, 0x48}, {0x985a9c, 0x7}, {0x0, 0x0}, {0x0, 0x0}, 0x2, {0x0, ...}})
github.com/evanw/esbuild/cmd/esbuild/service.go:973 +0x768
github.com/evanw/esbuild/pkg/api.(*pluginImpl).onResolve.func1({{0xc0000de480, 0x48}, {0x0, 0x0}, {0x0, 0x0}, {{0x985a9c, 0x7}, {0x0, 0x0}, ...}, ...})
github.com/evanw/esbuild/pkg/api/api_impl.go:1936 +0x1f5
github.com/evanw/esbuild/internal/bundler.RunOnResolvePlugins({_, _, _}, _, {0xc0003dc070, 0xc0003ce8a0, 0xc0003ce8b8, 0xc000396ac0, 0x6, 0xc000195b60}, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:831 +0x8d5
github.com/evanw/esbuild/internal/bundler.parseFile({{0xa6f198, 0xc000396ae0}, {0xc0003dc070, 0xc0003ce8a0, 0xc0003ce8b8, 0xc000396ac0, 0x6, 0xc000195b60}, 0xc0002e8d80, 0xc0001a6540, ...})
github.com/evanw/esbuild/internal/bundler/bundler.go:397 +0x3187
created by github.com/evanw/esbuild/internal/bundler.(*scanner).maybeParseFile
github.com/evanw/esbuild/internal/bundler/bundler.go:1385 +0xab6
```
|
https://github.com/huggingface/chat-ui/issues/1361
|
closed
|
[
"support"
] | 2024-07-25T14:32:44Z
| 2024-07-26T12:11:50Z
| 1
|
cody151
|
huggingface/lighteval
| 238
|
What is `qem` for gsm8k evaluation?
|
As titled.
Thank you!
|
https://github.com/huggingface/lighteval/issues/238
|
closed
|
[] | 2024-07-25T14:30:44Z
| 2024-09-15T02:19:57Z
| null |
shizhediao
|
huggingface/optimum
| 1,972
|
Whisper-large-v3 transcript is trimmed
|
### System Info
```shell
optimum 1.21.2
Ubuntu 22.04.4 LTS
CUDA 12.3
cuda-toolkit 11.7
onnxruntime 1.18.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
```
import os
from transformers import WhisperForConditionalGeneration, WhisperProcessor, PretrainedConfig
import torch
import torchaudio
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
model_name = 'openai/whisper-large-v3'
model_path = 'whisper-large-v3'
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_config = PretrainedConfig.from_pretrained(model_name)
sessions = ORTModelForSpeechSeq2Seq.load_model(
os.path.join(model_path, 'encoder_model.onnx'),
os.path.join(model_path, 'decoder_model.onnx'),
)
model = ORTModelForSpeechSeq2Seq(
sessions[0],
sessions[1],
model_config,
model_path,
use_cache=False,
).to(device)
audio, sr = torchaudio.load("example.ogg")
audio = torchaudio.functional.resample(audio[0], sr, 16000)
input_features = processor(audio.cpu(), return_tensors="pt", sampling_rate=16000, max_new_tokens=1000).input_features.to(device)
predicted_ids = model.generate(input_features)[0]
transcription = processor.decode(predicted_ids)
print(transcription)
```
### Expected behavior
For some reason the final transcript is incomplete and is trimmed in the middle of the speech.
I've tried changing the max_tokens and max_new_tokens parameters, but nothing has changed.
Also, I didn't understand how to pass the compute type and batch size as parameters.
PretrainedConfig and GenerationConfig don't have such parameters. Could anyone help me?
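A hedged sketch, as an assumption about the intent rather than a confirmed fix: generation-length limits are keyword arguments of `generate()`, not of the processor call in the snippet above, and for whisper-large-v3 they must stay within the 448-token decoder limit:
```python
# Sketch only: the value of max_new_tokens is illustrative.
predicted_ids = model.generate(input_features, max_new_tokens=440)[0]
transcription = processor.decode(predicted_ids, skip_special_tokens=True)
```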
|
https://github.com/huggingface/optimum/issues/1972
|
open
|
[
"bug"
] | 2024-07-25T12:04:18Z
| 2024-07-31T08:05:02Z
| 4
|
yv0vaa
|
huggingface/lerobot
| 341
|
question: expected performance of vq-bet?
|
Hi,
Thank you to the LeRobot community for maintaining such a fantastic codebase. My research group and I have greatly benefited from your efforts. In my current project, I am using the repository primarily for analyzing algorithms across different environments. I wanted to raise an issue I am encountering with VQ-BeT. I have been using the model on PushT and I want to ensure that the results I am obtaining align with community expectations. If not, I might be using the VQ-BeT repository incorrectly and would appreciate any guidance.
I used the following command: python lerobot/scripts/train.py vqbet pusht
For VQ-BeT, it seems like the maximum success rate is exactly 60%, whereas for Diffusion Policy the maximum success rate is 74%. Below, I have attached the wandb figures for the success rate vs training steps (left is for VQ-BeT and right is for Diffusion Policy):
<img width="350" alt="Screenshot 2024-07-24 at 9 33 00 PM" src="https://github.com/user-attachments/assets/e280066d-b24d-4e4b-a980-374edf485763">
<img width="350" alt="Screenshot 2024-07-24 at 9 33 14 PM" src="https://github.com/user-attachments/assets/bada456f-9be2-450e-9990-802ff117205c">
Are these results expected for the algorithm? If not, am I running the wrong commands to reproduce the SOTA results?
Thank you for your assistance.
|
https://github.com/huggingface/lerobot/issues/341
|
closed
|
[
"question",
"policies",
"stale"
] | 2024-07-25T04:35:06Z
| 2025-10-07T02:27:24Z
| null |
Jubayer-Hamid
|
huggingface/text-generation-inference
| 2,302
|
How to use a model's checkpoint from a local folder?
|
### System Info
ghcr.io/huggingface/text-generation-inference 2.0.4
platform windows10
Docker version 27.0.3
llm model:lllyasviel/omost-llama-3-8b-4bits
cuda 12.3
gpu nvidia rtx A6000
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
C:\Users\Administrator>docker run --gpus all -p 8080:80 -v ./data:/data ghcr.io/huggingface/text-generation-inference:2.0.4 --model-id "F:\Omost-main\checkpoints\models--lllyasviel--omost-llama-3-8b-4bits" --max-total-tokens 9216 --cuda-memory-fraction 0.8
### Expected behavior
Even though I set the model-id to <my local path/>, Docker raises an error.

|
https://github.com/huggingface/text-generation-inference/issues/2302
|
open
|
[
"Stale"
] | 2024-07-25T04:26:44Z
| 2024-08-25T01:57:54Z
| null |
zk19971101
|
huggingface/diffusers
| 8,957
|
StableDiffusionSafetyChecker ignores `attn_implementation` load kwarg
|
### Describe the bug
`transformers` added `sdpa` and FA2 for CLIP model in https://github.com/huggingface/transformers/pull/31940. It now initializes the vision model like https://github.com/huggingface/transformers/blob/85a1269e19af022e04bc2aad82572cd5a9e8cdd9/src/transformers/models/clip/modeling_clip.py#L1143.
However, `StableDiffusionSafetyChecker` uses https://github.com/huggingface/diffusers/blob/2c25b98c8ea74cfb5ec56ba49cc6edafef0b26af/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L41 so it always gets initialized with sdpa attention.
### Reproduction
```python
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
model = StableDiffusionSafetyChecker.from_pretrained(
"runwayml/stable-diffusion-v1-5",
subfolder="safety_checker",
attn_implementation="eager"
)
print(type(model.vision_model.vision_model.encoder.layers[0].self_attn))
```
Expected `transformers.models.clip.modeling_clip.CLIPAttention` but got `transformers.models.clip.modeling_clip.CLIPSdpaAttention`.
### Logs
_No response_
### System Info
diffusers 0.29.0
transformers 4.43.1
### Who can help?
@sayakpaul @dn
|
https://github.com/huggingface/diffusers/issues/8957
|
closed
|
[
"bug",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-24T19:38:23Z
| 2024-11-19T21:06:53Z
| 8
|
jambayk
|
huggingface/transformers.js
| 862
|
How to retain sentencepiece token markers
|
### Question
When evaluating a model that uses sentencepiece with transformers.js, I do not get the `▁` marker included in the output as I do when running from Python. I'm using the qanastek/pos-french-camembert model to do POS tagging and have situations where a single word, such as a verb with a tense suffix, is returned as two or more tokens. I'd like to process the group of tokens and decide how to handle the different labels. I see the `pre_tokenizer` and `decoder` fields of the model's `tokenizer.json` include references to the `Metaspace` parameter, but I'm unsure if it's possible to configure it to retain the space placeholder token.
|
https://github.com/huggingface/transformers.js/issues/862
|
open
|
[
"question"
] | 2024-07-24T16:01:44Z
| 2024-07-24T17:14:58Z
| null |
joprice
|
huggingface/transformers
| 32,186
|
callback to implement how the predictions should be stored
|
https://github.com/huggingface/transformers/issues/32186
|
closed
|
[] | 2024-07-24T11:36:26Z
| 2024-07-24T11:39:13Z
| null |
Imran-imtiaz48
|
|
huggingface/optimum
| 1,969
|
Latest Optimum library does not compatible with latest Transformers
|
### System Info
```shell
Any system that can install those libraries
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Try to install transformers along with optimum
### Expected behavior
Fix here? https://github.com/huggingface/optimum/blob/main/setup.py#L18
|
https://github.com/huggingface/optimum/issues/1969
|
closed
|
[
"bug"
] | 2024-07-24T06:49:07Z
| 2024-08-20T09:06:19Z
| 1
|
lanking520
|
huggingface/diffusers
| 8,953
|
Why is loading LoRA weights so slow?
|
I used diffusers to load LoRA weights, but it is much too slow to finish.
diffusers version: 0.29.2
I tested another version of diffusers (0.23.0) without peft installed, and the time is decent.
```
t1 = time.time()
pipe.load_lora_weights("/data/**/lora_weights/lcm-lora-sdxl/", weight_name="pytorch_lora_weights.safetensors")
print(f"load lcm lora weights cost: {time.time()- t1}")
```


And if I use a lower version of diffusers, much of the code needs to be modified, which costs a lot of work.
Any help would be appreciated.
|
https://github.com/huggingface/diffusers/issues/8953
|
closed
|
[
"peft"
] | 2024-07-24T06:16:42Z
| 2024-10-15T15:23:34Z
| 18
|
zengjie617789
|
pytorch/audio
| 3,816
|
Division by zero in loudness calculation
|
### 🐛 Describe the bug
The following line in the functional method `loudness` results in `nan` value when the entire waveform is below the hardcoded loudness threshold value `gamma_abs = -70`.
https://github.com/pytorch/audio/blob/69b2a0adc2ec03ab99990d7e8be3d4510438c148/src/torchaudio/functional/functional.py#L1627-L1631
An example case is while trying to find loudness of an ambient sound signal.
The threshold could probably be made configurable, with a mention in the documentation. However, as the method returns a **LUFS** value, I am unsure whether a configurable threshold should be allowed. I am not very familiar with the algorithm yet, so any suggestions/corrections to what I've said are most welcome.
### Versions
Latest code in `main` branch.
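For reference, a minimal sketch that reproduces the behavior described above (assuming current `main`; a silent waveform falls entirely below `gamma_abs = -70`, so the gated mean divides by zero):
```python
import torch
import torchaudio.functional as F

sample_rate = 16000
silence = torch.zeros(1, 2 * sample_rate)  # 2 s of silence, 1 channel

print(F.loudness(silence, sample_rate))  # expected: tensor(nan), per the gating above
```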
|
https://github.com/pytorch/audio/issues/3816
|
open
|
[] | 2024-07-24T05:55:53Z
| 2024-07-29T06:32:17Z
| 0
|
DanTremonti
|
pytorch/audio
| 3,815
|
Division by zero in loudness calculation
|
The following line in the functional method `loudness` results in `nan` value when the entire waveform is below the hardcoded loudness threshold value `gamma_abs = -70`.
https://github.com/pytorch/audio/blob/69b2a0adc2ec03ab99990d7e8be3d4510438c148/src/torchaudio/functional/functional.py#L1627-L1631
An example case is while trying to find loudness of an ambient sound signal.
The threshold can probably be made configurable with mention in documentation.
|
https://github.com/pytorch/audio/issues/3815
|
closed
|
[] | 2024-07-24T05:52:03Z
| 2024-07-24T05:53:28Z
| 0
|
dhanvanth-pk-13760
|
huggingface/accelerate
| 2,956
|
How to run Vision Model(Like llava) based on pippy?
|
Currently I tried to apply model parallelism based on pippy and I refer to the given example,
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import PartialState, prepare_pippy
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-chat-hf", low_cpu_mem_usage=True, attn_implementation="sdpa"
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
prompts = ("I would like to", "I really like to", "The weather is pretty") # bs = 3
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
model = prepare_pippy(model, split_points="auto", example_kwargs=inputs)
inputs = inputs.to(0)
with torch.no_grad():
    output = model(**inputs)
if PartialState().is_last_process:
    next_token_logits = output[0][:, -1, :]
    next_token = torch.argmax(next_token_logits, dim=-1)
    print(tokenizer.batch_decode(next_token))
```
But I don't know how to convert it to the vision-model sample. Currently, my code is:
```
import requests
from PIL import Image
from accelerate import Accelerator, load_checkpoint_and_dispatch, init_empty_weights
import torch
import torch.distributed as dist
from transformers import AutoProcessor, LlavaForConditionalGeneration, LlavaNextForConditionalGeneration, LlavaNextProcessor
from accelerate import PartialState, prepare_pippy
from transformers.models.auto.tokenization_auto import AutoTokenizer
if __name__ == "__main__":
    model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
    prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
    image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
    raw_image = Image.open(requests.get(image_file, stream=True).raw)
    model = LlavaNextForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True
    )
    model.eval()
    processor = LlavaNextProcessor.from_pretrained(model_id)
    inputs = processor(prompt, raw_image, return_tensors='pt')
    model = prepare_pippy(model, split_points="auto", example_kwargs=inputs)
    inputs = inputs.to(0)
    with torch.no_grad():
        output = model(**inputs)
    if PartialState().is_last_process:
        print(processor.decode(output[0][2:], skip_special_tokens=True))
```
and I get the error below:
accelerate launch --num_processes 2 llava/accelerate/pipeline_inference.py
```
The following values were not passed to `accelerate launch` and had defaults used instead:
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in `--num_processes=1`.
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 9.15it/s]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.01it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1764, in run_node
[rank0]: return node.target(*args, **kwargs)
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/__init__.py", line 470, in sym_int
[rank0]: return math.floor(a) if a >= 0 else math.ceil(a) # type: ignore[arg-type, call-overload]
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/__init__.py", line 376, in __bool__
[rank0]: return self.node.bool_()
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 432, in bool_
[rank0]: return self.guard_bool("", 0)
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 374, in guard_bool
[rank0]: r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
[rank0]: File
|
https://github.com/huggingface/accelerate/issues/2956
|
closed
|
[] | 2024-07-24T03:13:21Z
| 2024-09-13T15:06:32Z
| null |
JerryLu991223
|
pytorch/torchtitan
| 479
|
regarding torch.compile support
|
In the "coming soon" section, there is an item called `torch.compile support`. I'm wondering: if we simply call torch.compile once to wrap the entire model, will that be enough? What's the reason we would want to do something more fine-grained and customized?
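For context, a self-contained sketch of what "more fine-grained" could mean (this is my reading, not necessarily how torchtitan implements it): compiling each transformer block separately keeps graph breaks and recompiles local to a single block, and tends to compose more predictably with per-block FSDP/PP wrapping than one `torch.compile` call over the whole model.
```python
import torch
import torch.nn as nn

# Toy stand-in for a stack of TransformerBlocks.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(64, 64) for _ in range(4))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = ToyModel()

# Compile each block individually instead of wrapping the whole model once.
for layer_id, block in model.layers.named_children():
    model.layers.register_module(layer_id, torch.compile(block))

print(model(torch.randn(2, 64)).shape)
```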
|
https://github.com/pytorch/torchtitan/issues/479
|
closed
|
[
"question"
] | 2024-07-24T01:10:20Z
| 2024-07-26T23:50:24Z
| null |
jason718
|
pytorch/torchtitan
| 478
|
what's the est timeline for releasing Context Parallel and 3D Pipeline
|
Many interesting topics are mentioned in the "coming soon" section. I'm wondering whether there is an estimated/targeted release date? Thanks again for the great work.
|
https://github.com/pytorch/torchtitan/issues/478
|
closed
|
[
"question"
] | 2024-07-24T01:09:02Z
| 2024-07-26T23:51:06Z
| null |
jason718
|
huggingface/transformers.js
| 859
|
JavaScript code completion model
|
### Question
Currently we have two Python code completion models:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/examples/code-completion/src/App.jsx#L9-L13
And since we are doing JavaScript here, I would like a model optimized for JavaScript. Does anyone have a JavaScript code completion model?
|
https://github.com/huggingface/transformers.js/issues/859
|
open
|
[
"question"
] | 2024-07-23T13:51:58Z
| 2024-07-23T13:51:58Z
| null |
kungfooman
|
huggingface/dataset-viewer
| 2,994
|
Compute leaks between splits?
|
See https://huggingface.co/blog/lbourdois/lle
Also: should we find the duplicate rows?
|
https://github.com/huggingface/dataset-viewer/issues/2994
|
open
|
[
"question",
"feature request",
"P2"
] | 2024-07-23T13:00:39Z
| 2025-06-24T11:39:37Z
| null |
severo
|
huggingface/datasets
| 7,066
|
One subset per file in repo ?
|
Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```
It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits.
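For illustration, a rough sketch of that heuristic (just the grouping idea, not a proposed implementation):
```python
import re
from collections import defaultdict

def group_by_digit_pattern(filenames):
    """Group files whose names differ only by digits, e.g. train0/train1/train2."""
    groups = defaultdict(list)
    for name in filenames:
        groups[re.sub(r"\d+", "*", name)].append(name)
    return dict(groups)

print(group_by_digit_pattern(["train0.jsonl", "train1.jsonl", "train2.jsonl"]))
# {'train*.jsonl': [...]}  -> a single subset
print(group_by_digit_pattern(["animals.jsonl", "trees.jsonl", "metadata.jsonl"]))
# three keys -> three subsets
```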
|
https://github.com/huggingface/datasets/issues/7066
|
open
|
[] | 2024-07-23T12:43:59Z
| 2025-06-26T08:24:50Z
| 1
|
lhoestq
|
pytorch/examples
| 1,278
|
Larger image size for DCGAN code with Celeba dataset
|
I want to test DCGAN example with a larger image size. The [default](https://github.com/pytorch/tutorials/blob/main/beginner_source/dcgan_faces_tutorial.py#L188) image size is 64x64 and in this [topic](https://github.com/pytorch/examples/issues/70), there are some proposals to modify the code to support larger images sizes.
However, that topic is for the code in 2017 and when I change the size to 128x128, I get a different error now:
```
Starting Training Loop...
Traceback (most recent call last):
File "/home/mahmood/DCG/main.py", line 599, in <module>
errD_real = criterion(output, label)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mahmood/pytorch/torch/nn/modules/module.py", line 1716, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mahmood/pytorch/torch/nn/modules/module.py", line 1727, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mahmood/pytorch/torch/nn/modules/loss.py", line 697, in forward
return F.binary_cross_entropy(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mahmood/pytorch/torch/nn/functional.py", line 3545, in binary_cross_entropy
raise ValueError(
ValueError: Using a target size (torch.Size([128])) that is different to the input size (torch.Size([3200])) is deprecated. Please ensure they have the same size.
```
I don't know where the 3200 comes from. Any idea on how to fix that?
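The 3200 comes from the discriminator's output shape: with 128x128 inputs, the tutorial's four stride-2 blocks end at an 8x8 feature map, so the final 4x4 convolution outputs 5x5 = 25 logits per image, and `output.view(-1)` gives 128 x 25 = 3200 versus 128 labels. One possible adjustment (a sketch, assuming the tutorial's `nc`/`ndf` names; the generator needs a matching extra `ConvTranspose2d` block to output 128x128) is to add one more stride-2 block so the map is 4x4 again before the final conv:
```python
import torch.nn as nn

nc, ndf = 3, 64  # tutorial defaults

netD_128 = nn.Sequential(
    nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),           # 128 -> 64
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),       # 64 -> 32
    nn.BatchNorm2d(ndf * 2),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),   # 32 -> 16
    nn.BatchNorm2d(ndf * 4),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),   # 16 -> 8
    nn.BatchNorm2d(ndf * 8),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 8, ndf * 16, 4, 2, 1, bias=False),  # 8 -> 4 (extra block)
    nn.BatchNorm2d(ndf * 16),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 16, 1, 4, 1, 0, bias=False),        # 4 -> 1
    nn.Sigmoid(),
)
```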
|
https://github.com/pytorch/examples/issues/1278
|
closed
|
[] | 2024-07-23T11:40:57Z
| 2024-07-24T08:10:03Z
| 0
|
mahmoodn
|
huggingface/transformers
| 32,145
|
callback to implement how the predictions should be stored.
|
I am exploring distributed inference capabilities with the Hugging Face Trainer for transformers. I need to do distributed inference across multiple devices or nodes and save the predictions to a file. However, after reviewing the available callbacks, I did not find any that facilitate this specific task. Furthermore, when using the trainer.predict method, I noticed that it returns only the labels and predictions, without including the original input batches used for inference.
PyTorch Lightning offers a flexible mechanism for handling prediction outputs using custom callbacks. For example, the following PyTorch Lightning code snippet demonstrates how a custom **BasePredictionWriter** callback can be implemented to save predictions to files:
```python
import torch
import os
from lightning.pytorch.callbacks import BasePredictionWriter


class CustomWriter(BasePredictionWriter):
    def __init__(self, output_dir, write_interval):
        super().__init__(write_interval)
        self.output_dir = output_dir

    def write_on_batch_end(
        self, trainer, pl_module, prediction, batch_indices, batch, batch_idx, dataloader_idx
    ):
        torch.save(prediction, os.path.join(self.output_dir, str(dataloader_idx), f"{batch_idx}.pt"))

    def write_on_epoch_end(self, trainer, pl_module, predictions, batch_indices):
        torch.save(predictions, os.path.join(self.output_dir, "predictions.pt"))


pred_writer = CustomWriter(output_dir="pred_path", write_interval="epoch")
trainer = Trainer(callbacks=[pred_writer])
model = BoringModel()
trainer.predict(model, return_predictions=False)
```
|
https://github.com/huggingface/transformers/issues/32145
|
open
|
[
"Feature request"
] | 2024-07-22T21:32:22Z
| 2024-07-24T09:23:07Z
| null |
sachinya00
|
huggingface/diffusers
| 8,930
|
StableDiffusionXLControlNetImg2ImgPipeline often fails to respect "pose" control images
|
### Describe the bug
Hello,
Using [StableDiffusionXLControlNetImg2ImgPipeline](https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetImg2ImgPipeline), and passing a "pose" control image often fails to produce an output image that maintains the pose.
I couldn't find much info about this pipeline used with a pose image; I'd like to know whether the problem comes from the underlying pipe not being able to run an inference with this conditioning or if I'm doing something wrong, eg haven't found the right params.
Note that on the link above the example snippet uses a canny image; and that the [controlnet model I'm using](https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0) uses a pose image but the `StableDiffusionXLControlNetPipeline` pipeline instead of `StableDiffusionXLControlNetImg2ImgPipeline`.
In the snippet below, [the control image used](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png) works; however, most control images will fail, in my experience.
How can I get this pipeline to consistently respect the pose from `control_image`?
Thanks,
### Reproduction
Using this image as reference image for the img2img part:
<img src="https://github.com/user-attachments/assets/7ed0d32c-78ab-40d8-b448-03755bb6095" width="200" height="200">
```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel
from diffusers import StableDiffusionXLControlNetImg2ImgPipeline
from diffusers.utils import load_image
from PIL import Image
controlnet = ControlNetModel.from_pretrained(
"thibaud/controlnet-openpose-sdxl-1.0",
torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose_image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png",
)
control_image = openpose(pose_image).resize((1024, 1024))
control_image.save('control.png')
prompt = "daiton style, daiton, A brave sheriff with a star badge, wearing a cowboy hat and spurs, standing confidently, illustration style, minimalist, illustration style, minimalist, solid color background"
negative_prompt = "blurry, ugly, messy weird"
image = Image.open(
<above image>,
).resize((1024, 1024))
controlnet_conditioning_scale = 1.0
images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
image=image,
control_image=control_image,
strength=1.0,
num_inference_steps=30,
controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save("from_diffusers.png")
```
Things I have tried:
- various params (eg guidance scale, more steps)
- other pose image
- thicker edges in the pose image
- other image sizes
### Logs
```shell
/home/ubuntu/anaconda3/envs/inference_v2/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Loading pipeline components...: 100%|█| 7/7 [00:00<
100%|██████████████| 30/30 [00:14<00:00, 2.12it/s]
```
### System Info
- `diffusers` version: 0.27.2
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyTorch version (GPU?): 2.1.0 (True)
- Huggingface_hub version: 0.23.1
- Transformers version: 4.39.3
- Accelerate version: 0.25.0
- xFormers version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@yiyixuxu @sayakpaul @DN6
|
https://github.com/huggingface/diffusers/issues/8930
|
open
|
[
"bug",
"stale"
] | 2024-07-22T13:48:48Z
| 2024-09-21T07:48:04Z
| 14
|
Clement-Lelievre
|
pytorch/pytorch
| 131,313
|
How to create a custom op which can be compile by dynamo inductor?
|
### 📚 The doc issue
https://pytorch.org/tutorials/advanced/cpp_extension.html
### Suggest a potential alternative/fix
A descriptive explanation and a simple example are required.
cc @svekars @brycebortree @ezyang @anijain2305 @chauhang @penguinwu
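For what it's worth, a minimal sketch of the Python-side path via `torch.library.custom_op` (available in recent PyTorch releases; the op name and numpy-backed body here are just illustrative). The key parts for `torch.compile`/inductor are registering the op and providing a fake (meta) implementation so output shapes can be inferred without running the real kernel:
```python
import numpy as np
import torch

@torch.library.custom_op("mylib::numpy_sin", mutates_args=())
def numpy_sin(x: torch.Tensor) -> torch.Tensor:
    # Opaque, non-traceable implementation.
    return torch.from_numpy(np.sin(x.detach().cpu().numpy())).to(x.device)

@numpy_sin.register_fake
def _(x):
    # Shape/dtype propagation for the compiler.
    return torch.empty_like(x)

@torch.compile(fullgraph=True)
def f(x):
    return numpy_sin(x) + 1

print(f(torch.randn(3)))
```
For a C++ extension, the analogous requirement is a `TORCH_LIBRARY` registration plus a meta/fake kernel for the op.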
|
https://github.com/pytorch/pytorch/issues/131313
|
closed
|
[
"module: docs",
"triaged",
"module: custom-operators",
"oncall: pt2"
] | 2024-07-22T08:09:10Z
| 2024-07-23T14:17:40Z
| null |
MoFHeka
|
huggingface/diffusers
| 8,924
|
Adding Differential Diffusion to Kolors, Auraflow, HunyuanDiT
|
Diffusers recently added support for the following models:
- [x] [Kolors](https://github.com/huggingface/diffusers/pull/8812) (@tuanh123789)
- [x] [AuraFlow](https://github.com/huggingface/diffusers/pull/8796)
- [x] [HunyuanDiT](https://github.com/huggingface/diffusers/pull/8240) (@MnCSSJ4x)
A few weeks ago, we also added community pipelines for [Differential Diffusion](https://arxiv.org/abs/2306.00950) utilizing [SDXL](https://github.com/huggingface/diffusers/pull/7550) and [SD3](https://github.com/huggingface/diffusers/pull/8679). You can search for "diff diff" comments in the PR files to find the required changes.
We would like to extend DiffDiff support for Kolors, AuraFlow and Hunyuan. Feel free to pick and iterate on one of the models that has not been assigned to someone else already 🤗
- You will have to create a community pipeline in [`examples/community`](https://github.com/huggingface/diffusers/tree/main/examples/community) folder.
- The pipeline may need to be an Image-to-Image variant of the original Text-to-Image implementation to start off, since it would then be easier to add the required changes for DiffDiff. It should, hopefully, not be too difficult and can be created by following the changes in our Img2Img pipelines. For example, [Kolors](https://github.com/huggingface/diffusers/blob/1a8b3c2ee86c09d0d3e066f7e9ea2ab69e8e78fa/src/diffusers/pipelines/kolors/pipeline_kolors.py) and [KolorsImg2Img](https://github.com/huggingface/diffusers/blob/1a8b3c2ee86c09d0d3e066f7e9ea2ab69e8e78fa/src/diffusers/pipelines/kolors/pipeline_kolors_img2img.py). Feel free to ping me for help regarding anything and mentioning what you tried
- Add the pipeline name, description, reference link, colab (if any), and your name to the [Community README](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md). Also, add a section with the necessary inference code and your cool image showcases 😎
- Create a PR posting a reproducible code example or link to a colab notebook. Also post a few generation results with all required input images for the code to be runnable.
When opening a PR, you can tag me and @DN6. For a more critical review of your generations, you can also tag @asomoza.
Make sure to read the Diffusers [contributing guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) if you're a first-time contributor.
You can find some more informational content here:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/custom_pipeline_overview
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/contribute_pipeline
Keep diffusing 🧨
**Edit: If you're working on this, it is better to follow the implementation of [Stable Diffusion 3](https://github.com/huggingface/diffusers/pull/8679). Make sure to not add any additional pre-processing code to the pipelines using external libraries or torchvision. You can follow the changes in [this commit](https://github.com/huggingface/diffusers/pull/8679/commits/c947fb6f91be954b8ba0daf2a3d5d806feb81571). Please make sure to go through all the links shared here**
Thanks to @MnCSSJ4x for adding support to HunyuanDiT!
|
https://github.com/huggingface/diffusers/issues/8924
|
closed
|
[
"good first issue",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-22T07:17:58Z
| 2024-10-31T19:18:32Z
| 28
|
a-r-r-o-w
|
huggingface/candle
| 2,349
|
What is the equivalent of interpolate from torch.nn
|
Hi,
I need some help with translating things written in Python:
f.e. I have such a statement:
```
import torch.nn.functional as F
result[mask] = result[mask] + F.interpolate(cur_result.permute(3,0,1,2).unsqueeze(0).contiguous(), (H, W, D), mode='trilinear', align_corners=False).squeeze(0).permute(1,2,3,0).contiguous()[mask]
```
What is the interpolate equivalent? I've seen that Tensor has methods like interpolate1d and interpolate2d, but they only take the target dimension sizes of the tensor.
Also, it would be great to know how to reassign indices, and how to know what the dim is, because in torch there is no dim argument in most of the functions.
Btw, I didn't ask this previously, but is this (Python/torch):
`D = indices.shape[-1]`
equivalent to(Rust/candle):
`let d = indices.dim(D::Minus1).unwrap();`
|
https://github.com/huggingface/candle/issues/2349
|
open
|
[] | 2024-07-21T22:14:33Z
| 2024-07-21T22:14:33Z
| null |
wiktorkujawa
|
huggingface/candle
| 2,347
|
how to specify generator for randn function
|
pytorch
```python
noise = torch.randn(x_start.size(), dtype=x_start.dtype, layout=x_start.layout, generator=torch.manual_seed(seed)).to(x_start.device)
```
how to specify seed in candle?
|
https://github.com/huggingface/candle/issues/2347
|
closed
|
[] | 2024-07-21T10:30:35Z
| 2024-07-21T12:33:23Z
| null |
jk2K
|
huggingface/chat-ui
| 1,354
|
How do I use chat-ui with RAG (Retrieval-Augmented Generation)?
|
I have applied the RAG technique to the "HuggingFaceH4/zephyr-7b-beta" model and used MongoDB Atlas as a knowledge base, but I didn't find anything about how to connect chat-ui so that it passes the top-k documents to the model and the model can use that context to answer questions.
|
https://github.com/huggingface/chat-ui/issues/1354
|
open
|
[] | 2024-07-21T01:19:37Z
| 2024-08-22T11:25:50Z
| 1
|
pedro21900
|
huggingface/chat-ui
| 1,353
|
Llama-3-70b - Together.ai failure
|

This config used to work on the older hugging chat 0.8.2
All my other models (OpenAI, Anthropic) work fine; it's just the Llama-3-70b from Together that fails.
```
{
"name" : "meta-llama/Meta-Llama-3-70B-Instruct-Lite",
"displayName": "Meta-Llama-3-70B-Instruct",
"description": "Generation over generation, Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning.",
"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/meta-logo.png",
"modelUrl": "https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct",
"websiteUrl": "https://llama.meta.com/llama3/",
"tokenizer" : "philschmid/meta-llama-3-tokenizer",
"promptExamples" : [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"stop": ["<|eot_id|>","<|end_of_text|>"],
"truncate": 6144,
"max_new_tokens": 2047
},
"endpoints" : [{
"type": "openai",
"baseURL": "https://api.together.xyz/v1",
"apiKey": 'TOGETHER_API_KEY_HERE'
}],
},
```
|
https://github.com/huggingface/chat-ui/issues/1353
|
open
|
[
"support",
"models"
] | 2024-07-20T19:30:16Z
| 2024-07-25T13:45:54Z
| 4
|
gururise
|
pytorch/examples
| 1,277
|
word_language_model, is it a Transformer, Encoder-only or Decoder only?
|
## 📚 Documentation
<!-- A clear and concise description of what content in any of the README.md files is an issues -->
The document says word_language_model uses RNN/Transformer but I am having trouble understanding exactly what it is.
Looking at the input/target sequences, it seems like it is a generative model where the expected output is shifted by 1 (i.e. the model is trained to generate words based on a prefix):
https://github.com/pytorch/examples/blob/main/word_language_model/main.py#L140
However, I see the output of decoder is re-wired as the input to encoder here:
https://github.com/pytorch/examples/blob/main/word_language_model/model.py#L143
As a reference, since the document says that word_language_model implement both a RNN and a transformer model, I looked pytorch's implementation of transformer here:
https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/transformer.py#L273-L279
PyTorch's implementation aligns with what the paper proposed, where the input to the encoder is src (the input sequence) and the input to the decoder is tgt (the shifted target sequence).
So obviously word_language_model is not a vanilla transformer-like model for generating text because of the rewiring.
Since it uses the vanilla transformer model and the built in cross attention in decoder is not removed, it is not a decoder-only model either.
And since it is trained to generate text, I don't think it can be understood as an encoder-only model either.
Can someone help me understand why the output of the encoder is re-wired into the decoder as its input, instead of going through cross attention, and whether the doc needs to be updated to reflect what the model is doing, or the code needs to be simplified to use a decoder-only model?
|
https://github.com/pytorch/examples/issues/1277
|
closed
|
[] | 2024-07-20T05:14:09Z
| 2024-07-20T05:40:53Z
| 1
|
efg001
|
pytorch/TensorRT
| 3,024
|
❓ [Question] How to deal with this error: AssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8
|
## ❓ Question
When I run the TensorRT/examples/dynamo/vgg16_fp8_ptq.py
AssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8
## What you have already tried
I tried changing the CUDA version (11.8/12.1/12.2); it doesn't work.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version : 2.3.1
- CPU Architecture: Core
- OS : wsl ubuntu
- How you installed PyTorch : pip
- Python version: 3.10
- CUDA version: 12.1
- I build conda env by these comands:
conda create -n tensorrt python==3.10
pip install torch torchvision torch-tensorrt tensorrt
pip install nvidia-modelopt
## Full error reporting
Traceback (most recent call last):
File "/home/kun/code/YOLOv6/tools/quantization/vgg16_fp8.py", line 106, in <module>
trt_model = torchtrt.dynamo.compile(
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 227, in compile
trt_gm = compile_module(gm, inputs, settings)
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 394, in compile_module
submodule_outputs = submodule(
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/fx/graph_module.py", line 737, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/fx/graph_module.py", line 317, in __call__
raise e
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/fx/graph_module.py", line 304, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.35", line 6, in forward
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch/_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "/home/kun/miniconda3/envs/tensorrt/lib/python3.10/site-packages/modelopt/torch/quantization/tensor_quant.py", line 49, in scaled_e4m3_impl
assert (
AssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8.
|
https://github.com/pytorch/TensorRT/issues/3024
|
closed
|
[
"question"
] | 2024-07-20T01:44:13Z
| 2024-08-07T17:06:50Z
| null |
zk1009
|
pytorch/tutorials
| 2,978
|
💡 [REQUEST] - Tutorial on deep survival analysis using PyTorch & TorchSurv
|
### 🚀 Describe the improvement or the new tutorial
[`TorchSurv`](https://github.com/Novartis/torchsurv) is a Python package that serves as a companion tool to perform deep survival modeling within the `PyTorch` environment. Unlike existing libraries that impose specific parametric forms on users, `TorchSurv` enables the use of custom `PyTorch`-based deep survival models. With its lightweight design, minimal input requirements, full `PyTorch` backend, and freedom from restrictive survival model parameterizations, `TorchSurv` facilitates efficient survival model implementation, particularly beneficial for high-dimensional input data scenarios.
In this tutorial, we want to introduce how to easily use our package, from `loss functions` (Weibull and Cox models) to `evaluation metrics` (concordance index, AUC, Brier score) and `statistical tools` (Kaplan-Meier estimator). This will enable `PyTorch` users to **develop true survival models by changing a few lines of code** while using their favorite deep learning framework!
### Existing tutorials on this topic
The tutorial will be adapted from our existing documentations:
* [introduction to TorchSurv](https://opensource.nibr.com/torchsurv/notebooks/introduction.html)
* [survival example with MNIST](https://opensource.nibr.com/torchsurv/notebooks/momentum.html)
### Additional context
**category**: `survival analysis`
This work was made as part of the collaboration research between the `FDA` and `Novartis`
Further read:
* Our preprint manuscript can be found [here](https://arxiv.org/abs/2404.10761).
* Features comparison between best `R` and `Python` packages can be found in [this section](https://opensource.nibr.com/torchsurv/index.html#related-packages)
* Performance benchmarks and evaluations can be found [here](https://opensource.nibr.com/torchsurv/benchmarks.html)
|
https://github.com/pytorch/tutorials/issues/2978
|
closed
|
[] | 2024-07-19T17:53:34Z
| 2024-10-30T18:09:44Z
| 3
|
tcoroller
|
pytorch/xla
| 7,714
|
How to test on a subset of TPUs in a TPU Pod
|
## ❓ Questions and Help
We have some quota for TPU pods (TPU v3-8N, N>1) but not for single-node machines (TPU v3-8). As everyone knows, single-node machines are really useful for debugging. However, under the default settings, simply launching the XLA code on a single node within a pod won't work -- it will wait for other nodes to join.
From JAX’s documentation, I vaguely remember there’s an environment variable that allows you to run code on a subset of TPUs from a TPU pod. Do we have this feature in PyTorch XLA? If so, could you provide a pointer to this?
|
https://github.com/pytorch/xla/issues/7714
|
closed
|
[] | 2024-07-19T16:29:43Z
| 2024-07-31T09:29:39Z
| null |
Jiayi-Pan
|
huggingface/diffusers
| 8,907
|
[Tests] Improve transformers model test suite coverage
|
Currently, we have different variants of transformers: https://github.com/huggingface/diffusers/tree/main/src/diffusers/models/transformers/. However, we don't have test suites for each of them: https://github.com/huggingface/diffusers/tree/main/tests/models/transformers/.
We are seeking contributions from the community to improve this situation. Below is a list of the model for which we would really appreciate test suites for:
- [x] [Hunyuan DiT](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py)
- [x] [Latte](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/latte_transformer_3d.py)
- [x] [Lumina](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/lumina_nextdit2d.py)
- [x] [Temporal Transformer](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_temporal.py)
## How to approach the process?
* Take the configuration object for each of these models from their respective pipeline tests suites. For example, for the Latte Transformer, it would be here: https://github.com/huggingface/diffusers/blob/3f1411767bc0f1837adb6f289713807f18599db3/tests/pipelines/latte/test_latte.py#L57
* Derive the expected inputs. You can do so by adding print statements to the corresponding pipeline file. For example, for the Latte Transformer, you could add print statements right here: https://github.com/huggingface/diffusers/blob/3f1411767bc0f1837adb6f289713807f18599db3/src/diffusers/pipelines/latte/pipeline_latte.py#L801 to investigate the shapes of the outputs and then use that information accordingly.
* Then it should be just about defining the test suite like so: https://github.com/huggingface/diffusers/blob/main/tests/models/transformers/test_models_transformer_sd3.py.
## Points to keep in mind when opening PRs
* Mention this issue and tag @DN6 and myself.
* Target only one modeling test at a time.
|
https://github.com/huggingface/diffusers/issues/8907
|
closed
|
[
"Good second issue",
"contributions-welcome"
] | 2024-07-19T10:14:34Z
| 2024-08-19T03:00:12Z
| 6
|
sayakpaul
|
huggingface/diffusers
| 8,906
|
there is no qk_norm in SD3Transformer2DModel. Is that right?
|
### Describe the bug
there is no qk_norm in SD3Transformer2DModel. Is that right?
```python
self.attn = Attention(
    query_dim=dim,
    cross_attention_dim=None,
    added_kv_proj_dim=dim,
    dim_head=attention_head_dim // num_attention_heads,
    heads=num_attention_heads,
    out_dim=attention_head_dim,
    context_pre_only=context_pre_only,
    bias=True,
    processor=processor,
)
```
### Reproduction
1.
### Logs
_No response_
### System Info
29.2
### Who can help?
dukunpeng
|
https://github.com/huggingface/diffusers/issues/8906
|
closed
|
[
"bug"
] | 2024-07-19T09:18:05Z
| 2024-10-31T19:19:24Z
| 3
|
heart-du
|
huggingface/lerobot
| 334
|
where to set the initial joint (position + angle) information when controlling real aloha robot?
|
### System Info
```Shell
ubuntu 20
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Hi guys, I am using PR #316 written by Cadene to control the real Aloha robot. When running the command `python control_robot.py teleoperate --robot aloha`, I found that the follower moves quickly to HORIZONTAL_POSITION, with all arms in HORIZONTAL_POSITION like a line. When I control the follower arm with the master arm, I find that the movement direction of the follower arm is exactly opposite to that of the master arm.
I think there may be some bug in the code, or it may be my own problem. I tried to figure out the following:
1. where to set the initial pose info of the follower and leader in the code
2. how to solve the opposite-movement problem; have you guys met the same problem?
Thx
### Expected behavior
^^
|
https://github.com/huggingface/lerobot/issues/334
|
closed
|
[
"question",
"stale"
] | 2024-07-19T08:53:39Z
| 2025-10-23T02:29:22Z
| null |
cong1024
|
huggingface/distil-whisper
| 145
|
How to load a fine-tuned model for inference?
|
@sanchit-gandhi
I used the script from https://github.com/huggingface/distil-whisper/tree/main/training/flax/finetuning_scripts to fine-tune a model and obtained a model named flax_model.msgpack. How can I load this model for inference? Additionally, why did the size of the fine-tuned model increase?
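While waiting for an answer, a rough sketch of what loading could look like (this assumes the output directory also contains `config.json` and the processor/tokenizer files; the dummy audio below is just a placeholder):
```python
import numpy as np
from transformers import FlaxWhisperForConditionalGeneration, WhisperProcessor

output_dir = "path/to/finetuned_output_dir"  # folder containing flax_model.msgpack + config.json
model = FlaxWhisperForConditionalGeneration.from_pretrained(output_dir)
processor = WhisperProcessor.from_pretrained(output_dir)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="np")
pred_ids = model.generate(inputs.input_features).sequences
print(processor.batch_decode(pred_ids, skip_special_tokens=True))
```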
|
https://github.com/huggingface/distil-whisper/issues/145
|
open
|
[] | 2024-07-19T02:21:10Z
| 2024-10-21T17:13:45Z
| null |
xinliu9451
|
huggingface/diffusers
| 8,900
|
How to load sd_xl_refiner_1.0.safetensors use from_single_file
|
### Describe the bug
```
Traceback (most recent call last):
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 300, in <module>
A1111(local_dir, 'sd_xl_base_1.0.safetensors', steps=50, cfs_scale=8)
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 235, in A1111
refiner = StableDiffusionXLPipeline.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/loaders/single_file.py", line 503, in from_single_file
loaded_sub_model = load_single_file_sub_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/loaders/single_file.py", line 113, in load_single_file_sub_model
loaded_sub_model = create_diffusers_clip_model_from_ldm(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/loaders/single_file_utils.py", line 1411, in create_diffusers_clip_model_from_ldm
unexpected_keys = load_model_dict_into_meta(model, diffusers_format_checkpoint, dtype=torch_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/models/model_loading_utils.py", line 154, in load_model_dict_into_meta
raise ValueError(
ValueError: Cannot load because text_model.embeddings.position_embedding.weight expected shape tensor(..., device='meta', size=(77, 768)), but got torch.Size([77, 1280]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
```
### Reproduction
I have add `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True` but not work
```
def download_config(local_dir):
# Check whether the specified subdirectory exists
sub_dir = '.huggingface'
path = os.path.join(local_dir, sub_dir)
# Check whether the directory exists
if not os.path.isdir(path):
if 'base' in path:
local_config_path = snapshot_download(
repo_id="stabilityai/stable-diffusion-xl-base-1.0",
allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
local_dir=local_dir,
)
elif 'refiner' in path:
local_config_path = snapshot_download(
repo_id="stabilityai/stable-diffusion-xl-refiner-1.0",
allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
local_dir=local_dir,
)
return local_dir
def A1111(local_dir, model_name, steps, cfs_scale, dir=''):
pipe = StableDiffusionXLPipeline.from_single_file(
f'{local_dir}/{model_name}',
config=download_config(local_dir),
local_files_only=True,
torch_dtype=torch.float16,
).to("cuda")
# refiner model
refiner_path = '/workspace/work/private/hf_models/stable-diffusion-xl-refiner-1.0'
refiner = StableDiffusionXLPipeline.from_single_file(
f'{refiner_path}/sd_xl_refiner_1.0.safetensors',
text_encoder_2=pipe.text_encoder_2,
vae=pipe.vae,
config=download_config(local_dir),
local_files_only=True,
torch_dtype=torch.float16,
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True,
).to("cuda")
# lora
lora_dir = '/data/modeldata/aigc-fg-gen/v1.9/Lora'
# adapter names must not contain dots
pipe.load_lora_weights(lora_dir, weight_name="fix_hands.pt", adapter_name="fix_hands")
pipe.load_lora_weights(lora_dir, weight_name="sdxl_lora_fg_v1.2_colorv2_shirt_mp.safetensors",
adapter_name="sdxl_lora_fg_v1_2_colorv2_shirt_mp")
pipe.set_adapters(["fix_hands", "sdxl_lora_fg_v1_2_colorv2_shirt_mp"], adapter_weights=[1.5, 0.8])
# sample name https://huggingface.co/docs/diffusers/v0.26.2/en/api/schedulers/overview#schedulers
scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
pipe.scheduler = scheduler
# prompt parameters
prompt = "xxxxx"
negative_prompt = "xxxxx"
generator = torch.Generator(device="cuda").manual_seed(1227346489)
num_images = 4
width, height = 1024, 1024
steps = steps
cfg_scale = cfs_scale
# if 'step' in model_name:
normal_optimization(pipe, infer=False)
params = {
'prompt': prompt,
'height': height,
'width': width,
'num_inference_steps': steps,
'guidance_scale': cfg_scale,
'negative_prom
|
https://github.com/huggingface/diffusers/issues/8900
|
closed
|
[
"bug"
] | 2024-07-19T01:58:05Z
| 2024-07-26T10:39:07Z
| null |
631068264
|
huggingface/transformers.js
| 854
|
How do you delete a downloaded model?
|
### Question
How do you delete a downloaded model that was downloaded to IndexedDB?
Thanks,
Ash
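In case it helps: in the browser, transformers.js stores downloaded files via the Cache Storage API by default (the cache name `transformers-cache` below is my assumption about the default and worth double-checking), so standard browser APIs can remove them:
```js
// Drop the whole model cache (assumed default cache name).
await caches.delete('transformers-cache');

// Or remove entries for a single model only.
const cache = await caches.open('transformers-cache');
for (const request of await cache.keys()) {
  if (request.url.includes('<model-id>')) {  // placeholder for the model to remove
    await cache.delete(request);
  }
}
```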
|
https://github.com/huggingface/transformers.js/issues/854
|
closed
|
[
"question"
] | 2024-07-18T22:10:51Z
| 2024-07-19T16:23:21Z
| null |
AshD
|
pytorch/TensorRT
| 3,018
|
❓ [Question] How do you save a unet model compiled Torch-TensorRT (Stable Diffusion XL)
|
## ❓ Question
How do you save a unet model compiled Torch-TensorRT from Stable Diffusion XL?
## What you have already tried
I've tried following the compilation instructions from the tutorial ([link](https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_compile_stable_diffusion.html)). It wasn't very useful for my use case because I would like to save the compilation on disk and load it down the line when inference is needed.
So I've tried following the instructions which let you save your compilation using the dynamo backend ([link](https://pytorch.org/TensorRT/user_guide/saving_models.html#dynamo-ir)). This script represents a summary of what I'm doing:
```
import torch
import torch_tensorrt
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda")
inputs = [torch.randn((2, 4, 128, 128)).cuda()] # After some digging, these are the input sizes needed to generate 1024x1024 images
trt_gm = torch_tensorrt.compile(pipe.unet, ir="dynamo", inputs=inputs)
```
But this yields the following error: `TypeError: UNet2DConditionModel.forward() missing 2 required positional arguments: 'timestep' and 'encoder_hidden_states'`
So, I've tried to provide these arguments as well, found after some playing around with the code from diffusers:
```
kwargs = {
"timestep": torch.tensor(951.0).cuda(),
"encoder_hidden_states": torch.randn(
(2, 77, 2048), dtype=torch.float16
).cuda(),
}
trt_gm = torch_tensorrt.compile(pipe.unet, ir="dynamo", inputs=inputs, **kwargs)
```
And I get the same error. Probably, the kwargs don't get passed down into the calling functions. After altering the code from torch export (which probably wasn't necessary), I got an error of the type: `torch._dynamo.exc.InternalTorchDynamoError: argument of type 'NoneType' is not iterable`
Any ideas how to properly compile a unet model from stable diffusion XL? Many thanks in advance.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.3.1+cu121
- CPU Architecture: x86_64
- OS (e.g., Linux): Ubuntu 22.04.3 LTS
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): `pip install torch --index-url https://download.pytorch.org/whl/cu121`
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: Python 3.10.12
- CUDA version: 12.4
- GPU models and configuration: NVIDIA GeForce RTX 4090
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/3018
|
open
|
[
"question"
] | 2024-07-18T18:15:06Z
| 2024-09-03T06:52:33Z
| null |
dru10
|
pytorch/vision
| 8,536
|
ColorJitter results in OverflowError
|
### 🐛 Describe the bug
Using `ColorJitter` augmentations in torchvision 0.18.1 results in an `OverflowError`. This was not observed in older `torchvision` versions (tested with 0.15.0).
How to reproduce:
```python
# read an image
from PIL import Image
import requests
from io import BytesIO
# I picked this image, but it actually happens with others as well. just try one that you have.
pil_img = Image.open(BytesIO(requests.get('https://www.weizmann.ac.il/math/bagon/sites/math.bagon/files/styles/pi_photo/public/ShaiBagon_8.png').content))
from torchvision import transforms
cj = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2, hue=0.1)
for _ in range(10):
    cj(pil_img)  # it does not happen every time, but out of 10 runs it will most likely happen
```
This code will throw:
```
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "[...]/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "[...]/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "[...]/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 1280, in forward
img = F.adjust_hue(img, hue_factor)
File "[...]/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 959, in adjust_hue
return F_pil.adjust_hue(img, hue_factor)
File "[...]/lib/python3.10/site-packages/torchvision/transforms/_functional_pil.py", line 114, in adjust_hue
np_h += np.uint8(hue_factor * 255)
OverflowError: Python integer -24 out of bounds for uint8
```
### Versions
```
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.1 (Plow) (x86_64)
GCC version: (GCC) 11.3.1 20220421 (Red Hat 11.3.1-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.14.0-162.6.1.el9_1.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 535.161.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 52
On-line CPU(s) list: 0-51
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 26
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.4 MiB (52 instances)
L1i cache: 1.6 MiB (52 instances)
L2 cache: 65 MiB (52 instances)
L3 cache: 78 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
|
https://github.com/pytorch/vision/issues/8536
|
closed
|
[] | 2024-07-18T14:00:33Z
| 2024-07-28T07:06:21Z
| 7
|
shaibagon
|
huggingface/candle
| 2,341
|
how to use system prompt with the llama example?
|
Hi, I'm trying to pass a chat dialog in the [LLama3 format](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L222) to the [llama example](https://github.com/huggingface/candle/tree/main/candle-examples/examples/llama) via `--prompt`; the string is as follows:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Why is the sky blue?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
This seems to confuse the model and, depending on the user prompt, can cause the model to generate gibberish characters (see also https://github.com/evilsocket/cake/issues/9):
(I've made a small change to load the prompt from a file if passed with @.)
```sh
/path/to/compiled/llama3/example --model-id "meta-llama/Meta-Llama-3-8B" --prompt @hf-llama-test/prompt.txt
loading the model weights from meta-llama/Meta-Llama-3-8B
loading prompt from @hf-llama-test/prompt.txt ...
starting the inference loop
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Why is the sky blue?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
By: David Cope (2022, October 23)
14 tokens generated (16.831015425660595 token/s)
```
|
https://github.com/huggingface/candle/issues/2341
|
open
|
[] | 2024-07-18T10:44:54Z
| 2024-07-18T14:35:09Z
| null |
evilsocket
|
huggingface/text-generation-inference
| 2,246
|
Can't start the server with a small --max-total-tokens, but it works fine with a big setting
|
When I try to run `CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --port 6634 --model-id /models/ --max-concurrent-requests 128 --max-input-length 64 --max-total-tokens 128 --max-batch-prefill-tokens 128 --cuda-memory-fraction 0.95`, it says
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB. GPU has a total capacity of 44.53 GiB of which 1.94 MiB is free. Process 123210 has 44.52 GiB memory in use. Of the allocated memory 40.92 GiB is allocated by PyTorch, and 754.08 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
But with a big max-tokens setting, `CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --port 6634 --model-id /models/ --max-concurrent-requests 128 --max-input-length 1024 --max-total-tokens 2048 --max-batch-prefill-tokens 2048 --cuda-memory-fraction 0.95` works fine.
I don't get why small max-token settings cause CUDA out of memory while large max-token settings work fine. Can someone answer my question?
|
https://github.com/huggingface/text-generation-inference/issues/2246
|
closed
|
[
"question",
"Stale"
] | 2024-07-18T07:03:31Z
| 2024-08-24T01:52:30Z
| null |
rooooc
|
pytorch/serve
| 3,253
|
GPU memory not released after inference
|
I built the .mar file using torch-model-archiver and wrote a custom handler that processes batched inputs. To be more specific, I'm doing the following steps:
1. send one single request with N images as a list of base64 strings
2. convert these images into tensors in my handler's preprocess
3. create a batch from the above tensors and pass it to the model for inference
4. return the inference response
Through testing, I found that if I send 4 images it occupies around 14 GB of GPU memory, and then on the next request, if I only send 1 image, the GPU memory is not released and stays at 14 GB.
Is this normal, and is there any way I can release some GPU memory when there have been no inference requests for a while?
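For what it's worth, holding on to freed blocks is normal behavior for PyTorch's caching allocator, but the cache can be handed back to the driver explicitly. A minimal sketch of what the end of a handler's inference path could look like (names are illustrative, not TorchServe API):
```python
import torch

def run_inference(model, batch, device):
    # Run the model, move results off the GPU, then release cached blocks.
    with torch.inference_mode():
        output = model(batch.to(device))
    result = output.detach().cpu()
    del output
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # returns cached-but-unused memory to the driver
    return result
```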
|
https://github.com/pytorch/serve/issues/3253
|
closed
|
[] | 2024-07-17T09:10:59Z
| 2024-07-19T14:39:02Z
| 1
|
Di-Gu
|
huggingface/diffusers
| 8,881
|
How to Generate Multiple Image Inference in Instruct Pix2Pix
|
Hello, I am currently working on how to utilize Instruct Pix2Pix for augmentation.
For this purpose, I want to generate images by putting a tensor of shape [64, 3, 84, 84] (batch, channel, width, height) into the Instruct Pix2Pix pipeline, but the Instruct Pix2Pix pipeline provided by diffusers seems to only edit one image at a time.
Is it possible to edit multiple images at the same time? It seems to work only with a batch size of 1.
Is there a way to generate images with a larger batch size?
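For reference, a sketch of the batched-call idea (this assumes the pipeline accepts lists of prompts/images like other diffusers image-to-image pipelines; worth verifying, and note it expects PIL images or properly scaled tensors rather than a raw [64, 3, 84, 84] batch):
```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Placeholder inputs: a small list of PIL images and one prompt per image.
images = [Image.open(p).convert("RGB") for p in ["img0.png", "img1.png"]]
prompts = ["make it snowy"] * len(images)

out = pipe(prompt=prompts, image=images, num_inference_steps=20).images  # list of edited images
```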
|
https://github.com/huggingface/diffusers/issues/8881
|
closed
|
[] | 2024-07-17T07:47:09Z
| 2024-09-02T00:45:15Z
| null |
E-SJ
|
huggingface/transformers.js
| 849
|
AutoModel.from_pretrained - Which model is loaded
|
### Question
I am using AutoModel.from_pretrained("Xenova/yolos-tiny") to load the Yolos model for object detection. Does transformers.js load the model_quantized.onnx by default? Would I be able to load model.onnx?
A related question: Is there a way to check which model is loaded once the model is loaded?
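For reference, a sketch of how the choice could be controlled (this assumes transformers.js v2 behavior, where the quantized model is the default and the options object can override it; worth confirming against the docs for your version):
```js
import { AutoModel } from '@xenova/transformers';

// Default in v2: loads the quantized weights (model_quantized.onnx).
const quantized = await AutoModel.from_pretrained('Xenova/yolos-tiny');

// Request the full-precision model.onnx instead.
const fullPrecision = await AutoModel.from_pretrained('Xenova/yolos-tiny', { quantized: false });
```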
|
https://github.com/huggingface/transformers.js/issues/849
|
open
|
[
"question"
] | 2024-07-16T22:45:15Z
| 2024-08-09T09:45:37Z
| null |
mram0509
|
huggingface/text-generation-inference
| 2,239
|
Can I somehow change attention type from 'FlashAttention' in the text-server-launcher?
|
https://github.com/huggingface/text-generation-inference/issues/2239
|
closed
|
[
"question",
"Stale"
] | 2024-07-16T18:37:45Z
| 2024-08-24T01:52:31Z
| null |
wasifmasood
|
|
pytorch/executorch
| 4,276
|
How to export a pretrained model?
|
Is there a way to export a pretrained model to executorch? This example https://pytorch.org/executorch/stable/getting-started-setup.html#export-a-program only shows how to export a new model instance. I tried doing it like this
```
# 1. torch.export: Defines the program with the ATen operator set.
model.eval()
aten_dialect = torch.export.export( model, ( torch.ones( 2 ) ) )
# 2. to_edge: Make optimizations for Edge devices
edge_program = executorch.exir.to_edge( aten_dialect )
# 3. to_executorch: Convert the graph to an ExecuTorch program
executorch_program = edge_program.to_executorch()
# 4. Save the compiled .pte program
with open( "net.pte", "wb" ) as file:
    file.write(executorch_program.buffer)
```
but I get `Expecting 'args' to be a tuple of example positional inputs, got <class 'torch.Tensor'>`.
My model:
```
class Net( nn.Module ):
    def __init__( self ):
        super().__init__()
        self.inputFeatures = 2
        self.fc1 = nn.Linear( self.inputFeatures, 1 )

    def forward( self, x ):
        fc1 = F.sigmoid( self.fc1( x ) )
        return fc1
```
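The error message points at the example-inputs argument: `( torch.ones( 2 ) )` is just a parenthesized tensor, not a tuple, so a trailing comma is needed. A minimal sketch, reusing the `model` above:
```python
# Example inputs must be a tuple; note the trailing comma.
aten_dialect = torch.export.export(model, (torch.ones(2),))
```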
|
https://github.com/pytorch/executorch/issues/4276
|
closed
|
[] | 2024-07-16T14:55:42Z
| 2024-07-22T21:55:34Z
| null |
Bresenham
|
huggingface/diarizers
| 13
|
How to solve `CUDA error: out of memory` while doing inference with my diarization model
|
ERROR - An error occurred: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
I'm using a `12GB NVIDIA GeForce RTX 2050` with CUDA compilation tools, release 11.8.
How do I solve this, or how can I use batching / a batch_size during inference?
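In case it helps frame the question, this is the generic pattern I had in mind (a sketch only; `segmentation_model` and the 10-second chunking are placeholders, not the actual diarizers API):
```python
# Generic sketch: bound peak GPU memory by running the model over fixed-length chunks.
import torch

def run_in_chunks(segmentation_model, waveform, sample_rate, chunk_seconds=10):
    outputs = []
    step = chunk_seconds * sample_rate
    with torch.inference_mode():
        for start in range(0, waveform.shape[-1], step):
            chunk = waveform[..., start:start + step].cuda()
            outputs.append(segmentation_model(chunk).cpu())
            del chunk
            torch.cuda.empty_cache()  # release cached blocks between chunks
    return outputs
```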
|
https://github.com/huggingface/diarizers/issues/13
|
open
|
[] | 2024-07-16T06:23:28Z
| 2024-08-18T04:20:16Z
| null |
Ataullha
|
pytorch/torchtitan
| 462
|
[FP8 options] Float8Linear vs TransformerEngine
|
Hi team, first of all thanks for this great repo showcasing how to leverage the latest techniques in the torch ecosystem; it's been super useful and insightful :) I have a naive question about FP8 options and would like to know how you view them.
There's https://github.com/NVIDIA/TransformerEngine by NVIDIA for FP8 training on Hopper, and it has started to be integrated into downstream frameworks like HF, Lightning, etc. However, I'm also seeing https://github.com/pytorch-labs/float8_experimental evolving quickly, and the fact that it's more lightweight and potentially more composable with the rest of the torch stack is also important to us. I'm wondering if you have some insight into the pros and cons of each, how Float8Linear's performance compares to TE, and whether you would recommend TE or Float8Linear for LLM pretraining/finetuning use cases. Thanks a lot!
|
https://github.com/pytorch/torchtitan/issues/462
|
open
|
[
"question"
] | 2024-07-16T03:54:29Z
| 2025-06-02T16:54:11Z
| null |
yundai424
|
pytorch/torchchat
| 903
|
Github code search doesn't work with folders called `build`
|
### 🐛 Describe the bug
I was trying to look for the `model.py` definition
https://github.com/pytorch/torchchat/tree/main/build but it wasn't showing up
<img width="816" alt="Screenshot 2024-07-15 at 6 54 39 PM" src="https://github.com/user-attachments/assets/11021312-9e40-4ec6-adad-0a52a24f06e0">
generate.py, which is not in the build folder, shows up fine
<img width="805" alt="Screenshot 2024-07-15 at 6 54 54 PM" src="https://github.com/user-attachments/assets/8f6eaf5e-7e76-4a3d-b1cc-37254c6e3515">
Can we rename the folder to something else? To me, `build` signifies either release infra scripts or artifacts created after installing a package, not model-building utilities.
### Versions
Nightlies
|
https://github.com/pytorch/torchchat/issues/903
|
open
|
[
"actionable"
] | 2024-07-16T01:55:45Z
| 2024-07-30T15:11:19Z
| 1
|
msaroufim
|
pytorch/serve
| 3,247
|
TorchServe docker image with vllm, trt-llm dependencies
|
### 🚀 The feature
To offer a no-code solution with vLLM and TRT-LLM, TorchServe needs a Docker image with these dependencies.
Including them in TorchServe's GPU image would bloat the image for all users of TorchServe.
We can instead have another image for GenAI.
### Motivation, pitch
No code solution for GenAI
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/serve/issues/3247
|
open
|
[] | 2024-07-16T01:16:34Z
| 2024-07-16T01:16:34Z
| 0
|
agunapal
|
pytorch/xla
| 7,689
|
CUDA and GPU-Flavoured Docker/Container Image Missing CUDA Support
|
## ❓ Questions and Help
Hi,
According to the docs [here]( https://github.com/pytorch/xla?tab=readme-ov-file#docker ), the image `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_cuda_12.1` should have Cuda 12.1 support for use on a local GPU. I have also tried pulling `xla:nightly_3.8_cuda_12.1`.
When I start the container (`podman run --shm-size=16g --net=host --gpus all us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_cuda_12.1`), it appears there is no CUDA support compiled in:
```terminal
# nvidia-smi
bash: nvidia-smi: command not found
# python
>>> import torch, torch_xla
>>> torch.cuda.get_device_name(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 414, in get_device_name
return get_device_properties(device).name
File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 444, in get_device_properties
_lazy_init() # will define _get_device_properties
File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
>>> print(torch.__version__)
2.3.0 # No CUDA suffix here
>>> print(torch_xla.__version__)
2.3.0 # Or here
```
Am I missing something here, or has something gone wrong with these CI builds?
Thanks
|
https://github.com/pytorch/xla/issues/7689
|
closed
|
[
"question",
"xla:gpu"
] | 2024-07-15T22:56:55Z
| 2025-04-03T13:56:12Z
| null |
stellarpower
|
huggingface/datasets
| 7,051
|
How to set_epoch with interleave_datasets?
|
Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch).
Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start.
How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset...
Something like
```
dataset_a = load_dataset(...)
dataset_b = load_dataset(...)
def epoch_shuffled_dataset(ds):
# How to make this maintain the number of shards in ds??
for epoch in itertools.count():
ds.set_epoch(epoch)
yield from iter(ds)
shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a})
interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted')
```
|
https://github.com/huggingface/datasets/issues/7051
|
closed
|
[] | 2024-07-15T18:24:52Z
| 2024-08-05T20:58:04Z
| null |
jonathanasdf
|
huggingface/accelerate
| 2,933
|
How to apply model parallelism across multiple machines?
|
Currently, I want to run LLM inference across multiple machines. Because of limited memory, I would like to use all machines together to load the model, and this is where I'm stuck. So far I have only found that, with device_map, I can do model parallelism on a single machine with multiple cards.
Could you share some ideas on how to achieve this with Accelerate, or any other useful suggestions?
Thanks so much.
|
https://github.com/huggingface/accelerate/issues/2933
|
closed
|
[] | 2024-07-15T14:09:10Z
| 2025-03-08T06:48:09Z
| null |
JerryLu991223
|
huggingface/chat-ui
| 1,344
|
Ollama chatPromptTemplate and parameters
|
Hi,
I have tried adding phi3-3.8b as an Ollama model, hosted on my own on-prem Ollama server.
I basically copied the prompt template and parameters from microsoft/Phi-3-mini-4k-instruct on Hugging Face, but it does not seem to work; I always get "no output was generated".
Sending a generate/chat HTTP request directly to the Ollama server with phi3-3.8b does work.
In general, how can I write the prompt template and parameters for models hosted on Ollama?
For instance llama3, or any other model - I did not find any instructions for that.
|
https://github.com/huggingface/chat-ui/issues/1344
|
open
|
[
"support"
] | 2024-07-15T12:38:12Z
| 2024-09-18T17:57:30Z
| 7
|
ran-haim
|
pytorch/xla
| 7,682
|
Is there any way to directly execute the cached computational graph
|
## ❓ Questions and Help
My application code is complex but not computationally expensive, and the graph is consistent, so I tried to cache it with XLA_PERSISTENT_CACHE_PATH; however, it still takes a long time to execute the logic (without performing any computation). Is there a way to directly execute the cached graph? I also tried dynamo, but encountered many errors, such as incompatibility with autocast and so on.
|
https://github.com/pytorch/xla/issues/7682
|
closed
|
[
"question",
"dynamo"
] | 2024-07-15T11:19:23Z
| 2025-04-01T13:11:38Z
| null |
mars1248
|
huggingface/transformers
| 31,963
|
How to manually stop the LLM output?
|
I'm using `TextIteratorStreamer` for streaming output.
Since the LLM may repeat its output indefinitely, I would like to be able to make it stop generating when it receives a cancellation request.
Is there any way to accomplish this?
model: glm-4-9b-chat
```python
async def predict(messages, model_id: str, raw_request: Request, gen_kwargs: Dict):
global model, tokenizer
choice_data = ChatCompletionResponseStreamChoice(index=0, delta=DeltaMessage(role='assistant'), finish_reason=None)
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object='chat.completion.chunk')
yield '{}'.format(_dump_json(chunk, exclude_unset=True))
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt')
inputs = inputs.to(model.device)
streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
generation_kwargs = dict(input_ids=inputs, streamer=streamer)
generation_kwargs.update(gen_kwargs)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
for new_text in streamer:
print(new_text)
if raw_request is not None and await raw_request.is_disconnected():
print("disconnected")
# todo stop generate
choice_data = ChatCompletionResponseStreamChoice(index=0, delta=DeltaMessage(content=new_text), finish_reason=None)
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object='chat.completion.chunk')
yield '{}'.format(_dump_json(chunk, exclude_unset=True))
choice_data = ChatCompletionResponseStreamChoice(index=0, delta=DeltaMessage(content=''), finish_reason='stop')
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object='chat.completion.chunk')
yield '{}'.format(_dump_json(chunk, exclude_unset=True))
yield '[DONE]'
```
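One idea I'm considering (not sure it is the recommended approach) is a custom `StoppingCriteria` that watches a cancellation flag set when the client disconnects; a rough sketch on top of the code above, reusing `inputs` and `streamer`:
```python
# Rough sketch: stop generate() via a StoppingCriteria that checks an external flag.
import threading
from transformers import StoppingCriteria, StoppingCriteriaList

class CancelOnFlag(StoppingCriteria):
    def __init__(self, cancel_event: threading.Event):
        self.cancel_event = cancel_event

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        return self.cancel_event.is_set()

cancel_event = threading.Event()
generation_kwargs = dict(
    input_ids=inputs,
    streamer=streamer,
    stopping_criteria=StoppingCriteriaList([CancelOnFlag(cancel_event)]),
)
# ...then inside the streaming loop, instead of the `# todo stop generate` placeholder:
# if await raw_request.is_disconnected():
#     cancel_event.set()  # generate() halts at the next decoding step
```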
|
https://github.com/huggingface/transformers/issues/31963
|
closed
|
[] | 2024-07-15T07:09:43Z
| 2024-07-16T00:34:41Z
| null |
invokerbyxv
|
huggingface/chat-ui
| 1,343
|
vllm 400 status code (no body) error
|
Hello everyone, I use the vLLM OpenAI-compatible API server, but I encounter a 400 status code (no body) error. How can I fix it? Thanks.
vllm:
```
python -m vllm.entrypoints.openai.api_server --model /home/rickychen/桌面/llm/models/Infinirc-Llama3-8B-5G-v1.0 --dtype auto --worker-use-ray --tensor-parallel-size 2 --port 8001 --served-model-name Infinirc-Llama3-8B-5G-v1.0
```
huggingface chatui:
```
MODELS=`[
{
"name": "Infinirc-Llama3-8B-5G-v1.0",
"id": "Infinirc-Llama3-8B-5G-v1.0",
"endpoints": [{
"type": "openai",
"baseURL": "http://10.0.9.5:8001/v1",
"apiKey": "a"
}],
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}Human: {{content}}\n{{/ifUser}}{{#ifAssistant}}Assistant: {{content}}\n{{/ifAssistant}}{{/each}}Human: ",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
},
{
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
},
{
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"max_new_tokens": 1024
}
}
]`
```
error:
```
BadRequestError: 400 status code (no body)
at APIError.generate (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/error.mjs:41:20)
at OpenAI.makeStatusError (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:256:25)
at OpenAI.makeRequest (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:299:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generate (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/generate.ts:8:30)
at async textGenerationWithoutTitle (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/index.ts:56:3)
at async Module.mergeAsyncGenerators (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/utils/mergeAsyncGenerators.ts:13:34)
at async Module.textGeneration (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/index.ts:24:3)
at async Object.start (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/routes/conversation/[id]/+server.ts:325:26) {
status: 400,
headers: {
'content-length': '297',
'content-type': 'application/json',
date: 'Sun, 14 Jul 2024 12:47:33 GMT',
server: 'uvicorn'
},
request_id: undefined,
error: undefined,
code: undefined,
param: undefined,
type: undefined
}
[20:47:33.972] ERROR (31253): 400 status code (no body)
err: {
"type": "BadRequestError",
"message": "400 status code (no body)",
"stack":
Error: 400 status code (no body)
at APIError.generate (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/error.mjs:41:20)
at OpenAI.makeStatusError (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:256:25)
at OpenAI.makeRequest (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:299:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generateFromDefaultEndpoint (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/generateFromDefaultEndpoint.ts:11:23)
at async generateTitle (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/title.ts:54:10)
at async Module.generateTitleForConversation (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/title.ts:17:19)
"status": 400,
"headers": {
"content-length": "1748",
"content-type": "application/json",
"date": "Sun, 14 Jul 2024 12:47:33 GMT",
"server": "uvicorn"
}
}
```
|
https://github.com/huggingface/chat-ui/issues/1343
|
open
|
[
"support"
] | 2024-07-14T12:49:59Z
| 2024-09-19T12:26:36Z
| 3
|
rickychen-infinirc
|
huggingface/chat-ui
| 1,342
|
undeclared node version dependency
|
Using the current chat-ui Docker Hub image, I am unable to connect to localhost:3000 to run a simple instance of chat-ui; the web service returns 'Not Found' for all routes. Included below is my docker-compose file. If I change the chat-ui image to build with Node 22, everything works as expected. Does chat-ui have an undocumented dependency on a particular Node version? There is no 'engines' field in package.json. Should there be one? Should we be using Node >= 22? Is there a way to debug this or identify which package is causing the issue?
```dockercompose
version: '3.1'
services:
mongo:
image: docker.io/library/mongo
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
ports:
- 27017:27017
mongo-express:
image: docker.io/library/mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: root
ME_CONFIG_MONGODB_ADMINPASSWORD: example
ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/
ME_CONFIG_BASICAUTH: "false"
depends_on:
- mongo
chat-ui:
image: chat-ui:20
restart: always
ports:
- 3000:3000
- 5173:5173
volumes:
- type: bind
source: .env.local
target: /app/.env.local
depends_on:
- mongo
```
```docker
# syntax=docker/dockerfile:1
# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
# you will also find guides on how best to write your Dockerfile
ARG INCLUDE_DB=false
# stage that install the dependencies
FROM node:22 as builder-production
WORKDIR /app
COPY --link --chown=1000 package-lock.json package.json ./
RUN --mount=type=cache,target=/app/.npm \
npm set cache /app/.npm && \
npm ci --omit=dev
FROM builder-production as builder
ARG APP_BASE=
ARG PUBLIC_APP_COLOR=blue
ENV BODY_SIZE_LIMIT=15728640
RUN --mount=type=cache,target=/app/.npm \
npm set cache /app/.npm && \
npm ci
COPY --link --chown=1000 . .
RUN npm run build
# mongo image
FROM mongo:latest as mongo
# image to be used if INCLUDE_DB is false
FROM node:22-slim as local_db_false
# image to be used if INCLUDE_DB is true
FROM node:22-slim as local_db_true
RUN apt-get update
RUN apt-get install gnupg curl -y
# copy mongo from the other stage
COPY --from=mongo /usr/bin/mongo* /usr/bin/
ENV MONGODB_URL=mongodb://localhost:27017
RUN mkdir -p /data/db
RUN chown -R 1000:1000 /data/db
# final image
FROM local_db_${INCLUDE_DB} as final
# build arg to determine if the database should be included
ARG INCLUDE_DB=false
ENV INCLUDE_DB=${INCLUDE_DB}
# svelte requires APP_BASE at build time so it must be passed as a build arg
ARG APP_BASE=
# tailwind requires the primary theme to be known at build time so it must be passed as a build arg
ARG PUBLIC_APP_COLOR=blue
ENV BODY_SIZE_LIMIT=15728640
# install dotenv-cli
RUN npm install -g dotenv-cli
# switch to a user that works for spaces
RUN userdel -r node
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
PATH=/home/user/.local/bin:$PATH
WORKDIR /app
# add a .env.local if the user doesn't bind a volume to it
RUN touch /app/.env.local
# get the default config, the entrypoint script and the server script
COPY --chown=1000 package.json /app/package.json
COPY --chown=1000 .env /app/.env
COPY --chown=1000 entrypoint.sh /app/entrypoint.sh
COPY --chown=1000 gcp-*.json /app/
#import the build & dependencies
COPY --from=builder --chown=1000 /app/build /app/build
COPY --from=builder --chown=1000 /app/node_modules /app/node_modules
RUN npx playwright install
USER root
RUN npx playwright install-deps
USER user
RUN chmod +x /app/entrypoint.sh
CMD ["/bin/bash", "-c", "/app/entrypoint.sh"]
```
|
https://github.com/huggingface/chat-ui/issues/1342
|
closed
|
[
"support"
] | 2024-07-13T21:06:53Z
| 2024-07-16T14:53:34Z
| 2
|
slmagus
|
huggingface/diffusers
| 8,858
|
how to know variant='fp16' beforehand
|
### Describe the bug
Among diffusion checkpoints, some provide an fp16 variant and some do not.
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_1',
torch_dtype=torch.float16,
variant='fp16'
)
```
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_2',
torch_dtype=torch.float16,
)
```
How can I know beforehand whether a model supports the variant='fp16' version? Is it possible to tell from the checkpoint, maybe from an associated config file? This is needed so that model loading stays consistent across different model ids.
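The only workaround I can think of so far (a heuristic sketch, not necessarily the intended approach) is to inspect the repo file list for `.fp16.` weight files before loading:
```python
# Heuristic sketch: pick variant="fp16" only if the repo actually ships fp16 weight files.
import torch
from diffusers import DiffusionPipeline
from huggingface_hub import list_repo_files

def fp16_variant_available(model_id: str) -> bool:
    return any(".fp16." in name for name in list_repo_files(model_id))

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
variant = "fp16" if fp16_variant_available(model_id) else None
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, variant=variant)
```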
### Reproduction
Given above.
### Logs
_No response_
### System Info
Diffusers
### Who can help?
maybe @DN6
|
https://github.com/huggingface/diffusers/issues/8858
|
closed
|
[
"bug",
"stale"
] | 2024-07-13T08:52:13Z
| 2025-01-27T01:45:50Z
| null |
pure-rgb
|
huggingface/dataset-viewer
| 2,986
|
Include code snippets for other libraries?
|
For example, in https://github.com/huggingface/huggingface.js/pull/797, we add `distilabel`, `fiftyone` and `argilla` to the list of libraries the Hub knows. However, the aim is only to handle the user-defined tags better, not to show code snippets.
In this issue, I propose to discuss if we should expand the list of dataset libraries for which we show code snippets. For now, we support pandas, HF datasets, webdatasets, mlcroissant and dask.
We already mentioned polars as a potential new lib, I think. Maybe duckdb too?
|
https://github.com/huggingface/dataset-viewer/issues/2986
|
open
|
[
"question",
"P2"
] | 2024-07-12T11:57:43Z
| 2024-07-12T14:39:59Z
| null |
severo
|
huggingface/trl
| 1,830
|
How to use `predict` function in `DPOTrainer`
|
I want to get the logps and rewards of the data through `predict`, but the prediction seems to include only one example.
What is the correct usage of `predict`?

|
https://github.com/huggingface/trl/issues/1830
|
closed
|
[
"❓ question"
] | 2024-07-12T06:30:20Z
| 2024-10-07T12:13:22Z
| null |
AIR-hl
|
huggingface/datatrove
| 248
|
solved: how to launch a slurm executor from an interactive slurm job
|
I forget where in the docs/code I saw it, but it said not to launch a slurm executor from an `srun` interactive session - which is not always possible to avoid.
There is a simple workaround - unset the `SLURM_*` env vars and then launch, and it works just fine.
```
unset $(printenv | grep SLURM | sed -E 's/(.*)=.*/\1/' | xargs)
./my_datatrove_slurm.py
```
Of course, your `srun` session will now be w/o its env vars - which you may or may not care for.
To help others to find the solution, the error is likely to be:
```
srun: error: CPU binding outside of job step allocation, allocated CPUs are: 0x0000000000000FFF80000000000000000000000FFF8000000000.
srun: error: Task launch for StepId=120986.0 failed on node xxx-yyy-11: Unable to satisfy cpu bind request
srun: error: Application launch failed: Unable to satisfy cpu bind request
srun: Job step aborted
```
There is also [this discussion](https://groups.google.com/g/slurm-users/c/mp_JRutKmCc) that proposes to unset just `SLURM_CPU_BIND_*` env vars, so you'd then:
```
unset $(printenv | grep SLURM_CPU_BIND | sed -E 's/(.*)=.*/\1/' | xargs)
./my_datatrove_slurm.py
```
If you want to unset them just for the datatrove launcher use this one-liner syntax
```
SLURM_CPU_BIND= SLURM_CPU_BIND_VERBOSE= SLURM_CPU_BIND_LIST= SLURM_CPU_BIND_TYPE= ./my_datatrove_slurm.py
```
or you could of course unset them inside your script as well, which would make the launching even simpler.
That way all `SLURM_*` env vars will remain intact in your shell environment if you need them for something else.
edit:
I added:
```
import os
# datatrove fails to start slurm jobs from an interactive slurm job,
# so hack to pretend we aren't inside an interactive slurm job by removing SLURM env vars
for key in os.environ.keys():
if key.startswith("SLURM_"):
os.environ.pop(key)
```
on top of my script to make it always work.
|
https://github.com/huggingface/datatrove/issues/248
|
open
|
[] | 2024-07-12T04:08:02Z
| 2024-07-13T01:15:56Z
| null |
stas00
|
huggingface/diffusers
| 8,843
|
variable (per frame) IP Adapter weights in video
|
Is there a (planned or existing) way to have variable IP Adapter weights for videos (e.g. with AnimateDiff)?
That means setting different values for different frames, as both scaling and masking currently seem to apply to the whole generation at once (be it video or still image).
|
https://github.com/huggingface/diffusers/issues/8843
|
open
|
[
"stale",
"low-priority",
"consider-for-modular-diffusers"
] | 2024-07-11T16:49:43Z
| 2024-12-13T15:05:24Z
| 6
|
eps696
|
huggingface/transformers.js
| 846
|
range error: array buffer allocation failed <- how to catch this error?
|
### Question
While Transformers.js rocks on desktop, my Pixel with 6 GB of RAM almost always crashes the web page when trying to run things like Whisper or TTS.
<img width="531" alt="Screenshot 2024-07-11 at 14 27 08" src="https://github.com/xenova/transformers.js/assets/805405/f8862561-7618-4c80-87e2-06c86f262698">
Is there a way to more gracefully anticipate/handle this?
|
https://github.com/huggingface/transformers.js/issues/846
|
open
|
[
"question"
] | 2024-07-11T12:32:46Z
| 2024-07-11T12:32:46Z
| null |
flatsiedatsie
|
huggingface/diffusers
| 8,834
|
Will the training code of SD3 Controlnet be released?
|
**Is your feature request related to a problem? Please describe.**
Training code of SD3 ControlNet
**Describe the solution you'd like.**
Could you please release the training code for SD3 ControlNet? I tried to train it myself but failed, so I want to check what the reason is.
|
https://github.com/huggingface/diffusers/issues/8834
|
closed
|
[] | 2024-07-11T03:32:55Z
| 2024-09-11T01:34:38Z
| 3
|
ChenhLiwnl
|
huggingface/optimum
| 1,953
|
Export AWQ models to ONNX
|
### System Info
```shell
python==3.10
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
None
### Expected behavior
Hello, I am new and want to try converting models to ONNX format, and I have the following issue. I have a model that has been quantized to 4-bit, and then I converted this model to ONNX. My quantized model has a weight size of 7 GB, but when I run the conversion to ONNX, the resulting model.onnx_data has a size of 34 GB. Is there anything wrong here?
Below is my code:
```
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("SorawitChok/SeaLLM-7B-v2.5-AWQ")
ort_model = ORTModelForCausalLM.from_pretrained(
"SorawitChok/SeaLLM-7B-v2.5-AWQ",
# "/content/SeaLLM-7B-v2.5_4b",
use_io_binding=True,
export=True,
use_cache=True,
from_transformers=True,
# provider="CUDAExecutionProvider", # Change this to "CPUExecutionProvider" using CPU for inference
provider="CPUExecutionProvider", # Change this to "CPUExecutionProvider" using CPU for inference
)
print('=====Save Model====')
ort_model.save_pretrained("./SeaLLM-7B-v2.5-AWQ_onnx")
tokenizer.save_pretrained("./SeaLLM-7B-v2.5-AWQ_onnx")
```
Thanks for any help
|
https://github.com/huggingface/optimum/issues/1953
|
closed
|
[
"feature-request",
"onnx"
] | 2024-07-11T02:18:56Z
| 2024-07-25T12:42:38Z
| 1
|
Toan-it-mta
|
pytorch/xla
| 7,667
|
Equivalent of get_worker_info to split an IterableDataset
|
## ❓ Questions and Help
I have an `IterableDataset` of unknown size. I would like to use something like `torch.utils.data.get_worker_info` to split it across the spawned `xmp` processes, but AFAIK there is no equivalent in `xla_multiprocessing`. Is there a workaround? I tried randomly subsampling on each process but this hangs for me for some reason.
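For context, the manual sharding I've been sketching looks like this (it may well be what hangs for me; using `xm.get_ordinal()` / `xm.xrt_world_size()` here is my assumption about the right equivalent):
```python
# Rough sketch: stride over the stream so each process keeps every world_size-th item.
import itertools
from torch.utils.data import IterableDataset
import torch_xla.core.xla_model as xm

class ShardedIterable(IterableDataset):
    def __init__(self, base_iterable):
        self.base_iterable = base_iterable

    def __iter__(self):
        rank = xm.get_ordinal()
        world_size = xm.xrt_world_size()
        # islice(iterable, start, stop, step): per-process strided slice of the stream
        return itertools.islice(iter(self.base_iterable), rank, None, world_size)
```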
|
https://github.com/pytorch/xla/issues/7667
|
closed
|
[] | 2024-07-10T18:46:08Z
| 2024-08-06T01:17:46Z
| 20
|
davidaknowles
|
huggingface/optimum
| 1,951
|
how can I get an ONNX-format int4 model?
|
### System Info
Could you please tell me how I can obtain an int4 model in ONNX format?
I've used the following commands to quantize an ONNX model to QUInt8, but when I tried to quantize it to INT4, I found there were no relevant parameters to choose. As far as I know, GPTQ allows selecting n-bit quantization. Could you advise me on what steps I should take?
Thanks for your help!
```shell
first step:
optimum-cli export onnx --model /dataset/zhangy34/ss_qwen2/ onnx_model/ --trust-remote-code --task text-generation
second step:
optimum-cli onnxruntime quantize \
--avx512 \
--onnx_model ./qwen2_ori_model \
-o ./onnx_model/qwen2_rtn_model
```
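For what it's worth, the only 4-bit route I've found so far is ONNX Runtime's weight-only MatMul quantizer rather than `optimum-cli`; a hedged sketch (API details may differ across onnxruntime versions):
```python
# Hedged sketch: 4-bit weight-only quantization of MatMul nodes via onnxruntime.
# Exact class/arguments may vary between onnxruntime releases.
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

model = onnx.load("onnx_model/model.onnx")
quantizer = MatMul4BitsQuantizer(model, block_size=32, is_symmetric=True)
quantizer.process()
quantizer.model.save_model_to_file("onnx_model/model_int4.onnx", use_external_data_format=True)
```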
### Who can help?
@mi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
none
### Expected behavior
get an int4 ONNX model
|
https://github.com/huggingface/optimum/issues/1951
|
open
|
[
"bug"
] | 2024-07-10T14:00:19Z
| 2024-07-10T14:00:19Z
| 0
|
zhangyu68
|
huggingface/diffusers
| 8,824
|
[Solved] How to make custom datasets for instruct-pix2pix?
|
### Describe the bug
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/builder.py", line 1750, in _prepare_split_single
[rank0]: for key, record in generator:
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 315, in _generate_examples
[rank0]: raise ValueError(
[rank0]: ValueError: image at [image name].jpg doesn't have metadata in [my_metadata_path]metadata.jsonl.
```
### Reproduction
I want to build a custom dataset locally.
### dataset
- datasets
- input_images
- image.jpg
- image.jpg
- ...
- edited_images
- edited_image.jpg
- edited_image.jpg
- ...
- metadata.jsonl
### metadata.jsonl
```
{"file_name": "input_images/image.jpg", "edited_images/edited_image": "edited_image.jpg", "edit_prompt": "sample"}
```
### train script
```
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export TRAIN_DIR="datasets/"
export OUTPUT_DIR="weights/"
accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--train_data_dir=$TRAIN_DIR \
--use_ema \
--resolution=512 --random_flip \
--train_batch_size=2 --gradient_accumulation_steps=4 --gradient_checkpointing \
--max_train_steps=15000 \
--checkpointing_steps=5000 --checkpoints_total_limit=1 \
--learning_rate=5e-05 --lr_warmup_steps=0 \
--conditioning_dropout_prob=0.05 \
--mixed_precision=fp16 \
--seed=42 \
--output_dir=${OUTPUT_DIR}
```
### error log
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/builder.py", line 1750, in _prepare_split_single
[rank0]: for key, record in generator:
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 315, in _generate_examples
[rank0]: raise ValueError(
[rank0]: ValueError: image at [image name].jpg doesn't have metadata in [my_metadata_path]metadata.jsonl.
```
How can I build a custom local dataset for this training script?
I didn't find any solution.
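The closest thing I've come up with (an untested sketch; the column names are my assumptions and would have to match the script's column-name arguments, if those exist) is to build a `datasets.Dataset` with two image columns instead of relying on `metadata.jsonl`:
```python
# Untested sketch: build image pairs as an explicit datasets.Dataset with two image columns.
from datasets import Dataset, Image

input_paths = ["datasets/input_images/image.jpg"]
edited_paths = ["datasets/edited_images/edited_image.jpg"]
prompts = ["sample"]

ds = Dataset.from_dict(
    {"input_image": input_paths, "edited_image": edited_paths, "edit_prompt": prompts}
)
ds = ds.cast_column("input_image", Image()).cast_column("edited_image", Image())
ds.save_to_disk("datasets/pix2pix_pairs")  # or ds.push_to_hub("user/my-pix2pix-pairs")
```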
### Logs
_No response_
### System Info
diffusers 0.30.0.dev0
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/8824
|
closed
|
[
"bug"
] | 2024-07-10T05:35:38Z
| 2024-07-11T02:18:40Z
| null |
jeonga0303
|
huggingface/optimum
| 1,949
|
ValueError: Trying to export a florence2 model
|
Hello,
I am attempting to export and quantize the Florence-2 model for CPU usage but encountered the following error:
```
ValueError: Trying to export a florence2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type florence2 to be supported natively in the ONNX export.
```
I am trying to quantize the Florence-2 model for CPU, but it shows this error.
Based on the error message, it seems that the Florence-2 model is not natively supported for ONNX export, and a custom configuration is required.
Could you please provide guidance or support for exporting and quantizing the Florence-2 model using a custom ONNX configuration? It would be highly beneficial for my project to have native support for this model type, or at least detailed instructions on how to proceed with the custom export.
|
https://github.com/huggingface/optimum/issues/1949
|
open
|
[
"feature-request",
"onnx"
] | 2024-07-10T04:59:06Z
| 2024-10-23T10:07:05Z
| 1
|
ghost
|
huggingface/transformers.js
| 842
|
Trying to run the Modnet example with Node.js on macOS results in: Unknown model class "modnet", attempting to construct from base class. Model type for 'modnet' not found, assuming encoder-only architecture.
|
### Question
Hello,
How one can run the modnet example ?
```
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
const processor = await AutoProcessor.from_pretrained('Xenova/modnet');
// Load image from URL
const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024';
const image = await RawImage.fromURL(url);
// Pre-process image
const { pixel_values } = await processor(image);
// Predict alpha matte
const { output } = await model({ input: pixel_values });
// Save output mask
const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8')).resize(image.width, image.height);
mask.save('mask.png');
```
Thanks for the amazing work !
|
https://github.com/huggingface/transformers.js/issues/842
|
closed
|
[
"question"
] | 2024-07-09T16:19:22Z
| 2025-03-27T18:58:03Z
| null |
gabrielstuff
|
huggingface/chat-ui
| 1,335
|
[v0.9.1] Switch the LLM model mid-conversation?
|
## Description
Currently, **chat-ui** does not support changing the language model once a conversation has started. For example, if I begin a chat with _Llama 3_, I cannot switch to _Gemini 1.5_ mid-conversation, even if I change the setting in the UI.
## Steps to Reproduce
* Start a conversation with one language model (e.g., _Llama 3_).
* Go to settings and change the model to a different one (e.g., _Gemini 1.5_).
* Observe that the model used in the conversation does not change.
## Expected Behavior
The language model should switch to the newly selected model, even mid-conversation.
## Additional Questions
* Is this a known limitation or a potential bug?
* If this is intended behavior, are there any plans to implement model-switching in the future?
* If not, could you provide guidance or resources on how to achieve this functionality? I can start implementing and raise a PR!
## Environment
* **OS**: macOS Sonoma
* **Browser**: Chrome, Safari, Arc
* **chat-ui** version: v0.9.1
cc: @nsarrazin
|
https://github.com/huggingface/chat-ui/issues/1335
|
open
|
[] | 2024-07-09T13:43:16Z
| 2024-09-13T16:45:23Z
| 3
|
adhishthite
|