| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/chat-ui
| 341
|
SSL Wrong version number error
|
I have added the following endpoint configuration to the model:
"endpoints": [
{"url": "http://127.0.0.1:8080/generate_stream", "weight": 100}
],
but I am getting this error:
TypeError: fetch failed
at fetch (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/undici/index.js:109:13)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/node_modules/@sveltejs/kit/src/runtime/server/fetch.js:32:10)
at async POST (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/src/routes/conversation/[id]/+server.ts:91:16)
at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
at async Object.handle (/src/hooks.server.ts:66:20)
at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
at async file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22 {
cause: [Error: C0770BE8547F0000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:355:
] {
library: 'SSL routines',
reason: 'wrong version number',
code: 'ERR_SSL_WRONG_VERSION_NUMBER'
}
}
Error: aborted
at connResetException (node:internal/errors:717:14)
at abortIncoming (node:_http_server:754:17)
at socketOnClose (node:_http_server:748:3)
at Socket.emit (node:events:525:35)
at TCP.<anonymous> (node:net:322:12) {
code: 'ECONNRESET'
}
|
https://github.com/huggingface/chat-ui/issues/341
|
closed
|
[
"support"
] | 2023-07-12T04:40:58Z
| 2023-09-18T14:00:27Z
| 4
|
swikrit21
|
huggingface/diffusers
| 4,054
|
[SD-XL] How to apply invisible-watermark for latent output
|
### Describe the bug
As a part of the license with SAI, we need to ensure the invisible watermark is applied across all images output by these models, including the Img2Img pipeline.
### Reproduction
```py
# if xformers or torch_2_0 is used attention block does not need
# to be in float32 which can save lots of memory
if use_torch_2_0_or_xformers:
    self.vae.post_quant_conv.to(latents.dtype)
    self.vae.decoder.conv_in.to(latents.dtype)
    self.vae.decoder.mid_block.to(latents.dtype)
else:
    latents = latents.float()

if not output_type == "latent":
    image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
else:
    image = latents

return StableDiffusionXLPipelineOutput(images=image)
```
This is the relevant portion of the img2img pipeline code.
In the XL pipeline, the latent output mode does not have the watermark applied, so it is easily bypassed.
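For reference, a minimal sketch of how a watermark could be applied to decoded images outside the pipeline, using the `invisible-watermark` package (`imwatermark`); the payload bytes and the DWT-DCT method below are my assumptions, not necessarily what the license requires:
```python
import numpy as np
from imwatermark import WatermarkEncoder

# hypothetical watermark payload; the actual required payload is defined by the license terms
WATERMARK_BYTES = b"SDXL"

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", WATERMARK_BYTES)

def watermark_rgb_uint8(image: np.ndarray) -> np.ndarray:
    """Apply an invisible watermark to an HxWx3 uint8 RGB image."""
    bgr = np.ascontiguousarray(image[:, :, ::-1])  # imwatermark expects BGR ordering (OpenCV convention)
    bgr_marked = encoder.encode(bgr, "dwtDct")
    return bgr_marked[:, :, ::-1]                  # back to RGB
```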
### Logs
```shell
N/A
```
### System Info
Git main branch.
### Who can help?
cc: @sayakpaul
|
https://github.com/huggingface/diffusers/issues/4054
|
closed
|
[
"bug"
] | 2023-07-12T03:58:04Z
| 2023-07-12T10:21:29Z
| null |
bghira
|
huggingface/transformers.js
| 192
|
Table Question Answering Support?
|
Hi - Interested in support for table question answering models. It's noted that these aren't supported, but is there any reason they wouldn't work if leveraged?
|
https://github.com/huggingface/transformers.js/issues/192
|
open
|
[
"question"
] | 2023-07-12T01:12:07Z
| 2023-07-13T16:18:19Z
| null |
timtutt
|
huggingface/peft
| 685
|
Matrix mismatch when trying to adapt Falcon with QLoRA, how to fix?
|
### System Info
```
(data_quality) brando9~ $ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 3455.484
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.81
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.25.1 pypi_0 pypi
[conda] numpy-base 1.25.0 py310hb5e798b_0
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.
|
https://github.com/huggingface/peft/issues/685
|
closed
|
[] | 2023-07-11T20:01:37Z
| 2023-07-24T00:11:02Z
| null |
brando90
|
huggingface/diffusers
| 4,047
|
How to set lora scale when loading a LoRA model?
|
Hey there, first of all thanks for your fantastic work!
I am loading LoRA weights, and I would like to set the scale of them being applied. Checking the code, it appears to be possible as shown [here](https://github.com/huggingface/diffusers/blob/fc7aa64ea8f5979b67bd730777e8e1c32e3adb05/src/diffusers/loaders.py#L1094).
How can we do it in practice? Is it possible to provide a small code snippet?
Thank you so much! Really appreciate your help :)
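Not maintainer guidance, just a sketch of one way this is commonly done: the UNet LoRA layers read a `scale` value from `cross_attention_kwargs` at inference time (the base model, LoRA path, and the 0.5 value below are placeholders):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load LoRA weights (path/repo is a placeholder)
pipe.load_lora_weights("path/to/lora")

# pass the LoRA scale at inference time: 1.0 = full effect, 0.0 = LoRA disabled
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```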
|
https://github.com/huggingface/diffusers/issues/4047
|
closed
|
[] | 2023-07-11T17:38:05Z
| 2023-08-29T05:30:44Z
| null |
pietrobolcato
|
huggingface/diffusers
| 4,042
|
How to combine the reference-only with inpainting and depth control?
|
### Model/Pipeline/Scheduler description
Hi, I recently wanted to combine reference-only with image inpainting and depth control to replace the background of portrait images. However, I have no idea how to build this pipeline, as there is no reference-with-inpaint pipeline example. Could you please help me figure it out?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
https://github.com/huggingface/diffusers/issues/4042
|
closed
|
[] | 2023-07-11T12:17:24Z
| 2023-07-14T06:12:29Z
| null |
AmberCheng
|
pytorch/text
| 2,190
|
Missing documentation for T5 model
|
## 📚 Documentation
**Description**
<!-- A clear and concise description of what content in https://pytorch.org/text/stable/index.html is an issue. -->
As per the title: there is no documentation for the T5 model, although it exists here:
https://pytorch.org/text/stable/models.html
|
https://github.com/pytorch/text/issues/2190
|
open
|
[] | 2023-07-11T10:40:37Z
| 2023-07-11T10:40:37Z
| 0
|
gau-nernst
|
huggingface/chat-ui
| 340
|
[WebSearch] "Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 1000 `inputs` tokens and 1024 `max_new_tokens`"
|
Hello there,
Title says it all.
We are not using any custom endpoints/models. We're just relying on the HuggingFace's API inferences.
Is there a way to increase/decrease the input tokens when using WebSearch (or even just increase the maximum sum)? It works fine if `max_new_tokens` is set to 512, but that, obviously, cuts off any answer that goes above these numbers.
So far, I haven't found a good balance, nor a way to decrease the number of input tokens.
Thanks in advance for your answer!

|
https://github.com/huggingface/chat-ui/issues/340
|
closed
|
[
"question",
"models"
] | 2023-07-11T07:33:18Z
| 2023-07-12T09:16:21Z
| null |
gollumeo
|
huggingface/diffusers
| 4,029
|
How can I make diffuser pipeline to use .safetensors file for SDXL?
|
Cloning the entire repo takes 100 GB.
How can I make the code below use a .safetensors file instead of the diffusers folder layout?
Let's say I have downloaded my safetensors file to path.safetensors.
How do I provide it?
The code below works, but we are cloning 100 GB instead of just a single 14 GB safetensors file, which is a waste of bandwidth.
**Also how can I add a LoRA checkpoint to this pipeline? a LoRA checkpoint made by Kohya script**
```
import gradio as gr
from diffusers import DiffusionPipeline
import torch
import base64
from io import BytesIO
import os
import gc
from datetime import datetime
from share_btn import community_icon_html, loading_icon_html, share_js

# SDXL code: https://github.com/huggingface/diffusers/pull/3859

model_dir = '/workspace'
access_token = os.getenv("ACCESS_TOKEN")

if model_dir:
    # Use local model
    model_key_base = os.path.join(model_dir, "stable-diffusion-xl-base-0.9")
    model_key_refiner = os.path.join(model_dir, "stable-diffusion-xl-refiner-0.9")
else:
    model_key_base = "stabilityai/stable-diffusion-xl-base-0.9"
    model_key_refiner = "stabilityai/stable-diffusion-xl-refiner-0.9"

# Use refiner (enabled by default)
enable_refiner = os.getenv("ENABLE_REFINER", "true").lower() == "true"

# Output images before the refiner and after the refiner
output_images_before_refiner = True

# Create public link
share = os.getenv("SHARE", "false").lower() == "true"

print("Loading model", model_key_base)
pipe = DiffusionPipeline.from_pretrained(model_key_base, torch_dtype=torch.float16, use_auth_token=access_token)
#pipe.enable_model_cpu_offload()
pipe.to("cuda")

# if using torch < 2.0
pipe.enable_xformers_memory_efficient_attention()
# pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

if enable_refiner:
    print("Loading model", model_key_refiner)
    pipe_refiner = DiffusionPipeline.from_pretrained(model_key_refiner, torch_dtype=torch.float16, use_auth_token=access_token)
    #pipe_refiner.enable_model_cpu_offload()
    pipe_refiner.to("cuda")

    # if using torch < 2.0
    pipe_refiner.enable_xformers_memory_efficient_attention()
    # pipe_refiner.unet = torch.compile(pipe_refiner.unet, mode="reduce-overhead", fullgraph=True)

# NOTE: we do not have word list filtering in this gradio demo
is_gpu_busy = False

def infer(prompt, negative, scale, samples=4, steps=50, refiner_strength=0.3, num_images=1):
    prompt, negative = [prompt] * samples, [negative] * samples
    images_b64_list = []

    for i in range(0, num_images):
        images = pipe(prompt=prompt, negative_prompt=negative, guidance_scale=scale, num_inference_steps=steps).images
        os.makedirs(r"stable-diffusion-xl-demo/outputs", exist_ok=True)
        gc.collect()
        torch.cuda.empty_cache()

        if enable_refiner:
            if output_images_before_refiner:
                for image in images:
                    buffered = BytesIO()
                    image.save(buffered, format="JPEG")
                    img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
                    image_b64 = (f"data:image/jpeg;base64,{img_str}")
                    images_b64_list.append(image_b64)

            images = pipe_refiner(prompt=prompt, negative_prompt=negative, image=images, num_inference_steps=steps, strength=refiner_strength).images
            gc.collect()
            torch.cuda.empty_cache()

        # Create the outputs folder if it doesn't exist
        for i, image in enumerate(images):
            buffered = BytesIO()
            image.save(buffered, format="JPEG")
            img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
            timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
            image_b64 = (f"data:image/jpeg;base64,{img_str}")
            images_b64_list.append(image_b64)

            # Save the image as PNG with unique timestamp
            filename = f"stable-diffusion-xl-demo/outputs/generated_image_{timestamp}_{i}.png"
            image.save(filename, format="PNG")

    return images_b64_list
```
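For what it's worth, a hedged sketch of loading the base model from a single `.safetensors` checkpoint and attaching a Kohya-style LoRA, assuming a diffusers version that provides `from_single_file` and Kohya-aware `load_lora_weights` (file paths are placeholders):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# load the base model from one safetensors file instead of the full repo layout
pipe = StableDiffusionXLPipeline.from_single_file(
    "/workspace/sd_xl_base.safetensors",  # placeholder path
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# attach a LoRA checkpoint (Kohya-format files are converted on load in recent versions)
pipe.load_lora_weights("/workspace/my_lora.safetensors")  # placeholder path

image = pipe(prompt="a castle on a hill, sunset").images[0]
```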
|
https://github.com/huggingface/diffusers/issues/4029
|
closed
|
[] | 2023-07-10T21:52:22Z
| 2023-12-11T18:45:18Z
| null |
FurkanGozukara
|
huggingface/chat-ui
| 337
|
Feature Request: Save messages and error message even if text generation endpoint fails
|
Situation: Text generation endpoint is not running. Then user sends a message.
Current Behavior: UI throws an error and saves conversation to mongodb like this, with an empty message list.
```
{
_id: ObjectId('64ac1abc2ac09222e24cc984'),
title: 'Untitled 5',
messages: [],
model: 'GPT',
createdAt: ISODate('2023-07-10T14:50:36.324Z'),
updatedAt: ISODate('2023-07-10T14:50:36.324Z'),
sessionId: '0048fb5c-a224-49c2-a7be-ea417defa6e2'
}
```
Desired behavior: UI throws an error and saves conversation to mongodb with the user's message and the error message inside.
```
{
_id: ObjectId('64ac1abc2ac09222e24cc984'),
title: 'Untitled 5',
messages: [
{
content: 'What is 2-2?',
from: 'user',
id: '874cfd40-2c61-49fe-b9f6-8b296a79ab6a',
},
{
from: 'assistant',
error: 'TypeError: fetch failed
at fetch (C:\chat-ui\node_modules\undici\index.js:109:13)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/node_modules/@sveltejs/kit/src/runtime/server/fetch.js:32:10)
at async POST (/src/routes/conversation/[id]/+server.ts:90:16)
at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
at async Object.handle (/src/hooks.server.ts:66:20)
at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
at async file:///C:/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22 {
cause: Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1532:16) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 80
}
}
},
],
model: 'GPT',
createdAt: ISODate('2023-07-10T14:50:36.324Z'),
updatedAt: ISODate('2023-07-10T14:50:36.324Z'),
sessionId: '0048fb5c-a224-49c2-a7be-ea417defa6e2'
}
```
|
https://github.com/huggingface/chat-ui/issues/337
|
closed
|
[
"enhancement",
"back",
"p2"
] | 2023-07-10T15:18:52Z
| 2023-10-10T11:16:22Z
| 1
|
loganlebanoff
|
huggingface/transformers.js
| 187
|
[Question] Performance and size of models
|
Great project, tons of potential! I have a general question I thought I may ask. Using the convert.py scripts, I took a Pytorch model and converted it to ONNX. With quantizing, I get a full 428MB model and a 110MB _quantized model. Now how does it work for the user exactly? Does the user automatically download the _quantized one?
Would this be accurate:
- WASM downloaded/loaded (e.g., 15MB)
- Transformers.js runs the core
- Model downloaded/load (e.g., 110MB)
- Model starts and runs
- Result is returned
- (next time it is called, WASM is reloaded and model is cached)
125MB is still quite big for the web: [https://huggingface.co/plopop/industry-classification-api-onnx](https://huggingface.co/plopop/industry-classification-api-onnx)
With something like [https://huggingface.co/Xenova/mobilebert-uncased-mnli](https://huggingface.co/Xenova/mobilebert-uncased-mnli) (27MB), running everything within a worker takes 8-15 seconds depending on the input from our end right now. Are there any other performance gains to be had, or would the only way be to optimize the source model further?
|
https://github.com/huggingface/transformers.js/issues/187
|
closed
|
[
"question"
] | 2023-07-10T14:39:31Z
| 2023-07-11T17:06:38Z
| null |
sabatale
|
huggingface/chat-ui
| 336
|
How to work in chat-ui with non-streaming data?
|
I was working with chat-ui by providing only my endpoint, which is hosted at localhost:8000/generate. I don't have a model locally, only an endpoint, so can you provide a solution for working with endpoints only and non-streaming data (application/json or application/plain)? The model is hosted on this server.
in modelEndpoint.ts
if (!model.endpoints) {
    return {
        url: `http://10.0.2.27:8000/generate`,
        // authorization: `Bearer ${HF_ACCESS_TOKEN}`,
        // weight: 1,
    };
}
and I get this error:
Error: An error occurred while fetching the blob
at request (file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@huggingface/inference/dist/index.mjs:89:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Proxy.textGeneration (file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@huggingface/inference/dist/index.mjs:457:15)
at async Module.generateFromDefaultEndpoint (/src/lib/server/generateFromDefaultEndpoint.ts:22:28)
at async POST (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/src/routes/conversation/[id]/summarize/+server.ts:30:26)
at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
at async Object.handle (/src/hooks.server.ts:66:20)
at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
at async file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22
|
https://github.com/huggingface/chat-ui/issues/336
|
closed
|
[] | 2023-07-10T13:43:17Z
| 2023-07-11T08:29:40Z
| null |
swikrit21
|
huggingface/transformers.js
| 186
|
[Question] How to interpret boxes in object detection example ?
|
Hi,
Can anyone help me interpret the boxes returned by object detection with the model "Xenova/detr-resnet-50"?
I want to crop out the detected object from the image using sharp (Node.js). How can I pass these boxes to sharp's resize function?
|
https://github.com/huggingface/transformers.js/issues/186
|
closed
|
[
"question"
] | 2023-07-10T12:59:22Z
| 2023-07-11T00:55:13Z
| null |
geminigeek
|
huggingface/chat-ui
| 335
|
Bug: Unexpected execution result on Firefox browser with Chat-UI ver. 0.3.0
|
I recently installed the 0.3.0 version of the HF Chat-UI software.
I then performed an evaluation using the **HuggingFaceH4/starchat-beta** model.
At that time, I typed the question "_Could you tell me about the weather in Tokyo City in Japan on July-10-2023_?" and ran it.
Unfortunately, the results varied between browsers.
In the Firefox browser, the result is displayed normally.
However, the following error occurs in the Chrome browser.
* **Error message:**
```
403 You don't have access to this conversation.
If someone gave you this link, ask them to use the 'share' feature instead.
```
I was wondering if anyone else is experiencing the same issue, any comments are welcome.
|
https://github.com/huggingface/chat-ui/issues/335
|
closed
|
[
"support"
] | 2023-07-10T04:40:40Z
| 2023-09-11T09:32:14Z
| 2
|
leemgs
|
huggingface/chat-ui
| 334
|
Chat-ui is starting, but nothing happens
|
# Description:
When starting Chat-ui, the initialization process begins as expected but stalls indefinitely, without any evident progress. The application doesn't crash or give any errors. This issue occurs across multiple attempts, regardless of browser type or device.
# Steps to reproduce:
- Install prerequisites
- Fill in the .env.local file
- Launch a DB container for chat persistence
- Start Chat-UI
- Open a browser (e.g., Chrome, Firefox, Safari)
- Navigate to the Chat-ui web address.
- Observe the behavior.
# Expected result:
After navigating to the url, the Chat-ui should initialize and allow for the use of its various functionalities.
# Actual result:
The UI remains in a state of 'loading' indefinitely without any change, timing out after some time.
# Environment:
This issue was reproduced on:
1. Operating System: Ubuntu 22.04, Fedora Workstation 38
2. Node Version: v18.16.1
3. NPM Version: 9.5.1
Additional context:
- No error messages are displayed.
- There is no notable console log information.
- Network status is stable during the process.
- Similar behavior noticed on Fedora.
- Refreshing the browser, clearing the cache, or using a different browser does not resolve the issue.
- Firewall is disabled on host
If you need any further information, I would be glad to provide it. Thanks in advance!
|
https://github.com/huggingface/chat-ui/issues/334
|
closed
|
[
"support"
] | 2023-07-09T13:53:34Z
| 2023-09-11T09:31:49Z
| 2
|
Notespeak
|
huggingface/diffusers
| 3,988
|
How to use part of the controlnet models with a "StableDiffusionControlNetInpaintPipeline" object?
|
I created a "StableDiffusionControlNetInpaintPipeline" object with a list of controlnet models such as "canny" and "openpose", but sometimes I want to use canny only or openpose only. Is there a way to reuse part of the controlnet models with an already initialized "StableDiffusionControlNetInpaintPipeline" object?
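Not an official answer, but one sketch that reuses the already-loaded components: a pipeline's `components` dict can be used to build a second pipeline object that shares the same weights, swapping in only the ControlNet you need for a given run (variable names such as `pipe` and `canny_controlnet` are placeholders for your own objects):
```python
from diffusers import StableDiffusionControlNetInpaintPipeline

# `pipe` is the already-initialized pipeline built with controlnet=[canny_controlnet, openpose_controlnet]
components = pipe.components                 # shared modules; no extra GPU memory for the reused parts
components["controlnet"] = canny_controlnet  # keep only the ControlNet you want for this run

canny_only_pipe = StableDiffusionControlNetInpaintPipeline(**components)
# when calling canny_only_pipe, pass a single control image instead of a list
```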
|
https://github.com/huggingface/diffusers/issues/3988
|
closed
|
[] | 2023-07-07T09:18:18Z
| 2023-08-01T04:51:41Z
| null |
AdamMayor2018
|
pytorch/pytorch
| 104,764
|
How to integrate the new cpp file with Pytorch geometric?
|
### 🚀 The feature, motivation and pitch
I am using the neighbour loader function in my code, which uses the sample_adj_cpu function to sample neighbours. I am making some changes to this function, which is present in the following file.
File link:
[[pytorch_sparse](https://github.com/rusty1s/pytorch_sparse/tree/master)/[csrc](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc)/[cpu](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc/cpu)
/sample_cpu.cpp](url)
How do I integrate these changes into PyTorch Geometric?
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/104764
|
closed
|
[] | 2023-07-07T07:48:04Z
| 2023-07-07T16:32:01Z
| null |
shivanisankhyan
|
pytorch/TensorRT
| 2,082
|
❓ [Question] How to decrease the latency of the inference?
|
## ❓ Question
Hi. I converted the PyTorch RetinaFace and ArcFace models to TensorRT via the torch_tensorrt library. Everything is okay, but after some iterations inference freezes and the time to handle an image increases badly (>10x).
A snippet of the inference simulation is below:
## Environment
TensorRT Version: 8.4.2
GPU Type: A100
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version: 8
Operating System + Version: SLES “15-SP2” in host machine
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 1.13.0a0+d321be6
Baremetal or Container (if container which image + tag): [nvcr.io/nvidia/pytorch:22.08-py3](http://nvcr.io/nvidia/pytorch:22.08-py3)
## Code
```
import torch
import torch_tensorrt
import time

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

retinaface_model = torch.jit.load('../jit_retinaface_trt.torch-tensorrt')
retinaface_model.eval()
retinaface_model.to(DEVICE)

arcface_model = torch.jit.load('../arcface_bs1_torch.float32.torch-tensorrt')
arcface_model.eval()
arcface_model.to(DEVICE)

retinaface_tensor = torch.rand(1, 3, 360, 640).to(DEVICE)
arcface_tensor = torch.rand(1, 3, 112, 112).to(DEVICE)

for _ in range(100):
    global_start = time.time()

    start_time = time.time()
    with torch.no_grad():
        ret_out = retinaface_model(retinaface_tensor)
    torch.cuda.synchronize()
    end_time = time.time()
    ret_time = end_time - start_time

    start_time = time.time()
    with torch.no_grad():
        arc_out = arcface_model(arcface_tensor)
    torch.cuda.synchronize()
    end_time = time.time()
    arc_time = end_time - start_time

    global_end = time.time()
    global_time = global_end - global_start

    # if global_time > 0.1:
    print(f'ret time is : {ret_time}')
    print(f'arc time is : {arc_time}')
    print(f'global time is : {global_end-global_start}')
    print('-'*40)
```
## Outputs
Normally the output looks like this:
ret time is : 0.0009617805480957031
arc time is : 0.0019981861114501953
global time is : 0.002961874008178711
ret time is : 0.0008959770202636719
arc time is : 0.0019989013671875
global time is : 0.002896547317504883
ret time is : 0.0009148120880126953
arc time is : 0.0020008087158203125
global time is : 0.0029172897338867188
ret time is : 0.0008985996246337891
arc time is : 0.001995086669921875
global time is : 0.002894878387451172
ret time is : 0.00446009635925293
arc time is : 0.002003192901611328
global time is : 0.006464719772338867
ret time is : 0.0009562969207763672
arc time is : 0.0020017623901367188
global time is : 0.0029592514038085938
ret time is : 0.0009098052978515625
arc time is : 0.002006053924560547
global time is : 0.002917051315307617
ret time is : 0.0009250640869140625
arc time is : 0.001997709274291992
global time is : 0.002924203872680664
ret time is : 0.0009291172027587891
arc time is : 0.001995086669921875
global time is : 0.002925395965576172
ret time is : 0.0009377002716064453
arc time is : 0.0020194053649902344
global time is : 0.0029582977294921875
ret time is : 0.0009005069732666016
arc time is : 0.0019958019256591797
global time is : 0.0028977394104003906
ret time is : 0.0009152889251708984
arc time is : 0.001996755599975586
global time is : 0.0029134750366210938
ret time is : 0.0009534358978271484
arc time is : 0.0019991397857666016
global time is : 0.0029540061950683594
ret time is : 0.0009467601776123047
arc time is : 0.0020117759704589844
global time is : 0.002960205078125
ret time is : 0.0008974075317382812
arc time is : 0.0019989013671875
global time is : 0.0028977394104003906
ret time is : 0.0009267330169677734
arc time is : 0.002001523971557617
global time is : 0.0029296875
But after some iterations and some time, it returns this:
ret time is : 0.0030410289764404297
arc time is : 0.10997724533081055 <-----
global time is : 0.11302065849304199
ret time is : 0.002657651901245117
arc time is : 0.1075441837310791 <-----
global time is : 0.11020350456237793
ret time is : 0.1104578971862793 <-----
arc time is : 0.0020885467529296875
global time is : 0.1125497817993164
ret time is : 0.11419057846069336 <-----
arc time is : 0.0020301342010498047
global time is : 0.11622214317321777
ret time is : 0.10733747482299805 <-----
arc time is : 0.0020294189453125
global time is : 0.10936880111694336
ret time is : 0.1150820255279541 <-----
arc time is : 0.0020606517791748047
global time is : 0.11714410781860352
I tried changing the clock frequency to the A100 maximum (1410 MHz), but nothing changes compared to the default (765 MHz).
In real-time handling, this happens after 26-28 iterations.
It would be great if you could help fix this. Thanks in advance!
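One thing that might help narrow this down (an assumption on my part, not a confirmed fix): timing with CUDA events after a proper warm-up rules out host-side overhead and clock ramp-up as the cause, e.g.:
```python
import torch

def time_cuda(model, example, iters=100, warmup=20):
    """Return average per-iteration GPU time in milliseconds measured with CUDA events."""
    with torch.no_grad():
        for _ in range(warmup):          # warm-up so lazy initialization / clock ramp-up is excluded
            model(example)
        torch.cuda.synchronize()

        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            model(example)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / iters
```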
|
https://github.com/pytorch/TensorRT/issues/2082
|
closed
|
[
"question",
"No Activity",
"component: runtime",
"performance"
] | 2023-07-07T05:51:35Z
| 2023-10-16T00:02:22Z
| null |
hvildan
|
huggingface/optimum-habana
| 292
|
Where in the directory "/tmp/tst-summarization", is the summarization output stored?
|
### System Info
```shell
Optimum Habana : 1.6.0
SynapseAI : 1.10.0
Docker Image : Habana® Deep Learning Base AMI (Ubuntu 20.04)
Volume : 1000 GiB
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Start an EC2 instance with DL1 Resource and this image : Habana® Deep Learning Base AMI (Ubuntu 20.04)
Run these commands
a. docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.10.0/ubuntu20.04/habanalabs/pytorch-installer-2.0.1:latest
b. git clone https://github.com/huggingface/optimum-habana.git
c. pip install optimum[habana]
d. cd examples
e. cd summarization
f. pip install -r requirements.txt
python run_summarization.py \
--model_name_or_path t5-small \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--predict_with_generate \
--use_habana \
--use_lazy_mode \
--use_hpu_graphs_for_inference \
--gaudi_config_name Habana/t5 \
--ignore_pad_token_for_loss False \
--pad_to_max_length \
--save_strategy epoch \
--throughput_warmup_steps 3
### Expected behavior
I need a file with the summarized text, not just the evaluation metrics.
|
https://github.com/huggingface/optimum-habana/issues/292
|
closed
|
[
"bug"
] | 2023-07-07T03:24:31Z
| 2023-07-18T08:30:21Z
| null |
Abhaycnvrg
|
huggingface/trl
| 503
|
How to get labels into the SFTTrainer
|
Hi!
I am trying to prompt-tune medalpaca 7b using prompt tuning or LoRA with the SFTTrainer. I have a prompt and labels that I want the model to output. I have made a Dataset class that inherits from torch.utils.data.Dataset to prepare my inputs, but I am wondering if there is some way to make the trainer use the datapoint["labels"] part during training:
class DiagnosesDataset(torch.utils.data.Dataset):
    def __init__(self, instances, tokenizer):
        self.instances = instances
        # self.labels = labels
        self.tokenizer = tokenizer

    def __getitem__(self, idx):
        item = {}
        prompt = self.instances["prompt"][idx]
        labels = self.instances["label"][idx]
        item = self.tokenize(prompt + labels)
        tokenized_instruction = self.tokenize(prompt)
        label_instruction = self.tokenizer(labels)
        i = len(tokenized_instruction["input_ids"])
        item["labels"][i:] = label_instruction["input_ids"]
        return item

    def tokenize(self, prompt):
        result_prompt = self.tokenizer(prompt,
                                       truncation=True,
                                       max_length=2048,
                                       padding=False,
                                       return_tensors=None)
        result_prompt["labels"] = [-100] * len(result_prompt["input_ids"])
        return result_prompt

    def __len__(self):
        return len(self.instances)
I am calling the trainer like this:
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    packing=True,
    data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding="max_length", max_length=2048),
    args=training_arguments)
trainer.train()
This is the error I am currently getting, but I am not sure whether it has something to do with SFTTrainer:
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef │
│ t.py:544 in <module> │
│ │
│ 541 │ │
│ 542 │ │
│ 543 │ args=parser.parse_args() │
│ ❱ 544 │ run() │
│ 545 │ #main() │
│ 546 │ │
│ 547 │ #all_data, prompts, golds=preprocess("./dataset.pkl") │
│ │
│ /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef │
│ t.py:153 in run │
│ │
│ 150 │ │ packing=True, │
│ 151 │ │ data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multipl │
│ 152 │ │ args=training_arguments) │
│ ❱ 153 │ trainer.train() │
│ 154 │ │
│ 155 │ logging.info("Run Train loop") │
│ 156 │ #model_updated=train(model, dataset, args.seed, args.batch_size, a │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.a │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py │
│ thon3.9/site-packages/transformers/trainer.py:1802 in _inner_training_loop │
│ │
│ 1799 │ │ │ │ │ self.control = self.callback_handler.on_step_begi │
│ 1800 │ │ │ │ │
│ 1801 │ │ │ │ with self.accelerator.accumulate(model): │
│ ❱ 1802 │ │ │
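Not an official answer, but a sketch of the approach the TRL docs describe for masking the prompt and training only on the response, assuming a trl version that ships `DataCollatorForCompletionOnlyLM` (the "### Answer:" template and column contents are placeholders; `model`, `tokenizer`, `peft_config`, and `training_arguments` come from your setup):
```python
from datasets import Dataset
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

# plain HF dataset with prompt/label columns (placeholder data)
data = Dataset.from_dict({
    "prompt": ["Patient presents with ..."],
    "label": ["Diagnosis: ..."],
})

def formatting_func(examples):
    # everything before "### Answer:" gets label -100, everything after is trained on
    return [f"{p}\n### Answer: {l}" for p, l in zip(examples["prompt"], examples["label"])]

collator = DataCollatorForCompletionOnlyLM("### Answer:", tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=data,
    peft_config=peft_config,
    formatting_func=formatting_func,
    data_collator=collator,
    packing=False,  # packing is incompatible with a completion-only collator
    args=training_arguments,
)
trainer.train()
```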
|
https://github.com/huggingface/trl/issues/503
|
closed
|
[] | 2023-07-06T22:19:21Z
| 2023-08-14T15:05:10Z
| null |
MaggieK410
|
huggingface/transformers.js
| 182
|
Website and extension using same model
|
Per the chrome extension example, you pack the model with the extension. Is there a way for a website and a chrome extension to use the same cached model? If my project has both a website and an extension, I hope they could use a single model instead of having to store 2 on the user's machine.
|
https://github.com/huggingface/transformers.js/issues/182
|
open
|
[
"question"
] | 2023-07-06T17:43:48Z
| 2023-07-16T17:26:09Z
| null |
escottgoodwin
|
huggingface/chat-ui
| 331
|
How to send model name as a input to API endpoint
|
I want to host two models and query them by switching between them. The problem is I'm not able to send the model name as a parameter from the UI to the API endpoints.
Can someone help on this?
|
https://github.com/huggingface/chat-ui/issues/331
|
closed
|
[
"question"
] | 2023-07-06T13:04:04Z
| 2023-09-18T14:03:18Z
| null |
sankethgadadinni
|
huggingface/transformers
| 24,685
|
How to get the last 4 Hidden states from the feature extraction pipeline
|
I have defined a pipeline for Feature extraction
```
# Create the pipeline
p = pipeline(
    task="feature-extraction",
    tokenizer="microsoft/biogpt",
    model="microsoft/biogpt",
    framework="pt",
    device=0
)

bio_gpt = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states=True)
bio_gpt = bio_gpt.to(device)
```
I want to extract the embeddings of the last token of the last hidden state, and the average pooling of the last 4 layers. Using the pipeline approach, I am doing it like this:
_Last token of the last hidden state:_
```
def extract_last_token(last_hidden_states):
    last_hidden_states = np.array(last_hidden_states)
    return last_hidden_states[:, -1, :]

# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])

# Extract the last token of the last hidden state
embeddings = [extract_last_token(hidden_state) for hidden_state in results]

# Create a DataFrame to store the results
df2["embeddings2"] = embeddings
```
_Average pooling of the last 4 layers:_
```
def mean_pooling(last_hidden_states):
    last_4_layers = last_hidden_states[-4:]  # Consider the last 4 layers
    return np.mean(last_4_layers, axis=1)

# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
features = np.squeeze(results)
print(features.shape)

# Perform mean pooling on the last hidden states
embeddings = [mean_pooling(hidden_state) for hidden_state in results]

# Create a DataFrame to store the results
df2["embeddings4"] = embeddings
```
The issues are:
1. When I extract the embeddings of the 4 last layers or the 12 last layers the embeddings are always the same

2. The embeddings of the last token of the last hidden state are different from the same embeddings using the "manual" method

Weirdly, in the above picture two of the embeddings are the same but with opposite row IDs; this indicates another problem that I don't see. If you can spot it, I'd appreciate it.
Here is the code for the manual version:
```
output = bio_gpt(**model_inputs)
# Get the last state
last_state = output.last_hidden_state
cls_embeddings = last_state[:, -1, :]
# Print the last state
print(cls_embeddings)
# Assign cls_embeddings to "embeddings4" column in df2
df2["embeddings_manual"] = [cls_embeddings[i].cpu().detach().numpy() for i in range(len(df2))]
```
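In case it helps, a sketch of the manual route that actually exposes all layers; as far as I know the feature-extraction pipeline only returns the last hidden state, so the last-4-layer average has to come from a model call with `output_hidden_states=True` (the pooling axes below are my assumption about what "average pooling of the last 4 layers" should mean):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states=True).to(device)

inputs = tokenizer(list(df2["text"]), return_tensors="pt", padding=True, truncation=True).to(device)
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states            # tuple: (embeddings, layer 1, ..., layer N)
last_token = hidden_states[-1][:, -1, :]         # last token of the last layer

last_4 = torch.stack(hidden_states[-4:])         # (4, batch, seq_len, hidden)
mean_last_4 = last_4.mean(dim=0).mean(dim=1)     # average over layers, then over tokens -> (batch, hidden)
```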
|
https://github.com/huggingface/transformers/issues/24685
|
closed
|
[] | 2023-07-06T08:45:08Z
| 2023-08-14T15:02:35Z
| null |
Luke-4
|
pytorch/serve
| 2,446
|
is TS_JOB_QUEUE_SIZE a valid environment variable?
|
### 📚 The doc issue
[This page](https://pytorch.org/serve/configuration.html) says environment variables are equivalent to server configuration set in `config.properties`
Setting `TS_JOB_QUEUE_SIZE` as an environment variable has no effect in Docker version 0.8.0
```
Torchserve version: 0.8.0
TS Home: /home/venv/lib/python3.9/site-packages
Current directory: /app
Temp directory: /home/model-server/tmp
Metrics config path: /app/config/metrics.yaml
Number of GPUs: 0
Number of CPUs: 4
Max heap size: 7952 M
Python executable: /home/venv/bin/python
Config file: /app/config/config.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8081
Metrics address: http://0.0.0.0:8082
Model Store: /app/model_store
Initial Models: ALL
Log dir: /app/logs
Metrics dir: /app/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: prometheus
Disable system metrics: false
Workflow Store: /app/model_store
Model config: N/A
```
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/2446
|
closed
|
[
"question",
"triaged",
"docker"
] | 2023-07-06T01:18:47Z
| 2023-10-28T19:43:36Z
| null |
sreeprasannar
|
huggingface/setfit
| 393
|
AttributeError: 'list' object has no attribute 'shuffle'
|
I am getting the "AttributeError: 'list' object has no attribute 'shuffle'" error when I try to use SetFit.
The dataset has two columns; one is the text and the second is the label column.
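Guessing from the message (so take this as an assumption): `shuffle` exists on a `datasets.Dataset`, not on a plain Python list, so converting the data first usually avoids this, e.g.:
```python
from datasets import Dataset

# placeholder data; replace with your own text/label columns
train_dataset = Dataset.from_dict({
    "text": ["great product", "terrible support"],
    "label": [1, 0],
})

# SetFit can now call .shuffle() etc. on the dataset
train_dataset = train_dataset.shuffle(seed=42)
```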
|
https://github.com/huggingface/setfit/issues/393
|
closed
|
[
"question"
] | 2023-07-05T16:47:17Z
| 2023-12-05T14:41:13Z
| null |
gpirge
|
huggingface/datasets
| 6,008
|
Dataset.from_generator consistently freezes at ~1000 rows
|
### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available.
Somehow it worked a few times but mostly this makes the datasets library much more cumbersome to work with because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.
I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.
Let me know if you have ideas how to resolve it!
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
def gen():
    for row in range(10000):
        yield {"i": np.random.rand(512, 512, 3)}

Dataset.from_generator(gen)
# -> 90% of the time gets stuck around 1000 rows
```
### Expected behavior
Should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
|
https://github.com/huggingface/datasets/issues/6008
|
closed
|
[] | 2023-07-05T16:06:48Z
| 2023-07-10T13:46:39Z
| 3
|
andreemic
|
pytorch/torchx
| 737
|
-j vs --cpu/--gpu in ddp
|
## 📚 Documentation
## Link
[https://pytorch.org/torchx/latest/components/distributed.html](https://pytorch.org/torchx/latest/components/distributed.html)
## What does it currently say?
Not clear whether the --cpu, --gpu arguments are overridden by the -j arguments, although in my testing (launch then run top, etc.) it seems they are?
## What should it say?
Both the docs and the --help output for dist.ddp could be more clear on this front. More generally, I am wondering if there exists a torchx equivalent of `torchrun --standalone --nnodes=1 --nproc_per_node=auto ...`.
## Why?
Clearly I wouldn't want `--gpu=0` with `-j 1x2`, right? As such the listed defaults in docs --help are a little confusing.
|
https://github.com/meta-pytorch/torchx/issues/737
|
open
|
[] | 2023-07-05T15:57:56Z
| 2023-07-12T20:47:24Z
| 1
|
godfrey-cw
|
pytorch/pytorch
| 104,617
|
How to integrate the new cpp file with PyTorch Geometric?
|
### 🚀 The feature, motivation and pitch
I am using the neighbour loader function in my code, which uses the sample_adj_cpu function to sample neighbours. I am making some changes to this function, which is present in the following file.
File link:
[[pytorch_sparse](https://github.com/rusty1s/pytorch_sparse/tree/master)/[csrc](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc)/[cpu](https://github.com/rusty1s/pytorch_sparse/tree/master/csrc/cpu)
/sample_cpu.cpp](url)
How do I integrate these changes into PyTorch Geometric?
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
|
https://github.com/pytorch/pytorch/issues/104617
|
closed
|
[
"module: sparse",
"triaged"
] | 2023-07-05T06:47:12Z
| 2023-07-12T22:10:30Z
| null |
shivanisankhyan
|
huggingface/dataset-viewer
| 1,482
|
diagnose why the mongo server uses so much CPU
|
we have many alerts on the use of CPU on the mongo server.
```
System: CPU (User) % has gone above 95
```
Why?
|
https://github.com/huggingface/dataset-viewer/issues/1482
|
closed
|
[
"question",
"infra",
"improvement / optimization",
"P1"
] | 2023-07-04T16:04:06Z
| 2024-02-06T14:49:20Z
| null |
severo
|
huggingface/text-generation-inference
| 536
|
How to enable vllm
|
### Feature request
How to enable vllm
### Motivation
How to enable vllm
### Your contribution
How to enable vllm
|
https://github.com/huggingface/text-generation-inference/issues/536
|
closed
|
[] | 2023-07-04T05:20:21Z
| 2023-07-04T10:56:29Z
| null |
lucasjinreal
|
huggingface/transformers.js
| 180
|
[Question] Running transformers.js in a browser extension
|
Hello,
I'm trying to build a chrome extension that uses Transformers.js. When I try to import it in the background worker script, I first get an error that says process is not available, because apparently someone decided browser plugins shouldn't use process.env anymore. I found a solution that said to put
```
define: {
  'process.env': {}
}
```
in my vite.config.js, which worked to get me past that, but the next error is:
```
Error: Dynamic require of "../bin/napi-v3/undefined/undefined/onnxruntime_binding.node" is not supported
```
Has anyone gotten this working in a browser environment yet? I saw a video about tensorflow.js in the browser, but I'd prefer to use transformers.js because you already provided me with an example of how to get it to behave like Sentence Transformers. :)
|
https://github.com/huggingface/transformers.js/issues/180
|
closed
|
[
"question"
] | 2023-07-04T01:09:29Z
| 2023-07-16T15:58:30Z
| null |
davidtbo
|
huggingface/datasets
| 6,003
|
interleave_datasets & DataCollatorForLanguageModeling having a conflict ?
|
### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:
- `tokenize()` runs fine
- `group_text()` runs fine
Every time, on step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1, it runs
- training without interleave on dataset 2, it runs
- training without `.to_iterable_dataset()`, it hangs then crash
- training without group_text() and padding to max_length seemed to fix the issue, but who knows if this was just because it was an issue that would come much later in terms of steps.
I might have coded something wrong, but I don't see what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
    dataset = load_dataset(
        "text",
        data_files={"train": [path]},
        split=f"train[{percent}]"
    )
    dataset = dataset.map(
        lambda examples: tokenize(examples["text"]),
        batched=True,
        num_proc=num_proc,
    )
    dataset = dataset.map(
        group_texts,
        batched=True,
        num_proc=num_proc,
        desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
        remove_columns=["text"]
    )
    print(len(dataset))
    return dataset.to_iterable_dataset()
```
I hardcoded group_text:
```py
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
    # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
    total_length = (total_length // 512) * 512
    # Split by chunks of max_len.
    result = {
        k: [t[i: i + 512] for i in range(0, total_length, 512)]
        for k, t in concatenated_examples.items()
    }
    # result = {k: [el for el in elements if el] for k, elements in result.items()}
    return result
```
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
    [train1, train2],
    probabilities=[0.8, 0.2],
    seed=42
)
eval_dataset = interleave_datasets(
    [dev1, dev2],
    probabilities=[0.8, 0.2],
    seed=42
)
```
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember well].
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04
|
https://github.com/huggingface/datasets/issues/6003
|
open
|
[] | 2023-07-03T17:15:31Z
| 2023-07-03T17:15:31Z
| 0
|
PonteIneptique
|
huggingface/dataset-viewer
| 1,472
|
How to show fan-in jobs' results in response ("pending" and "failed" keys)
|
In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key):
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
}
```
and for dataset-level it also has `pending` and `failed` keys:
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
"pending": [],
"failed": []
}
```
To me, undocumented `"pending"` and `"failed"` keys look a bit too technical and unclear.
What we can do:
* document what these keys mean
* don't document it, but for these kinds of endpoints only show examples where all levels are specified (currently it's not like this). So, don't show examples that return the `pending` and `failed` fields.
* anything else? @huggingface/datasets-server
|
https://github.com/huggingface/dataset-viewer/issues/1472
|
open
|
[
"question",
"api",
"P2"
] | 2023-07-03T16:49:10Z
| 2023-08-11T15:26:24Z
| null |
polinaeterna
|
huggingface/blog
| 1,281
|
How to push or share a LoRA adapter to the Hugging Face Hub?
|
Hi, I trained a Falcon model and already set the push_to_hub parameter in the training arguments, but it is not working.
```
from transformers import TrainingArguments

output_dir = "chatb_f"
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
optim = "paged_adamw_32bit"
save_steps = 60
logging_steps = 10
learning_rate = 2e-4
max_grad_norm = 0.3
max_steps = 60
warmup_ratio = 0.03
lr_scheduler_type = "constant"

training_arguments = TrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    fp16=True,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=True,
    lr_scheduler_type=lr_scheduler_type,
    push_to_hub=True
)

from trl import SFTTrainer

max_seq_length = 512

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
)
```
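Not a definitive answer, but a sketch of how the adapter is often pushed explicitly after training with a PEFT model (the repo name is a placeholder, and this assumes you are logged in via `huggingface_hub`):
```python
# after trainer.train()
repo_id = "your-username/falcon-lora-adapter"  # placeholder repo name

# pushes only the LoRA adapter weights plus adapter_config.json
trainer.model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```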
|
https://github.com/huggingface/blog/issues/1281
|
open
|
[] | 2023-07-01T13:56:47Z
| 2023-07-01T13:57:40Z
| null |
imrankh46
|
huggingface/diffusers
| 3,918
|
How to control the position of an object in an image using text in a txt2img model?
|
How to control the position of an object in an image using text in a txt2img model? I know this is easy to achieve in an img2img model, but how can it be done in a txt2img model?
Or, how can a model be fine-tuned to achieve this effect? For example, specifying x=0, y=1, which corresponds to the top-left corner.
I have tried similar approaches, but they are not sensitive to the position. I suspect it may be due to insensitivity to the text input. I tried using compel to enhance the positional features, but still couldn't control the position. Do I need to retrain the text_encoder related part for this?
In my fine-tuning code, I commented out the no_grad parts for text_encoder and others. Is this correct, and will it automatically train the text_encoder?
Thank you!
|
https://github.com/huggingface/diffusers/issues/3918
|
closed
|
[
"stale"
] | 2023-07-01T02:44:24Z
| 2023-08-08T15:03:15Z
| null |
XiaoyuZhuang
|
huggingface/dataset-viewer
| 1,464
|
Change the way we represent ResponseAlreadyComputedError in the cache
|
When a "parallel" step has already been computed, an error is stored in the cache with `ResponseAlreadyComputedError`error_code, and http status 500 (ie: if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed).
But it makes it hard to monitor the "true" errors. If we follow the analogy with the HTTP status codes, it should be 3xx instead of 5xx, ie: a redirection to another resource.
I don't know how we should change this though. Let's put ideas in the issue.
|
https://github.com/huggingface/dataset-viewer/issues/1464
|
closed
|
[
"question",
"improvement / optimization",
"P2"
] | 2023-06-30T18:13:34Z
| 2024-02-23T09:56:05Z
| null |
severo
|
huggingface/transformers.js
| 176
|
[Question] Embeddings for the Entire Document
|
Hi, thanks for all the effort, I really appreciate it. I enjoy coding in JS and do all things in JS.
Is it a good idea to load the entire JSON document to get embeddings? What tokenizer should I choose? I have a ton of valuable information in my key and value pairs. Or should I craft a sentence from the document?
```json
{
"id": 2053926,
"city": "New York",
"user_id": 3578165,
"price": 75,
"native_currency": "USD",
"price_native": 75,
"price_formatted": "$75",
"lat": 40.854397081884706,
"lng": -73.93876393071385,
"country": "United States",
"name": "air conditioned room w/ great view",
"smart_location": "New York, NY",
"has_double_blind_reviews": false,
"instant_bookable": false,
"bedrooms": 1,
"beds": 1,
"bathrooms": 1,
"market": "New York",
"min_nights": 1,
"neighborhood": "Washington Heights",
"person_capacity": 3,
"state": "NY",
"zipcode": "10033",
"user": {
"user": {
"id": 3578165,
"first_name": "Benjamin",
"has_profile_pic": true
}
},
"address": "Pinehurst Avenue, New York, NY 10033, United States",
"country_code": "US",
"cancellation_policy": "flexible",
"property_type": "Apartment",
"reviews_count": 14,
"room_type": "Private room",
"room_type_category": "private_room",
"picture_count": 18,
"_geoloc": {
"lat": 40.854397081884706,
"lng": -73.93876393071385
},
"objectID": "507205000"
}
```
|
https://github.com/huggingface/transformers.js/issues/176
|
closed
|
[
"question"
] | 2023-06-30T16:20:37Z
| 2023-06-30T22:43:03Z
| null |
hadminh
|
huggingface/sentence-transformers
| 2,247
|
how to tune hyperparameters using optuna or raytune
|
I want to fine-tune the MiniLM model and tune its hyperparameters, but the model.fit function doesn't return any loss, nor does it show any performance metrics while training the model. What do you suggest in this case?
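A rough sketch of one way to wire this up with Optuna, under the assumption that each trial is scored with an evaluator rather than a training loss (the model name, placeholder data, and search space below are all assumptions to adapt to your setup):
```python
import optuna
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.9)]        # placeholder data
dev_evaluator = EmbeddingSimilarityEvaluator(["sentence a"], ["sentence b"], [0.9])   # placeholder dev set

def objective(trial):
    lr = trial.suggest_float("lr", 1e-6, 1e-4, log=True)
    epochs = trial.suggest_int("epochs", 1, 4)

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
    train_loss = losses.CosineSimilarityLoss(model)

    model.fit(
        train_objectives=[(train_dataloader, train_loss)],
        epochs=epochs,
        optimizer_params={"lr": lr},
        show_progress_bar=False,
    )
    return dev_evaluator(model)   # the evaluator's score is what Optuna maximizes

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```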
|
https://github.com/huggingface/sentence-transformers/issues/2247
|
open
|
[] | 2023-06-30T13:16:04Z
| 2023-06-30T13:16:04Z
| null |
nikshrimali
|
huggingface/diffusers
| 3,914
|
how to fine-tuning the sd model in low resolutions
|
When fine-tuning the stable diffusion model, there is a parameter called 'resolution' which, if set to a value like 128 or 256 to reduce GPU memory usage, could potentially have negative effects on training performance and results.
Would setting the resolution to a value other than 512, such as 128 or 256, have any adverse impact on training effectiveness and the final results?
Is there a way to modify the pre-trained model's resolution to 128 or 256, or do I need to train a separate low-resolution version of the model?
I have experimented with different resolutions, and it seems that setting the resolution to 512 produces the best results. Training with lower resolutions tends to generate complex and messy outputs.
I couldn't find any similar issues on GitHub, as most discussions focus on super-resolution. Thank you for your response!
|
https://github.com/huggingface/diffusers/issues/3914
|
closed
|
[
"stale"
] | 2023-06-30T12:42:12Z
| 2023-08-08T15:03:16Z
| null |
XiaoyuZhuang
|
pytorch/pytorch
| 104,450
|
Numpy/scipy module works fine with Torch modules, but not TorchScript. How to torchscript a numpy/scipy module?
|
### 🐛 Numpy module works fine with Torch modules, but not TorchScript.
```python
from scipy.signal import find_peaks
batch_size = 1
input_data_shape = 1000
input_shape = (batch_size, input_data_shape)
reference_inputs = numpy.random.random(input_shape)
reference_outputs, _ = find_peaks(reference_inputs[0, :])
class FindPeaks(torch.nn.Module):
def __init__(self):
super(FindPeaks, self).__init__()
def forward(self, xs):
xs_numpy = xs.numpy()[0, :]
peaks, _ = find_peaks(xs_numpy)
return torch.tensor(peaks, dtype=int)
inputs = torch.tensor(reference_inputs, dtype=float)
torch_model = FindPeaks()
torch_outputs = torch_model(inputs)
torchscript_model = torch.jit.trace(torch_model, example_inputs=[inputs])
torchscript_model.save(f"./artifacts/{torch_model.__class__.__name__}.pt")
torchscript_outputs = torchscript_model(inputs).detach()
assert isinstance(torchscript_outputs, torch.Tensor)
assert torchscript_outputs.shape == reference_outputs.shape
assert numpy.allclose(
reference_outputs, torchscript_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5
)
for i in range(5):
reference_inputs = numpy.random.random(input_shape)
reference_outputs, _ = find_peaks(reference_inputs[0, :])
inputs = torch.tensor(reference_inputs, dtype=float)
torch_outputs = torch_model(inputs).detach()
assert isinstance(torch_outputs, torch.Tensor)
assert torch_outputs.shape == reference_outputs.shape # works fine
assert numpy.allclose(
reference_outputs, torch_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5
) # works fine
torchscript_outputs = torchscript_model(inputs).detach()
assert isinstance(torchscript_outputs, torch.Tensor)
assert torchscript_outputs.shape == reference_outputs.shape, \
    (torchscript_outputs, reference_outputs)  # fails: the trace seems to memorize the input/output captured at compile time.
assert numpy.allclose(
reference_outputs, torchscript_outputs.numpy(), rtol=1.0e-3, atol=1.0e-5
)
```
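For what it's worth, `torch.jit.trace` only records tensor operations; anything routed through NumPy/SciPy is evaluated once and baked into the graph as a constant, which is why the traced model keeps returning the peaks of the example input. A possible workaround is to express the peak search in tensor ops and script it. This sketch only approximates SciPy's default `find_peaks` (strict local maxima, no plateau handling):
```python
import torch

class TorchFindPeaks(torch.nn.Module):
    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        x = xs[0, :]
        mid = x[1:-1]
        # strict local maxima: greater than both neighbours
        is_peak = (mid > x[:-2]) & (mid > x[2:])
        return torch.nonzero(is_peak).flatten() + 1

scripted = torch.jit.script(TorchFindPeaks())
print(scripted(torch.rand(1, 1000, dtype=torch.float64)))
```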
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3 (x86_64)
GCC version: Could not collect
Clang version: 16.0.3
CMake version: version 3.21.1
Libc version: N/A
Python version: 3.8.16 (default, Dec 7 2022, 01:39:17) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.3-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
|
https://github.com/pytorch/pytorch/issues/104450
|
open
|
[
"oncall: jit"
] | 2023-06-30T00:29:43Z
| 2023-08-02T17:55:14Z
| null |
kzhai
|
huggingface/optimum
| 1,148
|
Falcon-40b-instruct on Runpod
|
### System Info
```shell
2 x A100 80GB
32 vCPU 251 GB RAM
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"What does a raindrop feel when it hits the sea?:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
### Expected behavior
Expected to Run smoothly, give an output.
Error :
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:11 for open-end generation.
|
https://github.com/huggingface/optimum/issues/1148
|
closed
|
[
"bug"
] | 2023-06-29T18:48:05Z
| 2023-06-30T15:39:29Z
| 3
|
Mrin7
|
huggingface/text-generation-inference
| 509
|
Question: How to estimate memory requirements for a certain batch size/
|
I was just wondering how the GPU memory requirements vary depending on model size/batch size of request/max tokens. In doing some experiments where I needed the server to keep running for a long time, I found that it often ran out of memory and shut down - is there a way to estimate the memory footprint based on these variables?
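Not an exact answer, but a rough back-of-envelope estimate that is often used: fp16 weights take about 2 bytes per parameter, and the KV cache grows linearly with batch size and total sequence length (input + generated tokens). A hedged sketch, with the model dimensions as placeholders you would substitute for your own model:
```python
def estimate_gpu_memory_gb(n_params, n_layers, hidden_size, batch_size, seq_len,
                           bytes_per_param=2, overhead=1.2):
    """Very rough estimate: weights + KV cache, times a fudge factor for activations."""
    weight_bytes = n_params * bytes_per_param
    # K and V per layer: batch * seq_len * hidden_size values each
    kv_cache_bytes = 2 * n_layers * batch_size * seq_len * hidden_size * bytes_per_param
    return (weight_bytes + kv_cache_bytes) * overhead / 1024**3

# e.g. a ~7B model (32 layers, hidden size 4096), 8 concurrent requests, 2048 total tokens each
print(f"{estimate_gpu_memory_gb(7e9, 32, 4096, 8, 2048):.1f} GB")
```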
|
https://github.com/huggingface/text-generation-inference/issues/509
|
closed
|
[] | 2023-06-29T15:39:51Z
| 2023-07-03T01:41:02Z
| null |
vaishakkrishna
|
huggingface/transformers.js
| 171
|
[Doc request] Add an example guide of how to use it in Svelte (and deploy to HF Spaces)
|
Similar to the cool React guide, would be awesome to showcase how to use transformers.js from Svelte (and how to deploy the resulting app to Spaces)
No need to do a SvelteKit version IMO, Svelte would be sufficient
Maybe a good first issue for the community?
|
https://github.com/huggingface/transformers.js/issues/171
|
open
|
[
"enhancement",
"help wanted",
"good first issue"
] | 2023-06-29T10:25:10Z
| 2023-08-21T20:36:59Z
| null |
julien-c
|
huggingface/optimum
| 1,145
|
How to use mean pooling with ONNX export with optimum-cli
|
### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
### Who can help?
@michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The Model card of paraphrase-MiniLM-L3-v2 at HuggingFace mentions that
**Without [sentence-transformers](https://www.sbert.net/), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.**
How to do this using the ONNX model generated using the optimum-cli?
Can we do this while generating the ONNX model?
For example, the **txtai** library does this ([https://github.com/neuml/txtai/blob/master/examples/18_Export_and_run_models_with_ONNX.ipynb])
```
onnx = HFOnnx()
embeddings = onnx("sentence-transformers/paraphrase-MiniLM-L6-v2", "pooling", "embeddings.onnx", quantize=True)
```
Or does this need to be done somehow after the ONNX model is generated (post-processing)?
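As a point of reference, one way to do it as a post-processing step (not an `optimum-cli` option; the file path below is an assumption about where the export lands) is to run the exported model with `onnxruntime` and mean-pool the token embeddings with the attention mask:
```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
session = ort.InferenceSession("onnx_model/model.onnx")  # assumed output path of optimum-cli

enc = tokenizer(["This is an example sentence."], padding=True, return_tensors="np")
onnx_inputs = {i.name: enc[i.name] for i in session.get_inputs() if i.name in enc}
token_embeddings = session.run(None, onnx_inputs)[0]         # (batch, seq_len, hidden)

mask = enc["attention_mask"][..., None].astype(np.float32)   # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(axis=1) / np.clip(mask.sum(axis=1), 1e-9, None)
print(sentence_embeddings.shape)
```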
### Expected behavior
Support for pooling in optimum_cli
|
https://github.com/huggingface/optimum/issues/1145
|
open
|
[
"bug"
] | 2023-06-29T05:57:35Z
| 2023-06-29T05:57:35Z
| null |
aunwesha
|
huggingface/chat-ui
| 328
|
Is there a way to see all of a user's history?
|
I want to see the chat history of all my users.
|
https://github.com/huggingface/chat-ui/issues/328
|
closed
|
[
"question"
] | 2023-06-29T05:01:55Z
| 2023-07-03T10:43:53Z
| null |
ildoonet
|
pytorch/tutorials
| 2,495
|
[BUG] - Only one trial completes on Ax NAS
|
### Add Link
https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html
### Describe the bug
Hi,
I was able to get the tutorial notebook working, and now I am trying to implement Ax-based NAS on my own model. However, only one of the trials complete and all the others fail. I have one objective which is to maximize the val_accuracy. The training script runs fine without any problem when I run it on terminal as well. This is the error I am getting:


--------------
Full log:
---------------------------------------------------------------------------
FailureRateExceededError Traceback (most recent call last)
Cell In[10], line 1
----> 1 scheduler.run_all_trials()
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999), in Scheduler.run_all_trials(self, timeout_hours, idle_callback)
992 if self.options.total_trials is None:
993 # NOTE: Capping on number of trials will likely be needed as fallback
994 # for most stopping criteria, so we ensure `num_trials` is specified.
995 raise ValueError( # pragma: no cover
996 "Please either specify `num_trials` in `SchedulerOptions` input "
997 "to the `Scheduler` or use `run_n_trials` instead of `run_all_trials`."
998 )
--> 999 for _ in self.run_trials_and_yield_results(
1000 max_trials=not_none(self.options.total_trials),
1001 timeout_hours=timeout_hours,
1002 idle_callback=idle_callback,
1003 ):
1004 pass
1005 return self.summarize_final_result()
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:899](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:899), in Scheduler.run_trials_and_yield_results(self, max_trials, ignore_global_stopping_strategy, timeout_hours, idle_callback)
893 return
895 yield self.wait_for_completed_trials_and_report_results(
896 idle_callback, force_refit=True
897 )
--> 899 yield self._complete_optimization(
900 num_preexisting_trials=n_existing, idle_callback=idle_callback
901 )
902 return
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:1278](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:1278), in Scheduler._complete_optimization(self, num_preexisting_trials, idle_callback)
1273 res = self.wait_for_completed_trials_and_report_results(
1274 idle_callback=idle_callback, force_refit=True
1275 )
1276 # Raise an error if the failure rate exceeds tolerance at the
1277 # end of the optimization.
-> 1278 self.error_if_failure_rate_exceeded(force_check=True)
1279 self._record_run_trials_status(
1280 num_preexisting_trials=num_preexisting_trials,
1281 status=RunTrialsStatus.SUCCESS,
1282 )
1283 return res
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779), in Scheduler.error_if_failure_rate_exceeded(self, force_check)
771 if self._num_trials_bad_due_to_err > num_bad_in_scheduler [/](https://file+.vscode-resource.vscode-cdn.net/) 2:
772 self.logger.warn(
773 "MetricFetchE INFO: Sweep aborted due to an exceeded error rate, "
774 "which was primarily caused by failure to fetch metrics. Please "
775 "check if anything could cause your metrics to be flakey or "
776 "broken."
777 )
--> 779 raise self._get_failure_rate_exceeded_error(
780 num_bad_in_scheduler=num_bad_in_scheduler,
781 num_ran_in_scheduler=num_ran_in_scheduler,
782 )
FailureRateExceededError: Failure rate exceeds the tolerated trial failure rate of 0.5 (at least 2 out of first 3 trials failed). Checks are triggered both at the end of a optimization and if at least 5 trials have failed.
-----------

I don't set any objective thresholds. When I run the script from the terminal, it works fine every time, and val_accuracy never becomes NaN. What might be the reason for such behavior in trials?
I also have another question. Does Ax support trying differen
|
https://github.com/pytorch/tutorials/issues/2495
|
closed
|
[
"bug",
"question",
"ax"
] | 2023-06-28T23:02:31Z
| 2023-10-30T17:00:14Z
| null |
ekurtgl
|
huggingface/chat-ui
| 327
|
Tokens limits issue
|
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 603 `inputs tokens and 1024 `max_new_tokens
When deployed, the ui is working fine for like 2 or 3 promts, then every prompt we try we get a red line on top with a pop-up having this message. Please how can we remove this limitation on the code?
|
https://github.com/huggingface/chat-ui/issues/327
|
open
|
[
"question",
"back"
] | 2023-06-28T18:09:19Z
| 2023-09-18T14:03:59Z
| null |
Billyroot
|
huggingface/diffusers
| 3,890
|
How to apply the schedulers in diffusers to original SD
|
Hi! Thanks for this great work! Diffusers helps me a lot in many aspects!
Because of my recent work, I would like to know whether the schedulers in diffusers can be used directly with the original SD codebase. If yes, what should I do?
Any response will be greatly appreciated! Again, thank you all for this convenient framework!
|
https://github.com/huggingface/diffusers/issues/3890
|
closed
|
[
"stale"
] | 2023-06-28T11:02:41Z
| 2023-08-05T15:04:00Z
| null |
volcverse
|
huggingface/dataset-viewer
| 1,446
|
Add fields `viewer` and `preview` to /is-valid
|
For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid.
We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code and also in the @lewtun's evaluator if I remember correctly.
|
https://github.com/huggingface/dataset-viewer/issues/1446
|
closed
|
[
"question",
"api"
] | 2023-06-28T09:19:56Z
| 2023-06-29T14:13:16Z
| null |
severo
|
huggingface/dataset-viewer
| 1,445
|
Remove `.valid` from `/valid` endpoint?
|
We recently added two fields to `/valid`:
- `viewer`: all the datasets that have a valid dataset viewer
- `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview
And the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets.
Should we remove it, as it doubles the size of the response and increases the response time, with no benefit? cc @huggingface/datasets-server
Note that it's used in the notebooks (https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code), for example, so it is a breaking change.
I would vote in favor of removing it, and updating the notebooks (and the docs obviously).
|
https://github.com/huggingface/dataset-viewer/issues/1445
|
closed
|
[
"question",
"api"
] | 2023-06-28T09:17:13Z
| 2023-07-26T15:47:35Z
| null |
severo
|
pytorch/kineto
| 775
|
Profile particular functions / lines
|
Hey, is there a way to profile particular functions or code lines with a single profiler, i.e. without having separate `with profile(...) as ...:` statements around each of them?
Something similar to the [NVIDIA nvtx markers](https://docs.nvidia.com/cuda/profiler-users-guide/).
Use case:
Want to profile only particular activity such as `optimizer.step()` or `loss.backward()` in a training loop, and not the entire loop.
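For reference, `torch.profiler.record_function` gives named ranges inside a single `profile(...)` context, which is roughly the nvtx-marker style described above. A minimal sketch with a toy model:
```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for _ in range(3):
        inputs = torch.randn(32, 128)
        targets = torch.randint(0, 10, (32,))
        loss = criterion(model(inputs), targets)
        optimizer.zero_grad()
        with record_function("loss.backward"):   # named range, similar to an nvtx marker
            loss.backward()
        with record_function("optimizer.step"):
            optimizer.step()

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```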
|
https://github.com/pytorch/kineto/issues/775
|
closed
|
[
"question"
] | 2023-06-28T02:03:02Z
| 2023-06-29T16:50:57Z
| null |
shradhasehgal
|
pytorch/kineto
| 774
|
Question about step time graph in Overview page
|
Hi, I am wondering what 'step' on the X axis represents in the step-time graph on the overview page.
I set my profiling schedule with 5 steps for 'active', yet the profiling results only include time for step 0 and not steps 0-4.
Could you clarify what 'step' here refers to if not each of the step numbers the profiler was 'active' for?
<img width="1343" alt="Screenshot 2023-06-27 at 6 19 45 PM" src="https://github.com/pytorch/kineto/assets/13078034/28f47356-9c6f-42ed-9ecc-1f6e1ed79513">
|
https://github.com/pytorch/kineto/issues/774
|
closed
|
[
"question",
"plugin"
] | 2023-06-28T01:22:30Z
| 2024-04-23T15:28:39Z
| null |
shradhasehgal
|
pytorch/tutorials
| 2,493
|
[BUG] - ax_multiobjective_nas_tutorial.ipynb fails
|
https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html
### Describe the bug
Hi,
I am trying to get the [ax_multiobjective_nas_tutorial.ipnb tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) running on my local machine. I came until experiment running part without any problem, but when I start running the experiment, all the trials fail. I didn't change anything in the original notebook. This is the output:
I tried running it on Google colab but got the same error.
Full log:
---------------------------------------------------------------------------
FailureRateExceededError Traceback (most recent call last)
Cell In[11], line 1
----> 1 scheduler.run_all_trials()
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999), in Scheduler.run_all_trials(self, timeout_hours, idle_callback)
 992 if self.options.total_trials is None:
 993 # NOTE: Capping on number of trials will likely be needed as fallback
 994 # for most stopping criteria, so we ensure `num_trials` is specified.
 995 raise ValueError( # pragma: no cover
 996 "Please either specify `num_trials` in `SchedulerOptions` input "
 997 "to the `Scheduler` or use `run_n_trials` instead of `run_all_trials`."
 998 )
--> 999 for _ in self.run_trials_and_yield_results(
 1000 max_trials=not_none(self.options.total_trials),
 1001 timeout_hours=timeout_hours,
 1002 idle_callback=idle_callback,
 1003 ):
 1004 pass
 1005 return self.summarize_final_result()
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:854](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:854), in Scheduler.run_trials_and_yield_results(self, max_trials, ignore_global_stopping_strategy, timeout_hours, idle_callback)
 849 n_remaining_to_run = max_trials
 850 while (
 851 not self.should_consider_optimization_complete()[0]
 852 and n_remaining_to_run > 0
 853 ):
--> 854 if self.should_abort_optimization():
 855 yield self._abort_optimization(num_preexisting_trials=n_existing)
 856 return
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:712](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:712), in Scheduler.should_abort_optimization(self)
 707 """Checks whether this scheduler has reached some intertuption [/](https://file+.vscode-resource.vscode-cdn.net/) abort
 708 criterion, such as an overall optimization timeout, tolerated failure rate, etc.
 709 """
 710 # if failure rate is exceeded, raise an exception.
 711 # this check should precede others to ensure it is not skipped.
--> 712 self.error_if_failure_rate_exceeded()
 714 # if optimization is timed out, return True, else return False
 715 timed_out = (
 716 self._timeout_hours is not None
 717 and self._latest_optimization_start_timestamp is not None
 (...)
 720 >= not_none(self._timeout_hours) * 60 * 60 * 1000
 721 )
File [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779), in Scheduler.error_if_failure_rate_exceeded(self, force_check)
 771 if self._num_trials_bad_due_to_err > num_bad_in_scheduler [/](https://file+.vscode-resource.vscode-cdn.net/) 2:
 772 self.logger.warn(
 773 "MetricFetchE INFO: Sweep aborted due to an exceeded error rate, "
 774 "which was primarily caused by failure to fetch metrics. Please "
 775 "check if anything could cause your metrics to be flakey or "
 776 "broken."
 777 )
--> 779 raise self._get_failure_rate_exceeded_error(
 780 num_bad_in_scheduler=num_bad_in_scheduler,
 781 num_ran_in_scheduler=num_ran_in_scheduler,
 782 )
FailureRateExceededError: Failure rate exceeds the tolerated trial failure rate of 0.5 (at least 8 out of first 8 trials failed). Checks are triggered both at the end of a optimization and if at least 5 trials have failed.
What do you think might be the problem here? Thank you.
Best,
Emre
### Describe your environment
Ubuntu
|
https://github.com/pytorch/tutorials/issues/2493
|
closed
|
[
"question",
"ax"
] | 2023-06-27T23:09:05Z
| 2023-06-28T17:46:51Z
| null |
ekurtgl
|
huggingface/diffusers
| 3,882
|
How to use models like chilloutmix to do inpainting task?
|
I tried as https://huggingface.co/docs/diffusers/api/diffusion_pipeline mentioned:
```python
text2img = StableDiffusionPipeline.from_pretrained("/data/cx/ysp/aigc-smart-painter/models/chilloutmix_NiPrunedFp32Fix")
inpaint = StableDiffusionInpaintPipeline(**text2img.components)
seger = RawSeger()
REST_API_URL = 'http://localhost:9900/sd/inpaint'
painter = GridPainter()
img_path = "/data/cx/ysp/aigc-smart-painter/assets/cloth1.jpg"
image = Image.open(img_path)
box = [220, 20, 500, 320]
new_image = draw_box(np.array(image), cords=box, color=(255, 0, 0), thickness=2)
show_image(new_image)
mask = seger.prompt_with_box(image, box=box, reverse=False)
mask = Image.fromarray(mask)
show_image(mask)
end = time.time()
prompt = "best quality,symmetry realistic,real life,photography,masterpiece,8K,HDR,highres,1 gril, looking at viewer"
images = inpaint(prompt=prompt, image=image, mask_image=mask, num_images_per_prompt=1,
                 num_inference_steps=50, guidance_scale=7.5,)
painter.image_grid(images, rows=1, cols=len(images) // 1)
painter.image_show()
print("finished")
```
I got this error:
expects 4 but received `num_channels_latents`: 4 + `num_channels_mask`: 1 +
`num_channels_masked_image`: 4 = 9. Please verify the config of `pipeline.unet`
or your `mask_image` or `image` input.
Process finished with exit code 1
How can I convert model like chilloutmix to do inpainting task?
Thank you !
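For context, the error happens because `StableDiffusionInpaintPipeline` expects a UNet trained for inpainting (9 input channels: 4 latent + 1 mask + 4 masked-image latents), while a regular text-to-image checkpoint like chilloutmix only has 4. One hedged option is to run the inpainting step with a dedicated inpainting checkpoint instead; a minimal sketch reusing `prompt`, `image` and `mask` from the snippet above (the checkpoint choice is an assumption, not the only way):
```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# an inpainting-trained UNet (9 input channels), unlike a plain text-to-image checkpoint
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

result = pipe(prompt=prompt, image=image, mask_image=mask,
              num_inference_steps=50, guidance_scale=7.5).images[0]
```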
|
https://github.com/huggingface/diffusers/issues/3882
|
closed
|
[
"stale"
] | 2023-06-27T15:25:31Z
| 2023-08-05T15:04:07Z
| null |
AdamMayor2018
|
huggingface/diffusers
| 3,881
|
How many images and how many epochs are required to fine tune LORA for stable diffusion on custom image dataset
|
I am trying to fine-tune a LoRA on a movie dataset. I am using a custom dataset which has 3-4 movie characters; instead of the actors' real names, we are using the characters' in-movie names. How big does the dataset need to be in terms of the total number of images and the number of images per character, and how many epochs are required to fine-tune this LoRA model?
PS: I have already tried fine-tuning with 200 images of a single character for 100, 250 and 500 epochs, but the results are very bad. Can anyone please provide some suggestions? @patrickvonplaten @sayakpaul
|
https://github.com/huggingface/diffusers/issues/3881
|
closed
|
[
"stale"
] | 2023-06-27T11:05:53Z
| 2023-08-04T15:03:17Z
| null |
atharmzaalo2023
|
pytorch/TensorRT
| 2,062
|
❓ [Question] "When the performance of an int8 model improves compared to an fp32 model after QAT"
|
## ❓ Question
<!-- Your question -->
I have a question because there is something I do not understand during the QAT.
code ref: https://pytorch.org/TensorRT/_notebooks/vgg-qat.html#4
Phenomenon: The model with QAT applied and the simple TRT-converted model without QAT show higher accuracy than the fp32 model.
Data: 3-class dataset with approximately 210,000 images.
Model architecture: ResNet18.
Can the int8 converted TRT model perform better than the fp32 model?

** Another question
## Environment
- PyTorch Version (e.g., 1.0): v1.3.0
- CPU Architecture: intel i9-10980
- OS (e.g., Linux): ubuntu 20.04.3
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.8
- CUDA version: 11.6
- GPU models and configuration:
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/2062
|
closed
|
[
"question",
"No Activity",
"component: quantization"
] | 2023-06-27T08:20:34Z
| 2023-10-09T00:02:22Z
| null |
JongSeok553
|
pytorch/data
| 1,192
|
Is torchdata still being actively developed?
|
No commits since June 7 (3 weeks ago). And @ejguan mentioned in https://github.com/pytorch/data/issues/1184#issuecomment-1593476769 they and @NivekT, the primary contributors, are no longer working on it.
Can anyone comment on whether torchdata will continue to be developed or supported?
|
https://github.com/meta-pytorch/data/issues/1192
|
closed
|
[] | 2023-06-26T21:51:48Z
| 2023-07-24T02:41:31Z
| 6
|
lendle
|
huggingface/peft
| 636
|
How to save full model weights and not just the adapters ?
|
### System Info
peft==0.4.0.dev0
I'm not sure if this should be a bug report, so sorry if this is not convenient.
According to the `save_pretrained` method docstring, this saves the adapter model only and not the full model weights. Is there an option to save the full model weights? The use case is that we want to upload the full model to HF to be able to activate the Inference API, but currently we only save the adapter weights.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
save_pretrained saves only adapters, maybe also add the option to save the full model
### Expected behavior
save_pretrained saves only adapters, maybe also add the option to save the full model
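For reference, a minimal sketch of one way to get full weights from a LoRA checkpoint (the model name and adapter path below are placeholders): merge the adapter into the base model with `merge_and_unload()` and then save that result as a plain transformers checkpoint.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")                    # placeholder base model
model = PeftModel.from_pretrained(base, "path/to/adapter_checkpoint")  # placeholder adapter dir
merged = model.merge_and_unload()      # folds the LoRA deltas into the base weights
merged.save_pretrained("full-model")   # full weights, loadable without peft
AutoTokenizer.from_pretrained("gpt2").save_pretrained("full-model")
```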
|
https://github.com/huggingface/peft/issues/636
|
closed
|
[] | 2023-06-26T15:30:48Z
| 2025-03-13T11:52:23Z
| null |
azayz
|
huggingface/peft
| 631
|
How to train multiple LoRAs at once?
|
Hi! I would like to train multiple LoRAs at once (for some reason). Although `requires_grad` is True for all LoRA weight matrices, only the first LoRA weight matrix will calculate the gradient, and the others will not calculate the gradient - and will not be updated. How can I train them in one forward process?
1. I initialize multiple LoRAs using the `add_adapter()` method
```python
bert_path = "prajjwal1/bert-tiny"
rank = 8
LoRA_amount = 6
model = CustomBert.from_pretrained(bert_path)
peft_config = LoraConfig(
inference_mode=False,
r=rank,
lora_alpha=32,
lora_dropout=0.1
)
model = PeftModel(model, peft_config, adapter_name="0")
for LoRA_index in range(1, LoRA_amount):
model.add_adapter(str(LoRA_index), peft_config)
```
2. This is the printed model architecture
```
testModel(
(model): PeftModel(
(base_model): LoraModel(
(model): CustomBert(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 128, padding_idx=0)
(position_embeddings): Embedding(512, 128)
(token_type_embeddings): Embedding(2, 128)
(LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(
in_features=128, out_features=128, bias=True
(lora_dropout): ModuleDict(
(0): Dropout(p=0.1, inplace=False)
(1): Dropout(p=0.1, inplace=False)
(2): Dropout(p=0.1, inplace=False)
(3): Dropout(p=0.1, inplace=False)
(4): Dropout(p=0.1, inplace=False)
(5): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(0): Linear(in_features=128, out_features=16, bias=False)
(1): Linear(in_features=128, out_features=16, bias=False)
(2): Linear(in_features=128, out_features=16, bias=False)
(3): Linear(in_features=128, out_features=16, bias=False)
(4): Linear(in_features=128, out_features=16, bias=False)
(5): Linear(in_features=128, out_features=16, bias=False)
)
(lora_B): ModuleDict(
(0): Linear(in_features=16, out_features=128, bias=False)
(1): Linear(in_features=16, out_features=128, bias=False)
(2): Linear(in_features=16, out_features=128, bias=False)
(3): Linear(in_features=16, out_features=128, bias=False)
(4): Linear(in_features=16, out_features=128, bias=False)
(5): Linear(in_features=16, out_features=128, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(key): Linear(in_features=128, out_features=128, bias=True)
(value): Linear(
in_features=128, out_features=128, bias=True
(lora_dropout): ModuleDict(
(0): Dropout(p=0.1, inplace=False)
(1): Dropout(p=0.1, inplace=False)
(2): Dropout(p=0.1, inplace=False)
(3): Dropout(p=0.1, inplace=False)
(4): Dropout(p=0.1, inplace=False)
(5): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(0): Linear(in_features=128, out_features=16, bias=False)
(1): Linear(in_features=128, out_features=16, bias=False)
(2): Linear(in_features=128, out_features=16, bias=False)
(3): Linear(in_features=128, out_features=16, bias=False)
(4): Linear(in_features=128, out_features=16, bias=False)
(5): Linear(in_features=128, out_features=16, bias=False)
)
(lora_B): ModuleDict(
(0): Linear(in_features=16, out_features=128, bias=False)
(1): Linear(in_features=16, out_features=128, bias=False)
(2): Linear(in_features=16, out_features=128, bias=False)
(3): Linear(in_features=16, out_features=128, bias=False)
(4): Linear(in_features=16, out_features=128, bias=False)
(5
|
https://github.com/huggingface/peft/issues/631
|
closed
|
[
"enhancement"
] | 2023-06-26T09:30:16Z
| 2023-08-18T13:41:32Z
| null |
meteorlin
|
huggingface/optimum
| 1,135
|
Donut document parsing export to onnx does not work.
|
### System Info
```shell
optimum==1.8.8
python==3.11.3
system linux
```
### Who can help?
The Donut export does not work with the following commands; does anybody know how to get this running, or know the status of this?
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/
...
...
...
Exception: The post-processing of the ONNX export failed. The export can still be performed by passing the option --no-post-process. Detailed error: Unable to merge decoders. Detailed error: Expected
a dynamic shape for the axis zero of onnx::Reshape_1045, found a static shape: 2
```
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/ --no-post-process
...
...
...
- last_hidden_state: max diff = 0.0012216567993164062
Validation 1 for the model donut_cord2_onnx/decoder_model.onnx raised: The exported ONNX model does not have the exact same outputs as what is provided in VisionEncoderDecoderOnnxConfig. Difference: onnx::Reshape_1263, onnx::Reshape_1359, onnx::Reshape_1364, onnx::Reshape_1045, onnx::Reshape_1146, onnx::Reshape_1258, onnx::Reshape_1151, onnx::Reshape_1050
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:encoder_hidden_states
An error occured during validation, but the model was saved nonetheless at donut_cord2_onnx. Detailed error: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:encoder_hidden_states.
```
Changing the task name to image-to-text instead of image-to-text-with-past does seem to run. However, I assume the default task is set that way deliberately, although it is unclear to me why it is set to that particular task.
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/ --no-post-process --task image-to-text
Validating ONNX model donut_cord2_onnx/encoder_model.onnx...
-[✓] ONNX model output names match reference model (last_hidden_state)
- Validating ONNX Model output "last_hidden_state":
-[✓] (2, 1200, 1024) matches (2, 1200, 1024)
-[x] values not close enough, max diff: 0.00121307373046875 (atol: 0.001)
Validating ONNX model donut_cord2_onnx/decoder_model.onnx...
Validation 0 for the model donut_cord2_onnx/encoder_model.onnx raised: The maximum absolute difference between the output of the reference model and the ONNX exported model is not within the set tolerance 0.001:
- last_hidden_state: max diff = 0.00121307373046875
The ONNX export succeeded with the warning: The exported ONNX model does not have the exact same outputs as what is provided in VisionEncoderDecoderOnnxConfig. Difference: onnx::Reshape_1359, onnx::Reshape_1258, onnx::Reshape_1146, onnx::Reshape_1151, onnx::Reshape_1050, onnx::Reshape_1045, onnx::Reshape_1364, onnx::Reshape_1263.
The exported model was saved at: donut_cord2_onnx
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/
### Expected behavior
export to run correctly and validation report.
|
https://github.com/huggingface/optimum/issues/1135
|
closed
|
[
"bug"
] | 2023-06-26T08:57:01Z
| 2023-06-26T10:17:32Z
| 3
|
casperthuis
|
huggingface/peft
| 630
|
How to switch to P-Tuning v2
|
We can find the `P-Tuning v2` in
https://github.com/huggingface/peft/blob/8af8dbd2ec9b4b8f664541e9625f898db7c7c78f/README.md?plain=1#L29
But how can I switch to `P-Tuning v2`?
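For reference, the P-Tuning v2 entry in that README is cited alongside the prefix-tuning implementation (trainable key/value prefixes in every layer), so — as far as I understand — switching to it is a matter of using `PrefixTuningConfig`. A minimal sketch with a placeholder base model:
```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
peft_config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```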
|
https://github.com/huggingface/peft/issues/630
|
closed
|
[
"solved"
] | 2023-06-26T08:52:42Z
| 2023-08-04T15:03:30Z
| null |
jiahuanluo
|
pytorch/pytorch
| 104,159
|
how to optimize torch.argwhere?
|
```python
t0 = time.time()
xx = torch.argwhere(x)  # x.shape = (15120, 150), x.device = cuda:0, the GPU is a GTX 1050
print(time.time() - t0)
```
The output is always near 0.15 s. How can I reduce this cost, or is there a more efficient method that can replace argwhere?
cc @albanD
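One thing worth checking (a guess, since the snippet above doesn't synchronize): CUDA kernels run asynchronously, but `argwhere`/`nonzero` has to copy the result count back to the host, so the measured 0.15 s may mostly be waiting for earlier queued work plus one-off warm-up cost rather than the op itself. A hedged timing sketch:
```python
import time
import torch

x = torch.rand(15120, 150, device="cuda")

for _ in range(3):          # warm-up so one-off CUDA init cost is excluded
    torch.argwhere(x)

torch.cuda.synchronize()    # make sure no earlier kernels are still pending
t0 = time.time()
xx = torch.argwhere(x)      # same underlying op as torch.nonzero(x)
torch.cuda.synchronize()
print(f"argwhere took {time.time() - t0:.4f}s for {xx.shape[0]} indices")
```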
|
https://github.com/pytorch/pytorch/issues/104159
|
closed
|
[
"module: performance",
"triaged",
"module: python frontend"
] | 2023-06-25T15:12:53Z
| 2023-06-28T18:10:17Z
| null |
Soikie
|
pytorch/torchx
| 735
|
With Volcano, why or when to use TorchX?
|
## ❓ Questions and Help
### Question
We can run Pytorch DDP or elastic with just Volcano, right? What does TorchX offer differently from Volcano?
|
https://github.com/meta-pytorch/torchx/issues/735
|
closed
|
[] | 2023-06-25T07:54:40Z
| 2023-07-12T20:41:59Z
| 2
|
zxcware
|
huggingface/optimum
| 1,134
|
ValueError: ..set the option `trust_remote_code=True` to remove this error
|
### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
### Who can help?
Hello,
I am running the optimum cli command
`optimum-cli export onnx --model mosaicml/mpt-7b-chat --task text-generation mpt-7b-chat\`
when I am getting this error:
```
File "C:\Users\dutta\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\dynamic_module_utils.py", line 553, in resolve_trust_remote_code
raise ValueError(
ValueError: Loading mosaicml/mpt-7b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
```
How to deal with this error? @michaelbenayoun
Thanks
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the same command replacing the output directory name to a name of your choice
### Expected behavior
I expect the command to run without error and product the ONNX model and other files in the output directory
|
https://github.com/huggingface/optimum/issues/1134
|
closed
|
[
"bug"
] | 2023-06-24T12:47:35Z
| 2023-07-06T16:38:30Z
| 5
|
diptenduLF
|
pytorch/tutorials
| 2,487
|
[BUG] No ways provided to replicate fps on retrained models.
|
### Add Link
https://pytorch.org/tutorials/intermediate/realtime_rpi.html
### Describe the bug
I am getting 25-30 fps on my RPi 4 with the provided snippet.
However, after finetuning mobilenet_v2 and applying:
```
# Quantize the model
quantized_model = torch.quantization.quantize_dynamic(
model, {torch.nn.Linear}, dtype=torch.qint8
)
# Convert the quantized model to TorchScript
script_model = torch.jit.script(quantized_model)
```
I am only getting 2.5fps.
The tutorial suggests:
```
You can create your own model or fine tune an existing one. If you fine tune on one of the models from [torchvision.models.quantized](https://pytorch.org/vision/stable/models.html#quantized-models) most of the work to fuse and quantize has already been done for you so you can directly deploy with good performance on a Raspberry Pi.
```
But provides no guidance on how to do it.
My attempts to do so failed:
```
torch.backends.quantized.engine = 'qnnpack'
model = models.quantization.mobilenet_v2(pretrained=True, quantize=True) # INT
num_classes = 3
model.classifier[1] = torch.nn.Linear(model.last_channel, num_classes)
```
would result in
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-48-ddcd2d77aac5>](https://localhost:8080/#) in <cell line: 24>()
39
40 # Forward pass
---> 41 outputs = model(inputs)
42 loss = criterion(outputs, labels)
43
6 frames
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py](https://localhost:8080/#) in forward(self, input)
112
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
115
116 def extra_repr(self) -> str:
RuntimeError: mat1 and mat2 must have the same dtype
```
Multiple attempts to create custom Linear layer that supports int8 dtype also failed.
### Describe your environment
not relevant
cc @datumbox @nairbv @fmassa @NicolasHug @YosuaMichael
|
https://github.com/pytorch/tutorials/issues/2487
|
open
|
[
"bug",
"module: vision"
] | 2023-06-24T12:04:23Z
| 2023-06-26T20:29:24Z
| 2
|
Huxwell
|
huggingface/chat-ui
| 322
|
Chat using WizardCoder
|
Hello,
Can you please post an example of .env.local for:
WizardLM/WizardCoder-15B-V1.0
|
https://github.com/huggingface/chat-ui/issues/322
|
open
|
[] | 2023-06-23T18:44:07Z
| 2023-08-14T20:52:39Z
| 2
|
vitalyshalumov
|
huggingface/chat-ui
| 321
|
Chat-UI not loading Tailwind colors.
|
**Problem**
When specifying `PUBLIC_APP_COLOR` in either the `.env` or the `.env.local` file, the chat-UI color does not change regardless of which color is used. Even when `PUBLIC_APP_COLOR=blue` as set in this repository, the chat-UI color does not match with TailwindCSS's blue color palette:
**TailwindCSS blue color palette:**
<img width="452" alt="blue" src="https://github.com/huggingface/chat-ui/assets/48559179/216923cf-6941-4629-b444-65a4930f3979">
**Chat-UI color palette:**
<img width="692" alt="chat" src="https://github.com/huggingface/chat-ui/assets/48559179/809aece3-3efe-4dd5-ac48-5cc0b6f32221">
**Observation**
Upon investigating the code, I noticed that the switchTheme.ts file contains the following code:
```
export function switchTheme() {
const { classList } = document.querySelector("html") as HTMLElement;
if (classList.contains("dark")) {
classList.remove("dark");
localStorage.theme = "light";
} else {
classList.add("dark");
localStorage.theme = "dark";
}
}
```
I think that instead of loading the Tailwind colors specified in either `.env` or `.env.local`, the chat-UI is actually using these `"light"` and `"dark"` themes. I couldn't find where these themes are specified in the repositories or if they can be changed at all.
**Requested Solution:**
I want to load the Tailwind colors by setting `PUBLIC_APP_COLOR` in `.env` and/or `.env.local`. However, if it turns out that the chat-UI loads colors based on the `"light"` and `"dark"` themes, adjusting those themes could also be a viable solution. Thank you in advance for your assistance.
|
https://github.com/huggingface/chat-ui/issues/321
|
closed
|
[
"question",
"front"
] | 2023-06-23T15:54:43Z
| 2023-09-18T13:12:15Z
| null |
ckanaar
|
huggingface/peft
| 622
|
LoRA results in 4-6% lower performance compared to full fine-tuning
|
I am working on fine-tuning LLMs (6B to 40B parameters) using the LoRA framework on an instruction tuning dataset comprising of instructions corresponding to ~20 tasks (a mix of factual as well as open-ended tasks). The input to the model consists of a conversation snippet between two individuals along with a task-specific prompt. The results I am observing do not align with the performance improvements reported in the [paper](https://arxiv.org/pdf/2106.09685.pdf). Specifically, the paper reports that fine-tuning using LoRA generally results in performance at par with or better than full fine-tuning of the model, however, throughout our experiments I observe a performance lower than full fine-tuning by an absolute margin of ~4-6% in terms of RougeL score.
Sharing some of the training details below:
**[Framework versions]**
Python: 3.8
PyTorch: 1.13.1
Transformers: 4.27.4
PEFT: 0.3.0
**[Infrastructure]**
8 X A100 40 GB GPUs
**[Hyper-parameter Range]**
Learning rate: 5e-5 to 3e-3
Learning rate scheduler: [Constant, Linear]
Epochs: [1, 2]
Batch size: [2, 4, 8]
Weight decay: 0.0
Precision: bf16
Specifically, I tried fine-tuning of `google/flan-t5-xxl` model in following two scenarios:
- **Scenario 1**
Full fine-tuning with constant `learning rate = 5e-5`, `batch size = 8`, `epochs = 1`
- **Scenario 2**
Fine-tuning using LoRA with constant `learning rate = 1e-3`, `batch size = 8`, `epochs = 1` and LoraConfig as follows:
`LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, bias='none', task_type="SEQ_2_SEQ_LM")`
**Observation:** Scenario 2 resulted in 4% lower RougeL as compared to scenario 1. I have also tried tuning the hyper-parameters in Scenario 2 as per the range specified above, however, the best I could get is to a gap of ~4% RougeL.
Thank you very much for your time and consideration. Looking forward to any relevant insights here.
|
https://github.com/huggingface/peft/issues/622
|
closed
|
[
"question"
] | 2023-06-23T10:50:24Z
| 2023-07-24T12:12:18Z
| null |
digvijayingle016
|
huggingface/setfit
| 389
|
gradient_accumulation
|
Is there a way in SetFitTrainer to change gradient accumulation, like you can do in the regular Trainer class via TrainingArguments? Also, just in general, I am looking for tips to make training faster.
|
https://github.com/huggingface/setfit/issues/389
|
closed
|
[
"question"
] | 2023-06-22T21:18:37Z
| 2023-11-11T05:32:34Z
| null |
zackduitz
|
huggingface/datasets
| 5,982
|
404 on Datasets Documentation Page
|
### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
hugginface.co
|
https://github.com/huggingface/datasets/issues/5982
|
closed
|
[] | 2023-06-22T20:14:57Z
| 2023-06-26T15:45:03Z
| 2
|
kmulka-bloomberg
|
huggingface/chat-ui
| 317
|
Issues when trying to deploy on cPanel (shared hosting)
|
Hello there,
Is there something special to do to be able to deploy chat-ui on a shared hosting using cPanel?
I tried using the Node.JS Apps Manager as follows

But even when switching my entry point to server/index.js, it doesn't work.
I also tried running npm install through the manager, but then it doesn't seem to be able to use vite, even when forcing `npm install vite`...
So, if you could help me out on this, it would be highly appreciated!
In advance, thanks a lot.
Regards,
Golluméo
|
https://github.com/huggingface/chat-ui/issues/317
|
closed
|
[
"support"
] | 2023-06-22T17:32:00Z
| 2023-09-18T13:12:53Z
| 1
|
gollumeo
|
huggingface/transformers.js
| 161
|
[Question] whisper vs. ort-wasm-simd-threaded.wasm
|
While looking into https://cdn.jsdelivr.net/npm/@xenova/transformers@2.2.0/dist/transformers.js I can see a reference to **ort-wasm-simd-threaded.wasm**, however that one never seems to be loaded for whisper/automatic-speech-recognition ( https://huggingface.co/spaces/Xenova/whisper-web ); it always uses **ort-wasm-simd.wasm**. I wonder if there is a way to enable or enforce threaded wasm and so improve transcription speed?
|
https://github.com/huggingface/transformers.js/issues/161
|
open
|
[
"question"
] | 2023-06-22T06:41:31Z
| 2023-08-15T16:36:01Z
| null |
jozefchutka
|
huggingface/datasets
| 5,975
|
Streaming Dataset behind Proxy - FileNotFoundError
|
### Describe the bug
When trying to stream a dataset I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a dataset without streaming works as expected.
Still, I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
### Steps to reproduce the bug
This is the code I use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
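I am not sure this is supported, but since streaming goes through fsspec's HTTP filesystem (aiohttp), one thing that may be worth trying is passing `storage_options` so the aiohttp session trusts the proxy environment variables. This is an assumption about how the options are forwarded, not a documented fix:
```python
import os
from datasets import load_dataset

os.environ["HTTP_PROXY"] = "http://example.com:xxxx"   # placeholder proxy
os.environ["HTTPS_PROXY"] = "http://example.com:xxxx"

ds = load_dataset(
    "facebook/voxpopuli",
    name="de",
    streaming=True,
    # forwarded to fsspec/aiohttp; trust_env makes aiohttp honour the *_PROXY variables
    storage_options={"client_kwargs": {"trust_env": True}},
)
```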
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
|
https://github.com/huggingface/datasets/issues/5975
|
closed
|
[] | 2023-06-21T19:10:02Z
| 2023-06-30T05:55:39Z
| 9
|
Veluchs
|
huggingface/transformers.js
| 158
|
[Question] How do I use this library with ts-node?
|
I have a non-Web/browser-based project that uses TypeScript with ts-node.
The "pipeline" function attempts to use the JavaScript Fetch API, which is not included with NodeJS, and the code therefore fails with an error: "fetch is not defined."
The "node-fetch" package doesn't seem to provide a compatible API.
|
https://github.com/huggingface/transformers.js/issues/158
|
open
|
[
"question"
] | 2023-06-21T17:42:11Z
| 2023-08-17T13:20:51Z
| null |
moonman239
|
pytorch/TensorRT
| 2,044
|
❓ [Question] How can I install the latest version of python API? Torch and Tensorrt's CUDA dependencies conflict with each other.
|
## ❓ Question
<!-- Your question -->
## What you have already tried
<!-- -->
I have already created a python=3.9 env. When I use the command 'pip install torch-tensorrt', I find that the torch version the latest torch-tensorrt needs is 2.0.1 and the tensorrt version it needs is 8.6.1, but these two packages need different CUDA versions (one is cu11 and the other is cu12). When I run a simple model(input) example in Python, the environment cannot be resolved.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0.1
- CPU Architecture: x86_64
- OS (e.g., Linux): linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): pip install torch-tensorrt
- Are you using local sources or building from archives: archives
- Python version: 3.9
- tensorrt version: 8.6.1
## Additional context
example python code:
```
import torch
# import torch_tensorrt  # if tensorrt and torch_tensorrt are not installed, this code runs successfully
conv = torch.nn.Conv2d(3, 32, 3, 1, 0, bias=False)
input = torch.randn(1, 3, 224, 224)
conv.cuda()
input = input.cuda()  # .cuda() on a tensor is not in-place, so reassign it
print(conv(input).shape)
```
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/2044
|
closed
|
[
"question",
"No Activity"
] | 2023-06-21T17:12:54Z
| 2023-10-16T00:02:24Z
| null |
1585231086
|
pytorch/pytorch
| 103,962
|
How to unwrap after auto_wrap in FSDP?
|
I am currently fine-tuning an LLM (LLaMA) and would like to retrieve the gradients of each weight (parameter) after every gradient update. However, I notice that the weights are (auto-)wrapped into names like "_fsdp_wrapped_module._flat_param" during training. I need to map these wrapped weights back to the original LLaMA parameter names such as "self_attn.v_proj". Any code examples?
I guess "summon_full_params()" might be the function I am looking for, but I am not sure if that is correct, and I also have difficulty using it. Thanks a lot for any help!
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
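For what it's worth, here is a minimal single-process sketch of `summon_full_params` (assuming PyTorch 2.0 and `use_orig_params=True`, which — if I remember correctly — is needed for `with_grads=True`); inside the context the parameters are unsharded and the original names are recoverable by stripping the FSDP prefixes:
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)   # toy single-process "world"

model = FSDP(torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Linear(16, 4)),
             use_orig_params=True)
model(torch.randn(2, 16)).sum().backward()

with FSDP.summon_full_params(model, with_grads=True):
    for name, param in model.named_parameters():
        clean = name.replace("_fsdp_wrapped_module.", "")   # back to the original module path
        grad_norm = None if param.grad is None else param.grad.norm().item()
        print(clean, tuple(param.shape), grad_norm)

dist.destroy_process_group()
```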
|
https://github.com/pytorch/pytorch/issues/103962
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 2023-06-21T11:27:10Z
| 2023-10-27T15:16:22Z
| null |
ZN1010
|
pytorch/pytorch
| 103,958
|
How to modify gradients of an FSDP model?
|
### 📚 The doc issue
I've initially posted the question on [forum](https://discuss.pytorch.org/t/modify-gradients-of-an-fsdp-model/182159) 7 days ago, but crossposting here as well for better visibility since I couldn't get any answers there.
Hi everyone,
I have an FSDP model which has zeros in some of the `torch.nn.Linear.weight` parameters. During the training I would like to keep those parameters fixed to zeros, and to zero-out their gradients during backward as well. The specific use-case is: I am loading a pruned model and I want to fine-tune it with FSDP while keeping the pruning mask fixed.
To achieve this I need to do two things:
1) multiply parameters with the mask before the forward pass (so that all pruned weights remain pruned),
2) multiply gradients of pruned parameters after the backward pass (so that gradients of pruned weights are zeros)
In the standard DDP training I would achieve this by:
1) registering forward pre-hook on `torch.nn.Linear` modules and multiplying weights with the mask before each forward pass,
2) registering a hook on the parameter `torch.nn.Linear.weight` and multiplying its gradient with the mask.
For example:
```python
import torch
from functools import partial

def keep_param_pruned(mask, module, input):
with torch.no_grad():
module.weight.data.mul_(mask.to(module.weight.device))
def keep_grad_pruned(mask, grad):
return grad.mul_(mask.to(grad.device))
for n, m in model.named_modules():
if isinstance(m, torch.nn.Linear):
mask = m.weight > threshold
m.register_forward_pre_hook(partial(keep_param_pruned, mask))
m.weight.register_hook(partial(keep_grad_pruned, mask))
```
However, I am struggling to modify this idea to work with FSDP. Any suggestions/ideas on what I am doing wrong or if there is a simpler way to achieve this without playing with hooks?
### Suggest a potential alternative/fix
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
|
https://github.com/pytorch/pytorch/issues/103958
|
closed
|
[
"oncall: distributed",
"module: fsdp"
] | 2023-06-21T09:33:32Z
| 2025-04-03T23:45:25Z
| null |
eldarkurtic
|
huggingface/chat-ui
| 314
|
500 Internal Error
|

|
https://github.com/huggingface/chat-ui/issues/314
|
closed
|
[
"question",
"support"
] | 2023-06-21T08:58:52Z
| 2023-06-22T13:13:57Z
| null |
kasinadhsarma
|
huggingface/datasets
| 5,971
|
Docs: make "repository structure" easier to find
|
The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.
|
https://github.com/huggingface/datasets/issues/5971
|
open
|
[
"documentation"
] | 2023-06-21T08:26:44Z
| 2023-07-05T06:51:38Z
| 5
|
severo
|
huggingface/chat-ui
| 313
|
MongoDB
|
I have a free tier MongoDB account, but I am not sure how to get the URL. Please help.
|
https://github.com/huggingface/chat-ui/issues/313
|
closed
|
[
"support"
] | 2023-06-21T07:47:18Z
| 2023-06-23T08:34:42Z
| 5
|
Toaster496
|
pytorch/TensorRT
| 2,028
|
❓ [Question] Torch-TensorRT 1.3.0 uses cuDNN 8.6.0 instead of 8.5.0
|
## ❓ Question
Hi, I am using torch-tensorrt 1.3.0, and it seems to be linked to cuDNN 8.6.0 instead of 8.5.0 as described in the release notes. Please find my environment setup below.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.13.1 with cu117
- OS (e.g., Linux): Linux (ubuntu 20.04)
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
- Python version:3.8
- CUDA version: 11.7
- TensorRT: 8.5.3.1
- torch-tensorrt: 1.3.0
I got the warning: tensorrt is linked to cuDNN 8.6.0 but cuDNN 8.5.0 is loaded. When I print torch.backends.cudnn.version() it says 8500, so I assume that if torch-tensorrt were linked against cuDNN as described in the release notes, there should not be such a warning?
Could you please let me know if there is something I'm doing wrong? Thank you!
|
https://github.com/pytorch/TensorRT/issues/2028
|
closed
|
[
"question",
"No Activity"
] | 2023-06-20T16:00:55Z
| 2023-09-30T00:02:07Z
| null |
akaimody123
|
huggingface/peft
| 607
|
trainer with multi-gpu
|
I want to use trainer.predict to run prediction on a dataset with multiple GPUs, but in practice only a single GPU is used.
When I print Seq2SeqTrainingArguments, I get

It shows 8 GPUs.
When I check my code and load the model, I find something strange:
base_model.device: cpu
The peftModel is as follows:

It prints cuda.
How can I fix this?
|
https://github.com/huggingface/peft/issues/607
|
closed
|
[
"question"
] | 2023-06-20T08:58:37Z
| 2023-07-28T15:03:31Z
| null |
hrdxwandg
|
pytorch/data
| 1,190
|
Dataloader2 with FullSyncIterDataPipe throws error during initilization
|
### 🐛 Describe the bug
Hi, we found some strange during using Dataloader2. Here's some details about the issue.
- We are a long run training job with 8 AWS P4 nodes. It's using HuggingFace trainer.
- In HuggingFace training, it will call evaluation every `traininig_args.eval_steps` training steps.
- I overrided the HF trainer to use Dataloader2 with training, evaluation and test dataset loading. At the same time, on the dataset part, I'm using `IterableDataPipe` with `ShardingFilterIterDataPipe`
- The issue that listed the log happens **randomly**. And most time it happens after the job runs for a long time (e.g. 20+ hours)
Can you help provide some context on what could be the root cause and how to fix this? Thanks!
Log:
```
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/transformers/trainer.py", line 1633, in train
-- | -- | --
| 2023-06-08T08:51:15.973-07:00 | return inner_training_loop(
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/transformers/trainer.py", line 1979, in _inner_training_loop
| 2023-06-08T08:51:15.973-07:00 | self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/transformers/trainer.py", line 2236, in _maybe_log_save_evaluate
| 2023-06-08T08:51:15.973-07:00 | metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/transformers/trainer.py", line 2932, in evaluate
| 2023-06-08T08:51:15.973-07:00 | output = eval_loop(
| 2023-06-08T08:51:15.973-07:00 | File "/workspace/mfive/mfive/trainer.py", line 236, in evaluation_loop
| 2023-06-08T08:51:15.973-07:00 | for step, inputs in enumerate(dataloader):
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/torchdata/dataloader2/dataloader2.py", line 46, in __next__
| 2023-06-08T08:51:15.973-07:00 | next_val = next(self.dataloader._datapipe_iter) # type: ignore[arg-type]
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/datapipes/_hook_iterator.py", line 173, in wrap_generator
| 2023-06-08T08:51:15.973-07:00 | response = gen.send(None)
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/torchdata/datapipes/iter/util/distributed.py", line 178, in __iter__
| 2023-06-08T08:51:15.973-07:00 | self._process_group = dist.new_group(backend="gloo")
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 3520, in new_group
| 2023-06-08T08:51:15.973-07:00 | pg = _new_process_group_helper(
| 2023-06-08T08:51:15.973-07:00 | File "/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1009, in _new_process_group_helper
| 2023-06-08T08:51:15.973-07:00 | backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)
| 2023-06-08T08:51:15.973-07:00 | RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:176] bind: Address already in use
| 2023-06-08T08:51:15.973-07:00 | This exception is thrown by __iter__ of FullSyncIterDataPipe(datapipe=CollatorIterDataPipe, timeout=1800)
```
### Versions
```
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==0.991
[pip3] mypy-boto3-batch==1.26.103
[pip3] mypy-boto3-ec2==1.26.136
[pip3] mypy-boto3-iam==1.26.97
[pip3] mypy-boto3-s3==1.26.127
[pip3] mypy-boto3-sagemaker==1.26.141
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchsnapshot-nightly==2023.3.15
[pip3] torchvision==0.15.2
[pip3] torchx-nightly==2023.5.25
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchsnapshot-nightly 2023.3.15 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] torchx-nightly 2023.5.25 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
|
https://github.com/meta-pytorch/data/issues/1190
|
open
|
[] | 2023-06-19T18:25:36Z
| 2023-06-22T17:30:46Z
| 3
|
chenxingyu-cs
|
huggingface/chat-ui
| 311
|
Unable to build with Docker
|
Hey,
I'm trying to create a docker container with Chat-Ui but i'm facing a wall.
I cloned this repo into a folder on a server and modified the `.env` file, thinking it would be easy to deploy a Docker container out of it, but I could not have been more wrong!
After trying to build my container with `docker build -t chat-ui .` I ran into the same problem as [here](https://github.com/huggingface/chat-ui/issues/301).
I tried to build the Docker container both before and after running `npm install`, but hit the exact same problem: this step in the Dockerfile fails:
```
RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local \
npm run build
```
At first I thought it was an issue with Docker not being able to run `npm install`, so I added `CMD npm install` at the beginning of my Dockerfile, but I ran into the same issue again; I'm guessing it has something to do with the Dockerfile itself.
To reproduce my error, here are the steps:
1. `git clone https://github.com/huggingface/chat-ui.git`
2. `cp .env .env.local `
3. modify my .env.local with my variables
4. `docker build -t chat-ui .`
Here is the error I'm getting when I launch the docker build command:
```
docker build -t chat-ui .
[+] Building 4.3s (16/17)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 122B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 954B 0.0s
=> [internal] load metadata for docker.io/library/node:19 0.6s
=> [internal] load metadata for docker.io/library/node:19-slim 0.6s
=> [builder-production 1/4] FROM docker.io/library/node:19@sha256:92f06f 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 10.45kB 0.0s
=> [stage-2 1/5] FROM docker.io/library/node:19-slim@sha256:f58f1fcf5c9f 0.0s
=> CACHED [builder-production 2/4] WORKDIR /app 0.0s
=> CACHED [builder-production 3/4] COPY --link --chown=1000 package-lock 0.0s
=> CACHED [builder-production 4/4] RUN --mount=type=cache,target=/app/.n 0.0s
=> CACHED [builder 1/3] RUN --mount=type=cache,target=/app/.npm 0.0s
=> CACHED [builder 2/3] COPY --link --chown=1000 . . 0.0s
=> CACHED [stage-2 2/5] RUN npm install -g pm2 0.0s
=> CACHED [stage-2 3/5] COPY --from=builder-production /app/node_modules 0.0s
=> CACHED [stage-2 4/5] COPY --link --chown=1000 package.json /app/packa 0.0s
=> ERROR [builder 3/3] RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env. 3.7s
------
> [builder 3/3] RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local npm run build:
#0 0.622
#0 0.622 > chat-ui@0.3.0 build
#0 0.622 > vite build
#0 0.622
#0 0.831 ▲ [WARNING] Cannot find base config file "./.svelte-kit/tsconfig.json" [tsconfig.json]
#0 0.831
#0 0.831 tsconfig.json:2:12:
#0 0.831 2 │ "extends": "./.svelte-kit/tsconfig.json",
#0 0.831 ╵ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#0 0.831
#0 1.551
#0 1.551 vite v4.3.9 building SSR bundle for production...
#0 1.583 transforming...
#0 3.551 ✓ 165 modules transformed.
#0 3.551 ✓ built in 2.00s
#0 3.551 "PUBLIC_APP_ASSETS" is not exported by "$env/static/public", imported by "src/lib/components/icons/Logo.svelte".
#0 3.551 file: /app/src/lib/components/icons/Logo.svelte:3:10
#0 3.551 1: <script lang="ts">
#0 3.551 2: import { page } from "$app/stores";
#0 3.551 3: import { PUBLIC_APP_ASSETS, PUBLIC_APP_NAME, PUBLIC_ORIGIN } from "$env/static/public";
#0 3.551 ^
#0 3.551 4: import { base } from "$app/paths";
#0 3.553 error during build:
#0 3.553 RollupError: "PUBLIC_APP_ASSETS" is not exported by "$env/static/public", imported by "src/lib/components/icons/Logo.svelte".
#0 3.553 at error (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:2125:30)
#0 3.553 at Module.error (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:13452:16)
#0 3.553 at Module.traceVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:13863:29)
#0 3.553 at ModuleScope.findVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:12418:39)
#0 3.553 at ReturnValueScope.findVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:6966:38)
#0 3.553 at ChildScope.findVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:6966:38)
#0 3.553 at Identifier.bind (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:8116:40
|
https://github.com/huggingface/chat-ui/issues/311
|
closed
|
[
"support"
] | 2023-06-19T15:11:36Z
| 2023-09-18T13:14:04Z
| 1
|
samichaignonmejai
|
pytorch/text
| 2,183
|
ImportError: cannot import name 'Field' from 'torchtext.data'
|
## ❓ Questions and Help
**Description**
I'm using PyTorch 2.0.0 with torchtext 0.15.2. When I import `Field` and `BucketIterator` (`from torchtext.data import Field, BucketIterator`), I get an error on that line: `ImportError: cannot import name 'Field' from 'torchtext.data' (D:\ML_Pytorch\venv\lib\site-packages\torchtext\data\__init__.py)`
May I ask where `Field` went? If `Field` has been removed, is there any other, similar functionality that can be imported?
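For context, `Field` and `BucketIterator` were removed from torchtext after a deprecation period under `torchtext.legacy`; the replacement pattern combines the tokenizer/vocab utilities with a plain `DataLoader`. A rough sketch on toy data (padding and real batching omitted):
```python
import torch
from torch.utils.data import DataLoader
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

texts = ["a toy corpus", "just for illustration"]  # stand-in for a real dataset

tokenizer = get_tokenizer("basic_english")
vocab = build_vocab_from_iterator((tokenizer(t) for t in texts), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])

def collate(batch):
    # numericalise each sentence; padding to equal length is left out here
    return [torch.tensor(vocab(tokenizer(t))) for t in batch]

loader = DataLoader(texts, batch_size=2, collate_fn=collate)
print(next(iter(loader)))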
|
https://github.com/pytorch/text/issues/2183
|
open
|
[] | 2023-06-19T11:28:42Z
| 2023-08-20T06:14:30Z
| 2
|
MrMoe830
|
huggingface/chat-ui
| 310
|
Dockerfile issue: can't modify .env.local before building the Docker image
|
Hey, I'm having an issue building chat-ui dockerfile.
Indeed, I have to point to my DB and my endpoints (or my HF token) in the .env.local file, but that file is only built after running `npm install`, so I can't modify my .env.local before building my Docker image.
The issue is that both the connection to MongoDB and the connection to my endpoints (or HF token) are impossible if I don't modify the .env.local file.
I think it is possible, since coyotte508 (here https://github.com/huggingface/chat-ui/issues/204) mentioned that it is not possible to share a public container since it includes personal data, but said that it was possible to do so privately.
I already launched a database with Docker with `docker run -d -p 27017:27017 --name mongo-chatui mongo:latest` and pointed to my database in my .env file prior to building the chat-ui Docker image, but it seems like it is not working (see the error below).
My questions are:
- how do I build the Docker image while pointing .env.local at my endpoints and my database?
- how can I link the database to avoid the following error?
Here is the database error that shows up after launching my Docker container:
```
docker run chat-ui
-------------
__/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
_\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
_\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
_\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
_\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
_\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
_\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
_\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
_\///______________\///______________\///__\///////////////__
Runtime Edition
PM2 is a Production Process Manager for Node.js applications
with a built-in Load Balancer.
Start and Daemonize any application:
$ pm2 start app.js
Load Balance 4 instances of api.js:
$ pm2 start api.js -i 4
Monitor in production:
$ pm2 monitor
Make pm2 auto-boot at server restart:
$ pm2 startup
To go further checkout:
http://pm2.io/
-------------
pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
2023-06-19T09:18:59: PM2 log: Launching in no daemon mode
2023-06-19T09:18:59: PM2 log: [PM2] Starting /app/build/index.js in cluster_mode (0 instance)
2023-06-19T09:18:59: PM2 log: App [index:0] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:0] online
2023-06-19T09:18:59: PM2 log: App [index:1] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:1] online
2023-06-19T09:18:59: PM2 log: App [index:2] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:2] online
2023-06-19T09:18:59: PM2 log: App [index:3] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:3] online
2023-06-19T09:18:59: PM2 log: [PM2] Done.
2023-06-19T09:18:59: PM2 log: ┌────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ index │ default │ 0.3.0 │ cluster │ 19 │ 0s │ 0 │ online │ 0% │ 61.7mb │ root │ disabled │
│ 1 │ index │ default │ 0.3.0 │ cluster │ 26 │ 0s │ 0 │ online │ 0% │ 52.9mb │ root │ disabled │
│ 2 │ index │ default │ 0.3.0 │ cluster │ 33 │ 0s │ 0 │ online │ 0% │ 51.0mb │ root │ disabled │
│ 3 │ index │ default │ 0.3.0 │ cluster │ 44 │ 0s │ 0 │ online │ 0% │ 45.3mb │ root │ disabled │
└────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
2023-06-19T09:18:59: PM2 log: [--no-daemon] Continue to stream logs
2023-06-19T09:18:59: PM2 log: [--no-daemon] Exit on target PM2 exit pid=8
09:18:59 0|index | Listening on 0.0.0.0:3000
09:18:59 1|index | Listening on 0.0.0.0:3000
09:18:59 2|index | Listening on 0.0.0.0:3000
09:18:59 3|index | Listening on 0.0.0.0:3000
09:19:29 0|index | MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
09:19:29 0|index | at Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:277:38)
09:19:29 0|index | at listOnTimeout (node:internal/timers:573:17)
09:19:29 0
|
https://github.com/huggingface/chat-ui/issues/310
|
open
|
[
"support"
] | 2023-06-19T10:48:04Z
| 2023-07-05T03:09:16Z
| 1
|
samichaignonmejai
|
huggingface/chat-ui
| 309
|
'Task not found in this model' when running another model
|
Hello there,
I tried to change the original model to guanaco-33b (I also tried the 65b) but I always end up getting the error "Task not found in this model".
Here's what I changed in the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/openassistant-guanaco",
"description": "",
"websiteUrl": "",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024
}
}
]`
```
Any ideas about this one? It works fine in the dedicated playground.
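For reference, when a model has no `endpoints` entry, chat-ui (as far as I understand) falls back to the hosted Inference API using `HF_ACCESS_TOKEN`; an error from a direct call like the one below is usually where the "Task not found" message originates. The token is a placeholder:
```python
import requests

r = requests.post(
    "https://api-inference.huggingface.co/models/timdettmers/guanaco-33b",
    headers={"Authorization": "Bearer <HF_ACCESS_TOKEN>"},  # placeholder token
    json={"inputs": "Hello"},
)
print(r.status_code, r.text[:300])
```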
In advance, thanks a lot!
Regards,
|
https://github.com/huggingface/chat-ui/issues/309
|
closed
|
[
"support",
"models"
] | 2023-06-19T09:42:41Z
| 2023-06-23T12:27:50Z
| 1
|
gollumeo
|
huggingface/chat-ui
| 308
|
'Task not found' when trying to use the guanaco-33b model
|
Hello there,
I tried to change the original model so my team can work with the guanaco-33b model, but now I always end up getting "Task not found for this model" errors.
Here's what I changed on the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/openassistant-guanaco",
"description": "",
"websiteUrl": "",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024
}
}
]`
```
Any ideas about that one?
In advance, thanks a lot!
Regards,
|
https://github.com/huggingface/chat-ui/issues/308
|
closed
|
[] | 2023-06-19T09:38:55Z
| 2023-06-19T09:39:08Z
| 0
|
gollumeo
|
huggingface/chat-ui
| 307
|
Add API endpoints documentation
|
We want to make it easy for people to build cool apps on top of chat-ui, and this requires API specs that are easily accessible.
I'm not sure what tools are available in the SvelteKit ecosystem for this. My first guess would be to generate an OpenAPI spec somehow from our server endpoints (or do it manually if that isn't possible with SvelteKit?) and pass the spec to a tool like [swagger-ui](https://github.com/swagger-api/swagger-ui) so we can display it somewhere.
This would help with issues like #299 and other requests I've received about API specs.
|
https://github.com/huggingface/chat-ui/issues/307
|
open
|
[
"documentation",
"enhancement",
"back",
"p2"
] | 2023-06-19T09:08:19Z
| 2024-05-29T13:43:10Z
| 5
|
nsarrazin
|
pytorch/tutorials
| 2,478
|
TransformerEncoder is not causal
|
### Add Link
https://pytorch.org/tutorials/beginner/transformer_tutorial.html

For language modeling, `src_mask` should mask future words.
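A minimal illustration of the causal mask being referred to, using the helper PyTorch ships with:
```python
import torch
import torch.nn as nn

seq_len = 5
src_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
print(src_mask)  # zeros on/below the diagonal, -inf above -> future positions are masked

layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=1)
out = encoder(torch.randn(2, seq_len, 16), mask=src_mask)
print(out.shape)  # torch.Size([2, 5, 16])
```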
### Describe the bug
Is there anything wrong here?
### Describe your environment
colab
cc @pytorch/team-text-core @Nayef211 @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/2478
|
closed
|
[
"bug",
"module: torchtext",
"medium",
"docathon-h2-2023"
] | 2023-06-18T15:26:46Z
| 2023-11-10T22:31:04Z
| 10
|
bigheary
|
huggingface/api-inference-community
| 295
|
What is the rate limit for the Inference API for Pro users?
|
What is the rate limit for inference API for pro users?
Also, can we use the endpoint for production, which would make 3 to 10 RPS?
|
https://github.com/huggingface/api-inference-community/issues/295
|
closed
|
[] | 2023-06-18T07:17:23Z
| 2023-06-19T09:01:02Z
| null |
bigint
|
huggingface/chat-ui
| 304
|
Code blocks
|
How do code blocks like the one in the attached image work under the hood?
Is it the model that generates ``` and the output then gets detected and converted into a code block?
Or is it the UI/backend that detects code and converts it to look like a code block?
<img width="434" alt="Screenshot 2023-06-17 at 3 26 39 PM" src="https://github.com/huggingface/chat-ui/assets/62820084/d5b79272-d3d9-46c5-9761-e38515f3c73c">
|
https://github.com/huggingface/chat-ui/issues/304
|
closed
|
[
"question"
] | 2023-06-17T13:27:20Z
| 2023-09-18T13:17:47Z
| null |
Muennighoff
|
huggingface/optimum
| 1,118
|
Corrupted TFLite weights while exporting a model from Hugging Face
|
### System Info
```shell
System: MacOS
Onnx: 1.14
tensorflow: 2.11
While converting a model from the Hugging Face Hub to TFLite using optimum-cli, the conversion ran fine, but at inference time (in Python and on an edge device) the model produced random results, as if it hadn't been trained at all.
The weights effectively appear to be corrupted.
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Minimal reproducible example:
`optimum-cli export tflite --model unitary/toxic-bert --sequence_length 128 toxic_bert/`
After the TFLite conversion is done, simply run inference in Python using the WordPiece BERT tokenizer.
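For reference (the file name below is assumed; optimum writes the converted graph into the output folder given on the command line), a minimal way to load the export and inspect its inputs/outputs before feeding it tokenized text:
```python
import tensorflow as tf

interp = tf.lite.Interpreter(model_path="toxic_bert/model.tflite")  # assumed export path
interp.allocate_tensors()
print(interp.get_input_details())   # names, shapes (sequence length 128) and dtypes of the inputs
print(interp.get_output_details())  # the classification logits output
```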
Detailed logs from the conversion process:
```
2023-06-17 02:53:29.604798: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
/Users/saurabhkumar/opt/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2023-06-17 02:53:54.973334: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
/Users/saurabhkumar/opt/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Loading PyTorch model in TensorFlow before exporting.
2023-06-17 02:54:06.422503: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForSequenceClassification: ['bert.embeddings.position_ids']
- This IS expected if you are initializing TFBertForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertForSequenceClassification were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForSequenceClassification for predictions without further training.
Using TensorFlow: 2.11.0
Overriding 1 configuration item(s)
- use_cache -> False
WARNING:absl:Found untraced functions such as embeddings_layer_call_fn, embeddings_layer_call_and_return_conditional_losses, encoder_layer_call_fn, encoder_layer_call_and_return_conditional_losses, pooler_layer_call_fn while saving (showing 5 of 420). These functions will not be directly callable after loading.
2023-06-17 02:55:02.650365: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2023-06-17 02:55:02.650918: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
2023-06-17 02:55:02.652373: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /var/folders/q4/dklkx0m970scm0m4w3m_nzvc0000gn/T/tmpod9lpuk_
2023-06-17 02:55:02.718684: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-06-17 02:55:02.718712: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /var/folders/q4/dklkx0m970scm0m4w3m_nzvc0000gn/T/tmpod9lpuk_
2023-06-17 02:55:02.945563: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:357] MLIR V1 optimization pass is not enabled
2023-06-17 02:55:02.997217: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.
2023-06-17 02:55:03.837625: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: /var/folders/q4/dklkx0m970scm0m4w3m_nzvc0000gn/T/tmpod9l
|
https://github.com/huggingface/optimum/issues/1118
|
open
|
[
"bug"
] | 2023-06-16T18:56:06Z
| 2023-06-19T05:18:10Z
| 1
|
saurabhkumar8112
|
huggingface/pytorch-pretrained-BigGAN
| 20
|
Is the model trained on truncated noise? What were the input noise vector characteristics during training?
|
Hi,
I have noticed that in `utils.py` (line 32) you truncate the normal noise to the range [-2, 2] with this line of code:
`values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32)`
Could you please let me know whether the pre-trained model was also trained using this truncated noise? If not, could you please describe the characteristics of the input noise vectors used during training? Thanks!
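For context, the difference between that truncated sampling and plain Gaussian sampling (the batch size and `dim_z` below are arbitrary):
```python
import numpy as np
from scipy.stats import truncnorm

batch_size, dim_z = 4, 128
truncated = truncnorm.rvs(-2, 2, size=(batch_size, dim_z)).astype(np.float32)  # bounded to [-2, 2]
plain = np.random.randn(batch_size, dim_z).astype(np.float32)                  # unbounded N(0, 1)
print(truncated.min(), truncated.max())  # always within [-2, 2]
print(plain.min(), plain.max())          # can fall outside that range
```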
|
https://github.com/huggingface/pytorch-pretrained-BigGAN/issues/20
|
open
|
[] | 2023-06-16T08:02:52Z
| 2023-06-16T08:02:52Z
| null |
MHVali
|
pytorch/text
| 2,182
|
Explicit dependency on portalocker?
|
Shouldn't torchtext add an explicit dependency on portalocker now? Without it, I get:
```
= 979 failed, 204 passed, 12 skipped, 1 deselected, 6 warnings in 495.47s (0:08:15) =
```
that's >80% failed tests, and probably does not represent a functional torchtext?
_Originally posted by @h-vetinari in https://github.com/pytorch/text/issues/2056#issuecomment-1593761158_
|
https://github.com/pytorch/text/issues/2182
|
open
|
[] | 2023-06-15T21:45:32Z
| 2023-06-15T21:45:32Z
| 0
|
h-vetinari
|
huggingface/chat-ui
| 301
|
Error when deploying on a remote server: Cannot find base config file "./.svelte-kit/tsconfig.json"
|
Hey,
I'm having trouble deploying HuggingChat on a remote server; when I run it, I get the following error:
```
ai@1.0.0 start-chat-ui
> cd ../chat-ui && npm run dev -- --host 127.0.0.1
> chat-ui@0.3.0 dev
> vite dev --host 127.0.0.1
▲ [WARNING] Cannot find base config file "./.svelte-kit/tsconfig.json" [tsconfig.json]
tsconfig.json:2:12:
2 │ "extends": "./.svelte-kit/tsconfig.json",
╵ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
failed to load config from /home/paperspace/***/chat-ui/vite.config.ts
error when starting dev server:
Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'unplugin-icons' imported from /home/paperspace/***/chat-ui/vite.config.ts.timestamp-1686857376175-9d68e4b73b2d7.mjs
at new NodeError (node:internal/errors:405:5)
at packageResolve (node:internal/modules/esm/resolve:781:9)
at moduleResolve (node:internal/modules/esm/resolve:830:20)
at defaultResolve (node:internal/modules/esm/resolve:1035:11)
at DefaultModuleLoader.resolve (node:internal/modules/esm/loader:269:12)
at DefaultModuleLoader.getModuleJob (node:internal/modules/esm/loader:153:32)
at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:76:33)
at link (node:internal/modules/esm/module_job:75:36)
```
I tried to reinstall Svelte, but I can't understand where this warning comes from, since I have the latest version and the tsconfig.json file exists in Svelte's installation folder...
I tried to modify the package.json as suggested here https://github.com/sveltejs/kit/issues/7028 but it still doesn't work properly...
Does anyone have an idea why I'm still having this issue?
|
https://github.com/huggingface/chat-ui/issues/301
|
closed
|
[
"support"
] | 2023-06-15T19:55:36Z
| 2023-06-19T10:50:26Z
| 2
|
samichaignonmejai
|
huggingface/transformers.js
| 150
|
[Question] How to use transformers.js like the python sentence_transformers library?
|
Hello all,
Thanks for this great library. I've just discovered it, and I'm familiar with the Python sentence_transformers module. I know from experience that sentence_transformers wraps a lot of the complexity compared to using transformers directly.
Can you point to an example of using this to replace python's sentence_transformers for semantic search document and question embedding? Does this solution handle the tokenization and attention windows automatically like sentence_transformers, or do I need to break my inputs into chunks, process them separately, and then mean pool them back together or something?
Thanks,
Dave
|
https://github.com/huggingface/transformers.js/issues/150
|
closed
|
[
"question"
] | 2023-06-15T15:30:49Z
| 2023-06-18T15:17:04Z
| null |
davidtbo
|
pytorch/kineto
| 770
|
On demand profiling example / code changes
|
Hi, is there an example of how we can enable on-demand profiling with Kineto?
The [libkineto README](https://github.com/pytorch/kineto/tree/main/libkineto) mentions that we can send a 'signal' to 'trigger' on-demand profiling, but I am unclear on how to do so from outside the PyTorch script.
Would highly appreciate if somebody could provide an example or point me to the relevant APIs / source files. Thank you!!
|
https://github.com/pytorch/kineto/issues/770
|
closed
|
[
"question"
] | 2023-06-15T04:12:22Z
| 2024-04-23T15:27:23Z
| null |
shradhasehgal
|
huggingface/chat-ui
| 299
|
Using HuggingChat in a JavaScript/node.js setting?
|
Hi, I'm not sure whether this is relevant here, but I'd like to use HuggingChat in a personal web design project and access it through REST/axios, similar to this one [here](https://stackoverflow.com/questions/75714587/node-js-turn-hugging-face-image-response-to-buffer-and-send-as-a-discord-attac) (a Stable Diffusion Hugging Face example).
So far the only thing I could find was the [hugging-chat-api Python package](https://github.com/Soulter/hugging-chat-api), and I'm not really sure how to use it for what I'm looking for. Can anyone help?
|
https://github.com/huggingface/chat-ui/issues/299
|
closed
|
[] | 2023-06-15T02:59:29Z
| 2023-09-18T13:19:32Z
| 3
|
VatsaDev
|
pytorch/xla
| 5,188
|
Slow Device To Host Transfers
|
## ❓ Questions and Help
Recently I tried ResNet-50 on TPUs using this repo and TensorFlow / Keras. The performance difference between the two was about 15% (2844.4 img/s per TPU vs 3283.52 img/s) in favor of TensorFlow / Keras. These results were with logging every _300_ iterations. When I removed the logging, the TensorFlow / Keras performance stayed the same while this repo caught up within a few percent (3193.6 img/s). I think this is somewhat expected, as in a previous issue and in the troubleshooting guide, these transfers are generally seen as bad for performance. However, TensorFlow / Keras didn't have a change in their performance, so I did some digging, and it seems they use a separate thread and [device-specific outfeed queue](https://github.com/tensorflow/estimator/blob/7d846da87ed70f9a6c21a33a1c7178697844d9c0/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#LL450C19-L450C19) that lets them asynchronously transfer data (like the loss) to the host and display a progress bar and other metrics without any hit to TPU performance. Is there a reason they're able to do that and PyTorch XLA cannot?
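For reference, the closest mechanism on the PyTorch/XLA side is the step closure mentioned in the details below; a minimal sketch of how it is typically wired into a training step (`model`, `batch` and `loss_fn` are placeholders):
```python
import torch_xla.core.xla_model as xm

def _report(step, loss):
    # runs after the step's lazy graph has executed, so the .item() transfer
    # does not force an extra device-to-host sync inside the hot loop
    print(f"step {step}: loss {loss.item():.4f}")

def train_step(step, model, batch, optimizer, loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(model(batch["x"]), batch["y"])
    loss.backward()
    xm.optimizer_step(optimizer)
    xm.add_step_closure(_report, args=(step, loss), run_async=True)
```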
More details:
- [PyTorch XLA script](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet.py) (taken from this repo's tests)
- [TensorFlow / Keras script](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) (taken from tensorflow/tpu)
- Performance statistics were taken in the exact same way across both scripts by measuring time right after each step (includes data loading time)
- For torch_xla, `add_step_closure` was tried with `run_async=False` and `run_async=True`.
- Both were run on the exact same v4-8 TPU VM with the latest version of torch_xla and tensorflow.
|
https://github.com/pytorch/xla/issues/5188
|
closed
|
[
"question",
"runtime"
] | 2023-06-14T23:01:59Z
| 2025-04-30T12:53:54Z
| null |
MikeynJerry
|