repo | number | title | body | url | state | labels | created_at | updated_at | comments | user
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js
| 841
|
Support opus-mt-mul-en translation in WebGPU
|
### Question
I've been having some trouble where translation sometimes doesn't work. For example, I just tried translating Polish into English using `opus-mt-mul-en`, but it outputs empty strings.
So I started looking for what could be wrong, and in the Transformers.js source code I found this `marian.py` file:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/scripts/extra/marian.py#L18
It lists the supported Opus MT models, and while the model is available on Huggingface (https://huggingface.co/Xenova/opus-mt-mul-en), I'm guessing it isn't actually supported (yet)?
Do I understand correctly?
Related: is there a setting with the `mul` models that I need to set to select which language is translated into?
For completeness, here's some of my code:
Constructing the model:
```js
const hf_model_url = 'Xenova/opus-mt-mul-en';
pipeline('translation', hf_model_url, {
  progress_callback: progressCallback,
  dtype: dtype_settings,
  device: self.device
})
.then((pipe) => {
  // etc
```
And getting a translation out:
```
pipe(sentence)
.then((translation) => {
  // etc
```
... which already raises the question: since `opus-mt-en-mul` _is_ supported according to that file, how would that multilingual model know which language to output?
I'll continue searching to see if I can answer my own question :-)
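For reference, the Helsinki-NLP/Marian convention for models with a multilingual *target* side (like `opus-mt-en-mul`) is to select the output language by prepending a `>>lang<<` token to the source text, while multilingual *source* models (like `opus-mt-mul-en`) need no such token. Below is a hedged sketch using the Python `transformers` library, since the preprocessing convention is the same one the JS port would have to follow; the exact token set comes from the model card.

```python
# Sketch only: illustrates the ">>lang<<" target-language token for *-mul Marian models.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-mul"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading ">>pol<<" token asks the multilingual-target model for Polish output.
batch = tokenizer([">>pol<< The weather is nice today."], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```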
|
https://github.com/huggingface/transformers.js/issues/841
|
closed
|
[
"question"
] | 2024-07-09T11:52:12Z
| 2024-10-07T15:34:54Z
| null |
flatsiedatsie
|
huggingface/parler-tts
| 83
|
How big a dataset is needed to train the model?
|
I used 560+ hours of libritts_R data to train the model (187M) from scratch, but the audio synthesized by the model is not correct.
Is this because the size of the dataset is not large enough?
|
https://github.com/huggingface/parler-tts/issues/83
|
open
|
[] | 2024-07-09T03:56:42Z
| 2024-09-21T10:46:39Z
| null |
zyy-fc
|
huggingface/datatrove
| 242
|
how to postpone filter init till it's running
|
So it appears that currently I can't instantiate a model on a gpu because the filter object is created by the launcher, which either doesn't have a gpu, or it is most likely the wrong gpu even if it has one, since we would need a dedicated gpu(s) for each task.
Is it possible to add a 2nd init which would be the user init that will run on the actual job?
The filter task is simple - instantiate a model on a GPU and then run `filter` using it - and of course we don't want the model to be re-instantiated on every `filter` call.
Needing to `import torch` inside the `filter` is super-weird as well, but I get that it's due to pickle - so perhaps we can have two inits: one for the framework and another for the user.
So when a job is launched the first thing the framework runs is user defined `init` if any, and then proceeds normally.
I guess I will try to overcome this meanwhile using `@functools.cache` or something similar.
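A minimal sketch of that workaround, assuming a hypothetical filter class and helpers (`MyGpuFilter`, `load_model`, `score` are not real datatrove APIs), so the model is only built inside the process that actually runs the job:

```python
# Hedged sketch: lazily build the model on first use so the launcher never touches the GPU.
import functools


@functools.cache
def get_model(device: str = "cuda"):
    import torch  # imported lazily so the launcher process never needs torch/GPU
    model = load_model()              # hypothetical model constructor
    return model.to(device).eval()


class MyGpuFilter:
    def filter(self, doc) -> bool:
        model = get_model()             # instantiated once per process, cached afterwards
        return score(model, doc) > 0.5  # hypothetical scoring helper
```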
Thank you!
tag: @guipenedo
|
https://github.com/huggingface/datatrove/issues/242
|
open
|
[] | 2024-07-09T01:11:13Z
| 2024-07-10T01:36:02Z
| null |
stas00
|
huggingface/hub-docs
| 1,328
|
Document how to filter and save searches on the hub (e.g. by model format, only LoRAs, by date range etc...)
|
**Doc request**
I'd really like to see documentation that clarifies how users can filter searches when browsing models on the Hub.
Things I can't seem to find that I would expect / would make our lives better:
- A selection list or drop down to filter by popular model formats (GGUF, EXL2 etc...)
- A filter or 'explore by category' for original models, fine-tunes, quantisations, adapters etc...
- Filter by date created within (e.g. the last 2 months)
- How to save the filter/search so you can bookmark, share and come back to it later
**Additional context**
- Discussion about this on r/LocalLLaMA recently - https://www.reddit.com/r/LocalLLaMA/comments/1dyjh6m/comment/lc9dhjp/
If there actually isn't a way to do this on the hub at present, I would really love it if something like my rough mock here could be considered:

|
https://github.com/huggingface/hub-docs/issues/1328
|
open
|
[] | 2024-07-08T22:51:55Z
| 2024-07-10T19:17:42Z
| null |
sammcj
|
huggingface/candle
| 2,323
|
How to freeze VarMap Vars?
|
Hello everybody,
Is there a way to freeze all Var tensors in the VarMap, like in the snippet below?
Meaning something like implementing the `Iterator` trait, detaching the contained tensors from the graph, and adding a Var which can be trained!
```python
# Freeze all the pre-trained layers
for param in model.parameters():
    param.requires_grad = False
```
_Originally posted by @mohamed-180 in https://github.com/huggingface/candle/issues/891#issuecomment-2214407719_
|
https://github.com/huggingface/candle/issues/2323
|
open
|
[] | 2024-07-08T15:14:54Z
| 2024-07-08T15:14:54Z
| null |
mohamed-180
|
huggingface/trl
| 1,815
|
How to use DoRA with ORPO
|
Hi! I'm running experiments where I'm comparing SFT to ORPO.
For SFT I currently initialize a `trl.SFTTrainer`, and pass `args=transformers.TrainingArguments(..., use_dora=True, ...)`.
For ORPO I'm supposed to pass `args=trl.ORPOConfig`, but according to the documentation this doesn't seem to support passing `use_dora` as an argument.
What's the best way to combine DoRA with ORPO? In theory this should of course be possible to combine. Can I just pass `transformers.TrainingArguments` to `trl.ORPOTrainer` or would this (silently) break things?
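For what it's worth, in TRL the usual place to switch on DoRA is the PEFT `LoraConfig` (`use_dora=True`) passed to the trainer via `peft_config`, not the training arguments, so the same pattern should carry over to ORPO. A hedged sketch (the tiny dummy dataset and `use_dora` support in your installed `peft`/`trl` versions are assumptions):

```python
# Sketch: DoRA via peft's LoraConfig, handed to ORPOTrainer as peft_config.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder; any causal LM
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

train_dataset = Dataset.from_dict({   # tiny dummy preference dataset
    "prompt": ["Say hi."],
    "chosen": ["Hi there!"],
    "rejected": ["Go away."],
})

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    use_dora=True,          # DoRA lives here, not in ORPOConfig/TrainingArguments
    task_type="CAUSAL_LM",
)

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="orpo-dora", per_device_train_batch_size=1),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
```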
|
https://github.com/huggingface/trl/issues/1815
|
closed
|
[] | 2024-07-08T11:12:48Z
| 2024-07-08T15:39:42Z
| null |
julianstastny
|
pytorch/pytorch
| 130,238
|
how to simplify torch.fx like using onnxsim?
|
### 🚀 The feature, motivation and pitch
There is a lack of corresponding tools to simplify the exported FX model and to count FLOPs, memory usage, etc.
### Alternatives
_No response_
### Additional context
_No response_
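Not a graph simplifier, but for the FLOP-counting half of the request, a hedged sketch with the built-in counter (assumes a recent PyTorch where `torch.utils.flop_counter` is available):

```python
# Sketch: count FLOPs of a module with FlopCounterMode.
import torch
from torch.utils.flop_counter import FlopCounterMode

model = torch.nn.Linear(1024, 1024)
x = torch.randn(8, 1024)

with FlopCounterMode(display=True) as counter:
    model(x)
print("total flops:", counter.get_total_flops())
```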
|
https://github.com/pytorch/pytorch/issues/130238
|
open
|
[
"triaged"
] | 2024-07-08T08:30:28Z
| 2024-08-16T13:40:42Z
| null |
MaltoseFlower
|
huggingface/text-generation-inference
| 2,200
|
How to clean the TGI guidance cache?
|
I use TGI guidance to force the LLM to choose a tool.
However, when I change the description of the tool, I find that TGI does not re-compile the new grammar.
Therefore, I want to know how to clear the compiled grammar cache.
|
https://github.com/huggingface/text-generation-inference/issues/2200
|
closed
|
[] | 2024-07-08T05:37:55Z
| 2024-07-18T15:01:07Z
| null |
EdisonE3
|
pytorch/data
| 1,283
|
best practice for `snapshot_every_n_steps`
|
Hello,
Thank you for your awesome implementation of StatefulDataloader.
I have a question about `snapshot_every_n_steps`. It seems there is not much detailed explanation about this argument.
* Will frequent snapshots (i.e., `snapshot_every_n_steps=1`) cause a data loading burden?
* What is the best practice for setting this value? Is it related to checkpointing frequency?
cc @andrewkho
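For illustration, a hedged sketch of how this knob is typically wired up - snapshotting at the checkpoint interval so the state captured by `state_dict()` lines up with the step being checkpointed; the numbers are placeholders and the exact trade-off (per-step overhead vs. replay on resume) is precisely the question above:

```python
# Sketch: align snapshot_every_n_steps with the checkpoint interval.
import torch
from torchdata.stateful_dataloader import StatefulDataLoader

dataset = list(range(100_000))   # stand-in for your real dataset
checkpoint_every = 1000          # training steps between checkpoints

loader = StatefulDataLoader(
    dataset,
    batch_size=32,
    num_workers=4,
    snapshot_every_n_steps=checkpoint_every,
)

for step, batch in enumerate(loader, start=1):
    _ = batch.sum()              # placeholder for the training step
    if step % checkpoint_every == 0:
        torch.save({"loader": loader.state_dict()}, f"ckpt_{step}.pt")
```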
|
https://github.com/meta-pytorch/data/issues/1283
|
open
|
[
"documentation"
] | 2024-07-07T03:56:03Z
| 2024-11-17T19:41:33Z
| 5
|
ShoufaChen
|
huggingface/transformers.js
| 837
|
Model downloads or running on server?
|
### Question
Hey there,
I am using simple hosting with cPanel access as the admin. If I upload the ONNX model files to the file manager, along with the JS script that runs the model, will it still need to download the model, or not, since the files are already there? This assumes, of course, that I disable automatic Hugging Face loading and point the library to the model directory in the file manager through .env.
Your help will be highly appreciated.
Cheers.
|
https://github.com/huggingface/transformers.js/issues/837
|
closed
|
[
"question"
] | 2024-07-06T23:07:15Z
| 2025-01-20T19:50:12Z
| null |
moses-mbaga
|
pytorch/vision
| 8,515
|
"How to write your own v2 transforms" example does not work
|
### 🐛 Describe the bug
I copy pasted the custom transform from your [tutorial page](https://pytorch.org/vision/stable/auto_examples/transforms/plot_custom_transforms.html#:~:text=How%20to%20write%20your%20own%20v2%20transforms%20Note,from%20torchvision%20import%20tv_tensors%20from%20torchvision.transforms%20import%20v2) and inserted it into the transform pipeline in your reference/detection/presets.py script. When trying to run, I get the following error.
File "site-packages/torchvision/transforms/v2/_container.py", line 51, in forward
outputs = transform(*inputs)
^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
^^^^^^^^^^^^^^^^^^^^
File "site-packages/torch/nn/modules/module.py", line 1709, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'MyCustomTransform' object has no attribute '_backward_hooks'
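A hedged guess at the cause rather than a confirmed diagnosis: this particular `AttributeError` is the classic symptom of an `nn.Module` subclass whose `__init__` never calls `super().__init__()`, so the hook dictionaries are missing. A minimal custom v2 transform that avoids it:

```python
# Sketch: custom v2 transform with the required super().__init__() call.
import torch
from torchvision.transforms import v2


class MyCustomTransform(torch.nn.Module):
    def __init__(self, factor: float = 1.0):
        super().__init__()  # without this, _backward_hooks etc. never get created
        self.factor = factor

    def forward(self, *inputs):
        # identity placeholder; real logic would transform img/boxes/labels here
        return inputs if len(inputs) > 1 else inputs[0]


transforms = v2.Compose([MyCustomTransform(), v2.ToDtype(torch.float32, scale=True)])
print(transforms(torch.rand(3, 8, 8)))
```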
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7702P 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1540.122
CPU max MHz: 2183,5930
CPU min MHz: 1500,0000
BogoMIPS: 3992.22
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulner
|
https://github.com/pytorch/vision/issues/8515
|
open
|
[] | 2024-07-06T23:04:22Z
| 2024-07-10T21:59:25Z
| null |
TonyCongqianWang
|
pytorch/xla
| 7,635
|
Inconsistency between xla/examples/train_resnet_base.py and docs
|
## 📚 Documentation
This isn't necessarily an issue with the documentation, but an inconsistency between the documentation and the simplest [Pytorch XLA example](https://github.com/pytorch/xla/blob/master/examples/train_resnet_base.py). The [docs](https://pytorch.org/xla/release/2.3/index.html) say that the one key change to a standard training loop (for single-device use) is adding `xm.mark_step()`, but `train_resnet_base.py` doesn't have it (and just has `xm.wait_device_ops()` after all epochs are complete).
My understanding is that `xm.mark_step()` isn't necessary if we're not directly accessing any state on the TPU, which is why `train_resnet_base.py` doesn't use it and works around it via `xm.add_step_closure`. I assume the latter is actually preferred, but either way it would be helpful for folks getting started if there wasn't a confusing inconsistency like this for the simplest setting.
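For reference, a hedged sketch of the pattern the docs describe (the "one key change" being the `mark_step()` call); the loader, model, optimizer, and loss function are placeholders:

```python
# Sketch: single-device training loop with an explicit mark_step per iteration.
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = model.to(device)

for inputs, labels in train_loader:
    inputs, labels = inputs.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
    xm.mark_step()  # cuts the lazy graph and triggers execution for this step
```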
@JackCaoG I think this is your wheelhouse? Thanks for any clarification.
|
https://github.com/pytorch/xla/issues/7635
|
closed
|
[
"question"
] | 2024-07-06T19:52:50Z
| 2025-04-03T14:51:15Z
| null |
davidaknowles
|
pytorch/pytorch
| 130,137
|
How to get stream operators in custom backend compiler ?
|
### 🐛 Describe the bug
Hi, when I use a custom backend, I find that the fx graph the custom compiler gets does not have the stream-related operations.
Then I found that the fx graph dropped those stream operations after `aot_module_simplified`.
So, I want to know how we can get an fx graph that contains stream-related operations when using a custom compiler and `aot_module_simplified`.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @ezyang @anijain2305 @zou3519 @ptrblck @msaroufim @yf225
Here is my test script.
When I use the `aot_toy_backend` backend, there are no stream-related ops in the fx graph.
What can we do to fix this? Could you give me some guidance or advice on this issue?
```python
import torch
import torch.nn as nn


class Layer(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        stream2 = torch.cuda.Stream()
        with torch.cuda.stream(stream2):
            z = x + 1
        y = x - 1
        return y + z


mm = Layer()
x = torch.randn([4]).cuda()

from torch._functorch.aot_autograd import aot_module_simplified


def toy_backend(gm, sample_inputs):
    return gm


def aot_toy_backend(gm, sample_inputs):
    return aot_module_simplified(gm, sample_inputs, fw_compiler=toy_backend)


mmc = torch.compile(mm, backend=aot_toy_backend)
yc = mmc(x)
```
### Versions
pytorch 2.3.0
|
https://github.com/pytorch/pytorch/issues/130137
|
open
|
[
"oncall: distributed",
"triaged",
"oncall: pt2"
] | 2024-07-05T03:41:24Z
| 2024-11-27T05:20:33Z
| null |
wbigat
|
huggingface/lerobot
| 305
|
how to eval the policy trained by lerobot in real env?
|
### System Info
```Shell
how to eval the policy trained by lerobot in real env?
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In the code, I have not found any way to transfer a policy rollout to a real environment. Please help me figure it out.
### Expected behavior
How to run inference with a policy trained by LeRobot in a real environment?
|
https://github.com/huggingface/lerobot/issues/305
|
closed
|
[] | 2024-07-05T03:23:01Z
| 2024-07-23T09:08:27Z
| null |
cong1024
|
pytorch/xla
| 7,634
|
Failed to install xla gpu
|
## ❓ Questions and Help
pip install torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl
But I got the error:
ERROR: torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl is not a supported wheel on this platform.
How can I install torch_xla on GPU?
|
https://github.com/pytorch/xla/issues/7634
|
closed
|
[
"xla:gpu"
] | 2024-07-05T02:37:12Z
| 2024-08-05T21:40:28Z
| 1
|
Beakboomboom
|
huggingface/transformers.js
| 836
|
How do I free up memory after transliteration
|
### Question
After I executed the translation in the worker, it seems that the memory could not be reclaimed when I called `pipeline.dispose()`; the memory is only reclaimed when the worker is closed. Can you help me with this question?
|
https://github.com/huggingface/transformers.js/issues/836
|
closed
|
[
"question"
] | 2024-07-04T15:16:33Z
| 2024-07-05T07:19:31Z
| null |
raodaqi
|
huggingface/transformers
| 31,790
|
How to implement bind_tools to custom LLM from huggingface pipeline(Llama-3) for a custom agent
|
Example Code
```python
name = "meta-llama/Meta-Llama-3-8B-Instruct"
auth_token = ""

tokenizer = AutoTokenizer.from_pretrained(name, use_auth_token=auth_token)

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
)
model_config = AutoConfig.from_pretrained(
    name,
    use_auth_token=auth_token,
    temperature=0.1,
)
model = AutoModelForCausalLM.from_pretrained(
    name,
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map='auto',
    use_auth_token=auth_token,
)

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4096, device_map="auto", streamer=streamer)
llm = HuggingFacePipeline(pipeline=pipe)


@tool
def some_custom_tool(input_string: str) -> str:
    """Executes some work and returns a success message if successful, else it returns the error message"""
    return "SUCCESS"


tools = [some_custom_tool]

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            f"""
            You are an Assistant......
            """,
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

llm_with_tools = llm.bind_tools(tools)

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm
    | JsonOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, return_intermediate_steps=True)
```
Description
I am trying to bind a custom tool to the LLM just like with ChatOpenAI, but I am getting the following error. It looks like `bind_tools` does not exist on `HuggingFacePipeline`. Is there a way to bind a custom tool to an LLM from `HuggingFacePipeline`?
AttributeError: 'HuggingFacePipeline' object has no attribute 'bind_tools'
System Info:
```
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.11
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
Python 3.10.13
```
I am doing this on Kaggle GPU t4x2
|
https://github.com/huggingface/transformers/issues/31790
|
closed
|
[] | 2024-07-04T08:59:38Z
| 2024-08-13T08:04:24Z
| null |
talhaty
|
pytorch/xla
| 7,633
|
Multiprocess inference warning: ignoring nprocs
|
## ❓ Questions and Help
When I ran multiprocess inference with the Hugging Face Transformers framework, I used `xmp.spawn(perform_inference, args=(args,), nprocs=4)` because I wanted to run 4 processes at once. However, it reported the warning `WARNING:root:Unsupported nprocs (4), ignoring...`. I wonder whether this is a bug or whether there is a mistake in my inference script.
My inference script is as follows:
```python
device = xm.xla_device()
print(f"tpu name: {device}")
sentences = ["Sample-1", "Sample-2"] * args.batch_size
print(f"sentences length: {len(sentences)}")
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
model = AutoModel.from_pretrained(args.model_name_or_path).to(device)
model.eval()
for i in range(20):
    if i == 19:
        print(f"log port: {port}")
        xp.trace_detached(f'localhost:{port}', './profiles/', duration_ms=2000)
    with xp.StepTrace('bge_test'):
        with xp.Trace('build_graph'):
            encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt').to(device)
            with torch.no_grad():
                start = time.perf_counter()
                model_output = model(**encoded_input)
                end = time.perf_counter()
            sentence_embeddings = model_output[0][:, 0]
            print("inference time:", (end - start))
            sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
            print("Sentence embeddings: ", sentence_embeddings)

if __name__ == "__main__":
    torch.set_default_dtype(torch.float32)
    args = get_args()
    xmp.spawn(perform_inference, args=(args,), nprocs=4)
```
# detail log
WARNING:root:Unsupported nprocs (4), ignoring...
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080892.528224 2908632 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
I0000 00:00:1720080892.528293 2908632 pjrt_api.cc:79] PJRT_Api is set for device type tpu
I0000 00:00:1720080892.528300 2908632 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080892.544289 2908627 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
I0000 00:00:1720080892.544426 2908627 pjrt_api.cc:79] PJRT_Api is set for device type tpu
I0000 00:00:1720080892.544434 2908627 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080892.728254 2908631 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
I0000 00:00:1720080892.728326 2908631 pjrt_api.cc:79] PJRT_Api is set for device type tpu
I0000 00:00:1720080892.728332 2908631 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080892.916441 2908634 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
I0000 00:00:1720080892.916616 2908634 pjrt_api.cc:79] PJRT_Api is set for device type tpu
I0000 00:00:1720080892.916625 2908634 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080893.409535 2908636 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
I0000 00:00:1720080893.409646 2908636 pjrt_api.cc:79] PJRT_Api is set for device type tpu
I0000 00:00:1720080893.409654 2908636 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080893.658751 2908630 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
I0000 00:00:1720080893.658883 2908630 pjrt_api.cc:79] PJRT_Api is set for device type tpu
I0000 00:00:1720080893.658891 2908630 pjrt_api.cc:146] The PJRT plugin has PJRT API version 0.46. The framework PJRT API version is 0.46.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080893.659256 2908635 pjrt_api.cc:100] GetPjrtApi was found for tpu at /home/liqing002/.local/lib/python3.10/site-packages/libtpu/libtpu.so
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1720080893.659285 2908633 pjrt_ap
|
https://github.com/pytorch/xla/issues/7633
|
closed
|
[
"question",
"distributed"
] | 2024-07-04T08:28:47Z
| 2025-04-03T14:52:10Z
| null |
SileonQuinn
|
huggingface/diffusers
| 8,788
|
VAE Tiling not supported with SD3 for non power of 2 images?
|
### Describe the bug
VAE tiling works for SD3 with power-of-2 image sizes, but not for any other alignment.
The mentioned issues with VAE tiling are due to: [vae/config.json](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/blob/main/vae/config.json)
Having:
```
"use_post_quant_conv": false,
"use_quant_conv": false
```
This causes the attribute used here:
https://github.com/huggingface/diffusers/blob/589931ca791deb8f896ee291ee481070755faa26/src/diffusers/models/autoencoders/autoencoder_kl.py#L363
and here:
https://github.com/huggingface/diffusers/blob/589931ca791deb8f896ee291ee481070755faa26/src/diffusers/models/autoencoders/autoencoder_kl.py#L412
to be `None`.
Perhaps, at the moment, the model is simply not entirely compatible with the tiling in `AutoEncoderKL`, as the state dict does not contain the keys `post_quant_conv.bias, quant_conv.weight, post_quant_conv.weight, quant_conv.bias`.
Is this intended?
### Reproduction
```python
import diffusers
import PIL.Image
import os

os.environ['HF_TOKEN'] = 'your token'

cn = diffusers.SD3ControlNetModel.from_pretrained('InstantX/SD3-Controlnet-Canny')
pipe = diffusers.StableDiffusion3ControlNetPipeline.from_pretrained(
    'stabilityai/stable-diffusion-3-medium-diffusers',
    controlnet=cn)
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()

width = 1376
height = 920

# aligned by 16, but alignment by 64 also fails
output_size = (width - (width % 16), height - (height % 16))
not_pow_2 = PIL.Image.new('RGB', output_size)

args = {
    'guidance_scale': 8.0,
    'num_inference_steps': 30,
    'width': output_size[0],
    'height': output_size[1],
    'control_image': not_pow_2,
    'prompt': 'test prompt'
}

pipe(**args)
```
### Logs
```shell
REDACT\venv\Lib\site-packages\diffusers\models\attention_processor.py:1584: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
hidden_states = F.scaled_dot_product_attention(
Traceback (most recent call last):
File "REDACT\test.py", line 35, in <module>
pipe(**args)
File "REDACT\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\pipelines\controlnet_sd3\pipeline_stable_diffusion_3_controlnet.py", line 912, in __call__
control_image = self.vae.encode(control_image).latent_dist.sample()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 258, in encode
return self.tiled_encode(x, return_dict=return_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 363, in tiled_encode
tile = self.quant_conv(tile)
^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
```
### System Info
Windows
diffusers 0.29.2
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
|
https://github.com/huggingface/diffusers/issues/8788
|
closed
|
[
"bug"
] | 2024-07-04T03:52:54Z
| 2024-07-11T20:41:37Z
| 2
|
Teriks
|
huggingface/diffusers
| 8,785
|
adding PAG Support for Hunyuan-DIT and Pixart-Sigma
|
We recently added PAG support for SDXL. Is anyone interested in extending PAG support to Hunyuan-DIT and Pixart-Sigma?
There is no implementation available, so it is a bit of a research-oriented project (= fun!!), and you can get direct feedback from the authors @sunovivid @HyoungwonCho.
to add PAG support to new models:
* I think you should be able to use `PAGMixin` as it is (or with some modification)(https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pag_utils.py#L27)
* you will need to make PAG attention processors for the new model https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2564 based on the attention processor that the model uses, e.g. for Hunyuan-DIT, you need to make a `HunyuanPAGIdentitySelfAttnProcessor2_0` and `HunyuanPAGCFGIdentitySelfAttnProcessor2_0` based on `HunyuanAttnProcessor2_0` https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1499
* you will need to make a `HunyuanPAGPipeline` /`PixartSigmaPAGPipeline` under the `pag` folder (for now!)
|
https://github.com/huggingface/diffusers/issues/8785
|
closed
|
[
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-07-03T18:17:32Z
| 2024-08-30T11:09:04Z
| 4
|
yiyixuxu
|
huggingface/diffusers
| 8,780
|
Model and input data type is not same
|
**Is your feature request related to a problem? Please describe.**
Hi, when I trained an SD v1.5 model in fp16 mode using the `examples/text_to_image/train_text_to_image.py` script, I found a dtype mismatch between the UNet model and the input data. Specifically, in this [line](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py#L993), the `unet` model has float32 dtype, but `noisy_latents` has float16 dtype. Although this does not raise an error on CUDA, it does raise an error on my custom device. I wonder how I can change this code to use float16.
**Describe the solution you'd like.**
To avoid getting a mismatched model, I would like guidance on the right way to match the model and input dtypes, for example something like the sketch below.
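A hedged sketch of the usual fix (not necessarily what the maintainers recommend): cast the UNet inputs to the UNet's own parameter dtype right before the forward call, so the fp16/fp32 mismatch cannot occur regardless of the device backend. The variable names below are the ones used inside that training script.

```python
# Sketch: inside the training loop of train_text_to_image.py (names from that script).
unet_dtype = next(unet.parameters()).dtype
model_pred = unet(
    noisy_latents.to(dtype=unet_dtype),
    timesteps,
    encoder_hidden_states.to(dtype=unet_dtype),
    return_dict=False,
)[0]
```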
**Describe alternatives you've considered.**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context.**
Add any other context or screenshots about the feature request here.
|
https://github.com/huggingface/diffusers/issues/8780
|
open
|
[
"stale"
] | 2024-07-03T06:57:44Z
| 2024-09-14T15:07:36Z
| 1
|
andyjiang1116
|
huggingface/peft
| 1,903
|
How to use multiple GPUs
|
### System Info
peft=0.11.1
python=3.10
### Who can help?
When I run this script, there is no problem with a single GPU. When I try to run on 2 GPUs, the system resources show that the utilization of each GPU is only about half. When I try to increase `per_device_train_batch_size` and `gradient_accumulation_steps`, I run out of GPU memory. What should I do?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from datasets import load_dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
logging,
)
from peft import LoraConfig, peft_model, TaskType
from trl import SFTTrainer, SFTConfig
# fix random sequence
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
model_id,
# use_fast=False,
add_eos_token=True,
#trust_remote_code=True,
)
#tokenizer.pad_token = tokenizer.unk_token
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = "right"
# Generate Llama 3 instruction
def generate_supervised_chat(row):
chat = [
{ 'role': 'system',
'content': '你是一位优秀的翻译专家。请把给定的中文文本翻译为日语,只回复翻译后的文本。'},
{ 'role': 'user',
'content': f'''请把下面的中文文本翻译为日语文本。
中文文本: {row["Ch"]}''' },
{ 'role': 'assistant',
'content': f'''此文本翻译后的结果如下。
日语翻译文本: {row["Ja"]}
以上。'''},
]
instruction = tokenizer.apply_chat_template(chat, tokenize=False)
# instruction = instruction + "<|end_of_text|>"
return instruction
def add_text(row):
row['text'] = generate_supervised_chat(row)
return row
# load dataset
jjs_dataset_dir = "wccjc-dataset"
dataset = load_dataset(
jjs_dataset_dir,
data_files={'train': 'train.tsv', 'test': 'test.tsv', 'valid': 'valid.tsv'},
sep='\t',
names=['Ch', 'Ja']
)
dataset = dataset["train"]
dataset = dataset.map(add_text)
print(dataset)
print(dataset[0]["text"])
# Quantization Config
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16, # or float16
bnb_4bit_use_double_quant=True,
)
import datetime
# Load pretrained model
now = datetime.datetime.now()
print('Loading base model:', model_id, now)
print('Train epochs:', n_epochs)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=bnb_config,
device_map="auto", #{"": 0},
)
now = datetime.datetime.now()
print('Loading ended', now)
model.config.use_cache = False
model.config.pretraining_tp = 1
# LoRA Config
lora_config = LoraConfig(
r=8,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type=TaskType.CAUSAL_LM, # "CAUSUAL_LM",
target_modules=["q_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "k_proj", "v_proj"],
)
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
print("per_device_train_batch_size:", per_device_train_batch_size)
print("gradient_accumulation_steps:", gradient_accumulation_steps)
# Training arguments
sft_config = SFTConfig(
output_dir="./train_logs",
fp16=True,
seed=42,
# max_steps=13200, # 300,
num_train_epochs=n_epochs,
per_device_train_batch_size=per_device_train_batch_size, #4,
gradient_accumulation_steps=gradient_accumulation_steps, # 1,
optim="paged_adamw_32bit",
learning_rate=2e-4,
lr_scheduler_type="cosine",
max_grad_norm=0.3,
warmup_ratio=0.03,
weight_decay=0.001,
save_steps=1000, #25,
logging_steps=25,
group_by_length=True,
report_to="tensorboard",
max_seq_length=512, #None
dataset_text_field="text",
)
# SFT arguments
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=dataset,
peft_config=lora_config,
# args=training_arguments,
args=sft_config,
packing=False,
)
```
### Expected behavior
run 2 GPUs
|
https://github.com/huggingface/peft/issues/1903
|
closed
|
[] | 2024-07-03T02:25:36Z
| 2024-08-11T15:03:29Z
| null |
Lihwnlp
|
pytorch/xla
| 7,622
|
How to avoid compilation in a section of code?
|
## ❓ Questions and Help
We are using Pytorch XLA w/ TPU to train a multi-modal language models.
We can make most of the code, such as image encoding and the forward pass in the LLM backbone, in a static shape, which XLA handles well. However, making the part that fuses image and text embeddings into the input embedding static is extremely challenging.
Currently, we use `mark_step` to isolate that section from the rest of the code, allowing it to recompile each time. **Although this part is very computationally light, the recompilation is extremely slow and often consumes the majority of training time**.
We find documentation on this issue very hard to find, and we are exploring better solutions, such as running that part on the CPU, in eager mode, or not saving that part of the graph to avoid OOM errors during long training runs. **We wonder if you have any suggestions/pointers on how to workaround this inefficiency?**
Following is a pesudo code to illustrate our problem
```python
for ...:  # loading data
    # these tensors have static shapes, xla works great on them
    image_embeddings = image_encoder(raw_image_tensor)
    text_embeddings = get_text_embedding(text_token_idxs)
    xm.mark_step()

    # this part is very light in compute, but dynamic. We currently just recompile this graph every single time :(
    input_embeddings = fuse_embedding(raw_image_tensor, text_token_idxs, sequence_info_dict)
    xm.mark_step()

    # these tensors have static shapes, xla works great on them
    output_logits = llm(input_embeddings)
    # loss compute / backward / optimizer step omitted
```
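One of the workarounds mentioned above (running the dynamic part on the CPU), sketched under the assumption that `fuse_embedding` only does light gather/concat work; `fuse_embedding_cpu` is a hypothetical CPU-side implementation, not part of the actual code:

```python
# Sketch: do the dynamic fusion on CPU tensors, then move the fixed-shape result to the XLA device.
image_embeddings_cpu = image_embeddings.cpu()   # materializes the XLA tensor
text_embeddings_cpu = text_embeddings.cpu()
input_embeddings = fuse_embedding_cpu(          # hypothetical CPU implementation
    image_embeddings_cpu, text_embeddings_cpu, sequence_info_dict
)
input_embeddings = input_embeddings.to(xm.xla_device())  # static shape from here on
output_logits = llm(input_embeddings)
```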
|
https://github.com/pytorch/xla/issues/7622
|
closed
|
[
"question"
] | 2024-07-03T00:15:12Z
| 2025-04-03T14:54:28Z
| null |
Jiayi-Pan
|
pytorch/xla
| 7,614
|
Dynamo persistent cache real-time look-up
|
## 🚀 Feature
As described in https://github.com/pytorch/pytorch/issues/125958, we are integrating with vLLM on TPUs. We see that in the warm-up phase of vLLM, it needs to pre-compile ~30 different input shape combinations. PyTorch/XLA does not support dynamic shapes today, so torch.compile will keep recompiling the model code, which slows down development speed (waiting for 10 minutes before warm-up is finished). PyTorch/XLA already caches the XLA compilation, but torch.compile itself is pretty expensive.
This feature request pitches to achieve the similar effect of dynamic shapes by persistent caching and real time look up of the compiled program.
## Details
To do this, in high-level, we need to do the following:
- Turn on the dynamo dynamic shape mode, dynamo will start passing the inputs with dynamic shapes to PyTorch/XLA
- PyTorch/XLA can then try to figure out if this shape is compiled in XLA
- If it is, we can map the different input shape to different compiled binaries
## Open questions
- Does persistent FxGraph caching work with PyTorch/XLA? Details at https://github.com/pytorch/pytorch/issues/125958#issuecomment-2204040977.
- How can we properly map the different input shape to different compiled binaries?
cc @JackCaoG @WoosukKwon
|
https://github.com/pytorch/xla/issues/7614
|
closed
|
[] | 2024-07-02T21:01:36Z
| 2024-07-23T01:18:34Z
| 2
|
wonjoo-wj
|
pytorch/vision
| 8,510
|
Obscure error messages using VideoReader when PyAV version too old/not installed
|
### 🐛 Describe the bug
When a sufficiently recent version of PyAV is not installed, the script `vision/torchvision/io/video_reader.py` initialises the variable `av` to an `ImportError` object that contains a description of the issue, either at line 38:
```python
av = ImportError(
"""\
PyAV is not installed, and is necessary for the video operations in torchvision.
See https://github.com/mikeboers/PyAV#installation for instructions on how to
install PyAV on your system.
"""
)
```
or on line 28 (code omitted for brevity, but is similar to the above). This is potentially very useful information that would make it easy to see why an application isn't working. Unfortunately, this error is never actually raised.
Instead, when a VideoReader object is created, the `av` variable is simply assumed to contain the PyAV module object. This is first used on line 159:
```python
self.container = av.open(src, metadata_errors="ignore")
```
As an `ImportError` object does not have a method called `open`, this results in a rather mystifying error condition being raised: `AttributeError: 'ImportError' object has no attribute 'open'`.
I suspect there should be a test immediately prior to line 159 which checks if `av` is an ImportError object and raises it if it is.
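A minimal sketch of that check (not the actual torchvision patch):

```python
# Sketch: surface the stored ImportError before `av` is used as if it were the module.
def _require_av(av):
    if isinstance(av, ImportError):
        raise av  # re-raises the original "PyAV is not installed" message
    return av

# Inside VideoReader.__init__ (line 159 referenced above), the call would then become:
# self.container = _require_av(av).open(src, metadata_errors="ignore")
```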
### Versions
This bug is not related to specific versions, but can be seen by examination of the current version of the source code.
|
https://github.com/pytorch/vision/issues/8510
|
open
|
[] | 2024-07-02T18:51:34Z
| 2024-07-04T10:43:51Z
| 1
|
occipita
|
huggingface/text-embeddings-inference
| 320
|
how to deploy bge-reranker-v2-m3 on Text-embeddings-inference
|
https://github.com/huggingface/text-embeddings-inference/issues/320
|
closed
|
[] | 2024-07-02T15:18:48Z
| 2024-07-08T10:20:05Z
| null |
kennard520
|
|
huggingface/text-embeddings-inference
| 318
|
How to deploy bge-reranker-v2-m3 for multiple threads?
|
https://github.com/huggingface/text-embeddings-inference/issues/318
|
closed
|
[] | 2024-07-02T14:56:33Z
| 2024-07-08T10:20:01Z
| null |
kennard520
|
|
huggingface/diffusers
| 8,771
|
Removing LoRAAttnProcessor causes many dependencies to fail
|
### Describe the bug
https://github.com/huggingface/diffusers/pull/8623 removed the obsolete `LoRAAttnProcessor`, which in principle is a good thing, but it was done without considering where that feature is currently in use, so it breaks many (and I mean many) community pipelines.
It also breaks some core libraries such as Hugging Face's own <https://github.com/huggingface/optimum> library, which is used to export models to ONNX and to enable use of the Olive backend.
The suggestion is to add a dummy `LoRAAttnProcessor` class so that importing it becomes a no-op for packages that still reference it, along the lines of the sketch below.
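A hedged sketch of such a shim (not necessarily what diffusers actually shipped):

```python
# Sketch: a do-nothing placeholder so legacy
# `from diffusers.models.attention_processor import LoRAAttnProcessor` keeps importing.
class LoRAAttnProcessor:
    r"""Deprecated no-op kept only for backwards-compatible imports."""

    def __init__(self, *args, **kwargs):
        pass
```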
### Reproduction
N/A
### Logs
```shell
> Failed to import optimum.onnxruntime.modeling_diffusion because of the following error (look up to see its traceback):
> Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
> cannot import name 'LoRAAttnProcessor' from 'diffusers.models.attention_processor' (/home/vlado/dev/sdnext/venv/lib/python3.12/site-packages/diffusers/models/attention_processor.py)
```
### System Info
diffusers==0.30.0.dev0
### Who can help?
@yiyixuxu @sayakpaul @DN6
|
https://github.com/huggingface/diffusers/issues/8771
|
closed
|
[
"bug"
] | 2024-07-02T13:11:33Z
| 2024-07-03T16:37:08Z
| 1
|
vladmandic
|
pytorch/pytorch
| 129,949
|
How to get stream operators in custom backend compiler ?
|
### 🐛 Describe the bug
Hi, when I use a custom backend, I find that the fx graph the custom compiler gets does not have the stream-related operations.
Then I found that the fx graph dropped those stream operations after `aot_module_simplified`.
So, I want to know how we can get an fx graph that contains stream-related operations when using `aot_module_simplified` and a custom compiler.
cc @ezyang @anijain2305 @chauhang @penguinwu @zou3519 @ptrblck @msaroufim
Here is my test script.
```python
import torch
import torch.nn as nn


class Layer(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        stream2 = torch.cuda.Stream()
        with torch.cuda.stream(stream2):
            z = x + 1
        y = x - 1
        return y + z


mm = Layer()
x = torch.randn([4]).cuda()

from torch._functorch.aot_autograd import aot_module_simplified


def toy_backend(gm, sample_inputs):
    return gm


def aot_toy_backend(gm, sample_inputs):
    return aot_module_simplified(gm, sample_inputs, fw_compiler=toy_backend)


mmc = torch.compile(mm, backend=aot_toy_backend)
yc = mmc(x)
```
When I use the `aot_toy_backend` backend, there are no stream-related ops in the fx graph.
### Versions
pytorch 2.3.0
|
https://github.com/pytorch/pytorch/issues/129949
|
closed
|
[
"oncall: pt2"
] | 2024-07-02T09:05:54Z
| 2024-07-05T06:31:36Z
| null |
wbigat2
|
pytorch/xla
| 7,607
|
How to use spmd to support hybrid shard data parallelism?
|
## ❓ Questions and Help
FSDP can be expressed well with SPMD, but HSDP seems impossible to express. Is there any way to express HSDP in SPMD?
|
https://github.com/pytorch/xla/issues/7607
|
closed
|
[
"question"
] | 2024-07-02T08:05:47Z
| 2025-04-03T14:54:52Z
| null |
mars1248
|
huggingface/candle
| 2,307
|
How to get all layers attentions?
|
I only see that Candle returns `last_hidden_state`, but not `all_hidden_states` and `attentions`. I want to get the attentions. Can I submit a PR to do this? I originally wanted to define the model myself, but I found that all its methods are private.
|
https://github.com/huggingface/candle/issues/2307
|
open
|
[] | 2024-07-02T02:16:52Z
| 2024-07-02T02:16:52Z
| null |
kitty-eu-org
|
huggingface/diffusers
| 8,760
|
Clarification Needed on Hardcoded Value in Conditional Statement in LeditPP
|
Hello @manuelbrack,
I was reviewing the source code and came across a line that seems to have a hardcoded value in a conditional statement. The line in question is:
https://github.com/huggingface/diffusers/blob/0bae6e447cba0459456c4f7e7e87d7db141d3235/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L1053
I understand that this condition decides whether cross_attention_mask, intersect_mask, or noise_mask is going to be used in the diffusion step, but any clarification on this condition regarding the following questions would be appreciated:
- What is the significance of the value 800?
- Is this value based on empirical data, theoretical calculations, or an arbitrary choice?
- Are there specific scenarios or conditions under which this threshold was determined?
- Would it be possible to include a comment or documentation explaining this choice for future reference?
Thank you for your help!
|
https://github.com/huggingface/diffusers/issues/8760
|
open
|
[
"stale"
] | 2024-07-01T20:12:20Z
| 2024-12-13T15:05:35Z
| 3
|
ardofski
|
pytorch/pytorch
| 129,877
|
Eager and PT2 inconsistent on whether or not scalar tensor is allowed as input where int is expected
|
### 🐛 Describe the bug
Internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1454391288532411/
The error looks like this:
```
TorchRuntimeError: Failed running call_function fbgemm.jagged_1d_to_dense(*(), **{'values': FakeTensor(..., device='cuda:7', size=(260039,), dtype=torch.int64), 'offsets': FakeTensor(..., device='cuda:7', size=(513,), dtype=torch.int64), 'max_sequence_length': FakeTensor(..., device='cuda:7', size=(), dtype=torch.int64), 'padding_value': 0}):
fbgemm::jagged_1d_to_dense() Expected a value of type 'int' for argument 'max_sequence_length' but instead found type 'FakeTensor'.
```
You can work around it by replacing `max_len = torch.max(lengths)` with `max_len = torch.max(lengths).item()` but it would be better if PT2 implicitly inserted the item call
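For illustration, a hedged sketch of that workaround (it assumes `fbgemm_gpu` is installed so the custom op is registered):

```python
# Sketch: .item() turns the 0-d tensor into a plain Python int, so the op's
# 'int' argument type-checks under PT2 as well as in eager mode.
import torch

values = torch.arange(10, dtype=torch.int64)
offsets = torch.tensor([0, 3, 7, 10], dtype=torch.int64)
lengths = offsets[1:] - offsets[:-1]

max_len = torch.max(lengths).item()
dense = torch.ops.fbgemm.jagged_1d_to_dense(
    values=values, offsets=offsets,
    max_sequence_length=max_len, padding_value=0,
)
```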
@zou3519 I am not sure if this is a custom op problem or a Dynamo problem
A minimal repro should be relatively simple to create.
### Versions
main
cc @anijain2305 @chauhang @penguinwu @zou3519 @bdhirsh
|
https://github.com/pytorch/pytorch/issues/129877
|
closed
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2024-07-01T14:11:55Z
| 2025-07-30T17:43:13Z
| null |
ezyang
|
huggingface/diffusers
| 8,748
|
SD3 cannot be finetuned into a better model (hand and face deformation)?
|
### Describe the bug
I want to finetune SD3 to improve its human generation quality with a 3-million-sample high-quality human dataset (which has proven useful on SDXL and other models), but hand and face deformation doesn't improve much after two days of training.
I am using [train](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_sd3.py) script
What I have done so far:
1. regular training with 3 million data with batch size 2x24(V100) for 2 epochs with lr 5e-6 and adamw optimizer
2. prodigy optimizer training with same setting
3. Add q,k RMS norm to each attention layer
4. only train several blocks
All of my training runs give me nearly the same deformation results, where the hands never look like normal human hands.
Could you provide more experiments on SD3 training? There seems to be no easy way to adapt SD3 for human generation.
### Reproduction
Has described in bug part
### Logs
_No response_
### System Info
24x V100 GPUs, batch size 2 per card, 3 million human samples with aesthetic score > 4.5
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/8748
|
closed
|
[
"bug"
] | 2024-07-01T07:21:19Z
| 2024-07-17T06:01:31Z
| 4
|
KaiWU5
|
huggingface/transformers.js
| 833
|
convert.py has errors when I use yolov9
|
### Question
Your repo
https://huggingface.co/Xenova/gelan-c
is really good and helpful for me, but I need to use the gelan-t and gelan-s editions because of mobile phone deployment.
When I use convert.py to convert to an ONNX edition, this error happens:
The checkpoint you are trying to load has model type `yolov9` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date
|
https://github.com/huggingface/transformers.js/issues/833
|
open
|
[
"question"
] | 2024-07-01T03:51:53Z
| 2024-07-18T07:04:10Z
| null |
jifeng632
|
huggingface/transformers
| 31,722
|
how to generate router_logits in moe models using model.generate()?
|
### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <yes>
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "/localssd/swlu/Qwen1.5-MoE-A2.7B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("/localssd/swlu/Qwen1.5-MoE-A2.7B-Chat")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    return_dict_in_generate=True,
    output_router_logits=True
)
print("outputs:", generated_ids.router_logits)
```
### Expected behavior
I want to get the router_logits of MoE models using model.generate() with the code above.
But I got:
AttributeError: 'GenerateDecoderOnlyOutput' object has no attribute 'router_logits'
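A hedged workaround sketch: since `generate()` here returns a `GenerateDecoderOnlyOutput` without router logits, run one ordinary forward pass over the generated ids to recover them (this assumes the model's `forward` supports `output_router_logits`, as MoE models in transformers generally do):

```python
# Sketch: recover per-layer router logits with a plain forward pass.
import torch

with torch.no_grad():
    out = model(generated_ids.sequences, output_router_logits=True)

# one tensor per MoE layer, each of shape (batch_size * seq_len, num_experts)
print(len(out.router_logits), out.router_logits[0].shape)
```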
|
https://github.com/huggingface/transformers/issues/31722
|
closed
|
[
"Generation"
] | 2024-07-01T03:48:09Z
| 2024-09-13T08:07:40Z
| null |
Jimmy-Lu
|
huggingface/transformers.js
| 832
|
How to load version 3 from CDN?
|
### Question
The [README.md file on v3 branch](https://github.com/xenova/transformers.js/tree/v3?tab=readme-ov-file#installation) has a html snippet to import transformers version 3 from a CDN.
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0';
</script>
```
That URL is not resolved by the CDN.
Is version 3 available on any CDN? If so, what is the URL? If not, is there an alternative way to import it in the browser?
|
https://github.com/huggingface/transformers.js/issues/832
|
closed
|
[
"question"
] | 2024-06-30T23:39:08Z
| 2024-10-10T12:23:41Z
| null |
geoffroy-noel-ddh
|
huggingface/transformers
| 31,717
|
how to remove kv cache?
|
### Feature request
When I use the generate() function of a language model for inference, the kv-cache is also stored in the GPU memory. Is there any way to clear this kv-cache before continuing to call generate()?
### Motivation
I have a lot of text to process, so I use a for loop to call generate(). To avoid OOM, I need to clear the kv-cache before the end of each loop iteration.
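Not a dedicated transformers API, just general PyTorch housekeeping as a hedged sketch: drop the references to the inputs and outputs at the end of each iteration and release the cached CUDA blocks.

```python
# Sketch: free per-iteration allocations between generate() calls.
import gc
import torch

texts = []
for batch in batches:  # `batches` = your list of input strings
    inputs = tokenizer(batch, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256)
    texts.extend(tokenizer.batch_decode(out, skip_special_tokens=True))
    del inputs, out
    gc.collect()
    torch.cuda.empty_cache()  # return freed blocks to the CUDA driver
```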
### Your contribution
none
|
https://github.com/huggingface/transformers/issues/31717
|
closed
|
[
"Feature request",
"Generation",
"Cache"
] | 2024-06-30T12:09:48Z
| 2024-11-05T01:34:42Z
| null |
TuuSiwei
|
huggingface/accelerate
| 2,904
|
How to merge Qlora FSDP weights with an LLM and save model.
|
https://github.com/huggingface/accelerate/issues/2904
|
closed
|
[] | 2024-06-30T07:00:50Z
| 2024-07-01T14:20:53Z
| null |
Minami-su
|
|
huggingface/transformers.js
| 830
|
Error while using the library in nextjs (app based route)
|
### Question
Hello,
I was going through the issues section to find a solution for the issue I am facing. I tried some of the solutions provided by xenova, but it seems like I am getting some WASM fallback error and I have no idea what's happening. I suspect it's webpack-related, but I wanted some clarity.
The error I see while running `npm run dev` is like this:
```
✓ Compiled /api/openai in 1500ms (3656 modules)
TypeError: Cannot read properties of undefined (reading 'create')
at constructSession (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:436:39)
at async Promise.all (index 1)
at async BertModel.from_pretrained (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:1007:20)
at async AutoModel.from_pretrained (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:5026:20)
at async Promise.all (index 1)
at async loadItems (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/pipelines.js:2838:5)
at async pipeline (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/pipelines.js:2790:21)
at async HuggingFaceEmbedding.getExtractor (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/HuggingFaceEmbedding.js:37:30)
at async HuggingFaceEmbedding.getTextEmbedding (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/HuggingFaceEmbedding.js:44:27)
at async HuggingFaceEmbedding.getTextEmbeddings (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:30:31)
at async batchEmbeddings (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:61:32)
at async HuggingFaceEmbedding.getTextEmbeddingsBatch (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:40:16)
at async HuggingFaceEmbedding.transform (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:44:28)
at async VectorStoreIndex.getNodeEmbeddingResults (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:474:17)
at async VectorStoreIndex.insertNodes (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:571:17)
at async VectorStoreIndex.buildIndexFromNodes (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:486:9)
at async VectorStoreIndex.init (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:436:13)
at async VectorStoreIndex.fromDocuments (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:514:16)
at async getOpenAIModelRequest (webpack-internal:///(rsc)/./src/actions/openai.ts:62:23)
at async POST (webpack-internal:///(rsc)/./src/app/api/openai/route.ts:11:21)
at async /Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:63809
at async eU.execute (/Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:53964)
at async eU.handle (/Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:65062)
at async doRender (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1333:42)
at async cacheEntry.responseCache.get.routeKind (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1555:28)
at async DevServer.renderToResponseWithComponentsImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1463:28)
at async DevServer.renderPageComponent (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1856:24)
at async DevServer.renderToResponseImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1894:32)
at async DevServer.pipeImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:911:25)
at async NextNodeServer.handleCatchallRenderRequest (/opt/homebrew/lib/node_modules/next/dist/server/next-server.js:271:17)
at async DevServer.handleRequestImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:807:17)
at async /opt/homebrew/lib/node_modules/next/dist/server/dev/next-dev-server.js:331:20
at async Span.traceAsyncFn (/opt/homebrew/lib/node_modules/next/dist/trace/trace.js:151:20)
at async DevServer.handleRequest (/opt/homebrew/lib/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
at async invokeRender (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:163:21)
at async handleRequest (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:342:24)
at async requestHandlerImpl (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:366:13)
at async Server.requestListener (/opt/homebrew/lib/node_modules/next/dist/server/lib/start
|
https://github.com/huggingface/transformers.js/issues/830
|
closed
|
[
"question"
] | 2024-06-29T15:00:09Z
| 2025-02-10T02:00:25Z
| null |
rr-jino-jose
|
pytorch/data
| 1,280
|
Importing `torchdata.stateful_dataloader` hides `torch` RandomSampler and BatchSampler
|
### 🐛 Describe the bug
### Description
In `torchdata.stateful_dataloader.sampler.py`, several Sampler classes in `torch.utils.data` are overwritten:
1. https://github.com/pytorch/data/blob/main/torchdata/stateful_dataloader/sampler.py#L61-L62
2. https://github.com/pytorch/data/blob/main/torchdata/stateful_dataloader/sampler.py#L134-L135
The implication here is that if code were to import `StatefulDataLoader` after importing torch, then there may be inconsistent definitions of `BatchSampler` and `RandomSampler` at runtime. See the gist below for a toy example, where a StatefulDataLoader has a handle to a `torch.utils.data.sampler.BatchSampler` rather than a `torchdata.stateful_dataloader.sampler.BatchSampler`.
This may possibly be the root cause of https://github.com/huggingface/accelerate/issues/2894
### How to reproduce
See gist: https://gist.github.com/byi8220/3091215e38d8f1caba01bc015aed32aa
### Versions
PyTorch version: 2.5.0.dev20240628
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 3600 6-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 83%
CPU max MHz: 4208.2031
CPU min MHz: 2200.0000
BogoMIPS: 7200.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of releva
|
https://github.com/meta-pytorch/data/issues/1280
|
closed
|
[] | 2024-06-28T23:28:50Z
| 2024-07-03T18:23:06Z
| 8
|
byi8220
|
huggingface/candle
| 2,294
|
How to get raw tensor data?
|
I am trying to implement an adaptive avg pool in candle. However, I guess my implementation will require an API to get the raw data/storage (stored in a plain/flattened array format).
Wondering if there is such an API for that?
Thanks!
|
https://github.com/huggingface/candle/issues/2294
|
open
|
[] | 2024-06-28T19:19:45Z
| 2024-06-28T21:51:57Z
| null |
WenheLI
|
huggingface/diffusers
| 8,730
|
Implementation of DDIM, why taking Xt and (t-1) as input?
|
### Describe the bug
I have tried to run inference with a diffusion model using DDIM, with the number of inference timesteps = 10 and the maximum (training) timestep = 1000.
I have printed the t in the for-loop, and the result is 901, 801, 801, 701, 601, 501, 401, 301, 201, 101, 1. It's really weird to me why 801 appears two times, and why we start from t=901 instead of t=1000. If we use t=901, we are trying to input x_1000 (the pure noise) and t_901 to the noise predictor, right? It seems weird because when we train the diffusion model, we feed (x_t, t). I mean, the timestep t should correspond to the version of images x_t.
I think the implementation may be right and some of my thoughts are wrong. Please kindly tell me the reason. Thank you!!!
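For what it's worth, here is a small sketch of the "leading" timestep spacing that several diffusers schedulers use (assuming num_train_timesteps=1000 and steps_offset=1, which are common DDIM config values); it reproduces the 901 ... 1 sequence I'm seeing:
```python
import numpy as np

num_train_timesteps = 1000
num_inference_steps = 10
steps_offset = 1  # assumed scheduler config value

step_ratio = num_train_timesteps // num_inference_steps          # 100
timesteps = (np.arange(num_inference_steps) * step_ratio)[::-1]  # 900, 800, ..., 0
timesteps = timesteps + steps_offset                             # 901, 801, ..., 1
print(timesteps)  # [901 801 701 601 501 401 301 201 101   1]
```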
### Reproduction
Just add a print in the forward for loop in DDIMPipeline.
### Logs
_No response_
### System Info
I believe this problem is not relevant to the system info.
### Who can help?
@yiyixuxu
|
https://github.com/huggingface/diffusers/issues/8730
|
closed
|
[
"bug"
] | 2024-06-28T18:45:55Z
| 2024-07-01T17:24:49Z
| 1
|
EPIC-Lab-sjtu
|
pytorch/torchtitan
| 434
|
Question about custom cuda operators for tensor parallelism
|
We are currently trying to apply torchtitan to MoE models. MoE models require using grouped_gemm (https://github.com/fanshiqing/grouped_gemm). GroupedGemm ops basically follow the same rules as ColumnLinear and RowLinear. Is there any way to make custom ops DTensor-compatible? Many thanks for the help!
|
https://github.com/pytorch/torchtitan/issues/434
|
open
|
[
"question"
] | 2024-06-28T12:29:43Z
| 2024-11-22T00:04:50Z
| null |
vermouth1992
|
huggingface/safetensors
| 490
|
How to save model checkpoint from a distributed training from multiple nodes?
|
Hello,
When I use the accelerator and DeepSpeed ZeRO-3 to train the model on one node with 8 GPUs, the following code smoothly saves the model checkpoint:
```
ds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded
if self.accelerator.is_main_process:
save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")
```
However, when I move the code to two nodes with 8 GPUs per node, this code does not work.
The error is like:
```Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.```
Then I thought that maybe I should not gate the save on the main process only, because there are two nodes, so instead I have local rank 0 on each node do the saving:
```
ds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded
if self.accelerator.local_process_index == 0:
save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")
```
And the error becomes:
```
save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")
File "/opt/conda/lib/python3.10/site-packages/safetensors/torch.py", line 284, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "/opt/conda/lib/python3.10/site-packages/safetensors/torch.py", line 457, in _flatten
raise ValueError(f"Expected a dict of [str, torch.Tensor] but received {type(tensors)}")
ValueError: Expected a dict of [str, torch.Tensor] but received <class 'NoneType'>
```
I am not sure what the right way to save with safetensors is in this case?
|
https://github.com/huggingface/safetensors/issues/490
|
closed
|
[
"Stale"
] | 2024-06-28T04:59:45Z
| 2024-07-31T11:46:06Z
| null |
Emerald01
|
huggingface/diffusers
| 8,728
|
Using `torchsde.BrownianInterval` instead of `torchsde.BrownianTree` in class `BatchedBrownianTree`
|
**Is your feature request related to a problem? Please describe.**
When I was doing some optimization for my pipeline, I found that the BrownianTree somehow took a bit more time than expected.
**Describe the solution you'd like.**
I dug further into the torchsde documentation and found that it encourages using `BrownianInterval` directly to get the best benefit from the underlying structure. `BrownianTree` is actually just an abstraction layer over `BrownianInterval`, and as we all know, Python function calls take time!
Code:
```
#diffusers/src/diffusers/schedulers/scheduling_dpmsolver_sde.py:41
self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
# Modified
self.trees = [torchsde.BrownianInterval(t0, t1, size=w0.shape, dtype=w0.dtype, device=w0.device, cache_size=None, entropy=s, **kwargs) for s in seed]
```
**Additional context.**
[torchsde doc link](https://github.com/google-research/torchsde/blob/master/DOCUMENTATION.md)
|
https://github.com/huggingface/diffusers/issues/8728
|
closed
|
[] | 2024-06-28T04:33:55Z
| 2024-09-12T08:46:54Z
| 5
|
dianyo
|
huggingface/transformers.js
| 826
|
Support for GLiNER models?
|
### Question
Is there a reason why models from the GLiNER family can't be supported?
I see they use a specialized library; does it take a lot of code to make them work?
|
https://github.com/huggingface/transformers.js/issues/826
|
open
|
[
"question"
] | 2024-06-28T01:54:37Z
| 2024-10-04T07:59:16Z
| null |
Madd0g
|
pytorch/torchtitan
| 431
|
Question about Pipeline parallelism
|
Just wondering: does the current PipelineStage API support variable-length input shapes like Megatron does? https://github.com/NVIDIA/Megatron-LM/blob/e33c8f78a35765d5aa37475a144da60e8a2349d1/megatron/core/model_parallel_config.py#L212 This is particularly useful for packed inputs where all the padding is removed.
|
https://github.com/pytorch/torchtitan/issues/431
|
open
|
[
"enhancement",
"question",
"post training"
] | 2024-06-27T15:31:52Z
| 2025-10-02T02:32:07Z
| null |
vermouth1992
|
huggingface/diffusers
| 8,721
|
how to unload a pipeline
|
How do I unload a pipeline and release the GPU memory?
|
https://github.com/huggingface/diffusers/issues/8721
|
closed
|
[] | 2024-06-27T10:04:39Z
| 2024-07-02T14:40:39Z
| null |
nono909090
|
huggingface/transformers.js
| 825
|
Are there any examples on how to use paligemma model with transformer.js
|
### Question
First of all, thanks for this amazing library!
So my question is: I happened to see this model available for Transformers.js:
https://huggingface.co/Xenova/paligemma-3b-mix-224
But unfortunately I can't find any example of how to run the `image-text-to-text` pipeline. Are there any resources you could kindly point me to? Thanks in advance! 🙏🏻
|
https://github.com/huggingface/transformers.js/issues/825
|
open
|
[
"question"
] | 2024-06-27T09:49:22Z
| 2024-06-29T02:39:27Z
| null |
alextanhongpin
|
huggingface/lerobot
| 294
|
After training with the LeRobot framework, how do I run inference with the trained policy directly in a real environment (e.g. the ALOHA code)? I have not found a solution yet
|
### System Info
```Shell
os ubuntu20.04,
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
not yet
### Expected behavior
How can I directly evaluate a policy trained with LeRobot on ALOHA?
|
https://github.com/huggingface/lerobot/issues/294
|
closed
|
[
"question",
"policies",
"robots",
"stale"
] | 2024-06-27T03:16:19Z
| 2025-10-23T02:29:25Z
| null |
cong1024
|
pytorch/serve
| 3,206
|
Docker swarm with TorchServe workflow
|
I want to scale the workflows through "Docker Swarm". (I hope it is possible, if not please tell me how one can achieve this? I know it is not supported yet through TorchServe directly, that is why I'm using docker to scale the workflow.)
I have a few questions about running TorchServe as a Docker service in swarm mode, along with a few issues I've encountered.
**Problem Statement:**
- We are using TorchServe workflow as we have multiple models required to complete the use case.
- To make sure that there isn't any difference, I've set the number of workers to 2 on each node so that memory consumption doesn't go above 16GB, and each node has the same number of workers and amount of memory.
- While creating a Docker service, the manager node seems to work fine with the TorchServe config below and completes the task in the desired time, but when the manager assigns the task to any of the worker nodes it takes ~3X longer.
- The problem we are facing is that while a TorchServe worker is executing on a worker node, it looks like it executes in bursts; i.e., it doesn't show continuous GPU utilization/processing, it stops printing logs, and the response is delayed. Meanwhile, if another request comes in, it stops executing the current request and starts executing the new one.
- I did see something in the logs (unfortunately, I'm unable to provide the logs here): when node `m5` was executing and a new request came in, the current request simply stopped (at least that's how it looked in the logs, though no error was thrown) and the new one started. Correct me if I'm wrong, but the old request should keep executing in the background, right?
- Now, the question is: does TorchServe support routing requests through Docker Swarm?
- If so, what would be the correct configuration to achieve similar results on all the nodes apart from the manager in the swarm?
**My Docker Swarm Config:**
* 3 nodes, 1 manager 2 workers
* Manager has 4 X v100 sxm-2, 32GB each, Worker has 4 X v100 sxm-2, 16GB each
**My project config:**
(Please ignore the timeout, as I've put it this way because my inference request takes around 10 mins, as it takes over 100 images to process in a batch)
* There are 5 models
* **model-config.yaml**
```yaml
maxBatchDelay: 10000000
responseTimeout: 10000000
```
* **workflow.yaml**
```yaml
models:
min-workers: 1
max-workers: 2
max-batch-delay: 10000000
retry-attempts: 1
timeout-ms: 3000000
m1:
url: mode-1.mar
m2:
url: model-2.mar
m3:
url: model-3.mar
m4:
url: model-4.mar
m5:
url: model-5.mar
dag:
pre_processing: [m1]
m1: [m2]
m2: [m3]
m3: [m4]
m4: [m5]
m5: [post_processing]
```
* **config.properties**
```properties
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
# management
default_response_timeout=10000000
default_workers_per_model=2
load_models=
model_store=model_store
workflow_store=wf_store
enable_envvars_config=true
job_queue_size=3
```
**Python Packages:**
```text
torch==1.13.1+cu117
torchvision==0.14.1+cu117
torchaudio==0.13.1+cu117
torchserve==0.10.0
torch-model-archiver==0.10.0
torch-workflow-archiver==0.2.12
nvgpu==0.10.0
captum==0.7.0
```
|
https://github.com/pytorch/serve/issues/3206
|
closed
|
[
"triaged",
"workflowx"
] | 2024-06-26T16:20:40Z
| 2024-07-25T14:54:32Z
| 6
|
KD1994
|
huggingface/chat-ui
| 1,312
|
[v0.9.1] Error: "Cannot resolve directory $env"
|
## Issue
For all client-side components, I get this:
```
"Cannot resolve directory $env"
```
<img width="589" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/26fa2eef-dbff-44f6-bb86-7700387abdf2">
<img width="837" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/e3668b40-396b-4244-9c78-4aaf805220ae">
This issue prevents a Docker run, because PUBLIC_ASSETS is not found.
@nsarrazin Please help.
|
https://github.com/huggingface/chat-ui/issues/1312
|
open
|
[
"support"
] | 2024-06-26T13:24:42Z
| 2024-06-26T15:14:48Z
| 2
|
adhishthite
|
huggingface/chat-ui
| 1,311
|
400 (no body) trying to reach openai compatible server
|
Hi everyone,
I have the following setup (containers are on the same device):
- Container 1: Nvidia NIM (openai-compatible) with Llama3 8B Instruct, port 8000;
- Container 2: chat-ui, port 3000.
This is the content of the `.env` file:
```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MODELS=`[{"name":"Llama3-8B-Instruct","id":"Llama3-8B-Instruct","endpoints":[{"type":"openai","baseURL":"http://192.168.120.240:8000/v1","extraBody":{"repetition_penalty":1.1}}]}]`
LOG_LEVEL=debug
ALLOW_INSECURE_COOKIES=true
```
And this is the error I get when I try to run inference from browser:
```
{"level":50,"time":1719403859826,"pid":31,"hostname":"592d634d7447","err":{"type":"BadRequestError","message":"400 status code (no body)","stack":"Error: 400 status code (no body)\n at APIError.generate (file:///app/build/server/chunks/index-3aabce5f.js:4400:20)\n at OpenAI.makeStatusError (file:///app/build/server/chunks/index-3aabce5f.js:5282:25)\n at OpenAI.makeRequest (file:///app/build/server/chunks/index-3aabce5f.js:5325:30)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async file:///app/build/server/chunks/models-e8725572.js:98846:36\n at async generateFromDefaultEndpoint (file:///app/build/server/chunks/index3-2417d430.js:213:23)\n at async generateTitle (file:///app/build/server/chunks/_server.ts-2c825ade.js:213:10)\n at async generateTitleForConversation (file:///app/build/server/chunks/_server.ts-2c825ade.js:177:19)","status":400,"headers":{"content-length":"1980","content-type":"application/json","date":"Wed, 26 Jun 2024 12:10:59 GMT","server":"uvicorn"}},"msg":"400 status code (no body)"}
BadRequestError: 400 status code (no body)
at APIError.generate (file:///app/build/server/chunks/index-3aabce5f.js:4400:20)
at OpenAI.makeStatusError (file:///app/build/server/chunks/index-3aabce5f.js:5282:25)
at OpenAI.makeRequest (file:///app/build/server/chunks/index-3aabce5f.js:5325:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///app/build/server/chunks/models-e8725572.js:98846:36
at async generate (file:///app/build/server/chunks/_server.ts-2c825ade.js:426:30)
at async textGenerationWithoutTitle (file:///app/build/server/chunks/_server.ts-2c825ade.js:487:3) {
status: 400,
headers: {
'content-length': '543',
'content-type': 'application/json',
date: 'Wed, 26 Jun 2024 12:10:59 GMT',
server: 'uvicorn'
},
request_id: undefined,
error: undefined,
code: undefined,
param: undefined,
type: undefined
}
```
Is there something wrong with the .env file, or is Nvidia NIM simply not supported for some strange reason?
|
https://github.com/huggingface/chat-ui/issues/1311
|
open
|
[
"support"
] | 2024-06-26T12:34:44Z
| 2024-07-22T13:03:18Z
| 2
|
edesalve
|
huggingface/diffusers
| 8,710
|
Add PAG support to SD1.5
|
We recently integrated PAG into diffusers! See the PR [here](https://github.com/huggingface/diffusers/pull/7944), where we added PAG to SDXL.
We also want to add PAG support to the SD1.5 pipelines! We will need:
- [x] StableDiffusionPAGPipeline (assigned to @shauray8, PR https://github.com/huggingface/diffusers/pull/8725)
- [ ] StableDiffusionPAGImg2ImgPipeline https://github.com/huggingface/diffusers/pull/9463
- [ ] StableDiffusionPAGInpaintPipeline
- [ ] StableDiffusionControlNetPAGInpaintPipeline (https://github.com/huggingface/diffusers/pull/8875)
- [x] StableDiffusionControlNetPAGPipeline (assigned to @tuanh123789 )
- [ ] StableDiffusionControlNetPAGImg2ImgPipeline (assigned to @Bhavay-2001 https://github.com/huggingface/diffusers/pull/8864)
1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
2. you can use the implementation of the SDXL PAG pipelines as a reference (see this PR https://github.com/huggingface/diffusers/pull/7944, and you can find all the SDXL PAG pipelines here https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
3. you need to add AutoPipeline so that you can use this API to create it
```python
AutoPipelineForImage2Image.from_pretrained(repo_id, controlnet=controlnet, enable_pag=True ...)
```
4. tests and docs
If you are interested in working on this, let me know which pipeline(s) you want to work on :)
|
https://github.com/huggingface/diffusers/issues/8710
|
closed
|
[
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-26T08:23:17Z
| 2024-10-09T20:40:59Z
| 17
|
yiyixuxu
|
huggingface/chat-ui
| 1,309
|
"404 Resource Not Found" when using Azure OpenAI model endpoint
|
I run `chat-ui` with the `chat-ui-db` docker image. I would like to connect it to my Azure OpenAI API endpoint.
I have set up the `env.local` file as stated in your docs and bound it into the Docker container:
```bash
MODELS=`[{
"id": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview",
"displayName": "gpt-4-1106-preview",
"parameters": {
"temperature": 0.5,
"max_new_tokens": 4096,
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://{resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions",
"defaultHeaders": {
"api-key": "{api-key}"
},
"defaultQuery": {
"api-version": "{api-version}"
}
}
]
}]`
```
When sending a message in `chat-ui`, I get a message `404 Resource Not Found` on the top right of the interface.
When I manually send an HTTP request to the Azure OpenAI API endpoint with the same parameters, I get a valid response.
How can I solve this?
|
https://github.com/huggingface/chat-ui/issues/1309
|
open
|
[
"support"
] | 2024-06-26T07:16:54Z
| 2024-06-26T18:53:51Z
| 2
|
gqoew
|
huggingface/chat-ui
| 1,308
|
Warning: To load an ES module in Azure environment
|
Hi Team,
We are currently facing issues deploying our Chat UI solution in Azure Web App. The error encountered in the console log is as follows:
```
npm http fetch GET 200 https://registry.npmjs.org/npm 141ms
(node:124) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(Use `node --trace-warnings ...` to show where the warning was created)
/home/site/wwwroot/node_modules/.bin/vite:2
import { performance } from 'node:perf_hooks'
^^^^^^
SyntaxError: Cannot use import statement outside a module
at internalCompileFunction (node:internal/vm:77:18)
at wrapSafe (node:internal/modules/cjs/loader:1288:20)
at Module._compile (node:internal/modules/cjs/loader:1340:27)
at Module._extensions..js (node:internal/modules/cjs/loader:1435:10)
at Module.load (node:internal/modules/cjs/loader:1207:32)
at Module._load (node:internal/modules/cjs/loader:1023:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12)
at node:internal/main/run_main_module:28:49
Node.js v20.11.1
npm notice
npm notice New minor version of npm available! 10.5.0 -> 10.8.1
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.1
npm notice Run npm install -g npm@10.8.1 to update!
npm notice
```
It appears to be a Node.js issue, and I believe there might be an error in my package.json configuration. I have tried using both Node.js 18 and 20 without success.
Could you please provide me with the correct configuration for package.json to resolve this issue?
|
https://github.com/huggingface/chat-ui/issues/1308
|
open
|
[
"support"
] | 2024-06-26T06:04:45Z
| 2024-06-27T09:07:35Z
| 3
|
pronitagrawalvera
|
huggingface/transformers.js
| 823
|
How to export q4f16.onnx
|
### Question
Thanks for providing such a great project, but I have a problem converting the model.
```
For example:
model_q4f16.onnx
```
What command is used to create and export such a q4/f16.onnx model?
Can you give me more tips or help? Thank you
|
https://github.com/huggingface/transformers.js/issues/823
|
closed
|
[
"question"
] | 2024-06-26T05:36:47Z
| 2024-06-26T07:46:57Z
| null |
juntaosun
|
pytorch/pytorch
| 129,542
|
How to Convert pytorch qat model to tensorrt
|
I find that a QAT model converted in PyTorch can't use GPU kernels, but I can't find a function or workflow to convert it to TensorRT. How can I convert a PyTorch QAT model to TensorRT?
|
https://github.com/pytorch/pytorch/issues/129542
|
closed
|
[] | 2024-06-26T02:46:20Z
| 2024-06-26T15:42:38Z
| null |
AnnaTrainingG
|
pytorch/xla
| 7,466
|
Register python implementation for the aten ops
|
## ❓ Questions and Help
Currently `F.interpolate(mode='trilinear')` will be dispatched to `aten::upsample_trilinear3d`, for which we don't have a C++ lowering. There is a Python decomp for this op in https://github.com/pytorch/pytorch/blob/ad76da6c16c5dc465e8aac8d913532251db7b400/torch/_decomp/decompositions.py#L3591-L3602, so I am wondering if there is a way for PyTorch/XLA to register this Python implementation directly.
Similar request for `scaled_dot_product_attention`, we have the Pallas based implementation in https://github.com/pytorch/xla/blob/master/torch_xla/experimental/custom_kernel.py#L162 for TPU but I don't know how to register this for PyTorch/XLA.
cc @ezyang @bdhirsh @alband
|
https://github.com/pytorch/xla/issues/7466
|
closed
|
[
"question",
"lowering"
] | 2024-06-25T21:00:50Z
| 2025-04-07T12:46:14Z
| null |
JackCaoG
|
pytorch/TensorRT
| 2,955
|
❓ [Question] How do you compile a chunk operator with TensorRT?
|
## ❓ Question
How do you compile a chunk operator with TensorRT? I have been trying a basic example in a Jupyter Notebook but get an unbroadcastable dimension error. The below code executes in PyTorch inference and torchscript, but cannot be compiled with TensorRT.
## What you have already tried
```python
import torch
import torch.nn as nn
import torch_tensorrt
device = "cuda"
class TestModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, y):
y1, _ = y.chunk(2, dim=0) #y1.shape --> (1, 3)
return x + y1 #(2, 3) + (1, 3)
model = TestModel()
model.eval()
x = torch.randn((2, 3), device=device)
y = torch.randn((2, 3), device=device)
model(x, y)
traced_model = torch.jit.trace(model, (x, y))
trt_model = torch_tensorrt.compile(traced_model,
inputs=[torch_tensorrt.Input(shape=x.shape, dtype=torch.float32),
torch_tensorrt.Input(shape=y.shape, dtype=torch.float32)]
)
```
Error messages:
```
ERROR: [Torch-TensorRT TorchScript Conversion Context] - ITensor::getDimensions: Error Code 4: Shape Error (broadcast dimensions must be conformable)
ERROR: [Torch-TensorRT TorchScript Conversion Context] - ITensor::getDimensions: Error Code 4: Shape Error (broadcast dimensions must be conformable)
ERROR: [Torch-TensorRT TorchScript Conversion Context] - IBuilder::buildSerializedNetwork: Error Code 4: Internal Error (%9 : Tensor = aten::add(%x, %y1, %3) # [...): IElementWiseLayer must have inputs with same dimensions or follow broadcast rules. Input dimensions were [2,3] and [1,0].)
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.3.0
- CPU Architecture:
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.10.14
- CUDA version: 12.1
- GPU models and configuration: A100
- Any other relevant information:
Thank you for the help!
|
https://github.com/pytorch/TensorRT/issues/2955
|
open
|
[
"question"
] | 2024-06-25T20:37:51Z
| 2024-06-25T21:45:45Z
| null |
joshuageddes
|
huggingface/diffusers
| 8,700
|
[PAG] add `StableDiffusionXLControlNetPAGImg2ImgPipeline`
|
We recently integrated PAG into diffusers! See the PR here: https://github.com/huggingface/diffusers/pull/7944
Does anyone want to add a `StableDiffusionXLControlNetPAGImg2ImgPipeline`?
1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
2. you can use the implementation of [`StableDiffusionXLControlNetPAGPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py) and [`StableDiffusionXLPAGImg2ImgPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py) as reference
3. you need to add AutoPipeline so that you can use this API to create it
```python
AutoPipelineForImage2Image.from_pretrained(repo_id, controlnet=controlnet, enable_pag=True ...)
```
4. tests and docs
|
https://github.com/huggingface/diffusers/issues/8700
|
closed
|
[
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-25T18:52:18Z
| 2024-08-21T17:24:23Z
| 6
|
yiyixuxu
|
huggingface/sentence-transformers
| 2,779
|
what is the default tokenizer when "No sentence-transformers model found with name"?
|
I'm trying to use the sentence-transformers model dangvantuan/sentence-camembert-large and I'm getting a "no model found" error. This error is probably because some Sentence-Transformers-specific files are missing from the Hugging Face repo (modules.json and config_sentence_transformers.json).
But then Sentence Transformers warns that it will create a new model with mean pooling, and this model performs really well on my data (!).
So, I would like to know: which tokenizer is used when no Sentence Transformers model is found for the given name?
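For reference, my understanding (please correct me) is that the fallback is roughly equivalent to wrapping the repo's own Hugging Face tokenizer/model (loaded via AutoTokenizer/AutoModel) with a mean-pooling layer — a sketch of that equivalent:
```python
from sentence_transformers import SentenceTransformer, models

# Roughly what the "create a new model with mean pooling" fallback amounts to
# (my own sketch, not the library's source).
word = models.Transformer("dangvantuan/sentence-camembert-large")  # AutoTokenizer/AutoModel under the hood
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pool])

print(model.encode(["Bonjour tout le monde"]).shape)
```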
|
https://github.com/huggingface/sentence-transformers/issues/2779
|
closed
|
[] | 2024-06-25T15:17:58Z
| 2024-07-05T10:42:27Z
| null |
Hortatori
|
huggingface/accelerate
| 2,891
|
How to set a custom Config in python code using Accelerate?
|
Hello everyone!
Could you please advise how to replace the console command for setting a config
```
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2}
```
with code in the Python file script_name.py?
I am expecting something like the following functionality:
```
from accelerate import Accelerator
accelerator = Accelerator()
accelerator.set_config_file('path/to/config/my_config_file.yaml')
```
I would like to run the script through Python and use all the benefits of launching with the Accelerate launch command with config file:
```
python script_name.py
```
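In case it helps clarify what I'm after, the closest workaround I can think of is to keep a plain `python` entry point that re-invokes the launcher itself (a sketch with hypothetical paths; `Accelerator()` itself does not seem to take a config file):
```python
import subprocess

# Hypothetical wrapper: run as `python launch_wrapper.py`, which re-executes the
# real training script through the accelerate CLI with the desired config file.
subprocess.run(
    [
        "accelerate", "launch",
        "--config_file", "path/to/config/my_config_file.yaml",  # hypothetical path
        "script_name.py", "--arg1", "value1",                   # hypothetical script/args
    ],
    check=True,
)
```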
|
https://github.com/huggingface/accelerate/issues/2891
|
closed
|
[] | 2024-06-25T11:56:10Z
| 2024-10-07T15:08:01Z
| null |
konstantinator
|
pytorch/ao
| 436
|
what if below condition? about OCP Microscaling
|
Assume we have an fp32 tensor like [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 127.99999], and set k to 32 (the default), converting it to an fp8 e5m2 MX block.
btw asfloat(0x42FFFFFF) = 127.9999f
From the current code, the max absolute value is 127.9999 and its unbiased exponent is 6; subtracting emax.fp8_e5m2 (which is 15) and adding the bias 127 gives a shared scale of 6 - 15 + 127 = 118.
127.9999 / 2^(118-127) = 0x1.FFFFF7p+15; assuming the RN rounding mode, after rounding this value becomes 0x2.0p+15 in fp8 e5m2, which is larger than the max normal representation of fp8 e5m2, so it is clamped to the max normal (OCP Microscaling Formats (MX) Specification Version 1.0, chapter 6.3).
The current code does the above; the log is shown below:
`tensor([ 1.0000, 2.0000, 3.0000, 4.0000, 5.0000, 6.0000, 7.0000,
8.0000, 9.0000, 10.0000, 11.0000, 12.0000, 13.0000, 14.0000,
15.0000, 16.0000, 17.0000, 18.0000, 19.0000, 20.0000, 21.0000,
22.0000, 23.0000, 24.0000, 25.0000, 26.0000, 27.0000, 28.0000,
29.0000, 30.0000, 31.0000, **127.9999**], device='cuda:0')
MXTensor: elem_dtype: torch.float8_e5m2, s_e8m0: tensor([118], device='cuda:0', dtype=torch.uint8), d: tensor([ 512., 1024., 1536., 2048., 2560., 3072., 3584., 4096., 4096.,
5120., 6144., 6144., 6144., 7168., 8192., 8192., 8192., 8192.,
10240., 10240., 10240., 12288., 12288., 12288., 12288., 12288., 14336.,
14336., 14336., 16384., 16384., **57344.**], device='cuda:0',
dtype=torch.float8_e5m2), d_hp: tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 8., 10., 12., 12.,
12., 14., 16., 16., 16., 16., 20., 20., 20., 24., 24., 24.,
24., 24., 28., 28., 28., 32., 32., **112**.], device='cuda:0')`
From the log above, we see the shared exp is 118, and fp32 127.9999 converts to fp8 e5m2 57344. The decoded value 112 = 2^(118-127) * 57344 seems far smaller than 127.9999.
But if we incremented the shared exp by 1 whenever max(abs)/scale exceeds the max normal representation of fp8 e5m2 after rounding — i.e. set shared_exp to 119 — then 127.9999 converts to fp8 e5m2 as 0x1.00p+15 = 32768, and the decoded value 128 = 2^(119-127) * 32768 seems more accurate than 112.
But this seems non-compliant with the MX 1.0 spec. Which should we choose, and why? Can anyone help me?
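To make my arithmetic easy to check, here is the calculation spelled out in plain Python (values exactly as stated above):
```python
max_abs = 127.9999          # largest magnitude in the block
emax_e5m2 = 15
e5m2_max_normal = 57344.0   # max normal value of fp8 e5m2

shared_exp = 6 - emax_e5m2 + 127            # 118 (what the current code computes)
scaled = max_abs / 2 ** (shared_exp - 127)  # ~65535.95 -> rounds past e5m2 max normal, so it clamps
print(scaled)

print(2 ** (118 - 127) * e5m2_max_normal)   # 112.0  -> decoded value with shared_exp = 118
print(2 ** (119 - 127) * 32768.0)           # 128.0  -> decoded value with shared_exp = 119
```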
|
https://github.com/pytorch/ao/issues/436
|
closed
|
[
"question",
"mx"
] | 2024-06-25T08:37:17Z
| 2024-07-05T16:31:23Z
| null |
avater210
|
huggingface/diffusers
| 8,693
|
SD3 + SDXL refine fix lying on grass. How to do in diffusers colab workflow?
|
This is the ComfyUI workflow:

How can I do this in a Diffusers Colab workflow?
|
https://github.com/huggingface/diffusers/issues/8693
|
closed
|
[
"stale"
] | 2024-06-25T07:30:55Z
| 2024-09-23T11:37:25Z
| null |
s9anus98a
|
huggingface/text-generation-inference
| 2,113
|
how to launch a service using downloaded model weights?
|
### System Info
I have downloaded the model weights for the BGE models, and I want to launch a model service using TGI. The command is:
```
model=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
revision=refs/pr/5
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all \
-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \
--model-id $model --port 3001 --revision $revision
```
but I got the follwing error:
```
2024-06-25T03:13:34.201754Z INFO text_embeddings_router: router/src/main.rs:140: Args { model_id: "BAA*/***-*****-**-v1.5", revision: Some("refs/pr/5"), tokenization_workers: None, dtype: None, pooling: None, max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, hf_api_token: None, hostname: "54903bb17567", port: 3001, uds_path: "/tmp/text-embeddings-inference-server", huggingface_hub_cache: Some("/data"), payload_limit: 2000000, api_key: None, json_output: false, otlp_endpoint: None, cors_allow_origin: None }
2024-06-25T03:13:34.201950Z INFO hf_hub: /root/.cargo/git/checkouts/hf-hub-1aadb4c6e2cbe1ba/b167f69/src/lib.rs:55: Token file not found "/root/.cache/huggingface/token"
2024-06-25T03:13:36.546198Z INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:20: Starting download
Error: Could not download model artifacts
Caused by:
0: request error: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)
1: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)
2: error trying to connect: Connection reset by peer (os error 104)
3: Connection reset by peer (os error 104)
4: Connection reset by peer (os error 104)
```
It seems to try to download the model from Hugging Face, but I want to use my private model weights.
My private weights:
```
>> ls /storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
1_Pooling model.safetensors README.md tokenizer_config.json
config.json modules.json sentence_bert_config.json tokenizer.json
config_sentence_transformers.json pytorch_model.bin special_tokens_map.json vocab.txt
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
docker run --gpus all \
-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \
--model-id $model --port 3001 --revision $revision
### Expected behavior
Launch the service successfully.
|
https://github.com/huggingface/text-generation-inference/issues/2113
|
closed
|
[] | 2024-06-25T03:18:14Z
| 2024-06-28T03:50:10Z
| null |
chenchunhui97
|
huggingface/chat-ui
| 1,302
|
Assistant feature: Send user query as part of template variable GET request
|
I'm trying to integrate RAG as an assistant. I'm thinking of using a template variable that makes a GET request (with the prompt as the request body) to fetch the relevant documents as context. Is this possible (i.e., is there a special variable in the system prompt page for the user query), or is there a better way of doing this?
|
https://github.com/huggingface/chat-ui/issues/1302
|
closed
|
[] | 2024-06-24T22:27:02Z
| 2025-01-02T12:09:23Z
| 2
|
ethayu
|
huggingface/diffusers
| 8,683
|
Why do Diffusers schedulers produce lower quality outputs compared to ComfyUI?
|
### Discussed in https://github.com/huggingface/diffusers/discussions/8682
<sup>Originally posted by **nducthang** June 24, 2024</sup>
Hi,
I'm encountering an issue when comparing the output quality of ComfyUI and Diffusers. I've noticed that the output of Diffusers is consistently lower quality than ComfyUI's in many cases, despite using the same settings and seed. For the Diffusers baseline, I've used: https://github.com/huggingface/diffusers/blob/main/examples/community/lpw_stable_diffusion_xl.py.
Upon closer inspection, I've identified differences in the scheduler/ksampler between the two base codes. I've also observed variations in CLIP Embedding between the two base codes, but in my experiments, this hasn't significantly impacted the output. The main issue seems to lie with the KSampler.
Has anyone else encountered this issue or have any ideas on improving the Scheduler algorithm of Diffusers?
Here are some prompts I've experimented:
Model: RVXL - Size: (896, 1152)
Positive prompt:
```
female, attractive woman, pretty middle-aged woman, thick hair, (((Caucasian, European, Scandinavian female))), ((hazel eyes, HazelEyed)). (Brunette (Light-Brown-Hair)), ((((long rectangular face, elongated face, oblong face shape, angular chiseled face)), ((wide jaw, big strong chin)))). (((1980s magazine advertisement. Living room. CRT Televesion. 1980s aesthetic. 1980s interior design.))) [object Object] . high quality, dim lighting, soft lighting, sharp focus, f5.6, dslr, High Detail, detailed, ((wide shot))
```
Negative prompt:
```
(((male))), (small chin, receding-chin, puffy face), (((Asian, Chinese, Korean, Japanese, Indian, Pakistani, Black, African, Persian, Arab, Middle Eastern, Hispanic, Latino))), (small chin, receding-chin, puffy face), (blurry), (BadDream:1.2), (UnrealisticDream:1.2), ((bad-hands-5)), (strabismus, cross-eyed:1.2), (signature, watermark, name), (worst quality, poor quality, low quality), ((deformed)), (extra limbs), (extra arms), (extra legs), disfigured, malformed, (nude:1.4), (naked:1.4), (nsfw:1.4), (bikini:1.4), (lingerie:1.4), (underwear:1.4), (teen:1.4), (tween:1.4), (teenage:1.4), (kid:1.6), (child:1.6), (topless, shirtless:1.4), (((greyscale))), (cleavage:1.2), (nipples:1.4)
```
|
https://github.com/huggingface/diffusers/issues/8683
|
closed
|
[] | 2024-06-24T14:37:19Z
| 2024-06-25T06:06:12Z
| 20
|
nducthang
|
pytorch/serve
| 3,204
|
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
|
Hi, I've been running models with TorchServe 0.11.0 on SageMaker and noticed the following warning:
`WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.` when starting the Torchserve.
I read that this method was removed in Java8 (https://stackoverflow.com/questions/23808803/sun-reflect-reflection-getcallerclass-alternative?noredirect=1&lq=1). How does lack of support for this method affect performance? What is required to get rid of this warning when running torchserve?
|
https://github.com/pytorch/serve/issues/3204
|
open
|
[
"java"
] | 2024-06-24T13:58:02Z
| 2024-06-26T21:20:17Z
| 1
|
aalbersk
|
pytorch/ao
| 430
|
Understanding 8da4w
|
Hi there,
I'm new to quantization. From my understanding, "8da4w" means that the weights are pre-quantized to 4 bits, and the activations are quantized to 8 bits at runtime. Following this, the GEMM (General Matrix Multiply) operation between weights and activations is computed in the `int8` data type. Do I have this correct?
However, I'm confused by the code for `Int8DynActInt4WeightQuantizer`. The `forward` method of `Int8DynActInt4WeightLinear` calls a method named `per_token_dynamic_quant`, which can be found [here](https://github.com/pytorch/ao/blob/fd9f95d614fa03f09d85d73a2c2740cc647d7b9b/torchao/quantization/utils.py#L436-L458). In this method, the input is first quantized to `int8` and then immediately converted back to its original data type without further processing. I don't understand the purpose of this function. Furthermore, I have launched a program using `Int8DynActInt4WeightQuantizer ` and observed the data types of `x` and `w_dq` in the method `linear_forward_8da4w`, which can be found [here](https://github.com/pytorch/ao/blob/fd9f95d614fa03f09d85d73a2c2740cc647d7b9b/torchao/quantization/GPTQ.py#L800), they both are `float32`. This seems to contradict my understanding of the computations involved in '8da4w'.
I realize that I'm likely missing some fundamental aspects of dynamic quantization. Could anyone kindly clarify this process for me?
Thank you!
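To make sure I'm describing it correctly, here is my own minimal sketch (not the torchao source) of what per-token dynamic int8 "fake" quantization does numerically — quantize to int8 and immediately dequantize, so the matmul itself still runs in floating point; my assumption is that a true int8 kernel only appears once the model is lowered to a backend:
```python
import torch

def per_token_fake_quant(x: torch.Tensor) -> torch.Tensor:
    # Symmetric per-token int8 quantization followed by immediate dequantization.
    qmax = 127
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-6) / qmax
    x_int8 = torch.clamp(torch.round(x / scale), -128, qmax)
    return x_int8 * scale  # back to the original floating-point dtype

x = torch.randn(2, 8)
print((x - per_token_fake_quant(x)).abs().max())  # small quantization error only
```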
|
https://github.com/pytorch/ao/issues/430
|
closed
|
[
"question"
] | 2024-06-24T08:43:44Z
| 2024-07-23T17:32:41Z
| null |
DzAvril
|
pytorch/vision
| 8,503
|
Can we add datatype support for examples under references
|
### 🚀 The feature
Currently the examples under references only support the default datatype (float32). Can we support an argument like --data-type to allow the user to specify the datatype for the model?
### Motivation, pitch
Many users like us often need to run the model in a different datatype, such as float16 or bfloat16. If this argument can be added, it will save a lot of effort.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/vision/issues/8503
|
open
|
[] | 2024-06-24T03:29:04Z
| 2024-07-12T15:09:10Z
| 2
|
wincent8
|
huggingface/alignment-handbook
| 174
|
Question about torch_dtype when runnging run_orpo.py
|
I have been using `run_orpo.py` with my personal data successfully. However, as I use it, I have a question.
When I look at the code for `run_orpo.py`, I see that there is code to match torch_dtype to the dtype of the pretrained model. However, when I actually train and save the model, even if the pretrained model's dtype was `bf16`, it gets changed to `fp32`. Why is this happening?
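My current guess is that mixed-precision training keeps fp32 master weights, so the saved checkpoint ends up fp32 unless it is re-cast. If that is right, a simple post-hoc fix would be something like this sketch (paths are placeholders):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Re-cast an fp32 checkpoint to bf16 after training (placeholder paths).
model = AutoModelForCausalLM.from_pretrained("path/to/orpo-output", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("path/to/orpo-output")
model.save_pretrained("path/to/orpo-output-bf16")
tokenizer.save_pretrained("path/to/orpo-output-bf16")
```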
|
https://github.com/huggingface/alignment-handbook/issues/174
|
closed
|
[] | 2024-06-23T08:28:02Z
| 2024-07-30T05:05:03Z
| 6
|
sylee96
|
huggingface/diffusers
| 8,666
|
Attention api changes no documentation ?
|
How can I see your previous changes to attention?
You have renamed/removed the `_slice_size`, `_sliced_attention`, and `_attention` attributes from the attention module.
I need to know what the alternatives for using them are.
|
https://github.com/huggingface/diffusers/issues/8666
|
closed
|
[] | 2024-06-23T07:08:58Z
| 2024-06-23T11:31:47Z
| 4
|
xalteropsx
|
huggingface/transformers.js
| 819
|
Blog on walkthrough with transformers js
|
### Question
Hey, I am writing this blog post as part of a knowledge-sharing blog series called "Running AI/ML in the client". I am using a Transformers.js example walkthrough in this part to validate some concepts. Can I get some feedback before it goes live? How do we connect?
|
https://github.com/huggingface/transformers.js/issues/819
|
closed
|
[
"question"
] | 2024-06-23T06:06:42Z
| 2024-06-27T19:10:05Z
| null |
ArijitCloud
|
huggingface/trl
| 1,763
|
What is the difference between PPOv2Trainer and PPOTrainer?
|
What is the difference between PPOv2Trainer and PPOTrainer? Also, there are two ppo.py files, trl\examples\scripts\ppo\ppo.py and trl\examples\scripts\ppo.py; can you tell me what is different between them?
|
https://github.com/huggingface/trl/issues/1763
|
closed
|
[] | 2024-06-22T14:48:38Z
| 2024-08-24T09:25:52Z
| null |
mst272
|
pytorch/xla
| 7,326
|
Dear teachers, I can connect to the internet, but I cannot download torch_xla
|
pip install torch_xla[tpu]~=2.3.0 -f https://storage.googleapis.com/libtpu-releases/index.html
ERROR: Could not find a version that satisfies the requirement torch_xla~=2.3.0 (from versions: none)
ERROR: No matching distribution found for torch_xla~=2.3.0
|
https://github.com/pytorch/xla/issues/7326
|
closed
|
[
"question"
] | 2024-06-21T07:42:57Z
| 2025-04-07T12:58:54Z
| null |
zhangwaer
|
huggingface/diffusers
| 8,649
|
SD3 - num_images_per_prompt no longer honoured (throws error)
|
### Describe the bug
With models prior to SD3, the parameter num_images_per_prompt is honoured, enabling generation of several images per prompt. With sd3-medium an error is generated.
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
Note: I have insufficient VRAM to run tests without clearing text_encoder_3 and tokenizer_3 and am not sure how to use the
sd3_medium_incl_clips_t5xxlfp8.safetensors variant in a normal diffusers workflow. It is always possible that clearing the T5-xxl has a side-effect of breaking num_images_per_prompt.
### Reproduction
```
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained(
"stabilityai/stable-diffusion-3-medium-diffusers",
text_encoder_3=None,
tokenizer_3=None,
torch_dtype=torch.float16
)
pipe.to("cuda")
image = pipe(
"A cat holding a sign that says hello world",
negative_prompt="",
num_inference_steps=28,
num_images_per_prompt=2,
guidance_scale=7.0,
).images[0]
image.save("sd3_hello_world-no-T5.png")
```
### Logs
```shell
Traceback (most recent call last):
File "/home/developer/src/hug_test_txt2img_sd3.py", line 12, in <module>
image = pipe(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 778, in __call__
) = self.encode_prompt(
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 413, in encode_prompt
prompt_embeds = torch.cat([clip_prompt_embeds, t5_prompt_embed], dim=-2)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
```
### System Info
- 🤗 Diffusers version: 0.29.0
- Platform: Linux-6.8.0-35-generic-x86_64-with-glibc2.35
- Running on a notebook?: No
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.23.4
- Transformers version: 4.41.2
- Accelerate version: 0.31.0
- PEFT version: 0.11.1
- Bitsandbytes version: not installed
- Safetensors version: 0.4.3
- xFormers version: 0.0.27+133d7f1.d20240619
- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB VRAM
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
|
https://github.com/huggingface/diffusers/issues/8649
|
closed
|
[
"bug"
] | 2024-06-20T11:28:22Z
| 2024-06-29T13:05:28Z
| 4
|
zagglez
|
huggingface/transformers.js
| 814
|
Consultation on the use of the library with chatbot models
|
### Question
Hello, greetings. I'm Vladimir, a programmer working in a web environment with PHP, JS, and AJAX. First, I apologize for my English: my native language is Latin American Spanish, I am not very good at writing English, and I have used a translator. I wanted to ask how I can use this interesting and useful tool to create a chatbot that can respond with personalized information from PDFs. The question is mostly about using the library: how to use the models, both from Hugging Face and downloaded via the script that you share in the documentation, and which models would be the most useful for this task, considering that the bot will have to speak Spanish. I remain attentive.
|
https://github.com/huggingface/transformers.js/issues/814
|
open
|
[
"question"
] | 2024-06-20T03:24:34Z
| 2024-07-29T10:47:24Z
| null |
mate07
|
pytorch/torchtitan
| 412
|
ImportError in LLaMA Training Script
|
When attempting to run the training script for LLaMA with the following command:
`CONFIG_FILE="./train_configs/llama3_8b.toml" ./run_llama_train.sh`
an ImportError is encountered. The specific error message is:
`ImportError: cannot import name 'Partial' from 'torch.distributed._tensor' (/apps/torchtitan/torchtitan/lib/python3.10/site-packages/torch/distributed/_tensor/__init__.py)`
The training script should start without any import errors and utilize the specified configuration file to train the model across 8 GPUs.
The script fails to run due to an ImportError indicating that Partial cannot be imported from torch.distributed._tensor. The error traceback is as follows:
```
Traceback (most recent call last):
  File "/apps/torchtitan/train.py", line 34, in <module>
    from torchtitan.models import model_name_to_cls, model_name_to_tokenizer, models_config
  File "/apps/torchtitan/torchtitan/models/__init__.py", line 7, in <module>
    from torchtitan.models.llama import llama2_configs, llama3_configs, Transformer
  File "/apps/torchtitan/torchtitan/models/llama/__init__.py", line 10, in <module>
    from torchtitan.models.llama.model import ModelArgs, Transformer
  File "/apps/torchtitan/torchtitan/models/llama/model.py", line 17, in <module>
    from torchtitan.models.norms import create_norm
  File "/apps/torchtitan/torchtitan/models/norms.py", line 17, in <module>
    from torch.distributed._tensor import Partial, Replicate, Shard
ImportError: cannot import name 'Partial' from 'torch.distributed._tensor' (/apps/torchtitan/torchtitan/lib/python3.10/site-packages/torch/distributed/_tensor/__init__.py)
```
|
https://github.com/pytorch/torchtitan/issues/412
|
closed
|
[
"question"
] | 2024-06-19T17:45:48Z
| 2024-07-12T16:06:10Z
| null |
viai957
|
huggingface/optimum
| 1,912
|
Could you provide the official onnx model of Qwen-VL-Chat(-Int4)?
|
### Feature request
Qwen-VL-Chat(-Int4) is a useful image-to-text model.
### Motivation
Image-to-text LMM models like Qwen-VL-Chat(-Int4) are very useful.
### Your contribution
Not yet.
|
https://github.com/huggingface/optimum/issues/1912
|
open
|
[
"feature-request",
"quantization"
] | 2024-06-19T08:43:58Z
| 2024-10-09T07:52:54Z
| 0
|
yzq1990
|
pytorch/TensorRT
| 2,940
|
❓ [Question] Is there any plan to support bfloat16 compile
|
## What you have already tried
NVIDIA TensorRT has supported `bf16` precision since tensorrt>=9.2:
- https://github.com/NVIDIA/TensorRT/issues/1883
- https://github.com/AmusementClub/vs-mlrt/issues/64
However, the latest torch_tensorrt (`torch_tensorrt==2.3.0` w/ `tensorrt==10.0.1`) does not support this yet.
Is there any plan to support bfloat16 in future versions? bf16 is very popular in LLM inference.
```python
trt_model = torch_tensorrt.compile(
module=torch.jit.script(model),
inputs=[torch_tensorrt.Input(shape=(bs, seq, dim), dtype=torch.bfloat16)],
enabled_precisions={torch.int8, torch.bfloat16, torch.float32},
calibrator=calibrator,
device={
"device_type": torch_tensorrt.DeviceType.GPU,
"gpu_id": 0,
"dla_core": 0,
"allow_gpu_fallback": True,
"disable_tf32": True,
},
)
```
```
Traceback (most recent call last):
File "/data01/home/zhanglei.me/workspace/tensorrt_example/example_int8.py", line 38, in <module>
trt_model = torch_tensorrt.compile(
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/_compile.py", line 208, in compile
compiled_ts_module: torch.jit.ScriptModule = torchscript_compile(
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/ts/_compiler.py", line 151, in compile
compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/ts/_compile_spec.py", line 208, in _parse_compile_spec
dtype=i.dtype.to(_C.dtype),
File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/_enums.py", line 305, in to
raise TypeError(
TypeError: Provided an unsupported data type as an input data type (support: bool, int32, long, half, float), got: dtype.bf16
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.3.1
- CPU Architecture: x86
- OS (e.g., Linux): linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3 install torch_tensorrt==2.3.0 tensorrt==10.0.1
- Build command you used (if compiling from source): no
- Are you using local sources or building from archives: no
- Python version: 3.10
- CUDA version: 12.2
- GPU models and configuration: Nvidia A100
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/2940
|
closed
|
[
"question"
] | 2024-06-19T06:05:30Z
| 2024-06-25T04:39:59Z
| null |
leeeizhang
|
pytorch/serve
| 3,195
|
How to send a torch array via request
|
I want to send a torch (CUDA) tensor via a Python request to the inference API. Is that possible?
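In case it clarifies what I mean, this is the kind of thing I'm trying to do (a sketch; the URL, model name, and handler snippet are placeholders — the CUDA tensor is moved to CPU before serialization, since device memory can't travel over HTTP):
```python
import io
import requests
import torch

# Client side: serialize the tensor to bytes and POST it to the inference API.
x = torch.randn(3, 224, 224, device="cuda")
buf = io.BytesIO()
torch.save(x.cpu(), buf)  # move to CPU before serializing
resp = requests.post("http://localhost:8080/predictions/my_model", data=buf.getvalue())
print(resp.status_code)

# Handler side (inside a custom handler's preprocess), roughly:
#   tensor = torch.load(io.BytesIO(data[0].get("body"))).to(self.device)
```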
|
https://github.com/pytorch/serve/issues/3195
|
closed
|
[] | 2024-06-18T21:05:58Z
| 2024-06-19T19:21:59Z
| null |
lschaupp
|
huggingface/diffusers
| 8,626
|
More thorough guidance for multiple IP adapter images/masks and a single IP Adapter
|
### Describe the bug
I'm trying to use a single IP adapter with multiple IP adapter images and masks. This section of the docs gives an example of how I could do that: https://huggingface.co/docs/diffusers/v0.29.0/en/using-diffusers/ip_adapter#ip-adapter-masking
The docs provide the following code:
```python
from diffusers.image_processor import IPAdapterMaskProcessor
mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")
output_height = 1024
output_width = 1024
processor = IPAdapterMaskProcessor()
masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"])
pipeline.set_ip_adapter_scale([[0.7, 0.7]]) # one scale for each image-mask pair
face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png")
ip_images = [[face_image1, face_image2]]
masks = [masks.reshape(1, masks.shape[0], masks.shape[2], masks.shape[3])]
generator = torch.Generator(device="cpu").manual_seed(0)
num_images = 1
image = pipeline(
prompt="2 girls",
ip_adapter_image=ip_images,
negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
num_inference_steps=20,
num_images_per_prompt=num_images,
generator=generator,
cross_attention_kwargs={"ip_adapter_masks": masks}
).images[0]
```
One important point that should be highlighted is that images/scales/masks must be _lists of lists_ , otherwise we get the following error: `Cannot assign 2 scale_configs to 1 IP-Adapter`.
That error message is intuitive enough, however this gets confusing in other sections of the documentation, such as the `set_ip_adapter_scale()` function:
```python
# To use original IP-Adapter
scale = 1.0
pipeline.set_ip_adapter_scale(scale)
# To use style block only
scale = {
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
# To use style+layout blocks
scale = {
"down": {"block_2": [0.0, 1.0]},
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
# To use style and layout from 2 reference images
scales = [{"down": {"block_2": [0.0, 1.0]}}, {"up": {"block_0": [0.0, 1.0, 0.0]}}]
pipeline.set_ip_adapter_scale(scales)
```
Is it possible to use the style and layout from 2 reference images _with a single IP Adapter_?
I tried doing something like the following, which _builds on the knowledge of needing to use a list of lists_:
```python
# List of lists to support multiple images/scales/masks with a single IP Adapter
scales = [[{"down": {"block_2": [0.0, 1.0]}}, {"up": {"block_0": [0.0, 1.0, 0.0]}}]]
pipeline.set_ip_adapter_scale(scales)
# OR
# Use layout and style from InstantStyle for one image, but also use a numerical scale value for the other
scale = {
"down": {"block_2": [0.0, 1.0]},
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale([[0.5, scale]])
```
but I get the following error:
```
TypeError: unsupported operand type(s) for *: 'dict' and 'Tensor'\n
At:
/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py(2725): __call__
/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py(549): forward
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py(366): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/transformers/transformer_2d.py(440): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/unets/unet_2d_blocks.py(1288): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/unets/unet_2d_condition.py(1220): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/mod
|
https://github.com/huggingface/diffusers/issues/8626
|
closed
|
[
"bug",
"stale"
] | 2024-06-18T18:06:37Z
| 2024-09-23T11:36:10Z
| 11
|
chrismaltais
|
pytorch/tutorials
| 2,939
|
[BUG] - is torch.compile necessary to use user defined triton kernel
|
### Add Link
https://pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html
### Describe the bug
I think we can call a Triton kernel with torch.compile, but what do we actually get by calling the kernel through torch.compile?
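As a rough sketch of what that looks like (my own minimal example, assuming a recent PyTorch with Triton installed; this is not code from the tutorial): torch.compile can trace through a user-defined Triton kernel launch, so the surrounding PyTorch ops and the kernel end up in a single compiled graph.
```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Classic elementwise-add kernel: each program handles one block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# The same function works eagerly and under torch.compile; the compiled version
# can fuse/cache the surrounding ops around the custom kernel launch.
compiled_add = torch.compile(add)
x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
torch.testing.assert_close(compiled_add(x, y), x + y)
```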
### Describe your environment
none
cc @williamwen42 @msaroufim
|
https://github.com/pytorch/tutorials/issues/2939
|
closed
|
[
"bug",
"question",
"torch.compile"
] | 2024-06-18T16:12:15Z
| 2024-06-18T16:41:31Z
| null |
felixdae
|
huggingface/datasets
| 6,979
|
How can I load partial parquet files only?
|
I have a HUGE dataset, about 14TB, and I am unable to download all of the parquet files; I only want about 100 of them.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I load only shards 000 - 100 out of the 00314 files, i.e. load the dataset partially?
I searched the whole net and didn't find a solution. **This is stupid if they don't support it, and I swear I won't use stupid parquet any more.**
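For reference, one possible way to do this (a sketch; the repo id and the 5-digit shard naming are assumptions inferred from the glob above) is to pass an explicit list of shard paths to `data_files` instead of a single pattern:
```python
from datasets import load_dataset

# Build the list of the first 100 shards explicitly; only these files get downloaded.
shards = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]

ds = load_dataset(
    "user/huge-dataset",            # hypothetical repo id
    data_files={"train": shards},
    split="train",
)
```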
|
https://github.com/huggingface/datasets/issues/6979
|
closed
|
[] | 2024-06-18T15:44:16Z
| 2024-06-21T17:09:32Z
| 12
|
lucasjinreal
|
pytorch/vision
| 8,497
|
Improve empty import time of torchvision
|
### 🚀 The feature
When importing torchvision, a number of libraries are imported by default for more niche functionality of the library. To improve import time, I would favor delaying those imports until they are needed.
### Motivation, pitch
In my case, it is the av library in particular that contributes to the import time:
<img width="2087" alt="image" src="https://github.com/pytorch/vision/assets/2241296/2af05ab0-f97c-44bd-b7f2-fd5111f747d7">
(this assumes that torch, dynamo and onnx are already imported).
The import of `av` can easily be avoided as it is not needed by default.
### Alternatives
_No response_
### Additional context
I checked the code and I found this code here:
```
try:
import av
av.logging.set_level(av.logging.ERROR)
if not hasattr(av.video.frame.VideoFrame, "pict_type"):
av = ImportError(
"""\
Your version of PyAV is too old for the necessary video operations in torchvision.
If you are on Python 3.5, you will have to build from source (the conda-forge
packages are not up-to-date). See
https://github.com/mikeboers/PyAV#installation for instructions on how to
install PyAV on your system.
"""
)
except ImportError:
av = ImportError(
"""\
PyAV is not installed, and is necessary for the video operations in torchvision.
See https://github.com/mikeboers/PyAV#installation for instructions on how to
install PyAV on your system.
"""
)
```
The `pict_type` attribute was added somewhere in the 0.5 range (released around 2020), and 6.0 followed shortly after. So I would suggest changing this check to not import `av` but instead use `importlib` to check the installed version, which would make this import cost go away. This applies to both `torchvision/io/video_reader.py` and `torchvision/io/video.py`. I also wonder whether the logging call is still required, given how much has changed since this code was written.
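A sketch of what that could look like (my suggestion only, not an actual torchvision patch; the 6.0 threshold is taken from the paragraph above): check the installed PyAV version via `importlib.metadata` without importing `av` at module import time.
```python
from importlib.metadata import PackageNotFoundError, version

def _check_av_available() -> None:
    # Look up the installed PyAV version from package metadata; this does not
    # import the av module, so it adds essentially no import-time cost.
    try:
        av_version = version("av")
    except PackageNotFoundError:
        raise ImportError(
            "PyAV is not installed, and is necessary for the video operations in torchvision."
        )
    major = int(av_version.split(".")[0])
    if major < 6:
        raise ImportError(
            "Your version of PyAV is too old for the necessary video operations in torchvision."
        )
```
The real `import av` (and any logging configuration) could then be deferred to the first call of a video function.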
|
https://github.com/pytorch/vision/issues/8497
|
open
|
[] | 2024-06-18T09:24:43Z
| 2024-07-29T12:02:13Z
| 3
|
bschindler
|
huggingface/pytorch-image-models
| 2,211
|
How to Replicate Official Model Accuracy
|
Based on the accuracy provided by the official source, how can one replicate and train these models?
For example, for mobilenetv4_hybrid_large.e600_r384_in1k with a top-1 accuracy of 84.266, where can one find the training hyperparameters such as epochs, scheduler, warmup epochs, learning rate, batch size, and other parameters needed to replicate the model's performance?
|
https://github.com/huggingface/pytorch-image-models/issues/2211
|
closed
|
[
"enhancement"
] | 2024-06-18T05:30:59Z
| 2024-06-24T23:36:45Z
| null |
usergxx
|
huggingface/chat-ui
| 1,290
|
ERROR: Exception in ASGI application
|
Hello everyone, I have the following problem when using the Hugging Face Chat UI with FastChat. How can I change the configuration to fix it? I use npm to start the app in development mode.
Thanks
```
MODELS=`[
{
"name": "Infinirc-7b-Llama2",
"id": "Infinirc-7b-Llama2",
"model": "Infinirc-7b-Llama2",
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024,
"stop": []
},
"endpoints": [{
"type" : "openai",
"baseURL": "http://69.30.85.183:22152/v1",
"accessToken": "x"
}]
}
]`
```
FastChat:
```
2024-06-18 01:07:42 | INFO | stdout | INFO: 59.125.15.126:60166 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
2024-06-18 01:07:42 | ERROR | stderr | ERROR: Exception in ASGI application
2024-06-18 01:07:42 | ERROR | stderr | Traceback (most recent call last):
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
2024-06-18 01:07:42 | ERROR | stderr | result = await app( # type: ignore[func-returns-value]
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
2024-06-18 01:07:42 | ERROR | stderr | return await self.app(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await super().__call__(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
2024-06-18 01:07:42 | ERROR | stderr | raise exc
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.app(scope, receive, _send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 85, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.app(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 65, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | raise exc
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | await app(scope, receive, sender)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 756, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 776, in app
2024-06-18 01:07:42 | ERROR | stderr | await route.handle(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 297, in handle
2024-06-18 01:07:42 | ERROR | stderr | await self.app(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 77, in app
2024-06-18 01:07:42 | ERROR | stderr | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | raise exc
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | await app(scope, receive, sender)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 72, in app
2024-06-18 01:07:42 | ERROR | stderr | response = await func(request)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
2024-06-18 01:07:42 | ERROR | stderr | raw_response = await run_endpoint_function(
2024-06-18 01:07:42 | ERRO
|
https://github.com/huggingface/chat-ui/issues/1290
|
open
|
[
"support"
] | 2024-06-18T02:07:50Z
| 2024-06-23T13:26:59Z
| 1
|
rickychen-infinirc
|
huggingface/autotrain-advanced
| 684
|
Where is the fine-tuned model output?
|
I’m new to using AutoTrain on Hugging Face and I encountered an issue during my first attempt at fine-tuning a model. I have a free account, because I want to see whether I can get something to work before I start paying for training. Here’s a summary of what I did and the problem I’m facing:
Training Configuration:
I trained using Mistral-7B-Instruct-v0.2 and also openai-community/gpt2.
Dataset: I uploaded a tiny JSONL file (24 records) with a single “text” field for training.
Training Parameters: I set the training to run for one epoch.
Training Process:
The training ran for a couple of seconds.
I received a message that the space was paused, which I assumed meant the training had completed.
Issue:
After the training supposedly completed, I can’t find any output files or trained models.
I checked all available tabs and sections in the AutoTrain interface but didn’t see anything labeled “Models,” “Artifacts,” “Results,” or similar.
I reviewed the logs but didn’t find any clear indications of where the output is stored.
I checked my Hugging Face profile under the “Models” heading, but it says “None yet.”
Questions:
Where should I look in the AutoTrain interface to find the trained model and output files?
Are there any additional steps I need to take to ensure the trained model is saved and accessible?
With a free account, I don’t have any GPUs assigned. But is that a problem with only 24 short training samples and one epoch?
Any guidance or tips would be greatly appreciated!
|
https://github.com/huggingface/autotrain-advanced/issues/684
|
closed
|
[] | 2024-06-17T23:01:53Z
| 2024-06-22T03:49:27Z
| null |
RonPisaturo
|
pytorch/torchtitan
| 409
|
DataLoader state is empty for different ranks ?
|
Thanks for your amazing work!
We have been testing the llama3_8b model on the SlimPajama dataset. The training seems to be fine based on the loss curves.
```
16: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 16, expected key dp_rank_16.
28: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 28, expected key dp_rank_28.
5: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 5, expected key dp_rank_5.
20: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 20, expected key dp_rank_20.
27: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 27, expected key dp_rank_27.
2: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 2, expected key dp_rank_2.
19: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 19, expected key dp_rank_19.
30: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 30, expected key dp_rank_30.
23: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 23, expected key dp_rank_23.
21: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 21, expected key dp_rank_21.
17: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 17, expected key dp_rank_17.
18: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 18, expected key dp_rank_18.
1: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 1, expected key dp_rank_1.
26: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 26, expected key dp_rank_26.
31: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 31, expected key dp_rank_31.
12: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 12, expected key dp_rank_12.
10: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is empty for dp rank 10, expected key dp_rank_10.
11: 2024-06-17 01:22:16,615 - root - WARNING - DataLoader state is empty for dp rank 11, expected key dp_rank_11.
14: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 14, expected key dp_rank_14.
15: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 15, expected key dp_rank_15.
13: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 13, expected key dp_rank_13.
29: 2024-06-17 01:22:16,616 - root - WARNING - DataLoader state is empty for dp rank 29, expected key dp_rank_29.
7: 2024-06-17 01:22:16,617 - root - WARNING - DataLoader state is empty for dp rank 7, expected key dp_rank_7.
8: 2024-06-17 01:22:16,617 - root - WARNING - DataLoader state is empty for dp rank 8, expected key dp_rank_8.
4: 2024-06-17 01:22:16,617 - root - WARNING - DataLoader state is empty for dp rank 4, expected key dp_rank_4.
3: 2024-06-17 01:22:16,618 - root - WARNING - DataLoader state is empty for dp rank 3, expected key dp_rank_3.
9: 2024-06-17 01:22:16,618 - root - WARNING - DataLoader state is empty for dp rank 9, expected key dp_rank_9.
6: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 6, expected key dp_rank_6.
22: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 22, expected key dp_rank_22.
24: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 24, expected key dp_rank_24.
25: 2024-06-17 01:22:16,619 - root - WARNING - DataLoader state is empty for dp rank 25, expected key dp_rank_25.
```
What could be the reason for the DataLoader state being empty when loading from the checkpoint?
Note also that the model checkpoints themselves are loaded properly.
|
https://github.com/pytorch/torchtitan/issues/409
|
closed
|
[
"question"
] | 2024-06-17T17:46:42Z
| 2024-11-22T00:00:55Z
| null |
ahatamiz
|
huggingface/transformers
| 31,453
|
How to build and evaluate a vanilla transformer?
|
### Model description
"Attention Is All You Need" is a landmark 2017 research paper authored by eight scientists working at Google, responsible for expanding 2014 attention mechanisms proposed by Bahdanau et al. into a new deep learning architecture known as the transformer with an encoder, cross-attention, and a decoder.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
EncoderDecoderModels are supported via the Hugging Face API, though it isn't possible to evaluate them properly: https://github.com/huggingface/transformers/issues/28721
How is it possible to build and evaluate a vanilla transformer with an encoder, cross-attention, and a decoder in Hugging Face Transformers?
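For the building part, a minimal sketch with randomly initialised weights (the config sizes here are illustrative assumptions, not a recommendation):
```python
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

# A small BERT-style encoder plus a BERT-style decoder with cross-attention,
# combined into a vanilla encoder-decoder transformer trained from scratch.
encoder_config = BertConfig(num_hidden_layers=6)
decoder_config = BertConfig(num_hidden_layers=6, is_decoder=True, add_cross_attention=True)

config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = EncoderDecoderModel(config=config)  # randomly initialised, no pretrained weights

print(sum(p.numel() for p in model.parameters()))
```
Training and evaluation would then go through the usual seq2seq path (labels plus `generate()`), which is exactly where the linked issue about proper evaluation applies.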
|
https://github.com/huggingface/transformers/issues/31453
|
closed
|
[] | 2024-06-17T17:17:11Z
| 2024-11-04T13:56:06Z
| null |
Bachstelze
|
huggingface/parler-tts
| 74
|
What to do with flan-t5 when I want to fine-tune from Mini v0.1 rather than from scratch? Flan-T5 cannot handle my language.
|
https://github.com/huggingface/parler-tts/issues/74
|
open
|
[] | 2024-06-17T06:39:24Z
| 2024-06-17T06:39:24Z
| null |
lyt719
|
|
huggingface/candle
| 2,269
|
How to select which GPU to use
|
We are working with the stable diffusion example. How do we select which GPU device on our system to use for the rendering?
Thanks.
|
https://github.com/huggingface/candle/issues/2269
|
open
|
[] | 2024-06-16T19:53:18Z
| 2024-06-21T19:29:31Z
| null |
donkey-donkey
|
pytorch/pytorch
| 128,698
|
ONNX docs missing info about how to remove custom domains
|
### 📚 The doc issue
In the docs about exporting to ONNX [here](https://pytorch.org/tutorials/beginner/onnx/export_simple_model_to_onnx_tutorial.html?highlight=torch%20onnx%20dynamo_export) there is no mention of how to remove the exported functions. The use of aten operators defined as functions creates a problem when converting to TensorRT. When visualizing with Netron, the functions are composed of simpler official ai.onnx operators, which are supported by TensorRT, whereas the custom exported aten operators are not. There should be a way to save the models without using functions and custom operators and just export the raw operators, even if that means more repetition, because it would make models exportable to TensorRT.
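One possible workaround sketch (hedged: this assumes onnx>=1.16, which, as far as I know, ships an `onnx.inliner` utility; the file names below are placeholders): inline the exporter's local functions so the saved model only contains plain ai.onnx ops before handing it to TensorRT.
```python
import onnx
from onnx import inliner

# Load the dynamo-exported model that contains local functions / custom domains,
# inline every local function into plain nodes, and save the flattened model.
model = onnx.load("exported_with_dynamo.onnx")
inlined = inliner.inline_local_functions(model)
onnx.save(inlined, "exported_inlined.onnx")
```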
### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree
|
https://github.com/pytorch/pytorch/issues/128698
|
closed
|
[
"module: onnx",
"module: docs",
"triaged"
] | 2024-06-14T13:01:35Z
| 2025-09-07T22:35:57Z
| null |
Jerry-Master
|
huggingface/chat-ui
| 1,283
|
SELF_SIGNED_CERT_IN_CHAIN
|
I am experiencing this error. I'm on a corporate VPN; I tried turning it off and still get the same error. The TLS reject setting is set to false as well.
SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error errno SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error request to https://registry.npmjs.org/failed, reason: self-signed certificate in certificate chain
|
https://github.com/huggingface/chat-ui/issues/1283
|
open
|
[
"support"
] | 2024-06-14T04:03:48Z
| 2024-06-17T06:50:29Z
| 2
|
solanki-aman
|
pytorch/torchtitan
| 399
|
How to use nsys?
|
Is there a recommended way to use nsys / Nsight? I know there's a profiling hook for using the PyTorch profiler, but I'm wondering how to use nsys instead.
Can I use these APIs:
```
with torch.autograd.profiler.emit_nvtx():
profiler.start()
y = x.view(1, -1)
z = x.to(memory_format=torch.channels_last)
zz = z.reshape(1, -1)
profiler.stop()
```
Furthermore, I'm not sure which of the below I'm supposed to use:
```
import torch.cuda.profiler as profiler
with torch.autograd.profiler.emit_nvtx():
```
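For what it's worth, a hedged sketch of how these two APIs are usually combined (my own example, not torchtitan code): `emit_nvtx()` adds NVTX ranges for autograd ops, while `torch.cuda.profiler.start()/stop()` gate the capture window when the script is launched under `nsys profile --capture-range=cudaProfilerApi`.
```python
import torch
import torch.cuda.profiler as profiler

x = torch.randn(1024, 1024, device="cuda", requires_grad=True)

with torch.autograd.profiler.emit_nvtx():
    profiler.start()              # cudaProfilerStart: nsys begins capturing here
    y = (x @ x).sum()
    y.backward()
    torch.cuda.synchronize()      # make sure the kernels land inside the window
    profiler.stop()               # cudaProfilerStop: nsys stops capturing
```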
|
https://github.com/pytorch/torchtitan/issues/399
|
closed
|
[
"enhancement"
] | 2024-06-13T18:14:52Z
| 2024-11-22T00:00:02Z
| null |
vedantroy
|
huggingface/diffusers
| 8,527
|
How to add ControlNet in SD3?
|
I currently use the inpainting ControlNet with SDXL, because its UNet makes it easy to support ControlNet. I am curious how to add a ControlNet to SD3, which uses a transformer-based model structure.
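A hedged sketch of what this could look like (this assumes a newer diffusers release that ships SD3 ControlNet support, i.e. `SD3ControlNetModel` and `StableDiffusion3ControlNetPipeline`; the checkpoint ids and the conditioning image URL are illustrative):
```python
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline
from diffusers.utils import load_image

# Load a ControlNet trained for the SD3 transformer and plug it into the pipeline.
controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny = load_image("https://example.com/canny_edge.png")  # hypothetical conditioning image
image = pipe(
    prompt="a photo of a cat",
    control_image=canny,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
).images[0]
```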
|
https://github.com/huggingface/diffusers/issues/8527
|
closed
|
[] | 2024-06-13T10:14:38Z
| 2024-08-24T04:20:28Z
| null |
appleyang123
|
huggingface/lerobot
| 266
|
Question - how to handle additional sensory input
|
Hi guys, sorry to bother you again :wink:
and thanks for your work, I'm very excited by Lerobot!
I'm currently collecting some teleop data where the robot has tactile sensors on the fingertips, as well as an FT sensor on the wrist, and I was wondering how I would best integrate this into a LeRobot Dataset.
One way would be to concatenate them into the `observation.state`, as this is the hardcoded location for non-image observations. But I want to train both with and without the tactile and FT sensors as inputs, to quantify the benefits of the extra sensors, so I would then have to make separate datasets for each sensor combination, which feels cumbersome.
Are there any plans in the near future to support 'dynamic configuration' of the state inputs for the policies? Or is my best option to just create different datasets for each combination?
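If it helps to make the first option concrete, a hedged sketch (purely illustrative; the function and argument names are my own assumptions, not the LeRobot API): build `observation.state` by concatenating the proprioceptive state with the optional extra sensors, so the same recording can be sliced differently per experiment.
```python
import torch

def build_state(joint_pos, tactile=None, ft_wrench=None, use_tactile=True, use_ft=True):
    # Concatenate whichever sensor streams are enabled into one flat state vector.
    parts = [joint_pos]
    if use_tactile and tactile is not None:
        parts.append(tactile)
    if use_ft and ft_wrench is not None:
        parts.append(ft_wrench)
    return torch.cat(parts, dim=-1)

# Example: 7-DoF arm, 2 fingertip tactile arrays of 16 taxels, 6-axis wrench.
state = build_state(torch.zeros(7), torch.zeros(32), torch.zeros(6))
print(state.shape)  # torch.Size([45])
```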
|
https://github.com/huggingface/lerobot/issues/266
|
closed
|
[
"question",
"dataset",
"stale"
] | 2024-06-13T08:39:26Z
| 2025-10-23T02:29:29Z
| null |
tlpss
|
huggingface/nanotron
| 196
|
how to run benchmark tests
|
Hi,
I can build this project with your commands, but there is no "pyaottriton" module when running the benchmark tests such as benchmark_forward.py or benchmark_backward.py.
Is there anything I missed?
Thanks
|
https://github.com/huggingface/nanotron/issues/196
|
closed
|
[] | 2024-06-13T08:31:06Z
| 2024-06-13T08:38:24Z
| null |
jinsong-mao
|