| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers
| 7,813
|
I feel confused about this TODO. How do we pass timesteps as tensors?
|
https://github.com/huggingface/diffusers/blob/235d34cf567e78bf958344d3132bb018a8580295/src/diffusers/models/unets/unet_2d_condition.py#L918
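A minimal sketch of what "passing timesteps as tensors" can look like from the caller's side (the names `unet`, `latents`, and `text_embeds` are assumptions, not from the issue): keeping the timestep as a tensor that already lives on the model's device avoids the CPU-to-GPU sync the TODO comment refers to.
```python
import torch

# Assumes a loaded UNet2DConditionModel (`unet`), noisy latents, and text
# encoder hidden states, all already on CUDA.
timestep = torch.tensor([999], dtype=torch.long, device="cuda")  # a tensor, not a Python int

noise_pred = unet(latents, timestep, encoder_hidden_states=text_embeds).sample
```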
|
https://github.com/huggingface/diffusers/issues/7813
|
closed
|
[
"stale"
] | 2024-04-29T03:46:21Z
| 2024-11-23T00:19:17Z
| null |
ghost
|
pytorch/torchchat
| 544
|
[DOCS, TESTS] quantization option table & quantization option table testing
|
Can we pin down the details for this? The update is too generous and doesn't represent the swiss cheese that is the support matrix.
I seem to recall some operators didn't have the full set of group sizes - the group sizes are just an enumeration of powers of 2; did we test them? (I can't say the other table was useful w.r.t. what to expect in eager, compile, AOTI, ET. We list compile as a separate category from eager, notwithstanding that torch.compile is supported by a bunch of different compilers, all of which may have a different answer... I guess much like ET ~ XNNPACK, compile ~ Inductor.)
We should also ensure that we have a test for each claimed supported config in periodic.yml
|
https://github.com/pytorch/torchchat/issues/544
|
closed
|
[] | 2024-04-29T03:37:26Z
| 2024-05-12T22:58:14Z
| 2
|
mikekgfb
|
pytorch/torchchat
| 543
|
[PAPERCUTS] error message repeated ad nauseam
|
I get it -- maybe the error is `aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'`
Seriously, though - I got it after the first error message about that, and certainly after the 5th. I'll assume it's emitted for each call site? It's probably onerous to keep track of every error, especially at the point where the error is emitted. But I presume the message goes through a common reporting site... maybe we can just keep track of the previous error message, and if the current one is the same as the immediately preceding one, start a counter and emit
[repeated n times] when the error changes.
Or we compute a hash of each message, and add up counts for all messages after the first error, dumping a second instance at the end with a count?
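A minimal sketch of the "remember the previous message and emit [repeated n times]" idea, using a standard `logging.Filter` (the class name and the wiring are hypothetical, not existing torchchat code):
```python
import logging

class RepeatCollapseFilter(logging.Filter):
    """Drop consecutive duplicate messages; prefix the next new message
    with a count of how many duplicates were suppressed."""

    def __init__(self):
        super().__init__()
        self._last = None
        self._count = 0

    def filter(self, record):
        msg = record.getMessage()
        if msg == self._last:
            self._count += 1
            return False  # suppress the duplicate
        if self._count:
            record.msg = f"[repeated {self._count} times]\n{msg}"
            record.args = None  # message is already fully formatted
        self._last, self._count = msg, 0
        return True

handler = logging.StreamHandler()
handler.addFilter(RepeatCollapseFilter())
logging.getLogger().addHandler(handler)
```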
One more thing -- can we put a filename and an error line? I recall that Soumith said in another meeting that we have that info for IR traces?
```
(py311) mikekg@mikekg-mbp torchchat % python export.py --checkpoint-path ${MODEL_PATH} --temperature 0 --quantize '{"linear:int4": {"groupsize": 128}}' --output-pte mode.pte
[...]
%aten__weight_int4pack_mm_default_42 : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten._weight_int4pack_mm.default](args = (%aten_view_copy_default_144, %b_output_weight, 128, %b_output_scales_and_zeros), kwargs = {})
%aten_view_copy_default_145 : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.view_copy.default](args = (%aten__weight_int4pack_mm_default_42, [1, 1, 32000]), kwargs = {})
return (getitem_1, getitem_2, getitem_4, getitem_5, getitem_7, getitem_8, getitem_10, getitem_11, getitem_13, getitem_14, getitem_16, getitem_17, aten_view_copy_default_145)
WARNING:executorch.backends.xnnpack.partition.xnnpack_partitioner:Nothing can be partitioned!
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor']'
INFO:root:Failed converting '<EdgeOpOverload: aten._weight_int4pack_mm.default>: schema = aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor q
|
https://github.com/pytorch/torchchat/issues/543
|
closed
|
[] | 2024-04-29T03:22:01Z
| 2024-08-30T15:19:47Z
| 1
|
mikekgfb
|
pytorch/torchchat
| 542
|
linear:int4 issues - RuntimeError: Missing out variants: {'aten::_weight_int4pack_mm'}
|
```
(py311) mikekg@mikekg-mbp torchchat % python export.py --checkpoint-path ${MODEL_PATH} --temperature 0 --quantize '{"linear:int4": {"groupsize": 128}}' --output-pte mode.pte
[...]
Traceback (most recent call last):
File "/Users/mikekg/qops/torchchat/export.py", line 111, in <module>
main(args)
File "/Users/mikekg/qops/torchchat/export.py", line 91, in main
export_model_et(
File "/Users/mikekg/qops/torchchat/export_et.py", line 98, in export_model
export_program = edge_manager.to_executorch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/executorch/exir/program/_program.py", line 899, in to_executorch
new_gm_res = p(new_gm)
^^^^^^^^^
File "/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/torch/fx/passes/infra/pass_base.py", line 40, in __call__
res = self.call(graph_module)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/executorch/exir/passes/__init__.py", line 423, in call
raise RuntimeError(f"Missing out variants: {missing_out_vars}")
RuntimeError: Missing out variants: {'aten::_weight_int4pack_mm'}
```
The current failure is expected -- somewhat, anyway, after adding the packed call to _weight_int4pack_mm -- but it is documented incorrectly in docs/quantization.md. I think @lucylq most recently updated the specs to streamline them, but that glossed over the reality that we have a bit of a swiss cheese situation. That's sad and not pretty to show, but it is sadly our current reality.
I'll try to patch up most execution modes, but we really do need tests. And for performance, maybe the plan should be to hook up _weight_int4pack_mm to an asymmetric version of a8w4dq (as per https://github.com/pytorch/torchchat/issues/541).
Of course that's also not quite "correct", but how many modes and operators can we cover, and with how much documentation? FP operators already have a bit of a spread in terms of accuracy based on rounding effects, so maybe that's justifiable...
|
https://github.com/pytorch/torchchat/issues/542
|
open
|
[] | 2024-04-29T03:03:40Z
| 2024-07-30T17:36:20Z
| 0
|
mikekgfb
|
pytorch/serve
| 3,120
|
If micro_batch_size is set to 1, is model inference still batch processing?
|
### 📚 The doc issue
I set the batchSize of the registered model to 10, and then set the micro_batch_size to 1. So for model inference, will it wait for 10 requests to complete preprocessing in parallel before aggregating them for inference?
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/3120
|
open
|
[] | 2024-04-29T02:59:58Z
| 2024-04-29T18:48:28Z
| 1
|
pengxin233
|
pytorch/torchchat
| 533
|
[FEATURE REQUEST] 8b weight quantization on ET
|
What is the best we can do for int8 channel-wise quantization in XNNPACK (and elsewhere in ET) today? I see ATM we use` F.linear(x, weight.to(dtype=x.dtype)) * scales` as implementation in [ET examples](https://www.internalfb.com/code/fbsource/[7e7c1690e5ac43a50e5e17e41321005d126e3faf]/fbcode/executorch/examples/models/llama2/source_transformation/quantize.py?lines=374) and [torchchat](https://github.com/pytorch/torchchat/blob/main/quantize.py#L401).
This function works well for CUDA using AOTI (because AOTI + Triton merge the conversion into the operation), but not so much for CPUs where this forces allocation of a full buffer of float weights. Do we recognize this for XNNPACK and convert into a more efficient primitive? If not, what should we do to do this?
On PT CPU, we now have [torch.ops.aten._weight_int8pack_mm](https://github.com/pytorch/torchchat/blob/main/quantize.py#L403). If we don't already, can we recognize the idiom and convert it? Or should we generate an executorch op during quantization that is more efficient?
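For reference, a side-by-side sketch of the two CPU paths discussed above. Shapes are assumptions for illustration (`x` is [batch, in] float, `weight_int8` is [out, in] int8, `scales` is [out]), and whether `_weight_int8pack_mm` is available depends on the PyTorch build.
```python
import torch
import torch.nn.functional as F

def int8_linear_dequant(x, weight_int8, scales):
    # The pattern quoted above: materializes a full float copy of the weights.
    # Fine on CUDA where AOTI/Triton fuse the conversion, wasteful on CPU.
    return F.linear(x, weight_int8.to(dtype=x.dtype)) * scales

def int8_linear_packed(x, weight_int8, scales):
    # CPU fast path mentioned above; avoids allocating the float weight buffer.
    return torch.ops.aten._weight_int8pack_mm(x, weight_int8, scales)
```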
|
https://github.com/pytorch/torchchat/issues/533
|
closed
|
[] | 2024-04-28T17:02:49Z
| 2024-07-21T22:14:01Z
| 7
|
mikekgfb
|
huggingface/datasets
| 6,846
|
Unimaginable super slow iteration
|
### Describe the bug
Assuming there is a dataset with 52,000 sentences, each with a length of 500, it takes 20 seconds to extract a single sentence from the dataset... Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
num_cols = 500
random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
s=time.time()
d={'random_input':random_input,'random_output':random_output}
dataset=datasets.Dataset.from_dict(d)
print('from dict',time.time()-s)
print(dataset)
for i in range(len(dataset)):
aa=time.time()
a,b=dataset['random_input'][i],dataset['random_output'][i]
print(time.time()-aa)
```
corresponding output
```bash
from dict 9.215498685836792
Dataset({
features: ['random_input', 'random_output'],
num_rows: 52000
})
19.129778146743774
19.329464197158813
19.27668261528015
19.28557538986206
19.247620582580566
19.624247074127197
19.28673791885376
19.301053047180176
19.290496110916138
19.291821718215942
19.357765197753906
```
### Expected behavior
Under normal circumstances, iteration should be very fast, as it does not involve anything beyond getting items.
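A sketch of a common workaround (not part of the original report; it assumes the `dataset` object built in the script above): `dataset['random_input'][i]` re-decodes the entire column on every access, whereas indexing the row first only decodes one row from the Arrow table.
```python
for i in range(len(dataset)):
    row = dataset[i]  # decodes a single row instead of the whole column
    a, b = row['random_input'], row['random_output']
```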
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
|
https://github.com/huggingface/datasets/issues/6846
|
closed
|
[] | 2024-04-28T05:24:14Z
| 2024-05-06T08:30:03Z
| 1
|
rangehow
|
pytorch/torchchat
| 528
|
[DOCS] runner documentation
|
1 - Add llama2/3 options to docs/runner from https://github.com/pytorch/torchchat/pull/486
2 - Also does the file need a name change because it covers both build and run for the runners?
3 - Do we have the necessary documentation - how to build the tokenizer.bin?
That we have to use a different tokenizer for SentencePiece than the Python runners? We can grab some of that from docs/ADVANCED-USERS.md and move it here.
4 - should we actually split this file?
5 - we're using stories15M here; should we upgrade to llama3?
|
https://github.com/pytorch/torchchat/issues/528
|
closed
|
[] | 2024-04-27T22:24:30Z
| 2024-07-21T21:38:37Z
| 1
|
mikekgfb
|
pytorch/torchchat
| 526
|
[Better Engineering] Is no KV cache still a thing?
|
I put the code there originally, but... wondering whether running models without KV cache is still a thing?
We don't really offer a way to build it without KV Cache...
https://github.com/pytorch/torchchat/blame/e26c5289453ccac7f4b600babcb40e30634bdeb2/runner/run.cpp#L175-L185
```
#ifndef __KV_CACHE__
// @lint-ignore CLANGTIDY facebook-hte-LocalUncheckedArrayBounds
ManagedTensor tokens_managed(
&(s->toks[pos]),
/*ignored*/ sizeof(int64_t) * (pos + 1),
{1, 1},
ScalarType::Long);
#else // __KV_CACHE__
ManagedTensor tokens_managed(
token_buffer, sizeof(int64_t), {1, 1}, ScalarType::Long);
#endif
```
|
https://github.com/pytorch/torchchat/issues/526
|
closed
|
[] | 2024-04-27T22:07:53Z
| 2024-04-28T14:30:48Z
| 0
|
mikekgfb
|
huggingface/lerobot
| 112
|
Do we want to use `transformers`?
|
I'd really go against establishing transformers as a dependency of lerobot and importing their whole library just to use the `PretrainedConfig` (or even other components). I think in this case it's very overkill and wouldn't necessarily fit our needs right now. The class is ~1000 lines of code - which we can copy into our lib anyway - and looks way more mature and feature-rich than what — IMO — we need and have with the rest of our code base.
Copying code is even part of [Transformers' philosophy](https://huggingface.co/blog/transformers-design-philosophy) — which we *do* copy.
_Originally posted by @aliberts in https://github.com/huggingface/lerobot/pull/101#discussion_r1581860998_
|
https://github.com/huggingface/lerobot/issues/112
|
closed
|
[
"question"
] | 2024-04-27T17:24:20Z
| 2024-04-30T11:59:25Z
| null |
qgallouedec
|
pytorch/tutorials
| 2,849
|
Transformer tutorial multiplying with sqrt(d_model)
|
https://github.com/pytorch/tutorials/blob/5e772fa2bf406598103e61e628a0ca0b8e471bfa/beginner_source/translation_transformer.py#L135
src = self.embedding(src) * math.sqrt(self.d_model)
shouldn't this be
src = self.embedding(src) / math.sqrt(self.d_model)
at least that is the impression I got when reading the "Attention is all you need" paper.
Or is there some new research finding that multiplying is better?
cc @sekyondaMeta @svekars @kit1980 @subramen @albanD
|
https://github.com/pytorch/tutorials/issues/2849
|
closed
|
[
"easy",
"docathon-h1-2024"
] | 2024-04-27T07:45:10Z
| 2024-06-11T09:15:26Z
| 3
|
RogerJL
|
pytorch/TensorRT
| 2,782
|
❓ [Question] Unexpected exception _Map_base::at during PTQ
|
## ❓ Question
I am attempting to execute [PTQ](https://pytorch.org/TensorRT/user_guide/ptq.html). During the compiling process, I get the following exception:
```
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Finalize: %142 : Tensor = aten::matmul(%x, %143) # /fsx_home/homes/srdecny/meaning/vocoder/hifigan/hifigan/vec2enc.py:84:0 Set kernel index: 5
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Total number of generated kernels selected for the engine: 7
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 0 CASK_STATIC
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 1 CASK_STATIC
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 2 CASK_STATIC
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 3 TRT_SERIALIZABLE:generatedNativePointwise
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 4 TRT_SERIALIZABLE:generatedNativePointwise
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 5 CASK_STATIC
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Kernel: 6 CASK_STATIC
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Disabling unused tactic source: EDGE_MASK_CONVOLUTIONS
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Disabling unused tactic source: JIT_CONVOLUTIONS
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Engine generation completed in 1.64955 seconds.
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Total per-runner device persistent memory is 0
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Total per-runner host persistent memory is 73616
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Allocated activation device memory of size 33692160
INFO: [Torch-TensorRT TorchScript Conversion Context] - [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +32, now: CPU 0, GPU 888 (MiB)
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - CUDA lazy loading is enabled.
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Calculating Maxima
INFO: [Torch-TensorRT TorchScript Conversion Context] - Starting Calibration.
INFO: [Torch-TensorRT TorchScript Conversion Context] - Post Processing Calibration data in 8.6e-07 seconds.
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Assigning tensor scales: (Unnamed Layer* 164) [Concatenation]_output using (Unnamed Layer* 164) [Concatenation]_output [
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 1: Unexpected exception _Map_base::at
Traceback (most recent call last):
File "/fsx_home/homes/srdecny/meaning/vojta_notebooks/trt_quant_single_v1.py", line 435, in <module>
quanted = trt_decoder = torch_tensorrt.compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx_home/homes/srdecny/meaning/env_bender6_3.11/lib/python3.11/site-packages/torch_tensorrt/_compile.py", line 185, in compile
compiled_ts_module: torch.jit.ScriptModule = torchscript_compile(
^^^^^^^^^^^^^^^^^^^^
File "/fsx_home/homes/srdecny/meaning/env_bender6_3.11/lib/python3.11/site-packages/torch_tensorrt/ts/_compiler.py", line 151, in compile
compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: [Error thrown at core/conversion/conversionctx/ConversionCtx.cpp:169] Building serialized network failed in TensorRT
```
I don't really know how to proceed from here. What does this exception indicate?
The compiling code is roughly this:
```
calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(
dloader,
cache_file="./encoder_calibrator.cache",
use_cache=False,
algo_type=torch_tensorrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
device=DEVICE
)
inputs = model.dummy_inputs()
trace = torch.jit.trace(model, inputs, check_trace=False, strict=False)
signature = torch_tensorrt.Input(shape=inputs.shape, dtype=inputs.dtype)
torch_tensorrt.compile(
trace,
input_signature=signature,
enabled_precisions={torch.float, torch.int8, torch.half},
calibrator=calibrator,
truncate_long_and_double=True,
)
```
`inputs` is a single float `Tensor` (although very large). Unfortunately, I can't share the model.
## What you have already tried
All I managed to find online was [this](https://forums.developer.nvidia.com/t/tensorrt-int8-calibration-error-indexerror-map-base-at/169511/5) issue where someone indicates that the calibration dataloader might be empty. However, the following runs without any exception:
```
dummy_inputs = model.dummy_inputs()
trace = torch.jit.trace(model, inputs, check_trace=False, strict=False)
trace(dummy_inputs) # the traced model still works
for input in dloader:
trace(input) # the model also works with batches from the calibration dataloader
```
Additionally, running the c
|
https://github.com/pytorch/TensorRT/issues/2782
|
closed
|
[
"question"
] | 2024-04-26T18:29:58Z
| 2025-03-27T12:42:10Z
| null |
srdecny
|
pytorch/xla
| 6,979
|
Support non-traceable Custom Ops
|
## 🚀 Feature
`torch.export` supports exporting blackbox custom ops; however, we fail to export them to StableHLO using the `exported_program_to_stablehlo` API.
https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html#custom-ops
## Motivation
If we have non-traceable Python code in the custom ops, we can't export the model to a StableHLO program. This means we won't be able to cover as much of the model when exporting through StableHLO.
## Pitch
Here is the example PyTorch code:
```
import torch
from torch.library import Library, impl, impl_abstract
m = Library("my_custom_library", "DEF")
m.define("custom_op(Tensor input) -> Tensor")
@impl(m, "custom_op", "CompositeExplicitAutograd")
def custom_op(x):
    raise Exception("DON'T GO HERE")
    return torch.relu(x)
@impl_abstract("my_custom_library::custom_op")
def custom_op_meta(x):
    return torch.empty_like(x)
class CustomOpExample(torch.nn.Module):
    def forward(self, x):
        x = torch.sin(x)
        x = torch.ops.my_custom_library.custom_op(x)
        x = torch.cos(x)
        return x
em = torch.export.export(CustomOpExample(), (torch.randn(3, 3),))
em.graph_module.graph.print_tabular()
from torch_xla.stablehlo import exported_program_to_stablehlo
stablehlo_program = exported_program_to_stablehlo(em)
print(stablehlo_program.get_stablehlo_text())
```
As you can see, `torch.export` runs fine and gives us this fx graph, without caring what is inside the `custom_op` impl.
```
opcode name target args kwargs
------------- --------- ----------------------------------- ------------ --------
placeholder arg0_1 arg0_1 () {}
call_function sin aten.sin.default (arg0_1,) {}
call_function custom_op my_custom_library.custom_op.default (sin,) {}
call_function cos aten.cos.default (custom_op,) {}
output output output ((cos,),) {}
```
`exported_program_to_stablehlo` fails because it runs the `custom_op` and hits `Exception`.
When I comment out the line `raise Exception("DON'T GO HERE")`, `exported_program_to_stablehlo` works fine; however, it traces into `custom_op`, converting `relu` to `stablehlo.maximum`:
```
module @IrToHlo.8 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {
func.func @main(%arg0: tensor<3x3xf32>) -> tensor<3x3xf32> {
%0 = stablehlo.constant dense<0.000000e+00> : tensor<3x3xf32>
%1 = stablehlo.sine %arg0 : tensor<3x3xf32>
%2 = stablehlo.maximum %1, %0 : tensor<3x3xf32>
%3 = stablehlo.cosine %2 : tensor<3x3xf32>
return %3 : tensor<3x3xf32>
}
}
```
I wonder if we can support exporting blackbox custom ops all the way to StableHLO without executing the op. We want to see something like this in the output,
```
module @IrToHlo.8 attributes {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {
func.func @main(%arg0: tensor<3x3xf32>) -> tensor<3x3xf32> {
%0 = stablehlo.constant dense<0.000000e+00> : tensor<3x3xf32>
%1 = stablehlo.sine %arg0 : tensor<3x3xf32>
%2 = stablehlo.custom_call {name = "my_custom_library.custom_op"}} : (tensor<3x3xf32>) -> tensor<3x3xf32>
%3 = stablehlo.cosine %2 : tensor<3x3xf32>
return %3 : tensor<3x3xf32>
}
}
```
|
https://github.com/pytorch/xla/issues/6979
|
closed
|
[
"stablehlo"
] | 2024-04-26T16:53:16Z
| 2024-09-03T04:13:05Z
| 4
|
thong3le
|
huggingface/evaluate
| 582
|
How to pass generation_kwargs to the TextGeneration evaluator?
|
How can I pass the generation_kwargs to the TextGeneration evaluator?
|
https://github.com/huggingface/evaluate/issues/582
|
open
|
[] | 2024-04-25T16:09:46Z
| 2024-04-25T16:09:46Z
| null |
swarnava112
|
huggingface/chat-ui
| 1,074
|
503 error
|
Hello, I was trying to install the chat-ui.
I searched for documentation on how to handle this on my VPS.
I get a 500 error after the build, and it is not working over https although allow_insecure=false.
|
https://github.com/huggingface/chat-ui/issues/1074
|
closed
|
[
"support"
] | 2024-04-25T15:34:07Z
| 2024-04-27T14:58:45Z
| 1
|
abdalladorrah
|
huggingface/chat-ui
| 1,073
|
Support for Llama-3-8B-Instruct model
|
hi,
The model meta-llama/Meta-Llama-3-8B-Instruct is unlisted - not sure when it will be supported?
https://github.com/huggingface/chat-ui/blob/3d83131e5d03e8942f9978bf595a7caca5e2b3cd/.env.template#L229
thanks.
|
https://github.com/huggingface/chat-ui/issues/1073
|
open
|
[
"question",
"models",
"huggingchat"
] | 2024-04-25T14:03:35Z
| 2024-04-30T05:47:05Z
| null |
cszhz
|
huggingface/chat-ui
| 1,072
|
[v0.8.3] serper, serpstack API, local web search not working
|
## Context
I have serper.dev API key, serpstack API key and I have put it correctly in my `.env.local` file.
<img width="478" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/5082893a-7ecd-4ab5-9cb9-059875118dcd">
## Issue
However, even if I enable Web Search, it still does not reach out to those APIs, and shows me "an error occurred" on the Web Search part.
<img width="931" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/da96c121-89e0-402b-8e93-33c9e6709c71">
I don't see calls reaching Serper and SerpStack as well.
<img width="1365" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/7230b1a0-2567-424f-8884-8fc53417fa41">
<img width="1302" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/b35c1a7f-1c2c-4c8a-9c46-5c2171f73f9b">
It was working for a bit on `v0.8.2`, but then it stopped working there as well. Now, for `v.0.8.3`, it's not working at all. Am I missing something? I have tried using either of those APIs too, but it still does not work.
Please help.
|
https://github.com/huggingface/chat-ui/issues/1072
|
closed
|
[
"support"
] | 2024-04-25T13:24:40Z
| 2024-05-09T16:28:15Z
| 14
|
adhishthite
|
huggingface/diffusers
| 7,775
|
How to input gradio settings in Python
|
Hi.
I use **realisticStockPhoto_v20** on Fooocus with **sdxl_film_photography_style** lora and I really like the results.
Fooocus and other gradio implementations come with settings inputs that I want to utilize in Python as well. In particular, if this is my code:
```
device = "cuda"
model_path = "weights/realisticStockPhoto_v20.safetensors"
pipe = StableDiffusionXLInpaintPipeline.from_single_file(
model_path,
torch_dtype=torch.float16,
num_in_channels=4).to(device)
pipe.load_lora_weights(".", weight_name="weights/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors", adapter_name="film")
```
how can I set the following settings/parameters in code? (A sketch covering a few of them follows the list.)
- Negative Prompt
- Preset (initial, lcm, default, lighting, realistic, sai, anime)
- Performance (quality, speed, extreme speed, lightning)
- width-height
- image number
- output format
- Style (Fooocus v2, fooocus photography, fooocus negative, foocus enhance, etc.)
- Base Model
- Refiner
- Lora 1,2,3,4,5,...
- Guidance scale
- Image sharpness
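A minimal sketch of how a few of the listed settings map onto the diffusers pipeline call used above. This is not Fooocus itself: presets, performance modes, and Fooocus styles have no single diffusers equivalent, and `init_image` / `mask` are assumed, pre-prepared PIL images, since the inpaint pipeline requires them.
```python
# Per-LoRA weights (Lora 1,2,3,...): "film" is the adapter_name loaded above.
pipe.set_adapters(["film"], adapter_weights=[0.8])

images = pipe(
    prompt="film photography style portrait",
    negative_prompt="blurry, low quality",   # Negative Prompt
    image=init_image,
    mask_image=mask,
    width=1024, height=1024,                 # width-height
    num_images_per_prompt=2,                 # image number
    guidance_scale=7.0,                      # Guidance scale
    num_inference_steps=30,
).images
images[0].save("output.png")                 # output format
```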
|
https://github.com/huggingface/diffusers/issues/7775
|
closed
|
[] | 2024-04-25T08:43:20Z
| 2024-11-20T00:07:26Z
| null |
levoz92
|
huggingface/chat-ui
| 1,069
|
CohereForAI ChatTemplate
|
Now that there is official TGI support for CohereForAI/c4ai-command-r-v01, how do we use the chat template found in the tokenizer config for the UI? Or alternatively, is it possible to add the correct template for Cohere to PROMPTS.md?
|
https://github.com/huggingface/chat-ui/issues/1069
|
open
|
[] | 2024-04-25T05:45:35Z
| 2024-04-25T05:45:35Z
| 0
|
yanivshimoni89
|
huggingface/transformers.js
| 727
|
Preferred citation of Transformers.js
|
### Question
Love the package, and am using it in research - I am wondering, does there exist a preferred citation format for the package to cite it in papers?
|
https://github.com/huggingface/transformers.js/issues/727
|
open
|
[
"question"
] | 2024-04-24T23:07:20Z
| 2024-04-24T23:21:13Z
| null |
ludgerpaehler
|
pytorch/pytorch
| 124,887
|
How to catch NCCL collective timeout in Python
|
## Issue description
Currently, there are several error handling modes ([link](https://github.com/pytorch/pytorch/blob/bc117898f18e8a698b00823f57c19b2d874b93ba/torch/csrc/distributed/c10d/ProcessGroupNCCL.hpp#L114-L126)) for when NCCL collectives timeout. These error handling modes can be set via `TORCH_NCCL_ASYNC_ERROR_HANDLING`/`NCCL_ASYNC_ERROR_HANDLING`. My current observation on single/multi-host CUDA environments using NCCL distributed backend is that when a timeout exception is raised at the C++ level (when `TORCH_NCCL_ASYNC_ERROR_HANDLING=1`), this exception propagates through a few try/catch blocks, but eventually is left unhandled, resulting in the Python processes terminating via SIGABRT/SEGFAULT.
Question: Is it possible, without many modifications to torch, to catch the error raised at the C++ level within my Python torch script?
Based on digging around, I don't think it is possible (open to any suggestions). I've done some experimentation to the PyTorch source code locally by adding some logic based on [Python docs](https://docs.python.org/3.10/extending/extending.html#intermezzo-errors-and-exceptions) to [ProcessGroupNCCL.cpp](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp) such that the exception can be caught in Python. However, `#include <Python.h>` in [ProcessGroupNCCL.cpp](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp) results in `fatal error: Python.h: No such file or directory` when building torch from source.
Question: Are there explicit reasons why I shouldn't add python to NCCL logic?
## Code example
TODO if needed
## System Info
TODO if needed
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @d4l3k
|
https://github.com/pytorch/pytorch/issues/124887
|
closed
|
[
"needs reproduction",
"oncall: distributed"
] | 2024-04-24T22:27:43Z
| 2024-05-01T06:16:25Z
| null |
gkroiz
|
huggingface/diarizers
| 4
|
How to save the finetuned model as a .bin file?
|
Hi,
I finetuned the pyannote-segmentation model for my usecase but it is saved as a model.safetensors file. Can I convert it to a pytorch_model.bin file? I am using whisperx to create speaker-aware transcripts and .safetensors isn't working with that library. Thanks!
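A minimal conversion sketch (file names are illustrative; it assumes the fine-tuned weights were saved as a single `model.safetensors`):
```python
import torch
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")
torch.save(state_dict, "pytorch_model.bin")
```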
|
https://github.com/huggingface/diarizers/issues/4
|
closed
|
[] | 2024-04-24T20:50:19Z
| 2024-04-30T21:02:32Z
| null |
anuragrawal2024
|
pytorch/torchchat
| 460
|
First generated token not being displayed in chat mode sometimes.
|
What is your system prompt?
I am superman
What is your prompt?
How can i save the world?
, up, and away! As Superman, you're uniquely equipped
Seems like 'up' in up, up, and away is being lost. This happens with most responses.
|
https://github.com/pytorch/torchchat/issues/460
|
closed
|
[] | 2024-04-24T19:00:40Z
| 2024-04-24T22:13:06Z
| 0
|
JacobSzwejbka
|
pytorch/executorch
| 3,303
|
How can I convert llama3 safetensors to the pth file needed to use with executorch?
|
Fine-tunes of Llama3 usually only have safetensors uploaded. In order to compile a Llama3 model following the tutorial, I need the original pth checkpoint file.
Is there a way to convert the safetensors to the checkpoint file?
|
https://github.com/pytorch/executorch/issues/3303
|
closed
|
[
"enhancement",
"help wanted",
"high priority",
"triage review"
] | 2024-04-24T14:20:17Z
| 2024-05-30T03:29:23Z
| null |
l3utterfly
|
huggingface/transformers.js
| 725
|
How to choose a language's dialect when using `automatic-speech-recognition` pipeline?
|
### Question
Hi, so I was originally using the transformers library (Python version) in my backend, but when refactoring my application for scale, it made more sense to move my implementation of whisper from the backend to the frontend (for my specific use case). So I was thrilled when I saw that transformers.js supported whisper via the `automatic-speech-recognition` pipeline. However, I'm a little confused by the implementation, and the documentation left me with the question in the title.
How to choose a language's dialect when using `automatic-speech-recognition` pipeline?
In the python implementation of whisper, you don't have to specify the language being spoken as long as you're using the correct model size for multilingual support. But from your examples on transformers.js, it seems like you do in the js implementation.
```
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/french-audio.mp3';
const output = await transcriber(url, { language: 'french', task: 'transcribe' });
// { text: " J'adore, j'aime, je n'aime pas, je déteste." }
```
However there's no list of supported languages, beyond what you can find on the whisper github repo. That's usually not a problem. But how do you deal with a language like Chinese, that has two main dialects; Mandarin and Cantonese. In python, I didn't have to worry about it, but in js, it seems to be a potential issue.
Please help. Any guidance will be appreciated.
|
https://github.com/huggingface/transformers.js/issues/725
|
closed
|
[
"question"
] | 2024-04-24T09:44:38Z
| 2025-11-06T20:36:01Z
| null |
jquintanilla4
|
huggingface/text-embeddings-inference
| 248
|
how to support gpu version 10.1 rather than 12.2
|
### Feature request
how to support gpu version 10.1 rather than 12.2
### Motivation
how to support gpu version 10.1 rather than 12.2
### Your contribution
how to support gpu version 10.1 rather than 12.2
|
https://github.com/huggingface/text-embeddings-inference/issues/248
|
closed
|
[] | 2024-04-24T08:49:45Z
| 2024-04-26T13:02:44Z
| null |
fanqiangwei
|
huggingface/diffusers
| 7,766
|
IP-Adapter FaceID Plus: how-to-use questions
|
https://github.com/huggingface/diffusers/blob/9ef43f38d43217f690e222a4ce0239c6a24af981/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L492
## error msg:
pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
AttributeError: 'list' object has no attribute 'to'
hi!
I'm having some problems using the IP-Adapter FaceID Plus. Can you help me answer these questions? Thank you very much.
1. First question: what should I pass in the `ip_adapter_image` parameter of the `prepare_ip_adapter_image_embeds` function?
2. Second question: what problem is caused by the following code not matching between the merge link below and the example in the ip_adapter.md file?
This is the merge link:
https://github.com/huggingface/diffusers/pull/7186#issuecomment-1986961595
Differential code:
```
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda"))
```
@yiyixuxu @fabiorigano
## os:
diffusers==diffusers-0.28.0.dev0
## this is my code:
```
# @FileName:StableDiffusionIpAdapterFaceIDTest.py
# @Description:
# @Author:dyh
# @Time:2024/4/24 11:45
# @Website:www.xxx.com
# @Version:V1.0
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from insightface.app import FaceAnalysis
from transformers import CLIPVisionModelWithProjection
model_path = '../../../aidazuo/models/Stable-diffusion/stable-diffusion-v1-5'
clip_path = '../../../aidazuo/models/CLIP-ViT-H-14-laion2B-s32B-b79K'
ip_adapter_path = '../../../aidazuo/models/IP-Adapter-FaceID'
ip_img_path = '../../../aidazuo/jupyter-script/test-img/vermeer.png'
def extract_face_features(image_lst: list, input_size: tuple):
    # Extract Face features using insightface
    ref_images = []
    app = FaceAnalysis(name="buffalo_l",
                       root=ip_adapter_path,
                       providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
    app.prepare(ctx_id=0, det_size=input_size)
    for img in image_lst:
        image = cv2.cvtColor(np.asarray(img), cv2.COLOR_BGR2RGB)
        faces = app.get(image)
        image = torch.from_numpy(faces[0].normed_embedding)
        ref_images.append(image.unsqueeze(0))
    ref_images = torch.cat(ref_images, dim=0)
    return ref_images
ip_adapter_img = Image.open(ip_img_path)
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
clip_path,
torch_dtype=torch.float16,
use_safetensors=True
)
pipe = StableDiffusionPipeline.from_pretrained(
model_path,
variant="fp16",
safety_checker=None,
image_encoder=image_encoder,
torch_dtype=torch.float16).to("cuda")
adapter_file_lst = ["ip-adapter-faceid-plus_sd15.bin"]
adapter_weight_lst = [0.5]
pipe.load_ip_adapter(ip_adapter_path, subfolder=None, weight_name=adapter_file_lst)
pipe.set_ip_adapter_scale(adapter_weight_lst)
face_id_embeds = extract_face_features([ip_adapter_img], ip_adapter_img.size)
clip_embeds = pipe.prepare_ip_adapter_image_embeds(ip_adapter_image=[ip_adapter_img],
ip_adapter_image_embeds=None,
device='cuda',
num_images_per_prompt=1,
do_classifier_free_guidance=True)
pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
pipe.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False # True if Plus v2
generator = torch.manual_seed(33)
images = pipe(
prompt='a beautiful girl',
ip_adapter_image_embeds=clip_embeds,
negative_prompt="",
num_inference_steps=30,
num_images_per_prompt=1,
generator=generator,
width=512,
height=512).images
print(images)
```
|
https://github.com/huggingface/diffusers/issues/7766
|
closed
|
[] | 2024-04-24T07:56:38Z
| 2024-11-20T00:02:30Z
| null |
Honey-666
|
huggingface/peft
| 1,673
|
How to set Lora_dropout=0 when loading trained peft model for inference?
|
### System Info
peft==0.10.0
transformers==4.39.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class Linear(nn.Module, LoraLayer):
    def forward(self, x: torch.Tensor, *args: Any, **kwargs: Any) -> torch.Tensor:
        self._check_forward_args(x, *args, **kwargs)
        adapter_names = kwargs.pop("adapter_names", None)
        if self.disable_adapters:
            if self.merged:
                self.unmerge()
            result = self.base_layer(x, *args, **kwargs)
        elif adapter_names is not None:
            result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs)
        elif self.merged:
            result = self.base_layer(x, *args, **kwargs)
        else:
            result = self.base_layer(x, *args, **kwargs)
            torch_result_dtype = result.dtype
            for active_adapter in self.active_adapters:
                if active_adapter not in self.lora_A.keys():
                    continue
                lora_A = self.lora_A[active_adapter]
                lora_B = self.lora_B[active_adapter]
                dropout = self.lora_dropout[active_adapter]
                scaling = self.scaling[active_adapter]
                x = x.to(lora_A.weight.dtype)
                if not self.use_dora[active_adapter]:
                    result = result + lora_B(lora_A(dropout(x))) * scaling
                else:
                    x = dropout(x)
                    result = result + self._apply_dora(x, lora_A, lora_B, scaling, active_adapter)
            result = result.to(torch_result_dtype)
        return result
```
### Expected behavior
We can see that `lora_dropout` in the forward function works the same way in both training and inference mode.
|
https://github.com/huggingface/peft/issues/1673
|
closed
|
[] | 2024-04-24T07:47:19Z
| 2024-05-10T02:22:17Z
| null |
flyliu2017
|
pytorch/torchchat
| 450
|
[Feature Request] Support for delegate information in torchchat
|
@lucylq can you please add the delegate summary info you added to ET's llama2/export_llama_lib to export_et.py?
Can you add a line or two about XNNPACK delegate (probably just a link to some text on the ET website?) and how to interpret the operator stats in docs/ADVANCED-USERS.md as well?
Thanks so much!
cc: @iseeyuan
|
https://github.com/pytorch/torchchat/issues/450
|
closed
|
[
"enhancement"
] | 2024-04-24T06:19:02Z
| 2024-04-30T00:29:06Z
| 0
|
mikekgfb
|
pytorch/vision
| 8,394
|
Run all torchvision models in one script.
|
### 🚀 The feature
Is there a test script that can run all the models?
### Motivation, pitch
Hi, I am testing a model migration script from CUDA to SYCL, and I would like to test it on the torchvision model set. I would like to know: do we have a test script that can run all models in torchvision, like run.py [code](https://github.com/pytorch/benchmark/blob/main/run.py) in torchbenchmark? Thanks.
### Alternatives
_No response_
### Additional context
_No response_
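A minimal sketch of iterating over torchvision's model registry (restricted to classification models, since detection/segmentation/video models need different inputs; some architectures may also expect inputs larger than 224x224):
```python
import torch
import torchvision
from torchvision.models import get_model, list_models

for name in list_models(module=torchvision.models):   # classification models only
    model = get_model(name, weights=None).eval()
    with torch.inference_mode():
        out = model(torch.randn(1, 3, 224, 224))
    print(name, tuple(out.shape))
```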
|
https://github.com/pytorch/vision/issues/8394
|
closed
|
[] | 2024-04-24T01:39:23Z
| 2024-04-29T10:18:17Z
| 1
|
leizhenyuan
|
pytorch/torchchat
| 430
|
[Feature Request] centralize measurement code
|
@malfet said in https://github.com/pytorch/torchchat/pull/426
This code is repeated thrice in this PR. Can we have something like
```
with report_block_time("Time to load model"):
model = _load_model(builder_args, only_config=True)
device_sync(device=builder_args.device)
```
Might be a good component for build/utils.py - item for post-release.
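A minimal sketch of such a helper (the name `report_block_time` follows the quoted suggestion; the exact reporting mechanism is an assumption):
```python
import time
from contextlib import contextmanager

@contextmanager
def report_block_time(label: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.2f} seconds")
```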
cc: @metascroy
|
https://github.com/pytorch/torchchat/issues/430
|
closed
|
[
"enhancement"
] | 2024-04-23T22:26:16Z
| 2024-05-12T21:32:58Z
| 0
|
mikekgfb
|
huggingface/optimum
| 1,826
|
Phi3 support
|
### Feature request
Microsoft's new phi3 model, in particular the 128K context mini model, is not supported by Optimum export.
Error is:
"ValueError: Trying to export a phi3 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type phi3 to be supported natively in the ONNX export."
### Motivation
Phi3-mini is potentially very significant as it has a large context but a small size. This could be used in lots of scenarios if it has good performance.
### Your contribution
Unlikely I could do a PR as ONNX work is not my forte.
|
https://github.com/huggingface/optimum/issues/1826
|
closed
|
[] | 2024-04-23T15:54:21Z
| 2024-05-24T13:53:08Z
| 4
|
martinlyons
|
huggingface/datasets
| 6,830
|
Add a doc page for the convert_to_parquet CLI
|
Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova
|
https://github.com/huggingface/datasets/issues/6830
|
closed
|
[
"documentation"
] | 2024-04-23T09:49:04Z
| 2024-04-25T10:44:11Z
| 0
|
severo
|
pytorch/serve
| 3,103
|
How to pass parameters from preprocessing to postprocessing when using micro-batch operations
|
### 📚 The doc issue
I have a variable that is obtained by parsing the image data in pre-processing, but it is not an input to the model. I want to pass it to post-processing and return it together with the results. I would like to know how to pass it from pre-processing to post-processing.
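One common pattern (a sketch, not TorchServe's documented micro-batching API) is to stash the side-channel values on the handler instance during `preprocess` and read them back in `postprocess`; whether this pairing survives micro-batch splitting and reordering is an assumption worth verifying. `self._parse` is a hypothetical helper.
```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class MyHandler(BaseHandler):
    def preprocess(self, data):
        tensors, self._extra = [], []
        for row in data:
            tensor, meta = self._parse(row)  # hypothetical parsing helper
            tensors.append(tensor)
            self._extra.append(meta)
        return torch.stack(tensors)

    def postprocess(self, inference_output):
        # Return one entry per request, pairing each result with its metadata.
        return [
            {"prediction": out.tolist(), "meta": meta}
            for out, meta in zip(inference_output, self._extra)
        ]
```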
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/3103
|
closed
|
[
"triaged"
] | 2024-04-23T03:17:05Z
| 2024-04-29T02:49:49Z
| null |
pengxin233
|
huggingface/transformers.js
| 723
|
404 when trying Qwen in V3
|
### Question
This is probably just because V3 is a work in progress, but I wanted to make sure.
When trying to run Qwen 1.5 - 0.5B it works with the V2 script, but when swapping to V3 I get a 404 not found.
```
type not specified for model. Using the default dtype: q8.
GET https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/resolve/main/onnx/model_quantized.onnx 404 (Not Found)
```
It seems V3 is looking for a file that was renamed 3 months ago.
[Rename onnx/model_quantized.onnx to onnx/decoder_model_merged_quantized.onnx](https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/commit/09e055ac27002bb954137751b31376de79ae17a5)
I've tried setting `dtype` to 16 and 32, which does change the URL it tries to get, but those URL's also do not exist :-D
e.g. `https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/resolve/main/onnx/model_fp16.onnx` when using `dtype: 'fp16'`.
Is there something I can do to make V3 find the correct files?
(I'm still trying to find that elusive small model with a large context size to do document summarization with)
|
https://github.com/huggingface/transformers.js/issues/723
|
open
|
[
"question"
] | 2024-04-22T19:14:17Z
| 2024-05-28T08:26:09Z
| null |
flatsiedatsie
|
huggingface/diffusers
| 7,740
|
How to get config of single_file
|
Hi,
Is there any way to get the equivalent of model_index.json from a single_file?
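One workaround sketch (paths are illustrative): load the single checkpoint and save it back in the multi-folder layout, which writes a `model_index.json` alongside the per-component configs.
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file("model.safetensors")
pipe.save_pretrained("converted/")  # converted/model_index.json now exists
```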
|
https://github.com/huggingface/diffusers/issues/7740
|
closed
|
[] | 2024-04-22T14:00:21Z
| 2024-04-22T23:26:50Z
| null |
suzukimain
|
pytorch/torchchat
| 372
|
[Release] Documentation is sparse.
|
What does "the following models are supported" mean? Ostensibly you can load other llama-like language models, as long as you have a params.json and they fit into the architectural parameters?
the preamble explains it supports "Android (Devices that support XNNPACK)" - how do I know that as a user?
"Supporting both GGUF fp32/16 " - also Q4_0 and Q6_0
"Export
Compiles a model and saves it to run later." - and how do I do this? It's just presented as "here's export"; now go figure out what to do with a DSO or a PTE?
Should we say tested - where do we discuss how to add new models?
|
https://github.com/pytorch/torchchat/issues/372
|
closed
|
[] | 2024-04-22T08:17:04Z
| 2024-04-25T18:47:09Z
| 1
|
mikekgfb
|
pytorch/torchchat
| 364
|
[Release][documentation] Docs Regression: documentation for export_et / install_et broken
|
From chat:
@iseeyuan
> Separate question: When I tried python torchchat.py export stories15M --output-pte-path stories15M.pte, I got Export with executorch requested but ExecuTorch could not be loaded.
If I run the culprit line, from export_et import export_model as export_model_et, I got this stack, [P1219614729](https://www.internalfb.com/intern/paste/P1219614729/)
Is it a known issue?
@kimishpatel
> Might be unrelated but did you run scripts/install_et.sh?
@iseeyuan
> I got "scripts/install_et.sh: line 61: TORCHCHAT_ROOT: unbound variable". Should I run it with any argument?
> nvm, I should add prefix of TORCHCHAT_ROOT
@kimishpatel
> Yeah just export TORCHCHAT_ROOT={pwd} or something. it used to be in readme at https://github.com/pytorch/torchchat/. but dont see it anymore
@kimishpatel Should this go in Executorch documentation or in Torchchat docs?
cc: @GregoryComer @byjlw @orionr
|
https://github.com/pytorch/torchchat/issues/364
|
closed
|
[] | 2024-04-22T04:01:22Z
| 2024-04-24T02:32:50Z
| 1
|
mikekgfb
|
pytorch/torchchat
| 357
|
runner-et build documentation broken
|
The runner build information in our documentation is in even worse shape than the ci.
@shoumikhin
> anyhow just followed the readme and then tried that cmake command, got [P1219498869](https://www.internalfb.com/intern/paste/P1219498869/)
```
cmake -S ./runner-et -B et-build/cmake-out -G Ninja
-- Using ET BUILD DIR: --[et-build]--
-- Using ET BUILD DIR: --[et-build]--
-- The C compiler identification is AppleClang 15.0.0.15000309
-- The CXX compiler identification is AppleClang 15.0.0.15000309
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode_15.3.0_15E204a_fb.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode_15.3.0_15E204a_fb.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- TORCHCHAT_ROOT=""
-- Looking for excutorch in /et-build/install/lib/cmake/ExecuTorch
CMake Error at CMakeLists.txt:29 (find_package):
Could not find a package configuration file provided by "executorch" with
any of the following names:
executorchConfig.cmake
executorch-config.cmake
Add the installation prefix of "executorch" to CMAKE_PREFIX_PATH or set
"executorch_DIR" to a directory containing one of the above files. If
"executorch" provides a separate development package or SDK, be sure it has
been installed.
-- Configuring incomplete, errors occurred!
```
|
https://github.com/pytorch/torchchat/issues/357
|
closed
|
[] | 2024-04-21T21:37:26Z
| 2024-05-12T21:38:59Z
| 4
|
mikekgfb
|
pytorch/torchchat
| 356
|
runner, runner-et and runner-aoti documentation
|
Add a description of the runner/run.cpp
highlight that it's only a few lines of C++ code that need to be different for PyTorch AOTI and PyTorch ET.
Might also check how many lines of llama2.c we avoid having to write by autogenerating llama.{pte,so}
maybe @shoumikhin and Hansong (@cbilgin can you put the right git reference for him) can add some text on how to
adapt / re-use the code for integrating LLMs into an app (using their iOS/Android as an example)
cc: @orionr @metascroy @larryliu0820 @shoumikhin @cbilgin
|
https://github.com/pytorch/torchchat/issues/356
|
closed
|
[] | 2024-04-21T21:30:59Z
| 2024-04-25T07:57:47Z
| 2
|
mikekgfb
|
pytorch/torchchat
| 354
|
[Feature Request] add Dr. CI (when this repository goes public)
|
Now that we're a real pytorch project and in the pytorch repo, can we have @pytorch-bot build the same summaries for pytorch/torchchat as it does for pytorch/pytorch? I find those exceedingly helpful to navigate.
https://github.com/pytorch/pytorch/pull/124570#issuecomment-2068152908
🔗 Helpful Links
🧪 See artifacts and rendered test results at [hud.pytorch.org/pr/124570](https://hud.pytorch.org/pr/124570)
📄 Preview [Python docs built from this PR](https://docs-preview.pytorch.org/pytorch/pytorch/124570/index.html)
📄 Preview [C++ docs built from this PR](https://docs-preview.pytorch.org/pytorch/pytorch/124570/cppdocs/index.html)
❓ Need help or want to give feedback on the CI? Visit the [bot commands wiki](https://github.com/pytorch/pytorch/wiki/Bot-commands) or our [office hours](https://github.com/pytorch/pytorch/wiki/Dev-Infra-Office-Hours)
Note: Links to docs will display an error until the docs builds have been completed.
⏳ 33 Pending, 2 Unrelated Failures
As of commit https://github.com/pytorch/pytorch/commit/2fe671d38ce391f8de80611f1ccbcf6f3e912faf with merge base https://github.com/pytorch/pytorch/commit/fd90991790b4cdf66a076711844ca620669dcc04 (image):
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
cc: @seemethere @malfet
|
https://github.com/pytorch/torchchat/issues/354
|
closed
|
[
"enhancement"
] | 2024-04-21T21:01:47Z
| 2024-05-13T17:29:28Z
| 4
|
mikekgfb
|
pytorch/torchchat
| 347
|
[Release] Seems like we get a bit of a garbage output?
|
Maybe this has to do with how we leverage the start and end tokens for prompt and response, but I feel like I'm getting garbage output?
Steps to reproduce:
1. Run `python torchchat.py chat stories15M`
2. Enter `Can you tell me about your day?` as the prompt
3. I then see the following result
```
What is your prompt?
Can you tell me about your day?
was very tired and needed to sleep. No matter how hard you tried, I couldn't keep up with you," said the voice.
Lily was surprised. She had never heard such a voice before. She asked, "What is the song?"
The voice replied, "It brings you joy. It brings you a star."
Lily was very excited and wanted to know more about the star. So, she asked the voice, "What is the song?"
The voice said, "The song might bring you something special. You should hope and you will remember to dream. I will always remember the beautiful song you had heard and tell you."
Lily smiled in understanding. She thanked the voice and went back to sleep.
The next morning, Lily woke up and found the beautiful song she had heard earlier. She was so happy and thankful to the friendly voice. Once upon a time, there was a
```
4. Note that the result is clipped (` was...` as the start) and also didn't go to the end token
5. Also wasn't interactive, which I called out in https://github.com/pytorch/torchchat/issues/346
Expected:
1. A reasonable chat with the LLM
cc @byjlw @mikekgfb
|
https://github.com/pytorch/torchchat/issues/347
|
closed
|
[] | 2024-04-21T18:19:45Z
| 2024-04-22T21:13:17Z
| 1
|
orionr
|
pytorch/torchchat
| 346
|
[Release] Chat only responds to one line of text?
|
I would expect chat to be interactive, but it isn't for me right now.
Steps to reproduce:
1. Run `python torchchat.py chat stories15M`
2. Enter some text like "Hello"
3. Notice that you get a response, but then the command exits
Expected:
1. I'd be able to continue chatting with the model until I hit Ctrl-C or something
cc @byjlw @mikekgfb
|
https://github.com/pytorch/torchchat/issues/346
|
closed
|
[] | 2024-04-21T18:16:01Z
| 2024-04-25T07:58:45Z
| 2
|
orionr
|
pytorch/torchchat
| 345
|
[Feature request] Allow for GPU and MPS as defaults on machines that support it?
|
Given that we won't see good performance without GPU enabled for machines that support CUDA, should we make sure we select `gpu`, `mps` and then `cpu` in that order for `chat` and `generate` commands?
Is this potentially a blocker for full launch?
cc @malfet @mikekgfb @dbort @byjlw
|
https://github.com/pytorch/torchchat/issues/345
|
closed
|
[
"enhancement"
] | 2024-04-21T17:54:06Z
| 2024-04-30T06:31:55Z
| 2
|
orionr
|
pytorch/torchchat
| 344
|
[Resolve] Force requirements.txt or README.md to install PyTorch nightlies?
|
Given that we won't see good performance with the release version of PyTorch, should we update requirements.txt and/or README.md to have people install nightlies?
Is this potentially a blocker for full launch?
cc @malfet @mikekgfb @dbort @byjlw
|
https://github.com/pytorch/torchchat/issues/344
|
closed
|
[] | 2024-04-21T17:52:19Z
| 2024-04-22T13:58:30Z
| 4
|
orionr
|
pytorch/torchchat
| 336
|
[Mitigated, pending confirmation/closure] Review update documentation for GPTQ
|
https://github.com/pytorch/torchchat/edit/main/docs/quantization.md
Please update the documentation to include all necessary options and information to use GPTQ with eager execution and export.
cc: @jerryzh168 @HDCharles
|
https://github.com/pytorch/torchchat/issues/336
|
closed
|
[] | 2024-04-21T08:19:38Z
| 2024-04-25T17:13:46Z
| 0
|
mikekgfb
|
huggingface/diffusers
| 7,724
|
RuntimeError: Error(s) in loading state_dict for AutoencoderKL: Missing Keys! How to solve?
|
### Describe the bug
I am trying to get a LoRA to run locally on my computer by using this code: https://github.com/hollowstrawberry/kohya-colab and changing it to a local format. When I get to the loading of the models, it gives an error. It seems that the AutoEncoder model has changed, but I do not know how to adjust this or solve this issue in any of the files. I am a very amateur coder; could someone still help me out?
### Reproduction
Here is the code: https://github.com/hollowstrawberry/kohya-colab
### Logs
```shell
Traceback (most recent call last):
File "/Users/veravanderburg/Loras/kohya-trainer/train_network_wrapper.py", line 9, in <module>
train(args)
File "/Users/veravanderburg/Loras/kohya-trainer/train_network.py", line 168, in train
text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype, accelerator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/veravanderburg/Loras/kohya-trainer/library/train_util.py", line 3149, in load_target_model
text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(
^^^^^^^^^^^^^^^^^^^
File "/Users/veravanderburg/Loras/kohya-trainer/library/train_util.py", line 3115, in _load_target_model
text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(args.v2, name_or_path, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/veravanderburg/Loras/kohya-trainer/library/model_util.py", line 873, in load_models_from_stable_diffusion_checkpoint
info = vae.load_state_dict(converted_vae_checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKL:
Missing key(s) in state_dict: "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", "decoder.mid_block.attentions.0.to_out.0.bias".
Unexpected key(s) in state_dict: "encoder.mid_block.attentions.0.key.bias", "encoder.mid_block.attentions.0.key.weight", "encoder.mid_block.attentions.0.proj_attn.bias", "encoder.mid_block.attentions.0.proj_attn.weight", "encoder.mid_block.attentions.0.query.bias", "encoder.mid_block.attentions.0.query.weight", "encoder.mid_block.attentions.0.value.bias", "encoder.mid_block.attentions.0.value.weight", "decoder.mid_block.attentions.0.key.bias", "decoder.mid_block.attentions.0.key.weight", "decoder.mid_block.attentions.0.proj_attn.bias", "decoder.mid_block.attentions.0.proj_attn.weight", "decoder.mid_block.attentions.0.query.bias", "decoder.mid_block.attentions.0.query.weight", "decoder.mid_block.attentions.0.value.bias", "decoder.mid_block.attentions.0.value.weight".
```
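The key names in the error suggest the checkpoint uses the old diffusers attention naming (`query`/`key`/`value`/`proj_attn`) while the installed `AutoencoderKL` expects the new names (`to_q`/`to_k`/`to_v`/`to_out.0`). A minimal, untested remapping sketch based only on those key names (the helper is hypothetical, not part of kohya or diffusers):
```python
# Hypothetical fix sketch: rename old-style VAE attention keys to the new names
# before calling vae.load_state_dict(...). Derived only from the error above.
RENAMES = {
    ".query.": ".to_q.",
    ".key.": ".to_k.",
    ".value.": ".to_v.",
    ".proj_attn.": ".to_out.0.",
}

def remap_vae_attention_keys(state_dict):
    remapped = {}
    for name, tensor in state_dict.items():
        for old, new in RENAMES.items():
            name = name.replace(old, new)
        remapped[name] = tensor
    return remapped

# info = vae.load_state_dict(remap_vae_attention_keys(converted_vae_checkpoint))
```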
### System Info
that command does not work for me
### Who can help?
@saya
|
https://github.com/huggingface/diffusers/issues/7724
|
closed
|
[
"bug"
] | 2024-04-19T13:27:17Z
| 2024-04-22T08:45:24Z
| null |
veraburg
|
huggingface/optimum
| 1,821
|
Idefics2 Support in Optimum for ONNX export
|
### Feature request
With reference to the new Idefics2 model (https://huggingface.co/HuggingFaceM4/idefics2-8b):
I would like to export it to ONNX, which is currently not possible.
Please enable conversion support. Current error (with transformers installed from Git via pip):
```
Traceback (most recent call last):
File "/usr/local/bin/optimum-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/optimum/commands/optimum_cli.py", line 163, in main
service.run()
File "/usr/local/lib/python3.10/dist-packages/optimum/commands/export/onnx.py", line 265, in run
main_export(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 352, in main_export
onnx_export_from_model(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1048, in onnx_export_from_model
raise ValueError(
ValueError: Trying to export a idefics2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type idefics2 to be supported natively in the ONNX export.
```
### Motivation
The model is good and I would like to export it to ONNX as soon as possible.
### Your contribution
-
|
https://github.com/huggingface/optimum/issues/1821
|
open
|
[
"feature-request",
"onnx"
] | 2024-04-19T07:12:41Z
| 2025-02-18T19:25:11Z
| 8
|
gtx-cyber
|
pytorch/pytorch
| 124,452
|
How to use system cuda/cudnn
|
### 🚀 The feature, motivation and pitch
I have a machine with a CUDA/cuDNN-compatible ROCm device.
```
$ nvcc --version
HIPHSA: Author SUGON
HIP version: 5.4.23453
Cuda compilation tools, release 11.8, V11.8.89
clang version 15.0.0 (http://10.15.3.7/dcutoolkit/driverruntime/llvm-project.git 1be90618e508074abc746ab4963d7ad92710d6c5)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /public/software/compiler/dtk-23.10.1/llvm/bin
```
The cuda/cudnn is installed:
```
cuda]$ ll
总用量 30
drwxr-xr-x 3 root root 4096 12月 19 14:20 bin
-rw-r--r-- 1 root root 634 12月 6 20:31 env.sh
drwxr-xr-x 3 root root 4096 12月 19 14:20 extras
lrwxrwxrwx 1 root root 28 12月 19 14:21 include -> targets/x86_64-linux/include
lrwxrwxrwx 1 root root 24 12月 19 14:21 lib64 -> targets/x86_64-linux/lib
drwxr-xr-x 3 root root 4096 12月 19 14:20 nvvm
drwxr-xr-x 5 root root 4096 12月 19 14:21 samples
drwxr-xr-x 3 root root 4096 12月 19 14:21 src
drwxr-xr-x 3 root root 4096 12月 19 14:21 targets
drwxr-xr-x 2 root root 4096 12月 19 14:21 tools
-rw-r--r-- 1 root root 20 12月 6 20:31 version.txt
```
I then install pytorch 2.2 with cuda 11.8 by:
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
But when I import torch, it can't find a CUDA device:
```
$ python
Python 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.device_count()
0
```
I think the problem is that PyTorch uses its own bundled CUDA/cuDNN runtime libraries, but I want it to use the system CUDA.
I have set CUDA_HOME and LD_LIBRARY_PATH, but it does not seem to work.
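As a quick diagnostic (a sketch, assuming the `cu118` pip wheel is what got installed), you can print which CUDA/cuDNN the wheel was built against; pip wheels bundle their own CUDA runtime, so these values reflect the bundled toolkit rather than the system installation:
```python
import torch

print(torch.__version__)               # e.g. 2.2.0+cu118
print(torch.version.cuda)              # CUDA version the wheel was built with
print(torch.backends.cudnn.version())  # bundled cuDNN version
print(torch.cuda.is_available())       # False in the report above
```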
|
https://github.com/pytorch/pytorch/issues/124452
|
closed
|
[] | 2024-04-19T03:23:41Z
| 2024-04-19T15:13:57Z
| null |
fancyerii
|
huggingface/alignment-handbook
| 158
|
How to work with local data
|
I downloaded a dataset from the HF Hub. I want to load it locally, but it still tries to download it from the Hub and place it into the cache.
How can I use the local one I already downloaded?
Thank you.
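For reference, a minimal sketch with 🤗 `datasets` (paths are placeholders, and this assumes the data was saved locally in one of these two ways):
```python
from datasets import load_dataset, load_from_disk

# If the raw files (e.g. parquet/json) were downloaded, point load_dataset at them:
ds = load_dataset("parquet", data_files="path/to/local/data/*.parquet")

# If the dataset was saved with Dataset.save_to_disk, reload it directly:
ds = load_from_disk("path/to/local/dataset")
```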
|
https://github.com/huggingface/alignment-handbook/issues/158
|
open
|
[] | 2024-04-18T10:26:14Z
| 2024-05-14T11:20:55Z
| null |
pretidav
|
huggingface/optimum-quanto
| 182
|
Can I use quanto on AMD GPU?
|
Does quanto work with AMD GPUs?
|
https://github.com/huggingface/optimum-quanto/issues/182
|
closed
|
[
"question",
"Stale"
] | 2024-04-18T03:06:54Z
| 2024-05-25T01:49:56Z
| null |
catsled
|
huggingface/accelerate
| 2,680
|
How to get pytorch_model.bin from checkpoint files without zero_to_fp32.py
|
https://github.com/huggingface/accelerate/issues/2680
|
closed
|
[] | 2024-04-17T11:30:32Z
| 2024-04-18T22:40:14Z
| null |
lipiji
|
|
huggingface/datasets
| 6,819
|
Give more details in `DataFilesNotFoundError` when getting the config names
|
### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (supported) data files found in cis-lmu/Glot500",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n",
"datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n"
]
}
```
because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4
Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would tell that configuration `aze_Ethi` has no supported data files, instead of telling that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).
### Motivation
Giving more detail in the error would help the Datasets Hub users to debug why the dataset viewer does not work.
### Your contribution
Not sure how to best fix this, as there are a lot of loops over the dataset configs in the traceback methods. Maybe it would be easier to handle if the code completely isolated each config.
|
https://github.com/huggingface/datasets/issues/6819
|
open
|
[
"enhancement"
] | 2024-04-17T11:19:47Z
| 2024-04-17T11:19:47Z
| 0
|
severo
|
pytorch/vision
| 8,382
|
Regarding IMAGENET1K_V1 and IMAGENET1K_V2 weights
|
### 🐛 Describe the bug
I found a very strange "bug" while I was trying to find similar instances in a vector database of pictures. The model I used is ResNet50. The problem occurs only when using the `IMAGENET1K_V2` weights, but does not appear when using the legacy `V1` weights (referring to https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/).
When I calculate the **cosine similarity** with `V1` weights for two almost identical pictures I get `values > 0.95`, however when I use `V2` weights with the same pictures I get `values < 0.7`. In layman terms with `V2` identical pictures are not recognized as such anymore. I gave you two example pictures below and the code to reproduce the problem. Does somebody have a concise explanation for this behaviour?
When you increase the size in your `transform.resize((x, y))`, the problem gradually begins to vanish; however, this is not really a good solution since it adds overhead during inference.
Would be happy for any insights on this topic :)
```
from torchvision import models
from torchvision.models import ResNet50_Weights
import torchvision.io
from torch import nn
import numpy as np
from numpy.linalg import norm
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
return x
# Get weights
weights = ResNet50_Weights.IMAGENET1K_V1
preprocess = weights.transforms()
model = models.resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).to("cuda:0")
model.fc = Identity()
a = model(preprocess(torchvision.io.read_image("/raid/..../datasets/lion/lion_ori_small.jpg").unsqueeze(dim=0).to("cuda:0"))).cpu().detach().numpy().squeeze()
b = model(preprocess(torchvision.io.read_image("/raid/.../datasets/lion/lion_fake_small.jpg").unsqueeze(dim=0).to("cuda:0"))).cpu().detach().numpy().squeeze()
cosine = np.dot(a,b)/(norm(a)*norm(b))
```


### Versions
torchvision 0.19
|
https://github.com/pytorch/vision/issues/8382
|
open
|
[] | 2024-04-17T09:30:50Z
| 2024-04-17T09:33:44Z
| 0
|
asusdisciple
|
pytorch/TensorRT
| 2,759
|
❓ [Question] How should the CMakeLists look like for running .ts files in C++?
|
## ❓ Question
I am trying to load a .ts model in C++ on a Jetson Orin NX. I am running in this container: https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torch_tensorrt, version r35.3.1.
```cpp
#include <torch/script.h> // One-stop header.
#include <torch_tensorrt/torch_tensorrt.h>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
torch::jit::Module module;
try {
// Deserialize the ScriptModule from a file using torch::jit::load().
module = torch::jit::load("classificator_float.ts");
}
catch (const c10::Error& e) {
std::cerr << "error loading the model\n";
return -1;
}
std::cout << "ok\n";
}
```
However, I am struggling to write a CMakeLists.txt that properly links the Torch-TensorRT runtime. This is what I currently have:
```
cmake_minimum_required(VERSION 3.12 FATAL_ERROR)
project(custom_ops)
execute_process(
COMMAND python3 -c "import torch; print(torch.utils.cmake_prefix_path)"
OUTPUT_VARIABLE PYTORCH_CMAKE_PREFIX_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
set(CMAKE_PREFIX_PATH "${PYTORCH_CMAKE_PREFIX_PATH}")
find_package(Torch REQUIRED)
add_executable(example-app example-app.cpp)
target_include_directories(example-app PRIVATE "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/include")
target_link_libraries(example-app torch)
set_property(TARGET example-app PROPERTY CXX_STANDARD 17)
```
It builds without issues, however when I try to execute it I get:

How should I modify CMakeLists.txt?
## What you have already tried
I have looked at these tutorials, but they do not have CMakeLists for running models compiled with TensorRT:
https://pytorch.org/tutorials/advanced/cpp_export.html
https://pytorch.org/TensorRT/getting_started/getting_started_with_cpp_api.html
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0.0+nv23.5
- TensorRT: 8.5.2.2-1
- CPU Architecture: arm64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): jetson-container
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.8.10
- CUDA version: 11.4
- GPU models and configuration: Jetson Orin NX
|
https://github.com/pytorch/TensorRT/issues/2759
|
closed
|
[
"question"
] | 2024-04-17T09:15:23Z
| 2024-04-24T05:39:27Z
| null |
DmytroIvakhnenkov
|
huggingface/optimum
| 1,818
|
Request for ONNX Export Support for Blip Model in Optimum
|
Hi Team,
I hope this message finds you well.
I've encountered an issue while attempting to export the Blip model to the ONNX format using Optimum. I used the command below.
`! optimum-cli export onnx -m Salesforce/blip-itm-base-coco --task feature-extraction blip_onnx`
It appears that Optimum currently lacks support for this functionality, leading to errors during the export process.
`ValueError: Trying to export a blip model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type blip to be supported natively in the ONNX export`
Could you kindly provide insights into when we might expect support for exporting Blip models to ONNX to be implemented in Optimum?
Thank you for considering this request. I look forward to any updates or information you can provide on this matter.
|
https://github.com/huggingface/optimum/issues/1818
|
open
|
[
"feature-request",
"question",
"onnx"
] | 2024-04-17T08:55:45Z
| 2024-10-14T12:26:36Z
| null |
n9s8a
|
huggingface/transformers.js
| 715
|
How to unload/destroy a pipeline?
|
### Question
I tried to find how to unload a pipeline to free up memory in the documentation, but couldn't find a mention of how to do that properly.
If there a proper way to "unload" a pipeline?
I'd be happy to add the answer to the documentation.
|
https://github.com/huggingface/transformers.js/issues/715
|
closed
|
[
"question"
] | 2024-04-16T09:02:05Z
| 2024-05-29T09:32:23Z
| null |
flatsiedatsie
|
pytorch/torchchat
| 211
|
[Feature request] Support more GGUF tensor formats
|
Today we support parsing for F16, F32, Q4_0, and Q6_K GGUF tensors (see gguf_util.py). We'd like to add support for more GGUF quantization formats in https://github.com/ggerganov/llama.cpp/blob/master/ggml-quants.c.
Adding support for a new format should be straightforward, using Q4_0 and Q6_K as guides.
For Q4_0 and Q6_K, we convert GGUF tensors with a class that represents groupwise quantization, e.g., for Q4_0, we have a class as follows:
```
class Q4_0:
    groupsize = 32
    n_bit = 4

    @staticmethod
    def unpack(gguf_tensor: gguf.gguf_reader.ReaderTensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        ...
```
The unpack method parses the gguf tensor and returns a tuple of tensors q, s, and z, where
* q is a tensor of shape (nr, nc) and of type torch.int32, with values in [0, 2^(n_bit)-1] that represent the unsigned quantized values. It has the shape of the input GGUF tensor, but with the dimensions reversed to align with how torch stores weights in a state_dict.
* s is a tensor of shape (nr, ng) and of type torch.float32, where ng = nc // groupsize is the number of groups per row. It represents the scale per group.
* z is a tensor of shape (nr, ng) and of type torch.float32, where ng = nc // groupsize is the number of groups per row. It represents the zero per group.
To convert q, s, and z to a float, we do the following calculation:
```
q_grouped = q.reshape(-1, groupsize)
s = s.reshape(-1, 1) # one per group
z = z.reshape(-1, 1) # one per group
float = q_grouped.sub(2 ** (n_bit - 1)).mul(s).add(z).reshape_as(q)
```
Note that for Q4_0 and Q6_K, z is a zero vector because these are scale-only quantization schemes.
To add a new scheme like Q4_1, we could copy the recipe for Q4_0 nearly exactly. We need to parse the GGUF block and translate the dequantization logic from https://github.com/ggerganov/llama.cpp/blob/master/ggml-quants.c to python using the bit functions in torch.
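For reference, a self-contained sanity check of the (q, s, z) -> float formula above, using random data and the Q4_0-style parameters (this is just an illustration, not part of gguf_util.py):
```python
import torch

groupsize, n_bit = 32, 4
nr, nc = 8, 128

q = torch.randint(0, 2 ** n_bit, (nr, nc), dtype=torch.int32)  # unsigned quantized values
s = torch.rand(nr, nc // groupsize)                            # one scale per group
z = torch.zeros(nr, nc // groupsize)                           # zero vector for scale-only schemes

q_grouped = q.reshape(-1, groupsize)
dequant = (
    q_grouped.sub(2 ** (n_bit - 1))
    .mul(s.reshape(-1, 1))
    .add(z.reshape(-1, 1))
    .reshape_as(q)
)
print(dequant.shape, dequant.dtype)  # torch.Size([8, 128]) torch.float32
```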
|
https://github.com/pytorch/torchchat/issues/211
|
open
|
[
"enhancement"
] | 2024-04-16T01:57:25Z
| 2024-04-25T18:13:44Z
| 0
|
metascroy
|
pytorch/pytorch
| 124,090
|
Fakeifying a non-leaf subclass where inner tensor is noncontiguous incorrectly produces contiguous tensor.
|
Minified repro from internal:
```
def test_dtensor_tensor_is_not_autograd_leaf_but_local_is_noncontiguous(self):
# Temporarily ignore setUp(), and use rank3 graphs during tracing
dist.destroy_process_group()
fake_store = FakeStore()
dist.init_process_group(
"fake", store=fake_store, rank=3, world_size=2
)
mesh = DeviceMesh(self.device_type, [1, 3])
x = torch.randn(10, 257, 160, requires_grad=True)
x_dt = DTensor.from_local(x, mesh, [_Partial()], run_check=False, shape=(10, 257, 160), stride=(41120, 160, 1))
tmp_dt = x_dt.redistribute(mesh, (Shard(1),))
from torch._subclasses import FakeTensorMode
m = FakeTensorMode()
tmp_dt_fake = m.from_tensor(tmp_dt)
self.assertEqual(tmp_dt.shape, tmp_dt_fake.shape)
self.assertEqual(tmp_dt.stride(), tmp_dt_fake.stride())
self.assertEqual(tmp_dt._local_tensor.shape, tmp_dt_fake._local_tensor.shape)
# This assert **fails**
# tmp_dt._local_tensor is not contiguous, but tmp_dt_fake._local_tensor advertises as contiguous
self.assertEqual(tmp_dt._local_tensor.stride(), tmp_dt_fake._local_tensor.stride())
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @anijain2305 @chauhang
|
https://github.com/pytorch/pytorch/issues/124090
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2024-04-15T19:11:01Z
| 2024-05-01T21:56:06Z
| null |
bdhirsh
|
huggingface/transformers.js
| 714
|
Reproducing model conversions
|
### Question
I'm trying to reproduce the conversion of `phi-1_5_dev` to better understand the process. I'm running into a few bugs / issues along the way that I thought it'd be helpful to document.
The model [`@Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev) states:
> https://huggingface.co/susnato/phi-1_5_dev with ONNX weights to be compatible with Transformers.js.
I'm doing the following:
```
git clone https://github.com/xenova/transformers.js.git && cd transformers.js/scripts
git clone https://huggingface.co/susnato/phi-1_5_dev
python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
python3 convert.py --quantize --model_id phi-1_5_dev --task "text-generation"
```
Here, I hit my first issue - it looks like `transformers` on `pypi` does not support Phi:
```
raise KeyError(key)
KeyError: 'phi'
```
So I install from Github:
```
pip install git+https://github.com/huggingface/transformers.git
```
That produces:
```
RuntimeError: Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
cannot import name 'is_torch_less_than_1_11' from 'transformers.pytorch_utils' (/Users/thekevinscott/code/codegen/research/model-conversion/throwaway/transformers.js/scripts/.venv/lib/python3.10/site-packages/transformers/pytorch_utils.py)
```
I believe `optimum` is also out of date:
```
pip install git+https://github.com/huggingface/optimum.git
```
With those two dependencies updated, this command now works:
```
python3 convert.py --quantize --model_id phi-1_5_dev --task "text-generation"
```
Though there are a few warnings I'm assuming I can ignore:
```
Ignore MatMul due to non constant B: /[/model/layers.22/self_attn/MatMul]
Ignore MatMul due to non constant B: /[/model/layers.22/self_attn/MatMul_1]
Ignore MatMul due to non constant B: /[/model/layers.23/self_attn/MatMul]
Ignore MatMul due to non constant B: /[/model/layers.23/self_attn/MatMul_1]
```
However, out of the box it can't find the right `onnx` file:
```
Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "transformers.js/scripts/models/phi-1_5_dev/onnx/decoder_model_merged_quantized.onnx".
```
I see in the [`@Xenova` repo history](https://huggingface.co/Xenova/phi-1_5_dev/commit/ae1a980babe16f9d136c22eb119d171dec7c6a09) that the files were manually renamed; I'll try that too:
```
mv model.onnx decoder_model_merged.onnx
mv model_quantized.onnx decoder_model_merged_quantized.onnx
mv model.onnx_data decoder_model_merged.onnx_data
```
I then try to run the model with:
```
const model = await loadModel('transformers.js/scripts/models/phi-1_5_dev', {
});
const result = await model('Write me a list of numbers:\n', {
});
console.log('result', result);
```
The model loads, but upon generating I see:
```
WARNING: Too many inputs were provided (51 > 3). The following inputs will be ignored: "past_key_values.0.key, past_key_values.0.value, past_key_values.1.key, past_key_values.1.value, past_key_values.2.key, past_key_values.2.value, past_key_values.3.key, past_key_values.3.value, past_key_values.4.key, past_key_values.4.value, past_key_values.5.key, past_key_values.5.value, past_key_values.6.key, past_key_values.6.value, past_key_values.7.key, past_key_values.7.value, past_key_values.8.key, past_key_values.8.value, past_key_values.9.key, past_key_values.9.value, past_key_values.10.key, past_key_values.10.value, past_key_values.11.key, past_key_values.11.value, past_key_values.12.key, past_key_values.12.value, past_key_values.13.key, past_key_values.13.value, past_key_values.14.key, past_key_values.14.value, past_key_values.15.key, past_key_values.15.value, past_key_values.16.key, past_key_values.16.value, past_key_values.17.key, past_key_values.17.value, past_key_values.18.key, past_key_values.18.value, past_key_values.19.key, past_key_values.19.value, past_key_values.20.key, past_key_values.20.value, past_key_values.21.key, past_key_values.21.value, past_key_values.22.key, past_key_values.22.value, past_key_values.23.key, past_key_values.23.value".
2024-04-15 11:00:50.956 node[91488:12372370] 2024-04-15 11:00:50.956090 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Gather node. Name:'/model/layers.0/self_attn/Gather_4' Status Message: indices element out of data bounds, idx=8 must be within the inclusive range [-1,0]
An error occurred during model execution: "Error: Non-zero status code returned while running Gather node. Name:'/model/layers.0/self_attn/Gather_4' Status Message: indices element out of data bounds, idx=8 must be within the inclusive range [-1,0]".
Inputs given to model: [Object: null prototype] {
input_ids: Tensor {
dims: [ 1, 1 ],
type: 'int64',
data: BigInt64Array(1) [ 13n ],
size: 1
},
attention_mask: T
|
https://github.com/huggingface/transformers.js/issues/714
|
open
|
[
"question"
] | 2024-04-15T15:02:33Z
| 2024-05-10T14:26:00Z
| null |
thekevinscott
|
huggingface/sentence-transformers
| 2,594
|
What is the maximum number of sentences that a fast cluster can cluster?
|
What is the maximum number of sentences that fast clustering can handle? When I cluster 2 million sentences, the clustering process gets killed.
|
https://github.com/huggingface/sentence-transformers/issues/2594
|
open
|
[] | 2024-04-15T09:55:06Z
| 2024-04-15T09:55:06Z
| null |
BinhMinhs10
|
huggingface/dataset-viewer
| 2,721
|
Help dataset owner to chose between configs and splits?
|
See https://huggingface.slack.com/archives/C039P47V1L5/p1713172703779839
> Am I correct in assuming that if you specify a "config" in a dataset, only the given config is downloaded, but if you specify a split, all splits for that config are downloaded? I came across it when using facebook's belebele (https://huggingface.co/datasets/facebook/belebele). Instead of a config for each language, they use a split for each language, but that seems to mean that the full dataset is downloaded, even if you select just one language split.
For languages, we recommend using different configs, not splits.
Maybe we should also show a warning (or open a PR/discussion?) when a dataset contains more than 5 splits, hinting that it might be better to use configs.
|
https://github.com/huggingface/dataset-viewer/issues/2721
|
open
|
[
"question",
"P2"
] | 2024-04-15T09:51:43Z
| 2024-05-24T15:17:51Z
| null |
severo
|
pytorch/serve
| 3,086
|
How to modify torchserve’s Python runtime from 3.8.0 to 3.10
|
### 📚 The doc issue
My handler uses Python 3.10 syntax, but the log shows Python runtime: 3.8.0, causing the model to fail to run. I would like to ask how to switch its environment to Python 3.10. I have added the Python 3.10 dependencies to the corresponding Dockerfile.
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/3086
|
closed
|
[
"triaged"
] | 2024-04-15T05:39:53Z
| 2024-04-23T17:26:08Z
| null |
pengxin233
|
huggingface/diffusers
| 7,676
|
How to determine the type of file, such as checkpoint, etc.
|
Hello.
Is there some kind of script that determines the type of a file, e.g. "checkpoint", "LoRA", "textual_inversion", etc.?
|
https://github.com/huggingface/diffusers/issues/7676
|
closed
|
[] | 2024-04-14T23:58:08Z
| 2024-04-15T02:50:43Z
| null |
suzukimain
|
huggingface/diffusers
| 7,670
|
How to use IDDPM in diffusers ?
|
The code base is here:
https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
|
https://github.com/huggingface/diffusers/issues/7670
|
closed
|
[
"should-move-to-discussion"
] | 2024-04-14T12:30:34Z
| 2024-11-20T00:17:18Z
| null |
jiarenyf
|
pytorch/torchchat
| 174
|
core dump in ci
|
We get quite repeatable core dumps with a segmentation fault, e.g., here https://github.com/pytorch/torchat/actions/runs/8676531709/job/23791140949?pr=171
/home/runner/work/_temp/aa3d75e7-8cff-4789-ba8a-71b211235396.sh: line 4: 2369 Segmentation fault (core dumped) python generate.py --dtype ${DTYPE} --checkpoint-path ${MODEL_PATH} --temperature 0 --dso-path ${MODEL_DIR}/${MODEL_NAME}.so > ./output_aoti
This is Python so even if the input programs are broken, it should not core dump but report an error.
In terms of actionable next steps, how do we get the core dump and debug this?
cc: @malfet @guangy10 @seemethere
|
https://github.com/pytorch/torchchat/issues/174
|
closed
|
[] | 2024-04-14T07:39:12Z
| 2024-04-25T08:07:14Z
| 2
|
mikekgfb
|
huggingface/transformers.js
| 713
|
Help understanding logits and model vocabs
|
### Question
I'm trying to write a custom `LogitsProcessor` and have some questions. For reference, I'm using [`Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev). I'm trying to implement custom logic for whitelisting or blacklisting tokens, but I'm running into difficulties understanding how to interpret token IDs, tokens, and their decoded counterparts.
Here's what I think I understand:
- [The vocab file is defined at `vocab.json`](https://huggingface.co/Xenova/phi-1_5_dev/blob/main/vocab.json), and has 50,257 entries.
- This file is exposed on `pipeline.tokenizer.vocab`, translated from the object representation of `vocab.json` (`{ token: tokenID }`), to an array of `token`s whose indices correspond to `tokenID`.
- **Question:** `vocab.json` has 50,257 entries, but `pipeline.tokenizer.vocab` has 50,295 entries. Is this because `pipeline.tokenizer.vocab` _also_ includes `added_tokens.json`?
- And [`special_tokens_map.json`](https://huggingface.co/Xenova/phi-1_5_dev/blob/main/special_tokens_map.json) is already included in `vocab.json` it appears
- The tokens in the vocab file must be decoded before being displayed
- for example, the token in `vocab.json` at `50255` is `"Ġgazed"`, but if I decode this character by character (`pipeline.tokenizer.decoder.byte_decoder('Ġ')` becomes `32` which corresponds to a space `" "`) I get `" gazed"`. I _think_ these correspond to code points.
- The `logits` argument contains scores where the index of each score is the `tokenID`. So setting the score at position `50255` to `-Infinity` should ensure that the token `"Ġgazed"` (or, decoded, `" gazed"`) should never appear.
- The `logits` argument I'm getting back for this model in my `LogitsProcessor` has dimensions of `[51200,]`. `pipeline.tokenizer.vocab` has size of is 50,295. That would seem to indicate 905 unused tokens at the end of the tensor; can these be safely ignored, or do they correspond to something important that I'm missing?
I'd appreciate any insight or feedback on whether my assumptions above are correct or not. Thank you!
|
https://github.com/huggingface/transformers.js/issues/713
|
closed
|
[
"question"
] | 2024-04-13T21:06:14Z
| 2024-04-14T15:17:43Z
| null |
thekevinscott
|
pytorch/audio
| 3,773
|
DEVICE AV-ASR WITH EMFORMER RNN-T tutorial : avsr not found
|
### 🐛 Describe the bug
Hi, I am trying the device AV-ASR tutorial (https://pytorch.org/audio/stable/tutorials/device_avsr.html). When I try to run the code in the tutorial, it shows "No module named avsr" when executing the following line:
`from avsr.data_prep.detectors.mediapipe.detector import LandmarksDetector`.
**I have tried to locate the avsr library, but there seems to be no related package to install or include. Where can I find this avsr library? Has it already been removed from pip/conda?**
Plus, there is a line of code: `sys.path.insert(0, "/../../examples")`. I would also like to know the purpose of this directory. Is it OK to replace it with another directory?
### Versions
2024-04-13 22:29:37 (1.54 MB/s) - ‘collect_env.py’ saved [22068/22068]
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080 SUPER
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.
|
https://github.com/pytorch/audio/issues/3773
|
closed
|
[] | 2024-04-13T14:31:19Z
| 2024-04-13T14:37:11Z
| 0
|
sfcgta4794
|
huggingface/lighteval
| 155
|
How to run 30b plus model with lighteval when accelerate launch failed? OOM
|
CUDA out-of-memory (OOM) when I launch an evaluation for a 30B model using lighteval.
What's the correct config for it?
|
https://github.com/huggingface/lighteval/issues/155
|
closed
|
[] | 2024-04-13T03:49:20Z
| 2024-05-04T11:18:38Z
| null |
xiechengmude
|
huggingface/transformers
| 30,213
|
Mamba: which tokenizer has been saved and how to use it?
|
### System Info
Hardware independent.
### Who can help?
@ArthurZucker
I described my doubts in the link below around a month ago, but maybe model-hub discussions are not very active, so I am posting it here as a repo issue. Please let me know where best to discuss it :)
https://huggingface.co/state-spaces/mamba-2.8b-hf/discussions/1
Thanks!
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
.
|
https://github.com/huggingface/transformers/issues/30213
|
closed
|
[] | 2024-04-12T11:28:17Z
| 2024-05-17T13:13:12Z
| null |
javiermcebrian
|
huggingface/sentence-transformers
| 2,587
|
Implementing Embedding Quantization for Dynamic Serving Contexts
|
I'm currently exploring embedding quantization strategies to enhance storage and computation efficiency while maintaining high accuracy. Specifically, I'm looking at integrating these strategies with Infinity (https://github.com/michaelfeil/infinity/discussions/198), a high-throughput, low-latency REST API for serving vector embeddings.
Here is the quantization method I want to use from sentence-transformers (specifically scalar int8, because binary quantization also reduces the vector dimensions, which I want to avoid in order to keep accuracy high): https://sbert.net/examples/applications/embedding-quantization/README.html
So this is what I want to apply:
```
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings
from datasets import load_dataset
# 1. Load an embedding model
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
# 2. Prepare an example calibration dataset
corpus = load_dataset("nq_open", split="train[:1000]")["question"]
calibration_embeddings = model.encode(corpus)
# 3. Encode some text without quantization & apply quantization afterwards
embeddings = model.encode(["I am driving to the lake.", "It is a beautiful day."])
int8_embeddings = quantize_embeddings(
embeddings,
precision="int8",
calibration_embeddings=calibration_embeddings,
)
```
The main challenge that arises with scalar quantization is that it requires a calibration dataset to compute min and max values, making the embedding process stateful. This conflicts with the need for flexible, dynamic serving via the Infinity API, which typically handles embeddings on the fly. The embedding API I created is used by various other services with different types of datasets, so I am looking for a way to avoid needing such a calibration dataset.
I am seeking advice on:
- Managing the statefulness introduced by scalar quantization.
- Alternative strategies that might be more suitable for dynamic environments where embeddings are generated on demand.
- Any guidance or suggestions on how to tackle these issues would be greatly appreciated.
Thank you!
|
https://github.com/huggingface/sentence-transformers/issues/2587
|
open
|
[
"question"
] | 2024-04-11T11:03:23Z
| 2024-04-12T07:28:48Z
| null |
Nookbe
|
huggingface/diffusers
| 7,636
|
how to use the controlnet sdxl tile model in diffusers
|
### Describe the bug
I want to use [this model](https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1) to make my slightly blurry photos clear.
I followed the code [here](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile), but since the model mentioned above is XL rather than 1.5, I changed the code; however, it errors out.
### Reproduction
```python
import torch
from PIL import Image
from diffusers import ControlNetModel, DiffusionPipeline, StableDiffusionXLControlNetPipeline


def resize_for_condition_image(input_image: Image, resolution: int):
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img


controlnet = ControlNetModel.from_pretrained(
    '/mnt/asian-t2i/pretrained_models/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1',
    torch_dtype=torch.float16, use_safetensors=True)
pipe = DiffusionPipeline.from_pretrained(
    "/mnt/asian-t2i/pretrained_models/RealVisXL_V3.0",
    custom_pipeline="stable_diffusion_controlnet_img2img",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to('cuda')
pipe.enable_xformers_memory_efficient_attention()

source_image = Image.open("/mnt/asian-t2i/data/luchuan/1024/0410-redbook-luchuan-6.jpg")
condition_image = resize_for_condition_image(source_image, 1024)
image = pipe(
    prompt="best quality",
    negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
    image=condition_image,
    controlnet_conditioning_image=condition_image,
    width=condition_image.size[0],
    height=condition_image.size[1],
    strength=1.0,
    generator=torch.manual_seed(0),
    num_inference_steps=32,
).images[0]
image.save('output.png')
```
### Logs
```shell
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:678: FutureWarning: 'cached_download' is the legacy way to download files from the HF hub, please consider upgrading to 'hf_hub_download'
warnings.warn(
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:02<00:00, 2.00it/s]
You have disabled the safety checker for <class 'diffusers_modules.git.stable_diffusion_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
0%| | 0/32 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/mnt/asian-t2i/demo.py", line 31, in <module>
image = pipe(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/.cache/huggingface/modules/diffusers_modules/git/stable_diffusion_controlnet_img2img.py", line 839, in __call__
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/asian-t2i/diffusers/src/diffusers/models/controlnet.py", line 775, in forward
if "text_embeds" not in added_cond_kwargs:
TypeError: argument of type 'NoneType' is not iterable
```
### System Info
Name: diffusers
Version: 0.27.0.dev0
### Who can help?
@sayakpaul @yiyixuxu @DN6
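A possible alternative sketch (untested, and assuming the dedicated SDXL ControlNet img2img pipeline fits this tile use case) would be to drop the SD 1.5 community pipeline and use `StableDiffusionXLControlNetImg2ImgPipeline`, which prepares the SDXL `added_cond_kwargs` itself:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline

# Untested sketch; model paths are the same local paths as in the reproduction above.
controlnet = ControlNetModel.from_pretrained(
    "/mnt/asian-t2i/pretrained_models/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1",
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "/mnt/asian-t2i/pretrained_models/RealVisXL_V3.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="best quality",
    negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
    image=condition_image,          # init image, as prepared in the reproduction above
    control_image=condition_image,  # tile condition
    strength=1.0,
    num_inference_steps=32,
).images[0]
```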
|
https://github.com/huggingface/diffusers/issues/7636
|
closed
|
[
"bug",
"stale"
] | 2024-04-11T03:20:42Z
| 2024-06-29T13:26:58Z
| null |
xinli2008
|
huggingface/optimum-quanto
| 161
|
Question: any plan to formally support smooth quantization and make it more general
|
Awesome work!
I noticed there is a SmoothQuant implementation under [external](https://github.com/huggingface/quanto/tree/main/external/smoothquant). Currently, the implementation seems to be model-specific: we can only apply smoothing to specific `Linear` layers.
However, in general, smoothing can be applied to any `Linear` by inserting a `mul`. Are there any plans to officially support smooth quantization in-tree? My initial thought was: is it possible to define a `SmoothTensor` and use `__torch_dispatch__` to override the `bmm` behavior?
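To illustrate the idea, here is a rough sketch of the wrapper-subclass pattern (not quanto code; the class and its behavior are hypothetical):
```python
import torch
from torch.utils._pytree import tree_map

class SmoothTensor(torch.Tensor):
    """Wraps a tensor plus a smoothing scale; the `mul` is applied lazily
    whenever the tensor is consumed by an op."""

    __torch_function__ = torch._C._disabled_torch_function_impl

    @staticmethod
    def __new__(cls, data, scale):
        return torch.Tensor._make_wrapper_subclass(
            cls, data.shape, dtype=data.dtype, device=data.device
        )

    def __init__(self, data, scale):
        self._data = data
        self._scale = scale

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        def unwrap(t):
            # Insert the smoothing `mul` here instead of rewriting each module.
            return t._data * t._scale if isinstance(t, SmoothTensor) else t

        args = tree_map(unwrap, args)
        kwargs = tree_map(unwrap, kwargs or {})
        return func(*args, **kwargs)

# Usage sketch: any op on a SmoothTensor sees the rescaled values.
x = SmoothTensor(torch.randn(4, 8), torch.rand(8))
w = torch.randn(8, 3)
print(torch.matmul(x, w).shape)  # torch.Size([4, 3])
```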
|
https://github.com/huggingface/optimum-quanto/issues/161
|
closed
|
[
"question",
"Stale"
] | 2024-04-11T02:45:31Z
| 2024-05-18T01:49:52Z
| null |
yiliu30
|
pytorch/xla
| 6,916
|
SPMD + Dynamo
|
## ❓ Questions and Help
Is there a way to get SPMD working with Dynamo/`torch.compile` to reduce the overhead of Pytorch re-tracing the module every time it gets called?
|
https://github.com/pytorch/xla/issues/6916
|
closed
|
[] | 2024-04-11T01:50:44Z
| 2024-04-12T19:50:56Z
| 4
|
BitPhinix
|
pytorch/vision
| 8,372
|
Nightly build flaky pytorch/vision / conda-py3_11-cpu builds
|
### 🐛 Describe the bug
Flaky issue on pytorch/vision / conda-py3_11-cpu builds. Has been happening for a while now.
Most likely due to corrupt worker environment:
```
+ __conda_exe run -p /Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke python3 pytorch/vision/test/smoke_test.py
+ /opt/homebrew/Caskroom/miniconda/base/bin/conda run -p /Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke python3 pytorch/vision/test/smoke_test.py
/Users/ec2-user/.local/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
torchvision: 0.19.0a0+480eec2
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/test/smoke_test.py", line 103, in <module>
main()
File "/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/test/smoke_test.py", line 83, in main
print(f"{torch.ops.image._jpeg_version() = }")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke/lib/python3.11/site-packages/torch/_ops.py", line 927, in __getattr__
raise AttributeError(
AttributeError: '_OpNamespace' 'image' object has no attribute '_jpeg_version'
ERROR conda.cli.main_run:execute(124): `conda run python3 pytorch/vision/test/smoke_test.py` failed. (See above for error)
torch.cuda.is_available: False
```
Rerun is usually successful.
### Versions
0.19.0
|
https://github.com/pytorch/vision/issues/8372
|
open
|
[] | 2024-04-10T15:48:12Z
| 2024-04-10T15:49:09Z
| 1
|
atalman
|
pytorch/serve
| 3,078
|
Serve multiple models with both CPU and GPU
|
Hi guys, I have a question: Can I serve several models (about 5 - 6 models) using both CPU and GPU inference?
|
https://github.com/pytorch/serve/issues/3078
|
open
|
[
"question",
"triaged"
] | 2024-04-10T15:03:35Z
| 2025-01-12T06:29:51Z
| null |
hungtrieu07
|
huggingface/accelerate
| 2,647
|
How to use deepspeed with dynamic batch?
|
### System Info
```Shell
- `Accelerate` version: 0.29.1
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/yuchao/miniconda3/envs/TorchTTS/bin/accelerate
- Python version: 3.10.13
- Numpy version: 1.23.5
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- System RAM: 125.48 GB
- GPU type: NVIDIA GeForce RTX 4090
- `Accelerate` default config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: none
offload_param_device: none
zero3_init_flag: false
zero_stage: 2
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
For sequence tasks, we always use dynamic batching to group long sequences into small batches and short sequences into large batches. But DeepSpeed here needs either `batch_size` or `train_micro_batch_size_per_gpu` to be specified, which is not available in this setup. Any idea how to fix that?
```
When using DeepSpeed, `accelerate.prepare()` requires you to pass at least one of training or evaluation dataloaders with `batch_size` attribute returning an integer value or alternatively set an integer value in `train_micro_batch_size_per_gpu` in the deepspeed config file or assign integer value to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`.
```
### Expected behavior
Be able to train with DeepSpeed using dynamic batching.
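An untested workaround sketch, following the suggestion in the error message above (the value 1 is an arbitrary placeholder, since batches are built dynamically anyway):
```python
from accelerate.state import AcceleratorState

# Give DeepSpeed a nominal per-GPU micro batch size so that
# accelerator.prepare() no longer requires a dataloader with a fixed batch_size.
AcceleratorState().deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 1
```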
|
https://github.com/huggingface/accelerate/issues/2647
|
closed
|
[] | 2024-04-10T09:09:53Z
| 2025-05-11T15:07:27Z
| null |
npuichigo
|
huggingface/transformers.js
| 690
|
Is top-level await necessary in the v3 branch?
|
### Question
I saw the excellent performance of WebGPU, so I tried to install xenova/transformers.js#v3 as a dependency in my project.
I found that v3 uses the top-level await syntax. If I can't restrict users to using the latest browser version, I have to make it compatible (using `vite-plugin-top-level-await` or `rollup-plugin-tla`).
Is it possible to use other methods instead of top-level await? Or is this project not intended to support users who do not have support for top-level await?
Thanks.
|
https://github.com/huggingface/transformers.js/issues/690
|
closed
|
[
"question"
] | 2024-04-10T08:49:32Z
| 2024-04-11T17:18:42Z
| null |
ceynri
|
huggingface/optimum-quanto
| 158
|
How does quanto support int8 conv2d and linear?
|
Hi, I looked into the code and didn't find any CUDA kernels related to conv2d and linear. How did you implement the CUDA backend for conv2d/linear? Thanks.
|
https://github.com/huggingface/optimum-quanto/issues/158
|
closed
|
[
"question"
] | 2024-04-10T05:41:43Z
| 2024-04-11T09:26:35Z
| null |
zhexinli
|
huggingface/transformers.js
| 689
|
Abort the audio recognition process
|
### Question
Hello! How can I stop the audio file recognition process while keeping the loaded model? If I terminate the worker, I have to reload the model to start recognizing a new audio file. I need either the ability to send the pipeline a command to stop the recognition process, or the ability to first load the model and then pass it as an object to the pipeline. Thank you.
|
https://github.com/huggingface/transformers.js/issues/689
|
open
|
[
"question"
] | 2024-04-10T02:51:37Z
| 2024-04-20T06:09:11Z
| null |
innoware11
|
huggingface/transformers
| 30,154
|
Question about how to write code for trainer and dataset for multi-gpu
|
### System Info
- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi I have a quick question on how to write code for dataset and trainer for multi-gpu setting.
Here is my workflow.
I have a dataset where I called
```
dataset = dataset.load_dataset(...)
```
I need to do some preprocessing for it and the dataset becomes an Iterable dataset.
and then I pass the dataset into the trainer like
```
trainer = Trainer(train_data=dataset)
trainer.train()
```
My question is since I am running on multi-gpu and use command
```
torchrun --standalone --nnodes=1 --nproc_per_node=2 train_lora.py
```
Two processes execute the same code above, which causes the dataset and trainer to be created twice. Should the dataset and trainer be created once or twice? If once, should I wrap all the code like this?
```
if accelerator.is_main_process:
dataset = dataset.load_dataset(...)
trainer = Trainer(train_data=dataset)
trainer.train()
```
I do observe that we only use one dataset for generating the samples, even if we create two dataset objects and do not wrap the code in accelerator.is_main_process. That is because the dataset is already adapted by the trainer for distributed training. So I think there is no point in creating the dataset twice, since we only use the first one. How do I write the code such that there is no error on the second process? If I make the second process's dataset None, the trainer raises an error because the dataset is empty.
Do we need to create two trainers, one per GPU, or should we have only one trainer in charge of both GPUs? What is the best way to write the code to achieve this?
### Expected behavior
The correct way to implement this setup.
|
https://github.com/huggingface/transformers/issues/30154
|
closed
|
[] | 2024-04-10T00:08:00Z
| 2024-04-10T22:57:53Z
| null |
zch-cc
|
huggingface/accelerate
| 2,643
|
How to use gather_for_metrics for object detection models?
|
### Reproduction
I used the `gather_for_metrics` function as follows:
```python
predictions, ground_truths = accelerator.gather_for_metrics((predictions, ground_truths))
```
And i've got the error:
```
accelerate.utils.operations.DistributedOperationException: Impossible to apply the desired operation due to inadequate shapes. All shapes on the devices must be valid.
```
* ground_truths are dictionaries of torch.tensor with keys: `boxes`, `labels`, `image_id`, `area`, `iscrowd` following pytorch conventions: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html.
* predictions are dictionaries of torch.tensor with `boxes`, `labels` and `scores` keys.
I use 3 GPUs, and on each I have 120 dictionaries of predictions and ground truths; as expected, the tensor sizes inside each dictionary vary from 0 to n bounding-box predictions/ground truths.
But while gathering predictions, the `verify_operation` decorator raises an error because the tensor shapes inside the different dictionaries vary.
### Expected behavior
Have the possibility to gather complex objects like dictionaries of torch.tensor with different shapes!
Thank you for your help and for this amazing framework 🙏
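One possible workaround sketch (untested): gather the per-process lists of dicts as plain Python objects with `accelerate.utils.gather_object`, so ragged tensors never need to be stacked:
```python
from accelerate.utils import gather_object

# predictions / ground_truths are the per-process lists of dicts described above.
all_predictions = gather_object(predictions)
all_ground_truths = gather_object(ground_truths)
```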
|
https://github.com/huggingface/accelerate/issues/2643
|
closed
|
[] | 2024-04-09T23:15:20Z
| 2024-04-30T07:48:36Z
| null |
yann-rdgz
|
pytorch/torchx
| 875
|
Fix Nightly push permissions
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
<!-- your question here -->
Is it possible to fix the nightly push permissions? Many PRs have been merged into main, but the last nightly release was from 2024-02-12 (https://pypi.org/project/torchx-nightly/).
Currently the nightly push is failing due to:
```
ERROR HTTPError: 403 Forbidden from https://upload.pypi.org/legacy/
The user 'd4l3k' isn't allowed to upload to project 'torchx-nightly'.
See https://pypi.org/help/#project-name for more information.
```
(https://github.com/pytorch/torchx/actions/runs/8614806013/job/23608993087#step:6:473)
|
https://github.com/meta-pytorch/torchx/issues/875
|
closed
|
[] | 2024-04-09T19:38:04Z
| 2024-04-10T18:26:16Z
| 6
|
ryxli
|
huggingface/candle
| 2,033
|
How to use CUDA as the backend in `candle-wasm-examples/llama2-c` ?
|
How can I use CUDA as the backend in `candle-wasm-examples/llama2-c`?
In `candle-wasm-examples/llama2-c`, I made the changes shown below.
```diff
--- a/candle-wasm-examples/llama2-c/Cargo.toml
+++ b/candle-wasm-examples/llama2-c/Cargo.toml
@@ -9,7 +9,7 @@ categories.workspace = true
license.workspace = true
[dependencies]
-candle = { workspace = true }
+candle = { workspace = true, features = ["cuda"] }
candle-nn = { workspace = true }
candle-transformers = { workspace = true }
num-traits = { workspace = true }
```
```diff
--- a/candle-wasm-examples/llama2-c/src/bin/m.rs
+++ b/candle-wasm-examples/llama2-c/src/bin/m.rs
@@ -14,7 +14,7 @@ pub struct Model {
impl Model {
fn process(&mut self, tokens: &[u32]) -> candle::Result<String> {
const REPEAT_LAST_N: usize = 64;
- let dev = Device::Cpu;
+ let dev = Device::new_cuda(0)?;
let input = Tensor::new(tokens, &dev)?.unsqueeze(0)?;
let logits = self.inner.llama.forward(&input, tokens.len())?;
let logits = logits.squeeze(0)?;
```
```diff
--- a/candle-wasm-examples/llama2-c/src/worker.rs
+++ b/candle-wasm-examples/llama2-c/src/worker.rs
@@ -65,7 +65,7 @@ impl Model {
top_p: f64,
prompt: String,
) -> Result<()> {
- let dev = Device::Cpu;
+ let dev = Device::new_cuda(0)?;
let temp = if temp <= 0. { None } else { Some(temp) };
let top_p = if top_p <= 0. || top_p >= 1.0 {
None
@@ -248,7 +248,7 @@ impl TransformerWeights {
impl Model {
pub fn load(md: ModelData) -> Result<Self> {
- let dev = Device::Cpu;
+ let dev = Device::new_cuda(0)?;
let mut model = std::io::Cursor::new(md.model);
let config = Config::from_reader(&mut model)?;
let weights = TransformerWeights::from_reader(&mut model, &config, &dev)?;
```
But when I execute `trunk serve --release --public-url / --port 8080`, some errors occur.
```shell
= note: rust-lld: error: unable to find library -lcuda
rust-lld: error: unable to find library -lnvrtc
rust-lld: error: unable to find library -lcurand
rust-lld: error: unable to find library -lcublas
rust-lld: error: unable to find library -lcublasLt
error: could not compile `candle-wasm-example-llama2` (bin "worker") due to 1 previous error
2024-04-09T16:12:09.062364Z ERROR error
error from build pipeline
Caused by:
0: HTML build pipeline failed (2 errors), showing first
1: error from asset pipeline
2: running cargo build
3: error during cargo build execution
4: cargo call to executable 'cargo' with args: '["build", "--target=wasm32-unknown-unknown", "--manifest-path", "/work/training/candle/candle-wasm-examples/llama2-c/Cargo.toml", "--bin", "worker"]' returned a bad status: exit status: 101
```
How should I solve the above problem?
I confirm that my CUDA installation is correct and that I'm able to execute the following commands.
```shell
cargo new myapp
cd myapp
cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
cargo build
```
|
https://github.com/huggingface/candle/issues/2033
|
closed
|
[] | 2024-04-09T16:16:55Z
| 2024-04-12T08:26:24Z
| null |
wzzju
|
huggingface/optimum
| 1,804
|
advice for simple onnxruntime script for ORTModelForVision2Seq (or separate encoder/decoder)
|
I am trying to implement this [class ](https://github.com/huggingface/optimum/blob/69af5dbab133f2e0ae892721759825d06f6cb3b7/optimum/onnxruntime/modeling_seq2seq.py#L1832) in C++ because, unfortunately, I didn't find any existing C++ implementation.
Therefore, my current approach is to strip this class and its auxiliary classes down to a simple onnxruntime prediction script, to make things easier to port to C++.
Does anyone have any advice on this matter? Thank you.
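For illustration, here is a minimal sketch (in Python, as a blueprint for the C++ port) of what I mean by a plain onnxruntime prediction: run the encoder once, then greedily decode with the decoder session. It assumes separately exported encoder/decoder models without a KV cache, and the file names and input/output names (`pixel_values`, `input_ids`, `encoder_hidden_states`) are assumptions that need to be checked against the exported graphs.
```python
import numpy as np
import onnxruntime as ort

# Separate sessions for the exported encoder and decoder (file names are assumptions).
encoder = ort.InferenceSession("encoder_model.onnx")
decoder = ort.InferenceSession("decoder_model.onnx")

def greedy_generate(pixel_values: np.ndarray, start_token_id: int, eos_token_id: int, max_len: int = 64):
    # Encoder forward pass, run once per image.
    encoder_hidden_states = encoder.run(None, {"pixel_values": pixel_values})[0]

    tokens = [start_token_id]
    for _ in range(max_len):
        # Re-run the decoder on the full prefix (no KV cache in this sketch).
        logits = decoder.run(
            None,
            {
                "input_ids": np.array([tokens], dtype=np.int64),
                "encoder_hidden_states": encoder_hidden_states,
            },
        )[0]
        next_token = int(logits[0, -1].argmax())  # greedy pick at the last position
        tokens.append(next_token)
        if next_token == eos_token_id:
            break
    return tokens
```
The real `ORTModelForVision2Seq` also handles past key/values and beam search, but the loop above is the core I'd like to reproduce in C++.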
|
https://github.com/huggingface/optimum/issues/1804
|
open
|
[
"question",
"onnxruntime"
] | 2024-04-09T15:14:40Z
| 2024-10-14T12:41:15Z
| null |
eduardatmadenn
|
huggingface/chat-ui
| 997
|
Community Assistants
|
Hi, I've looked through all the possible issues but I didn't find what I was looking for.
On a self-hosted instance, is the option to have community assistants, such as the ones on https://huggingface.co/chat/, not available? I've also noticed that when I create assistants on my side, they don't show up in the community tab either; they are purely user-restricted. Am I missing something? I've configured the HF token and the API base; any hints are appreciated.

|
https://github.com/huggingface/chat-ui/issues/997
|
closed
|
[
"help wanted",
"assistants"
] | 2024-04-09T12:44:49Z
| 2024-04-23T06:09:47Z
| 2
|
Coinficient
|
huggingface/evaluate
| 570
|
[Question] How to have no preset values sent into `.compute()`
|
We have a use case, https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/llm_harness_mistral_arc.py,
where the default feature input types for `evaluate.Metric` are empty and we have something like this in our `llm_harness_mistral_arc/llm_harness_mistral_arc.py`:
```python
import evaluate
import datasets
import lm_eval
@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class llm_harness_mistral_arc(evaluate.Metric):
def _info(self):
# TODO: Specifies the evaluate.EvaluationModuleInfo object
return evaluate.MetricInfo(
# This is the description that will appear on the modules page.
module_type="metric",
description="",
citation="",
inputs_description="",
# This defines the format of each prediction and reference
features={},
)
def _compute(self, pretrained=None, tasks=[]):
outputs = lm_eval.simple_evaluate(
model="hf",
model_args={"pretrained":pretrained},
tasks=tasks,
num_fewshot=0,
)
results = {}
for task in outputs['results']:
results[task] = {'acc':outputs['results'][task]['acc,none'],
'acc_norm':outputs['results'][task]['acc_norm,none']}
return results
```
And our expected user behavior is something like, [in]:
```python
import evaluate
module = evaluate.load("alvations/llm_harness_mistral_arc")
module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2", tasks=["arc_easy"])
```
And the expected output as per our `tests.py`, https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/tests.py [out]:
```
{'arc_easy': {'acc': 0.8131313131313131, 'acc_norm': 0.7680976430976431}}
```
But the `evaluate.Metric.compute()` somehow expects a default batch and `module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2", tasks=["arc_easy"])` throws an error:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-20-bd94e5882ca5>](https://localhost:8080/#) in <cell line: 1>()
----> 1 module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2",
2 tasks=["arc_easy"])
2 frames
[/usr/local/lib/python3.10/dist-packages/evaluate/module.py](https://localhost:8080/#) in _get_all_cache_files(self)
309 if self.num_process == 1:
310 if self.cache_file_name is None:
--> 311 raise ValueError(
312 "Evaluation module cache file doesn't exist. Please make sure that you call `add` or `add_batch` "
313 "at least once before calling `compute`."
ValueError: Evaluation module cache file doesn't exist. Please make sure that you call `add` or `add_batch` at least once before calling `compute`.
```
#### Q: Is it possible for the `.compute()` to expect no features?
I've also tried this but somehow the `evaluate.Metric.compute` is still looking for some sort of `predictions` variable.
```
@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class llm_harness_mistral_arc(evaluate.Metric):
def _info(self):
# TODO: Specifies the evaluate.EvaluationModuleInfo object
return evaluate.MetricInfo(
# This is the description that will appear on the modules page.
module_type="metric",
description="",
citation="",
inputs_description="",
# This defines the format of each prediction and reference
features=[
datasets.Features(
{
"pretrained": datasets.Value("string", id="sequence"),
"tasks": datasets.Sequence(datasets.Value("string", id="sequence"), id="tasks"),
}
)]
)
def _compute(self, pretrained, tasks):
outputs = lm_eval.simple_evaluate(
model="hf",
model_args={"pretrained":pretrained},
tasks=tasks,
num_fewshot=0,
)
results = {}
for task in outputs['results']:
results[task] = {'acc':outputs['results'][task]['acc,none'],
'acc_norm':outputs['results'][task]['acc_norm,none']}
return results
```
then:
```python
import evaluate
module = evaluate.load("alvations/llm_harness_mistral_arc")
module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2", tasks=["arc_easy"])
```
[out]:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-36-bd94e5882c
|
https://github.com/huggingface/evaluate/issues/570
|
open
|
[] | 2024-04-08T22:58:41Z
| 2024-04-08T23:54:42Z
| null |
alvations
|
huggingface/transformers
| 30,122
|
What is the default multi-GPU training type?
|
### System Info
NA
### Who can help?
@ArthurZucker , @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When running training with the transformers Trainer and setting `device_map` to `"auto"`, what is the default distributed training type used when the model is too large to fit on one GPU?
(Assume that I have not yet run `accelerate config`.)
Does the model just run with naive model parallelism, with layers split across different GPUs, and with DP (not DDP) on the data side? Are the full gradients and the optimizer state copied onto each GPU?
It would be helpful if this could be described in the Trainer section of the docs and also in the Multi-GPU docs.
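For context, this is a hedged sketch of the setup I mean (the checkpoint is just an example); my understanding is that `device_map="auto"` gives a naive layer split, which can be inspected via `hf_device_map`:
```python
from transformers import AutoModelForCausalLM

# device_map="auto" lets accelerate place whole layers on the visible GPUs
# (naive model parallelism), rather than DDP/FSDP/tensor parallelism.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",  # example checkpoint; any model too large for one GPU
    device_map="auto",
)

# Shows which device each module landed on,
# e.g. {"model.decoder.embed_tokens": 0, ..., "lm_head": 1}
print(model.hf_device_map)
```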
### Expected behavior
NA
|
https://github.com/huggingface/transformers/issues/30122
|
closed
|
[] | 2024-04-08T11:45:59Z
| 2024-05-10T10:35:41Z
| null |
RonanKMcGovern
|
huggingface/optimum
| 1,798
|
Issue Report: Unable to Export Qwen Model to ONNX Format in Optimum
|
### System Info
```shell
Optimum Version: 1.18.0
Python Version: 3.8
Platform: Windows, x86_64
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
I am writing to report an issue I encountered while attempting to export a Qwen model to ONNX format using Optimum.
Error message:
" ValueError: Trying to export a qwen model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen to be supported natively in the ONNX export. "
Attached screenshot for reference.
<img width="957" alt="qwen_error_export" src="https://github.com/huggingface/optimum/assets/166393333/5b9e75fd-1839-434c-809e-5dd6832b0e05">
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code
### Expected behavior
I would expect Optimum to successfully export the Qwen model to ONNX format without encountering any errors or issues.
|
https://github.com/huggingface/optimum/issues/1798
|
open
|
[
"bug"
] | 2024-04-08T11:36:09Z
| 2024-04-08T11:36:09Z
| 0
|
Harini-Vemula-2382
|
huggingface/chat-ui
| 986
|
Github actions won't push built docker images on releases
|
We currently have a [github actions workflow](https://github.com/huggingface/chat-ui/blob/main/.github/workflows/build-image.yml) that builds an image on every push to `main` and tags it with `latest` and the commit id. [(see here)](https://github.com/huggingface/chat-ui/pkgs/container/chat-ui/versions)
The workflow should also push images tagged for each release, for example `v0.8`, but the workflow [fails](https://github.com/huggingface/chat-ui/actions/runs/8536772524) with a `buildx failed with: ERROR: tag is needed when pushing to registry` error.
I think it would be really nice to have support for tagged images for each release, but I'm not the best with GitHub Actions, so if someone has some time and would like to look at it, that would be super appreciated 🤗
|
https://github.com/huggingface/chat-ui/issues/986
|
closed
|
[
"help wanted",
"CI/CD"
] | 2024-04-08T07:51:13Z
| 2024-04-08T11:27:42Z
| 2
|
nsarrazin
|
huggingface/candle
| 2,025
|
How to specify which graphics card to run a task on in a server with multiple graphics cards?
|
https://github.com/huggingface/candle/issues/2025
|
closed
|
[] | 2024-04-07T10:48:35Z
| 2024-04-07T11:05:52Z
| null |
lijingrs
|
|
pytorch/torchchat
| 77
|
[Feature request] Need a format for test reports and how we might track them?
|
Maybe we build a table, with something like
| Model | Target tested | Platform tested (*) | submitter | test date | link to test transcript |
|--|--|--|--|--|--|
| stories15M | generate, AOTI CPU | Ubuntu x86 24.04 | mikekgfb | 2024-04-06 | [test transcript](https://github.com/pytorch-labs/llama-fast/actions/runs/8586564185/job/23529165773?pr=74) |
* may need a script to capture system info?
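Rough sketch (untested) of what such a script could capture for the "Platform tested" column:
```python
import platform
import sys

import torch

def system_info() -> str:
    # Collect the basics: OS, Python, torch version, architecture, and accelerator availability.
    parts = [
        f"os={platform.platform()}",
        f"python={sys.version.split()[0]}",
        f"torch={torch.__version__}",
        f"machine={platform.machine()}",
        f"cuda={torch.version.cuda if torch.cuda.is_available() else 'n/a'}",
        f"mps={torch.backends.mps.is_available()}",
    ]
    return ", ".join(parts)

if __name__ == "__main__":
    print(system_info())
```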
|
https://github.com/pytorch/torchchat/issues/77
|
open
|
[
"enhancement"
] | 2024-04-07T06:04:24Z
| 2024-04-25T18:14:04Z
| 0
|
mikekgfb
|
pytorch/torchchat
| 70
|
[Usability] Clean installation and first example steps in README to standardize on stories15M?
|
Looking great! However, I went through the README steps on a new M1 and hit a few issues. It would be ideal if we could make this a clean list of commands that a person could cut and paste all the way through. Here are some thoughts:
Can we move "The model definition (and much more!) is adopted from gpt-fast, so we support the same models. To download llama models, go to https://huggingface.co/meta-llama/Llama-2-7b and go through steps to obtain access. Then, login with huggingface-cli login" and those below into the dedicated `Installation` section referenced at https://github.com/pytorch-labs/llama-fast?tab=readme-ov-file#installation and also move it to the top?
That section (somewhat matching to https://pytorch.org/executorch/stable/getting-started-setup.html) could include:
```
python3 -m pip install --user virtualenv
python3 -m virtualenv .llama-fast
source .llama-fast/bin/activate
git clone https://github.com/pytorch-labs/llama-fast.git
cd llama-fast
git submodule sync
git submodule update --init
# If we need PyTorch nightlies
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# Otherwise
# pip install torch torchvision
pip install sentencepiece huggingface_hub
# Eventually should be (when Dave has the PyPI packages)
# pip install sentencepiece huggingface_hub executorch
# I had some issues with the pytorch submodule not downloading from ExecuTorch - not sure why
# To download Llama 2 models, go to https://huggingface.co/meta-llama/Llama-2-7b and go through steps to obtain access.
# Once approved, login with
huggingface-cli login
# You will be asked for a token from https://huggingface.co/settings/tokens
# Set the model and paths for stories15M as an example to test things on desktop and mobile
MODEL_NAME=stories15M
MODEL_PATH=checkpoints/${MODEL_NAME}/stories15M.pt
MODEL_DIR=~/llama-fast-exports
# Could we make this stories15M instead?
export MODEL_DOWNLOAD=meta-llama/Llama-2-7b-chat-hf
./scripts/prepare.sh $MODEL_DOWNLOAD
python generate.py --compile --checkpoint-path ${MODEL_PATH} --prompt "Hello, my name is" --device {cuda,cpu,mps}
... Steps for running with AOTI and then ExecuTorch ...
```
Unfortunately, I get the following error when trying to run generate:
```
generate.py: error: unrecognized arguments: cpu mps
```
Tagging @mikekgfb @byjlw @GregoryComer @cbilgin @dbort @mergennachin
Thank you!
|
https://github.com/pytorch/torchchat/issues/70
|
closed
|
[] | 2024-04-06T22:13:18Z
| 2024-04-20T01:35:39Z
| 6
|
orionr
|
huggingface/text-embeddings-inference
| 229
|
Question: How to add a prefix to the underlying server
|
I've managed to run Text Embeddings Inference perfectly using the already-built Docker images, and I'm now trying to make it available to our internal components.
Right now they share the following behavior:
`Myhost.com/modelname/v1/embeddings`
I was wondering whether it's possible to add this "model name" as a path prefix inside the application through some configuration.
How could I do that?
|
https://github.com/huggingface/text-embeddings-inference/issues/229
|
closed
|
[] | 2024-04-06T17:29:59Z
| 2024-04-08T09:14:40Z
| null |
Ryojikn
|
pytorch/torchchat
| 69
|
[Feature request] Torchchat performance comparison to gpt-fast
|
At present, llama-fast is 2x slower than gpt-fast when run out of the box. The root cause is that we default to fp32 rather than bf16 (reducing our peak perf potential in a major way).
I changed the default to fp32 because some mobile targets do not support fp16 (and bfloat16 not at all), so this was the least common denominator to run out of the box.
I will add additional controls for the floating-point data width setting; beyond that, we need to decide how to set this up. Do we want the default to run well everywhere, or do we optimize the default for one particular target family?
An alternative might be different defaults for different targets, but that too is confusing.
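As a rough sketch (not the actual CLI, just to illustrate the control I have in mind): default to fp32 for compatibility, but let users opt into bf16/fp16 where the target supports it.
```python
import argparse

import torch

DTYPE_MAP = {"fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16}

parser = argparse.ArgumentParser()
parser.add_argument(
    "--dtype",
    choices=DTYPE_MAP.keys(),
    default="fp32",  # least common denominator; bf16 recovers the gpt-fast numbers where supported
    help="floating-point width for weights/activations",
)
args = parser.parse_args()

dtype = DTYPE_MAP[args.dtype]
# model = model.to(dtype=dtype)  # applied after the checkpoint is loaded
```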
cc: @chauhang @malfet @guangy10
|
https://github.com/pytorch/torchchat/issues/69
|
closed
|
[
"enhancement"
] | 2024-04-06T16:36:03Z
| 2024-05-12T21:36:56Z
| 3
|
mikekgfb
|
huggingface/transformers.js
| 685
|
Transformers.js seems to need an internet connection when it shouldn't? (Error: no available backend found.)
|
### Question
What is the recommended way to get Transformers.js to work even when, later on, there is no internet connection?
Is it using a service worker? Or are there other (perhaps hidden) settings for managing caching of files?
I'm assuming here that the `Error: no available backend found` error message is related to Transformers.js not being able to find files once Wi-Fi has been turned off. I was a bit surprised by that, since I do see a cache called `transformers-cache` being created. Is that not caching all the required files?
|
https://github.com/huggingface/transformers.js/issues/685
|
open
|
[
"question"
] | 2024-04-06T12:40:15Z
| 2024-09-03T01:22:15Z
| null |
flatsiedatsie
|
huggingface/trl
| 1,510
|
[question] how to apply model parallism to solve cuda memory error
|
Hi team, I am using the SFT and PPO code to train my model: https://github.com/huggingface/trl/tree/main/examples/scripts.
Due to the long context length and the 7B-level model size, I am hitting a CUDA out-of-memory error on my single GPU.
Is there any straightforward way to utilize multiple GPUs on my server to train the model through the SFT and PPO scripts,
such as splitting the model across multiple GPUs with model parallelism? Are there any arguments I can directly pass to my training script?
Thanks a lot.
```
export CUDA_VISIBLE_DEVICES='7'; python examples/scripts/sft_travel.py \
--model_name_or_path="mistralai/Mistral-7B-Instruct-v0.2" \
--report_to="wandb" \
--learning_rate=5e-5 \
--per_device_train_batch_size=4 \
--gradient_accumulation_steps=16 \
--logging_steps=1 \
--num_train_epochs=120 \
--lr_scheduler_type "constant" \
--max_steps=-1 \
--gradient_checkpointing \
--max_seq_length 16000 \
--output_dir "8bit" \
--overwrite_output_dir True \
--logging_strategy "epoch" \
--evaluation_strategy "no"
```
|
https://github.com/huggingface/trl/issues/1510
|
closed
|
[] | 2024-04-06T02:09:36Z
| 2024-05-06T17:02:35Z
| null |
yanan1116
|
pytorch/tutorials
| 2,827
|
Misleading example for per-sample gradient
|
In the per-sample-gradient example, the following line can be misleading, since the `predictions` of the net are logits:
https://github.com/pytorch/tutorials/blob/08a61b7cae9d00312d0029b1f86a248ec1253a83/intermediate_source/per_sample_grads.py#L49
The correct way should be:
``` python
return F.nll_loss(F.log_softmax(predictions, dim=-1), targets)
```
I would appreciate it if this could be corrected.
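For what it's worth, an equivalent fix is `F.cross_entropy(predictions, targets)`, since `cross_entropy` combines `log_softmax` and `nll_loss`. A quick sketch to verify the equivalence:
```python
import torch
import torch.nn.functional as F

predictions = torch.randn(8, 10)        # logits for a batch of 8 samples, 10 classes
targets = torch.randint(0, 10, (8,))

a = F.nll_loss(F.log_softmax(predictions, dim=-1), targets)
b = F.cross_entropy(predictions, targets)
print(torch.allclose(a, b))  # True
```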
|
https://github.com/pytorch/tutorials/issues/2827
|
closed
|
[] | 2024-04-06T00:27:51Z
| 2024-04-24T17:52:48Z
| 3
|
mingfeisun
|
huggingface/dataset-viewer
| 2,667
|
Rename datasets-server to dataset-viewer in infra internals?
|
Follow-up to #2650.
Is it necessary? Not urgent in any case.
Some elements to review:
- [ ] https://github.com/huggingface/infra
- [ ] https://github.com/huggingface/infra-deployments
- [ ] docker image tags (https://hub.docker.com/r/huggingface/datasets-server-services-search -> https://hub.docker.com/r/huggingface/dataset-viewer-services-search)
- [ ] Helm chart name
- [ ] AWS parameters
- [ ] kubernetes namespaces
- [ ] Hub app names and tokens
- [ ] https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server
- [ ] buckets: hf-datasets-server-statics-test, hf-datasets-server-statics
- [ ] MongoDB databases
- [ ] BetterUptime
- [ ] shared directories (PARQUET_METADATA_CACHE_APPNAME)
|
https://github.com/huggingface/dataset-viewer/issues/2667
|
closed
|
[
"question",
"P2"
] | 2024-04-05T16:53:34Z
| 2024-04-08T09:26:14Z
| null |
severo
|
huggingface/dataset-viewer
| 2,666
|
Change API URL to dataset-viewer.huggingface.co?
|
Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650
Should we do it?
- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875
- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911
If we change it, we would have to update:
- moon-landing
- datasets
- the docs (hub, datasets, dataset-viewer)
- other written support (blog, observable, notion...)
If so, also change the dev URL: https://datasets-server.us.dev.moon.huggingface.tech.
We should also handle the redirection from the old URL to the new one.
|
https://github.com/huggingface/dataset-viewer/issues/2666
|
closed
|
[
"question",
"P2"
] | 2024-04-05T16:49:13Z
| 2024-04-08T09:24:43Z
| null |
severo
|