| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/xla
| 7,255
|
[RFC] torch_xla2 dynamo integration
|
# Dynamo backend for torchxla2
## Goal
Have a dynamo backend backed by torch_xla2.
The users should be able to do the following:
```python
m = model ...
m_compiled = torch.compile(m, backend='torch_xla2_compile') # backend name TBD
result = m_compiled(*inputs)
```
The above should run on TPU with low overhead.
## Challenge
Usually the challenge of a dynamo backend is the compiler that
transforms a fx graph with torch (or Aten) ops to the compiled executable.
However, in our case, that piece is solved.
For every `call_function` node, we look up the Jax implementation of the
corresponding ATen op in a dictionary and just call it.
This is illustrated here: https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/torch_xla2/export.py#L23
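For illustration, a minimal sketch of that lookup (the dictionary and helper below are hypothetical, not the actual torch_xla2 registry):
```python
# Hypothetical sketch of the aten -> jax dispatch table described above.
import jax.numpy as jnp
import torch

ATEN_TO_JAX = {
    torch.ops.aten.add.Tensor: jnp.add,
    torch.ops.aten.mul.Tensor: jnp.multiply,
}

def run_call_function(target, args):
    # args are assumed to already be jax.Array values extracted from the fx node
    return ATEN_TO_JAX[target](*args)
```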
Now, the challenge is for dynamo to be able to 1. produce the graph, and 2. not incur any data copies in this process.
Consider the following pseudocode:
```python
class XLATensor2:
    _data: jax.Array

    def __torch_dispatch__(...):
        # do stuff with _data, get new data
        return XLATensor2(new_data)

def dynamo_backend(fx, sample):
    compiled = ...  # compile fx into a graph that manipulates jax.Array
    def returned_callable(inputs):
        datas = [i._data for i in inputs]
        res = compiled(*datas)
        return TensorSubclass(res)
    return returned_callable

model = torch.compile(model, backend=dynamo_backend)
inputs = ...  # a list of TensorSubclass, or a list of torch.Tensor?
model(*inputs)
```
What would be the type of `inputs`?
If `inputs` are of type `TensorSubclass`, then dynamo
will attempt to trace through the `__torch_dispatch__` method,
and throw an error because it doesn't know what `_data` is or what the
operations on it are.
If `inputs` are of type `torch.Tensor`, then it works: dynamo
calls the backend, and the backend can produce the correct result.
But `inputs` then need to be converted to `TensorSubclass` first inside
the backend, which usually means a data copy. This happens every time
the compiled backend is executed, and is therefore not desirable.
## The Desired behavior
When *tracing*, dynamo treats TensorSubclass as if it were a regular tensor
without a dispatch override; and when executing the compiled callable,
TensorSubclass is passed in as-is. We know that dynamo can do this with
some tensor subclasses, namely `FakeTensor`.
Let's list out the possible ways we could accomplish this behavior.
# Option 1. Have the jax.Array object held in C++
Roughly, we would have a `Tensor` subclass in C++; this is very
similar to the `LazyTensor` subclass that backs the current `XLATensor`.
This tensor can hold its own state in C++. In our case, that would
be a `PyObject*` that happens to point to either a `jnp.ndarray` or
jax's `Traced<ShapedArray>` during jax.jit. We might further reuse the
`XLA` dispatch key to route the operators to the jax implementation,
emulating what `__torch_dispatch__` does.
This way, eager mode will continue to work, and dynamo would work
because the Python class is still `torch.Tensor` (not a subclass), and
there is no Python logic in dispatching, so there is nothing for dynamo to trace through.
## Pros:
* Very clear that this will work.
## Cons:
We now need to deal with C++ builds. In particular, `torch` becomes a source
dependency instead of a pip dependency; meaning, again, we need to
build torch first and then build torch_xla2. This might be mitigated if
that subclass can be upstreamed.
# Option 2. Modify dynamo to do the desired behavior
We have one instance where a `torch.Tensor` dispatch subclass
just works with dynamo, without dynamo making a fuss when it traces
`__torch_dispatch__`. This is `FakeTensor`. (https://github.com/pytorch/pytorch/pull/100017/files)
The idea is to make dynamo trace as if the inputs are `FakeTensor` and
not `XLATensor`, and only after the creation of the fx graph and backend does dynamo
call the compiled callable with `XLATensor`.
Pros:
* Likely pure python changes.
Cons:
* We also need to design a mechanism to distinguish tensor subclasses that dynamo should trace through from those it should not.
* Likely a significant amount of work.
# Option 3. Register All the ops as custom_ops
So currently dynamo traces `__torch_dispatch__`, and we don't like that
because it will find the operations on Jax arrays, which it doesn't understand.
What if we make dynamo **able** to understand what is inside?
The [Black box python functions](https://docs.google.com/document/d/1ZuCVyMfibExwvtzhd9cfMWk5zXT3Dhy1b3kuvAIkBoU/edit#heading=h.56tggsazyrkh) doc
points to the possibility of registering things that we don't want dynamo
to go into as custom ops. So we could, theoretically, do the following:
1. Register the jax impl of an Aten op as a custom op.
i.e. register `jaten.add` for `aten.add`.
2. For meta kernels, just call the meta kernel of `aten.add`.
3. In `_
|
https://github.com/pytorch/xla/issues/7255
|
open
|
[
"dynamo",
"RFC",
"torchxla2"
] | 2024-06-12T17:31:23Z
| 2025-11-12T19:14:04Z
| 7
|
qihqi
|
huggingface/chat-ui
| 1,277
|
Difficulties with chat-ui prompt to text-generation-webui openai api endpoint
|
Hello,
I'm trying my best to get the huggingface ```chat-ui``` working with the API endpoint of ```text-generation-webui```.
I would be really happy if I could get a hint what I am doing wrong.
Here is a reverse proxied test instance: https://chat-ui-test.pischem.com/
I can't get my prompt that I input into the chat-ui to pass to the text-generation-webui. Every prompt will be ignored and a random answer is returned.
Here is the command I start ```text-generation-webui``` with:
<details>
```./start_linux.sh --listen --listen-port 8000 --api --api-port 8001 --verbose --model NTQAI_Nxcode-CQ-7B-orpo```
</details>
Here is my current ```.local.env``` of the ```chat-ui``` and the command I run it with:
<details>
```npm run dev -- --host```
```
MODELS=`[
{
"name": "text-generation-webui",
"id": "text-generation-webui",
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"max_new_tokens": 1024,
"stop": []
},
"endpoints": [{
"type" : "openai",
"baseURL": "http://172.16.0.169:8001/v1",
"extraBody": {
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000
}
}]
}
]`
MONGODB_URL=`mongodb://localhost:27017`
DEBUG=`true`
```
</details>
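A direct request against the configured baseURL (assuming it really is OpenAI-compatible) should show whether the prompt reaches ```text-generation-webui``` at all; a minimal sketch:
```python
# Minimal check against the OpenAI-compatible endpoint configured above
# (model name and URL taken from my setup; adjust as needed).
import requests

r = requests.post(
    "http://172.16.0.169:8001/v1/chat/completions",
    json={
        "model": "NTQAI_Nxcode-CQ-7B-orpo",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 32,
        "temperature": 0.7,
    },
    timeout=120,
)
print(r.status_code)
print(r.json())
```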
Here are the logs of what happens when I write a prompt:
```chatui```:
<details>
```
> chat-ui@0.9.1 dev
> vite dev --host
VITE v4.5.3 ready in 777 ms
➜ Local: http://localhost:5173/
➜ Network: http://172.16.0.135:5173/
➜ Network: http://172.17.0.1:5173/
➜ press h to show help
(node:6250) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
[13:58:52.476] INFO (6250): [MIGRATIONS] Begin check...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Update search assistants" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Update deprecated models in assistants with the default model" should not be applied for this run. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Add empty 'tools' record in settings" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Convert message updates to the new schema" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Convert message files to the new schema" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Trim message updates to reduce stored size" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] All migrations applied. Releasing lock
[13:58:52.498] INFO (6250): Metrics server listening on port 5565
Browserslist: caniuse-lite is outdated. Please run:
npx update-browserslist-db@latest
Why you should do it regularly: https://github.com/browserslist/update-db#readme
(node:6250) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(node:6250) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
Source path: /opt/chat-ui/src/lib/components/chat/FileDropzone.svelte?svelte&type=style&lang.css
Setting up new context...
Source path: /opt/chat-ui/src/lib/components/chat/ChatInput.svelte?svelte&type=style&lang.css
Source path: /opt/chat-ui/src/lib/components/ToolsMenu.svelte?svelte&type=style&lang.css
Source path: /opt/chat-ui/src/lib/components/chat/ChatMessage.svelte?svelte&type=style&lang.css
JIT TOTAL: 265.317ms
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
Source path: /opt/chat-ui/src/lib/components/OpenWebSearchResults.svelte?svelte&type=style&lang.css
Source path: /opt/chat-ui/src/lib/components/chat/ToolUpdate.svelte?svelte&type=style&lang.css
JIT TOTAL: 1.355ms
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
Source path: /opt/chat-ui/src/styles/main.css
Setting up new context...
Finding changed files: 8.775ms
Reading changed files: 158.906ms
Sorting candidates: 7.72ms
Generate rules: 397.398ms
Build stylesheet: 11.899ms
Potential classes: 8755
Active contexts: 2
JIT TOTAL: 767.815ms
Source path: /opt/chat-ui/src/styles/main.css?inline=
Setting up new context...
Finding changed files: 3.466ms
Reading changed files: 119.942ms
Sorting candidates: 7.852ms
Generate rules: 339.343ms
Build stylesheet: 6.497ms
Potential classes: 8755
Active contexts: 3
JIT TOTAL: 635.226ms
|
https://github.com/huggingface/chat-ui/issues/1277
|
closed
|
[
"support"
] | 2024-06-12T14:18:12Z
| 2025-01-30T18:46:22Z
| 7
|
Monviech
|
huggingface/chat-ui
| 1,275
|
Feature Request - support for session sharing, archiving, and collaboration
|
AFAIK, HuggingChat (HC) currently has no support for session sharing, archiving, and collaboration. At least, neither the HC server nor my GitHub (GH) searching found anything like this. So, if this doesn't exist, please consider how it could be implemented. For example, if I wanted to publish an HC session, maybe I could ask HC to send me a transcript in a form suitable for sharing (e.g., as a GH repo). To reduce friction, perhaps I could simply ask HC to create (or update) a repo.
Making it easy for HC users (and researchers) to examine and/or collaborate on sessions seems to me to be a Good Thing...
|
https://github.com/huggingface/chat-ui/issues/1275
|
open
|
[
"question"
] | 2024-06-12T11:35:31Z
| 2024-06-14T05:24:08Z
| null |
RichMorin
|
huggingface/lerobot
| 263
|
Seeking advice on how to choose between ACT and DP algorithms
|
Hello,
Thank you very much for the work you have done in bringing together the current excellent imitation learning collections for convenient use. Regarding the ACT algorithm and DP algorithm, besides the basic differences in the algorithms themselves, how should one choose between them for different tasks? Do they have specific types of tasks they are particularly suited for? I have just started using your project and am unsure how to select the appropriate algorithm. I would greatly appreciate any advice you can provide.
Thank you!
|
https://github.com/huggingface/lerobot/issues/263
|
closed
|
[
"question"
] | 2024-06-12T07:45:39Z
| 2024-06-19T14:02:43Z
| null |
le-wei
|
pytorch/xla
| 7,253
|
[RFC] PyTorch/XLA eager mode as default
|
# Context
## Objective
In this RFC I will talk about the roadmap to enable eager mode as the default computation mode for PyTorch/XLA users and how to enable graph compilation in this mode.
## Background
PyTorch/XLA has been using tracing mode as the default mode since the project started. All of the torch operations the user issues will be accumulated in the background and sent to XLA for compilation and execution upon a `mark_step` call.
The upside of this approach is that users don't need to change their model code too much. As long as the user adds a `mark_step` at the right place, everything should just work. However, from user feedback over the last couple of years, this approach creates too much confusion and frustration for the user. Both PyTorch and JAX took the approach of using eager mode as the default and asking users to specify the function that they want to compile. PyTorch/XLA should take the same approach.
# Design
## Eager mode
There is no real eager mode on TPU. However, we can fake eager mode by compiling and executing each torch operation. Such a mode already exists as a debug-only mode today; it was contributed by @aws-rhsoln two years ago in https://github.com/pytorch/xla/pull/3306. The work here is to do a better API-level wrapping and make sure this mode works with other features (debug output, SPMD, multiprocess, etc.). This approach was way too slow a couple of years ago because XRT could not execute small executions very efficiently, but with PJRT the performance is much better.
The whole eager mode still builds on top of the existing lazy tensor framework, but becomes invisible to the user. A couple of things we need to do to accommodate the eager mode are:
1. Increase the compilation cache from 1024 to 2048, since each torch op will also reside in the compilation cache. We also need to recompile every torch op for different input shapes.
2. Increase the max number of executions we can queue at the PJRT level, since now we will execute a lot more small computations.
## Compile
For the compile part we currently have two options: lazy tensor and torch dynamo (`torch.compile`).
For lazy-tensor-based compile I will add a new API:
```
torch_xla.experimental.compile(fn) -> compiled_fn
```
which, under the hood, just enables tracing mode while running the function and executes the traced graph before returning. Here is the [implementation](https://github.com/pytorch/xla/pull/7246/files#diff-1e2407471d3328b83dabbeb29cdf3ef468a201d3d4aecac8f4cd46f76751b8c1). For `torch.compile` we can just use the existing API.
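To make the semantics concrete, a rough sketch (my assumption of the shape of the wrapper, not the actual implementation) could look like:
```python
# Sketch only: flip back to tracing mode while running fn, then cut the graph.
import torch_xla
import torch_xla.core.xla_model as xm

def compile(fn):
  def wrapper(*args, **kwargs):
    torch_xla.experimental.eager_mode(False)  # trace instead of executing eagerly
    try:
      result = fn(*args, **kwargs)
      xm.mark_step()  # compile and execute the traced graph
    finally:
      torch_xla.experimental.eager_mode(True)
    return result
  return wrapper
```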
# Example UX
```python
import torch_xla
torch_xla.experimental.eager_mode(True)
class TrainDecoderOnlyBase():

  def __init__(self):
    self.train_loader = MyLoader()
    self.model = DecoderOnlyModel(self.config).to(torch_xla.device())
    # if run with dynamo, use
    # self.step_fn = torch.compile(self.step_fn, backend="openxla")
    self.step_fn = torch_xla.experimental.compile(self.step_fn)

  def step_fn(self, data, target):
    self.optimizer.zero_grad()
    logits = self.model(data)
    loss = self.loss_fn(
        logits.view(-1, self.config.vocab_size), target.view(-1))
    loss.backward()
    self.run_optimizer()
    return loss

  def start_training(self):
    for step, (data, target) in enumerate(self.train_loader):
      loss = self.step_fn(data, target)


if __name__ == '__main__':
  base = TrainDecoderOnlyBase()
  base.start_training()
```
Note that the two changes a user needs to make are to enable eager mode via `torch_xla.experimental.eager_mode(True)` and to compile the step function with `torch_xla.experimental.compile` or `torch.compile`.
Users can also choose to run the whole model in eager mode.
# Why
IMO using tracing mode as the default has a couple of very significant drawbacks:
1. Users are often confused about when the framework is tracing and when the framework is executing.
2. Users don't know where to add the `mark_step`.
3. Random Python code (data preprocessing, for example) often generates small pending executions that get leaked into the main graph (step function) and cause recompilation. The recompilation of the whole graph is usually very expensive.
4. It is hard to debug when/why recompilation happens.
Both JAX and PyTorch took the approach of asking users to explicitly mark the region/function for compilation. This methodology seems well received by users that want compilation mode. I think this proposal will make for a much better usability story by:
1. Allowing users to use eager mode for initial model development and compile mode to scale up. This also significantly lowers the bar for a normal PyTorch user to onboard PyTorch/XLA.
2. Reducing the number of recompilations generated by non-core model code, since that code will get executed eagerly.
3. Making graph recompilation easier to debug, since only the `compiled_fn` should generate graphs.
# Benchmark
I am
|
https://github.com/pytorch/xla/issues/7253
|
open
|
[
"usability",
"RFC",
"eager"
] | 2024-06-12T03:40:12Z
| 2025-11-09T19:39:21Z
| 5
|
JackCaoG
|
pytorch/executorch
| 3,939
|
How can I use the generated pte file to process my own data and predict the results?
|
```cpp
auto train_loader = torch::data::make_data_loader(
    SWaTegLoader("/dataset/train.csv", 100, 10, "train"),
    batch_size=256,
    torch::data::DataLoaderOptions().workers(0).shuffle(true)
);
```
Is this correct? Then how do we process the data with the model?
```cpp
for (auto& batch : *train_loader) {
    auto input = batch.data.to(device), labels = batch.target.to(device);
    auto output = method->execute(input)
```
Is it correct to write the code in the libtorch way?
|
https://github.com/pytorch/executorch/issues/3939
|
closed
|
[
"need-user-input"
] | 2024-06-11T22:22:13Z
| 2025-02-05T17:44:36Z
| null |
tayloryoung-o
|
huggingface/dataset-viewer
| 2,899
|
Standardize access to metrics and healthcheck
|
In some apps, the metrics and healthcheck are public:
- https://datasets-server.huggingface.co/admin/metrics
- https://datasets-server.huggingface.co/sse/metrics
- https://datasets-server.huggingface.co/sse/healthcheck
- https://datasets-server.huggingface.co/healthcheck
On others, it's forbidden or not found:
- https://datasets-server.huggingface.co/metrics
- https://datasets-server.huggingface.co/filter/metrics
As @severo suggests, it should be coherent among all the services. (Do we want the metrics to be public, or not?)
|
https://github.com/huggingface/dataset-viewer/issues/2899
|
open
|
[
"question",
"infra",
"P2"
] | 2024-06-11T14:39:10Z
| 2024-07-11T15:38:17Z
| null |
AndreaFrancis
|
huggingface/lerobot
| 261
|
Which low cost robot with teleoperation to test the library ?
|
Firstly, thank you for all the work. At my company we would like to obtain results on real robots from this repository. However, the original setups are either quite expensive (around 30k for Aloha) or require rebuilding the UMI interface from Columbia via 3D printing, which would be time-consuming considering we don't have direct experience in the subject.
**Do you have any recommendations for one or more robots with a low-cost teleoperation setup on which we could test and iterate quickly on these algorithms?** I have seen some people doing things with low-cost robots on LinkedIn, and I will reach out to them, but apparently, they do not seem to be selling them.
Thanks,
|
https://github.com/huggingface/lerobot/issues/261
|
closed
|
[
"question"
] | 2024-06-11T13:21:32Z
| 2024-07-23T07:55:15Z
| null |
RochMollero
|
pytorch/pytorch
| 128,414
|
How to enable XNNPACK instead of NNPACK/MKLDNN in Windows?
|
### 🚀 The feature, motivation and pitch
I'm trying to compile PyTorch for Windows on ARM64 device. I've got one workable version, but NNPACK/MKLDNN doesn't work in ARM64 windows. May I know how to enable XNNPACK as the default 'PACK' to improve the performance?
Thanks in advance!
### Alternatives
_No response_
### Additional context
_No response_
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @malfet @snadampal
|
https://github.com/pytorch/pytorch/issues/128414
|
open
|
[
"module: windows",
"triaged",
"module: xnnpack",
"module: arm"
] | 2024-06-11T12:53:01Z
| 2024-09-04T10:33:25Z
| null |
zhanweiw
|
huggingface/diarizers
| 11
|
How can I save the model locally before pushing it to the Hub ?!
|
https://github.com/huggingface/diarizers/issues/11
|
closed
|
[] | 2024-06-11T06:37:45Z
| 2024-06-13T16:24:19Z
| null |
ma-mohsen
|
|
huggingface/parler-tts
| 68
|
How to predict after finetune? There is no config.json in checkpoint dir.
|
https://github.com/huggingface/parler-tts/issues/68
|
open
|
[] | 2024-06-11T03:30:04Z
| 2024-06-17T01:57:04Z
| null |
lyt719
|
|
pytorch/data
| 1,271
|
Returning tensor instead of dict for state_dict causes failure
|
### 🐛 Describe the bug
```
# Imports added to make the repro self-contained (the torchdata import paths are assumed).
from typing import Iterator
from unittest import TestCase

import torch
from torchdata.stateful_dataloader import StatefulDataLoader
from torchdata.stateful_dataloader.stateful import Stateful


class TensorStateDataset(torch.utils.data.IterableDataset, Stateful, Iterator):
    def __init__(self, length):
        self.length = length
        self.i = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.i >= self.length:
            raise StopIteration
        self.i += 1
        return self.i

    def state_dict(self):
        # Returning a Tensor (instead of a dict) is what triggers the failure.
        return torch.rand(2, 2)

    def load_state_dict(self, state_dict):
        pass


class TestSimple(TestCase):
    def test(self):
        dataset = TensorStateDataset(100)
        dl = StatefulDataLoader(
            dataset=dataset,
            num_workers=1,
        )
        it = iter(dl)
        for _ in range(30):
            next(it)
        self.assertTrue(False)
```
Running this, I hit an error as follows:
```
self = <torch._utils.ExceptionWrapper object at 0x7f921c5fde10>
def reraise(self):
r"""Reraises the wrapped exception in the current thread"""
# Format a message such as: "Caught ValueError in DataLoader worker
# process 2. Original Traceback:", followed by the traceback.
msg = f"Caught {self.exc_type.__name__} {self.where}.\nOriginal {self.exc_msg}"
if self.exc_type == KeyError:
# KeyError calls repr() on its argument (usually a dict key). This
# makes stack traces unreadable. It will not be changed in Python
# (https://bugs.python.org/issue2651), so we work around it.
msg = KeyErrorMessage(msg)
elif getattr(self.exc_type, "message", None):
# Some exceptions have first argument as non-str but explicitly
# have message field
raise self.exc_type(message=msg)
try:
exception = self.exc_type(msg)
except TypeError:
# If the exception takes multiple arguments, don't try to
# instantiate since we don't know how to
raise RuntimeError(msg) from None
> raise exception
E RuntimeError: Caught RuntimeError in DataLoader worker process 0.
E Original Traceback (most recent call last):
E File "/home/gokulg/torchdata/data/torchdata/stateful_dataloader/worker.py", line 233, in _worker_loop
E delta_state_dict = incremental_worker_state.generate_delta(state_dict)
E File "/home/gokulg/torchdata/data/torchdata/stateful_dataloader/incremental_state.py", line 142, in generate_delta
E if iter_state := fetcher_state.get(_DATASET_ITER_STATE, None):
E RuntimeError: Boolean value of Tensor with more than one value is ambiguous
E
E
E To execute this test, run the following from the base repo dir:
E python test/stateful_dataloader/test_state_dict.py -k test2
E
E This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
../../.conda/envs/basetorch/lib/python3.10/site-packages/torch/_utils.py:722: RuntimeError
```
If an integer is returned, or say a dict (with a tensor as the value) is returned, there is no error.
### Versions
Latest git commit 82918dd
|
https://github.com/meta-pytorch/data/issues/1271
|
closed
|
[
"bug",
"stateful_dataloader"
] | 2024-06-10T23:49:43Z
| 2024-06-13T19:16:27Z
| 2
|
gokulavasan
|
pytorch/tutorials
| 2,926
|
💡 [REQUEST] - New recipe tutorial on calculating layer output dimensions
|
### 🚀 Describe the improvement or the new tutorial
This tutorial will help users understand how to transition from convolutional and pooling layers to linear layers in their models.
Learning objectives:
- How to manually calculate the output dimensions after applying a convolution or pooling layer
- How to print the shape of internal tensors for inspecting dimensionality changes in a model
- How to use the ``torchinfo`` package to show output dimensions for all layers in a model
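As a concrete illustration of the first two objectives (my own assumed example, not taken from the draft PR):
```python
# Manually compute Conv2d/MaxPool2d output sizes and verify by printing shapes.
import torch
import torch.nn as nn

def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    # Standard formula from the Conv2d/MaxPool2d docs.
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

x = torch.randn(1, 3, 32, 32)
conv = nn.Conv2d(3, 16, kernel_size=3)
pool = nn.MaxPool2d(kernel_size=2, stride=2)

print(conv2d_out(32, kernel=3))   # 30
print(conv(x).shape)              # torch.Size([1, 16, 30, 30])
print(pool(conv(x)).shape)        # torch.Size([1, 16, 15, 15])
# -> the following Linear layer needs in_features = 16 * 15 * 15 after flattening
```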
### Existing tutorials on this topic
_No response_
### Additional context
I created this draft (https://github.com/pytorch/tutorials/pull/2923) as a part of the PyTorch Docathon H1 2024 effort. I did not realize new tutorials weren't being accepted as part of the sprint and was asked to fill out an issue and convert the PR to a draft.
|
https://github.com/pytorch/tutorials/issues/2926
|
closed
|
[] | 2024-06-10T23:01:44Z
| 2025-04-16T20:08:34Z
| 2
|
loganthomas
|
pytorch/tutorials
| 2,925
|
💡 [REQUEST] - New recipe tutorial on implementing a Keras progress bar
|
### 🚀 Describe the improvement or the new tutorial
This tutorial will help users better understand how to implement a Keras progress bar in PyTorch.
- How to implement with a traditional train/test loop
- How to implement with a train loop with validation data
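As a rough sketch of what the recipe could show (an assumed example using tqdm rather than reimplementing Keras' exact bar):
```python
# Keras-like per-epoch progress reporting for a training loop, via tqdm.
from tqdm import tqdm

def train_one_epoch(model, loader, loss_fn, optimizer, epoch):
    pbar = tqdm(loader, desc=f"Epoch {epoch}")
    for inputs, targets in pbar:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        pbar.set_postfix(loss=f"{loss.item():.4f}")
```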
### Existing tutorials on this topic
_No response_
### Additional context
I created this draft (https://github.com/pytorch/tutorials/pull/2921) as a part of the PyTorch Docathon H1 2024 effort. I did not realize new tutorials weren't being accepted as part of the sprint and was asked to fill out an issue and convert the PR to a draft.
|
https://github.com/pytorch/tutorials/issues/2925
|
closed
|
[] | 2024-06-10T22:59:38Z
| 2025-04-16T20:08:41Z
| 0
|
loganthomas
|
pytorch/tutorials
| 2,924
|
💡 [REQUEST] - New recipe tutorial on accessing model parameters
|
### 🚀 Describe the improvement or the new tutorial
This tutorial will help beginners understand how to access and make sense of model parameters, collect trainable parameters, and use `torchinfo.summary()`.
Learning objectives:
- How to inspect a model's parameters using ``.parameters()`` and ``.named_parameters()``
- How to collect the trainable parameters of a model
- How to use the ``torchinfo`` package (formerly ``torch-summary``) to print a model summary
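A small assumed example of the first two objectives (not taken from the draft PR):
```python
# Inspect named parameters and count the trainable ones.
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.requires_grad)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```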
### Existing tutorials on this topic
_No response_
### Additional context
I created this draft (https://github.com/pytorch/tutorials/pull/2914) as a part of the PyTorch Docathon H1 2024 effort. I did not realize new tutorials weren't being accepted as part of the sprint and was asked to fill out an issue and convert the PR to a draft.
|
https://github.com/pytorch/tutorials/issues/2924
|
open
|
[] | 2024-06-10T22:56:58Z
| 2024-06-10T23:01:48Z
| 0
|
loganthomas
|
pytorch/xla
| 7,232
|
How to convert hlo.pb to hlo text?
|
## ❓ Questions and Help
### How to convert hlo.pb to hlo_text in the torch_xla ecosystem?
In JAX we can do the following:
```python
from jax.lib.xla_bridge import xla_client
fname = "model.hlo.pb"
with open(fname, mode="rb") as f:
comp = xla_client.XlaComputation(f.read())
print(comp.as_hlo_text())
```
Result:
```c
HloModule Test, entry_computation_layout={(f32[5]{0})->f32[5]{0}}
%test_add_one_func.0 (x.1: f32[]) -> f32[] {
%x.1 = f32[] parameter(0)
%y.2 = f32[] constant(1)
ROOT %add.0 = f32[] add(f32[] %x.1, f32[] %y.2)
}
ENTRY %main (x: f32[5]) -> f32[5] {
%x = f32[5]{0} parameter(0)
ROOT %bar = f32[5]{0} map(f32[5]{0} %x), dimensions={0}, to_apply=%test_add_one_func.0
}
```
|
https://github.com/pytorch/xla/issues/7232
|
closed
|
[
"question"
] | 2024-06-10T20:50:31Z
| 2025-06-05T01:49:49Z
| null |
apivovarov
|
huggingface/transformers.js
| 802
|
Long running transcription using webgpu-whisper
|
### Question
Noob question - the [webgpu-whisper](https://github.com/xenova/transformers.js/tree/v3/examples/webgpu-whisper) demo does real-time transcription, however it doesn't build out a full transcript from the start, i.e. 2 mins into the transcription, the first few transcribed lines disappear.
Transcript at time x 👇
```
Cool, let's test this out. We'll see how this works. So turns out that the transcription when I try to access it is actually just empty. And so the only thing that actually comes through is. So yeah, so the output that's getting cut is basically coming from the
```
Transcript at time x+1 👇
```
this out, we'll see how this works. So turns out that the transcription when I try to access it is actually just empty. And so the only thing that actually comes through is. So yeah, so the output that's getting cut is basically coming from the work
```
Note how the "Cool, let's test" is missing from the start of the second transcript.
I'm wondering what it would take to keep building the transcript for a long running meeting without losing any of the previously transcribed stuff?
I tried a naive appending approach and that just results in a transcript full of repetition.
So I'm very curious about what it would take to build out a streaming transcription similar to what something like [Deepgram](https://developers.deepgram.com/docs/node-sdk-streaming-transcription) would offer. Would that require a change to the pipeline? Are there models that can take an appended transcript with lots of repetition and trim it down to a clean transcript?
Please let me know if my questions are unclear. Just looking for some direction so that I can potentially put up a PR for this (if needed).
|
https://github.com/huggingface/transformers.js/issues/802
|
open
|
[
"question"
] | 2024-06-10T16:44:01Z
| 2025-05-30T05:52:37Z
| null |
iamhitarth
|
huggingface/sentence-transformers
| 2,738
|
How is `max_length` taken into account compared to the model's setting
|
What happens under the hood if I set max_length greater than the model's max_length?
It seems to work, but are inputs truncated, or do you apply RoPE extension?
|
https://github.com/huggingface/sentence-transformers/issues/2738
|
open
|
[] | 2024-06-09T15:59:09Z
| 2024-06-10T06:45:49Z
| null |
l4b4r4b4b4
|
huggingface/datasets
| 6,961
|
Manual downloads should count as downloads
|
### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
This would ensure that downloads are accurately reported to end users.
### Your contribution
N/A
|
https://github.com/huggingface/datasets/issues/6961
|
open
|
[
"enhancement"
] | 2024-06-09T04:52:06Z
| 2024-06-13T16:05:00Z
| 1
|
umarbutler
|
huggingface/diffusers
| 8,439
|
How to use EDM2 model with diffusers?
|
model safetensors: https://huggingface.co/RedRocket/Fluffyrock-Unbound/blob/main/Fluffyrock-Unbound-v1-1.safetensors
yaml: https://huggingface.co/RedRocket/Fluffyrock-Unbound/raw/main/Fluffyrock-Unbound-v1-1.yaml
colab demo:
https://colab.research.google.com/drive/1LSGvjWXNVjs6Tthcpf0F5VwuTFJ_d-oB
results:
(image attachment)
|
https://github.com/huggingface/diffusers/issues/8439
|
open
|
[
"stale"
] | 2024-06-09T03:39:05Z
| 2024-09-14T15:10:19Z
| null |
s9anus98a
|
huggingface/transformers
| 31,323
|
Language modeling examples do not show how to do multi-gpu training / fine-tuning
|
### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@muellerz @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
n/a
### Expected behavior
The `run_clm.py` and other related scripts in:
`https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling`
notionally support training / fine-tuning of models whose gradients are too large to fit on a single GPU, if you believe their CLI. However there is no example showing how to actually do that.
For instance, `accelerate estimate-memory` says training the Mistral-7B family with Adam takes roughly 55 GB with float16, which is more memory than a single 40GB A100 has. So I'd need to use more than one GPU.
Would it be possible to modify the language_modeling documentation to explain how to do that?
|
https://github.com/huggingface/transformers/issues/31323
|
closed
|
[
"Documentation"
] | 2024-06-07T18:49:35Z
| 2024-12-02T08:11:31Z
| null |
csiefer2
|
huggingface/candle
| 2,258
|
How to Implement New Operators Using CUDA Host Functions Along with Thrust and CUB Libraries
|
As stated, the CUDA code in the candle-kernels repository seems to only contain kernel functions. When I want to implement new operators (such as nonzero), it seems I'm only able to use Rust for higher-level functionality, which means I cannot utilize the device_vector from Thrust or the flagged APIs from CUB. This poses a significant challenge for implementing my algorithms. For example, to implement nonzero, it seems I would have to reimplement algorithms like exclusive_scan and scatter using the current approach?
I am hoping for a better way to utilize the CUDA ecosystem!
Specifically, I'm interested in how to:
1. Incorporate host functions in CUDA code to facilitate the use of libraries like Thrust and CUB.
2. Effectively leverage these libraries to implement algorithms and operators that are not natively supported in the current codebase.
Any guidance or best practices for achieving this would be greatly appreciated.
(Translated from Chinese using an LLM; might be a little bit... formal ^_^)
|
https://github.com/huggingface/candle/issues/2258
|
open
|
[] | 2024-06-07T16:52:44Z
| 2024-06-09T15:56:36Z
| null |
chenwanqq
|
huggingface/text-generation-inference
| 2,035
|
What is TGI's graceful shutdown behavior?
|
When SIGKILL arrives,
- does TGI process all pending inputs?
- does TGI blocks incoming inputs?
I saw a PR that adds graceful shutdown but it did not specify the exact program behavior.
|
https://github.com/huggingface/text-generation-inference/issues/2035
|
closed
|
[] | 2024-06-07T06:24:00Z
| 2024-06-07T08:08:51Z
| null |
seongminp
|
huggingface/tokenizers
| 1,549
|
How to use `TokenizerBuilder`?
|
I expected `TokenizerBuilder` to produce a `Tokenizer` from the `build()` result, but instead `Tokenizer` wraps `TokenizerImpl`.
No problem, I see that it impl `From<TokenizerImpl> for Tokenizer`, but it's attempting to do quite a bit more for some reason? Meanwhile I cannot use `Tokenizer(unwrapped_build_result_here)` as the struct is private 🤔 (_while the `Tokenizer::new()` method won't take this in either_)
---
```rs
let mut tokenizer = Tokenizer::from(TokenizerBuilder::new()
.with_model(unigram)
.with_decoder(Some(decoder))
.with_normalizer(Some(normalizer))
.build()
.map_err(anyhow::Error::msg)?
);
```
```rs
error[E0283]: type annotations needed
--> mistralrs-core/src/pipeline/gguf_tokenizer.rs:139:41
|
139 | let mut tokenizer = Tokenizer::from(TokenizerBuilder::new()
| ^^^^^^^^^^^^^^^^^^^^^ cannot infer type of the type parameter `PT` declared on the struct `TokenizerBuilder`
|
= note: cannot satisfy `_: tokenizers::PreTokenizer`
= help: the following types implement trait `tokenizers::PreTokenizer`:
tokenizers::pre_tokenizers::bert::BertPreTokenizer
tokenizers::decoders::byte_level::ByteLevel
tokenizers::pre_tokenizers::delimiter::CharDelimiterSplit
tokenizers::pre_tokenizers::digits::Digits
tokenizers::decoders::metaspace::Metaspace
tokenizers::pre_tokenizers::punctuation::Punctuation
tokenizers::pre_tokenizers::sequence::Sequence
tokenizers::pre_tokenizers::split::Split
and 4 others
note: required by a bound in `tokenizers::TokenizerBuilder::<M, N, PT, PP, D>::new`
--> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.19.1/src/tokenizer/mod.rs:314:9
|
314 | PT: PreTokenizer,
| ^^^^^^^^^^^^ required by this bound in `TokenizerBuilder::<M, N, PT, PP, D>::new`
...
319 | pub fn new() -> Self {
| --- required by a bound in this associated function
help: consider specifying the generic arguments
|
139 | let mut tokenizer = Tokenizer::from(TokenizerBuilder::<tokenizers::models::unigram::Unigram, tokenizers::NormalizerWrapper, PT, PP, tokenizers::DecoderWrapper>::new()
| +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
```
Why is this an issue? Isn't the point of the builder so that you don't have to specify the optional types not explicitly set?
> ```
> cannot infer type of the type parameter `PT` declared on the struct `TokenizerBuilder`
> ```
I had a glance over the source on github but didn't see an example or test for using this API and the docs don't really cover it either.
---
Meanwhile with `Tokenizer` instead of `TokenizerBuilder` this works:
```rs
let mut tokenizer = Tokenizer::new(tokenizers::ModelWrapper::Unigram(unigram));
tokenizer.with_decoder(decoder);
tokenizer.with_normalizer(normalizer);
```
|
https://github.com/huggingface/tokenizers/issues/1549
|
closed
|
[
"Stale"
] | 2024-06-07T01:18:07Z
| 2024-07-20T01:52:03Z
| null |
polarathene
|
huggingface/transformers.js
| 796
|
No performance gain on using WebGPU
|
### Question
I want to use the model: https://huggingface.co/Xenova/clip-vit-large-patch14 with WebGPU for quick inference in the browser. I ran the WebGPU benchmark to observe the performance increase and indeed it showed a ~7x improvement in speed on my device.
But when I run the clip model linked above, there's barely any difference between performance with and without WebGPU.
|
https://github.com/huggingface/transformers.js/issues/796
|
closed
|
[
"question"
] | 2024-06-06T20:16:07Z
| 2024-06-09T01:44:17Z
| null |
mr-sarthakgupta
|
huggingface/optimum
| 1,895
|
Lift upper version limit of transformers for habana
|
### Feature request
optimum currently limits transformers to `>= 4.38.0, < 4.39.0`. @regisss bumped the upper version limit in PR #1851 a month ago. Is there any technical reason to limit the upper version to `< 4.39`? Other dependencies allow for more recent versions. For example neuronx allows `< 4.42.0`, see #1881.
### Motivation
We would like to use newer versions of transformers and tokenizers in InstructLab. The upper version limit for optimum makes this harder on us. We need optimum-habana for Intel Gaudi support.
### Your contribution
I can create a PR. It's a trivial one line change.
Testing is less trivial. I have access to an 8-way Gaudi 2 system, but the system is currently busy. I can do some testing in about two weeks from now after I have updated the system from 1.15.1 to 1.16.0.
|
https://github.com/huggingface/optimum/issues/1895
|
closed
|
[] | 2024-06-06T07:52:41Z
| 2024-06-24T08:53:27Z
| 4
|
tiran
|
pytorch/xla
| 7,203
|
[RFC] PR Cherrypicking Process After a Release Branch Cut
|
## 🚀 Feature
In this RFC, we propose a policy aiming to guide the decision-making process for determining whether Pull Requests (PRs) should be cherry-picked onto a release branch after the release branch has been cut. The goal is to maintain the stability and predictability of releases while addressing critical issues and incorporating essential improvements.
## Motivation
Cherry-picking pull requests (PRs) onto a release branch can introduce additional overhead and go against established best practices. While cherry-picks are sometimes unavoidable, we can mitigate their necessity through well-defined policies. This proposal outlines a framework for making informed decisions about when and how to cherry-pick changes.
## Proposed Policies:
The following outlines the specific scenarios under which cherry-picking pull requests (PRs) onto a release branch will be considered acceptable after the official release branch cut.
- The PR is for __severe/P0__ bug fixing purposes
- The PR is for improving __unforeseen__ code stability or security issues
- The PR has __significant__ impact on usability improvements
- The PR is related to a planned release feature __urgent fix__
- The PR only updates documentation, not changing any code
- The PR is for improving release infrastructure
|
https://github.com/pytorch/xla/issues/7203
|
open
|
[
"RFC"
] | 2024-06-05T22:19:07Z
| 2025-09-11T23:04:41Z
| 2
|
lsy323
|
huggingface/peft
| 1,829
|
How to change to PEFT model dynamically?
|
python==3.7.12
PEFT==0.3.0
@BenjaminBossan
I fine-tune the eleventh transformer layer of BERT as below:
```python
target_modules = []
target_modules.append("11.attention.self.query")
target_modules.append("11.attention.self.value")
lora_config = LoraConfig(
    r=self.args.lora_rank,
    lora_alpha=self.args.lora_alpha,
    target_modules=target_modules,
    lora_dropout=0.05,
    bias="none"
)
```
After training for a few epochs, I also want to fine-tune the first transformer layer. How can I achieve this?
|
https://github.com/huggingface/peft/issues/1829
|
closed
|
[] | 2024-06-05T13:24:40Z
| 2024-06-06T00:37:06Z
| null |
whr819987540
|
pytorch/xla
| 7,196
|
Distributed spmd training with multiple compilations
|
## ❓ Questions and Help
When starting GPU SPMD training with `torchrun`, why does the graph need to be compiled once per machine, even though the resulting graph is the same? Is there any way to avoid this?
|
https://github.com/pytorch/xla/issues/7196
|
closed
|
[
"question"
] | 2024-06-05T08:46:55Z
| 2025-04-07T13:32:17Z
| null |
mars1248
|
pytorch/torchchat
| 857
|
[Feature Request]: Continuous batching
|
Does torchchat plan to support asynchronous requests and continuous batching?
To get higher tokens/second by making efficient use of compute, continuous batching is a common strategy that is used.
We could specify the `batch_size` `n` as a parameter, and `torchchat` behind the scenes would send `n` prompts of varying lengths asynchronously:
```
python3 torchchat.py generate llama3 --prompt "write me a story about a boy and his bear" --batch_size 8
```
|
https://github.com/pytorch/torchchat/issues/857
|
closed
|
[] | 2024-06-05T02:22:36Z
| 2024-06-14T09:21:53Z
| 1
|
agunapal
|
huggingface/transformers.js
| 792
|
Feature request: YOLO-World/Grounding DINO (Zero shot object detection)
|
### Question
Hi!
I'm trying out some of the zero shot capabilities and I've been working with the owlv2 but I was wondering, is support for yolo-world and grounding Dino coming? They seem to be faster than owlv2.
Thanks!
|
https://github.com/huggingface/transformers.js/issues/792
|
open
|
[
"question"
] | 2024-06-04T21:39:18Z
| 2024-06-24T07:04:27Z
| null |
rogueturnip
|
pytorch/xla
| 7,191
|
How do I know which pytorch parameter corresponds to which parameter in hlo ir
|
## ❓ Questions and Help
I am dumping the optimized HLO IR and designing a new backend. There are some parameters and their corresponding shapes in the IR file, but I don't know which parameter corresponds to which module in the defined PyTorch model. Is there a way to get the mapping between the model's inputs (weights and inputs) and the parameters in the HLO IR?
Thanks!
|
https://github.com/pytorch/xla/issues/7191
|
closed
|
[
"question"
] | 2024-06-04T18:32:56Z
| 2025-04-07T13:33:10Z
| null |
yao-jz
|
huggingface/transformers.js
| 791
|
env.allowLocalModels and env.allowRemoteModels
|
### Question
When I set env.allowLocalModels = true and look at the env object I see both
env.allowLocalModels and env.allowRemoteModels set to true. Does this mean that it will look for models locally first and then if not found go to the remoteHost?
|
https://github.com/huggingface/transformers.js/issues/791
|
open
|
[
"question"
] | 2024-06-04T17:07:38Z
| 2024-09-15T14:00:48Z
| null |
mram0509
|
pytorch/xla
| 7,189
|
Add example for training small LLM
|
## 📚 Documentation
Create an example on how to train a small LLM.
Add it to the examples directory here:
https://github.com/pytorch/xla/tree/master/examples
|
https://github.com/pytorch/xla/issues/7189
|
open
|
[
"docathon-h1-2024",
"advanced"
] | 2024-06-04T16:42:54Z
| 2024-06-19T01:14:21Z
| 4
|
alchemicduncan
|
pytorch/xla
| 7,185
|
Try running inference on an ARM CPU
|
## 📚 Documentation
Install the CPU PJRT plugin from the instructions here:
https://github.com/pytorch/xla/blob/master/plugins/cpu/README.md
Next, try getting a model to run on an ARM CPU; if it works, create a tutorial on how to get it running.
|
https://github.com/pytorch/xla/issues/7185
|
open
|
[
"docathon-h1-2024",
"advanced"
] | 2024-06-04T16:40:13Z
| 2024-06-17T17:59:07Z
| 4
|
alchemicduncan
|
pytorch/xla
| 7,183
|
Create a distributed and single device example
|
## 📚 Documentation
Select a model of your own to train. Then create an example of both running it on a single device, and running it on a distributed device of your choice.
Add both training examples that you came up with to the examples directory: https://github.com/pytorch/xla/tree/master/examples
|
https://github.com/pytorch/xla/issues/7183
|
open
|
[
"docathon-h1-2024",
"advanced"
] | 2024-06-04T16:38:24Z
| 2025-06-08T02:04:27Z
| 1
|
alchemicduncan
|
pytorch/xla
| 7,182
|
Try running Resnet example on GPU
|
## 📚 Documentation
Try running the Resnet training example on a GPU: https://github.com/pytorch/xla/blob/master/examples/train_resnet_base.py
If it works add a section about how to do it to the GPU instructions here: https://github.com/pytorch/xla/blob/master/docs/gpu.md
|
https://github.com/pytorch/xla/issues/7182
|
closed
|
[
"docathon-h1-2024",
"medium"
] | 2024-06-04T16:37:36Z
| 2024-06-11T18:37:09Z
| 1
|
alchemicduncan
|
pytorch/xla
| 7,180
|
Adding a new arg to a PyTorch op
|
## ❓ Questions and Help
I'm trying to add a new (optional) argument to the `cumsum` operator in PyTorch - a boolean arg `full` which prepends a 0 to the beginning of the returned tensor. I'd appreciate some help to figure out how to get XLA to build with this change, and what the update process should look like (considering that the XLA and pytorch repos will be out of sync during the development).
PR/issue on the PyTorch side:
https://github.com/pytorch/pytorch/pull/127675
https://github.com/pytorch/pytorch/issues/76191
The XLA builds are failing on my PR:
https://github.com/pytorch/pytorch/actions/runs/9360674517/job/25766868220
```
2024-06-04T03:56:39.3106543Z torch_xla/csrc/aten_xla_type.cpp:1147:12: error: no declaration matches 'at::Tensor torch_xla::XLANativeFunctions::cumsum(const at::Tensor&, int64_t, std::optional<c10::ScalarType>)'
2024-06-04T03:56:39.3108366Z 1147 | at::Tensor XLANativeFunctions::cumsum(const at::Tensor& self, int64_t dim,
2024-06-04T03:56:39.3109178Z | ^~~~~~~~~~~~~~~~~~
2024-06-04T03:56:39.3109813Z In file included from torch_xla/csrc/aten_xla_type.cpp:22:
2024-06-04T03:56:39.3111932Z bazel-out/k8-opt/bin/torch_xla/csrc/XLANativeFunctions.h:166:19: note: candidate is: 'static at::Tensor torch_xla::XLANativeFunctions::cumsum(const at::Tensor&, int64_t, std::optional<c10::ScalarType>, bool)'
2024-06-04T03:56:39.3114227Z 166 | static at::Tensor cumsum(const at::Tensor & self, int64_t dim, ::std::optional<at::ScalarType> dtype, bool full);
2024-06-04T03:56:39.3115256Z | ^~~~~~
2024-06-04T03:56:39.3115866Z In file included from torch_xla/csrc/aten_xla_type.cpp:22:
2024-06-04T03:56:39.3117419Z bazel-out/k8-opt/bin/torch_xla/csrc/XLANativeFunctions.h:14:8: note: 'struct torch_xla::XLANativeFunctions' defined here
2024-06-04T03:56:39.3118602Z 14 | struct XLANativeFunctions {
2024-06-04T03:56:39.3119113Z | ^~~~~~~~~~~~~~~~~~
```
I've tried patching the build on the XLA side:
https://github.com/pytorch/xla/compare/master...davidberard98:xla:update-cumsum-args?expand=1
This works when combined with my changes on the PyTorch side, but not when combined with the main branch of PyTorch today. i.e.:
* trunk pytorch + trunk xla -> builds
* pytorch w/ my patches + xla w/ my patches -> builds
* trunk pytorch + xla w/ my patches -> does not build
It seems like the issue is that the definition in `torch_xla/csrc/aten_xla_type.cpp` needs to match the signature in XLANativeFunctions.h (presumably code-genned from native_functions.yaml or similar?)
|
https://github.com/pytorch/xla/issues/7180
|
closed
|
[] | 2024-06-04T16:35:37Z
| 2024-06-10T16:47:49Z
| 0
|
davidberard98
|
huggingface/diffusers
| 8,400
|
How can we load a LoRA model from a single file?
|
```python
pipe.load_lora_weights("lora/aesthetic_anime_v1s.safetensors")
```
```
File "Z:\software\python11\Lib\site-packages\diffusers\loaders\lora.py", line 1230, in load_lora_weights
    raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
```
How can I use this model? https://civitai.com/models/295100?modelVersionId=331598
|
https://github.com/huggingface/diffusers/issues/8400
|
closed
|
[] | 2024-06-04T13:54:56Z
| 2024-06-04T15:53:32Z
| null |
xalteropsx
|
huggingface/datasets
| 6,953
|
Remove canonical datasets from docs
|
Remove canonical datasets from docs, now that we no longer have canonical datasets.
|
https://github.com/huggingface/datasets/issues/6953
|
closed
|
[
"documentation"
] | 2024-06-04T12:09:03Z
| 2024-07-01T11:31:25Z
| 1
|
albertvillanova
|
pytorch/ao
| 320
|
Saving autoquant quantization plan
|
First of all, thank you for the great library! It makes quantization really easy.
Is it possible to run autoquant once and later apply the same quantization plan again? Or would I need to manually look at the logs right now to see what autoquant came up with, so I can apply the same quantization later?
// I see there's `AUTOQUANT_CACHE` that gets used to save the timings, maybe just saving/loading that will do?
// Seems like ^ works!
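For anyone else who lands here, a sketch of what I ended up doing (the import path is an assumption on my side; the only thing confirmed above is that the cache is called `AUTOQUANT_CACHE`):
```python
import pickle
from torchao.quantization.autoquant import AUTOQUANT_CACHE  # path assumed

# After running autoquant once, persist the measured timings / decisions.
with open("autoquant_cache.pkl", "wb") as f:
    pickle.dump(AUTOQUANT_CACHE, f)

# Later, restore them before running autoquant again so it reuses the same plan.
with open("autoquant_cache.pkl", "rb") as f:
    AUTOQUANT_CACHE.update(pickle.load(f))
```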
|
https://github.com/pytorch/ao/issues/320
|
closed
|
[
"question"
] | 2024-06-04T11:10:41Z
| 2024-06-07T10:45:07Z
| null |
RobinKa
|
huggingface/datasets
| 6,951
|
load_dataset() should load all subsets, if no specific subset is specified
|
### Feature request
Currently load_dataset() forces users to specify a subset. Example:
```python
from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>()
1 from datasets import load_dataset
----> 2 dataset = load_dataset("m-a-p/COIG-CQIA")
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)
582 if not config_kwargs:
583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')"
--> 584 raise ValueError(
585 "Config name is missing."
586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}"
ValueError: Config name is missing.
Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu']
Example of usage:
`load_dataset('coig-cqia', 'chinese_traditional')`
```
This means one cannot load all the subsets at once. I guess one workaround is to manually specify the subset files like in [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy.
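For reference, the closest thing I found today is looping over the configs myself (a sketch; assumes the subsets are similar enough to be loaded the same way):
```python
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("m-a-p/COIG-CQIA")
all_subsets = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in configs}
```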
### Motivation
Ideally, if no subset is specified, the API should just try to load all subsets. This makes it much easier to handle datasets with subsets.
### Your contribution
Not sure since I'm not familiar w/ the lib src.
|
https://github.com/huggingface/datasets/issues/6951
|
closed
|
[
"enhancement"
] | 2024-06-04T11:02:33Z
| 2024-11-26T08:32:18Z
| 5
|
windmaple
|
huggingface/datasets
| 6,950
|
`Dataset.with_format` behaves inconsistently with documentation
|
### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists.
> In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor.
> A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor.
But I get a single tensor by default, which is inconsistent with the description.
Actually the current behavior seems more reasonable to me. Therefore, the document needs to be modified.
### Steps to reproduce the bug
```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([[1, 2],
[3, 4]])}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
array([[1, 2],
[3, 4]])>}
```
### Expected behavior
```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': [tensor([1, 2]), tensor([3, 4])]}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.RaggedTensor [[1, 2], [3, 4]]>}
```
### Environment info
datasets==2.19.1
torch==2.1.0
tensorflow==2.13.1
|
https://github.com/huggingface/datasets/issues/6950
|
closed
|
[
"documentation"
] | 2024-06-04T09:18:32Z
| 2024-06-25T08:05:49Z
| 2
|
iansheng
|
huggingface/sentence-transformers
| 2,708
|
What is the training order in the multi-task learning example?
|
hello. In the case of multi-task learning in the example below, what is the learning order? The example below is taken from https://www.sbert.net/examples/training/quora_duplicate_questions/README.html.
Regarding the dataset below, I know that the learning results are good if you learn mnrl after learning the cl dataset. Does the learning proceed sequentially like this? Or does it go the other way? Simply put, which of the three below is your learning order?
1. cl -> mnrl
2. mnrl -> cl
3. shuffled two datasets
```
Multi-Task-Learning
[ContrastiveLoss]
(https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#sentence_transformers.losses.ContrastiveLoss) works well for pair classification, i.e., given two pairs, are these duplicates or not. It pushes negative pairs far away in vector space, so that the distinguishing between duplicate and non-duplicate pairs works good.
[MultipleNegativesRankingLoss]
(https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#sentence_transformers.losses.MultipleNegativesRankingLoss) on the other sides mainly reduces the distance between positive pairs out of large set of possible candidates. However, the distance between non-duplicate questions is not so large, so that this loss does not work that well for pair classification.
In [training_multi-task-learning.py](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/quora_duplicate_questions/training_multi-task-learning.py) I demonstrate how we can train the network with both losses. The essential code is to define both losses and to pass it to the fit method.
```
```py
from datasets import load_dataset
from sentence_transformers.losses import ContrastiveLoss, MultipleNegativesRankingLoss
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformer
model_name = "stsb-distilbert-base"
model = SentenceTransformer(model_name)
# https://huggingface.co/datasets/sentence-transformers/quora-duplicates
mnrl_dataset = load_dataset(
"sentence-transformers/quora-duplicates", "triplet", split="train"
) # The "pair" subset also works
mnrl_train_dataset = mnrl_dataset.select(range(100000))
mnrl_eval_dataset = mnrl_dataset.select(range(100000, 101000))
mnrl_train_loss = MultipleNegativesRankingLoss(model=model)
# https://huggingface.co/datasets/sentence-transformers/quora-duplicates
cl_dataset = load_dataset("sentence-transformers/quora-duplicates", "pair-class", split="train")
cl_train_dataset = cl_dataset.select(range(100000))
cl_eval_dataset = cl_dataset.select(range(100000, 101000))
cl_train_loss = ContrastiveLoss(model=model, margin=0.5)
# Create the trainer & start training
trainer = SentenceTransformerTrainer(
model=model,
train_dataset={
"mnrl": mnrl_train_dataset,
"cl": cl_train_dataset,
},
eval_dataset={
"mnrl": mnrl_eval_dataset,
"cl": cl_eval_dataset,
},
loss={
"mnrl": mnrl_train_loss,
"cl": cl_train_loss,
},
)
trainer.train()
```
|
https://github.com/huggingface/sentence-transformers/issues/2708
|
closed
|
[] | 2024-06-04T07:42:37Z
| 2024-06-04T08:29:30Z
| null |
daegonYu
|
pytorch/xla
| 7,177
|
Why not register low precision autocast for scaled dot product attention?
|
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
MultiheadAttention cannot run in auto mixed precision mode.
Steps to reproduce the behavior:
```python
import torch
import torch.nn as nn
import torch_xla
import torch_xla.core.xla_model as xm
xla_device = xm.xla_device()
embed_dim = 1024
num_heads = 64
multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
input = torch.ones([4,32,1024], dtype=torch.float32).to(xla_device)
attn_mask = torch.ones([32,32], dtype=torch.float32).to(xla_device)
multihead_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True).to(xla_device)
with torch.amp.autocast("xla", dtype=torch.float16):
attn_output = multihead_attn(input, input, input, attn_mask=attn_mask, need_weights=False)
xm.mark_step()
print(attn_output[0].dtype)
print(attn_output)
```
RuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: c10::Half instead.
## Expected behavior
MultiHeadAttention module can run successfully and get correct result tensor type.
## Environment
- Reproducible on XLA backend [CPU/TPU/CUDA]: CPU
## Additional context
Though I reproduced the bug on CPU, I believe it will occur with any kind of PJRT device except CUDA; I can reproduce it on an Intel GPU as well. To solve this bug, we only need to register low precision autocast for scaled dot product attention, and I have verified this. I want to ask why we don't register this and whether there is any problem with doing so.
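For reference, a minimal user-side workaround sketch (an assumption on my part, not the proposed fix of registering the autocast rule): cast the float mask to the autocast dtype so the dtype check in scaled dot product attention passes.
```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()
mha = nn.MultiheadAttention(1024, 64, batch_first=True).to(device)
x = torch.ones(4, 32, 1024).to(device)
attn_mask = torch.zeros(32, 32).to(device)  # additive float mask; zeros = no masking

with torch.amp.autocast("xla", dtype=torch.float16):
    # Workaround sketch (assumption): match the mask dtype to the autocast dtype.
    out, _ = mha(x, x, x, attn_mask=attn_mask.to(torch.float16), need_weights=False)
xm.mark_step()
print(out.dtype)
```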
|
https://github.com/pytorch/xla/issues/7177
|
closed
|
[] | 2024-06-04T06:17:53Z
| 2024-06-17T02:58:42Z
| 2
|
ghost
|
huggingface/datasets
| 6,949
|
load_dataset error
|
### Describe the bug
Why does the program get stuck when I use the load_dataset method? It is still stuck after loading for several hours. In fact, my JSON file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset
3. data = load_dataset('json', data_files='train.json')
### Expected behavior
It is able to load my json correctly
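As a point of comparison, a minimal sketch (assuming train.json is a top-level JSON array of flat records) that builds the Dataset without going through the `json` packaged builder, which can help tell whether the hang is in `load_dataset` itself:
```python
import json
from datasets import Dataset

# Assumption: train.json looks like [{"text": "..."}, {"text": "..."}, ...]
with open("train.json", "r", encoding="utf-8") as f:
    records = json.load(f)

data = Dataset.from_list(records)
print(data)
```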
### Environment info
datasets==2.19.2
|
https://github.com/huggingface/datasets/issues/6949
|
closed
|
[] | 2024-06-04T01:24:45Z
| 2024-07-01T11:33:46Z
| 2
|
frederichen01
|
huggingface/transformers.js
| 789
|
Can I use Xenova/Phi-3-mini-4k-instruct model server side?
|
### Question
Hey there! I'm trying to run the Xenova/Phi-3-mini-4k-instruct model using transformers.js 2.17.2 on the server in my Node.js project, but I get an error saying that Phi-3 is not supported. Can I make it work somehow? Any ideas appreciated.
|
https://github.com/huggingface/transformers.js/issues/789
|
closed
|
[
"question"
] | 2024-06-03T18:43:20Z
| 2024-06-04T04:57:42Z
| null |
StepanKukharskiy
|
pytorch/serve
| 3,172
|
Two-way authentication/Mutual SSL in gRPC
|
### The feature
TorchServe currently supports SSL for gRPC, but only one-way authentication. Can we make it two-way?
### Motivation, pitch
More security
### Alternatives
A reverse proxy like nginx is an option, I think.
### Additional context
_No response_
|
https://github.com/pytorch/serve/issues/3172
|
open
|
[
"enhancement"
] | 2024-06-03T14:58:07Z
| 2024-06-03T17:37:53Z
| 0
|
MohamedAliRashad
|
huggingface/datasets
| 6,947
|
FileNotFoundError: error when loading C4 dataset
|
### Describe the bug
Can't load the C4 dataset.
When I switch the datasets package to 2.12.2, I get datasets.utils.info_utils.ExpectedMoreSplits: {'train'}.
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')
3. raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
### Expected behavior
The data was successfully imported
### Environment info
python version 3.9
datasets version 2.19.2
|
https://github.com/huggingface/datasets/issues/6947
|
closed
|
[] | 2024-06-03T13:06:33Z
| 2024-06-25T06:21:28Z
| 15
|
W-215
|
huggingface/dataset-viewer
| 2,878
|
Remove or increase the 5GB limit?
|
The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.
Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination).
Should we provide a way to increase or remove this limit?
|
https://github.com/huggingface/dataset-viewer/issues/2878
|
closed
|
[
"question",
"feature request"
] | 2024-06-03T08:55:08Z
| 2024-07-22T11:32:49Z
| null |
severo
|
huggingface/transformers
| 31,195
|
How to get back the input time series after using PatchTSTForPretraining?
|
### System Info
-
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My model is PatchTSTForPretraining(
(model): PatchTSTModel(
(scaler): PatchTSTScaler(
(scaler): PatchTSTStdScaler()
)
(patchifier): PatchTSTPatchify()
(masking): PatchTSTMasking()
(encoder): PatchTSTEncoder(
(embedder): PatchTSTEmbedding(
(input_embedding): Linear(in_features=5, out_features=768, bias=True)
)
(positional_encoder): PatchTSTPositionalEncoding(
(positional_dropout): Identity()
)
(layers): ModuleList(
(0-11): 12 x PatchTSTEncoderLayer(
(self_attn): PatchTSTAttention(
(k_proj): Linear(in_features=768, out_features=768, bias=True)
(v_proj): Linear(in_features=768, out_features=768, bias=True)
(q_proj): Linear(in_features=768, out_features=768, bias=True)
(out_proj): Linear(in_features=768, out_features=768, bias=True)
)
(dropout_path1): Identity()
(norm_sublayer1): PatchTSTBatchNorm(
(batchnorm): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ff): Sequential(
(0): Linear(in_features=768, out_features=3072, bias=True)
(1): GELUActivation()
(2): Identity()
(3): Linear(in_features=3072, out_features=768, bias=True)
)
(dropout_path3): Identity()
(norm_sublayer3): PatchTSTBatchNorm(
(batchnorm): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
)
(head): PatchTSTMaskPretrainHead(
(dropout): Dropout(p=0.0, inplace=False)
(linear): Linear(in_features=768, out_features=5, bias=True)
)
)
prediction_output = model(time_series_data)
Output:
time_series_data = tensor([[[430.3000],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[430.3000],
[430.3000],
[428.9600],
[430.3000],
[430.3000],
[430.3000]]], device='cuda:0')
prediction_output = tensor([[[[-0.2321, 0.1897, 0.4731, 0.8893, 0.6723],
[-0.5465, -0.9017, 0.0778, 0.0078, 1.3323],
[ 0.4945, 0.5145, -0.5386, -0.7045, -1.5766],
[ 0.2064, 0.6290, -0.8145, 1.0450, -0.2886]]]], device='cuda:0')
### Expected behavior
x_hat = self.head(model_output.last_hidden_state) produces output which is not consistent with the range of the input time series values. I am trying to pretrain PatchTST for autoencoding. How do I get back the input time series?
|
https://github.com/huggingface/transformers/issues/31195
|
closed
|
[] | 2024-06-03T06:44:31Z
| 2024-10-26T07:44:56Z
| null |
nikhilajoshy
|
huggingface/optimum
| 1,885
|
onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference
|
### System Info
Hi,
I did a test comparing ONNX optimum export + ORTOptimizer inference vs. setfit.export_onnx + onnxruntime.InferenceSession.
It seems that the optimum ORTOptimizer inference runs slower than the setfit.export_onnx onnxruntime.InferenceSession inference.
Any idea what the reason could be?
I also changed AutoOptimizationConfig.O2() to AutoOptimizationConfig.O4() - still onnxruntime.InferenceSession is faster.
Set train_model = True to train the fine-tuned model first and export it.
GPU: NVIDIA T4
output:
```
python setfit-onnx-optimum-example.py
Repo card metadata block was not found. Setting CardData to empty.
Model size (MB) - 86.68
Accuracy on test set - 0.888
Average latency (ms) - 6.23 +\- 0.51
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: BertModel *****
Using framework PyTorch: 2.2.1+cu121
Overriding 1 configuration item(s)
- use_cache -> False
2024-06-02 22:27:53.640590789 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-06-02 22:27:53.640623671 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/optimum/onnxruntime/configuration.py:770: FutureWarning: disable_embed_layer_norm will be deprecated soon, use disable_embed_layer_norm_fusion instead, disable_embed_layer_norm_fusion is set to True.
warnings.warn(
Optimizing model...
Configuration saved in all-MiniLM-L6-v2_auto_opt_O2/ort_config.json
Optimized model saved at: all-MiniLM-L6-v2_auto_opt_O2 (external data format: False; saved all tensor to one file: True)
2024-06-02 22:27:55.548291362 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-06-02 22:27:55.548316947 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Model size (MB) - 86.10
Accuracy on test set - 0.888
Average latency (ms) - 1.83 +\- 0.46
Speedup: 3.40x
2024-06-02 22:27:59.483816381 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 2 Memcpy nodes are added to the graph main_graph_ed6a60ecdb95455bac10d5392cf78d36 for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-06-02 22:27:59.485393795 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-06-02 22:27:59.485413289 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
providers: ['CUDAExecutionProvider', 'CPUExecutionProvider']
Model size (MB) - 86.23
Accuracy on test set - 0.888
Average latency (ms) - 1.40 +\- 0.17
Speedup: 4.44x
```
code:
```
# https://github.com/huggingface/setfit/blob/main/notebooks/setfit-onnx-optimum.ipynb
from pathlib import Path
from time import perf_counter
import evaluate
import numpy as np
import torch
from tqdm.auto import tqdm
import os
import matplotlib.pyplot as plt
import pandas as pd
from setfit import SetFitModel
from setfit import SetFitModel, Trainer, TrainingArguments
from datasets import load_dataset
from setfit.exporters.utils import mean_pooling
from optimum.onnxruntime import ORTModelForFeatureExtraction, AutoOptimizationConfig, ORTOptimizer
from transformers import AutoTokenizer
from setfit.exporters.onnx import export_onnx
import onnxruntime
metric = evaluate.load("accuracy")
train_model = False
class PerformanceBenchmark:
def __init__(self, model, dataset, optim_type):
self.model = model
self.dataset = dataset
self.optim_type = optim_type
def compute_accuracy(self):
preds = self.model.predict(self.dataset["text"])
labels = self.dataset["label"]
accuracy = metric.compute(predictions=preds, references=labels)
print(f"Accuracy on test set - {accuracy['accuracy']:.3f}")
return accuracy
def compute_size(self):
state_dict = self.model.model_body.state_dict()
tmp_path = Path("model.pt
|
https://github.com/huggingface/optimum/issues/1885
|
open
|
[
"bug"
] | 2024-06-02T22:34:37Z
| 2024-06-08T03:02:40Z
| 1
|
geraldstanje
|
huggingface/chat-ui
| 1,241
|
How to deploy to Vercel
|
Hi,
I am currently having trouble deploying to Vercel; I am experiencing a 404 NOT FOUND error. I think I am using the wrong build command or the wrong default directory. Can someone please help?

Thank you!
|
https://github.com/huggingface/chat-ui/issues/1241
|
open
|
[
"support"
] | 2024-06-02T10:05:45Z
| 2025-01-10T17:00:37Z
| null |
haydenkong
|
huggingface/transformers.js
| 788
|
Is it possible to use transformers.js to implement audio source separation tasks?
|
### Question
Hello, I have a beginner's question.
I want to remove the human voice from the audio of a video and keep only the background sound, in the browser. The idea is to load an audio source separation model with transformers.js, separate the background sound from the human voice, and then return only the background sound.
But I couldn't find relevant examples in the documentation, so I was wondering whether this can be implemented. If so, what are the learning or research paths?
Looking forward to your reply.
|
https://github.com/huggingface/transformers.js/issues/788
|
open
|
[
"question"
] | 2024-06-02T04:00:55Z
| 2024-12-26T06:05:26Z
| null |
asasas234
|
huggingface/lerobot
| 238
|
How to use on WSL? Cannot visualize
|
How to use on WSL? I cannot visualize.
|
https://github.com/huggingface/lerobot/issues/238
|
closed
|
[
"simulation"
] | 2024-06-02T03:58:44Z
| 2025-10-08T08:25:31Z
| null |
jackylee1
|
huggingface/chat-ui
| 1,236
|
No Setup Deploy: Multiple models supported?
|
How can I make **multiple models** available on Chat UI using **No Setup Deploy**?
## Further Details
The form (see below) seems to only allow one model.
<details><summary>Form</summary>
<p>
<img width="661" alt="image" src="https://github.com/huggingface/chat-ui/assets/14152377/e5595c34-b5c5-4c09-8b83-d5a0f839016d">
</p>
</details>
## Tried so far
(Without success)
- I checked the [full tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces) linked from the [README.md](https://github.com/huggingface/chat-ui/blob/93b39a0beb72378c76d5d146bfd3a8355c1d110d/README.md), but could find neither how to use multiple models nor a note about a limitation.
- I tried deploying one model and adding an `.env.local` to the deployment on my space, but the web interface threw an error when trying to commit `.env.local` due to potential secrets included in the file.
|
https://github.com/huggingface/chat-ui/issues/1236
|
open
|
[
"enhancement",
"docker"
] | 2024-06-01T11:41:22Z
| 2024-06-03T07:55:12Z
| 1
|
rodrigobdz
|
huggingface/optimum
| 1,884
|
Add support for porting CLIPVisionModelWithProjection
|
### Feature request
Currently there is no support for exporting CLIPVisionModelWithProjection models from the transformers library to ONNX through optimum. I'd like to add support for this, for which we'd need to change the optimum/exporters/onnx/model_configs.py file. I'd like to request your guidance on how to understand the code and build this feature.
### Motivation
I need the same for a personal project and would be happy to contribute to the library as well.
### Your contribution
I would be happy to submit a PR
|
https://github.com/huggingface/optimum/issues/1884
|
open
|
[
"feature-request",
"onnx"
] | 2024-05-31T22:25:45Z
| 2024-10-09T07:56:28Z
| 0
|
mr-sarthakgupta
|
huggingface/datasets
| 6,940
|
Enable Sharding to Equal Sized Shards
|
### Feature request
Add an option when sharding a dataset to have all shards be the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).". However, when using FSDP we want the shards to have the same size. This requires the user to manually handle this situation, but it would be nice if we had an option to shard the dataset into equally sized shards.
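A minimal sketch of the current (unequal) behaviour quoted above:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
# 10 examples over 3 shards -> shard sizes [4, 3, 3], i.e. not all equal
print([len(ds.shard(num_shards=3, index=i)) for i in range(3)])
```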
### Your contribution
For now, just a PR. I can also add code that does what is needed, though probably not efficiently.
Shard to equal size by duplication:
```python
from datasets import concatenate_datasets

# dataset, num_shards and shard_idx are assumed to be defined already
remainder = len(dataset) % num_shards
num_missing_examples = (num_shards - remainder) % num_shards  # 0 when already divisible
duplicated = dataset.select(list(range(num_missing_examples)))
dataset = concatenate_datasets([dataset, duplicated])
shard = dataset.shard(num_shards, shard_idx)
```
Or by truncation:
```
shard = dataset.shard(num_shards, shard_idx)
num_examples_per_shard = len(dataset) // num_shards
shard = shard.select(list(range(num_examples_per_shard)))
```
|
https://github.com/huggingface/datasets/issues/6940
|
open
|
[
"enhancement"
] | 2024-05-31T21:55:50Z
| 2024-06-01T07:34:12Z
| 0
|
yuvalkirstain
|
pytorch/tutorials
| 2,894
|
~PyTorch Docathon H1 2024!~
|
### **PyTorch Docathon H1 2024!**
Hooray! It's this time of the year again and we are excited for you to participate in the PyTorch docathon. We have the following repositories participating:
- [pytorch/pytorch](https://github.com/pytorch/pytorch)
- [pytorch/tutorials](https://github.com/pytorch/tutorials)
- [pytorch/xla](https://github.com/pytorch/xla)
- [pytorch-labs/torchfix](https://github.com/pytorch-labs/torchfix)
The docathon starts on June 4 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on June 16th.
#### **Date and location**
**WHEN:** The docathon starts on June 4 at 10 AM PST. Please do not work on tasks until then. We will continue accepting new submissions until 5 PM PST on June 16th.
**WHERE:** Virtual
**WHAT:** Issues with the docathon-h1-2024 label - will be posted on June 4th.
Watch our intro video to learn more details about the event.
### **Can everyone participate?**
We encourage everyone to consider participating in the docathon but there are a few things we expect from the participants:
- You must have a GitHub account and know how to use Git and GitHub, how to submit or rebase your PR on the latest main branch, how to fork or clone the repo, how to view errors in the CI and troubleshoot. We reserve the right to reject incorrectly submitted PRs.
- You must be familiar with Python, the basics of Machine Learning, and have at least a basic knowledge of PyTorch. Familiarity with Sphinx, sphinx-gallery, and reStructuredText is a plus.
Before you start contributing make sure to read [Linux Foundation Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/) as well as the [GitHub Code of Conduct](https://docs.github.com/en/site-policy/github-terms/github-community-code-of-conduct).
### **What contributions are we looking for?**
All issues for this docathon are tagged with the _docathon-h1-2024_ label. Please note that contributions that address other issues won't be counted. We are primarily looking for the following contributions:
- Docstring fixes
- Documentation bug fixes
- Tutorial fixes and testing
**NOTE:** Due to the large number of RSVPs, the tasks are provided on a first come, first served basis; please don't hoard the tasks!
### **Difficulty Levels**
The issues have three levels of difficulty: _easy, medium_, and _advanced_. If this is your first time contributing to PyTorch, we recommend that you start with an issue that is tagged as easy.
### **How to contribute to tutorials?**
1. Read [PyTorch Contributor Document](https://github.com/pytorch/tutorials/blob/main/CONTRIBUTING.md?rgh-link-date=2023-05-26T19%3A09%3A32Z) for general guidelines on how the submission process works and overall style and voice.
2. Pick an issue that is labeled as _docathon-h1-2024_.
3. In the issue, add a comment with the text /assigntome. If the issue is already assigned, please find another issue to work on. We ask that you assign one issue at a time - we want to give everyone a fair chance to participate. When you are done with one issue and get it approved, you can assign another one to yourself and start working on it.
4. If you are submitting a new tutorial, use [this template](https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py?rgh-link-date=2023-05-26T19%3A09%3A32Z).
5. Fork or clone the PyTorch repository to your computer. For simple fixes, like incorrect URLs, you could use the GitHub UI as well.
6. Create a branch and work on the fix.
7. Test your fix by running the single tutorial locally. Don't run the whole build as it takes hours and requires a GPU. You can run one tutorial as a script with `python3 <tutorial-name.py>` or with `GALLERY_PATTERN="neural_style_transfer_tutorial.py" make html`.
8. After you fix all the issues, you are ready to submit your PR.
### **Submit Your PR**
1. Submit your PR referencing the issue you've picked. For example:

3. If you have not yet, sign the Contributor License Agreement (CLA) - prompted as a check in the PR. We can't accept any PRs without a signed CLA.
4. Watch for any CI errors and fix as needed - all checks must pass successfully.
5. When the build is finished, you will see a preview link to preview your changes.
6. The reviewers might provide feedback that we expect you to address.
7. When all feedback is addressed and your PR is approved - one of the reviewers will merge your PR.
### **Can I partner with someone to work on an issue?**
Unless you are working on a completely new tutorial from scratch, most of the issues should be possible to address on your own. If you decide to partner with someone, you can find someone to work with on our Slack channel by posting a free-form request to collaborate. One individual from the group can submit a PR referring
|
https://github.com/pytorch/tutorials/issues/2894
|
closed
|
[
"docathon-h1-2024"
] | 2024-05-31T16:25:09Z
| 2024-07-15T18:38:28Z
| 0
|
sekyondaMeta
|
pytorch/examples
| 1,264
|
reference of weight initialization for llama2 model
|
First of all, thank you for supporting native TP for torch.
I have just been reading your TP tutorial code and found [the initialization detail](https://github.com/pytorch/examples/blob/main/distributed/tensor_parallelism/llama2_model.py#L316-L319) is different from the PyTorch default parameterization (Kaiming init).
Is there any reference for the depth init?
|
https://github.com/pytorch/examples/issues/1264
|
closed
|
[] | 2024-05-31T03:18:46Z
| 2024-05-31T04:18:26Z
| 1
|
SeunghyunSEO
|
pytorch/examples
| 1,263
|
`local_rank` or `rank` for multi-node FSDP
|
I am wondering, for multi-node FSDP, do `local_rank` and `rank` have any obvious difference here?
I think I understand that `local_rank` is the rank within a node.
I see that in a few places `local_rank` is specifically used.
For example
https://github.com/pytorch/examples/blob/main/distributed/FSDP/T5_training.py#L111
`torch.cuda.set_device(local_rank)`
and
https://github.com/pytorch/examples/blob/main/distributed/FSDP/utils/train_utils.py#L48
`batch[key] = batch[key].to(local_rank)`
Is there any problem if using `rank` instead?
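For context, a sketch of where the two values typically come from under torchrun (my understanding, not taken from the linked scripts): `rank` is the global index across all nodes, while `local_rank` is the GPU index within the current node, and device placement needs the local one because each node only has GPUs 0..N-1.
```python
import os
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()                      # global rank: 0 .. world_size - 1
local_rank = int(os.environ["LOCAL_RANK"])  # GPU index on this node: 0 .. gpus_per_node - 1

# On a single node rank == local_rank, so either works; on multiple nodes rank can
# exceed the number of local GPUs, so set_device / .to() must use local_rank.
torch.cuda.set_device(local_rank)
x = torch.ones(1).to(local_rank)
```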
|
https://github.com/pytorch/examples/issues/1263
|
open
|
[] | 2024-05-30T19:47:21Z
| 2024-05-30T19:47:21Z
| 0
|
Emerald01
|
huggingface/chat-ui
| 1,225
|
SyntaxError: JSON5: invalid character 'u' at 1:1
|
Where can I find out more about the following error? Is there an issue with the existing template?
## Reproduction Steps
1. Deploy [Chat UI using default template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) with `MONGO_URL` set to `mongodb+srv://<USER_SECRET>:<PASSWORD_SECRET>@<CLUSTER_SECRET>`
2. Add secret called `HF_TOKEN` with access token value.
## Error Logs
In addition to https://github.com/huggingface/chat-ui/issues/1174, the following error is shown:
```
2024-05-30T11:56:43: PM2 log: [--no-daemon] Exit on target PM2 exit pid=403
11:56:43 2|index | You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:
11:56:43 2|index | SyntaxError: JSON5: invalid character 'u' at 1:1
11:56:43 2|index | at syntaxError (/app/node_modules/json5/lib/parse.js:1110:17)
11:56:43 2|index | at invalidChar (/app/node_modules/json5/lib/parse.js:1055:12)
11:56:43 2|index | at Object.value (/app/node_modules/json5/lib/parse.js:309:15)
11:56:43 2|index | at lex (/app/node_modules/json5/lib/parse.js:100:42)
11:56:43 2|index | at Object.parse (/app/node_modules/json5/lib/parse.js:25:17)
11:56:43 2|index | at file:///app/build/server/chunks/auth-9412170c.js:28:16
11:56:43 2|index | at ModuleJob.run (node:internal/modules/esm/module_job:222:25)
11:56:43 2|index | at async ModuleLoader.import (node:internal/modules/esm/loader:316:24)
11:56:43 2|index | at async Server.init (file:///app/build/server/index.js:4189:24)
11:56:43 2|index | at async file:///app/build/handler.js:1140:1
```
<details><summary>Full error log</summary>
<p>
```
===== Application Startup at 2024-05-30 09:52:12 =====
2024-05-30T09:54:31.991512Z INFO text_generation_launcher: Args {
model_id: "mistralai/Mistral-7B-Instruct-v0.1",
revision: None,
validation_workers: 2,
sharded: None,
num_shard: Some(
1,
),
quantize: None,
speculate: None,
dtype: None,
trust_remote_code: true,
max_concurrent_requests: 128,
max_best_of: 2,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: None,
max_input_length: None,
max_total_tokens: None,
waiting_served_ratio: 0.3,
max_batch_prefill_tokens: None,
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: None,
hostname: "r-center-for-humans-and-machines-llm-stresstest-ubo8g-c2578-oc7",
port: 8080,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "localhost",
master_port: 29500,
huggingface_hub_cache: Some(
"/data",
),
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 1.0,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
cors_allow_origin: [],
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: false,
max_client_batch_size: 4,
}
2024-05-30T09:54:31.991620Z INFO hf_hub: Token file not found "/home/user/.cache/huggingface/token"
2024-05-30T09:54:32.027992Z INFO text_generation_launcher: Default `max_input_tokens` to 4095
2024-05-30T09:54:32.028013Z INFO text_generation_launcher: Default `max_total_tokens` to 4096
2024-05-30T09:54:32.028016Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 4145
2024-05-30T09:54:32.028018Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
2024-05-30T09:54:32.028022Z WARN text_generation_launcher: `trust_remote_code` is set. Trusting that model `mistralai/Mistral-7B-Instruct-v0.1` do not contain malicious code.
2024-05-30T09:54:32.028109Z INFO download: text_generation_launcher: Starting download process.
{"t":{"$date":"2024-05-30T11:54:32.245+02:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":21},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":21},"outgoing":{"minWireVersion":6,"maxWireVersion":21},"isInternalClient":true}}}
{"t":{"$date":"2024-05-30T11:54:32.246+02:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2024-05-30T11:54:32.247+02:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2024-05-30T11:54:32.248+02:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","
|
https://github.com/huggingface/chat-ui/issues/1225
|
open
|
[
"docker"
] | 2024-05-30T11:07:36Z
| 2025-01-16T22:54:08Z
| 8
|
rodrigobdz
|
huggingface/chat-ui
| 1,221
|
500 Internal Server Error with chat-ui
|
I started an inference server at http://192.168.0.185:7777/generate_stream using text-generation-inference (TGI) v2.0.4. When executing requests with curl, the inference results come back normally. For ease of use, I am going to use chat-ui. Below is the content of chat-ui's .env.local file.
```
$ vi .env.local
1 MONGODB_URL=mongodb://127.0.0.1:27017
2 HF_TOKEN=hf_***********************************
3 ALLOW_INSECURE_COOKIES=true
4 MODELS=`[
5 {
6 "name":"samsung-codellama3-70b-custom",
7 "endpoints":[{"type":"tgi","url":"http://192.168.0.185:7777/generate_stream"}],
8 "description":"A_Coding_Assistant_Model",
9 "userMessageToken":"<|prompter|>",
10 "assistantMessageToken":"<|assistant|>",
11 "messageEndToken":"</s>",
12 "preprompt":"It_is_an_LLM-based_AI_assistant."',
13 "parameters":{
14 "temperature":0.2,
15 "top_p":0.9,
16 "repetition_penalty":1.2,
17 "top_k":10,
18 "truncate":1000,
19 "max_new_tokens":500
20 }
21 }
22 ]`
```
Then I run the `$ docker run -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui ghcr.io/huggingface/chat-ui-db` command. Unfortunately, when I visited http://localhost:3000 with the MS Edge web browser, I got the error "500: An error occurred" as shown below.
* Screenshot:

* log message:
`{"level":50,"time":1717033937576,"pid":30,"hostname":"c5e9372bf1c1","locals":{"sessionId":"f19bea94fb83ffe9b2aa5d9c3247d9dc1e819772e3b0b4557294cc9a7e884bf0"},"url":"http://localhost:3000/","params":{},"request":{},"error":{"lineNumber":1,"columnNumber":1},"errorId":"7b3df79b-b4d0-4573-b92d-4ba0c182828b"}`
I am wondering what could be causing this error. Any hints to fix this issue are welcome.
#### References
* https://github.com/huggingface/chat-ui/issues?q=is%3Aissue+%22internal+server+error%22
* https://github.com/huggingface/chat-ui/blob/main/src/lib/server/models.ts#L198
|
https://github.com/huggingface/chat-ui/issues/1221
|
closed
|
[
"support"
] | 2024-05-30T00:35:58Z
| 2024-05-31T00:19:49Z
| 4
|
leemgs
|
huggingface/transformers.js
| 785
|
Using AutoModel, AutoTokenizer with distilbert models
|
### Question
Does transformers.js have a function to get the label after getting the logits? How to get the labels from the inference output?
let tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModel.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
|
https://github.com/huggingface/transformers.js/issues/785
|
open
|
[
"question"
] | 2024-05-29T20:35:17Z
| 2024-05-30T11:09:17Z
| null |
mram0509
|
huggingface/chat-ui
| 1,220
|
A few questions about the Cloudflare integration
|
Howdy,
Working on a corresponding page for this in the [Cloudflare docs](https://developers.cloudflare.com/workers-ai/) and had a few [questions that I need answered](https://github.com/cloudflare/cloudflare-docs/pull/14488#issuecomment-2101481990) in this PR.
## Questions
1. If I'm reading [this line](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L18C21-L18C29) correctly, it sounds like [their example is actually incorrect](https://github.com/huggingface/chat-ui/blob/main/README.md?plain=1#L598) and might need to be updated?
2. If ^^^ is correct, does that mean that we should also be specifying the [`model` parameter](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L19) w/in the endpoint configuration?
3. Is it a correct assumption that this only works with models prefixed with `@hf`? I think so, based on [their code](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L19).
Mind helping me out so I can get this live in our docs?
|
https://github.com/huggingface/chat-ui/issues/1220
|
closed
|
[
"documentation"
] | 2024-05-29T19:11:14Z
| 2024-06-20T12:53:52Z
| 3
|
kodster28
|
huggingface/transformers.js
| 784
|
Shouldn't this work? #v3
|
### Question
### Issue with Transformer.js v3 and WebGPU
#### Description
Yesterday I installed `transformers.js` from the "v3" branch to test the new features with WebGPU, but I get an error.
#### Error Message
```
@xenova_transformers.js?v=3b2ad0ed:24861 Uncaught (in promise)
Error: This pipeline is not yet supported in Transformers.js v3.
```
#### My code
```javascript
const transcriber = await pipeline("automatic-speech-recognition", "Xenova/whisper-small.en", {
device: 'webgpu',
dtype: 'fp32'
});
```
#### Additional Information
With the following code, it works perfectly fine:
```javascript
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
device: 'webgpu',
dtype: 'fp32', // or 'fp16'
});
```
|
https://github.com/huggingface/transformers.js/issues/784
|
open
|
[
"question"
] | 2024-05-29T13:36:52Z
| 2024-05-29T14:59:49Z
| null |
kalix127
|
pytorch/xla
| 7,139
|
Setting FrontEnd attributes for CC ops replica groups in the HLO
|
## Feature
The metadata of the CC operation needs to have an extra field/key, indicating whether the replica groups are represented directly with all the ids or encoded in some other manner, expanded into actual ids downstream into the stack. These will be lowered as front end attributes of the op so that the compiler/runtime understands if a direct or indirect representation is used.
## Motivation
The replica groups become very long at scale. To represent them concisely, a condensed form of representation is necessary. The basic idea can be thought of as an Iota-like operation. With an attribute indicating whether it is direct or indirect, the compiler/runtime can infer whether the groups represent the actual replica ids; if not, these will be expanded based on the encoded representation.
## Pitch
The framework will exercise the option of turning on or off the condensed form of replica group representation. When the condensed/indirect form is used to represent the replica groups, we would need to have the frontend_attributes={replica_grps="indirect"} set for the CC ops indicating the format of the replica groups to be consumed by compiler/runtime.
## Alternatives
## Additional context
|
https://github.com/pytorch/xla/issues/7139
|
closed
|
[
"enhancement",
"distributed"
] | 2024-05-29T12:47:47Z
| 2025-04-07T13:55:20Z
| 2
|
amithrm
|
pytorch/vision
| 8,450
|
Let `v2.functional.gaussian_blur` backprop through `sigma` parameter
|
the v1 version of `gaussian_blur` allows to backprop through sigma
(example taken from https://github.com/pytorch/vision/issues/8401)
```
import torch
from torchvision.transforms.functional import gaussian_blur
device = "cuda"
device = "cpu"
k = 15
s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad=True, device=device)
blurred = gaussian_blur(torch.randn(1, 3, 256, 256, device=device), k, [s])
blurred.mean().backward()
print(s.grad)
```
on CPU and on GPU (after https://github.com/pytorch/vision/pull/8426).
However, the v2 version fails with
```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
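For reference, a minimal sketch of (what I understand to be) the failing v2 call, mirroring the v1 example above:
```python
import torch
from torchvision.transforms.v2.functional import gaussian_blur  # v2 version

k = 15
s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad=True)
blurred = gaussian_blur(torch.randn(1, 3, 256, 256), k, [s])
blurred.mean().backward()  # fails: output does not require grad, sigma graph is lost
print(s.grad)
```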
The support in v1 is sort of undocumented and probably just works out of luck (sigma is typically expected to be a list of floats rather than a tensor). So while it works, it's not 100% clear to me whether this is a feature we absolutely want. I guess we can implement it if it doesn't make the code much more complex or slower.
|
https://github.com/pytorch/vision/issues/8450
|
closed
|
[] | 2024-05-29T12:45:21Z
| 2024-07-29T15:45:14Z
| 3
|
NicolasHug
|
huggingface/datasets
| 6,930
|
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
|
### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}.
However, running dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') works fine. What is the issue here?
### Steps to reproduce the bug
Run this code:
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
from datasets import load_dataset
en = load_dataset("allenai/c4", "en", streaming=True)
### Expected behavior
Successfully loaded the dataset.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
|
https://github.com/huggingface/datasets/issues/6930
|
open
|
[] | 2024-05-29T12:40:05Z
| 2024-07-23T06:25:24Z
| 2
|
Polarisamoon
|
huggingface/datasets
| 6,929
|
Avoid downloading the whole dataset when only README.md has been touched on the Hub.
|
### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered by any change of the hash of the latest commit on the Hugging Face Hub, but is there a clever way to re-download the dataset **if and only if** the data has been modified?
### Motivation
The current behaviour is a waste of network bandwidth / disk space / research time.
### Your contribution
I don't have time to submit a PR, but I hope a simple solution will emerge from this issue !
|
https://github.com/huggingface/datasets/issues/6929
|
open
|
[
"enhancement"
] | 2024-05-29T10:36:06Z
| 2024-05-29T20:51:56Z
| 2
|
zinc75
|
huggingface/candle
| 2,226
|
How to load LoRA adapter along with the GGUF model?
|
Hello all,
I have recently managed to convert the flan-t5 base model to GGUF #2215 . But I also have multiple LoRA adapters trained for different tasks.
@EricLBuehler @LaurentMazare So I wish to know if there is a way to also load single/multiple LoRA adapters along with the GGUF model. I am currently running an inference using the following command:
```bash
cargo run --example quantized-t5 --release -- --weight-file "flant5large_f16.gguf" \
--config-file "flan-t5-large/config.json" \
--prompt "Make this text coherent: Their flight is weak. They run quickly through the tree canopy."
```
But I have the adapter as (adapter_model.bin and adapter_config.json), which I would like to load along with this model **Without Weight Merging**.
|
https://github.com/huggingface/candle/issues/2226
|
open
|
[] | 2024-05-29T06:03:10Z
| 2024-06-05T03:34:14Z
| null |
niranjanakella
|
pytorch/pytorch
| 127,320
|
[While_loop] How to use layer like `torch.nn.BatchNorm2d` with while_loop?
|
### Describe the bug
Hi, I'm trying to support `while_loop` with `DispatchKey.XLA`.
When I try linear and MNIST with torch, the code is dispatched to `DispatchKey.CompositeExplicitAutograd`, which uses a pure Python while loop and finishes.
My local example code for MNIST:
```python
import torch
from torch._higher_order_ops.while_loop import while_loop
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
def test_while_loop_tpu_MNIST_inside_loop(self):
torch.set_grad_enabled(False)
n_epochs = 3
batch_size_train = 8
batch_size_test = 10
learning_rate = 0.01
momentum = 0.5
log_interval = 10
random_seed = 1
torch.backends.cudnn.enabled = False
torch.manual_seed(random_seed)
class MNIST(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=2)
self.bn1 = torch.nn.BatchNorm2d(10)
self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
self.bn2 = torch.nn.BatchNorm2d(20)
self.fc1 = torch.nn.Linear(500, 50)
self.fc2 = torch.nn.Linear(50, 10)
def forward(self, iteri, x, y):
def cond_fn(iteri, x, y):
return iteri > 0
def body_fn(iteri, x, y):
y = F.relu(F.max_pool2d(self.conv1(x), 2))
y = self.bn1(y) # torch.while_loop's body_fn might be modifying the input!
y = F.relu(F.max_pool2d(self.conv2(y), 2))
y = self.bn2(y)
y = torch.flatten(y, 1)
y = F.relu(self.fc1(y))
y = self.fc2(y)
return iteri - 1, x.clone(), F.log_softmax(y, dim=1)
return while_loop(cond_fn, body_fn, (iteri, x, y))
def forward_compare(self, iteri, x, y):
y = F.relu(F.max_pool2d(self.conv1(x), 2))
y = self.bn1(y) # torch.while_loop's body_fn might be modifying the input!
y = F.relu(F.max_pool2d(self.conv2(y), 2))
y = self.bn2(y)
y = torch.flatten(y, 1)
y = F.relu(self.fc1(y))
y = self.fc2(y)
return iteri - 1, x.clone(), F.log_softmax(y, dim=1)
mnist = MNIST()
bs=16
l_in_0 = torch.randn(bs, 1, 28, 28, dtype=torch.float32)
l_out = torch.randn(bs, 10, dtype=torch.float32)
iteri = torch.tensor(3, dtype=torch.int64)
_, _, res = mnist(iteri, l_in_0, l_out)
# === expected result for one iteration to be compared since body_fn defined use the same input in each iteration ===
_, _, expected_res = mnist.forward_compare(iteri, l_in_0, l_out)
self.assertTrue(torch.all(torch.eq(res, expected_res)))
```
---
For code with `DispatchKey.XLA` and `torch.nn.BatchNorm2d`, it stops/fails at the [_has_potential_branch_input_mutation](https://github.com/pytorch/pytorch/blob/d6e3e89804c4063827ea21ffcd3d865e5fe365d9/torch/_higher_order_ops/while_loop.py#L250C16-L250C52) check with the error:
```
torch._higher_order_ops.utils.UnsupportedAliasMutationException: torch.while_loop's body_fn might be modifying the input!
```
Do we have an example of a model with a layer like `torch.nn.BatchNorm2d`, for which `_has_potential_branch_input_mutation` is true, without using a pure Python while loop?
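One detail that may explain the check tripping (my assumption, not confirmed from the trace): in training mode `BatchNorm2d` updates its `running_mean` / `running_var` buffers inside `body_fn`, which counts as input mutation. A sketch of two ways to avoid the buffer update:
```python
import torch

# Sketch (assumption): avoid in-place buffer updates inside body_fn.
bn_a = torch.nn.BatchNorm2d(10, track_running_stats=False)  # always uses batch stats, no buffers to update

bn_b = torch.nn.BatchNorm2d(10)
bn_b.eval()  # uses frozen running stats; no buffer update during the loop body
```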
My local code with `DispatchKey.XLA` and `torch.nn.BatchNorm2d`:
```
import torch
import torch_xla
import torch_xla.experimental.fori_loop
from torch_xla.experimental.fori_loop import fori_loop
from torch._higher_order_ops.while_loop import while_loop
import torch_xla.core.xla_model as xm
import torch_xla.core.xla_builder as xb
import torch_xla.utils.utils as xu
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
def test_while_loop_tpu_MNIST_inside_loop_without_BN(self):
xm.mark_step()
device = xm.xla_device()
torch.set_grad_enabled(False)
n_epochs = 3
batch_size_train = 8
batch_size_test = 10
learning_rate = 0.01
momentum = 0.5
log_interval = 10
random_seed = 1
torch.backends.cudnn.enabled = False
torch.manual_seed(random_seed)
class MNIST(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=2)
self.bn1 = torch.nn.BatchNorm2d(10)
self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
self.bn2 = torch.nn.BatchNorm2d(20)
self.fc1 = torch.nn.Linear(500, 50)
self.fc2 = torch.nn.Linear(50, 10)
def forward(self, iteri, x, y):
def cond_fn(iteri, x, y):
return iteri > 0
def body_fn(iteri, x, y):
# y = self.bn1(F.relu(F.max_pool2d(self.conv1(x), 2)))
# y = self.bn2(F.relu(F.max_pool2d(self.conv2(y), 2)))
y = F.relu(F.max_pool2d(self.conv1(x), 2))
y = self.bn1(y) # torch.while_loop's body_fn might be modifying the input!
y = F.relu(F.max_pool2d(self.conv2(y
|
https://github.com/pytorch/pytorch/issues/127320
|
closed
|
[
"triaged",
"module: xla",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 2024-05-28T18:37:15Z
| 2024-05-29T22:42:57Z
| null |
ManfeiBai
|
huggingface/transformers.js
| 781
|
Progress callback for Moondream?
|
### Question
While implementing Moondream (from the excellent example) I stumbled upon a few questions.
- How can I implement a callback while Moondream is generating tokens? A normal progressCallback didn't work?
```
self.model.generate({
...text_inputs,
...vision_inputs,
do_sample: false,
max_new_tokens: 500,
progress_callback: (progress_data) => {
console.log("progress_data: ", progress_data);
if (progress_data.status !== 'progress') return;
self.postMessage(progress_data);
},
})
```
Iβve also tried the new CallbackStreamer option, but that had no effect either.
From the [demo](https://github.com/xenova/transformers.js/issues/743) I know it should be possible. But I [couldn't find the source code](https://github.com/xenova/transformers.js/tree/v3) for it (yet). And trying to learn anything from the demo as-is was, well, difficult with all that [minifying](https://xenova-experimental-moondream-webgpu.static.hf.space/assets/worker-DHaYXnZx.js) and framework stuff.
- Is this warning in the browser console anything to worry about?
```
The number of image tokens was not set in the model configuration. Setting it to the number of features detected by the vision encoder (729).models.js:3420
```
- What would be the effect of changing these values? E.g. what would be the expected outcome of changing decoder_model_merged from from q4 to q8?
```
embed_tokens: 'fp16',
vision_encoder: 'q8', // or 'fp16'
decoder_model_merged: 'q4', // or 'q8'
```
- What's the difference between Moondream and [NanoLlava](https://huggingface.co/spaces/Xenova/experimental-nanollava-webgpu)? When should I use one over the other?
|
https://github.com/huggingface/transformers.js/issues/781
|
closed
|
[
"question"
] | 2024-05-28T14:07:07Z
| 2024-06-03T18:49:10Z
| null |
flatsiedatsie
|
huggingface/competitions
| 29
|
How to notify awardees or contact participants?
|
The competition just shows the participants' IDs.
So how can we contact them via email to inform them of the award requirements and request additional personal information?
|
https://github.com/huggingface/competitions/issues/29
|
closed
|
[] | 2024-05-28T08:11:38Z
| 2024-06-09T07:03:25Z
| null |
shangfenghuang
|
huggingface/datatrove
| 196
|
How to deduplicate multiple datasets?
|
FineWeb offers a deduplication demo for one dump. If I want to deduplicate more dumps, should I merge the dumps before deduplication?
|
https://github.com/huggingface/datatrove/issues/196
|
closed
|
[] | 2024-05-28T03:00:31Z
| 2024-06-07T07:25:45Z
| null |
canghaiyunfan
|
huggingface/chat-ui
| 1,183
|
Prompt template for WizardLM-2-8x22B?
|
What is the prompt template for `WizardLM-2-8x22B` in the `.env.local`?
When setting it to the default one: `<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}`
the generated output is very odd and incoherent.
When setting the prompt template to the one displayed in the [model card:](https://huggingface.co/bartowski/WizardLM-2-8x22B-GGUF) `{system_prompt} USER: {prompt} ASSISTANT: </s>`
the output gets even worse.
Can anyone help?
|
https://github.com/huggingface/chat-ui/issues/1183
|
open
|
[
"support",
"models"
] | 2024-05-27T14:28:47Z
| 2024-07-29T15:27:25Z
| 3
|
Arche151
|
huggingface/chat-ui
| 1,178
|
Improve Domain Search Results for Assistants
|
The domain search for assistants is a great idea, but the current implementation is not really useful if the domains are less likely to be top results like Wikipedia.
This seems to happen because the web is searched first, and the domain filter is applied afterward. This method can easily result in zero parseable results (especially because PDF parsing is currently not available).
Proposed solution: Change the implementation so that the search process continues until at least one parseable result is found. To avoid excessive searching, an upper limit on the number of pages to be searched makes sense (e.g. at 100), but it should definitely be more than the current limit of 8 pages.
|
https://github.com/huggingface/chat-ui/issues/1178
|
open
|
[
"question",
"websearch"
] | 2024-05-27T10:33:22Z
| 2024-05-31T11:02:11Z
| null |
lueschow
|
huggingface/datatrove
| 195
|
What is the difference between tasks and workers?
|
What is the difference between tasks and workers? What is the definition of a task, and how do I determine the number of tasks?
|
https://github.com/huggingface/datatrove/issues/195
|
closed
|
[] | 2024-05-27T06:32:25Z
| 2024-05-27T07:08:11Z
| null |
canghaiyunfan
|
huggingface/transformers.js
| 778
|
Pipeline execution time with 'image-classification' pipeline
|
### Question
While calling the 'image-classification' pipeline we pass the image URL, so this does a fetch of the image. Will the time taken to process the image include the download time of the image? If the network is slow, this may impact the pipeline performance. Is there a way to use an image that's already been downloaded by the webpage for an image element?
|
https://github.com/huggingface/transformers.js/issues/778
|
open
|
[
"question"
] | 2024-05-26T20:15:21Z
| 2024-05-27T04:14:52Z
| null |
mram0509
|
huggingface/transformers
| 31,039
|
What if past_key_values is in model_kwargs but is None
|
https://github.com/huggingface/transformers/blob/4c6c45ba138202f42582b5cea98126af87195a95/src/transformers/generation/utils.py#L1317
This line fails for me when past_key_values is in model_kwargs but is None; line 1321 raises an error.
Could you advise?
Thank you
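For what it's worth, a hypothetical workaround sketch (the names below are illustrative, not taken from the linked code): dropping a `past_key_values` entry that is None from the kwargs before they reach `generate()` sidesteps the check.
```python
# Hypothetical sketch: gen_kwargs stands in for whatever kwargs get forwarded
# to model.generate(); if past_key_values ended up in them as None, remove it.
gen_kwargs = {"max_new_tokens": 32, "past_key_values": None}
if gen_kwargs.get("past_key_values") is None:
    gen_kwargs.pop("past_key_values", None)
# outputs = model.generate(**inputs, **gen_kwargs)
```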
|
https://github.com/huggingface/transformers/issues/31039
|
closed
|
[] | 2024-05-26T07:58:18Z
| 2024-06-10T06:32:23Z
| null |
estelleafl
|
huggingface/chat-ui
| 1,174
|
Unable to deploy space with chatUI, getting error ** Failed to connect to 127.0.0.1 port 8080 after 0 ms**
|
Hi guys, I am trying to deploy a Space with the chat-ui template and the **abacusai/Smaug-Llama-3-70B-Instruct** model, but I am getting the following error again and again in the container logs.
`
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 40 retries
Warning: left.
2024-05-26T07:02:16.945294Z INFO text_generation_launcher: Downloaded /data/models--abacusai--Smaug-Llama-3-70B-Instruct/snapshots/fbaa713bdcdc2a2f85bbbe5808ec7046700a36e5/model-00007-of-00030.safetensors in 0:00:29.
2024-05-26T07:02:16.945393Z INFO text_generation_launcher: Download: [7/30] -- ETA: 0:10:47.285711
2024-05-26T07:02:16.945714Z INFO text_generation_launcher: Download file: model-00008-of-00030.safetensors
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 39 retries
Warning: left.
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 38 retries
Warning: left.
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 37 retries
Warning: left.
2024-05-26T07:02:47.664282Z INFO text_generation_launcher: Downloaded /data/models--abacusai--Smaug-Llama-3-70B-Instruct/snapshots/fbaa713bdcdc2a2f85bbbe5808ec7046700a36e5/model-00008-of-00030.safetensors in 0:00:30.
2024-05-26T07:02:47.664376Z INFO text_generation_launcher: Download: [8/30] -- ETA: 0:10:27
2024-05-26T07:02:47.664710Z INFO text_generation_launcher: Download file: model-00009-of-00030.safetensors
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 36 retries
Warning: left.
{"t":{"$date":"2024-05-26T09:02:57.879+02:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1716706977,"ts_usec":879791,"thread":"8:0x7f4c6fd8f640","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 37, snapshot max: 37 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}}
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 35 retries
Warning: left.
`
Please help me out, thanks.
And yes, I've added the `HF_TOKEN` secret too.
|
https://github.com/huggingface/chat-ui/issues/1174
|
open
|
[
"support",
"docker"
] | 2024-05-26T07:05:12Z
| 2025-06-27T10:30:24Z
| 5
|
starlord263
|
huggingface/optimum
| 1,876
|
Unable to generate a question-answering model for Llama, and there is also no list of the supported models for question-answering
|
### Feature request
Hi, I received this error:
ValueError: Asked to export a llama model for the task question-answering, but the Optimum ONNX exporter only supports the tasks feature-extraction, feature-extraction-with-past, text-generation, text-generation-with-past, text-classification for llama. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task question-answering to be supported in the ONNX export for llama.
I was trying to generate an ONNX model for QuanAI/llama-2-7b-question-answering.
I also tried to search for the supported question-answering models on https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model which had a broken link pointing to https://huggingface.co/exporters/task_manager (returns a 404). I am happy to consider other question-answering models instead of Llama if there is a list of what is available.
### Motivation
Unable to export Llama question-answering model
### Your contribution
Not sure how to contribute, I am a new user
|
https://github.com/huggingface/optimum/issues/1876
|
open
|
[
"bug",
"onnx"
] | 2024-05-26T06:10:47Z
| 2024-10-09T07:57:24Z
| null |
customautosys
|
huggingface/transformers.js
| 776
|
How to point to a specific model path in order to use compressed models? (brotli)
|
### Question
Hi,
I just can't find the configuration option to point to a specific model file path, e.g. to use `.onnx.br` instead of `.onnx`.
I can run the model (distilbert-base-cased-distilled-squad) offline without any issue and it works. But I want to deploy it compressed with Brotli. All I can see in the config files are references to the model's folder, not the actual file paths.
E.g. "model_quantized.onnx"
Any help is appreciated.
|
https://github.com/huggingface/transformers.js/issues/776
|
open
|
[
"question"
] | 2024-05-24T18:31:12Z
| 2024-05-25T10:24:25Z
| null |
KamilCSPS
|
huggingface/chat-ui
| 1,169
|
Help debugging "Sorry, something went wrong. Please try again."
|
I am a developer working on extending this project. Sometimes I get the error "Sorry, something went wrong. Please try again.", and I can't figure out how to debug it when it happens. What I want is for it to display the full error somehow, like with a console.log. Is there some way to do that? Or is the error saved in MongoDB? This would help me a lot with debugging.
|
https://github.com/huggingface/chat-ui/issues/1169
|
closed
|
[] | 2024-05-24T18:30:08Z
| 2024-06-17T12:47:03Z
| 1
|
loganlebanoff
|
pytorch/pytorch
| 127,075
|
What is the processing principle when the complex64 input tensor contains nan or inf for addition?
|
### 🐛 Describe the bug
>>> import torch
>>> a = torch.tensor(complex(3, float('nan')))
>>> torch.add(a,a)
tensor(nan+nanj)
The rule for adding complex numbers is to add the real and imaginary parts separately.
In the above example, why is the real part nan instead of 6 (3 + 3)?
How are nan/inf handled in the output when complex tensor addition involves nan or inf? Which code, in which directory, should I refer to?
Thank you!
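For what it's worth, a minimal workaround sketch (not an explanation of the underlying behavior): adding the real and imaginary parts separately keeps the finite real part.
```python
# Sketch: add the real and imaginary components separately so NaN in the imaginary
# part does not leak into the real part of the sum.
import torch

a = torch.tensor(complex(3, float("nan")))
re_im = torch.view_as_real(a)                 # tensor([3., nan])
summed = torch.view_as_complex(re_im + re_im)
print(summed)                                 # expected: tensor(6.+nanj)
```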
### Versions
'2.0.0+cpu'
The results are the same with the CPU and CUDA builds of torch 2.3.
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @amjames
|
https://github.com/pytorch/pytorch/issues/127075
|
open
|
[
"triaged",
"module: complex"
] | 2024-05-24T09:55:35Z
| 2024-05-27T03:59:52Z
| null |
liying-1997
|
pytorch/torchchat
| 847
|
Figure out how to leverage kernels in torchao
|
For quantized linear, a lot of the kernels will live in torchao: https://github.com/pytorch/ao/tree/main/torchao/csrc
We need to figure out how to use these kernels in torchchat/ExecuTorch.
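For illustration, a rough sketch (assuming a recent torchao release; not necessarily the integration path torchchat or ExecuTorch will end up using) of calling torchao's quantized-linear kernels from Python:
```python
# Sketch: swap a model's nn.Linear layers for torchao's int8 weight-only quantized kernels.
# Assumes a recent torchao release that exposes quantize_ / int8_weight_only; some
# configurations may require a GPU depending on the torchao version.
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(torch.bfloat16)
quantize_(model, int8_weight_only())  # in-place module swap
out = model(torch.randn(2, 1024, dtype=torch.bfloat16))
print(out.shape)
```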
|
https://github.com/pytorch/torchchat/issues/847
|
closed
|
[] | 2024-05-23T19:04:48Z
| 2024-07-21T21:53:58Z
| null |
larryliu0820
|
pytorch/xla
| 7,103
|
Why does my 3-layer linear graph need to output two Transposes?
|
## ❓ Questions and Help
torch_xla is the latest version.
This is my code:
```
import torch
import torch.nn as nn  # needed for nn.Module below
import torch_xla
import torch_xla.runtime as xr
import torch_xla.core.xla_model as xm
import torch_xla.experimental.xla_sharding as xs
from torch_xla.experimental.xla_sharding import Mesh
from torch_xla.amp import autocast, GradScaler
import numpy as np
import torch.optim as optim
import torch_xla.debug.profiler as xp
import time
import os
# Setup profiler env var
os.environ['XLA_HLO_DEBUG'] = '1'
t1 = torch.randn(1600, 12800, device='cpu')
xt1 = t1.to(xm.xla_device())
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()  # required so nn.Module is initialized before submodules are assigned
        self.linear1 = torch.nn.Linear(12800, 9600)
        self.linear2 = torch.nn.Linear(9600, 1280)
        self.linear3 = torch.nn.Linear(1280, 128)

    def forward(self, xt1):
        output = self.linear1(xt1)
        output1 = self.linear2(output)
        output2 = self.linear3(output1)
        return output2
my_model = MyModel().to(xm.xla_device())
ans = my_model(xt1)
xm.mark_step()
```
In the HLO graph that was dumped, you can see that there are two transpose tensors in the output field:
```
HloModule SyncTensorsGraph.30, entry_computation_layout={(f32[9600]{0}, f32[9600,12800]{1,0}, f32[1600,12800]{1,0}, f32[1280,9600]{1,0}, f32[1280]{0}, /*index=5*/f32[128,1280]{1,0}, f32[128]{0})->(f32[1600,9600]{1,0}, f32[9600,1280]{1,0}, f32[1600,1280]{1,0}, f32[1280,128]{1,0}, f32[1600,128]{1,0})}, replica_count=8
ENTRY SyncTensorsGraph.30 {
p2.4 = f32[1600,12800]{1,0} parameter(2), metadata={op_type="xla__device_data" op_name="xla__device_data"}
p1.2 = f32[9600,12800]{1,0} parameter(1), metadata={op_type="xla__device_data" op_name="xla__device_data"}
transpose.3 = f32[12800,9600]{0,1} transpose(p1.2), dimensions={1,0}, metadata={op_type="aten__permute" op_name="aten__permute"}
dot.5 = f32[1600,9600]{1,0} dot(p2.4, transpose.3), lhs_contracting_dims={1}, rhs_contracting_dims={0}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
p0.1 = f32[9600]{0} parameter(0), metadata={op_type="xla__device_data" op_name="xla__device_data"}
reshape.6 = f32[1,9600]{1,0} reshape(p0.1), metadata={op_type="aten__addmm" op_name="aten__addmm"}
broadcast.7 = f32[1,9600]{1,0} broadcast(reshape.6), dimensions={0,1}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
reshape.8 = f32[9600]{0} reshape(broadcast.7), metadata={op_type="aten__addmm" op_name="aten__addmm"}
broadcast.9 = f32[1600,9600]{1,0} broadcast(reshape.8), dimensions={1}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
add.10 = f32[1600,9600]{1,0} add(dot.5, broadcast.9), metadata={op_type="aten__addmm" op_name="aten__addmm"}
p3.11 = f32[1280,9600]{1,0} parameter(3), metadata={op_type="xla__device_data" op_name="xla__device_data"}
transpose.12 = f32[9600,1280]{0,1} transpose(p3.11), dimensions={1,0}, metadata={op_type="aten__permute" op_name="aten__permute"}
dot.14 = f32[1600,1280]{1,0} dot(add.10, transpose.12), lhs_contracting_dims={1}, rhs_contracting_dims={0}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
p4.13 = f32[1280]{0} parameter(4), metadata={op_type="xla__device_data" op_name="xla__device_data"}
reshape.15 = f32[1,1280]{1,0} reshape(p4.13), metadata={op_type="aten__addmm" op_name="aten__addmm"}
broadcast.16 = f32[1,1280]{1,0} broadcast(reshape.15), dimensions={0,1}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
reshape.17 = f32[1280]{0} reshape(broadcast.16), metadata={op_type="aten__addmm" op_name="aten__addmm"}
broadcast.18 = f32[1600,1280]{1,0} broadcast(reshape.17), dimensions={1}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
add.19 = f32[1600,1280]{1,0} add(dot.14, broadcast.18), metadata={op_type="aten__addmm" op_name="aten__addmm"}
p5.20 = f32[128,1280]{1,0} parameter(5), metadata={op_type="xla__device_data" op_name="xla__device_data"}
transpose.21 = f32[1280,128]{0,1} transpose(p5.20), dimensions={1,0}, metadata={op_type="aten__permute" op_name="aten__permute"}
dot.23 = f32[1600,128]{1,0} dot(add.19, transpose.21), lhs_contracting_dims={1}, rhs_contracting_dims={0}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
p6.22 = f32[128]{0} parameter(6), metadata={op_type="xla__device_data" op_name="xla__device_data"}
reshape.24 = f32[1,128]{1,0} reshape(p6.22), metadata={op_type="aten__addmm" op_name="aten__addmm"}
broadcast.25 = f32[1,128]{1,0} broadcast(reshape.24), dimensions={0,1}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
reshape.26 = f32[128]{0} reshape(broadcast.25), metadata={op_type="aten__addmm" op_name="aten__addmm"}
broadcast.27 = f32[1600,128]{1,0} broadcast(reshape.26), dimensions={1}, metadata={op_type="aten__addmm" op_name="aten__addmm"}
add.28 = f32[1600,128]{1,0} add(dot.23, broadcast.27), metadata={op_type="aten__addmm" op_name="aten__addmm"}
  ROOT tuple.29 = (f32[1600,9600]{1,0}, f32[9600,1280]{0,1}, f32[1600,1280]{1,0}, f32[1280,128]{0,1}, f32[
```
|
https://github.com/pytorch/xla/issues/7103
|
closed
|
[
"question"
] | 2024-05-23T08:54:02Z
| 2025-04-07T13:59:14Z
| null |
mars1248
|
pytorch/xla
| 7,102
|
Problem with mesh shape in HybridMesh on TPU
|
## ❓ Questions and Help
I received an error when trying to create an SPMD mesh in a Kaggle notebook while following [Huggingface optimum-tpu](https://github.com/huggingface/optimum-tpu/blob/695ee84d657d9ed2761fcf481685afad0e849a90/examples/language-modeling/run_clm.py#L484):
```
import os
import numpy as np
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
from torch_xla.distributed.fsdp import checkpoint_module
from torch_xla.distributed.fsdp.utils import apply_xla_patch_to_nn_linear
import torch_xla.distributed.parallel_loader as pl
import torch_xla.core.xla_env_vars as xenv
import torch_xla.debug.metrics as met
import torch_xla.distributed.spmd.xla_sharding as xs
from torch_xla.distributed.spmd.xla_sharding import Mesh, HybridMesh
from torch_xla.distributed.spmd.xla_sharded_tensor import XLAShardedTensor
import torch_xla.runtime as xr
xr.use_spmd()
os.environ['USE_TORCH'] = 'True'
os.environ["PJRT_DEVICE"] = "TPU"
os.environ['TPU_NUM_DEVICES'] = '8'
os.environ[xenv.TPU_VISIBLE_CHIPS] = '0,1,2,3'
os.environ[xenv.TPU_PROCESS_BOUNDS] = '1,1,1'
num_devices = xr.global_runtime_device_count() # 8
model_axis = 1
assert xr.device_type() == 'TPU', "Only TPU is supported"
# dcn_axis = model_args.spmd_dcn_parallelism # 1
dcn_axis = 1
data_axis = num_devices // model_axis // dcn_axis
# mesh data setup
ici_mesh_shape = (1, data_axis, model_axis)
dcn_mesh_shape = (dcn_axis, 1, 1)
axis_names=('dcn', 'data', 'model')
print('ici', ici_mesh_shape)
print('dcn', dcn_mesh_shape)
# Note that we do not pass the spmd_mesh to the model because it is not JSON-serializable.
spmd_mesh = HybridMesh(ici_mesh_shape=ici_mesh_shape, dcn_mesh_shape=dcn_mesh_shape, axis_names=axis_names)
```
full error:
```
ici (1, 8, 1)
dcn (1, 1, 1)
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[28], line 41
39 print('dcn', dcn_mesh_shape)
40 # Note that we do not pass the spmd_mesh to the model because it is not JSON-serializable.
---> 41 spmd_mesh = HybridMesh(ici_mesh_shape=ici_mesh_shape, dcn_mesh_shape=dcn_mesh_shape, axis_names=axis_names)
File /usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py:188, in HybridMesh.__init__(self, ici_mesh_shape, dcn_mesh_shape, axis_names)
185 mesh = self._create_hybrid_device_mesh(self.ici_mesh_shape,
186 self.dcn_mesh_shape)
187 else:
--> 188 mesh = self._create_device_mesh(self.ici_mesh_shape)
189 device_ids = mesh.flatten()
190 super().__init__(device_ids, mesh_shape, axis_names)
File /usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py:323, in HybridMesh._create_device_mesh(self, mesh_shape, devices)
319 raise ValueError(
320 f'Number of devices {len(devices)} must equal the product '
321 f'of mesh_shape {mesh_shape}')
322 physical_mesh = self._get_physical_tpu_mesh(devices)
--> 323 device_mesh, assignment = self._create_device_mesh_for_nd_torus(
324 physical_mesh, mesh_shape)
325 return device_mesh
File /usr/local/lib/python3.10/site-packages/torch_xla/distributed/spmd/xla_sharding.py:286, in HybridMesh._create_device_mesh_for_nd_torus(self, physical_mesh, mesh_shape)
282 else:
283 # If the num_axes for loop did not break, i.e. none of the candidates work
284 # goto here with this while-else construct.
285 if logical_axis_size > 1:
--> 286 raise NotImplementedError(
287 'Failed to find assignment for logical_axis_index'
288 f' {logical_axis_index} of size {logical_axis_size} with remaining'
289 f' assignable mesh {assignable_physical_mesh}. The size of each'
290 ' axis in your logical mesh must be equal to the product of'
291 ' some subset of the physical mesh axis sizes. E.g logical mesh (4,'
292 ' 16) is compatible with physical mesh 4x4x4 since 4=4 and 16=4x4.'
293 )
294 # Flatten the assignment
295 transpose: List[int] = []
NotImplementedError: Failed to find assignment for logical_axis_index 1 of size 8 with remaining assignable mesh [2, 2, 0]. The size of each axis in your logical mesh must be equal to the product of some subset of the physical mesh axis sizes. E.g logical mesh (4, 16) is compatible with physical mesh 4x4x4 since 4=4 and 16=4x4.
```
Kaggle's TPU v3-8 has 8 cores (2x4), so I don't know why I get this error. What is the problem? Thanks for your help!
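In case it is useful, a minimal sketch (my assumption, not a confirmed fix) of building the same logical (dcn, data, model) mesh with the plain `Mesh` class, which takes an explicit device-id array instead of deriving an assignment from the physical TPU torus:
```python
# Sketch: build a (1, 8, 1) logical mesh directly from device ids on a TPU v3-8.
# Assumes the plain Mesh is acceptable here instead of HybridMesh (single slice, dcn_axis == 1).
import numpy as np
import torch_xla.runtime as xr
from torch_xla.distributed.spmd.xla_sharding import Mesh

num_devices = xr.global_runtime_device_count()  # 8 on a TPU v3-8
mesh_shape = (1, num_devices, 1)                # (dcn, data, model)
device_ids = np.arange(num_devices)
spmd_mesh = Mesh(device_ids, mesh_shape, ('dcn', 'data', 'model'))
```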
|
https://github.com/pytorch/xla/issues/7102
|
closed
|
[
"question",
"distributed",
"xla:tpu"
] | 2024-05-23T06:39:44Z
| 2025-04-17T13:33:19Z
| null |
hiwamk
|
huggingface/datasets
| 6,916
|
```push_to_hub()``` - Prevent Automatic Generation of Splits
|
### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a training and a testing set. How can I prevent this split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 })
```
2. Push it to huggingface
```python
dataset.push_to_hub(dataset_name)
```
3. On the Hugging Face dataset repo, the dataset then appears to be split:

4. Indeed, when loading the dataset from this repo, the dataset is split into a training and a testing set.
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True)
dataset
```
output:
```
IterableDatasetDict({
    train: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 2
    })
    test: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 1
    })
})
```
### Expected behavior
The dataset should not be split, as no split was requested.
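One workaround sketch (my assumption about the intent, not an official recommendation): push an explicit single-split `DatasetDict` so only a `train` split exists on the Hub.
```python
# Sketch: be explicit that the repo should contain a single "train" split.
from datasets import DatasetDict

DatasetDict({"train": dataset}).push_to_hub("Jetlime/NF-CSE-CIC-IDS2018-v2")
```
Whether this avoids the automatically generated test split in this particular case is untested.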
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
|
https://github.com/huggingface/datasets/issues/6916
|
closed
|
[] | 2024-05-22T23:52:15Z
| 2024-05-23T00:07:53Z
| 0
|
jetlime
|
pytorch/vision
| 8,437
|
Add mobilenetv4 support and pretrained models?
|
### 🚀 The feature
Google has published the MobileNetV4 model. When will torchvision support it and release pre-trained weights?
### Motivation, pitch
I would very much like to use this latest lightweight backbone.
### Alternatives
_No response_
### Additional context
_No response_
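As a stopgap, a minimal sketch (assuming a recent `timm` release, which ships MobileNetV4 weights; this is outside torchvision, and the model name is an assumption based on timm's naming) of loading a pretrained MobileNetV4 backbone:
```python
# Sketch: load a pretrained MobileNetV4 backbone from timm while torchvision support is pending.
# If the name differs in your timm version, check timm.list_models("mobilenetv4*").
import timm
import torch

model = timm.create_model("mobilenetv4_conv_small", pretrained=True)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # (1, 1000) with the ImageNet classifier head
```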
|
https://github.com/pytorch/vision/issues/8437
|
closed
|
[] | 2024-05-22T06:16:00Z
| 2024-06-14T02:01:20Z
| 5
|
LiYufengzz
|
huggingface/peft
| 1,750
|
How to finetune embeddings and LM head as a single layer when they are tied?
|
I am looking to LoRA-finetune models like Gemma, which have tied embeddings.
But I would also like the shared embedding to be trainable (the common embedding table corresponding to both the input and output embeddings of the network).
How do I achieve this?
---
_Note:_ Passing both `["embed_tokens","lm_head"]` to `modules_to_save` will result in untying them, because PEFT will create separate tensor copies. Passing only `["embed_tokens"]` will result in only the input embeddings trainable (by making a separate PEFT copy), while the output embeddings being as it is (the original tensor).
|
https://github.com/huggingface/peft/issues/1750
|
closed
|
[] | 2024-05-21T18:32:07Z
| 2025-08-12T11:54:09Z
| null |
GokulNC
|
pytorch/audio
| 3,797
|
RTSP with StreamReader
|
Does torchaudio support RTSP streams? I've been using it with RTMP, but when running RTSP streams it always crashes, mainly reporting that the "threads" argument passed to FFmpeg is not supported.
Using FFmpeg 6.0.

|
https://github.com/pytorch/audio/issues/3797
|
closed
|
[] | 2024-05-21T14:55:21Z
| 2024-05-21T15:59:40Z
| 0
|
pedromoraesh
|
huggingface/blog
| 2,078
|
How do I set the attention mask to None in Idefics2's perceiver?
|
I set the attention mask to None, but the model doesn't learn well. My inputs aren't padded, so I don't want an attention mask. How can I resolve this?
I also tried adding an all-ones attention mask, but the result was also much worse.
|
https://github.com/huggingface/blog/issues/2078
|
open
|
[] | 2024-05-21T07:38:57Z
| 2024-05-21T07:38:57Z
| null |
lucasjinreal
|
huggingface/peft
| 1,749
|
How to fine-tune LoRA with HQQ?
|
### Feature request
How can I fine-tune LoRA on an HQQ-quantized model?
### Motivation
How can I fine-tune LoRA on an HQQ-quantized model?
### Your contribution
How can I fine-tune LoRA on an HQQ-quantized model?
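For illustration, a rough sketch (assuming a transformers release with `HqqConfig` and a peft release with HQQ support; the model name is only an example) of attaching a LoRA adapter to an HQQ-quantized model:
```python
# Sketch: load an HQQ-quantized model via transformers, then attach a LoRA adapter with peft.
# Assumes transformers >= 4.41 (HqqConfig), a peft release with HQQ support, and the hqq package installed.
from transformers import AutoModelForCausalLM, HqqConfig
from peft import LoraConfig, get_peft_model

quant_config = HqqConfig(nbits=4, group_size=64)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # example model
    quantization_config=quant_config,
    device_map="auto",
)
lora_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```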
|
https://github.com/huggingface/peft/issues/1749
|
closed
|
[] | 2024-05-21T02:56:18Z
| 2024-06-29T15:03:18Z
| null |
NickyDark1
|
huggingface/trl
| 1,650
|
How to save the v_head
|
Currently, I use `ppo_trainer.save_pretrained` to save a model that is still in training, because the machine I use is rather unstable and I often need to resume training after an interruption. When I resume training, I get the following warning:
```
WARNING:root:A <class 'peft.peft_model.PeftModelForCausalLM'> model is loaded from 'RLGAF_gemma-7b-lima_sft_preprocessing_20epochs', and no v_head weight is found. This IS expected if you are not resuming PPO training.
```
I guess this is relevant to my case, since I need to resume PPO training. What is the proper way to save a PPO training checkpoint so that it can be resumed later?
|
https://github.com/huggingface/trl/issues/1650
|
closed
|
[] | 2024-05-20T17:06:00Z
| 2025-04-11T10:14:36Z
| null |
zyzhang1130
|
pytorch/torchchat
| 837
|
Cannot build mobile android app in unit test - due to licensing question in build process?
|
https://github.com/pytorch/torchchat/actions/runs/9161687849/job/25187114502?pr=831

January 16, 2019
---------------------------------------
Accept? (y/N): Skipping following packages as the license is not accepted:
Google APIs Intel x86_64 Atom System Image
The following packages can not be installed since their licenses or those of the packages they depend on were not accepted:
  system-images;android-34;google_apis;x86_64
[=======================================] 100% Computing updates...

+ avdmanager list avd
+ grep -q torchchat
+ avdmanager create avd --name torchchat --package 'system-images;android-34;google_apis;x86_64'
Loading local repository...
[========= ] 25% Loading local repository...
[========= ] 25% Fetch remote repository...
[=======================================] 100% Fetch remote repository...
Error: Package path is not valid. Valid system image paths are:
null
|
https://github.com/pytorch/torchchat/issues/837
|
closed
|
[] | 2024-05-20T17:01:29Z
| 2024-08-20T18:26:20Z
| 0
|
mikekgfb
|
huggingface/chat-ui
| 1,153
|
Can we use Hugging Face Chat-UI with a custom server?
|
Requirement:
I have a custom API which takes in the input queries, passes them through a RAG pipeline and finally to an LLM, and returns the result.
The question is: can I integrate it with Chat-UI (using just the chat-ui frontend and my custom backend)? If yes, is there any documentation around it? From what I understand so far, it looks like it is possible, but I would have to make a lot of changes in the UI code itself to accommodate this. What I can see is that the UI is tightly coupled with text generation from models and doesn't fully support calling an API directly without code changes.
Are there any docs for this?
Also, can we use any other DB besides MongoDB?
|
https://github.com/huggingface/chat-ui/issues/1153
|
closed
|
[] | 2024-05-20T16:44:01Z
| 2024-09-03T07:52:18Z
| 9
|
snps-ravinu
|
huggingface/nanotron
| 176
|
Where is the "nanotron format" defined?
|
I see that any(?) hf model can be converted to nanotron format with this [script](https://github.com/huggingface/nanotron/blob/main/examples/llama/convert_hf_to_nanotron.py).
Is there documentation describing this format?
Can any model that may be loaded with AutoModelForCausalLM be converted to nanotron format for training?
|
https://github.com/huggingface/nanotron/issues/176
|
closed
|
[] | 2024-05-20T13:54:52Z
| 2024-05-21T17:22:50Z
| null |
RonanKMcGovern
|
huggingface/chat-ui
| 1,151
|
Can I change localhost to a remote IP?
|
I am running Chat-UI locally, but I want to change localhost to an IP address. I am unable to find this configuration in the code. Can anyone help?
|
https://github.com/huggingface/chat-ui/issues/1151
|
closed
|
[] | 2024-05-20T05:34:23Z
| 2024-05-20T07:01:30Z
| 1
|
snps-ravinu
|
huggingface/candle
| 2,197
|
How to slice a tensor?
|
tch has the function `slice` that returns a tensor slice. Is there a corresponding function in candle?
|
https://github.com/huggingface/candle/issues/2197
|
closed
|
[] | 2024-05-20T00:55:08Z
| 2024-05-20T01:46:58Z
| null |
Gadersd
|