Column schema (from the dataset viewer):
- repo: string, 147 classes
- number: int64, 1 to 172k
- title: string, length 2 to 476
- body: string, length 0 to 5k
- url: string, length 39 to 70
- state: string, 2 classes
- labels: list, length 0 to 9
- created_at: timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18
- updated_at: timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39
- comments: int64, 0 to 58
- user: string, length 2 to 28
huggingface/setfit
| 178
|
Question : evaluation after every training epoch
|
# Thank you
Hello!
I am Yongtae, a senior ML engineer in Japan.
Thank you for publishing a genuinely excellent paper and code.
Few-shot learning and multilingual support are appreciated by engineers like me who work abroad!
# Question
I have found that this model easily overfits to the training data if the number of epochs is over 2 or the training data contains similar examples.
Therefore I would like to evaluate the model after every training epoch, to find the best number of epochs.
But as shown [here](https://github.com/huggingface/setfit/blob/99c30746799a09e0267427b8a7b8650568222b48/src/setfit/trainer.py#L363), it seems difficult to evaluate the model at every epoch, because the body is trained for the full number of epochs at the beginning of training.
So I would like to change it as below:
```python
for epoch in range(num_epochs):
    self.model.model_body.fit(
        train_objectives=[(train_dataloader, train_loss)],
        epochs=1,
        steps_per_epoch=train_steps,
        optimizer_params={"lr": learning_rate},
        warmup_steps=warmup_steps,
        show_progress_bar=True,
        use_amp=self.use_amp,
    )
    if not is_differentiable_head or not self._freeze:
        # Train the final classifier
        self.model.fit(
            x_train,
            y_train,
            num_epochs=1,
            batch_size=batch_size,
            learning_rate=learning_rate,
            body_learning_rate=body_learning_rate,
            l2_weight=l2_weight,
            show_progress_bar=True,
        )
    somehow_evaluate()
```
Does it make sense to you?
Or if I fork and make that change, are there any problem?
I am looking forward to your reply
Best and thank you in advance!
|
https://github.com/huggingface/setfit/issues/178
|
closed
|
[
"question"
] | 2022-11-13T09:36:47Z
| 2022-12-26T03:12:16Z
| null |
Yongtae723
|
pytorch/tutorials
| 2,117
|
Stable Diffusion Question
|
I am looking to leverage `torch.nn.parallel.DistributedDataParallel`, per the documentation you have written, to integrate dual 3090s into a workflow. I am using the automatic repo, and after trying multiple things to update the following code to include what you have in the torch wiki, I have been unsuccessful in switching the CUDA current device to leverage the model methodology outlined in your documentation and Stack Overflow examples. Do you have any recommendations on what I can read or use to test further? I know that Meta has been releasing some wonderful tools I have been using to support the Stable Diffusion project, so I hope this is in your purview. If it is not, feel free to ignore.
```python
def caching_allocator_alloc(size, device: Union[Device, int] = None, stream=None):
    r"""Performs a memory allocation using the CUDA memory allocator.

    Memory is allocated for a given device and a stream, this
    function is intended to be used for interoperability with other
    frameworks. Allocated memory is released through
    :func:`~torch.cuda.caching_allocator_delete`.

    Args:
        size (int): number of bytes to be allocated.
        device (torch.device or int, optional): selected device. If it is
            ``None`` the default CUDA device is used.
        stream (torch.cuda.Stream or int, optional): selected stream. If is ``None`` then
            the default stream for the selected device is used.

    .. note::
        See :ref:`cuda-memory-management` for more details about GPU memory
        management.
    """
    if device is None:
        device = torch.cuda.current_device()
    device = _get_device_index(0)
    if stream is None:
        stream = torch.cuda.current_stream(device)
    if isinstance(stream, torch.cuda.streams.Stream):
        stream = stream.cuda_stream
    if not isinstance(stream, int):
        raise TypeError('Invalid type for stream argument, must be '
                        '`torch.cuda.Stream` or `int` representing a pointer '
                        'to an existing stream')
    with torch.cuda.device(device):
        return torch._C._cuda_cudaCachingAllocator_raw_alloc(size, stream)
```
cc @mrshenli @osalpekar @H-Huang @kwen2501
|
https://github.com/pytorch/tutorials/issues/2117
|
closed
|
[
"question",
"distributed"
] | 2022-11-12T06:18:43Z
| 2025-05-12T15:33:35Z
| null |
jasonewest
|
pytorch/TensorRT
| 1,449
|
❓ [Question] How do you compile for multiple GPU architectures?
|
## ❓ Question
How do you compile for multiple GPU architectures? Or do you need to compile one torchscript per architecture?
|
https://github.com/pytorch/TensorRT/issues/1449
|
closed
|
[
"question",
"No Activity",
"component: runtime"
] | 2022-11-11T22:02:07Z
| 2023-05-04T00:02:18Z
| null |
dfung
|
huggingface/setfit
| 173
|
How to setup gradient_accumulation?
|
Hi,
in order to train a SetFit model, I would like to simulate a `batch_size` of 16 while using an actual `batch_size` of 8. To do that, I need to set `gradient_accumulation` to 2.
How can I do that?
Thanks.
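For reference, here is a minimal hand-rolled sketch of gradient accumulation in plain PyTorch (this is not SetFit's trainer API; the model, data, and sizes are illustrative): scale each mini-batch loss by the accumulation factor and only step the optimizer every `accumulation_steps` batches.

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

accumulation_steps = 2  # simulates batch_size 16 using physical batches of 8
batches = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(4)]

optimizer.zero_grad()
for step, (x, y) in enumerate(batches, start=1):
    # Scale so the accumulated gradient averages over the virtual batch of 16
    loss = loss_fn(model(x), y) / accumulation_steps
    loss.backward()  # gradients accumulate across backward() calls
    if step % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Whether this can be plugged into SetFit's trainer without forking is exactly the open question here.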
|
https://github.com/huggingface/setfit/issues/173
|
closed
|
[
"question"
] | 2022-11-10T21:19:52Z
| 2022-12-20T08:49:13Z
| null |
piegu
|
huggingface/datasets
| 5,226
|
Q: Memory release when removing the column?
|
### Describe the bug
How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks?
```python
from datasets import load_dataset
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
# check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670
common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train'])
common_voice.clear()
# check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670
```
I tried `gc.collect()` but did not help
### Steps to reproduce the bug
1. load dataset
2. remove all the columns
3. check memory is reduced or not
[link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567)
### Expected behavior
Memory released when I remove the column
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
|
https://github.com/huggingface/datasets/issues/5226
|
closed
|
[] | 2022-11-10T18:35:27Z
| 2022-11-29T15:10:10Z
| 3
|
bayartsogt-ya
|
huggingface/datasets
| 5,225
|
Add video feature
|
### Feature request
Add a `Video` feature to the library so folks can include videos in their datasets.
### Motivation
Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos:
1. Videos, unlike images, can end up being extremely large files
2. Often times when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference
3. Videos have an additional audio stream, which must be accounted for
4. The feature needs to be able to encode/decode videos (with right video settings) from bytes.
### Your contribution
I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dep. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though.
Would love to use this issue as a place to:
- brainstorm ideas on how to do this right
- list ways/examples to work around it for now
CC @sayakpaul @mariosasko @fcakyon
|
https://github.com/huggingface/datasets/issues/5225
|
open
|
[
"enhancement",
"help wanted",
"vision"
] | 2022-11-10T17:36:11Z
| 2022-12-02T15:13:15Z
| 7
|
nateraw
|
huggingface/optimum
| 462
|
Add support for EncoderDecoderModel
|
### Feature request
There's already support for `marian` and various LLMs. But sometimes users create their own generic `EncoderDecoderModel`, e.g.
```
from transformers import EncoderDecoderModel
from optimum.onnxruntime import ORTModelForSeq2SeqLM
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-multilingual-cased", "bert-base-multilingual-cased")
model.save_pretrained("model_dir")
# Should be able to load this, but isn't supported yet.
ort_model = ORTModelForSeq2SeqLM.from_pretrained("model_dir", from_transformers=True)
```
### Motivation
The `EncoderDecoderModel` is generic enough to cover quite a lot of use cases, but this might also be hard, since it can most probably only cover encoder-decoder combinations of ORT-supported LLMs.
### Your contribution
Maybe, if there's some guidance on how to do so.
|
https://github.com/huggingface/optimum/issues/462
|
closed
|
[] | 2022-11-10T13:54:48Z
| 2023-09-01T11:11:43Z
| 1
|
alvations
|
huggingface/evaluate
| 353
|
What is the MAE range in evaluate?
|
In the MAE demo space, it is indicated that "Each MAE float value ranges from 0.0 to 1.0, with the best value being 0.0."
Doesn't it range from 0 to +inf in general?
Is it a programmatic constraint added on the evaluate MAE score?
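For what it's worth, the plain definition of MAE is bounded below by 0 but unbounded above; a quick sketch (pure Python, not the `evaluate` implementation):

```python
def mean_absolute_error(predictions, references):
    """MAE = mean of |prediction - reference|; >= 0, with no upper bound."""
    assert len(predictions) == len(references)
    return sum(abs(p - r) for p, r in zip(predictions, references)) / len(predictions)

print(mean_absolute_error([10.0, 20.0], [0.0, 0.0]))  # 15.0, well above 1.0
```

So the 0.0 to 1.0 range only holds if predictions and references are themselves confined to a unit interval.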
|
https://github.com/huggingface/evaluate/issues/353
|
closed
|
[] | 2022-11-10T13:29:30Z
| 2022-11-16T09:45:15Z
| null |
clefourrier
|
pytorch/kineto
| 681
|
What happens when I use torch.profiler.profile with activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]?
|
I use the profiler with `activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]` and get an average CPU time of 0.22 ms and an average CUDA time of 5.14 us. When I use `time.time()` with `torch.cuda.synchronize()`, the result is 0.24 ms. What is the difference between these results?
My code looks like:
```python
activities = [torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]
torch.cuda.synchronize()
start = time.time()
with torch.profiler.profile(activities=activities, record_shapes=True, profile_memory=False) as pf:
    output = model(input)
torch.cuda.synchronize()
end = time.time()
runtime = end - start
```
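As context for the question, the profiler's per-operator averages and the wall-clock interval measure different things: the wall clock also counts profiler overhead, Python dispatch, and everything between the two `time.time()` calls. A CPU-only sketch of reading both (model and sizes are illustrative):

```python
import time
import torch

model = torch.nn.Linear(128, 128)
inp = torch.randn(32, 128)

start = time.time()
with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU]
) as prof:
    model(inp)
wall = time.time() - start  # wall clock: includes profiler overhead and dispatch

# Per-operator self/total times recorded by the profiler; these generally
# sum to less than the wall-clock interval measured above.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```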
|
https://github.com/pytorch/kineto/issues/681
|
closed
|
[
"question"
] | 2022-11-10T07:19:18Z
| 2023-10-10T15:13:14Z
| null |
qq1243196045
|
pytorch/functorch
| 1,060
|
aten::all not implemented
|
When I vmap the `torch.all` function, I get the following:
```
/tmp/ipykernel_39088/2496106444.py:7: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::all. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /__w/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)
  f = functorch.vmap(torch.all, in_dims=1)
```
Is it possible to make it available for v1.12?
|
https://github.com/pytorch/functorch/issues/1060
|
closed
|
[
"actionable",
"high priority",
"small"
] | 2022-11-09T13:01:55Z
| 2023-01-11T05:56:13Z
| 7
|
iliTheFallen
|
huggingface/diffusers
| 1,204
|
[Community] Can we composite Dreambooth network training?
|
Very impressed with Dreambooth capabilities. I have what I think is a feature request, or perhaps a clarification on what is and is not possible when training networks with Dreambooth. In particular, I was wondering if there is a way to composite two networks to enable embedding of two instances (e.g. an sks dog *and* an sqs cat).

I tried the plain vanilla approach: training one network with an instance prompt using stable v1-5 as the base, then feeding this network into another Dreambooth training on a second instance prompt, and my result could only represent the first instance prompt. I note that I can train a network on a textual inversion token and use this network to feed into Dreambooth, and the resulting network is able to combine the two concepts: the token from textual inversion and the sks instance token from Dreambooth. Just wondering if there is a way to layer multiple tokens with multiple Dreambooth trainings.

Again, super powerful. I'm very impressed by how you can embed a variety of different classes of entities in Dreambooth, with each responding very realistically to prompts.
|
https://github.com/huggingface/diffusers/issues/1204
|
closed
|
[
"question",
"stale"
] | 2022-11-09T01:59:05Z
| 2022-12-21T15:03:19Z
| null |
felgryn
|
huggingface/datasets
| 5,216
|
save_elasticsearch_index
|
Hi,
I am new to Dataset and Elasticsearch. I was wondering whether there is an equivalent approach to saving an Elasticsearch index locally for later use, as with `save_faiss_index`, to remove the need to re-index a dataset?
|
https://github.com/huggingface/datasets/issues/5216
|
open
|
[] | 2022-11-08T23:06:52Z
| 2022-11-09T13:16:45Z
| 1
|
amobash2
|
huggingface/diffusers
| 1,168
|
What does "class images" mean for dreambooth training?
|
What does "class images" mean for dreambooth training?
If instance images are the subject I want to train on, what does "class images" mean?
|
https://github.com/huggingface/diffusers/issues/1168
|
closed
|
[] | 2022-11-07T03:41:07Z
| 2022-11-08T06:07:10Z
| null |
universewill
|
huggingface/transformers
| 20,083
|
Where is the Translation template ?
|
I want to translate the docs in my leisure time, and I followed the guide, but could not find the translation template...
|
https://github.com/huggingface/transformers/issues/20083
|
closed
|
[] | 2022-11-06T06:44:12Z
| 2022-11-14T08:40:44Z
| null |
bfss
|
pytorch/torchx
| 648
|
Use GPU with `local_docker`
|
## 🐛 Bug
Can't use GPU with the `local_docker` scheduler.
Module (check all that applies):
* [ ] `torchx.spec`
* [ ] `torchx.component`
* [ ] `torchx.apps`
* [ ] `torchx.runtime`
* [x] `torchx.cli`
* [x] `torchx.schedulers`
* [ ] `torchx.pipelines`
* [ ] `torchx.aws`
* [ ] `torchx.examples`
* [ ] `other`
## To Reproduce
Steps to reproduce the behavior:
1. create a `test.py` with
```python
import torch
print("torch.cuda.is_available():", torch.cuda.is_available())
```
2. create a `Dockerfile`
```Dockerfile
FROM ghcr.io/pytorch/torchx:0.3.0
COPY test.py test.py
```
3. run the following commands
```bash
docker build -t test:latest .
docker run --gpus all test:latest python test.py
torchx run --scheduler local_cwd utils.python --script test.py
torchx run --scheduler local_docker utils.python --script test.py
```
```
Sending build context to Docker daemon 6.144kB
Step 1/2 : FROM ghcr.io/pytorch/torchx:0.3.0
---> 343f0f3b1a07
Step 2/2 : COPY test.py test.py
---> Using cache
---> fa75170948b2
Successfully built fa75170948b2
Successfully tagged test:latest
torch.cuda.is_available(): True
torchx 2022-11-05 13:29:02 INFO loaded configs from /home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test/.torchxconfig
torchx 2022-11-05 13:29:02 INFO Log directory not set in scheduler cfg. Creating a temporary log dir that will be deleted on exit. To preserve log directory set the `log_dir` cfg option
torchx 2022-11-05 13:29:02 INFO Log directory is: /tmp/torchx_6_h698gw
local_cwd://torchx/torchx_utils_python-mfc1scwb7dncd
torchx 2022-11-05 13:29:02 INFO Waiting for the app to finish...
python/0 torch.cuda.is_available(): True
torchx 2022-11-05 13:29:04 INFO Job finished: SUCCEEDED
torchx 2022-11-05 13:29:05 WARNING `gpus = all` was declared in the [local_docker] section of the config file but is not a runopt of `local_docker` scheduler. Remove the entry from the config file to no longer see this warning
torchx 2022-11-05 13:29:05 INFO loaded configs from /home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test/.torchxconfig
torchx 2022-11-05 13:29:05 INFO Checking for changes in workspace `file:///home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test`...
torchx 2022-11-05 13:29:05 INFO To disable workspaces pass: --workspace="" from CLI or workspace=None programmatically.
torchx 2022-11-05 13:29:06 INFO Built new image `sha256:32cf796cecfd488d7e0e5ba5069e9218098bed75597b3b402b9c557a796e5f4a` based on original image `ghcr.io/pytorch/torchx:0.3.0` and changes in workspace `file:///home/costa/Documents/go/src/github.com/vwxyzjn/test/y/torchx_test` for role[0]=python.
local_docker://torchx/torchx_utils_python-bq7cx57f1c6wr
torchx 2022-11-05 13:29:06 INFO Waiting for the app to finish...
python/0 torch.cuda.is_available(): False
torchx 2022-11-05 13:29:07 INFO Job finished: SUCCEEDED
```
## Expected behavior
Notice that torch identifies the GPU device when running with `poetry run torchx run --scheduler local_cwd utils.python --script test.py`, but fails to do so when running with `poetry run torchx run --scheduler local_docker utils.python --script test.py`. The GPU is also recognized when running `docker run --gpus all test:latest python test.py` directly.
## Environment
```Collecting environment information...
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Pop!_OS 21.10 (x86_64)
GCC version: (Ubuntu 11.2.0-7ubuntu2) 11.2.0
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.34
Python version: 3.9.5 (default, Jul 19 2021, 13:27:26) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.17.5-76051705-generic-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3060 Ti
GPU 1: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] botorch==0.6.0
[pip3] gpytorch==1.9.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.13.0
[pip3] torch-model-archiver==0.6.0
[pip3] torchmetrics==0.10.2
[pip3] torchserve==0.6.0
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0
[pip3] torchx==0.3.0
[conda] Could not collect
|
https://github.com/meta-pytorch/torchx/issues/648
|
closed
|
[
"question",
"docker"
] | 2022-11-05T17:31:18Z
| 2022-11-13T01:26:30Z
| 2
|
vwxyzjn
|
pytorch/data
| 884
|
Steps per epoch for training
|
### 🚀 The feature
For huge datasets, an epoch may take a very long time to complete, and it's good practice to perform evaluation and model checkpointing every N steps instead of at the end of an epoch. The tricky part lies in resuming training: how do we tell the data loader to start from where it left off? It would be great if torchdata could provide such a feature.
I have no idea how such a feature could be implemented, but from a user perspective, the interface would be best to resemble the common usage:
- A `dataloader.state_dict()` method that returns necessary information on where the data loading was left off.
- A `dataloader.load_state_dict(saved_state_dict)` method for loading the saved state_dict.
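The interface above can be sketched with a toy resumable loader (purely illustrative, not torchdata's API; real data loaders would also need to capture shuffling and worker state):

```python
class ResumableLoader:
    """Toy loader over an indexable dataset that can checkpoint its position."""

    def __init__(self, dataset):
        self.dataset = dataset
        self.index = 0

    def __iter__(self):
        while self.index < len(self.dataset):
            item = self.dataset[self.index]
            self.index += 1
            yield item

    def state_dict(self):
        return {"index": self.index}

    def load_state_dict(self, state):
        self.index = state["index"]


loader = ResumableLoader(list(range(10)))
it = iter(loader)
first = [next(it) for _ in range(4)]  # train for 4 "steps"
ckpt = loader.state_dict()            # checkpoint mid-epoch

resumed = ResumableLoader(list(range(10)))
resumed.load_state_dict(ckpt)         # resume from where we left off
rest = list(resumed)
print(first, rest)  # [0, 1, 2, 3] [4, 5, 6, 7, 8, 9]
```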
### Motivation, pitch
See above.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/meta-pytorch/data/issues/884
|
closed
|
[] | 2022-11-04T16:39:32Z
| 2022-11-04T21:00:30Z
| 6
|
netw0rkf10w
|
huggingface/datasets
| 5,200
|
Some links to canonical datasets in the docs are outdated
|
As we no longer have canonical datasets in the GitHub repo, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGLUE here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub.
|
https://github.com/huggingface/datasets/issues/5200
|
closed
|
[
"documentation"
] | 2022-11-04T10:06:21Z
| 2022-11-07T18:40:20Z
| 1
|
polinaeterna
|
pytorch/xla
| 4,157
|
How to wrap a model with dynamo
|
## ❓ Questions and Help
I am trying to add a dynamo model test with
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.utils.utils as xu
import torch_xla.debug.metrics as met
import torch._dynamo as dynamo
import torchvision
import unittest

class DynamoBasicTest(unittest.TestCase):

    @dynamo.optimize('torchxla_trace_once')
    def resetnet_18_dynamo(self, data):
        model = torchvision.models.resnet18()
        #model.eval()
        return model(data)

    def test_resnet18(self):
        batch_size = xu.getenv_as('BATCH_SIZE', int, defval=4)
        sample_count = xu.getenv_as('SAMPLE_COUNT', int, defval=10)
        loader = xu.SampleGenerator(
            data=(torch.zeros(batch_size, 3, 224,
                              224), torch.zeros(batch_size, dtype=torch.int64)),
            sample_count=sample_count)
        for data, _ in loader:
            import pdb; pdb.set_trace()
            output = self.resetnet_18_dynamo(data)
```
I get an error
```
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/backends.py", line 53, in inner
return fn(model, **kwargs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/backends.py", line 823, in torchxla_trace_once
return integration.extract_compiled_graph(model, example_inputs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/torchxla_integration.py", line 79, in extract_compiled_graph
orig_device = example_inputs[0].device
IndexError: list index out of range
```
and I saw that `example_inputs` is empty. If I try with a simple example
```python
    def fn_simple(self, x, y):
        a = torch.cos(x)
        b = torch.sin(y)
        return a + b

    @dynamo.optimize('torchxla_trace_once')
    def fn_simple_dynamo(self, x, y):
        return self.fn_simple(x, y)
```
it worked as expected. I am wondering what I missed here.
@shunting314 @wconstab
|
https://github.com/pytorch/xla/issues/4157
|
closed
|
[
"dynamo"
] | 2022-11-04T02:19:01Z
| 2022-11-04T22:55:13Z
| null |
JackCaoG
|
huggingface/setfit
| 147
|
Reproducing RAFT experiments (Table 3)
|
Hi, I wasn't able to locate the code to reproduce Table 3. I looked in the `scripts` folder but didn't have success.
Any help with this is greatly appreciated!
A side question on the RAFT results: did you use 10 random seeds for this experiment?
|
https://github.com/huggingface/setfit/issues/147
|
closed
|
[
"question"
] | 2022-11-02T18:34:55Z
| 2022-12-13T22:50:48Z
| null |
dgiova
|
pytorch/TensorRT
| 1,437
|
❓ [Question] Are the interpolate plugins with align_corners=True still necessary?
|
## ❓ Question
<!-- Your question -->
## What you have already tried
https://github.com/pytorch/TensorRT/blob/master/core/conversion/converters/impl/interpolate.cpp#L566
This note is in the aten::upsample_bilinear2d converter:
`Align corners and scale factor behave slightly different together in TRT and PyTorch so run the layer in ATen to maintain consistency between Torch-TensorRT and PyTorch https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate`
With TRT 8.2.3 when I manually disable the plugin implementation and use the TRT resize_layer implementation I don't see any additional inaccuracy in my model. I also don't see any failures in the interpolate unit tests.
Are these plugins still necessary? What was the nature of the discrepancy with align corners?
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/1437
|
closed
|
[
"question"
] | 2022-11-02T18:26:01Z
| 2022-12-15T17:59:25Z
| null |
mfeliz-cruise
|
huggingface/setfit
| 145
|
SetFit for a large number of classes
|
Hi there, thanks for releasing such an interesting library.
I am curious if any experiments have been run using SetFit in the extreme multiclass setting, say as `n_classes>=100`?
|
https://github.com/huggingface/setfit/issues/145
|
closed
|
[
"question"
] | 2022-11-02T16:34:51Z
| 2024-05-14T10:46:30Z
| null |
steve-marmalade
|
huggingface/datasets
| 5,189
|
Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
|
### Feature request
Sorry for cryptic name but I'd like to explain using code itself. When I want to load a specific dataset from a repository (for instance, this: https://huggingface.co/datasets/inria-soda/tabular-benchmark)
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```
The `datasets` library is essentially designed for people who'd like to use benchmark datasets on various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, having fixed train and test splits often ends up with the model overfitting to the validation split, so users usually prefer to create their own splits, using validation techniques like `StratifiedKFold` cross-validation, or `GridSearchCV` when tuning hyperparameters. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the splitting is done by the authors.
It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.
```diff
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```
### Motivation
I explained it above.
### Your contribution
I think this is quite a big change that seems small (e.g. how to determine datasets that will not be load to train split?), it's best if we discuss first!
|
https://github.com/huggingface/datasets/issues/5189
|
open
|
[
"enhancement"
] | 2022-11-02T09:15:02Z
| 2022-12-06T12:13:17Z
| 33
|
merveenoyan
|
huggingface/datasets
| 5,183
|
Loading an external dataset in a format similar to conll2003
|
I'm trying to load a custom dataset into a Dataset object. It's similar to conll2003 but with 2 columns only (word, entity). I used the following script:
```python
features = datasets.Features(
    {"tokens": datasets.Sequence(datasets.Value("string")),
     "ner_tags": datasets.Sequence(
         datasets.features.ClassLabel(
             names=["B-PER", .... etc.]))}
)

from datasets import Dataset

INPUT_COLUMNS = "tokens ner_tags".split(" ")

def read_conll(file):
    #all_labels = []
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line:
                if line.startswith("-DOCSTART-") and example["tokens"] != []:
                    print(idx, example)
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
                elif line == "\n" or (line.startswith("-DOCSTART-") and example["tokens"] == []):
                    continue
                else:
                    row_cols = line.split(" ")
                    for i, col in enumerate(example):
                        example[col] = row_cols[i].rstrip()

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)
```
The following error happened:
```
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <genexpr>(.0)
    285 for key in unique_values(itertools.chain(*dicts)): # set merge all keys
    286 # Will raise KeyError if the dict don't have the same keys
--> 287 yield key, tuple(d[key] for d in dicts)
    288
TypeError: tuple indices must be integers or slices, not str
```
What does this mean and what should I modify?
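One possible culprit: the inner loop assigns `example[col] = row_cols[i].rstrip()`, which overwrites each list with a string on every line, so the example values may no longer match the `Sequence` features. A hedged sketch of an append-based variant of the same parsing loop (column names reused from the script above; the exact semantics of the original are an assumption):

```python
def parse_conll_lines(lines, columns=("tokens", "ner_tags")):
    """Accumulate one example per document, appending per-token values to lists."""
    example = {col: [] for col in columns}
    idx = 0
    for line in lines:
        line = line.rstrip("\n")
        if not line or line.startswith("-DOCSTART-"):
            if example[columns[0]]:
                yield idx, example
                idx += 1
                example = {col: [] for col in columns}
            continue
        row_cols = line.split(" ")
        for i, col in enumerate(columns):
            example[col].append(row_cols[i])  # append, don't assign
    if example[columns[0]]:
        yield idx, example
```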
|
https://github.com/huggingface/datasets/issues/5183
|
closed
|
[] | 2022-11-01T13:18:29Z
| 2022-11-02T11:57:50Z
| 0
|
Taghreed7878
|
huggingface/datasets
| 5,182
|
Add notebook / other resource links to the task-specific data loading guides
|
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model?
For example, in https://huggingface.co/docs/datasets/image_classification we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb.
Applies to https://huggingface.co/docs/datasets/object_detection as well.
Cc: @osanseviero @nateraw
|
https://github.com/huggingface/datasets/issues/5182
|
closed
|
[
"enhancement"
] | 2022-11-01T07:57:26Z
| 2022-11-03T01:49:57Z
| 2
|
sayakpaul
|
huggingface/datasets
| 5,181
|
Add a guide for semantic segmentation
|
Currently, we have these guides for object detection and image classification:
* https://huggingface.co/docs/datasets/object_detection
* https://huggingface.co/docs/datasets/image_classification
I am proposing adding a similar guide for semantic segmentation.
I am happy to contribute a PR for it.
Cc: @osanseviero @nateraw
|
https://github.com/huggingface/datasets/issues/5181
|
closed
|
[
"documentation"
] | 2022-11-01T07:54:50Z
| 2022-11-04T18:23:36Z
| 2
|
sayakpaul
|
huggingface/datasets
| 5,180
|
An example or recommendations for creating large image datasets?
|
I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do?
As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset).
Cc @lhoestq
|
https://github.com/huggingface/datasets/issues/5180
|
open
|
[] | 2022-11-01T07:38:38Z
| 2022-11-02T10:17:11Z
| 2
|
sayakpaul
|
pytorch/functorch
| 1,058
|
Cuda Memory Overflow in Jacobian Computation
|
Hi,
I implemented a Jacobian computation using functorch, but encountered a memory overflow issue.
The function that I want to differentiate is `ResidualFunctional.residual`. I'd like to compute the Jacobian of this function w.r.t. its first argument `inputs`.
The output of `ResidualFunctional.residual` is a tensor of size (10000, ) and `inputs` is a tensor of size (1001, ). Thus, the Jacobian is 10000 by 1001, which takes about 74 MB using double precision.
However, `functorch.jacrev` had a memory overflow error on a 24 GB GPU. The error message is shown below. I am wondering why FuncTorch takes so much memory in the reverse mode autodiff, and if there is a solution to this issue.
```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 38.00 GiB (GPU 0; 23.69 GiB total capacity; 810.80 MiB already allocated; 21.25 GiB free; 824.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
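In case it helps frame the question: one memory-light workaround is to build the Jacobian from vector-Jacobian products one row at a time, so the full set of 10000 cotangents is never materialized at once. A minimal sketch using `torch.autograd.functional.vjp` on a toy function (the function and sizes are illustrative, not the residual above):

```python
import torch

def jacobian_by_rows(f, x):
    """Jacobian of f at x, one VJP per output row: slower, but memory stays near one row."""
    y = f(x)
    rows = []
    for j in range(y.numel()):
        v = torch.zeros_like(y)
        v.view(-1)[j] = 1.0          # j-th basis cotangent
        _, row = torch.autograd.functional.vjp(f, x, v)
        rows.append(row.reshape(-1))
    return torch.stack(rows)

x = torch.tensor([1.0, 2.0, 3.0])
J = jacobian_by_rows(lambda t: t ** 2, x)
print(J)  # diagonal matrix with 2*x on the diagonal
```

The trade-off is one backward pass per output element instead of one batched pass, which is why `jacrev` is fast but memory-hungry.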
Below is a working example that reproduce this issue.
CUDA 11.4
FuncTorch 1.13.0
PyTorch 1.13.0
GPyTorch 1.9.0
Thanks!
```python
import torch
import gpytorch
import functorch
from functorch import make_functional_with_buffers

class ResidualFunctional():
    def __init__(self,
                 kernel, m, d,
                 outputscale=None, sigma=None,
                 lengthscale_penalty=None):
        self.func, _, self.buffers = make_functional_with_buffers(kernel)
        self.m = m
        self.d = d
        self.outputscale = outputscale
        self.sigma = sigma

    def _residual(self, u, x, y, params, sigma):
        with gpytorch.settings.trace_mode(), gpytorch.settings.lazily_evaluate_kernels(False):
            m = u.size(0)
            func_nl = lambda params, buffers, x1, x2: self.func(params, buffers, x1, x2).evaluate()
            Kxu = func_nl(params, self.buffers, x, u)
            A = torch.cat(
                [Kxu, sigma * torch.eye(m, device=u.device)],
                dim=-2,
            )
            ybar = torch.cat([y, y.new_zeros(m)], dim=-1)
            c = torch.linalg.lstsq(A, ybar.unsqueeze(-1), rcond=None).solution.squeeze()
            r = ybar - A @ c
            return r

    def residual(self, inputs, x, y):
        u = inputs[:self.m * self.d].view(self.m, self.d)
        lengthscale = torch.nn.functional.softplus(inputs[-1])
        return self._residual(u, x, y, (lengthscale, self.outputscale), self.sigma)

if __name__ == "__main__":
    device = "cuda:0"
    n = 10000
    d = 10
    m = 100
    u = torch.randn(m, d, device=device)
    x = torch.randn(n, d, device=device)
    y = torch.randn(n, device=device)
    kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
    kernel = kernel.to(device)
    functional = ResidualFunctional(
        kernel, m=m, d=d,
        outputscale=kernel.outputscale, sigma=1e-2,
    )
    inputs = torch.cat(
        (u.view(-1), kernel.base_kernel.raw_lengthscale.view(-1)),
        dim=-1,
    )
    residual = functional.residual(inputs, x, y)
    print(residual.shape)
    jacobian = functorch.jacrev(functional.residual, argnums=0)(inputs, x, y)
    print(jacobian.shape)
```
|
https://github.com/pytorch/functorch/issues/1058
|
open
|
[] | 2022-10-31T22:29:07Z
| 2022-11-08T14:53:53Z
| 6
|
kayween
|
huggingface/optimum
| 442
|
Add support for ORTModelForObjectDetection
|
### Feature request
Hi, I went through optimum's code base and could not find support for object detection models. Is there plan to add ORTModelForObjectDetection just like ORTModelForImageClassification exists? Would be great to have this feature.
Object detection task is also supported as part of transformers `pipeline` feature so I guess it should be possible to support this as part of optimum?
### Motivation
I want to leverage onnx support for YOLOS model
### Your contribution
I would be happy to help in adding support for this feature if someone can guide me.
|
https://github.com/huggingface/optimum/issues/442
|
open
|
[
"onnxruntime",
"onnx"
] | 2022-10-31T19:59:21Z
| 2025-12-05T10:42:26Z
| 9
|
shivalikasingh95
|
pytorch/pytorch
| 88,073
|
How to export pytorch model to onnx, with input of List[Tuple[Tensor,Tensor]] and output of List[Tuple[Tensor,Tensor]]
|
I have no idea how to export this model to ONNX. One of the model's inputs is a list of a variable number of tuples, each containing two tensors of size (2, 1024). The model also returns a list of such tuples of two tensors (2, 1024).
How can I export it? I've already searched the PyTorch community, but most of the issues have no replies.
## Code example
state[in] is a list, and state[out] is also a list.
model definition
```python
class Module(nn.Module):
...
def forward(self, enc_out, enc_mask, tgt_seq,
state: Optional[List[Tuple[torch.Tensor, torch.Tensor]]] = None,
bias_embedding: Optional[torch.Tensor] = None):
if state is not None:
hid = list()
cell = list()
for h, c in state:
hid.append(h)
cell.append(c)
state_in = (torch.stack(hid, dim=1), torch.stack(cell, dim=1))
else:
state_in = None
logit, attn, state_out = self.decoder(tgt_seq, enc_out, enc_mask, state_in,
bias_embedding=bias_embedding)
hid, cell = state_out
state = [(hid[:, j, :], cell[:, j, :]) for j in range(logit.size(0))]
logit = logit[:, -1, :].squeeze(1)
return torch.log_softmax(logit, -1), attn, state
```
model export
```python
input_names = ["enc_out", "enc_mask", "tgt_seq", "cache_state", "bias_embedding"]
output_names = ["dec_out", "attn", "state"]
dynamic_axes = {
"enc_out": {0: "batch_size", 1: "enc_out_len"},
"enc_mask": {0: "batch_size", 1: "enc_out_len"},
"tgt_seq": {0: "batch_size"},
"dec_out": {0: "batch_size"},
"attn": {0: "batch_size", 3: "enc_out_len"},
"state": {1: "batch_size"},
}
torch.onnx.export(
model,
(enc_out, enc_mask, tgt_seq, cache_state, bias_embedding),
"decoder.onnx",
export_params=True,
opset_version=13,
do_constant_folding=True,
input_names=input_names,
output_names=output_names,
dynamic_axes=dynamic_axes
)
```
|
https://github.com/pytorch/pytorch/issues/88073
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged",
"onnx-needs-info"
] | 2022-10-31T08:22:16Z
| 2022-11-22T06:07:07Z
| null |
yszhou2019
|
pytorch/tutorials
| 2,105
|
training fail
|
Image: https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/index.html?from=groupmessage
I train my model with an NGC Docker container.
Sometimes when I train a network (like YOLOv7), Linux disconnects and reboots.
How can I debug this to find the root cause?
|
https://github.com/pytorch/tutorials/issues/2105
|
closed
|
[
"question"
] | 2022-10-30T04:10:22Z
| 2022-11-14T20:50:47Z
| null |
alicera
|
pytorch/functorch
| 1,057
|
Installing functorch breaks torchaudio
|
I'm following along with [this](https://colab.research.google.com/drive/1GNfb01W_xf8JRu78ZKoNnLqiwcrJrbYG#scrollTo=nBj3vMvIhD9t) colab from the [functorch installation docs](https://pytorch.org/functorch/stable/install.html#colab).
After installing and restarting, when I try to import `torchaudio`, the runtime crashes. At first, I got this error:
```python
OSError: /usr/local/lib/python3.7/dist-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN2at4_ops7resize_4callERKNS_6TensorEN3c108ArrayRefIlEENS5_8optionalINS5_12MemoryFormatEEE
```
Now, I'm just getting the runtime crashing with no visible error.
I know functorch was merged into pytorch proper, but I don't see any instructions about how to use it from there. Would that fix the issue? If so, should the main docs be updated?
|
https://github.com/pytorch/functorch/issues/1057
|
closed
|
[
"actionable"
] | 2022-10-28T18:16:13Z
| 2022-12-09T18:59:35Z
| 11
|
dellis23
|
pytorch/pytorch
| 87,862
|
torch.where: `out` kwarg support is undocumented
|
### ๐ The doc issue
https://pytorch.org/docs/stable/generated/torch.where.html doesn't mention anything about `out` kwarg support.
Ref:
https://github.com/pytorch/pytorch/blob/aaba0bd30641c56db1dc0550b81fbc458db46276/aten/src/ATen/native/native_functions.yaml#L5653
E.g.
```python
>>> x = torch.randn(3)
>>> torch.where(x < 0, x, -x, out=x)
tensor([-0.6862, -0.6860, -1.4944])
```
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
|
https://github.com/pytorch/pytorch/issues/87862
|
closed
|
[
"module: docs",
"good first issue",
"actionable"
] | 2022-10-27T14:50:44Z
| 2022-10-27T21:03:47Z
| null |
kshitij12345
|
pytorch/pytorch
| 87,789
|
Any ideas on how we can convert a model from huggingface (transformers library )to tensorflow lite?
|
### ๐ Describe the bug
I want to convert a CamembertForQuestionAnswering model to TensorFlow Lite. I downloaded it from the Hugging Face platform, because saving the model locally gives me the weights in `bin` format.
I'm asking here because Hugging Face uses PyTorch pretrained models.
- When I try to convert the model using the `tf_model.h5` file, it gives me this error: `AttributeError: 'CamembertForQuestionAnswering' object has no attribute 'call'`.
- I also can't load it using `tf.keras.models.load_model()`; it gives me: `ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x7f27cceb1810>`.
- When I save the transformers model locally it gives me the model in `bin` format, so I downloaded it from the platform.
### Versions
https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf?context=Etalab+est+une+administration+publique+fran%C3%A7aise+qui+fait+notamment+office+de+Chief+Data+Officer+de+l%27%C3%89tat+et+coordonne+la+conception+et+la+mise+en+%C5%93uvre+de+sa+strat%C3%A9gie+dans+le+domaine+de+la+donn%C3%A9e+%28ouverture+et+partage+des+donn%C3%A9es+publiques+ou+open+data%2C+exploitation+des+donn%C3%A9es+et+intelligence+artificielle...%29.+Ainsi%2C+Etalab+d%C3%A9veloppe+et+maintient+le+portail+des+donn%C3%A9es+ouvertes+du+gouvernement+fran%C3%A7ais+data.gouv.fr.+Etalab+promeut+%C3%A9galement+une+plus+grande+ouverture+l%27administration+sur+la+soci%C3%A9t%C3%A9+%28gouvernement+ouvert%29+%3A+transparence+de+l%27action+publique%2C+innovation+ouverte%2C+participation+citoyenne...+elle+promeut+l%E2%80%99innovation%2C+l%E2%80%99exp%C3%A9rimentation%2C+les+m%C3%A9thodes+de+travail+ouvertes%2C+agiles+et+it%C3%A9ratives%2C+ainsi+que+les+synergies+avec+la+soci%C3%A9t%C3%A9+civile+pour+d%C3%A9cloisonner+l%E2%80%99administration+et+favoriser+l%E2%80%99adoption+des+meilleures+pratiques+professionnelles+dans+le+domaine+du+num%C3%A9rique.+%C3%80+ce+titre+elle+%C3%A9tudie+notamment+l%E2%80%99opportunit%C3%A9+de+recourir+%C3%A0+des+technologies+en+voie+de+maturation+issues+du+monde+de+la+recherche.+Cette+entit%C3%A9+charg%C3%A9e+de+l%27innovation+au+sein+de+l%27administration+doit+contribuer+%C3%A0+l%27am%C3%A9lioration+du+service+public+gr%C3%A2ce+au+num%C3%A9rique.+Elle+est+rattach%C3%A9e+%C3%A0+la+Direction+interminist%C3%A9rielle+du+num%C3%A9rique%2C+dont+les+missions+et+l%E2%80%99organisation+ont+%C3%A9t%C3%A9+fix%C3%A9es+par+le+d%C3%A9cret+du+30+octobre+2019.%E2%80%89+Dirig%C3%A9+par+Laure+Lucchesi+depuis+2016%2C+elle+rassemble+une+%C3%A9quipe+pluridisciplinaire+d%27une+trentaine+de+personnes.&question=Comment+s%27appelle+le+portail+open+data+du+gouvernement+%3F
|
https://github.com/pytorch/pytorch/issues/87789
|
closed
|
[] | 2022-10-26T16:00:34Z
| 2022-10-27T05:40:57Z
| null |
BENSAFOUAN-Abdelhalim
|
huggingface/setfit
| 126
|
Does num_iterations create duplicate data?
|
I am trying to get a better understanding of this hyperparameter. As far as I understand, you iterate over the data `num_iterations` times and create a positive and a negative pair by sampling. Could this result in duplicate data?
It also sometimes results in more examples than there are potential pairs. For example, in `imdb` with 3-shot there are 6 examples, 3 per class. Setting `num_iterations` to 5 creates 6 (examples) * 2 (1 positive + 1 negative) * 5 (num_iterations) = 60 pairs, while the number of possible unordered pairs is only C(6, 2) = 6*5/2 = 15, essentially half of the full pair matrix without the diagonal.
If the above is correct, it seems like running training for multiple epochs. If so, why not create all pairs instead and keep the `epochs` hyperparameter as-is, which might be more intuitive? And if you want a way to sample less data, why not introduce a `sample_size` to cap those combinations to a smaller number for experimentation?
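If the sampling works as described, duplicates are indeed possible once the number of generated pairs exceeds the number of distinct ones. A dependency-free sketch of such per-example positive/negative sampling (the function and its details are assumptions for illustration, not SetFit's actual code):

```python
import random
from math import comb

def sample_pairs(examples, labels, num_iterations, seed=0):
    """Per iteration, pair each example with one random positive (same
    label) and one random negative (different label). This loosely mirrors
    the described behaviour; it is NOT SetFit's actual implementation."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_iterations):
        for i, (x, y) in enumerate(zip(examples, labels)):
            pos = [j for j, l in enumerate(labels) if l == y and j != i]
            neg = [j for j, l in enumerate(labels) if l != y]
            pairs.append(((x, examples[rng.choice(pos)]), 1.0))
            pairs.append(((x, examples[rng.choice(neg)]), 0.0))
    return pairs

examples = ["a", "b", "c", "d", "e", "f"]  # 2 classes, 3 examples each
labels = [0, 0, 0, 1, 1, 1]
pairs = sample_pairs(examples, labels, num_iterations=5)
print(len(pairs))  # 60 generated pairs: 6 examples * 2 pairs * 5 iterations

# Only C(6, 2) = 15 distinct unordered pairs exist, so by pigeonhole
# the 60 generated pairs must contain duplicates.
unique = {frozenset(p) for p, _ in pairs}
print(len(unique) <= comb(6, 2))  # True
```

The pigeonhole argument holds regardless of the exact sampling scheme: 60 draws from a pool of 15 distinct pairs cannot all be unique.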
|
https://github.com/huggingface/setfit/issues/126
|
open
|
[
"question"
] | 2022-10-26T13:09:52Z
| 2022-12-20T09:10:53Z
| null |
nsorros
|
huggingface/datasets
| 5,157
|
Consistent caching between python and jupyter
|
### Feature request
I hope this is not my mistake. Currently, if I use `load_dataset` from a Python session on a custom dataset to do the preprocessing, it is saved in the cache, and other Python sessions load it from the cache. However, calling the same from a Jupyter notebook does not work: the preprocessing starts from scratch.
If adjusting the hashes is impossible, is there a way to manually set the dataset fingerprint to force this behaviour?
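One likely explanation (an assumption about the mechanism; `datasets` actually hashes transforms via dill-based pickling) is that the fingerprint is derived from a hash of the function object, and a function defined in a notebook cell can serialize differently from the same-looking function defined in a module. A stdlib-only toy version of the idea:

```python
import hashlib

def toy_fingerprint(func):
    """Toy fingerprint: hash the function's bytecode and constants.
    Any change to the function body changes the fingerprint, which is
    roughly why 'the same' preprocessing can miss the cache."""
    code = func.__code__
    payload = code.co_code + repr(code.co_consts).encode()
    return hashlib.md5(payload).hexdigest()

def add_one(x):
    return x + 1

def add_two(x):
    return x + 2

print(toy_fingerprint(add_one) == toy_fingerprint(add_one))  # True
print(toy_fingerprint(add_one) == toy_fingerprint(add_two))  # False
```

If the real fingerprint incorporates anything environment-dependent beyond the bytecode, identical-looking notebook and script transforms can still diverge.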
### Motivation
If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent so one can create the dataset once and then try small things on jupyter without preprocessing everything again.
### Your contribution
I am happy to try a PR if you give me some pointers where the changes should happen
|
https://github.com/huggingface/datasets/issues/5157
|
closed
|
[
"enhancement"
] | 2022-10-25T01:34:33Z
| 2022-11-02T15:43:22Z
| 2
|
gpucce
|
huggingface/setfit
| 120
|
Using SetFit Embeddings for Semantic Search?
|
Hi,
I was wondering whether semantic search would improve if one trained a multilabel classification model and used those embeddings.
After training a binary classification model, I have seen that the embeddings of similar topics are much closer with the fitted model (`all-MiniLM-L12-v2-setfit`) than with the base `all-MiniLM-L12-v2`, which makes sense to me.
```python
from scipy import spatial  # needed for spatial.distance.cosine

# Cosine similarity
def get_cosine_similarity(vector1, vector2):
    sim = 1 - spatial.distance.cosine(vector1, vector2)
    return sim
word_1 = "acne"
word_2 = "red skin"
emb_fit_1 = model.model_body.encode([word_1])
emb_fit_2 = model.model_body.encode([word_2])
emb_base_1 = model_sbert.encode([word_1])
emb_base_2 = model_sbert.encode([word_2])
print(f"{word_1} vs {word_2} (base)", get_cosine_similarity(emb_base_1, emb_base_2))
print(f"{word_1} vs {word_2} (fit)", get_cosine_similarity(emb_fit_1, emb_fit_2))
```
```
acne vs pimple (base) 0.5959747433662415
acne vs pimple (fit) 0.9996786117553711
acne vs red skin (base) 0.36421263217926025
acne vs red skin (fit) 0.9994498491287231
acne vs red car (base) 0.17558744549751282
acne vs red car (fit) 0.0051751588471233845
```
I would assume that if the model is trained on a multi-label classification task, the embeddings would be clustered according to the labels provided during training. Would that improve semantic search if enough labels are provided?
Of course I could train a model and test it, but maybe you have done similar tests and already know whether it works :-)
Thanks!
|
https://github.com/huggingface/setfit/issues/120
|
open
|
[
"question"
] | 2022-10-25T00:00:03Z
| 2024-07-12T02:02:04Z
| null |
Raidus
|
pytorch/pytorch
| 87,564
|
Meta impl for pirms.where is incorrect
|
### ๐ Describe the bug
```
device = "meta"
pred = torch.randn(5, 5, device=device) > 0
a = torch.rand(5, 5, device=device).t()
out = torch.where(pred, a, 0)
print("pred.stride()", pred.stride())
print("a.stride()", a.stride())
print("out.stride()", out.stride())
pred.stride() (5, 1)
a.stride() (1, 5)
out.stride() (1, 5)
```
if I have device=โcudaโ, the output is
```
pred.stride() (5, 1)
a.stride() (1, 5)
out.stride() (5, 1)
```
### Versions
master
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha
|
https://github.com/pytorch/pytorch/issues/87564
|
closed
|
[
"triaged",
"module: primTorch",
"module: decompositions"
] | 2022-10-23T01:15:28Z
| 2022-10-26T00:48:07Z
| null |
SherlockNoMad
|
huggingface/setfit
| 119
|
Using SetFit for regression tasks?
|
I was curious about using SetFit for ordinal Likert-scale outcomes (e.g. IMDB movie review scores). It doesn't seem to be an obvious option in the SetFit API. Has anyone tried using SetFit for regression tasks?
|
https://github.com/huggingface/setfit/issues/119
|
open
|
[
"question"
] | 2022-10-21T19:15:29Z
| 2023-02-01T16:48:33Z
| null |
ericlinML
|
pytorch/data
| 848
|
[RFC] Verify that the docs contain working code and self-contained examples using doctest
|
### ๐ The feature
Currently there does not seem to be an automatic way to verify that the examples in the documentation are actually working. This leads to issues like (https://github.com/pytorch/data/issues/433).
An example should also be complete enough so that developers can easily try out the code.
A solution could be to use the sphinx doctest extension to test the documentation before building it. Docstrings can be continuously migrated from standard reST doctests to test code that runs using the sphinx doctest extension.
### Motivation, pitch
Working examples that are up-to-date boost adoption of the library and make it easier for developers to become proficient in using the library.
Therefore one could consider using doctest in order to be forced to write self-contained examples that execute without error.
### Alternatives
Doctests can be executed in different ways:
- Invoking plain python to execute the doctests as described [here](https://docs.python.org/3/library/doctest.html)
- Using [pytest --doctest](https://docs.pytest.org/en/7.1.x/how-to/doctest.html) to execute the tests
- Run within the documentation build process as `cd docs && make doctest`
I would recommend running the doctests while building the documentation using sphinx because it is easy to continuously
migrate the existing non-tested example code to code being tested.
### Additional context
A minimal example of the RFC can be found
[here](https://github.com/pytorch/data/pull/850). Please
note that the code is only meant as an example for discussion and might not (yet) meet the quality criteria of a PR.
The example implementation consists of the following parts:
- An updated `docs/Makefile` with a `doctest` target
- Enabling the sphinx extension `sphinx.ext.doctest` in `docs/source/conf.py`
- A minimal example of an updated docstring in `torchdata/dataloader2/adapter.py`
- Adding the `doctest` step to the CI in `.github/workflows/_build_test_upload.yml`
The tests can be executed like this: `cd docs && make doctest`.
|
https://github.com/meta-pytorch/data/issues/848
|
closed
|
[
"documentation",
"Better Engineering"
] | 2022-10-21T14:45:36Z
| 2022-10-27T20:55:48Z
| 1
|
mathiasburger
|
huggingface/dataset-viewer
| 614
|
[feat req] Alphabetical ordering for splits in dataset viewer
|
### Link
https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0
### Description
Currently, the datasets splits for the viewer are displayed in a seemingly random order, see example for [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0):
<img width="1505" alt="Screenshot 2022-10-21 at 14 04 39" src="https://user-images.githubusercontent.com/93869735/197192381-46ca4041-db69-423e-be55-abf96e70167a.png">
It would be easier to traverse the list of possible splits if they were arranged alphabetically!
|
https://github.com/huggingface/dataset-viewer/issues/614
|
closed
|
[
"question",
"feature request"
] | 2022-10-21T12:11:00Z
| 2022-10-26T09:48:29Z
| null |
sanchit-gandhi
|
huggingface/datasets
| 5,144
|
Inconsistent documentation on map remove_columns
|
### Describe the bug
The page [process](https://huggingface.co/docs/datasets/process) says this about the `remove_columns` parameter of the `map` function:
> When you remove a column, it is only removed after the example has been provided to the mapped function.
So it seems that the `remove_columns` parameter removes columns after the mapped function.
However, another page, [the documentation of the function map](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns), says:
> Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in remove_columns, these columns will be kept.
So one page says "after the mapped function" and the other says "before the mapped function."
Is there something wrong?
### Steps to reproduce the bug
Not about code.
### Expected behavior
Consistent descriptions of the behavior of the `remove_columns` parameter in the `map` function.
### Environment info
datasets V2.6.0
|
https://github.com/huggingface/datasets/issues/5144
|
closed
|
[
"documentation",
"duplicate",
"good first issue",
"hacktoberfest"
] | 2022-10-21T08:37:53Z
| 2022-11-15T14:15:10Z
| 3
|
zhaowei-wang-nlp
|
huggingface/setfit
| 117
|
Using this for code gen?
|
Can we use this for code generation?
|
https://github.com/huggingface/setfit/issues/117
|
closed
|
[
"question"
] | 2022-10-20T16:53:59Z
| 2022-12-20T09:32:50Z
| null |
krrishdholakia
|
huggingface/datasets
| 5,143
|
DownloadManager Git LFS support
|
### Feature request
Maybe I'm mistaken, but the `DownloadManager` does not support resolving Git LFS files out of the box, right?
Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns LFS pointer files, as far as I can tell.
Is there a good way to write a dataset loading script for a repo with LFS files?
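For what it's worth, Git LFS pointer files are tiny text files with a fixed first line, so a loading script can at least detect when a download returned a pointer rather than the payload. A minimal check (a sketch based on the LFS pointer-file format, not a `datasets` API):

```python
def looks_like_lfs_pointer(head: bytes) -> bool:
    """Git LFS pointer files begin with this spec line; the real payload
    lives on the LFS server, not in the git object itself."""
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")

pointer = b"version https://git-lfs.github.com/spec/v1\noid sha256:deadbeef\nsize 1234\n"
print(looks_like_lfs_pointer(pointer))         # True
print(looks_like_lfs_pointer(b"\x89PNG\r\n"))  # False
```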
### Motivation
/
### Your contribution
/
|
https://github.com/huggingface/datasets/issues/5143
|
closed
|
[
"enhancement"
] | 2022-10-20T15:29:29Z
| 2022-10-20T17:17:10Z
| 2
|
Muennighoff
|
huggingface/setfit
| 116
|
How to take advantage of Mac M1 GPUs?
|
More than an issue, this is a request for help.
Do you have advice on how to take advantage of the Mac M1 Pro GPU for training a model, assuming the underlying Torch installation provides support?
There are some tutorials on using Torch with the MPS backend, but I'm not sure how to tell SetFit to use a specific device.
|
https://github.com/huggingface/setfit/issues/116
|
closed
|
[
"question"
] | 2022-10-20T08:43:24Z
| 2024-01-29T16:58:04Z
| null |
secastro
|
huggingface/setfit
| 115
|
How many samples for setfit?
|
I understood that SetFit is a lightweight solution for few-shot learning. Two questions came up:
1. At how many samples per class would you switch to standard supervised learning and fine-tuning? E.g. 100 samples?
2. Is there any disadvantage to generating too many pairs (`num_iterations`)? If I have 30 classes, wouldn't the default of 20 be too small to learn meaningful embeddings?
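To put the second question in perspective, the pool of distinct pairs grows quickly with the number of classes and the samples per class. A small sketch of that pair budget (plain combinatorics, not SetFit internals):

```python
from math import comb

def pair_budget(num_classes, per_class):
    """Unique unordered pairs among num_classes * per_class examples,
    split into same-label (positive) and different-label (negative)."""
    n = num_classes * per_class
    total = comb(n, 2)
    positive = num_classes * comb(per_class, 2)
    return total, positive, total - positive

print(pair_budget(2, 3))    # (15, 6, 9): tiny pool, duplicates inevitable
print(pair_budget(30, 8))   # (28680, 840, 27840): far more unique pairs
```

With 30 classes the negative pairs vastly outnumber the positives, which is one reason a larger `num_iterations` may help cover more of them.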
|
https://github.com/huggingface/setfit/issues/115
|
open
|
[
"question"
] | 2022-10-20T06:13:41Z
| 2023-02-27T10:52:50Z
| null |
hanshupe
|
huggingface/optimum
| 424
|
Convert Seq2Seq model to ONNX while splitting encoder-decoder.
|
Hi guys, I've recently been trying to convert my trained BART model to ONNX. I've found that when using `transformers.onnx`, the result is a single `.onnx` file. However, when using `ORTModelForSequenceClassification.from_pretrained()` and then saving the result, I get three files: encoder, decoder, and decoder-with-past. I want to use the pipeline provided by Optimum for inference, but I am unable to convert my PyTorch-trained BART model directly into the three separate models.
Is there any way I could do this? Thanks.
|
https://github.com/huggingface/optimum/issues/424
|
closed
|
[
"question",
"onnxruntime"
] | 2022-10-19T09:17:50Z
| 2022-10-20T01:29:30Z
| null |
ZiyueWangUoB
|
huggingface/datasets
| 5,135
|
Update docs once dataset scripts transferred to the Hub
|
## Describe the bug
As discussed in:
- https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701
we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub):
- #4974
Concretely:
- [x] Datasets on GitHub (legacy): https://huggingface.co/docs/datasets/main/en/share#datasets-on-github-legacy
- [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md
- ...
This PR complements the work of:
- #5067
This PR is a follow-up of PRs:
- #3777
CC: @julien-c
|
https://github.com/huggingface/datasets/issues/5135
|
closed
|
[
"documentation"
] | 2022-10-19T06:58:19Z
| 2022-10-20T08:10:01Z
| 0
|
albertvillanova
|
huggingface/accelerate
| 771
|
What is the best practice to do inference in bf16 with accelerate during training?
|
### System Info
```Shell
Basically, I want to train with mixed precision and evaluate the model in bfloat16.
I found the model is stored in fp32 after calling accelerate.prepare() and I have to convert it to bf16 for faster inference. Can I avoid explicit model conversion and make the most use of Accelerate?
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
,
### Expected behavior
```Shell
Ideally, we would not need manual model conversion.
```
|
https://github.com/huggingface/accelerate/issues/771
|
closed
|
[] | 2022-10-18T13:15:39Z
| 2022-10-18T13:32:02Z
| null |
huchinlp
|
huggingface/setfit
| 110
|
more metrics addition (i.e f1score, precision ) in the trainer.evaluate()
|
I was just checking the code and saw only accuracy as a metric. Are there plans to add more metrics?
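As a stopgap while only accuracy is reported, binary precision, recall, and F1 can always be computed from prediction counts. A dependency-free sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Binary precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
print(p, r)  # 0.8 0.8, with F1 also ~0.8
```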
|
https://github.com/huggingface/setfit/issues/110
|
closed
|
[
"question"
] | 2022-10-18T11:03:18Z
| 2023-06-26T14:49:05Z
| null |
snayan06
|
huggingface/setfit
| 108
|
Are checkpoints directly available with the SetFitTrainer?
|
Hi, just looking to see if checkpoints are implemented with the SetFitTrainer. I couldn't find them, unlike the standard Hugging Face trainers, which use `output_dir` for saving checkpoints while training a model.
|
https://github.com/huggingface/setfit/issues/108
|
open
|
[
"question"
] | 2022-10-17T18:46:13Z
| 2022-12-20T09:34:41Z
| null |
ajmcgrail
|
pytorch/vision
| 6,779
|
How do you put a LibTorch (C++) torch::nn::Module on the CUDA device?
|
### ๐ Describe the bug
I get an error when I try to put a `torch::nn::Module` on the CUDA device. How do I put the model on the CUDA device?
```
#include <torch/torch.h>
using namespace torch::indexing;
torch::Device device(torch::kCUDA);
struct Critic_Net : torch::nn::Module {
torch::Tensor next_state_batch__sampled_action;
public:
Critic_Net() {
lin1 = torch::nn::Linear(427, 42);
lin2 = torch::nn::Linear(42, 286);
lin3 = torch::nn::Linear(286, 1);
}
torch::Tensor forward(torch::Tensor next_state_batch__sampled_action) {
auto h = next_state_batch__sampled_action;
h = torch::relu(lin1->forward(h));
h = torch::tanh(lin2->forward(h));
h = lin3->forward(h);
return torch::nan_to_num(h);
}
torch::nn::Linear lin1{nullptr}, lin2{nullptr}, lin3{nullptr};
};
```
I have tried putting it on the CUDA device like so:
`auto critic = Critic_Net();`
`critic->to(device);`
This causes:
```
/home/iii/tor/m_gym/multiv_normal.cpp:190:1: error: 'critic' does not name a type
  190 | critic->to(device);
      | ^~~~~~
```
I have actually tried to put `->to(device);` behind almost everything everywhere the model shows up and I get these errors.
I've also tried using `auto critic = torch::jit::load(critic, device);` after [reading this](https://discuss.pytorch.org/t/how-to-load-model-on-specific-device-in-libtorch/94416).
I get this error.
Is putting a model on CUDA possible with `torch::jit::load`? I think this "model" is the kind that is saved on a disk and not the kind that is an nn::Module.
```
error: no matching function for call to 'load(Critic_Net&, c10::Device&)'
  186 |   auto critico = torch::jit::load(critic, device);
      |                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
In file included from /home/iii/tor/m_gym/libtorch/include/torch/script.h:9,
                 from /home/iii/tor/m_gym/multiv_normal.cpp:2:
/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:66:1: note: candidate: 'torch::jit::Module torch::jit::load(std::istream&, c10::optional<c10::Device>)'
   66 | load(std::istream& in, c10::optional<c10::Device> device = c10::nullopt);
      | ^~~~
/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:66:20: note: no known conversion for argument 1 from 'Critic_Net' to 'std::istream&' {aka 'std::basic_istream<char>&'}
   66 | load(std::istream& in, c10::optional<c10::Device> device = c10::nullopt);
      |      ~~~~~~~~~~~~~~^~
/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:68:18: note: candidate: 'torch::jit::Module torch::jit::load(std::istream&, c10::optional<c10::Device>, torch::jit::ExtraFilesMap&)'
   68 | TORCH_API Module load(
      |                  ^~~~
/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:68:18: note: candidate expects 3 arguments, 2 provided
/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:78:18: note: candidate: 'torch::jit::Module torch::jit::load(const string&, c10::optional<c10::Device>)'
   78 | TORCH_API Module load(
      |                  ^~~~
/home/iii/tor/m_gym/libtorch/include/torch/csrc/jit/serialization/import.h:79:24: note: no known conversion for argument 1 from 'Critic_Net' to 'const string&' {aka 'const std::basic_string<char>&'}
   79 |   const std::string& filename,
```
### Versions
This is my LibTorch version, 1.12.1+cu116
I don't have any problems putting a tensor on the CUDA device, and I assume I would not have a problem putting a simpler model on it. The question is where to call `.to(device)` to move this large nn::Module struct to the CUDA device.
|
https://github.com/pytorch/vision/issues/6779
|
closed
|
[
"question"
] | 2022-10-16T23:36:15Z
| 2022-10-17T14:52:20Z
| null |
MotorCityCobra
|
huggingface/datasets
| 5,118
|
Installing `datasets` on M1 computers
|
## Describe the bug
I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1?
## Steps to reproduce the bug
Fresh clone this project (on m1), create a virtualenv and run this:
```python
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
|
https://github.com/huggingface/datasets/issues/5118
|
closed
|
[
"bug"
] | 2022-10-16T16:50:08Z
| 2022-10-19T09:10:08Z
| 1
|
david1542
|
pytorch/pytorch
| 87,029
|
how to add adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'
|
### ๐ Describe the bug
I have added an attention mechanism to the YOLOv5 repo, and while training I'm getting this issue. How can I solve this error?
I tried to train the model using the YOLOv5 command below, with the C3CBAM attention mechanism, and I'm getting this error:
```
!python train.py --img 640 --batch 16 --cfg /content/yolov5/models/yolov5s.yaml --epochs 250 --data coco128.yaml --weights yolov5s.pt --cache
```
<img width="359" alt="image" src="https://user-images.githubusercontent.com/62583018/196018476-062b6719-2804-4e86-a8c9-c12550a244a5.png">
```
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
0% 0/8 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 637, in <module>
main(opt)
File "train.py", line 531, in main
train(opt.hyp, opt, device, callbacks)
File "train.py", line 320, in train
scaler.scale(loss).backward()
File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 175, in backward
allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
```
|
https://github.com/pytorch/pytorch/issues/87029
|
closed
|
[] | 2022-10-16T04:37:56Z
| 2022-10-16T05:17:15Z
| null |
akashAD98
|
pytorch/pytorch
| 87,027
|
how to add adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'.
|
I'm adding an attention mechanism to YOLOv5 to train my model. I added C3CBAM and I'm getting this issue; what do I need to do to solve it?
|
https://github.com/pytorch/pytorch/issues/87027
|
closed
|
[] | 2022-10-16T04:01:06Z
| 2022-10-16T04:33:34Z
| null |
akashAD98
|
huggingface/setfit
| 106
|
Function to get probability values of predicted output (like sklearn's predict_proba)?
|
Hi! I wanted to ask if there is a built-in function to get the probability of the predicted output for a classification task, something like `predict_proba()` from sklearn.
From what I understand, currently the only way to get output is to run `SetFitModel([text])`, which works like sklearn's `predict()`.
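If the model's head is a scikit-learn classifier (my understanding of SetFit's default, worth verifying for the version in use), probabilities can presumably be obtained from the head's own `predict_proba` on the encoded texts. What `predict_proba` computes for a binary logistic head is just a sigmoid over the linear score; a dependency-free sketch:

```python
import math

def predict_proba_binary(weights, bias, embedding):
    """Class probabilities for a binary logistic-regression head:
    sigmoid of the linear decision score, returned as [P(0), P(1)]."""
    score = sum(w * x for w, x in zip(weights, embedding)) + bias
    p = 1.0 / (1.0 + math.exp(-score))
    return [1.0 - p, p]

probs = predict_proba_binary([0.5, -0.25], 0.1, [1.2, 0.4])
print(probs)  # the two entries always sum to 1
```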
|
https://github.com/huggingface/setfit/issues/106
|
closed
|
[
"question"
] | 2022-10-14T13:45:26Z
| 2022-12-20T09:34:57Z
| null |
a-sharma123
|
pytorch/data
| 831
|
document of parameter buffer_size in MaxTokenBucketizer is wrong
|
According to the document [MaxTokenBucketizer](https://pytorch.org/data/main/generated/torchdata.datapipes.iter.MaxTokenBucketizer.html#torchdata.datapipes.iter.MaxTokenBucketizer)
buffer_size: This restricts how many **tokens** are taken from prior DataPipe to bucketize
However, in the code, [bucketbatcher.py#L277](https://github.com/pytorch/data/blob/84587ff57575fd47fcae61635a3f4ffc1e639941/torchdata/datapipes/iter/transform/bucketbatcher.py#L277)
The unit of buffer_size is **sample** not **token**
|
https://github.com/meta-pytorch/data/issues/831
|
closed
|
[
"documentation"
] | 2022-10-14T08:41:20Z
| 2022-10-17T17:36:48Z
| 1
|
ling0322
|
pytorch/tensorpipe
| 457
|
Question: how to disable IB at runtime?
|
I wonder if there is an environment variable like `NCCL_IB_DISABLE` in NCCL so that I can disable IB at runtime.
Thanks!
|
https://github.com/pytorch/tensorpipe/issues/457
|
open
|
[] | 2022-10-14T02:22:23Z
| 2022-10-14T09:49:57Z
| null |
jasperzhong
|
pytorch/examples
| 1,082
|
Query on loss calculation in word language model
|
In the main.py of word language model, I find that in the evaluate function the total_loss is getting multiplied by length of data
https://github.com/pytorch/examples/blob/ca1bd9167f7216e087532160fc5b98643d53f87e/word_language_model/main.py#L163
However in the train function, total_loss is not getting multiplied by length of data https://github.com/pytorch/examples/blob/ca1bd9167f7216e087532160fc5b98643d53f87e/word_language_model/main.py#L194
Is this proper?
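A hedged reading of the two loops: evaluate() multiplies by the batch length to build a length-weighted sum that is later divided by the total number of tokens, while train() reports a plain running average per logging interval. The helper below (hypothetical names) sketches the length-weighted mean the evaluate loop computes:

```python
def weighted_average_loss(batch_losses, batch_sizes):
    """Length-weighted mean: accumulate loss * size, divide by total size,
    mirroring `total_loss += len(data) * loss` followed by `/ len(dataset)`."""
    total = sum(loss * n for loss, n in zip(batch_losses, batch_sizes))
    return total / sum(batch_sizes)

# Two batches of unequal size: the larger batch dominates the mean.
print(weighted_average_loss([1.0, 3.0], [1, 3]))  # 2.5
```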
|
https://github.com/pytorch/examples/issues/1082
|
open
|
[
"help wanted"
] | 2022-10-14T01:23:15Z
| 2022-10-17T21:31:55Z
| 0
|
AvisP
|
huggingface/transformers
| 19,592
|
Sagemaker Estimator for fine tuning where all the transform code is in the train.py
|
### Feature request
I work for a company that is a heavy user of AWS sagemaker. I am on a professional services team where I build a lot of examples for our data scientists to follow. I recently wanted to use the Sagemaker Huggingface estimator to fine tune a transformer and create a model for our custom NLP task.
I had csv data in S3. I found several examples of fine tuning that involved pulling nicely curated datasets from HF hub down to the SM notebook and then transforming it into arrow with `save_to_disk` and pushing it to S3 as a dataset that could be read in the train.py file.
I struggled mightily and never found a good example of how to start with just CSV files, use the existing HF tools to load the data, and then pass it to the estimator. Furthermore, the examples I found have the user pulling the data over to the notebook and doing the conversion to arrow there. That seems inefficient when the point of an estimator is to utilize a small instance to host your notebook and a large instance to do the work. If I had a large amount of data to convert to arrow and I followed the given examples, I would need a large notebook instance and a large estimator instance.
I wrote an example that puts all the transform code in the train.py and only invokes it from the notebook. In my train.py, I use load_dataset with the csv script to transform the data to arrow and do the save and load there. I wanted to use the arrow format for efficiency.
I propose that I update your documentation with this unique example.
### Motivation
I feel that the proposed documentation is unifies several previously documented concepts into a single, useful example.
### Your contribution
I would be happy to build the example and have you guys approve it. I have never contributed to HF before, so I would need a bit of guidance to get started.
|
https://github.com/huggingface/transformers/issues/19592
|
closed
|
[] | 2022-10-13T19:24:14Z
| 2022-11-21T15:02:11Z
| null |
j2cunningham
|
pytorch/functorch
| 1,043
|
Is there a way to parallelize or accelerate a loop of column-by-column jvp?
|
Hi, experts.
I am currently calculating a Jacobian column-by-column and calculating the squared sum of each column to calculate the Trace of the Jacobian.
The code looks something like this:
```
def jvp_func(x, tgt):
return jvp(net, (x,), (tgt,))
tr = 0
for j in range(x[0].shape[0]):
tgt = torch.zeros_like(x)
tgt[:, j] = 1.
_, grad = vmap(jvp_func)(x, tgt)
tr += torch.sum(grad * grad, dim=1)
```
As you can see, my code calculates a batched Jacobian column by column (inside each j loop) and calculates the Trace.
(motivated by this code: https://github.com/facebookresearch/jacobian_regularizer/blob/main/jacobian/jacobian.py)
I am mainly doing this instead of calculating the entire Jacobian at once because the entire Jacobian is huge and it blows up the memory.
However, this code is quite slow. I am not sure if this code is doing a lot of redundant computation, e.g., I wonder if net(x) is being calculated repetitively on each loop of j.
Is there a way to parallelize the j loop, or at least remove any repetitive computation for each j loop to speed up the current code?
I briefly looked at functorch.compile.ts_compile but was not able to make it work, and am not sure if that is something that can be helpful.
Any suggestions will be highly appreciated!
Thank you,
Best regards,
Kiwan
|
https://github.com/pytorch/functorch/issues/1043
|
open
|
[] | 2022-10-11T00:43:43Z
| 2022-10-11T21:34:44Z
| 3
|
kwmaeng91
|
pytorch/torchx
| 611
|
Kubernetes: Support mounting secrets as a volume
|
## Description
<!-- concise description of the feature/enhancement -->
## Motivation/Background
<!-- why is this feature/enhancement important? provide background context -->
Kubernetes has a concept of a secret that can be mounted as a volume to a pod.
https://kubernetes.io/docs/concepts/configuration/secret/
```
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
volumes:
- name: foo
secret:
secretName: mysecret
defaultMode: 0400
```
## Detailed Proposal
<!-- provide a detailed proposal -->
We can add a new bind mount type for secrets so a user can add the secret mount as normal.
```
torchx run utils.sh --mounts type=secret,name=foo,dst=/etc/foo ...
```
Specs https://github.com/pytorch/torchx/blob/main/torchx/specs/api.py#L218-L269 and add new SecretMount
Integrate it into kubernetes_scheduler.py at https://github.com/pytorch/torchx/blob/main/torchx/schedulers/kubernetes_scheduler.py#L267
## Alternatives
<!-- discuss the alternatives considered and their pros/cons -->
## Additional context/links
<!-- link to code, documentation, etc. -->
* utils.sh mount argument https://github.com/pytorch/torchx/blob/main/torchx/components/utils.py#L83
* Docker also has a slightly different concept of secrets https://docs.docker.com/engine/swarm/secrets/
* AWS Batch has environment variable secrets https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data-secrets.html
|
https://github.com/meta-pytorch/torchx/issues/611
|
open
|
[
"enhancement",
"module: specs",
"kubernetes"
] | 2022-10-10T18:53:20Z
| 2022-10-10T18:56:25Z
| 0
|
d4l3k
|
pytorch/TensorRT
| 1,396
|
Question about triton example in tutorial
|
Why does the tutorial use platform: "pytorch_libtorch" for the Torch-TensorRT model.pt, i.e.
> model.pt as platform: "pytorch_libtorch"
instead of
> model.pt as platform: "tensorrt_plan" in [serving_torch_tensorrt_with_triton](https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html)?
|
https://github.com/pytorch/TensorRT/issues/1396
|
closed
|
[
"question",
"examples"
] | 2022-10-10T11:57:33Z
| 2022-12-15T17:55:53Z
| null |
allen-ash
|
huggingface/setfit
| 91
|
Using Setfit for similarity classification
|
Hello,
I would like to test this promising framework on a similarity classification task. So basically, I have got a dataset with 3 columns: (sentence1,sentence2,label). From what I understand, currently it is only possible to train on a single sentence classification problem.
Is there a workaround to use SetFit for a pair-sentence classification problem? If not, would it be possible to add this feature in a future release?
Thank you in advance
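As a possible workaround (not a built-in SetFit feature), each pair could be collapsed into a single text before training; the separator string here is an arbitrary choice, not something SetFit defines:

```python
def pair_to_single(example, sep=" [SEP] "):
    """Collapse a (sentence1, sentence2, label) row into SetFit's
    single-sentence format: one `text` field plus the label."""
    return {
        "text": example["sentence1"] + sep + example["sentence2"],
        "label": example["label"],
    }

row = {"sentence1": "A cat sits.", "sentence2": "A feline rests.", "label": 1}
print(pair_to_single(row)["text"])  # 'A cat sits. [SEP] A feline rests.'
```

With a `datasets.Dataset`, this could be applied via `.map(pair_to_single)` before handing the data to `SetFitTrainer`.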
|
https://github.com/huggingface/setfit/issues/91
|
open
|
[
"question"
] | 2022-10-07T09:58:09Z
| 2025-01-21T10:05:54Z
| null |
castafra
|
pytorch/examples
| 1,077
|
Running on Windows
|
## 📚 Documentation
I'm trying to get DCGAN running on my Windows machine. It appears that the code may not support windows, but this is not mentioned in the readme. Is there a procedure to get it running on Windows?
|
https://github.com/pytorch/examples/issues/1077
|
open
|
[] | 2022-10-07T03:11:19Z
| 2023-03-21T23:00:09Z
| 6
|
maxbonzulak
|
huggingface/setfit
| 86
|
num_epochs range
|
Hi there!
I was wondering whether you can provide a range for typically "good" values to use/test for the argument num_epochs both in the single label classification case and the multi label classification case. Of course, the best performing number depends on the classes to be predicted and the dataset, but in non-FSL settings, typically one uses a range between 2-5 (whereas many researchers may also stick to common defaults such as 3). I'm asking because I noticed that you use rather `num_epochs = 20` in your example scripts, so perhaps in general in setfit num_epochs should be higher than in non-FSL settings?
|
https://github.com/huggingface/setfit/issues/86
|
open
|
[
"question"
] | 2022-10-06T15:35:48Z
| 2022-12-20T09:36:09Z
| null |
fhamborg
|
huggingface/setfit
| 83
|
Running Evaluation
|
Hi,
Thanks for sharing this work.
I am wondering if it is possible to run evaluation dataset to tune hyperparameters.
The SetFitTrainer doesn't seem to accept arguments like 'evaluation_strategy', 'save_strategy', 'compute_metrics', etc.
Or perhaps I'm doing something wrong?
Thanks.
|
https://github.com/huggingface/setfit/issues/83
|
open
|
[
"question"
] | 2022-10-06T05:58:19Z
| 2022-12-20T09:36:43Z
| null |
dhkhey
|
huggingface/setfit
| 81
|
Fine-tuning for Question-Answering
|
Hello,
Can this library be used for fine-tuning a question-answering model with a small amount of data as well?
I have data in the same format as the SQuAD data: a small amount of context, question, and answer data.
Is it possible to use this library to fine-tune a question-answering model from huggingface (e.g. deepset/roberta-base-squad2) on my small dataset? If so, how should I set the **column_mapping** argument of the **SetFitTrainer()** function?
|
https://github.com/huggingface/setfit/issues/81
|
open
|
[
"question"
] | 2022-10-04T17:47:10Z
| 2022-12-20T09:36:55Z
| null |
ozyurtf
|
pytorch/pytorch
| 86,205
|
How to save only parts of the state_dict()
|
### 🐛 Describe the bug
Hi, I want to save only a small part of the model.
e.g. layer A requires grad but layer B does not, so I only want to save layer A rather than both A and B. Many thanks!
```
import torch
import torch.nn as nn

class Model(nn.Module):
    ...

model = Model()
torch.save(model.state_dict(), "model.pt")
```
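One possible approach (a sketch, not a built-in API) is to filter the state dict down to the parameter names you care about before saving. In a real model you would build `keep_keys` from `{n for n, p in model.named_parameters() if p.requires_grad}` and pass the filtered dict to `torch.save`; plain dicts stand in for tensors here:

```python
def filter_state_dict(state_dict, keep_keys):
    """Keep only the entries whose names are in keep_keys."""
    return {name: value for name, value in state_dict.items() if name in keep_keys}

full = {"A.weight": [1.0], "A.bias": [0.1], "B.weight": [2.0]}
partial = filter_state_dict(full, keep_keys={"A.weight", "A.bias"})
print(sorted(partial))  # ['A.bias', 'A.weight']
```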
### Versions
```
PyTorch version: 1.13.0.dev20220709
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:25:34) [Clang 11.1.0 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.2
[pip3] pytorch-ignite==0.4.9
[pip3] pytorch-lightning==1.6.5
[pip3] torch==1.13.0.dev20220709
[pip3] torchaudio==0.14.0.dev20220603
[pip3] torchmetrics==0.9.2
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.14.0.dev20220708
[conda] numpy 1.23.2 pypi_0 pypi
[conda] pytorch-ignite 0.4.9 pypi_0 pypi
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] torch 1.13.0.dev20220709 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20220603 pypi_0 pypi
[conda] torchmetrics 0.9.2 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220708 pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
|
https://github.com/pytorch/pytorch/issues/86205
|
closed
|
[
"module: nn",
"triaged"
] | 2022-10-04T13:47:47Z
| 2022-10-13T13:46:21Z
| null |
CaffreyR
|
pytorch/pytorch
| 86,204
|
How to perform unstructured interpolation
|
### 🚀 The feature, motivation and pitch
My feature request is simple; I'm not sure whether an approach or implementation for this already exists.
In scipy there are the scipy.interpolate.NearestNDInterpolator and scipy.interpolate.LinearNDInterpolator classes for unstructured interpolation, i.e. given a set of sparse points distributed non-uniformly over the spatial domain, interpolate values at arbitrary query points. Currently torch seems to support only grid-structured interpolation like grid_sample.
### Alternatives
I have found an implementation of this in 1D, but I am not sure whether it is efficient and supports the GPU, or how to extend it to 2D.
https://github.com/aliutkus/torchinterp1d
### Additional context
_No response_
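As a stopgap, nearest-neighbour interpolation over scattered points is simple enough to sketch directly (pure Python here for clarity; a vectorized torch version would presumably use `torch.cdist` plus `argmin`, which is an assumption about the approach, not an existing torch API for this task):

```python
def nearest_nd(points, values, query):
    """Nearest-neighbour interpolation over scattered N-D points."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(points)), key=lambda i: dist2(points[i], query))
    return values[best]

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]
print(nearest_nd(pts, vals, (0.9, 0.1)))  # 20.0, since (1.0, 0.0) is closest
```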
|
https://github.com/pytorch/pytorch/issues/86204
|
open
|
[
"triaged",
"module: interpolation"
] | 2022-10-04T13:03:50Z
| 2022-10-10T11:59:06Z
| null |
twangnh
|
pytorch/TensorRT
| 1,388
|
❓ [Question] How can we use torch_executed_modules?
|
## ❓ Question
Could you give us an example of how to use `torch_executed_modules` in `torch_tensorrt.ts.compile`?
## What you have already tried
I tried many things. I would appreciate a little sample of how to use it.
Thanks
|
https://github.com/pytorch/TensorRT/issues/1388
|
closed
|
[
"question",
"examples"
] | 2022-10-03T21:55:23Z
| 2022-10-04T16:57:03Z
| null |
mjack3
|
pytorch/functorch
| 1,037
|
Get intermediate derivatives with nested jacobian and has_aux
|
Is it possible to get intermediate results with nested jacobian?
Say `functorch.jacfwd` is nested twice with `has_aux=True`; how can I get the 1st derivative in this case?
```python
import torch
import functorch
def foo(x):
y = torch.cos(x)
return y, y
def nest(fun, num):
bar = fun
for _ in range(num):
bar = functorch.jacfwd(bar, has_aux=True)
return bar
x = torch.tensor(0.0)
print(nest(foo, 1)(x))
# 1st derivative and value
# (tensor(-0.), tensor(1.000000000000e+00))
print(nest(foo, 2)(x))
# 2nd derivative and value, no 1st derivative
# (tensor(-1.000000000000e+00), tensor(1.000000000000e+00))
```
|
https://github.com/pytorch/functorch/issues/1037
|
closed
|
[] | 2022-10-03T08:29:27Z
| 2022-10-03T16:54:06Z
| 2
|
i-a-morozov
|
pytorch/vision
| 6,676
|
torchvision.transforms.Normalize has large absolute difference
|
### 🐛 Describe the bug
By definition, `torchvision.transforms.Normalize` should produce the same results if the input, std, and mean are all divided or multiplied by the same number. When I test the API with the following input, I get an absolute difference of up to 24978131.5 and a relative difference of up to 3.5765e-08 for the float64 data type. Is this kind of difference expected? What threshold does PyTorch use in tests to distinguish normal behavior from buggy behavior?
Reproduce code:
```
import torch
import torchvision
input = torch.tensor([ 0.0000, 41.3108, 0.0000], dtype=torch.float64).view(3, 1, 1)
std = torch.tensor([61860.0, 3586.0, 60300.0])
mean = torch.tensor([4287419396147613455, -7376768754095287866, -6969696485275369284])
r1 = torchvision.transforms.Normalize(mean, std)(input)
input = input / 255.0
std = std / 255.0
mean = mean / 255.0
r2 = torchvision.transforms.Normalize(mean, std)(input)
print(r2 - r1)
print((r2-r1)/r1)
```
Output:
```
tensor([[[ 588325.7422]],
[[24978131.5000]],
[[ 4133818.0781]]], dtype=torch.float64)
tensor([[[-8.4885e-09]],
[[ 1.2142e-08]],
[[ 3.5765e-08]]], dtype=torch.float64)
```
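One plausible explanation (an assumption worth verifying against the Normalize internals): the `mean` values here are on the order of 4e18, and the division by 255.0 promotes them to float32, the default float dtype, whose relative rounding error is at most 2^-24 (about 6e-8). The observed relative differences of roughly 1e-8 to 3.6e-8 sit inside that budget. The snippet below rounds a double through float32 to show the scale of the effect:

```python
import struct

def float32_round(x):
    """Round a Python float (a double) to the nearest float32 via packing."""
    return struct.unpack("f", struct.pack("f", x))[0]

mean = 4287419396147613455.0
rel_err = abs(float32_round(mean) - mean) / mean
print(rel_err < 2 ** -24)  # True: within float32's rounding budget
```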
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220919+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-176-generic-x86_64-with-debian-buster-sid
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.13.0.dev20220919+cpu
[pip3] torchaudio==0.13.0.dev20220919+cpu
[pip3] torchvision==0.14.0.dev20220919+cpu
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.13.0.dev20220919+cpu pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220919+cpu pypi_0 pypi
[conda] torchvision 0.14.0.dev20220919+cpu pypi_0 pypi
```
cc @vfdev-5 @datumbox
|
https://github.com/pytorch/vision/issues/6676
|
closed
|
[
"question",
"module: transforms"
] | 2022-10-02T20:56:19Z
| 2022-10-03T16:42:13Z
| null |
jiannanWang
|
huggingface/datasets
| 5,053
|
Intermittent JSON parse error when streaming the Pile
|
## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tokens and 4 days into a training run, and now just happened 2 minutes into one, but I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(
cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
continue
```
`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text, separated by endoftext tokens, and reshape to have length batch_size, I don't think this is related to tokenization:
```
import numpy as np
import einops
import torch
def tokenize_and_concatenate(examples):
texts = examples["text"]
full_text = tokenizer.eos_token.join(texts)
div = 20
length = len(full_text) // div
text_list = [full_text[i * length: (i + 1) * length]
for i in range(div)]
tokens = tokenizer(text_list, return_tensors="np", padding=True)[
"input_ids"
].flatten()
tokens = tokens[tokens != tokenizer.pad_token_id]
n = len(tokens)
curr_batch_size = n // (seq_len - 1)
tokens = tokens[: (seq_len - 1) * curr_batch_size]
tokens = einops.rearrange(
tokens,
"(batch_size seq) -> batch_size seq"
|
https://github.com/huggingface/datasets/issues/5053
|
open
|
[
"bug"
] | 2022-10-02T11:56:46Z
| 2022-10-04T17:59:03Z
| 3
|
neelnanda-io
|
pytorch/examples
| 1,075
|
Need C++ L2 regularization example
|
How do I add L2 regularization to a layer with the C++ API, like Keras's
model.add(Dense(kernel_regularizer=regularizers.l2(0.01), activation='elu'))?
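In libtorch the closest built-in is the optimizer's weight decay (e.g. `torch::optim::SGDOptions(lr).weight_decay(0.01)`; worth double-checking against your libtorch version), which applies an L2 term to every parameter rather than per layer. For a per-layer penalty you add `lambda * sum(w^2)` to the loss yourself; the arithmetic, in a plain sketch:

```python
def l2_penalty(weights, lam=0.01):
    """Per-layer L2 term: lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

base_loss = 1.0
total = base_loss + l2_penalty([1.0, -2.0, 3.0])  # 1.0 + 0.01 * 14 = 1.14
print(round(total, 4))  # 1.14
```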
|
https://github.com/pytorch/examples/issues/1075
|
closed
|
[
"help wanted"
] | 2022-10-02T08:35:30Z
| 2022-10-04T01:47:36Z
| 1
|
bitnick10
|
pytorch/functorch
| 1,036
|
support scan
|
It would be really nice to be able to, e.g., take models implemented in JAX with `jax.lax.scan` and port them over to torch without having to unroll scans over modules.
|
https://github.com/pytorch/functorch/issues/1036
|
open
|
[] | 2022-09-30T18:02:23Z
| 2023-02-13T08:41:10Z
| 3
|
GallagherCommaJack
|
pytorch/TensorRT
| 1,386
|
❓ [Question] Why is my model not accelerated by using fp16?
|
## ❓ Question
I tried the exact same example in the [notebook](https://github.com/pytorch/TensorRT/blob/master/notebooks/Hugging-Face-BERT.ipynb).
## What you have already tried
See the code below (I just copied the code from the [.ipynb](https://github.com/pytorch/TensorRT/blob/master/notebooks/Hugging-Face-BERT.ipynb)), but the result is very different even though I use the same A100 GPU!
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch
import timeit
import numpy as np
import torch_tensorrt
import torch.backends.cudnn as cudnn
enc = BertTokenizer.from_pretrained('bert-base-uncased')
batch_size = 4
batched_indexed_tokens = [[101, 64]*64]*batch_size
batched_segment_ids = [[0, 1]*64]*batch_size
batched_attention_masks = [[1, 1]*64]*batch_size
tokens_tensor = torch.tensor(batched_indexed_tokens)
segments_tensor = torch.tensor(batched_segment_ids)
attention_masks_tensor = torch.tensor(batched_attention_masks)
mlm_model_ts = BertForMaskedLM.from_pretrained('bert-base-uncased', torchscript=True)
traced_mlm_model = torch.jit.trace(mlm_model_ts, [tokens_tensor, segments_tensor, attention_masks_tensor])
masked_sentences = ['Paris is the [MASK] of France.',
'The primary [MASK] of the United States is English.',
'A baseball game consists of at least nine [MASK].',
'Topology is a branch of [MASK] concerned with the properties of geometric objects that remain unchanged under continuous transformations.']
pos_masks = [4, 3, 9, 6]
encoded_inputs = enc(masked_sentences, return_tensors='pt', padding='max_length', max_length=128)
outputs = mlm_model_ts(**encoded_inputs)
most_likely_token_ids = [torch.argmax(outputs[0][i, pos, :]) for i, pos in enumerate(pos_masks)]
unmasked_tokens = enc.decode(most_likely_token_ids).split(' ')
unmasked_sentences = [masked_sentences[i].replace('[MASK]', token) for i, token in enumerate(unmasked_tokens)]
for sentence in unmasked_sentences:
print(sentence)
encoded_inputs = enc(masked_sentences, return_tensors='pt', padding='max_length', max_length=128)
outputs = traced_mlm_model(encoded_inputs['input_ids'], encoded_inputs['token_type_ids'], encoded_inputs['attention_mask'])
most_likely_token_ids = [torch.argmax(outputs[0][i, pos, :]) for i, pos in enumerate(pos_masks)]
unmasked_tokens = enc.decode(most_likely_token_ids).split(' ')
unmasked_sentences = [masked_sentences[i].replace('[MASK]', token) for i, token in enumerate(unmasked_tokens)]
for sentence in unmasked_sentences:
print(sentence)
trt_model = torch_tensorrt.compile(traced_mlm_model,
inputs= [torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # input_ids
torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # token_type_ids
torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32)], # attention_mask
enabled_precisions= {torch.float32}, # Run with 32-bit precision
workspace_size=2000000000,
truncate_long_and_double=True
)
enc_inputs = enc(masked_sentences, return_tensors='pt', padding='max_length', max_length=128)
enc_inputs = {k: v.type(torch.int32).cuda() for k, v in enc_inputs.items()}
output_trt = trt_model(enc_inputs['input_ids'], enc_inputs['token_type_ids'], enc_inputs['attention_mask'])
most_likely_token_ids_trt = [torch.argmax(output_trt[i, pos, :]) for i, pos in enumerate(pos_masks)]
unmasked_tokens_trt = enc.decode(most_likely_token_ids_trt).split(' ')
unmasked_sentences_trt = [masked_sentences[i].replace('[MASK]', token) for i, token in enumerate(unmasked_tokens_trt)]
for sentence in unmasked_sentences_trt:
print(sentence)
trt_model_fp16 = torch_tensorrt.compile(traced_mlm_model,
inputs= [torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # input_ids
torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32), # token_type_ids
torch_tensorrt.Input(shape=[batch_size, 128], dtype=torch.int32)], # attention_mask
enabled_precisions= {torch.half}, # Run with 16-bit precision
workspace_size=2000000000,
truncate_long_and_double=True
)
def timeGraph(model, input_tensor1, input_tensor2, input_tensor3, num_loops=50):
print("Warm up ...")
with torch.no_grad():
for _ in range(20):
features = model(input_tensor1, input_tensor2, input_tensor3)
torch.cuda.synchronize()
print("Start timing ...")
timings = []
with torch.no_grad():
for i in range(num_loops):
start_time = timeit.default_timer()
features = model(input_tensor1, input_tensor2, input_tensor3)
torch.cuda.synchronize()
end_time = timeit.default_timer()
timings.append(end_time - start_time)
# print("Iteration {}: {:.6f} s".format(i, end_time - start_time))
return timings
def printStats(graphN
|
https://github.com/pytorch/TensorRT/issues/1386
|
closed
|
[
"question",
"No Activity",
"performance"
] | 2022-09-30T16:04:34Z
| 2023-01-13T00:02:26Z
| null |
jcyk
|
huggingface/datasets
| 5,044
|
integrate `load_from_disk` into `load_dataset`
|
**Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal, similar to `from_pretrained` in `transformers`, so that it can handle the Hub as well as local datasets of all supported types?
Currently one has to choose a different loader depending on how the dataset has been created.
e.g. this won't work:
```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
```
both times the dataset is being loaded from disk. Why does it fail the second time?
Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?
e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset that tells `load_dataset` to internally call `load_from_disk`, such as creating a `load_me_with_load_from_disk.txt` marker file ;) `load_dataset` would support that feature for datasets saved by new `datasets` versions; old ones would still need `load_from_disk` explicitly. Unless no flag is needed and one can immediately tell by inspecting the saved dataset that it came from `save_to_disk`, and thus use `load_from_disk` internally.
The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other which works but it's not smooth.
Thank you!
|
https://github.com/huggingface/datasets/issues/5044
|
open
|
[
"enhancement"
] | 2022-09-29T17:37:12Z
| 2025-06-28T09:00:44Z
| 15
|
stas00
|
huggingface/setfit
| 72
|
Few-Shot Named Entity Recognition work
|
Hi, I really like your work. Have you considered using this framework for few-shot named entity recognition, or do you have example code for it? Looking forward to progress on few-shot named entity recognition!
|
https://github.com/huggingface/setfit/issues/72
|
open
|
[
"question"
] | 2022-09-29T09:32:11Z
| 2022-12-20T09:37:02Z
| null |
zhanghaok
|
pytorch/vision
| 6,664
|
Add a function to remove degenerate boxes
|
### 🚀 The feature
This function would filter out boxes where x2 <= x1 or y2 <= y1
### Motivation, pitch
Degenerate boxes are filtered in at least two places in the current torchvision code:
* https://github.com/pytorch/vision/blob/f725901dde5bc996fe3d4e163f4d4e7d53720146/torchvision/prototype/transforms/_augment.py
* https://github.com/pytorch/vision/blob/96dbada4d588cabbd24ab1eee57cd261c9b93d20/references/detection/transforms.py
This could be refactored: a function ```remove_degenerate_boxes``` could be added in ```ops.boxes``` and exposed publicly.
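A pure-Python sketch of the proposed helper (the torchvision version would presumably operate on an `[N, 4]` tensor with boolean masking, e.g. `boxes[(boxes[:, 2] > boxes[:, 0]) & (boxes[:, 3] > boxes[:, 1])]`; that is an illustration, not existing API):

```python
def remove_degenerate_boxes(boxes):
    """Keep only boxes in (x1, y1, x2, y2) form with positive width and height."""
    return [b for b in boxes if b[0] < b[2] and b[1] < b[3]]

boxes = [(0, 0, 10, 10), (5, 5, 5, 9), (3, 8, 7, 2)]
print(remove_degenerate_boxes(boxes))  # [(0, 0, 10, 10)]
```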
### Alternatives
_No response_
### Additional context
If relevant I would be happy to work on it !
cc @vfdev-5 @datumbox
|
https://github.com/pytorch/vision/issues/6664
|
closed
|
[
"question",
"module: transforms"
] | 2022-09-28T19:23:36Z
| 2022-09-28T21:54:52Z
| null |
Quintulius
|
pytorch/examples
| 1,071
|
resnet training on imagenet is failing
|
## Environment
pyTorch - upstream code base > 1.12
UB 20.04
GPU - 4
## Steps to Reproduce
`python imagenet/main.py -a resnet50 --dist-url tcp://127.0.0.1:8080 --dist-backend nccl --multiprocessing-distributed --world-size 1 --rank 0 <imagenet data dir> --epochs 3 --batch-size 256 -j64`
## Failure signature
731:731 [2] NCCL INFO comm 0x7f0078000ef0 rank 2 nranks 4 cudaDev 2 busId 88000 - Abort COMPLETE
730:730 [1] NCCL INFO comm 0x7f1750000ef0 rank 1 nranks 4 cudaDev 1 busId 3d000 - Abort COMPLETE
732:732 [3] NCCL INFO comm 0x7fe214000ef0 rank 3 nranks 4 cudaDev 3 busId b1000 - Abort COMPLETE
729:729 [0] NCCL INFO comm 0x7f2edc000ef0 rank 0 nranks 4 cudaDev 0 busId 1a000 - Abort COMPLETE
Traceback (most recent call last):
File "main.py", line 516, in <module>
main()
File "main.py", line 117, in main
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
while not context.join():
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/var/lib/jenkins/examples/imagenet/main.py", line 278, in main_worker
train(train_loader, model, criterion, optimizer, epoch, args)
File "/var/lib/jenkins/examples/imagenet/main.py", line 331, in train
loss = criterion(output, target)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 1166, in forward
label_smoothing=self.label_smoothing)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2970, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu! (when checking argument for argument target in method wrapper_nll_loss_forward)
## Possible Regression
Git reset to commit 5a06e9cac1728c860b53ebfc6792e0a0e21a5678
is working fine.
https://github.com/pytorch/examples/commit/5a06e9cac1728c860b53ebfc6792e0a0e21a5678
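For what it's worth, the `cuda:1` vs `cpu` mismatch in the traceback suggests the regression stopped moving `target` to the GPU before `criterion(output, target)`. A hedged sketch of the usual guard (not necessarily the exact fix the repo landed):

```python
import torch

def move_batch_to_device(images, target, device, non_blocking=True):
    # Both the model input and the loss target must live on the same
    # device before criterion(output, target) is called.
    images = images.to(device, non_blocking=non_blocking)
    target = target.to(device, non_blocking=non_blocking)
    return images, target
```

In `main.py`'s train loop this would run right before the forward pass.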
|
https://github.com/pytorch/examples/issues/1071
|
closed
|
[
"bug",
"help wanted"
] | 2022-09-28T06:16:22Z
| 2022-09-30T19:42:39Z
| 1
|
pruthvistony
|
pytorch/data
| 794
|
Does torchdata already work with GCP and Azure blob storage
|
### 🚀 The feature
We already have an S3 integration and it seems like the S3 API already works with both
* Azure: https://devblogs.microsoft.com/cse/2016/05/22/access-azure-blob-storage-from-your-apps-using-s3-api/
* GCP: https://vamsiramakrishnan.medium.com/a-study-on-using-google-cloud-storage-with-the-s3-compatibility-api-324d31b8dfeb
### Motivation, pitch
So ideally we can already support Azure, GCP without doing much
### Alternatives
Build a new integration for each of Azure and GCP using their native APIs
h/t: @chauhang for the idea
|
https://github.com/meta-pytorch/data/issues/794
|
closed
|
[] | 2022-09-27T21:37:43Z
| 2022-10-20T17:52:35Z
| 7
|
msaroufim
|
pytorch/pytorch
| 85,695
|
How to load checkpoint from .pt file
|
### 🐛 Describe the bug
I finetuned T5-large by pytorch lightning and saved a ckpt file.
```
ckpt = torch.load(<ckpt_path>)
print(ckpt.keys())
dict_keys(['epoch', 'global_step', 'pytorch-lightning_version', 'state_dict', 'loops', 'callbacks', 'optimizer_states', 'lr_schedulers', 'hparams_name', 'hyper_parameters'])
```
It does have state_dict which means I can use it as my inference task.
I have tried the following two snippets.
1.
This does not run correctly:
```
ckpt = torch.load(<ckpt_path>)
model = AutoModelForSeq2SeqLM.from_pretrained('t5-large')
model.load_state_dict(ckpt)
print(model.lm_head.weight)
```
2. This does not seem to load the weights correctly:
```
ckpt = torch.load(<ckpt_path>)
model_config = AutoConfig.from_pretrained('t5-large')
model_2 = AutoModelForSeq2SeqLM.from_pretrained(None,config = model_config,state_dict = ckpt)
print(model_2.lm_head.weight)
```
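Neither snippet works as-is because a Lightning checkpoint nests the weights under `state_dict` and prefixes every key with the attribute name of the wrapped module (assumed to be `model.` below; check what your `LightningModule` actually calls it). A sketch of the usual unwrapping:

```python
def extract_model_state_dict(ckpt: dict, prefix: str = "model.") -> dict:
    """Pull the inner model weights out of a pytorch-lightning checkpoint.

    `prefix` is an assumption: it must match the attribute your
    LightningModule stores the HF model under (e.g. self.model = ...).
    """
    state = ckpt["state_dict"]
    # strip the wrapper prefix so keys line up with the bare HF model
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state.items()}
```

Then something like `model.load_state_dict(extract_model_state_dict(torch.load(ckpt_path)))` should succeed.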
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.4 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.10
Python version: 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-189-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
GPU 2: Quadro RTX 8000
GPU 3: Quadro RTX 8000
GPU 4: Quadro RTX 8000
GPU 5: Quadro RTX 8000
GPU 6: Quadro RTX 8000
GPU 7: Quadro RTX 8000
GPU 8: Quadro RTX 8000
GPU 9: Quadro RTX 8000
Nvidia driver version: 470.141.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.12.1
[pip3] torchmetrics==0.7.2
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch-lightning 1.7.7 pypi_0 pypi
[conda] torch 1.12.1 pypi_0 pypi
|
https://github.com/pytorch/pytorch/issues/85695
|
closed
|
[] | 2022-09-27T08:23:40Z
| 2022-09-29T17:38:58Z
| null |
ZeyiLiao
|
pytorch/functorch
| 1,030
|
Add support for `tree_map` or document recommended alternative
|
I'm working on testing some models using `functorch` along with `torch-mlir` and IREE. I don't see an analog of jax's `tree_map`. Is this something it makes sense for `functorch` to implement, or is there a recommended alternative?
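There is no public `functorch.tree_map`, but PyTorch ships a private analog in `torch.utils._pytree` (`tree_map`, `tree_flatten`, `tree_unflatten`) that functorch itself uses internally; being underscored, it carries no stability guarantee. For illustration, a minimal pure-Python version over dicts/lists/tuples:

```python
def tree_map(fn, tree):
    # Recursively apply fn to every leaf of a nested dict/list/tuple,
    # preserving the container structure (a simplified sketch of what
    # jax.tree_util.tree_map / torch.utils._pytree.tree_map do).
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    if isinstance(tree, (list, tuple)):
        return type(tree)(tree_map(fn, v) for v in tree)
    return fn(tree)
```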
|
https://github.com/pytorch/functorch/issues/1030
|
open
|
[] | 2022-09-26T19:20:21Z
| 2022-11-12T15:22:23Z
| 9
|
dellis23
|
pytorch/pytorch
| 85,625
|
How to install pytorch with cuda 11.7 in anaconda envirment?
|
### 📚 The doc issue


I could not find a CUDA 11.7 build when using conda or pip.
### Suggest a potential alternative/fix
add cuda 11.7 in conda
|
https://github.com/pytorch/pytorch/issues/85625
|
open
|
[
"triaged"
] | 2022-09-26T13:15:02Z
| 2022-10-04T08:23:30Z
| null |
verigle
|
pytorch/TensorRT
| 1,379
|
Why is size of tensorrt compiled INT8 model after QAT is same as size of FP16 model
|
## ❓ Question
<!-- Your question -->
I have been trying to use INT8 inference for a trained pytorch model.
I followed this:
https://pytorch.org/TensorRT/_notebooks/vgg-qat.html
and
https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/tutorials/quant_resnet50.html
**My steps are outlined as:**
1) **Train pytorch model**
- Normal training loop
2) **Calibrate the model with quant_modules**
```python
quant_modules.initialize(float_module_list=['ConvTranspose2d'])  # error when used in quantization, so I add it to the exception list
quant_desc_input = QuantDescriptor(calib_method='histogram')
# conv and Linear layers to be replaced by their quantized versions
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantLinear.set_default_quant_desc_input(quant_desc_input)
# quant_nn.QuantConvTranspose2d.set_default_quant_desc_input(quant_desc_input)

# now load the pre-trained model
net1 = model_simple.BevDetNetSimple(input_channel_numbers, settings.N_CHANNELS_PREDICTION_KP,
                                    scale_H=2, scale_W=2, predict_3d_center=True).cuda(DEVICE_ID_GPU)
cuda_dev = "cuda:{0}".format(DEVICE_ID_GPU)
net1.load_state_dict(torch.load(model_weights, map_location=cuda_dev))
print(net1)

calib_data = dataset_classes.CalibDataset(path_calib_dataset_bev=settings.val_bev_save_path)
calib_loader = DataLoader(calib_data, batch_size=6, drop_last=True)

# calibrate
with torch.no_grad():
    collect_stats(net1, calib_loader, num_batches=4)
    compute_amax(net1, method="percentile", percentile=99.99)
```
3. **Train the model with quantized layers**
- Normal training loop, 50 epochs
- Save as torchscript
```python
# export to torchscript
quant_nn.TensorQuantizer.use_fb_fake_quant = True
with torch.no_grad():
    jit_model = torch.jit.trace(net1, train_x)
    torch.jit.save(jit_model, settings.MODEL_SAVE_PATH_INTERIM + str(epoch) + '_qat.ts')
```
4. **Convert QAT trained torchscript to tensorrt - int8**
```python
def export_qat_to_trt_int8(path_trained_qat_ts, path_save_ts_trt_int8):
    """
    Exports the QAT-trained model saved as torchscript (and its weights) to TensorRT using INT8 precision.
    """
    # load the saved QAT torchscript model
    qat_model = torch.jit.load(path_trained_qat_ts).eval()
    # compile with Torch-TensorRT
    compile_spec = {"inputs": [trt.Input([1, 4, 384, 384])],
                    "enabled_precisions": [torch.int8],
                    "truncate_long_and_double": True,
                    "sparse_weights": True}
    trt_mod = trt.compile(qat_model, **compile_spec)
    torch.jit.save(trt_mod, path_save_ts_trt_int8)
```
After doing the above steps, I get a model of size 48 MB, which is the same size as the FP16 model. The runtime is also similar.
I then tried the PTQ technique for INT8. This gives me a model of size 28 MB, which is expected, and a runtime about half that of the FP16 model. However, the accuracy is not acceptable.
Please let me know what I am missing in the QAT flow. Why is my model larger and slower compared to PTQ?
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.12
- CPU Architecture: x86_64
- OS (e.g., Linux): linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.8.10
- CUDA version: 11.6
- GPU models and configuration: RTX3090/ RTX2080 MAXQ
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/1379
|
closed
|
[
"question",
"No Activity",
"component: quantization"
] | 2022-09-26T09:30:28Z
| 2023-05-02T15:33:16Z
| null |
SM1991CODES
|
huggingface/datasets
| 5,013
|
would huggingface like publish cpp binding for datasets package ?
|
Hi:
I use libtorch in a C++ environment and would like to use Hugging Face datasets, but there is no C++ binding. Would you consider publishing a C++ binding for it?
Thanks
|
https://github.com/huggingface/datasets/issues/5013
|
closed
|
[
"wontfix"
] | 2022-09-23T07:42:49Z
| 2023-02-24T16:20:57Z
| 5
|
mullerhai
|
huggingface/datasets
| 5,012
|
Force JSON format regardless of file naming on S3
|
I have a file on S3 created by Data Version Control; it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a JSON file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
```
However, I cannot go ahead and change the names of the S3 files. Is there a way to "force"-load an S3 URL with a given decoder (JSON, CSV, etc.) regardless of the S3 key's name?
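One workaround (a sketch; it assumes you can read the object yourself, e.g. via `s3fs` with configured credentials) is to copy the unsuffixed object to a local file ending in `.json`, so the loader can infer the format from the extension:

```python
import os
import shutil
import tempfile

def localize_with_suffix(src_fileobj, suffix=".json"):
    # Copy an already-open (binary) file object to a temp file whose name
    # carries the wanted suffix; load_dataset("json", data_files=path)
    # can then be pointed at the copy.
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as dst:
        shutil.copyfileobj(src_fileobj, dst)
    return path
```

The file object could come from something like `s3fs.S3FileSystem().open("dvc/ac/badff5b134382a0f25248f1b45d7b2", "rb")` (hypothetical usage, depending on your credentials setup).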
|
https://github.com/huggingface/datasets/issues/5012
|
closed
|
[
"enhancement"
] | 2022-09-22T18:28:15Z
| 2023-08-16T09:58:36Z
| 4
|
junwang-wish
|
huggingface/datasets
| 5,000
|
Dataset Viewer issue for asapp/slue
|
### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes
|
https://github.com/huggingface/datasets/issues/5000
|
closed
|
[] | 2022-09-20T16:45:45Z
| 2022-09-27T07:04:03Z
| 9
|
fwu-asapp
|
pytorch/data
| 782
|
Definition of `IterDataPipe` in `pyi` file breaks inheritance path for static type checking
|
See comments in https://github.com/pytorch/data/pull/780
@pmeier
At least for the first Error, the proper typing should be:
```py
def load(path: pathlib.Path) -> IterDataPipe[Tuple[str, BinaryIO]]:
if path.is_dir():
dp: IterDataPipe = FileLister(str(path), recursive=True)
else:
dp = IterableWrapper([str(path)])
return FileOpener(dp, mode="rb")
```
However, even with the proper typing shown above, the Error is changed to `Incompatible types in assignment (expression has type "FileListerIterDataPipe", variable has type "IterDataPipe[Any]")`. And, it doesn't explain what causes the second Error.
So, I spent a few hours figuring out what is the root cause of the `mypy` Error. In the generated `datapipe.pyi` file, a new `IterDataPipe` class is defined, which overrides the original `IterDataPipe` from the inheritance graph for all other `DataPipe`.
All Errors are eliminated when I remove the new `IterDataPipe` definition from `datapipe.pyi` and import `IterDataPipe` directly from `torch.utils.data.datapipe`. The reason we define a new `IterDataPipe` in the `pyi` file is to attach all functional APIs to it. We need to do this differently, by keeping the original `IterDataPipe` and extending the class with all functional APIs.
cc: @NivekT for python interface file
For this PR, I will revert it because our typing system needs to be fixed generally.
_Originally posted by @ejguan in https://github.com/pytorch/data/issues/780#issuecomment-1252595095_
|
https://github.com/meta-pytorch/data/issues/782
|
open
|
[
"Better Engineering"
] | 2022-09-20T16:23:07Z
| 2023-04-11T16:49:04Z
| 3
|
ejguan
|
pytorch/functorch
| 1,024
|
Get .item() error without calling .item()
|
Hello guys, I'm new to this package and I want to calculate a batched Jacobian of a self-implemented vector function, but I get the following error when doing so.
_RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report._
Here is my code. I don't understand where the `.item()` call comes from. Is the slicing operation `q_current[0:3]` wrong? How can I fix this?
```python
import torch
from functorch import jacrev,vmap
#batch * len
q_current = torch.randn((4,4*3-1),requires_grad=True)
def geoCompute(q_current):
k1 = q_current[0:3]
return k1
jacobian = vmap(jacrev(geoCompute))(q_current)
```
|
https://github.com/pytorch/functorch/issues/1024
|
open
|
[] | 2022-09-20T07:57:25Z
| 2022-09-20T12:16:59Z
| 1
|
LiXinrong1012
|
huggingface/datasets
| 4,990
|
"no-token" is passed to `huggingface_hub` when token is `None`
|
## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information about a dataset. If no token is provided, a "no-token" string is passed instead. What is the purpose of this? If there is no real one, I would prefer that the `None` value be sent directly and handled by `huggingface_hub`. I feel that this currently works only because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev`
|
https://github.com/huggingface/datasets/issues/4990
|
closed
|
[
"bug"
] | 2022-09-19T15:14:40Z
| 2022-09-30T09:16:00Z
| 6
|
Wauplin
|
pytorch/examples
| 1,063
|
question about drop_last=True on validation mode
|
I don't know why this code uses drop_last=True in validation mode.
Also, this code only uses a batch-size-divisible subset of the data for calculating the average top-1/top-5 errors,
and then re-generates an auxiliary validation dataset & dataloader just for printing the remaining logs.
Can anyone tell me why this code uses this method?
|
https://github.com/pytorch/examples/issues/1063
|
closed
|
[
"question",
"triaged"
] | 2022-09-19T01:41:05Z
| 2022-09-23T04:08:12Z
| null |
DY112
|
pytorch/TensorRT
| 1,362
|
❓ [Question] Why do you not build & release Windows wheels?
|
## ❓ Question
Just curious why you only make Linux wheels. Since the last release, it seems it should have been entirely possible to pre-build Windows wheels as well.
|
https://github.com/pytorch/TensorRT/issues/1362
|
closed
|
[
"question",
"channel: windows"
] | 2022-09-17T14:22:22Z
| 2022-09-19T15:53:58Z
| null |
joeyballentine
|
huggingface/datasets
| 4,983
|
How to convert torch.utils.data.Dataset to huggingface dataset?
|
I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to a huggingface dataset. However, there is a way to convert a huggingface dataset to `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset
data = [[1, 2],[3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")
ds[0]
ds[:2]
```
So is there something I'm missing, or is there really no function to convert `torch.utils.data.Dataset` to a huggingface dataset? If so, what is the recommended way to do this conversion?
Thanks.
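There is indeed no dedicated converter, but since a map-style `torch.utils.data.Dataset` is just `__len__` plus `__getitem__`, you can pivot its rows into columns and hand those to `Dataset.from_dict` (newer `datasets` releases also offer `Dataset.from_generator`). A sketch, assuming each example is a tuple and you supply the column names yourself:

```python
def rows_to_columns(row_dataset, column_names):
    # torch Datasets are row-oriented (__getitem__ yields one example);
    # huggingface Datasets are column-oriented, so pivot rows to columns.
    columns = {name: [] for name in column_names}
    for i in range(len(row_dataset)):
        example = row_dataset[i]
        for name, value in zip(column_names, example):
            columns[name].append(value)
    return columns
```

Then something like `datasets.Dataset.from_dict(rows_to_columns(torch_ds, ["x", "y"]))` builds the huggingface dataset.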
|
https://github.com/huggingface/datasets/issues/4983
|
closed
|
[
"enhancement"
] | 2022-09-16T09:15:10Z
| 2023-12-14T20:54:15Z
| 15
|
DEROOCE
|
pytorch/torchx
| 602
|
YAML example of submitting a job using kubernetes
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
I am new to use torchx with kubernetes scheduling. I followed the document to launch the etcd service successfully, which gives me the following results:
```console
foo@bar:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-client ClusterIP 192.168.50.248 <none> 2379/TCP 30m
etcd-server ClusterIP 192.168.53.173 <none> 2379/TCP,2380/TCP 30m
```
This is a little bit different than the example result given by the elastic tutorial (with two clusters): https://github.com/pytorch/elastic/tree/master/kubernetes.
I am not sure how to write or modify a YAML file to submit a training job similar to the example provided by the elastic tutorial:
https://github.com/pytorch/elastic/blob/master/kubernetes/config/samples/imagenet.yaml
I wonder if it is possible to provide a similar training YAML file for me to study?
Best,
Yihao
|
https://github.com/meta-pytorch/torchx/issues/602
|
closed
|
[] | 2022-09-15T22:42:21Z
| 2022-10-20T17:44:15Z
| 1
|
yihaocs
|
huggingface/datasets
| 4,981
|
Can't create a dataset with `float16` features
|
## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error.
The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases?
Thanks!
## Steps to reproduce the bug
All of the following raise the following error with the same exact (as far as I can tell) traceback:
```python
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
```python
from datasets import Dataset, Features, Value
Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))
import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))
import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```
## Expected results
A dataset with `float16` features is successfully created.
## Actual results
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
865 mapping = features.encode_batch(mapping)
866 mapping = {
867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
868 for col, data in mapping.items()
869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
871 if info.features is None:
872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
734 @classmethod
735 def from_pydict(cls, *args, **kwargs):
736 """
737 Construct a Table from Arrow arrays or columns
738
(...)
748 :class:`datasets.table.Table`:
749 """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
192 # otherwise we can finally use the user's type
193 elif type is not None:
194 # We use cast_array_to_feature to support casting to custom types like Audio and Image
195 # Also, when trying type "string", we don't want to convert integers or floats to "string".
196 # We only do it if trying_type is False - since this is what the user asks for.
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
198 return out
199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kw
|
https://github.com/huggingface/datasets/issues/4981
|
open
|
[
"bug"
] | 2022-09-15T21:03:24Z
| 2025-06-12T11:47:42Z
| 8
|
dconathan
|
huggingface/dataset-viewer
| 560
|
Fill some of the dataset card info automatically?
|
See https://github.com/huggingface/datasets/issues/4977: `Providing dataset size`
Related issues: https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157 and https://github.com/huggingface/datasets/issues/4876
|
https://github.com/huggingface/dataset-viewer/issues/560
|
closed
|
[
"question",
"feature request"
] | 2022-09-14T16:20:30Z
| 2023-06-14T12:15:54Z
| null |
severo
|
pytorch/pytorch
| 84,988
|
Document how to use parameters in C++ modular API (was How to use torch.nn.Parameter in libtorch cpp?)
|
### ๐ Describe the bug
Hi dear torch team:
In the PyTorch Python environment we often use torch.nn.Parameter to cache a tensor variable, like this:
```
import torch
import torch.nn as nn
memory_value = nn.Parameter(torch.cat([self.init_memory_value.unsqueeze(0) for _ in range(batch_size)], 0).data)
self.mem.init_value_memory(memory_value)
```
but when I try to use libtorch 1.12.1 in C++, I cannot find `torch::nn::Parameter()`. I don't know how to use parameters in the libtorch C++ environment; could you help me? Thanks a lot.
### Versions
libtorch 1.12.1
C++20
macOS latest
|
https://github.com/pytorch/pytorch/issues/84988
|
closed
|
[] | 2022-09-14T06:31:31Z
| 2022-09-16T02:59:51Z
| null |
mullerhai
|
pytorch/TensorRT
| 1,355
|
❓ [Question] How can I add torchvision.transforms.functional.gaussian_blur to the conversion?
|
## ❓ Question
How could I make torchvision.transforms.functional.gaussian_blur operation compatible with torch_tensorrt ?
## What you have already tried
Hello. The last step of my forward method applies gaussian_blur. Unfortunately this is not compatible with the framework, and I have to move it out of the forward method. If I do, the model is correctly parsed into a TensorRT engine. If not, I get this error:
```
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: (Unnamed Layer* 613) [Convolution]: two inputs (data and weights) are allowed only in explicit-quantization mode.
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [network.cpp::validateWeightedLayersInputs::2378] Error Code 4: Internal Error ((Unnamed Layer* 613) [Convolution]: Cannot set more than one input unless network has Q/DQ layers.)
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Traceback (most recent call last):
File "/home/jack3/tkh-projects/02-AD/code/TKHAD/kk.py", line 78, in <module>
trt_model = torch_tensorrt.compile(
File "/home/jack3/tkh-projects/02-AD/code/TKHAD/env/lib/python3.10/site-packages/torch_tensorrt/_compile.py", line 115, in compile
return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)
File "/home/jack3/tkh-projects/02-AD/code/TKHAD/env/lib/python3.10/site-packages/torch_tensorrt/ts/_compiler.py", line 113, in compile
compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError: [Error thrown at core/conversion/conversionctx/ConversionCtx.cpp:147] Building serialized network failed in TensorRT
```
I suppose some operation inside gaussian_blur is incompatible, although the error is not clear to me.
This is the code that converts the model:
```
dummy_input = torch.empty((1, 3, 224 ,224), device=torch.device('cuda'))
jit_model = torch.jit.trace(model, dummy_input)
trt_model = torch_tensorrt.compile(
jit_model,
"default",
[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
torch.float32,
truncate_long_and_double = True
)
```
And this is the part of my model with gaussian_blur
```
from torchvision.transforms.functional import gaussian_blur
import torch
class MyModel(torch.nn.Module):
def __init__(self):
...
self.kernel = 2 * int(4.0 * 4 + 0.5) + 1
def forward(self, x: torch.Tensor):
...
map_scores = gaussian_blur(map_scores, [self.kernel , self.kernel ], [4, 4])
return map_scores
```
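`gaussian_blur` builds its kernel tensor at call time and feeds it to conv2d as a second runtime input, which seems to match the "two inputs (data and weights)" error: TensorRT wants convolution weights to be constants. A workaround sketch is to precompute the kernel once in `__init__` and bake it into a fixed depthwise `nn.Conv2d` (groups equal to the channel count, bias disabled, weights frozen). The weight construction, in plain Python:

```python
import math

def gaussian_kernel_1d(kernel_size, sigma):
    # Discrete Gaussian weights, normalized to sum to 1. A 2D blur kernel
    # is the outer product of this vector with itself; loading it into a
    # depthwise nn.Conv2d (groups=C, bias=False) at __init__ time keeps
    # the weights constant, which TensorRT can then fold.
    center = (kernel_size - 1) / 2.0
    w = [math.exp(-((i - center) ** 2) / (2.0 * sigma ** 2))
         for i in range(kernel_size)]
    total = sum(w)
    return [v / total for v in w]
```

This mirrors what torchvision computes internally, but with the values fixed at module construction instead of at every forward pass (an assumption about a workable design, not a verified torch_tensorrt recipe).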
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.11.0+cu113
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Python version: 3.10
- CUDA version: 11.7
- Torch_tensorrt Version: 1.1.0
|
https://github.com/pytorch/TensorRT/issues/1355
|
closed
|
[
"question",
"component: converters",
"No Activity"
] | 2022-09-13T09:34:32Z
| 2023-04-23T00:02:34Z
| null |
mjack3
|
pytorch/pytorch
| 84,923
|
[FX] How to replace torch.functional with nn.module? TypeError: forward() takes 2 positional arguments but 3 were given
|
### 🐛 Describe the bug
I would like to use `torch.fx` to replace `torch.nn.functional` calls with `nn.Module`s for further model optimization.
Example:
```Python
# Original
F.adaptive_avg_pool2d(x, 1)
# Target
nn.AdaptiveAvgPool2d(1)
```
Here is my code:
```Python
with model.graph.inserting_before(node):
new_module_str = str(node._prev).split('_')[0] + ".adaptive_avg_pool2d"
model.add_submodule(new_module_str, nn.AdaptiveAvgPool2d(node.args[1:]))
new_node = model.graph.call_module(new_module_str, node.args)
node.replace_all_uses_with(new_node)
model.graph.erase_node((node))
```
The generated model can be recompiled through fx
```Python
...
(conv1): Module(
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), B=1)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, B=1)
(2): ReLU6(inplace=True)
(adaptive_avg_pool2d): AdaptiveAvgPool2d(output_size=(1,))
)
(conv2): Conv2d(1280, 10, kernel_size=(1, 1), stride=(1, 1), B=1)
...
conv1_0 = getattr(self.conv1, "0")(stage7_residual_7); stage7_residual_7 = None
conv1_1 = getattr(self.conv1, "1")(conv1_0); conv1_0 = None
conv1_2 = getattr(self.conv1, "2")(conv1_1); conv1_1 = None
conv1_adaptive_avg_pool2d = self.conv1.adaptive_avg_pool2d(conv1_2, 1); conv1_2 = None
conv2 = self.conv2(conv1_adaptive_avg_pool2d); conv1_adaptive_avg_pool2d = None
flatten_replacement = hydro_fx_fuse_flatten_replacement(conv2, 1); conv2 = None
return flatten_replacement
```
but cannot be trained:
```bash
Traceback (most recent call last):
File "/home/xxx/cifar_fuse.py", line 148, in <module>
train_result = train_epoch(train_loader, model, criterion, optimizer)
File "/home/xxx/cifar_fuse.py", line 102, in train_epoch
pred = model(X)
File "/home/xxx/miniconda3/lib/python3.9/site-packages/torch/fx/graph_module.py", line 652, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/xxx/miniconda3/lib/python3.9/site-packages/torch/fx/graph_module.py", line 277, in __call__
raise e
File "/home/xxx/miniconda3/lib/python3.9/site-packages/torch/fx/graph_module.py", line 267, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/xxx/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "<eval_with_key>.3", line 157, in forward
File "/home/xxx/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given
```
### Versions
Version: Pytorch 1.12.1
cc @ezyang @SherlockNoMad @soumith
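The crash comes from forwarding all of the functional call's args to the module node: `F.adaptive_avg_pool2d(x, 1)` has two args, but `nn.AdaptiveAvgPool2d` already stores `output_size`, so its `forward` takes only the input tensor. A self-contained sketch of the replacement pass (simplified naming; your traversal and submodule paths will differ):

```python
import torch
import torch.fx
import torch.nn as nn
import torch.nn.functional as F

class M(nn.Module):
    def forward(self, x):
        return F.adaptive_avg_pool2d(x, 1)

gm = torch.fx.symbolic_trace(M())
for node in list(gm.graph.nodes):
    if node.op == "call_function" and node.target is F.adaptive_avg_pool2d:
        # The module carries output_size itself, so the call_module node
        # must receive only the tensor input, not (input, output_size).
        gm.add_submodule("pool", nn.AdaptiveAvgPool2d(node.args[1]))
        with gm.graph.inserting_before(node):
            new_node = gm.graph.call_module("pool", (node.args[0],))
        node.replace_all_uses_with(new_node)
        gm.graph.erase_node(node)
gm.recompile()
```

With the extra positional argument dropped, the recompiled module runs without the extra-argument TypeError.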
|
https://github.com/pytorch/pytorch/issues/84923
|
closed
|
[
"fx"
] | 2022-09-13T06:17:55Z
| 2022-09-13T07:23:46Z
| null |
Qinghao-Hu
|