| repo (string, 147 classes) | number (int64, 1–172k) | title (string, 2–476 chars) | body (string, 0–5k chars) | url (string, 39–70 chars) | state (string, 2 classes) | labels (list, 0–9 items) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 – 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 – 2026-01-06 08:03:39) | comments (int64, 0–58, may be null ⌀) | user (string, 2–28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/dataset-viewer
| 455
|
what to do with /is-valid?
|
Currently, the endpoint /is-valid is not documented in https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json (but it is in https://github.com/huggingface/datasets-server/blob/main/services/api/README.md).
It's not used in the dataset viewer in moonlanding, but https://github.com/huggingface/model-evaluator uses it (cc @lewtun).
I have the impression that we could change this endpoint to something more precise, since "valid" is a bit loose and will become even less precise as other services are added to the dataset server (statistics, random access, parquet files, etc.). Instead, maybe we could create a new endpoint with more details about which services are working for the dataset. Or do we consider a dataset valid only if all the services are available?
What should we do?
- [ ] keep it this way
- [ ] create a new endpoint with details of the available services
also cc @lhoestq
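For context, a minimal sketch of how a client such as model-evaluator might call the endpoint today; the query parameter and the response shape below are assumptions, not taken from the OpenAPI spec:
```python
import requests

# Hypothetical usage of /is-valid for a single dataset.
resp = requests.get(
    "https://datasets-server.huggingface.co/is-valid",
    params={"dataset": "glue"},
)
print(resp.status_code, resp.json())
```
A more detailed endpoint would presumably return a per-service breakdown instead of a single boolean.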
|
https://github.com/huggingface/dataset-viewer/issues/455
|
closed
|
[
"question"
] | 2022-07-22T19:29:08Z
| 2022-08-02T14:16:24Z
| null |
severo
|
pytorch/torchx
| 567
|
[exploratory] TorchX Dashboard
|
## Description
<!-- concise description of the feature/enhancement -->
Add a new `torchx dashboard` command that will launch a local HTTP server that allows users to view all of their jobs with statuses, logs and integration with any ML specific extras such as artifacts, Tensorboard, etc.
## Motivation/Background
<!-- why is this feature/enhancement important? provide background context -->
Currently the interface for TorchX is only programmatic or via the CLI. It would also be nice to have a UI dashboard that could be used to monitor all of your jobs as well as support deeper integrations such as experiment tracking and metrics.
Right now, if users want a UI they have to use their platform-specific one (e.g. AWS Batch / Ray dashboard), and many platforms don't have one (Slurm/Volcano).
## Detailed Proposal
<!-- provide a detailed proposal -->
This would be a fairly simple interface built on top of something such as Flask (https://flask.palletsprojects.com/en/2.1.x/quickstart/).
Pages:
* `/` the main page with a list of all of the users jobs and filters
* `/<scheduler>/<jobid>` an overview of the job, the job def and the status with a tab for logs, artifacts and any other URLs that are logged
* `/<scheduler>/<jobid>/logs` - view the logs
* `/<scheduler>/<jobid>/external/<metadata key>` - iframes based off of external services such as tensorboard etc
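A rough sketch of what those pages could look like on top of Flask; `list_jobs` and `get_logs` are placeholder helpers standing in for calls into the TorchX runner, not an existing API:
```python
from flask import Flask

app = Flask(__name__)

def list_jobs():
    # placeholder: would query the TorchX runner / schedulers for the user's jobs
    return []

def get_logs(scheduler, jobid):
    # placeholder: would stream the job's logs from the scheduler
    return "no logs yet\n"

@app.route("/")
def index():
    # main page: all of the user's jobs plus filters
    return {"jobs": list_jobs()}

@app.route("/<scheduler>/<jobid>")
def job_overview(scheduler, jobid):
    # overview of the job: app def, status, tabs for logs/artifacts
    return {"scheduler": scheduler, "jobid": jobid, "status": "UNKNOWN"}

@app.route("/<scheduler>/<jobid>/logs")
def job_logs(scheduler, jobid):
    return get_logs(scheduler, jobid)

if __name__ == "__main__":
    app.run(port=8080)
```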
## Alternatives
<!-- discuss the alternatives considered and their pros/cons -->
Providing a way to view URLs for external services via the terminal.
## Additional context/links
<!-- link to code, documentation, etc. -->
* https://docs.ray.io/en/latest/ray-core/ray-dashboard.html#logical-view
|
https://github.com/meta-pytorch/torchx/issues/567
|
open
|
[
"enhancement",
"RFC",
"cli"
] | 2022-07-22T19:28:51Z
| 2022-08-02T21:23:14Z
| 1
|
d4l3k
|
pytorch/torchx
| 566
|
add a TORCHX_JOB_ID environment variable to all jobs launched via runner
|
## Description
<!-- concise description of the feature/enhancement -->
As part of the future experiment tracking work we want the application to know its own identity. When we launch a job we return the full job id (i.e. `kubernetes://session/app_id`), but the app itself doesn't have this exact same job ID. We do provide an `app_id` macro that can be used in the app def for both env and arguments, but it's up to the app owner to add that manually.
## Motivation/Background
<!-- why is this feature/enhancement important? provide background context -->
If we add a `TORCHX_JOB_ID` environment variable, it allows us to write more standardized integrations for experiment tracking that use the job ID as a key. There's no added cost from an extra environment variable, and it will enable deeper automatic integrations into other libraries.
## Detailed Proposal
<!-- provide a detailed proposal -->
Add a new environment variable to Runner.dryrun
https://github.com/pytorch/torchx/blob/main/torchx/runner/api.py#L241
that uses the `macros.app_id` macro to add the full job ID using the scheduler and session information from the runner.
https://github.com/pytorch/torchx/blob/main/torchx/specs/api.py#L156
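For illustration, this is all an application would need to do to pick up the proposed variable at runtime (a sketch; the variable does not exist yet):
```python
import os

# Proposed: the TorchX runner would set this at launch time.
job_id = os.environ.get("TORCHX_JOB_ID")  # e.g. "kubernetes://session/app_id"
if job_id is not None:
    print(f"logging experiment metrics under job {job_id}")
```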
## Alternatives
<!-- discuss the alternatives considered and their pros/cons -->
## Additional context/links
<!-- link to code, documentation, etc. -->
|
https://github.com/meta-pytorch/torchx/issues/566
|
open
|
[
"enhancement",
"module: runner",
"tracking"
] | 2022-07-22T18:22:24Z
| 2022-07-22T21:28:02Z
| 0
|
d4l3k
|
pytorch/functorch
| 979
|
ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv
|
Hi All,
I was running an older version of PyTorch (built from source) with functorch (also built from source), and somehow I've broken that older version of functorch. When I import functorch I get the following error:
```
import functorch
#returns ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv
```
The version of `functorch` I had was `0.2.0a0+9d6ee76`; is there a way to re-install it to fix this ImportError? I do have the latest version of PyTorch/functorch in a separate conda environment, but I wanted to check how it compares to the older environment, where PyTorch and functorch were versions `1.12.0a0+git7c2103a` and `0.2.0a0+9d6ee76` respectively.
Is there a way to download a specific version of `functorch` from `https://github.com/pytorch/functorch.git`? Or another way to fix this issue?
|
https://github.com/pytorch/functorch/issues/979
|
closed
|
[] | 2022-07-22T14:51:13Z
| 2022-07-25T19:22:04Z
| 24
|
AlphaBetaGamma96
|
huggingface/datasets
| 4,736
|
Dataset Viewer issue for deepklarity/huggingface-spaces-dataset
|
### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on an uploaded dataset, and I've been getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is CSV, so I'm not sure whether it's supposed to take this much time or not.
```
Status code: 400
Exception: Status400Error
Message: The split is being processed. Retry later.
```
Is there any explicit step to be taken to get the viewer to work?
### Owner
Yes
|
https://github.com/huggingface/datasets/issues/4736
|
closed
|
[
"dataset-viewer"
] | 2022-07-22T12:14:18Z
| 2022-07-22T13:46:38Z
| 1
|
dk-crazydiv
|
pytorch/TensorRT
| 1,199
|
Can't import torch_tensorrt
|
ERROR:
from torch.fx.passes.pass_manager import PassManager
ModuleNotFoundError: No module named 'torch.fx.passes.pass_manager'
- PyTorch Version : 1.11
- CPU Architecture: jetson AGX xavier
- OS (e.g., Linux):
- How you installed PyTorch: nvidia forum wheel
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:3.8
- CUDA version: 11.4
|
https://github.com/pytorch/TensorRT/issues/1199
|
closed
|
[
"question",
"channel: linux-jetpack",
"component: fx"
] | 2022-07-22T08:00:34Z
| 2022-09-02T18:04:29Z
| null |
sanath-tech
|
huggingface/datasets
| 4,732
|
Document better that loading a dataset passing its name does not use the local script
|
As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be clearer that loading a dataset by passing its name does not use a (modified) local script.
What he did:
- he installed `datasets` from source
- he modified locally `datasets/the_pile/the_pile.py` loading script
- he tried to load it using `load_dataset("the_pile")` instead of `load_dataset("datasets/the_pile")` (see the sketch after this list)
- as explained here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191040245:
- the former does not use the local script, but instead it downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~/.cache/huggingface/modules`) and uses that.
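A minimal sketch of the distinction (the local path below is hypothetical):
```python
from datasets import load_dataset

# Passing a dataset *name*: downloads the canonical the_pile.py from GitHub,
# caches it under ~/.cache/huggingface/modules, and runs that copy.
ds = load_dataset("the_pile")

# Passing a *path*: uses the (possibly modified) local loading script instead.
ds = load_dataset("./datasets/the_pile")
```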
He suggests adding a clearer explanation about this, maybe in [Installation > source](https://huggingface.co/docs/datasets/installation).
CC: @stevhliu
|
https://github.com/huggingface/datasets/issues/4732
|
closed
|
[
"documentation"
] | 2022-07-22T06:07:31Z
| 2022-08-23T16:32:23Z
| 3
|
albertvillanova
|
pytorch/TensorRT
| 1,198
|
❓ [Question] Where can we get VGG-16 checkpoint pretrained on CIFAR-10 ?
|
## ❓ Question
To get $pwd/vgg16_ckpts/ckpt_epoch110.pth, I tried to run the script named [python3 finetune_qat.py](https://github.com/pytorch/TensorRT/tree/v1.1.1/examples/int8/training/vgg16#quantization-aware-fine-tuning-for-trying-out-qat-workflows).
However, the script needs a VGG-16 model pretrained for 100 epochs, as follows:
```bash
Loading from checkpoint $(PATH_TOTensorRT)/examples/int8/training/vgg16/vgg16_ckpts/ckpt_epoch100.pth
```
Then where can we download the epoch-100 checkpoint?
I wasn't able to download it from any other site on the internet.
|
https://github.com/pytorch/TensorRT/issues/1198
|
closed
|
[
"question"
] | 2022-07-22T05:06:34Z
| 2022-07-22T05:13:32Z
| null |
zinuok
|
pytorch/TensorRT
| 1,197
|
❓ [Question] Where can we get 'trained_vgg16_qat.jit.pt' ?
|
## ❓ Question
Where can we get 'trained_vgg16_qat.jit.pt' ?
the link in [test_qat_trt_accuracy.py](https://github.com/pytorch/TensorRT/blob/master/tests/py/test_qat_trt_accuracy.py#L74)
doesn't work now.
|
https://github.com/pytorch/TensorRT/issues/1197
|
closed
|
[
"question"
] | 2022-07-22T04:38:53Z
| 2022-07-22T04:46:46Z
| null |
zinuok
|
pytorch/serve
| 1,753
|
how to return the predictions in JSON format(in JSON string and JSON header)?
|
I am using TorchServe for a production service. I am able to return the predictions as a JSON string, but I am unable to get the response with a JSON content-type header.
|
https://github.com/pytorch/serve/issues/1753
|
closed
|
[
"triaged_wait",
"support"
] | 2022-07-22T04:04:26Z
| 2022-07-24T16:50:32Z
| null |
Vincentwei1021
|
pytorch/functorch
| 977
|
Hessian (w.r.t inputs) calculation in PyTorch differs from FuncTorch
|
Hi All,
I've been trying to calculate the Hessian of the output of my network with respect to its inputs using functorch. I had a version in plain PyTorch that supports batches; however, the two seem to disagree with each other and I have no idea why they don't give the same results. Something is clearly wrong. I know my PyTorch version is right, so either there's an issue in my functorch version or I've implemented it incorrectly in functorch.
Also, how can I use the `has_aux` flag in `jacrev` to return the jacobian from the first `jacrev` so I don't have to repeat the jacobian calculation?
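For reference, a sketch of the `has_aux` pattern, assuming the installed `jacrev` supports `has_aux` (recent functorch / `torch.func` versions do); `logabs`, `params` and `x` refer to the script below:
```python
from functorch import jacrev

def jac_with_aux(params, x):
    j = jacrev(logabs, argnums=1)(params, x)
    return j, j  # differentiate the first copy, pass the second through as aux

# d2f_dx2 is the jacobian of the jacobian; d1f_dx comes back unchanged as aux.
d2f_dx2, d1f_dx = jacrev(jac_with_aux, argnums=1, has_aux=True)(params, x)
```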
The only problem with my example is that it uses `torch.linalg.slogdet`, and from what I remember functorch can't vmap over `.item()`. I do have my own fork of PyTorch where I edited the backward to remove the `.item()` call so that it works with vmap, although it's not the greatest implementation, as I just set it to the default `nonsingular_case_backward` like so:
```cpp
Tensor slogdet_backward(const Tensor& grad_logabsdet,
                        const Tensor& self,
                        const Tensor& signdet, const Tensor& logabsdet) {
  auto singular_case_backward = [&](const Tensor& grad_logabsdet, const Tensor& self) -> Tensor {
    Tensor u, sigma, vh;
    std::tie(u, sigma, vh) = at::linalg_svd(self, false);
    Tensor v = vh.mH();
    // sigma has all non-negative entries (also with at least one zero entry)
    // so logabsdet = \sum log(abs(sigma))
    // but det = 0, so backward logabsdet = \sum log(sigma)
    auto gsigma = grad_logabsdet.unsqueeze(-1).div(sigma);
    return svd_backward({}, gsigma, {}, u, sigma, vh);
  };
  auto nonsingular_case_backward = [&](const Tensor& grad_logabsdet, const Tensor& self) -> Tensor {
    // TODO: replace self.inverse with linalg_inverse
    return unsqueeze_multiple(grad_logabsdet, {-1, -2}, self.dim()) * self.inverse().mH();
  };
  auto nonsingular = nonsingular_case_backward(grad_logabsdet, self);
  return nonsingular;
}
```
My 'minimal' reproducible script is below with the output shown below that. It computes the Laplacian via a PyTorch method and via FuncTorch for a single sample of size `[A,1]` where `A` is the number of input nodes to the network.
```python
import torch
import torch.nn as nn
from torch import Tensor
import functorch
from functorch import jacrev, jacfwd, hessian, make_functional, vmap
import time

_ = torch.manual_seed(0)

print("PyTorch version: ", torch.__version__)
print("CUDA version: ", torch.version.cuda)
print("FuncTorch version: ", functorch.__version__)

def sync_time() -> float:
    torch.cuda.synchronize()
    return time.perf_counter()

B = 1  # batch
A = 3  # input nodes

device = torch.device("cuda")

class model(nn.Module):
    def __init__(self, num_inputs, num_hidden):
        super(model, self).__init__()
        self.num_inputs = num_inputs
        self.func = nn.Tanh()
        self.fc1 = nn.Linear(2, num_hidden)
        self.fc2 = nn.Linear(num_hidden, num_inputs)

    def forward(self, x):
        """
        Takes x in [B,A,1] and maps it to sign/logabsdet value in Tuple([B,], [B,])
        """
        idx = len(x.shape)
        rep = [1 for _ in range(idx)]
        rep[-2] = self.num_inputs
        g = x.mean(dim=(idx-2), keepdim=True).repeat(*rep)
        f = torch.cat((x, g), dim=-1)
        h = self.func(self.fc1(f))
        mat = self.fc2(h)
        sgn, logabs = torch.linalg.slogdet(mat)
        return sgn, logabs

net = model(A, 64)
net = net.to(device)

fnet, params = make_functional(net)

def logabs(params, x):
    _, logabs = fnet(params, x)
    # print("functorch logabs: ", logabs)
    return logabs

def kinetic_pytorch(xs: Tensor) -> Tensor:
    """Method to calculate the local kinetic energy values of a network function, f, for samples, x.
    The values calculated here are 1/f d2f/dx2 which is equivalent to d2log(|f|)/dx2 + (dlog(|f|)/dx)^2
    within the log-domain (rather than the linear-domain).

    :param xs: The input positions of the many-body particles
    :type xs: class: `torch.Tensor`
    """
    xis = [xi.requires_grad_() for xi in xs.flatten(start_dim=1).t()]
    xs_flat = torch.stack(xis, dim=1)

    _, ys = net(xs_flat.view_as(xs))
    # print("pytorch logabs: ", ys)
    ones = torch.ones_like(ys)

    # df_dx calculation
    (dy_dxs, ) = torch.autograd.grad(ys, xs_flat, ones, retain_graph=True, create_graph=True)

    # d2f_dx2 calculation (diagonal only)
    lay_ys = sum(torch.autograd.grad(dy_dxi, xi, ones, retain_graph=True, create_graph=False)[0]
                 for xi, dy_dxi in zip(xis, (dy_dxs[..., i] for i in range(len(xis))))
                 )
    # print("(PyTorch): ", lay_ys, dy_dxs)

    ek_local_per_walker = -0.5 * (lay_ys + dy_dxs.pow(2).sum(-1))  # move const out of loop?
    return ek_local_per_walker

jacjaclogabs = jacrev(jacrev(logabs, argnums=1), argnums=1)
jaclogabs = jacrev(logabs, argnums=1)

def kinetic_functorch(params, x):
    d2f_dx2 = vmap(jacjaclogabs, in_dims=(None, 0))(par
```
|
https://github.com/pytorch/functorch/issues/977
|
closed
|
[] | 2022-07-21T12:11:09Z
| 2022-08-01T19:37:18Z
| 18
|
AlphaBetaGamma96
|
pytorch/benchmark
| 1,046
|
How to add a new backend?
|
Hello, I want to add a new backend to run the benchmark **without** modifying this repo's code. In the torchdynamo repo I can use the `@create_backend` decorator to do this, but I can't find a suitable interface in this repo.
|
https://github.com/pytorch/benchmark/issues/1046
|
closed
|
[] | 2022-07-20T08:45:36Z
| 2022-07-27T22:47:49Z
| null |
zzpmiracle
|
huggingface/datasets
| 4,719
|
Issue loading TheNoob3131/mosquito-data dataset
|
So my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to see if the files were downloaded, the folder was blank.
Here is the error below:
```
ValueError                                Traceback (most recent call last)
Input In [8], in <cell line: 3>()
      1 from datasets import load_dataset
----> 3 dataset = load_dataset("TheNoob3131/mosquito-data", split="train")

File ~\Anaconda3\lib\site-packages\datasets\load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
   1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
   1678 # Download and prepare data
-> 1679 builder_instance.download_and_prepare(
   1680     download_config=download_config,
   1681     download_mode=download_mode,
   1682     ignore_verifications=ignore_verifications,
   1683     try_from_hf_gcs=try_from_hf_gcs,
   1684     use_auth_token=use_auth_token,
   1685 )
   1687 # Build dataset for splits
   1688 keep_in_memory = (
   1689     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   1690 )
```
Is the dataset in the wrong format or is there some security permission that I should enable?
|
https://github.com/huggingface/datasets/issues/4719
|
closed
|
[] | 2022-07-19T17:47:37Z
| 2022-07-20T06:46:57Z
| 2
|
thenerd31
|
pytorch/TensorRT
| 1,189
|
❓ [Question] Why does GPU memory double when I load a Torch-TensorRT model with PyTorch?
|
## ❓ Question
<!-- Your question -->
When I use PyTorch to load a model exported from Torch-TensorRT (`torch.jit.load` on a `.ts` file), the model's GPU memory usage doubles (from 1602 MB to 3242 MB according to nvidia-smi), even though no gradients are kept for the model tensors. My concern is that the Torch context memory is not being reused and a new context is started instead.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):1.10.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Python version: 3.7
- CUDA version:11.2
- Any other relevant information: torch-tensorrt version: 1.1.0
- NVIDIA GPU: Tesla v100
## Additional context
```python
import torch
import torch_tensorrt

# memory is 1.6G
a = torch.randn()
a = torch.randn([1, 1, 224, 224])
a.cuda()

# memory become 3.2G
model = torch.jit.load()
```
|
https://github.com/pytorch/TensorRT/issues/1189
|
closed
|
[
"question",
"No Activity",
"performance"
] | 2022-07-19T10:21:14Z
| 2023-03-26T00:02:18Z
| null |
Jancapcc
|
huggingface/datasets
| 4,711
|
Document how to create a dataset loading script for audio/vision
|
Currently, in our docs for Audio/Vision/Text, we explain how to:
- Load data
- Process data
However we only explain how to *Create a dataset loading script* for text data.
I think it would be useful to add the same for Audio/Vision, as these have some specificities that differ from text.
See, for example:
- #4697
- and comment there: https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492
CC: @stevhliu
|
https://github.com/huggingface/datasets/issues/4711
|
closed
|
[
"documentation"
] | 2022-07-19T08:03:40Z
| 2023-07-25T16:07:52Z
| 1
|
albertvillanova
|
huggingface/optimum
| 306
|
`ORTModelForConditionalGeneration` did not have `generate()` module after converting from `T5ForConditionalGeneration`
|
### System Info
```shell
Machine: Apple M1 Pro
Optimum version: 1.3.0
Transformers version: 4.20.1
Onnxruntime version: 1.11.1
# Question
How do I run inference with a quantized ONNX model from the class ORTModelForConditionalGeneration (previously using T5ForConditionalGeneration)? I've successfully converted the T5ForConditionalGeneration PyTorch model to ONNX and then quantized it, but I don't understand why `model.generate` is not found on the ORTModelForConditionalGeneration model. How do I run inference?
A bit of context: this is a text-to-text generation task, generating a paraphrase from a sentence.
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Sample code:
```
import os
from optimum.onnxruntime.modeling_seq2seq import ORTModelForConditionalGeneration
from transformers import T5ForConditionalGeneration,T5Tokenizer
save_directory = "onnx/"
file_name = "model.onnx"
onnx_path = os.path.join(save_directory, "model.onnx")
# Load a model from transformers and export it through the ONNX format
# model_raw = T5ForConditionalGeneration.from_pretrained(f'model_{version}/t5_keyword')
model = ORTModelForConditionalGeneration.from_pretrained(f'model_{version}/t5_keyword', from_transformers=True)
tokenizer = T5Tokenizer.from_pretrained(f'model_{version}/t5_keyword')
# Save the onnx model and tokenizer
model.save_pretrained(save_directory, file_name=file_name)
tokenizer.save_pretrained(save_directory)
```
Quantization code:
```
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
# Define the quantization methodology
qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(f'model_{version}/t5_keyword', feature="seq2seq-lm")
# Apply dynamic quantization on the model
quantizer.export(
onnx_model_path=onnx_path,
onnx_quantized_model_output_path=os.path.join(save_directory, "model-quantized.onnx"),
quantization_config=qconfig,
)
```
Reader:
```
from optimum.onnxruntime.modeling_seq2seq import ORTModelForConditionalGeneration
from transformers import pipeline, AutoTokenizer
model = ORTModelForConditionalGeneration.from_pretrained(save_directory, file_name="model-quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(save_directory)
```
Error when:
```
text = "Hotelnya bagus sekali"
encoding = tokenizer.encode_plus(text,padding=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]
beam_outputs = model.generate(
input_ids=input_ids,
attention_mask=attention_masks,
)
```
`AttributeError: 'ORTModelForConditionalGeneration' object has no attribute 'generate'`
### Expected behavior
Can predict using same T5 class `generate`
|
https://github.com/huggingface/optimum/issues/306
|
closed
|
[
"bug"
] | 2022-07-19T07:14:48Z
| 2022-07-19T09:29:09Z
| 2
|
tiketdatailham
|
pytorch/TensorRT
| 1,188
|
❓ [Question] Cannot install torch-tensorrt package
|
Hi! Can someone explain why this error occurs?
```shell
(tf-gpu-11.6) C:\Users\myxzlpltk>pip install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
Looking in links: https://github.com/NVIDIA/Torch-TensorRT/releases
Collecting torch-tensorrt
Using cached torch-tensorrt-0.0.0.post1.tar.gz (9.0 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [13 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\myxzlpltk\AppData\Local\Temp\pip-install-t86xj3rx\torch-tensorrt_a472ada85c9e492d8f4d7d614046053d\setup.py", line 125, in <module>
raise RuntimeError(open("ERROR.txt", "r").read())
RuntimeError:
###########################################################################################
The package you are trying to install is only a placeholder project on PyPI.org repository.
To install Torch-TensorRT please run the following command:
$ pip install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
###########################################################################################
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
|
https://github.com/pytorch/TensorRT/issues/1188
|
closed
|
[
"question",
"channel: windows"
] | 2022-07-19T01:48:13Z
| 2024-02-26T17:16:23Z
| null |
myxzlpltk
|
pytorch/TensorRT
| 1,186
|
❓ [Question] Python Package for V1.1.1 Release?
|
## ❓ Question
Does the latest release include the python package for supporting JP5.0 too?
- PyTorch Version (e.g., 1.0): 1.11
- CPU Architecture: Arm64
- Python version: 3.8
- CUDA version: 11.4
|
https://github.com/pytorch/TensorRT/issues/1186
|
closed
|
[
"question",
"release: patch",
"channel: linux-jetpack"
] | 2022-07-18T15:20:13Z
| 2022-07-18T21:47:06Z
| null |
haichuanwang001
|
huggingface/datasets
| 4,694
|
Distributed data parallel training for streaming datasets
|
### Feature request
Is there any documentation for `load_dataset(streaming=True)` with (multi-node, multi-GPU) DDP training?
### Motivation
Given a bunch of data files, it is expected to split them onto different GPUs. Is there a guide or documentation?
### Your contribution
Does it require manually splitting the data files for each worker in `DatasetBuilder._split_generator()`? What is `IterableDatasetShard` expected to do?
|
https://github.com/huggingface/datasets/issues/4694
|
open
|
[
"enhancement"
] | 2022-07-17T01:29:43Z
| 2023-04-26T18:21:09Z
| 6
|
cyk1337
|
pytorch/data
| 661
|
DataLoader2 with reading service
|
For the user dev and onboarding experience of the data component, we will provide examples, tutorials, up-to-date documentation as well as operational support. We added a simple train loop example. This issue further tracks adding the use case and example of DataLoader2 with different reading services.
|
https://github.com/meta-pytorch/data/issues/661
|
closed
|
[
"documentation"
] | 2022-07-15T17:29:41Z
| 2022-11-10T23:07:24Z
| 2
|
dahsh
|
huggingface/datasets
| 4,684
|
How to assign new values to Dataset?
|
Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import load_dataset
data = load_dataset('glue','sst2')
data['train']['label'] = [0]*len(data)
```
I will get the error:
```
TypeError: 'Dataset' object does not support item assignment
```
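For comparison, a sketch of the supported pattern: `Dataset` objects are immutable, so new values are produced with `map()` / `add_column()` rather than item assignment:
```python
from datasets import load_dataset

data = load_dataset("glue", "sst2")
train = data["train"]

train = train.map(lambda example: {"label": 0})           # overwrite an existing column
train = train.add_column("new_column", [0] * len(train))  # add a brand new column
```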
|
https://github.com/huggingface/datasets/issues/4684
|
closed
|
[
"enhancement"
] | 2022-07-15T04:17:57Z
| 2023-03-20T15:50:41Z
| 2
|
beyondguo
|
pytorch/data
| 655
|
DataLoader2 with OSS datasets/datapipes
|
For the user dev and onboarding experience of the data component, we will provide examples, tutorials, up-to-date documentation as well as operational support. We added a simple train loop example. This issue further tracks adding the use case and example of DataLoader2 with open source datasets/datapipes.
|
https://github.com/meta-pytorch/data/issues/655
|
closed
|
[] | 2022-07-14T17:51:13Z
| 2022-11-10T23:06:20Z
| 2
|
dahsh
|
huggingface/datasets
| 4,682
|
weird issue/bug with columns (dataset iterable/stream mode)
|
I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are JSONL-formatted. I was trying to iterate through it in streaming mode and grab all "score_title_description" values, but I kept getting a "key not found" error after a certain point in the iteration. I found that some JSON objects in the file don't have "score_title_description", and in SOME cases this returns None while in others it raises a key error. Why is there an inconsistency here, and how can I fix it?
|
https://github.com/huggingface/datasets/issues/4682
|
open
|
[] | 2022-07-14T13:26:47Z
| 2022-07-14T13:26:47Z
| 0
|
eunseojo
|
pytorch/torchx
| 557
|
How do I run the script and use script args?
|
## ❓ Questions and Help
How do I run the script and pass the script args, e.g.
`torchx run --scheduler local_cwd --scheduler_args log_dir=/tmp dist.ddp -j 1x2 --script dlrm_main.py --epoch 30`?
When I test DLRM with the following command:
```shell
torchx run --scheduler local_cwd --scheduler_args log_dir=/tmp dist.ddp -j 1x2 --script dlrm_main.py --epoch 30
```
### Question
the error is :
usage: torchx run <run args...> ddp [--help] [--script SCRIPT] [-m M] [--image IMAGE] [--name NAME] [-h H] [--cpu CPU] [--gpu GPU] [--memMB MEMMB] [-j J] [--env ENV] [--max_retries MAX_RETRIES] [--rdzv_port RDZV_PORT]
[--mounts MOUNTS]
...
torchx run <run args...> ddp : error: unrecognized arguments: --epoch
|
https://github.com/meta-pytorch/torchx/issues/557
|
closed
|
[] | 2022-07-14T08:50:39Z
| 2023-07-03T19:51:50Z
| 3
|
davidxiaozhi
|
pytorch/examples
| 1,022
|
How to build a generator for a layout 2 image GANs with images of size 256 and 512
|
Hello, I am new to GANs and I need your help:
Could you please help me make the model accept image sizes of 256x256 and 512x512?
I have included the generator model for 128x128 below.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from math import *
from models.bilinear import crop_bbox_batch

def get_z_random(batch_size, z_dim, random_type='gauss'):
    if random_type == 'uni':
        z = torch.rand(batch_size, z_dim) * 2.0 - 1.0
    elif random_type == 'gauss':
        z = torch.randn(batch_size, z_dim)
    return z

def transform_z_flat(batch_size, time_step, z_flat, obj_to_img):
    # restore z to batch with padding
    z = torch.zeros(batch_size, time_step, z_flat.size(1)).to(z_flat.device)
    for i in range(batch_size):
        idx = (obj_to_img.data == i).nonzero()
        if idx.dim() == 0:
            continue
        idx = idx.view(-1)
        n = idx.size(0)
        z[i, :n] = z_flat[idx]
    return z

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.num_features = num_features
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.embed = nn.Embedding(num_classes, num_features * 2)
        self.embed.weight.data[:, :num_features].normal_(1, 0.02)  # Initialise scale at N(1, 0.02)
        self.embed.weight.data[:, num_features:].zero_()  # Initialise bias at 0

    def forward(self, x, y):
        out = self.bn(x)
        gamma, beta = self.embed(y).chunk(2, 1)
        out = gamma.view(-1, self.num_features, 1, 1) * out + beta.view(-1, self.num_features, 1, 1)
        return out

class ResidualBlock(nn.Module):
    """Residual Block with instance normalization."""
    def __init__(self, dim_in, dim_out):
        super(ResidualBlock, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(dim_out, affine=True, track_running_stats=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim_out, dim_out, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(dim_out, affine=True, track_running_stats=True))

    def forward(self, x):
        return x + self.main(x)

class ConvLSTMCell(nn.Module):
    def __init__(self, input_size, input_dim, hidden_dim, kernel_size, bias):
        """
        Initialize ConvLSTM cell.

        Parameters
        ----------
        input_size: (int, int)
            Height and width of input tensor as (height, width).
        input_dim: int
            Number of channels of input tensor.
        hidden_dim: int
            Number of channels of hidden state.
        kernel_size: (int, int)
            Size of the convolutional kernel.
        bias: bool
            Whether or not to add the bias.
        """
        super(ConvLSTMCell, self).__init__()
        self.height, self.width = input_size
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.kernel_size = kernel_size
        self.padding = kernel_size[0] // 2, kernel_size[1] // 2
        self.bias = bias
        self.conv = nn.Conv2d(in_channels=self.input_dim + self.hidden_dim,
                              out_channels=4 * self.hidden_dim,
                              kernel_size=self.kernel_size,
                              padding=self.padding,
                              bias=self.bias)

    def forward(self, input_tensor, cur_state):
        h_cur, c_cur = cur_state
        combined = torch.cat([input_tensor, h_cur], dim=1)  # concatenate along channel axis
        combined_conv = self.conv(combined)
        cc_i, cc_f, cc_o, cc_g = torch.split(combined_conv, self.hidden_dim, dim=1)
        i = torch.sigmoid(cc_i)
        f = torch.sigmoid(cc_f)
        o = torch.sigmoid(cc_o)
        g = torch.tanh(cc_g)
        c_next = f * c_cur + i * g
        h_next = o * torch.tanh(c_next)
        return h_next, c_next

    def init_hidden(self, batch_size, device):
        return (torch.zeros(batch_size, self.hidden_dim, self.height, self.width).to(device),
                torch.zeros(batch_size, self.hidden_dim, self.height, self.width).to(device))

class ConvLSTM(nn.Module):
    def __init__(self, input_size, input_dim, hidden_dim, kernel_size, batch_first=False, bias=True, return_all_layers=False):
        super(ConvLSTM, self).__init__()
        self._check_kernel_size_consistency(kernel_size)
        if isinstance(hidden_dim, list):
            num_layers = len(hidden_dim)
        elif isinstance(hidden_dim, int):
            num_layers = 1
        # Make sure that both `kernel_size` and `hidden_dim` are lists having len == num_layers
        kernel_size = self._extend_for_multilayer(kernel_size, num_layers)
        hidden_dim = self._extend_for_multilayer(hidden_di
```
|
https://github.com/pytorch/examples/issues/1022
|
closed
|
[] | 2022-07-13T15:45:09Z
| 2022-07-16T17:13:15Z
| null |
TahaniFennir
|
pytorch/data
| 648
|
Chainer/Concater from single datapipe?
|
The `Concater` datapipe takes multiple DPs as input. Is there a class that would take a **single** datapipe of iterables instead? Something like this:
```py
class ConcaterIterable(IterDataPipe):
    def __init__(self, source_datapipe):
        self.source_datapipe = source_datapipe

    def __iter__(self):
        for iterable in self.source_datapipe:
            yield from iterable
```
Basically:
[`itertools.chain`](https://docs.python.org/3/library/itertools.html#itertools.chain) == `Concater`
[`itertools.chain.from_iterable`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable) == `ConcaterIterable`
Maybe a neat way of implementing this would be to keep a single `Concater` class, which would fall back to the `ConcaterIterable` behaviour if it's passed only one DP as input?
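A sketch of that fallback idea, reusing the names from the snippet above (this is not the actual torchdata implementation):
```python
from torchdata.datapipes.iter import IterDataPipe

class Concater(IterDataPipe):
    def __init__(self, *datapipes):
        self.datapipes = datapipes

    def __iter__(self):
        if len(self.datapipes) == 1:
            # single DP of iterables: behave like itertools.chain.from_iterable
            for iterable in self.datapipes[0]:
                yield from iterable
        else:
            # multiple DPs: behave like itertools.chain
            for dp in self.datapipes:
                yield from dp
```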
-----
Details: I need this for my benchmarking on manifold where each file is a big pickle archive of multiple images. My DP builder looks like this:
```py
def make_manifold_dp(root, dataset_size):
    handler = ManifoldPathHandler()

    dp = IoPathFileLister(root=root)
    dp.register_handler(handler)
    dp = dp.shuffle(buffer_size=dataset_size).sharding_filter()

    dp = IoPathFileOpener(dp, mode="rb")
    dp.register_handler(handler)

    dp = PickleLoaderDataPipe(dp)
    dp = ConcaterIterable(dp)  # <-- Needed here!
    return dp
```
|
https://github.com/meta-pytorch/data/issues/648
|
closed
|
[
"good first issue"
] | 2022-07-13T14:19:43Z
| 2023-03-14T20:25:01Z
| 9
|
NicolasHug
|
huggingface/optimum
| 290
|
Quantized Model size difference when using Optimum vs. Onnxruntime
|
Package versions (screenshots omitted)
While exporting a question answering model ("deepset/minilm-uncased-squad2") to ONNX and quantizing it (dynamic quantization) with Optimum, the model size is 68 MB.
The same model exported and quantized using ONNX Runtime directly is 32 MB.
Why is there a difference between the two exported models when the model and the quantization are the same?
**Optimum Code to convert the model to ONNX and Quantization.**
```python
from pathlib import Path
from optimum.onnxruntime import ORTModelForQuestionAnswering, ORTOptimizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig, OptimizationConfig
from optimum.onnxruntime import ORTQuantizer
from optimum.pipelines import pipeline
from transformers import AutoTokenizer
model_checkpoint = "deepset/minilm-uncased-squad2"
save_directory = Path.home()/'onnx/optimum/minilm-uncased-squad2'
save_directory.mkdir(exist_ok=True,parents=True)
file_name = "minilm-uncased-squad2.onnx"
onnx_path = save_directory/"minilm-uncased-squad2.onnx"
# Load a model from transformers and export it through the ONNX format
model = ORTModelForQuestionAnswering.from_pretrained(model_checkpoint, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# Save the onnx model and tokenizer
model.save_pretrained(save_directory, file_name=file_name)
tokenizer.save_pretrained(save_directory)
# Define the quantization methodology
qconfig = AutoQuantizationConfig.avx2(is_static=False, per_channel=True)
quantizer = ORTQuantizer.from_pretrained(model_checkpoint, feature="question-answering")
# Apply dynamic quantization on the model
quantizer.export(
onnx_model_path=onnx_path,
onnx_quantized_model_output_path= save_directory/"minilm-uncased-squad2-quantized.onnx",
quantization_config=qconfig,
)
quantizer.model.config.save_pretrained(save_directory)
Path(save_directory/"minilm-uncased-squad2-quantized.onnx").stat().st_size/1024**2
```
**ONNX Runtime Code**
```python
from transformers.convert_graph_to_onnx import convert
from transformers import AutoTokenizer
from pathlib import Path
model_ckpt = "deepset/minilm-uncased-squad2"
onnx_model_path = Path("../../onnx/minilm-uncased-squad2.onnx")
tokenizer= AutoTokenizer.from_pretrained(model_ckpt)
convert(framework="pt", model=model_ckpt, tokenizer=tokenizer,
output=onnx_model_path, opset=12, pipeline_name="question-answering")
from onnxruntime.quantization import quantize_dynamic, QuantType
onnx_model_path = Path("../../../onnx/minilm-uncased-squad2.onnx")
model_output = "../../onnx/minilm-uncased-squad2.quant.onnx"
quantize_dynamic(onnx_model_path, model_output, weight_type=QuantType.QInt8)
Path(model_output).stat().st_size/1024**2
```
Thank you
|
https://github.com/huggingface/optimum/issues/290
|
closed
|
[] | 2022-07-13T10:12:45Z
| 2022-07-14T09:24:23Z
| 3
|
Shamik-07
|
pytorch/pytorch
| 81,395
|
How to Do Semi-Asynchronous or Asynchronous Training with Pytorch
|
### 🚀 The feature, motivation and pitch
When PyTorch is used for distributed training, DDP is normally good enough for most situations. However, if the performance of different nodes differs, the speed of the whole training is decided by the slowest node. E.g. if worker 0 needs 1 second for a forward and backward pass while worker 1 needs 2 seconds, the time for one step will be 2 seconds.
So I am wondering if there is a way to do semi-asynchronous training with PyTorch?
### Alternatives
There is a similar library called [hivemind](https://github.com/learning-at-home/hivemind), but it is designed for training over the Internet, while we prefer to run the training job in our own cluster.
### Additional context
_No response_
|
https://github.com/pytorch/pytorch/issues/81395
|
closed
|
[] | 2022-07-13T09:42:48Z
| 2022-07-13T16:50:59Z
| null |
lsy643
|
pytorch/data
| 647
|
Update out-of-date example and colab
|
### 📚 The doc issue
The examples for Text/Vision/Audio are out-of-date: https://github.com/pytorch/data/tree/main/examples
The colab attached in README needs to be updated as well:
- How to install torchdata
- Example needs shuffle + sharding_filter
### Suggest a potential alternative/fix
None
|
https://github.com/meta-pytorch/data/issues/647
|
closed
|
[] | 2022-07-12T21:09:53Z
| 2023-02-02T14:39:40Z
| 5
|
ejguan
|
huggingface/datasets
| 4,675
|
Unable to use dataset with PyTorch dataloader
|
## Describe the bug
When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
    "para_crawl",
    name="enfr",
    cache_dir="/tmp/test/",
    split="train",
    keep_in_memory=True,
)
dataloader = DataLoader(ds.with_format("torch"), num_workers=32)
print(next(iter(dataloader)))
```
Is there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-/
Thanks in advance for your help!
## Expected results
The code should run with no error
## Actual results
```
AttributeError: 'str' object has no attribute 'dtype'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
https://github.com/huggingface/datasets/issues/4675
|
open
|
[
"bug"
] | 2022-07-12T15:04:04Z
| 2022-07-14T14:17:46Z
| 1
|
BlueskyFR
|
pytorch/functorch
| 956
|
Batching rule for searchsorted implementation
|
Hi,
Thanks for the great work, really enjoying functorch in my work. I have encountered the following when using vmap on a function which uses torch.searchsorted:
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::searchsorted.Tensor. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /Users/runner/work/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)
Looking forward to the implementation.
|
https://github.com/pytorch/functorch/issues/956
|
closed
|
[
"actionable"
] | 2022-07-12T06:36:04Z
| 2022-07-18T13:49:42Z
| 6
|
mingu6
|
pytorch/data
| 637
|
[TODO] Create dependency on TorchArrow?
|
This issue is generated from the TODO line
https://github.com/pytorch/data/blob/2f29adba451e1b87f1c0c654557d9dd98673fdd8/torchdata/datapipes/iter/util/dataframemaker.py#L15
|
https://github.com/meta-pytorch/data/issues/637
|
open
|
[] | 2022-07-11T17:34:07Z
| 2022-07-11T17:34:07Z
| 0
|
VitalyFedyunin
|
huggingface/datasets
| 4,671
|
Dataset Viewer issue for wmt16
|
### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status code: 400
Exception: NotImplementedError
Message: This is a abstract method
```
Thanks!
### Owner
No
|
https://github.com/huggingface/datasets/issues/4671
|
closed
|
[
"dataset-viewer"
] | 2022-07-11T08:34:11Z
| 2022-09-13T13:27:02Z
| 6
|
lewtun
|
huggingface/optimum
| 276
|
Force write of vanilla onnx model with `ORTQuantizer.export()`
|
### Feature request
Force write of the non-quantized onnx model with `ORTQuantizer.export()`, or add an option to force write.
### Motivation
Currently, if the `onnx_model_path` already exists, we don't write the non-quantized model into the indicated path.
https://github.com/huggingface/optimum/blob/04a2a6d290ca6ea6949844d1ae9a208ca95a79da/optimum/onnxruntime/quantization.py#L313-L315
Meanwhile, the quantized model is always written, even if there is already a model at the `onnx_quantized_model_output_path` (see https://github.com/onnx/onnx/blob/60d29c10c53ef7aa580291cb2b6360813b4328a3/onnx/__init__.py#L170).
Is there any reason for this different behavior? It led me to unexpected behavior, where the non-quantized and quantized models don't correspond if I change the model in my script. In this case, `export()` reuses the old non-quantized model to generate the quantized model, and all the quantizer attributes are ignored!
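In the meantime, a possible workaround is to remove any stale vanilla export before calling `export()`, so the two files cannot get out of sync (the path below is an assumption):
```python
from pathlib import Path

onnx_model_path = Path("onnx/model.onnx")  # hypothetical path of the vanilla export
if onnx_model_path.exists():
    onnx_model_path.unlink()  # force ORTQuantizer.export() to re-write it
```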
### Your contribution
I can do this if approved
|
https://github.com/huggingface/optimum/issues/276
|
closed
|
[] | 2022-07-09T08:44:27Z
| 2022-07-11T10:38:48Z
| 2
|
fxmarty
|
pytorch/data
| 580
|
[Linter] Ability to disable some lints
|
### 🚀 The feature
There are several options to disable specific linters.
Option 1. Disable with `linter-ignore: code`
Pros:
- Similar to known syntax of various linters
Cons:
- Need to modify code of datasets to disable something
```
datapipe = datapipe.sharding_filter().shuffle() # linter-ignore: shuffle-shard
```
Option 2. Global & Context disables
Pros:
- Can control datasets without modification of the code
Cons:
- Global might disable important errors
- Context requires additional indent
- Syntax feels weird
- Annoying to disable construct time linters (see below)
```
from torchdata import linter
linter.disable('shuffle-shard') # global
with linter.disable('shuffle-shard'): # context based
dl = DataLoader2(...)
```
Option 3. DLv2 argument / ReadingService argument
Pros:
- Local to specific DataLoader
- Can control datasets without modification of the code
Cons:
- Syntax feels weird
- Some linters might trigger/not in various ReadingServices
- Annoying to disable construct time linters (see below)
```
dl = DataLoader2(dp_graph, [adapter], disable_lint = ['shuffle-shard'])
```
Option 4. DataPipe 'attribute'
Pros:
- Can be defined by DataSet developer or by the user
- Can impact construct time error handling
Cons:
- Syntax feels weird
```datapipe = datapipe.sharding_filter().shuffle().disable_lint('shuffle-shard')```
and/or (as we can have an adapter to do the same job)
```dl = DataLoader(dp_graph,[DisableLint('shuffle-shard')], ...)```
Personally, I prefer the last variant, but I'm open to discussion.
|
https://github.com/meta-pytorch/data/issues/580
|
open
|
[] | 2022-07-08T17:25:25Z
| 2022-07-15T21:23:17Z
| 3
|
VitalyFedyunin
|
pytorch/pytorch
| 81,103
|
[Discussion] How to add MPS extension with custom kernel?
|
### 🚀 The feature, motivation and pitch
Hi,
I am working on adding MPS op for MPS backend with a custom kernel.
Here is an example:
https://github.com/grimoire/TorchMPSCustomOpsDemo
I am new to Metal. I am not sure if it is a good way (or the right way) to add such op. There are something I want to discuss:
## Device and CommandQueue
Since PyTorch has not exposed the MPS-related APIs, I had to copy some headers [from torch csrc](https://github.com/grimoire/TorchMPSCustomOpsDemo/tree/master/csrc/pytorch/mps). The library is built with `MPSDevice::getInstance()->device()` and the command is committed to `getCurrentMPSStream()`. I am not sure if I should flush on commit or not.
## LibraryFromUrl vs LibraryFromSource
It seems that a Metal library cannot be linked together with the other object files. So I have to:
Either load it at runtime, which leads to the problem of how to find the relative location of the `.metallib`:
```objc
// load from url
NSURL* metal_url = [NSURL fileURLWithPath: utl_str];
library->_library = [at::mps::MPSDevice::getInstance()->device() newLibraryWithURL: metal_url error:&error];
```
Or build it at runtime, which might take a long time to compile the kernel:
```objc
// build library from source string
NSString* code_str = [NSString stringWithCString: sources.c_str()];
library->_library = [at::mps::MPSDevice::getInstance()->device() newLibraryWithSource: code_str options: nil error:&error];
```
## BuildExtension
If we do not build the Metal kernel at runtime, we need to set up the compiler for the Metal kernel in `setup.py`.
Since the `build_ext` provided by Python and PyTorch does not support building Metal, I patched the `UnixCCompiler` in `BuildExtension` to add that support. Both `compile` and `link` need to be updated:
```python
# compile
def darwin_wrap_single_compile(obj, src, ext, cc_args, extra_postargs,
                               pp_opts) -> None:
    cflags = copy.deepcopy(extra_postargs)
    try:
        original_compiler = self.compiler.compiler_so
        if _is_metal_file(src):
            # use xcrun metal to compile metal file to `.air`
            metal = ['xcrun', 'metal']
            self.compiler.set_executable('compiler_so', metal)
            if isinstance(cflags, dict):
                cflags = cflags.get('metal', [])
            else:
                cflags = []
        elif isinstance(cflags, dict):
            cflags = cflags['cxx']
        original_compile(obj, src, ext, cc_args, cflags, pp_opts)
    finally:
        self.compiler.set_executable('compiler_so', original_compiler)

# link
def darwin_wrap_single_link(target_desc,
                            objects,
                            output_filename,
                            output_dir=None,
                            libraries=None,
                            library_dirs=None,
                            runtime_library_dirs=None,
                            export_symbols=None,
                            debug=0,
                            extra_preargs=None,
                            extra_postargs=None,
                            build_temp=None,
                            target_lang=None):
    if osp.splitext(objects[0])[1].lower() == '.air':
        for obj in objects:
            assert osp.splitext(obj)[1].lower(
            ) == '.air', f'Expect .air file, but get {obj}.'
        # link `.air` with xcrun metallib
        linker = ['xcrun', 'metallib']
        self.compiler.spawn(linker + objects + ['-o', output_filename])
    else:
        return original_link(target_desc, objects, output_filename,
                             output_dir, libraries, library_dirs,
                             runtime_library_dirs, export_symbols,
                             debug, extra_preargs, extra_postargs,
                             build_temp, target_lang)
```
The code looks ... ugly. Hope there is a better way to do that.
So ... any advice?
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @zou3519 @kulinseth @albanD
|
https://github.com/pytorch/pytorch/issues/81103
|
closed
|
[
"module: cpp-extensions",
"triaged",
"enhancement",
"topic: docs",
"module: mps"
] | 2022-07-08T12:32:14Z
| 2023-07-28T17:11:42Z
| null |
grimoire
|
pytorch/pytorch.github.io
| 1,071
|
Where is documented the resize and crop in EfficientNet for torchvision v0.12.0
|
## 📚 Documentation
Hello, I cannot find anywhere which resize and center crop were used for training the efficientnet_bx models.
Where is that information?
I can see it in the torchvision v0.13.0 documentation and code ([for example](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py#L522)).
Many of us still have projects on the older version.
Thanks
|
https://github.com/pytorch/pytorch.github.io/issues/1071
|
closed
|
[] | 2022-07-08T12:20:23Z
| 2022-07-22T22:06:23Z
| null |
mjack3
|
pytorch/vision
| 6,249
|
Error when create_feature_extractor in AlexNet
|
### 🐛 Describe the bug
When I try to obtain the feature of layer "classifier.4" in AlexNet, the program reports an error. The code is as follows:
```
import torch
from torchvision.models import alexnet, AlexNet_Weights
from torchvision.models.feature_extraction import create_feature_extractor
model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)
extractor = create_feature_extractor(model, {'classifier.4': 'feat'})
img = torch.rand(3,224,224)
out = extractor(img)
```
**Error message**
```
RuntimeError: mat1 and mat2 shapes cannot be multiplied (256x36 and 9216x4096)
```
I guess it is because the shape of the output from the "flatten" layer of AlexNet is 256x36 rather than 9216.
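For what it's worth, a sketch assuming that is indeed the cause (AlexNet flattens from dim 1, so an unbatched 3x224x224 input leaves a 256x36 matrix); feeding a batched NCHW tensor avoids the shape mismatch:
```python
img = torch.rand(1, 3, 224, 224)  # add the batch dimension
out = extractor(img)
print(out["feat"].shape)
```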
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.55
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.73.08
cuDNN version: Probably one of the following:
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0+cu116
[pip3] torchmetrics==0.9.1
[pip3] torchtext==0.12.0
[pip3] torchvision==0.13.0+cu116
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py39h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py39hd3c417c_0 defaults
[conda] mkl_random 1.2.2 py39h51133e4_0 defaults
[conda] numpy 1.22.3 py39he7a7128_0 defaults
[conda] numpy-base 1.22.3 py39hf524024_0 defaults
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.12.0+cu116 pypi_0 pypi
[conda] torchmetrics 0.9.1 pypi_0 pypi
[conda] torchtext 0.12.0 py39 pytorch
[conda] torchvision 0.13.0+cu116 pypi_0 pypi
```
cc @datumbox
|
https://github.com/pytorch/vision/issues/6249
|
closed
|
[
"question",
"module: models",
"topic: feature extraction"
] | 2022-07-08T09:28:06Z
| 2022-07-08T10:11:43Z
| null |
githwd2016
|
pytorch/vision
| 6,247
|
Probable missing argument for swin transformer
|
Hello,
When I inspect the Swin Transformer code in the original Swin repo, mmdetection, or detectron2, I notice that there is a parameter called `drop_path_rate` which I cannot see in the torchvision repo. Maybe I am overlooking it. Is there a similar parameter, and is it an important one?
Thanks in advance
cc @datumbox
|
https://github.com/pytorch/vision/issues/6247
|
closed
|
[
"question",
"module: models"
] | 2022-07-08T08:21:58Z
| 2022-07-11T13:17:40Z
| null |
artest08
|
pytorch/functorch
| 940
|
Question on how to batch over both: inputs and tangent vectors
|
I want to compute the jacobian vector product of a function F from R^d to R^D. But I need to do this at a batch of points x_1, ..., x_n in R^d and a batch of tangent vectors v_1, ..., v_m in R^d. Namely, for all i = 1, ..., n and j = 1, ..., m I need to compute the nxm jacobian vector products: J_F(x_i) * v_j.
Is there a way to do this by using vmap twice to loop over the batches x_i and v_j?
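A sketch of one way to do it, nesting `vmap` around `functorch.jvp`; the toy `F` below is an assumption just to make the snippet self-contained:
```python
import torch
from functorch import vmap, jvp

d, D, n, m = 3, 5, 4, 6

def F(x):
    # toy F: R^d -> R^D, standing in for the real function
    A = torch.arange(D * d, dtype=x.dtype).reshape(D, d)
    return torch.tanh(A @ x)

xs = torch.randn(n, d)  # points x_1, ..., x_n
vs = torch.randn(m, d)  # tangent vectors v_1, ..., v_m

def single_jvp(x, v):
    _, out = jvp(F, (x,), (v,))
    return out  # J_F(x) @ v, shape (D,)

# inner vmap loops over the tangents, outer vmap over the points -> (n, m, D)
all_products = vmap(lambda x: vmap(lambda v: single_jvp(x, v))(vs))(xs)
print(all_products.shape)  # torch.Size([4, 6, 5])
```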
|
https://github.com/pytorch/functorch/issues/940
|
open
|
[] | 2022-07-07T14:57:28Z
| 2022-07-12T17:47:23Z
| null |
sgstepaniants
|
pytorch/serve
| 1,725
|
Serving other framework models with Torchserve?
|
Hi everyone.
As in the title, I want to ask whether TorchServe can serve models from other frameworks, or PyTorch models only?
For example, I have a model written in MXNet. This is a snippet of the `initialize` method in my custom handler.
```python
def initialize(self, context):
    properties = context.system_properties
    if (torch.cuda.is_available() and
            properties.get("gpu_id") is not None):
        ctx_id = properties.get("gpu_id")
    else:
        ctx_id = -1

    self.manifest = context.manifest
    model_dir = properties.get("model_dir")
    prefix = os.path.join(model_dir, "model/resnet-50")

    # load model
    sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, 0)
    if ctx_id >= 0:
        self.ctx = mx.gpu(ctx_id)
    else:
        self.ctx = mx.cpu()
    self.model = mx.mod.Module(symbol=sym,
                               context=self.ctx,
                               label_names=None)
    self.model.bind(
        data_shapes=[('data', (1, 3, 640, 640))],
        for_training=False
    )
    self.model.set_params(arg_params, aux_params)

    self.initialized = True
```
For some reason, the pretrained MXNet model can't be loaded, but the same model works fine in my training and inference scripts. This is the error log.
```
2022-07-06T15:48:07,468 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File "/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/ts/model_loader.py", line 151, in load
2022-07-06T15:48:07,468 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - initialize_fn(service.context)
2022-07-06T15:48:07,469 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File "/tmp/models/4b6bbba5e16445ffbe70f89282a0d30a/handler.py", line 34, in initialize
2022-07-06T15:48:07,469 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, 0)
2022-07-06T15:48:07,469 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File "/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/mxnet/model.py", line 476, in load_checkpoint
2022-07-06T15:48:07,470 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - symbol = sym.load('%s-symbol.json' % prefix)
2022-07-06T15:48:07,470 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File "/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/mxnet/symbol/symbol.py", line 3054, in load
2022-07-06T15:48:07,470 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - check_call(_LIB.MXSymbolCreateFromFile(c_str(fname), ctypes.byref(handle)))
2022-07-06T15:48:07,471 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File "/apps/conda/huyvd/envs/insightface/lib/python3.8/site-packages/mxnet/base.py", line 246, in check_call
2022-07-06T15:48:07,471 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - raise get_last_ffi_error()
2022-07-06T15:48:07,471 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - mxnet.base.MXNetError: Traceback (most recent call last):
2022-07-06T15:48:07,472 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - File "../include/dmlc/././json.h", line 718
2022-07-06T15:48:07,472 [INFO ] W-9000-face_detect_1.0-stdout MODEL_LOG - MXNetError: Check failed: !is_->fail(): Error at Line 32, around ^``, Expect number
```
|
https://github.com/pytorch/serve/issues/1725
|
closed
|
[
"help wanted",
"question"
] | 2022-07-06T09:08:44Z
| 2022-07-13T07:58:10Z
| null |
vuongdanghuy
|
huggingface/optimum
| 262
|
How can i set number of threads for Optimum exported model?
|
### System Info
```shell
optimum==1.2.3
onnxruntime==1.11.1
onnx==1.12.0
transformers==4.20.1
python version 3.7.13
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi!
I can't specify the number of threads when running inference with Optimum ONNX models.
I didn't have this problem with the default transformers model before.
Is there a configuration option for this in Optimum?
### Optimum doesn't have a config for assigning the number of threads
```
from onnxruntime import SessionOptions
SessionOptions().intra_op_num_threads = 1
```
### Also, limiting at the OS level doesn't work:
```bash
taskset -c 0-16 python inference_onnx.py
```

```bash
taskset -c 0 python inference_onnx.py
```
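For reference, a minimal sketch (assuming an exported `model.onnx` path; adjust it to your setup) of how thread counts are usually limited at the onnxruntime level: the configured `SessionOptions` object has to be passed to the `InferenceSession` that runs the model, rather than created and discarded as in the snippet above.
```python
# Sketch: pass a configured SessionOptions to the InferenceSession running the model.
import onnxruntime as ort

options = ort.SessionOptions()
options.intra_op_num_threads = 1
options.inter_op_num_threads = 1

session = ort.InferenceSession(
    "onnx/model.onnx",  # hypothetical path to the exported model
    sess_options=options,
    providers=["CPUExecutionProvider"],
)
```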

|
https://github.com/huggingface/optimum/issues/262
|
closed
|
[
"bug"
] | 2022-07-06T06:53:30Z
| 2022-09-19T11:25:23Z
| 1
|
MiladMolazadeh
|
huggingface/optimum
| 257
|
Optimum Inference next steps
|
# What is this issue for?
This issue is a list of potential next steps for improving the inference experience using `optimum`. The current list applies to the main namespace of optimum but should soon be extended to other namespaces, including `intel`, `habana`, `graphcore`.
## Next Steps/Features
- [x] #199
- [x] #254
- [x] #213
- [x] #258
- [x] #259
- [x] #260
- [x] #261
- [ ] add new Accelerators, INC, OpenVino.....
---
_Note: this issue will be continuously updated to keep track of the developments. If you are part of the community and interested in contributing, feel free to pick one and open a PR._
|
https://github.com/huggingface/optimum/issues/257
|
closed
|
[
"inference",
"Stale"
] | 2022-07-06T05:02:12Z
| 2025-09-13T02:01:29Z
| 1
|
philschmid
|
pytorch/TensorRT
| 1,166
|
❓ [Question] How to run Torch-Tensorrt on JETSON AGX ORIN?
|
## ❓ Question
**Not able to run Torch-TensorRT on Jetson AGX Orin**
As per the [release note](https://github.com/pytorch/TensorRT/discussions/1043), it is mentioned that the current release doesn't support JetPack 5.0DP, but Orin only supports JetPack 5.0DP (I might be wrong, but I'm inferring this from the [JetPack Archives](https://developer.nvidia.com/embedded/jetpack-archive)). **Is there a way to run Torch-TensorRT on Orin?** If not, what's the possible timeline for a new release with this support?
## What you have already tried
I tried building for Python, as suggested in the repo; it enables `import torch_tensorrt`, but the module doesn't expose any attributes.
## Environment
- PyTorch Version (e.g., 1.0): 1.11
- CPU Architecture: arm64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): tried both, wheels provided [here ](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048) and building from source(instruction from [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048)).
- Build command you used (if compiling from source): python3 setup.py --use_cxx11_abi (however, this refers to jetpack 4.6 by default)
- Python version: 3.8
- CUDA version: 11.4
- GPU models and configuration: Jetson ORIN
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/1166
|
closed
|
[
"question",
"channel: linux-jetpack"
] | 2022-07-05T19:46:00Z
| 2022-08-11T02:55:46Z
| null |
krmayankb
|
pytorch/functorch
| 933
|
Cannot import vmap after new release
|
I am installing functorch on Google Colab; when I don't specify the version, it installs functorch 0.2.2 and PyTorch 1.12.0, and uninstalls the PyTorch 1.11.0 that is currently installed on Colab. But on the line where I import vmap, it throws an error that functorch is not compatible with the PyTorch 1.12.0 installation:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-1-0691ca18293b>](https://localhost:8080/#) in <module>()
3
4 from torchsummary import summary
----> 5 from functorch import vmap
6 import torch
7 import torch.nn as nn
[/usr/local/lib/python3.7/dist-packages/functorch/__init__.py](https://localhost:8080/#) in <module>()
20 if torch_cuda_version not in pytorch_cuda_restrictions:
21 raise RuntimeError(
---> 22 f"We've detected an installation of PyTorch 1.12 with {verbose_torch_cuda_version} support. "
23 "This functorch 0.2.0 binary is not compatible with the PyTorch installation. "
24 "Please see our install page for suggestions on how to resolve this: "
RuntimeError: We've detected an installation of PyTorch 1.12 with CUDA 10.2 support. This functorch 0.2.0 binary is not compatible with the PyTorch installation. Please see our install page for suggestions on how to resolve this: https://pytorch.org/functorch/stable/install.html
```
I tried the older version, functorch 0.1.1 with PyTorch 1.11.0, but it also gives some errors during the import:
```
ImportError Traceback (most recent call last)
[<ipython-input-3-abbd2ba6241c>](https://localhost:8080/#) in <module>()
3
4 from torchsummary import summary
----> 5 from functorch import vmap
6 import torch
7 import torch.nn as nn
[/usr/local/lib/python3.7/dist-packages/functorch/__init__.py](https://localhost:8080/#) in <module>()
5 # LICENSE file in the root directory of this source tree.
6 import torch
----> 7 from . import _C
8
9 # Monkey patch PyTorch. This is a hack, we should try to upstream
ImportError: /usr/local/lib/python3.7/dist-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
```
Note: I was able to use vmap from the older version just a few hours ago; then I came back to the notebook, started it again, and now it doesn't work.
|
https://github.com/pytorch/functorch/issues/933
|
open
|
[] | 2022-07-05T18:47:06Z
| 2022-08-08T14:31:27Z
| 4
|
KananMahammadli
|
pytorch/vision
| 6,239
|
n classes in ConvNeXt model
|
### 🐛 Describe the bug
HI,
I'm trying to train a ConvNeXt tiny model as a binary classifier by loading the model architecture and pretrained weights from torchvision.models.
I use the following two lines of code to load the model and change the number of output nodes:
>num_classes=2
model_ft = models.convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
model_ft.classifier[2].out_features = num_classes
And when I print this layer of the mode I get:
>print(model_ft.classifier[2])
>Linear(in_features=768, out_features=2, bias=True)
This suggests that the change has been made. However, when I train the model, the output has dimensions of 42 x 1,000, i.e. _batch_size_ x the number of ImageNet classes:
>batch_size=42
outputs = model(inputs)
print(outputs.size())
>torch.Size([42, 1000])
Any thoughts on how to solve this problem?
Cheers,
Jamie
p.s. it seems like the issue might be that the number of classes is hard-coded as 1000 in:
pytorch/vision/tree/main/torchvision/models/convnext.py
Lines 90:100
```python
class ConvNeXt(nn.Module):
    def __init__(
        self,
        block_setting: List[CNBlockConfig],
        stochastic_depth_prob: float = 0.0,
        layer_scale: float = 1e-6,
        num_classes: int = 1000,
        block: Optional[Callable[..., nn.Module]] = None,
        norm_layer: Optional[Callable[..., nn.Module]] = None,
        **kwargs: Any,
    ) -> None:
```
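A possible workaround, sketched below: assigning `out_features` only changes the attribute on the existing `nn.Linear`, not its weight matrix, so the layer still produces 1,000 logits. Replacing the module itself (the layer index and `in_features` follow the classifier layout printed above) should give a 2-class head.
```python
# Sketch: replace the final Linear module instead of mutating out_features.
import torch
import torch.nn as nn
from torchvision import models
from torchvision.models import ConvNeXt_Tiny_Weights

num_classes = 2
model_ft = models.convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
in_features = model_ft.classifier[2].in_features  # 768 for convnext_tiny
model_ft.classifier[2] = nn.Linear(in_features, num_classes)

out = model_ft(torch.randn(1, 3, 224, 224))
print(out.shape)  # expected: torch.Size([1, 2])
```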
### Versions
Pytorch version: 1.13.0.dev20220624
Python 3.8
cc @datumbox
|
https://github.com/pytorch/vision/issues/6239
|
closed
|
[
"question",
"module: models"
] | 2022-07-05T17:47:40Z
| 2022-07-06T08:13:15Z
| null |
jrsykes
|
pytorch/vision
| 6,235
|
Creating a `cache-dataset` for Video classification.
|
Hello, I am currently trying to test the video classification model R(2+1)D on Kinetics400. However, the data loading speed is very slow. I believe it can be improved by caching the data, but I am not sure how to cache video files; caching is also mentioned in the code. How can I cache video files? Is a cache-dataset feature also planned for future updates?
Thank you !
cc @datumbox
|
https://github.com/pytorch/vision/issues/6235
|
closed
|
[
"question",
"module: reference scripts",
"module: video"
] | 2022-07-05T04:27:54Z
| 2022-07-05T08:28:20Z
| null |
yakhyo
|
huggingface/datasets
| 4,621
|
ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present
|
## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or to pass features manually (when there is a tool that can infer them automatically) doesn't look like a good idea to me either.
## Steps to reproduce the bug
### Clone an example dataset from the Hub
```bash
git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata
```
### Try to load it
```python
from datasets import load_dataset
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False)
```
or even just
```python
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True)
```
as `drop_labels=False` is a default value.
## Expected results
A DatasetDict object with two features: `"image"` and `"label"`.
## Actual results
```
Traceback (most recent call last):
File "/home/polina/workspace/datasets/debug.py", line 18, in <module>
ds = load_dataset(
File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset
builder_instance.download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example
return encode_nested_example(self, example)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example
{
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp>
{
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
```
## Environment info
`datasets` master branch
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
|
https://github.com/huggingface/datasets/issues/4621
|
closed
|
[
"bug"
] | 2022-07-04T11:21:44Z
| 2022-07-15T14:24:24Z
| 0
|
polinaeterna
|
pytorch/audio
| 2,526
|
Need more detail and a tutorial on how to use a language model to decrease the word error rate.
|
### 📚 The doc issue
1. How do we build our own language model and add it to a model such as wav2vec2? Many of the solutions from the docs require using another library.
2. If 1 requires training the language model again, then it looks like we can use our own text file for the language model to perform the beam search.

https://github.com/facebookresearch/fairseq/issues/3157
I am working on a project to provide subtitles for deaf students. It has been found that running wav2vec2 with a language model, such as an n-gram, drops the word error rate. Thus, I was thinking of adding lecture notes or a textbook to decrease the WER for college class subtitling. But a lot of language model implementations for PyTorch audio models require another library, such as KenLM. I was thinking that if it is an n-gram model, it shouldn't be difficult to have it in PyTorch. Also, if we want to deploy it in another language, such as JavaScript, it will require ONNX in PyTorch, so we may need to write the language model in PyTorch rather than in KenLM.
First, this has been asked before, and it looks like we do not need to train the language model (such as an n-gram) again. We just need to provide a text file that has all the possible words over which we want the n-gram model to do the beam search.

But you can see the doc only gives you one line of code to "shortcut" everything, without telling the user how to use their own text file.
Again, if we look at the doc, we see "Builds CTC beam search decoder from Flashlight". So how do we use our own language model? My point in using my own language model is not that I have some powerful transformer models; it is that I need to be clear on how the decoder turns the wav2vec2 output into text with the language model.
Thus this issue was proposed, and I feel the process was not explained in detail.
https://github.com/facebookresearch/fairseq/issues/3157
Suggestion: I prefer HuBERT since it is smaller than Wav2vec2.
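For reference, a minimal sketch of the lexicon-constrained decoder from `torchaudio.models.decoder` (available since torchaudio 0.12). The file names are placeholders: `tokens.txt` lists the acoustic model's output tokens, `lexicon.txt` maps each allowed word to its token spelling, and `lm` can be left as `None` to beam-search over the word list alone, or set to a KenLM binary.
```python
# Sketch: beam search constrained by a custom word list (lexicon), optionally with an n-gram LM.
import torch
from torchaudio.models.decoder import ctc_decoder

decoder = ctc_decoder(
    lexicon="lexicon.txt",   # placeholder: allowed words, one per line with their token spelling
    tokens="tokens.txt",     # placeholder: output tokens of the acoustic model
    lm=None,                 # or a path to a KenLM binary for an n-gram LM
    nbest=1,
    beam_size=50,
    word_score=0.0,
)

emissions = torch.randn(1, 100, 32)   # dummy (batch, frames, num_tokens) acoustic output
hypotheses = decoder(emissions)
print(" ".join(hypotheses[0][0].words))
```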
|
https://github.com/pytorch/audio/issues/2526
|
open
|
[] | 2022-07-03T11:05:05Z
| 2022-07-18T21:02:59Z
| null |
AliceSum
|
huggingface/datasets
| 4,619
|
np arrays get turned into native lists
|
## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datasets.load_dataset("glue", "mrpc")["validation"]
Reusing dataset glue (...)
100%|███████████████████████████████████████████████| 3/3 [00:00<00:00, 1360.61it/s]
>>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)
100%|██████████████████████████████████████████| 408/408 [00:00<00:00, 10819.97ex/s]
>>> dataset2[0]["tmp"]
[0.5]
>>> type(dataset2[0]["tmp"])
<class 'list'>
```
## Expected results
`dataset2[0]["tmp"]` should be an `np.ndarray`.
## Actual results
It's a list.
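For what it's worth, here is a sketch of one way to get NumPy arrays back out; this doesn't change how the values are stored in Arrow, only the output format of the dataset.
```python
# Sketch: ask the dataset to return NumPy-formatted values for the new column.
import numpy as np
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc")["validation"]
dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)

dataset2.set_format(type="numpy", columns=["tmp"], output_all_columns=True)
print(type(dataset2[0]["tmp"]))  # expected: <class 'numpy.ndarray'>
```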
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: mac, though I'm pretty sure it happens on a linux machine too
- Python version: 3.9.7
- PyArrow version: 6.0.1
|
https://github.com/huggingface/datasets/issues/4619
|
open
|
[
"bug"
] | 2022-07-02T17:54:57Z
| 2022-07-03T20:27:07Z
| 3
|
ZhaofengWu
|
pytorch/tutorials
| 1,961
|
Update SpaCy to latest.
|
The old `spacy==2.3.2` is out of date, and I cannot install it (due to build failure). Is it possible to remove the version constraint?
|
https://github.com/pytorch/tutorials/issues/1961
|
closed
|
[
"dependencies"
] | 2022-07-02T11:01:23Z
| 2022-12-09T17:47:43Z
| 2
|
evan0greenup
|
pytorch/tutorials
| 1,960
|
Question: how to run individual tutorial?
|
I don't want to run `make doc`; I just want to run a specific individual tutorial.
Is it safe to directly run it as script?
|
https://github.com/pytorch/tutorials/issues/1960
|
closed
|
[
"question"
] | 2022-07-02T10:59:34Z
| 2022-08-01T21:15:19Z
| null |
evan0greenup
|
pytorch/TensorRT
| 1,156
|
❓ [Question] Support for CUDA 11.6?
|
## Does latest version support CUDA 11.6❓
PyTorch officially supports CUDA 11.6; however, the docs say torch_tensorrt supports CUDA 11.3 at most. But in some issues it is said that CUDA 11.6 is used. Is CUDA 11.6 officially supported by torch_tensorrt?
## Environment
- PyTorch Version (e.g., 1.0): any
- CPU Architecture:
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.8 or 3.9
- CUDA version: 11.6
- GPU models and configuration:
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/1156
|
closed
|
[
"question",
"component: dependencies"
] | 2022-07-01T11:41:24Z
| 2022-08-12T03:16:44Z
| null |
alercelik
|
pytorch/data
| 564
|
[RFC] Restricting `IterDataPipe` to have method `__iter__` as a generator function without method `__next__`
|
### 🚀 The feature
**Note: this is an RFC solely to discuss the design. There is currently no plan to implement this feature. This issue serves as developer documentation of the current design and of the complexity/issues that we encounter with certain aspects of `IterDataPipe`. It also provides a space to discuss what we can potentially do.**
The overarching goal is to simplify certain aspects of `IterDataPipe` while providing flexibility for users.
The proposed feature is to restrict `IterDataPipe` such that it must have an `__iter__` method that is a generator function, and it cannot have the method `__next__`. All built-in `IterDataPipe`s are already implemented that way, so this will only impact custom `IterDataPipe`s that users create.
Alternate solutions are also discussed below. We welcome suggestions as well!
### Motivation, pitch
For context, currently, there are 3 main types of `IterDataPipe` that is allowed. The ones with:
1. `__iter__` is a generator function (e.g. use `yield`)
2. `__iter__` that returns an iterator but is not a generator function
3. `__iter__` returns `self` and a `__next__` method exists
Note that it is possible for users to have `__next__` without `__iter__` returning `self`, but that is not recommended and has unexpected behavior. All built-in DataPipes belong to type 1 (a minimal illustration of the three types is sketched below).
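```python
# Illustration only: minimal sketches of the three shapes described above
# (these are not built-in DataPipes).
from torchdata.datapipes.iter import IterDataPipe

class GeneratorStyle(IterDataPipe):          # type 1: __iter__ is a generator function
    def __iter__(self):
        yield from range(3)

class IteratorReturningStyle(IterDataPipe):  # type 2: __iter__ returns an iterator, not a generator
    def __iter__(self):
        return iter(range(3))

class NextStyle(IterDataPipe):               # type 3: __iter__ returns self, __next__ is defined
    def __init__(self):
        self._i = 0
    def __iter__(self):
        self._i = 0
        return self
    def __next__(self):
        if self._i >= 3:
            raise StopIteration
        self._i += 1
        return self._i - 1
```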
The fact that there are 3 types of `IterDataPipe` makes the implementation of [`hook_iterator`](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/datapipes/_hook_iterator.py) very complicated.
The hook is called every time `__iter__` of an `IterDataPipe` is invoked. The hook tries to do a few things:
* Enforce the single-iterator-per-`IterDataPipe` constraint (see operations related to `valid_iterator_id`) and reset the DataPipe as needed
* Count the number of elements yielded
* Allow performance profiling of operations
The fact that there is no restriction on how users can implement `__iter__` and `__next__` for custom DataPipes means `hook_iterator` must be complicated in order to handle the many corner cases that can happen. As you can see, we have a long code block to manage the behavior of type 1, and have a custom class to manage the behavior of type 2 and 3. The behavior of the method `__next__` (type 3) is difficult to control and can lead to unexpected behaviors if users aren't careful.
If we are able to restrict `IterDataPipe`, the implementation of those functionalities within `hook_iterator` will be much cleaner at the cost of providing less flexibility for `IterDataPipe`. I believe users also will be less likely to run into errors if we have such restriction.
### Alternatives
Suggestion from @ejguan:
Create a class called `DataPipeIterator`, which contains `__self__` and `__next__`. `__iter__` from DataPipe always return a specific DataPipeIterator object. This might resolve the most of our problem.
### Additional context
Such restriction will likely break some downstream usages. Whatever we do, we will proceed carefully.
Performance impact is also an aspect that we must consider as well.
Feedback and suggestions are more than welcome. Let us know if you have experienced issues while using `torchdata` or have had a bad experience while implementing new features.
|
https://github.com/meta-pytorch/data/issues/564
|
open
|
[] | 2022-06-30T20:39:17Z
| 2022-06-30T20:41:30Z
| 0
|
NivekT
|
huggingface/datasets
| 4,603
|
CI fails recurrently and randomly on Windows
|
As reported by @lhoestq,
The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular, it seems that building the wheels fails. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
|
https://github.com/huggingface/datasets/issues/4603
|
closed
|
[
"bug"
] | 2022-06-30T10:59:58Z
| 2022-06-30T13:22:25Z
| 0
|
albertvillanova
|
pytorch/vision
| 6,221
|
Customize FasterRCNN
|
Hi,
I've been trying, unsuccessfully, to customize the implementation of FasterRCNN provided by torchvision. For example, one thing I would like to do would be to write a customized [postprocess_detections](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/detection/roi_heads.py#L668) function that returns confidences for all labels and not only the one with the highest confidence.
In the past I've managed to successfully overwrite the loss function by doing something like
```
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
torchvision.models.detection.roi_heads.fastrcnn_loss = custom_loss
```
But the postprocess_detections function is within the RoIHeads class. If I try to replace the RoIHeads class before defining my model, I get this error:
```
torchvision.models.detection.roi_heads.RoIHeads = RoIHeadsCustom
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
pretrained=True
)
```
```
Traceback (most recent call last):
File "test2.py", line 80, in <module>
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
File "/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py", line 470, in fasterrcnn_mobilenet_v3_large_fpn
return _fasterrcnn_mobilenet_v3_large_fpn(weights_name, pretrained=pretrained, progress=progress,
File "/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py", line 393, in _fasterrcnn_mobilenet_v3_large_fpn
model = FasterRCNN(backbone, num_classes, rpn_anchor_generator=AnchorGenerator(anchor_sizes, aspect_ratios),
File "/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py", line 222, in __init__
roi_heads = RoIHeads(
File "/home/paul/.local/lib/python3.8/site-packages/torchvision/models/detection/roi_heads.py", line 512, in __init__
super(RoIHeads, self).__init__()
TypeError: super(type, obj): obj must be an instance or subtype of type
```
But if I define it afterwards, the object is already created and the custom class is not taken into account
```
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
pretrained=True
)
torchvision.models.detection.roi_heads.RoIHeads = RoIHeadsCustom
```
If anyone has ideas on how to easily customize torchvision models, that would be a great help. The only solution I'm seeing is creating a fork of torchvision, which I'd rather avoid.
Thanks.
cc @datumbox @YosuaMichael
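One possible workaround (a sketch, not an officially supported API): instead of swapping the `RoIHeads` class globally, bind a replacement function onto the already-constructed `model.roi_heads` instance with `types.MethodType`. The replacement must keep the same signature as `RoIHeads.postprocess_detections`; the body below just delegates to the original as a placeholder.
```python
# Sketch: patch the method on the roi_heads instance rather than replacing the class.
import types
import torchvision
from torchvision.models.detection.roi_heads import RoIHeads

def custom_postprocess_detections(self, class_logits, box_regression, proposals, image_shapes):
    # placeholder: call the original implementation; replace with logic that
    # keeps confidences for all labels
    return RoIHeads.postprocess_detections(self, class_logits, box_regression, proposals, image_shapes)

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
model.roi_heads.postprocess_detections = types.MethodType(
    custom_postprocess_detections, model.roi_heads
)
```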
|
https://github.com/pytorch/vision/issues/6221
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2022-06-30T09:40:50Z
| 2022-07-06T14:15:49Z
| null |
paullixo
|
huggingface/dataset-viewer
| 430
|
Shuffle the rows?
|
see https://github.com/huggingface/moon-landing/issues/3375
|
https://github.com/huggingface/dataset-viewer/issues/430
|
closed
|
[
"question",
"feature request",
"P2"
] | 2022-06-30T08:31:20Z
| 2023-09-08T13:41:42Z
| null |
severo
|
pytorch/TensorRT
| 1,150
|
❓ [Question] The same inputs producing very different outputs via pytorch & TensorRT.
|
## ❓ Question
<!-- Your question -->
Hey, guys!
I'm new to TensorRT. After setting up the environment, I was very excited to try the official demo on this page: [Resnet50-example](https://pytorch.org/TensorRT/_notebooks/Resnet50-example.html). I got very different outputs for the same inputs via PyTorch and TensorRT.
But when I use efficientnet_b3 as the model, the results are the same.
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version: 1.11.0+cu113
- TensorRT Version: 8.4.1.5
- torch_tensorrt. Version: 1.1.0
- CPU Architecture: x86_64
- OS (e.g., Linux): Ubuntu 20.04.2 LTS
- How you installed PyTorch : pip
- How you installed TensorRT: pip
- Are you using local sources or building from archives: No
- Python version: 3.8.8
- CUDA version: 11.4
- GPU models and configuration: NVIDIA GeForce RTX 3090
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
Here is my model convert code from PyTorch to TensorRT
```python
import time
import numpy as np
import torch
torch.manual_seed(1989)
import tensorrt
import torch_tensorrt
from torchvision import models
if __name__ == '__main__':
    # 1 get pytorch model
    model = models.resnet50(pretrained=False)
    #model = models.efficientnet_b3(pretrained=False)
    model = model.eval().to('cuda')
    # 2 convert to tensorrt model
    input_shape = (1, 3, 224, 224)
    ts_model = torch.jit.script(model)
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input(input_shape, dtype=torch.float32)],
        enabled_precisions=torch.float32,
        workspace_size=1 << 22
    )
    print('Convert over.')
    #torch.jit.save(trt_model, 'trt_model.pt')
    #trt_model = torch.jit.load('trt_model.pt')
    # 3 check speedup
    inputs = torch.randn(input_shape).to('cuda')
    benchmark(model, inputs, dtype='fp32')
    benchmark(ts_model, inputs, dtype='fp32')
    benchmark(trt_model, inputs, dtype='fp32')
```
And here is the benchmark function for the same inputs.
```python
def benchmark(model, inputs, dtype='fp32', nwarmup=50, nruns=3000):
    model.eval()
    if dtype == 'fp16':
        inputs = inputs.half()
    print("Warm up ...")
    with torch.no_grad():
        for _ in range(nwarmup):
            outputs = model(inputs)
    torch.cuda.synchronize()
    print("Start timing ...")
    timings = []
    with torch.no_grad():
        for i in range(1, nruns + 1):
            start_time = time.time()
            outputs = model(inputs)
            torch.cuda.synchronize()
            end_time = time.time()
            timings.append(end_time - start_time)
            if i % 1000 == 0:
                print('Iteration %d/%d, avg batch time %.2f ms' % (i, nruns, np.mean(timings) * 1000))
    print(outputs[0][:8])
```
And here are the strange outputs that I got. 🤯
For efficientnet_b3
>WARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0
WARNING: [Torch-TensorRT TorchScript Conversion Context] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0
WARNING: [Torch-TensorRT TorchScript Conversion Context] - The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0
WARNING: [Torch-TensorRT] - TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0
Convert over.
Warm up ...
Start timing ...
Iteration 1000/3000, avg batch time 10.76 ms
Iteration 2000/3000, avg batch time 10.75 ms
Iteration 3000/3000, avg batch time 10.75 ms
tensor([ 2.5864e-15, -2.6358e-15, 4.9805e-15, 6.8343e-15, 3.6509e-16,
1.3975e-15, 1.7666e-15, -2.6696e-15], device='cuda:0')
Warm up ...
Start timing ...
Iteration 1000/3000, avg batch time 6.92 ms
Iteration 2000/3000, avg batch time 6.92 ms
Iteration 3000/3000, avg batch time 6.92 ms
tensor([ 2.5864e-15, -2.6358e-15, 4.9805e-15, 6.8343e-15, 3.6509e-16,
1.3975e-15, 1.7666e-15, -2.6696e-15], device='cuda:0')
Warm up ...
Start timing ...
Iteration 1000/3000, avg batch time 0.59 ms
Iteration 2000/3000, avg batch time 0.59 ms
Iteration 3000/3000, avg batch time 0.59 ms
tensor([ 2.5864e-15, -2.6358e-15, 4.9805e-15, 6.8343e-15, 3.6509e-16,
1.3975e-15, 1.7666e-15, -2.6696e-15], devic
|
https://github.com/pytorch/TensorRT/issues/1150
|
closed
|
[
"bug",
"question",
"No Activity",
"performance"
] | 2022-06-29T10:10:01Z
| 2023-03-26T00:02:20Z
| null |
Amoko
|
pytorch/vision
| 6,216
|
EfficientNet_v2 models not loading through torchvision
|
### 🐛 Describe the bug
I am trying to train efficientnet_v2 classification models on a custom dataset using
[this script](https://github.com/pytorch/vision/tree/f75272fa704452a1d9405126c3a09e2d7432d489/references/classification).
I used the following command:
```
python3 train.py --model efficientnet_v2 --batch-size 128 --lr 0.5 --lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear --auto-augment ta_wide --epochs 600 --random-erase 0.1 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 --weight-decay 0.00002 --norm-weight-decay 0.0 --train-crop-size 384 --model-ema --val-crop-size 480 --val-resize-size 480
```
I get the following error:
```
Traceback (most recent call last):
File "train.py", line 501, in <module>
main(args)
File "train.py", line 224, in main
model = torchvision.models.__dict__[args.model](weights=args.weights, num_classes=num_classes)
KeyError: 'efficientnet_v2'
```
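As a quick sanity check (a sketch), you can list which EfficientNet entry points the installed torchvision actually registers; the training script looks the model up by this exact name, and depending on the version the V2 variants may be exposed under size-suffixed names such as `efficientnet_v2_s`, or may not be available at all.
```python
# Sketch: list the EfficientNet constructors registered in the installed torchvision.
import torchvision

available = sorted(
    name for name, obj in torchvision.models.__dict__.items()
    if callable(obj) and name.startswith("efficientnet")
)
print(available)
```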
### Versions
Collecting environment information...
PyTorch version: 1.10.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.19.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1030-aws-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.5.119
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 510.73.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn.so.8.3.1
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.1
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.1
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.1
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.1
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.1
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] torch==1.10.0
[pip3] torchaudio==0.8.0
[pip3] torchvision==0.11.1
[conda] Could not collect
cc @datumbox
|
https://github.com/pytorch/vision/issues/6216
|
closed
|
[
"question",
"module: models"
] | 2022-06-29T09:12:09Z
| 2022-06-29T11:09:27Z
| null |
suyashhchougule
|
huggingface/datasets
| 4,591
|
Can't push Images to hub with manual Dataset
|
## Describe the bug
If I create a dataset including an 'Image' feature manually, decoded images are not pushed when pushing to the hub;
instead it looks for the image at the local path where the image is (or used to be).
This doesn't (at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is complicated.
This happens even though the dataset looks like it contains decoded images:

and I use `embed_external_files=True` when calling `push_to_hub` (same result with False).
## Steps to reproduce the bug
```python
from PIL import Image
from datasets import Image as ImageFeature
from datasets import Features,Dataset
#manually create dataset
feats = Features(
    {
        "images": [ImageFeature()],  # same even if explicitly ImageFeature(decode=True)
        "input_image": ImageFeature(),
    }
)
test_data={"images":[[Image.open("test.jpg"),Image.open("test.jpg"),Image.open("test.jpg")]], "input_image":[Image.open("test.jpg")]}
test_dataset=Dataset.from_dict(test_data,features=feats)
print(test_dataset)
test_dataset.push_to_hub("ceyda/image_test_public",private=False,token="",embed_external_files=True)
# clear cache rm -r ~/.cache/huggingface
# remove "test.jpg" # remove to see that it is looking for image on the local path
test_dataset=load_dataset("ceyda/image_test_public",use_auth_token="")
print(test_dataset)
print(test_dataset['train'][0])
```
## Expected results
should be able to push image bytes if dataset has `Image(decode=True)`
## Actual results
It errors because it is trying to decode the file from a local path that no longer exists.
```
----> print(test_dataset['train'][0])
File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File ~/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
...
-> 3068 fp = builtins.open(filename, "rb")
3069 exclusive_fp = True
3071 try:
FileNotFoundError: [Errno 2] No such file or directory: 'test.jpg'
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
https://github.com/huggingface/datasets/issues/4591
|
closed
|
[
"bug"
] | 2022-06-29T00:01:23Z
| 2022-07-08T12:01:36Z
| 1
|
cceyda
|
pytorch/serve
| 1,713
|
How to specify which GPU is to be used for serving?
|
### 🚀 The feature
```console
:~$ lspci | grep VGA
0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P Integrated Graphics Controller (rev 0c)
0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA103M [GeForce RTX 3080 Ti Mobile] (rev a1)
:~$ glxinfo | egrep -i "device|memory"
Device: Mesa Intel(R) Graphics (ADL GT2) (0x46a6)
Video memory: 29872MB
Unified memory: yes
GL_AMD_performance_monitor, GL_AMD_pinned_memory,
GL_EXT_framebuffer_object, GL_EXT_framebuffer_sRGB, GL_EXT_memory_object,
GL_EXT_memory_object_fd, GL_EXT_packed_depth_stencil, GL_EXT_packed_float,
GL_AMD_pinned_memory, GL_AMD_query_buffer_object,
GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4, GL_EXT_memory_object,
GL_EXT_memory_object_fd, GL_EXT_multi_draw_arrays,
GL_EXT_memory_object, GL_EXT_memory_object_fd, GL_EXT_multi_draw_arrays,
:~$ nvidia-smi
Command 'nvidia-smi' not found, but can be installed with:
sudo apt install nvidia-utils-418-server # version 418.226.00-0ubuntu4, or
sudo apt install nvidia-utils-390 # version 390.151-0ubuntu0.22.04.1
sudo apt install nvidia-utils-450-server # version 450.191.01-0ubuntu0.22.04.1
sudo apt install nvidia-utils-470 # version 470.129.06-0ubuntu0.22.04.1
sudo apt install nvidia-utils-470-server # version 470.129.06-0ubuntu0.22.04.1
sudo apt install nvidia-utils-510 # version 510.73.05-0ubuntu0.22.04.1
sudo apt install nvidia-utils-510-server # version 510.73.08-0ubuntu0.22.04.1
```
### Motivation, pitch
I just want the **NVIDIA** GPU to be used for **torchserve**, and the other one (**Intel**) for display, if possible.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/serve/issues/1713
|
closed
|
[
"triaged_wait",
"support"
] | 2022-06-28T19:35:33Z
| 2022-07-07T02:13:46Z
| null |
jiapei-nexera
|
huggingface/dataset-viewer
| 423
|
Add terms of service to the API?
|
See https://swagger.io/specification/#info-object
Maybe to mention a rate-limiter, if we implement one
|
https://github.com/huggingface/dataset-viewer/issues/423
|
closed
|
[
"question"
] | 2022-06-28T11:27:16Z
| 2022-09-16T17:30:38Z
| null |
severo
|
pytorch/vision
| 6,206
|
Error with pytorch-nightly version
|
### 🐛 Describe the bug
The error is below:
Traceback (most recent call last):
File "/home/hxj/PycharmProjects/ImageNetTrain/main.py", line 9, in <module>
weights = P.models.ResNet50_Weights.IMAGENET1K_V1
AttributeError: module 'torchvision.prototype.models' has no attribute 'ResNet50_Weights'
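For reference, a sketch assuming a recent nightly, where the multi-weight enums live directly in `torchvision.models` rather than in `torchvision.prototype.models`:
```python
# Sketch: use the weight enums from torchvision.models on recent builds.
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V1
model = resnet50(weights=weights)
```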
### Versions
pytorch-nightly 1.13
cc @datumbox
|
https://github.com/pytorch/vision/issues/6206
|
open
|
[
"question",
"module: models"
] | 2022-06-27T08:44:00Z
| 2022-06-27T08:55:49Z
| null |
wwwsent
|
huggingface/datasets
| 4,571
|
move under the facebook org?
|
### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
### Owner
No
|
https://github.com/huggingface/datasets/issues/4571
|
open
|
[] | 2022-06-26T11:19:09Z
| 2023-09-25T12:05:18Z
| 3
|
lewtun
|
huggingface/datasets
| 4,570
|
Dataset sharding non-contiguous?
|
## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466) but I have to admit I did not properly look into the changes made.
## Steps to reproduce the bug
```python
max_shard_size = convert_file_size_to_int('300MB')
dataset_nbytes = dataset.data.nbytes
num_shards = int(dataset_nbytes / max_shard_size) + 1
num_shards = max(num_shards, 1)
print(f"{num_shards=}")
for shard_index in range(num_shards):
shard = dataset.shard(num_shards=num_shards, index=shard_index)
shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet")
os.listdir('tokenized/')
```
## Expected results
I expected the shards to match the order of the data of the original dataset; i.e. `dataset[10]` being the same as `shard_1[10]` for example
## Actual results
Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]`
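A sketch of a possible workaround (reusing `dataset` and `num_shards` from the snippet above): `Dataset.shard` defaults to a strided split (`contiguous=False`), which would explain why only index 0 lines up; passing `contiguous=True` requests contiguous chunks instead.
```python
# Sketch: request contiguous chunks so that concatenating the shards in order
# reproduces the original row order.
for shard_index in range(num_shards):
    shard = dataset.shard(num_shards=num_shards, index=shard_index, contiguous=True)
    shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet")
```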
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
https://github.com/huggingface/datasets/issues/4570
|
closed
|
[
"bug"
] | 2022-06-26T08:34:05Z
| 2022-06-30T11:00:47Z
| 5
|
cakiki
|
huggingface/datasets
| 4,569
|
Dataset Viewer issue for sst2
|
### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
### Owner
No
|
https://github.com/huggingface/datasets/issues/4569
|
closed
|
[
"dataset-viewer"
] | 2022-06-26T07:32:54Z
| 2022-06-27T06:37:48Z
| 2
|
lewtun
|
pytorch/data
| 550
|
DataLoader2 should reset when a new iterator is created?
|
When a new iterator is created, `DataLoader2` currently resumes from where it left off rather than resetting and starting from the beginning again (see code snippet below). This diverges from the behavior of the original `DataLoader`. Users likely expect the latter behavior, and we should properly reset the state of `DataLoader2` when a new iterator is created.
```python
from torchdata.dataloader2 import DataLoader2
from torchdata.datapipes.iter import IterableWrapper
dl = DataLoader2(IterableWrapper(range(10)))
for i in iter(dl):
print(i)
if i == 4:
print('--------------')
break
for i in iter(dl):
print(i)
```
cc: @VitalyFedyunin
|
https://github.com/meta-pytorch/data/issues/550
|
closed
|
[] | 2022-06-24T18:19:09Z
| 2022-08-26T21:02:39Z
| 1
|
NivekT
|
pytorch/data
| 549
|
DataLoader2.__len__() ?
|
This is somewhat related to https://github.com/pytorch/data/issues/533
As described in https://github.com/pytorch/data/issues/533#issuecomment-1163381945, we like to check the `len()` of the DataLoader in torchvision in our logging utils.
Are there plans to implement `__len__()` on `DataLoader2`?
|
https://github.com/meta-pytorch/data/issues/549
|
open
|
[] | 2022-06-24T17:10:39Z
| 2022-07-06T19:21:39Z
| 1
|
NicolasHug
|
pytorch/data
| 538
|
Warn about pickle-ablity when using `dp.map(some_local_function)` ?
|
`torchdata` issues a warning about pickle when we use lambdas (which is great!)
Another kind of function that isn't compatible with pickle is local functions. Would it be possible to throw the same warning there?
```py
import torchdata
import pickle
def make_dp():
def f(x): # local function, not pickleable
return x
return torchdata.datapipes.iter.IterableWrapper(range(40)).map(f)
dp = make_dp() # no warning
f = "/tmp/dp"
pickle.dump(dp, open(f, "wb")) # fails
```
|
https://github.com/meta-pytorch/data/issues/538
|
closed
|
[] | 2022-06-23T13:02:33Z
| 2022-06-27T21:48:29Z
| 1
|
NicolasHug
|
huggingface/dataset-viewer
| 416
|
Remove the Kubernetes CPU "limits"?
|
https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-%28Prometheus-Alert%29#why-you-dont-need-cpu-limits
> ## Why you don't need CPU limits
>
> As long as your pod has a CPU request, [Kubernetes maintainers like Tim Hockin recommend not using limits at all](https://twitter.com/thockin/status/1134193838841401345). This way pods are free to use spare CPU instead of letting the CPU stay idle.
>
> Contrary to common belief, [even if you remove this pod's CPU limit, other pods are still guaranteed the CPU they requested](https://github.com/kubernetes/design-proposals-archive/blob/8da1442ea29adccea40693357d04727127e045ed/node/resource-qos.md#compressible-resource-guaranteess). The CPU limit only effects how spare CPU is distributed.
|
https://github.com/huggingface/dataset-viewer/issues/416
|
closed
|
[
"question"
] | 2022-06-23T12:26:39Z
| 2022-07-22T13:15:41Z
| null |
severo
|
huggingface/dataset-viewer
| 415
|
Expose an endpoint with the column types/modalities of each dataset?
|
It could be used on the Hub to find all the "images" or "audio" datasets.
By the way, the info is normally already in the datasets-info.json (.features)
|
https://github.com/huggingface/dataset-viewer/issues/415
|
closed
|
[
"question"
] | 2022-06-23T10:36:01Z
| 2022-09-16T17:32:45Z
| null |
severo
|
pytorch/data
| 533
|
`len(dataloader)` in distributed setting is different with datapipes and with map-style datasets
|
In a distributed setting, `len(dataloader)` will return:
- `len(dataset) // (batch_size * num_GPUs)` if `dataset` is a map-style dataset
- `len(dataset) // batch_size` if `dataset` is a datapipe
This discrepancy makes it a bit difficult to work with torchvision's training recipes, where we often need the size of the dataloader.
Below is an illustration of this discrepancy - you can run the snippet (even without a GPU) with `torchrun --nproc_per_node 4 script.py`
```py
# Run this with e.g. `torchrun --nproc_per_node 4 script.py`
import torch.utils.data as data
import torch.distributed as dist
import torchdata
def replace_print():
import builtins as __builtin__
builtin_print = __builtin__.print
def print(*args, **kwargs):
if dist.get_rank() == 0:
builtin_print(f"[GPU 0]", *args, **kwargs)
__builtin__.print = print
# Setting up DDP - you can ignore this
dist.init_process_group(backend="gloo")
replace_print()
dist.barrier()
size = 800
dp = torchdata.datapipes.iter.IterableWrapper(range(size)).sharding_filter()
dl = data.DataLoader(dp, batch_size=10, num_workers=4, drop_last=True)
print(f"with dp, {len(dl) = }")
# Gives : 80
ds = list(range(size))
dl = data.DataLoader(ds, batch_size=10, num_workers=4, drop_last=True, sampler=data.DistributedSampler(ds, shuffle=False))
print(f"with mapstyle, {len(dl) = }")
# Gives: 20
```
|
https://github.com/meta-pytorch/data/issues/533
|
open
|
[] | 2022-06-22T16:32:01Z
| 2022-06-22T16:57:09Z
| 2
|
NicolasHug
|
huggingface/datasets
| 4,542
|
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
|
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset
Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library.
Here are a few points to explore
- [ ] check the performance of ArrowFeatherDataset in tf.data
- [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc.
We would also need to implement sharding when loading a dataset (this will be done anyway for #546)
cc @Rocketknight1 @gante feel free to comment in case I missed anything !
I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data
|
https://github.com/huggingface/datasets/issues/4542
|
open
|
[
"generic discussion"
] | 2022-06-22T14:42:00Z
| 2022-10-11T08:45:45Z
| 48
|
lhoestq
|
huggingface/dataset-viewer
| 413
|
URL design
|
Currently, the API is available at the root, ie: https://datasets-server.huggingface.co/rows?...
This can lead to some issues:
- if we add other services, such as /doc or /search, the API will share the namespace with these other services. This means that we must take care of avoiding collisions between services and endpoints (I think it's OK), and that we cannot simply delegate a subroute to the `api` service (not really an issue either because we "just" have to treat all the other services first in the nginx config, then send the rest to the `api` service)
- version: if we break the API one day, we might want to serve two versions of the API, namely v1 and v2. Notes: 1. it's better not to break the API, 2. if we create a v2 API, we can still namespace it under /v2/, so: not really an issue
Which one do you prefer?
1. https://datasets-server.huggingface.co/ (current)
2. https://datasets-server.huggingface.co/api/
3. https://datasets-server.huggingface.co/api/v1/
|
https://github.com/huggingface/dataset-viewer/issues/413
|
closed
|
[
"question"
] | 2022-06-22T07:13:24Z
| 2022-06-28T08:48:02Z
| null |
severo
|
pytorch/pytorch
| 80,007
|
When forward uses **kwargs, how to construct the example_inputs parameter in jit.trace?
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, **kwargs):
        # kwargs contains more than dozens of tensors
        pass

model = Model()
trace_model = torch.jit.trace(model, example_inputs=??)
```
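One common workaround (a sketch; the wrapper, the tensor names, and the shapes are illustrative, not an official API): trace a thin wrapper module that turns a fixed, ordered tuple of positional tensors back into the keyword arguments the original model expects.
```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # stand-in for the real model that takes keyword-only tensor inputs
    def forward(self, **kwargs):
        return kwargs["a"] + kwargs["b"]

class KwargsToArgsWrapper(nn.Module):
    def __init__(self, model, arg_names):
        super().__init__()
        self.model = model
        self.arg_names = arg_names  # fixed ordering of the keyword arguments

    def forward(self, *tensors):
        return self.model(**dict(zip(self.arg_names, tensors)))

model = TinyModel()
wrapper = KwargsToArgsWrapper(model, ["a", "b"])
example_inputs = (torch.randn(2, 3), torch.randn(2, 3))
traced = torch.jit.trace(wrapper, example_inputs)
print(traced(torch.randn(2, 3), torch.randn(2, 3)).shape)
```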
### Versions
PyTorch version: 1.6.0+cu101
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.3 LTS (x86_64)
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.26
Python version: 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1.3.2.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 440.44
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] torch==1.6.0+cu101
[pip3] torchvision==0.7.0+cu101
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] numpy 1.20.1 py38h93e21f0_0
[conda] numpy-base 1.20.1 py38h7d8b39e_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
|
https://github.com/pytorch/pytorch/issues/80007
|
open
|
[
"oncall: jit"
] | 2022-06-22T03:20:17Z
| 2023-03-11T03:33:15Z
| null |
zyDotwei
|
huggingface/datasets
| 4,538
|
Dataset Viewer issue for Pile of Law
|
### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information?
Thanks so much!
### Owner
Yes
|
https://github.com/huggingface/datasets/issues/4538
|
closed
|
[
"dataset-viewer"
] | 2022-06-22T02:48:40Z
| 2022-06-27T07:30:23Z
| 5
|
Breakend
|
pytorch/TensorRT
| 1,138
|
problem build in jetson nano jetpack4.6
|
## ❓ Question
Hello,
I tried to compile torch-tensorrt on the Jetson Nano and got the error below.
Suggestions please.
Thanks
bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 --verbose_failures
jetson@jetson-desktop:~/TensorRT$ bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 --verbose_failures
INFO: Analyzed target //:libtorchtrt (10 packages loaded, 2870 targets configured).
INFO: Found 1 target...
ERROR: /home/jetson/TensorRT/core/lowering/BUILD:10:11: Compiling core/lowering/register_trt_placeholder_ops.cpp failed: (Exit 1): gcc failed: error executing command
(cd /home/jetson/.cache/bazel/_bazel_jetson/8770c998fbff2b8d5ee14d56a02ce872/sandbox/linux-sandbox/66/execroot/Torch-TensorRT && \
exec env - \
PATH=/home/jetson/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
PWD=/proc/self/cwd \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.d '-frandom-seed=bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.o' -fPIC -iquote . -iquote bazel-out/aarch64-fastbuild/bin -iquote external/tensorrt -iquote bazel-out/aarch64-fastbuild/bin/external/tensorrt -iquote external/cuda -iquote bazel-out/aarch64-fastbuild/bin/external/cuda -iquote external/cudnn -iquote bazel-out/aarch64-fastbuild/bin/external/cudnn -iquote external/libtorch -iquote bazel-out/aarch64-fastbuild/bin/external/libtorch -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/_virtual_includes/ATen -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/_virtual_includes/c10_cuda -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/_virtual_includes/c10 -isystem external/tensorrt/include/aarch64-linux-gnu -isystem bazel-out/aarch64-fastbuild/bin/external/tensorrt/include/aarch64-linux-gnu -isystem external/cuda/include -isystem bazel-out/aarch64-fastbuild/bin/external/cuda/include -isystem external/cudnn/include -isystem bazel-out/aarch64-fastbuild/bin/external/cudnn/include -isystem external/libtorch/include -isystem bazel-out/aarch64-fastbuild/bin/external/libtorch/include -isystem external/libtorch/include/torch/csrc/api/include -isystem bazel-out/aarch64-fastbuild/bin/external/libtorch/include/torch/csrc/api/include '-fdiagnostics-color=always' '-std=c++14' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c core/lowering/register_trt_placeholder_ops.cpp -o bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.o)
# Configuration: 308cf0c0559d698e898984ad86ba68902429f53ed3b621b21d0881d53f6d42af
# Execution platform: @local_config_platform//:host
Use --sandbox_debug to see verbose messages from the sandbox
core/lowering/register_trt_placeholder_ops.cpp:16:34: error: invalid user-defined conversion from 'torch::jit::<lambda(torch::jit::Stack&)>' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}' [-fpermissive]
aliasAnalysisFromSchema()),
^
core/lowering/register_trt_placeholder_ops.cpp:15:24: note: candidate is: torch::jit::<lambda(torch::jit::Stack&)>::operator void (*)(torch::jit::Stack&)() const <near match>
[](Stack& stack) { /*noop*/ },
^
core/lowering/register_trt_placeholder_ops.cpp:15:24: note: no known conversion from 'void (*)(torch::jit::Stack&) {aka void (*)(std::vector<c10::IValue>&)}' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}'
In file included from external/libtorch/include/torch/csrc/jit/runtime/custom_operator.h:5:0,
from core/lowering/register_trt_placeholder_ops.cpp:1:
external/libtorch/include/torch/csrc/jit/runtime/operator.h:98:3: note: initializing argument 2 of 'torch::jit::Operator::Operator(std::__cxx11::string, torch::jit::OperationCreator, c10::AliasAnalysisKind)'
Operator(
^~~~~~~~
Target //:libtorchtrt failed to build
INFO: Elapsed time: 115.163s, Critical Path: 73.60s
INFO: 16 processes: 5 internal, 11 linux-sandbox.
FAILED: Build did NOT complete successfully
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch v1.8.0
- Jetson nano
- How you installed PyTorch: `pip`
## Additional context
|
https://github.com/pytorch/TensorRT/issues/1138
|
closed
|
[
"question",
"channel: linux-jetpack"
] | 2022-06-21T16:45:05Z
| 2022-09-02T18:08:45Z
| null |
Sylia-C
|
pytorch/functorch
| 892
|
Figure out how to get test coverage for more compositions of transforms
|
## Motivation
Currently, we only test the following compositions:
- vmap
- jvp
- vjp
- vmap x jvp
- vmap x vjp
- vjp x vjp
- vjp x vmap
This has caught most of our bugs, but users still come to us with code that doesn't work because it uses a composition outside of this set. For example (see the sketch after this list):
- vmap x vmap can still error out even if just vmap works
- vmap x vjp x vjp can error out if there is some backward operator (e.g. convolution_backward) that has a backward formula that is not composite compliant. Ditto for vmap x jvp x vjp.
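For reference, a minimal sketch (using the public `vmap`/`vjp` API; the function and shapes are illustrative) of what two of these untested compositions look like in user code:
```python
import torch
from functorch import vmap, vjp

def f(x):
    return x.sin().sum()

x = torch.randn(4, 3, 5)

# vmap x vmap: nested batching over the first two dimensions
nested = vmap(vmap(torch.sin))(x)

# vmap x vjp x vjp: a second-order reverse-mode quantity computed per sample
def grad_f(y):
    _, pullback = vjp(f, y)
    return pullback(torch.ones(()))[0]

def second_order(y):
    _, pullback = vjp(grad_f, y)
    return pullback(torch.ones_like(y))[0]

per_sample = vmap(second_order)(x)
```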
## The Ask
Figure out how to get better test coverage for more compositions of transforms.
## Possibly related: OpInfos
This also is related to better OpInfo testing. OpInfos do not cover all aten operators. One way for us to really get good coverage using our existing tests is to add OpInfos for torch.ops.aten operations. For example, instead of checking the batching rule of torch.ops.aten.convolution_backward via a vmap x vjp test, it would be sufficient for us to just run a vmap test for torch.ops.aten.convolution_backward.
|
https://github.com/pytorch/functorch/issues/892
|
closed
|
[
"actionable"
] | 2022-06-21T14:34:58Z
| 2022-09-15T15:01:19Z
| null |
zou3519
|
pytorch/serve
| 1,701
|
curl 404 ResourceNotFoundException
|
Hello,
I am stuck with an error, and I am not sure what it means.
When I run `curl "http://localhost:8080/models"`, I get:
```
{
  "code": 404,
  "type": "ResourceNotFoundException",
  "message": "Requested resource is not found, please refer to API document."
}
```
I make an `.mar` file for my model with
```
torch-model-archiver -f \
--model-name=classifier \
--version=1.0 \
--serialized-file=pytorch_model.bin \
--handler=custom_handler.py \
--extra-files "config.json,index_to_name.json,special_tokens_map.json,tokenizer_config.json,tokenizer.json,training_args.bin,vocab.txt" \
--export-path=model_store
```
All of those files are stored in the same directory.
When I run the server with `torchserve --start --model-store model_store --models classifier=classifier.mar`, I don't get any error. Normally, when I do `curl "http://localhost:8080/models"`, I should get my classifier, but instead I get that message.
Is there anything that I am missing here? Or should I add something?
I want to mention that I am using a handler (custom_handler.py) from [GoogleCloudPlatform](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/community-content/pytorch_text_classification_using_vertex_sdk_and_gcloud/pytorch-text-classification-vertex-ai-train-tune-deploy.ipynb). also, `curl localhost:8080/ping` give me `Healthy`
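For reference, these are the sanity checks I am running (a hedged sketch, assuming the default ports: the inference API on 8080 serves `/ping` and `/predictions/<model>`, while `/models` belongs to the management API on 8081):
```python
import requests

# Inference API (default port 8080): health check and predictions
print(requests.get("http://localhost:8080/ping").text)

# Management API (default port 8081): listing/registering models
print(requests.get("http://localhost:8081/models").text)
```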
Thanks!
|
https://github.com/pytorch/serve/issues/1701
|
open
|
[
"help wanted",
"question"
] | 2022-06-21T14:15:31Z
| 2023-01-31T16:04:09Z
| null |
ma-batita
|
pytorch/serve
| 1,699
|
How to properly understand MaxBatchDelay
|
From the documentation https://github.com/pytorch/serve/blob/master/docs/management_api.md#register-a-model
The parameter `maxBatchDelay` is the maximum delay for batch aggregation: TorchServe waits up to this amount of time to aggregate requests (please correct me if I am wrong) into batches. Now, on the user side, if I set this number high, like 5000, then TorchServe will have enough time to receive a possibly large number of requests and then aggregate them. However, a large number like 5000 also means that the total time for the user to send requests and receive inference results will be higher, much higher than if I set this number to 50. A user/client certainly wants to wait as little time as possible for the results, but setting maxBatchDelay low would also mean TorchServe wouldn't have enough time to aggregate.
How should I properly understand this trade-off? Do I need better metrics to measure the total time for the client, or should I set maxBatchDelay high? Or something else?
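For concreteness, a hedged sketch (values are illustrative) of setting both knobs when registering a model via the management API referenced above:
```python
import requests

# Register a model with explicit batching knobs (management API, default port 8081).
resp = requests.post(
    "http://localhost:8081/models",
    params={
        "url": "classifier.mar",   # hypothetical .mar file in the model store
        "initial_workers": 1,
        "batch_size": 8,           # max requests aggregated into one batch
        "max_batch_delay": 50,     # ms to wait before flushing a partial batch
    },
)
print(resp.status_code, resp.text)
```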
|
https://github.com/pytorch/serve/issues/1699
|
closed
|
[
"documentation",
"benchmark"
] | 2022-06-20T19:01:44Z
| 2023-08-18T02:53:37Z
| null |
Hegelim
|
pytorch/serve
| 1,698
|
Confused about Cumulative Inference Duration vs. PredictionTime
|
### 📚 The doc issue
I am running a model on TorchServe and I am trying to see how long it takes for inference.
If I use logging and view the logs, then I can see there is something called PredictionTime:

However, if I use the Metrics API, then I got something called "Cumulative Inference Duration"

In terms of values, those two are very different, so I am not sure which one I should use to measure the total inference time for my requests.
Btw, there is also something else called `HandlerTime` in the logs

What does it mean? Where can I find related information about what are the meanings of these metrics?
Thanks,
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/1698
|
open
|
[
"help wanted",
"question"
] | 2022-06-20T18:35:03Z
| 2022-07-08T18:50:45Z
| null |
Hegelim
|
pytorch/data
| 523
|
Document how to create a DataLoader when reading data from S3
|
### 📚 The doc issue
I find the provided example [here](https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/load/README.md#example) a bit confusing.
```
from torchdata.datapipes.iter import S3FileLister, S3FileLoader
s3_prefixes = ['s3://bucket-name/folder/', ...]
dp_s3_urls = S3FileLister(s3_prefixes)
dp_s3_files = S3FileLoader(s3_urls) # outputs in (url, StreamWrapper(BytesIO))
# more datapipes to convert loaded bytes, e.g.
datapipe = StreamWrapper(dp_s3_files).parse_csv(delimiter=' ')
for d in datapipe: # Start loading data
pass
```
First, I think there is a mistake in the example: `s3_urls` should be `dp_s3_urls`?
Second, it is not clear why `parse_csv(delimiter=' ')` is being used.
Last, I can't access my data after creating the `datapipe`. It would be great to have an example similar to [this one of the old plugin](https://github.com/aws/amazon-s3-plugin-for-pytorch/blob/master/examples/s3_cv_transform.py)
I believe that an example of how to load an S3 folder containing images into a `torch.utils.data.DataLoader` would be very useful for new users (like me).
### Suggest a potential alternative/fix
Provide an example that starts with an S3 URL of a folder with some images and produces a `DataLoader` with those images.
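Something along these lines, for example (a rough, untested sketch that assumes the S3 DataPipes above, a bucket of same-sized JPEG images, and the usual PIL/NumPy conversion):
```python
from io import BytesIO

import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader
from torchdata.datapipes.iter import S3FileLister, S3FileLoader

dp_urls = S3FileLister(["s3://bucket-name/folder/"]).filter(lambda url: url.endswith(".jpg"))
dp_files = S3FileLoader(dp_urls)  # yields (url, StreamWrapper) pairs

def to_tensor(item):
    url, stream = item
    image = Image.open(BytesIO(stream.read())).convert("RGB")
    return torch.from_numpy(np.array(image)).permute(2, 0, 1)  # CHW uint8 tensor

dp_images = dp_files.map(to_tensor)
loader = DataLoader(dp_images, batch_size=4)  # assumes all images share the same size

for batch in loader:
    print(batch.shape)
    break
```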
|
https://github.com/meta-pytorch/data/issues/523
|
closed
|
[] | 2022-06-20T15:52:53Z
| 2022-06-23T00:17:12Z
| 4
|
enric1994
|
pytorch/TensorRT
| 1,136
|
❓ [Question] unable to save the model in TorchScript format?
|
## ❓ Question
I'm trying to save my model in TorchScript format, but unfortunately I am getting an error.
## What you have already tried
```torch.jit.script(model)```
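For reference, this is the save path I expected to work (a minimal sketch; the example input shape is illustrative, and `model` is the module mentioned above):
```python
import torch

model.eval()
example = torch.randn(1, 3, 224, 224).to("cuda")
# tracing is a common fallback when torch.jit.script(model) fails
ts_model = torch.jit.trace(model, example)
torch.jit.save(ts_model, "model.ts")
reloaded = torch.jit.load("model.ts")
# a module returned by torch_tensorrt.compile(...) can be saved the same way with torch.jit.save
```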
## Environment
python
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):1.11.0+cu113
- CPU Architecture:
- OS (e.g., Linux): ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:3.9.7
- CUDA version:11.7
- GPU models and configuration: RTX GEFORCE 2060
- Any other relevant information:
## Additional context
could you please help me save the model in TorchScript format
@dignakov @narendasan @peri044

|
https://github.com/pytorch/TensorRT/issues/1136
|
closed
|
[
"question",
"bug: triaged [not a bug]"
] | 2022-06-20T10:43:11Z
| 2022-07-05T20:57:03Z
| null |
IamExperimenting
|
pytorch/TensorRT
| 1,134
|
❓ [Question] Why TensorRT model is slower?
|
## ❓ Question
<!-- Your question -->
Why is the TensorRT model slower? I have tried TensorRT on an MHA (multi-head attention) model, but found it is even slower than the JIT-scripted model.
## What you have already tried
I tested the original model, the JIT-scripted model, the JIT model after optimization, and the TensorRT model, and found that the TensorRT model is not as fast as I expected. The model here is a simple MHA module, modified from `fairseq` so that it can pass compilation.
```py
import time
import tmp_attn
import torch
import tensorrt
import torch_tensorrt as torch_trt
def timer(m, i):
st = time.time()
for _ in range(10000):
m(i, i, i)
ed = time.time()
return ed - st
t1 = torch.randn(64, 1, 1280, device="cuda:0")
model = tmp_attn.MultiheadAttention(1280, 8).to("cuda:0")
model2 = torch.jit.script(model)
model3 = torch.jit.optimize_for_inference(model2)
model4 = torch_trt.compile(model, inputs=[t1, t1, t1]).to("cuda:0")
print("Original Model", timer(model, t1))
print("Jit Script Model", timer(model2, t1))
print("Jit Script Model after optimization", timer(model3, t1))
print("TensorRT Model", timer(model4, t1))
```
<!-- A clear and concise description of what you have already done. -->
I ran these models 10000 times and recorded the elapsed time.
The output is:
```
Original Model 5.6981117725372314
Jit Script Model 4.5694739818573
Jit Script Model after optimization 3.3332810401916504
TensorRT Model 4.772718667984009
```
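In case the asynchronous CUDA kernel launches matter here, a hedged sketch of a more careful timer (warm-up plus `torch.cuda.synchronize`) that I could also use:
```python
import time
import torch

def timer_synced(m, i, iters=1000, warmup=50):
    # CUDA kernels launch asynchronously, so warm-up iterations plus explicit
    # synchronization give more representative end-to-end numbers.
    with torch.no_grad():
        for _ in range(warmup):
            m(i, i, i)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            m(i, i, i)
        torch.cuda.synchronize()
    return time.time() - start
```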
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.11.0
- CPU Architecture: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
- OS (e.g., Linux): Linux, CentOS7
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda
- Build command you used (if compiling from source): /
- Are you using local sources or building from archives: No
- Python version: 3.7
- CUDA version: 11.7
- GPU models and configuration:
- TensorRT version: 8.2.5.1
- Torch_tensorrt version: 1.1.0
## Additional context
The code of MHA is here.
`tmp_attn.py`
[tmp_attn.py.zip](https://github.com/pytorch/TensorRT/files/8938221/tmp_attn.py.zip)
|
https://github.com/pytorch/TensorRT/issues/1134
|
closed
|
[
"question",
"No Activity",
"performance"
] | 2022-06-20T06:55:23Z
| 2023-11-09T09:01:52Z
| null |
geekinglcq
|
pytorch/TensorRT
| 1,133
|
❓ [Question] How to install torch_tensorrt python API in ubuntu 20.04?
|
## ❓ Question
I want to install the ```torch_tensorrt``` Python API on Ubuntu 20.04. Could you please provide a step-by-step installation procedure? I tried
```pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases```
when I try to import the module
```import torch_tensorrt```
I'm getting the below error,

## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.11.0
- CPU Architecture:
- OS (e.g., Linux): LINUX
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:no
- Python version: 3.7.13
- CUDA version: 11.3.1
- GPU models and configuration:
- Any other relevant information:
@narendasan @peri044
|
https://github.com/pytorch/TensorRT/issues/1133
|
closed
|
[
"question",
"component: build system",
"component: packaging",
"component: dependencies"
] | 2022-06-19T14:50:36Z
| 2022-12-15T17:24:39Z
| null |
IamExperimenting
|
pytorch/serve
| 1,692
|
TorchServe How to Curl Multiple Images Properly
|
I am using TorchServe to potentially serve a model from MMOCR (https://github.com/open-mmlab/mmocr), and I have several questions:
1. I tried to do inference on hundreds of images together using batch mode by using & to concatenate curl commands together, such as suggested here https://github.com/pytorch/serve/issues/1235#issuecomment-938231201. However, this doesn't provide a neat solution if I have hundreds of curls concatenated together. I can of course have a super long command that looks like
```
curl -X POST http://localhost:8080/predictions/ABINet -T image1.png & curl -X POST http://localhost:8080/predictions/ABINet -T image2.png & curl -X POST http://localhost:8080/predictions/ABINet -T image3.png & curl -X POST http://localhost:8080/predictions/ABINet -T image4.png &...
```
But I don't think this is the right way to go.
My questions are: is using & really parallel? What is a good/suggested way to do inference on hundreds of images? What is a Pythonic way to do this (maybe using requests/subprocess)?
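For reference, the Pythonic alternative I have in mind looks roughly like this (a hedged, untested sketch using `requests` and a thread pool; the image folder is hypothetical):
```python
import glob
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/predictions/ABINet"

def predict(path):
    with open(path, "rb") as f:
        # same semantics as `curl -X POST ... -T image.png`: the file bytes are the request body
        return path, requests.post(URL, data=f).text

paths = glob.glob("images/*.png")  # hypothetical folder of images
with ThreadPoolExecutor(max_workers=16) as pool:
    for path, result in pool.map(predict, paths):
        print(path, result)
```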
2. I used config.properties file that looks like below
```
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
load_models=ABINet.mar
models={\
"ABINet": {\
"1.0": {\
"defaultVersion": true,\
"marName": "ABINet.mar",\
"runtime": "python",\
"minWorkers": 1,\
"maxWorkers": 8,\
"batchSize": 200,\
"maxBatchDelay": 50,\
"responseTimeout": 120,\
"max_request_size": 65535000\
}\
}\
}
```
I noticed that each time I do inference (using `curl -X POST http://localhost:8080/predictions/ABINet -T image1.png & curl -X POST http://localhost:8080/predictions/ABINet -T image2.png & ...` with hundreds of commands concatenated), the GPU usage increases, and the memory is not released after the inference is done.
For example, if I want to do inference on 300 images with config.properties that looks like
```
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
load_models=ABINet.mar
models={\
"ABINet": {\
"1.0": {\
"defaultVersion": true,\
"marName": "ABINet.mar",\
"runtime": "python",\
"minWorkers": 4,\
"maxWorkers": 8,\
"batchSize": 600,\
"maxBatchDelay": 50,\
"responseTimeout": 120,\
"max_request_size": 65535000\
}\
}\
}
```
using `gpustat`, after I start torchserve, before I run the first inference, the GPU usage looks like

After running the inference the 1st time, the GPU usage looks like

After running the inference the 2nd time,

So if I run this inference on hundreds of images 3 times, it breaks with an error like
```
{
"code": 503,
"type": "ServiceUnavailableException",
"message": "Model \"ABINet\" has no worker to serve inference request. Please use scale workers API to add workers."
}
```
Now, I tried registering model with `initial_workers` as suggested here https://github.com/pytorch/serve/issues/29 but with no luck.
My questions are:
* How to set this config.properties properly to handle this situation? How would I know what to set for batchsize and maxBatchDelay?
* How to allow torchserve to release memory after one inference? Is there something similar to `gc.collect()` or `torch.cuda.reset_peak_memory_stats(device=None)`?
* How does TorchServe work under the hood? If I send a request with hundreds of images, say, 600, will TorchServe take all in or take only whatever portion it can take? Or will it automatically partition the request (say, take 300 the first time, then take the rest 300)?
I am attaching the MMOCR custom handler for reference
```
class MMOCRHandler(BaseHandler):
threshold = 0.5
def initialize(self, context):
properties = context.system_properties
self.map_location = 'cuda' if torch.cuda.is_available() else 'cpu'
self.device = torch.device(self.map_location + ':' +
str(properties.get('gpu_id')) if torch.cuda.
is_available() else self.map_location)
self.manifest = context.manifest
model_dir = properties.get('model_dir')
serialized_file = self.manifest['model']['serializedFile']
checkpoint = os.path.join(model_dir, serialized_file)
self.config_file = os.path.join(model_dir, 'config.py')
self.model = init_detector(self.config_file, checkpoint, self.device)
self.initialized = True
|
https://github.com/pytorch/serve/issues/1692
|
open
|
[
"documentation",
"help wanted",
"perf"
] | 2022-06-17T18:54:26Z
| 2024-08-04T15:18:11Z
| null |
Hegelim
|
huggingface/datasets
| 4,522
|
Try to reduce the number of datasets that require manual download
|
> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore
from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432
|
https://github.com/huggingface/datasets/issues/4522
|
open
|
[] | 2022-06-17T11:42:03Z
| 2022-06-17T11:52:48Z
| 0
|
severo
|
huggingface/dataset-viewer
| 394
|
Implement API pagination?
|
Should we add API pagination right now? Maybe useful for the "technical" endpoints like https://datasets-server.huggingface.co/queue-dump-waiting-started or https://datasets-server.huggingface.co/cache-reports
https://simonwillison.net/2021/Jul/1/pagnis/
|
https://github.com/huggingface/dataset-viewer/issues/394
|
closed
|
[
"question"
] | 2022-06-17T08:54:41Z
| 2022-08-01T19:02:00Z
| null |
severo
|
pytorch/TensorRT
| 1,129
|
❓ [Question] Torch traced model conversion with List[torch.Tensor] input
|
Is it possible to convert a torch traced model that accepts List[torch.Tensor] type of input to trt ts module?
|
https://github.com/pytorch/TensorRT/issues/1129
|
closed
|
[
"question",
"component: core"
] | 2022-06-17T08:17:26Z
| 2022-08-12T01:53:15Z
| null |
ArmenGhambaryan
|
huggingface/dataset-viewer
| 390
|
How to best manage the datasets that we cannot process due to RAM?
|
The dataset worker pod is killed (OOMKilled) for:
```
bigscience/P3
Graphcore/gqa-lxmert
echarlaix/gqa-lxmert
```
and the split worker pod is killed (OOMKilled) for:
```
imthanhlv/binhvq_news21_raw / started / train
openclimatefix/nimrod-uk-1km / sample / train/test/validation
PolyAI/minds14 / zh-CN / train
```
With the current jobs management (https://github.com/huggingface/datasets-server/issues/264) the killed jobs remain marked as "STARTED" in the mongo db. If we "cancel" them with
```
kubectl exec datasets-server-prod-admin-79798989fb-scmjw -- make cancel-started-dataset-jobs
kubectl exec datasets-server-prod-admin-79798989fb-scmjw -- make cancel-started-split-jobs
```
they are re-enqueued with the status "WAITING" until they are processed and killed again.
Possibly we should allow up to 3 attempts, for example, maybe increasing the dedicated RAM (see https://github.com/huggingface/datasets-server/issues/264#issuecomment-1158596143). Even so, we cannot have more RAM than the underlying node (eg: 32 GiB on the current nodes) and some datasets will still fail.
In that case, we should mark them as ERROR with a proper error message.
|
https://github.com/huggingface/dataset-viewer/issues/390
|
closed
|
[
"bug",
"question"
] | 2022-06-17T08:04:45Z
| 2022-09-19T09:42:36Z
| null |
severo
|
huggingface/dataset-viewer
| 388
|
what happened to the pods?
|
```
$ k get pods -w
...
datasets-server-prod-datasets-worker-776b774978-g7mpk 1/1 Evicted 0 73m │DEBUG: 2022-06-16 18:42:46,966 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pending 0 1s │DEBUG: 2022-06-16 18:42:47,011 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e43 for split 'test' from dataset 'luozhou
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pending 0 1s │yang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 OutOfmemory 0 1s │INFO: 2022-06-16 18:42:47,012 - datasets_server.worker - compute split 'test' from dataset 'luozhouyang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.85MB/s]
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 Pending 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.43MB/s]
datasets-server-prod-datasets-worker-776b774978-7hw4j 0/1 OutOfmemory 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 5.07MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 Pending 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.18MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.52MB/s]
datasets-server-prod-datasets-worker-776b774978-qtmtd 0/1 OutOfmemory 0 0s │Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.76MB/s]
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 Pending 0 0s │Downloading and preparing dataset dureader/robust (download: 19.57 MiB, generated: 57.84 MiB, post-processed: Unknown size, total: 77.4
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 Pending 0 0s │1 MiB) to /cache/datasets/luozhouyang___dureader/robust/1.0.0/bdab4855e88c197f2297db78cfc86259fb874c2b977134bbe80d3af8616f33b1...
datasets-server-prod-datasets-worker-776b774978-54zr6 0/1 OutOfmemory 0 0s │Downloading data: 1%| | 163k/20.5M [01:45<3:40:25, 1.54kB/s]
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 Pending 0 0s │DEBUG: 2022-06-16 18:44:44,235 - datasets_server.worker - job finished with error: 62ab6804a502851c834d7e43 for split 'test' from datas
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 Pending 0 0s │et 'luozhouyang/dureader' with config 'robust'
datasets-server-prod-datasets-worker-776b774978-rxcb2 0/1 OutOfmemory 0 0s │DEBUG: 2022-06-16 18:44:44,236 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 Pending 0 0s │DEBUG: 2022-06-16 18:44:44,281 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e45 for split 'test' from dataset 'opencli
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 Pending 0 0s │matefix/nimrod-uk-1km' with config 'sample'
datasets-server-prod-datasets-worker-776b774978-d8m42 0/1 OutOfmemory 0 0s │INFO: 2022-06-16 18:44:44,281 - datasets_server.worker - compute split 'test' from dataset 'openclimatefix/nimrod-uk-1km' with config '
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 Pending 0 0s │sample'
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 Pending 0 0s │Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 6.04MB/s]
datasets-server-prod-datasets-worker-776b774978-xx7hv 0/1 OutOfmemory 0 1s │Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:
|
https://github.com/huggingface/dataset-viewer/issues/388
|
closed
|
[
"question"
] | 2022-06-16T19:46:00Z
| 2022-06-17T07:48:20Z
| null |
severo
|
pytorch/functorch
| 882
|
Can I use jvp with vmap?
|
Hi, experts.
I want to use jvp with vmap, so that I can run jvp for each sample in a batch.
However, unlike the jacrev example, jvp does not return a callable function, so I am not sure if it is compatible with vmap.
It seems like vjp returns a function like jacrev does, so it might be usable, but can I use jvp with vmap?
It is not clear to me whether vjp and jvp are interchangeable -- I don't see how I can use vjp instead to achieve what I need.
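Concretely, this is the kind of composition I have in mind (a minimal sketch, assuming functorch's `jvp`/`vmap` signatures; the function and shapes are illustrative):
```python
import torch
from functorch import jvp, vmap

def f(x):
    return x.sin()

def per_sample_jvp(x, tangent):
    # jvp returns (outputs, jvp_out) rather than a callable, but it can still
    # be wrapped in a function and vmapped over the batch dimension.
    return jvp(f, (x,), (tangent,))

x = torch.randn(8, 5)
tangents = torch.randn(8, 5)
outputs, jvp_outs = vmap(per_sample_jvp)(x, tangents)  # both have shape (8, 5)
```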
Thank you for the help!
|
https://github.com/pytorch/functorch/issues/882
|
closed
|
[] | 2022-06-16T18:40:05Z
| 2022-06-16T19:00:52Z
| 2
|
kwmaeng91
|
huggingface/pytorch_block_sparse
| 17
|
What is "custom" "custom-back" in dispatch_policies.h?
|
Hi! I am learning SGEMM and found that dispatch_policies.h has a "Custom" and a "CustomBack". I'm not sure what these mean. Thank you!!!
|
https://github.com/huggingface/pytorch_block_sparse/issues/17
|
open
|
[] | 2022-06-16T05:46:42Z
| 2022-06-16T05:46:42Z
| null |
ziyuhuang123
|
huggingface/datasets
| 4,507
|
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
|
If the dataset does not need splits (i.e., no training and validation splits; it is more like a table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?
Or, to paraphrase the question: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a `List[Dataset]`?
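For reference, the closest thing I have found is passing `split=`, which returns a `Dataset` but still requires naming a split (a minimal sketch; the script path is illustrative):
```python
from datasets import load_dataset

ds = load_dataset("path/to/my_script.py", split="train")
print(type(ds))  # <class 'datasets.arrow_dataset.Dataset'>
```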
Many thanks for any help.
|
https://github.com/huggingface/datasets/issues/4507
|
closed
|
[
"enhancement"
] | 2022-06-15T18:56:34Z
| 2022-06-16T10:40:08Z
| 2
|
liyucheng09
|
pytorch/torchx
| 520
|
[torchx/ray] Is elastic training on ray clusters supported?
|
## 🐛 Bug
Hi, I would like to know the current state of running elastic training on ray clusters.
I tried to repeat some experiments ([notebook](https://colab.research.google.com/drive/1vVCpgQ9z_1SN8K9CJxUT2LtvUDN0AlND?usp=sharing)) from this [blog](https://www.anyscale.com/blog/large-scale-distributed-training-with-torchx-and-ray) on my ray cluster, but I got unexpected behavior.
- I EXPECT that when using a custom component and the cluster has fewer available nodes than the job requested, the submitted job keeps running with the current nodes, and when new nodes become available, they can join the training process. What I OBSERVED is that the job failed with the error below:
```
TimeoutError: Placement group creation timed out. Make sure your cluster either has enough resources or use an autoscaling cluster. Current resources available: {'memory': 18038862642.0, 'CPU': 8.0, 'node:10.130.6.66': 0.999, 'object_store_memory': 15071908982.0, 'GPU': 1.0, 'node:10.130.6.67': 1.0}, resources requested by the placement group: [{'CPU': 2.0}, {'CPU': 2.0}, {'CPU': 2.0}, {'CPU': 2.0}, {'CPU': 2.0}]
```
- When using the built-in `dist.ddp` component, even if there are enough compute resources, the Ray job status always shows succeeded, but the expected output never appears in the Ray job logs; the only information in the log is
```
Waiting for placement group to start.
```
- When using a custom component and the cluster has the required resources, the submitted job writes the expected log information to the log file, but the job never stops; when I check the Ray job status, it always shows
```
Status for job 'raysubmit_kqtEAYVSmx4c1XgD': RUNNING
Status message: Job is currently running.
```
### Question
<!-- your question here -->
<!-- A clear and concise description of what the bug is. -->
Module (check all that applies):
* [ ] `torchx.spec`
* [x] `torchx.component`
* [ ] `torchx.apps`
* [ ] `torchx.runtime`
* [ ] `torchx.cli`
* [x] `torchx.schedulers`
* [ ] `torchx.pipelines`
* [ ] `torchx.aws`
* [ ] `torchx.examples`
* [ ] `other`
## To Reproduce
I tried two ways to launch a TorchX job on ray:
```bash
# Use custom component
# Required resouses are defined in the component.py file
torchx run -s ray \ # use ray scheduler
-cfg dashboard_address=addr-of-cluster:8265,working_dir=. \ # ray scheduler arguments
component.py:trainer # use custom component
# Use built-in dist.ddp component
torchx run -s ray \ # use ray scheduler
-cfg dashboard_address=addr-of-cluster:8265,working_dir=. \ # ray scheduler arguments
dist.ddp \ # use dist.ddp component
-j 4x1 \ # nproc and nnodes
--script ./compute_world_size.py # a distributed script
```
A detailed description of the command is [here](https://pytorch.org/torchx/latest/quickstart.html).
The provisioned ray cluster:
```python
"headCPU": "4",
"headGPU": "0",
"headMemory": "12Gi",
"headMaxMemory": "24Gi",
"workerMinCount": 1,
"workerMaxCount": 4,
"workerCPU": "4",
"workerGPU": "0",
"workerMemory": "12Gi",
"workerMaxMemory": "24Gi"
```
Performed following experiments:
- **(Autoscaling)** To test whether TorchX will trigger the Ray autoscaler to provision more nodes than the minimum, I launched a job that requires 4 nodes.
The results are listed below:
- [Custom component](torchx-ray/component.py):
- Ray job status:
```shell
Status for job 'raysubmit_kqtEAYVSmx4c1XgD': RUNNING
Status message: Job is currently running.
```
- Ray job logs:
```shell
Waiting for placement group to start.
(scheduler +1s) Tip: use `ray status` to view detailed cluster status. To disable these messages, set RAY_SCHEDULER_EVENTS=0.
(scheduler +1s) Adding 3 nodes of type worker_node.
(scheduler +21s) Resized to 20 CPUs, 4 GPUs.
(CommandActor pid=223, ip=10.130.6.73) initializing `gloo` process group
(CommandActor pid=223, ip=10.130.6.73) successfully initialized process group
(CommandActor pid=223, ip=10.130.6.73) rank: 3, actual world_size: 4, computed world_size: 4
(CommandActor pid=221, ip=10.131.6.32) initializing `gloo` process group
(CommandActor pid=221, ip=10.131.6.32) successfully initialized process group
(CommandActor pid=221, ip=10.131.6.32) rank: 1, actual world_size: 4, computed world_size: 4
(CommandActor pid=222, ip=10.130.6.74) initializing `gloo` process group
(CommandActor pid=222, ip=10.130.6.74) successfully initialized process group
(CommandActor pid=222, ip=10.130.6.74) rank: 0, actual world_size: 4, computed world_size: 4
(CommandActor pid=225, ip=10.131.6.30) initializing `gloo` process group
(CommandActor pid=225, ip=10.131.6.30) successfully initialized process group
(CommandActor pid=225, ip=10.131.6.30) rank: 2, actual world_siz
|
https://github.com/meta-pytorch/torchx/issues/520
|
open
|
[
"question",
"ray"
] | 2022-06-15T18:25:55Z
| 2022-06-22T21:34:39Z
| 7
|
ntlm1686
|
huggingface/datasets
| 4,504
|
Can you please add the Stanford dog dataset?
|
## Adding a Dataset
- **Name:** *Stanford dog dataset*
- **Description:** *The dataset contains about 120 classes for a total of 20,580 images. You can find the dataset here: http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)*
- **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It is useful for fine-grained classification purposes.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
https://github.com/huggingface/datasets/issues/4504
|
closed
|
[
"good first issue",
"dataset request"
] | 2022-06-15T15:39:35Z
| 2024-12-09T15:44:11Z
| 16
|
dgrnd4
|
huggingface/datasets
| 4,502
|
Logic bug in arrow_writer?
|
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got an error, and I found it is caused by `batch_examples` being `{}`. I wonder if the code should instead be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values()))) == 0:
+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
return
```
@lhoestq
|
https://github.com/huggingface/datasets/issues/4502
|
closed
|
[] | 2022-06-15T14:50:00Z
| 2022-06-18T15:15:51Z
| 10
|
changjonathanc
|
huggingface/optimum
| 219
|
Support to wav2vec2
|
### Feature request
Is there any plan to include wav2vec2 class to optimum?
```python
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
# The model we wish to quantize
model_checkpoint = "facebook/wav2vec2-base-960h"
# The type of quantization to apply
qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(model_checkpoint, feature="sequence-classification")
# Quantize the model!
quantizer.export(
onnx_model_path="model.onnx",
onnx_quantized_model_output_path="model-quantized.onnx",
quantization_config=qconfig,
)
```
Output:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-27-b874ded560cc>](https://localhost:8080/#) in <module>()
6 # The type of quantization to apply
7 qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
----> 8 quantizer = ORTQuantizer.from_pretrained(model_checkpoint, feature="sequence-classification")
9
10 # Quantize the model!
1 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
447 raise ValueError(
--> 448 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
449 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
450 )
ValueError: Unrecognized configuration class <class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of BartConfig, YosoConfig, NystromformerConfig, QDQBertConfig, FNetConfig, PerceiverConfig, GPTJConfig, LayoutLMv2Config, PLBartConfig, RemBertConfig, CanineConfig, RoFormerConfig, BigBirdPegasusConfig, GPTNeoConfig, BigBirdConfig, ConvBertConfig, LEDConfig, IBertConfig, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaXLConfig, XLMRobertaConfig, MBartConfig, MegatronBertConfig, MPNetConfig, BartConfig, ReformerConfig, LongformerConfig, RobertaConfig, DebertaV2Config, DebertaConfig, FlaubertConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ElectraConfig, FunnelConfig, LayoutLMConfig, TapasConfig, Data2VecTextConfig.
```
### Motivation
To get some speed up on wav2vec2 models.
### Your contribution
Not at the moment, but I could help.
|
https://github.com/huggingface/optimum/issues/219
|
closed
|
[] | 2022-06-15T12:47:42Z
| 2022-07-08T10:34:33Z
| 4
|
asr-lord
|
pytorch/serve
| 1,687
|
How to install torchserve from source ???
|
### 🚀 The feature
Without using `pip install torchserve` or `docker pull pytorch/torchserve`, how can I install **torchserve** from this open-source repository?
I can build `model-archiver` and `workflow-archiver`, but how can I build `torchserve` itself from source?
### Motivation, pitch
Without using `pip install torchserve` or `docker pull pytorch/torchserve`, how can I install **torchserve** from this open-source repository?
I can build `model-archiver` and `workflow-archiver`, but how can I build `torchserve` itself from source?
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/pytorch/serve/issues/1687
|
closed
|
[] | 2022-06-14T18:03:48Z
| 2022-06-15T03:05:30Z
| null |
jiapei-nexera
|
huggingface/dataset-viewer
| 373
|
Add support for building GitHub Codespace dev environment
|
Add support for building a GitHub Codespace dev environment (as it was done for the [moon landing](https://github.com/huggingface/moon-landing/pull/3188) project) to make it easier to contribute to the project.
|
https://github.com/huggingface/dataset-viewer/issues/373
|
closed
|
[
"question"
] | 2022-06-14T14:37:58Z
| 2022-09-19T09:05:26Z
| null |
mariosasko
|