repo
stringclasses 147
values | number
int64 1
172k
| title
stringlengths 2
476
| body
stringlengths 0
5k
| url
stringlengths 39
70
| state
stringclasses 2
values | labels
listlengths 0
9
| created_at
timestamp[ns, tz=UTC]date 2017-01-18 18:50:08
2026-01-06 07:33:18
| updated_at
timestamp[ns, tz=UTC]date 2017-01-18 19:20:07
2026-01-06 08:03:39
| comments
int64 0
58
β | user
stringlengths 2
28
|
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets
| 4,491
|
Dataset Viewer issue for Pavithree/test
|
### Link
https://huggingface.co/datasets/Pavithree/test
### Description
I have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missing from my end? Kindly help.
### Owner
_No response_
|
https://github.com/huggingface/datasets/issues/4491
|
closed
|
[
"dataset-viewer"
] | 2022-06-14T13:23:10Z
| 2022-06-14T14:37:21Z
| 1
|
Pavithree
|
pytorch/examples
| 1,012
|
Using SLURM for Imagenet training on multiple nodes
|
In the pytorch imagenet example of this repo, it says that for multiple nodes we have to run the command on each node like below:

Since I am using a shared HPC cluster with SLURM, I cannot actively know which nodes my training will use so I'm not sure how to run these two commands. How can I run these two commands on the separate nodes using SLURM?
|
https://github.com/pytorch/examples/issues/1012
|
closed
|
[
"distributed"
] | 2022-06-14T09:39:59Z
| 2022-07-10T20:11:43Z
| 2
|
b0neval
|
pytorch/pytorch
| 79,495
|
How to stacked RGB images
|
### π The feature, motivation and pitch
Hi, pytorch support teams.
I want to stack a RGB images.
I want to construct a 3D or 4D RGB tensor.
And, create a GAN model using these tensor.
How do I define how to create such a tensor?
I would like to stack the attached 2D RGB images.
Or can you extract each RGB element from a 3D image as a 3D tensor?
Kind regards,
yoshimura.
### Alternatives
_No response_
### Additional context

|
https://github.com/pytorch/pytorch/issues/79495
|
closed
|
[] | 2022-06-14T02:40:40Z
| 2022-06-14T18:01:50Z
| null |
kazuma0606
|
pytorch/tutorials
| 1,945
|
Calculating accuracy.
|
How can i calculate the accuracy of the model on seq2seq with attention chatbot?
|
https://github.com/pytorch/tutorials/issues/1945
|
closed
|
[
"question"
] | 2022-06-13T22:34:03Z
| 2022-08-17T20:26:00Z
| null |
OmarHaitham520
|
pytorch/torchx
| 514
|
Launching hello world job on Kubernetes and getting logs
|
## π Documentation
## Link
<!-- link to the problematic documentation -->
https://pytorch.org/torchx/0.1.0rc2/quickstart.html
## What does it currently say?
<!-- copy paste the section that is wrong -->
`torchx run --scheduler kubernetes my_component.py:greet --image "my_app:latest" --user "your name"`
The documentation lacks information about getting logs for the hello world example with Kubernetes cluster.
## What should it say?
<!-- the proposed new documentation -->
The user should have a kubectl CLI configured. Refer to [this](https://kubernetes.io/docs/reference/kubectl/)
To get the logs of hello world job:
`kubectl logs <pod name>`
|
https://github.com/meta-pytorch/torchx/issues/514
|
open
|
[
"documentation"
] | 2022-06-13T14:20:20Z
| 2022-06-13T16:50:35Z
| 1
|
vishakha-ramani
|
pytorch/TensorRT
| 1,114
|
How can i compile CUDA C in this projectβ [Question] How do you ....?
|
## β Question
I want compile tensorrt plugin in this project. But I do not know how to use bazel to compile the cuda c.
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/1114
|
closed
|
[
"question"
] | 2022-06-13T11:27:52Z
| 2022-06-20T22:11:37Z
| null |
p517332051
|
pytorch/serve
| 1,684
|
How to decode the gRPC PredictionResponse string efficiently
|
### π The doc issue
There is no documentation about decoding the received bytes form PredictionResponse into torch tensor efficiently. Currently, the only working solution is using `ast.literal_eval`, which is extremely slow.
```
response = inference_stub.Predictions(
inference_pb2.PredictionsRequest(model_name=model_name, input=input_data))
predictions = torch.astensor(literal_eval(response.prediction.decode('utf-8')))
```
Using methods like numpy.fromstring, numpy.frombuffer or torch.frombuffer returns the following error:
```
> np.fromstring(response.prediction.decode("utf-8"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
ValueError: string size must be a multiple of element size
```
The following returns an incorrect tensor values. The number of elements are not the same as expected number of elements.
```
torch.frombuffer(response.prediction, dtype = torch.float32)
```
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/serve/issues/1684
|
open
|
[
"documentation"
] | 2022-06-13T10:47:16Z
| 2022-09-20T11:50:44Z
| null |
IamMohitM
|
pytorch/pytorch
| 79,384
|
torch.load() fails on MPS backend ("don't know how to restore data location")
|
### π Describe the bug
```bash
# warning: 5.8GB file
wget https://huggingface.co/Cene655/ImagenT5-3B/resolve/main/model.pt
```
```python
import torch
torch.load('./model.pt', map_location='mps')
```
Error thrown [from serialization.py](https://github.com/pytorch/pytorch/blob/bd1a35dfc894eced537b825e5569836e6a91266d/torch/serialization.py#L178):
```
Exception has occurred: RuntimeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
don't know how to restore data location of torch.storage._UntypedStorage (tagged with mps)
File "/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py", line 178, in default_restore_location
raise RuntimeError("don't know how to restore data location of "
File "/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py", line 970, in restore_location
return default_restore_location(storage, map_location)
File "/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py", line 1001, in load_tensor
wrap_storage=restore_location(storage, location),
File "/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py", line 1019, in persistent_load
load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py", line 1049, in _load
result = unpickler.load()
File "/Users/birch/git/imagen-pytorch-cene/venv/lib/python3.9/site-packages/torch/serialization.py", line 712, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/Users/birch/git/imagen-pytorch-cene/repro.py", line 2, in <module>
torch.load('./ImagenT5-3B/model.pt', map_location='mps')
File "/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py", line 268, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/birch/anaconda3/envs/torch-nightly/lib/python3.9/runpy.py", line 197, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
```
I think the solution will involve adding a [`register_package()` entry](https://github.com/pytorch/pytorch/blob/bd1a35dfc894eced537b825e5569836e6a91266d/torch/serialization.py#L160-L161) for the mps backend.
### Versions
```
PyTorch version: 1.13.0.dev20220610
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] imagen-pytorch==0.0.0
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220610
[pip3] torchaudio==0.14.0.dev20220603
[pip3] torchvision==0.14.0.dev20220609
[conda] numpy 1.23.0rc2 pypi_0 pypi
[conda] torch 1.13.0.dev20220606 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20220603 pypi_0 pypi
[conda] torchvision 0.14.0a0+f9f721d pypi_0 pypi
```
cc @mruberry @kulinseth @albanD
|
https://github.com/pytorch/pytorch/issues/79384
|
closed
|
[
"module: serialization",
"triaged",
"module: mps"
] | 2022-06-12T19:30:24Z
| 2022-08-06T09:25:21Z
| null |
Birch-san
|
huggingface/datasets
| 4,478
|
Dataset slow during model training
|
## Describe the bug
While migrating towards π€ Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training.
First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it.
Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with π€ Datasets.
Any idea what's the reason for this and how to speed-up training with π€ Datasets?
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
import os
dataset_dir = "./dataset"
prep_dataset_dir = "./prepdataset"
model_dir = "./model"
# Load Data
dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized")
def read_image_file(example):
with open(example["image"].filename, "rb") as f:
example["image"] = {"bytes": f.read()}
return example
dataset = dataset.map(read_image_file)
dataset.save_to_disk(dataset_dir)
# Preprocess
from datasets import (
Array3D,
DatasetDict,
Features,
load_from_disk,
Sequence,
Value
)
import numpy as np
from transformers import ImageFeatureExtractionMixin
dataset = load_from_disk(dataset_dir)
num_classes = dataset["train"].features["label"].num_classes
one_hot_matrix = np.eye(num_classes)
feature_extractor = ImageFeatureExtractionMixin()
def to_pixels(image):
image = feature_extractor.resize(image, size=size)
image = feature_extractor.to_numpy_array(image, channel_first=False)
image = image / 255.0
return image
def process(examples):
examples["pixel_values"] = [
to_pixels(image) for image in examples["image"]
]
examples["label"] = [
one_hot_matrix[label] for label in examples["label"]
]
return examples
features = Features({
"pixel_values": Array3D(dtype="float32", shape=(size, size, 3)),
"label": Sequence(feature=Value(dtype="int32"), length=num_classes)
})
prep_dataset = dataset.map(
process,
remove_columns=["image"],
batched=True,
batch_size=batch_size,
num_proc=2,
features=features,
)
prep_dataset = prep_dataset.with_format("numpy")
# Split
train_dev_dataset = prep_dataset['test'].train_test_split(
test_size=test_size,
shuffle=True,
seed=seed
)
train_dev_test_dataset = DatasetDict({
'train': train_dev_dataset['train'],
'dev': train_dev_dataset['test'],
'test': prep_dataset['test'],
})
train_dev_test_dataset.save_to_disk(prep_dataset_dir)
# Train Model
import datetime
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from transformers import DefaultDataCollator
dataset = load_from_disk(prep_data_dir)
data_collator = DefaultDataCollator(return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['pixel_values'],
label_cols=['label'],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
validation_dataset = dataset["dev"].to_tf_dataset(
columns=['pixel_values'],
label_cols=['label'],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
print(f'{datetime.datetime.now()} - Saving Data')
tf.data.experimental.save(train_dataset, model_dir+"/train")
tf.data.experimental.save(validation_dataset, model_dir+"/val")
print(f'{datetime.datetime.now()} - Loading Data')
train_dataset = tf.data.experimental.load(model_dir+"/train")
validation_dataset = tf.data.experimental.load(model_dir+"/val")
shape = np.shape(dataset["train"][0]["pixel_values"])
backbone = InceptionV3(
include_top=False,
weights='imagenet',
input_shape=shape
)
for layer in backbone.layers:
layer.trainable = False
model = Sequential()
model.add(backbone)
model.add(GlobalAveragePooling2D())
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
print(model.summary())
earlyStopping = EarlyStopping(
monitor='val_loss',
patience=10,
verbose=0,
mode='min'
)
mcp_save = ModelCheckp
|
https://github.com/huggingface/datasets/issues/4478
|
open
|
[
"bug"
] | 2022-06-11T19:40:19Z
| 2022-06-14T12:04:31Z
| 5
|
lehrig
|
pytorch/pytorch
| 79,332
|
How to reimplement same behavior in AdaptiveAvgPooling2D
|
### π The doc issue
Hi, am trying written an op which should mimic behavior in Pytorch's AdaptiveAvgPooling, but I can not align the result.
Here is what I do:
```
def test_pool():
a = np.fromfile("in.bin", dtype=np.float32)
a = np.reshape(a, [1, 12, 25, 25])
a = torch.as_tensor(a)
b = F.adaptive_avg_pool2d(a, [7, 7])
print(b)
print(b.shape)
avg_pool = torch.nn.AvgPool2d([7, 7], [3, 3])
c = avg_pool(a)
print(c)
print(c.shape)
```
the `b` and `c` are not equal.
My algorithm was:
```
k = output_size // input_size
stride = input_size - (output_size - 1) * k
padding = 0
```
I think there maybe some gap in real algorithm in pytorch. But can not found any where said it.
so, please make me clarify.
### Suggest a potential alternative/fix
Details in adaptiveavgpool2d
|
https://github.com/pytorch/pytorch/issues/79332
|
closed
|
[] | 2022-06-11T02:06:59Z
| 2022-08-18T11:39:51Z
| null |
lucasjinreal
|
pytorch/functorch
| 867
|
Why is using vmap(jacrev) for BatchNorm2d in non-tracking mode not working?
|
Hi, experts.
I am trying to use vmap(jacrev) to calculate the per-sample jacobian in a batch for my network during inference. However, when there is BatchNorm2d, it does not work. Because during inference, BatchNorm2d is simply applying the statistics previously tracked (and not doing any inter-sample operations), I think it should work just as any other simple operation from my understanding. Is there a way for me to make it work, or is there anything I am misunderstanding?
Below is my minimal code:
```
from functorch import jacrev, vmap
import torch
from torch import nn
layers = nn.Sequential(
nn.Conv2d(3, 3, kernel_size=(3, 3)),
nn.BatchNorm2d(3, track_running_stats=False),
)
x = torch.randn(4, 3, 30, 30)
j = vmap(jacrev(layers))(x)
```
And I get this error in the bn layer
`ValueError: expected 4D input (got 3D input)`
I think this should fundamentally be doable, and just might be because of how vmap and jacrev is implemented.
Is there any simple workaround, or am I misunderstanding anything?
Thank you for any help
|
https://github.com/pytorch/functorch/issues/867
|
closed
|
[] | 2022-06-11T00:15:32Z
| 2022-07-18T18:44:14Z
| 6
|
kwmaeng91
|
pytorch/pytorch
| 79,106
|
How to find the code in '...'?
|
https://github.com/pytorch/pytorch/blob/4305f8e9bda34f18eb7aacab51c63651cfc61802/torch/storage.py#L34
Here, I want to read the detailed code in `.cuda` func, however, I do not find any code about this api?π’
Hope someone could help meοΌβ€
cc @ngimel
|
https://github.com/pytorch/pytorch/issues/79106
|
closed
|
[
"module: cuda",
"triaged"
] | 2022-06-08T02:49:10Z
| 2022-06-13T20:44:10Z
| null |
juinshell
|
pytorch/data
| 574
|
Support offloading data pre-processing to auxiliary devices
|
### π The feature, motivation and pitch
Occasionally one might find that their GPU is idle due to a bottleneck on the input data pre-processing pipeline (which might include data loading/filtering/manipulation/augmentation/etc). In these cases one could improve resource utilization by offloading some of the pre-processing to auxiliary CPU devices.
I have demonstrated how to do this using gRPC in the following blog post: https://towardsdatascience.com/overcoming-ml-data-preprocessing-bottlenecks-with-grpc-ca30fdc01bee
TensorFlow has built in (experimental) support for this feature (https://www.tensorflow.org/api_docs/python/tf/data/experimental/service) that enables offloading in a few simple steps.
The request here is to include PyTorch APIs for offloading data pre-processing in a manner that would be simple and straight forward to the user... Similar to the TensorFlow APIs (though preferably without any limitations on pre-processing workload) .
### Alternatives
_No response_
### Additional context
_No response_
cc @SsnL @VitalyFedyunin @ejguan @NivekT
|
https://github.com/meta-pytorch/data/issues/574
|
open
|
[
"feature",
"module: dataloader",
"triaged",
"module: data"
] | 2022-06-07T10:12:00Z
| 2022-07-06T18:12:47Z
| 2
|
czmrand
|
pytorch/kineto
| 615
|
How to limit the scope of the profiler?
|
I am wondering if it is possible to limit the scope of the profiler to a particular part of the neural network. Currently, I am trying to analyze the bottleneck of my model using the following pseudocode:
```
import torch.profiler as profiler
with profiler.profile(
activities=[
profiler.ProfilerActivity.CPU,
profiler.ProfilerActivity.CUDA,
],
profile_memory=True,
schedule=profiler.schedule(wait=5, warmup=2, active=1, repeat=1),
on_trace_ready=profiler.tensorboard_trace_handler(tensorboard_logdir)
) as p:
for sample in dataloader:
model(sample)
```
However, the trace I created is still way too large (~800MB) for the tensorboard to function properly. Apparently tensorboard is only able to load the trace if it is smaller than about 500 MB, so I am thinking about limiting the trace of the profiler to only look at part of the neural net that leads to the issue. However, it seems like a warmup is necessary, so inserting the profiler.profile within a network will generate inaccurate results. Is there a way to limit the scope of the profiler without breaking the interface?
|
https://github.com/pytorch/kineto/issues/615
|
closed
|
[] | 2022-06-06T20:34:35Z
| 2022-06-21T17:57:42Z
| null |
hyhuang00
|
pytorch/torchx
| 510
|
Implement an HPO builtin
|
## Description
Add a builtin component for launching HPO (hyper-parameter optimization) jobs. At a high-level something akin to:
```
# for grid search
$ torchx run -s kubernetes hpo.grid_search --paramspacefile=~/parameters.json --component dist.ddp
# for bayesian search
$ torchx run -s kubernetes hpo.bayesian ...
```
In both cases we use the Ax/TorchX integration to run the HPO driver job. (see motivation section below for details)
## Motivation/Background
TorchX already integrates with Ax that supports both bayesian and grid_search HPO. Some definitions before we get started:
1. Ax: Experiment - ([docs](https://ax.dev/docs/glossary.html#experiment)) Defines the HPO search space and holds the optimizer state. Vends out the next set of parameters to search based on the observed results (relevant for Bayesian and Bandit optimizations, not so much for grid search).
2. Ax: Trials - ([docs](https://ax.dev/docs/glossary.html#trial)) A step in an experiment, aka a (training) job that runs with a specific set of hyper-parameters as vended out by the optimizer in the experiment
3. Ax: Runner - ([docs](https://ax.dev/docs/glossary.html#runner)) Responsible for launching trials.
Ax/TorchX integration is done at the Runner level. We implemented an [`ax/TorchXRunner`](https://ax.dev/api/runners.html#module-ax.runners.torchx) that implements Ax's `Runner` interface (do not confuse this with the TorchX runner. TorchX itself defines a runner concept). The `ax/TorchXRunner` runs the ax Trials using TorchX.
The [`ax/TorchXRunnerTest`](https://github.com/facebook/Ax/blob/main/ax/runners/tests/test_torchx.py#L72) serves as a full end-to-end example of how everything works. In summary the test runs a bayesian HPO to minimize the ["booth" function](https://en.wikipedia.org/wiki/Test_functions_for_optimization). **Note that in practice this function is replaced by your "trainer"**. The main module that computes the booth function given the parameters `x_1` and `x_2` as inputs is defined in [`torchx.apps.utils.booth`](https://github.com/pytorch/torchx/blob/main/torchx/apps/utils/booth_main.py).
The abridged code looks something like this:
```python
parameters: List[Parameter] = [
RangeParameter(
name="x1",
lower=-10.0,
upper=10.0,
parameter_type=ParameterType.FLOAT,
),
RangeParameter(
name="x2",
lower=-10.0,
upper=10.0,
parameter_type=ParameterType.FLOAT,
),
]
experiment = Experiment(
name="torchx_booth_sequential_demo",
search_space=SearchSpace(parameters=self._parameters),
optimization_config=OptimizationConfig(
objective = Objective(metric=TorchXMetric(name="booth_eval"),
minimize=True,
),
runner=TorchXRunner(
tracker_base=self.test_dir,
component=utils.booth,
scheduler="local_cwd",
cfg={"prepend_cwd": True},
),
)
scheduler = Scheduler(
experiment=experiment,
generation_strategy=choose_generation_strategy(search_space=experiment.search_space),
options=SchedulerOptions(),
)
for _ in range(3):
scheduler.run_n_trials(max_trials=2)
scheduler.report_results()
```
## Detailed Proposal
The task here is to essentially create pre-packaged applications for the code above. We can define a two types of HPO apps by the "strategy" used:
1. hpo.grid_search
2. hpo.bayesian
Each application will come with a companion "component" (e.g. `hpo.grid_search` and `hpo.bayesian`). The applications should be designed to take as input:
1. parameter space
2. what the objective function is (e.g. trainer)
3. torchx cfgs (e.g. scheduler, scheduler runcfg, etc)
4. ax experiment configs
The challenge is to be able to correctly and sanely "parameterize" the application in such a way that allows the user to sanely pass these argument from the CLI. For complex parameters such as parameter space, one might consider taking a file in a specific format rather than conjuring up a complex string encoding to pass as CLI input.
For instance for the `20 x 20` for `x_1` and `x_2` in the example above, rather than taking the parameter space as:
```
$ torchx run hpo.bayesian --parameter_space x_1=-10:10,x2_=-10:10
```
One can take it as a well defined python parameter file:
```
# params.py
# just defines the parameters using the regular Ax APIs
parameters: List[Parameter] = [
RangeParameter(
name="x1",
lower=-10.0,
upper=10.0,
parameter_type=ParameterType.FLOAT,
),
RangeParameter(
name="x2",
low
|
https://github.com/meta-pytorch/torchx/issues/510
|
open
|
[
"enhancement",
"module: components"
] | 2022-06-03T20:06:10Z
| 2022-10-27T01:55:08Z
| 0
|
kiukchung
|
huggingface/datasets
| 4,439
|
TIMIT won't load after manual download: Errors about files that don't exist
|
## Describe the bug
I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC which is provided in TIMIT:
## Steps to reproduce the bug
```python
data = load_dataset('timit_asr', 'clean')['train']
```
## Expected results
The dataset should load with no errors.
## Actual results
This error message:
```
File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place?
The files in the dataset look like the following:
```
Β³ PHONCODE.DOC
Β³ PROMPTS.TXT
Β³ SPKRINFO.TXT
Β³ SPKRSENT.TXT
Β³ TESTSET.DOC
```
...so why are these being excluded by the dataset loader?
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
https://github.com/huggingface/datasets/issues/4439
|
closed
|
[
"bug"
] | 2022-06-02T16:35:56Z
| 2022-06-03T08:44:17Z
| 3
|
drscotthawley
|
pytorch/vision
| 6,124
|
How to timing 'model.to(device)' correctly?
|
I am using pytorch's api in my python code to measure time for different layers of resnet152 to device(GPU, V-100).However, I cannot get a stable result.
Here is my code:
```python
import torch.nn as nn
device = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')
model = torchvision.models.resnet152(pretrained=True)
def todevice(_model_, _device_=device):
T0 = time.perf_counter()
_model_.to(_device_)
torch.cuda.synchronize()
T1 = time.perf_counter()
print("model to device %s cost:%s ms" % (_device_, ((T1 - T0) * 1000)))
model1 = nn.Sequential(*list(resnet152.children())[:6])
todevice(model1)
```
When I use the code to test at different time, I can always get different answers, some of them are ridiculous, even to `200ms`.
Also, there are 4 GPU(Tesla V100) in my lab, I don't know whether other extra GPUs will affect my result.
Could you tell me how to timing `model.to(device)` correctly? Is there anything wrong with my code or my lab environment?
|
https://github.com/pytorch/vision/issues/6124
|
closed
|
[
"question"
] | 2022-06-02T11:55:14Z
| 2022-06-06T08:34:34Z
| null |
juinshell
|
pytorch/functorch
| 848
|
AOTAutograd makes unsafe assumptions on how the backward pass will look like
|
## Context: how AOTAutograd works today
Given a function `f`:
- AOTAutograd traces out `run_forward_and_backward_f(*args, *grad_outputs)` to produce `forward_and_backward_trace`
- AOTAutograd partitions `forward_and_backward_trace` into a forward_trace and a backward_trace
- AOTAutograd compiles the forward_trace and backward_trace separately
- The compiled_forward_trace and compiled_backward_trace are stitched into an autograd.Function
## The Problem
In order to trace `run_forward_and_backward_f(*args, *grad_outputs)`, AOTAutograd needs to construct a Proxy for the grad_outputs. This ends up assuming properties of the grad_output: for example, AOTAutograd assumes that the grad_outputs are contiguous.
There are some more adversarial examples that we could construct. If the backward formula of at::sin were instead:
```
def sin_backward(grad_output, input):
if grad_output.is_sparse():
return grad_output * input.sin()
return grad_output * input.cos()
```
then, depending on the properties of the input, the backward that should get executed is different. If AOTAutograd assumes that the Proxy is dense and contiguous, then the backward pass of the generated autograd.Function would be incorrect.
## Potential proposal
Proposal: delay tracing the backward pass until the backward pass is invoked.
So, given a function `f`:
- AOTAutograd constructs a trace of f (that includes intermediates as outputs), `forward_trace`
- AOTAutograd constructs an autograd.Function that has `compiled(forward_trace)` as the forward pass
The autograd.Function's backward pass, when invoked:
- traces out `run_forward_and_backward_f(*args, *grad_outputs)` to produce `forward_and_backward_trace`
- takes the difference of `forward_and_backward_trace` and `forward_trace` to produce `backward_trace`.
- compiles `backward_trace` into `compiled_backward_trace`
- then invokes it.
Things that we haven't mentioned that will need to be thought about:
- how does AOTAutograd's rematerialization come into play here?
Things that we haven't mentioned that should be orthogonal:
- caching. `compiled(forward_trace)` needs a cache that uses the inputs as keys (among other things), `compiled(backward_trace)` needs a cache that takes the (inputs, grad_outputs) as keys.
- what if the backward is user-defined (e.g., autograd.Function) and isn't traceable? See https://github.com/pytorch/pytorch/issues/93723 for ideas
## Alternatives
Keep the current scheme (AOTAutograd traces out both the forward+backward pass at the time of the forward), but somehow prove to ourselves that the produced trace of the backward pass is always correct.
cc @Chillee @anijain2305 @ezyang @anjali411 @albanD
|
https://github.com/pytorch/functorch/issues/848
|
open
|
[] | 2022-06-01T18:18:28Z
| 2023-02-01T01:10:36Z
| 4
|
zou3519
|
huggingface/dataset-viewer
| 332
|
Change moonlanding app token?
|
Should we replace `dataset-preview-backend`with `datasets-server`:
- here: https://github.com/huggingface/moon-landing/blob/f2ee3896cff3aa97aafb3476e190ef6641576b6f/server/models/App.ts#L16
- and here: https://github.com/huggingface/moon-landing/blob/82e71c10ed0b385e55a29f43622874acfc35a9e3/server/test/end_to_end_apps.ts#L243-L271
What are the consequences then? How to do it without too much downtime?
|
https://github.com/huggingface/dataset-viewer/issues/332
|
closed
|
[
"question"
] | 2022-06-01T09:29:12Z
| 2022-09-19T09:33:33Z
| null |
severo
|
huggingface/dataset-viewer
| 325
|
Test if /valid is a blocking request
|
https://github.com/huggingface/datasets-server/issues/250#issuecomment-1142013300
> > the requests to /valid are very long: do they block the incoming requests?)
> Depends on if your long running query is blocking the GIL or not. If you have async calls, it should be able to switch and take care of other requests, if it's computing something then yeah, probably blocking everything else.
- [ ] find if the long requests like /valid are blocking the concurrent requests
- [ ] if so: fix it
|
https://github.com/huggingface/dataset-viewer/issues/325
|
closed
|
[
"bug",
"question"
] | 2022-05-31T13:43:20Z
| 2022-09-16T17:39:20Z
| null |
severo
|
huggingface/datasets
| 4,419
|
Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual`
|
**Is your feature request related to a problem? Please describe.**
So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
Find an example of an `assertEqual` over a tuple in π€ `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with the Python lists using `assertListEqual` rather than `assertEqual`.
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable, otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual` feel free to close this issue! Thanks π€
|
https://github.com/huggingface/datasets/issues/4419
|
closed
|
[
"enhancement"
] | 2022-05-30T12:13:18Z
| 2022-09-30T16:01:37Z
| 3
|
alvarobartt
|
huggingface/datasets
| 4,417
|
how to convert a dict generator into a huggingface dataset.
|
### Link
_No response_
### Description
Hey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset.
The generator contains all the samples needed for training the model but I cannot convert it into a huggingface dataset.
The code looks like this:
```
for ex in seqio_data:
print(ex[βtextβ])
```
I need to convert the seqio_data (generator) into huggingface dataset.
the complete seqio code goes here:
```
import functools
import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils
TaskRegistry = seqio.TaskRegistry
def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
dataset = load_dataset(**dataset_params)
if shuffle:
if seed:
dataset = dataset.shuffle(seed=seed)
else:
dataset = dataset.shuffle()
while True:
for item in dataset[str(split)]:
yield item[column]
def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
return tf.data.Dataset.from_generator(
functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
)
@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
"""Assign the value from the dataset to target_key in key_map"""
return {**key_map, target_key: x}
dataset_name = 'oscar-corpus/OSCAR-2109'
subset= 'mr'
dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True}
dataset_shapes = None
TaskRegistry.add(
"oscar_marathi_corpus",
source=seqio.FunctionDataSource(
dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
splits=("train", "validation"),
caching_permitted=False,
num_input_examples=dataset_shapes,
),
preprocessors=[
functools.partial(
target_to_key, key_map={
"targets": None,
}, target_key="targets")],
output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},
metric_fns=[]
)
dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
sequence_length=None,
split="train",
shuffle=True,
num_epochs=1,
shard_info=seqio.ShardInfo(index=0, num_shards=10),
use_cached=False,
seed=42
)
for _, ex in zip(range(5), dataset):
print(ex['targets'].numpy().decode())
```
### Owner
_No response_
|
https://github.com/huggingface/datasets/issues/4417
|
closed
|
[
"question"
] | 2022-05-29T16:28:27Z
| 2022-09-16T14:44:19Z
| null |
StephennFernandes
|
pytorch/pytorch
| 78,365
|
How to calculate the gradient of the previous layer when the gradient of the latter layer is given?
|
Hi, there. Can someone help me solve this problem? if the gradients of a certain layer is known, how can I use the API in torch to calculate the gradient of the previous layer?I would appreciate it if anyone could reply me in time.
|
https://github.com/pytorch/pytorch/issues/78365
|
closed
|
[] | 2022-05-26T16:05:40Z
| 2022-05-31T14:46:40Z
| null |
mankasto
|
pytorch/data
| 469
|
Suggestion: Dataloader with RPC-based workers
|
### π The feature
A dataloader which communicates with its workers via torch.distributed.rpc API.
### Motivation, pitch
Presently, process-based workers for Dataloader mean the workers live on the same server/PC as the process consuming that data. This incurs the following limitations:
- the pre-processing workload cannot scale beyond the GPU server capacity
- with random sampling, each worker might eventually see all the dataset, which is not cache friendly
### Alternatives
_No response_
### Additional context
A proof of concept is available ~~[here](https://github.com/nlgranger/data/blob/rpc_dataloader/torchdata/rpc/dataloader.py)~~ -> https://github.com/CEA-LIST/RPCDataloader
I have not yet tested how efficient this is compared to communicating the preprocessed batch data via process pipes. Obviously the use of shared-memory is lost when the worker is remote but the TensorPipe rpc backend might be able to take advantage of other fast transfer methods (GPUDirect, rmda?).
The load distribution scheme used in this first implementation is round-robin. I have not yet put thoughts on how to make this modifiable both in term of implementation and API.
|
https://github.com/meta-pytorch/data/issues/469
|
closed
|
[] | 2022-05-26T11:14:13Z
| 2024-01-30T09:29:17Z
| 2
|
nlgranger
|
pytorch/examples
| 1,010
|
Accessing weights of a pre-trained model
|
Hi,
Can you share how to print weights and biases for each layer of a pre-trained Alexnet model?
Regards,
Nivedita
|
https://github.com/pytorch/examples/issues/1010
|
closed
|
[] | 2022-05-26T06:50:13Z
| 2022-06-02T00:11:56Z
| 1
|
nivi1501
|
pytorch/TensorRT
| 1,091
|
β [Question] Linking error with PTQ function
|
## β Question
I am getting a linking error when using `torch_tensorrt::ptq::make_int8_calibrator`. I am using the Windows build based on CMake, so I'm not sure if it's a problem with the way it was built, but I suspect not since I can use functions from ::torchscript just fine.
I am trying to create a barebones program to test ptq based on examples/int8/ptq/main.cpp, and I get this linker error whenever `torch_tensorrt::ptq::make_int8_calibrator` is used. Any help would be greatly appreciated.
## Environment
- PyTorch Version (e.g., 1.0): 1.11+cu113
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch from pytorch.org
- CUDA version: 11.3
## Additional context
This is the linker error that I get:
> Severity Code Description Project File Line Suppression State
Error LNK2019 unresolved external symbol "__declspec(dllimport) class torch_tensorrt::ptq::Int8Calibrator<class nvinfer1::IInt8EntropyCalibrator2,class std::unique_ptr<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler>,struct std::default_delete<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler> > > > __cdecl torch_tensorrt::ptq::make_int8_calibrator<class nvinfer1::IInt8EntropyCalibrator2,class std::unique_ptr<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler>,struct std::default_delete<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler> > > >(class std::unique_ptr<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler>,struct std::default_delete<class torch::data::StatelessDataLoader<class torch::data::datasets::MapDataset<class torch::data::datasets::MapDataset<class datasets::CIFAR10,struct torch::data::transforms::Normalize<class at::Tensor> >,struct torch::data::transforms::Stack<struct torch::data::Example<class at::Tensor,class at::Tensor> > >,class torch::data::samplers::RandomSampler> > >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,bool)" (__imp_??$make_int8_calibrator@VIInt8EntropyCalibrator2@nvinfer1@@V?$unique_ptr@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@U?$default_delete@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@@std@@@std@@@ptq@torch_tensorrt@@YA?AV?$Int8Calibrator@VIInt8EntropyCalibrator2@nvinfer1@@V?$unique_ptr@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@U?$default_delete@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@datasets@data@torch@@VRandomSampler@samplers@34@@data@torch@@@std@@@std@@@01@V?$unique_ptr@V?$StatelessDataLoader@V?$MapDataset@V?$MapDataset@VCIFAR10@datasets@@U?$Normalize@VTensor@at@@@transforms@data@torch@@@datasets@data@torch@@U?$Stack@U?$Example@VTensor@at@@V12@@data@torch@@@transforms@34@@
|
https://github.com/pytorch/TensorRT/issues/1091
|
closed
|
[
"question",
"component: quantization",
"channel: windows"
] | 2022-05-26T01:19:17Z
| 2022-09-02T17:45:50Z
| null |
jonahclarsen
|
pytorch/torchx
| 503
|
add `torchx list` command and `Runner.list` APIs
|
## Description
<!-- concise description of the feature/enhancement -->
Add a `torchx list` and `Runner/Scheduler.list` methods. This would allow listing all jobs the user has launched and see their status when tracking multiple different jobs.
## Motivation/Background
<!-- why is this feature/enhancement important? provide background context -->
Currently users have to use the scheduler specific tools like `sacct/vcctl/ray job list` to see all of their jobs. Adding this would allow users to just interact via the torchx interface and not have to worry about interacting with other tools.
## Detailed Proposal
<!-- provide a detailed proposal -->
We'd likely want something similar to https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.list
Filters may be hard to support across all schedulers so we probably want to limit it to just a few common ones or none at all initially. We also want to filter so we only return torchx jobs instead of all jobs on the scheduler.
Limiting it to jobs that the user owns would also be nice to have though may not be feasible for all schedulers.
## Alternatives
<!-- discuss the alternatives considered and their pros/cons -->
## Additional context/links
<!-- link to code, documentation, etc. -->
* https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.list
* https://slurm.schedmd.com/sacct.html
* https://docs.aws.amazon.com/batch/latest/APIReference/API_ListJobs.html
* https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#list_namespaced_custom_object
* https://docs.ray.io/en/master/cluster/jobs-package-ref.html#jobsubmissionclient
|
https://github.com/meta-pytorch/torchx/issues/503
|
closed
|
[
"enhancement",
"module: runner",
"cli"
] | 2022-05-25T21:02:11Z
| 2022-09-21T21:52:31Z
| 10
|
d4l3k
|
pytorch/TensorRT
| 1,089
|
I wonder if torch_tensorrt support mixed precisions for different layer
|
**Is your feature request related to a problem? Please describe.**
I write a converter and plugin, but plugin only support fp32, then if I convert with enabled_precisions: torch.int8, then error happend
**Describe the solution you'd like**
if different layer can use different precisions, i can use fp32 this plugin layer and int8 other layers
|
https://github.com/pytorch/TensorRT/issues/1089
|
closed
|
[
"question"
] | 2022-05-25T10:07:21Z
| 2022-05-30T06:05:07Z
| null |
pupumao
|
huggingface/dataset-viewer
| 309
|
Scale the worker pods depending on prometheus metrics?
|
We could scale the number of worker pods depending on:
- the size of the job queue
- the available resources
These data are available in prometheus, and we could use them to autoscale the pods.
|
https://github.com/huggingface/dataset-viewer/issues/309
|
closed
|
[
"question"
] | 2022-05-25T09:56:05Z
| 2022-09-19T09:30:49Z
| null |
severo
|
huggingface/dataset-viewer
| 307
|
Add a /metrics endpoint on every worker?
|
https://github.com/huggingface/dataset-viewer/issues/307
|
closed
|
[
"question"
] | 2022-05-25T09:52:28Z
| 2022-09-16T17:40:55Z
| null |
severo
|
|
pytorch/data
| 454
|
Make `IterToMap` loading more lazily
|
### π The feature
Currently, `IterToMap` starts to load all data from prior `IterDataPipe` when the first `__getitem__` is invoked here.
https://github.com/pytorch/data/blob/13b574c80e8732744fee6ab9cb7e35b5afc34a3c/torchdata/datapipes/iter/util/converter.py#L78
We can stop loading data from prior `IterDataPipe` whenever we find the requested index. And, we might need to add a flag to prevent loading data multiple times.
### Motivation, pitch
This would improve the performance if users simply iterate over the `MapDataPipe` as we don't need to pre-load everything at the beginning of the iteration, basically, simulating the behavior of `IterDataPipe`.
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/meta-pytorch/data/issues/454
|
open
|
[
"help wanted"
] | 2022-05-24T14:14:30Z
| 2022-06-02T08:24:35Z
| 7
|
ejguan
|
pytorch/data
| 453
|
Fix installation document for nightly and official release
|
### π The doc issue
In https://github.com/pytorch/data#local-pip-or-conda, we talk about the commands would install nightly pytorch and torchdata, which is actually the official release.
We should change this part and add another section for nightly installation
### Suggest a potential alternative/fix
_No response_
|
https://github.com/meta-pytorch/data/issues/453
|
closed
|
[
"documentation"
] | 2022-05-24T14:07:13Z
| 2022-05-24T17:33:20Z
| 0
|
ejguan
|
pytorch/torchx
| 498
|
Document .torchxconfig behavior in home directory
|
## π Documentation
## Link
<!-- link to the problematic documentation -->
https://pytorch.org/torchx/main/runner.config.html
Context: https://fb.workplace.com/groups/140700188041197/posts/326515519459662/?comment_id=328106399300574&reply_comment_id=328113552633192
## What does it currently say?
<!-- copy paste the section that is wrong -->
```
The CLI only picks up .torchxconfig files from the current-working-directory (CWD) so chose a directory where you typically run torchx from.
```
## What should it say?
<!-- the proposed new documentation -->
It should explain how it can also be read from home and how the options are merged together.
## Why?
<!-- (if not clear from the proposal) why is the new proposed documentation more correct/improvement over the existing one? -->
Behavior is unclear to users.
|
https://github.com/meta-pytorch/torchx/issues/498
|
open
|
[
"documentation"
] | 2022-05-23T18:39:05Z
| 2022-06-16T00:04:19Z
| 2
|
d4l3k
|
pytorch/serve
| 1,647
|
How to return n images instead of 1?
|
Hi,
I am trying to deploy a DALL-E type model, in which you get as input a text and you receive as output a couple of images.
```
outputs = []
for i, image in enumerate(images):
byte_output = io.BytesIO()
output.convert('RGB').save(byte_output, format='JPEG')
bin_img_data = byte_output.getvalue()
outputs.append(bin_img_data)
return [outputs]
```
This does not work and results in a failure, with the logs from torchserve saying 'object of type bytearray is not json serializable'
However, changing `return [outputs]` into `return [outputs[0]]` makes it work. What can I do regarding this?
|
https://github.com/pytorch/serve/issues/1647
|
closed
|
[] | 2022-05-23T15:13:07Z
| 2022-05-23T17:21:30Z
| null |
mhashas
|
pytorch/data
| 436
|
Is our handling of open files safe?
|
Our current strategy is to wrap all file handles in a [`StreamWrapper`](https://github.com/pytorch/pytorch/blob/88fca3be5924dd089235c72e651f3709e18f76b8/torch/utils/data/datapipes/utils/common.py#L154). It dispatches all calls to wrapped object and adds a `__del__` method:
```py
class StreamWrapper:
def __init__(self, file_obj):
self.file_obj = file_obj
def __del__(self):
try:
self.file_obj.close()
except Exception:
pass
```
It will be called as soon as there are no more references to instance. The rationale is that if this happens we can close the wrapped file object. Since the `StreamWrapper` has a reference to the file object, GC should never try to delete the file object before `__del__` of the `StreamWrapper` is called. Thus, we should never delete an open file object.
Unfortunately, the reasoning above seems not to be correct. In some cases, it seems GC will delete the file object before the `StreamWrapper` is deleted. This will emit a warning which the `torchvision` test suite will turn into an error. This was discussed at length in pytorch/vision#5801 and includes minimum requirements to reproduce the issue. Still, there was no minimal reproduction outside of the test environment found. The issue was presumably fixed in pytorch/pytorch#76345, but was popping up again in https://github.com/pytorch/data/runs/6500848588#step:9:1977.
Thus, I think it is valid question to ask if our approach is safe at all. It would be a quite bad UX if a user gets a lot of unclosed file warnings although they used `torchdata` or in extension `torchvision.datasets` as documented.
|
https://github.com/meta-pytorch/data/issues/436
|
closed
|
[] | 2022-05-23T10:37:11Z
| 2023-01-05T15:05:51Z
| 3
|
pmeier
|
huggingface/sentence-transformers
| 1,562
|
Why is "max_position_embeddings" 514 in sbert where as 512 in bert
|
Why is "max_position_embeddings" different in sbert then in Bert?
|
https://github.com/huggingface/sentence-transformers/issues/1562
|
open
|
[] | 2022-05-22T17:27:01Z
| 2022-05-22T20:52:40Z
| null |
omerarshad
|
pytorch/TensorRT
| 1,076
|
β [Question] What am I missing to install TensorRT v1.1.0 in a Jetson with JetPack 4.6
|
## β Question
I am getting some errors trying to install TensorRT v1.1.0 in a Jetson with JetPack 4.6 for using with Python3
## What you have already tried
I followed the Official installation of Pytorch v1.10.0 by using binaries according to the [ offical Nvidia Forum](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048). Then, I followed the official steps of this repository which are:
1. Install Bazel - successfully
2. Build Natively on aarch64 (Jetson) - Here I am getting the problem
## Environment
- PyTorch Version :1.10.0
- OS (e.g., Linux): Ubuntu 18.04
- How you installed PyTorch: Using pip3 according to [Nvidia Forum](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048)
- Python version: 3.6
- CUDA version: 10.2
- TensorRT version: 8.2.1.8
- CUDNN version: 8.2.0.1
- GPU models and configuration: Jetson NX with JetPack 4.6
- Any other relevant information: Installation is clean. I am using the CUDA and TensorRT that come flashed wich JetPack.
## Additional context
Starting from a clean installation of JetPack and Torch 1.10.0 installed by using official binaries, I describe the installation steps I did for using this repository with the errors I am getting.
### 1- Install Bazel
```
git clone -b v1.1.0 https://github.com/pytorch/TensorRT.git
sudo apt-get install openjdk-11-jdk
export BAZEL_VERSION=$(cat /home/tkh/TensorRT.bazelversion)
mkdir bazel
cd bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
unzip bazel-$BAZEL_VERSION-dist.zip
bash ./compile.sh
cp output/bazel /usr/local/bin/
```
At this point I can see `bazel 5.1.1- (@non-git)` with `bazel --version`.
### 2- Build Natively on aarch64 (Jetson)
Then, I modified my WORKSPACE file of this repository in this way
```
workspace(name = "Torch-TensorRT")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
name = "rules_python",
sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz",
)
load("@rules_python//python:pip.bzl", "pip_install")
http_archive(
name = "rules_pkg",
sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
"https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
],
)
load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
git_repository(
name = "googletest",
commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
remote = "https://github.com/google/googletest",
shallow_since = "1570114335 -0400",
)
# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
name = "torch_tensorrt",
path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt"
)
# CUDA should be installed on the system locally
new_local_repository(
name = "cuda",
build_file = "@//third_party/cuda:BUILD",
path = "/usr/local/cuda-10.2/",
)
new_local_repository(
name = "cublas",
build_file = "@//third_party/cublas:BUILD",
path = "/usr",
)
#############################################################################################################
# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
#############################################################################################################
####################################################################################
# Locally installed dependencies (use in cases of custom dependencies or aarch64)
####################################################################################
# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path
# with your local libtorch, just point deps at the same path to satisfy bazel.
# NOTE: NVIDIA's aarch64 PyTorch (python) wheel file uses the CXX11 ABI unlike PyTorch's standard
# x86_64 python distribution. If using NVIDIA's version just point to the root of the package
# for both versions here and do not use --config=pre-cxx11-abi
new_local_repository(
name = "libtorch",
path = "/home/tkh-ad/.local/lib/python3.6/site-packages/torch",
build_file = "third_party/libtorch/BUILD"
)
new_local_repository(
name = "libtorch_pre_cxx11_abi",
path = "/home/tkh-ad/.local/lib/python3.6/site-packages/torch",
build_file = "third_party/libtorch/BUILD"
)
new_local_repository(
name = "cudnn",
path = "/usr/local/cud
|
https://github.com/pytorch/TensorRT/issues/1076
|
closed
|
[
"question",
"channel: linux-jetpack"
] | 2022-05-20T13:56:30Z
| 2022-05-20T22:35:42Z
| null |
mjack3
|
pytorch/data
| 433
|
HashChecker example is broken
|
https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/torchdata/datapipes/iter/util/hashchecker.py#L36-L48
Running this will raise a `StopIteration`. The reason is simple: we want to read from a stream that was already exhausted by the hash checking. The docstring tells us that much
https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/torchdata/datapipes/iter/util/hashchecker.py#L32-L33
and we correctly set `rewind=False`.
|
https://github.com/meta-pytorch/data/issues/433
|
closed
|
[
"documentation",
"good first issue"
] | 2022-05-20T11:44:59Z
| 2022-05-23T22:29:38Z
| 1
|
pmeier
|
pytorch/functorch
| 823
|
Dynamic shape error in vmap with jacrev of jacrev
|
I'd like to compute the following expression in a vectorized way: first take the derivative wrt. to the data, and then take the derivative of this expression wrt. the parameters. I tried implementing it like this
```
func, params, buffer = make_functional_with_buffers(network)
vmap(jacrev(jacrev(func, 2), 0), (None, None, 0))(params, buffers, data)
```
but this isn't working since I get this error message:
> RuntimeError: vmap: We do not support batching operators that can support dynamic shape. Attempting to batch over indexing with a boolean mask.
I'm a bit surprised since I expected a second application of `jacrev` shouldn't change how `vmap` interacts with the function, but I guess that was incorrect.
**Edit**:
I also tried replacing this expression above using the `hessian` operation (and just ignoring the small computational overhead of computing the double derivatives I'm not interested in)
```
vmap(hessian(func, (0, 2)), (None, None, 0))(params, buffers, data)
```
but that code resulted in the same error.
Can you please point me to information about how to solve this problem?
|
https://github.com/pytorch/functorch/issues/823
|
closed
|
[] | 2022-05-20T10:41:39Z
| 2022-05-25T12:12:20Z
| 5
|
zimmerrol
|
pytorch/data
| 432
|
The developer install instruction are outdated
|
https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/CONTRIBUTING.md?plain=1#L49-L56
While debugging #418 it took my quite a while to figure out that I need to set
https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/tools/setup_helpers/extension.py#L41
for the C++ code to be built.
|
https://github.com/meta-pytorch/data/issues/432
|
closed
|
[
"documentation"
] | 2022-05-20T08:35:01Z
| 2022-06-10T20:04:08Z
| 3
|
pmeier
|
huggingface/datasets
| 4,374
|
extremely slow processing when using a custom dataset
|
## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which i load into HF dataset
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
further i use a pre-processing function to clean the dataset
`lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)`
the following processing takes astronomical time to process, while hoging all the ram.
similar dataset of same size that's available in the huggingface hub works completely fine. which runs the same processing function and has the same amount of data.
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
the hours predicted to preprocess are as follows:
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
note: both the datasets are almost actually same, just provided by different sources with has +/- some samples, only one is hosted on the HF hub and the other is downloaded in a text format.
## Steps to reproduce the bug
```
import datasets
import psutil
import sys
import glob
from fastcore.utils import listify
import re
import gc
def remove_non_indic_sentences(example):
tmp_ls = []
eng_regex = r'[. a-zA-Z0-9ΓΓΓ
ΓΆΓ€Γ₯ _.,!"\'\/$]*'
for e in listify(example['text']):
matches = re.findall(eng_regex, e)
for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]):
if len(list(match.split(" "))) > 2:
e = re.sub(match," ",e,count=1)
tmp_ls.append(e)
gc.collect()
example['clean_text'] = tmp_ls
return example
lang_dataset = datasets.load_dataset("text", data_files="hi.txt")
lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)
## same thing work much faster when loading similar dataset from hub
lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", split="train", use_auth_token=True)
lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)
```
## Actual results
similar dataset of same size that's available in the huggingface hub works completely fine. which runs the same processing function and has the same amount of data.
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)
**the hours predicted to preprocess are as follows:**
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
**i even tried the following:**
- sharding the large 22gb text files into smaller files and loading
- saving the file to disk and then loading
- using lesser num_proc
- using smaller batch size
- processing without batches ie : without `batched=True`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2.dev0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.9.7
- PyArrow version:8.0.0
|
https://github.com/huggingface/datasets/issues/4374
|
closed
|
[
"bug",
"question"
] | 2022-05-19T14:18:05Z
| 2023-07-25T15:07:17Z
| null |
StephennFernandes
|
huggingface/optimum
| 198
|
Posibility to load an ORTQuantizer or ORTOptimizer from Onnx
|
FIrst, thanks a lot for this library, it make work so much easier.
I was wondering if it's possible to quantize and then optimize a model (or the reverse) but looking at the doc, it seems possible to do so only by passing a huggingface vanilla model.
Is it possible to do so with already compiled models?
Like : MyFineTunedModel ---optimize----> MyFineTunedOnnxOptimizedModel -----quantize-----> MyFinalReallyLightModel
```python
# Note that self.model_dir is my local folder with my custom fine-tuned hugginface model
onnx_path = self.model_dir.joinpath("model.onnx")
onnx_quantized_path = self.model_dir.joinpath("quantized_model.onnx")
onnx_chad_path = self.model_dir.joinpath("chad_model.onnx")
onnx_path.unlink(missing_ok=True)
onnx_quantized_path.unlink(missing_ok=True)
onnx_chad_path.unlink(missing_ok=True)
quantizer = ORTQuantizer.from_pretrained(self.model_dir, feature="token-classification")
quantized_path = quantizer.export(
onnx_model_path=onnx_path, onnx_quantized_model_output_path=onnx_quantized_path,
quantization_config=AutoQuantizationConfig.arm64(is_static=False, per_channel=False),
)
quantizer.model.save_pretrained(optimized_path.parent) # To have the model config.json
quantized_path.parent.joinpath("pytorch_model.bin").unlink() # To ensure that we're not loading the vanilla pytorch model
# Load an Optimizer from an onnx path...
# optimizer = ORTOptimizer.from_pretrained(quantized_path.parent, feature="token-classification") <-- this fails
# optimizer.export(
# onnx_model_path=onnx_path,
# onnx_optimized_model_output_path=onnx_chad_path,
# optimization_config=OptimizationConfig(optimization_level=99),
# )
model = ORTModelForTokenClassification.from_pretrained(quantized_path.parent, file_name="quantized_model.onnx")
# Ideally would load onnx_chad_path (with chad_model.onnx) if the commented section works.
tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(self.model_dir)
self.pipeline = cast(TokenClassificationPipeline, pipeline(
model=model, tokenizer=tokenizer,
task="token-classification", accelerator="ort",
aggregation_strategy=AggregationStrategy.SIMPLE,
device=device_number(self.device),
))
```
Note that optimization alone works perfectly fine, quantization too, but I was hopping that both would be feasible.. unless optimization also does some kind of quantization or lighter model ?
Thanks in advance.
Have a great day
|
https://github.com/huggingface/optimum/issues/198
|
closed
|
[] | 2022-05-18T20:19:23Z
| 2022-06-30T08:33:58Z
| 1
|
ierezell
|
pytorch/pytorch
| 77,732
|
multiprocessing: how to put a model which copied from main thread in the shared_queue
|
### π Describe the bug
1. If I shared a model in cuda, it raises
```RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.```
Specifically, I accept a model from the main process and return a duplication create by using ```copy.deepcopy(model)```
2. ```torch.multiprocessing.manager.queue.get``` taken a long time to finish. If the queue just passed a file descriptor, I don't think it should take 1/3 of the total time, is there any faster way?
Here's my script
I opened a [thread](https://discuss.pytorch.org/t/how-sharing-memory-actually-worked-in-pytorch/151706) in pytorch'forum also
I think this is related to #10375 #9996 and #7204
```python
import torch
import torch.multiprocessing as mp
from copy import deepcopy
from functools import partial
from time import *
from torchvision import models
import numpy as np
from tqdm import tqdm
def parallel_produce(
queue: mp.Queue,
model_method,
i
) -> None:
pure_model: torch.nn.Module = model_method()
# if you delete this line, model can be passed
pure_model.to('cuda')
pure_model.share_memory()
while True:
corrupt_model = deepcopy(pure_model)
dic = corrupt_model.state_dict()
dic[list(dic.keys())[0]]*=2
corrupt_model.share_memory()
queue.put(corrupt_model)
def parallel(
valid,
iteration: int = 1000,
process_size: int=2,
buffer_size: int=2
):
pool = mp.Pool(process_size)
manager = mp.Manager()
queue = manager.Queue(buffer_size)
SeedSequence = np.random.SeedSequence()
model_method = partial(models.squeezenet1_1,True)
async_result = pool.map_async(
partial(
parallel_produce,
queue,
model_method,
),
SeedSequence.spawn(process_size),
)
time = 0
for iter_times in tqdm(range(iteration)):
start = monotonic_ns()
# this takes a long time
corrupt_model: torch.nn.Module = queue.get()
time += monotonic_ns() - start
corrupt_model.to("cuda")
corrupt_result = corrupt_model(valid)
del corrupt_model
pool.terminate()
print(time / 1e9)
if __name__ == "__main__":
valid = torch.randn(1,3,224,224).to('cuda')
parallel(valid)
```
#total time of queue.get taken

### Versions
Collecting environment information...
PyTorch version: 1.10.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22000-SP0
Is CUDA available: True
CUDA runtime version: 11.5.119
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU
Nvidia driver version: 512.77
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] pytorchfi==0.6.0
[pip3] torch==1.10.0+cu113
[pip3] torch-tb-profiler==0.3.1
[pip3] torchaudio==0.10.0+cu113
[pip3] torchei==0.0.4
[pip3] torchinfo==1.5.4
[pip3] torchstat==0.0.7
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.11.1+cu113
[conda] Could not collect
cc @VitalyFedyunin
|
https://github.com/pytorch/pytorch/issues/77732
|
closed
|
[
"module: multiprocessing",
"triaged"
] | 2022-05-18T07:41:34Z
| 2022-06-29T08:18:00Z
| null |
Force1ess
|
pytorch/vision
| 6,034
|
Question about center-ness branch in FCOS
|
Hi, thank you for your great work. I'm learning FCOS these days. I find some differences about position of center-ness between code and paper. In paper(https://arxiv.org/abs/1904.01355), the center-ness branch is put together with the classification branch.

But in the code, the center-ness and regression branches are put together.
https://github.com/pytorch/vision/blob/a1232c212d7cf84806189910ba83bc36bcea916c/torchvision/models/detection/fcos.py#L202-L233
Could you tell me why? thanks.
|
https://github.com/pytorch/vision/issues/6034
|
closed
|
[
"question"
] | 2022-05-17T07:59:37Z
| 2022-05-18T00:47:41Z
| null |
WZMIAOMIAO
|
pytorch/TensorRT
| 1,070
|
β [Question] How to convert Torch-TensorRT module to TRT engine?
|
## β Question
How to convert Torch-TensorRT module (*.ts) to TRT engine? Is there any Python API to do that?
## What you have already tried
In examples, I found
```cpp
auto engine = torch_tensorrt::ts::convert_method_to_trt_engine(mod, "forward", compile_spec);
```
in https://github.com/pytorch/TensorRT/blob/master/examples/int8/qat/main.cpp
If this is the correct way to do converting? If yes, is there any Python API?
## Environment
amd64 + Linux
all software is newest version
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/1070
|
closed
|
[
"question"
] | 2022-05-17T07:36:45Z
| 2022-05-23T16:16:13Z
| null |
lingffff
|
pytorch/pytorch
| 77,589
|
How to handle __module__ attribute for Public API bindings
|
While working on the NN onboarding lab (with corresponding closed PR: #77425 ), after registering the functional version of new module in `torch/nn/functional.py` The following test would fail ` pytest test/test_public_bindings.py` with:
```Bash
Full list:
# torch.nn.functional.bias:
- Is public: it is an attribute that does not start with `_` on a module that does not have `__all__` defined
- Does NOT look public: because its `__module__` attribute (`torch._C._nn`) is not within the torch library or does not start with the submodule where it is defined (`torch.nn.functional`)
- You can do either of these two things to fix this problem:
- To make it NOT public: either define a `__all__` for `torch.nn.functional` or add a `_` at the beginning of the name
- To make it look public: make sure the `__module__` is properly set and points to a submodule of `torch.nn.functional`
```
I defined the functional version analogously to the linear module:
```Python
bias = _add_docstr(
torch._C._nn.bias,
r"""
bias(input, bias) -> Tensor
Adds a bias vector the last dimension of input tensor
Shape:
- Input: math:`(*, num\_features)` where `*` means any number of
additional dimensions, including none
- Bias: :math:`(num\_features)` or :math:`()`
- Output: :math:`(*, num\_features)` where `*` means any number of
additional dimensions, including none, same shape as Input
""")
```
I add this function 'bias' to the allowlist here: `test/allowlist_for_publicAPI.json` in the list for `"torch.nn.functional"`
When reading the test function though it says that no new functions should be added to this list. If I def bias above and then implement `bias.__module__ = 'torch.nn.functional'` This does indeed work.
Is that the correct solution?
Would it be a nicer API if there was a function analogous to `_add_docstr` which also defined the `__module__` attribute when setting the doc string.
cc @mruberry
|
https://github.com/pytorch/pytorch/issues/77589
|
open
|
[
"module: tests",
"triaged"
] | 2022-05-16T20:40:52Z
| 2022-05-17T14:37:45Z
| null |
drisspg
|
huggingface/datasets
| 4,352
|
When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way
|
## Describe the bug
Recently I was trying to using `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types i had defined for them did not match the types that came back. Because of this, i ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short i ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop that loops over cols that I figured out what was going on.
It seems like `.map()` could set a boolean that it's checked that for at least 1 instance from the dataset, the returned data's types match the types provided by the `features` param and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature but it feels more like a bug to me.
## Steps to reproduce the bug
I don't have explicit code to repro the bug, but ill show an example
Code prior to the fix:
```python
def preprocess(examples):
# returns an encoded data dict with keys that match the features, but the types do not match
...
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['audit_type'].unique().tolist()
features = Features({
'image': Array3D(dtype="uint8", shape=(3, 224, 224))),
'input_ids': Sequence(feature=Value(dtype='int64'))),
'attention_mask': Sequence(Value(dtype='int64'))),
'token_type_ids': Sequence(Value(dtype='int64'))),
'bbox': Array2D(dtype="int64", shape=(512, 4))),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names)
```
The Features set that fixed it:
```python
features = Features({
'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))),
'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))),
'attention_mask': Sequence(Sequence(Value(dtype='int64'))),
'token_type_ids': Sequence(Sequence(Value(dtype='int64'))),
'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
```
The difference between my original code (which was based on documentation) and the working code is the addition of the `Sequence(...)` to 4/5 features as I am working with paginated data and the doc examples are not.
## Expected results
Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they are not validated.
## Actual results
Specify the actual results or traceback.
Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages dont make this obvious
Example errors:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
datasets version: 2.1.0
Platform: macOS-12.2.1-arm64-arm-64bit
Python version: 3.9.12
PyArrow version: 6.0.1
Pandas version: 1.4.2
|
https://github.com/huggingface/datasets/issues/4352
|
open
|
[
"bug"
] | 2022-05-14T17:55:15Z
| 2022-05-16T15:09:17Z
| null |
plamb-viso
|
huggingface/optimum
| 191
|
Not possible to configure GPU in pipelines nor leveraging batch_size parallelisation
|
When setting the `device` variable in the `pipeline` function/class to `>= 0`, an error appears `AttributeError: 'ORTModelForCausalLM' object has no attribute 'to' - when running in GPU`. This was initially reported in #161 so opening this issue to encompass supporting the `device` parameter in the ORT classes. This is important as otherwise it won't be possible to allow configuration of CPU/GPU similar to normal transformer libraries.
Is there currently a workaround to ensure that the class is run on GPU? By default it seems this woudl eb set in CPU even when GPU is available:
```python
>>> m = ORTModelForCausalLM.from_pretrained("gpt2", from_transformers=True)
>>> t = AutoTokenizer.from_pretrained("gpt2")
>>> pp = pipeline("text-generation", model=m, tokenizer=t)
>>> pp.device
device(type='cpu')
```
This is still the case even with the `optimum[onnxruntime-gpu]` package. I have validated by testing against a normal transformer with `batch_size=X` (ie `pp = pipeline("text-generation", model=m, tokenizer=t, batch_size=128)`) and it seems there is no optimization with parallel processing with optimum, whereas normal transformer is orders of magnitude faster (which is most likely as it's not utilizing the parallelism)
I can confirm that the model is loaded with GPU correctly:
```python
>>> m.device
device(type='cuda', index=0)
```
And GPU is configured correctly:
```python
>>> from optimum.onnxruntime.utils import _is_gpu_available
>>> _is_gpu_available()
True
```
Is there a way to enable GPU for processing with batching in optimum?
|
https://github.com/huggingface/optimum/issues/191
|
closed
|
[
"inference"
] | 2022-05-14T05:05:51Z
| 2022-09-05T08:37:46Z
| 4
|
axsaucedo
|
pytorch/vision
| 6,011
|
Imagenet Version not documented?
|
### π The doc issue
Hello torchvision team,
First, thanks for the epic work you are all putting into this tool! I would like to know the exact version of imagenet used at pertaining different models in torchvision, for research purposes regarding model inversion. All of them use the 2012 Imagenet Dataset version or maybe some newer version?
Thank you,
Tudor
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/vision/issues/6011
|
open
|
[
"question"
] | 2022-05-13T11:24:32Z
| 2022-05-13T11:51:24Z
| null |
tudorcebere
|
huggingface/datasets
| 4,343
|
Metrics documentation is not accessible in the datasets doc UI
|
**Is your feature request related to a problem? Please describe.**
Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the metric expects as an input, for example for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to go look into the code to understand what the metric expects.
**Describe the solution you'd like**
Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63
I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
|
https://github.com/huggingface/datasets/issues/4343
|
closed
|
[
"enhancement",
"Metric discussion"
] | 2022-05-13T07:46:30Z
| 2022-06-03T08:50:25Z
| 1
|
fxmarty
|
huggingface/optimum
| 183
|
about run_glue.py
|
how to enable GPU when run run_glue.py
|
https://github.com/huggingface/optimum/issues/183
|
closed
|
[] | 2022-05-12T12:13:16Z
| 2022-06-23T13:35:25Z
| 1
|
yichuan-w
|
huggingface/dataset-viewer
| 255
|
Create a custom nginx image?
|
I think it would be clearer to create a custom nginx image, in /services/reverse-proxy, than the current "hack" with a template and env vars on the official nginx image.
This way, all the services (API, worker, reverse-proxy) would follow the same flow.
|
https://github.com/huggingface/dataset-viewer/issues/255
|
closed
|
[
"question"
] | 2022-05-12T08:48:12Z
| 2022-09-16T17:43:30Z
| null |
severo
|
huggingface/datasets
| 4,323
|
Audio can not find value["bytes"]
|
## Describe the bug
I wrote down _generate_examples like:

but where is the bytes?

## Expected results
value["bytes"] is not None, so i can make datasets with bytes, not path
## bytes looks like:
blah blah~~
\xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03
blah blah~~
that function not return None
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:2.2.1
- Platform:ubuntu 18.04
- Python version:3.6.9
- PyArrow version:6.0.1
|
https://github.com/huggingface/datasets/issues/4323
|
closed
|
[
"bug"
] | 2022-05-12T08:31:58Z
| 2022-07-07T13:16:08Z
| 9
|
YooSungHyun
|
pytorch/pytorch
| 77,341
|
The input of the forward part of my model is a tuple, which cannot be converted to onnx format according to the existing methods. Can you tell me how to solve it
|
### π Describe the bug
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Linear(32, 16)
self.relu1 = nn.ReLU(inplace=True)
self.relu2 = nn.ReLU(inplace=True)
self.fc = nn.Linear(32, 2)
def forward(self, x):
x1, x2 = x
x1 = self.conv1(x1)
x1 = self.relu1(x1)
x2 = self.conv1(x2)
x2 = self.relu1(x2)
out = torch.cat((x1, x2), dim=-1)
out = self.fc(out)
return out
model = Model()
model.eval()
x1 = torch.randn((2, 10, 32))
x2 = torch.randn((2, 10, 32))
x = (x1, x2)
torch.onnx.export(model,
x,
'model.onnx',
input_names=["input"],
output_names=["output"],
dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}}
)
print("Done")
### Versions
Be like title!
|
https://github.com/pytorch/pytorch/issues/77341
|
closed
|
[
"module: onnx",
"triaged"
] | 2022-05-12T06:38:49Z
| 2022-05-18T01:04:49Z
| null |
singaln
|
pytorch/extension-ffi
| 26
|
How to fix "undefined symbol: state error" once importing a c shared library?
|
I'm trying to import the compiled c shared library "_crop_and_resize.so", but I am receiving below error!
pytorch version = 1.9.0+cu102
Torchvision version = 0.9.1
python version = 3.6.10
```
>>> import _crop_and_resize as _backend
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /home/username/DeepFacade01/roialign/roi_align/_ext/crop_and_resize/_crop_and_resize.so: undefined symbol: state
>>>
```
|
https://github.com/pytorch/extension-ffi/issues/26
|
closed
|
[] | 2022-05-12T00:01:49Z
| 2022-05-14T22:33:53Z
| null |
Abbsalehi
|
pytorch/examples
| 1,004
|
error: the following arguments are required: DIR
|
Excuse meοΌhow can I deal with this problemοΌ
<img width="1227" alt="image" src="https://user-images.githubusercontent.com/58496897/167763473-f5d2a189-3ac5-4e77-9451-c6817065d5ed.png">
|
https://github.com/pytorch/examples/issues/1004
|
closed
|
[] | 2022-05-11T03:31:07Z
| 2022-07-01T16:07:30Z
| 1
|
Elijah123463
|
pytorch/pytorch
| 77,228
|
How can i remove 'lib/libtorch_cuda.so' gracefully to make deploy more small. γQuestions and Helpγ
|
i want import torch in my project . and i will not use 'cuda' clearly .
how can i to remove 'lib/libtorch_cuda.so' gracefully to make deploy package more smaller. (serverless deploy)
i remove lib/libtorch_cuda.so ,then cmd 'python3 index.py' . the result show...
**Traceback (most recent call last):
File "index.py", line 7, in <module>
import torch
File "/root/python/src/pic-linux_all/torch/__init__.py", line 199, in <module>
from torch._C import * # noqa: F403
ImportError: libtorch_cuda.so: cannot open shared object file: No such file or directory**
what should I do.
### torch : i use 'pip install' to install it
### Versions
Python version: 3.8.0 (default, May 11 2022, 08:57:48) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] (64-bit runtime)
Python platform: Linux-3.10.0-514.26.2.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
|
https://github.com/pytorch/pytorch/issues/77228
|
closed
|
[
"triaged"
] | 2022-05-11T03:27:31Z
| 2022-05-12T00:26:04Z
| null |
wangping886
|
huggingface/dataset-viewer
| 241
|
Setup the users directly in the images, not in Kubernetes?
|
See the second point in https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/: using `runAsUser` / `runAsGroup` is a (relative) security risk.
|
https://github.com/huggingface/dataset-viewer/issues/241
|
closed
|
[
"question"
] | 2022-05-10T15:15:49Z
| 2022-09-19T08:57:20Z
| null |
severo
|
pytorch/TensorRT
| 1,049
|
β [Question] How can I move the converted tensorRT model in a Jetson system?
|
## β Question
I optimized a pytorch module with torch-TensorRT. How can I move the engine to a Jetson?
## What you have already tried
I tried torch.jit.load('trt_traced_model.ts')
but get **__torch__.torch.classes.tensorrt.Engine** error
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 10.0
- OS (e.g., Linux): ARM Ubuntu 18
- How you installed PyTorch : pip from offical Nvidia support
- Python version: 3.6
- CUDA version: 10.2
- GPU models and configuration: Jetson NX
## Additional context
I have a Jetson NX system with jetpack 4.6, torch v0.10.0 and torchvision v0.11.0 where I want to deploy a tensorRT model.
For that in my main computer I installed this repository and converted my model to tensorRT successfully. I need to move it into the Jetson for production.
This is the code that I use to export to tensorRT (main computer)
```
model.cuda().eval()
model = torch.jit.trace(model, [torch.rand(1, 3, 224, 224).cuda()])
trt_model_fp32 = torch_tensorrt.compile(model,
inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
enabled_precisions=torch.float32, # Run with FP32
)
torch.jit.save(trt_model_fp32, dir)
```
This is in my Jetson
`model = torch.jit.load(dir)`
but i get **__torch__.torch.classes.tensorrt.Engine** error
Jetson hasn't installed torch-tensorRT. How can I move the tensorRT model? Do I need to install this repo also in the Jetson?
Thanks!
|
https://github.com/pytorch/TensorRT/issues/1049
|
closed
|
[
"question"
] | 2022-05-10T15:08:47Z
| 2022-05-10T15:45:51Z
| null |
mjack3
|
huggingface/datasets
| 4,304
|
Language code search does direct matches
|
## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourages addition of the additional codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_") but this would lead to those datasets being hidden in datasets search.
## Steps to reproduce the bug
1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL))
2. Look for datasets using the full code
3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq))
Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`.
One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :)
## Expected results
Datasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`).
## Actual results
The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches.
## Environment info
(web app)
|
https://github.com/huggingface/datasets/issues/4304
|
open
|
[
"bug"
] | 2022-05-10T11:59:16Z
| 2022-05-10T12:38:42Z
| 1
|
leondz
|
pytorch/TensorRT
| 1,047
|
can torch-tensorrt-1.1.0 support libtorch1.9 and cuda10.2?
|
## β Question
I want to know if torch-tensorrt-1.1.0 can be compiled with libtorch1.9 and cuda-10.2 ?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.9.0):
- CPU Architecture: x86
- OS (e.g., Linux): linux
- CUDA version:10.2
- GPU models and configuration:T4
|
https://github.com/pytorch/TensorRT/issues/1047
|
closed
|
[
"question"
] | 2022-05-10T11:54:58Z
| 2022-05-11T07:27:45Z
| null |
f291400
|
pytorch/TensorRT
| 1,045
|
β __torch__.torch.classes.tensorrt.Engine what does it mean?
|
Hello community and thanks for this repo.
## β Question
How can I load a tensorRT model after using torch.jit.save?
## What you have already tried
```
import torch
model = torch.jit.load('trt_model.torch-tensorrt') # give error __torch__.torch.classes.tensorrt.Engine
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.10
- CPU Architecture: x64
- OS (e.g., Linux): 20.04
- How you installed PyTorch: conda
- Python version: 3.8
- CUDA version: 11.6
- GPU models and configuration: Nvidia RTX3090
- Information: torchvision installed by pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
## Additional context
My code is very simple:
```
import torch
import torch_tensorrt
traced_model = torch.jit.trace(eager_model, [torch.rand(1, 3, 224, 224).to(device)])
trt_model = torch_tensorrt.compile(traced_model,
inputs= [torch_tensorrt.Input((1, 3, 224, 224))],
enabled_precisions={torch.float32})
torch.jit.save(trt_model, 'trt_model.torch-tensorrt')
model = torch.jit.load('trt_model.torch-tensorrt') # give error __torch__.torch.classes.tensorrt.Engine
```
At the end, I want to move the trt_model.torch-tensorrt file into an Jetson for load with torch.jit.load
Thanks
|
https://github.com/pytorch/TensorRT/issues/1045
|
closed
|
[
"question"
] | 2022-05-10T09:56:05Z
| 2022-09-03T02:25:25Z
| null |
mjack3
|
pytorch/data
| 391
|
Allow users to provide `auth` and other data to `HttpReader`
|
### π The feature
This should extend the functionality of `HttpReader` to send more complicated POST request.
For authentication, users don't necessarily need to provide via `http://user:password@domain.com/`. They should be able to provide `auth` to the `HttpReader` and relay it to `request`.
https://github.com/pytorch/data/blob/8b95954ce431ade5905448ebd9a2909e30566377/torchdata/datapipes/iter/load/online.py#L38-L43
### Motivation, pitch
Versatile `HttpReader`
### Alternatives
_No response_
### Additional context
_No response_
|
https://github.com/meta-pytorch/data/issues/391
|
closed
|
[
"good first issue",
"help wanted"
] | 2022-05-09T22:36:27Z
| 2022-05-11T19:28:14Z
| 3
|
ejguan
|
pytorch/TensorRT
| 1,034
|
torch_tensorrt.compile dynamic input shape failed
|
## dynamic input shape failed


if set min_shape=[1,3,h, h] and op_shape= [1,3, h, h] and max_shape = [1,3, h, h] , which h is 32 or 512 or 1024, it works. but if set
min_shape = [1, 3, 32, 32] and op_shape=[1,3,512,512] and max_shape = [1, 3, 1024, 1024], it is failed .
## Environment

|
https://github.com/pytorch/TensorRT/issues/1034
|
closed
|
[
"question",
"component: core",
"No Activity"
] | 2022-05-09T08:25:50Z
| 2022-08-21T00:02:41Z
| null |
f291400
|
pytorch/pytorch
| 77,016
|
Where is fx2trt fx to tensorrt tool?
|
### π The doc issue
I found there are some PR:
https://github.com/jerryzh168/pytorch/tree/fb09fd4ab4ba618db148f9dfc035be589efb9355/torch/fx/experimental/fx2trt
which persist of fx2trt tool, where does it goes in main stream pytorch code?
### Suggest a potential alternative/fix
_No response_
|
https://github.com/pytorch/pytorch/issues/77016
|
open
|
[
"triaged",
"module: fx"
] | 2022-05-07T08:43:04Z
| 2022-07-20T21:25:20Z
| null |
lucasjinreal
|
pytorch/serve
| 1,609
|
How to set model batch size with TS_ environmental var
|
## π Documentation
Hi, I can't seem to figure out how to set the batch size with an environmental parameter.
My `config.properties` looks like this:
```
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
number_of_netty_threads=32
enable_envvars_config=true
job_queue_size=1000
model_store=/opt/ml/model
load_models=all
enable_metrics_api=false
models={\
"model": {\
"1.0": {\
"defaultVersion": true,\
"marName": "model.mar",\
"runtime": "python3",\
"minWorkers": 1,\
"maxWorkers": 4,\
"batchSize": 16,\
"maxBatchDelay": 50,\
"responseTimeout": 120\
}\
}\
}
```
But I would like to be able to override `batchSize` with an env variable so that load testing is more simple (just creating endpoints with different env params instead of needing to generate different config files)
|
https://github.com/pytorch/serve/issues/1609
|
closed
|
[] | 2022-05-05T14:25:43Z
| 2022-05-09T21:52:41Z
| null |
austinmw
|
pytorch/vision
| 5,945
|
Training recipe for these weights
|
https://github.com/pytorch/vision/blob/62740807c18e68bb0acd85895dca527f9a655bd5/torchvision/models/vision_transformer.py#L377
Does anyone know how these weights were generated. Where they training from scratch only on ImageNet 1k or was it pre-trained on ImageNet 21k? Looking at the original Vision transformer paper: https://arxiv.org/abs/2010.11929 I'm not quite sure where the accuracy numbers in these lines are coming from:
```python
class ViT_B_32_Weights(WeightsEnum):
IMAGENET1K_V1 = Weights(
url="https://download.pytorch.org/models/vit_b_32-d86f8d99.pth",
transforms=partial(ImageClassification, crop_size=224),
meta={
**_COMMON_META,
"num_params": 88224232,
"min_size": (224, 224),
"recipe": "https://github.com/pytorch/vision/tree/main/references/classification#vit_b_32",
"metrics": {
"acc@1": 75.912,
"acc@5": 92.466,
},
},
)
DEFAULT = IMAGENET1K_V1
```
Here's the corresponding numbers presented in the original Vision Transformer paper, ViT-B/32 accuracy of 75.912 is not in either the ImageNet 1k or the ImageNet 21k columns:

cc @datumbox
|
https://github.com/pytorch/vision/issues/5945
|
closed
|
[
"question",
"module: models"
] | 2022-05-04T21:07:25Z
| 2022-05-05T16:49:12Z
| null |
briancheung
|
pytorch/serve
| 1,606
|
How to distribute multi models to each gpu?
|
I have two models: model0,model1 and two gpus: gpu0,gpu1. I want to set model0 to gpu0,model0 to gpu1,it means that the work of model0 will always on gpu0 and model1 is on gpu1.
How to make it?
Is it possible to implement by serve configuration or handle.py?
Could you help me?Thank you very much!
|
https://github.com/pytorch/serve/issues/1606
|
open
|
[
"enhancement"
] | 2022-05-04T16:08:39Z
| 2022-05-12T01:56:17Z
| null |
dzcmingdi
|
pytorch/data
| 382
|
The protocol of fsspec can be a list of strings rather than a single string
|
### π Describe the bug
https://github.com/pytorch/data/blob/92d18b088eb43b9805bed5c90a0afca87292a338/torchdata/datapipes/iter/load/fsspec.py#L61-L62
The `fs.protocol` can be a list rather than a string. For example of `s3`, it will return a list of `['s3', 's3a']`.
Then, there will be an error due to `self.root.startswith(fs.protocol)`. We can't run `startswith` with a list.
### Versions
main
|
https://github.com/meta-pytorch/data/issues/382
|
closed
|
[
"good first issue"
] | 2022-05-03T21:47:05Z
| 2022-05-04T16:50:16Z
| 1
|
ejguan
|
pytorch/TensorRT
| 1,019
|
Missing 3 input files: libnvinfer_plugin.so, libcudnn.so and libnvinfer.so
|
## β Question
I've been looking at all the great progress done previously when it comes to using Torch-TensorRT on Windows.
I made progress to the point that it seems like only 1 thing is missing. I'm missing the 3 .so mentioned above.
How are they supposed to be built? Am I missing something? Is there any fix that I missed?
## What you have already tried
I followed the guides from from #856
## Environment
Windows 10, trying to build for Visual Studio usage
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.11.0
- CPU Architecture: i9
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch
- Build command you used (if compiling from source): bazel
- Are you using local sources or building from archives: building from archive
- Python version: 3.9
- CUDA version: 11.5
- GPU models and configuration: RTX3090
|
https://github.com/pytorch/TensorRT/issues/1019
|
closed
|
[
"question",
"channel: windows"
] | 2022-05-03T01:10:39Z
| 2022-08-01T16:01:45Z
| null |
fschvart
|
pytorch/TensorRT
| 1,014
|
β [Question] Building torch_tensorrt.lib on Windows
|
## β Question
I am wondering how to build the torch_tensorrt.lib on Windows.
## What you have already tried
I have followed #960 and #856 (with the same WORKSPACE as the latter) and managed to successfully build torch_tensorrt.dll. However, I need the .lib file in order to compile my Libtorch program. I tried linking to some of the .lib files that were created already (like bazel-out\x64_windows-opt\bin\cpp\torch_tensorrt.lo.lib), but that didn't work. I expect it's a fairly simple bazel command, but I have no idea where to put it.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.10.0 (release)
- CPU Architecture: x86-64
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): libtorch from pytorch.org
- Build command you used (if compiling from source): bazel build //:libtorchtrt --compilation_mode opt
- CUDA version: 11.3
- Any other relevant information: Using VS2019
## Additional context
My libtorch program runs fine even if I include the torch-tensorrt headers, but throws the following errors as soon as I try to use torch_tensorrt::torchscript::CompileSpec and call torch_tensorrt::torchscript::compile:
Error LNK1120 2 unresolved externals Omkar 1.10.0+cu113 B:\Programming\_Current Projects\HelloLibTorch\x64\Release\HelloTorch.exe 1
Error LNK2019 unresolved external symbol "public: __cdecl torch_tensorrt::torchscript::CompileSpec::CompileSpec(class std::vector<class std::vector<__int64,class std::allocator<__int64> >,class std::allocator<class std::vector<__int64,class std::allocator<__int64> > > >)" (??0CompileSpec@torchscript@torch_tensorrt@@QEAA@V?$vector@V?$vector@_JV?$allocator@_J@std@@@std@@V?$allocator@V?$vector@_JV?$allocator@_J@std@@@std@@@2@@std@@@Z) referenced in function main Omkar 1.10.0+cu113 B:\Programming\_Current Projects\HelloLibTorch\main.obj 1
Error LNK2019 unresolved external symbol "struct torch::jit::Module __cdecl torch_tensorrt::torchscript::compile(struct torch::jit::Module const &,struct torch_tensorrt::torchscript::CompileSpec)" (?compile@torchscript@torch_tensorrt@@YA?AUModule@jit@torch@@AEBU345@UCompileSpec@12@@Z) referenced in function main Omkar 1.10.0+cu113 B:\Programming\_Current Projects\HelloLibTorch\main.obj 1
|
https://github.com/pytorch/TensorRT/issues/1014
|
closed
|
[
"question",
"channel: windows"
] | 2022-04-29T14:24:59Z
| 2022-09-02T18:09:26Z
| null |
jonahclarsen
|
huggingface/datasets
| 4,238
|
Dataset caching policy
|
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
The file now is cleanup up, but I still get the error. This happens even if I inspect the local cached contents, and cleanup the files locally:
```python
from datasets import load_dataset_builder
dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences")
print(dataset_builder.cache_dir)
print(dataset_builder.info.features)
print(dataset_builder.info.splits)
```
```
Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd
/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519
None
None
```
and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`.
Is there any remote file caching policy in place? If so, is it possibile to programmatically disable it?
Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact I download locally the file from raw link, the file is up-to-date; but If I use it within `datasets` as shown above, it gives to me always the first revision of the file, not the last.
Thank you.
## Steps to reproduce the bug
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
)
# You can make this part faster with num_proc=<some int>
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is no
|
https://github.com/huggingface/datasets/issues/4238
|
closed
|
[
"bug"
] | 2022-04-27T10:42:11Z
| 2022-04-27T16:29:25Z
| 3
|
loretoparisi
|
huggingface/datasets
| 4,235
|
How to load VERY LARGE dataset?
|
### System Info
```shell
I am using transformer trainer while meeting the issue.
The trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into the memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except using IterDataset, which loads samples of data seperately, and results in low efficiency.
I wonder if there are any tricks like Sharding in huggingface trainer.
Looking forward to your reply.
```
### Who can help?
Trainer: @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
```shell
I wonder if there are any tricks like fairseq Sharding very large datasets https://fairseq.readthedocs.io/en/latest/getting_started.html.
Thanks a lot!
```
|
https://github.com/huggingface/datasets/issues/4235
|
closed
|
[
"bug"
] | 2022-04-27T07:50:13Z
| 2023-07-25T15:07:57Z
| 1
|
CaoYiqingT
|
pytorch/TensorRT
| 1,006
|
[Question]Doesn't torch tensorrt support LSTM-based decoder optimization??
|
## β Question
Doesn't torch tensorrt support LSTM-based decoder optimization? The reason for asking this question is that the model forward and model test structures learned in the seq2seq structure are different (beam search, sequence inference ..), and the optimized model cannot be used by inputting only training forward logic.
## Environment
Tensorrt 22.03 docker image:
https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/rel_22-03.html#rel_22-03
|
https://github.com/pytorch/TensorRT/issues/1006
|
closed
|
[
"question",
"No Activity"
] | 2022-04-27T06:50:45Z
| 2022-11-10T00:02:45Z
| null |
koliaok
|
huggingface/datasets
| 4,230
|
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
|

But on huggingface datasets:

Where is the German data?
|
https://github.com/huggingface/datasets/issues/4230
|
closed
|
[
"enhancement"
] | 2022-04-27T00:53:52Z
| 2023-07-25T15:10:15Z
| null |
beyondguo
|
huggingface/datasets
| 4,221
|
Dictionary Feature
|
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance.
|
https://github.com/huggingface/datasets/issues/4221
|
closed
|
[
"question"
] | 2022-04-26T12:50:18Z
| 2022-04-29T14:52:19Z
| null |
jordiae
|
pytorch/TensorRT
| 1,001
|
β [Question] How to differentiate a Torch-TensorRT model from a pure TorchScript model?
|
## β Question
<!-- Your question -->
I'm developing a C++ inference server to deploy Torch-TensorRT models and TorchScript models. Since the Torch-TensorRT compilation process is done AOT, Is there a way to know wether the given .pt model file is a Torch-TensorRT model or a pure TorchScript model?
Thanks!
|
https://github.com/pytorch/TensorRT/issues/1001
|
closed
|
[
"question"
] | 2022-04-26T12:29:20Z
| 2022-04-27T02:05:50Z
| null |
tiandi111
|
pytorch/vision
| 5,872
|
Keypoint RCNN visibility flag for keypoints
|
### π The feature
Hello All,
This is only my first day posting a request here so I apologize for any errors on my part. Also, sorry for the long post below.
The purpose of this post is to request an improvement/correction for the visibility flag behavior of Keypoint RCNN. Based on my results and those of other users I have encountered on different forums and sites, Keypoint RCNN always predicts a flag value of v=1 for all keypoints, no matter the training flag value for v>0 (even v=0), and predicts coordinates for them as well. In other words, the model does not appear to actually learn the flag value. My understanding is that the flag should be learned and is supposed to follow the COCO convention (v=0 βnot in imageβ; v=1 βoccludedβ; v=2 βvisibleβ) but does not do so.
### Motivation, pitch
Given the usefulness of the visibility flags, being able to accurately predict them and use the information during inference to mark occluded vs. visible keypoints would be an important addition to the model capability. My understanding is that this is already supposed to be the case, but for some reason the documentation as well as the model behavior on this are lacking. I have found the performance of Keypoint RCNN overall to be very good and I have successfully fine-tuned it on my custom (multiclass) dataset with very good success in predicting the class, bbox, and keypoints. It would be very helpful to be able to distinguish between keypoints using visibility flag.
### Alternatives
_No response_
### Additional context
My hope in writing here is to request and encourage updating of the model to address the issue/addition suggested. If not, then if I could please get some help in tracking down the source code where Keypoint RCNN is converting all flags to v=1 and handling/training flags so that I might be able to modify this behavior, as the model does not seem to learn the flag values presently. In my use case, what I want is for Keypoint RCNN to successfully predict the right flag (e.g. v=0) so that I can use it later on, or at least predict a coordinate of (0.0,0.0) (or some other fixed value) for keypoints with v=0. The need is to be able to distinguish between visible and occluded keypoints. Even just two learned flags that work as expected (v=0 and v=1) would be very useful to have. Any suggestions or guidance would be great. Thanks for taking the time to reply.
cc @datumbox @YosuaMichael
|
https://github.com/pytorch/vision/issues/5872
|
open
|
[
"question",
"topic: object detection"
] | 2022-04-24T21:44:35Z
| 2024-08-26T08:33:51Z
| null |
mbadal1996
|
pytorch/torchx
| 470
|
Improve torchx/resources README
|
## π Documentation
## Link
https://github.com/pytorch/torchx/tree/main/resources
## What does it currently say?
```
**Creating EKS cluster**
eksctl create cluster -f torchx-dev-eks.yml
**Creating KFP**
kfctl apply -V -f torchx-dev-kfp.yml
```
## What should it say?
For the **Creating EKS Cluster** it should actually list out how to create `torchx-dev-eks.yml`. The instructions are in `torchx-dev-eks-template.yml`, so just pulling those out to the README would be good.
For **Creating KFP**, it is missing the steps to generate `torchx-dev-kfp.yml`. I'm assuming you do this by following the instructions on the aws eks kfp website (https://www.kubeflow.org/docs/distributions/aws/deploy/install-kubeflow/), but a quick look at those docs doesn't seem like its obvious.
## Why?
Following the README step by step doesn't work due to missing files.
|
https://github.com/meta-pytorch/torchx/issues/470
|
closed
|
[
"documentation"
] | 2022-04-22T18:03:56Z
| 2022-06-02T21:26:12Z
| 1
|
kiukchung
|
pytorch/PiPPy
| 149
|
Figure out how to get `**kwargs` working with MetaTracer
|
https://github.com/pytorch/PiPPy/pull/138/files#diff-6d49246d94990874a38b3d05e50ea765d5c0a75270de5eec6dcda377f934976dR251
Michael B from HF is also looking into this, maybe we'll figure something out together
|
https://github.com/pytorch/PiPPy/issues/149
|
closed
|
[] | 2022-04-21T16:34:00Z
| 2022-06-10T18:19:27Z
| null |
jamesr66a
|
pytorch/vision
| 5,845
|
about paste_mask_in_image question in mask rcnn
|
First of all, thanks for your great work.
Recently, I was studying Mask R-CNN code in this repo. I have some questions, and I hope you could answer it when you are free.
First question, Why do I need to expand the mask and box when mapping mask back to the original scale. I read the original paper of Mask R-CNN, which only said "The mΓm floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5.".
https://github.com/pytorch/vision/blob/35d1d9d3f01016c65ac7f3d0700d2474929acdea/torchvision/models/detection/roi_heads.py#L474-L477
Second question, What is the function of TO_REMOVE here?
https://github.com/pytorch/vision/blob/35d1d9d3f01016c65ac7f3d0700d2474929acdea/torchvision/models/detection/roi_heads.py#L403-L409
Look forward to your reply. :laughing:
cc @datumbox @YosuaMichael
|
https://github.com/pytorch/vision/issues/5845
|
closed
|
[
"question",
"topic: object detection"
] | 2022-04-21T08:52:39Z
| 2022-05-18T00:51:04Z
| null |
WZMIAOMIAO
|
pytorch/torchx
| 464
|
Volcano job scheduling issues due to bad upgrade
|
This is an after the fact issue to help anyone who stumbles upon it later resolve the issue.
## Pod won't schedule due to CreateContainerConfigError
```
Warning Failed 12m (x12 over 15m) kubelet Error: couldn't find key VC_PYTHON-0_HOSTS in ConfigMap default/torchxcomponentspython-bwg4m0sktd9mwc-svc
```
```
state:
waiting:
message: couldn't find key VC_PYTHON-0_HOSTS in ConfigMap default/torchxcomponentspython-bwg4m0sktd9mwc-svc
reason: CreateContainerConfigError
```
This is likely due to a Volcano version upgrade issue. Volcano 1.4 changed the ENV key format to correctly handle `-` characters. This means if a job was submitted under Volcano 1.3 and then is upgraded to Volcano 1.4 before running the job will fail to schedule. You just need to relaunch your job under the new version.
## Partial Upgrade Issues
```
Error creating pods: [failed to create pod pv5xp2lpf65vz-python-0-0, err: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": the server could not find the requested resource", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002a125a0), Code:500}}]
```
When you upgrade Volcano you need to completely delete the `volcano-system` namespace and all resources within it before running `kubectl apply .../development.yaml`. If you don't, some of the setup jobs resources will conflict and won't run for the new version leaving the cluster in a bad state.
|
https://github.com/meta-pytorch/torchx/issues/464
|
closed
|
[
"bug",
"documentation",
"kubernetes"
] | 2022-04-20T19:14:15Z
| 2022-04-20T20:30:06Z
| 0
|
d4l3k
|
pytorch/vision
| 5,838
|
return_layers problem about fasterrcnn_mobilenet_v3_large_fpn
|
### π Describe the bug
There may be a problem with the setting of return_layers in fasterrcnn_mobilenet_v3_large_fpn. If the default setting is used, the resolution of collected feature map is the same. As a result, the effect of detecting small targets will become worse.
https://github.com/pytorch/vision/blob/e8cb0bacd86c49e67a7e1a5f83c6da866bc451cf/torchvision/models/detection/backbone_utils.py#L225-L226
test code:
```python
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn
model = fasterrcnn_mobilenet_v3_large_fpn(pretrained_backbone=False)
img = torch.randn(1, 3, 224, 224)
outputs = model.backbone(img)
[print(f"{k} shape: {v.shape}") for k, v in outputs.items()]
```
output:
```
0 shape: torch.Size([1, 256, 7, 7])
1 shape: torch.Size([1, 256, 7, 7])
pool shape: torch.Size([1, 256, 4, 4])
```
`feauture map: 0` and `feature map: 1` have same resolution(`7x7`).
may need to change:
```
returned_layers = [num_stages - 2, num_stages - 1]
```
to:
```
returned_layers = [num_stages - 3, num_stages - 1]
```
output:
```
0 shape: torch.Size([1, 256, 14, 14])
1 shape: torch.Size([1, 256, 7, 7])
pool shape: torch.Size([1, 256, 4, 4])
```
### Versions
```
PyTorch version: 1.10.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Quadro P620
Nvidia driver version: 470.103.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.3
[pip3] torch==1.10.0+cpu
[pip3] torchaudio==0.10.0+cpu
[pip3] torchvision==0.11.1+cpu
[conda] numpy 1.21.3 pypi_0 pypi
[conda] torch 1.10.0+cpu pypi_0 pypi
[conda] torchaudio 0.10.0+cpu pypi_0 pypi
[conda] torchvision 0.11.1+cpu pypi_0 pypi
```
cc @datumbox @YosuaMichael
|
https://github.com/pytorch/vision/issues/5838
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2022-04-20T04:32:20Z
| 2022-04-21T07:47:05Z
| null |
WZMIAOMIAO
|
pytorch/data
| 364
|
Linter for DataPipe/DataLoader2
|
### π The feature
This issue proposes the addition of a linter for DataPipes and DataLoader2. The linter can analyze the graph of DataPipes and input arguments to DataLoaderV, and inform the users if any errors may occur ahead of time. The incomplete list of issues that the linter may try to analyze and raise is below. Please feel free to edit the list directly to add more or comment below.
Essential:
- [ ] Multiple references to the same iterator/DataPipe
- This can cause issue when serialized, suggest users to `fork`
- [ ] Duplicate usage of shuffle/batch/collate
- [ ] Shuffle/batch/collate are missing?
- [ ] Warn if shuffling is not done?
- [ ] Warn if sharding is not specificed for Distributed/Multiprocessing
- [ ] Warn about shuffling before sharding (not mandatory because inputs may be pre-shuffled)
- [ ] Multiprocess/distributed behavior related to sharding/shuffling
- [ ] Warn if filter appears between on_disk_cache and end_caching sections.
- [ ] Find unreachable children within graph and warns (because they might prevent buffers from being empty in `fork` and etc)
- [ ] Warn about passing DataPipes that have already been partially read (invalid state), but are passed into DataLoader (and we might have to force `reset` the DataPipe in DataLoader)
- [ ] Detect what external packages are not installed within DataPipe graph
Nice-to-have:
- [ ] Check DataPipe object size and warn if it is too big (e.g. premature initialization of large structures)
- [ ] Check if `fork` datapipe creates two or more copies of `StreamWrapper` or `IOBase`
### Motivation, pitch
Having a linter will encourage best practices of DataPipe usages and reduces the number of unexpected bugs/behaviors in the data loading process during runtime.
### Alternatives
Only raise exceptions during runtime.
### Additional context
This linter is expected to work with DataPipes and DataLoaderV2. We should consider if it should work with the original DataLoader as well (and how).
cc: @VitalyFedyunin @ejguan
|
https://github.com/meta-pytorch/data/issues/364
|
open
|
[
"help wanted"
] | 2022-04-19T21:49:54Z
| 2023-04-11T16:58:51Z
| 5
|
NivekT
|
pytorch/TensorRT
| 987
|
β [Question] How do you add CUDA kernels used for implemented plugins ?
|
## β Question
How do you add CUDA kernels used for implemented plugins ? I have developed my own implementation for several layers that are not supported yet by Torch-TensorRT. I'm not familiar with the bazel compilation flow and i would like to know how to compile .cu files in Torch-TensorRT.
Current provided Torch-TensorRT plugins make calls to external libraries (cuDNN for example) but there is no example about how to add a custom plugins that call CUDA kernels.
## Additional context
In addition it could be nice to have a clear way on how to get the PyTorch signature of the methods that we want to encapsulate.
Cheers
David
|
https://github.com/pytorch/TensorRT/issues/987
|
closed
|
[
"question",
"No Activity",
"component: plugins"
] | 2022-04-19T15:59:59Z
| 2022-08-12T00:02:25Z
| null |
david-PHR
|
huggingface/datasets
| 4,181
|
Support streaming FLEURS dataset
|
## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
Am I the one who added this dataset ? Yes
Can I fix this somehow in the script? @lhoestq @severo
|
https://github.com/huggingface/datasets/issues/4181
|
closed
|
[
"dataset bug"
] | 2022-04-19T11:09:56Z
| 2022-07-25T11:44:02Z
| 9
|
patrickvonplaten
|
pytorch/pytorch
| 76,023
|
How to disable check onnx in torch.onnx.export in pytorch1.11 version?
|
### π The doc issue
Old params were removed, now how to disable check on onnx when export?
### Suggest a potential alternative/fix
Also, why disable this feature? Some onnx using customized op can not pass check.
|
https://github.com/pytorch/pytorch/issues/76023
|
closed
|
[
"module: onnx",
"triaged",
"onnx-needs-info"
] | 2022-04-19T08:26:42Z
| 2022-05-05T04:57:24Z
| null |
lucasjinreal
|
pytorch/TensorRT
| 985
|
Error Code 1: Myelin (Compiled against cuBLASLt 10.2.2.0 but running against cuBLASLt 11.4.2.0.)
|
Hi I am using TensorRT for an images in python but getting this issue.
**I am Yolort to infer image.**
[https://github.com/zhiqwang/yolov5-rt-stack](url)
```
import os
import torch
import cv2
from yolort.utils import Visualizer
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
cuda_visible = "0"
os.environ["CUDA_VISIBLE_DEVICES"] = cuda_visible
from yolort.runtime import PredictorTRT
assert torch.cuda.is_available()
device = torch.device('cuda')
engine_path = "yolov5n6.engine"
y_runtime = PredictorTRT(engine_path, device=device)
img_path = r"D:\new_york.jpg"
img_raw = cv2.imread(img_path)
label_source = r"D:\coco.names"
label_path = label_source.split("/")[-1]
y_runtime.warmup()
predictions_trt = y_runtime.predict(img_path)
print(predictions_trt)
```
**Here is my environment**
```
>python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.23.0
Libc version: N/A
Python version: 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 511.65
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0+cu113
[pip3] torchaudio==0.11.0+cu113
[pip3] torchvision==0.12.0+cu113
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] libblas 3.9.0 12_win64_mkl conda-forge
[conda] libcblas 3.9.0 12_win64_mkl conda-forge
[conda] liblapack 3.9.0 12_win64_mkl conda-forge
[conda] mkl 2021.4.0 h0e2418a_729 conda-forge
[conda] mkl-service 2.4.0 py39h6b0492b_0 conda-forge
[conda] mkl_fft 1.3.1 py39h0cb33c3_1 conda-forge
[conda] mkl_random 1.2.2 py39h2e25243_0 conda-forge
[conda] mypy_extensions 0.4.3 py39hcbf5309_5 conda-forge
[conda] numpy 1.22.3 pypi_0 pypi
[conda] numpy-base 1.20.3 py39hc2deb75_0
[conda] numpydoc 1.2.1 pyhd8ed1ab_2 conda-forge
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchvision 0.12.0 py39_cu113 pytorch
```

|
https://github.com/pytorch/TensorRT/issues/985
|
closed
|
[
"question"
] | 2022-04-19T06:40:10Z
| 2022-04-20T10:02:08Z
| null |
IamNaQi
|
huggingface/optimum
| 147
|
Support for electra model
|
I came across this tool and it looks very interesting but i am trying to use electra model and i can see this is not supported. By this
`"electra is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'ibert', 'camembert', 'distilbert', 'longformer', 'marian', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-neo', 'layoutlm'] are supported. If you want to support electra please propose a PR or open up an issue`.
is any plans for electra models in future.
Example of models https://huggingface.co/german-nlp-group/electra-base-german-uncased
|
https://github.com/huggingface/optimum/issues/147
|
closed
|
[] | 2022-04-15T11:03:21Z
| 2022-04-21T07:24:48Z
| 1
|
OriAlpha
|
pytorch/TensorRT
| 977
|
β [Question] how to enable "torch fallback"
|
## β Question
I was told that torch-trt was able to partially convert graph to tensorrt while keep the unsupported part running on torch-runtime.
And I also hava Found some 'Torch Fallback' or 'torch_fallback' str at the source code.
So I generate a module containing `torch.argmax` , which is not supported by torch-tensorrt. And give it a shot, but it failed.
I hava two question:
1. Is the fallback feature really supported by torch-tensorrt or is going to be supported?
2. If allready supported, is there any sample showing how to use it.
## What you have already tried
take a look at this script:
```python
import torch
import torch_tensorrt
import numpy as np
from torchvision import models
class MyModel(torch.nn.Module):
def __init__(self):
super(MyModel, self).__init__()
models_dict = {
"resnet50_v2": models.resnet50,
"resnet101_v2": models.resnet101,
"resnet152_v2": models.resnet152,
"mobilenet_v2": models.mobilenet_v2,
"shufflenet_v2": models.shufflenet_v2_x1_0,
"densenet169": models.densenet169
}
self.model = models_dict['resnet50_v2'](pretrained=False)
def forward(self, x):
x = self.model(x)
return torch.argmax(x, -1)
def main():
model = MyModel().eval().cuda() #.cuda()
x = torch.from_numpy(np.random.randn(1,3,224,224).astype(np.float32)).cuda()
scripted_model = torch.jit.script(model)
compile_settings = {
"inputs": [x],
"enabled_precisions": {torch.float},
"torch_fallback": { # also tryied with Torch Fallback
"enabled": True
"min_block_size": 1
"forced_fallback_operators": [
]
"forced_fallback_modules": [
]
}
}
trt_ts_module = torch_tensorrt.ts.compile(scripted_model, **compile_settings)
print(trt_ts_module)
torch_tensorrt_out = trt_ts_module(x)
print('torch_tensorrt_out shape: \n', torch_tensorrt_out.shape, print(torch_tensorrt_out))
pytorch_out = model(x)
print('pytorch out shape: \n', pytorch_out.shape, pytorch_out)
# torch._C._jit_to_backend is buggy, spec will be transformed into wrong json structure.
def main2():
model = MyModel().eval().cuda() #.cuda()
x = torch.from_numpy(np.random.randn(1,3,224,224).astype(np.float32))
scripted_model = torch.jit.script(model)
spec = {
"forward":
torch_tensorrt.ts.TensorRTCompileSpec({
"inputs": [torch_tensorrt.Input([1, 3, 224, 224], dtype=torch.float)],
"enabled_precisions": {torch.float},
"refit": False,
"debug": False,
"device": {
"device_type": torch_tensorrt.DeviceType.GPU,
"gpu_id": 0,
"dla_core": 0,
"allow_gpu_fallback": True
},
"capability": torch_tensorrt.EngineCapability.default,
"num_min_timing_iters": 2,
"num_avg_timing_iters": 1,
})
}
trt_ts_module = torch._C._jit_to_backend("tensorrt", script_model, spec)
print(trt_ts_module)
torch_tensorrt_out = trt_ts_module(x)
print('torch_tensorrt_out shape: \n', torch_tensorrt_out.shape, print(torch_tensorrt_out))
pytorch_out = model(x)
print('pytorch out shape: \n', pytorch_out.shape, pytorch_out)
if __name__ == "__main__":
main()
```
get output:
```bash
Traceback (most recent call last):
File "./torch_trt_custom.py", line 86, in <module>
main()
File "./torch_trt_custom.py", line 42, in main
trt_ts_module = torch_tensorrt.ts.compile(scripted_model, **compile_settings)
TypeError: compile() got an unexpected keyword argument 'torch_fallback'
```
## Environment
ngc pytorch 22.02
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/977
|
closed
|
[
"question"
] | 2022-04-15T08:25:12Z
| 2022-04-15T09:28:54Z
| null |
WingEdge777
|
pytorch/pytorch
| 75,723
|
[ONNX] How to export fx quantized model to onnx?
|
### π The feature, motivation and pitch
FX is great! How to export fx quantized model to onnx?
### Alternatives
Currently, I have traced the quantized int8 model to torchscript, it works OK.
### Additional context
I just wonder, If torch already supported export fx model to onnx, how to do it? I got error:
```
RuntimeError: Exporting the operator quantize_per_tensor to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
```
If not support, then, when will support? What's the obstacles behind it?
**this is really needed, so that bring the gap between int8 quantize and other forward framework through onnx**
cc @ezyang @SherlockNoMad
|
https://github.com/pytorch/pytorch/issues/75723
|
closed
|
[
"module: onnx",
"triaged",
"onnx-needs-info",
"module: fx"
] | 2022-04-13T07:40:14Z
| 2022-11-15T23:44:03Z
| null |
lucasjinreal
|
huggingface/tokenizers
| 979
|
What is the correct format for file for tokenizer.train_from_files?
|
I am trying to use this library and train a new model with my own data. But before I start building my corpora, I want to understand what file format should I be looking for, if I am feeding it to [`train_from_files`](https://docs.rs/tokenizers/0.11.3/tokenizers/tokenizer/struct.TokenizerImpl.html#method.train_from_files)? Is there a standard for that? It would be great if that can be documented.
|
https://github.com/huggingface/tokenizers/issues/979
|
closed
|
[] | 2022-04-12T22:54:39Z
| 2022-04-14T07:05:58Z
| null |
winston0410
|
pytorch/examples
| 987
|
What accuracy should we expect when training Alexnet from scratch on ImageNet?
|
## π Documentation
The README https://github.com/pytorch/examples/blob/main/imagenet/README.md is very helpful when getting started with training AlexNet.
We are able to successfully train AlexNet to approximately 56% top-1 and 79% top-5 accuracy on the validation set. But this is still a fair bit below Krizhevsky's published results of circa 83% or 85% top-5 accuracy on these training sets.
We are training with the default recommendations for a single GPU in the README for AlexNet:
```
python main.py -a alexnet --lr 0.01 --gpu 0 /data/datasets/imagenet/
```
What out-of the box accuracy should we expect when training AlexNet on ImageNet with the default PyTorch implementation?
What sort of hyperparameter changes do you recommend to duplicate Alex Krizhevsky's accuracies?
|
https://github.com/pytorch/examples/issues/987
|
open
|
[
"reproducibility"
] | 2022-04-11T20:56:15Z
| 2023-01-12T03:26:38Z
| 8
|
yoderj
|
huggingface/datasets
| 4,141
|
Why is the dataset not visible under the dataset preview section?
|
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
https://github.com/huggingface/datasets/issues/4141
|
closed
|
[
"dataset-viewer"
] | 2022-04-11T08:36:42Z
| 2022-04-11T18:55:32Z
| 0
|
Nid989
|
huggingface/datasets
| 4,139
|
Dataset viewer issue for Winoground
|
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files from the interface, so I assume I'm granted to access it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool.
Am I the one who added this dataset ? No
|
https://github.com/huggingface/datasets/issues/4139
|
closed
|
[
"dataset-viewer",
"dataset-viewer-gated"
] | 2022-04-11T06:11:41Z
| 2022-06-21T16:43:58Z
| 11
|
alcinos
|
huggingface/datasets
| 4,138
|
Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
|
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdown'
Couldn't find where "xwalk" come from. How can I fix this?
Am I the one who added this dataset ? Yes
|
https://github.com/huggingface/datasets/issues/4138
|
closed
|
[] | 2022-04-11T02:07:13Z
| 2022-04-19T03:15:46Z
| 5
|
iluvvatar
|
huggingface/datasets
| 4,134
|
ELI5 supporting documents
|
if i am using dense search to create supporting documents for eli5 how much time it will take bcz i read somewhere that it takes about 18 hrs??
|
https://github.com/huggingface/datasets/issues/4134
|
open
|
[
"question"
] | 2022-04-08T23:36:27Z
| 2022-04-13T13:52:46Z
| null |
saurabh-0077
|
huggingface/dataset-viewer
| 204
|
Reduce the size of the endpoint responses?
|
Currently, the data contains a lot of redundancy, for example every row of the `/rows` response contains three fields for the dataset, config and split, and their value is the same for all the rows. It comes from a previous version in which we were able to request rows for several configs or splits at the same time.
Changing the format would require changing the moon-landing client.
|
https://github.com/huggingface/dataset-viewer/issues/204
|
closed
|
[
"question"
] | 2022-04-08T15:31:35Z
| 2022-08-24T18:03:38Z
| null |
severo
|
pytorch/text
| 1,677
|
what is currently the ideal effective torchtext pipeline for almost any nlp tasks
|
## searching the ideal torchtext pipeline
**Description**
hey there, so ive been using the legacy version of torchtext for quite sometime as it provides easier ways to load custom dataset and custom pretrained word embeddings locally and i can semlessly implement it for seq2seq, text classification, pos tagging, language modeling etc. most importantly i could use Buckeriterator to sort samples based on their length and group batches based on similar length thus minimize padding.
Ive read that the torchdata has these functionalities implemented but couldnt find any tangible resources.
**I have 3 requirements:**
1. loading any custom dataset locally.
2. loading any custom pre-trained embedding locally (fasttext, GLoVe)
3. being able to implement sort and batch by length to get minimum padding
|
https://github.com/pytorch/text/issues/1677
|
open
|
[] | 2022-04-07T13:29:10Z
| 2022-04-07T13:29:10Z
| null |
StephennFernandes
|
pytorch/data
| 352
|
DataLoader tutorial does not handle num_workers > 0
|
I just wanted to document an issue with the tutorials https://pytorch.org/data/beta/tutorial.html#working-with-dataloader
The code in the tutorial will not work when running multiple DataLoader processes as the datapipe will be duplicated across workers:
```py
dl = DataLoader(dataset=datapipe, batch_size=2, shuffle=True, num_workers=2)
for i, e in enumerate(dl):
print(e)
```
gives
```
{'label': tensor([7, 0], dtype=torch.int32), 'data': tensor([[0.5105, 0.7899],
[0.0152, 0.5981]], dtype=torch.float64)}
{'label': tensor([7, 0], dtype=torch.int32), 'data': tensor([[0.5105, 0.7899],
[0.0152, 0.5981]], dtype=torch.float64)}
{'label': tensor([4, 6], dtype=torch.int32), 'data': tensor([[0.9998, 0.5452],
[0.8515, 0.8264]], dtype=torch.float64)}
{'label': tensor([4, 6], dtype=torch.int32), 'data': tensor([[0.9998, 0.5452],
[0.8515, 0.8264]], dtype=torch.float64)}
{'label': tensor([1, 9], dtype=torch.int32), 'data': tensor([[0.8423, 0.3664],
[0.6397, 0.6408]], dtype=torch.float64)}
{'label': tensor([1, 9], dtype=torch.int32), 'data': tensor([[0.8423, 0.3664],
[0.6397, 0.6408]], dtype=torch.float64)}
...
```
Even though this is still beta, it may still be worth letting users know about such pitfalls.
Also, since there are various ways to achieve the sharding, it could be useful to settle on a definite canonical way of handling all this.
|
https://github.com/meta-pytorch/data/issues/352
|
closed
|
[
"documentation"
] | 2022-04-07T13:00:41Z
| 2022-06-10T20:02:57Z
| 3
|
NicolasHug
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.