Dataset Preview
Columns: url (string) · text (string) · num_labels (sequence) · arr_labels (sequence) · labels (sequence)
"https://api.github.com/repos/huggingface/transformers/issues/7627"
" TITLE Added sampler'set_epoch when use distributed training COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY `run_squad.py` file is independent of `Trainer Class`(https://github.com/huggingface/transformers/issues/4398). Therefore, there is no method related to `set_epoch` in distributed training."
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/13514"
" TITLE separate model card git push from the rest COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? - After model card metadata contents validation was deployed to the Hub, we need to ensure transformer's trainer git push are not blocked because of an invalid README.mld yaml. - as discussed with @julien-c @Pierrci @sgugger and @LysandreJik the first step to match Hub's model card validation system is to avoid failing a whole git push after training, for the only reason that README.md metadata is not valid. - therefore, I tried in this PR to git push the training result independently from the modelcard update, so that the modelcard update failing does not fail the rest, keeping only logging for README.Md push failures. - Relates to https://github.com/huggingface/huggingface_hub/pull/326 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? "
[ 32, 51, 45 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "model card", "work in progress", "trainer" ]
"https://api.github.com/repos/huggingface/transformers/issues/8077"
" TITLE Longformer crashes for position embeddings indexing? COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: apex ddp ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @patrickvonplaten maybe? 
## Information Model I am using (Bert, XLNet ...): Longformer The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQuAD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. use apex ddp with LongformerForSequenceClassification My code snippet:

```python
def train(self):
    self.model.train()
    losses = []
    if isinstance(self.train_loader.sampler, DistributedSampler):
        self.train_loader.sampler.set_epoch(self.epoch)
    for qids, dids, queries, documents, y in self.train_loader:
        encoded = self._tokenizer.batch_encode_plus(
            batch_text_or_text_pairs=list(zip(queries, documents)),
            truncation="longest_first",
            add_special_tokens=True,
            max_length=self.max_len,
            padding="max_length",
            is_pretokenized=False,
            return_tensors="pt",
            return_attention_mask=True,
            return_token_type_ids=True,
        )
        input_ids = encoded["input_ids"].cuda()
        attention_mask = encoded["attention_mask"].cuda()
        token_type_ids = encoded["token_type_ids"].cuda()
        y = torch.tensor(y).unsqueeze(1).cuda()
        global_attention_mask = self.get_global_attention(
            encoded["input_ids"], self.max_len, self._tokenizer.sep_token_id
        )[0].cuda()
        self.optimizer.zero_grad()
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            global_attention_mask=global_attention_mask,
            labels=y,
        )
        loss = outputs[0]
        with amp.scale_loss(loss, self.optimizer) as scaled_loss:
            scaled_loss.backward()
        self.optimizer.step()
```

Where the data are queries and
documents that are either relevant (y=1) or irrelevant (y=0). Each input is the concatenation of a query and a document. `get_global_attention()` is a function that gives global attention to query tokens. I find that for some batches (not all batches!), the code gives the following errors, which are very confusing to me: ```
INFO:__main__:Namespace(apex_level='O2', batch_size=1, cased=1, debug=0, encoder_lr=1e-05, eval_step=1, finetune_embedding=0, local_rank=0, model_path='allenai/longformer-base-4096', model_type='longformer', num_epochs=20, num_ft_encoders=2, num_neg=1, projector_lr=1e-05, seed=611)
Some weights of the model checkpoint at allenai/longformer-base-4096 were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForSequenceClassification were not initialized from the model checkpoint at allenai/longformer-base-4096 and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:__main__:Reading data from /....../sampled
INFO:root:Number of positive query-document pairs in [train] set: 67
INFO:root:Number of labelled query-document pairs in [dev] set: 2000
INFO:root:Number of labelled query-document pairs in [test] set: 2000
INFO:__main__:Data reading done ...
INFO:__main__:adding 10-th encoder to optimizer...
INFO:__main__:adding 11-th encoder to optimizer...
Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
Defaults for this optimization level are: enabled : True, opt_level : O2, cast_model_type : torch.float16, patch_torch_functions : False, keep_batchnorm_fp32 : True, master_weights : True, loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are: enabled : True, opt_level : O2, cast_model_type : torch.float16, patch_torch_functions : False, keep_batchnorm_fp32 : True, master_weights : True, loss_scale : dynamic
INFO:__main__:process[0]: training epoch 0 ...
/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/tokenization_utils.py:547: FutureWarning: `is_pretokenized` is deprecated and will be removed in a future version, use `is_split_into_words` instead. warnings.warn(
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion repeats for the remaining threads of block [10,0,0] and for threads [0,0,0] through [14,0,0] of block [2,0,0] ...]
```
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion repeats for hundreds of threads across blocks [2,0,0], [3,0,0], and [11,0,0]; repeats trimmed ...]
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion repeats for blocks [6,0,0] through [9,0,0], threads [0,0,0] through [95,0,0] ...]
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: ...... (saving space) /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [1,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same assertion repeats for many more blocks and threads ...]
Traceback (most recent call last):
  File "finetune-marco.py", line 93, in <module>
    marco.run()
  File "/mnt/nfs/work1/allan/user/LF-for-IR/Marco.py", line 167, in run
    self.train()
  File "/mnt/nfs/work1/allan/user/LF-for-IR/Marco.py", line 223, in train
    outputs = self.model(
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/apex/amp/_initialize.py", line 196, in new_fwd
    output = old_fwd(*applier(args, input_caster),
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/apex/parallel/distributed.py", line 560, in forward
    result = self.module(*inputs, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1442, in forward
    outputs = self.longformer(
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1262, in forward
    encoder_outputs = self.encoder(
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 903, in forward
    layer_outputs = layer_module(
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 849, in forward
    self_attn_outputs = self.attention(
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 793, in forward
    self_outputs = self.self(
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 246, in forward
    is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
NCCL error in: /opt/conda/conda-bld/pytorch_1591914886554/work/torch/lib/c10d/../c10d/NCCLUtils.hpp:69, unhandled cuda error, NCCL version 2.4.8
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/marco/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/user/miniconda3/envs/marco/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/distributed/launch.py", line 258, in main
    raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/home/user/miniconda3/envs/marco/bin/python', '-u', 'finetune-marco.py', '--local_rank=0', '--model_type', 'longformer', '--model_path', 'allenai/longformer-base-4096', '--batch_size', '1', '--finetune_embedding', '0', '--cased', '1', '--num_neg', '1', '--eval_step', '1', '--num_epochs', '20', '--apex_level', 'O2', '--encoder_lr', '1e-5', '--projector_lr', '1e-5', '--num_ft_encoders', '2', '--seed', '611']' died with <Signals.SIGABRT: 6>.
``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I don't think this error should occur. I have tried other pretrained models, like base BERT models, and they ran just fine. Can someone help interpret the error message here? Thanks!"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/8573"
" TITLE Bert that receives text triplet as an input COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY I would like to train bert on triplets of texts as inputs (for example, something like (context, question, answer)). encode_plus (https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode_plus) receives either a single text, or a text_pair. Is there a way to use it with triplets?"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/8761"
" TITLE Create README.md COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 --> "
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/17072"
" TITLE KeyError "labels" occurring in distill_classifier.py official example notebook COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27 - Python version: 3.8.2 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @VictorSanh @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Running the official [Distilling Zero Shot Classification.ipynb ](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing#scrollTo=ECt06ndcnpyb) results in a KeyError: 'labels' Here is the full output for reference: huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... 
To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) 05/03/2022 09:50:19 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False 05/03/2022 09:50:19 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments( _n_gpu=0, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=IntervalStrategy.NO, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, gradient_accumulation_steps=1, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_model_id=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=<HUB_TOKEN>, ignore_data_skip=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=-1, log_level_replica=-1, log_on_each_node=True, logging_dir=./distilbert-base-uncased-agnews-student/runs/May03_09-50-19_CHI-LX-L-035, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=IntervalStrategy.STEPS, lr_scheduler_type=SchedulerType.LINEAR, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=1.0, optim=OptimizerNames.ADAMW_HF, output_dir=./distilbert-base-uncased-agnews-student, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=128, per_device_train_batch_size=32, prediction_loss_only=False, push_to_hub=False, 
push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, remove_unused_columns=True, report_to=[], resume_from_checkpoint=None, run_name=./distilbert-base-uncased-agnews-student, save_on_each_node=False, save_steps=500, save_strategy=IntervalStrategy.STEPS, save_total_limit=0, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, tpu_metrics_debug=False, tpu_num_cores=None, use_legacy_prediction_loop=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xpu_backend=None, ) 05/03/2022 09:50:19 - INFO - __main__ - Generating predictions from zero-shot teacher model [INFO|configuration_utils.py:654] 2022-05-03 09:50:19,224 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb [INFO|configuration_utils.py:690] 2022-05-03 09:50:19,225 >> Model config RobertaConfig { "_name_or_path": "roberta-large-mnli", "_num_labels": 3, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.18.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|modeling_utils.py:1772] 2022-05-03 09:50:19,391 >> loading weights file https://huggingface.co/roberta-large-mnli/resolve/main/pytorch_model.bin from cache at 
/home/eknochenhauer/.cache/huggingface/transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0 [WARNING|modeling_utils.py:2048] 2022-05-03 09:50:22,672 >> Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [INFO|modeling_utils.py:2065] 2022-05-03 09:50:22,672 >> All the weights of RobertaForSequenceClassification were initialized from the model checkpoint at roberta-large-mnli. If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForSequenceClassification for predictions without further training. [INFO|tokenization_auto.py:344] 2022-05-03 09:50:22,808 >> Could not locate the tokenizer configuration file, will try to use the model config instead. 
[INFO|configuration_utils.py:654] 2022-05-03 09:50:22,950 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb [INFO|configuration_utils.py:690] 2022-05-03 09:50:22,951 >> Model config RobertaConfig { "_name_or_path": "roberta-large-mnli", "_num_labels": 3, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.18.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/vocab.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab [INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/merges.txt from cache at /home/eknochenhauer/.cache/huggingface/transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b [INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file 
https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730 [INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/added_tokens.json from cache at None [INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/special_tokens_map.json from cache at None [INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer_config.json from cache at None [INFO|configuration_utils.py:654] 2022-05-03 09:50:24,079 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb [INFO|configuration_utils.py:690] 2022-05-03 09:50:24,079 >> Model config RobertaConfig { "_name_or_path": "roberta-large-mnli", "_num_labels": 3, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.18.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 
} 0%| | 0/2500 [00:00<?, ?it/s]/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling') 100%|█████████████████████████████████████| 2500/2500 [5:37:27<00:00, 8.10s/it] 05/03/2022 15:27:51 - INFO - __main__ - Initializing student model [INFO|configuration_utils.py:654] 2022-05-03 15:27:51,844 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 [INFO|configuration_utils.py:690] 2022-05-03 15:27:51,846 >> Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased", "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2", "3": "LABEL_3" }, "initializer_range": 0.02, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2, "LABEL_3": 3 }, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.18.0", "vocab_size": 30522 } [INFO|modeling_utils.py:1772] 2022-05-03 15:27:52,023 >> loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /home/eknochenhauer/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a [WARNING|modeling_utils.py:2048] 2022-05-03 15:27:52,616 >> Some weights of 
the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_projector.bias', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_layer_norm.weight', 'vocab_transform.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:2059] 2022-05-03 15:27:52,616 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
[INFO|configuration_utils.py:654] 2022-05-03 15:27:52,923 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 [INFO|configuration_utils.py:690] 2022-05-03 15:27:52,926 >> Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased", "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.18.0", "vocab_size": 30522 } [INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,860 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /home/eknochenhauer/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 [INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 [INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None [INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None 
[INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79 [INFO|configuration_utils.py:654] 2022-05-03 15:27:54,019 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333 [INFO|configuration_utils.py:690] 2022-05-03 15:27:54,021 >> Model config DistilBertConfig { "_name_or_path": "distilbert-base-uncased", "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.18.0", "vocab_size": 30522 } 100%|███████████████████████████████████| 20000/20000 [00:07<00:00, 2840.94ex/s] 05/03/2022 15:28:01 - INFO - __main__ - Training student model on teacher predictions [INFO|trainer.py:566] 2022-05-03 15:28:01,148 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message. 
/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( [INFO|trainer.py:1290] 2022-05-03 15:28:01,165 >> ***** Running training ***** [INFO|trainer.py:1291] 2022-05-03 15:28:01,165 >> Num examples = 20000 [INFO|trainer.py:1292] 2022-05-03 15:28:01,165 >> Num Epochs = 1 [INFO|trainer.py:1293] 2022-05-03 15:28:01,165 >> Instantaneous batch size per device = 32 [INFO|trainer.py:1294] 2022-05-03 15:28:01,165 >> Total train batch size (w. parallel, distributed & accumulation) = 32 [INFO|trainer.py:1295] 2022-05-03 15:28:01,165 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1296] 2022-05-03 15:28:01,165 >> Total optimization steps = 625 0%| | 0/625 [00:00<?, ?it/s]Traceback (most recent call last): File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module> main() File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main trainer.train() File "/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train tr_loss_step = self.training_step(model, inputs) File "/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in training_step loss = self.compute_loss(model, inputs) File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss target_p = inputs["labels"] File "/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 235, in 
__getitem__ return self.data[item] KeyError: 'labels' 0%| | 0/625 [00:00<?, ?it/s] ### Expected behavior ```shell No KeyError. Successful training of student classifier and output available in output_dir. ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/12680"
" TITLE Running out of memory when resume training. COMMENTS 9 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Might be similar problem as #11317, node runs out of cpu memory (512GB). To reproduce: (i) ``` deepspeed --hostfile myhostfile \ ${_PATH}/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path hyunwoongko/blenderbot-9B \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --deepspeed ${_PATH}/tests/deepspeed/ds_config_zero3.json \ --logging_steps 1 \ --fp16 \ --overwrite_output_dir \ --save_steps 10 \ --gradient_accumulation_steps 1 \ --evaluation_strategy="steps" \ --max_train_samples 10024 \ --max_eval_samples 32 \ --max_source_length 128 --max_target_length 128 \ --eval_steps 5 ``` (ii) Afterwards in order to resume I use the option `--resume_from_checkpoint /tmp/tst-summarization/checkpoint-10`. A workaround is to export the FP32 weights using the script `zero_to_fp32.py` as described in [https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out](https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out) and restart directly from `pytorch_model.bin`, nevertheless it would be better to resume directly from the deepspeed checkpoint, if possible. torch: 1.8.1+cu111 transformers: 4.9.0.dev0 deepspeed: 0.4.4+d1a7a55 log: [log.txt](https://github.com/huggingface/transformers/files/6808841/log.txt) @stas00 "
[ 56 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "DeepSpeed" ]
"https://api.github.com/repos/huggingface/transformers/issues/11129"
" TITLE denoising with sentence permutation, and language sampling COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation When training or fine tuning models, data collator provided in huggingface isn't enough. For example, if we want to further pretrain `mBART` or `XLM-R`, where language sampling or sentence permutation are needed, which is hard to do with huggingface datasets API since it loads all language datasets at first. Thanks! "
[ 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
"https://api.github.com/repos/huggingface/transformers/issues/8000"
" TITLE How to load tokenizer for models without vocab.txt? COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I want to use xlm-roberta-large model, but "https://huggingface.co/"just give the file named "xlm-roberta-large-tokenizer.json", and have no "vocab.txt", so how to use the package “XLMRobertaTokenizer” to load the the file "xlm-roberta-large-tokenizer.json"? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/15564"
" TITLE How to fine-tune NLP tasks docs COMMENTS 10 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY This PR focuses on adding guides for how-to fine-tune models for NLP tasks. Main additions: - Change the TOC structure to include a nested section in the how-to guides for fine-tuning for NLP tasks. - Separate each task in `custom_datasets` into their own pages. This will make it easier for users to find and read about a specific one. Also, when you click on any of the subsections that share the same name (`Preprocess`, `Fine-tune with Trainer API`, and `Fine-tune with TensorFlow`), you are automatically redirected to the sequence classification section even if you wanted to look at the question answering section. - Add a guide for causal/masked language modeling. To do: - [x] Add guides for multiple choice, translation and summarization."
[ 23 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Documentation" ]
"https://api.github.com/repos/huggingface/transformers/issues/16967"
" TITLE cannot import name 'RegNetModel' from 'transformers' COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell python 3.8 transformers 4.18.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import RegNetModel ### Expected behavior ```shell how to import RegNetModel ? ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/10426"
" TITLE [WIP] CLIP COMMENTS 5 REACTIONS +1: 2 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR adds OpenAI's CLIP model. original repo: https://github.com/openai/CLIP initial demo: https://colab.research.google.com/drive/1hwiCuKvw7hwSlE8yv7J1dh280PlYgPef?usp=sharing"
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
"https://api.github.com/repos/huggingface/transformers/issues/11630"
" TITLE Simplify GPT-Neo local attention implementation COMMENTS 10 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY # What does this PR do? The current local self attention implementation for gpt-neo follows the implementation in mtf, which makes it hard to understand. This pull request implements local self attention by changing the bias mask in global attention. Local self attention is a sliding window where each token can only attend to the previous window_size tokens. This implementation clearly reflects this. [Measured performance](https://gist.github.com/finetuneanon/5b2186c3555b652f387c86160cd89b55) (apply just 330686a3c0520c9727fe5ebed385e708d0178d77 and 269c497be1691556c830f61fa8f90001c692722f on #11320 patched to use) shows no noticable difference between implementations with respect to speed or VRAM usage. The results of both implementations are also identical. Fixes #11320 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Since this PR mostly removes code, no additional tests or documentation were written. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
gpt-neo: @patil-suraj "
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
"https://api.github.com/repos/huggingface/transformers/issues/12768"
" TITLE Alphafold 2.0 COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🌟 New model addition It would be so amazing to have Alphafold model in huggingface. 😍 I don't know if there is any plan to add these kind of models to huggingface repo. <!-- Important information --> ## Model description ## Open source status * [x] the model implementation is available: ([github](https://github.com/deepmind/alphafold)) * [] the model weights are available: ([github](https://github.com/deepmind/alphafold)) * [x] who are the authors: (@deepmind)"
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/9069"
" TITLE Fix some typos COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY "
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/17207"
" TITLE Add UL2: Unifying Language Learning Paradigms COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code and weights (20 billion parameter models): https://github.com/google-research/google-research/tree/master/ul2 The code is based on T5x (which is JAX/FLAX): https://github.com/google-research/t5x"
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/9966"
" TITLE Bump bleach from 3.1.5 to 3.3.0 in /examples/research_projects/lxmert COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Bumps [bleach](https://github.com/mozilla/bleach) from 3.1.5 to 3.3.0. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/mozilla/bleach/blob/master/CHANGES">bleach's changelog</a>.</em></p> <blockquote> <h2>Version 3.3.0 (February 1st, 2021)</h2> <p><strong>Backwards incompatible changes</strong></p> <ul> <li>clean escapes HTML comments even when strip_comments=False</li> </ul> <p><strong>Security fixes</strong></p> <ul> <li>Fix bug 1621692 / GHSA-m6xf-fq7q-8743. See the advisory for details.</li> </ul> <p><strong>Features</strong></p> <p>None</p> <p><strong>Bug fixes</strong></p> <p>None</p> <h2>Version 3.2.3 (January 26th, 2021)</h2> <p><strong>Security fixes</strong></p> <p>None</p> <p><strong>Features</strong></p> <p>None</p> <p><strong>Bug fixes</strong></p> <ul> <li>fix clean and linkify raising ValueErrors for certain inputs. Thank you <a href="https://github.com/Google-Autofuzz"><code>@Google-Autofuzz</code></a>.</li> </ul> <h2>Version 3.2.2 (January 20th, 2021)</h2> <p><strong>Security fixes</strong></p> <p>None</p> <p><strong>Features</strong></p> <ul> <li>Migrate CI to Github Actions. Thank you <a href="https://github.com/hugovk"><code>@hugovk</code></a>.</li> </ul> <p><strong>Bug fixes</strong></p> <ul> <li>fix linkify raising an IndexError on certain inputs. Thank you <a href="https://github.com/Google-Autofuzz"><code>@Google-Autofuzz</code></a>.</li> </ul> <p>Version 3.2.1 (September 18th, 2020)</p> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/mozilla/bleach/commit/79b7a3c5e56a09d1d323a5006afa59b56162eb13"><code>79b7a3c</code></a> Merge pull request from GHSA-vv2x-vrpj-qqpq</li> <li><a href="https://github.com/mozilla/bleach/commit/842fcb4a05e59d9a22dafb8c51865ee79d753c03"><code>842fcb4</code></a> Update for v3.3.0 release</li> <li><a href="https://github.com/mozilla/bleach/commit/1334134d34397966a7f7cfebd38639e9ba2c680e"><code>1334134</code></a> sanitizer: escape HTML comments</li> <li><a href="https://github.com/mozilla/bleach/commit/c045a8b2a02bfb77bb9cacd5d3e5926c056074d2"><code>c045a8b</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/mozilla/bleach/issues/581">#581</a> from mozilla/nit-fixes</li> <li><a href="https://github.com/mozilla/bleach/commit/491abb06ce89012d852f4c5ab3aff8f572532611"><code>491abb0</code></a> fix typo s/vnedoring/vendoring/</li> <li><a href="https://github.com/mozilla/bleach/commit/10b1c5dda8ebceffce1d8f7d66d4b309b4f8c0cf"><code>10b1c5d</code></a> vendor: add html5lib-1.1.dist-info/REQUESTED</li> <li><a href="https://github.com/mozilla/bleach/commit/cd838c3b527021f2780d77718488fa03d81f08e3"><code>cd838c3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/mozilla/bleach/issues/579">#579</a> from mozilla/validate-convert-entity-code-points</li> <li><a href="https://github.com/mozilla/bleach/commit/612b8080ada0fba45f0575bfcd4f3a0bda7bfaca"><code>612b808</code></a> Update for v3.2.3 release</li> <li><a href="https://github.com/mozilla/bleach/commit/6879f6a67058c0d5977a8aa580b6338c9d34ff0e"><code>6879f6a</code></a> html5lib_shim: validate unicode points for convert_entity</li> <li><a href="https://github.com/mozilla/bleach/commit/90cb80be961aaf650ebc65b2ba2b789a2e9b129f"><code>90cb80b</code></a> Update for v3.2.2 release</li> <li>Additional commits viewable in <a 
href="https://github.com/mozilla/bleach/compare/v3.1.5...v3.3.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bleach&package-manager=pip&previous-version=3.1.5&new-version=3.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>"
[ 5 ]
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "dependencies" ]
"https://api.github.com/repos/huggingface/transformers/issues/11677"
" TITLE Identify issue in slow torch tests COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Pull request to try and identify the source of the hangs in the torch slow CI. Torch slow CI was taking three hours per run until a few days ago, and has since jumped to 6+ hours, for an unknown reason. The job ends up being killed as it goes over the timeout, so the resulting time might end up being even larger than six hours. Example of run that took 3 hours (April 20, 2021): https://github.com/huggingface/transformers/actions/runs/765376348 Example of run that took 6+ hours (April 21, 2021: https://github.com/huggingface/transformers/actions/runs/768949009 Here is an example of a run that took 6+ hours, while completing the full common tests: https://github.com/huggingface/transformers/runs/2443524960?check_suite_focus=true The common tests took 5h56 minutes to complete, and the pipeline tests took more than 4 hours to complete before being apparently killed by CI, so there was clearly something going wrong here. In order to investigate the root cause of the issue, opening a PR here. Tests will be conducted on a testing machine with the exact same configuration as the other CI machines. Investigating on a single run, on a single GPU machine. The approach is discussed with @stas00, who is helping out and offered some of the steps below. ## Step 1 The first step is ensuring this is not an error linked to the machine itself, so we first start by running the job on the machine without changing anything to it. We only add a 240-minute timeout so that it can go on to step 2 if it goes over the 4 hour mark (as we know it should take less than 3 hours to complete) See run for first step here: https://github.com/huggingface/transformers/runs/2554755801 Edit: First run errored out at 6 hours like on other machines. I do not think it is a setup issue. 
## Step 2 (if step 1 doesn't resolve the issue) The second step is twofold: removing `pytest-xdist` as we do not leverage it (we're using a single worker), and adding `pytest-timeout` with a timeout of 300 seconds. See run for second step here: https://github.com/huggingface/transformers/runs/2554760360 ## Step 3 (if step 1 & 2 don't resolve the issue) Do a manual run - at the 3 hour mark, it should be hanging. As it is hanging, try to retrieve information about what is hanging. For example, with the following: ``` pip install py-spy # dumps traceback for each thread sudo py-spy dump --pid PID ``` ## Step 4 (if no step above resolves the issue) The diff between the two jobs (3hr and 6hr) doesn't seem to have anything that would make the tests hang - but reverting to the previous repository state could help us identify the culprit. Diff: https://github.com/huggingface/transformers/compare/95037a1..95dab34 Additionally, Stas identified two differences in dependencies between the two runs: ``` -datasets-1.5.0 +datasets-1.6.0 -nltk-3.6.1 +nltk-3.6.2 ``` Those should be investigated at the same time."
[ 3, 20 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP", "Testing" ]
"https://api.github.com/repos/huggingface/transformers/issues/15543"
" TITLE Move generic PyTorch utils function from modeling_utils.py to pytorch_utils COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request > Since we're creating a new module file, can we maybe move some other functions there? Like the no_init_weight context manager, and: > > all pruning stuff > apply_chunking_to_forward > get_parameter_device > get_parameter_dtype > so that the modeling utils file stays focused on PreTrainedModel and the layers it defines? Taken from comment here: https://github.com/huggingface/transformers/pull/15498#pullrequestreview-874783288 ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> "
[ 52 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/15857"
" TITLE Supporting multiple evaluation datasets in `Trainer` and `Seq2seqTrainer` COMMENTS 6 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request Support for evaluating on multiple validation datasets when using the `Trainer` class. ## Motivation This is a common use case in research. Imagine that you train your model on some data, and you have validation data coming from two distinct distributions. You want to compute two sets of metrics - one for the validation dataset with the same distribution as the training data and one for the validation dataset with known distribution. ## Your contribution - Happy to submit an example with my own code (assuming the research makes sense) so that others see how this can be achieved in practice. - Could update relevant posts on the huggingface forum so that other users requiring this feature can see how it can be done. ## Things I have tried Inspired by this post [here](https://discuss.huggingface.co/t/evaluating-your-model-on-more-than-one-dataset/1544) and @sgugger's solution, I set out to see if implementing the `on_evaluate` callback is possible, but I can't figure out how to get the other validation datasets to it - the callaback can only access the trainer init arguments/state but none of these objects can be passed additional data loaders. How can this be approached instead? My current solution is not clean, but may work: 1. Use `setattr` to add an attribute to the trainer after init, call it `additional_eval_datasets` 2. Override the `_maybe_log_save_evaluate` method as follows: - Call the `Trainer` superclass method first to do what the trainer would normally do - loop through the additional datasets, calling the `Trainer.evaluate` for each dataset with appropriate inputs This is a bit hacky as I should not be overriding a private method, but override `evaluate` instead. However, the implementation would be more concise in this way. 
Please flag any undesirable side effects this may lead to - I did read through the `Trainer` source code and did not spot any pitfalls! Of course, this approach can be adapted to support evaluation on multiple datasets natively in the `Trainer`. "
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
"https://api.github.com/repos/huggingface/transformers/issues/10293"
" TITLE [pretrained] model classes aren't checking the arch of the pretrained model it loads COMMENTS 12 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY While comparing different models trained on xsum (most of which are Bart) I made a mistake and passed "google/pegasus-xsum" to `BartForConditionalGeneration` ``` BartForConditionalGeneration.from_pretrained("google/pegasus-xsum") ``` I got: ``` Some weights of the model checkpoint at google/pegasus-xsum were not used when initializing BartForConditionalGeneration: ['model.encoder.layer_norm.weight', 'model.encoder.layer_norm.bias', 'model.decoder.layer_norm.weight', 'model.decoder.layer_norm.bias'] - This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at google/pegasus-xsum and are newly initialized: ['model.encoder.embed_positions.weight', 'model.encoder.layernorm_embedding.weight', 'model.encoder.layernorm_embedding.bias', 'model.decoder.embed_positions.weight', 'model.decoder.layernorm_embedding.weight', 'model.decoder.layernorm_embedding.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
Traceback (most recent call last): File "./bart-summarize2.py", line 8, in <module> tokenizer = BartTokenizer.from_pretrained(mname) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1788, in from_pretrained return cls._from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/roberta/tokenization_roberta.py", line 159, in __init__ super().__init__( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2.py", line 179, in __init__ with open(vocab_file, encoding="utf-8") as vocab_handle: TypeError: expected str, bytes or os.PathLike object, not NoneType ``` Any reason why the model class doesn't check that it's being fed a wrong architecture? It could detect that and give a corresponding error message, rather than spitting random errors like above? I was pretty sure it was a bug in pegasus model until I noticed that pegasus != Bart. Thanks. @LysandreJik "
[ 52 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/17132"
" TITLE Dataset streaming example not working COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.173.el7-x86_64-with-glibc2.10 - Python version: 3.8.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0a0+17540c5 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.4.2 (gpu) - Jax version: 0.3.10 - JaxLib version: 0.3.10 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following the guide to train a model in streaming mode using the [dataset-streaming](https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects/dataset-streaming) directory results in the following error. ``` [11:11:16] - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_480.txt.gz Token indices sequence length is longer than the specified maximum sequence length for this model (1195 > 512). 
Running this sequence through the model will result in indexing errors Traceback (most recent call last): File "./run_mlm_flax_stream.py", line 549, in <module> eval_samples = advance_iter_and_group_samples(training_iter, data_args.num_eval_samples, max_seq_length) File "./run_mlm_flax_stream.py", line 284, in advance_iter_and_group_samples samples = {k: samples[k] + tokenized_samples[k] for k in tokenized_samples.keys()} File "./run_mlm_flax_stream.py", line 284, in <dictcomp> samples = {k: samples[k] + tokenized_samples[k] for k in tokenized_samples.keys()} TypeError: can only concatenate list (not "int") to list ``` ### Expected behavior ```shell Model training to start. ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/7538"
" TITLE T5 supervised denoising task COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request Hi everyone! I'm experimenting with T5 and I would like to fine-tune a specific pre-trained model of mine for tackling the 'filling the mask' task. To be clear, I have the following: I love \<mask> and Mario. Where \<mask> can be a single token or span. At the moment I framed the problem in this way: - input: I love <extra_id_0> and Mario. - output/label: luca The task I want to tackle is different from the canonical unsupervised one where I was able to perform it correctly. Do you think that the discussed framing presented above is enough? From the result that I got, it doesn't seem so. "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/9992"
" TITLE Adversarial/amnesic heads COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request Task heads that backpropagate deliberately reversed gradients to the encoder. A flag requesting this behavior when constructing a task head. ## Motivation Transfer learning experiments lend themselves to questions about the extent to which two tasks rely on the same information about a word/sentence, and to experiments probing whether and how word encodings contain/correspond to syntax trees, lemmas, frequencies, and other objects of linguistic/psycholinguistic study. A difficulty is that a pretrained model, without fine-tuning, may already encode certain information too thoroughly and accessibly for intermediate training to make much of a difference. For example, BERT's masked language modeling objective produces word encodings in which syntax information is readily accessible. Intermediate training on a syntax task requires training a task head to extract this information, of course, but it will result in very little reorganization of the encoder itself. Adversarial training, such as the amnesic probing of Elazar et al. 2020, can avoid this pitfall. Intermediate training can aim to burn particular information *out* of the encodings, and measure how much this impairs trainability of the target task. Strictly reversing the sense of the training data won't do it though; getting all the answers exactly wrong requires just as much domain knowledge as getting them all right does. And randomizing the labels on training data may just result in a feckless task head, one that discards useful information passed to it from the encoder, rather than affecting the encoder itself. 
Ideally, then, the task head would be trained toward correctly reproducing gold-standard labels, but would flip all its gradients before backpropagating them to the shared encoder, thus training it not to produce precisely the signals that the task head found most informative. The following work by Cory Shain illustrates flipping gradients in this way (although it's not applied to shared-encoder transfer learning, but rather to development of encoders that disentangle semantics from syntax). https://docs.google.com/presentation/d/1E89yZ8jXXeSARDLmlksOCJo83QZdNbd7phBrR_dRogg/edit#slide=id.g79452223cd_0_19 https://github.com/coryshain/synsemnet ## Your contribution I am deeply unfamiliar with pytorch, unfortunately, and utterly ignorant of tensorflow. I can't offer much."
[ 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
"https://api.github.com/repos/huggingface/transformers/issues/15232"
" TITLE Wav2Vec2ForPreTraining doc example has None loss COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Linux-5.11.0-46-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.1 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l Documentation: @sgugger ## Information I'm trying to additionally pretrain Wav2Vec2.0 model on my dataset. [In the docs](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining) you have an example for running the `Wav2Vec2ForPreTraining`: ```python import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices from datasets import load_dataset import soundfile as sf feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/wav2vec2-base") model = Wav2Vec2ForPreTraining.from_pretrained("patrickvonplaten/wav2vec2-base") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) input_values = feature_extractor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1 # compute masked indices batch_size, raw_sequence_length = input_values.shape sequence_length = 
model._get_feat_extract_output_lengths(raw_sequence_length) mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2) with torch.no_grad(): outputs = model(input_values, mask_time_indices=mask_time_indices) # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states) cosine_sim = torch.cosine_similarity( outputs.projected_states, outputs.projected_quantized_states, dim=-1 ) # show that cosine similarity is much higher than random assert cosine_sim[mask_time_indices].mean() > 0.5 # for contrastive loss training model should be put into train mode model.train() loss = model(input_values, mask_time_indices=mask_time_indices).loss ``` If you print the `loss` that you get in the end, you will get `None`. This happens because [in the definition](https://github.com/huggingface/transformers/blob/f4b7420dfe419fe653908f091976517635a119e6/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1511) you have to pass `sampled_negative_indices` in order to get a non-`None` loss. ## To reproduce Steps to reproduce the behavior: 1. Run the above code 2. `print(loss)` in the end ## Expected behavior I expected the docs to show how to get the actual loss and train the model. "
[ 52, 6 ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue", "Good First Documentation Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/8957"
" TITLE Update README.txt COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 --> "
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/17136"
" TITLE PyTorch FSDP integration in Trainer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? PyTorch recently upstreamed the Fairscale FSDP into PyTorch Distributed with additional optimizations. This PR is aimed at integrating it into Trainer API. - It enables Distributed Training at Scale. It's a wrapper for sharding Module parameters across data parallel workers. This is inspired by Xu et al. as well as the ZeRO Stage 3 from DeepSpeed. - PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? "
[ 54 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
[ "PyTorch FSDP" ]
"https://api.github.com/repos/huggingface/transformers/issues/15739"
" TITLE Add compatibility for Postponed Evaluation of Annotations (PEP 563) COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Hello, The code says that it will add compatibility for Postponed Evaluation of Annotations ([PEP 563](https://www.python.org/dev/peps/pep-0563/)) when Python 3.9 is released (which already happened on 2020.10.5). Is there any plan to complete this? https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/hf_argparser.py#L85-L90"
[ 52 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/16949"
" TITLE LayoutLMV3 COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. LayoutLMv3 greatly simplifies training and reduces the number of parameters compared to v3, making it an important milestone in document understanding. [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [Huggingface Pretrained Download](https://huggingface.co/microsoft/layoutlmv3-base) "
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/7719"
" TITLE wrong decoder_input_ids[:,0] for MarianMT models ? COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux - Python version: 3.7.0 - PyTorch version (GPU?): 1.6.0 - Using GPU in script?: Yes ### Who can help: @sshleifer ## Information Model I am using (Bert, XLNet ...): MarianMTModel The problem arises when using: * [x] the official example scripts: (give details below) ```from transformers import MarianTokenizer tok = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de') src_texts = [ "I am a small frog.", "Tom asked his teacher for advice."] tgt_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."] # optional batch_enc: BatchEncoding = tok.prepare_seq2seq_batch(src_texts, tgt_texts=tgt_texts) # model(**batch) should work ``` model(**batch) doesn't work as intended because [shift_tokens_right](https://github.com/huggingface/transformers/blob/03ec02a667d5ed3075ea65b9f89ef7135e97f6b4/src/transformers/modeling_bart.py#L226) adds eos token to generate the target sequence. ``` shift_tokens_right(batch["labels"], model.config.pad_token_id) ``` returns ``` [[0, 105, 495, 53, 5324, 17279, 649, 3], [0, 2136, 8818, 715, 5832, 91, 688, 3]] ``` instead of ``` [[58100, 105, 495, 53, 5324, 17279, 649, 3], [58100, 2136, 8818, 715, 5832, 91, 688, 3]] ``` Here, "58100" is the decoder_start_token_id."
[ 28 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "marian" ]
"https://api.github.com/repos/huggingface/transformers/issues/16846"
" TITLE Range Error for BERT Masked Language Modeling on IMDB COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1ZpYRkJVMF5r3MukUheEFtgDvqax4YCxM?usp=sharing ### Expected behavior ```shell Evaluation to complete and give me a perplexity score, as it does [here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section3_tf.ipynb) ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/7462"
" TITLE RAG - how to precompute custom document index? COMMENTS 18 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Was wondering if there was any code snippet / blog post showing how one could load their own documents and index them, so they can be used by the RAG retriever. Cheers!"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/16403"
" TITLE R3M: A Universal Visual Representation for Robot Manipulation COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🌟 New model addition ## Model description We pre-train a visual representation using the Ego4D human video dataset using a combination of time-contrastive learning, video-language alignment,and an L1 penalty to encourage sparse and compact representations. The resulting representation, R3M, can be used as a frozen perception module for downstream policy learning. Across a suite of 12 simulated robot manipulation tasks, we find that R3M improves task success by over 20% compared to training from scratch and by over 10% compared to state-of-the-art visual representations like CLIP and MoCo. Furthermore, R3M enables a Franka Emika Panda arm to learn a range of manipulation tasks in a real, cluttered apartment given just 20 demonstrations. <!-- Important information --> ## Open source status * [x] the model implementation is available:(https://github.com/facebookresearch/r3m) * [x] the model weights are available: https://github.com/facebookresearch/r3m/blob/main/r3m/example.py * [x] who are the authors: @suraj-nair-1 "
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/8337"
" TITLE pip cannot install transformers with python version 3.X version on Ubuntu 18.04 COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: ``` NAME="Ubuntu" VERSION="18.04.5 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.5 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic ``` - Python version: python3.X - PyTorch version (GPU?): no - Tensorflow version (GPU?): na - Using GPU in script?: na - Using distributed or parallel set-up in script?:na ### Who can help tokenizers: @mfuntowicz ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] pip install transformers ## To reproduce Steps to reproduce the behavior: 1. Create a VM with Ubuntu 18.04 2. Install any python3.X version 3. 
pip3 install transformers <!-- Requirement already satisfied: tokenizers==0.9.2 in /usr/local/lib/python3.6/dist-packages (from transformers) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) Requirement already satisfied: click in /usr/lib/python3/dist-packages (from sacremoses->transformers) Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->transformers) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) Building wheels for collected packages: sentencepiece Running setup.py bdist_wheel for sentencepiece ... 
error Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-baldc7a3/sentencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp0rhc8tr_pip-wheel- --python-tag cp36: running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece running build_ext /bin/sh: 1: pkg-config: not found Cloning into 'sentencepiece'... Note: checking out '8336bbd0c1cfba02a879afe625bf1ddaf7cd93c5'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b <new-branch-name> ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found make: *** No targets specified and no makefile found. Stop. make: *** No rule to make target 'install'. Stop. env: ‘pkg-config’: No such file or directory Failed to find sentencepiece pkg-config ---------------------------------------- Failed building wheel for sentencepiece Running setup.py clean for sentencepiece Failed to build sentencepiece Installing collected packages: sentencepiece, transformers Running setup.py install for sentencepiece ... 
error Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-baldc7a3/sentencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-cfvofwnm-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece running build_ext /bin/sh: 1: pkg-config: not found mkdir: cannot create directory ‘bundled’: File exists fatal: destination path 'sentencepiece' already exists and is not an empty directory. fatal: destination path 'sentencepiece' already exists and is not an empty directory. mkdir: cannot create directory ‘build’: File exists ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found make: *** No targets specified and no makefile found. Stop. make: *** No rule to make target 'install'. Stop. env: ‘pkg-config’: No such file or directory Failed to find sentencepiece pkg-config ---------------------------------------- Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-baldc7a3/sentencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-cfvofwnm-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-baldc7a3/sentencepiece/ -> ## Expected behavior Successful installation of transformers "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/8348"
" TITLE PEGASUS generation/decoding VERY Slow COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: 3.4.0 - Platform: Linux-4.19.112+-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @sshleifer ## Information Model I am using: *PEGASUS-large*, *PEGASUS-cnn_dailymail*, *PEGASUS-xsum* The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I compared the generation/decoding performance of BART and PEGASUS both loaded via AutoModelSeq2SeqForLM fine-tuned on a custom dataset for the same amount of time. Generation was done using basic greedy decoding. PEGASUS models are anywhere between 5-15x slower than BART. Fine-tuning speed was on-par for both models. ## To reproduce Steps to reproduce the behavior: 1. Load PEGASUS and BART from any of the mentioned checkpoints (using AutoModelSeq2SeqForLM) 1. Fine-tune models 2. Decode using greedy decoding 3. Compare fine-tuning performance with other Seq2Seq Model (BART-large) ## Expected behavior Decoding performance on-par with BART-large? "
[ 41, 7 ]
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix", "Summarization" ]
"https://api.github.com/repos/huggingface/transformers/issues/7827"
" TITLE Do I need to apply the softmax function to my logit before calculating the CrossEntropyLoss? COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Hello, I am trying to compute the CrossEntropyLoss directly by using this code: ``` loss_fct = CrossEntropyLoss() mc_loss = loss_fct(reshaped_logits, mc_labels) ``` If the reshaped_logits contain the logit values before softmax, should I apply `nn.softmax` function before I do `loss_fct(reshaped_logits, mc_labels)`? Thank you,"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/15489"
" TITLE Create a custom model guide COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY First draft of a guide for how to create a model without using any of the `Auto...` classes to give users a better idea of what's happening behind the automagic. The goal of this guide is to show users alternative methods for creating a model. It also demonstrates how users can instantiate these classes themselves if they want to customize or experiment with the default attributes/parameters loaded from a pretrained model. So in a sense, it is also a guide for creating a custom model. Some feedback I would appreciate: - Would adding a graphic showing `configuration -> model <- tokenizer/feature extractor/processor` help show how a model is created? - Would adding an end-to-end code sample (setting up a custom configuration, model and tokenizer) at the end help tie everything together? - Are there more details I can add around creating a custom model? - How is my tone for training a model from scratch? I feel like it might be a bit too harsh (see line 118). 😅 Thanks and let me know if I'm missing anything!"
[ 23 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Documentation" ]
"https://api.github.com/repos/huggingface/transformers/issues/10958"
" TITLE Returning Confidence Score For Extractive QA Task When Using Non-Pipeline Approach COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request HF's Extract QA pipeline provides an excellent interface for start. It returns 4 values including a **probability score / confidence score**. Unfortunately same is not the case when using the non-pipeline approach i.e using model and tokenizer for question answering. [Both methods are mentioned here, The pipeline one and the other](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) ## Motivation The confidence score will help a lot in various tasks. For example when I am developing a complete pipeline for QA, consisting of recall, retriever and some other models for entity matching and etc. I need to calculate scores of each models and then rank the final list of documents based on the weighted sum of score from each model. I believe this is a very common practice among NLP practitioners and not just for QA task. The point is confidence scores are usually a pretty standard requirement for each model output because we have to take further actions based on its score. ## Your contribution I want to. but unfortunately I am not at the level where I can understand the code. I have went through the code and I believe its the "decode" function in "QuestionAnsweringPipeline" class which has the code the which generates the probability scores. If you guys can just provide an interface for it or provide docs for how to calculate this score using the model and tokenizer approach then that would be great too. And if you do decide to do this then please also add this addition to docs in the link mentioned at the top. Thanks. "
[ 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
"https://api.github.com/repos/huggingface/transformers/issues/8183"
" TITLE Summarization outputs on T5-small gets truncated COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- --> I been fine tuning t5-small for my own dataset but every time I set a max_length it just truncates the output. For example my input statement is : **When I first entered high school I was very nervous as it was a new school for me and it was a big adjustment. I was overwhelmed with work and mentally wasn't staying optimistic as I found it hard to manage my time and make friends. I felt like I wasn't good enough, and this caused me to treat myself like I wasn't worthy of being at such a place. In terms of behavior to others, I would say it made me more shy while still adapting to the new environment.** and my output is as follows: **when I first entered high school I was very nervous as it was a new school for me and it was a** My generate is as follows: **( input, min_length= 0, max_length=25, length_penalty=2.0, num_beams=4, early_stopping=True )** Is it possible for me to make it not truncate at the end? and also make it generate a reasonable summary ? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/12169"
" TITLE Allow setting permissions of downloaded models (via envvar) COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY In our research group we all have user accounts on a server where we each run our own experiments (Ubuntu behind the scenes). By default, everyone is downloading `transformers` models to their own home directory. Let's say we have 20 researchers, that might mean that we have 20 duplicates of "bert-base-cased" on the server (and of many other models). This is not efficient at all and takes too much room to our liking. We have tried creating a 777 directory as TRANSFORMERS_CACHE globally, but that does not work. If I download a model, some of the downloaded files get a read/write access for me as the creator of the file. This means that others cannot use the model (permission denied). Our suggestion or request would be to have an option when downloading a model to also set its permissions for all downloaded files. Preferably adjustable via a (system-wide) environment variable. This would probably need to be added in file_utils.py, similar to other options like "local_files_only". I currently do not have time to work on this myself, but I am open to any feedback of course."
[ 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
"https://api.github.com/repos/huggingface/transformers/issues/7727"
" TITLE what is the perplexity of distilbert-base-uncased ? COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help ## Details In the [readme](https://github.com/huggingface/transformers/tree/master/examples/distillation) , it is said that distilbert-base-uncased is pretraind on the same data used to pretrain Bert, so I wonder what is the final perplexity or cross entropy of the pretrain? "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/7338"
" TITLE BufferedWriter takes most of the time COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Speed and Memory Benchmarks: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` english = pipeline( "question-answering", model="distilbert-base-uncased-distilled-squad", tokenizer="distilbert-base-uncased-distilled-squad" ) text1 = """It comes as pubs, bars, restaurants and other hospitality venues in England are told they must have a 22:00 closing time from Thursday. Full details will be set out by the prime minister in Parliament later. Boris Johnson is meeting the first ministers of Scotland, Wales and Northern Ireland and will address the nation in a live broadcast at 20:00 BST on Tuesday. As well as the early closing time for hospitality venues, he is expected to announce they will be restricted by law to table service only. 
""" %%prun english({'question': 'Which country is the news about?', 'context': text1}) ``` The profiling result is ``` 6256 function calls (6155 primitive calls) in 1.097 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 1 0.713 0.713 0.713 0.713 {method 'write' of '_io.BufferedWriter' objects} 37 0.229 0.006 0.229 0.006 {method 'matmul' of 'torch._C._TensorBase' objects} 12 0.030 0.002 0.030 0.002 {built-in method matmul} 5 0.020 0.004 0.020 0.004 {method 'dump' of '_pickle.Pickler' objects} 6 0.019 0.003 0.019 0.003 {method 'softmax' of 'torch._C._TensorBase' objects} 33 0.012 0.000 0.012 0.000 {method 'acquire' of '_thread.lock' objects} 3 0.009 0.003 0.009 0.003 {built-in method posix.waitpid} 6 0.009 0.002 0.009 0.002 {method 'masked_fill_' of 'torch._C._TensorBase' objects} 6 0.009 0.001 0.009 0.001 {built-in method torch._C._nn.gelu} 37 0.006 0.000 0.235 0.006 functional.py:1655(linear) 94/1 0.005 0.000 0.325 0.325 module.py:710(_call_impl) ... ``` ## Expected behavior Most time is spent on inference such as the method `matmul`. "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
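In profiles like the one in this issue, time attributed to `BufferedWriter.write` is often I/O from the notebook/console (including the profiler's own report) rather than model compute. One way to separate the two, sketched with only the standard library — `run_pipeline` is a hypothetical stand-in for the `english(...)` pipeline call, not the actual transformers code:

```python
import cProfile
import io
import pstats

def run_pipeline():
    # Hypothetical stand-in for the question-answering pipeline call;
    # it just burns CPU so the profile has something to report.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
run_pipeline()
profiler.disable()

# Render the stats into a string instead of stdout, so the report's own
# BufferedWriter.write calls cannot pollute a subsequent measurement.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("tottime").print_stats(5)
report = buf.getvalue()
print(report)
```

If the dominant entries are then `matmul`-style calls, the original `write` time was measurement overhead, not inference cost.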
"https://api.github.com/repos/huggingface/transformers/issues/9619"
" TITLE Train robertatokenizer failed due to pad token not found COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Windows - Python version: 3.8.5 - PyTorch version (GPU?): 1.7? 3080 RTX - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help tokenizers: @mfuntowicz Trainer: @sgugger --> ## Information Model I am using (Bert, XLNet ...): Roberta The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: My first step is to download some of the esperberto data from the sites mentioned in this tutorial https://huggingface.co/blog/how-to-train Few issues 1. Regarding the tutorial, they make you train a ByteLevelBPETokenizer but this is never used in the training code. The training code isn't even in the tutorial 👎 2. I came across this https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb It looks good except whatever ByteLevelBPETokenizer is never used in the training process so I tried to find a way to use it. I tried two approaches both result in the same outcome. I tried using the BPE and not ByteLevelBPETokenizer. I have no clue what is the best practice or why neither of them are working. This is my code to do the tokenizer. You can uncomment whatever ``` #! 
pip install tokenizers #%% Import Statements from pathlib import Path from transformers import RobertaTokenizer from tokenizers import Tokenizer from tokenizers.trainers import BpeTrainer from tokenizers.models import BPE from tokenizers.pre_tokenizers import Whitespace # from tokenizers import ByteLevelBPETokenizer # from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing import os.path as osp #%% Train Tokenizer if (not osp.exists('models/BPEtokenizer.json')): paths = [str(x) for x in Path("./eo_data/").glob("**/*.txt")] # Initialize a tokenizer # tokenizer = ByteLevelBPETokenizer() # # Customize training # tokenizer.train(files=paths, vocab_size=52000, min_frequency=3, special_tokens=[ # "<s>", # "<pad>", # "</s>", # "<unk>", # "<mask>" # ]) tokenizer = Tokenizer(BPE()) trainer = BpeTrainer(vocab_size=52000,min_frequency=3, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer.pre_tokenizer = Whitespace() tokenizer.train(trainer, paths) # Save files to disk tokenizer.save('models/BPEtokenizer.json') #%% Tokenize tokenizer = Tokenizer.from_file('models/BPEtokenizer.json') # tokenizer._tokenizer.post_processor = BertProcessing( # ("</s>", tokenizer.token_to_id("</s>")), # ("<s>", tokenizer.token_to_id("<s>")), # ) # tokenizer.enable_truncation(max_length=512) output = tokenizer.encode("Mi estas Julien.😁") print(output.tokens) print(output.ids) # Encoding(num_tokens=7, ...) 
# tokens: ['<s>', 'Mi', 'Ġestas', 'ĠJuli', 'en', '.', '</s>'] ``` This is my code to do training ``` import torch from transformers import RobertaConfig from transformers import RobertaTokenizerFast from transformers import RobertaForMaskedLM from transformers import LineByLineTextDataset from transformers import DataCollatorForLanguageModeling from pathlib import Path from transformers import DataCollatorForLanguageModeling from tokenizers import ByteLevelBPETokenizer from transformers import PreTrainedTokenizerFast from tokenizers import Tokenizer # Tutorial from https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=BzMqR-dzF4Ro device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512) # tokenizer = ByteLevelBPETokenizer("models/esperberto-vocab.json","models/esperberto-merges.txt") # ? This actually doesn't work. You will get an error saying tokenizer is not callable. 
tokenizer = PreTrainedTokenizerFast(tokenizer_file='models/BPEtokenizer.json') # tokenizer = Tokenizer.from_file('models/BPEtokenizer.json') mlm=False config = RobertaConfig( vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) # Training from scratch model = RobertaForMaskedLM(config=config) model.num_parameters() paths = [str(x) for x in Path("eo_data/").glob("**/*.txt")] # Build the dataset dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="eo_data/shuff-orig/eo/eo.txt",block_size=128) # mlm = mask modeling language data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=mlm, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="models/EsperBERTo-small", overwrite_output_dir=True, num_train_epochs=1000, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset) trainer.train() ``` I keep getting the error `Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.` Also I couldn't set mlm=True either. Do you have any good tutorials on how to train your own set of data using Roberta? If anyone wants to pull my files you can grab them and the dataset here https://1drv.ms/u/s!Apa0_j-AivqTpqNz7r0M3NNhCm2W_A?e=BMLvqv If you guys resolve this then I'll update and post a public google colab "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
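The padding error in the issue above comes down to this: a `PreTrainedTokenizerFast` built from a bare `tokenizer.json` has no `pad_token` registered, so the collator cannot pad batches until one is added (e.g. via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`). A framework-free sketch of the failure mode and the fix — `ToyTokenizer` and its methods are illustrative, not the transformers internals:

```python
class ToyTokenizer:
    def __init__(self, vocab, pad_token=None):
        self.vocab = dict(vocab)
        self.pad_token = pad_token

    def add_special_tokens(self, mapping):
        # Mirrors tokenizer.add_special_tokens({'pad_token': '[PAD]'}):
        # register the attribute and give the token a fresh id if unseen.
        for attr, token in mapping.items():
            setattr(self, attr, token)
            if token not in self.vocab:
                self.vocab[token] = len(self.vocab)

    def pad(self, batch):
        if self.pad_token is None:
            raise ValueError(
                "Asking to pad but the tokenizer does not have a padding token."
            )
        pad_id = self.vocab[self.pad_token]
        width = max(len(ids) for ids in batch)
        return [ids + [pad_id] * (width - len(ids)) for ids in batch]

tok = ToyTokenizer({"hello": 0, "world": 1})
try:
    tok.pad([[0], [0, 1]])
except ValueError as err:
    print(err)  # same failure mode as the traceback in the issue

tok.add_special_tokens({"pad_token": "[PAD]"})
padded = tok.pad([[0], [0, 1]])
print(padded)
```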
"https://api.github.com/repos/huggingface/transformers/issues/8576"
" TITLE run_pl_glue.py token_type_id error on fresh install COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY If you try to run the run_glue.py example with e.g. roberta from a fresh install of the library, it errors out with the following error: ``` Traceback (most recent call last): File "examples/text-classification/run_pl_glue.py", line 228, in <module> main() File "examples/text-classification/run_pl_glue.py", line 218, in main trainer = generic_train(model, args) File "/home/ejp416/complexity/examples/lightning_base.py", line 400, in generic_train trainer.fit(model) File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn result = fn(self, *args, **kwargs) File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1072, in fit model = self.accelerator_backend.setup(model) File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 34, in setup self.trainer.call_setup_hook(model) File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1444, in call_setup_hook model.setup(stage_name) File "/home/ejp416/complexity/examples/lightning_base.py", line 175, in setup self.train_loader = self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True) File "examples/text-classification/run_pl_glue.py", line 98, in get_dataloader all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) TypeError: an integer is required (got type NoneType) ``` To reproduce, run e.g. 
``` python examples/text-classification/run_pl_glue.py --model_name_or_path roberta-base --output_dir ./blah --task mnli --do_train --data_dir ./glue_data/MNLI --max_seq_length 512 --max_grad_norm inf --adam_epsilon 1e-6 --weight_decay 0.1 --num_train_epochs 2 --train_batch_size 2 --eval_batch_size 4 --learning_rate 1e-5 --seed 12 --gradient_accumulation_steps 8 --gpus 1 ``` The reason is that roberta does not have segment ids so token_type_ids is set to null in the data loader, causing torch.tensor to freak out. There's probably a more elegant long-term solution for this, but it's easy to fix by just setting it to 0 instead of null for those models. This issue has come up before in other scripts: - https://github.com/huggingface/transformers/pull/3801 - https://github.com/huggingface/transformers/issues/3810"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
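The workaround described in the issue above — defaulting missing segment ids to zeros before building the tensor — can be sketched with plain lists (the feature dicts here are hypothetical stand-ins for the script's `InputFeatures` objects):

```python
def collect_token_type_ids(features, seq_len):
    # RoBERTa-style features carry token_type_ids=None; substitute an
    # all-zero segment-id row so the tensor constructor never sees None.
    return [
        f["token_type_ids"] if f["token_type_ids"] is not None else [0] * seq_len
        for f in features
    ]

features = [
    {"token_type_ids": [0, 0, 1, 1]},  # BERT-like feature with segment ids
    {"token_type_ids": None},          # RoBERTa-like feature without them
]
rows = collect_token_type_ids(features, seq_len=4)
print(rows)  # [[0, 0, 1, 1], [0, 0, 0, 0]]
```

With every row materialized as integers, `torch.tensor(rows, dtype=torch.long)` no longer raises the `NoneType` error.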
"https://api.github.com/repos/huggingface/transformers/issues/9236"
" TITLE mBART finetuned on XSUM COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: 4.1.0.dev0 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0.dev20201216+cu110 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: distributed ### Who can help mBART: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information Model I am using: mBART The problem arises when using: * [ X ] the official example scripts: I used the official seq2seq training example here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq * [ X ] my own modified scripts: (give details below) my training script is as follows (no changes to finetune_trainer.py): ```shell python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \ --data_dir "./xsum" \ --output_dir "./my_models" \ --overwrite_output_dir \ --model_name_or_path "facebook/mbart-large-cc25" \ --fp16 \ --freeze_encoder \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --learning_rate=3e-5 \ --do_train --do_eval --do_predict \ --evaluation_strategy steps \ --predict_with_generate \ --n_val 1000 \ --max_target_length=60 \ --val_max_target_length=60 \ --test_max_target_length=100 \ "$@" ``` The tasks I am working on is: * [ X ] XSUM ## To reproduce Steps to reproduce the behavior: 1. follow https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md 2. launch training script with my modifications (batch_size, freeze_encoder, max_target_length ..) 3. 
inference on two texts (french and english) using the following code: ```python def sum_mbart_xsum(text): print("---------------------------------------------------------------------------------") print(" MBART large xsum ") print("---------------------------------------------------------------------------------") tokenizer = MBartTokenizer.from_pretrained("/home/mohamed/Desktop/Summarization/mbart-xsum") model = MBartForConditionalGeneration.from_pretrained("/home/mohamed/Desktop/Summarization/mbart-xsum") article_input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt', max_length=1024, truncation=True)[ 'input_ids'] summary_ids = model.generate(article_input_ids, num_beams=6, length_penalty=1.0, max_length=142, no_repeat_ngram_size=3) summary_txt = tokenizer.decode(summary_ids.squeeze(), skip_special_tokens=True) return summary_txt ``` ## Results 1- eval/test results: ```jsonc { "epoch": 3.0, "test_gen_len": 28.1, "test_loss": 1.7692, "test_n_ojbs": -1, "test_rouge1": 32.7618, "test_rouge2": 12.022, "test_rougeL": 25.6512, "test_rougeLsum": 25.6499, "test_runtime": 2778.8939, "test_samples_per_second": -0.0, "train_n_ojbs": -1, "train_runtime": 94633.1507, "train_samples_per_second": -0.0, "val_gen_len": 28.0, "val_loss": 1.7993, "val_n_ojbs": 1000, "val_rouge1": 32.9862, "val_rouge2": 11.528, "val_rougeL": 25.6517, "val_rougeLsum": 25.7055, "val_runtime": 267.0092, "val_samples_per_second": 3.745 } ``` 2- Inference: * Out of context summarizations (gives sth related to training data) --> (sth wrong with my finetuning configuration or inference function?) * For French texts, results are in English and poor summary --> (why the language changes?) ## Expected behavior My main objective of finetuning mBART on Xsum is to evaluate the multilingual level of mBART. Basically answering the following question: Should I finetune on a dataset with multiple languages to be able to summarize in multiple languages? 
Or is the multilingual capability preserved with mBART, so that fine-tuning on English (XSUM) only is sufficient? Current problems: 1- (inference results and questions above) 2- Why does `facebook/bart-large-xsum` understand French (even though BART was trained on English)? "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/16496"
" TITLE Support reduce_bucket_size="auto" for deepspeed stages <3 COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16495 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 "
[ 56 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "DeepSpeed" ]
"https://api.github.com/repos/huggingface/transformers/issues/7909"
" TITLE pegasus/cnn_dm 12-2 distillation performing poorly COMMENTS 8 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @sshleifer ## Information I am trying to distil the pegasus model to reduce the runtime and memory requirements. I am following **No Teacher Distillation** approach. However, the model generates bad quality text. The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name): CNN * [ ] my own task or dataset: (give details below) ## To reproduce I have trained the model using below command: **Download data:** wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz tar -xzvf cnn_dm_v2.tgz # empty lines removed mv cnn_cln cnn_dm **Command to train:** python finetune.py --learning_rate=3e-5 --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 6 --freeze_encoder --freeze_embeds --data_dir ./cnn_dm/ --max_target_length 142 --val_max_target_length=142 --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --model_name_or_path sshleifer/student_pegasus_cnn_12_2 --tokenizer_name google/pegasus-cnn_dailymail --warmup_steps 500 --output_dir distilpegasus-cnn-12-2 --gpus 1 --adafactor --num_workers=0 --fp16_opt_level=O1 --fp16 **Inference code:** ``` from transformers import PegasusForConditionalGeneration, PegasusTokenizer, PegasusConfig import torch 
PEGASUS_MODEL = '/home/ubuntu/finetune/transformers/examples/seq2seq/distilpegasus-cnn-12-2/best_tfmr' PEGASUS_TOKENIZER = 'google/pegasus-cnn_dailymail' class PegasusSummarizer: def __init__(self): self.torch_device = 'cpu' self.tokenizer = PegasusTokenizer.from_pretrained(PEGASUS_TOKENIZER) self.model = PegasusForConditionalGeneration.from_pretrained(PEGASUS_MODEL).to(self.torch_device) def summarize(self, text): src_text = text batch = self.tokenizer.prepare_seq2seq_batch([src_text],truncation=True,padding='longest').to(self.torch_device) translated = self.model.generate(**batch) tgt_text = self.tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text summarizer = PegasusSummarizer() print(summarizer.summarize('''(CNN)For the first time in eight years, a TV legend returned to doing what he does best. Contestants told to "come on down!" on the April 1 edition of "The Price Is Right" encountered not host Drew Carey but another familiar face in charge of the proceedings. Instead, there was Bob Barker, who hosted the TV game show for 35 years before stepping down in 2007. Looking spry at 91, Barker handled the first price-guessing game of the show, the classic "Lucky Seven," before turning hosting duties over to Carey, who finished up. 
Despite being away from the show for most of the past eight years, Barker didn't seem to miss a beat.''')) ``` **Output:** ['"It\'s time for the first time in a five-year anniversary of the show.'] **Output of google/pegasus-cnn_dailymail model**:['Barker hosted "The Price Is Right" for 35 years.<n>He stepped down in 2007.'] test_results.txt output: src_pad_frac = tensor(0., device='cuda:0') src_pad_tok = tensor(0, device='cuda:0') step_count = 26 test_avg_gen_len = 48.63716275021758 test_avg_gen_time = 1.3503953615824382 test_avg_loss = 3.6937525272369385 test_avg_rouge1 = 19.983542428198433 test_avg_rouge2 = 4.130034786771105 test_avg_rougeL = 14.352700217580503 test_avg_rougeLsum = 18.460456248912102 test_loss = tensor(3.6938, device='cuda:0') test_rouge2 = tensor(4.1300, device='cuda:0') tpb = tensor(511, device='cuda:0') val_avg_gen_len = 50.144 val_avg_gen_time = 1.513235685825348 val_avg_loss = 3.77506422996521 val_avg_rouge1 = 16.9548154 val_avg_rouge2 = 3.1666046 val_avg_rougeL = 12.980990400000001 val_avg_rougeLsum = 15.404284 val_loss = tensor(3.7751, device='cuda:0') val_rouge2 = tensor(3.1666, device='cuda:0') <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect the output to be much cleaner and higher Rouge score. Any help in this regard would be of great help. I am trying to retrain the model by removing **--freeze_encoder**. <!-- A clear and concise description of what you would expect to happen. --> "
[ 42 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "pegasus" ]
"https://api.github.com/repos/huggingface/transformers/issues/8221"
" TITLE [GPT2] Loss NaN after some time with FP16 COMMENTS 5 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Linux-4.4.0-176-generic-x86_64-with-glibc2.17 - Python version: 3.8.6 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes, HF datasets ### Who can help @LysandreJik @sgugger ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: * [x] my own modified scripts: examples/language_modeling/run_language_modeling.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset ## To reproduce Steps to reproduce the behavior: 1. Run with ```--fp16 --n_ctx 2048``` 2. ``` warnings.warn('Was asked to gather along dimension 0, but all ' [W python_anomaly_mode.cpp:104] Warning: Error detected in SoftmaxBackward. 
Traceback of forward call that caused the error: File "/usr/lib/python3.8/threading.py", line 890, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/usr/lib/python3.8/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 765, in forward transformer_outputs = self.transformer( File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 645, in forward outputs = block( File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 285, in forward attn_outputs = self.attn( File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 235, in forward attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 176, in _attn w = nn.Softmax(dim=-1)(w) File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File 
"/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1198, in forward return F.softmax(input, self.dim, _stacklevel=5) File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1512, in softmax ret = input.softmax(dim) (function _print_stack) Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 349, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 313, in main trainer.train(model_path=model_path) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 756, in train tr_loss += self.training_step(model, inputs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1065, in training_step self.scaler.scale(loss).backward() File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward Variable._execution_engine.run_backward( RuntimeError: Function 'SoftmaxBackward' returned nan values in its 0th output. 0%| | 0/19506024 [00:25<?, ?it/s] ``` ## Expected behavior Not print Nan "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
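A common mitigation for fp16 NaNs of the kind reported above — independent of the root cause in the softmax backward — is to detect non-finite losses and skip (or halt on) those steps. A framework-agnostic sketch; the loss values and loop are purely illustrative, not the Trainer's actual logic:

```python
import math

def safe_training_steps(losses):
    # Skip any step whose loss is NaN/inf instead of propagating it into
    # the gradients, similar in spirit to how GradScaler drops overflowed
    # steps in mixed-precision training.
    accepted, skipped = [], 0
    for loss in losses:
        if not math.isfinite(loss):
            skipped += 1
            continue
        accepted.append(loss)
    return accepted, skipped

losses = [2.31, 1.87, float("nan"), 1.52, float("inf")]
accepted, skipped = safe_training_steps(losses)
print(accepted, skipped)  # [2.31, 1.87, 1.52] 2
```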
"https://api.github.com/repos/huggingface/transformers/issues/8319"
" TITLE Does Tokenizer provide parameters to split the number? COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I want to split "2004年06月25日" into [2, 0, 0, 5, 年, 0, 6, 月, 2, 5,日],not [2005, 年, 06, 月, 25, 日]. How can i do it the easiest? now I use these API : BertTokenizer BertTokenizerFast tokenizer.tokenize("2004年06月25日") Thanks. <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
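To my knowledge there is no single `BertTokenizer` flag for this (its `BasicTokenizer` splits around CJK characters and punctuation, not individual digits), so the simplest route is to pre-split digits before handing the text to the tokenizer. A minimal standard-library sketch:

```python
import re

def split_digits(text):
    # Insert spaces around every digit so each one becomes its own token
    # for the downstream tokenizer.
    spaced = re.sub(r"(\d)", r" \1 ", text)
    return spaced.split()

tokens = split_digits("2004年06月25日")
print(tokens)  # ['2', '0', '0', '4', '年', '0', '6', '月', '2', '5', '日']
```

The resulting pre-tokenized pieces can then be passed to the tokenizer with `is_split_into_words=True` (or joined back with spaces), so each digit is encoded separately.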
"https://api.github.com/repos/huggingface/transformers/issues/11600"
" TITLE [consistent use] `F` vs. `nn.functional` COMMENTS 8 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY We use 3 different ways of doing the same: 1. `F.foo()` 2. `nn.functional.foo()` 3. `torch.nn.functional.foo()` and these could also be imported: 4. `from torch.nn.functional import foo; foo()` Asking others it appears that `F` is not quite liked, so it's 2, 3 or 4. 2 and 3 often lead to longer lines which autoformatter wraps, leading to 3 lines of code instead of 1 and which gives less readable code. So it seems that option 4 might be the best outcome. For 2, the global update would be easy: ``` find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|from torch.nn import functional as F||' {} \; find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|import torch.nn.functional as F||' {} \; find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's| F\.| nn.functional.|g' {} \; make fixup ``` For 4, it will take much more work, but can be semi-automated. @LysandreJik, @sgugger, @patrickvonplaten "
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
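The perl pipeline from the issue above can be mirrored in Python for a dry run before touching the tree. A sketch; the regexes deliberately match the three substitutions in the issue, and the sample source is illustrative:

```python
import re

REWRITES = [
    (re.compile(r"from torch\.nn import functional as F\n?"), ""),
    (re.compile(r"import torch\.nn\.functional as F\n?"), ""),
    # Only rewrite F. when preceded by whitespace, like the perl ' F\.' pattern.
    (re.compile(r"(?<=\s)F\."), "nn.functional."),
]

def rewrite_source(src):
    for pattern, repl in REWRITES:
        src = pattern.sub(repl, src)
    return src

before = "import torch.nn.functional as F\nout = F.softmax(x, dim=-1)\n"
after = rewrite_source(before)
print(after)  # "out = nn.functional.softmax(x, dim=-1)\n"
```

Running this over file contents (and diffing the result) previews exactly what the `find`/`perl` commands would change.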
"https://api.github.com/repos/huggingface/transformers/issues/8185"
" TITLE TensorFlow Longformer model as a saved model with attention outputs COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY The TensorFlow implementation of the Longformer model has an issue with using the `tf.saved_model.save` API alongside the `output_attentions=True` configuration attribute. The test is skipped currently due to this issue."
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/8375"
" TITLE RobertaTokenizerFast is around 10 times slower than BertTokenizerFast #510 COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-5.4.38-t2.el7.x86_64-x86_64-with-centos-7.7.1908-Core - Python version: 3.7.9 - PyTorch version (GPU?): 1.4.0+cu100 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @mfuntowicz <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Also opened this issue on the tokenizers repo: https://github.com/huggingface/tokenizers/issues/510 `RobertaTokenizerFast` with 60k vocab size is around 50 times slower than the `BertTokenizerFast` for `transformers==3.3.1`. 
This is really slowing down the processing time for training the language model. Is there a suggested fix for this? Model I am using (Bert, XLNet ...): RobertaTokenizerFast The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python In [37]: from transformers import BertTokenizerFast, RobertaTokenizerFast ...: In [38]: bert_tokenizer = BertTokenizerFast.from_pretrained(bert_tokenizer_dir, max_len=512, truncation=True) ...: roberta_tokenizer = RobertaTokenizerFast.from_pretrained(roberta_tokenizer_dir, max_len=512, truncation=True) ...: In [39]: line = "I am trying out this code for testing tokenizers and it is super good. Huge victory. But the difference in speed between BERT tokenizer and Robert ...: a tokenizers is quite slow." In [40]: bert_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) Out[40]: {'input_ids': [2, 51, 1881, 21557, 3212, 2937, 3590, 1945, 40605, 53576, 23981, 1013, 1985, 2179, 1943, 3863, 3841, 20, 38092, 65353, 20, 3180, 1931, 48800, 1822, 12679, 26777, 18732, 53576, 23981, 1985, 20839, 1016, 53576, 23981, 1013, 1943, 66009, 16390, 20, 3], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} In [41]: roberta_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) Out[41]: {'input_ids': [0, 47, 979, 28120, 3145, 2924, 6628, 1046, 6747, 458, 618, 2063, 781, 719, 994, 1527, 891, 6074, 6464, 20, 550, 11852, 21228, 3085, 20, 16339, 898, 20420, 40977, 494, 24738, 34187, 444, 13841, 618, 2063, 27250, 994, 12086, 16449, 618, 2063, 781, 719, 891, 517, 1492, 
23819, 20, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} In [42]: %%timeit -n 10000 ...: tokens = roberta_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) ...: 10000 loops, best of 5: 9.09 ms per loop In [43]: %%timeit -n 10000 ...: tokens = bert_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) ...: 10000 loops, best of 5: 181 µs per loop In [44]: roberta_tokenizer.vocab_size Out[44]: 60000 In [45]: bert_tokenizer.vocab_size Out[45]: 100000 In [47]: import transformers In [48]: import tokenizers In [49]: transformers.__version__ Out[49]: '3.3.1' In [50]: tokenizers.__version__ Out[50]: '0.8.1.rc2' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect to not be much difference here. Why is this the case? <!-- A clear and concise description of what you would expect to happen. --> "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/16955"
" TITLE config.json not found! COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.0-1063-azure-x86_64-with-glibc2.10 - Python version: 3.8.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` ### Who can help? @sgugger I am training a NER model following tutorial: ```python from transformers import TrainingArguments args = TrainingArguments( "saved_models_bert-finetuned-ner-100examples-with-aug", learning_rate=2e-5, num_train_epochs=100, weight_decay=0.01, per_device_train_batch_size = 32, per_device_eval_batch_size = 32, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end = True, metric_for_best_model = 'f1' ) from transformers import Trainer trainer = Trainer( model=model, args=args, train_dataset=new_training_dataset, eval_dataset=tokenized_datasets["validation"].select(range(100)), data_collator=data_collator, compute_metrics=compute_metrics, tokenizer=tokenizer, ) trainer.train() ``` Then I got this error: ```shell --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Input In [28], in <cell line: 14>() 1 from transformers import Trainer 3 trainer = Trainer( 4 model=model, 5 args=args, (...) 
11 tokenizer=tokenizer, 12 ) ---> 14 trainer.train() File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1512, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1509 self.control.should_training_stop = True 1511 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control) -> 1512 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1514 if DebugOption.TPU_METRICS_DEBUG in self.args.debug: 1515 if is_torch_tpu_available(): 1516 # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1628, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 1625 self._report_to_hp_search(trial, epoch, metrics) 1627 if self.control.should_save: -> 1628 self._save_checkpoint(model, trial, metrics=metrics) 1629 self.control = self.callback_handler.on_save(self.args, self.state, self.control) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1700, in Trainer._save_checkpoint(self, model, trial, metrics) 1697 self.store_flos() 1699 output_dir = os.path.join(run_dir, checkpoint_folder) -> 1700 self.save_model(output_dir, _internal_call=True) 1701 if self.deepspeed: 1702 # under zero3 model file itself doesn't get saved since it's bogus! Unless deepspeed 1703 # config `stage3_gather_16bit_weights_on_model_save` is True 1704 self.deepspeed.save_checkpoint(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2128, in Trainer.save_model(self, output_dir, _internal_call) 2125 self.deepspeed.save_checkpoint(output_dir) 2127 elif self.args.should_save: -> 2128 self._save(output_dir) 2130 # Push to the Hub when `save_model` is called by the user. 
2131 if self.args.push_to_hub and not _internal_call: File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2180, in Trainer._save(self, output_dir, state_dict) 2178 torch.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME)) 2179 else: -> 2180 self.model.save_pretrained(output_dir, state_dict=state_dict) 2181 if self.tokenizer is not None: 2182 self.tokenizer.save_pretrained(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py:1352, in PreTrainedModel.save_pretrained(self, save_directory, save_config, state_dict, save_function, push_to_hub, max_shard_size, **kwargs) 1350 # Save the config 1351 if save_config: -> 1352 model_to_save.config.save_pretrained(save_directory) 1354 # Save the model 1355 if state_dict is None: File /opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:440, in PretrainedConfig.save_pretrained(self, save_directory, push_to_hub, **kwargs) 437 # If we save using the predefined names, we can load using `from_pretrained` 438 output_config_file = os.path.join(save_directory, CONFIG_NAME) --> 440 self.to_json_file(output_config_file, use_diff=True) 441 logger.info(f"Configuration saved in {output_config_file}") 443 if push_to_hub: File /opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:805, in PretrainedConfig.to_json_file(self, json_file_path, use_diff) 794 def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True): 795 """ 796 Save this instance to a JSON file. 797 (...) 803 is serialized to JSON file. 804 """ --> 805 with open(json_file_path, "w", encoding="utf-8") as writer: 806 writer.write(self.to_json_string(use_diff=use_diff)) FileNotFoundError: [Errno 2] No such file or directory: 'saved_models_bert-finetuned-ner-100examples-with-aug/checkpoint-6/config.json' ``` This is so wired! From my understanding, the config.json file should be written, so such error shouldn't occur. 
### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am not sure if this can be reproduced on another machine. By the way, I am using an A100. ### Expected behavior ```shell A normal training... ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/9052"
" TITLE Add caching mechanism to BERT/RoBERTa/GPT2 for Seq2Seq accelerated generation COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> All Seq2Seq models that make use of `generate()` usually allow `past_key_values` to be cached for both the cross-attention layer and the uni-directional decoder self-attention layer. For this feature request we should implement the feature for Bert2Bert, and, Roberta2Roberta. We should implement this feature analogs to how it is implemented in Bart. This means that we should - 1) add the caching mechanism in the AttentionLayer as shown here for Bart: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L234 - 2) pass the `past_key_values` as tuple through the layers, making sure that it's optional for the cross-attention layer: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L433 - 3) Adapt the mask correspondingly. 
The easiest option is probably to just copy how it's done in Bart and remove the old attention_masking logic (making sure that all tests pass): https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L91 and https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L76 - 4) Add a test for `BertLMHeadModel` and `RobertaForCausalLM` that verifies that the caching mechanism works as expected: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/tests/test_modeling_bart.py#L287 - 5) "Turn on" caching for Encoder-Decoder (this should be the last step and this might cause some other problems - happy to help here!): https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L427 This might be a good issue for you @patil-suraj if interested :-) ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> "
[ 43 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Good Second Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/7307"
" TITLE Cuda OOM training gpt2-xl with Trainer in multi-GPUs COMMENTS 5 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help I am currently trying to finetune the gpt2-xl. I have 2 tesla T4 GPUs. However, I get the CUDA OOM error... when I look at the use of the gpus I see that the first one is full, but the second one still has enough space. Here is my code: ``` from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') from transformers import GPT2LMHeadModel, Trainer, TrainingArguments model = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device) from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer, TrainingArguments, Trainer tokenizer = AutoTokenizer.from_pretrained("gpt2-xl") train_dataset = TextDataset( tokenizer=tokenizer, file_path='dataset_training.txt', block_size=128) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False, ) training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=2, # total # of training epochs per_device_train_batch_size=1, # batch size per device during training warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, prediction_loss_only=True, ) trainer.train() ``` I get "CUDA out of memory. 
Tried to allocate 40.00 MiB (GPU 0; 14.73 GiB total capacity; 13.61 GiB already allocated; 31.88 MiB free; 13.98 GiB reserved in total by PyTorch)" When I run nvidia-smi I see: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+================ | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 75C P0 34W / 70W | 15047MiB / 15079MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla T4 Off | 00000000:00:05.0 Off | 0 | | N/A 56C P0 29W / 70W | 9479MiB / 15079MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================== | 0 1756 C /opt/conda/bin/python 15037MiB | | 1 1756 C /opt/conda/bin/python 9469MiB | +-----------------------------------------------------------------------------+ ``` **My question is:** Am I making a mistake? or how can a large model be trained with multiple GPUs? "
[ 41, 55, 31 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ]
[ "wontfix", "Distributed Training / Models", "gpt2" ]
"https://api.github.com/repos/huggingface/transformers/issues/7853"
" TITLE SequenceSummary class in modeling_utils.py COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Hello, I have a question about the documentation strings provided for the forward function of the SequenceSummary class from modeling_utils.py: https://github.com/huggingface/transformers/blob/dc552b9b7025ea9c38717f30ad3d69c2a972049d/src/transformers/modeling_utils.py#L1484 So when `cls_index` is not specified as the argument in SequenceSummary() statement, is the last token of the sequence used for the classification task? The entire sentence for the description is somewhat awkward... Thanks,"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/7620"
" TITLE Downloading DPR model ('facebook/dpr-ctx_encoder-single-nq-base') COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!--Hi, I want to use model build on the natural questions. I have to question: 1.. I see an example of the above-mentioned model. here it is ``` from transformers import DPRReader, DPRReaderTokenizer tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base') model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True) encoded_inputs = tokenizer( questions=["What is love ?"], titles=["Haddaway"], texts=["'What Is Love' is a song recorded by the artist Haddaway"], return_tensors='pt' ) outputs = model(**encoded_inputs) start_logits = outputs.stat_logits end_logits = outputs.end_logits relevance_logits = outputs.relevance_logits ``` So I want to run the command which gives me the answers here. It gives me just the start and end position of answer. 2.. Can I use my context (my document) where I want to search model the answer? is it possible here? Thanks in advance--> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/8321"
" TITLE tensorboard.compat.tensorflow_stub.errors.AlreadyExistsError: Directory already exists COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Hi I am running finetune_trainer.py on cloud with TPUs, here is the error, I appreciate your help thanks ```json { "textPayload": "Traceback (most recent call last):\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 330, in _mp_start_fn\n _start_fn(index, pf_cfg, fn, args)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 324, in _start_fn\n fn(gindex, *args)\n File \"/workdir/seq2seq/finetune_trainer.py\", line 303, in _mp_fn\n app.run(main, flags_parser=parse_flags)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\", line 300, in run\n _run_main(main, args)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\", line 251, in _run_main\n sys.exit(main(argv))\n File \"/workdir/seq2seq/finetune_trainer.py\", line 246, in main\n data_args=data_args,\n File \"/workdir/seq2seq/seq2seq_trainer.py\", line 37, in __init__\n super().__init__(*args, **kwargs)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 318, in __init__\n self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_callback.py\", line 325, in on_init_end\n return self.call_event(\"on_init_end\", args, state, control)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_callback.py\", line 376, in call_event\n **kwargs,\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/integrations.py\", line 213, in on_init_end\n self.tb_writer = SummaryWriter(log_dir=args.logging_dir)\n File 
\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py\", line 221, in __init__\n self._get_file_writer()\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py\", line 252, in _get_file_writer\n self.flush_secs, self.filename_suffix)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py\", line 62, in __init__\n log_dir, max_queue, flush_secs, filename_suffix)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 77, in __init__\n tf.io.gfile.makedirs(logdir)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/io/gfile.py\", line 673, in makedirs\n return get_filesystem(path).makedirs(path)\ntensorboard.compat.tensorflow_stub.errors.AlreadyExistsError: Directory already exists\n", "insertId": "5rl5rhwn8m5sobp9q", "resource": { "type": "k8s_container", "labels": { "container_name": "seq2seq", "location": "europe-west4-a", "project_id": "try-ideas-for-rmi", "namespace_name": "ruse-xgcp", "cluster_name": "xcloud-v3-donut-europe-west4-a", "pod_name": "20201105.df.e2753.0-7mxdc" } }, "timestamp": "2020-11-05T11:28:19.975413564Z", "severity": "ERROR", "labels": { "k8s-pod/job-name": "20201105.df.e2753.0", "k8s-pod/controller-uid": "4974cfc1-b3df-4882-9fd1-9095f4a944d9", "k8s-pod/jobowner": "ruse-xgcp", "k8s-pod/app": "xcloud", "k8s-pod/serviceName": "xc-20201105-df-e2753-0" }, "logName": "projects/try-ideas-for-rmi/logs/stderr", "receiveTimestamp": "2020-11-05T11:28:23.810886436Z" } ```"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/15763"
" TITLE Add TUNet COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 2 rocket: 0 eyes: 0 BODY # 🌟 New model addition The TUNet model for audio superresolution ## Model description [TUNet: A Block-online Bandwidth Extension Model based on Transformers and Self-supervised Pretraining](https://arxiv.org/abs/2110.13492v3) is a paper by Viet-Anh Nguyen, Anh H. T. Nguyen, and Andy W. H. Khong, introducing a model for audio superresolution based on transformers. Audio superresolution allows for the upsampling of audio with minimal loss of quality. This is very useful for ASR tasks that require audio to be resampled during preprocessing, which has a big impact on transcriptions depending on the native sample rate. ## Open source status * [x] the model implementation is available: [The official repo](https://github.com/NXTProduct/TUNet) * [x] the model weights are available: [ONNX weights](https://github.com/NXTProduct/TUNet/tree/master/lightning_logs) * [x] who are the authors: Viet-Anh Nguyen, Anh H. T. Nguyen, and Andy W. H. Khong "
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/7723"
" TITLE T5: Finetuning the language modelling objective on a new dataset COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # ❓ Questions & Help I was wondering if there is a way I could fine tune T5 model on my own dataset. There are scripts for fine tuning help for GPT2, BERT and XLNET in language modelling examples, so was thinking if that could be extended to T5 as well? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/9235"
" TITLE run_mlm.py crashes when saving model checkpoint COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: 4.0.1 - Platform: Google Cloud - Python version: 3.6.10 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - Using GPU in script?: NO. Using TPU - Using distributed or parallel set-up in script?: YES ### Who can help @LysandreJik @mfuntowicz @sgugger ## Information I'm trying to train al Albert model from scratch, with a custom tokenizer, on Google Cloud TPUS. The problem arises when saving the model checkpoints, more specifically when trying to save the tokenizer. I'm using your example script run_mlm.py. The problem arises when using: * [ ] the official example scripts: run_mlm.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: Masked Language Modelling. ## To reproduce Steps to reproduce the behavior: 1. Run run_mlm.py with the following params: python transformers/examples/xla_spawn.py --num_cores 8 \ transformers/examples/language-modeling/run_mlm.py \ --model_type albert \ --train_file texts_train.txt \ --validation_file good_texts_valid.txt \ --output_dir modelo_prueba \ --tokenizer_name ./tokenizadores/definitivo \ --overwrite_output_dir \ --line_by_line \ --pad_to_max_len \ --do_train \ --do_eval \ --evaluation_strategy steps \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --learning_rate 1e-3 \ --max_steps 500 \ --save_steps 100 \ --save_total_limit 15 \ --overwrite_cache \ --max_seq_length 512 \ --eval_accumulation_steps 10 \ --logging_steps 100 \ --config_name ./config/albert-base-v2.json \ At step 100, the following error arises: ``` INFO|trainer.py:1141] 2020-12-21 15:46:34,157 >> Saving model checkpoint to modelo_prueba/checkpoint-100 [INFO|configuration_utils.py:281] 2020-12-21 15:46:34,158 >> Configuration saved in modelo_prueba/checkpoint-100/config.json [INFO|modeling_utils.py:741] 2020-12-21 15:46:34,556 >> Model 
weights saved in modelo_prueba/checkpoint-100/pytorch_model.bin Exception in device=TPU:0: expected str, bytes or os.PathLike object, not NoneType Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm.py", line 405, in _mp_fn main() File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm.py", line 379, in main trainer.train(model_path=model_path) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 777, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 848, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 869, in _save_checkpoint self.save_model(output_dir) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 1135, in save_model self._save_tpu(output_dir) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 1157, in _save_tpu self.tokenizer.save_pretrained(output_dir) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1972, in save_pretrained filename_prefix=filename_prefix, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 524, in _save_pretrained vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix) File 
"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 252, in save_vocabulary if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/posixpath.py", line 378, in abspath path = os.fspath(path) TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Expected behavior The expected behavior is that the script doesn't crash. Moreover, it's completely unnecessary to save the tokenizer in trainer.py, as the tokenizer is already trained and doesn't need to be saved again. "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/7999"
" TITLE Add model cards for DynaBERT COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Add model cards for DynaBERT."
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/17185"
" TITLE Unable to retrieve layers from model in tensorflow COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell transformers version: 4.18.0 platform: Google Colab python version: 3.7.13 ``` ### Who can help? @Rocketknight1 I am training a Roberta-large model for a classification task and I am using pre-trained model to start with. But for my task I want to freeze the embedding layer and the first few encoding layers, so that I can fine-tune the attention weights of the last few encoding layers. But I cannot access the layers while using tensorflow. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from transformers import RobertaTokenizer, TFRobertaModel import tensorflow as tf model = TFRobertaModel.from_pretrained('roberta-large') model.get_layer(2) ### Expected behavior ```shell This should have returned a layer instance but rather throws error `ValueError: Was asked to retrieve layer at index 10 but model only has 1 layers.` ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/15947"
" TITLE Tranformers documentation translation to Spanish COMMENTS 22 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 12 rocket: 0 eyes: 0 BODY Hi! Let's bring the documentation to all the Spanish-speaking community :) Who would want to translate? **Please follow the instructions in the [Translating guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md)**. Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list. Some notes: - Please translate using an informal tone (imagine you are talking with a friend about `transformers` 🤗). For example, use `Tú` instead of `Usted`; or `colabora` instead of `colabore`. - Add your translations to the folder called `es` inside the [`source` folder](https://github.com/huggingface/transformers/tree/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source). - Once you're finished, open a pull request and tag this issue by including `#issue-number` in the description, where `issue-number` is the number of this issue. - 🙋 If you'd like others to help you with the translation, you can also post in our [forums](https://discuss.huggingface.co/) or tag [@espejelomar](https://twitter.com/espejelomar) on Twitter to gain some visibility. ### Get Started section - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.mdx). @Duedme - [x] [installation.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/installation.mdx). 
@lilianabs ### Tutorial section - [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/pipeline_tutorial.mdx) @FernandoLpz - [x] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) WIP @Duedme - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/preprocessing.mdx) WIP @yharyarias - [x] [training.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/training.mdx) @yharyarias - [x] [accelerate.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/accelerate.mdx) @Sangohe - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.mdx) WIP @Gerard-170 - [x] [multilingual.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/multilingual.mdx) @SimplyJuanjo ## How-to guides - [x] [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx "fast_tokenizers.mdx") @jloayza10 - [ ] [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx "create_a_model.mdx") WIP @ignacioct - [ ] [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx "custom_models.mdx") - [ ] [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx "run_scripts.mdx") - [ ] [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx "sagemaker.mdx") WIP @SimplyJuanjo - [ ] [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx "converting_tensorflow_models.mdx") - [ 
] [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx "serialization.mdx") - [ ] [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx "performance.mdx") - [ ] [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx "parallelism.mdx") - [ ] [benchmarks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/benchmarks.mdx "benchmarks.mdx") - [ ] [migration.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/migration.mdx "migration.mdx") ## FINE-TUNE FOR DOWNSTREAM TASKS - [ ] [sequence_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/sequence_classification.mdx "sequence_classification.mdx") - [ ] [token_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/token_classification.mdx "token_classification.mdx") - [ ] [question_answering.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/question_answering.mdx "question_answering.mdx") - [x] [language_modeling.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/language_modeling.mdx "language_modeling.mdx") @jQuinRivero - [ ] [translation.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/translation.mdx "translation.mdx") - [ ] [summarization.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/summarization.mdx "summarization.mdx") - [ ] 
[audio_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/audio_classification.mdx "audio_classification.mdx") - [ ] [asr.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/asr.mdx "asr.mdx") - [ ] [image_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/image_classification.mdx "image_classification.mdx") - [ ] [multiple_choice.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/multiple_choice.mdx "multiple_choice.mdx") - [ ] [troubleshooting.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/troubleshooting.mdx "troubleshooting.mdx") - [ ] [debugging.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/debugging.mdx "debugging.mdx") - [ ] [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx "community.mdx") - [ ] [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx "docs/source/en/add_new_model.mdx") - [ ] [add_new_pipeline.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/add_new_pipeline.mdx "add_new_pipeline.mdx") - [ ] [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx "testing.mdx") - [ ] [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx "pr_checks.mdx") ## CONCEPTUAL GUIDES - [x] 
[philosophy.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/philosophy.mdx "philosophy.mdx") [@jkmg](https://github.com/jkmg) - [ ] [glossary.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/glossary.mdx "glossary.mdx") - [ ] [pad_truncation.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/pad_truncation.mdx "docs/source/en/pad_truncation.mdx") - [ ] [bertology.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/bertology.mdx "bertology.mdx") - [ ] [perplexity.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/perplexity.mdx "perplexity.mdx") The files would be stored inside a new folder `source_es` inside [`transformers/docs`](https://github.com/huggingface/transformers/tree/master/docs). FYI @osanseviero @stevhliu @sgugger @mishig25 "
[ 23 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Documentation" ]
"https://api.github.com/repos/huggingface/transformers/issues/7629"
" TITLE Update model card - Fix arxiv link COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Minor changes: Add arxiv link + Layout improvement + fix typos # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->"
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/13216"
" TITLE Use DS callable API to allow hf_scheduler + ds_optimizer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 1 rocket: 0 eyes: 0 BODY This PR: - Used the (new) Callable api of deepspeed.initialize() to enable combining hf schedulers with deepspeed optimizers. - `create_scheduler` now has an optional `optimizer` arg - Updates relevant unit test. Blocking events: All unblocked now. - [x] depends on deepspeed PR [1316](https://github.com/microsoft/DeepSpeed/pull/1316). - [x] needs new deepspeed version after PR is merged and need to update the dependencies when that happens. deepspeed: @stas00. "
[ 56 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "DeepSpeed" ]
"https://api.github.com/repos/huggingface/transformers/issues/13972"
" TITLE LayoutLMv2Processor does not accept the XLMRobertaTokenizerFast COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.3 - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - PyTorch version (GPU?): 1.9.1+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @NielsRogge <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - LayoutLMv2 --> ## Information Model I am using: LayoutXLM The problem arises when using: * [x] the official example scripts: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb The tasks I am working on is: * [x] an official task: SequenceClassification ## To reproduce Steps to reproduce the behavior: When we replace the layoutlmv2 tokenizer in cell 8 of https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb ```python from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor feature_extractor = LayoutLMv2FeatureExtractor() tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") processor = LayoutLMv2Processor(feature_extractor, tokenizer) ``` with the layoutxlm tokenizer as described in https://huggingface.co/transformers/model_doc/layoutxlm.html ```python from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor, AutoTokenizer feature_extractor = LayoutLMv2FeatureExtractor() tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutxlm-base') processor = LayoutLMv2Processor(feature_extractor, tokenizer) ``` the following error occurs ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_3433/3030379235.py in <module> 5 tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutxlm-base') 6 #tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") ----> 7 processor = LayoutLMv2Processor(feature_extractor, tokenizer) ~/.cache/pypoetry/virtualenvs/stp-experiment0-RgVp7VCN-py3.8/lib/python3.8/site-packages/transformers/models/layoutlmv2/processing_layoutlmv2.py in __init__(self, feature_extractor, tokenizer) 54 ) 55 if not isinstance(tokenizer, 
(LayoutLMv2Tokenizer, LayoutLMv2TokenizerFast)): ---> 56 raise ValueError( 57 f"`tokenizer` has to be of type {LayoutLMv2Tokenizer.__class__} or {LayoutLMv2TokenizerFast.__class__}, but is {type(tokenizer)}" 58 ) ValueError: `tokenizer` has to be of type <class 'type'> or <class 'type'>, but is <class 'transformers.models.xlm_roberta.tokenization_xlm_roberta_fast.XLMRobertaTokenizerFast'> ``` It looks like the LayoutLMv2Processor does not accept the XLMRobertaTokenizerFast. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> That the LayoutLMv2Processor accepts the XLMRobertaTokenizerFast."
[ 52 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/10193"
" TITLE Make use of our copy-consistency script for task-specific models COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY This is an intermediate issue, which is why it gets both the good first issue and good second issue tags. We have an automated script to check when copies of the same code are consistent inside the library, which allows us to avoid subclassing and keep all code for one model's forward pass inside one file (see our [philosophy]() for more details on this). The XxxModelForYyy are very similar to one another and should be able to leverage that functionality, so we can easily change only one file when there is a bug/docstring to tweak and all the others are updated. More precisely, models that have a pooler layer could probably base themselves on BERT and models that don't could be based on ELECTRA. The Seq2Seq models that are a bit particular could be based on BART. To enable this, the checkpoint use in the decorator `@add_code_sample_docstrings` needs to be defined in a constant (otherwise it will end up being copied which we don't want) so to tackle this issue, your mission, should you accept it, will have two steps: 1. Define in all modeling files a `_CHECKPOINT_FOR_DOC` at the beginning (with `_TOKENIZER_FOR_DOC` and `_CONFIG_FOR_DOC`) that should then be used in all the XxxModelForYyy. 2. Adds the relevant `# Copied from xxx with Xxx -> Yyy` whenever possible."
[ 52, 43 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue", "Good Second Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/9582"
" TITLE [deepspeed doc] install issues + 1-gpu deployment COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY This PR extends the DeepSpeed/FairScale integration documentation to: * add extensive general troubleshooting for CUDA-extensions (applies to fairscale, deepspeed, apex or any other python pytorch extension with CUDA C++ code) - these are very likely to be encountered by our users - all notes are based on my first hand encounters with these issues - 2 of which I run into yesterday while trying to build fairscale and deepspeed on Sylvain's hardware which he let me use to run the recent benchmarks. so I figured others are likely to have similar issues and neither fairscale nor deepspeed have these documented anywhere. * adds deployment for 1 gpu DeepSpeed notes * reformats sub-headers so that it's easier to link to specific sections @sgugger "
[ 56 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "DeepSpeed" ]
"https://api.github.com/repos/huggingface/transformers/issues/9035"
" TITLE Improve coverage of the documentation COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 1 heart: 0 rocket: 0 eyes: 0 BODY Currently, some public classes are not documented anywhere because we didn't create the corresponding doc pages. Those missing pages are: - Benchmark classes - Bert Japanese - Data collators If someone feels like working on one of those, please tag yourself with a comment on this issue. Once the objects are properly documented, they can be removed from the `SHOULD_BE_DOCUMENTED` constant in [this file](https://github.com/huggingface/transformers/blob/1310e1a758edc8e89ec363db76863c771fbeb1de/utils/check_repo.py#L374). "
[ 23, 52 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Documentation", "Good First Issue" ]
"https://api.github.com/repos/huggingface/transformers/issues/8144"
" TITLE ETA on TFEncoderDecoderModel and is BERTShare from https://arxiv.org/pdf/1907.12461.pdf planned? COMMENTS 4 REACTIONS +1: 2 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY # 🚀 Feature request Is there a plan for BERTShare from https://arxiv.org/pdf/1907.12461.pdf to be an option for the EncoderDecoderModel? Also, I can see that an TFEncoderDecoderModel is on the 'ToDo' list for the [EncoderDecoder Framework](https://github.com/huggingface/transformers/projects/23). Any chance of an expected time of completion of this would be greatly appreciated. ## Motivation Having an easy to use seq2seq model integrated into hugging face (with TensorFlow) would help my research immensely. Also, models like BERTShare are much more parameter efficient. ## Your contribution I am happy to help in any form. Not sure where help is needed tbh. "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/7336"
" TITLE Error when fine-tune RoBERTa on NSP using Trainer COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: IDK. - Using distributed or parallel set-up in script?: IDK. ### Who can help Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten nlp datasets: [different repo](https://github.com/huggingface/nlp) ## Information Model I am using RoBERTa trained for Polish lanugage: [polish-roberta](https://github.com/sdadas/polish-roberta), version [robreta_base_transformers](https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_transformers.zip). The problem arises when using: * [ ] my own modified scripts: ```python from transformers import (BertForNextSentencePrediction, BertTokenizer, RobertaModel, RobertaTokenizer, Trainer, TrainingArguments) from transformers.data.datasets.language_modeling import TextDatasetForNextSentencePrediction from transformers.data.data_collator import DataCollatorForNextSentencePrediction from argparse import ArgumentParser def parse_args(): parser = ArgumentParser("Fine-tune RoBERTa in Next Sentence Prediction.") parser.add_argument("-m", "--model_path", dest="model_path", required=True, help="Path to RoBERTa model.") parser.add_argument("-o", "--output_path", dest="output_path", required=True, help="Path to directory of fine-tuned model.") parser.add_argument("-d", "--dataset_path", dest="dataset_path", required=True, help="Path to dataset.") args = parser.parse_args() return args if __name__ == "__main__": args = parse_args() tokenizer = RobertaTokenizer.from_pretrained(args.model_path) finetune_model = BertForNextSentencePrediction.from_pretrained(args.model_path) training_args = TrainingArguments( output_dir=args.output_path, 
num_train_epochs=3, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', ) data_collator = DataCollatorForNextSentencePrediction( tokenizer=tokenizer, mlm=False, block_size=512, nsp_probability=0.5, ) train_dataset = TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=args.dataset_path, block_size=512, ) trainer = Trainer( model=finetune_model, args=training_args, train_dataset=train_dataset, data_collator=data_collator, ) trainer.train() trainer.save_model(args.output_path) ``` The tasks I am working on is: * [ ] my own task or dataset based on TextDatasetForNextSentencePrediction input format: ```bash <doc1_turn1> <doc1_turn2> <doc2_turn1> <doc2_turn2> ... ``` ## To reproduce Steps to reproduce the behavior: 1. `python finetune_roberta.py -m <model_dir> -o output/ -d <dataset_path>` ```bash Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained. 
Some weights of the model checkpoint at roberta_base/ were not used when initializing BertForNextSentencePrediction: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 
'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 
'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 
'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 
'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 
'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias'] - This IS expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of BertForNextSentencePrediction were not initialized from the model checkpoint at roberta_base/ and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 
'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 
'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 
'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 
'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch: 0%| | 0/3 [00:00<?, ?it/s/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.676176025390625, 'learning_rate': 5e-05, 'epoch': 0.3427004797806717, 'step': 500} | 499/1459 [04:30<08:09, 1.96it/s] /home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. 
warnings.warn(SAVE_STATE_WARNING, UserWarning) {'loss': 0.671025390625, 'learning_rate': 4.355171524374517e-05, 'epoch': 0.6854009595613434, 'step': 1000}███████████▎ | 999/1459 [08:47<03:53, 1.97it/s] Traceback (most recent call last):███████████████████████████████████████████████████████████████████████████████████████ | 1033/1459 [09:06<03:38, 1.95it/s] File "finetune_roberta.py", line 75, in <module> trainer.train() File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/trainer.py", line 699, in train for step, inputs in enumerate(epoch_iterator): File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/data/data_collator.py", line 358, in __call__ input_id, segment_id, attention_mask, label = self.create_examples_from_document(doc, i, examples) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/data/data_collator.py", line 446, in create_examples_from_document random_start = random.randint(0, len(random_document) - 1) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/random.py", line 248, in randint return self.randrange(a, b+1) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/random.py", line 226, in randrange raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width)) ValueError: empty range for randrange() (0, 0, 0) Epoch: 0%| | 0/3 [09:09<?, 
?it/s] Iteration: 71%|████████████████████████████████████████████████████████████████████████████████████████████████████████ | 1033/1459 [09:09<03:46, 1.88it/s] ``` ## Expected behavior The model is fine-tuned on the NSP task on the given dataset, and afterwards the model is saved. "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
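The traceback in the record above ends in `random.randint(0, len(random_document) - 1)` raising `ValueError: empty range for randrange() (0, 0, 0)`, which happens exactly when `random_document` is empty. A minimal sketch of a guard for that failure mode (the helper name is hypothetical, not part of `data_collator.py`):

```python
import random

def safe_random_start(random_document):
    """Pick a random start index into a document, guarding the empty case.

    Illustrates the crash above: random.randint(0, -1) raises ValueError
    when the sampled document has no segments.
    """
    if len(random_document) == 0:
        return None  # caller should resample a different document
    return random.randint(0, len(random_document) - 1)
```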
"https://api.github.com/repos/huggingface/transformers/issues/11445"
" TITLE CLIP COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR adds the [CLIP](https://github.com/openai/CLIP) model. CLIP is a multi-modal vision+language model which uses a transformer model for encoding both the images and the text. - The model here is designed such that both `CLIPTextModel` and `CLIPVisionModel` can be loaded independently, and composed together to get the `CLIPModel`. - Both `CLIPTextModel` and `CLIPVisionModel` use the shared encoder class `CLIPEncoder`. - The config classes are also kept separate, i.e. `CLIPTextConfig` and `CLIPVisionConfig`. These could be merged into one config class, but then we would have to add two arguments for each config value, i.e. `text_hidden_size` for the text model, `vision_hidden_size` for the vision model, etc. One issue here is that when we load an individual model, like `CLIPTextModel`, using the weights of the whole `CLIPModel`, the config ends up containing both the text and vision config dicts; this does not cause any issue but could be confusing to look at. One important thing to note here is that CLIP's tokenizer does not have a pad token defined for it: 0 is used as the `pad_token_id` to pad the text, but the token associated with 0 is not a pad token. So here, to be able to do padding, I've added `pad_token_id` as a `property` which returns 0. I would be happy to hear if there is some other way to achieve this. Also, I've added a processor class here, but I'm not sure if we really need it for this model. We could easily use the extractor for the vision model and the tokenizer for the text model. Would love your review of the design @LysandreJik , @patrickvonplaten , @sgugger."
[ 14 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "PR for Model Addition" ]
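The CLIP PR above describes exposing `pad_token_id` as a `property` that returns 0 even though no dedicated pad token exists in the vocabulary. A toy sketch of that design choice (the class is illustrative only, not the actual `CLIPTokenizer` implementation):

```python
class ToyCLIPTokenizer:
    """Illustrative only: pads with id 0 even though the vocab has no pad token string."""

    @property
    def pad_token_id(self):
        # CLIP's vocab defines no pad token; id 0 is used for padding anyway.
        return 0

    def pad(self, ids, max_length):
        # Right-pad a list of token ids to max_length with pad_token_id.
        return ids + [self.pad_token_id] * (max_length - len(ids))
```

Making the value a read-only property keeps callers that expect `tokenizer.pad_token_id` working without pretending a pad token string exists.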
"https://api.github.com/repos/huggingface/transformers/issues/13528"
" TITLE Trainer's create_model_card creates an invalid yaml metadata `datasets: - null` COMMENTS 12 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - any env ### Who can help - discussed with @julien-c @sgugger and @LysandreJik ## Information - The hub will soon reject pushes with invalid model card metadata, - **only when `datasets`, `model-index` or `license` are present**; their content needs to follow the specification, cf. https://github.com/huggingface/huggingface_hub/pull/342 ## To reproduce Steps to reproduce the behavior: 1. Train a model 2. Do not associate any dataset 3. The trained model and the model card are rejected by the server ## Expected behavior trainer.py git push should be successful, even with the coming patch https://github.com/huggingface/transformers/pull/13514 "
[ 32, 9 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card", "Should Fix" ]
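The issue above is that the generated README.md yaml contains `datasets: - null` when no dataset name is known. A hedged sketch of the kind of filtering the fix implies (the helper name is hypothetical, not the actual `create_model_card` code):

```python
def clean_card_metadata(metadata):
    """Drop keys whose values are None, or lists containing only None,
    so the generated README.md yaml never emits `datasets: - null`."""
    cleaned = {}
    for key, value in metadata.items():
        if value is None:
            continue
        if isinstance(value, list):
            value = [v for v in value if v is not None]
            if not value:
                continue  # omit the key entirely rather than emit a null list
        cleaned[key] = value
    return cleaned
```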
"https://api.github.com/repos/huggingface/transformers/issues/12628"
" TITLE GPTNeo Error Attempting to Generate Text COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no - Using TPU: Yes ### Who can help @patil-suraj and @patrickvonplaten Models: GPTNeo Library: - flax transformers ## Information Model I am using (Bert, XLNet ...): GPTNeo ## To reproduce Steps to reproduce the behavior: Here is a Google Colab for reproducing: https://colab.research.google.com/drive/1tba52h5t-BP3g13FMdPXVjKqpoLTlGvP?usp=sharing For convenience, here is the error msg: ``` TypeError: dynamic_update_slice update shape must be smaller than operand shape, got update shape (1, 45) for operand shape (1, 20). ``` I was originally getting the same error as #12081. However, when I attempted to implement the same fix as in that issue, I got the above error. The error might be because I am using the "ForCausalLM" version of GPTNeo. However, there is no LMHead version. ## Expected behavior Generate the output sequence "
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
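One plausible reading of the `dynamic_update_slice` error above is that Flax generation preallocates a `(batch, max_length)` buffer and then tries to write the prompt (here 45 tokens) into it, which cannot work when `max_length` is smaller (here 20). A sketch of that constraint, assuming this interpretation (the helper is hypothetical, not part of `transformers`):

```python
def check_generation_buffer(prompt_len, max_length):
    """Illustrates the constraint behind the TypeError in the issue above:
    the prompt is copied into a preallocated (batch, max_length) buffer,
    so max_length must be at least the prompt length."""
    if prompt_len > max_length:
        raise ValueError(
            f"prompt length {prompt_len} exceeds max_length {max_length}; "
            "pass a larger max_length to generate()"
        )
    return max_length - prompt_len  # slots left for newly generated tokens
```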
"https://api.github.com/repos/huggingface/transformers/issues/9941"
" TITLE Converting pretrained tf2 bert model to pytorch model for using FillMaskPipeline COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Sincere greetings to all NLP researchers and HuggingFace members, I am currently working on masked word prediction in a sentence, like what FillMaskPipeline does. (i.e. This is a [MASK] test. Where the result for [MASK] might give {'sequence': '[CLS]This is a [MASK] test.[SEP]', 'score': floating point, 'token': integer, 'token_str': string } as one of the candidate outputs) In my past work I downloaded bert-base-chinese as my base model, which can be found in [original bert](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip), then kept running the pretraining steps on my custom dataset using the run_pretraining [script](https://github.com/google-research/bert/blob/master/run_pretraining.py), and finally used [transformers-cli](https://huggingface.co/transformers/converting_tensorflow_models.html) to convert my bert_model.ckpt to pytorch_model.bin and applied FillMaskPipeline to do masked word prediction. Everything worked fine when I used the script under tensorflow-gpu==1.13.1 and CUDA 10.0 with a GTX 1080 Ti. 
However, in my current task I want to switch from the 1080 Ti to an RTX 3090 to allow a larger batch size, but per [Access to checkpoint](https://github.com/tensorflow/models/tree/master/official/nlp/bert#access-to-pretrained-checkpoints) the tf2 bert-base-chinese model hasn't been released yet, so I first converted the tf1 bert-base-chinese model to tf2 using this [script](https://github.com/tensorflow/models/blob/master/official/nlp/bert/tf2_encoder_checkpoint_converter.py) with the following args, !python tf2_encoder_checkpoint_converter.py --checkpoint_to_convert=$BASE_DIR/bert_model.ckpt --converted_checkpoint_path=tmp/ --bert_config_file=$BASE_DIR/bert_config.json After the model was converted I used the tf2 [create_pretraining_data.py](https://github.com/tensorflow/models/blob/master/official/nlp/data/create_pretraining_data.py) on my dataset and ran [run_pretraining.py](https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_pretraining.py). The problem I encounter now is that after training finishes I have no idea how to do the masked prediction task on my data. I tried the [converting ckpt script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py) but noticed the BertSelfAttention layer cannot be converted due to the MLM heads; I also didn't find any related script doing similar work to FillMaskPipeline in tensorflow. My questions are, - Is it possible to convert a pretrained tf2 model to pytorch without any additional modification of convert_bert_original_tf2_checkpoint_to_pytorch.py (I have no idea how to move forward; I also tried to ignore the layers that cannot be converted, but that raises the error shown in the following png) ![image](https://user-images.githubusercontent.com/56808566/106563015-206cc180-6566-11eb-9356-000d9281df64.png) - Is there any related script for tensorflow 1 or tensorflow 2 I can refer to? I spent some time searching for related posts but with no luck. 
I only understand a little tensorflow, so if anyone can give me direction on doing masked prediction it would be much appreciated. ### System Info My environments are: (using pipreqs) Python version: 3.7.5 CUDA used to build PyTorch: cuda_11.0_bu.TC445_37.28845127_0 OS: Ubuntu 18.04.5 LTS GCC version: 7.5.0 ### Versions of relevant libraries: (using pipreqs) numpy==1.19.5 transformers==4.2.2 six==1.15.0 torch==1.7.1+cu110 tensorflow_gpu==2.2.0 gin_config==0.1.1 absl_py==0.11.0 tensorflow_hub==0.11.0 sentencepiece==0.1.94 absl==0.0 torchvision==0.8.2+cu110 bert4keras==0.9.9 gin==0.1.006 tensorflow==2.4.1 Any suggestions, help, and related posts that I can dig into would be appreciated. Thanks in advance!"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/9028"
" TITLE Initial README for `t5-base-indonesian-summarization-cased` model COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Initial README for Indonesian T5 Summarization Base Model"
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/7416"
" TITLE Possible error in MBart Tokenization script -- target lang code is only present in seq once COMMENTS 5 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: current - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No. ### Who can help MBart: @sshleifer ## Information Model I am using is MBart. The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: ```py from transformers import MBartTokenizer tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro') example_english_phrase = " UN Chief Says There Is No Military Solution in Syria" expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" batch: dict = tokenizer.prepare_seq2seq_batch( example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian ) ``` ``` -snip- 'labels': tensor([[ 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])} ``` The target language code is only present once in the target sequence. `print(tokenizer.lang_code_to_id["ro_RO"])` `250020` ## Expected behavior ``` 'labels': tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])} ``` Here, the target language code is first and last, as I believe MBart (https://arxiv.org/pdf/2001.08210.pdf, top of page 3) says. MBart Excerpt: ``` For each instance of a batch we sample a language id symbol <LID> ... sentences in the instance are separated by the end of sentence (</S>) token. 
Then, we append the selected<LID> ``` Here is the code I believe is wrong: ```py def set_tgt_lang_special_tokens(self, lang: str) -> None: """Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos].""" self.cur_lang_code = self.lang_code_to_id[lang] self.prefix_tokens = [] self.suffix_tokens = [self.eos_token_id, self.cur_lang_code] ``` To me, the comment implies the language code should be first as well. I tested it locally, and merely adding `self.cur_lang_code` to `self.prefix_tokens` resolves the issue. I do not know if I am misunderstanding the purpose of this script or misusing it. My above code is copied from the "MBartTokenizer" example at https://huggingface.co/transformers/master/model_doc/mbart.html#overview If I didn't make a mistake, I'd be more than happy to open a PR to change that one line and fix it."
[ 23 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Documentation" ]
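The MBart report above expects the target language id both before and after the label sequence, i.e. `[250020, ..., 2, 250020]` instead of `[..., 2, 250020]`. A minimal sketch of the proposed change (a simplified stand-in, not the real `MBartTokenizer`):

```python
def build_target_labels(token_ids, lang_code_id, eos_id=2):
    """Assemble target-side labels with the layout the issue proposes:
    [tgt_lang_code] + tokens + [eos, tgt_lang_code]."""
    prefix_tokens = [lang_code_id]            # the one-line fix: lang code first
    suffix_tokens = [eos_id, lang_code_id]    # unchanged from the quoted code
    return prefix_tokens + token_ids + suffix_tokens
```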
"https://api.github.com/repos/huggingface/transformers/issues/7558"
" TITLE [Model card] SinhalaBERTo model. COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY This is the model card for keshan/SinhalaBERTo model."
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/15902"
" TITLE [WIP] Add Fusion-in-Decoder COMMENTS 3 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 2 rocket: 3 eyes: 0 BODY # What does this PR do? This PR adds the Fusion-in-Decoder model to the repository. Paper: https://arxiv.org/abs/2007.01282 Code: https://github.com/facebookresearch/FiD ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @patil-suraj, @patrickvonplaten, @qqaatw"
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
"https://api.github.com/repos/huggingface/transformers/issues/17211"
" TITLE CUDA out of memory in Seq2SeqTrainer class COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ```shell - `transformers` version: 4.11.0 - Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.13 - PyTorch version (GPU?): 1.10.2+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger @patrickvonplaten Hello! I am trying to finetune the "sshleifer/distill-pegasus-xsum-16-4" model for a seq2seq generation task (specifically summarization) on my own custom dataset (~1800 training data points) using the Hugging Face transformers Seq2SeqTrainer, but encountered a CUDA OOM error. I am trying to follow the [finetune-summarization notebook](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb) mentioned by @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction Libraries ```python import transformers from datasets import load_dataset, load_metric from transformers import AutoTokenizer from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer import nltk import numpy as np ``` Data ```python data_files = { "train": "data/train.jsonl", "validation": "data/val.jsonl" } raw_datasets = load_dataset('json', data_files=data_files) ``` Load tokenizer and model ```python model_checkpoint = 'sshleifer/distill-pegasus-xsum-16-4' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) ``` Process Data ```python if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]: prefix = "summarize: " else: prefix = "" max_input_length = 1024 max_target_length = 128 def preprocess_function(examples): inputs = [prefix + doc for doc in examples["document"]] model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(examples["summary"], max_length=max_target_length, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs tokenized_datasets = raw_datasets.map(preprocess_function, batched=True) ``` Trainer ```python metric = load_metric("rouge") batch_size = 2 model_name = model_checkpoint.split("/")[-1] args = Seq2SeqTrainingArguments( f"{model_name}-finetuned-xsum", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, save_total_limit=2, num_train_epochs=1, predict_with_generate=True, fp16=True, ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the 
labels as we can't decode them. labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Rouge expects a newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) # Extract a few results result = {key: value.mid.fmeasure * 100 for key, value in result.items()} # Add mean generated length prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] result["gen_len"] = np.mean(prediction_lens) return {k: round(v, 4) for k, v in result.items()} trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() ``` Error ```bash The following columns in the training set don't have a corresponding argument in `PegasusForConditionalGeneration.forward` and have been ignored: summary, document. ***** Running training ***** Num examples = 1599 Num Epochs = 1 Instantaneous batch size per device = 2 Total train batch size (w. 
parallel, distributed & accumulation) = 2 Gradient Accumulation steps = 1 Total optimization steps = 800 --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-33-3435b262f1ae> in <module> ----> 1 trainer.train() ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1310 tr_loss_step = self.training_step(model, inputs) 1311 else: -> 1312 tr_loss_step = self.training_step(model, inputs) 1313 1314 if args.logging_nan_inf_filter and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)): ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in training_step(self, model, inputs) 1837 if self.use_amp: 1838 with autocast(): -> 1839 loss = self.compute_loss(model, inputs) 1840 else: 1841 loss = self.compute_loss(model, inputs) ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 1871 else: 1872 labels = None -> 1873 outputs = model(**inputs) 1874 # Save past state if it exists 1875 # TODO: this needs to be fixed and made cleaner later. 
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1391 output_attentions=output_attentions, 1392 output_hidden_states=output_hidden_states, -> 1393 return_dict=return_dict, 1394 ) 1395 lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 1226 output_attentions=output_attentions, 1227 output_hidden_states=output_hidden_states, -> 1228 return_dict=return_dict, 1229 ) 1230 # If the user passed a 
tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 796 attention_mask, 797 layer_head_mask=(head_mask[idx] if head_mask is not None else None), --> 798 output_attentions=output_attentions, 799 ) 800 ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions) 320 attention_mask=attention_mask, 321 layer_head_mask=layer_head_mask, --> 322 output_attentions=output_attentions, 323 ) 324 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or 
_global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions) 207 # self_attention 208 key_states = self._shape(self.k_proj(hidden_states), -1, bsz) --> 209 value_states = self._shape(self.v_proj(hidden_states), -1, bsz) 210 211 if self.is_decoder: ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input) 101 102 def forward(self, input: Tensor) -> Tensor: --> 103 return F.linear(input, self.weight, self.bias) 104 105 def extra_repr(self) -> str: ~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1846 if has_torch_function_variadic(input, weight, bias): 1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias) -> 1848 return torch._C._nn.linear(input, weight, bias) 1849 1850 RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB total capacity; 13.65 GiB already allocated; 11.75 MiB free; 13.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` The error can be reproduced by loading any open-source summarization dataset: `raw_datasets = load_dataset("xsum")` ### Expected behavior ```shell Finetune the summarization model. ``` "
[ 17 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
"https://api.github.com/repos/huggingface/transformers/issues/11101"
" TITLE Confusion COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Like you mention below: I am doing a question answering task I was using BERT originally so can I convert from BERT to this? To download and use any of the pretrained models on your given task, you just need to use those three lines of codes (PyTorch version): >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> model = AutoModel.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) or for TensorFlow: >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> model = TFAutoModel.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs)"
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Migration" ]
"https://api.github.com/repos/huggingface/transformers/issues/7566"
" TITLE Trainer incorrectly checks pytorch version COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.27 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes(not applicable) - Using distributed or parallel set-up in script?: not applicable ### Who can help @sgugger and @prajjwal1 (he added it, according to git blame) ## Information I'm running token classification example on my own data, and i've faced trouble with fp16 training with torch 1.6.0. Script says that i need apex installed to use fp16 option. However, apex is not required since torch 1.6.0 came out with native amp support. I've dived into trainer code, and found that there is version checking line: https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/src/transformers/trainer.py#L65 Apparently, it is slightly incorrect. There should be <= instead of <. So it will not try to import apex if torch version is greater OR EQUAL to 1.6 ```python import torch from packaging import version print(version.parse(torch.__version__) < version.parse("1.6")) # -> False print(version.parse(torch.__version__) <= version.parse("1.6")) # -> True ``` ## To reproduce 1. Install torch 1.6.0(and do not install apex) 2. clone repository 3. cd into examples/token-classification 4. add '--fp16' to the bottom of run.sh script 5. execute run.sh script ## Expected behavior Script works well on torch 1.6.0 without apex installed "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/13322"
" TITLE DestilGTP2 code from pytorch-transformers does not work in transformers, I made a basic example COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY How would i convert this to new version of transformers. Or is it possible to somehow use DestilGTP2 with pytorch-transformers. use_transformers = True if use_transformers: import torch from transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel tokenizer1 = GPT2Tokenizer.from_pretrained('distilgpt2',cache_dir="/var/software/Models/") model1 = GPT2LMHeadModel.from_pretrained('distilgpt2',cache_dir="/var/software/Models/") model1.eval() model1.to('cuda') text = "Who was Jim Henson ?" indexed_tokens = tokenizer1.encode(text) tokens_tensor = torch.tensor([indexed_tokens]) tokens_tensor = tokens_tensor.to('cuda') with torch.no_grad(): predictions_1 = model1(tokens_tensor) print(predictions_1) else: import torch from pytorch_transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel tokenizer1 = GPT2Tokenizer.from_pretrained('gpt2',cache_dir="/var/software/Models/") # cache_dir=None model1 = GPT2LMHeadModel.from_pretrained('gpt2',cache_dir="/var/software/Models/") model1.eval() model1.to('cuda') text = "Who was Jim Henson ?" indexed_tokens = tokenizer1.encode(text) tokens_tensor = torch.tensor([indexed_tokens]) tokens_tensor = tokens_tensor.to('cuda') with torch.no_grad(): predictions_1 = model1(tokens_tensor) print(predictions_1) When i try i get an error, and tried to follow the guide but do not get what the new tokeniser does differently. "
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Migration" ]
"https://api.github.com/repos/huggingface/transformers/issues/8687"
" TITLE added bangla-bert-sentiment model card COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Hi, I added model card for bangla-bert-sentiment model. Please check and if possible merge. thanks and regards"
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/8683"
" TITLE use the torchscript in a gpt model is slower than origin one. COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:2.1.1 - Platform:Linux version 4.15.0-76-generic (buildd@lcy01-amd64-029) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) - Python version:3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 - Using GPU in script?:No -GPU-tesla k80 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information when i am using torchscipts to speed up the interference of my gpt2 model, I found it is slower than the origin one traced model 0.6959998607635498 origin model 0.3259282112121582 The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] my own 
task : gpt2 LM ## To reproduce Steps to reproduce the behavior: follow the code below https://github.com/lonelydancer/algorithm/blob/master/test.py <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> the traced model is faster."
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/9333"
" TITLE TF Longformer has some graph compilation/execution issue COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY TF longformer has the following issues to make it 100% graph compilation/execution compliant. I succeed to fix most of the issues but two still remains: 1. The first issue starts at line [1762](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L1762). The test to know if the inputs needs to be padded prevent the graph to be compiled because `input_ids`, `position_ids` and `input_embeds` can be `None` at the end of the main branch. As a solution I propose to export the padding process (from line 1769 to 1786) outside the `if` as if `padding_len == 0` the calls to `tf.pad(...)` and `tf.concat(...)` will have no effect on the different inputs. 2. The second issue is at line [1527](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L1527). Here `all_global_attentions` can be either a tuple or `None` in a same execution because `is_global_attn` is not defined globally but during the execution. I don't know how to solve this one. As a first test you can run: ``` from transformers import TFLongformerModel model = TFLongformerModel.from_pretrained("lysandre/tiny-longformer-random", output_attentions=True, output_hidden_states=True) model.save("path") ``` Ping @patrickvonplaten the Longformer expert :)"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/12140"
" TITLE [FLAX] port GPTNeo to Flax COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Port the existing GPTNeo Model to FLAX"
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/15759"
" TITLE Add EfficientNet Model - PyTorch COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 🌟 New model addition I would like to add the EfficientNet model to the Transformers library ## Model description EfficientNet was proposed in the paper, **[EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)**. EfficientNet is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a compound coefficient. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. <!-- Important information --> ## Open source status * [x] the model implementation is available: [EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch) * [x] the model weights are available: [PyTorch weights](https://github.com/lukemelas/EfficientNet-PyTorch/releases/tag/1.0) * [x] who are the authors: Mingxing Tan, Quoc V. Le "
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/8443"
" TITLE Dropout p is changing after loading COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ## Environment info - `transformers` version: 3.5.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik, @sgugger ## Information Model I am using (Bert, XLNet ...): Bert, Roberta The problem arises when using: * [ *] the official example scripts: Using information given in this link: https://huggingface.co/transformers/master/custom_datasets.html The tasks I am working on is: * [ *] my own task or dataset: text classification ## To reproduce Steps to reproduce the behavior: 1. I'm trying to change dropout probability. I'm using one of these methods for Bert instance: ```python model.classifier.dropout.p=0.7 model.classifier.dropout = nn.Dropout(0.7) ``` 2. After training is completed, model is saved ```python model.save_pretrained('xxx/bert') ``` 3. Model is loaded in another session using this code snippet. But after loading, model.classifier.dropout.p is changing to 0.1 which is in the config file. ```python model = BertForSequenceClassification.from_pretrained("xxx/bert", num_labels = 3, output_attentions = False, output_hidden_states = False, ) ``` ## Expected behavior Dropout p is changing to default value after loading the model. But the model is modified so that it shouldn't do that behavior "
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
"https://api.github.com/repos/huggingface/transformers/issues/11499"
" TITLE [DeepSpeed] fp32 support COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Things we need to sync with the upcoming `deepspeed==0.3.16` release: - `zero.Init` now takes a config as an argument - fp32-support integration, plus doc and tests - start troubleshooting section ### Future TODO will probably do in the next PR: - switch `from_config()` to perform the same `zero.Init` as `from_pretrained` + add test. ### Blocking events PRs waiting to be merged before this PR can be merged: - [x] https://github.com/microsoft/DeepSpeed/pull/1008 `zero.Init(config=ds_config)` new arg - [x] https://github.com/microsoft/DeepSpeed/pull/1004 fp32 support - [x] new release is needed 0.3.16 @sgugger "
[ 56 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "DeepSpeed" ]
"https://api.github.com/repos/huggingface/transformers/issues/8479"
" TITLE Fix SqueezeBERT for masked language model COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This corrects a mistake in the implementation of SqueezeBertForMaskedLM. Fixes #8277 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes: https://github.com/huggingface/transformers/issues/8277 - [x] Did you make sure to update the documentation with your changes? Here are the - [ ] Did you write any new necessary tests? _No tests added._ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @sgugger @LysandreJik @ontocord "
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/13505"
" TITLE Insufficient memory occurs during finetune COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # 📚 Migration ## Information <!-- Important information --> Model I am using (Bert,bert-base-chinese): Language I am using the model on (Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Details * Here is my bash ``` export CUDA_VISIBLE_DEVICES=1,2 python run_mlm.py \ --model_name_or_path bert-base-chinese \ --train_file /home/xjyu/zhh/workspace/one-stop-ocr/char2char.allchars/quantize-train/data/scripts/wiki-part/wiki-part.split.nosign.txt \ --validation_file /home/xjyu/zhh/workspace/one-stop-ocr/char2char.allchars/quantize-train/data/scripts/wiki-part/test.txt \ --do_train \ --do_eval \ --output_dir ./test-mlm \ --line_by_line \ ``` When I performed finetune on bert->`run_mlm.py`, it took about 0.6 epoch,the output_dir file "test-mlm" took up 91G RAM,I am really surprised. It stands to reason that the output file cannot be so big,But i don't know where the problem is. * here is my machine +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.64 Driver Version: 440.64 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 108... Off | 00000000:02:00.0 Off | N/A | | 0% 32C P8 9W / 250W | 0MiB / 11178MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 GeForce GTX 108... 
Off | 00000000:03:00.0 Off | N/A | | 0% 28C P8 9W / 250W | 0MiB / 11178MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 2 GeForce GTX 108... Off | 00000000:83:00.0 Off | N/A | | 77% 80C P2 73W / 250W | 4865MiB / 11177MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 3 GeForce GTX 108... Off | 00000000:84:00.0 Off | N/A | | 0% 65C P2 68W / 250W | 5574MiB / 11178MiB | 0% Default | +-------------------------------+----------------------+----------------------+ <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): ## Checklist - [x] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [x] I checked if a related official extension example runs on my machine. @sgugger "
[ 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Migration" ]
"https://api.github.com/repos/huggingface/transformers/issues/11085"
" TITLE Add DistilBertForCausalLM COMMENTS 14 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Similar to the `BertLMHeadModel` this PR aims to add a `DistilBertForCausalLM` model in `modeling_distilbert.py`. Fixes #7397 Replaces #8387 ## Who can review? @patil-suraj "
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
"https://api.github.com/repos/huggingface/transformers/issues/13845"
" TITLE [RFC] Add `modeling_xxx_fusion.py` to support kernel fusion COMMENTS 10 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 6 rocket: 0 eyes: 0 BODY ## Introduction I am an engineer currently working on 3D model parallelism for transformers. When the tensor model parallelism (https://github.com/huggingface/transformers/pull/13726) is done, I am going to introduce [kernel fusion](https://stackoverflow.com/questions/53305830/cuda-how-does-kernel-fusion-improve-performance-on-memory-bound-applications-on) feature to transformers. ![image](https://user-images.githubusercontent.com/38183241/135726581-ee305818-c78a-439f-90b4-30cd1edbc1fe.png) For this, I want to create a new modeling file called `modeling_xxx_fusion.py`. This work is currently being discussed with @stas00 and @RezaYazdaniAminabadi (DeepSpeed team). ## Kernel fusion API ```python from transformers import BertForMaskedLM # create model model = BertForMaskedLM.from_pretrained("bert-base-cased") # 1. fuse_modules # `fuse_modules` is function level fusion, It supports a wide variety of models. # all arguments is `True` as default model.fuse_modules() # fuse selective modules model.fuse_modules( word_embedding=True, scale_mask_softmax=True, layer_norm=True, bias_act=True, bias_dropout_residual=False, cross_entropy=True, ) # 2. fuse_layers # `fuse_layers` is block level (attention & mlp) fusion, only a few models are supported. # argument (`inference`) is `None` -> `not self.training` of `torch.nn.Module` as default. model.fuse_layers(inference=None) # fuse layers for inference model.fuse_layers(inference=True) # fuse layers for training model.fuse_layers(inference=False) ``` ## Implementation The internal module of each model will be re-implemented using kernel fusion method, and the existed module will be replaced with the fused module. The following example is an example of `BertOutput(nn.Module)`. 
```python # transformers/models/bert/modeling_bert.py class BertOutput(nn.Module): def __init__(self, config): super().__init__() self.dense = nn.Linear(config.intermediate_size, config.hidden_size) self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout_prob) def forward(self, hidden_states, input_tensor): hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = self.LayerNorm(hidden_states + input_tensor) return hidden_states ``` ```python # transformers/models/bert/modeling_bert_fusion.py class FusedBertOutput(BertOutput): def forward(self, hidden_states, input_tensor): hidden_states = hidden_states @ self.dense.weight.t() hidden_states = FusedBiasDropoutResidual.apply(hidden_states, self.dense.bias, input_tensor) hidden_states = FusedLayerNorm.apply(hidden_states, self.LayerNorm.weight, self.LayerNorm.bias) return hidden_states ``` When the user calls the `fuse_modules()` method, the kernel fusion engine finds `BertOutput` and replaces it with `FusedBertOutput`. and user calls `fused_layers` method, engine finds `BertLayer` and replcases it with `FusedBertLayer`. This is the method that `parallelformers` parallelized transformers models flexibly, and the `deepspeed` also supports kernel fusion in this way. However, the current version of `deepspeed` fuses the entire transformer layer, so the supported models are very limited. For example, bigbird requires random attention mechanism. in this case random attention must be implemented in the custom cuda kernel. However, because the number of models is so large, it is impossible to implement them all. So I propose a flexible way to fuse the kernel on a per-function. This is a strategy of triage. The area that can be fused performs fusion, and the area that can not be fused uses the torch's default module. 
```python # kernel_fusion_utils.py class KernelFusionMixin(object): def fuse_modules(...): assert self._is_able_to_fuse, "error message" ... implementation ... def fuse_layers(...) assert self._is_able_to_fuse, "error message" ... implementation ... ``` ```python # modeling_utils.py class PreTrainedModel(..., KernelFusionMixin): _is_parallelizable = ... _is_able_to_fuse = False. # <--- Only models that can be fused have `True`. ``` This is a draft. The API can be changed at any time. I look forward to feedback. I'm going to show you this soon with a framework I'm making. (Like parallelformers, we will pre-open the repositories on our side and merge them later on transformers and deepspeed.) cc. @Stas00 @RezaYazdaniAminabadi @Sylvain "
[ 30, 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Performance", "WIP" ]
"https://api.github.com/repos/huggingface/transformers/issues/9214"
" TITLE Save underlying BertModel only COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY I currently have a custom model on top of a pretrained BERT Model, which is effectively just dropout and a linear layer on top for classification. I want to be able to train this classifier, updating the underlying weights, but then to save the `BERTModel` underneath (w/o linear classification layer) so that I can load from file and use this as my input to the same custom model. Is there a way I can access and save the underlying transformer to then be used again like this? Code for clarification: ``` class BERTBinaryClassifier(torch.nn.Module): def __init__(self, model_name_or_path: str): super(BERTBinaryClassifier, self).__init__() self.bert = ModelSelection.get_model( model_name=model_name_or_path) self.drop = torch.nn.Dropout(p=0.3) self.out = torch.nn.Linear(self.bert.config.hidden_size, 2).cuda() def forward(self, inputs): logits, embs, *_ = self.bert(**inputs) output = self.drop(logits) return self.out(output), embs ``` Right now I'm just outputting the logits and hidden_state for future usage but I'd like to be able to use this same function to effectively load a `BertModel` like I do at the start here with `self.bert` (which just loads `BertModel.from_pretrained` and save my Pytorch classifier as a whole, separately. Would it be as simple as say accessing `self.bert.bert` and then saving it that way?"
[ 41 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "wontfix" ]
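The question in the row above boils down to the wrapper keeping the inner model reachable as an attribute, so the inner model can be saved on its own, independent of the classification head. A toy, stdlib-only sketch of that shape (the `InnerModel` class and its `save_pretrained` stub are stand-ins; with the real library the call would be `self.bert.save_pretrained(path)` on the transformers model):

```python
import json
import os
import tempfile

# Stand-in for the wrapped transformers model; only the API shape is assumed:
# an attribute on the wrapper plus a save_pretrained(path) method.
class InnerModel:
    def __init__(self, config):
        self.config = config

    def save_pretrained(self, path):
        # Persist the model's config, mimicking the directory layout idea.
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "config.json"), "w") as f:
            json.dump(self.config, f)


class BinaryClassifier:
    """Wrapper analogous to BERTBinaryClassifier: head on top, model inside."""
    def __init__(self, inner):
        self.bert = inner  # the underlying model stays reachable


clf = BinaryClassifier(InnerModel({"hidden_size": 768}))
save_dir = tempfile.mkdtemp()
clf.bert.save_pretrained(save_dir)  # save only the underlying model, not the head
```

Saving the head separately (e.g. via a plain state-dict dump) then reassembling later follows the same pattern: each part is reachable as its own attribute.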
"https://api.github.com/repos/huggingface/transformers/issues/12363"
 TITLE [examples] add `main_process_first` context manager to datasets map calls COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY We need to replay this addition that has been modelled in `run_translation.py` in https://github.com/huggingface/transformers/pull/12351 to all other pytorch examples. The actual changes for the model example are: https://github.com/huggingface/transformers/pull/12351/files#diff-09777f56cee1060a535a72ce99a6c96cdb7f330c8cc3f9dcca442b3f7768237a (just `run_translation.py`) Here is a time-saver: ``` find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(train_dataset = train_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="train dataset map pre-processing"):\n$p$t] } }' {} \; find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(eval_dataset = eval_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="validation dataset map pre-processing"):\n$p$t] } }' {} \; find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(predict_dataset = predict_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="prediction dataset map pre-processing"):\n$p$t] } }' {} \; git checkout examples/pytorch/translation/run_translation.py make fixup ``` I noticed other scripts may have other `datasets.map` calls, which get automatically rewritten by the scripts above, so please review the changes to see if the `desc` needs to be modified. But we want to use the context manager on all of these calls; it's possible that the perl rewrite scripts didn't catch some. 
- also this template needs to have this change as well: `templates/adding_a_new_example_script/\{\{cookiecutter.directory_name\}\}/run_\{\{cookiecutter.example_shortcut\}\}.py` This can be done via perl, manually, or whatever other way works for you. And please validate that the scripts still work, by either running: ``` RUN_SLOW=1 pytest examples/pytorch/test_examples.py ``` or running each script manually as explained in its corresponding `README.md` file. This issue is open to all and should be very simple to complete; the main effort is to validate. And thank you for your contribution! "
[ 52 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "Good First Issue" ]
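The wrapping that the perl one-liners above produce relies on the assumed semantics of `main_process_first`: the main rank does the `map` work inside the block, then a barrier releases the waiting ranks so they can reuse the cached result. A minimal single-process sketch of that semantics (the real implementation lives in `transformers` and coordinates via `torch.distributed`; here the barrier is a plain callable and a list just records the event order):

```python
from contextlib import contextmanager

log = []  # records the order of events for illustration

@contextmanager
def main_process_first(is_main, barrier):
    # Assumed semantics: follower ranks block on the barrier first; the main
    # rank runs the body, then hits the barrier to release the followers.
    if not is_main:
        barrier()
    try:
        yield
    finally:
        if is_main:
            barrier()

with main_process_first(is_main=True, barrier=lambda: log.append("barrier")):
    log.append("map")  # stands in for train_dataset = train_dataset.map(...)
```

On the main rank the body runs before the barrier fires, which is exactly why only one process ends up doing the expensive preprocessing.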
"https://api.github.com/repos/huggingface/transformers/issues/8570"
" TITLE [T5] Add open / closed book answering models COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 3 rocket: 0 eyes: 0 BODY # 🌟 New model addition ## Model description Check here: https://github.com/google-research/google-research/tree/master/t5_closed_book_qa <!-- Important information --> ## Open source status * [ ] the model implementation is available: (give details) * [ ] the model weights are available: (give details) * [ ] who are the authors: (mention them, if possible by @gh-username) "
[ 39 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "New model" ]
"https://api.github.com/repos/huggingface/transformers/issues/8630"
 TITLE Create README.md COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 --> "
[ 32 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "model card" ]
"https://api.github.com/repos/huggingface/transformers/issues/12263"
" TITLE Add VisualBERT demo notebook COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY In continuation with #10534, this PR adds demo for VisualBERT model. I am planning to base it on the `LXMERT` examples, hence the copy-paste of files for now."
[ 3 ]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "WIP" ]
End of preview (truncated to 100 rows)
