Dataset columns:
repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
86
closed
code in run_squad.py line 263
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
    input_ids.append(0)
    input_mask.append(0)
    segment_ids.append(0)

In the segment_ids array, 1 indicates a token from the passage and 0 indicates a token from the query. When padding, why is segment_ids filled with 0, which represents the query?
12-04-2018 11:08:09
12-04-2018 11:08:09
![image](https://user-images.githubusercontent.com/11830865/49438135-61d37c00-f7f8-11e8-8b2a-a7222bd30f0e.png) <|||||>Hi, what is your question?<|||||>Strictly speaking, the zero-padding in segment_ids leads to ambiguous tensor entries, because 0 can mean both "first sentence" (or query in another task?) and "padding". But in practice this isn't a problem because anything related to padding gets masked out later.
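For illustration, a minimal sketch (not the script's actual code, though the padding loop mirrors the snippet above) of why the 0s written into segment_ids during padding are harmless: the input mask is 0 at those positions, so they are masked out of the attention and the loss and are never interpreted as query tokens.
```python
import torch

max_seq_length = 8
input_ids   = [101, 2054, 2003, 102, 1996, 3437, 102]  # [CLS] query tokens [SEP] passage tokens [SEP]
segment_ids = [0,   0,    0,    0,   1,    1,    1]    # 0 = query, 1 = passage
input_mask  = [1] * len(input_ids)

# Zero-pad up to the sequence length, exactly as in the quoted snippet.
while len(input_ids) < max_seq_length:
    input_ids.append(0)
    input_mask.append(0)
    segment_ids.append(0)

# Positions where input_mask == 0 are ignored downstream, so the segment id
# value stored there never matters.
padding_positions = (torch.tensor(input_mask) == 0).nonzero().squeeze(-1)
print(padding_positions)  # tensor([7]) -> only the padded position
```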
transformers
85
closed
How to use pre-trained SQUAD model?
After training SQuAD, I have a model file in a local folder:
```
-rw-rw-r-- 1 khashab2 cs_danr 4.7M Nov 21 19:20 dev-v1.1.json
-rw-rw-r-- 1 khashab2 cs_danr 3.4K Nov 29 22:52 evaluate-v1.1.py
drwxrwsr-x 2 khashab2 cs_danr   10 Nov 30 14:57 out2
-rw-rw-r-- 1 khashab2 cs_danr  29M Nov 21 19:20 train-v1.1.json
-rw-rw-r-- 1 khashab2 cs_danr 490M Nov 29 23:14 train-v1.1.json_bert-base-uncased_384_128_64
-rw-rw-r-- 1 khashab2 cs_danr 490M Nov 30 15:05 train-v1.1.json_bert-large-uncased_384_128_64
```
I want to use this pre-trained model to make predictions. Is there an example I can follow for this? (If not, any pointers?) I looked through the instructions and didn't find anything relevant.
12-04-2018 03:13:30
12-04-2018 03:13:30
Hi, there are now examples of how to save and reload the models in the example scripts (`run_classifier`, `run_squad` and `run_swag`).
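For reference, a minimal sketch of that save/reload pattern, assuming the fine-tuned weights were written with `torch.save(model.state_dict(), ...)`; the path `out2/pytorch_model.bin` is an assumption about the output directory, not something shown in the listing above.
```python
import torch
from pytorch_pretrained_bert import BertForQuestionAnswering

# Rebuild the architecture, then load the fine-tuned weights on top of it.
model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
state_dict = torch.load('out2/pytorch_model.bin', map_location='cpu')  # assumed checkpoint path
model.load_state_dict(state_dict)
model.eval()  # ready for prediction, e.g. the --do_predict branch of run_squad.py
```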
transformers
84
closed
elementwise_mean -> mean (thinking ahead to pytorch 1.0)
Under the PyTorch 1.0 nightly this test generates ``` UserWarning: reduction='elementwise_mean' is deprecated, please use reduction='mean' instead. ``` so this PR fixes that.
12-03-2018 23:59:40
12-03-2018 23:59:40
oops, doesn't work under current pytorch, never mind
transformers
83
closed
Error while running example
Hi! I have a problem when running the example, could you please give me a hint on what may I be doing wrong? I use: `PYTHONPATH=. python examples/run_classifier.py --task_name MNLI --do_train --do_eval --do_lower_case --data_dir ../GLUE-baselines/glue_data/MNLI/ --bert_model bert-base-uncased --max_seq_len 40 --train_batch_size 10 --output_dir mnli/` And obtain: ``` ... 12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - label: entailment (id = 1) 12/03/2018 21:11:10 - INFO - __main__ - *** Example *** 12/03/2018 21:11:10 - INFO - __main__ - guid: train-3 12/03/2018 21:11:10 - INFO - __main__ - tokens: [CLS] how do you know ? all this is their information again . [SEP] this information belongs to them . [SEP] 12/03/2018 21:11:10 - INFO - __main__ - input_ids: 101 2129 2079 2017 2113 1029 2035 2023 2003 2037 2592 2153 1012 102 2023 2592 7460 2000 2068 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - label: entailment (id = 1) 12/03/2018 21:11:10 - INFO - __main__ - *** Example *** 12/03/2018 21:11:10 - INFO - __main__ - guid: train-4 12/03/2018 21:11:10 - INFO - __main__ - tokens: [CLS] yeah i tell you what though if you go price some of those tennis shoes i can see why now you know they ' re getting up in [SEP] the tennis shoes have a range of prices . [SEP] 12/03/2018 21:11:10 - INFO - __main__ - input_ids: 101 3398 1045 2425 2017 2054 2295 2065 2017 2175 3976 2070 1997 2216 5093 6007 1045 2064 2156 2339 2085 2017 2113 2027 1005 2128 2893 2039 1999 102 1996 5093 6007 2031 1037 2846 1997 7597 1012 102 12/03/2018 21:11:10 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 12/03/2018 21:11:10 - INFO - __main__ - label: neutral (id = 2) 12/03/2018 21:14:39 - INFO - __main__ - ***** Running training ***** 12/03/2018 21:14:39 - INFO - __main__ - Num examples = 392702 12/03/2018 21:14:39 - INFO - __main__ - Batch size = 10 12/03/2018 21:14:39 - INFO - __main__ - Num steps = 117810 Epoch: 0%| | 0/3 [00:00<?, ?it/sTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/generic/THCTensorMath.cu line=26 error=59 : device-side assert triggered | 0/39271 [00:00<?, ?it/s] /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed. 
/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "examples/run_classifier.py", line 637, in <module> main() File "examples/run_classifier.py", line 558, in main loss.backward() File "/home/kchledowski/anaconda2/envs/glue/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/kchledowski/anaconda2/envs/glue/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/generic/THCTensorMath.cu:26 ``` I would be very grateful for any suggestions where to look. Thanks!
12-03-2018 20:21:12
12-03-2018 20:21:12
Hi! In case you haven't already, modifying the source at https://github.com/huggingface/pytorch-pretrained-BERT/blob/e60e8a606837ff7f49e583de8492e55575155eb6/examples/run_classifier.py#L491 and turning it into `cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(args.local_rank), num_labels = 3)` should get your fine-tuning started (you have three labels, `["contradiction", "entailment", "neutral"]`)<|||||>Thanks, it worked. I think it could be great if 3 classes were the default when choosing MNLI :)
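The same fix in isolation, as a hedged sketch (the cache_dir argument is omitted here for brevity): construct the classification head with three labels, since with the default of two the MNLI label id 2 falls outside the classifier's output range, which is what triggers the device-side assert above.
```python
from pytorch_pretrained_bert import BertForSequenceClassification

# MNLI labels: ["contradiction", "entailment", "neutral"] -> num_labels=3
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=3)
```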
transformers
82
closed
AttributeError: 'tuple' object has no attribute 'backward'
Traceback (most recent call last): | 0/11 [00:00<?, ?it/s] File "examples/run_classifier.py", line 637, in <module> main() File "examples/run_classifier.py", line 558, in main loss.backward() AttributeError: 'tuple' object has no attribute 'backward'
12-03-2018 16:06:20
12-03-2018 16:06:20
Looks like there was a code change which changed the forward method of the model involved here from returning a tensor to returning a tuple of tensors and the example hasn't been updated yet to reflect that change. There's probably a line in run_classifier.py like ```Python loss = model(input...) ``` which now needs to be ```Python loss, something_else = model(input...) ```<|||||>> Looks like there was a code change which changed the forward method of the model involved here from returning a tensor to returning a tuple of tensors and the example hasn't been updated yet to reflect that change. There's probably a line in run_classifier.py like > > ```python > loss = model(input...) > ``` > > which now needs to be > > ```python > loss, something_else = model(input...) > ``` You are right! Thx!
transformers
81
closed
There is a problem in supporting continued training
I changed run_classifier.py in order to support continued training. I save model.state_dict() and the BertAdam optimizer.state_dict(), and I load them when restarting training. However, after some epochs the loss increases little by little and finally ends with a large value. I do not know the reason. Please help me.
12-03-2018 12:00:09
12-03-2018 12:00:09
Hi @ZacharyWaseda, continuous training is an open research problem. You should rather look for solutions in the papers/workshops/conferences discussing research in this field. This is not my personal field of expertise, so I can only direct you to Google and other search engines for more information.
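Independent of the research question, a minimal sketch of the save/resume mechanics described above (the file name and checkpoint keys are assumptions); note that BertAdam's warmup/decay schedule depends on its internal step counts, so the optimizer state should be saved alongside the model.
```python
import torch

# Save a checkpoint (e.g. at the end of each epoch).
torch.save({
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),  # BertAdam state, including per-parameter step counts
    'epoch': epoch,
}, 'checkpoint.pt')

# Resume later.
ckpt = torch.load('checkpoint.pt', map_location='cpu')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])
start_epoch = ckpt['epoch'] + 1
```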
transformers
80
closed
How can I apply BERT to a cloze task?
Hi, I have a dataset like: "From Monday to Friday most people are busy working or studying, but in the evenings and weekends they are free and _ themselves." There are four candidates for the missing blank: ["love", "work", "enjoy", "play"], and here "enjoy" is the correct answer. It is a cloze-style task, and it looks like the masked LM in BERT; the difference is that I don't want to search for the candidate over all tokens, only over the four given candidates. How can I do this? It looks like a negative-sampling method. Do you have any idea? Thank you!
12-03-2018 10:58:43
12-03-2018 10:58:43
I think that your best option would be to use the masked language modeling head and restrict the output of the softmax layer to your candidates. I think the following code does the job:
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = 'From Monday to Friday most people are busy working or studying, '\
       'but in the evenings and weekends they are free and _ themselves.'
tokenized_text = tokenizer.tokenize(text)

masked_index = tokenized_text.index('_')
tokenized_text[masked_index] = '[MASK]'

candidates = ['love', 'work', 'enjoy', 'play']
candidates_ids = tokenizer.convert_tokens_to_ids(candidates)

indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0] * len(tokenized_text)

tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

language_model = BertForMaskedLM.from_pretrained('bert-base-uncased')
language_model.eval()

predictions = language_model(tokens_tensor, segments_tensors)
predictions_candidates = predictions[0, masked_index, candidates_ids]
answer_idx = torch.argmax(predictions_candidates).item()

print(f'The most likely word is "{candidates[answer_idx]}".')
```
When run, this code prints:
```
The most likely word is "enjoy".
```<|||||>The solution of @rodgzilla looks good. Don't hesitate to re-open the issue if you have other questions.<|||||>Just a note that this solution does not help you if any of your candidates are out of your model's whole-word vocabulary. (A work-around is required to deal with BERT's reliance on word-piece tokens.)<|||||>> I think that your best option would be to use the masked language modeling head and restrict the output of the softmax layer to your candidates. I think the code above does the job. [...] When run, this code prints: "The most likely word is "enjoy"." Thanks, your solution is good.
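One possible work-around for candidates that fall outside the whole-word vocabulary, as a hedged sketch (not from this thread): split each candidate into word pieces, insert that many [MASK] tokens, and sum the log-probabilities of the pieces at those positions. Scores for candidates with different piece counts are not strictly comparable, so treat this as a heuristic.
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

prefix = ['[CLS]'] + tokenizer.tokenize('in the evenings and weekends they are free and')
suffix = tokenizer.tokenize('themselves.') + ['[SEP]']

def candidate_score(candidate):
    pieces = tokenizer.tokenize(candidate)              # possibly several word pieces
    tokens = prefix + ['[MASK]'] * len(pieces) + suffix
    ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids), dim=-1)[0]
    piece_ids = tokenizer.convert_tokens_to_ids(pieces)
    positions = range(len(prefix), len(prefix) + len(pieces))
    return sum(log_probs[pos, pid].item() for pos, pid in zip(positions, piece_ids))

scores = {c: candidate_score(c) for c in ['love', 'work', 'enjoy', 'play']}
print(max(scores, key=scores.get))
```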
transformers
79
closed
numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1
Hello, when I am running run_classifier.py with the MRPC dataset, there seems to be a mistake. The error is the following: <img width="752" alt="default" src="https://user-images.githubusercontent.com/29532760/49360256-9de0e100-f713-11e8-9a5c-d9f2bc5331e6.PNG"> The error happens when training is over and the model is being evaluated:
```
with torch.no_grad():
    tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids)
```
Here I found the size of logits is []. I'm using Python 3.5 and torch 0.4.1, and I don't know how to fix it.
12-03-2018 07:56:56
12-03-2018 07:56:56
Hi, just update the repo to the current master; this should have been fixed this weekend (re-open the issue if it's not).
transformers
78
closed
TypeError: object of type 'WindowsPath' has no len()
Hi, when I run "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')", the error "TypeError: object of type 'WindowsPath' has no len()" occurs, what is the problem? Thank you for your excellent code!
12-02-2018 12:03:51
12-02-2018 12:03:51
Can you post a more detailed log?<|||||>I install your PyTorch pretrained bert with pip like "pip install pytorch-pretrained-bert", then I run the code in Usage section like: `import torch` `from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM` `# Load pre-trained model tokenizer (vocabulary)` `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')` but there is an error occurs, the error information is: Traceback (most recent call last): File "<ipython-input-2-7725148c607d>", line 5, in <module> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') File "C:\Users\Deep\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py", line 117, in from_pretrained resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir) File "C:\Users\Deep\Anaconda3\lib\site-packages\pytorch_pretrained_bert\file_utils.py", line 88, in cached_path return get_from_cache(url_or_filename, cache_dir) File "C:\Users\Deep\Anaconda3\lib\site-packages\pytorch_pretrained_bert\file_utils.py", line 169, in get_from_cache os.makedirs(cache_dir, exist_ok=True) File "C:\Users\Deep\Anaconda3\lib\os.py", line 226, in makedirs head, tail = path.split(name) File "C:\Users\Deep\Anaconda3\lib\ntpath.py", line 204, in split d, p = splitdrive(p) File "C:\Users\Deep\Anaconda3\lib\ntpath.py", line 139, in splitdrive if len(p) >= 2: TypeError: object of type 'WindowsPath' has no len()<|||||>Strange error. I am only using standard library here. Maybe it has something to do with your installation of Conda. You can try to manually specify a cache directory for the package by either: - setting the environment variable `PYTORCH_PRETRAINED_BERT_CACHE=XXX` to a directory `XXX` you created to store the downloaded models. - sending the path to this directory to the tokenizer and model using the `cache_dir=XXX` arguments, for example: `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir=XXX)`<|||||>I follow your second instruction and change the code to: `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir='C:/Users/Deep/Anaconda3/Lib/site-packages')` It works! Thank you!
transformers
77
closed
Correct assignement for logits in classifier example
I tried to address https://github.com/huggingface/pytorch-pretrained-BERT/issues/76. It should be correct, but there's likely a more efficient way.
12-02-2018 11:38:51
12-02-2018 11:38:51
Ok thanks, that should work for now. I simplified the output of the classes indeed (only send back loss when a label is provided) so this example broke.
transformers
76
closed
Wrong signature in model call in run_classifier.py example (?)
I think that https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L608 may well have a problem, as it's not consistent with https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L549 nor with https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/pytorch_pretrained_bert/modeling.py#L875 and this currently breaks the example. One quick patch would be to replace that line with
```
tmp_eval_loss = model(input_ids, segment_ids, input_mask, label_ids)
logits = model(input_ids, segment_ids, input_mask)
```
But I am not so sure, there are likely better ways.
12-01-2018 19:34:40
12-01-2018 19:34:40
You are right, I also encountered this small error.<|||||>Thanks for noticing, fixed in #77.
transformers
75
closed
Point typo fix
12-01-2018 00:07:09
12-01-2018 00:07:09
transformers
74
closed
Update finetuning example in README adding --do_lower_case
Should be consistent with the fact that an uncased model is used
12-01-2018 00:06:52
12-01-2018 00:06:52
Indeed
transformers
73
closed
Third release
This third release comprises the following updates:
- added the two new pre-trained models from Google: `bert-large-cased` and `bert-multilingual-cased`,
- added a model for token-level classification: `BertForTokenClassification`,
- added tests for every model class, with and without labels,
- fixed the tokenizer loading function `BertTokenizer.from_pretrained()` when loading from a directory containing a pretrained model,
- fixed typos in model docstrings and completed the docstrings,
- improved examples (added `do_lower_case` arguments).
11-30-2018 22:10:22
11-30-2018 22:10:22
transformers
72
closed
Fix internal hyperlink typo
Fix #tup to #tpu
11-30-2018 21:13:47
11-30-2018 21:13:47
transformers
71
closed
run_squad script gets stuck
Hello, I am trying to run the squad fine tuning script, but it hangs after printing out a few predictions. I am attaching the log. Can you help take a look? I am running the script on a machine with 8 M40s. [bert_squad.log](https://github.com/huggingface/pytorch-pretrained-BERT/files/2634588/bert_squad.log) Best, Samyam
11-30-2018 18:39:54
11-30-2018 18:39:54
Never mind, it just needed time to process the examples. It might be good to have the progress bar inside convert_examples_to_features.<|||||>Maybe try distributed training? I don't think PyTorch `DataParallel` will be very efficient on 8 GPUs due to the Python GIL.<|||||>Thanks for the suggestion. I will try that. Currently, it's showing me about 9 hours to fine-tune bert-large on SQuAD with a batch size of 32 using DataParallel. The performance improves quite a bit if I use a batch size of 256 with gradient accumulation, which makes sense as this reduces the frequency of communication of the gradients. A question I have is: does the learning rate adapt automatically to the batch size being used? Have you tried larger batch sizes?
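A hedged sketch of the gradient-accumulation pattern mentioned above (an effective batch of 256 built from per-step batches of 32); variable names are illustrative rather than copied from run_squad.py. Note the learning rate is not adapted automatically: if you change the effective batch size, you typically need to retune it yourself.
```python
accumulation_steps = 8  # 32 examples per forward pass * 8 = effective batch size of 256

optimizer.zero_grad()
for step, batch in enumerate(train_dataloader):
    input_ids, input_mask, segment_ids, start_positions, end_positions = batch
    loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
    (loss / accumulation_steps).backward()  # scale so gradients average over the large batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()      # one weight update per 256 examples
        optimizer.zero_grad()
```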
transformers
70
closed
fix typo in input for masked lm loss function
Fixing #55 . There was still a typo.
11-30-2018 15:56:00
11-30-2018 15:56:00
thanks
transformers
69
closed
cannot access to pretrained vocab file on S3
Hi, thanks for developing this well-made PyTorch version of BERT. Unfortunately, the pretrained vocab files are not reachable. The error traceback is below. > File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/tokenization.py", line 124, in from_pretrained resolved_vocab_file = cached_path(vocab_file) File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path return get_from_cache(url_or_filename, cache_dir) File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/file_utils.py", line 178, in get_from_cache .format(url, response.status_code)) OSError: HEAD request failed for url https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt with status code 404
11-30-2018 13:57:03
11-30-2018 13:57:03
I have the same issue. > OSError: HEAD request failed for url https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt with status code 404 It would be nice to be able to cache the vocab files as well as the model weights out of the box.<|||||>I found temporary solution for this issue. `BertTokenizer.from_pretrained` method accepts local file instead of model_name ex) `BertTokenizer.from_pretrained('/dir/to/vocab/bert-base-uncased-vocab.txt')` vocab txt file can be downloaded from [google bert repo](https://github.com/google-research/bert#pre-trained-models).<|||||>The files are back. Sorry, wrong manipulation while adding the new models.<|||||>> I found temporary solution for this issue. > `BertTokenizer.from_pretrained` method accepts local file instead of model_name > ex) `BertTokenizer.from_pretrained('/dir/to/vocab/bert-base-uncased-vocab.txt')` > Well, this solution doesn't seem to be working now, I get `OSError: Model name 'path/to/model/vocab.txt' was not found in tokenizers model name list (bart-/model/large, bart-large-mnli, bart-large-cnn, bart-large-xsum). We assumed 'path/to/model/vocab.txt' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. `<|||||>> I found temporary solution for this issue. > `BertTokenizer.from_pretrained` method accepts local file instead of model_name > ex) `BertTokenizer.from_pretrained('/dir/to/vocab/bert-base-uncased-vocab.txt')` > > vocab txt file can be downloaded from [google bert repo](https://github.com/google-research/bert#pre-trained-models). Hi, I add this file, however I got another error: *** json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1) Any help please?
transformers
68
closed
Accuracy on classification task is lower than the official tensorflow version
Hi, I am running the same task with the same hyperparameters as the official Google TensorFlow implementation of BERT; however, I am getting around 1.5% lower accuracy. Can you please give any hint about the possible cause? Thanks!
11-30-2018 06:30:56
11-30-2018 06:30:56
Hi! Could it be different seeds? See e.g. https://github.com/huggingface/pytorch-pretrained-BERT/issues/53#issuecomment-441565229<|||||>Hi @ejld, yes BERT has a large variance on many fine-tuning tasks (see also the discussion in #64). You should try a bunch of different seeds (like 10 seeds for example) and compare the mean and standard deviation of the results.
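A hedged sketch of such a seed sweep (the seed list is arbitrary and `run_finetuning` is a hypothetical helper wrapping one fine-tuning run of the example script, e.g. passing `--seed`):
```python
import numpy as np

results = []
for seed in [12, 42, 111, 1234, 2018]:  # arbitrary seeds
    acc = run_finetuning(seed=seed)     # hypothetical helper: fine-tune once, return eval accuracy
    results.append(acc)

print(f"mean={np.mean(results):.4f}  std={np.std(results):.4f}")
```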
transformers
67
closed
`TypeError: object of type 'NoneType' has no len()` when tuning on squad
When running the following command for tuning on squad, I am getting a petty error inside logger `TypeError: object of type 'NoneType' has no len()`. Any thoughts what could be the main cause of the problem? Full log: ``` python3.6 examples/run_squad.py \ > --bert_model bert-base-uncased \ > --do_train \ > --do_predict \ > --train_file $SQUAD_DIR/train-v1.1.json \ > --predict_file $SQUAD_DIR/dev-v1.1.json \ > --train_batch_size 12 \ > --learning_rate 3e-5 \ > --num_train_epochs 2.0 \ > --max_seq_length 384 \ > --doc_stride 128 \ > --output_dir out . . . 11/29/2018 23:10:14 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11/29/2018 23:10:14 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11/29/2018 23:10:14 - INFO - __main__ - start_position: 47 11/29/2018 23:10:14 - INFO - __main__ - end_position: 48 11/29/2018 23:10:14 - INFO - __main__ - answer: the 1870s 11/29/2018 23:14:38 - INFO - __main__ - Saving train features into cached file /shared/shelley/khashab2/pytorch-pretrained-BERT/squad/train-v1.1.json_bert-base-uncased_384_128_64 11/29/2018 23:14:51 - INFO - __main__ - ***** Running training ***** 11/29/2018 23:14:51 - INFO - __main__ - Num orig examples = 87599 Traceback (most recent call last): File "examples/run_squad.py", line 989, in <module> main() File "examples/run_squad.py", line 884, in main logger.info(" Num split examples = %d", len(train_features)) TypeError: object of type 'NoneType' has no len() ```
11-30-2018 05:48:04
11-30-2018 05:48:04
Oh I see, this should be fixed in `master` by 257a35134a1bd378b16aa985ee76675289ff439c; just update your repo please.
transformers
66
closed
speedup by truncating unused part
11-29-2018 14:56:39
11-29-2018 14:56:39
Hi Mathis, Thanks for that. I think it's better for the user to send inputs that they truncated themselves rather than doing that hidden inside the model. Best, Thomas
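For users doing that truncation themselves, a minimal sketch (tensor names are illustrative): shrink a padded batch to its longest real sequence before the forward pass, which is essentially what this PR did inside the model.
```python
# input_ids, input_mask, segment_ids: LongTensors of shape (batch_size, max_seq_length)
seq_len = int(input_mask.sum(dim=1).max().item())  # longest non-padded length in this batch
input_ids = input_ids[:, :seq_len]
input_mask = input_mask[:, :seq_len]
segment_ids = segment_ids[:, :seq_len]
```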
transformers
65
closed
3 sentences as input for BertForSequenceClassification?
Hi there, thanks for releasing this awesome repo; it does lots of people like me a great favor. So far I've tried the sentence-pair BertForSequenceClassification task, and it indeed works. I'd like to know if it is possible to use BertForSequenceClassification to model a three-sentence classification problem whose input can be described as below: **[CLS]A[SEP]B[SEP]C[SEP]** Expecting your reply! Thanks & Regards
11-29-2018 09:18:21
11-29-2018 09:18:21
Technically it is possible but BERT was not pretrained to handle multiple SEP tokens between sentences and does not have a third token_type, so I think it won't be easy to make it work. You may also want to use a new token for the second separation.<|||||>> Technically it is possible but BERT was not pretrained to handle multiple SEP tokens between sentences and does not have a third token_type, so I think it won't be easy to make it work. You may also want to use a new token for the second separation. Hi artemisart, Thanks for your reply. So, if someone wants to take multiple sentences as input of BertForSequenceClassification, let's say a whole passage, an alternative way is to concatenate them into a single "sentence" and then fit it in, right?<|||||>If you don't have a separation (like question/answer) then yes you can just concatenate them (but you are still limited to 512 tokens).<|||||>@mikelkl I would also go with the solution and answer of @artemisart.<|||||>@artemisart hi, if I have a single-sentence classification task, should the max length of the sentence be limited to half of 512, that is to say 256?<|||||>No, it will be better if you use the full 512 tokens.<|||||>Wouldn't concatenating the whole passage into a single sentence mean losing the context of each sentence? @artemisart <|||||>No it shouldn't <|||||>What if I want to check on a huge corpus, such that even concatenating into one sentence exceeds the 512-token limit? @artemisart<|||||>@thedrowsywinger maybe you should try Transformer-XL<|||||>> If you don't have a separation (like question/answer) then yes you can just concatenate them (but you are still limited to 512 tokens). I have 3 inputs; 1 of the inputs contains a conversation (QUERY, ANSWER). QUERY: I want to ask a question. > ANSWER: Sure, ask away. > QUERY: How is the weather today? > ANSWER: It is nice and sunny. > QUERY: Okay, nice to know. > ANSWER: Would you like to know anything else? How can I tell the model to separate the turns of the conversation? The model is a classification model. I was thinking of adding a new special token <EOT> between the turns but could not get it to work.
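A hedged sketch of the concatenation approach suggested above: keep segment 0 for the first sentence, merge the remaining sentences into segment 1 (BERT only has two token types), and truncate everything to the 512-token limit.
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

sent_a, sent_b, sent_c = "First sentence.", "Second sentence.", "Third sentence."
tokens_a = tokenizer.tokenize(sent_a)
tokens_bc = tokenizer.tokenize(sent_b + " " + sent_c)   # B and C merged into one segment

max_len = 512
tokens = (['[CLS]'] + tokens_a + ['[SEP]'] + tokens_bc + ['[SEP]'])[:max_len]
segment_ids = ([0] * (len(tokens_a) + 2) + [1] * (len(tokens_bc) + 1))[:max_len]
input_ids = tokenizer.convert_tokens_to_ids(tokens)
```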
transformers
64
closed
Feature extraction for sequential labelling
Hi, I have a question about using BERT for a sequence labeling task. Please correct me if I'm wrong. My understanding is:
1. Use BertModel loaded with pretrained weights instead of MaskedBertModel.
2. In that case, taking a sequence of tokens as input, BertModel outputs a list of hidden states; I only use the top-layer hidden states as the embedding for that sequence.
3. Then, to fine-tune the model, add a linear fully connected layer and softmax to make the final decision.
Is this entire process correct? I followed this procedure but could not get any results. Thank you!
11-29-2018 03:33:09
11-29-2018 03:33:09
Well that seems like a good approach. Maybe you can find some inspiration in the code of the `BertForQuestionAnswering` model? It is not exactly what you are doing but maybe it can help.<|||||>Thanks. It worked. However, a interesting issue about BERT is that it's highly sensitive to learning rate, which makes it very difficult to combine with other models<|||||>@zhaoxy92 what sequence labeling task are you doing? I've got CoNLL'03 NER running with the ``bert-base-cased`` model, and also found the same sensitivity to hyper-parameters. The best dev F1 score i've gotten after ~~half a day~~ a day of trying some parameters is ~~92.4~~ 94.6, which is a bit lower than the 96.4 dev score for BERT_base reported in the paper. I guess more tuning will increase the score some more. The best configuration for me so far is: - Batch size: 160 (on four P40 GPUs with 24GB RAM each). Smaller batch sizes that fit on one or two GPUs give bad results. - Optimizer: Adam with learning rate 1e-4. Tried BertAdam with learning rate 1e-5, but it didn't seem to converge. - fp16/fp32: Only fp32 works. Tried fp16 (half precision) to allow larger batch sizes, but this gave really low scores, with and without loss scaling. Also, properly averaging the loss is important: Not just ``loss /= batch_size``. You need to take into account padding and word pieces without predictions (https://github.com/google-research/bert/issues/33#issuecomment-436726952). If you have a mask tensor that indicates which bert inputs correspond to tagged tokens, then the proper averaging is ``loss /= mask.float().sum`` Another tip, truncating the input (https://github.com/huggingface/pytorch-pretrained-BERT/pull/66) enables much larger batch sizes. Without it the largest possible batch size was 56, but with truncating 160 is possible.<|||||>I am also working on CoNLL03. Similar results as you got.<|||||>@bheinzerling with the risk of going off topic here, would you mind sharing your code? I'd love to read and adapt it for a similar sequential classification task.<|||||>I have some code for preparing batches here: https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98 The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff. With this, feature extraction for each sentence, i.e. a list of tokens, is simply: ```Python bert = dougu.bert.Bert.Model("bert-base-cased") featurized_sentences = [] for tokens in sentences: features = {} features["bert_ids"], features["bert_mask"], features["bert_token_starts"] = bert.subword_tokenize_to_ids(tokens) featurized_sentences.append(features) ``` Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches: ```Python def collate_fn(featurized_sentences_batch): bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in ("bert_ids", "bert_mask", "bert_token_starts")] return bert_batch ``` A simple sequence tagger module would look something like this: ```Python class SequenceTagger(torch.nn.Module): def __init__(self, data_parallel=True): bert = BertModel.from_pretrained("bert-base-cased").to(device=torch.device("cuda")) if data_parallel: self.bert = torch.nn.DataParallel(bert) else: self.bert = bert bert_dim = 786 # (or get the dim from BertEmbeddings) n_labels = 5 # need to set this for your task self.out = torch.nn.Linear(bert_dim, n_labels) ... # droput, log_softmax... 
def forward(self, bert_batch, true_labels): bert_ids, bert_mask, bert_token_starts = bert_batch # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item() if max_length < bert_ids.shape[1]: bert_ids = bert_ids[:, :max_length] bert_mask = bert_mask[:, :max_length] segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1] # select the states representing each token start, for each instance in the batch bert_token_reprs = [ layer[starts.nonzero().squeeze(1)] for layer, starts in zip(bert_last_layer, bert_token_starts)] # need to pad because sentence length varies padded_bert_token_reprs = pad_sequence( bert_token_reprs, batch_first=True, padding_value=-1) # output/classification layer: input bert states and get log probabilities for cross entropy loss pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs))) mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else loss = cross_entropy(pred_logits, true_labels) # average/reduce the loss according to the actual number of of predictions (i.e. one prediction per token). loss /= mask.float().sum() return loss ``` Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started.<|||||>@bheinzerling Thanks a lot for the starter, got awesome results!<|||||>Thanks for sharing these tips here! It helps a lot. I tried to finetune BERT on multiple imbalanced datasets and found the result quite unstable... For an imbalanced dataset, I mean there are much more O labels than the others under the {B,I,O} tagging scheme. Tried weighted cross-entropy loss but the performance is still not as expected. Has anyone met the same issue? Thanks!<|||||>Hi~@bheinzerling I uesd batch size=16, and lr=2e-5, get the dev F1=0.951 and test F1=0.914 which lower than ELMO. What about your result now? <|||||>@kugwzk I didn't do any more CoNLL'03 runs since the numbers reported in the BERT paper were apparently achieved by using document context, which is different from the standard sentence-based evaluation. You can find more details here: https://github.com/allenai/allennlp/pull/2067#issuecomment-443961816<|||||>Hmmm...I think they should tell that in the paper...And do you know where to find that they used document context?<|||||>That's what the folks over at allennlp said. I don't know where they got this information, maybe personal communication with one of the BERT authors?<|||||>Anyway, thank you very much for tell me that.<|||||>https://github.com/kamalkraj/BERT-NER Replicated results from BERT paper<|||||>https://github.com/JianLiu91/bert_ner gives a solution that is very easy to understand. However, I still wonder whether is the best practice.<|||||>Hi all, I am trying to train the BERT model on some data that I have. However, I am having trouble understanding how to adjust the labels following tokenization. 
I am trying to perform word level classification (similar to NER) If I have the following tokenized sentence and its' labels: ``` original_tokens = ['The', <start>', 'eng-30-01258617-a', '<end>', 'frailty'] original_labels = [0, 2, 3, 4, 1] ``` Then after using the BERT tokenizer I get the following: `bert_tokens = ['[CLS]', 'the', '<start>', 'eng-30-01258617-a', '<end>', 'frail', '##ty', '[SEP]']` Also, I adjust my label array as follows: `bert_labels = [0, 2, 3, 4, 1, 1]` **N.B**. Tokens such as eng-30-01258617-a are not tokenized further as I included an ignore list which contains words and tokens that I do not want tokenized and I swapped them with the [unusedXXX] tokens found in the vocab.txt file. Notice how the last word 'frailty' is transformed into ['frail', '##ty'] and the label '1' which was used for the whole word is now placed under each word piece. Is this the correct way of doing it? If you would like a more in-depth explanation of what I am trying to achieve you can read the following: https://stackoverflow.com/questions/56129165/how-to-handle-labels-when-using-the-berts-wordpiece-tokenizer Any help would be greatly appreciated! Thanks in advance<|||||>@dangal95, adjusting the original labels is probably not the best way. A simpler method that works well is described in this issue, here https://github.com/huggingface/pytorch-pretrained-BERT/issues/64#issuecomment-443703063<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@nijianmo Hi, I am recently considering using weighted loss in NER task. I wonder if you have tried weighted crf or weighted softmax in pytorch implementation. If so, did you get a good performance ? Thanks in advance.<|||||>Many thanks to @bheinzerling! For those who may concern , I've implemented a NER model based on pytorch-transformers and @bheinzerling's idea, which might help you get a quick start on it. Welcome to check [this](https://github.com/weizhepei/BERT-NER) out.<|||||>> I have some code for preparing batches here: > > https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98 > > The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff. > > With this, feature extraction for each sentence, i.e. 
a list of tokens, is simply: > > ```python > bert = dougu.bert.Bert.Model("bert-base-cased") > featurized_sentences = [] > for tokens in sentences: > features = {} > features["bert_ids"], features["bert_mask"], features["bert_token_starts"] = bert.subword_tokenize_to_ids(tokens) > featurized_sentences.append(features) > ``` > > Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches: > > ```python > def collate_fn(featurized_sentences_batch): > bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in ("bert_ids", "bert_mask", "bert_token_starts")] > return bert_batch > ``` > > A simple sequence tagger module would look something like this: > > ```python > class SequenceTagger(torch.nn.Module): > def __init__(self, data_parallel=True): > bert = BertModel.from_pretrained("bert-base-cased").to(device=torch.device("cuda")) > if data_parallel: > self.bert = torch.nn.DataParallel(bert) > else: > self.bert = bert > bert_dim = 786 # (or get the dim from BertEmbeddings) > n_labels = 5 # need to set this for your task > self.out = torch.nn.Linear(bert_dim, n_labels) > ... # droput, log_softmax... > > def forward(self, bert_batch, true_labels): > bert_ids, bert_mask, bert_token_starts = bert_batch > # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM > max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item() > if max_length < bert_ids.shape[1]: > bert_ids = bert_ids[:, :max_length] > bert_mask = bert_mask[:, :max_length] > > segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence > bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1] > # select the states representing each token start, for each instance in the batch > bert_token_reprs = [ > layer[starts.nonzero().squeeze(1)] > for layer, starts in zip(bert_last_layer, bert_token_starts)] > # need to pad because sentence length varies > padded_bert_token_reprs = pad_sequence( > bert_token_reprs, batch_first=True, padding_value=-1) > # output/classification layer: input bert states and get log probabilities for cross entropy loss > pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs))) > mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else > loss = cross_entropy(pred_logits, true_labels) > # average/reduce the loss according to the actual number of of predictions (i.e. one prediction per token). > loss /= mask.float().sum() > return loss > ``` > > Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started. I did not realize there is a method subword_tokenize until seeing your post. I did spend a lot of time wirte this method.<|||||>> That's what the folks over at allennlp said. I don't know where they got this information, maybe personal communication with one of the BERT authors? Just adding a bit of clarification since I revisited the paper after reading that comment. From the BERT Paper Section 5.3 (https://arxiv.org/pdf/1810.04805.pdf) In this section, we compare the two approaches by applying BERT to the CoNLL-2003 Named Entity Recognition (NER) task (Tjong Kim Sang and De Meulder, 2003). In the input to BERT, we use a case-preserving WordPiece model, and we include the maximal document context provided by the data. 
<|||||>@ramithp that was added in v2 of the paper, but wasn't present in v1, which is the version the discussion here refers to<|||||>@bheinzerling Yeah, I just realized that. No wonder I couldn't remember seeing it earlier. Thanks for confirming it. Just wanted to add that bit to the thread in case there were others that haven't read the revision.<|||||>@zhaoxy92 @thomwolf @bheinzerling @srslynow @rremani Sorry about tag all of you. I wonder how to set the weight decay other than the BERT structure, for example the crf parameter after BERT output. Should I set it to be 0.01 or 0? Sorry again for tagging all of you because it is kind of urgent. <|||||>> @zhaoxy92 @thomwolf @bheinzerling @srslynow @rremani > Sorry about tag all of you. I wonder how to set the weight decay other than the BERT structure, for example the crf parameter after BERT output. Should I set it to be 0.01 or 0? Sorry again for tagging all of you because it is kind of urgent. This repository does not use a CRF for NER classification? Anyway, parameters of a CRF depend on the data distribution you have. These links might be usefull: https://towardsdatascience.com/conditional-random-field-tutorial-in-pytorch-ca0d04499463 and https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html<|||||>@srslynow Thanks for your answer! I am familiar with CRF, but kind of confused how to set the weight decay when the CRF is connected with BERT. The authors or huggingface seem not to have mentioned how to set weight decay beside the BERT structure.<|||||>Thanks to https://github.com/huggingface/transformers/issues/64#issuecomment-443703063, I could get the implementation to work - for anyone else that's struggling to reproduce the results: https://github.com/chnsh/BERT-NER-CoNLL<|||||>BERT-NER in Tensorflow 2.0 https://github.com/kamalkraj/BERT-NER-TF<|||||>> ple sequence tagger > I have some code for preparing batches here: > > https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98 > > The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff. > > With this, feature extraction for each sentence, i.e. a list of tokens, is simply: > > ```python > bert = dougu.bert.Bert.Model("bert-base-cased") > featurized_sentences = [] > for tokens in sentences: > features = {} > features["bert_ids"], features["bert_mask"], features["bert_token_starts"] = bert.subword_tokenize_to_ids(tokens) > featurized_sentences.append(features) > ``` > > Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches: > > ```python > def collate_fn(featurized_sentences_batch): > bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in ("bert_ids", "bert_mask", "bert_token_starts")] > return bert_batch > ``` > > A simple sequence tagger module would look something like this: > > ```python > class SequenceTagger(torch.nn.Module): > def __init__(self, data_parallel=True): > bert = BertModel.from_pretrained("bert-base-cased").to(device=torch.device("cuda")) > if data_parallel: > self.bert = torch.nn.DataParallel(bert) > else: > self.bert = bert > bert_dim = 786 # (or get the dim from BertEmbeddings) > n_labels = 5 # need to set this for your task > self.out = torch.nn.Linear(bert_dim, n_labels) > ... # droput, log_softmax... 
> > def forward(self, bert_batch, true_labels): > bert_ids, bert_mask, bert_token_starts = bert_batch > # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM > max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item() > if max_length < bert_ids.shape[1]: > bert_ids = bert_ids[:, :max_length] > bert_mask = bert_mask[:, :max_length] > > segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence > bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1] > # select the states representing each token start, for each instance in the batch > bert_token_reprs = [ > layer[starts.nonzero().squeeze(1)] > for layer, starts in zip(bert_last_layer, bert_token_starts)] > # need to pad because sentence length varies > padded_bert_token_reprs = pad_sequence( > bert_token_reprs, batch_first=True, padding_value=-1) > # output/classification layer: input bert states and get log probabilities for cross entropy loss > pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs))) > mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else > loss = cross_entropy(pred_logits, true_labels) > # average/reduce the loss according to the actual number of of predictions (i.e. one prediction per token). > loss /= mask.float().sum() > return loss > ``` > > Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started. > ```python > bert_last_layer > ``` Hi, I am trying to make your code work, and here is my setup: I re-declare as free functions and constants everything that is needed ``` import numpy as np from pytorch_transformers import BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') SEP = "[SEP]" MASK = '[MASK]' CLS = "[CLS]" max_len = 100 def flatten(list_of_lists): for list in list_of_lists: for item in list: yield item def convert_tokens_to_ids(tokens, pad=True): token_ids = tokenizer.convert_tokens_to_ids(tokens) ids = torch.tensor([token_ids]).to(device="cpu") assert ids.size(1) < max_len if pad: padded_ids = torch.zeros(1, max_len).to(ids) padded_ids[0, :ids.size(1)] = ids mask = torch.zeros(1, max_len).to(ids) mask[0, :ids.size(1)] = 1 return padded_ids, mask else: return ids def subword_tokenize(tokens): """Segment each token into subwords while keeping track of token boundaries. Parameters ---------- tokens: A sequence of strings, representing input tokens. Returns ------- A tuple consisting of: - A list of subwords, flanked by the special symbols required by Bert (CLS and SEP). - An array of indices into the list of subwords, indicating that the corresponding subword is the start of a new token. For example, [1, 3, 4, 7] means that the subwords 1, 3, 4, 7 are token starts, while all other subwords (0, 2, 5, 6, 8...) are in or at the end of tokens. This list allows selecting Bert hidden states that represent tokens, which is necessary in sequence labeling. """ subwords = list(map(tokenizer.tokenize, tokens)) print ("subwords: ", subwords) subword_lengths = list(map(len, subwords)) subwords = [CLS] + list(flatten(subwords)) + [SEP] print ("subwords: ", subwords) token_start_idxs = 1 + np.cumsum([0] + subword_lengths[:-1]) return subwords, token_start_idxs def subword_tokenize_to_ids(tokens): """Segment each token into subwords while keeping track of token boundaries and convert subwords into IDs. 
Parameters ---------- tokens: A sequence of strings, representing input tokens. Returns ------- A tuple consisting of: - A list of subword IDs, including IDs of the special symbols (CLS and SEP) required by Bert. - A mask indicating padding tokens. - An array of indices into the list of subwords. See doc of subword_tokenize. """ subwords, token_start_idxs = subword_tokenize(tokens) subword_ids, mask = convert_tokens_to_ids(subwords) token_starts = torch.zeros(1, 100).to(subword_ids) token_starts[0, token_start_idxs] = 1 return subword_ids, mask, token_starts ``` and then i try to add your extra code. i try to understand the code for this simple case: ``` sentences = [["the", "rolerationing", "ends"], ["A", "sequence", "of", "strings" ,",", "representing", "input", "tokens", "."]] ``` it is ```max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item() ``` which is 11 Some questions: 1) ``` bert(bert_ids, segment_ids) ``` is this the same with ```bert(bert_ids)``` ? In that case the following is not needed: ```segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence``` Also i do not understand what the comment means... ( # dummy segment IDs, since we only have one sentence) 2) ```bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]``` why do you take the last one? Here -1 is the last sentence. Why do we say last layer? Also for the above simple example its size is torch.Size([11, 768]). Is this what we want? <|||||>Is this development makes outdated this conversation? Can you please clarify? https://github.com/huggingface/transformers/blob/93d2fff0716d83df168ca0686d16bc4cd7ccb366/examples/utils_ner.py#L85<|||||>I guess so <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Thanks for sharing these tips here! It helps a lot. > > I tried to finetune BERT on multiple imbalanced datasets and found the result quite unstable... For an imbalanced dataset, I mean there are much more O labels than the others under the {B,I,O} tagging scheme. Tried weighted cross-entropy loss but the performance is still not as expected. Has anyone met the same issue? > > Thanks! Hi @nijianmo, did you find any workaround for this? Thanks!<|||||>Hi everyone! Thanks for your posts! I was wondering - could anyone post an explicit example of how the properly formatted data for NER using BERT would look like? It is not entirely clean to me from the paper and the comments I've found. Let's say we have a following sentence and labels: ```{python} sent = "John Johanson lives in Ramat Gan." labels = ['B-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC'] ``` Would data that we input to the model be something like this: ```{python} sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]'] labels = ['O', 'B-PERS', 'I-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC', 'O', 'O'] attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0] sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ``` ? 
Thank you!<|||||>``` labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC'] labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4} sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]'] labels = [2, 0, 1, 1, 2, 2, 3, 4, 2, 2] attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0] sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ``` @AlxndrMlk <|||||>> I have some code for preparing batches here: > > https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98 > > The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff. > > With this, feature extraction for each sentence, i.e. a list of tokens, is simply: > > ```python > bert = dougu.bert.Bert.Model("bert-base-cased") > featurized_sentences = [] > for tokens in sentences: > features = {} > features["bert_ids"], features["bert_mask"], features["bert_token_starts"] = bert.subword_tokenize_to_ids(tokens) > featurized_sentences.append(features) > ``` > > Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches: > > ```python > def collate_fn(featurized_sentences_batch): > bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in ("bert_ids", "bert_mask", "bert_token_starts")] > return bert_batch > ``` > > A simple sequence tagger module would look something like this: > > ```python > class SequenceTagger(torch.nn.Module): > def __init__(self, data_parallel=True): > bert = BertModel.from_pretrained("bert-base-cased").to(device=torch.device("cuda")) > if data_parallel: > self.bert = torch.nn.DataParallel(bert) > else: > self.bert = bert > bert_dim = 786 # (or get the dim from BertEmbeddings) > n_labels = 5 # need to set this for your task > self.out = torch.nn.Linear(bert_dim, n_labels) > ... # droput, log_softmax... > > def forward(self, bert_batch, true_labels): > bert_ids, bert_mask, bert_token_starts = bert_batch > # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM > max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item() > if max_length < bert_ids.shape[1]: > bert_ids = bert_ids[:, :max_length] > bert_mask = bert_mask[:, :max_length] > > segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence > bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1] > # select the states representing each token start, for each instance in the batch > bert_token_reprs = [ > layer[starts.nonzero().squeeze(1)] > for layer, starts in zip(bert_last_layer, bert_token_starts)] > # need to pad because sentence length varies > padded_bert_token_reprs = pad_sequence( > bert_token_reprs, batch_first=True, padding_value=-1) > # output/classification layer: input bert states and get log probabilities for cross entropy loss > pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs))) > mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else > loss = cross_entropy(pred_logits, true_labels) > # average/reduce the loss according to the actual number of of predictions (i.e. one prediction per token). > loss /= mask.float().sum() > return loss > ``` > > Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started. 
@bheinzerling The line` bert_last_layer = bert_layers[0][-1]` just takes the hidden representation of the last training example in the batch. Is this intended?<|||||>@sougata-fiz When I wrote that code, `self.bert(bert_ids, segment_ids)` returned a tuple, of which the first element contained all hidden states. I think this changed at some point. What BertModel's forward returns now is described here: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L648, so you would have to make the appropriate changes. Alternatively, you could also try the TokenClassification models, which have since been added: https://huggingface.co/transformers/v2.5.0/model_doc/auto.html#automodelfortokenclassification<|||||>> @dangal95, adjusting the original labels is probably not the best way. A simpler method that works well is described in this issue, here [#64 (comment)](https://github.com/huggingface/transformers/issues/64#issuecomment-443703063) Hi, could you explain why adjusting the original labels is not suggested? It seems quite easy and straightforward. ```python3 # reference: https://github.com/huggingface/transformers/issues/64#issuecomment-443703063 def flatten(list_of_lists): for list in list_of_lists: for item in list: yield item def subword_tokenize(tokens, labels): assert len(tokens) == len(labels) subwords = list(map(tokenizer.tokenize, tokens)) subword_lengths = list(map(len, subwords)) subwords = [CLS] + list(flatten(subwords)) + [SEP] token_start_idxs = 1 + np.cumsum([0] + subword_lengths[:-1]) bert_labels = [[label] + (sublen-1) * ["X"] for sublen, label in zip(subword_lengths, labels)] bert_labels = ["O"] + list(flatten(bert_labels)) + ["O"] assert len(subwords) == len(bert_labels) return subwords, token_start_idxs, bert_labels ``` ``` >> tokens = tokenizer.basic_tokenizer.tokenize("John Johanson lives in Ramat Gan.") >> print(tokens) ['john', 'johanson', 'lives', 'in', 'ramat', 'gan', '.'] >> labels = ['B-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC', 'O'] >> subword_tokenize(tokens, labels) (['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'rama', '##t', 'gan', '.', '[SEP]'], array([1, 2, 4, 5, 6, 8, 9]), ['O', 'B-PERS', 'I-PERS', 'X', 'O', 'O', 'B-LOC', 'X', 'I-LOC', 'O', 'O']) ```<|||||>> ``` > labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC'] > labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4} > sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]'] > labels = [2, 0, 1, 1, 2, 2, 3, 4, 2, 2] > attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0] > sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] > ``` > > @AlxndrMlk Hello,if we have the following sentence: ``` sent = "Johanson lives in Ramat Gan." labels = ['B-PERS', 'O', 'O', 'B-LOC', 'I-LOC'] ``` Would “Johanson” be processed like this? ``` 'johan', '##son' 'B-PERS' 'I-PERS' ``` or like this? ``` 'johan', '##son' 'B-PERS' 'B-PERS' ``` thank you! <|||||>> > ``` > > labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC'] > > labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4} > > sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]'] > > labels = [2, 0, 1, 1, 2, 2, 3, 4, 2, 2] > > attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0] > > sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] > > ``` > > > > > > @AlxndrMlk > > Hello,if we have the following sentence: > > ``` > sent = "Johanson lives in Ramat Gan." > labels = ['B-PERS', 'O', 'O', 'B-LOC', 'I-LOC'] > ``` > > Would “Johanson” be processed like this? 
> > ``` > 'johan', '##son' > 'B-PERS' 'I-PERS' > ``` > > or like this? > > ``` > 'johan', '##son' > 'B-PERS' 'B-PERS' > ``` > > thanks you! The middle one is right, you need to add a label to labels ‘I-PERS’<|||||>> ``` > labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC'] > labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4} > sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]'] > labels = [2, 0, 1, 1, 2, 2, 3, 4, 2, 2] > attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0] > sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] > ``` Hello, I'm confused about the labels for [CLS] and [PAD] tokens. Assume that I have originally have 4 labels for each word [0, 1, 2, 3, 4] should I add [CLS] and [PAD] as another label? I see that in the example here [CLS] and [SEP] takes labels '2'. Does making the attention 0 for those positions solve this?<|||||>This repository have showed how to add a CRF layer on transformers to get a better performance on token classification task. https://github.com/shushanxingzhe/transformers_ner<|||||>tks alot @shushanxingzhe <|||||>@shushanxingzhe : I think you are using label 'O' as padding label in your code. From my view point, you should have another label 'PAD' for padding instead using 'O' label<|||||>Could someone please tell me how to use CRF with decode padding. When i code as below, i always get err: expected seq=18 but got 13 for next line "tags = torch.Tensor(tags)" if labels is not None: log_likelihood, tags = self.crf(logits, labels,attn_mask), self.crf.decode(logits,attn_mask) loss = 0 - log_likelihood else: tags = self.crf.decode(logits,attn_mask)<|||||>Can we just remove the non-first subtokens during feature processing if we are treating NER problem as a classification problem? Example: labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC'] labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4} sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]'] cleaned_sent = ['[CLS]', 'john', 'johan', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]']
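For readers landing on this thread: besides dropping non-first subtokens, a common alternative is to keep them but exclude them from the loss. Below is a minimal, self-contained sketch of that alignment; the -100 ignore index and the exact calls are assumptions rather than anything prescribed in this thread.

```python
# Sketch: align word-level NER labels with BERT wordpieces.
# Only the first subword of each word keeps the real label; continuation
# pieces (and [CLS]/[SEP]) get -100 so CrossEntropyLoss ignores them.
import torch
from pytorch_pretrained_bert import BertTokenizer

label2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4}
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)

words = ['John', 'Johanson', 'lives', 'in', 'Ramat', 'Gan', '.']
word_labels = ['B-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC', 'O']

tokens, label_ids = ['[CLS]'], [-100]
for word, label in zip(words, word_labels):
    pieces = tokenizer.tokenize(word)
    tokens.extend(pieces)
    # real label on the first piece, ignore index on the continuation pieces
    label_ids.extend([label2id[label]] + [-100] * (len(pieces) - 1))
tokens.append('[SEP]')
label_ids.append(-100)

input_ids = tokenizer.convert_tokens_to_ids(tokens)
attention_mask = [1] * len(input_ids)

# later, given per-token logits of shape (seq_len, num_labels):
loss_fct = torch.nn.CrossEntropyLoss(ignore_index=-100)
# loss = loss_fct(logits, torch.tensor(label_ids))
```

At evaluation time the same trick gives exactly one prediction per original word: read only the positions whose label is not -100.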
transformers
63
closed
Unseen Vocab
Thank you so much for this well-documented and easy-to-understand implementation! I remember meeting you at WeCNLP and am so happy to see you push out usable implementations of the SOA in pytorch for the community!!!!! I have a question: The convert_tokens_to_ids method in the BertTokenizer that provides input to the BertEncoder uses an OrderedDict for the vocab attribute, which throws an error (e.g. `KeyError: 'ketorolac'`) for any words not in the vocab. Can I create another vocab object that adds unseen words and use that in the tokenizer? Does the pretrained BertEncoder depend on the default id mapping? It seems to me that ideally in the long-term, this repo would incorporate character level embeddings to deal with unseen words, but idk if that is necessary for this use-case.
11-28-2018 22:38:57
11-28-2018 22:38:57
If you tokenize the input properly (tokenize before convert_tokens), it automatically falls back to subword/character-level(-like) embeddings. You can add new words to the vocabulary but you'll have to train the corresponding embeddings.<|||||>Hi @siddsach, Thanks for your kind words! @artemisart is right, BPE progressively falls back on character-level embeddings for unseen words.<|||||>> If you tokenize the input properly (tokenize before convert_tokens), it automatically falls back to subword/character-level(-like) embeddings.
> You can add new words to the vocabulary but you'll have to train the corresponding embeddings.

Hi, what do you mean by `tokenize before convert_tokens`? Could you share a tokenization sample (before and after) or some sample code? Thank you
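To make the "tokenize before converting to ids" advice concrete, here is a small sketch; the example word and the exact subword split are illustrative, the real split depends on the vocabulary:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Looking up a raw, untokenized word in the vocab can raise a KeyError.
# Tokenizing first splits out-of-vocabulary words into known wordpieces
# (continuations are prefixed with ##), so every piece has an id.
tokens = tokenizer.tokenize("ketorolac eases pain")
print(tokens)   # e.g. something like ['ke', '##tor', '##ola', '##c', 'eases', 'pain']

ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)      # one id per wordpiece, no KeyError
```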
transformers
62
closed
Specify a model from a specific directory for extract_features.py
I have downloaded the model and vocab files into a specific location, using their original file names, so my directory for bert-base-cased contains: ``` bert-base-cased-vocab.txt bert_config.json pytorch_model.bin ``` But when I try to specify the directory which contains these files for the `--bert_model` parameter of `extract_features.py` I get the following error: ``` ValueError: Can't find a vocabulary file at path <THEDIRECTORYPATHISPECIFIED> ... ``` When I specify a file that exists and is a proper file, the error messages seem to indicate that the program wants to untar and uncompress the files. Is there no way to just specify a specific directory that contains the vocab, config, and model files?
11-28-2018 17:04:39
11-28-2018 17:04:39
The last update broke this, but you can fix this in tokenization.py, you have to add this after `vocab_file = pretrained_model_name`: ``` if os.path.isdir(vocab_file): vocab_file = os.path.join(vocab_file, "vocab.txt") ``` <|||||>Thank you, is it fair to assume that this will get accepted as an issue and fixed in a future update/release?<|||||>Yes :-) There is a new release planned for tonight that will fix this (among other things, basically all the other open issues).<|||||>Ok, this is now included in the new release 0.3.0 (by #73).
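For reference, once the vocabulary file is renamed to `vocab.txt`, pointing the library at a local directory looks roughly like the sketch below; the directory name is an example, and older releases may still expect the tar.gz archive for the model weights:

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# my-bert-base-cased/
#   vocab.txt            (renamed from bert-base-cased-vocab.txt)
#   bert_config.json
#   pytorch_model.bin
model_dir = "my-bert-base-cased"

tokenizer = BertTokenizer.from_pretrained(model_dir, do_lower_case=False)
model = BertModel.from_pretrained(model_dir)
```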
transformers
61
closed
BERTConfigs in example usages in `modeling.py` are not OK (?)
Hi! In the `config` definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L848 in the Example usage of `BertForSequenceClassification` in `modeling.py`, there's things I don't understand: - `vocab_size` in not an acceptable parameter name, by looking at the `BertConfig` class definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L70 - even by changing `vocab_size` into `vocab_size_or_config_json_file`, for the choice of the other params given in the example i.e. ``` vocab_size=32000, hidden_size=512, num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024 ``` I get: `ValueError: The hidden size (512) is not a multiple of the number of attention heads (6)` I think that something similar may be true for the other classes as well, `BertForQuestionAnswering`, `BertForNextSentencePrediction`, etc. Am I missing something?
11-28-2018 14:53:01
11-28-2018 14:53:01
Hi @davidefiocco, you are right, I updated the docstrings in the new release 0.3.0.
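For anyone adapting the docstring example, here is a configuration sketch in which the hidden size is an exact multiple of the number of attention heads; the specific numbers are arbitrary:

```python
from pytorch_pretrained_bert.modeling import BertConfig, BertModel

# 512 hidden units / 8 heads = 64 per head, so the multiple-of-heads check passes.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1024,
)
model = BertModel(config)
```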
transformers
60
closed
Updated quick-start example with `BertForMaskedLM`
As `convert_ids_to_tokens` returns a list, the code in the README currently throws an `AssertionError`, so I propose a quick fix.
11-28-2018 13:54:01
11-28-2018 13:54:01
Nice, thanks @davidefiocco
transformers
59
closed
not good when I use BERT for seq2seq model in keyphrase generation
Hi, recently, I am researching about Keyphrase generation. Usually, people use seq2seq with attention model to deal with such problem. Specifically I use the framework: https://github.com/memray/seq2seq-keyphrase-pytorch, which is implementation of http://memray.me/uploads/acl17-keyphrase-generation.pdf . Now I just change its encoder part to BERT, but the result is not good. The experiment comparison of two models is in the attachment. Can you give me some advice if what I did is reasonable and if BERT is suitable for doing such a thing? Thanks. [RNN vs BERT in Keyphrase generation.pdf](https://github.com/huggingface/pytorch-pretrained-BERT/files/2623599/RNN.vs.BERT.in.Keyphrase.generation.pdf)
11-28-2018 08:44:24
11-28-2018 08:44:24
Have you tried a transformer decoder instead of the RNN decoder? <|||||>Not yet, I will try. But I think the RNN decoder should not be that bad. <|||||>> Not yet, I will try. But I think the RNN decoder should not be that bad.

Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer.
I am also very curious about the results with a transformer decoder. If you get it done, can you tell me? Thank you.<|||||>I think the batch size of the RNN with BERT is too small. Please see
> https://github.com/memray/seq2seq-keyphrase-pytorch/blob/master/pykp/dataloader.py line 377-378<|||||>I don't know what you mean by giving me this link. I really set it to 10 because of the memory problem. Actually, when the sentence length is 512, the max batch size is only 5; if it is 6 or bigger there will be a memory error on my GPU. <|||||>> Not yet, I will try. But I think the RNN decoder should not be that bad.
>
> Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer.
> I am also very curious about the results with a transformer decoder. If you get it done, can you tell me? Thank you.

You are right. Maybe the mean is better, I will try that as well. Thanks.<|||||>May I ask a question? Are you Chinese? Haha<|||||>Because one example has N targets, we want to put all targets in the same batch. 10 is so small that the targets of one example would probably end up in different batches.<|||||>I know, but ... the same problem ... my memory is limited .. so ...

PS. I am Chinese <|||||>> I know, but ... the same problem ... my memory is limited .. so ...
>
> PS. I am Chinese

So am I, haha<|||||>Could it be a corpus issue? BERT is trained on Wikipedia. I trained a mini BERT on kp20k; its accuracy on the test set is currently 80%. Would you like to try using mine as the encoder?<|||||>What exactly is this 80% figure that is so high, an F1 score? Could you send me your encoder so I can take a look? Thanks.

waynedane <notifications@github.com> wrote on Wed, Nov 28, 2018 at 11:14 PM:

> Could it be a corpus issue? BERT is trained on Wikipedia. I trained a mini
> BERT on kp20k; its accuracy on the test set is currently 80%. Would you like to try using mine as the encoder?
<|||||>The accuracy is for the masked LM and next-sentence tasks, not for keyphrase generation; sorry, I wasn't clear. My compute is limited (two P100s); it has been training for almost a month and is not finished yet. 80% is the current performance.<|||||>What do you mean by the mini BERT you mentioned?<|||||>I think I roughly understand what you mean: you essentially re-pretrained a BERT on kp20k. But doing it that way ... does feel quite troublesome. <|||||>> I think I roughly understand what you mean: you essentially re-pretrained a BERT on kp20k. But doing it that way ... does feel quite troublesome.

Yes, I used Junseong Kim's code: https://github.com/codertimo/BERT-pytorch . The model is much smaller than Google's BERT-Base Uncased: it is L-8 H-256 A-8. I will send you the current training checkpoint and the vocab file.<|||||>But can my version of the code use your checkpoint directly, or do I have to install your version of the code?<|||||>You can send it to my email whqwill@126.com , thanks.<|||||>> But can my version of the code use your checkpoint directly, or do I have to install your version of the code?

You can build a BERT model from Junseong Kim's code and then load the parameters; you don't necessarily have to install it.<|||||>OK then. Please send me the checkpoint and I will give it a try. <|||||>Hi guys, I would like to keep the issues of this repository focused on the package itself. I also think it's better to keep the conversation in English so everybody can participate. Please move this conversation to your repository: https://github.com/memray/seq2seq-keyphrase-pytorch or emails. Thanks, I am closing this discussion. Best,<|||||>> The accuracy is for the masked LM and next-sentence tasks, not for keyphrase generation; sorry, I wasn't clear. My compute is limited (two P100s); it has been training for almost a month and is not finished yet. 80% is the current performance.

Hello, could you send me the mini model as well? 993001803@qq.com, thanks a lot. <|||||>Hi @whqwill, I have some doubts about the way BERT is used with the RNN. 
In the BERT-with-RNN method, I see that you only take the last token's representation (I mean TN's) as the input to the RNN decoder. Why not use the other tokens' representations, T1 to TN-1, as well? I think the last token alone carries too little information to represent the whole context.
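Since the thread ends on the question of using more than the last token, here is one possible sketch of masked mean-pooling over BERT's last layer to build a decoder initial state; the helper name and shapes are assumptions on top of the discussion above:

```python
import torch

def masked_mean_pool(last_layer, attention_mask):
    """last_layer: (batch, seq_len, hidden); attention_mask: (batch, seq_len) of 0/1."""
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq_len, 1)
    summed = (last_layer * mask).sum(dim=1)           # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)          # avoid division by zero on empty rows
    return summed / counts                            # (batch, hidden)

# encoder_layers, _ = bert(input_ids, token_type_ids, attention_mask)
# decoder_init = masked_mean_pool(encoder_layers[-1], attention_mask)
```

This way every non-padding token contributes to the decoder initialization instead of only the final position.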
transformers
58
closed
Bug fix in examples;correct t_total for distributed training;run pred…
Bug fix in examples; correct t_total for distributed training; run prediction for full dataset
11-27-2018 09:10:10
11-27-2018 09:10:10
Thanks @lliimsft!
transformers
57
closed
Missing function convert_to_unicode in tokenization.py
The function _convert_to_unicode_ is not in tokenization.py but used to be there in v0.1.2. When fine tuning with run_classifier.py, you get an ImportError: cannot import name 'convert_to_unicode'. https://github.com/huggingface/pytorch-pretrained-BERT/blob/ce37b8e4819142171b61558e64f7dcb0286e9937/examples/run_classifier.py#L33
11-26-2018 21:50:15
11-26-2018 21:50:15
Fixed in master, thanks!
transformers
56
closed
[Feature request ] Add support for the new cased version of the multilingual model
https://github.com/google-research/bert/commit/332a68723c34062b8f58e5fec3e430db4563320a
11-26-2018 10:56:18
11-26-2018 10:56:18
Hi @elyase, this model is now added in the new release 0.3.0. I also added the other new model by Google (`bert-large-cased`)
transformers
55
closed
Loss calculation error
https://github.com/huggingface/pytorch-pretrained-BERT/blob/982339d82984466fde3b1466f657a03200aa2ffb/pytorch_pretrained_bert/modeling.py#L744 Got `ValueError: Expected target size (1, 30522), got torch.Size([1, 11])` at line 744 of `modeling.py`. I think the line should be changed to `masked_lm_loss = loss_fct(prediction_scores.view([-1, self.config.vocab_size]), masked_lm_labels.view([-1]))`.
11-25-2018 03:48:17
11-25-2018 03:48:17
Hi Jian, can you give me a small (self-contained) example showing how to get this error?<|||||>Hi Thomas! I modified the code in your `README.md` for an example: ```python from pytorch_pretrained_bert.modeling import BertForMaskedLM, BertConfig from pytorch_pretrained_bert import BertTokenizer import torch model = BertForMaskedLM.from_pretrained('bert-base-uncased') # Tokenized input tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') text = "Who was Jim Henson ? Jim Henson was a puppeteer" tokenized_text = tokenizer.tokenize(text) # Mask a token that we will try to predict back with `BertForMaskedLM` masked_index = 6 tokenized_text[masked_index] = '[MASK]' # Convert token to vocabulary indices indexed_truths = tokenizer.convert_tokens_to_ids(tokenized_text) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) indexed_truths_tensor = torch.tensor([indexed_truths]) # Evaluate loss model.eval() masked_lm_logits_scores = model(tokens_tensor, masked_lm_labels=indexed_truths_tensor) print(masked_lm_logits_scores) ```<|||||>Thank you, you are right, I fixed that on master. It will be in the next release.
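The fix above boils down to flattening both the prediction scores and the labels before the cross-entropy call; a shape-only sketch, with the -1 ignore index assumed to follow this library's convention for unmasked positions:

```python
import torch

batch_size, seq_len, vocab_size = 1, 11, 30522
prediction_scores = torch.randn(batch_size, seq_len, vocab_size)
masked_lm_labels = torch.randint(0, vocab_size, (batch_size, seq_len))

loss_fct = torch.nn.CrossEntropyLoss(ignore_index=-1)
# (batch * seq_len, vocab_size) logits against (batch * seq_len,) targets
masked_lm_loss = loss_fct(
    prediction_scores.view(-1, vocab_size),
    masked_lm_labels.view(-1),
)
```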
transformers
54
closed
example in BertForSequenceClassification() conflicts with the api
Hi, firstly, admire u for the great job. but I encounter 2 problems when i use it: **1**. `UnicodeDecodeError: 'gbk' codec can't decode byte 0x85 in position 4527: illegal multibyte sequence`, same problem as ISSUE 52 when I excute the `BertTokenizer.from_pretrained('bert-base-uncased')`, but I successfully excute `BertForNextSentencePrediction.from_pretrained('bert-base-uncased')`, >.< **2**. in the pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py, line 761 --> ``` `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] **with the token types indices selected in [0, 1]**. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). ``` but in the following example, in **line 784**--> `token_type_ids = torch.LongTensor([[0, 0, 1], [0, **2**, 0]])`, why the '2' appears? I am confused. Otherwise, is the situation similar to '0, 1, 0 ' correct ? Or it should be similar to [000000111111] , that is continuous '0' and continuous '1' ? ty.
11-24-2018 07:27:50
11-24-2018 07:27:50
Hi, (1) is solved on master. I will release a new release soon with the fixes on pip. In the mean time you can install from sources if you want. I fixed the typo in the docstring you mention in (2), thanks, it should be a `1` instead of a `2`.
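To spell out the corrected docstring example, a small sketch with only 0s and 1s in `token_type_ids`, laid out as a contiguous block of sentence-A tokens followed by sentence-B tokens (the id values themselves are arbitrary):

```python
import torch

input_ids      = torch.LongTensor([[31, 51, 99], [15, 5, 7]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 1]])  # sentence A -> 0, sentence B -> 1
attention_mask = torch.LongTensor([[1, 1, 1], [1, 1, 1]])
# outputs = model(input_ids, token_type_ids, attention_mask)
```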
transformers
53
closed
Multi-GPU training vs Distributed training
Hi, I have a question about Multi-GPU vs Distributed training, probably unrelated to BERT itself. I have a 4-GPU server, and was trying to run `run_classifier.py` in two ways: (a) run single-node distributed training with 4 processes and minibatch of 32 each (b) run Multi-GPU training with minibatch of 128, and all other hyperparams keep the same Intuitively I believe a and b should yield the closed accuracy and training times. Below please find my observations: 1. (a) runs ~20% faster than (b). 2. (b) yields a better final evaluation accuracy of ~4% than (a) The first looks like reasonable since I guess the loss.mean() is done by CPU which may be slower than using NCCL directly? However, I don't quite understand the second observation. Can you please give any hint or reference about the possible cause? Thanks!
11-24-2018 00:49:45
11-24-2018 00:49:45
Hi, Thanks for the feedback, it's always interesting to compare the various possible ways to train the model indeed. The most likely cause for (2) is that MRPC is a small dataset and the model shows a high variance in the results depending on the initialization of the weights for example (see the original BERT repo on that also). The distributed and multi-gpu setups probably do not use the random generators in the exact same order which lead to different initializations. You can have an intuition of that by training with different seeds, you will see there is easily a 10% variation in the final accuracy... If you can do that, a better way to compare the results would thus be to take something like 10 different seeds for each training condition and compare the mean and standard deviation of the results.<|||||>Thanks for your feedback! After some investigations, it looks like `t_total` is not set properly for distributed training in BertAdam. The actual `t_total` per distributed worker should be divided by the worker count. I have included the following fix in my PR https://github.com/huggingface/pytorch-pretrained-BERT/pull/58 ``` t_total = num_train_steps if args.local_rank != -1: t_total = t_total // torch.distributed.get_world_size() optimizer = BertAdam(optimizer_grouped_parameters, lr=args.learning_rate, warmup=args.warmup_proportion, t_total=t_total) ```
transformers
52
closed
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined>
Installed pytorch-pretrained-BERT from source, Python 3.7, Windows 10 When I run the following snippet: import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') I get the following: --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-2-7725148c607d> in <module>() 3 4 # Load pre-trained model tokenizer (vocabulary) ----> 5 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in from_pretrained(cls, pretrained_model_name, do_lower_case) 139 vocab_file, resolved_vocab_file)) 140 # Instantiate tokenizer. --> 141 tokenizer = cls(resolved_vocab_file, do_lower_case) 142 except FileNotFoundError: 143 logger.error( ~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in __init__(self, vocab_file, do_lower_case) 93 "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " 94 "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) ---> 95 self.vocab = load_vocab(vocab_file) 96 self.ids_to_tokens = collections.OrderedDict( 97 [(ids, tok) for tok, ids in self.vocab.items()]) ~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in load_vocab(vocab_file) 68 with open(vocab_file, "r", encoding="utf8") as reader: 69 while True: ---> 70 token = convert_to_unicode(reader.readline()) 71 if not token: 72 break ~\Anaconda3\lib\encodings\cp1252.py in decode(self, input, final) 21 class IncrementalDecoder(codecs.IncrementalDecoder): 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 25 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined>
11-22-2018 15:42:08
11-22-2018 15:42:08
I am facing the same problem. Fixed it with "with open(vocab_file, "r"**, encoding="utf-8"**) as reader:" in line 68 of tokenization.py<|||||>Thanks, it's fixed on master and will be included in the next release.
transformers
51
closed
Missing options/arguments in run_squad.py for BERT Large
Thanks for the great code..However, the `run_squad.py` for BERT Large seems to not have the `vocab_file` and `bert_config_file` (or other) options/arguments. Did you push the latest version? Also, it is looking for a pytorch model file (a bin file). Does it need to be there? I also had to add this line to the file to make BERT base to run on Squad 1.1: `parser.add_argument('--do_lower_case', action="store_true", default=True, help="Lowercase the input")`
11-21-2018 15:10:45
11-21-2018 15:10:45
Yes, the readme example was for an older version. I have updated them with the simplified parameters used in the current release. Thanks.
transformers
50
closed
pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py error
attributeError: 'BertForPreTraining' object has no attribute 'global_step'
11-21-2018 10:36:49
11-21-2018 10:36:49
Maybe some additional information could help me help you?<|||||>Initialize PyTorch weight ['cls', 'seq_relationship', 'output_weights'] Skipping cls/seq_relationship/output_weights/adam_m Skipping cls/seq_relationship/output_weights/adam_v Traceback (most recent call last): File "/home/tiandan.cxj/python/model_serving_python/lib/python3.5/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/tiandan.cxj/python/model_serving_python/lib/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/tiandan.cxj/platform/pytorch_BERT/pytorch-pretrained-BERT/pytorch_pretrained_bert/__main__.py", line 19, in <module> convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT) File "/home/tiandan.cxj/platform/pytorch_BERT/pytorch-pretrained-BERT/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch pointer = getattr(pointer, l[0]) File "/home/tiandan.cxj/python/model_serving_python/lib/python3.5/site-packages/torch/nn/modules/module.py", line 518, in __getattr__ type(self).__name__, name)) AttributeError: 'BertForPreTraining' object has no attribute 'global_step'<|||||>Hum I will see if I can let people import any kind of TF model in PyTorch, that's a bit risky so it has to be done properly. In the meantime you can add `global_step` in the list line 53 of `convert_tf_checkpoint_to_pytorch.py`<|||||>@thomwolf sir, i am also same issue. it doen't resolve. how i am convert my finetuned pretrained model to pytorch? ``` export BERT_BASE_DIR=/home/dell/backup/NWP/bert-base-uncased/bert_tensorflow_e100 pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ $BERT_BASE_DIR/model.ckpt-100 \ $BERT_BASE_DIR/bert_config.json \ $BERT_BASE_DIR/pytorch_model.bin ``` ``` Traceback (most recent call last): File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py", line 19, in <module> convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT) File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch pointer = getattr(pointer, l[0]) File "/home/dell/backup/bert_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in __getattr__ type(self).__name__, name)) AttributeError: 'BertForPreTraining' object has no attribute 'global_step' ``` sir how to resolve this issue? thanks. <|||||>thanks @thomwolf sir. it was resolved.<|||||>I added the global_step to the skipping list in the modelling.py . Still facing the error. Am I missing something?
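For context, the workaround amounts to skipping TensorFlow bookkeeping variables (Adam slots, `global_step`) during conversion. A rough sketch of that filtering idea, not the actual converter code:

```python
import tensorflow as tf

SKIP = ("adam_v", "adam_m", "global_step")

def list_convertible_variables(ckpt_path):
    """Yield (name, numpy array) for every checkpoint variable worth porting to PyTorch."""
    for name, _shape in tf.train.list_variables(ckpt_path):
        if any(part in SKIP for part in name.split("/")):
            continue  # optimizer state and step counters have no PyTorch counterpart
        yield name, tf.train.load_variable(ckpt_path, name)
```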
transformers
49
closed
Multilingual Issue
Dear authors, I have two questions. First, how can I use multilingual pre-trained BERT in pytorch? Is it all download model to $BERT_BASE_DIR? Second is tokenization issue. For Chinese and Japanese, tokenizer may works, however, for Korean, it shows different result that I expected ``` import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') text = "안녕하세요" tokenized_text = tokenizer.tokenize(text) print(tokenized_text) ``` ` ['ᄋ', '##ᅡ', '##ᆫ', '##ᄂ', '##ᅧ', '##ᆼ', '##ᄒ', '##ᅡ', '##ᄉ', '##ᅦ', '##ᄋ', '##ᅭ'] The result is based on not 'character' but 'byte-based character' May it comes from unicode issue. (I expect ['안녕', '##하세요'])
11-21-2018 09:32:32
11-21-2018 09:32:32
Hi, you can use the multilingual model as [indicated in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) with the commands: ```python tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual') model = BertModel.from_pretrained('bert-base-multilingual') ```` This will load the multilingual vocabulary (which should contain korean) that your command was not loading.
transformers
48
closed
example for is next sentence
Can you make up a working example for 'is next sentence' Is this expected to work properly ? ``` # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Tokenized input text = "Who was Jim Morrison ? Jim Morrison was a puppeteer" tokenized_text = tokenizer.tokenize(text) # Convert token to vocabulary indices indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) # Define sentence A and B indices associated to 1st and 2nd sentences (see paper) segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1] # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # Load pre-trained model (weights) model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') model.eval() # Predict is Next Sentence ? predictions = model(tokens_tensor, segments_tensors) ```
11-21-2018 03:16:00
11-21-2018 03:16:00
I think it should work. You should get a [1, 2] tensor of logits where `predictions[0, 0]` is the score of Next sentence being `True` and `predictions[0, 1]` is the score of Next sentence being `False`. So just take the max of the two (or use a `SoftMax` to get probabilities). Did you try it? The model behaves better on longer sentences of course (it's mainly trained on 512 tokens inputs).<|||||>Closing that for now, feel free to reopen if there is another issue.<|||||>Guys, are [CLS] and [SEP] tokens mandatory for this example?<|||||>This is not super clear, even wrong in the examples, but there is this note in the docstring for `BertModel`: ``` `pooled_output`: a torch.FloatTensor of size [batch_size, hidden_size] which is the output of a classifier pretrained on top of the hidden state associated to the first character of the input (`CLF`) to train on the Next-Sentence task (see BERT's paper). ``` That seems to suggest pretty strongly that you have to put in the `CLF` token.<|||||>```import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Tokenized input text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = tokenizer.tokenize(text) # Convert token to vocabulary indices indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) # Define sentence A and B indices associated to 1st and 2nd sentences (see paper) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # Load pre-trained model (weights) model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') model.eval() # Predict is Next Sentence ? predictions = model(tokens_tensor, segments_tensors ) print(predictions) ``` ``` tensor([[ 6.3714, -6.3910]], grad_fn=<AddmmBackward>) ``` How do i infer this as true or false<|||||>Those are the logits, because you did not pass the `next_sentence_label`. My understanding is that you could apply a softmax and get the probability for the sequence to be a possible sequence. `Sentence 1: How old are you?` `Sentence 2: The Eiffel Tower is in Paris` `tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)` `Sentence 1: How old are you?` `Sentence 2: I am 193 years old` `tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)` For the first example the probability that the second sentence is a probable continuation is very low. For the second example the probability is very high (I am looking at the first logit)<|||||>predictions = model(tokens_tensor, segments_tensors ) I try the code more than once,why I have the different result? sometime predictions[0, 0] is higher ,however, the same sentence pair,predictions[0, 0] is lower.<|||||>Maybe your model is not in evaluation mode (`model.eval()`)? You need to do this to desactivate the dropout modules.<|||||>It is OK.THANKS A LOT.<|||||>`error: --> 197 embeddings = words_embeddings + position_embeddings + token_type_embeddings 198 embeddings = self.LayerNorm(embeddings) 199 embeddings = self.dropout(embeddings) The size of tensor a (21) must match the size of tensor b (14) at non-singleton dimension 1` The above issues get resolved, when I added few extra 1's and 0's to make the shape similar tokens_tensor and segments_tensors. Just wondering am I using in a right way. 
My predictions output is a tensor array of size 21 X 30522 . And what I believe the example is to predict the word which is [MASK] . Can you also please guide how to predict the next sentence? <|||||>> Maybe your model is not in evaluation mode (`model.eval()`)? > You need to do this to desactivate the dropout modules. @thomwolf Actually even when I used model.eval() I still got different results. I observed this when I use every model of the package (BertModel, BertForNextSentencePrediction etc). Only when I fixed the length of the input (e.g. to 128), I can get the same results. In this way I have to pad 0 to indexed_tokens so it has a fixed length. Could you explain why is like this, or did I make any mistake? Thank you so much!<|||||>> > Maybe your model is not in evaluation mode (`model.eval()`)? > > You need to do this to desactivate the dropout modules. > > @thomwolf Actually even when I used model.eval() I still got different results. I observed this when I use every model of the package (BertModel, BertForNextSentencePrediction etc). Only when I fixed the length of the input (e.g. to 128), I can get the same results. In this way I have to pad 0 to indexed_tokens so it has a fixed length. > > Could you explain why is like this, or did I make any mistake? > > Thank you so much! Make sure 1) input_ids, input_mask, segment_ids have same length 2) vocabulary file for tokenizer is from the same config dir as your bert_config.json I had symilar symptoms when vocab and config was from diferent berts<|||||>I noticed that the probability for longer sentences, regardless of how much they are related to the same subject, is higher than the shorter ones. For example, I added some random sentences to the end of the first or second part and observed significant increase in the first logit value. Is it a way to regularize the model for the next sentence prediction? <|||||>@pbabvey I am observing the same thing. are the probabilities length normalized?<|||||>> Those are the logits, because you did not pass the `next_sentence_label`. > > My understanding is that you could apply a softmax and get the probability for the sequence to be a possible sequence. > > `Sentence 1: How old are you?` > `Sentence 2: The Eiffel Tower is in Paris` > `tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)` > `Sentence 1: How old are you?` > `Sentence 2: I am 193 years old` > `tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)` > > For the first example the probability that the second sentence is a probable continuation is very low. > For the second example the probability is very high (I am looking at the first logit) im getting different scores for the sentences that you have tried . please advise why i'm getting it below is my code . import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction tokenizer=BertTokenizer.from_pretrained('bert-base-uncased') BertNSP=BertForNextSentencePrediction.from_pretrained('bert-base-uncased') text1 = "How old are you?" 
text2 = "The Eiffel Tower is in Paris" text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"] text2_toks = tokenizer.tokenize(text2) + ["[SEP]"] text=text1_toks+text2_toks print(text) indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks) segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks) tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) print(indexed_tokens) print(segments_ids) BertNSP.eval() prediction = BertNSP(tokens_tensor, segments_tensors) prediction=prediction[0] # tuple to tensor print(predictions) softmax = torch.nn.Softmax(dim=1) prediction_sm = softmax(prediction) print (prediction_sm) o/p of predictions tensor([[ 2.1772, -0.8097]], grad_fn=) o/p of prediction_sm tensor([[0.9923, 0.0077]], grad_fn=) why is the score still high 0.9923 even after apply softmax ?<|||||>> > Those are the logits, because you did not pass the `next_sentence_label`. > > My understanding is that you could apply a softmax and get the probability for the sequence to be a possible sequence. > > `Sentence 1: How old are you?` > > `Sentence 2: The Eiffel Tower is in Paris` > > `tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)` > > `Sentence 1: How old are you?` > > `Sentence 2: I am 193 years old` > > `tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)` > > For the first example the probability that the second sentence is a probable continuation is very low. > > For the second example the probability is very high (I am looking at the first logit) > > im getting different scores for the sentences that you have tried . please advise why i'm getting it below is my code . > > import torch > from transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction > tokenizer=BertTokenizer.from_pretrained('bert-base-uncased') > BertNSP=BertForNextSentencePrediction.from_pretrained('bert-base-uncased') > > text1 = "How old are you?" > text2 = "The Eiffel Tower is in Paris" > > text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"] > text2_toks = tokenizer.tokenize(text2) + ["[SEP]"] > text=text1_toks+text2_toks > print(text) > indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks) > segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks) > > tokens_tensor = torch.tensor([indexed_tokens]) > segments_tensors = torch.tensor([segments_ids]) > print(indexed_tokens) > print(segments_ids) > BertNSP.eval() > prediction = BertNSP(tokens_tensor, segments_tensors) > prediction=prediction[0] # tuple to tensor > print(predictions) > softmax = torch.nn.Softmax(dim=1) > prediction_sm = softmax(prediction) > print (prediction_sm) > > o/p of predictions > tensor([[ 2.1772, -0.8097]], grad_fn=) > > o/p of prediction_sm > tensor([[0.9923, 0.0077]], grad_fn=) > > why is the score still high 0.9923 even after apply softmax ? I am facing the same issue. No matter what sentences I use, I always get very high probability of the second sentence being related to the first. <|||||>@parth126 have you seen https://github.com/huggingface/transformers/issues/1788 and is it related to your issue?<|||||>> @parth126 have you seen #1788 and is it related to your issue? Yes it was the same issue. And the solution worked like a charm. Many thanks @LysandreJik <|||||>@LysandreJik thanks for the information
transformers
47
closed
Fine-Tuned BERT-base on Squad v1.
I have fine-tuned the TF model on SQuAD v1 and I've made the weights available at: https://s3.eu-west-2.amazonaws.com/nlpfiles/squad_bert_base.tgz
I get 88.5 F1 using these weights on SQuAD dev (if I recall correctly, roughly 82 EM). I think it may be beneficial to have these weights here, so that people can play with SQuAD and BERT without the need for fine-tuning, which requires a decent setup. Let me know what you think!
11-20-2018 17:04:09
11-20-2018 17:04:09
Thanks for the details. This PyTorch repo is starting to be used by a larger community, so we would have to be a little more precise than just rough numbers if we want to include such pre-trained weights. If you want to add your weights to the repo, you should convert them to the PyTorch model format and get evaluation results on SQuAD with the PyTorch model, so everybody has clear knowledge of what they are using. Otherwise I think it's better that people do their own training and know the capabilities of the fine-tuned model they are using. Feel free to come back and re-open the issue if this is something you would like to do. <|||||>@thomwolf On SQuAD v1.1, BERT (single) scored 85.083 EM and 91.835 F1 as reported in their paper, but when I fine-tuned BERT using `run_squad.py` I got {"exact_match": 81.0975, "f1": 88.7005}. Why is there a difference? What am I missing?
transformers
46
closed
Assertion `srcIndex < srcSelectDimSize` failed.
Sorry to bother you I recently have used your extract_features.py to extract features of some data set but failed. The error information is as follows: `/opt/conda/conda-bld/pytorch_1532584813488/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [11,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Traceback (most recent call last): File "examples/extract_features.py", line 405, in <module> main() File "examples/extract_features.py", line 375, in main all_encoder_layers, _ = model(input_ids, token_type_ids=None, attention_mask=input_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 610, in forward output_all_encoded_layers=output_all_encoded_layers) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 328, in forward hidden_states = layer_module(hidden_states, attention_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 313, in forward attention_output = self.attention(hidden_states, attention_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 273, in forward self_output = self.self(input_tensor, attention_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 224, in forward mixed_query_layer = self.query(hidden_states) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/functional.py", line 1026, in linear output = input.matmul(weight.t()) RuntimeError: cublas runtime error : resource allocation failed at /opt/conda/conda-bld/pytorch_1532584813488/work/aten/src/THC/THCGeneral.cpp:333 ` It seems that the index_select function in the models crashed. I read my own data from json files and construct examples from them. I set the batch-size equals 1 and I modified the max_seq_length to the max_length of the input sentences. Thanks for your help!
11-20-2018 12:50:41
11-20-2018 12:50:41
Your log is very hard to read. Can you format it cleanly?<|||||>I'm so sorry The first error log is as follows: ```bash /opt/conda/conda-bld/pytorch_1532584813488/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [11,0,0], thread: [95,0,0] Assertion \`srcIndex < srcSelectDimSize\` failed. ```` And then the Traceback finally points to line 1026 torch/nn/functional.py in linear: `output = input.matmul(weight.t())` It seems that somewhere crashed while using `torch.index_select() `, do you think it is because my sentence is too long? I will check other aspects, thank you very much<|||||>It seems like a failed resource allocation. Maybe you don't have enough RAM or your GPU is too small ?<|||||>My GPU has 12400 MB and I think that's enough, may be I should use 'yield' to input the data one by one? I will load less data to try, thanks u a lot! <|||||>Ok feel free to re-open the issue if you still have troubles.<|||||>Hi @SparkJiao I met the same issue here, how did you resolve this?<|||||>I have the same issue, did you resolve this? @zyfedward @SparkJiao <|||||>@nv-quan, do you mind opening a new issue with the template so that we may help?<|||||>I have forgot how to reproduce the problem but the `index_select` error usually happened due to wrong index. You can use a smaller batch size and run the script on CPU to check the full traceback since the traceback while using GPU is delayed.
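A closing note for readers who hit the same `srcIndex < srcSelectDimSize` assert: it usually means some index going into an embedding lookup is out of range, for example a token id at or above the vocabulary size, or a sequence longer than the 512 learned positions. A rough sanity check, written as a sketch with assumed sizes:

```python
import torch

def check_batch(input_ids, vocab_size=30522, max_positions=512):
    """input_ids: LongTensor of shape (batch, seq_len)."""
    assert input_ids.min().item() >= 0, "negative token id"
    assert input_ids.max().item() < vocab_size, "token id outside the vocabulary"
    assert input_ids.size(1) <= max_positions, "sequence longer than the position embeddings"

# Running the model on CPU (or with CUDA_LAUNCH_BLOCKING=1) also gives a readable
# stack trace instead of the delayed device-side assert reported above.
```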
transformers
45
closed
Issue of `bert_model` arg in `run_classify.py`
Hi, I am trying to understand the `bert_model` arg in `run_classify.py`. In the file, I can see ``` tokenizer = BertTokenizer.from_pretrained(args.bert_model) ``` where `bert_model` is expected to be the vocab text file of the model However, I also see ``` model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list)) ``` where `bert_model` is expected to be a archive file containing the model checkpoint and config. Please help to advice the correct use of `bert_model` if I have my pretrained model converted locally already. Thanks!
11-20-2018 09:48:09
11-20-2018 09:48:09
Hi, please read [this section](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) of the readme.
transformers
44
closed
Race condition when prepare pretrained model in distributed training
Hi, I launched two processes per node to run distributed run_classifier.py. However, I am occasionally get below error: ``` 11/20/2018 09:31:48 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmpa25_y4es to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 93%|█████████▎| 381028352/407873900 [00:11<00:01, 14366075.22B/s] 94%|█████████▍| 383812608/407873900 [00:11<00:01, 16210783.00B/s] 95%|█████████▍| 386455552/407873900 [00:11<00:01, 16205260.89B/s]11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmpa25_y4es 95%|█████████▌| 388946944/407873900 [00:11<00:01, 18097539.03B/s]11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpvxvnr8_1 97%|█████████▋| 393660416/407873900 [00:11<00:00, 22199883.93B/s] 98%|█████████▊| 399411200/407873900 [00:11<00:00, 27211860.00B/s] 99%|█████████▉| 405128192/407873900 [00:11<00:00, 32287252.94B/s] 100%|██████████| 407873900/407873900 [00:11<00:00, 34098120.40B/s] 11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmp5fcm4v8x to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba Traceback (most recent call last): File "examples/run_classifier.py", line 629, in <module> main() File "examples/run_classifier.py", line 485, in main model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list)) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/site-packages/pytorch_pretrained_bert-0.1.2-py3.6.egg/pytorch_pretrained_bert/modeling.py", line 495, in from_pretrained archive.extractall(tempdir) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2007, in extractall numeric_owner=numeric_owner) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2049, in extract numeric_owner=numeric_owner) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2119, in _extract_member self.makefile(tarinfo, targetpath) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2168, in makefile copyfileobj(source, target, tarinfo.size, ReadError, bufsize) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 248, in copyfileobj buf = src.read(bufsize) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/gzip.py", line 276, in read return self._buffer.read(size) File 
"/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/gzip.py", line 482, in read raise EOFError("Compressed file ended before the " EOFError: Compressed file ended before the end-of-stream marker was reached ``` It looks like a race-condition that two processes are simultaneously writing model file to `/root/.pytorch_pretrained_bert/`. Please help to advice any workaround. Thanks!
11-20-2018 09:40:25
11-20-2018 09:40:25
My current workaround is to set the env var `PYTORCH_PRETRAINED_BERT_CACHE` to a different path per process before import `pytorch_pretrained_bert`. But I think the module itself should handle this properly<|||||>I see, thanks for the feedback. I will find a way to make that better in the next release. Not sure we need to store the model gzipped anyway since they mostly contains a torch dump which is already compressed.<|||||>Ok, I've added a `cache_dir` option in `from_pretrained` in the master to specify a different cache dir for a script. I will release the updated version today on pip. Thanks for the feedback.<|||||>Thanks for fixing this. Since the way I use this repo is to add ./pytorch_pretrained_bert in PYTHONPATH, so I think directly add the following import in `run_classifier.py` and `run_squad.py` is more appropriate in my case ``` from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE ``` which is included in my PR: https://github.com/huggingface/pytorch-pretrained-BERT/pull/58
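With the `cache_dir` argument described above (available from the release that followed this issue), a per-process cache might look roughly like this; the path scheme is just an illustration:

```python
import os
from pytorch_pretrained_bert import BertModel

local_rank = 0  # e.g. taken from the --local_rank argument of the training script
cache_dir = os.path.join("/tmp/bert_cache", "rank_{}".format(local_rank))

# Each process downloads and extracts into its own directory,
# so concurrent extraction no longer races on the shared cache.
model = BertModel.from_pretrained("bert-base-uncased", cache_dir=cache_dir)
```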
transformers
43
closed
grad is None in squad example
Hi, guys, I try the `run_squad` example with ``` Traceback (most recent call last): | 0/7331 [00:00<?, ?it/s] File "examples/run_squad.py", line 973, in <module> main() File "examples/run_squad.py", line 904, in main param.grad.data = param.grad.data / args.loss_scale AttributeError: 'NoneType' object has no attribute 'data' ``` I find one of the param.grads is None, so the param.grad.data doesn't exist. by the way I down load the data by myself from the urls in this prject. my os is ubuntu 18.04, pytorch 0.41 gpu 1080t anyone else encounters this situation? wanna help, please, thx in advance...
11-20-2018 08:38:03
11-20-2018 08:38:03
Oh you're right. I've just fixed that. you can try to pull the current master and test again.<|||||>@thomwolf it works, thanks
transformers
42
closed
Fixed UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2
I encountered `UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 3793: ordinal not in range(128)` when running the starter example shown under the Usage section. It turned out to be related to the `load_vocab` function in `tokenization.py`. Forcing `open` to use encoding `utf8` solved this issue on my machine.
11-20-2018 04:09:44
11-20-2018 04:09:44
Thanks!
transformers
41
closed
Typo in README
I think I spotted a typo in the README file under the Usage header. There is a piece of code that uses `BertTokenizer` and the typo is on this line: `tokenized_text = "Who was Jim Henson ? Jim Henson was a puppeteer"` I think `tokenized_text` should be replaced with `text`, since the next line is `tokenized_text = tokenizer.tokenize(text)`
11-20-2018 03:52:35
11-20-2018 03:52:35
Yes
transformers
40
closed
update pip package name
dashes not underscores
11-19-2018 17:50:54
11-19-2018 17:50:54
transformers
39
closed
Command-line interface Document Bug
There is a bug in README.md about Command-line interface: `export BERT_BASE_DIR=chinese_L-12_H-768_A-12` **Wrong:** ``` pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ --tf_checkpoint_path $BERT_BASE_DIR/bert_model.ckpt.index \ --bert_config_file $BERT_BASE_DIR/bert_config.json \ --pytorch_dump_path $BERT_BASE_DIR/pytorch_model.bin ``` **Right:** ``` pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ $BERT_BASE_DIR/bert_model.ckpt.index \ $BERT_BASE_DIR/bert_config.json \ $BERT_BASE_DIR/pytorch_model.bin ```
11-19-2018 16:42:56
11-19-2018 16:42:56
Thanks!
transformers
38
closed
truncated normal initializer
I have a reasonable truncated normal approximation. (Actually that is what tf does). https://discuss.pytorch.org/t/implementing-truncated-normal-initializer/4778/16?u=ruotianluo
11-19-2018 16:35:08
11-19-2018 16:35:08
We could try that. Not sure how important it is though. Did you try it?<|||||>Ok I think we will stick to the normal_initializer for now. Thanks for indicating this option!
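For readers who still want the TensorFlow-style truncated normal (values restricted to two standard deviations around the mean): recent PyTorch releases expose `torch.nn.init.trunc_normal_`, so a drop-in sketch of the idea could look like this; the cutoff choice and helper name are assumptions, not part of this repository:

```python
import torch

def init_bert_weights(module, initializer_range=0.02):
    """Normal init truncated to +/- 2 standard deviations, mimicking tf.truncated_normal_initializer."""
    if isinstance(module, (torch.nn.Linear, torch.nn.Embedding)):
        torch.nn.init.trunc_normal_(
            module.weight,
            mean=0.0,
            std=initializer_range,
            a=-2 * initializer_range,
            b=2 * initializer_range,
        )
    if isinstance(module, torch.nn.Linear) and module.bias is not None:
        module.bias.data.zero_()
```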
transformers
37
closed
using BERT as a language Model
I was trying to use BERT as a language model to assign a score(could be PPL score) of a given sentence. Something like P("He is go to school")=0.008 P("He is going to school")=0.08 Which is indicating that the probability of second sentence is higher than first sentence. Is there a way to get a score like this? Thanks
11-19-2018 15:26:20
11-19-2018 15:26:20
I don't think you can do that with Bert. The masked LM loss is not a Language Modeling loss, it doesn't work nicely with the [chain rule](https://en.wikipedia.org/wiki/Chain_rule_%28probability%29) like the usual Language Modeling loss. Please see the discussion on the TensorFlow repo on that [here](https://github.com/google-research/bert/issues/35).<|||||>Hello @thomwolf I can see it is possible to assign score by using [BERT ](https://github.com/google-research/bert/issues/139#issuecomment-441322849). By masking each word sequentially. Then score sentence by summary of word score. Here is how people were doing it for [Tensorflow](https://github.com/xu-song/bert-as-language-model). I am trying to do following ``` import numpy as np import torch from pytorch_pretrained_bert import BertTokenizer,BertForMaskedLM # Load pre-trained model (weights) with torch.no_grad(): model = BertForMaskedLM.from_pretrained('bert-large-cased') model.eval() # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-large-cased') def score(sentence): tokenize_input = tokenizer.tokenize(sentence) tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)]) sentence_loss=0. for i,word in enumerate(tokenize_input): tokenize_input[i]='[MASK]' mask_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)]) word_loss=model(mask_input, masked_lm_labels=tensor_input).data.numpy() sentence_loss +=word_loss #print("Word: %s : %f"%(word, np.exp(-word_loss))) return np.exp(sentence_loss/len(tokenize_input)) ``` ``` score("There is a book on the table") 88.899999 ``` Is it the right way to assign score using BERT? <|||||>> Hello @thomwolf I can see it is possible to assign score by using [BERT ](https://github.com/google-research/bert/issues/139#issuecomment-441322849). By masking each word sequentially. Then score sentence by summary of word score. Here is how people were doing it for [Tensorflow](https://github.com/xu-song/bert-as-language-model). I am trying to do following > > ``` > import numpy as np > import torch > from pytorch_pretrained_bert import BertTokenizer,BertForMaskedLM > # Load pre-trained model (weights) > with torch.no_grad(): > model = BertForMaskedLM.from_pretrained('bert-large-cased') > model.eval() > # Load pre-trained model tokenizer (vocabulary) > tokenizer = BertTokenizer.from_pretrained('bert-large-cased') > def score(sentence): > tokenize_input = tokenizer.tokenize(sentence) > tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)]) > sentence_loss=0. > for i,word in enumerate(tokenize_input): > > tokenize_input[i]='[MASK]' > mask_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)]) > word_loss=model(mask_input, masked_lm_labels=tensor_input).data.numpy() > sentence_loss +=word_loss > #print("Word: %s : %f"%(word, np.exp(-word_loss))) > > return np.exp(sentence_loss/len(tokenize_input)) > ``` > > ``` > score("There is a book on the table") > 88.899999 > ``` > > Is it the right way to assign score using BERT? no, you masked word but not restore.<|||||>@mdasadul Did you managed to do it?<|||||>Yes please check my tweet on this @mdasaduluofa On Wed, May 27, 2020, 1:37 PM orko19 <notifications@github.com> wrote: > @mdasadul <https://github.com/mdasadul> Did you managed to do it? > > — > You are receiving this because you were mentioned. 
> Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/37#issuecomment-634485380>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AB5DO5N2MGF6QCTAZ3L3NITRTS7J3ANCNFSM4GFFKJJA> > . > <|||||>@mdasadul Do you mean this one? https://twitter.com/mdasaduluofa/status/1181917072999231489/photo/1 I see this it for GPT-2, do you have a code for BERT?<|||||>It should be similar. Following code is for distilBert ```import math from torch.multiprocessing import TimeoutError, Pool,set_start_method,Queue import torch.multiprocessing as mp import torch from transformers import DistilBertTokenizer,DistilBertForMaskedLM from flask import Flask,request import json try: set_start_method('spawn') except RuntimeError: pass device = torch.device("cuda" if torch.cuda.is_available() else "cpu") def load_model(): model = DistilBertForMaskedLM.from_pretrained('distilbert-base-uncased').to(device) model.eval() tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') return tokenizer, model tokenizer, model =load_model() #st.text('Done!') def score(sentence): if len(sentence.strip().split())<=1 : return 10000 tokenize_input = tokenizer.tokenize(sentence) if len(tokenize_input)>512: return 10000 input_ids = torch.tensor(tokenizer.encode(tokenize_input)).unsqueeze(0).to(device) with torch.no_grad(): loss=model(input_ids,masked_lm_labels = input_ids)[0] return math.exp(loss.item()/len(tokenize_input))``` <|||||>@mdasadul I get the error: `TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'` Also, can you please explain why for following steps are necessary: 1. `unsqueeze(0)` 2. add `torch.no_grad()` 3. add `model.eval()`<|||||>The score is equivalent to perplexity. Hence lower the score better the sentence, right?<|||||>Yes that is right Md Asadul Islam Machine Learning Engineer Scribendi Inc On Mon, Jul 6, 2020 at 11:54 PM nlp-sudo <notifications@github.com> wrote: > The score is equivalent to perplexity. Hence lower the score better the > sentence, right? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/37#issuecomment-654618996>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AB5DO5KTBQJEEM7J72TCH2LR2K2AVANCNFSM4GFFKJJA> > . > <|||||>@mdasadul I get the error: ``` return math.exp(loss.item() / len(tokenize_input)) ValueError: only one element tensors can be converted to Python scalars ``` Any idea why?<|||||>Yes, your sentence needs to be longer than 1 word. PPL of 1 word sentence doesn't mean anything. Please try with longer sentences Md Asadul Islam Machine Learning Engineer Scribendi Inc On Sun, Mar 14, 2021 at 7:48 AM orenschonlab ***@***.***> wrote: > @mdasadul <https://github.com/mdasadul> I get the error: > > return math.exp(loss.item() / len(tokenize_input)) > ValueError: only one element tensors can be converted to Python scalars > > Any idea why? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/37#issuecomment-798893364>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AB5DO5ITYG7M6TG2XV5NZ6LTDSPARANCNFSM4GFFKJJA> > . 
> <|||||>@mdasadul I have a sentence with more than 1 word and still get the error sentence is `' Harry had never believed he would'` input_ids is tensor`([[ 101, 4302, 2018, 2196, 3373, 2002, 2052, 102]])`<|||||>Below is an example from the official docs on how to implement GPT2 to determine perplexity. https://huggingface.co/transformers/perplexity.html<|||||>@EricFillion But how can it be used for a sentence, not for a dataset? Meaning I want the perplexity of the sentence: `Harry had never believed he would`<|||||>@orenschonlab Try below ``` import torch import sys import numpy as np from transformers import GPT2Tokenizer, GPT2LMHeadModel # Load pre-trained model (weights) with torch.no_grad(): model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() # Load pre-trained model tokenizer (vocabulary) tokenizer = GPT2Tokenizer.from_pretrained('gpt2') def score(sentence): tokenize_input = tokenizer.encode(sentence) tensor_input = torch.tensor([tokenize_input]) loss=model(tensor_input, labels=tensor_input)[0] return np.exp(loss.detach().numpy()) if __name__=='__main__': for line in sys.stdin: if line.strip() !='': print(line.strip()+'\t'+ str(score(line.strip()))) else: break ```<|||||>> @EricFillion But how can it be used for a sentence, not for a dataset? > Meaning I want the perplexity of the sentence: > `Harry had never believed he would` I just played around with the code @mdasadul posted above. It works perfectly and is nice and concise. It outputted the same scores from the official documentation for short inputs. If you're still interested in using the method from the official documentation, then you can replace "'\n\n'.join(test['text'])" with the text you wish to determine the perplexity of. You'll also want to add ".item()" to ppl to convert the tensor to a float. <|||||>This repo is quite useful. It supports Huggingface models. https://github.com/awslabs/mlm-scoring
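For reference, a minimal corrected sketch of the masked-word scoring loop discussed above: each word is masked on a fresh copy of the token list, so earlier masks are effectively restored. It assumes the pytorch_pretrained_bert-era API, where `masked_lm_labels` uses -1 as the ignore index and the forward pass returns the loss directly; the model name and example sentence are illustrative only.
```python
import math
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertForMaskedLM.from_pretrained('bert-base-cased')
model.eval()

def pseudo_perplexity(sentence):
    tokens = tokenizer.tokenize(sentence)
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    total_loss = 0.0
    for i in range(len(tokens)):
        masked = list(tokens)          # fresh copy, so the previous mask is restored
        masked[i] = '[MASK]'
        masked_ids = torch.tensor([tokenizer.convert_tokens_to_ids(masked)])
        # score only position i; -1 is the ignore index in this package
        labels = torch.tensor([[token_ids[j] if j == i else -1 for j in range(len(token_ids))]])
        with torch.no_grad():
            total_loss += model(masked_ids, masked_lm_labels=labels).item()
    return math.exp(total_loss / max(len(tokens), 1))

print(pseudo_perplexity("There is a book on the table"))
```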
transformers
36
closed
How to detokenize a BertTokenizer output?
I was wondering if there's a proper way of detokenizing the output tokens, i.e., constructing the sentence back from the tokens? Considering the fact that the word-piece tokenisation introduces lots of `#`s.
11-19-2018 04:39:04
11-19-2018 04:39:04
You can remove ' ##' but you cannot know if there was a space around punctuations tokens or uppercase words.<|||||>Yes. I don't plan to include a reverse conversion of tokens in the tokenizer. For an example on how to keep track of the original characters position, please read the `run_squad.py` example.<|||||>In my case, I do: ``` tokens = ['[UNK]', '[CLS]', '[SEP]', 'want', '##ed', 'wa', 'un', 'runn', '##ing', ','] text = ' '.join([x for x in tokens]) fine_text = text.replace(' ##', '') ``` <|||||>Apostrophe is considered as a punctuation mark, but often it is an integrated part of the word. Regular `.tokenize()` always converts apostrophe to the stand alone token, so the information to which word it belongs is lost. If the original sentence contains apostrophes, it is impossible to recreate the original sentence from its' tokens (for example when apostrophe is a last symbol in some word `convert_tokens_to_string()` will join it with the following one). In order to overcome this, one can check the surroundings of the apostrophe and add `##` immediately after the tokenization. For example: ``` sent = "The Smiths' used their son's car" tokens = tokenizer.tokenize(sent) ``` now if you fix `tokens` to look like: **original** `=>['the', 'smith', '##s', "'", 'used', 'their', 'son', "'", 's', 'car']` **fixed** ` => ['the', 'smith', '##s', "##'", 'used', 'their', 'son', "##'", '##s', 'car']` you will be able to restore the original words. <|||||>@thomwolf could you point to the specific section of `run_squad.py` that handles this, I'm having trouble EDIT: is it this bit from `processors/squad.py`? ```python tok_to_orig_index = [] orig_to_tok_index = [] all_doc_tokens = [] for (i, token) in enumerate(example.doc_tokens): orig_to_tok_index.append(len(all_doc_tokens)) sub_tokens = tokenizer.tokenize(token) for sub_token in sub_tokens: tok_to_orig_index.append(i) all_doc_tokens.append(sub_token) ```
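A small best-effort detokenizer along the lines discussed above. It is only a heuristic sketch: WordPiece is lossy with respect to the original spacing and casing, and the punctuation clean-up rules here are illustrative.
```python
def detokenize(tokens):
    """Best-effort reconstruction; exact spacing/casing cannot be recovered."""
    text = ' '.join(tokens).replace(' ##', '')
    # Heuristic tidy-up of spaces before common punctuation.
    for punct in [' ,', ' .', " '", ' ?', ' !']:
        text = text.replace(punct, punct.strip())
    return text

print(detokenize(['[CLS]', 'want', '##ed', 'runn', '##ing', ',', '[SEP]']))
# -> "[CLS] wanted running, [SEP]"
```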
transformers
35
closed
issues with accents on convert_ids_to_tokens()
Hello, the BertTokenizer seems to lose accents when convert_ids_to_tokens() is used. Example: - original sentence: "great breakfasts in a nice furnished cafè, slightly bohemian." - corresponding list of tokens produced: ['great', 'breakfast', '##s', 'in', 'a', 'nice', 'fur', '##nis', '##hed', 'cafe', ',', 'slightly', 'bohemia', '##n', '.'] Here the problem is in "cafe", which loses its accent. I'm using BertTokenizer.from_pretrained('Bert-base-multilingual') as the tokenizer; I also tried "Bert-base-uncased" and experienced the same issue. Thanks for this great work!
11-18-2018 20:41:24
11-18-2018 20:41:24
This is expected behaviour and is how the multilingual and the uncased models were trained. From the [original repo](https://github.com/google-research/bert/blob/master/README.md): > We are releasing the BERT-Base and BERT-Large models from the paper. Uncased means that the text has been lowercased before WordPiece tokenization, e.g., John Smith becomes john smith. The Uncased model also strips out any accent markers. <|||||>Yes this is expected.
transformers
34
closed
Can not find vocabulary file for Chinese model
After I convert the TF model to pytorch model, I run a classification task on a new Chinese dataset, but get this: CUDA_VISIBLE_DEVICES=3 python run_classifier.py --task_name weibo --do_eval --do_train --bert_model chinese_L-12_H-768_A-12 --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir bert_result 11/18/2018 21:56:59 - INFO - __main__ - device cuda n_gpu 1 distributed training False 11/18/2018 21:56:59 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file chinese_L-12_H-768_A-12 Traceback (most recent call last): File "run_classifier.py", line 661, in <module> main() File "run_classifier.py", line 508, in main tokenizer = BertTokenizer.from_pretrained(args.bert_model) File "/home/lin/jpmorgan/pytorch-pretrained-BERT/pytorch_pretrained_bert/tokenization.py", line 141, in from_pretrained tokenizer = cls(resolved_vocab_file, do_lower_case) File "/home/lin/jpmorgan/pytorch-pretrained-BERT/pytorch_pretrained_bert/tokenization.py", line 94, in __init__ "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) ValueError: Can't find a vocabulary file at path 'chinese_L-12_H-768_A-12'. To load the vocabulary from a Google pretrained model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`
11-18-2018 14:33:58
11-18-2018 14:33:58
need to specify the path of vocab.txt for: tokenizer = BertTokenizer.from_pretrained(args.bert_model)<|||||>@zlinao ,i try to load the vocab using the following code: tokenizer = BertTokenizer.from_pretrained("bert-base-chinese//vocab.txt" however,get errors 11/19/2018 15:33:13 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file bert-base-chinese//vocab.txt Traceback (most recent call last): File "E:/PythonWorkSpace/PytorchBert/BertTest/torchTest.py", line 6, in <module> tokenizer = BertTokenizer.from_pretrained("bert-base-chinese//vocab.txt") File "C:\anaconda\lib\site-packages\pytorch_pretrained_bert-0.1.2-py3.6.egg\pytorch_pretrained_bert\tokenization.py", line 141, in from_pretrained File "C:\anaconda\lib\site-packages\pytorch_pretrained_bert-0.1.2-py3.6.egg\pytorch_pretrained_bert\tokenization.py", line 95, in __init__ File "C:\anaconda\lib\site-packages\pytorch_pretrained_bert-0.1.2-py3.6.egg\pytorch_pretrained_bert\tokenization.py", line 70, in load_vocab UnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequenc do you have the same problem?<|||||>Hi, Why don't you guys just do `tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')` as [indicated in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) and the `run_classifier.py` example?<|||||>> Hi, > Why don't you guys just do `tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')` as [indicated in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) and the `run_classifier.py` example? Yes, it is easier to use shortcut name. Thanks for your great work.<|||||>> @zlinao ,i try to load the vocab using the following code: > tokenizer = BertTokenizer.from_pretrained("bert-base-chinese//vocab.txt" > > however,get errors > 11/19/2018 15:33:13 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file bert-base-chinese//vocab.txt > Traceback (most recent call last): > File "E:/PythonWorkSpace/PytorchBert/BertTest/torchTest.py", line 6, in > tokenizer = BertTokenizer.from_pretrained("bert-base-chinese//vocab.txt") > File "C:\anaconda\lib\site-packages\pytorch_pretrained_bert-0.1.2-py3.6.egg\pytorch_pretrained_bert\tokenization.py", line 141, in from_pretrained > File "C:\anaconda\lib\site-packages\pytorch_pretrained_bert-0.1.2-py3.6.egg\pytorch_pretrained_bert\tokenization.py", line 95, in **init** > File "C:\anaconda\lib\site-packages\pytorch_pretrained_bert-0.1.2-py3.6.egg\pytorch_pretrained_bert\tokenization.py", line 70, in load_vocab > UnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequenc > > do you have the same problem? you can change you encoding to 'utf-8' when you load the vocab.txt
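A minimal loading sketch summarizing the fixes above: use the shortcut name so the matching vocabulary is fetched automatically, and read any local vocab.txt as UTF-8. The local directory name is only an example.
```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Shortcut name: downloads the matching vocabulary and weights automatically.
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

# If pointing at a local vocab.txt instead (e.g. on Windows), read it as UTF-8:
with open('chinese_L-12_H-768_A-12/vocab.txt', encoding='utf-8') as f:
    print(sum(1 for _ in f))  # the Chinese checkpoint has 21128 entries
```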
transformers
33
closed
[Bug report] Ineffective no_decay when using BERTAdam
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L505-L508 With this code, all parameters are decayed because the condition "parameter_name in no_decay" will never be satisfied. I've made a PR #32 to fix it.
11-18-2018 08:28:52
11-18-2018 08:28:52
You're right, thanks!
transformers
32
closed
Fix ineffective no_decay bug when using BERTAdam
With the original code, all parameters are decayed because the condition "parameter_name in no_decay" will never be satisfied.
11-18-2018 08:21:37
11-18-2018 08:21:37
thanks!<|||||>Question - wouldn't `.named_parameters()` for the model return a tuple `(name, param_tensor)`, where name looks similar to these ``` ['bert.embeddings.word_embeddings.weight', 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', ... ... 'classifier.linear.weight', 'classifier.linear.bias'] ``` therefore requiring slightly smarter conditions than just `in`? Something along the lines? ``` [p for n, p in param_optimizer if any(True for x in no_decay if n.endswith(x))] ```<|||||>Don't mind my comment, tested it further this morning and everything seems to work as expected!
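A hedged sketch of the corrected grouping discussed in this PR, written as a self-contained helper. Depending on the model version, the LayerNorm parameters may be named `LayerNorm.weight`/`LayerNorm.bias` rather than `gamma`/`beta`, so the exclusion list may need adjusting.
```python
def build_optimizer_groups(model, no_decay=('bias', 'gamma', 'beta')):
    # Group by *parameter name* so the exclusion list actually matches something;
    # checking `p in no_decay` on the tensors themselves never matches.
    named = list(model.named_parameters())
    return [
        {'params': [p for n, p in named if not any(nd in n for nd in no_decay)],
         'weight_decay_rate': 0.01},
        {'params': [p for n, p in named if any(nd in n for nd in no_decay)],
         'weight_decay_rate': 0.0},
    ]

# then build the optimizer from these groups, e.g. (sketch):
# optimizer = BERTAdam(build_optimizer_groups(model), lr=2e-5, warmup=0.1, t_total=num_train_steps)
```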
transformers
31
closed
BERT model for Machine Translation
Is there a way to use any of the provided pre-trained models in the repository for a machine translation task? Thanks
11-18-2018 02:10:15
11-18-2018 02:10:15
Hi Kerem, I don't think so. Have a look at the fairsep repo maybe.<|||||>@thomwolf hi there, I couldn't find out anything about the fairsep repo. Could you post a link? Thanks!<|||||>Hi, I am talking about this repo: https://github.com/pytorch/fairseq. Have a look at their Transformer's models for machine translation.<|||||>I have conducted several MT experiments which fixed the embeddings by using BERT, **UNFORTUNATELY**, I find it makes performance worse. @JasonVann @thomwolf <|||||>Hey! FAIR has demonstrated that using BERT for unsupervised translation greatly improves BLEU. Paper: https://arxiv.org/abs/1901.07291 Repo: https://github.com/facebookresearch/XLM Older papers showing pre-training with LM (not MLM) helps Seq2Seq: https://arxiv.org/abs/1611.02683 Hope this helps!<|||||>These links are useful. Does anyone know if BERT improves things also for supervised translation? Thanks. <|||||>> Does anyone know if BERT improves things also for supervised translation? Also interested<|||||>Because BERT is an encoder, I guess we need a decoder. I looked here: https://jalammar.github.io/ and it seems Openai Transformer is a decoder. But I cannot find a repo for it. https://www.tensorflow.org/alpha/tutorials/text/transformer I think Bert outputs a vector of size 768. Can we just do a `reshape` and use the decoder in that transformer notebook? In general can I just `reshape` and try out a bunch of decoders?<|||||>> These links are useful. > > Does anyone know if BERT improves things also for supervised translation? > > Thanks. https://arxiv.org/pdf/1901.07291.pdf seems to suggest that it does improve the results for supervised translation as well. However this paper is not about using BERT embeddings, rather about pre-training the encoder and decoder on an Masked Language Modelling objective. The biggest benefit comes from initializing the encoder with the weights from BERT, and surprisingly using it to initialize the decoder also brings small benefits, even though if I understand correctly you still have to randomly initialize the weights for the encoder attention module, since it's not present in the pre-trained network. EDIT: of course the pre-trained network needs to have been trained on multi-lingual data, as stated in the paper<|||||>I have managed to replace transformer's encoder with a pretrained bert encoder, however experiment results were very poor. It dropped BLEU score by about 4 The source code is available here: https://github.com/torshie/bert-nmt , implemented as a fairseq user model. It may not work out of box, some minor tweeks may be needed.<|||||>Could be relevant: [Towards Making the Most of BERT in Neural Machine Translation](https://arxiv.org/pdf/1908.05672.pdf) [On the use of BERT for Neural Machine Translation](https://arxiv.org/pdf/1909.12744.pdf)<|||||>Also have a look at [MASS](https://github.com/microsoft/MASS) and [XLM](https://github.com/facebookresearch/XLM).<|||||>Yes. It is possible to use both BERT as encoder and GPT as decoder and glue them together. There is a recent paper on this: Multilingual Translation via Grafting Pre-trained Language Models https://aclanthology.org/2021.findings-emnlp.233.pdf https://github.com/sunzewei2715/Graformer
transformers
30
closed
[Feature request] Add example of finetuning the pretrained models on custom corpus
11-17-2018 15:19:58
11-17-2018 15:19:58
Hi I don't plan to add that in the near future but feel free to open a PR if you would like to share an additional example.<|||||>Necrobumping this for reference, as this is addressed in https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py
transformers
29
closed
First release
11-17-2018 11:19:41
11-17-2018 11:19:41
transformers
28
closed
speed is very slow
Converting samples to features is very slow.
11-17-2018 06:51:54
11-17-2018 06:51:54
Running on a GPU, I find that dumping extracted features takes up most time. So you may optimize it yourself. <|||||>Hi, these examples are provided as starting point to write your own training scripts using the package modules. I don't plan to update them any further.
transformers
27
closed
how to load checkpoint?
I downloaded the model from BERT; it only has model.ckpt.data, model.ckpt.meta and model.ckpt.index. I don't know which one to load. What is the checkpoint file argument for convert.py?
11-17-2018 06:23:28
11-17-2018 06:23:28
Converting TensorFlow checkpoint from ../dataset/bert/uncased_L-12_H-768_A-12/bert_model Traceback (most recent call last): File "convert_tf_checkpoint_to_pytorch.py", line 111, in <module> convert() File "convert_tf_checkpoint_to_pytorch.py", line 60, in convert init_vars = tf.train.list_variables(path) File "/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_utils.py", line 95, in list_variables reader = load_checkpoint(ckpt_dir_or_file) File "/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_utils.py", line 64, in load_checkpoint return pywrap_tensorflow.NewCheckpointReader(filename) File "/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 326, in NewCheckpointReader return CheckpointReader(compat.as_bytes(filepattern), status) File "/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__ c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ../dataset/bert/uncased_L-12_H-768_A-12/bert_model<|||||>@susht3 what was your fix? <|||||>I encountered a similar issue and didn't find a solution with ALBERT. I tried using the `export_checkpoint.py` file in ALBERT and sent that into the `convert_tf_checkpoint_to_pytorch` command and there was no error. However the resulting `pytorch.bin` output was unusable :\<|||||>@dan-hu-spring do you mind opening a new issue with your issue, so that we may take a look?
transformers
26
closed
Checkpoints not saved
There is an option `save_checkpoints_steps` that seems to control checkpointing. However, there is no actual saving operation in the `run_*` scripts. So, should we add that functionality or remove this argument?
11-16-2018 18:50:27
11-16-2018 18:50:27
In the `run_squad.py`script, I added the following lines after the training loop: ``` logger.info(***** Saving fine-tuned model *****) output_model_file = os.path.join(args.output_dir, "pytorch_model.bin") if n_gpu > 1: torch.save(model.module.bert.state_dict(), output_model_file) else: torch.save(model.bert.state_dict(), output_model_file) ``` The code runs and I was able to load the model to test on the Adversarial SQuAD datasets. I do not use the other `run_*` scripts but this may be applicable as well. Edit: the files have been modified in the latest commits so I think it's now necessary to check the loading of fine-tuned models in the script.<|||||>You are right this argument was not used. I removed it, thanks. These examples are provided as starting point to write training scripts for the package module. I don't plan to update them any further (except fixing bugs).<|||||>> In the `run_squad.py`script, I added the following lines after the training loop: > > ``` > logger.info(***** Saving fine-tuned model *****) > output_model_file = os.path.join(args.output_dir, "pytorch_model.bin") > if n_gpu > 1: > torch.save(model.module.bert.state_dict(), output_model_file) > else: > torch.save(model.bert.state_dict(), output_model_file) > ``` > The code runs and I was able to load the model to test on the Adversarial SQuAD datasets. > > I do not use the other `run_*` scripts but this may be applicable as well. > > Edit: the files have been modified in the latest commits so I think it's now necessary to check the loading of fine-tuned models in the script. what is your result on adversarial-squad?<|||||>At that time I got: **AddSent** BERT base 58.7 EM / 66.2 F1 BERT large 65.5 EM / 71.9 F1 **AddOneSent** BERT base 67.0 EM / 74.7 F1 BERT large 72.7 EM / 79.1 F1 <|||||>> At that time I got: > **AddSent** > BERT base 58.7 EM / 66.2 F1 > BERT large 65.5 EM / 71.9 F1 > > **AddOneSent** > BERT base 67.0 EM / 74.7 F1 > BERT large 72.7 EM / 79.1 F1 Thanks a lot! Do you release your paper? i want to cite your result and paper in my paper.<|||||>Unfortunately it was not part of a paper, just preliminary results.
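A condensed save-and-reload sketch following the pattern above, assuming `from_pretrained` accepts a `state_dict` argument as the later example scripts do; the base model name is illustrative.
```python
import os
import torch
from pytorch_pretrained_bert import BertForQuestionAnswering

def save_finetuned(model, output_dir):
    # Unwrap DataParallel before saving so the weight keys have no "module." prefix.
    model_to_save = model.module if hasattr(model, 'module') else model
    output_model_file = os.path.join(output_dir, "pytorch_model.bin")
    torch.save(model_to_save.state_dict(), output_model_file)
    return output_model_file

def load_finetuned(output_model_file, base_model='bert-large-uncased'):
    # Rebuild the architecture, then load the fine-tuned weights for prediction.
    state_dict = torch.load(output_model_file)
    return BertForQuestionAnswering.from_pretrained(base_model, state_dict=state_dict)
```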
transformers
25
closed
Can you push the run_pretraining and create_pretraining_data code?
I just want to study the code; I don't need to reach the same pre-training performance.
11-16-2018 08:15:33
11-16-2018 08:15:33
Hi, I don't have plans for that in the near future.
transformers
24
closed
[Feature request] Port SQuAD 2.0 support
Recently the Google team added support for Squad 2.0: https://github.com/google-research/bert/commit/60454702590a6c69bd45c5d4258c7e17b8a3e1da Would be great to also have it available in the Pytorch version.
11-15-2018 23:47:04
11-15-2018 23:47:04
Hi, I don't have plans for that in the near future, but feel free to open a PR.
transformers
23
closed
ValueError while using --optimize_on_cpu
> Traceback (most recent call last): | 1/87970 [00:00<8:35:35, 2.84it/s] File "./run_squad.py", line 990, in <module> main() File "./run_squad.py", line 922, in main is_nan = set_optimizer_params_grad(param_optimizer, model.named_parameters(), test_nan=True) File "./run_squad.py", line 691, in set_optimizer_params_grad if test_nan and torch.isnan(param_model.grad).sum() > 0: File "/people/sanjay/anaconda2/envs/bert_pytorch/lib/python3.5/site-packages/torch/functional.py", line 289, in isnan raise ValueError("The argument is not a tensor", str(tensor)) ValueError: ('The argument is not a tensor', 'None') Command: CUDA_VISIBLE_DEVICES=0 python ./run_squad.py \ --vocab_file bert_large/uncased_L-24_H-1024_A-16/vocab.txt \ --bert_config_file bert_large/uncased_L-24_H-1024_A-16/bert_config.json \ --init_checkpoint bert_large/uncased_L-24_H-1024_A-16/pytorch_model.bin \ --do_lower_case \ --do_train \ --do_predict \ --train_file squad_dir/train-v1.1.json \ --predict_file squad_dir/dev-v1.1.json \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir outputs \ --train_batch_size 4 \ --gradient_accumulation_steps 2 \ --optimize_on_cpu Error while using --optimize_on_cpu only. Works fine without the argument. GPU: Nvidia GTX 1080Ti Single GPU. PS: I can only fit in train_batch_size 4 on the memory of a single GPU.
11-15-2018 16:53:12
11-15-2018 16:53:12
Thanks! I pushed a fix for that, you can try it again. You should be able to increase a bit the batch size. By the way, the real batch size that is used on the gpu is `train_batch_size / gradient_accumulation_steps` so `2` in your case. I think you should be able to go to `3` with `--optimize_on_cpu` The recommended batch_size to get good results (EM, F1) with BERT large on SQuaD is `24`. You can try the following possibilities to get to this batch_size: - keeping the same 'real batch size' that you currently have but just a bigger batch_size `--train_batch_size 24 --gradient_accumulation_steps 12` - trying a 'real batch size' of 3 with optimization on cpu `--train_batch_size 24 --gradient_accumulation_steps 8 --optimize_on_cpu` - switching to fp16 (implies optimization on cpu): `--train_batch_size 24 --gradient_accumulation_steps 6 or 4 --fp16` If your GPU supports fp16, the last solution should be the fastest, otherwise the second should be the fastest. The first solution should work out-of-the box and give better results (EM, F1) but you won't have any speed-up.<|||||>Should be fixed now. Don't hesitate to re-open an issue if needed. Thanks for the feedback!<|||||>Yes it works now! With > --train_batch_size 24 --gradient_accumulation_steps 8 --optimize_on_cpu I get {"exact_match": 83.78429517502366, "f1": 90.75733469379139} which is pretty close. Thanks for this amazing work!
transformers
22
closed
adding `no_cuda` flag
The `--no_cuda` flag is missing from the flagset in `extract_features.py`. On running the current code, the following error occurs. ``` (py3.5) [rahul pytorch-pretrained-BERT]$ python extract_features.py \ > --input_file=./input.txt \ > --output_file=./output.jsonl \ > --vocab_file=$BERT_BASE_DIR/vocab.txt \ > --bert_config_file=$BERT_BASE_DIR/bert_config.json \ > --init_checkpoint=$BERT_BASE_DIR/pytorch_model.bin \ > --layers=-4 \ > --max_seq_length=128 \ > --batch_size=8 Traceback (most recent call last): File "extract_features.py", line 306, in <module> main() File "extract_features.py", line 223, in main device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") AttributeError: 'Namespace' object has no attribute 'no_cuda' ```
11-15-2018 10:33:03
11-15-2018 10:33:03
Thanks, I've added that manually (the library organization has changed a bit with the first pip release).
transformers
21
closed
Fix some glitches in extract_features.py
Do the following fixes to make extract_features.py runnable: 1. Add the no_cuda argument. 2. Fix the "not all arguments converted during string formatting" error thrown at line 230.
11-15-2018 07:49:20
11-15-2018 07:49:20
Thanks, I've pushed these fixes in the first release (the organization of the library changed quite a bit).
transformers
20
closed
model loading the checkpoint error
RuntimeError: Error(s) in loading state_dict for BertModel: size mismatch for embeddings.token_type_embeddings.weight: copying a param of torch.Size([16, 768]) from checkpoint, where the shape is torch.Size([2, 768]) in current model.
11-14-2018 08:13:34
11-14-2018 08:13:34
But I print the model.embeddings.token_type_embeddings it was Embedding(16,768) .<|||||>which model are you loading?<|||||>> which model are you loading? the pre-trained model chinese_L-12_H-768_A-12<|||||>mycode: bert_config = BertConfig.from_json_file('bert_config.json') model=BertModel(bert_config) model.load_state_dict(torch.load('pytorch_model.bin')) The error: RuntimeError: Error(s) in loading state_dict for BertModel: size mismatch for embeddings.token_type_embeddings.weight: copying a param of torch.Size([16, 768]) from checkpoint, where the shape is torch.Size([2, 768]) in current model. <|||||>I'm testing the chinese model. Do you use the `config.json` of the chinese_L-12_H-768_A-12 ? Can you send the content of your `config_json` ?<|||||>> I'm testing the chinese model. > Do you use the `config.json` of the chinese_L-12_H-768_A-12 ? > Can you send the content of your `config_json` ? In the 'config.json' of the chinese_L-12_H-768_A-12 ,the type_vocab_size=2.But I change the config.type_vocab_size=16, it still error.<|||||>> I'm testing the chinese model. > Do you use the `config.json` of the chinese_L-12_H-768_A-12 ? > Can you send the content of your `config_json` ? { "attention_probs_dropout_prob": 0.1, "directionality": "bidi", "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "type_vocab_size": 2, "vocab_size": 21128 } I change my code: bert_config = BertConfig.from_json_file('bert_config.json') bert_config.type_vocab_size=16 model=BertModel(bert_config) model.load_state_dict(torch.load('pytorch_model.bin')) it still error.<|||||>> I see you have `"type_vocab_size": 2` in your config file, how is that? Yes,but I change it in my code.<|||||>> is your `pytorch_model.bin` the good converted model of the chinese one (and not of an English one)? I think it's good.<|||||>Ok, I have the models. I think `type_vocab_size` should be 2 also for chinese. I am wondering why it is 16 in your `pytorch_model.bin`<|||||>I have no idea.Did my model make the wrong convert?<|||||>I am testing that right now. I haven't played with the multi-lingual models yet.<|||||>> I am testing that right now. I haven't played with the multi-lingual models yet. I also use it for the first time.I am looking forward to your test results.<|||||>> I am testing that right now. I haven't played with the multi-lingual models yet. When I was converting the model . Traceback (most recent call last): File "convert_tf_checkpoint_to_pytorch.py", line 95, in <module> convert() File "convert_tf_checkpoint_to_pytorch.py", line 85, in convert assert pointer.shape == array.shape AssertionError: (torch.Size([16, 768]), (2, 768)) <|||||>are you supplying a config file with `"type_vocab_size": 2` to the conversion script?<|||||>> are you supplying a config file with `"type_vocab_size": 2` to the conversion script? I used the 'bert_config.json' of the chinese_L-12_H-768_A-12 when I was converting .<|||||>Ok, I think I found the issue, your BertConfig is not build from the configuration file for some reason and thus use the default value of `type_vocab_size` in BertConfig which is 16. 
This error happens on my system when I use `config = BertConfig('bert_config.json')` instead of `config = BertConfig.from_json_file('bert_config.json')`. I will make sure these two ways of initializing the configuration (from parameters or from a json file) cannot be mixed up.<|||||>> RuntimeError: Error(s) in loading state_dict for BertModel: size mismatch for embeddings.token_type_embeddings.weight: copying a param of torch.Size([16, 768]) from checkpoint, where the shape is torch.Size([2, 768]) in current model. I have the same problem as you. Did you solve it?
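A minimal sketch of the loading pattern that resolves the mismatch above; the local paths are examples. The key point, per the discussion, is building the config with `from_json_file` rather than passing the path as the first positional argument, which in the version discussed here keeps defaults such as type_vocab_size=16.
```python
import torch
from pytorch_pretrained_bert import BertConfig, BertModel

# Build the config *from the json file* so type_vocab_size comes out as 2.
config = BertConfig.from_json_file('chinese_L-12_H-768_A-12/bert_config.json')
model = BertModel(config)
model.load_state_dict(torch.load('chinese_L-12_H-768_A-12/pytorch_model.bin'))
```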
transformers
19
closed
will you push the pytorch code for the pre-training process?
Can you push the PyTorch code for the pre-training process, such as the MLM task, please? I really want to study it, but I can't understand the TensorFlow version; it's too complex. Thanks!
11-14-2018 06:30:59
11-14-2018 06:30:59
Hi, I don't have plans for that in the near future.
transformers
18
closed
include the output layer in the model using the pretrained weights
This is to be able to load the final output layer (bert.output_layer) from the TensorFlow pre-trained model. In particular, it is a fully connected layer that is used to map the final hidden layer to the vocabulary size, to then apply the softmax, as follows:
```python
logits = bert.output_layer(sequence_output)
log_softmax = nn.LogSoftmax(dim=-1)
log_probs = log_softmax(logits)
```
11-13-2018 16:15:03
11-13-2018 16:15:03
Thanks for that. I've ended up taking a more modular approach in the first pip release of the library.
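For illustration only, a sketch of the projection described above: a dense layer from the final hidden states to vocabulary-sized logits, followed by a log-softmax. The sizes are those of bert-base-uncased; in this package a roughly equivalent head ships with BertForMaskedLM (with weights tied to the input embeddings).
```python
import torch.nn as nn

hidden_size, vocab_size = 768, 30522
output_layer = nn.Linear(hidden_size, vocab_size)
log_softmax = nn.LogSoftmax(dim=-1)

def lm_log_probs(sequence_output):            # [batch, seq_len, hidden_size]
    logits = output_layer(sequence_output)    # [batch, seq_len, vocab_size]
    return log_softmax(logits)
```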
transformers
17
closed
activation function in BERTIntermediate
Was previously hardcoded to gelu because pretrained BERT models use gelu. Changed to make BERTIntermediate use functions and "gelu", "relu" or "swish" from `config`.
11-13-2018 15:47:46
11-13-2018 15:47:46
Looks good, thanks for that!
transformers
16
closed
Excluding AdamWeightDecayOptimizer internal variables from restoring
I tried to use convert_tf_checkpoint_to_pytorch.py script to convert my pretrained model, but in order to do so, I had to make some minor tweaks. I thought I would share in case you find it useful.
11-13-2018 15:13:18
11-13-2018 15:13:18
Is your pre-trained model a TensorFlow model?<|||||>Yes<|||||>Nice, thanks for that!
transformers
15
closed
activation function in BERTIntermediate
BERTConfig is not used for `BERTIntermediate`'s activation function. `intermediate_act_fn` is always `gelu`. Is this normal? https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py#L240
11-13-2018 15:09:33
11-13-2018 15:09:33
Yes, I hard coded that since the pre-trained models are all trained with gelu anyway.<|||||>ok. but since config is there anyway, isn't it cleaner to use it (to avoid errors for people using configs that use a different activation for some reason) ?<|||||>Yes we can, I'll change that in the coming first release (unless you would like to submit a PR which I would be happy to merge).<|||||>yeah let me clean up and I'll PR
transformers
14
closed
fixed typo
When testing with SQuAD.
11-12-2018 01:18:24
11-12-2018 01:18:24
Hi, Thanks for the PR, we don't want to add a shell script to the repo. I will correct the typo, Best, Thom
transformers
13
closed
Bug in run_classifier.py
If I am running only evaluation and not training, there are errors as tr_loss and nb_tr_steps are undefined.
11-10-2018 17:16:01
11-10-2018 17:16:01
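One possible guard for the bug above, as a sketch with hypothetical names: pass the training counters explicitly and only attach training statistics when training actually ran, so an eval-only run never touches undefined variables.
```python
def report(result, do_train, tr_loss=0.0, nb_tr_steps=0, global_step=0):
    # Only attach training statistics when training actually ran.
    if do_train and nb_tr_steps > 0:
        result['global_step'] = global_step
        result['loss'] = tr_loss / nb_tr_steps
    return result

print(report({'eval_accuracy': 0.84}, do_train=False))
```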
transformers
12
closed
py2 code
If I convert the code to a Python 2 version, it doesn't converge; would you provide Python 2 code?
11-10-2018 13:23:31
11-10-2018 13:23:31
Hi, we won't provide a python 2 version but if you want to do a python 2/3 compatible version feel free to open a PR.
transformers
11
closed
Swapped to_seq_len/from_seq_len in comment
I'm pretty sure this comment: https://github.com/huggingface/pytorch-pretrained-BERT/blob/2c5d993ba48841575d9c58f0754bca00b288431c/modeling.py#L339-L343 should instead say: ``` # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] ``` When masking out tokens for attention, it doesn't matter what happens to attention *from* padding tokens, only that there is no attention *to* padding tokens. I don't believe the code is doing what the comment currently suggests because that would be an implementation flaw.
11-09-2018 06:13:08
11-09-2018 06:13:08
Yes! fixed the comment
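A small runnable sketch of the masking step the comment describes: the mask is broadcast from [batch_size, 1, 1, to_seq_length] against the attention scores, which blocks attention to padding tokens.
```python
import torch

batch_size, num_heads, from_seq, to_seq = 2, 12, 5, 5
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])   # 1 = real token, 0 = padding

# [batch_size, 1, 1, to_seq_length] broadcasts against scores of shape
# [batch_size, num_heads, from_seq_length, to_seq_length].
extended_mask = attention_mask[:, None, None, :].float()
scores = torch.randn(batch_size, num_heads, from_seq, to_seq)
scores = scores + (1.0 - extended_mask) * -10000.0
probs = torch.nn.functional.softmax(scores, dim=-1)
```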
transformers
10
closed
Is there a plan to support FP16 on GPU, to allow a larger batch size or longer text documents?
Is there a plan to support FP16 on GPU, so that we can use a larger batch size or support longer text documents?
11-09-2018 02:23:34
11-09-2018 02:23:34
Yes probably. I am testing fp16 right now. If it works well I will push it to the repo.<|||||>Ok I've added FP16 support (see updated readme)<|||||>Thanks for this quick updates.<|||||>I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target** when I enabled fp16. Also when using `logits = logits.half() labels = labels.half()` then the epoch time also increased.
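A hedged fp16 usage sketch, assuming a CUDA device and the pytorch_pretrained_bert-era API: cast the model with .half(), but keep the input ids and labels as LongTensors, since the cross-entropy target must stay integer-typed (casting the labels to half triggers errors like the one above).
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda")  # a CUDA device is assumed for fp16
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
model.half()          # cast weights to fp16
model.to(device)

tokens = tokenizer.tokenize("a quick smoke test")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)], device=device)  # int64
labels = torch.tensor([1], device=device)  # int64: do NOT call .half() on the targets
loss = model(input_ids, labels=labels)
```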
transformers
9
closed
Crash at the end of training
Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output: I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8 Is this an issue you know about? ``` 11/08/2018 17:50:03 - INFO - __main__ - device cuda n_gpu 1 distributed training False 11/08/2018 17:50:18 - INFO - __main__ - *** Example *** 11/08/2018 17:50:18 - INFO - __main__ - unique_id: 1000000000 11/08/2018 17:50:18 - INFO - __main__ - example_index: 0 11/08/2018 17:50:18 - INFO - __main__ - doc_span_index: 0 11/08/2018 17:50:18 - INFO - __main__ - tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend " ve ##ni ##te ad me om ##nes " . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . [SEP] 11/08/2018 17:50:18 - INFO - __main__ - token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123 11/08/2018 17:50:18 - INFO - __main__ - token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 
122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True 11/08/2018 17:50:18 - INFO - __main__ - input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11/08/2018 17:50:18 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... [truncated] ... 
Iteration: 100%|█████████▉| 29314/29324 [3:27:55<00:04, 2.36it/s] Iteration: 100%|█████████▉| 29315/29324 [3:27:55<00:03, 2.44it/s] Iteration: 100%|█████████▉| 29316/29324 [3:27:56<00:03, 2.26it/s] Iteration: 100%|█████████▉| 29317/29324 [3:27:56<00:02, 2.35it/s] Iteration: 100%|█████████▉| 29318/29324 [3:27:56<00:02, 2.44it/s] Iteration: 100%|█████████▉| 29319/29324 [3:27:57<00:02, 2.25it/s] Iteration: 100%|█████████▉| 29320/29324 [3:27:57<00:01, 2.35it/s] Iteration: 100%|█████████▉| 29321/29324 [3:27:58<00:01, 2.41it/s] Iteration: 100%|█████████▉| 29322/29324 [3:27:58<00:00, 2.25it/s] Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]Traceback (most recent call last): File "code/run_squad.py", line 929, in <module> main() File "code/run_squad.py", line 862, in main loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py", line 467, in forward start_loss = loss_fct(start_logits, start_positions) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 862, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1550, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1403, in nll_loss if input.size(0) != target.size(0): RuntimeError: dimension specified as 0 but tensor has no dimensions Exception ignored in: <bound method tqdm.__del__ of Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]> Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in __del__ self.close() File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close self._decr_instances(self) File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances cls.monitor.exit() File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit self.join() File "/usr/lib/python3.6/threading.py", line 1053, in join raise RuntimeError("cannot join current thread") RuntimeError: cannot join current thread ```
11-08-2018 22:01:57
11-08-2018 22:01:57
Here's the specific command I ran for more context: ``` python3.6 code/run_squad.py \ --bert_config_file bert/bert_config.json \ --vocab_file bert/vocab.txt \ --output_dir output \ --train_file data/original/train.json \ --predict_file data/original/dev.json \ --init_checkpoint bert-pytorch/pytorch_model.bin \ --do_lower_case \ --do_train \ --do_predict \ --train_batch_size 10 \ --gradient_accumulation_steps 3 \ --accumulate_gradients 3 ```<|||||>Hi Kerem, yes I fixed this bug yesterday in commit 2c5d993 (a bug with batches of dimension 1) You can try again with the current version and it should be fine. I got good results with these hyperparameters last night: ```bash python run_squad.py \ --vocab_file $BERT_BASE_DIR/vocab.txt \ --bert_config_file $BERT_BASE_DIR/bert_config.json \ --init_checkpoint $BERT_PYTORCH_DIR/pytorch_model.bin \ --do_train \ --do_predict \ --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ../debug_squad/ ``` I found: ```bash {"f1": 88.52381567990474, "exact_match": 81.22043519394512} ``` Feel free to reopen the issue if needed.
transformers
8
closed
fixed small typos in the README.md
11-08-2018 18:24:27
11-08-2018 18:24:27
Many thanks!
transformers
7
closed
Develop
Fixes the `run_squad.py` pre-processing bug. Various clean-ups:
- the weight initialization was not optimal (tf.truncated_normal_initializer(stddev=0.02) was translated as weight.data.normal_(0.02) instead of weight.data.normal_(mean=0.0, std=0.02)), which likely affected the performance of run_classifier.py as well.
- the gradient accumulation loss was not averaged over the accumulation steps, which would have required changing the hyper-parameters when using accumulation.
- the evaluation was not done with torch.no_grad() and was thus sub-optimal in terms of speed/memory.
11-07-2018 22:34:18
11-07-2018 22:34:18
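A simplified sketch of the corrected initialization described in this PR (LayerNorm handling omitted): the truncated-normal initializer maps to a normal draw with mean 0.0 and std 0.02.
```python
import torch.nn as nn

def init_bert_weights(module, initializer_range=0.02):
    # tf.truncated_normal_initializer(stddev=0.02) should become a normal draw with
    # mean 0.0 and std 0.02; normal_(0.02) instead sets the *mean* to 0.02.
    if isinstance(module, (nn.Linear, nn.Embedding)):
        module.weight.data.normal_(mean=0.0, std=initializer_range)
    if isinstance(module, nn.Linear) and module.bias is not None:
        module.bias.data.zero_()
```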
transformers
6
closed
Failure during pytest (and solution for python3)
``` foo@bar:~/foo/bar/pytorch-pretrained-BERT$ pytest -sv ./tests/ ===================================================================================================================== test session starts ===================================================================================================================== platform linux -- Python 3.6.6, pytest-3.9.1, py-1.7.0, pluggy-0.8.0 -- /home/foo/.pyenv/versions/anaconda3-5.1.0/bin/python cachedir: .pytest_cache rootdir: /data1/users/foo/bar/pytorch-pretrained-BERT, inifile: plugins: remotedata-0.3.0, openfiles-0.3.0, doctestplus-0.1.3, cov-2.6.0, arraydiff-0.2, flaky-3.4.0 collected 0 items / 3 errors =========================================================================================================================== ERRORS ============================================================================================================================ ___________________________________________________________________________________________________________ ERROR collecting tests/modeling_test.py ___________________________________________________________________________________________________________ ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/modeling_test.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/modeling_test.py:25: in <module> import modeling E ModuleNotFoundError: No module named 'modeling' _________________________________________________________________________________________________________ ERROR collecting tests/optimization_test.py _________________________________________________________________________________________________________ ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/optimization_test.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/optimization_test.py:23: in <module> import optimization E ModuleNotFoundError: No module named 'optimization' _________________________________________________________________________________________________________ ERROR collecting tests/tokenization_test.py _________________________________________________________________________________________________________ ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/tokenization_test.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/tokenization_test.py:22: in <module> import tokenization E ModuleNotFoundError: No module named 'tokenization' ===Flaky Test Report=== ===End Flaky Test Report=== !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! =================================================================================================================== 3 error in 0.60 seconds ================================================================================================================== ``` In python 3, `python -m pytest -sv tests/` works fine.
11-06-2018 08:23:29
11-06-2018 08:23:29
Thanks, I update the readme.
transformers
5
closed
MRPC hyperparameters question
When describing how you reproduced the MRPC results, you say: "Our test ran on a few seeds with the original implementation hyper-parameters gave evaluation results between 82 and 87." and you link to the SQuAD hyperparameters (https://github.com/google-research/bert#squad). Is the link a mistake? Or did you use the SQuAD hyperparameters for tuning on MRPC? More generally, I'm wondering if there's a reason the MRPC dev set accuracy is slightly lower (in [82, 87] vs. [84, 88] reported by Google)
11-06-2018 05:30:36
11-06-2018 05:30:36
Hi Ethan, Thanks we used the MRPC hyper-parameters indeed, I corrected the README. Regarding the dev set accuracy, I am not really surprised there is a slightly lower accuracy with the PyTorch version (even though the variance is high so it's hard to get something significant). That is something that is generally observed (see for example [the work of Remi Cadene](https://github.com/Cadene/pretrained-models.pytorch)) and we also experienced that with [our TF->PT port of the OpenAI GPT model](https://github.com/huggingface/pytorch-openai-transformer-lm). My personal feeling is that there are slight differences in the way the backends of TensorFlow and PyTorch handle the operations and these differences make the pre-trained weights sub-optimal for PyTorch. <|||||>Great, thanks for clarifying that. Regarding the slightly lower accuracy, that makes sense. Thanks for your help and for releasing this!<|||||>Maybe it would help to train the Tensorflow pre-trained weights for e.g. one epoch in PyTorch (using the MLM and next-sentence objective)? That may help transfer to other tasks, depending on what the issue is<|||||>Hi @ethanjperez, actually the weight initialization fix (`tf. truncated_normal_initializer(stddev=0.02)` was translated in `weight.data.normal_(0.02)` instead of `weight.data.normal_(mean=0.0, std=0.02)` fixed in 2a97fe22) has brought us back to the TensorFlow results on MRPC (between 84 and 88%). I am closing this issue.<|||||>@thomwolf Great to hear - thanks for working to fix it!
transformers
4
closed
Fix typo in subheader BertForQuestionAnswering
Should say `BertForQuestionAnswering`, but says `BertForSequenceClassification`.
11-05-2018 23:04:03
11-05-2018 23:04:03
exact thanks !
transformers
3
closed
run_squad questions
Thanks a lot for the port! I have some minor questions. In the run_squad file, I see two options for accumulating gradients, accumulate_gradients and gradient_accumulation_steps, but it seems to me that they could be combined into one. The other question is about the global_step variable: it seems we are only counting it but not using it when accumulating gradients. Thanks again!
11-05-2018 21:35:51
11-05-2018 21:35:51
It also seems to me that the SQuAD 1.1 can not reproduce the google tensorflow version performance.<|||||>> It also seems to me that the SQuAD 1.1 can not reproduce the google tensorflow version performance. What batch size are you running?<|||||>I'm running on 4 GPU with a batch size of 48, the result is {"exact_match": 21.551561021759696, "f1": 41.785968963154055}<|||||>Just ran on 1 GPU batch size of 10, the result is {"exact_match": 21.778618732261116, "f1": 41.83593185416649} Actually it might be with the eval code Ill look into it<|||||>Sure, Thanks, I'm checking for the reason too, will report if find anything.<|||||>The predictions file is only outputting one word. Need to find out if the bug is in the model itself or write predictions function in run_squad.py. The correct answer always seems to be in the nbest_predictions, but its never selected.<|||||>What performance does Hugging Face get on SQuAD using this reimplementation?<|||||>Hi all, We were not able to try SQuAD on a multi-GPU with the correct batch_size until recently so we relied on the standard deviations computed in the [notebooks](https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/notebooks) to compare the predicted hidden states and losses for the SQuAD script. I was able to try on a multi-GPU today and there is indeed a strong difference. We got about the same results that you get: F1 of 41.8 and exact match of 21.7. I am investigating that right now, my personal guess is that this may be related to things outside the model it-self like the optimizer or the post-processing in SQuAD as these were not compared between the TF and PT models. I will keep you guys updated in this issue and I add a mention in the readme that the SQuAD example doesn't work yet. If you have some insights, feel free to participate in the discussion.<|||||>If you're comparing activations, it may be worth comparing gradients as well to see if you receive similarly low gradients standard deviations for identical batches. You might see that the gradient is not comparable from the last layer itself (due to e.g. difference in how PyTorch may handle weight decay / optimization differently); you may also see that gradients only become not comparable only after a particular point in backpropagation, and that would show perhaps that the backward pass for a particular function differs between PyTorch and Tensorflow<|||||>Ok guys thanks for waiting, we've nailed down the culprit which was in fact a bug in the pre-processing logic (more exactly this dumb typo https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/run_squad.py#L865). I took the occasion to clean up a few things I noticed while walking through the code: - the weight initialization was not optimal (`tf. truncated_normal_initializer(stddev=0.02)` was translated in `weight.data.normal_(0.02)` instead of `weight.data.normal_(mean=0.0, std=0.02)` which likely affected the performance of `run_classifer.py` also. - gradient accumulation loss was not averaged over the accumulation steps which would have required to change the hyper-parameters for using accumulation. - the evaluation was not done with `torch.no_grad()` and thus sub-optimal in terms of speed/memory. These fixes are pushed on the `develop` branch right now. All in all I think we are pretty good now and none of these issues affected the core PyTorch model (the BERT Transformer it-self) so if you only used `extract_features.py` you were good from the beginning. 
And `run_classifer.py` was ok apart from the sub-optimal additional weights initialization. I will merge the develop branch as soon as we got the final results confirmed (currently it's been training for 20 minutes (0.3 epoch) on 4GPU with a batch size of 56 and we are already above 85 on F1 on SQuAD and 77 in exact match so I'm rather confident and I think you guys can play with it too now). I am also cleaning up the code base to prepare for a first release that we will put on pip for easier access.<|||||>@thomwolf This is awesome - thank you! Do you know what the final SQuAD results were from the training run you started?<|||||>I got `{"exact_match": 80.07568590350047, "f1": 87.6494485519583}` with slightly sub-optimal parameters (`max_seq 300` instead of `384` which means more answers are truncated and a `batch_size 56` for 2 epochs of training which is probably a too big batch size and/or 1 epoch should suffice). It trains in about 1h/epoch on 4 GPUs with such a big batch size and truncated examples.<|||||>Using the same HP as the TensorFlow version we are actually slightly better on F1 than the original implementation (on the default random seed we used): `{"f1": 88.52381567990474, "exact_match": 81.22043519394512}` versus TF: `{"f1": 88.41249612335034, "exact_match": 81.2488174077578}` I am trying `BERT-large` on SQuAD now which is totally do-able on a 4 GPU server with the recommended batch-size of 24 (about 16h of expected training time using the `--optimize_on_cpu` option and 2 steps of gradient accumulation). I will update the readme with the results.<|||||>Great, I saw the BERT-large ones as well - thank you for sharing these results! How long did the BERT-base SQuAD training take on a single GPU when you tried it? I saw BERT-large took ~18 hours over 4 K-80's<|||||>Hi Ethan, I didn't try SQuAD on a single-GPU. On four k-80 (not k40), BERT-base took 5h to train on SQuAD.
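A sketch of the accumulation pattern referred to in this thread, written as a self-contained function: the loss is averaged over the accumulation steps so the hyper-parameters behave as if the full batch were used. The batch layout follows the SQuAD example's (input_ids, input_mask, segment_ids, start_positions, end_positions) ordering.
```python
def train_with_accumulation(model, optimizer, train_dataloader, accumulation_steps):
    model.train()
    for step, batch in enumerate(train_dataloader):
        input_ids, input_mask, segment_ids, start_positions, end_positions = batch
        loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
        if accumulation_steps > 1:
            loss = loss / accumulation_steps  # average the loss over the accumulation steps
        loss.backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```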
transformers
2
closed
Port tokenization for the multilingual model
11-05-2018 21:35:36
11-05-2018 21:35:36
Thanks for that, sorry for the delay
transformers
1
closed
Create DataParallel model if several GPUs
11-03-2018 14:10:20
11-03-2018 14:10:20
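A minimal sketch of the pattern this PR title describes, with an illustrative model class: wrap the model in DataParallel only when more than one GPU is visible.
```python
import torch
from pytorch_pretrained_bert import BertForQuestionAnswering

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()

model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
model.to(device)
if n_gpu > 1:
    # Replicate the model and split each batch across the visible GPUs.
    model = torch.nn.DataParallel(model)
```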