| repo (string, 147 distinct values) | number (int64, 1 to 172k) | title (string, 2 to 476 chars) | body (string, 0 to 5k chars) | url (string, 39 to 70 chars) | state (string, 2 distinct values) | labels (list, 0 to 9 items) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58, nullable) | user (string, 2 to 28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers
| 1,626
|
What is currently the best way to add a custom dictionary to a neural machine translator that uses the transformer architecture?
|
## ❓ Questions & Help
It's common to add a custom dictionary to a machine translator to ensure that terminology from a specific domain is translated correctly. For example, the term "server" should be translated differently when the document is about data centers vs. when it is about restaurants.
With a transformer model, this is not very obvious to do, since words are not aligned 1:1. I've seen a couple of papers on this topic, but I'm not sure which would be the best one to use. What are the best practices for this problem?
One paper I found that seems to describe what I'm looking for is [here](aclweb.org/anthology/W18-6318.pdf) - I have a bunch of questions regarding the paper, which I'm happy to discuss here as well. I'm also wondering if there are other approaches.
|
https://github.com/huggingface/transformers/issues/1626
|
closed
|
[
"wontfix"
] | 2019-10-24T17:48:10Z
| 2020-01-04T09:41:58Z
| null |
moyid
|
huggingface/neuralcoref
| 219
|
Pre-trained English model
|
Hi,
Is the pre-trained English model shipped with coref a model trained on the CoNLL and OntoNotes datasets?
Thanks!
|
https://github.com/huggingface/neuralcoref/issues/219
|
closed
|
[
"question",
"training"
] | 2019-10-17T18:49:51Z
| 2019-10-17T20:06:00Z
| null |
masonedmison
|
huggingface/neuralcoref
| 218
|
State-of-the-art benchmark
|
Hi,
You are claiming neuralCoref to be state-of-the-art for coreference resolution. Do you have any benchmark supporting the claim? I would like to include it in my paper. Also can it be cited yet?
|
https://github.com/huggingface/neuralcoref/issues/218
|
closed
|
[
"question",
"perf / accuracy"
] | 2019-10-17T15:30:16Z
| 2019-10-21T13:59:12Z
| null |
Masum06
|
huggingface/neuralcoref
| 217
|
Train CoNLL with BERT
|
Hi
I would like to train on the CoNLL-2012 data with BERT. The common approach for this is to first convert the data to NLI format and then use the NLI BERT on it. I was wondering if you could assist and add the BERT-based code to this repo. I really appreciate your help.
thanks a lot
Best
Julia
|
https://github.com/huggingface/neuralcoref/issues/217
|
closed
|
[
"question"
] | 2019-10-17T09:25:01Z
| 2019-10-17T15:33:22Z
| null |
ghost
|
huggingface/transformers
| 1,543
|
Where is pytorch-pretrained-BERT?
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
As the title says, where is pytorch-pretrained-BERT? Please tell me the path. Thanks.
|
https://github.com/huggingface/transformers/issues/1543
|
closed
|
[] | 2019-10-17T07:46:13Z
| 2019-12-05T10:27:31Z
| null |
Foehnc
|
huggingface/transformers
| 1,503
|
What is the best way to handle sequences > max_len for tasks like abstract summarization?
|
What is the best way to handle situations where a sequence in your dataset exceeds the max length defined for a model?
For example, if I'm working on an abstract summarization task with a Bert model having a `max_position_embeddings=512` and tokenizer with `max_len=512`, how should I handle documents where the tokens to evaluate exceed 512?
Is there a recommended practice for this situation?
Thanks
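For illustration, here is a minimal sketch of one common workaround: splitting the tokenized document into overlapping windows of at most 512 wordpieces. The model name, window size, and stride are assumptions, not a recommendation from the maintainers.
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def chunk_document(text, max_len=512, stride=50):
    """Split a long document into overlapping windows of wordpiece ids."""
    tokens = tokenizer.tokenize(text)
    body_len = max_len - 2          # reserve room for [CLS] and [SEP]
    windows, start = [], 0
    while start < len(tokens):
        piece = ["[CLS]"] + tokens[start:start + body_len] + ["[SEP]"]
        windows.append(tokenizer.convert_tokens_to_ids(piece))
        if start + body_len >= len(tokens):
            break
        start += body_len - stride  # overlap consecutive windows by `stride` tokens
    return windows

windows = chunk_document("a very long document " * 400)
print(len(windows), [len(w) for w in windows])
```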
|
https://github.com/huggingface/transformers/issues/1503
|
closed
|
[
"wontfix"
] | 2019-10-12T00:40:50Z
| 2020-02-17T13:26:11Z
| null |
ohmeow
|
huggingface/neuralcoref
| 203
|
Training a new language (French)
|
How can I get data in the same format as the English data (is there any tool to do that)?
|
https://github.com/huggingface/neuralcoref/issues/203
|
closed
|
[
"question",
"training"
] | 2019-09-23T13:16:42Z
| 2019-10-14T07:48:00Z
| null |
Berrougui
|
huggingface/transformers
| 1,299
|
What is the best CPU inference acceleration solution for BERT now?
|
Thank you very much.
|
https://github.com/huggingface/transformers/issues/1299
|
closed
|
[
"wontfix"
] | 2019-09-20T02:50:55Z
| 2019-11-20T01:42:25Z
| null |
guotong1988
|
huggingface/transformers
| 1,150
|
What is the relationship between `run_lm_finetuning.py` and the scripts in `lm_finetuning`?
|
## ❓ Questions & Help
It looks like there are now two scripts for running LM fine-tuning. While `run_lm_finetuning` seems to be newer, the documentation in `lm_finetuning` seems to indicate that there is more subtlety to generating the right data for performing LM fine-tuning in the BERT format. Does the new script take this into account?
Sorry if I'm missing something obvious!
|
https://github.com/huggingface/transformers/issues/1150
|
closed
|
[
"wontfix"
] | 2019-08-29T18:15:45Z
| 2019-12-30T15:04:14Z
| null |
zphang
|
huggingface/sentence-transformers
| 6
|
What is the classical loss for doc ranking problem? Thank you.
|
Based on my understanding, Multiple Negatives Ranking Loss is a better loss for the doc ranking problem.
What was the classical loss previously used for doc ranking?
Thank you very much.
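As a concrete reference for what Multiple Negatives Ranking Loss does, here is a minimal PyTorch sketch using in-batch negatives; the cosine scoring and the scale factor are assumptions borrowed from common practice, not necessarily the exact implementation in sentence-transformers.
```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """anchors, positives: [batch, dim]; positives[i] is the match for anchors[i],
    and the other positives in the batch act as negatives."""
    scores = scale * F.normalize(anchors, dim=-1) @ F.normalize(positives, dim=-1).T
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

anchors, positives = torch.randn(8, 128), torch.randn(8, 128)
print(multiple_negatives_ranking_loss(anchors, positives))
```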
|
https://github.com/huggingface/sentence-transformers/issues/6
|
closed
|
[] | 2019-08-05T03:49:39Z
| 2019-08-05T08:21:27Z
| null |
guotong1988
|
huggingface/neuralcoref
| 187
|
Where is the conll parser?
|
In the [instructions](https://github.com/huggingface/neuralcoref/blob/master/neuralcoref/train/training.md) there is a reference to a conll parser and conll processing scripts, but those links are dead. They have been removed but it's not clear to me why.
|
https://github.com/huggingface/neuralcoref/issues/187
|
closed
|
[
"training",
"docs"
] | 2019-07-23T20:59:40Z
| 2019-12-17T06:30:02Z
| null |
BramVanroy
|
huggingface/transformers
| 805
|
Where is "run_bert_classifier.py"?
|
Thanks for this great repo.
Is there any equivalent to [the previous run_bert_classifier.py](https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/run_bert_classifier.py)?
|
https://github.com/huggingface/transformers/issues/805
|
closed
|
[] | 2019-07-17T14:57:53Z
| 2020-12-02T15:59:46Z
| null |
amirj
|
huggingface/transformers
| 739
|
where is "pytorch_model.bin"?
|
https://github.com/huggingface/transformers/issues/739
|
closed
|
[
"wontfix"
] | 2019-06-28T15:09:50Z
| 2019-09-03T17:19:30Z
| null |
jufengada
|
|
huggingface/neuralcoref
| 175
|
what version of Python is required to run the script?
|
I tried to run the script described in CoNLL 2012 to produce the *._conll files, but without success.
the command:
skeleton2conll.sh -D [path_to_ontonotes_train_folder] [path_to_skeleton_train_folder]
always returns that "please make sure that you are pointing to the directory 'conll-2012'"
I did everything as described on the website and according to other posts about this, which I checked.
I work under Windows 10, so I use Cygwin to run the .sh file.
However, I'm not sure which version of Python I need (2.7 or 3.x?) and whether this is related to the issue I'm having.
Any help is welcome.
|
https://github.com/huggingface/neuralcoref/issues/175
|
closed
|
[
"training",
"usage"
] | 2019-06-20T16:01:14Z
| 2019-09-26T12:31:16Z
| null |
dtsonov
|
huggingface/neuralcoref
| 172
|
Accuracy Report of model
|
Hi,
I am doing research on coreference resolution and comparing the different models currently available. I have been looking for an accuracy measure for your model but couldn't find it in your GitHub repo description or in your Medium blog post.
Can you report what accuracy (F1/precision/recall) you achieved with this model, and on which test dataset?
Thank you!
|
https://github.com/huggingface/neuralcoref/issues/172
|
closed
|
[
"question",
"wontfix",
"perf / accuracy"
] | 2019-06-18T07:26:35Z
| 2019-10-16T08:48:23Z
| null |
uahmad235
|
huggingface/neuralcoref
| 162
|
Citation in publication
|
Can you provide an article so that this work can be cited?
|
https://github.com/huggingface/neuralcoref/issues/162
|
closed
|
[
"question",
"wontfix"
] | 2019-05-10T05:56:55Z
| 2019-10-16T08:47:51Z
| null |
pradipcyb
|
huggingface/transformers
| 591
|
What is the use of [SEP]?
|
Hello. I know that [CLS] marks the start of a sentence and [SEP] lets BERT know that the second sentence has begun; [SEP] can't stop one sentence from extracting information from another sentence. However, I have a question.
Suppose I have two sentences, s1 and s2, and the fine-tuning task is the same. In one variant, I add special tokens so the input looks like [CLS] + s1 + [SEP] + s2 + [SEP]. In the other, I make the input look like [CLS] + s1 + s2 + [SEP]. When I feed each of them to BERT, what is the difference between them? Will the s1 in the second variant integrate more information from s2 than the s1 in the first variant does? Will the token embeddings change much between the two methods?
Thanks for any help!
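To make the two variants concrete, here is a minimal sketch of how they can be built with BertTokenizer; the example sentences are made up. Self-attention runs over the full sequence in both cases, so the inputs differ only in the inner [SEP] token and the segment (token type) ids.
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
s1, s2 = "The food at this place was great.", "The server was very friendly."

tokens_a, tokens_b = tokenizer.tokenize(s1), tokenizer.tokenize(s2)

# Variant 1: [CLS] s1 [SEP] s2 [SEP], with segment ids 0 for s1 and 1 for s2.
variant1 = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
segments1 = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)

# Variant 2: [CLS] s1 s2 [SEP], everything in segment 0.
variant2 = ["[CLS]"] + tokens_a + tokens_b + ["[SEP]"]
segments2 = [0] * len(variant2)

print(variant1, segments1)
print(variant2, segments2)
```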
|
https://github.com/huggingface/transformers/issues/591
|
closed
|
[] | 2019-05-07T04:12:16Z
| 2019-05-21T10:51:31Z
| null |
RomanShen
|
huggingface/neuralcoref
| 157
|
Performance?
|
Hi there,
Thanks for the nice package!
Are there any performance comparisons with other systems (say, Lee et al. '18: https://arxiv.org/pdf/1804.05392.pdf)?
|
https://github.com/huggingface/neuralcoref/issues/157
|
closed
|
[
"question",
"perf / accuracy"
] | 2019-04-30T21:38:56Z
| 2019-10-16T08:48:09Z
| null |
danyaljj
|
huggingface/transformers
| 370
|
What is Synthetic Self-Training?
|
The current best performing model on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) is BERT + N-Gram Masking + Synthetic Self-Training (ensemble):

What is Synthetic Self-Training?
|
https://github.com/huggingface/transformers/issues/370
|
closed
|
[
"Discussion",
"wontfix"
] | 2019-03-12T20:40:50Z
| 2019-07-13T20:58:32Z
| null |
hsm207
|
huggingface/transformers
| 320
|
what is the batch size we can use for SQUAD task?
|
I am running the SQuAD example.
I have a Tesla M60 GPU, which has about 8 GB of memory. For the bert-large-uncased model, I can only use a batch size of 2, even after using --fp16. Is this normal?
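For reference, one common way to compensate for a small per-step batch is gradient accumulation. Below is a minimal, generic PyTorch sketch with a toy model standing in for the SQuAD fine-tuning setup (not the actual run_squad script, which may already expose an option for this).
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# A toy stand-in for the real model/dataset, just to show the accumulation loop.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(data, batch_size=2)          # small per-step batch, as on the M60
loss_fn = nn.CrossEntropyLoss()

accumulation_steps = 8                           # effective batch size = 2 * 8 = 16
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y) / accumulation_steps
    loss.backward()                              # gradients add up across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```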
|
https://github.com/huggingface/transformers/issues/320
|
closed
|
[] | 2019-02-26T08:56:20Z
| 2019-03-03T00:21:25Z
| null |
leonwyang
|
huggingface/transformers
| 233
|
What is the meaning of get_lr() in optimizer.py?
|
I use a model based on BertModel, and when I use BertAdam the learning rate isn't changing. When I call `get_lr()`, the returned result is `[0]`. I can see that the length of the state isn't 0, so why do I get that?
|
https://github.com/huggingface/transformers/issues/233
|
closed
|
[] | 2019-01-28T13:19:06Z
| 2019-02-05T16:12:33Z
| null |
kugwzk
|
huggingface/transformers
| 205
|
What is the meaning of Attention Mask
|
Hi, I noticed that there is something called `Attention Mask` in the model.
In the annotation of class `BertForQuestionAnswering`,
```python
`attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices
selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max
input sequence length in the current batch. It's the mask that we typically use for attention when
a batch has varying length sentences.
```
And its usage is in class `BertSelfAttention`, function `forward`,
```python
# Apply the attention mask is (precomputed for all layers in BertModel forward() function)
attention_scores = attention_scores + attention_mask
```
It seems the attention_mask is used to add 1 to the scores at positions taken up by real tokens, and 0 at positions outside the current sequence.
Then why not set the scores to `-inf` at positions outside the current sequence? After passing the scores to a softmax layer, those scores would become 0, as we want.
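For concreteness, here is a minimal sketch of the additive masking being discussed; in the reference BERT code the precomputed mask is, as far as I recall, 0 at real-token positions and a large negative number (-10000.0) at padded positions, which plays essentially the same role as -inf after the softmax.
```python
import torch

batch_size, seq_len = 2, 5
# 1 for real tokens, 0 for padding.
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])

# Raw attention scores for one head, shape [batch, query_len, key_len].
attention_scores = torch.randn(batch_size, seq_len, seq_len)

# Additive mask: 0 for real tokens, a large negative value for padding,
# broadcast over the query dimension.
additive_mask = (1.0 - attention_mask[:, None, :].float()) * -10000.0
attention_probs = torch.softmax(attention_scores + additive_mask, dim=-1)

print(attention_probs[0, 0])   # probabilities on the two padded keys are ~0
```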
|
https://github.com/huggingface/transformers/issues/205
|
closed
|
[] | 2019-01-18T14:04:11Z
| 2022-08-19T19:37:44Z
| null |
jianyucai
|
huggingface/neuralcoref
| 127
|
Can't find mention type in doc class
|
I can't find the mention type of a span, so I just copied the get_span_type function to get mention types, as follows.
Maybe it could be merged into the doc object.
```python
ACCEPTED_ENTS = ["PERSON", "NORP", "FACILITY", "ORG", "GPE", "LOC", "PRODUCT", "EVENT", "WORK_OF_ART", "LANGUAGE"]
MENTION_TYPE = {"PRONOMINAL": 0, "NOMINAL": 1, "PROPER": 2, "LIST": 3}
PRP_TAGS = ["PRP", "PRP$"]
CONJ_TAGS = ["CC", ","]
PROPER_TAGS = ["NNP", "NNPS"]

def get_span_type(span):
    ''' Find the type of a Span '''
    if any(t.tag_ in CONJ_TAGS and t.ent_type_ not in ACCEPTED_ENTS for t in span):
        mention_type = MENTION_TYPE["LIST"]
    elif span.root.tag_ in PRP_TAGS:
        mention_type = MENTION_TYPE["PRONOMINAL"]
    elif span.root.ent_type_ in ACCEPTED_ENTS or span.root.tag_ in PROPER_TAGS:
        mention_type = MENTION_TYPE["PROPER"]
    else:
        mention_type = MENTION_TYPE["NOMINAL"]
    return mention_type
```
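A short usage sketch of the function above; the spaCy model name and the sentence are just placeholders (any English model with a tagger, parser, and NER should do).
```python
import spacy

nlp = spacy.load("en_core_web_sm")   # placeholder English pipeline
doc = nlp("Barack Obama and his wife met them at the conference.")

# Print the mention type for entities and base noun phrases.
for span in list(doc.ents) + list(doc.noun_chunks):
    print(span.text, get_span_type(span))
```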
|
https://github.com/huggingface/neuralcoref/issues/127
|
closed
|
[
"question",
"wontfix"
] | 2019-01-16T07:46:38Z
| 2019-06-21T08:31:42Z
| null |
joe32140
|
huggingface/transformers
| 114
|
What is the best dataset structure for BERT?
|
First I want to say thanks for setting up all this!
I am using BertForSequenceClassification and am wondering what the optimal way is to structure my sequences.
Right now my sequences are blog posts, which can be upwards of 400 words long.
Would it be better to split my blog posts into sentences and use the sentences as my sequences instead?
Thanks!
|
https://github.com/huggingface/transformers/issues/114
|
closed
|
[] | 2018-12-11T16:28:00Z
| 2018-12-11T20:57:45Z
| null |
wahlforss
|
huggingface/neuralcoref
| 113
|
what is the difference between the en_coref models?
|
For the three models (en_coref_lg, en_coref_md, en_coref_sm), which one has the best performance? Considering only performance, is lg the best?
|
https://github.com/huggingface/neuralcoref/issues/113
|
closed
|
[] | 2018-11-30T06:36:43Z
| 2019-04-11T12:14:11Z
| null |
Jasperty
|
huggingface/neuralcoref
| 110
|
Doesn't work when span is merged.
|
```python
import spacy

nlp = spacy.load('en_coref_sm')
text = nlp("Michelle Obama is the wife of former U.S. President Barack Obama. Prior to her role as first lady, she was a lawyer.")
spans = list(text.noun_chunks)
for span in spans:
    span.merge()
for word in text:
    print(word)
    if(word._.in_coref):
        print(text._.coref_clusters)
```
When the above code is run, it gives the following error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-98-4252d464f86d> in <module>()
1 for word in text:
2 print(word)
----> 3 if(word._.in_coref):
4 print(text._.coref_clusters)
~\Anaconda3\lib\site-packages\spacy\tokens\underscore.py in __getattr__(self, name)
29 default, method, getter, setter = self._extensions[name]
30 if getter is not None:
---> 31 return getter(self._obj)
32 elif method is not None:
33 return functools.partial(method, self._obj)
neuralcoref.pyx in __iter__()
span.pyx in __iter__()
span.pyx in spacy.tokens.span.Span._recalculate_indices()
IndexError: [E037] Error calculating span: Can't find a token ending at character offset 78.
```
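Not a fix, but a possible workaround sketch: read the coreference results before merging, and merge with spaCy 2.x's retokenizer rather than the deprecated span.merge(). This assumes the pip-installable neuralcoref package added to a standard spaCy pipeline instead of the bundled en_coref_sm model, and I have not verified that it sidesteps this exact error.
```python
import spacy
import neuralcoref  # assumed: neuralcoref 4.x added to a regular spaCy 2.x pipeline

nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)

doc = nlp("Michelle Obama is the wife of former U.S. President Barack Obama. "
          "Prior to her role as first lady, she was a lawyer.")

# Read the coreference results first, while the token offsets are still valid.
clusters = [[m.text for m in cluster.mentions] for cluster in doc._.coref_clusters]

# Then merge noun chunks with the retokenizer.
with doc.retokenize() as retokenizer:
    for chunk in list(doc.noun_chunks):
        retokenizer.merge(chunk)

print(clusters)
```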
|
https://github.com/huggingface/neuralcoref/issues/110
|
closed
|
[
"question",
"wontfix"
] | 2018-11-21T10:12:43Z
| 2019-06-17T14:22:21Z
| null |
lahsuk
|
huggingface/pytorch-openai-transformer-lm
| 19
|
what is the use of dropout in the Transformer?
|
https://github.com/huggingface/pytorch-openai-transformer-lm/blob/55ba4d78407ae12c7454dc8f3342f476be3dece5/model_pytorch.py#L161
|
https://github.com/huggingface/pytorch-openai-transformer-lm/issues/19
|
open
|
[] | 2018-07-05T16:18:48Z
| 2018-07-09T13:59:41Z
| null |
teucer
|
huggingface/neuralcoref
| 10
|
what is the training data for this project?
|
Is it the same as in the Clark and Manning paper?
|
https://github.com/huggingface/neuralcoref/issues/10
|
closed
|
[] | 2017-12-04T22:16:52Z
| 2017-12-19T01:40:18Z
| null |
xinyadu
|