Update README.md

tags:
- keyphrase-generation
---

# Model Card for bart-base-kp20k

<!-- Provide a quick summary of what the model is/does. [Optional] -->

This is a BART-base model for keyphrase generation.

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is/does. -->

This is a BART-base model for keyphrase generation.

Specifically, we fine-tuned [bart-base](https://huggingface.co/facebook/bart-base) on the KP20k dataset in a ONE2MANY setting, that is, given a source text as input, the task is to generate keyphrases as a single sequence of delimiter-separated phrases.
During fine-tuning, gold keyphrases are arranged in the present-absent order, which was found to give the best results.
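
The target-side format is not spelled out in this excerpt; as a purely illustrative sketch (the ";" delimiter and the keyphrases below are assumptions, not taken from the card or from KP20k), a ONE2MANY training target could be built as follows:

```python
# Hypothetical illustration: the delimiter and the keyphrases are made up.
present_keyphrases = ["keyphrase generation", "sequence-to-sequence models"]  # occur in the source text
absent_keyphrases = ["natural language processing"]                           # do not occur in the source text

# ONE2MANY target: a single delimiter-separated sequence, present keyphrases first.
target_text = ";".join(present_keyphrases + absent_keyphrases)
```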

- **Developed by:** [boudinfl](https://boudinfl.github.io/)
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Parent Model:** [bart-base](https://huggingface.co/facebook/bart-base)

# Usage

```python
from transformers import BartForConditionalGeneration, AutoTokenizer

model = BartForConditionalGeneration.from_pretrained('taln-ls2n/bart-base-kp20k')
tokenizer = AutoTokenizer.from_pretrained('taln-ls2n/bart-base-kp20k')

# inputs are formatted as "title<s>abstract"
input_text = "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension<s>We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance."
```
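
The excerpt stops before the generation step. A minimal continuation sketch (the beam size, length limit, and the ";" output delimiter are assumptions, not taken from the card):

```python
# Continuation sketch; decoding settings and the output delimiter are assumptions.
inputs = tokenizer(input_text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, num_beams=4, max_length=128)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
keyphrases = [kp.strip() for kp in decoded.split(";") if kp.strip()]
print(keyphrases)
```
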
## Training Procedure

We fine-tuned [bart-base](https://huggingface.co/facebook/bart-base) on the training split of the [KP20k dataset](https://huggingface.co/datasets/taln-ls2n/kp20k) in a ONE2MANY setting, that is, given a source text as input, the task is to generate keyphrases as a single sequence of delimiter-separated phrases.
During fine-tuning, gold keyphrases are arranged in the present-absent order, which was found to give the best results.
The model was trained for 15 epochs.
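
Beyond the number of epochs, the card does not give training hyperparameters. The sketch below is one plausible way to set up such a run with the `transformers` `Seq2SeqTrainer`; everything except `num_train_epochs=15` (batch size, learning rate, column names, target preprocessing) is an assumption, not taken from the card:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, BartForConditionalGeneration,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
train = load_dataset("taln-ls2n/kp20k", split="train")

def preprocess(example):
    # Assumes hypothetical columns: "title", "abstract", and a "target" string holding the
    # gold keyphrases already joined in present-absent order with the chosen delimiter.
    enc = tokenizer(example["title"] + "<s>" + example["abstract"], truncation=True)
    enc["labels"] = tokenizer(text_target=example["target"], truncation=True)["input_ids"]
    return enc

train = train.map(preprocess, remove_columns=train.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-base-kp20k",
    num_train_epochs=15,             # the only value stated in the card
    per_device_train_batch_size=16,  # assumption
    learning_rate=5e-5,              # assumption
)
Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
).train()
```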

# Evaluation

| Eval | \\(F_1@M\\) | \\(F_1@5\\) | \\(F_1@10\\) |
| :-------- | ----------: | ----------: | -----------: |
| all       |        28.7 |        28.0 |         25.4 |
| present   |        37.3 |        35.5 |         29.2 |
| absent    |         2.4 |         5.9 |          5.8 |
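
For reading the table: \\(F_1@k\\) keeps only the top-\\(k\\) generated keyphrases before computing F1 against the gold keyphrases, while \\(F_1@M\\) keeps all \\(M\\) generated keyphrases. A minimal illustrative sketch of the metric (exact matching after lowercasing; the exact protocol behind the reported scores, e.g. stemming and duplicate handling, is not given in this excerpt):

```python
def f1_at_k(predicted, gold, k=None):
    # Keep the top-k predictions (all of them when k is None), ignoring case and duplicates.
    preds = list(dict.fromkeys(p.lower().strip() for p in predicted))[:k]
    golds = {g.lower().strip() for g in gold}
    correct = sum(p in golds for p in preds)
    if correct == 0:
        return 0.0
    precision, recall = correct / len(preds), correct / len(golds)
    return 2 * precision * recall / (precision + recall)

# Made-up example: two of the three predictions match the gold keyphrases.
print(f1_at_k(["keyphrase generation", "bart", "neural networks"],
              ["keyphrase generation", "neural networks"], k=5))
```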

# Environmental Impact

Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.432 kgCO2eq/kWh.
A cumulative of 63 hours of computation was performed on hardware of type RTX 2080 Ti (TDP of 250W).

Total emissions are estimated to be 6.8 kgCO2eq, of which 0 percent was directly offset.
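
For reference, the figure is consistent with the calculator's formula (power draw at the stated TDP, times runtime, times carbon efficiency): \\(0.250\ \text{kW} \times 63\ \text{h} \times 0.432\ \text{kgCO}_2\text{eq/kWh} \approx 6.8\ \text{kgCO}_2\text{eq}\\).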

Estimations were conducted using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).