# gpt2-finetuned-scientific-articles

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on scientific articles.
It achieves the following results on the evaluation set:
- Loss: 2.3793

## Model description

This model is GPT-2 fine-tuned for causal language modeling on scientific articles.

## Intended uses & limitations

This model is intended for prompts about algorithmic bias.
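As a sketch of the intended use, text can be generated with the `transformers` text-generation pipeline. The repo id below is a placeholder, not this model's actual Hugging Face Hub path; substitute the real one.

```python
def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Generate a continuation of `prompt` with the fine-tuned model.

    NOTE: "gpt2-finetuned-scientific-articles" is a placeholder repo id;
    replace it with the model's actual Hub path before running.
    """
    from transformers import pipeline

    # Build a text-generation pipeline around the fine-tuned GPT-2 checkpoint.
    generator = pipeline("text-generation", model="gpt2-finetuned-scientific-articles")
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=True)
    return out[0]["generated_text"]


if __name__ == "__main__":
    print(generate("Algorithmic bias in machine learning arises when"))
```

As with base GPT-2, generated text can be fluent but factually wrong, so outputs should be verified before use.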

## Training and evaluation data

This model was trained on freely accessible full-text articles obtained from a PubMed Central search on algorithmic bias. The pmc_result_algorithmicbias.txt file contains the list of PMC IDs used. Due to technical and time constraints, it was fine-tuned only on the introduction sections; training on other sections is planned.

## Training procedure