lchaloupsky committed
Commit: 78fb005
Parent: 5c6c2e1

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -9,7 +9,7 @@ datasets:
  This model was trained as a part of the [master thesis](https://dspace.cuni.cz/handle/20.500.11956/176356?locale-attribute=en) on the Czech part of the [OSCAR](https://huggingface.co/datasets/oscar) dataset.

  ## Introduction
- Czech-GPT2-OSCAR (Czech GPT-2 small) is a state-of-the-art language model for Czech based on the GPT-2 small model. Unlike the original GPT-2 small model, this model is trained to predict only 512 tokens instead of 1024 as it serves as a basis for the [Czech-GPT2-Medical|https://huggingface.co/lchaloupsky/czech-gpt2-medical].
+ Czech-GPT2-OSCAR (Czech GPT-2 small) is a state-of-the-art language model for Czech based on the GPT-2 small model. Unlike the original GPT-2 small model, this model is trained to predict only 512 tokens instead of 1024 as it serves as a basis for the [Czech-GPT2-Medical](https://huggingface.co/lchaloupsky/czech-gpt2-medical).

  The model was trained on the Czech part of the [OSCAR](https://huggingface.co/datasets/oscar) dataset using transfer learning and fine-tuning techniques in about a week on one NVIDIA A100 SXM4 40GB GPU, with a total of 21 GB of training data.

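For context on how a model like the one described in this README is typically loaded, here is a minimal usage sketch with the Hugging Face `transformers` library. It assumes the model is published under the repo id `lchaloupsky/czech-gpt2-oscar` (inferred from the author and model name, not stated in this commit) and that generation should stay within the 512-token context mentioned above.

```python
# Minimal usage sketch. Assumption: the model is available at "lchaloupsky/czech-gpt2-oscar".
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "lchaloupsky/czech-gpt2-oscar"  # assumed repo id, not confirmed by this commit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Praha je hlavní město"  # Czech: "Prague is the capital city"
inputs = tokenizer(prompt, return_tensors="pt")

# The README notes a 512-token training context, so keep prompt + generated tokens under 512.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```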