Update README.md
README.md CHANGED
@@ -34,7 +34,7 @@ Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) a
 ## Training data

 Slovak GPT-J-162M was trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model.

-The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information.
+The dataset was preprocessed and cleaned in a specific way that involves a few minor caveats, so to achieve the expected performance, refer to the [How to use] section. Keep in mind that despite the effort to remove inappropriate parts of the corpus, the model may still generate sensitive content or leak sensitive information.

 ## Training procedure