meliksahturker committed
Commit 63459e3
Parent(s): a2d7c59

Upload TFMBartForConditionalGeneration

Browse files:
- README.md +45 -81
- config.json +1 -3
- generation_config.json +1 -2
- tf_model.h5 +2 -2
README.md
CHANGED
@@ -1,83 +1,47 @@
 ---
-
-
-
-
-
-inference:
-  parameters:
-    max_new_tokens: 128
-widget:
-- text: >-
-    Ben buraya bazı <MASK> istiyorum.
-  example_title: Masked Language Modeling
-license: cc-by-nc-sa-4.0
+tags:
+- generated_from_keras_callback
+model-index:
+- name: VBART-Medium-Base
+  results: []
 ---
-
-
-
-
-VBART
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Its purpose is to serve as the base model to be fine-tuned for downstream tasks.
-
-### Training Procedure
-Pre-trained for a total of 63B tokens.
-#### Hardware
-- **GPUs**: 8 x Nvidia A100-80 GB
-#### Software
-- TensorFlow
-#### Hyperparameters
-##### Pretraining
-- **Training regime**: fp16 mixed precision
-- **Training objective**: Sentence permutation and span masking (mask lengths sampled from a Poisson distribution with λ = 3.5, masking 30% of tokens)
-- **Optimizer**: Adam optimizer (β1 = 0.9, β2 = 0.98, ε = 1e-6)
-- **Scheduler**: Custom scheduler from the original Transformer paper (20,000 warm-up steps)
-- **Dropout**: 0.1
-- **Initial learning rate**: 5e-6
-- **Training tokens**: 63B
-
-
-## Citation
-```
-@article{turker2024vbart,
-  title={VBART: The Turkish LLM},
-  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
-  journal={arXiv preprint arXiv:2403.01308},
-  year={2024}
-}
-```
+
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
+
+# VBART-Medium-Base
+
+This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+It achieves the following results on the evaluation set:
+
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- optimizer: None
+- training_precision: float32
+
+### Training results
+
+
+
+### Framework versions
+
+- Transformers 4.39.0
+- TensorFlow 2.13.0
+- Datasets 2.18.0
+- Tokenizers 0.15.2
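The scheduler cited in the removed card is the warm-up/decay schedule from the original Transformer paper ("Attention Is All You Need"). As a reference point, here is a minimal sketch of that schedule using the card's 20,000 warm-up steps; the `d_model` default is an assumption, since the card does not state the model dimension:

```python
# Inverse-square-root schedule from "Attention Is All You Need".
# warmup_steps = 20_000 comes from the removed card; d_model = 1024 is an
# assumption (the card does not state the model dimension).
def transformer_lr(step: int, d_model: int = 1024, warmup_steps: int = 20_000) -> float:
    step = max(step, 1)  # guard against step = 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```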
config.json
CHANGED
@@ -1,5 +1,4 @@
 {
-  "_name_or_path": "tfhf_model_checkpointmedium_epoch_0024_opt.hdf5",
   "activation_dropout": 0.0,
   "activation_function": "gelu",
   "architectures": [
@@ -28,8 +27,7 @@
   "num_hidden_layers": 6,
   "pad_token_id": 0,
   "scale_embedding": false,
-  "torch_dtype": "float32",
   "transformers_version": "4.39.0",
   "use_cache": true,
-  "vocab_size":
+  "vocab_size": 32001
 }
generation_config.json
CHANGED
@@ -5,6 +5,5 @@
   "eos_token_id": 3,
   "forced_eos_token_id": 3,
   "pad_token_id": 0,
-  "transformers_version": "4.39.0",
-  "max_new_tokens": 128
+  "transformers_version": "4.39.0"
 }
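With `max_new_tokens` dropped from generation_config.json, that limit is no longer applied by default; a minimal sketch of passing it per call instead (`model` and `inputs` as in the loading sketch after the tf_model.h5 section):

```python
# max_new_tokens no longer ships in generation_config.json, so pass it
# explicitly at call time; model and inputs are assumed loaded elsewhere.
outputs = model.generate(**inputs, max_new_tokens=128)
```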
tf_model.h5
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f933c639f9f5e2c36e9824199dab76f83a00aab175f6444e88468e8300ce8355
+size 502008588
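Per the commit title, tf_model.h5 holds a TensorFlow MBart checkpoint. A minimal loading sketch, assuming the repo id `vngrs-ai/VBART-Medium-Base` (inferred from the model-index name; the namespace is an assumption) and that AutoTokenizer can resolve the repo's tokenizer; the prompt is the masked-language-modeling example from the removed widget:

```python
from transformers import AutoTokenizer, TFMBartForConditionalGeneration

# Repo id is an assumption based on the model-index name in the new card.
repo_id = "vngrs-ai/VBART-Medium-Base"
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # assumes tokenizer files exist in the repo
model = TFMBartForConditionalGeneration.from_pretrained(repo_id)

# The removed widget example: Turkish masked-token infilling.
inputs = tokenizer("Ben buraya bazı <MASK> istiyorum.", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```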