meliksahturker committed
Commit 63459e3
1 Parent(s): a2d7c59

Upload TFMBartForConditionalGeneration

Files changed (4)
  1. README.md +45 -81
  2. config.json +1 -3
  3. generation_config.json +1 -2
  4. tf_model.h5 +2 -2
README.md CHANGED
@@ -1,83 +1,47 @@
 ---
-language:
-- tr
-arXiv: 2403.01308
-library_name: transformers
-pipeline_tag: text2text-generation
-inference:
-  parameters:
-    max_new_tokens: 128
-widget:
-- text: >-
-    Ben buraya bazı <MASK> istiyorum.
-  example_title: Masked Language Modeling
-license: cc-by-nc-sa-4.0
+tags:
+- generated_from_keras_callback
+model-index:
+- name: VBART-Medium-Base
+  results: []
 ---
-# VBART Model Card
-
-## Model Description
-
-VBART is the first sequence-to-sequence LLM pre-trained from scratch on large-scale Turkish corpora. It was pre-trained by VNGRS in February 2023.
-When fine-tuned, the model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation.
-It outperforms its multilingual counterparts despite being much smaller.
-
-This repository contains pre-trained TensorFlow and Safetensors weights of VBART-Medium-Base.
-
-- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
-- **Model type:** Transformer encoder-decoder based on the mBART architecture
-- **Language(s) (NLP):** Turkish
-- **License:** CC BY-NC-SA 4.0
-- **Fine-tuned from:** VBART-Large
-- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
-## How to Get Started with the Model
-```python
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-Medium-Base",
-                                          model_input_names=['input_ids', 'attention_mask'])
-# For GPU inference, uncomment the device_map kwarg and remove the parenthesis before the '#'
-model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-Medium-Base")#, device_map="auto")
-
-# Input text
-input_text = "Ben buraya bazı <MASK> istiyorum."
-
-token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
-outputs = model.generate(**token_input)
-print(tokenizer.decode(outputs[0]))
-```
-
-## Training Details
-
-### Training Data
-The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus), which is curated by cleaning and filtering the Turkish portions of the [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. Both consist of unstructured web-crawl documents; more information can be found on their respective pages. The data is filtered using a set of heuristics and rules explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
-
-### Limitations
-This model is the pre-trained base model and is capable of masked language modeling.
-Its purpose is to serve as the base model to be fine-tuned for downstream tasks.
-
-### Training Procedure
-Pre-trained for a total of 63B tokens.
-#### Hardware
-- **GPUs**: 8 x Nvidia A100-80 GB
-#### Software
-- TensorFlow
-#### Hyperparameters
-##### Pretraining
-- **Training regime**: fp16 mixed precision
-- **Training objective**: sentence permutation and span masking (mask lengths sampled from a Poisson distribution with λ = 3.5, masking 30% of tokens)
-- **Optimizer**: Adam (β1 = 0.9, β2 = 0.98, ε = 1e-6)
-- **Scheduler**: custom scheduler from the original Transformer paper (20,000 warm-up steps)
-- **Dropout**: 0.1
-- **Initial learning rate**: 5e-6
-- **Training tokens**: 63B
-
-
-## Citation
-```
-@article{turker2024vbart,
-  title={VBART: The Turkish LLM},
-  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
-  journal={arXiv preprint arXiv:2403.01308},
-  year={2024}
-}
-```
+
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
+
+# VBART-Medium-Base
+
+This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+It achieves the following results on the evaluation set:
+
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- optimizer: None
+- training_precision: float32
+
+### Training results
+
+
+
+
+### Framework versions
+
+- Transformers 4.39.0
+- TensorFlow 2.13.0
+- Datasets 2.18.0
+- Tokenizers 0.15.2
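
Note: this commit swaps the hand-written model card for an auto-generated Keras stub, and the usage example it removes is PyTorch-only even though the uploaded weights are TensorFlow. A minimal TF sketch for reference, assuming the repo id and `<MASK>` prompt from the removed card (nothing below is part of the commit itself):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Repo id and masked prompt are taken from the removed model card.
tokenizer = AutoTokenizer.from_pretrained(
    "vngrs-ai/VBART-Medium-Base",
    model_input_names=["input_ids", "attention_mask"],
)
model = TFAutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-Medium-Base")

inputs = tokenizer("Ben buraya bazı <MASK> istiyorum.", return_tensors="tf")
# Pass the length budget explicitly: this commit also removes the
# max_new_tokens default from generation_config.json (see below).
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```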
config.json CHANGED
@@ -1,5 +1,4 @@
 {
-  "_name_or_path": "tfhf_model_checkpointmedium_epoch_0024_opt.hdf5",
   "activation_dropout": 0.0,
   "activation_function": "gelu",
   "architectures": [
@@ -28,8 +27,7 @@
   "num_hidden_layers": 6,
   "pad_token_id": 0,
   "scale_embedding": false,
-  "torch_dtype": "float32",
   "transformers_version": "4.39.0",
   "use_cache": true,
-  "vocab_size": 32000
+  "vocab_size": 32001
 }
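
The functional change here is `vocab_size` going from 32000 to 32001; `_name_or_path` (a local checkpoint path) and `torch_dtype` (PyTorch-side metadata) appear to be dropped simply because the TF re-export does not write them. The extra vocabulary entry is presumably an added special token (the removed card masks with `<MASK>`; the diff itself does not say). A quick consistency check, assuming the same repo id as above:

```python
from transformers import AutoConfig, AutoTokenizer

# Assumed repo id, taken from the removed model card, not from this diff.
config = AutoConfig.from_pretrained("vngrs-ai/VBART-Medium-Base")
tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-Medium-Base")

# The embedding matrix needs at least one row per tokenizer entry,
# otherwise generate() can index out of range.
assert config.vocab_size == 32001
assert len(tokenizer) <= config.vocab_size
```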
generation_config.json CHANGED
@@ -5,6 +5,5 @@
   "eos_token_id": 3,
   "forced_eos_token_id": 3,
   "pad_token_id": 0,
-  "transformers_version": "4.39.0",
-  "max_new_tokens": 128
+  "transformers_version": "4.39.0"
 }
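
With the `max_new_tokens: 128` default gone from generation_config.json, `generate()` falls back to the library default budget (`max_length=20` in Transformers 4.39) unless callers set a limit themselves, e.g.:

```python
# Assuming model and inputs from the sketch above; restores the old 128-token cap.
outputs = model.generate(**inputs, max_new_tokens=128)
```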
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:da8417d8d822f93590092b1e6502c0967edf5a59dffb8ded9686707dd28fc3f7
-size 502005512
+oid sha256:f933c639f9f5e2c36e9824199dab76f83a00aab175f6444e88468e8300ce8355
+size 502008588
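
As a sanity check, the LFS pointer grows by 502008588 - 502005512 = 3076 bytes, which lines up with the vocabulary change if the hidden size is 768 (an assumption; neither d_model nor the stored dtype appears in this diff): one extra float32 embedding row is 768 × 4 = 3072 bytes, and an MBart-style final-logits bias would add one more 4-byte entry per vocabulary item.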