Bachstelze committed on
Commit 4ba7123
1 Parent(s): 380ab22

Update README.md

Files changed (1)
  1. README.md +27 -0
README.md CHANGED

---
tags:
- text2text-generation
license: mit
datasets:
- CohereForAI/aya_dataset
---

# Model Card of instructionMBERTv1 for Bertology

A minimalistic instruction model with an already well-analyzed and pretrained encoder like multilingual BERT (mBERT).
This lets us research [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
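
As a small illustration of the kind of analysis this enables, the sketch below pulls per-layer attention maps out of the encoder with the standard `transformers` API. It loads the base checkpoint listed under the training parameters rather than the instruction-tuned weights, and the input sentence is purely illustrative:

```python
from transformers import AutoModel, AutoTokenizer

# A minimal sketch of inspecting encoder attention for Bertology-style analysis.
# We load the base mBERT checkpoint here; the instruction-tuned weights
# would be inspected the same way.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-multilingual-cased")
model = AutoModel.from_pretrained(
    "google-bert/bert-base-multilingual-cased", output_attentions=True
)

inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
outputs = model(**inputs)

# One attention tensor per layer, each shaped (batch, heads, seq_len, seq_len).
print(len(outputs.attentions), outputs.attentions[0].shape)
```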

The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
We used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
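
A minimal sketch of that warm-starting step, following the linked blog post; the authoritative setup is in the training code above:

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Warm-start a seq2seq model by tying two pretrained mBERT checkpoints together
# as encoder and decoder (the cross-attention weights are freshly initialized).
checkpoint = "google-bert/bert-base-multilingual-cased"
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)
tokenizer = BertTokenizer.from_pretrained(checkpoint)

# Generation needs the special-token ids set on the shared config.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```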

## Training parameters

- base model: "google-bert/bert-base-multilingual-cased"
- trained for 8 epochs
- batch size of 16
- 20000 warm-up steps
- learning rate of 0.0001
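
For orientation, these settings map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows. This is an assumed sketch, not the actual configuration (which lives in the instructionBERT repository), and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Assumed mapping of the hyperparameters listed above onto TrainingArguments;
# see the instructionBERT repository for the real training configuration.
training_args = Seq2SeqTrainingArguments(
    output_dir="instructionMBERTv1",  # placeholder
    num_train_epochs=8,
    per_device_train_batch_size=16,
    warmup_steps=20000,
    learning_rate=1e-4,  # 0.0001
)
```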

## Purpose of instructionMBERT

InstructionMBERT is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.