---
tags:
- text2text-generation
license: mit
datasets:
- CohereForAI/aya_dataset
---
|
# Model Card of instructionMBERTv1 for Bertology |
|
|
|
A minimalistic multilingual instruction-tuned model built on an already well-analysed, pretrained encoder: mBERT.
|
This lets us research [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) patterns and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
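
As a minimal sketch of such an analysis, the snippet below loads a checkpoint and extracts the encoder attention weights. The model id `Bachstelze/instructionMBERTv1` and the example prompt are assumptions for illustration only; use the actual id of the released checkpoint.

```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

# Hypothetical checkpoint id, for illustration only.
model_name = "Bachstelze/instructionMBERTv1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

inputs = tokenizer("Translate to German: Good morning!", return_tensors="pt")
with torch.no_grad():
    outputs = model(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        decoder_input_ids=torch.tensor([[tokenizer.cls_token_id]]),
        output_attentions=True,
    )

# One tensor per encoder layer, each of shape (batch, heads, seq_len, seq_len):
# the raw material for Bertology-style attention analysis.
print(len(outputs.encoder_attentions), outputs.encoder_attentions[0].shape)
```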
|
|
|
The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert). |
|
For this purpose we used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [encoder-decoder models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder).
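
The snippet below is a minimal sketch of such a warm start with the Hugging Face API; the authoritative training script is in the repository linked above.

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Warm-start both encoder and decoder from the multilingual BERT checkpoint;
# the decoder's cross-attention weights are newly initialised.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google-bert/bert-base-multilingual-cased",
    "google-bert/bert-base-multilingual-cased",
)
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-multilingual-cased")

# Special tokens required by the seq2seq generation loop.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```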
|
|
|
## Training parameters |
|
|
|
- base model: "google-bert/bert-base-multilingual-cased" |
|
- trained for 8 epochs |
|
- batch size of 16 |
|
- 20000 warm-up steps |
|
- learning rate of 0.0001 |
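
As a rough illustration, these settings map onto Hugging Face `Seq2SeqTrainingArguments` as sketched below; the authoritative configuration is in the linked instructionBERT repository, and `output_dir` is only a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="instructionMBERTv1",  # placeholder output directory
    num_train_epochs=8,
    per_device_train_batch_size=16,
    warmup_steps=20_000,
    learning_rate=1e-4,
    predict_with_generate=True,
)
```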
|
|
|
## Purpose of instructionMBERT |
|
InstructionMBERT is intended for research purposes. Model-generated text should be treated as a starting point rather than a definitive solution, and users should be cautious when employing the model in their applications.