Bioul2-mini-nl8
T5 model pretrained on a biological dataset using the UL2 (Mixture-of-Denoisers) objective. The T5 model was introduced in this paper and first released at this page. The UL2 objective was introduced in this paper and first released on this page.
Note: The Hugging Face inference widget is deactivated because this model needs text-to-text fine-tuning on a specific downstream task to be useful in practice.
Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Bioul2 is a transformers model pretrained on a very large corpus of biological data (25 million abstracts) in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and outputs from those texts.
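As a quick illustration, a checkpoint like this can be loaded with the transformers library. This is a minimal sketch: the repository id below is a placeholder assumption, so replace it with this model's actual Hub id.

```python
# Minimal sketch: loading the pretrained checkpoint with the transformers library.
# The repository id is a placeholder assumption; replace it with this model's actual Hub id.
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo_id = "your-namespace/Bioul2-mini-nl8"  # hypothetical Hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = T5ForConditionalGeneration.from_pretrained(repo_id)
```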
This model used the T5 v1.1 improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see here
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on the self-supervised objective only, without mixing in the downstream tasks
- No parameter sharing between the embedding and classifier layer

This model also used the "efficient" T5 architecture findings presented in this paper. In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the t5-efficient-mini-nl8 architecture's layer depth, which means both the encoder and the decoder have 8 transformer layers, compared to the original T5 "mini" model's architecture of 4 transformer layers.
In total, this model has 72 million parameters.
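As a rough sanity check of the depth and size stated above, the configuration and parameter count can be inspected once the checkpoint is loaded. The repository id is again a placeholder assumption.

```python
# Sketch: checking the layer depth and parameter count stated above.
# The repository id is a placeholder assumption.
from transformers import AutoConfig, T5ForConditionalGeneration

repo_id = "your-namespace/Bioul2-mini-nl8"  # hypothetical Hub id
config = AutoConfig.from_pretrained(repo_id)
print(config.num_layers, config.num_decoder_layers)  # expected: 8 encoder and 8 decoder layers

model = T5ForConditionalGeneration.from_pretrained(repo_id)
print(sum(p.numel() for p in model.parameters()))  # expected to be around 72 million
```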
UL2 pretraining objective
This model was pretrained with UL2's Mixture-of-Denoisers (MoD) objective, which combines diverse pre-training paradigms. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, the denoising tasks are sampled based on user-specified ratios.
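As a rough illustration of the mixture, the sketch below encodes the three denoiser families and samples among them. The span lengths, corruption rates, and mixture weights shown loosely follow the UL2 paper's defaults and are assumptions, not the exact settings used to pretrain this model.

```python
# Illustrative sketch of a Mixture-of-Denoisers configuration.
# Span lengths, corruption rates, and mixture weights are assumptions loosely based on
# the UL2 paper, not the exact settings used to pretrain this model.
import random

# (name, paradigm token, mean span length, corruption rate, sampling weight)
DENOISERS = [
    ("R-denoising", "[NLU]", 3, 0.15, 0.5),     # regular T5-style span corruption
    ("X-denoising", "[NLG]", 32, 0.50, 0.25),   # extreme corruption: long spans / high rate
    ("S-denoising", "[S2S]", None, None, 0.25), # sequential PrefixLM: corrupt only the suffix
]

def sample_denoiser():
    """Sample one denoising task according to the mixture weights."""
    weights = [d[4] for d in DENOISERS]
    return random.choices(DENOISERS, weights=weights, k=1)[0]

name, token, mean_span, rate, _ = sample_denoiser()
print(name, token)
```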
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training denoising task. During pretraining, a paradigm token is inserted into the input ([NLU] for R-denoising, [NLG] for X-denoising, or [S2S] for S-denoising) to indicate the denoising task at hand. Then, during fine-tuning, the same token should be inserted into the input to get the best performance on different downstream tasks.
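In practice this just means prepending the token to the input text before tokenization, assuming the paradigm tokens were added to the tokenizer vocabulary during pretraining (an assumption here):

```python
# Sketch: prepending a paradigm token, assuming [NLU]/[NLG]/[S2S] are in the tokenizer vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-namespace/Bioul2-mini-nl8")  # hypothetical Hub id
text = "[NLU] The enzyme catalyzes the hydrolysis of ATP."  # hypothetical example input
inputs = tokenizer(text, return_tensors="pt")
```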
Intended uses & limitations
This model was only pretrained in a self-supervised way, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model.
Note: You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips here, for example.
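For example, mixed precision can be left disabled in the training arguments so that fine-tuning runs in full fp32. This is only a sketch: the other hyperparameters below are placeholders, not recommendations.

```python
# Sketch: keeping fine-tuning in full fp32 by leaving mixed precision disabled.
# Hyperparameter values are placeholders, not recommendations.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bioul2-mini-nl8-finetuned",  # hypothetical output directory
    fp16=False,  # no fp16 mixed precision
    bf16=False,  # no bf16 mixed precision, i.e. full fp32
    learning_rate=1e-4,
    num_train_epochs=3,
)
```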
Note: For fine-tuning, you can most likely get better results if you insert a prefix token of [NLU], [NLG], or [S2S] into your input texts. For general language understanding fine-tuning tasks, you could use the [NLU] token. For GPT-style causal language generation, you could use the [S2S] token. The [NLG] token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could perhaps also be used for language generation fine-tuning.
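A preprocessing function along these lines could prepend the chosen paradigm token before tokenization. The dataset column names and the choice of [NLU] are assumptions for illustration.

```python
# Sketch: prepending the [NLU] paradigm token in a datasets-style preprocessing function.
# The column names ("text", "label_text") and the [NLU] choice are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-namespace/Bioul2-mini-nl8")  # hypothetical Hub id

def preprocess(batch):
    inputs = ["[NLU] " + text for text in batch["text"]]
    model_inputs = tokenizer(inputs, truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["label_text"], truncation=True, max_length=32)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Usage (hypothetical): tokenized_dataset = raw_dataset.map(preprocess, batched=True)
```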
Acknowledgements
This project would not have been possible without compute generously provided by Google through the Google TPU Research Cloud. Thanks to the Finnish-NLP authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to Yeb Havinga for helping me get started with the t5x framework.