Siddharth63 committed
Commit 6ac7ad8
1 Parent(s): 44e3431

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -1,5 +1,7 @@
  ---
- license: other
+ license: artistic-2.0
+ datasets:
+ - Siddharth63/biological_dataset
  ---
 
  # Bioul2-small
@@ -34,4 +36,4 @@ This model was only pretrained in a self-supervised way excluding any supervised
  Note: For fine-tuning, you can most likely get better results if you prepend a prefix token of [NLU], [NLG], or [S2S] to your input texts. For general language understanding fine-tuning tasks, you could use the [NLU] token. For GPT-style causal language generation, you could use the [S2S] token. The [NLG] token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could perhaps be used for language generation fine-tuning as well.
 
  ## Acknowledgements
- This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework.
+ This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework.
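To make the prefix-token note in the diff concrete, here is a minimal sketch of prepending the [NLU] mode token at inference time. It assumes the checkpoint loads as a standard T5 model through the transformers library; the repo id `Siddharth63/Bioul2-small` (taken from the README title), the example sentence, and the use of a T5 sentinel token are illustrative assumptions, not confirmed by this commit.

```python
# Minimal sketch: prepend a UL2 mode token ([NLU]) to the input text,
# assuming the checkpoint is a standard T5 model loadable via transformers.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "Siddharth63/Bioul2-small"  # assumed repo id, from the README title
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# The mode token goes at the start of the input; per the note above, [S2S]
# would be used for causal generation and [NLG] as a possible middle ground.
text = "[NLU] Aspirin is a nonsteroidal anti-inflammatory <extra_id_0> ."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```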