tFINE-900m-e16-d32-1024ctx
T5 model pretrained from scratch with nanoT5:
- ~900M parameters; 16 encoder layers, 32 decoder layers
- SentencePiece tokenizer with a 48k vocab and byte-pair fallback
- handles whitespace correctly (unlike the standard T5 tokenizer)
- 1024-token context length during pretraining
relative_attention_num_buckets was increased to 48 from the standard 32 to support the longer context length.
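For context, relative_attention_num_buckets controls T5's relative position bias: each query-key offset is mapped to one of num_buckets learned bias embeddings, with half the buckets covering exact small offsets and the rest covering logarithmically wider spans up to max_distance. A minimal pure-Python sketch of the standard T5 bucketing scheme (scalar formulation and function name are illustrative, not this model's code):

```python
import math

def relative_position_bucket(relative_position, num_buckets=32,
                             max_distance=128, bidirectional=True):
    """Map a relative offset (key_pos - query_pos) to a bucket index, T5-style."""
    ret = 0
    n = -relative_position
    if bidirectional:
        # Split buckets between "before" and "after" the query position.
        num_buckets //= 2
        ret += (n < 0) * num_buckets
        n = abs(n)
    else:
        n = max(n, 0)
    # First half of the buckets: one bucket per exact offset.
    max_exact = num_buckets // 2
    if n < max_exact:
        return ret + n
    # Remaining buckets: logarithmically spaced up to max_distance.
    val = max_exact + int(
        math.log(n / max_exact) / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return ret + min(val, num_buckets - 1)
```

With 48 buckets instead of 32, more distinct distance ranges get their own learned bias, which gives finer position resolution across the 1024-token context.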
Experiment logs
Training consisted of two phases: