jbochi committed
Commit 53ab2f0
1 Parent(s): d5a023a

List all models

Files changed (1)
README.md +6 -0
README.md CHANGED
@@ -431,6 +431,12 @@ T5ForConditionalGeneration files for Google's [Madlad-400](https://github.com/go
 
 Article: [MADLAD-400: A Multilingual And Document-Level Large Audited Dataset](https://arxiv.org/abs/2309.04662)
 
+Available models:
+- [3B](https://huggingface.co/jbochi/madlad400-3b-mt)
+- [7B](https://huggingface.co/jbochi/madlad400-7b-mt)
+- [7B-BT](https://huggingface.co/jbochi/madlad400-7b-mt-bt)
+- [10B](https://huggingface.co/jbochi/madlad400-10b-mt)
+
 Abstract:
 
 > We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.
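
Since the README describes these checkpoints as T5ForConditionalGeneration files, they can presumably be loaded with the standard transformers seq2seq API. The sketch below is not part of the commit: the choice of the 3B checkpoint, the use of `T5Tokenizer`, and the `<2pt>` target-language prefix are assumptions based on the MADLAD-400 release conventions and may need adjusting for a given model or language.

```python
# Minimal sketch (assumption, not from this commit): load one of the listed
# checkpoints and translate English to Portuguese with a "<2xx>" prefix token.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "jbochi/madlad400-3b-mt"  # smallest of the listed checkpoints
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# The target language is selected by prefixing its "<2xx>" token (assumed here).
inputs = tokenizer("<2pt> I love pizza!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```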