jbochi committed
Commit 2cf57e0
1 Parent(s): 4f38ad1

Add candle instructions

Files changed (1): README.md (+32 -1)
README.md CHANGED
@@ -441,6 +441,10 @@ Abstract:
 
 > We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.
 
+ ## Usage
+
+ Usage with Huggingface's transformers:
+
 ```python
 from transformers import T5ForConditionalGeneration, T5Tokenizer, GenerationConfig
 
@@ -455,4 +459,31 @@ tokenizer.decode(outputs[0], skip_special_tokens=True)
 # Eu adoro pizza!
 ```
 
- Colab to generate these files is [here](https://colab.research.google.com/drive/1rZ2NRyl2zwmg0sQ2Wi-uZZF48iVYulTC#scrollTo=pVODoE6gA9sw).
+ Usage with [candle](https://github.com/huggingface/candle):
+
+ ```bash
+ $ cargo run --example t5 --release -- \
+   --model-id "jbochi/madlad400-3b-mt" \
+   --prompt "<2de> How are you, my friend?" \
+   --decode --temperature 0
+ ```
+
+ We also provide a quantized model (1.65 GB vs. the original 11.8 GB file):
+
+ ```bash
+ cargo run --example quantized-t5 --release -- \
+   --model-id "jbochi/madlad400-3b-mt" --weight-file "model-q4k.gguf" \
+   --prompt "<2de> How are you, my friend?" \
+   --temperature 0
+ ...
+ Wie geht es dir, mein Freund?
+ ```
+
+
+ ## Model conversion
+
+ I'm not affiliated with Google and was not involved in this research.
+
+ The Colab notebook I used to generate these files is [here](https://colab.research.google.com/drive/1rZ2NRyl2zwmg0sQ2Wi-uZZF48iVYulTC#scrollTo=pVODoE6gA9sw).
+
+ Quantization was done with candle following these [instructions](https://github.com/huggingface/candle/tree/main/candle-examples/examples/quantized-t5#generating-quantized-weight-files).
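
Note: the transformers snippet in the hunks above is truncated by the diff; only the import and the decoded output appear as context lines. As a reference, here is a minimal sketch of what the full usage likely looks like. The model id `jbochi/madlad400-3b-mt` comes from the candle commands in this commit; the `<2pt>` target-language token, the device placement, and the `generate` call are assumptions based on standard T5 usage in transformers, not text from this commit.

```python
# Minimal sketch (see assumptions above): translate English to Portuguese
# with the MADLAD-400 3B MT checkpoint via Hugging Face transformers.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "jbochi/madlad400-3b-mt"  # model id taken from the candle commands above
model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")
tokenizer = T5Tokenizer.from_pretrained(model_name)

# MADLAD-400 selects the target language with a <2xx> prefix token
# (assumed here: <2pt> for Portuguese).
text = "<2pt> I love pizza!"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

outputs = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected, per the context line in the diff: Eu adoro pizza!
```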