m-ric (HF staff) committed
Commit d78ddf9
Parent: 1a07f7d

Update README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED
@@ -210,6 +210,11 @@ metrics:
 - spbleu
 - chrf++
 inference: false
+
+co2_eq_emissions:
+  emissions: 104310000
+  source: "Paper: No Language Left Behind: Scaling Human-Centered Machine Translation. This is the number for the whole NLLB-200 project, that includes other models."
+  hardware_used: "NVIDIA A100"
 ---
 
 # NLLB-200
@@ -219,7 +224,7 @@ This is the model card of NLLB-200's 3.3B variant.
 Here are the [metrics](https://tinyurl.com/nllb200dense3bmetrics) for that particular checkpoint.
 
 - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
-- Paper or other resource for more information NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv, 2022
+- Paper or other resource for more information: [NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation](https://huggingface.co/papers/2207.04672)
 - License: CC-BY-NC
 - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
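
The added `co2_eq_emissions` block follows the Hub's model-card metadata schema, where `emissions` appears to be given in grams of CO₂-eq (104,310,000 g ≈ 104.31 t), and the `source` field notes this is the whole-project figure, not this checkpoint alone. A minimal sketch of reading the new metadata back programmatically, assuming the `huggingface_hub` `ModelCard` API and the repo id `facebook/nllb-200-3.3B` (the repo id is not stated in this diff):

```python
# Minimal sketch: read the co2_eq_emissions metadata back from the model card.
# Assumption: the checkpoint lives at "facebook/nllb-200-3.3B" (repo id not
# stated in this diff); adjust as needed.
from huggingface_hub import ModelCard

card = ModelCard.load("facebook/nllb-200-3.3B")
meta = card.data.to_dict()

co2 = meta.get("co2_eq_emissions", {})
print(co2.get("emissions"))      # 104310000 -> grams of CO2-eq, i.e. ~104.31 t
print(co2.get("source"))         # paper reference; whole-project figure
print(co2.get("hardware_used"))  # "NVIDIA A100"
```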