m-ric (HF staff) committed
Commit b1db352
1 Parent(s): f8d333a

Update README.md

Files changed (1):
  1. README.md +7 -2
README.md CHANGED
@@ -210,6 +210,11 @@ metrics:
 - spbleu
 - chrf++
 inference: false
+
+co2_eq_emissions:
+  emissions: 104310000
+  source: "No Language Left Behind: Scaling Human-Centered Machine Translation"
+  hardware_used: "NVIDIA A100"
 ---
 
 # NLLB-200
@@ -219,7 +224,7 @@ This is the model card of NLLB-200's distilled 600M variant.
 Here are the [metrics](https://tinyurl.com/nllb200densedst600mmetrics) for that particular checkpoint.
 
 - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
-- Paper or other resource for more information NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv, 2022
+- Paper or other resource for more information: [NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation](https://huggingface.co/papers/2207.04672)
 - License: CC-BY-NC
 - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
 
@@ -250,4 +255,4 @@ SentencePiece model is released along with NLLB-200.
 • Our model has been tested on the Wikimedia domain with limited investigation on other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments.
 
 ## Carbon Footprint Details
-• The carbon dioxide (CO2e) estimate is reported in Section 8.8.
+• The carbon dioxide (CO2e) estimate is reported in Section 8.8 and in the model card.
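
The `co2_eq_emissions` block added by this commit follows the Hub's model-card metadata schema, so it can be read back programmatically. Below is a minimal sketch, assuming this README belongs to the `facebook/nllb-200-distilled-600M` repo (the distilled 600M checkpoint the card describes) and that the `emissions` value follows the Hub's convention of grams of CO2-equivalent:

```python
# Sketch: reading the co2_eq_emissions metadata added by this commit.
# Assumption: the card lives at facebook/nllb-200-distilled-600M; adjust
# the repo id if this README belongs to a different checkpoint.
from huggingface_hub import ModelCard

card = ModelCard.load("facebook/nllb-200-distilled-600M")
meta = card.data.to_dict()

co2 = meta.get("co2_eq_emissions", {})
print(co2.get("source"))         # "No Language Left Behind: ..."
print(co2.get("hardware_used"))  # "NVIDIA A100"

# If the value follows the Hub's grams-of-CO2eq convention,
# 104310000 g works out to roughly 104.31 tonnes CO2e.
grams = co2.get("emissions", 0)
print(f"{grams / 1_000_000:.2f} t CO2e")
```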
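For the `spbleu` and `chrf++` metrics declared in the front matter, here is a hedged sketch of how they are commonly computed with the `sacrebleu` library. The hypothesis/reference strings are placeholders, and the `flores200` SentencePiece tokenizer used for spBLEU is only shipped in recent sacrebleu releases (older ones provide `flores101`):

```python
# Sketch: computing chrF++ and spBLEU with sacrebleu.
# The toy strings below are placeholders, not NLLB outputs.
from sacrebleu.metrics import BLEU, CHRF

hyps = ["The cat sits on the mat."]
refs = [["The cat sat on the mat."]]  # one reference stream, parallel to hyps

# word_order=2 enables word bigrams on top of character n-grams -> chrF++
chrf = CHRF(word_order=2)
print(chrf.corpus_score(hyps, refs))

# spBLEU: BLEU over SentencePiece-tokenized text; assumes a sacrebleu
# version that bundles the "flores200" tokenizer (else use "flores101").
spbleu = BLEU(tokenize="flores200")
print(spbleu.corpus_score(hyps, refs))
```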