Commit 19f2e7e (parent: a706d31), committed by mauriceweber

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -209,8 +209,8 @@ RedPajama-V2 is an open dataset for training large language models and includes
  | ccnet_language_score | score of the language identification model | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
  | ccnet_length | number of characters | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
  | ccnet_nlines | number of lines | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
- | ccnet_original_length | number of characters before in-document line deduplication | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
- | ccnet_original_nlines | number of lines before in-document line deduplication | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
+ | ccnet_original_length | number of characters before line-level deduplication | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
+ | ccnet_original_nlines | number of lines before line-level deduplication | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
  | ccnet_perplexity | perplexity of an LM trained on Wikipedia | CCNet | [CCNet](https://github.com/facebookresearch/cc_net) |
  | rps_doc_books_importance | Given a bag of {1,2}-wordgram model p trained on Books, and a model q trained on the source domain, this is the logarithm of the ratio p(doc)/q(doc). | ML Heuristics | [Importance Resampling (Xie et al.)](https://arxiv.org/abs/2302.03169) |
  | rps_doc_openwebtext_importance | Given a bag of {1,2}-wordgram model p trained on OpenWebText, and a model q trained on the source domain, this is the logarithm of the ratio p(doc)/q(doc). | ML Heuristics | [Importance Resampling (Xie et al.)](https://arxiv.org/abs/2302.03169) |
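
The CCNet fields in the table above can be inspected directly with the `datasets` library. The sketch below is a minimal example, assuming the dataset is published as `togethercomputer/RedPajama-Data-V2` with a `sample` config that carries the signals in a JSON-encoded `quality_signals` column per document; the repo id, config name, and column layout are assumptions to verify against the dataset card.

```python
import json

from datasets import load_dataset

# Assumed repo id and config name; check the dataset card before use.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    name="sample",
    split="train",
    streaming=True,  # iterate without downloading the full snapshot
)

for doc in ds.take(3):
    # Assumed layout: quality signals serialized as one JSON string per doc.
    signals = json.loads(doc["quality_signals"])
    keys = ("ccnet_length", "ccnet_original_length", "ccnet_perplexity")
    print({k: signals[k] for k in keys if k in signals})
```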
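The two `rps_doc_*_importance` signals are the DSIR-style log-ratio log p(doc) - log q(doc) under bag-of-{1,2}-wordgram models: positive values mean the document looks more like the target domain (Books or OpenWebText) than like the source. The toy sketch below makes the ratio concrete with add-one-smoothed counts; Xie et al. additionally hash the wordgrams into a fixed number of buckets, which is omitted here, and the corpora, tokenization, and smoothing are illustrative assumptions rather than the released pipeline.

```python
import math
from collections import Counter
from itertools import chain


def wordgrams(text):
    """Bag of {1,2}-wordgrams for a lowercased, whitespace-tokenized text."""
    toks = text.lower().split()
    return list(chain(toks, zip(toks, toks[1:])))


def log_prob(doc_grams, counts, total, vocab):
    """Log-likelihood of a bag of wordgrams under add-one-smoothed counts."""
    return sum(math.log((counts[g] + 1) / (total + vocab)) for g in doc_grams)


def importance_score(doc, target_counts, source_counts):
    """log p(doc)/q(doc): positive means the doc looks more like the target."""
    grams = wordgrams(doc)
    vocab = len(target_counts | source_counts)  # union vocabulary size
    lp = log_prob(grams, target_counts, sum(target_counts.values()), vocab)
    lq = log_prob(grams, source_counts, sum(source_counts.values()), vocab)
    return lp - lq


# Toy corpora standing in for the target p (Books) and the source q (web).
books = ["the old man and the sea", "a tale of two cities"]
web = ["click here to subscribe", "buy now limited offer"]
p_counts = Counter(chain.from_iterable(wordgrams(t) for t in books))
q_counts = Counter(chain.from_iterable(wordgrams(t) for t in web))

print(importance_score("the old sea", p_counts, q_counts))  # > 0, book-like
print(importance_score("buy now", p_counts, q_counts))      # < 0, web-like
```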