goldfish-models committed
Commit 28a3c24
1 Parent(s): 312bcd4

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +4 -2
README.md CHANGED

````diff
@@ -14,7 +14,7 @@ library_name: transformers
 pipeline_tag: text-generation
 tags:
 - goldfish
-
+- arxiv:2408.10441
 ---
 
 # kin_latn_full
@@ -25,7 +25,7 @@ The Goldfish models are trained primarily for comparability across languages and
 
 Note: kin_latn is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. It is not contained in any macrolanguage codes contained in Goldfish (for script latn).
 
-All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://github.com/tylerachang/goldfish/blob/main/goldfish_paper_20240815.pdf).
+All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).
 
 Training code and sample usage: https://github.com/tylerachang/goldfish
 
@@ -35,6 +35,7 @@ Sample usage also in this Google Colab: [link](https://colab.research.google.com
 
 To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
 All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
+For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)!
 Details for this model specifically:
 
 * Architecture: gpt2
@@ -64,5 +65,6 @@ If you use this model, please cite:
   author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
   journal={Preprint},
   year={2024},
+  url={https://www.arxiv.org/abs/2408.10441},
 }
 ```
````
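The line added at README line 38 stresses prepending [CLS] at inference time. As a quick illustration, here is a minimal sketch of that usage with `transformers`, not the authors' Colab code: the Hub repo id `goldfish-models/kin_latn_full`, the example prompt, and the choice to disable automatic special tokens are all assumptions; see the Colab linked in the README for the canonical example.

```python
# Minimal sketch (assumptions noted above), not the authors' sample code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "goldfish-models/kin_latn_full"  # assumed repo id for this model
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

text = "Muraho, amakuru?"  # hypothetical Kinyarwanda prompt
# Prepend [CLS] manually and disable automatic special tokens so the
# input matches the training format the README describes.
inputs = tokenizer(tokenizer.cls_token + text,
                   return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```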
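The README also points to `model_details.json` for programmatic access. A hedged sketch of reading it, assuming the file is a JSON object keyed by model name (the actual schema is not shown in this diff):

```python
# Hedged sketch: fetch the repo's model_details.json and look up this model.
# The "keyed by model name" assumption is ours; inspect the file for the
# real schema before relying on it.
import json
import urllib.request

URL = "https://raw.githubusercontent.com/tylerachang/goldfish/main/model_details.json"
with urllib.request.urlopen(URL) as resp:
    details = json.load(resp)

print(details.get("kin_latn_full"))
```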