omarkamali committed (verified)
Commit 7caef94 · Parent(s): 6c55f26

Upload all models and assets for av (20251001)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +310 -141
  2. models/embeddings/monolingual/av_128d.bin +2 -2
  3. models/embeddings/monolingual/av_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/av_32d.bin +2 -2
  5. models/embeddings/monolingual/av_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/av_64d.bin +2 -2
  7. models/embeddings/monolingual/av_64d_metadata.json +5 -3
  8. models/subword_markov/av_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/av_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/av_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/av_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/av_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/av_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/av_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/av_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/av_2gram_subword.parquet +2 -2
  17. models/subword_ngram/av_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/av_3gram_subword.parquet +2 -2
  19. models/subword_ngram/av_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/av_4gram_subword.parquet +2 -2
  21. models/subword_ngram/av_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/av_tokenizer_16k.model +2 -2
  23. models/tokenizer/av_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/av_tokenizer_32k.model +2 -2
  25. models/tokenizer/av_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/av_tokenizer_64k.model +2 -2
  27. models/tokenizer/av_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/av_tokenizer_8k.model +2 -2
  29. models/tokenizer/av_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/av_vocabulary.parquet +2 -2
  31. models/vocabulary/av_vocabulary_metadata.json +10 -9
  32. models/word_markov/av_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/av_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/av_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/av_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/av_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/av_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/av_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/av_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/av_2gram_word.parquet +2 -2
  41. models/word_ngram/av_2gram_word_metadata.json +2 -2
  42. models/word_ngram/av_3gram_word.parquet +2 -2
  43. models/word_ngram/av_3gram_word_metadata.json +2 -2
  44. models/word_ngram/av_4gram_word.parquet +2 -2
  45. models/word_ngram/av_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
-  value: 4.583
+  value: 4.697
 - name: best_isotropy
   type: isotropy
   value: 0.8716
 - name: vocabulary_size
   type: vocab
-  value: 38576
-generated: 2025-12-27
+  value: 0
+generated: 2026-01-03
 ---

 # AV - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets

 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)

 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)

@@ -68,61 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and

 ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results

 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 3.534x | 3.49 | 0.0801% | 219,599 |
-| **16k** | 3.897x | 3.84 | 0.0884% | 199,103 |
-| **32k** | 4.254x | 4.20 | 0.0965% | 182,410 |
-| **64k** | 4.583x 🏆 | 4.52 | 0.1039% | 169,325 |
+| **8k** | 3.636x | 3.64 | 0.0717% | 252,363 |
+| **16k** | 4.040x | 4.04 | 0.0797% | 227,147 |
+| **32k** | 4.391x | 4.40 | 0.0866% | 208,961 |
+| **64k** | 4.697x 🏆 | 4.70 | 0.0927% | 195,348 |

 ### Tokenization Examples

 Below are sample sentences tokenized with each vocabulary size:

-**Sample 1:** `Кванирукъ (латиназул мацӀалда ventriculus) гӀадамасул лага-черх.
-
-Категория:Г...`
+**Sample 1:** `Хъипчахъ () гъорлъе уна жибго Хъипчахъ росу. Гьеб росулъ гьабула . росаби`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁кван ир укъ ▁( латиназул ▁мацӏалда ▁v ent ric ul ... (+14 more)` | 24 |
-| 16k | `▁кван ир укъ ▁( латиназул ▁мацӏалда ▁v ent ric ulus ... (+13 more)` | 23 |
-| 32k | `▁кванирукъ ▁( латиназул ▁мацӏалда ▁v ent ric ulus ) ▁— ... (+11 more)` | 21 |
-| 64k | `▁кванирукъ ▁( латиназул ▁мацӏалда ▁vent ric ulus ) ▁— ▁гӏадамасул ... (+10 more)` | 20 |
+| 8k | `▁хъ ип ч ахъ ▁() ▁гъорлъе ▁уна ▁жибго ▁хъ ип ... (+9 more)` | 19 |
+| 16k | `▁хъ ип ч ахъ ▁() ▁гъорлъе ▁уна ▁жибго ▁хъ ип ... (+9 more)` | 19 |
+| 32k | `▁хъипчахъ ▁() ▁гъорлъе ▁уна ▁жибго ▁хъипчахъ ▁росу . ▁гьеб ▁росулъ ... (+3 more)` | 13 |
+| 64k | `▁хъипчахъ ▁() ▁гъорлъе ▁уна ▁жибго ▁хъипчахъ ▁росу . ▁гьеб ▁росулъ ... (+3 more)` | 13 |

-**Sample 2:** `Гудермес ( Россиялъул Буртиялъ жумхӀурияталда бугеб шагьар.
-Сунж-хъалаялдаса...`
+**Sample 2:** `26-абилеб июль — грегорианияб календаралда рекъон къо (високоснияб соналъ — свер...`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁г уд ер мес ▁( ▁) ▁— ▁россиялъул ▁бурт иялъ ... (+36 more)` | 46 |
-| 16k | `▁гуд ер мес ▁( ▁) ▁— ▁россиялъул ▁бурт иялъ ▁жум ... (+33 more)` | 43 |
-| 32k | `▁гудермес ▁( ▁) ▁— ▁россиялъул ▁буртиялъ ▁жумхӏ урият алда ▁бугеб ... (+25 more)` | 35 |
-| 64k | `▁гудермес ▁( ▁) ▁— ▁россиялъул ▁буртиялъ ▁жумхӏурият алда ▁бугеб ▁шагьар ... (+22 more)` | 32 |
-
-**Sample 3:** `Лъугьа-бахъинал
-
-Гьаруна
-
-Хвана
-
-
-Категория:1927`
+| 8k | `▁ 2 6 - абилеб ▁июль ▁— ▁грегорианияб ▁календаралда ▁рекъон ... (+19 more)` | 29 |
+| 16k | `▁ 2 6 - абилеб ▁июль ▁— ▁грегорианияб ▁календаралда ▁рекъон ... (+19 more)` | 29 |
+| 32k | `▁ 2 6 - абилеб ▁июль ▁— ▁грегорианияб ▁календаралда ▁рекъон ... (+19 more)` | 29 |
+| 64k | `▁ 2 6 - абилеб ▁июль ▁— ▁грегорианияб ▁календаралда ▁рекъон ... (+19 more)` | 29 |
+
+**Sample 3:** `() ккола Билкан районалда гъорлъе унеб росу. росаби`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁лъугьа - бахъинал ▁гьаруна ▁хвана ▁категория : 1 9 2 ... (+1 more)` | 11 |
-| 16k | `▁лъугьа - бахъинал ▁гьаруна ▁хвана ▁категория : 1 9 2 ... (+1 more)` | 11 |
-| 32k | `▁лъугьа - бахъинал ▁гьаруна ▁хвана ▁категория : 1 9 2 ... (+1 more)` | 11 |
-| 64k | `▁лъугьа - бахъинал ▁гьаруна ▁хвана ▁категория : 1 9 2 ... (+1 more)` | 11 |
+| 8k | `▁() ▁ккола ▁билкан ▁районалда ▁гъорлъе ▁унеб ▁росу . ▁росаби` | 9 |
+| 16k | `▁() ▁ккола ▁билкан ▁районалда ▁гъорлъе ▁унеб ▁росу . ▁росаби` | 9 |
+| 32k | `▁() ▁ккола ▁билкан ▁районалда ▁гъорлъе ▁унеб ▁росу . ▁росаби` | 9 |
+| 64k | `▁() ▁ккола ▁билкан ▁районалда ▁гъорлъе ▁унеб ▁росу . ▁росаби` | 9 |

 ### Key Findings

-- **Best Compression:** 64k achieves 4.583x compression
-- **Lowest UNK Rate:** 8k with 0.0801% unknown tokens
+- **Best Compression:** 64k achieves 4.697x compression
+- **Lowest UNK Rate:** 8k with 0.0717% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
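The compression and UNK-rate figures above can be re-derived from the released tokenizer models. A minimal sketch, assuming a local checkout and the `sentencepiece` package; the sample string is a stand-in for real corpus text, and the report's exact definition of compression may differ slightly from characters per token:

```python
# Sketch: tokenize a sample with each released SentencePiece model and
# report a characters-per-token compression proxy plus the UNK rate.
import sentencepiece as spm

sample = "Гьеб росулъ гьабула . росаби"  # stand-in corpus text

for vocab in ("8k", "16k", "32k", "64k"):
    sp = spm.SentencePieceProcessor(
        model_file=f"models/tokenizer/av_tokenizer_{vocab}.model")
    ids = sp.encode(sample)
    unk_rate = sum(i == sp.unk_id() for i in ids) / max(len(ids), 1)
    print(f"{vocab}: {len(ids)} tokens, "
          f"{len(sample) / max(len(ids), 1):.3f} chars/token, "
          f"UNK {unk_rate:.4%}")
```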
@@ -131,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:

 ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)

 ### Results

-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 5,221 🏆 | 12.35 | 14,725 | 20.8% | 49.4% |
-| **2-gram** | 502 🏆 | 8.97 | 5,314 | 55.0% | 94.9% |
-| **3-gram** | 8,074 | 12.98 | 19,718 | 16.9% | 42.5% |
-| **3-gram** | 4,078 | 11.99 | 36,896 | 22.5% | 60.1% |
-| **4-gram** | 18,096 | 14.14 | 39,973 | 12.6% | 31.2% |
-| **4-gram** | 18,482 | 14.17 | 151,649 | 12.4% | 35.7% |
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 3,247 | 11.66 | 6,413 | 22.5% | 54.2% |
+| **2-gram** | Subword | 428 🏆 | 8.74 | 4,133 | 57.8% | 96.7% |
+| **3-gram** | Word | 2,834 | 11.47 | 6,427 | 26.0% | 57.0% |
+| **3-gram** | Subword | 3,424 | 11.74 | 28,949 | 23.6% | 62.9% |
+| **4-gram** | Word | 8,629 | 13.07 | 17,392 | 16.9% | 37.2% |
+| **4-gram** | Subword | 15,875 | 13.95 | 119,337 | 12.4% | 36.8% |

 ### Top 5 N-grams by Size

-**2-grams:**
+**2-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `категория :` | 6,060 |
-| 2 | `) .` | 2,431 |
-| 3 | `) ,` | 2,098 |
-| 4 | `) —` | 1,555 |
-| 5 | `. —` | 1,376 |
+| 1 | `росу буго` | 509 |
+| 2 | `лъугьа бахъинал` | 496 |
+| 3 | `география росу` | 461 |
+| 4 | `цо цо` | 455 |
+| 5 | `of the` | 441 |

-**3-grams:**
+**3-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `. география росу` | 645 |
-| 2 | `география росу буго` | 645 |
-| 3 | `. категория :` | 622 |
-| 4 | `мугъчӏваял категория :` | 614 |
-| 5 | `лъугьа - бахъинал` | 597 |
+| 1 | `география росу буго` | 448 |
+| 2 | `лъугьа бахъинал гьаруна` | 368 |
+| 3 | `бахъинал гьаруна хвана` | 358 |
+| 4 | `байрамал лъугьа бахъинал` | 353 |
+| 5 | `гьаруна хвана ишараби` | 352 |

-**4-grams:**
+**4-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `. география росу буго` | 630 |
-| 2 | `география росу буго мухъалъул` | 513 |
-| 3 | `. мугъчӏваял категория :` | 483 |
-| 4 | `лъугьа - бахъинал гьаруна` | 471 |
-| 5 | `- бахъинал гьаруна хвана` | 461 |
+| 1 | `лъугьа бахъинал гьаруна хвана` | 358 |
+| 2 | `байрамал лъугьа бахъинал гьаруна` | 352 |
+| 3 | `къо байрамал лъугьа бахъинал` | 351 |
+| 4 | `бахъинал гьаруна хвана ишараби` | 349 |
+| 5 | `демография ккола моноэтникияб авар` | 329 |
+
+**2-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `а л` | 82,724 |
+| 2 | `л _` | 63,062 |
+| 3 | `л ъ` | 52,236 |
+| 4 | `а _` | 52,185 |
+| 5 | `у л` | 49,900 |
+
+**3-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `у л _` | 33,240 |
+| 2 | `л ъ у` | 30,603 |
+| 3 | `ъ у л` | 25,387 |
+| 4 | `а л ъ` | 23,574 |
+| 5 | `_ г ь` | 22,295 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `л ъ у л` | 23,988 |
+| 2 | `ъ у л _` | 21,518 |
+| 3 | `а л ъ у` | 16,083 |
+| 4 | `а л д а` | 11,383 |
+| 5 | `_ г ь е` | 11,094 |

 ### Key Findings

-- **Best Perplexity:** 2-gram with 502
+- **Best Perplexity:** 2-gram (subword) with 428
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~36% of corpus
+- **Coverage:** Top-1000 patterns cover ~37% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance

 ---
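Entropy and perplexity in the results table are two views of the same quantity: perplexity = 2^entropy (for example, 2^8.74 ≈ 428 for the subword 2-gram). A sketch of recomputing both from an n-gram count table; the `count` column name is an assumption about the parquet schema:

```python
# Sketch: entropy (bits) and perplexity of the empirical n-gram
# distribution, plus top-1000 coverage, from a count table.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/av_2gram_word.parquet")
counts = df["count"].to_numpy(dtype=float)  # assumed column name
probs = counts / counts.sum()

entropy = float(-(probs * np.log2(probs)).sum())
perplexity = 2.0 ** entropy
top1000 = np.sort(counts)[::-1][:1000].sum() / counts.sum()

print(f"entropy={entropy:.2f} bits, perplexity={perplexity:,.0f}, "
      f"top-1000 coverage={top1000:.1%}")
```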
@@ -189,55 +219,86 @@ Below are sample sentences tokenized with each vocabulary size:

 ![Markov Entropy](visualizations/markov_entropy.png)

+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)

 ### Results

-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.5741 | 1.489 | 3.65 | 105,500 | 42.6% |
-| **1** | 1.3715 | 2.587 | 12.28 | 1,091 | 0.0% |
-| **2** | 0.1898 | 1.141 | 1.41 | 384,678 | 81.0% |
-| **2** | 1.0636 | 2.090 | 6.00 | 13,391 | 0.0% |
-| **3** | 0.0614 | 1.043 | 1.11 | 542,652 | 93.9% |
-| **3** | 0.8006 | 1.742 | 3.64 | 80,309 | 19.9% |
-| **4** | 0.0249 🏆 | 1.017 | 1.04 | 599,240 | 97.5% |
-| **4** | 0.5368 🏆 | 1.451 | 2.24 | 292,181 | 46.3% |
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.6602 | 1.580 | 3.57 | 91,234 | 34.0% |
+| **1** | Subword | 1.1781 | 2.263 | 9.32 | 1,145 | 0.0% |
+| **2** | Word | 0.1256 | 1.091 | 1.21 | 324,656 | 87.4% |
+| **2** | Subword | 0.9990 | 1.999 | 5.68 | 10,664 | 0.1% |
+| **3** | Word | 0.0281 | 1.020 | 1.04 | 392,645 | 97.2% |
+| **3** | Subword | 0.7935 | 1.733 | 3.66 | 60,534 | 20.6% |
+| **4** | Word | 0.0114 🏆 | 1.008 | 1.02 | 406,500 | 98.9% |
+| **4** | Subword | 0.5614 | 1.476 | 2.33 | 221,628 | 43.9% |

-### Generated Text Samples
+### Generated Text Samples (Word-based)

-Below are text samples generated from each Markov chain model:
+Below are text samples generated from each word-based Markov chain model:

 **Context Size 1:**

-1. `. хундерилифастандартгьо ̄ ̄ л . costumes caucasus circassians caucasus . anatidae хъизан патагӏ...`
-2. `, къагiидаби . амма руго . байрамал лъугьа - бакъбаккул кавказияб календар категория : гардарики ,`
-3. `- абилеб ) мугъчӏваял категория : « вечера на хадидже категория : пко « моноклеr »`
+1. `ва бищун хирияб рокьуе рецц гьабун росулӏ историкияб кьучӏги x гіасру ккола гъуниб округалъул цӏигӏу...`
+2. `буго гьединго гьолокьги бекьизабун буго шумеразул ги къасимехалъ бачӏингун чагӏазда макьаби контрола...`
+3. `ккола гьижрияб соналъул 29 август цояб гьелъул буго гьанже батизе бегьула 1 гуржистаналъул бищун бор...`

 **Context Size 2:**

-1. `категория : гӏанди - гӏорул жанилъуда , ралъдал гьумералдаса 1869 метраялъ тіадегіан . хіоралъул тіа...`
-2. `) . эратосфениде ( iii гӏ . байбихьи ) букіана патрикиясул титул , гьелдаса хадуб дагъистаналде .`
-3. `) , лачен ( falco peregrinus ) , продолжительность 2 ч . ii – i гіасрабазул гіорхъода`
+1. `росу буго лъарагӏлъиялда хасавхъала мухъалда хасавхъалаялдаса 24 км ялъ жанубиябгин бакъбаккудехун а...`
+2. `лъугьа бахъинал гьаруна хвана ишараби мугъчӏваял гь балагье хіужаби иццал адабият гіурус маціалда бу...`
+3. `география росу буго лъарагӏлъиялда дибирилросу мухъалда дибирилросуялдаса 10 км ялъ шималиябгин бакъ...`

 **Context Size 3:**

-1. `география росу буго мухъалъул центер уркарахъалдаса 50 км - лъ жанубияб бакътӏерхьудехун . демографи...`
-2. `. география росу буго ралъдал гьурматӏама 606 метралъ борхалъуда , мухъалъул центер хунзахъа шималия...`
-3. `. категория : ираналъул останал * категория : азиялъул исламиял хіаракатчагіи категория : тіалибан`
+1. `география росу буго мухъалъул марказ хӏебдаса 15 километралъ бакъбаккудехун халкъ мугъчӏваял регӏела...`
+2. `лъугьа бахъинал гьаруна хвана ишараби мугъчӏваял гь балагье трактат адабият тайпаби изданиял`
+3. `байрамал лъугьа бахъинал гьаруна хвана ишараби мугъчӏваял гь балагье трактат адабият тайпаби издания...`

 **Context Size 4:**

-1. `. география росу буго мухъалъул марказ лъаратӏаса 11 км - алъ шималалдехун . демография ккола моноэт...`
-2. `география росу буго мухъалъул центер дешлахӏаралдаса 13 км - лъ рикӏкӏад . история 1886 соналъул бая...`
-3. `. мугъчӏваял категория : гӏандалазул бол чагӏи категория : кавказалъул имамзаби`
+1. `байрамал лъугьа бахъинал гьаруна хвана ишараби мугъчӏваял гь балагье`
+2. `къо байрамал лъугьа бахъинал гьаруна хвана ишараби мугъчӏваял гь балагье`
+3. `лъугьа бахъинал гьаруна хвана ишараби мугъчӏваял гь балагье`
+
+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
+
+**Context Size 1:**
+
+1. `_iv–_гъугіо_usth`
+2. `абиза_и._д_2%_ке`
+3. `л_—_ilissoldan_|`
+
+**Context Size 2:**
+
+1. `алъахъану_кконие_`
+2. `л_адекалабаяракӏ)`
+3. `лъул_кіаялъул_на_`
+
+**Context Size 3:**
+
+1. `ул_къотӏагораний_в`
+2. `лъул_реал_карт_гӏа`
+3. `ъулгун_ар-рип_хъал`
+
+**Context Size 4:**
+
+1. `лъул_хіалалда_чӏали`
+2. `ъул_большая_и_казбе`
+3. `алъул_руго_9:_мугъч`

 ### Key Findings

-- **Best Predictability:** Context-4 with 97.5% predictability
+- **Best Predictability:** Context-4 (word) with 98.9% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (292,181 contexts)
+- **Memory Trade-off:** Larger contexts require more storage (221,628 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation

 ---
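The branching factor and predictability above are properties of the stored transition tables, and the generated samples come from walking them. A sketch of weighted sampling from the context-3 word chain; the `context`, `next_token`, and `count` column names are assumptions about the parquet schema:

```python
# Sketch: generate text by sampling transitions from a context-3 word
# Markov chain, proportionally to their observed counts.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/av_markov_ctx3_word.parquet")
table = {ctx: g for ctx, g in df.groupby("context")}  # assumed schema

def generate(seed: str, steps: int = 20) -> str:
    out = seed.split()
    for _ in range(steps):
        g = table.get(" ".join(out[-3:]))
        if g is None:          # unseen context: stop early
            break
        out.append(random.choices(g["next_token"].tolist(),
                                  weights=g["count"].tolist())[0])
    return " ".join(out)

print(generate("география росу буго"))
```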
@@ -253,64 +314,64 @@ Below are text samples generated from each Markov chain model:

 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 38,576 |
-| Total Tokens | 474,364 |
-| Mean Frequency | 12.30 |
+| Vocabulary Size | 34,392 |
+| Total Tokens | 405,867 |
+| Mean Frequency | 11.80 |
 | Median Frequency | 3 |
-| Frequency Std Dev | 81.10 |
+| Frequency Std Dev | 73.46 |

 ### Most Common Words

 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | ва | 7,190 |
-| 2 | категория | 6,086 |
-| 3 | буго | 5,703 |
-| 4 | бугеб | 2,911 |
-| 5 | ккола | 2,903 |
-| 6 | росу | 2,847 |
-| 7 | мухъалъул | 2,671 |
-| 8 | гьеб | 2,187 |
-| 9 | дагъистаналъул | 1,923 |
-| 10 | росдал | 1,903 |
+| 1 | ва | 7,245 |
+| 2 | буго | 5,074 |
+| 3 | ккола | 2,830 |
+| 4 | бугеб | 2,699 |
+| 5 | гьеб | 2,222 |
+| 6 | росу | 2,175 |
+| 7 | мухъалъул | 2,030 |
+| 8 | цо | 1,833 |
+| 9 | the | 1,815 |
+| 10 | соналъ | 1,799 |

 ### Least Common Words (from vocabulary)

 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | уркутамахьи | 2 |
-| 2 | континуумалде | 2 |
-| 3 | къулецӏмаги | 2 |
-| 4 | гьаркӏасуниб | 2 |
-| 5 | махӏарги | 2 |
-| 6 | пилибхиталъул | 2 |
-| 7 | заповедникалда | 2 |
-| 8 | пилибхит | 2 |
-| 9 | лъалъадул | 2 |
-| 10 | хӏанчӏи | 2 |
+| 1 | долтул | 2 |
+| 2 | кӏалалдаса | 2 |
+| 3 | шаргі | 2 |
+| 4 | харитӏун | 2 |
+| 5 | луткунги | 2 |
+| 6 | беглъуда | 2 |
+| 7 | къацӏар | 2 |
+| 8 | мичегь | 2 |
+| 9 | хъурукал | 2 |
+| 10 | мягьле | 2 |

 ### Zipf's Law Analysis

 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 0.9487 |
-| R² (Goodness of Fit) | 0.992879 |
+| Zipf Coefficient | 0.9506 |
+| R² (Goodness of Fit) | 0.993368 |
 | Adherence Quality | **excellent** |

 ### Coverage Analysis

 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 22.2% |
-| Top 1,000 | 49.8% |
-| Top 5,000 | 72.6% |
-| Top 10,000 | 82.2% |
+| Top 100 | 22.5% |
+| Top 1,000 | 50.8% |
+| Top 5,000 | 73.6% |
+| Top 10,000 | 83.3% |

 ### Key Findings

-- **Zipf Compliance:** R²=0.9929 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 22.2% of corpus
-- **Long Tail:** 28,576 words needed for remaining 17.8% coverage
+- **Zipf Compliance:** R²=0.9934 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 22.5% of corpus
+- **Long Tail:** 24,392 words needed for remaining 16.7% coverage

 ---
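The Zipf coefficient and R² above come from a least-squares fit of log frequency against log rank. A sketch, assuming the vocabulary parquet exposes a `frequency` column:

```python
# Sketch: fit log(frequency) ≈ intercept - s * log(rank); s is the Zipf
# coefficient and R² measures goodness of fit.
import numpy as np
import pandas as pd

freq = (pd.read_parquet("models/vocabulary/av_vocabulary.parquet")
          ["frequency"]  # assumed column name
          .sort_values(ascending=False).to_numpy(dtype=float))
log_rank = np.log(np.arange(1, len(freq) + 1))
log_freq = np.log(freq)

slope, intercept = np.polyfit(log_rank, log_freq, 1)
residuals = log_freq - (intercept + slope * log_rank)
r2 = 1.0 - residuals.var() / log_freq.var()

print(f"Zipf coefficient s = {-slope:.4f}, R² = {r2:.6f}")
```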
 ## 5. Word Embeddings Evaluation
@@ -323,24 +384,129 @@ Below are text samples generated from each Markov chain model:

 ![t-SNE Sentences](visualizations/tsne_sentences.png)

-### Model Comparison
+### 5.1 Cross-Lingual Alignment
+
+> *Note: Multilingual alignment visualization not available for this language.*
+
+### 5.2 Model Comparison

-| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
-|-------|------------|-----------|----------|----------|----------|
-| **mono_32d** | 12,900 | 32 | 4.114 | 0.854 | 0.8716 🏆 |
-| **mono_64d** | 12,900 | 64 | 4.625 | 0.771 | 0.7752 |
-| **mono_128d** | 12,900 | 128 | 4.775 | 0.759 | 0.3123 |
-| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
+| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+|-------|-----------|----------|------------------|---------------|----------------|
+| **mono_32d** | 32 | 0.8716 🏆 | 0.3278 | N/A | N/A |
+| **mono_64d** | 64 | 0.7240 | 0.2821 | N/A | N/A |
+| **mono_128d** | 128 | 0.2461 | 0.2702 | N/A | N/A |

 ### Key Findings

 - **Best Isotropy:** mono_32d with 0.8716 (more uniform distribution)
-- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
-- **Vocabulary Coverage:** All models cover 12,900 words
-- **Recommendation:** 100d for balanced semantic capture and efficiency
+- **Semantic Density:** Average pairwise similarity of 0.2934. Lower values indicate better semantic separation.
+- **Alignment Quality:** No aligned models evaluated in this run.
+- **Recommendation:** 128d aligned for best cross-lingual performance

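The diff does not spell out which isotropy estimator the pipeline uses; one common proxy is how evenly variance is spread across the principal directions of the embedding matrix. An illustrative sketch (it may not match the report's exact definition):

```python
# Sketch: isotropy proxy = ratio of smallest to largest principal-direction
# variance of the (centered) embedding matrix; 1.0 means perfectly uniform.
import numpy as np

def isotropy(emb: np.ndarray) -> float:
    emb = emb - emb.mean(axis=0)                    # center
    variances = np.linalg.svd(emb, compute_uv=False) ** 2
    return float(variances.min() / variances.max())

rng = np.random.default_rng(0)
print(isotropy(rng.normal(size=(1000, 32))))        # near-isotropic baseline
```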
 ---
-## 6. Summary & Recommendations
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+#### Productive Prefixes
+| Prefix | Examples |
+|--------|----------|
+| `-гь` | гьамчукъотӏи, гьечӏони, гьаркьалги |
+| `-гӏ` | гӏасру, гӏаракъи, гӏелмуялде |
+| `-ма` | материялъул, машгьадалда, магіарухъ |
+
+#### Productive Suffixes
+| Suffix | Examples |
+|--------|----------|
+| `-л` | сабабал, материялъул, рикӏкӏиналъул |
+| `-а` | елена, современника, шагьаралда |
+| `-ул` | материялъул, рикӏкӏиналъул, хӏажиевасул |
+| `-да` | шагьаралда, машгьадалда, флорида |
+| `-ъул` | материялъул, рикӏкӏиналъул, медициналъул |
+| `-лъул` | материялъул, рикӏкӏиналъул, медициналъул |
+| `-ал` | сабабал, кьурахарал, къезавидал |
+| `-лда` | шагьаралда, машгьадалда, борталда |
+
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+| Stem | Cohesion | Substitutability | Examples |
+|------|----------|------------------|----------|
+| `алъу` | 1.82x | 100 contexts | алъул, далъун, ралъуе |
+| `агьа` | 1.89x | 59 contexts | дагьа, багьа, загьаб |
+| `ялъу` | 2.04x | 43 contexts | ялъул, аялъул, ялъуни |
+| `ьабу` | 2.16x | 29 contexts | гьабу, гьабун, кьабун |
+| `иялъ` | 1.96x | 36 contexts | абиялъе, химиялъ, лъиялъе |
+| `иялд` | 1.83x | 35 contexts | сиялда, азиялда, азиялде |
+| `анал` | 1.42x | 70 contexts | данал, канал, ханал |
+| `ралъ` | 1.49x | 53 contexts | ралъад, ралъуе, хералъ |
+| `буге` | 2.00x | 17 contexts | бугел, бугез, бугеб |
+| `иста` | 2.02x | 16 contexts | систан, христа, лазистан |
+| `лдас` | 2.06x | 15 contexts | лдаса, алдаса, ялдаса |
+| `азда` | 1.62x | 32 contexts | мазда, раздан, ишазда |
+
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+| Prefix | Suffix | Frequency | Examples |
+|--------|--------|-----------|----------|
+| `-гь` | `-л` | 44 words | гьавамухъал, гьудулзабазул |
+| `-ма` | `-л` | 40 words | мажлисалъул, маринил |
+| `-гӏ` | `-л` | 38 words | гӏурусазул, гӏалиевалъул |
+| `-ма` | `-а` | 35 words | макъалоялда, малъана |
+| `-гӏ` | `-а` | 29 words | гӏуцӏиялда, гӏодула |
+| `-гь` | `-а` | 28 words | гьада, гьала |
+| `-гӏ` | `-ул` | 25 words | гӏурусазул, гӏалиевалъул |
+| `-гь` | `-ул` | 24 words | гьудулзабазул, гьезул |
+| `-ма` | `-ул` | 21 words | мажлисалъул, мактабалъул |
+| `-ма` | `-да` | 16 words | макъалоялда, макъалаялда |
+
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+| Word | Suggested Split | Confidence | Stem |
+|------|-----------------|------------|------|
+| руччабаздаги | **`руччабаз-да-ги`** | 6.0 | `руччабаз` |
+| хронологиялъул | **`хронология-лъул`** | 4.5 | `хронология` |
+| теориялда | **`теория-лда`** | 4.5 | `теория` |
+| къавмазул | **`къавмаз-ул`** | 4.5 | `къавмаз` |
+| къанагӏатал | **`къанагӏат-ал`** | 4.5 | `къанагӏат` |
+| групалъул | **`група-лъул`** | 4.5 | `група` |
+| ракьалъул | **`ракьа-лъул`** | 4.5 | `ракьа` |
+| такрарлъул | **`такрар-лъул`** | 4.5 | `такрар` |
+| алвеолариялги | **`алвеолариял-ги`** | 4.5 | `алвеолариял` |
+| европалъул | **`европа-лъул`** | 4.5 | `европа` |
+| гьабулаго | **`гь-абулаго`** | 4.5 | `абулаго` |
+| рахъалъги | **`рахъалъ-ги`** | 4.5 | `рахъалъ` |
+| пассажирги | **`пассажир-ги`** | 4.5 | `пассажир` |
+| партиялъул | **`партия-лъул`** | 4.5 | `партия` |
+| оппозициялъул | **`оппозиция-лъул`** | 4.5 | `оппозиция` |
+
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language AV appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+---
+## 7. Summary & Recommendations

 ![Performance Dashboard](visualizations/performance_dashboard.png)

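The affix test in section 6.2 (a unit counts as an affix if stripping it leaves a stem attested elsewhere) can be approximated with a plain vocabulary-set check. A toy sketch over a hypothetical mini-vocabulary, not the pipeline's actual implementation:

```python
# Sketch: count candidate suffixes whose removal leaves a word that is
# itself in the vocabulary, a crude substitutability signal.
from collections import Counter

vocab = {"шагьаралда", "шагьар", "росулъ", "росу",
         "материялъул", "материя"}  # hypothetical mini-vocabulary

def productive_suffixes(words: set, min_len: int = 2, max_len: int = 4):
    hits = Counter()
    for w in words:
        for k in range(min_len, max_len + 1):
            if len(w) > k and w[:-k] in words:
                hits[w[-k:]] += 1   # the stem survives without the suffix
    return hits.most_common()

print(productive_suffixes(vocab))   # e.g. [('алда', 1), ('лъул', 1), ...]
```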
@@ -348,11 +514,12 @@ Below are text samples generated from each Markov chain model:

 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (4.58x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (502) |
-| Markov | **Context-4** | Highest predictability (97.5%) |
+| Tokenizer | **64k BPE** | Best compression (4.70x) |
+| N-gram | **2-gram** | Lowest perplexity (428) |
+| Markov | **Context-4** | Highest predictability (98.9%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
+
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide

@@ -542,7 +709,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }
@@ -558,7 +726,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*

-*Report Date: 2025-12-27 20:39:38*
+*Report Date: 2026-01-03 05:23:28*
models/embeddings/monolingual/av_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e9b6638db57121afe5ec40e5d975a427a9f37c786498e2a2d32874648d6e940b
-size 1037515926
+oid sha256:086998639c3328d1f88a224eb653bef49f8aa011b5880d7feb85792cbe742361
+size 1036208926
models/embeddings/monolingual/av_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 12900
+  "vocab_size": 11654
 }
models/embeddings/monolingual/av_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a659bfb895d802a760b82e4bfa4c8d06af1ca7e7e48c9edcbe7eea312922c4c5
-size 259608726
+oid sha256:6290250d27a46d72b90ede94292690c45a0a3f14bdbf05ab7aa5d07aa2093541
+size 259258654
models/embeddings/monolingual/av_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 12900
+  "vocab_size": 11654
 }
models/embeddings/monolingual/av_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:42a486c86184075e6fbcad6c04646238858ba9785c971dda95529c259785636b
-size 518911126
+oid sha256:5b709ff729efabdd809f80746b620dba08607a5b3377d6237ee6f2434e1eb3c2
+size 518242078
models/embeddings/monolingual/av_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
   },
-  "vocab_size": 12900
+  "vocab_size": 11654
 }
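The diff does not state the on-disk format of these .bin files. The skipgram/min_count/window/negative parameters in the metadata suggest a fastText-style trainer; if (and only if) the files are fastText-compatible, loading would look like the sketch below. Treat this as an assumption, not something the source confirms:

```python
# Sketch: load a monolingual embedding binary, ASSUMING fastText format.
# If the Wikilangs pipeline uses a custom format (note the unusual
# "encoding_method": "rope" field), this will not work as-is.
import fasttext

model = fasttext.load_model("models/embeddings/monolingual/av_32d.bin")
vec = model.get_word_vector("росу")               # 32-dimensional vector
print(vec.shape)
print(model.get_nearest_neighbors("росу", k=5))   # cosine neighbors
```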
models/subword_markov/av_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:80d9f829d3520985c3ec2e555c4f56f88029ba50c0f8e0a8d143025208979f1b
-size 96598
+oid sha256:4517ac9631ea8dbfee38f5e0a123dfdfec510e3a9c1219d351bc1fd509b60c17
+size 81084
models/subword_markov/av_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "av",
-  "unique_contexts": 1091,
-  "total_transitions": 4387638
+  "unique_contexts": 1145,
+  "total_transitions": 3671343
 }
models/subword_markov/av_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6e31801ab66f88e5b5e45a7c74584663475f8a158e42122c638b42657aca01fe
-size 605158
+oid sha256:2851c37f629eaaf5084dca27968d6cf91fd8843aa330d489a4440f208d3e60cd
+size 486043
models/subword_markov/av_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "av",
-  "unique_contexts": 13391,
-  "total_transitions": 4383567
+  "unique_contexts": 10664,
+  "total_transitions": 3667770
 }
models/subword_markov/av_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2e9124d975e5792b36fc393927806d32e658235b480893ccef7d0dde1592724b
-size 2084996
+oid sha256:e93984ebab9588d00463857f800a79e61319598925a11d5785c24e04809162f3
+size 1681476
models/subword_markov/av_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "av",
-  "unique_contexts": 80309,
-  "total_transitions": 4379496
+  "unique_contexts": 60534,
+  "total_transitions": 3664197
 }
models/subword_markov/av_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1e309e2636831f32a2161c4e5b0ad24463660acaea5b1b23d507678e2b7e207f
-size 5912238
+oid sha256:df78890cb55850cde1e4fbcdb06710bcbf3c0a04113c2fb5a81c53eb29e9a8bc
+size 4643917
models/subword_markov/av_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "av",
-  "unique_contexts": 292181,
-  "total_transitions": 4375425
+  "unique_contexts": 221628,
+  "total_transitions": 3660624
 }
models/subword_ngram/av_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4efe6ef223dd1f6472674537d1c596680156ddacece7718273c5a6229338ee6f
-size 67795
+oid sha256:933b0de1b63cb20550e97fa2419bacf817141c74827bc4e101e5d76c7780509c
+size 54611
models/subword_ngram/av_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "av",
-  "unique_ngrams": 5314,
-  "total_ngrams": 4387638
+  "unique_ngrams": 4133,
+  "total_ngrams": 3671343
 }
models/subword_ngram/av_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fe60524de25890ebbb4579aeb1d0dc4e16f60cf7c9a21aadae938fe7d8a651c0
-size 482417
+oid sha256:ca32b51e264e1da8fb8ad605682f565e4a0fd97590d147b080946a6cf8993da2
+size 370254
models/subword_ngram/av_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "av",
-  "unique_ngrams": 36896,
-  "total_ngrams": 4383567
+  "unique_ngrams": 28949,
+  "total_ngrams": 3667770
 }
models/subword_ngram/av_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0969deb3aaf25140d6a8f6829e34559fe563aa08e2863a2d91328f85f8f16ac0
-size 1846055
+oid sha256:1511e4aa7f03fe2637243fa72bb121ccae146a0024ee7a81f233a708cc2778bb
+size 1469426
models/subword_ngram/av_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "av",
-  "unique_ngrams": 151649,
-  "total_ngrams": 4379496
+  "unique_ngrams": 119337,
+  "total_ngrams": 3664197
 }
models/tokenizer/av_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:37ab57b7217643d6cfb077a937974ffd4dcebcfabf852aa82bfdfe4ce2a0500d
-size 576619
+oid sha256:95f05447a801c104acaac9e4a45ef1ade9a750529b7be0b69f4ee702b40bdd0f
+size 579855
models/tokenizer/av_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render.
models/tokenizer/av_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:28d374a06b6b7aa8a4d3941bcd4695e5533544ad636d5d0bb610fb7793bc4354
-size 936914
+oid sha256:bcc76ebd732e7de0cc3f1d1a7445d6e9812fc5861347bfbdb764a34ab378093a
+size 943739
models/tokenizer/av_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render.
models/tokenizer/av_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d5521eccdbb9f46291a027fc56f120b592ff6dfdafcd09218747f5dfebcdc505
-size 1716704
+oid sha256:e25c37c305a3f71acc7fde1f68cea5d1b999e9cc76e1a9ae7e946535b348e7eb
+size 1709450
models/tokenizer/av_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render.
models/tokenizer/av_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0fe4689b0c6b4beca83bfdeb9649a3570f36538f43ca7cf4f02c432b94214c16
-size 403337
+oid sha256:69224dbe8521456dbe0800e5c74893d56ecd3d316ce5785f5ea5922ff2f1af24
+size 404399
models/tokenizer/av_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render.
models/vocabulary/av_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3288037a303bb285d697f0b1691c0479ba6d9e29db6034c70012c05716d329f1
-size 738845
+oid sha256:a7310d663e46f21c8df63d5030b41ef164690182783e77aaf70f0bb2efff7bb0
+size 661413
models/vocabulary/av_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "av",
-  "vocabulary_size": 38576,
+  "vocabulary_size": 34392,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.19482218346291424,
+    "type_token_ratio": 0.1973921591928009,
     "coverage": {
-      "top_100": 0.1948757649215124,
-      "top_1000": 0.43634707482188784,
-      "top_5000": 0.6366234812427942,
-      "top_10000": 0.7208073432465191
+      "top_100": 0.19702269707347111,
+      "top_1000": 0.4456231702442555,
+      "top_5000": 0.6456771851739821,
+      "top_10000": 0.7303661131936867
     },
-    "hapax_count": 66868,
-    "hapax_ratio": 0.6341565191001859,
-    "total_documents": 4071
+    "hapax_count": 56968,
+    "hapax_ratio": 0.623555166374781,
+    "total_documents": 3573
   }
 }
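The coverage statistics recorded in this metadata can be cross-checked against the vocabulary parquet. A sketch, assuming a `frequency` column in the parquet:

```python
# Sketch: recompute the top-N coverage figures stored in
# av_vocabulary_metadata.json from the vocabulary table.
import json
import pandas as pd

freq = (pd.read_parquet("models/vocabulary/av_vocabulary.parquet")
          ["frequency"]  # assumed column name
          .sort_values(ascending=False))
total = freq.sum()

computed = {f"top_{n}": float(freq.head(n).sum() / total)
            for n in (100, 1000, 5000, 10000)}
with open("models/vocabulary/av_vocabulary_metadata.json") as f:
    recorded = json.load(f)["statistics"]["coverage"]

print(computed)
print(recorded)
```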
models/word_markov/av_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:63f373412e062fd6835774cb1266eaa3e245850dba38a8e6151d4ac7a4759ede
-size 5274705
+oid sha256:e06648f80de88e4993ac542ab602cc9aac6b01b06f56ff96c14658afaf6be279
+size 4391092
models/word_markov/av_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "av",
-  "unique_contexts": 105500,
-  "total_transitions": 728256
+  "unique_contexts": 91234,
+  "total_transitions": 459262
 }
models/word_markov/av_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb88d7a9d6079d49efd1ad1b30db00e9d3c96aab81c84a0f8d0ef679e10fe3d4
-size 11187735
+oid sha256:c68afd2ecd43feece23866755466060024bed9599ffd5b4fb7f0eebba9e44b7c
+size 9606282
models/word_markov/av_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "av",
-  "unique_contexts": 384678,
-  "total_transitions": 724185
+  "unique_contexts": 324656,
+  "total_transitions": 455689
 }
models/word_markov/av_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:907faf6f55ecd04d74aa3d7f21d0111dda47ad2d999e00d289b0665dfe132da5
-size 14841527
+oid sha256:2f09b691f7a4f20607265b8fbe10331f64de10a5d0492129209c70e0fac37837
+size 11942432
models/word_markov/av_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "av",
-  "unique_contexts": 542652,
-  "total_transitions": 720116
+  "unique_contexts": 392645,
+  "total_transitions": 452116
 }
models/word_markov/av_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:910fed69ecfd672367e2d66b7147fecc45ddedd0acda63f49e00c5b1add64c18
-size 17509163
+oid sha256:d983a9226228a9202fb65e4494c1671eacd7872fafdff4b618a0f3e99c703539
+size 14036981
models/word_markov/av_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "av",
-  "unique_contexts": 599240,
-  "total_transitions": 716050
+  "unique_contexts": 406500,
+  "total_transitions": 448543
 }
models/word_ngram/av_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6cb354b1f97cbdfc6d757a9c19d07428b4d51d46a086e2f6d4633877e66a449f
-size 309626
+oid sha256:ca8324fa6f1298f13453cb69c80bda1c34da42ca4d16390fc0f21e33c68f286c
+size 164610
models/word_ngram/av_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "av",
-  "unique_ngrams": 14725,
-  "total_ngrams": 728256
+  "unique_ngrams": 6413,
+  "total_ngrams": 459262
 }
models/word_ngram/av_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d2a68230c75c38fad71bc4e8c7c054c54a5ac1bae080c287811284115b39aee8
-size 461245
+oid sha256:d863fc44b978ead18ea6982afba0be260335838763c4435ad08e18f827ded683
+size 202995
models/word_ngram/av_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "av",
-  "unique_ngrams": 19718,
-  "total_ngrams": 724185
+  "unique_ngrams": 6427,
+  "total_ngrams": 455689
 }
models/word_ngram/av_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3a2c985bdc681c13eadcce7267033132b7282125a774fd8af76143f2674a46d5
-size 1007425
+oid sha256:ca081b844faf3f648da186cea540ee1d8a8481a461588ae8b2a7f220a80f8acb
+size 568270
models/word_ngram/av_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "av",
-  "unique_ngrams": 39973,
-  "total_ngrams": 720116
+  "unique_ngrams": 17392,
+  "total_ngrams": 452116
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: 49d128d025264876851cfb18b7c8bdb724fe900d062ccbff1730a3b689597492
  • Pointer size: 131 Bytes
  • Size of remote file: 161 kB

Git LFS Details (after)

  • SHA256: 189a73d0276599f73b227cf70548226a450314778d0479b0b53b03f28abcd0b4
  • Pointer size: 131 Bytes
  • Size of remote file: 157 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED