nikitast committed on
Commit a4fede0
1 Parent(s): 5977716

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ Model for Single Language Classification in texts. Supports 10 languages: ru, uk
 
 Model trained on small parts of Open Subtitles, Oscar and Tatoeba datasets (~9k samples per language).
 
-The metrics obtained from validation part of dataset (~1k samples per language).
+The metrics obtained from validation on part of dataset (~1k samples per language).
 
 | eval_accuracy | eval_az_f1-score | eval_az_precision | eval_az_recall | eval_az_support | eval_be_f1-score | eval_be_precision | eval_be_recall | eval_be_support | eval_de_f1-score | eval_de_precision | eval_de_recall | eval_de_support | eval_en_f1-score | eval_en_precision | eval_en_recall | eval_en_support | eval_he_f1-score | eval_he_precision | eval_he_recall | eval_he_support | eval_hy_f1-score | eval_hy_precision | eval_hy_recall | eval_hy_support | eval_ka_f1-score | eval_ka_precision | eval_ka_recall | eval_ka_support | eval_kk_f1-score | eval_kk_precision | eval_kk_recall | eval_kk_support | eval_loss | eval_macro avg_f1-score | eval_macro avg_precision | eval_macro avg_recall | eval_macro avg_support | eval_ru_f1-score | eval_ru_precision | eval_ru_recall | eval_ru_support | eval_uk_f1-score | eval_uk_precision | eval_uk_recall | eval_uk_support | eval_weighted avg_f1-score | eval_weighted avg_precision | eval_weighted avg_recall | eval_weighted avg_support |
 | ------------- | ---------------- | ----------------- | -------------- | --------------- | ------------------ | ----------------- | ------------------ | --------------- | ------------------ | ----------------- | ------------------ | --------------- | ------------------ | ----------------- | ------------------ | --------------- | ------------------ | ----------------- | ----------------- | --------------- | ------------------ | ----------------- | ------------------ | --------------- | ---------------- | ----------------- | -------------- | --------------- | ------------------ | ----------------- | ------------------ | --------------- | ------------------- | ----------------------- | ------------------------ | --------------------- | ---------------------- | ------------------ | ----------------- | ------------------ | --------------- | ------------------ | ----------------- | ------------------ | --------------- | -------------------------- | --------------------------- | ------------------------ | ------------------------- |
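
The metric columns in the table follow the `eval_<label>_<metric>` pattern that results when a per-class classification report is flattened into the trainer's evaluation output; the 10 labels visible in the header are az, be, de, en, he, hy, ka, kk, ru, uk. Below is a minimal inference sketch, assuming the model is published as a standard `text-classification` checkpoint on the Hugging Face Hub; the repository id is a placeholder, not taken from this commit.

```python
# Minimal sketch, assuming a standard text-classification checkpoint.
# The repo id below is a placeholder -- substitute the actual Hub id of this model.
from transformers import pipeline

model_id = "<owner>/<model-name>"  # placeholder, not taken from this commit

classifier = pipeline("text-classification", model=model_id)

# Labels inferred from the metric columns above: az, be, de, en, he, hy, ka, kk, ru, uk.
print(classifier("Доброго ранку! Як справи?"))
# Illustrative output shape: [{'label': 'uk', 'score': 0.99}] -- not a real run
```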