cartesinus committed
Commit bcfa1ab (parent: 5465ae8)

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED

@@ -1,12 +1,11 @@
 ---
 license: mit
 tags:
-- generated_from_trainer
-- natural-language-understanding
-- nlu
 - machine translation
 - iva
 - virtual assistants
+- natural-language-understanding
+- nlu
 metrics:
 - bleu
 model-index:
@@ -16,6 +15,7 @@ datasets:
 - cartesinus/iva_mt_wslot
 language:
 - pl
+- en
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -47,8 +47,8 @@ BLEU was measured with [sacreBLEU](https://github.com/mjpost/sacrebleu) library.
 
 ## Model description, intended uses & limitations
 
-Model is biased towards virtual assistant (IVA) sentences in prediction/translation. These sentences are short, most of them are short, imperatives. It can be observed in
-above results where WMT results are very low while in-domain test is very high.
+Model is biased towards virtual assistant (IVA) sentences in prediction/translation. These sentences are short, imperatives with a lot of name entities (slots) and
+particular vocabulary (for example settings name). It can be observed in above results where WMT results are very low while in-domain test is very high.
 
 This model will most probably force IVA translations on your text. As long as sentences that you are translating are more or less similar to massive and leyzer domains it
 will be ok. If you will translate out-of-domain sentenences (such as for example News, Medical) that are not very similar then results will drop significantly up to the
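The model card's scores were computed with sacreBLEU. As a rough illustration of what the metric measures — not the sacreBLEU implementation itself — here is a minimal sentence-level BLEU sketch (clipped n-gram precision with uniform weights and a brevity penalty, no smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference.

    Geometric mean of clipped n-gram precisions (n = 1..max_n)
    times a brevity penalty. Returns a value in [0, 1].
    """
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ng & ref_ng).values())  # counts clipped by reference
        total = max(sum(hyp_ng.values()), 1)
        if overlap == 0:
            return 0.0  # no smoothing in this sketch; sacreBLEU smooths instead
        log_prec += math.log(overlap / total) / max_n
    # brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)
```

A perfect match scores 1.0, a partially overlapping IVA-style command scores in between, and a fully disjoint sentence scores 0.0; sacreBLEU reports the same quantity scaled to 0–100, with smoothing and standardized tokenization on top.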