jordimas committed
Commit f10caed
1 Parent(s): 282adb8

Initial version

Files changed (5):
  1. .gitattributes +2 -0
  2. README.md +58 -3
  3. model.bin +3 -0
  4. shared_vocabulary.txt +0 -0
  5. sp_m.model +3 -0
.gitattributes CHANGED
@@ -25,3 +25,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.bin filter=lfs diff=lfs merge=lfs -text
+ sp_m.model filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,58 @@
- ---
- license: mit
- ---
+ ---
+ language:
+ - ca
+ - en
+
+ tags:
+ - translation
+
+ library_name: opennmt
+ license: mit
+ metrics:
+ - bleu
+
+ inference: false
+ ---
+
+ ### Introduction
+
+ Catalan to English translation model based on OpenNMT. This is the same model that we run in production at https://www.softcatala.org/traductor/.
+
+ ### Usage
+
+ Install the necessary dependencies:
+
+ ```bash
+ pip3 install ctranslate2 pyonmttok huggingface_hub
+ ```
+
+ Simple tokenization & translation using Python:
+
+ ```python
+ import ctranslate2
+ import pyonmttok
+ from huggingface_hub import snapshot_download
+
+ # Download the model files from the Hugging Face Hub
+ model_dir = snapshot_download(repo_id="softcatala/opennmt-cat-eng", revision="main")
+
+ # Tokenize the input with the SentencePiece model shipped in the repository;
+ # tokenize() returns a (tokens, features) tuple, so keep the token list
+ tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/sp_m.model")
+ tokenized = tokenizer.tokenize("Hola món")
+
+ # Translate and detokenize the best hypothesis
+ translator = ctranslate2.Translator(model_dir)
+ translated = translator.translate_batch([tokenized[0]])
+ print(tokenizer.detokenize(translated[0].hypotheses[0]))
+ ```
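+
+ The same steps can be wrapped in a small helper that translates several sentences in one call. A minimal sketch; the `translate_texts` helper below is hypothetical, not part of this repository:
+
+ ```python
+ def translate_texts(texts, tokenizer, translator):
+     """Tokenize, translate, and detokenize a list of source sentences."""
+     # tokenize() returns (tokens, features); keep only the token lists
+     batch = [tokenizer.tokenize(text)[0] for text in texts]
+     results = translator.translate_batch(batch)
+     # Detokenize the best hypothesis for each input sentence
+     return [tokenizer.detokenize(result.hypotheses[0]) for result in results]
+
+ print(translate_texts(["Hola món", "Bon dia"], tokenizer, translator))
+ ```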
+
+ ### Benchmarks
+
+ | Test set                           | BLEU |
+ |------------------------------------|------|
+ | Test dataset (from train/dev/test) | 46.9 |
+ | Flores101 dataset                  | 41.2 |
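+
+ A common way to compute BLEU figures like these is sacrebleu. A minimal sketch; the file names are hypothetical placeholders for a detokenized hypothesis file and its aligned reference:
+
+ ```python
+ import sacrebleu
+
+ # One detokenized sentence per line, hypotheses aligned with references
+ with open("hypotheses.en") as f:
+     hypotheses = [line.strip() for line in f]
+ with open("reference.en") as f:
+     references = [line.strip() for line in f]
+
+ bleu = sacrebleu.corpus_bleu(hypotheses, [references])
+ print(f"BLEU = {bleu.score:.1f}")
+ ```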
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69aae95c67116ca854eb2caf4f91d7409a3b3c0606711210dbac7748f4a837ce
+ size 122582230
shared_vocabulary.txt ADDED
The diff for this file is too large to render.
sp_m.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee17b2bbb3792b3280657ec591ceb013c4916fd987e545add5f347c33515c668
+ size 1146694