Commit 3913c44, committed by julien-c (HF staff)
1 Parent(s): c28df99

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/henryk/bert-base-multilingual-cased-finetuned-polish-squad1/README.md

Files changed (1): README.md (+93 −93)
---
language: pl
---

# Multilingual + Polish SQuAD1.1

This model is the multilingual model provided by the Google research team, fine-tuned on a Polish Q&A downstream task.

## Details of the language model

Language model ([**bert-base-multilingual-cased**](https://github.com/google-research/bert/blob/master/multilingual.md)):
12-layer, 768-hidden, 12-heads, 110M parameters.
Trained on cased text in the top 104 languages with the largest Wikipedias.

## Details of the downstream task

Using the `mtranslate` Python module, [**SQuAD1.1**](https://rajpurkar.github.io/SQuAD-explorer/) was machine-translated into Polish. To find the answer start positions, the direct translations of the answers were searched for in the corresponding translated paragraphs. Because a phrase may be translated differently depending on its context (the bare answer lacks the surrounding context), the answer could not always be found in the paragraph, so some question-answer examples were lost. This is also a potential source of errors in the data set.
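The alignment step described above can be sketched as follows. This is a minimal illustration, not the original preprocessing script; the function name and return shape are assumptions, and plain substring search stands in for whatever matching the author used:

```python
# Sketch of the answer-alignment step: after translating context and
# answer separately, locate the translated answer inside the translated
# context to recover the new start offset. Examples whose answer cannot
# be found verbatim are dropped, which explains why the Polish train/dev
# splits are smaller than the English originals.

def align_example(context_pl: str, answer_pl: str):
    """Return SQuAD-style answer fields, or None if the example must be dropped."""
    start = context_pl.find(answer_pl)
    if start == -1:
        return None  # translation mismatch -> example is discarded
    return {"text": answer_pl, "answer_start": start}

# Direct hit: the translated answer appears verbatim in the paragraph
print(align_example("Warszawa jest stolicą Polski.", "Warszawa"))
# Context-dependent translation: the paragraph phrases it differently -> dropped
print(align_example("Stolicą Polski jest Warszawa.", "stolica Polski"))
```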

| Dataset               | # Q&A  |
| --------------------- | ------ |
| SQuAD1.1 Train        | 87.7 K |
| Polish SQuAD1.1 Train | 39.5 K |
| SQuAD1.1 Dev          | 10.6 K |
| Polish SQuAD1.1 Dev   | 2.6 K  |

## Model benchmark

| Model | EM | F1 |
| ---------------------- | ----- | ----- |
| [SlavicBERT](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) | **60.89** | 71.68 |
| [polBERT](https://huggingface.co/dkleczek/bert-base-polish-uncased-v1) | 57.46 | 68.87 |
| [multiBERT](https://huggingface.co/bert-base-multilingual-cased) | 60.67 | **71.89** |
| [xlm](https://huggingface.co/xlm-mlm-100-1280) | 47.98 | 59.42 |

## Model training

The model was trained on a **Tesla V100** GPU with the following command:

```bash
export SQUAD_DIR=path/to/pl_squad

python run_squad.py \
  --model_type bert \
  --model_name_or_path bert-base-multilingual-cased \
  --do_train \
  --do_eval \
  --train_file $SQUAD_DIR/pl_squadv1_train_clean.json \
  --predict_file $SQUAD_DIR/pl_squadv1_dev_clean.json \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --save_steps=8000 \
  --output_dir ../../output \
  --overwrite_cache \
  --overwrite_output_dir
```

**Results**:

{'exact': 60.670731707317074, 'f1': 71.8952193697293, 'total': 2624, 'HasAns_exact': 60.670731707317074, 'HasAns_f1': 71.8952193697293, 'HasAns_total': 2624, 'best_exact': 60.670731707317074, 'best_exact_thresh': 0.0, 'best_f1': 71.8952193697293, 'best_f1_thresh': 0.0}
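The `exact` and `f1` numbers above are the standard SQuAD metrics. A minimal sketch of how they are computed per example, simplified in that it omits the official script's normalization of articles and punctuation:

```python
from collections import Counter

def exact_match(prediction: str, truth: str) -> float:
    # 1.0 only if the (case-folded) strings are identical
    return float(prediction.strip().lower() == truth.strip().lower())

def f1_score(prediction: str, truth: str) -> float:
    # Token-level overlap between prediction and ground truth
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Warszawa", "warszawa"))   # 1.0
print(f1_score("w Warszawie", "Warszawie"))  # partial credit for overlap
```

The dataset-level `exact` and `f1` values are the averages of these per-example scores over the 2624 dev examples.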

## Model in action

Fast usage with **pipelines**:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="henryk/bert-base-multilingual-cased-finetuned-polish-squad1",
    tokenizer="henryk/bert-base-multilingual-cased-finetuned-polish-squad1"
)

qa_pipeline({
    'context': "Warszawa jest największym miastem w Polsce pod względem liczby ludności i powierzchni",
    'question': "Jakie jest największe miasto w Polsce?"})
```

## Output

```json
{
  "score": 0.9988,
  "start": 0,
  "end": 8,
  "answer": "Warszawa"
}
```
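The `start` and `end` fields in the pipeline output are character offsets into the context, so the answer string can be recovered by slicing. A small self-contained illustration using the example output above:

```python
context = ("Warszawa jest największym miastem w Polsce "
           "pod względem liczby ludności i powierzchni")
result = {"score": 0.9988, "start": 0, "end": 8, "answer": "Warszawa"}

# The answer is simply the context slice [start:end]
span = context[result["start"]:result["end"]]
print(span)  # -> Warszawa
```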

## Contact

Please do not hesitate to contact me via [LinkedIn](https://www.linkedin.com/in/henryk-borzymowski-0755a2167/) if you want to discuss or get access to the Polish version of SQuAD.