julien-c (HF staff) committed
Commit f25016c • 1 Parent(s): 41baf98

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it/README.md

Files changed (1)
  1. README.md +96 -0

README.md ADDED

---
language: it
---

# UmBERTo Wikipedia Uncased + Italian SQuAD v1 📚 🧠 ❓

[UmBERTo-Wikipedia-Uncased](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) fine-tuned on the [Italian SQuAD v1 dataset](https://github.com/crux82/squad-it) for the **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

[UmBERTo](https://github.com/musixmatchresearch/umberto) is a RoBERTa-based language model trained on large Italian corpora, using two innovative approaches: SentencePiece tokenization and Whole Word Masking.
UmBERTo-Wikipedia-Uncased was trained on a relatively small corpus (~7 GB) extracted from Wikipedia-ITA.
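
To see the SentencePiece tokenization in practice, here is a minimal sketch (the exact subword pieces you get depend on the learned vocabulary):

```python
from transformers import AutoTokenizer

# Load the base model's SentencePiece tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-wikipedia-uncased-v1")

# SentencePiece splits out-of-vocabulary words into subword pieces;
# the "▁" prefix marks the beginning of a word
print(tokenizer.tokenize("umberto è un modello linguistico per l'italiano"))
```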

## Details of the downstream task (Q&A) - Dataset 📚

[SQuAD](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) [Rajpurkar et al. 2016] is a large-scale dataset for training question answering systems on factoid questions. It contains more than 100,000 question-answer pairs about passages from 536 articles chosen from various domains of Wikipedia.

**SQuAD-it** is derived from SQuAD through semi-automatic translation into Italian. It is a large-scale dataset for open question answering on factoid questions in Italian, containing more than 60,000 question/answer pairs derived from the original English dataset.
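SQuAD-it ships in the standard SQuAD v1 JSON layout (`data` → `paragraphs` → `qas`). A minimal sketch for inspecting the training split, using the same path as the training command below:

```python
import json

# Count articles and question/answer pairs in the SQuAD-it training split
# (path as used in the training command below)
with open('/content/dataset/SQuAD_it-train.json', encoding='utf-8') as f:
    squad_it = json.load(f)

n_pairs = sum(
    len(paragraph['qas'])
    for article in squad_it['data']
    for paragraph in article['paragraphs']
)
print(f"articles: {len(squad_it['data'])}, question/answer pairs: {n_pairs}")
```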

## Model training 🏋️

The model was trained on a Tesla P100 GPU with 25 GB of RAM using the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type bert \
  --model_name_or_path 'Musixmatch/umberto-wikipedia-uncased-v1' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file '/content/dataset/SQuAD_it-train.json' \
  --predict_file '/content/dataset/SQuAD_it-test.json' \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/drive/My\ Drive/umberto-uncased-finetuned-squadv1-it \
  --overwrite_output_dir \
  --save_steps 1000
```

With 10 epochs the model overfits the training set, so I evaluated the checkpoints created during training (every 1000 steps) and chose the best one (in this case, the checkpoint saved at 17000 steps).
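
A minimal sketch of that checkpoint sweep, reusing only the flags from the command above (hypothetical; if a checkpoint folder lacks tokenizer files, passing `--tokenizer_name 'Musixmatch/umberto-wikipedia-uncased-v1'` as well would be needed):

```python
import subprocess
from pathlib import Path

# Re-run run_squad.py in eval-only mode on every checkpoint-<step>
# folder written by --save_steps 1000; paths mirror the command above
output_dir = Path('/content/drive/My Drive/umberto-uncased-finetuned-squadv1-it')

for ckpt in sorted(output_dir.glob('checkpoint-*')):
    subprocess.run(
        [
            'python', 'transformers/examples/question-answering/run_squad.py',
            '--model_type', 'bert',
            '--model_name_or_path', str(ckpt),
            '--do_eval',
            '--do_lower_case',
            '--predict_file', '/content/dataset/SQuAD_it-test.json',
            '--max_seq_length', '384',
            '--doc_stride', '128',
            '--output_dir', str(ckpt),
        ],
        check=True,
    )
```

The EM/F1 metrics printed for each run can then be compared to pick the best step.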

## Test set Results 🧾

| Metric | Value     |
| ------ | --------- |
| **EM** | **60.50** |
| **F1** | **72.41** |

Raw evaluation output:

```json
{
  "exact": 60.50729399395453,
  "f1": 72.4141113348361,
  "total": 7609,
  "HasAns_exact": 60.50729399395453,
  "HasAns_f1": 72.4141113348361,
  "HasAns_total": 7609,
  "best_exact": 60.50729399395453,
  "best_exact_thresh": 0.0,
  "best_f1": 72.4141113348361,
  "best_f1_thresh": 0.0
}
```

## Comparison ⚖️

| Model | EM | F1 score |
| ----- | -- | -------- |
| [DrQA-it trained on SQuAD-it](https://github.com/crux82/squad-it/blob/master/README.md#evaluating-a-neural-model-over-squad-it) | 56.1 | 65.9 |
| This one | 60.50 | 72.41 |
| [bert-italian-finedtuned-squadv1-it-alfa](https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa) | **62.51** | **74.16** |

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it')

QnA_pipeline({
    # "Marcus Aurelius was a Roman emperor who practiced Stoicism as a philosophy of life."
    'context': 'Marco Aurelio era un imperatore romano che praticava lo stoicismo come filosofia di vita .',
    # "Which philosophy did Marcus Aurelius follow?"
    'question': 'Quale filosofia seguì Marco Aurelio ?'
})
# Output:
# {'answer': 'stoicismo', 'end': 65, 'score': 0.9477770241566028, 'start': 56}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain