julien-c (HF staff) committed
Commit 78b19d1
1 Parent(s): 41f4f29

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/abhilash1910/french-roberta/README.md

Files changed (1)
  1. README.md +131 -0
README.md ADDED
@@ -0,0 +1,131 @@
# RoBERTa Model for Masked Language Modelling on a French Corpus :robot:

This is a masked language model trained with [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) on a small French news corpus (Leipzig Corpora).
The model is built using Hugging Face transformers.
The model can be found at [French-Roberta](https://huggingface.co/abhilash1910/french-roberta).

## Specifications

The training corpus is taken from the Leipzig Corpora (French news), and the model is trained on a small subset of it (300K).

## Model Specification

The model chosen for training is [RoBERTa](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=32000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1

The model is built using `RobertaConfig` from the transformers package and has 68,124,416 training parameters in total.
It is trained for 100 epochs with a GPU batch size of 64.
More details on building custom models can be found in the [Hugging Face blog](https://huggingface.co/blog/how-to-train).

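For reference, here is a minimal sketch of this setup, following the pattern from the how-to-train blog post linked above. The `TrainingArguments` values mirror the epoch and batch-size figures quoted here, and `output_dir` is a hypothetical path:

```python
from transformers import RobertaConfig, RobertaForMaskedLM, TrainingArguments

# Configuration matching the specification listed above;
# hidden_size and intermediate_size keep the transformers defaults (768 / 3072).
config = RobertaConfig(
    vocab_size=32000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config=config)
print(model.num_parameters())  # the card reports 68124416 total parameters

# Training setup mirroring the figures above ("./out" is a hypothetical path).
training_args = TrainingArguments(
    output_dir="./out",
    num_train_epochs=100,
    per_device_train_batch_size=64,
)
```
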
## Usage Specifications

To use this model, first import the AutoTokenizer and AutoModelWithLMHead modules from transformers (newer versions of transformers deprecate AutoModelWithLMHead in favour of AutoModelForMaskedLM).
Then specify the pre-trained model, which in this case is 'abhilash1910/french-roberta', for both the tokenizer and the model:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("abhilash1910/french-roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/french-roberta")
```

After this the model files will be downloaded, which may take some time.
To test the model, import the pipeline module from transformers and create a fill-mask pipeline for inference as follows:

```python
from transformers import pipeline

model_mask = pipeline('fill-mask', model='abhilash1910/french-roberta')
model_mask("Le tweet <mask>.")
```

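Equivalently, the mask can be filled without the pipeline by calling the tokenizer and model directly. This is a minimal sketch, assuming PyTorch weights and a transformers version whose model outputs expose `.logits`:

```python
import torch

# Tokenize a masked sentence and run a forward pass.
inputs = tokenizer("Le tweet <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the <mask> position and take the highest-scoring token there.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```
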
Some examples with generic French sentences are also provided below.

Example 1:

```python
model_mask("À ce jour, <mask> projet a entraîné")
```

Output:

```bash
[{'sequence': '<s>À ce jour, belles projet a entraîné</s>',
  'score': 0.18685665726661682,
  'token': 6504,
  'token_str': 'Ġbelles'},
 {'sequence': '<s>À ce jour,- projet a entraîné</s>',
  'score': 0.0005200508167035878,
  'token': 17,
  'token_str': '-'},
 {'sequence': '<s>À ce jour, de projet a entraîné</s>',
  'score': 0.00045729897101409733,
  'token': 268,
  'token_str': 'Ġde'},
 {'sequence': '<s>À ce jour, du projet a entraîné</s>',
  'score': 0.0004307595663703978,
  'token': 326,
  'token_str': 'Ġdu'},
 {'sequence': '<s>À ce jour," projet a entraîné</s>',
  'score': 0.0004219160182401538,
  'token': 6,
  'token_str': '"'}]
```

Example 2:

```python
model_mask("C'est un <mask>")
```

Output:

```bash
[{'sequence': "<s>C'est un belles</s>",
  'score': 0.16440927982330322,
  'token': 6504,
  'token_str': 'Ġbelles'},
 {'sequence': "<s>C'est un de</s>",
  'score': 0.0005495127406902611,
  'token': 268,
  'token_str': 'Ġde'},
 {'sequence': "<s>C'est un du</s>",
  'score': 0.00044988933950662613,
  'token': 326,
  'token_str': 'Ġdu'},
 {'sequence': "<s>C'est un-</s>",
  'score': 0.00044542422983795404,
  'token': 17,
  'token_str': '-'},
 {'sequence': "<s>C'est un\t</s>",
  'score': 0.00037563967634923756,
  'token': 202,
  'token_str': 'ĉ'}]
```
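
Each call returns a list of candidate dictionaries like the ones shown above, so the top completion can also be picked programmatically; a small usage sketch based on that output format:

```python
# Select the highest-scoring candidate from the fill-mask output.
predictions = model_mask("C'est un <mask>")
best = max(predictions, key=lambda p: p["score"])
print(best["sequence"], best["score"])
```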

## Resources

For all resources, please see the [Hugging Face](https://huggingface.co/) site and the associated [repositories](https://github.com/huggingface).