---
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a Vietnamese [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like question answering or semantic search.

## Usage (Sentence-Transformers)

Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
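
Since the embeddings live in a shared dense vector space, semantic search reduces to ranking a corpus by similarity to a query. A minimal sketch using `sentence_transformers.util.cos_sim`; the query and corpus sentences are made-up examples, not from the model's training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

corpus = ["Sinh viên nộp đơn xin thôi học tại phòng đào tạo.",
          "Học phí được thu vào đầu mỗi học kỳ."]
query = "Nộp đơn thôi học ở đâu?"

# Encode, then rank corpus sentences by cosine similarity to the query
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)  # shape: (1, len(corpus))
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())
```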

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
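
To score similarity from these pooled embeddings, you can L2-normalize them so that the dot product equals cosine similarity. A minimal follow-on sketch, assuming the `sentence_embeddings` tensor from the block above:

```python
import torch.nn.functional as F

# Normalize to unit length, then the matrix product gives pairwise cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
scores = normalized @ normalized.T
print(scores)  # scores[i, j] is the cosine similarity of sentences i and j
```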

## Evaluation Results

The thesis will be available at [https://github.com/ncthuan/uet-qa](https://github.com/ncthuan/uet-qa), with evaluation results in chapter 4.

| Model | Recall@10 | MRR@10 |
|:------|----------:|-------:|
| paraphrase-multilingual-MiniLM | 75 | 49 |
| this model | 85 | 58 |

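For readers unfamiliar with the metrics: Recall@10 is the fraction of queries whose relevant passage appears in the top 10 retrieved results, and MRR@10 averages the reciprocal rank of the first relevant hit (counting 0 when it falls outside the top 10). An illustrative sketch of the definitions, not the thesis's evaluation code:

```python
def recall_mrr_at_k(first_relevant_ranks, k=10):
    """first_relevant_ranks: per query, the 1-based rank of the first
    relevant passage, or None if it was not retrieved at all."""
    hits = [r for r in first_relevant_ranks if r is not None and r <= k]
    recall = len(hits) / len(first_relevant_ranks)
    mrr = sum(1.0 / r for r in hits) / len(first_relevant_ranks)
    return recall, mrr

# Three example queries: hits at ranks 1 and 4, plus one miss
print(recall_mrr_at_k([1, 4, None]))  # approximately (0.667, 0.417)
```
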
## Training

The model was distilled on English-Vietnamese parallel data with this [training script](https://github.com/ncthuan/uet-qa/blob/main/scripts/train/make_multilingual.py), following the approach of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://www.sbert.net/examples/training/multilingual/README.html).

- Teacher: msmarco-MiniLM-L12-cos-v5
- Student: paraphrase-multilingual-MiniLM-L12-v2
- Data: PhoMT, MKQA, MLQA, XQuAD

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 40148 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MSELoss.MSELoss`

Parameters of the fit() method:
```
{
    "epochs": 2,
    "evaluation_steps": 2000,
    "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "correct_bias": false,
        "eps": 1e-06,
        "lr": 1e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 2000,
    "weight_decay": 0.005
}
```
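
Putting the pieces above together, here is a condensed sketch of the distillation setup in the style of the linked script and SBERT's multilingual example; the parallel-data file path is a placeholder, and details may differ from the actual training run:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# The teacher embeds the English side; the student learns to map both the
# English sentence and its Vietnamese translation to the teacher's embedding.
teacher = SentenceTransformer('msmarco-MiniLM-L12-cos-v5')
student = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')

# Tab-separated "english<TAB>vietnamese" pairs; the path is a placeholder
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data('data/parallel-en-vi.tsv.gz')

train_loader = DataLoader(train_data, shuffle=True, batch_size=16)
train_loss = losses.MSELoss(model=student)  # match student to teacher embeddings

student.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=2,
    warmup_steps=2000,
    optimizer_params={'lr': 1e-5, 'eps': 1e-6, 'correct_bias': False},
    weight_decay=0.005,
)
```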

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

```bibtex
@inproceedings{reimers-2020-multilingual-sentence-bert,
    title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2004.09813",
}

@article{thuan2022-uetqa,
    title = {{Extractive question answering system on regulations for University of Engineering and Technology}},
    author = {Nguyen, Thuan},
    journal = {Undergraduate Thesis, University of Engineering and Technology, Vietnam National University Hanoi},
    year = {2022}
}
```