Hsejong committed on
Commit cd721b6
1 Parent(s): 6e56e34

feat: initial model

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+
+ ---
+
+ # {MODEL_NAME}
+
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ <!--- Describe your model here -->
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+
+ model = SentenceTransformer('{MODEL_NAME}')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
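+
+ To turn these embeddings into similarity scores, one option (a minimal sketch using the `util` helpers bundled with sentence-transformers; the placeholder model name is kept as above) is cosine similarity:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer('{MODEL_NAME}')
+ embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
+
+ # Cosine similarity between the two sentence embeddings (higher = more similar)
+ score = util.cos_sim(embeddings[0], embeddings[1])
+ print(score)
+ ```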
+
+
+
+ ## Usage (HuggingFace Transformers)
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+
+
+ # Mean Pooling - take the attention mask into account for correct averaging
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+ # Sentences we want sentence embeddings for
+ sentences = ['This is an example sentence', 'Each sentence is converted']
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+ model = AutoModel.from_pretrained('{MODEL_NAME}')
+
+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+
+ # Perform pooling. In this case, mean pooling.
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```
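+
+ If you need similarity scores from these raw embeddings, a common follow-up (a sketch continuing the snippet above, not part of the original card) is to L2-normalize them so that a plain dot product equals cosine similarity:
+
+ ```python
+ import torch.nn.functional as F
+
+ # L2-normalize, then the matrix product gives pairwise cosine similarities
+ normalized = F.normalize(sentence_embeddings, p=2, dim=1)
+ similarity = normalized @ normalized.T
+ print(similarity)
+ ```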
+
+
+
+ ## Evaluation Results
+
+ <!--- Describe how your model was evaluated -->
+
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+
+ ## Training
+ The model was trained with the following parameters:
+
+ **DataLoader**:
+
+ `torch.utils.data.dataloader.DataLoader` of length 657 with parameters:
+ ```
+ {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+ ```
+
+ **Loss**:
+
+ `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
+
+ Parameters of the fit() method:
+ ```
+ {
+     "epochs": 4,
+     "evaluation_steps": 65,
+     "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
+     "max_grad_norm": 1,
+     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
+     "optimizer_params": {
+         "lr": 2e-05
+     },
+     "scheduler": "WarmupLinear",
+     "steps_per_epoch": null,
+     "warmup_steps": 263,
+     "weight_decay": 0.01
+ }
+ ```
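+
+ Taken together, these settings correspond roughly to a `fit()` call like the sketch below. The training pairs and validation split shown are placeholders (the actual training data is not part of this commit); the hyperparameters mirror the values listed above:
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, InputExample, losses
+ from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
+
+ # Starting checkpoint; the original run appears to have started from a KLUE RoBERTa-base encoder (see config.json)
+ model = SentenceTransformer('{MODEL_NAME}')
+
+ # Placeholder pairs; CosineSimilarityLoss expects a float similarity label
+ train_examples = [
+     InputExample(texts=['A plane is taking off.', 'An air plane is taking off.'], label=1.0),
+     InputExample(texts=['A man is playing a flute.', 'A man is eating food.'], label=0.1),
+ ]
+ train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
+ train_loss = losses.CosineSimilarityLoss(model)
+
+ # Placeholder validation split for the similarity evaluator
+ evaluator = EmbeddingSimilarityEvaluator(
+     sentences1=['A plane is taking off.', 'A man is playing a flute.'],
+     sentences2=['An air plane is taking off.', 'A man is eating food.'],
+     scores=[1.0, 0.1],
+ )
+
+ model.fit(
+     train_objectives=[(train_dataloader, train_loss)],
+     evaluator=evaluator,
+     epochs=4,
+     evaluation_steps=65,
+     warmup_steps=263,
+     scheduler='WarmupLinear',
+     optimizer_params={'lr': 2e-05},
+     weight_decay=0.01,
+     max_grad_norm=1,
+ )
+ ```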
+
+
+ ## Full Model Architecture
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: RobertaModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+ )
+ ```
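+
+ As a sketch (assuming the standard sentence-transformers module API), an equivalent stack can be assembled by hand from a Transformer module and a Pooling module; this mirrors `modules.json` and `1_Pooling/config.json` in this commit:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # (0) Transformer module: the RoBERTa encoder, truncating inputs to 512 tokens
+ word_embedding_model = models.Transformer('{MODEL_NAME}', max_seq_length=512, do_lower_case=True)
+
+ # (1) Pooling module: mean pooling over the 768-dim token embeddings
+ pooling_model = models.Pooling(
+     word_embedding_model.get_word_embedding_dimension(),
+     pooling_mode_mean_tokens=True,
+     pooling_mode_cls_token=False,
+     pooling_mode_max_tokens=False,
+ )
+
+ model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
+ ```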
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "output/klue-roberta-base-nli1-bs16-msl512/",
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "tokenizer_class": "BertTokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.31.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 32000
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.2.2",
+     "transformers": "4.31.0",
+     "pytorch": "2.0.1+cu117"
+   }
+ }
eval/similarity_evaluation_valid_results.csv ADDED
@@ -0,0 +1,45 @@
+ epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+ 0,65,0.9397180417883825,0.8997032916554961,0.9281230021957088,0.8951364363565821,0.9278334507651644,0.8947616244748227,0.9279987439751025,0.8836106541282114
+ 0,130,0.9467646003622199,0.8997879336621797,0.9323236983323834,0.8954624103732041,0.9319862137002098,0.8949557493061492,0.9368133748412237,0.8863092985952522
+ 0,195,0.9524440802589137,0.9059198108425421,0.9390892680407712,0.9018267390439249,0.9388120047131085,0.9014440099634338,0.9433630462414969,0.8936728076900706
+ 0,260,0.9558503619756855,0.9096203021625693,0.9436354010215867,0.9054495740451188,0.9433631320892714,0.9049079996922738,0.9467686254579458,0.8974534783028725
+ 0,325,0.9602636567789736,0.9140792525988931,0.945922298235708,0.9094782254085788,0.9456010810957995,0.9087768671003811,0.9508911615679242,0.9019059623564778
+ 0,390,0.9633306210411153,0.9176179052493523,0.9519733054344874,0.9147622122482736,0.9516962752140914,0.9140971504746324,0.9549196619404319,0.904278492417633
+ 0,455,0.9667519473991255,0.9268178564034224,0.9538190061960852,0.9214662261299333,0.953591250906403,0.9209745369382636,0.9548435247797162,0.9103650928794131
+ 0,520,0.9687463163187907,0.9238176554181362,0.955279237384536,0.9186452617908893,0.9550526894834263,0.9181371992937516,0.9566300850933879,0.906274186364099
+ 0,585,0.9731723506717536,0.9297811349076567,0.962544367949314,0.9258668038908932,0.9623338198910715,0.9255036720835859,0.9662976582578139,0.917598212792534
+ 0,650,0.9723950712010244,0.9260417971406891,0.9604982830532731,0.9225501329947894,0.9603409188022717,0.9222192388780612,0.9652922538859398,0.9140626971323333
+ 0,-1,0.9734472612619323,0.9279052033177427,0.9620196776521199,0.9245799585585033,0.9618615529320177,0.9242543803318268,0.9667544492061751,0.9161436314875016
+ 1,65,0.977136322209463,0.9365805782882851,0.966352706879718,0.9322777058025175,0.9661156880945487,0.9317885261936104,0.9690514773912836,0.9222033806711032
+ 1,130,0.9773317140949469,0.9361053655295819,0.9657575228452885,0.9312128593574395,0.9655393189750835,0.9307994145654148,0.9688953907685667,0.9218864627873229
+ 1,195,0.9790804361023989,0.9406097367267001,0.9674244110610143,0.9347626626277596,0.96715284071697,0.9341874075228144,0.9707217062769302,0.9264363233127493
+ 1,260,0.9799406170359077,0.9416735210427958,0.9697903156035584,0.937648373699498,0.9695292842627807,0.9371212773804855,0.9724215769333037,0.927875590748102
+ 1,325,0.9813407868077642,0.9463220177499658,0.9719601320595326,0.9417118521147295,0.9717093830439766,0.9411434939098584,0.9731029102344584,0.9311656409225095
+ 1,390,0.9813920419962971,0.9450604976344475,0.9714459691007024,0.9409815586859929,0.9712035787023396,0.9403999313648955,0.9741357831969913,0.9320844555305395
+ 1,455,0.9822911653840454,0.9476459861796513,0.9717686088101519,0.9426282621328926,0.9715435451295829,0.9420436832997521,0.9753265189846397,0.9351357520577127
+ 1,520,0.9832989407315973,0.9497439146624707,0.9736153288319503,0.9446210593914863,0.9734270975581039,0.9441172531217471,0.9754816701643675,0.9350298896112667
+ 1,585,0.9843074059905532,0.951582104986396,0.974712344967034,0.9463011618319386,0.9744861867298844,0.9456611437883052,0.9764161533951881,0.9364407393127286
+ 1,650,0.9848298904017834,0.9510097289704765,0.9738884307734064,0.9450193822771155,0.9736433806229147,0.9443801178169237,0.9774401043318692,0.9367177714844552
+ 1,-1,0.9850104484183532,0.9517841287518398,0.9737542038635272,0.945401759479648,0.9734981067702955,0.9447624236098516,0.9774148033235439,0.9374137289399112
+ 2,65,0.9856787310776634,0.9541400299714328,0.9752881035782285,0.9481081844321829,0.9750419982061992,0.9474831151081546,0.9786404428857512,0.9399081117878797
+ 2,130,0.9858923588594372,0.9552241478814965,0.9749908782016579,0.9475181514700012,0.9747197025701316,0.9468671106983767,0.9774463125299748,0.9385671479252102
+ 2,195,0.985882965457757,0.9551783935980619,0.9751432408606462,0.9481688758842548,0.9748980606457652,0.9475724979647874,0.9771082855004137,0.937948922456979
+ 2,260,0.9867771959860862,0.9574331607345133,0.9757641236863004,0.950235807830165,0.9754866850511219,0.9495487189344209,0.9772626265052373,0.939363416810044
+ 2,325,0.9870661047022189,0.9585309049483671,0.9765270287913699,0.9513307922379122,0.9762889645700558,0.9507478325217367,0.9779405718924181,0.9406031734154542
+ 2,390,0.9872057577946448,0.9585539272561922,0.976480586592896,0.9509301989580807,0.9762355784674278,0.950254471325649,0.978189781041654,0.9408767991893319
+ 2,455,0.9875871772309558,0.9594407961030731,0.9755981728074499,0.9512371675015346,0.9753374854560395,0.9505555903079056,0.9788713731537289,0.9430863071763944
+ 2,520,0.9878075906110508,0.959747546196216,0.9764491292645125,0.9521241117543808,0.9761807106388679,0.9513710159475061,0.9788847260957106,0.9425067048897223
+ 2,585,0.988277239643948,0.9615339017051097,0.975959161933236,0.9523424764029575,0.9756649400595382,0.9515734431561506,0.978946484598059,0.9437443344037908
+ 2,650,0.9886696497878326,0.9618989379850472,0.976872089169885,0.9534292502899788,0.9766105064688907,0.952703358537665,0.9801228095917935,0.9449494800835931
+ 2,-1,0.9887386339177828,0.9621055700775988,0.9768793540339284,0.9536098313390711,0.9766178208502598,0.9528861369094724,0.9801538465552652,0.9451850486872193
+ 3,65,0.9884509800738993,0.9618397378670528,0.9761823254379098,0.9524860690755492,0.9759014698941226,0.9518024919751444,0.9793556062205341,0.9441844772579496
+ 3,130,0.9890851755928105,0.9638796360807651,0.9768642783405598,0.9540343951998903,0.9765703354954758,0.9533196648958837,0.9795549662870136,0.9455686974374184
+ 3,195,0.9893531181595053,0.964482573326025,0.9775011388366351,0.9546578824976749,0.9772195555841078,0.9539313049413517,0.9796692590922823,0.9454167323498517
+ 3,260,0.9894246892384747,0.9647605229713063,0.9778193434040702,0.9553277163059063,0.9775466528170723,0.9545745856167165,0.980087633095928,0.9462519675642984
+ 3,325,0.989608965448036,0.965412323110227,0.9777251958511626,0.9552260543633707,0.9774475541885524,0.9545097135776509,0.9801259033994062,0.9466896749336672
+ 3,390,0.9897538665588328,0.9659378083123605,0.9779653952684633,0.9556827910919455,0.9776842713693268,0.9549699143101867,0.9797826518846978,0.946516574528942
+ 3,455,0.9899264819679128,0.9666940157497137,0.9783385474531593,0.9566486771434279,0.9780540614620722,0.9559230694071796,0.98005720193982,0.9474169303777524
+ 3,520,0.9899800013912265,0.9665972673525975,0.9779673103988004,0.9561762172032965,0.9776887384813593,0.9554526049639194,0.9802997238875262,0.9478176851510194
+ 3,585,0.9899993722951314,0.9665555747529525,0.9780236917266035,0.9561186404597658,0.9777465217206289,0.9553838056786029,0.9803360590698648,0.9477514627693543
+ 3,650,0.9900657488200432,0.9667549306874994,0.9780949865786348,0.9562865843416511,0.9778158999213025,0.9555525481008548,0.9803710684396814,0.947906085287222
+ 3,-1,0.9900660105045667,0.9667552715579162,0.9780943488748243,0.9562848354163999,0.9778152458988124,0.9555508570413506,0.9803701941991193,0.947905507366447
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:888977fc1db72f8aa601cc4289472d6f451627057861bef159645bed97eb6cda
+ size 442539177
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
similarity_evaluation_test_results.csv ADDED
@@ -0,0 +1,2 @@
+ epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+ -1,-1,0.8965342848178169,0.8994593738812181,0.8944096800717946,0.8926364065652307,0.8938001177448105,0.8921666061090477,0.8832380623728627,0.8825778517324421
special_tokens_map.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token": "[CLS]",
+   "cls_token": "[CLS]",
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff