OysterQAQ committed
Commit 9569cdd
1 Parent(s): 215915e

Update README.md

Files changed (1): README.md +94 -2

README.md CHANGED
@@ -1,4 +1,14 @@
- ## acgvoc2vec
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+
+
+ ---
+
+ # acgvoc2vec

  The model follows the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) architecture: its **distiluse-base-multilingual-cased-v2** pretrained weights are fine-tuned on a dataset of anime-related sentence pairs with a learning rate of 5e-5, using MultipleNegativesRankingLoss as the loss function.

@@ -34,4 +44,86 @@
  * Anime Chinese title's synopsis - synopsis
  * Anime Chinese title + subheading - corresponding content

- After crawling, cleaning, and processing, 5.1 million text pairs were obtained (still growing). Training ran for 20 epochs with batch_size=80, adapting the sentence-transformers weights to this problem space so that the model produces text feature vectors that incorporate domain knowledge (related texts end up closer together, e.g. a work and its characters, or characters from the same work).
+ After crawling, cleaning, and processing, 5.1 million text pairs were obtained (still growing). Training ran for 20 epochs with batch_size=80, adapting the sentence-transformers weights to this problem space so that the model produces text feature vectors that incorporate domain knowledge (related texts end up closer together, e.g. a work and its characters, or characters from the same work).
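+
+ As a minimal sketch of this setup (the pairs below are invented placeholders, not real dataset records), the pair data plugs into MultipleNegativesRankingLoss, which treats the other positives in a batch as negatives for each anchor:
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, InputExample, losses
+
+ # Start from the multilingual checkpoint named above.
+ model = SentenceTransformer('distiluse-base-multilingual-cased-v2')
+
+ # One InputExample per (anchor, positive) text pair, e.g. a Chinese
+ # anime title paired with its synopsis (placeholder strings here).
+ train_examples = [
+     InputExample(texts=['<Chinese anime title>', '<its synopsis>']),
+     InputExample(texts=['<title> + <subheading>', '<corresponding content>']),
+ ]
+
+ train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=80)
+ # In-batch negatives: every other positive serves as a negative.
+ train_loss = losses.MultipleNegativesRankingLoss(model)
+ ```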
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+
+ model = SentenceTransformer('{MODEL_NAME}')  # replace {MODEL_NAME} with this model's Hub id
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
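+
+ Because training optimizes cosine similarity, embeddings can be compared with `util.cos_sim`; in this domain, related texts (the input strings below are illustrative) should score higher than unrelated ones:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer('{MODEL_NAME}')  # placeholder for this model's Hub id
+ # Illustrative inputs: a work title and one of its characters.
+ emb = model.encode(['<work title>', '<character from that work>'])
+ print(util.cos_sim(emb[0], emb[1]))
+ ```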
+
+ ## Evaluation Results
+
+ <!--- Describe how your model was evaluated -->
+
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+ ## Training
+
+ The model was trained with the following parameters:
+
+ **DataLoader**:
+
+ `torch.utils.data.dataloader.DataLoader` of length 64769 with parameters:
+
+ ```
+ {'batch_size': 80, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+ ```
+
+ **Loss**:
+
+ `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
+
+ ```
+ {'scale': 20.0, 'similarity_fct': 'cos_sim'}
+ ```
+
+ Parameters of the fit()-Method:
+
+ ```
+ {
+     "epochs": 20,
+     "evaluation_steps": 0,
+     "evaluator": "NoneType",
+     "max_grad_norm": 1,
+     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
+     "optimizer_params": {
+         "lr": 2e-05
+     },
+     "scheduler": "WarmupLinear",
+     "steps_per_epoch": null,
+     "warmup_steps": 129538,
+     "weight_decay": 0.01
+ }
+ ```
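+
+ Read together, these settings correspond to a `fit()` call along the following lines (a reconstruction from the parameters above, reusing the dataloader and loss sketched earlier, not the original training script):
+
+ ```python
+ import torch
+
+ model.fit(
+     train_objectives=[(train_dataloader, train_loss)],
+     epochs=20,
+     scheduler='WarmupLinear',
+     warmup_steps=129538,
+     optimizer_class=torch.optim.AdamW,
+     optimizer_params={'lr': 2e-05},
+     weight_decay=0.01,
+     max_grad_norm=1,
+ )
+ ```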
+
+ ## Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+   (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
+ )
+ ```
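+
+ The Dense head projects the 768-dimensional mean-pooled DistilBert output down to 512 dimensions, so `encode` returns 512-dimensional vectors:
+
+ ```python
+ # Should print 512, matching the Dense layer's out_features.
+ print(model.get_sentence_embedding_dimension())
+ ```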
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->