Add SetFit model
- 1_Pooling/config.json +10 -0
- README.md +232 -0
- config.json +24 -0
- config_sentence_transformers.json +10 -0
- config_setfit.json +8 -0
- model.safetensors +3 -0
- model_head.pkl +3 -0
- modules.json +14 -0
- sentence_bert_config.json +4 -0
- special_tokens_map.json +51 -0
- tokenizer.json +0 -0
- tokenizer_config.json +66 -0
- vocab.txt +0 -0
1_Pooling/config.json
ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
README.md
ADDED
@@ -0,0 +1,232 @@
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Stocks plummeted on Friday as investors grew increasingly concerned about
    a potential market correction. The Dow Jones Industrial Average fell 2.5% in a
    single day, its worst performance since 2020. Analysts warn that the correction
    could be a sign of a larger issue with the market, and that investors should be
    prepared for a potential downturn. Meanwhile, the S&P 500 also declined, with
    tech stocks leading the way down. The correction has left many investors wondering
    if the market has reached a tipping point, and whether a bear market is on the
    horizon.
- text: The highly anticipated IPO of electric vehicle manufacturer, Rivian, is expected
    to be a game-changer for the industry. With a strong order book and a proven track
    record of innovation, Rivian is poised to disrupt the traditional auto sector.
    As the company prepares to list on the NASDAQ, investors are eagerly awaiting
    the opportunity to get in on the ground floor of this promising new player. With
    a market capitalization of over $50 billion, Rivian is set to make a significant
    impact on the stock market, and we believe it's a buy for long-term investors.
- text: 'Just saw Q2 earnings release from @CompanyABC and I''m loving the 25% revenue
    growth! Their expansion into new markets is paying off. Bullish on this stock
    for the long haul #finance #stockmarket'
- text: US GDP growth rate remains steady at 2.1% in Q1, meeting economist expectations.
    The Federal Reserve's decision to keep interest rates unchanged is seen as a positive
    sign for the economy, but some analysts warn of potential risks from rising inflation.
- text: The latest GDP report is a clear indication that the economy is headed for
    a recession. With a decline of 0.5% in the first quarter, it's only a matter of
    time before businesses start to feel the pinch. The Fed's decision to raise interest
    rates will only exacerbate the situation, making it even harder for consumers
    to afford everyday essentials. I'm bearish on the market until we see a significant
    turnaround in economic indicators.
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.6520100502512562
      name: Accuracy
---

# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
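
A minimal sketch of this two-stage procedure is shown below. It fine-tunes the same base model on a tiny, hypothetical labeled dataset with the SetFit `Trainer`; the texts, labels, and output path are illustrative placeholders, not the data behind this checkpoint.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training data; "text"/"label" are the SetFit default column names.
train_dataset = Dataset.from_dict({
    "text": [
        "Shares plunged 8% after the company missed revenue estimates.",
        "Record quarterly profits have analysts raising their price targets.",
        "The index closed flat as investors awaited the Fed's next meeting.",
    ],
    "label": ["0", "1", "2"],
})

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    labels=["0", "1", "2"],
)

args = TrainingArguments(batch_size=16, num_epochs=5)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# train() runs both stages: contrastive fine-tuning of the embedding body,
# then fitting the LogisticRegression head on the resulting embeddings.
trainer.train()
model.save_pretrained("my-setfit-model")  # illustrative output path
```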

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
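
Once loaded, the components listed above can be inspected directly. A small sketch (the model id below is a placeholder, as in the inference example further down):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("setfit_model_id")  # placeholder model id

print(model.model_body)  # SentenceTransformer body: MPNet encoder + mean-pooling module
print(model.model_head)  # scikit-learn LogisticRegression classification head
print(model.labels)      # ["0", "1", "2"]
```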

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 2 | <ul><li>"The US economy is expected to grow at a moderate pace in the coming quarters, with the Federal Reserve maintaining a neutral stance on interest rates. Analysts are predicting a slight increase in GDP, driven by consumer spending and business investment. However, some experts caution that global trade tensions and Brexit uncertainty could impact the country's economic outlook."</li><li>"Just took a closer look at the market trends and I'm not seeing anything that suggests a major shift in the near future. Volatility is still relatively low and the overall sentiment remains stable. Nothing to get too excited about, but nothing to worry about either."</li><li>'The US Securities and Exchange Commission (SEC) has announced a new set of guidelines for financial institutions to follow when reporting their quarterly earnings. The changes are aimed at providing more transparency and clarity for investors. The SEC stated that the new rules will take effect in the next quarter, and companies are expected to comply with the new guidelines by then. The move is seen as a positive step towards improving corporate governance and investor confidence.'</li></ul> |
| 1 | <ul><li>"Just got out of the Q1 earnings call and I'm feeling bullish on $AAPL. Revenue growth is through the roof and their new product lineup is going to disrupt the market. Time to buy in! #Apple #StockMarket"</li><li>'The tech sector is poised for significant growth in the coming quarters, driven by the increasing adoption of cloud computing and artificial intelligence. With major players like Amazon and Microsoft leading the charge, investors can expect a substantial boost in stock prices. In fact, a recent report by Morgan Stanley suggests that the tech sector could see a 20% increase in revenue by the end of the year, making it an attractive space for investors looking to capitalize on the market expansion.'</li><li>"AI Breakthroughs Fuel Market Optimism: Experts Predict 30% Surge in Tech Stocks\n\nA recent report from leading AI research firm, DeepMind, has sent shockwaves through the financial community with its groundbreaking findings on neural network advancements. The breakthroughs, which include significant improvements in processing power and efficiency, are expected to have a profound impact on the tech industry. As a result, experts are predicting a 30% surge in tech stocks, with many analysts citing AI as the driving force behind this optimism. With the likes of Google and Microsoft already investing heavily in AI research, it's clear that this trend is here to stay. The Bullish sentiment is palpable, with many investors taking a long-term view on the potential of AI to drive growth and innovation."</li></ul> |
| 0 | <ul><li>"The latest GDP report is a major red flag for the economy, with a 0.5% contraction in Q2. This is a clear indication that the Federal Reserve's interest rate hikes are starting to bite, and it's only a matter of time before we see a recession. The market is already pricing in a 75% chance of a rate cut by the end of the year, but I'm not convinced that's enough to stem the tide. With the yield curve still inverted and consumer spending slowing, I'm bearish on the market's prospects for the rest of the year."</li><li>"Shares of Tesla (TSLA) plummeted 8.5% after the electric vehicle maker reported a surprise earnings miss due to a 27% decline in revenue. The company's revenue fell short of analyst expectations, casting a shadow over its growth prospects. This is the third consecutive quarter that Tesla has missed revenue estimates, raising concerns among investors about the company's ability to maintain its market share in a highly competitive EV market."</li><li>"Shares of struggling retailer, Sears Holdings, plummeted 12.5% on Tuesday after the company announced it would be closing an additional 150 stores, exacerbating concerns over its ability to stay afloat. The move is the latest in a series of efforts to stem the bleeding, but investors remain skeptical. With over $1.5 billion in debt and dwindling cash reserves, many are questioning whether Sears can avoid bankruptcy. The company's stock has lost nearly 75% of its value over the past year, leaving it on the brink of collapse. The news sent shockwaves through the retail sector, with other struggling retailers seeing their shares decline in response."</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.6520   |
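
The accuracy above is reported on a test split of an otherwise unspecified dataset. As a rough sketch, a comparable number can be reproduced on your own labeled data (the model id, texts, and labels below are placeholders):

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("setfit_model_id")  # placeholder model id

# Placeholder evaluation data; substitute your own labeled test split.
test_texts = [
    "The company beat earnings expectations and raised full-year guidance.",
    "Weak retail sales figures dragged the sector lower on Monday.",
]
test_labels = ["1", "0"]

preds = model.predict(test_texts)
print(accuracy_score(test_labels, preds))
```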

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("Just saw Q2 earnings release from @CompanyABC and I'm loving the 25% revenue growth! Their expansion into new markets is paying off. Bullish on this stock for the long haul #finance #stockmarket")
```
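
The call above classifies a single string. Continuing from that snippet, `predict` and `predict_proba` also accept a batch of texts (the example texts below are illustrative):

```python
# Continues from the snippet above, where `model` has already been loaded.
texts = [
    "US GDP growth rate remains steady at 2.1% in Q1, meeting economist expectations.",
    "Shares of Tesla (TSLA) plummeted 8.5% after a surprise earnings miss.",
]
preds = model.predict(texts)         # labels from config_setfit.json: "0", "1", "2"
probas = model.predict_proba(texts)  # per-class probabilities from the LogisticRegression head
print(preds)
print(probas)
```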

<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 27  | 75.1190 | 271 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 15                    |
| 1     | 16                    |
| 2     | 11                    |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
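
As a sketch, the hyperparameters above correspond roughly to the following `setfit.TrainingArguments` (assuming SetFit 1.x; `CosineSimilarityLoss` is imported from `sentence_transformers.losses`, and `distance_metric` is left at its cosine-distance default):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(5, 5),
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=True,
)
```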

### Training Results
| Epoch   | Step    | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0137  | 1       | 0.326         | -               |
| 0.6849  | 50      | 0.0037        | -               |
| 1.0     | 73      | -             | 0.0065          |
| 1.3699  | 100     | 0.0006        | -               |
| 2.0     | 146     | -             | 0.0047          |
| 2.0548  | 150     | 0.0002        | -               |
| 2.7397  | 200     | 0.0003        | -               |
| 3.0     | 219     | -             | 0.0038          |
| 3.4247  | 250     | 0.0001        | -               |
| 4.0     | 292     | -             | 0.0038          |
| 4.1096  | 300     | 0.0002        | -               |
| 4.7945  | 350     | 0.0001        | -               |
| **5.0** | **365** | **-**         | **0.0036**      |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.9.19
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.4.0
- Datasets: 2.20.0
- Tokenizers: 0.15.2

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json
ADDED
@@ -0,0 +1,24 @@
{
  "_name_or_path": "setfit/step_365",
  "architectures": [
    "MPNetModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "mpnet",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
  "transformers_version": "4.39.0",
  "vocab_size": 30527
}
config_sentence_transformers.json
ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.39.0",
    "pytorch": "2.4.0"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
config_setfit.json
ADDED
@@ -0,0 +1,8 @@
{
  "normalize_embeddings": false,
  "labels": [
    "0",
    "1",
    "2"
  ]
}
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3754648749baf4cfd3cd7d596948a47e3f132d6b6a6b1cb44fc4b9884efb9cc
size 437967672
model_head.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4454ec63134a2bfa5992b549b96e587aede97ed9d138317e05d281a88e6b33ed
size 19295
modules.json
ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json
ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,66 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "104": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "30526": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "eos_token": "</s>",
  "mask_token": "<mask>",
  "max_length": 512,
  "model_max_length": 512,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "MPNetTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt
ADDED
The diff for this file is too large to render.