---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: LG전자 올레드 TV OLED55C2FNA 스탠드 윤성 운송료상이 윤성종합가전
- text: '[엡손] EH-LS500W / 4K UHD 4000안시 2,500,000:1 EPSON 빔 프로젝터 초단초점 (주)메리트정보'
- text: 루컴즈 2024년형 50인치 스마트 UHD 구글 TV 4K 에너지효율 1등급 T5003KUG 스탠드 빌리어네어디
- text: 이노스 S8601KU LG 패널 스마트 TV 구글티비 벽걸이 기사방문설치(브라켓별도)_수도권(서울경기인천)_86인치 QLED 구글TV (주)티지디지털
- text: 삼성 WMN4070SG 벽결이브라켓 삼성고정브라켓 두루엠에스
inference: true
model-index:
- name: SetFit with mini1013/master_domain
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: metric
      value: 0.763001415762152
      name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
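As a rough illustration of these two steps, the sketch below fine-tunes the same base embedding model on a handful of labeled titles and then fits the classification head. The example texts, labels, and hyperparameter values are placeholders, not the settings used for this model (those are listed under Training Details).
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny placeholder few-shot set: one Korean product title per example, integer labels.
train_dataset = Dataset.from_dict({
    "text": [
        "이노스 S2401KU 어반스톡",
        "벤큐 GS50 풀HD 캠핑용 빔프로젝터",
    ],
    "label": [0, 5],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=16, num_epochs=1)

# trainer.train() performs both steps: contrastive fine-tuning of the
# Sentence Transformer body, then fitting the LogisticRegression head
# on embeddings produced by the fine-tuned body.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```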
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
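The body/head split can be inspected on the loaded model; a minimal sketch, assuming the standard `SetFitModel` attributes `model_body` and `model_head`:
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_el13")

# Sentence Transformer body that embeds the product titles.
print(model.model_body)
print(model.model_body.max_seq_length)  # expected: 512

# scikit-learn LogisticRegression head fitted on those embeddings.
print(model.model_head)
print(model.model_head.classes_)        # expected: the 7 labels 0-6
```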
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6 | <ul><li>'윤씨네 J-SV / 족자스크린 4:3비율 100인치 에스앤피'</li><li>'[ FLAT FLOW ] 플랏플로우 100인치 와이드 분리형 족자스크린 F-HJ100W F-HJ100W (100인치 와이드 족자형) 아이티원'</li><li>'윤씨네 J-SH40 / 와이드 족자스크린 16:9 40인치 에스앤피'</li></ul> |
| 2 | <ul><li>'75인치 189cm 4K UHD 비즈니스TV LH75BECH 스탠드 에너지효율등급 1등급 우수한 내구성 주식회사 쇼핑하는니체'</li><li>'[LG] 55인치 UHD 단독형 사이니지 3시리즈 (55UL3J) 고정형 벽걸이 설치 주식회사 케이엠시스템'</li><li>'[LG] 55인치 비디오월 슬림 베젤 1.74 mm, 500nit (55VM5J) 벽걸이 설치 (별도문의) 주식회사 케이엠시스템'</li></ul> |
| 5 | <ul><li>'벤큐 GS50 풀HD 캠핑용 빔프로젝터 안드로이드 아이폰 무선미러링 배터리내장 블루투스 (주)아솔컴퍼니'</li><li>'에이서 DX227 🧡정품 신형🧡 5200안시 XGA 20000:1 DLP 회의용 교육용 강당용 멀티용 도움에이브이'</li><li>'[피제이시스] 엡손 EB-L1070U 레이저프로젝터 ❤️정품새상품 ❤️ 주식회사 피제이시스(PJSYS.co.Ltd.)'</li></ul> |
| 0 | <ul><li>'이노스 S2401KU 어반스톡'</li><li>'[무결점] 프리즘 바이런 75인치 1등급 4K HDR 베젤리스TV 패널 2년 무상보증 / BR750UD_기사설치포함 (주)프리즘코리아'</li><li>'[무결점] 프리즘 바이런 55인치 1등급 4K HDR 베젤리스TV 패널 2년 무상보증 / BR550UHD (주)프리즘코리아'</li></ul> |
| 4 | <ul><li>'[PICO 국내 공식판매처] PICO NEO3 Enterprise VR (256GB) / 공공기관 및 공공교육기관 전용 주식회사 메타에듀시스'</li><li>'에듀플레이어 EA400 DVD플레이어 CD/DVD리핑 투웨이 블루투스 EA400 (ED404) 주식회사 에듀플레이어'</li><li>'오큘러스 퀘스트2 Oculus Quest2 올인원 VR게임헤드셋 퀘스트2 128GB (관세 대납) 팽마켓'</li></ul> |
| 1 | <ul><li>'카멜 디지털액자화이트(블랙) / PF1040IPS /10인치 디지털액자(동영상,슬리이드쇼,앨범) 선물용디지털액자PF-1040IPS / 디지털사진액자/ 16:9화면(화이트or 블랙) 블랙 에스라B2B'</li><li>'컴스마트 BM170 15.4형 스마트 디지털 액자 동영상 시계 달력 HDMI 서브 모니터 블루시스템쇼핑몰 주식회사'</li><li>'카멜 디지털액자 10인치 PF-1040IPS 미니모니터 사진 동영상 음악 에스제이인터내셔널'</li></ul> |
| 3 | <ul><li>'COMBO-2000A (금영 (KY)/ 내셔널 (NATIONAL) / 넥스디지탈 (NEX) /넥슨 (NEXN) /뉴썬인더스트리 엔플러스(NPLUS)/ 다비디스플레이 (DAVI) COMBO-2000A 메카트로주식회사'</li><li>'COMBO-119 /APH13000/AP-H3020/AP-H4000/APH-H2300/AP-HH232N/IAS-T1010/IAS-T810/IAS-T82CA 지에이치스토어'</li><li>'COMBO-2201 (AKB75455603 / AKB75635301 / AKB75635305 / AKB75675304 / akb75675306 / AKB75755301) 메카트로'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.7630 |
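The card does not name the metric beyond the generic `metric`; assuming it is an accuracy-style score over predicted labels, it could be reproduced on a labeled test split roughly as follows (the test examples below are placeholders reusing a title from the label table above):
```python
import numpy as np
from sklearn.metrics import accuracy_score
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_el13")

# Placeholder held-out examples: Korean product titles with labels 0-6.
test_texts = ["윤씨네 J-SH40 / 와이드 족자스크린 16:9 40인치 에스앤피"]
test_labels = [6]

preds = np.asarray(model.predict(test_texts))
print(accuracy_score(test_labels, preds))
```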
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_el13")
# Run inference
preds = model("삼성 WMN4070SG 벽결이브라켓 삼성고정브라켓 두루엠에스")
```
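For batches of titles you can also get per-class probabilities from the LogisticRegression head; a short follow-up sketch, assuming the standard `predict` / `predict_proba` methods:
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_el13")
texts = [
    "LG전자 올레드 TV OLED55C2FNA 스탠드 윤성 운송료상이 윤성종합가전",
    "루컴즈 2024년형 50인치 스마트 UHD 구글 TV 4K 에너지효율 1등급 T5003KUG 스탠드 빌리어네어디",
]
preds = model.predict(texts)        # one label id (0-6) per title
probs = model.predict_proba(texts)  # per-class probabilities, shape (len(texts), 7)
print(preds, probs.shape)
```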
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.4229 | 25 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
| 5 | 50 |
| 6 | 50 |
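The word counts above appear to be simple whitespace-token counts per training example; a minimal sketch of how such statistics can be computed (the example list is a placeholder):
```python
import statistics

texts = ["이노스 S2401KU 어반스톡", "윤씨네 J-SH40 / 와이드 족자스크린 16:9 40인치 에스앤피"]
word_counts = [len(t.split()) for t in texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))
```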
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
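These names map onto fields of setfit's `TrainingArguments`; a hedged sketch of how an equivalent configuration could be written (assuming setfit 1.x, with `loss` and `distance_metric` passed as the sentence-transformers objects the names above refer to):
```python
from sentence_transformers.losses import BatchHardTripletLossDistanceFunction, CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(512, 512),            # (embedding phase, classifier phase)
    num_epochs=(20, 20),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```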
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0182 | 1 | 0.4965 | - |
| 0.9091 | 50 | 0.118 | - |
| 1.8182 | 100 | 0.0382 | - |
| 2.7273 | 150 | 0.0008 | - |
| 3.6364 | 200 | 0.0003 | - |
| 4.5455 | 250 | 0.0002 | - |
| 5.4545 | 300 | 0.0002 | - |
| 6.3636 | 350 | 0.0002 | - |
| 7.2727 | 400 | 0.0001 | - |
| 8.1818 | 450 | 0.0001 | - |
| 9.0909 | 500 | 0.0001 | - |
| 10.0 | 550 | 0.0001 | - |
| 10.9091 | 600 | 0.0001 | - |
| 11.8182 | 650 | 0.0001 | - |
| 12.7273 | 700 | 0.0001 | - |
| 13.6364 | 750 | 0.0001 | - |
| 14.5455 | 800 | 0.0001 | - |
| 15.4545 | 850 | 0.0001 | - |
| 16.3636 | 900 | 0.0001 | - |
| 17.2727 | 950 | 0.0001 | - |
| 18.1818 | 1000 | 0.0001 | - |
| 19.0909 | 1050 | 0.0001 | - |
| 20.0 | 1100 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |