SetFit with mini1013/master_domain

This is a SetFit model for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
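The contrastive stage (step 1) trains on sentence pairs rather than single labeled examples: pairs sharing a label are pushed together, pairs with different labels are pushed apart. A minimal sketch of how such pairs can be built from a few labeled texts (illustrative only, not SetFit's internal API):

```python
import random

def generate_pairs(examples, num_pairs, seed=42):
    """Sample (text_a, text_b, target) triples for contrastive fine-tuning.

    examples: list of (text, label) tuples. Same-label pairs get a target
    cosine similarity of 1.0, different-label pairs 0.0, matching the
    CosineSimilarityLoss objective listed under Training Hyperparameters.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_pairs):
        (text_a, label_a), (text_b, label_b) = rng.sample(examples, 2)
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs

# Hypothetical toy data; the real model used 50 examples per label.
examples = [("lens case", 0.0), ("lens tweezers", 0.0),
            ("saline solution", 1.0), ("ultrasonic cleaner", 2.0)]
pairs = generate_pairs(examples, num_pairs=8)
```

In step 2, the frozen fine-tuned encoder embeds each training text once, and those embeddings become the feature vectors for the logistic-regression head.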

Model Details

Model Labels

Label Examples
3.0
  • '굿나잇 온열안대 수면안대 눈찜질 눈찜질기 눈찜질팩 MinSellAmount 오아월드'
  • '[대구백화점] [누리아이]안구건조증 치료의료기기 누리아이 5800 (위생용시트지 1박스 ) 누리아이 5800 대구백화점'
  • '동국제약 굿잠 스팀안대 3박스 수면 온열안대 (무향/카모마일향 선택) 1_무향 3박스_AA 동국제약_본사직영'
0.0
  • '렌즈집게 렌즈 넣는 집게 끼는 도구 흡착봉 소프트 렌즈집게(핑크) 썬더딜'
  • '메루루 원데이 소프트렌즈 집게 착용 분리 기구 1세트 MinSellAmount 체리팝스'
  • '소프트 통 케이스 빼는도구 접시 용품 흡착봉 뽁뽁이 보관통 하드 렌즈통(블루) 기쁘다희샵'
2.0
  • '초음파 변환장치 진동기 식기 세척기 진동판 생성기 초음파발생기 변환기 D. 20-40K1800W (비고 주파수) 메타몰'
  • '새한 초음파세정기 SH-1050 / 28kHz / 1.2L / 신제품 주식회사 전자코리아'
  • '새한 디지털 초음파 세척기 세정기 SH-1050D 안경 렌즈 귀금속 세척기 서진하이텍'
1.0
  • '휴먼바이오 식염수 중외제약 셀라인 식염수 370ml 20개, 드림 하드 렌즈용 생리 식염수 가이아코리아 휴먼바이오 식염수 500ml 20개 가이아코리아(Gaia Korea)'
  • '리뉴 센서티브 355ml 씨채널안경체인태백점'
  • '바슈롬 바이오트루 300ml 쏜 상점'

Evaluation

Metrics

Label Metric
all 0.9615
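The single aggregate figure (0.9615) is the evaluation metric computed over all labels; for SetFit classification cards this is typically accuracy, i.e. the fraction of test examples whose predicted label matches the gold label:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the gold labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical toy labels, not the model's actual evaluation split.
print(accuracy([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 2.0]))  # 0.75
```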

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then load the model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh6")
# Run inference
preds = model("교체용 케이스 소프트 집게 거울 콘텍트 세트 블루 슈가랜드")

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     3     9.705    19

Label   Training Sample Count
0.0     50
1.0     50
2.0     50
3.0     50
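The word-count statistics above can be reproduced from the training texts with simple whitespace tokenization (a sketch; SetFit's card generator may tokenize slightly differently):

```python
from statistics import median

def word_count_stats(texts):
    # Min / median / max number of whitespace-separated tokens per text.
    counts = [len(t.split()) for t in texts]
    return min(counts), median(counts), max(counts)

# Toy sample for illustration; the table above was computed
# over the 200 training texts (50 per label).
print(word_count_stats(["a b c", "one two three four five"]))  # (3, 4.0, 5)
```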

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (20, 20)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 40
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
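These values map directly onto setfit.TrainingArguments; a sketch of reconstructing the configuration (the tuple-valued parameters give the embedding-phase and classifier-phase settings respectively; dataset and trainer setup are omitted):

```python
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(512, 512),              # (embedding phase, classifier phase)
    num_epochs=(20, 20),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),  # Sentence Transformer body
    head_learning_rate=2e-05,           # classification head
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```

CosineSimilarityLoss and cosine_distance are SetFit's defaults, so they need not be passed explicitly.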

Training Results

Epoch Step Training Loss Validation Loss
0.0312 1 0.4002 -
1.5625 50 0.064 -
3.125 100 0.0021 -
4.6875 150 0.0004 -
6.25 200 0.0001 -
7.8125 250 0.0001 -
9.375 300 0.0 -
10.9375 350 0.0 -
12.5 400 0.0 -
14.0625 450 0.0 -
15.625 500 0.0 -
17.1875 550 0.0 -
18.75 600 0.0 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0.dev0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.46.1
  • PyTorch: 2.4.0+cu121
  • Datasets: 2.20.0
  • Tokenizers: 0.20.0

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Model size: 111M params (F32, Safetensors)

Base model: klue/roberta-base