SetFit Aspect Model

This is a SetFit model that can be used for Aspect-Based Sentiment Analysis (ABSA). It uses a LogisticRegression instance as its classification head. Within the full ABSA pipeline, this model is responsible for filtering aspect span candidates.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.

This model was trained as part of a larger ABSA system, which works as follows (a conceptual sketch of step 1 follows the list):

  1. Use a spaCy model to select possible aspect span candidates.
  2. Use this SetFit model to filter these possible aspect span candidates.
  3. Use a SetFit model to classify the filtered aspect span candidates.
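
The sketch below illustrates step 1 of that pipeline. The spaCy model and the use of noun chunks as candidates are assumptions for illustration only; this card does not state which spaCy model or candidate strategy was used. Steps 2 and 3 are handled by the SetFit models, as shown under "Direct Use for Inference" below.

import spacy

# Step 1 (illustrative): propose candidate aspect spans with spaCy.
# "en_core_web_sm" and noun chunks are placeholders, not this model's actual setup.
nlp = spacy.load("en_core_web_sm")
doc = nlp("The food was great, but the venue is just way too busy.")
candidates = [chunk.text for chunk in doc.noun_chunks]
print(candidates)  # e.g. ['The food', 'the venue']; this aspect model then keeps or
                   # discards each candidate, and the polarity model labels the rest.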

Model Details

Model Description

  • Model Type: SetFit
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 ("aspect", "no aspect")
  • Model size: ~124M parameters (F32, Safetensors)
  • Language: Indonesian (based on the label examples below)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label: aspect
Examples:
  • 'story:saranku developer harus menciptakan sebuah story yang sangat menarik agar tidak kehilangan para player karena masalahnya banyak player yg tidak bertahan lama karena repetitif dan monoton tiap update size makin gede doang yg isinya cuma chest baru itupun sampah puzzle yg makin lama makin rumit tapi chest nya sampah story kebanyakan npc teyvat story utama punya mc dilupain gak difokusin map kalo udah kosong ya nyampah bikin size gede doang main 3 tahun rasanya monoton perkembangan buruk'
  • 'reward:tolong ditambah lagi reward untuk gachanya untuk player lama kesulitan mendapatkan primo karena sudah tidak ada lagi quest dan eksplorasi juga sudah 100 dasar developer kapitalis game ini makin lama makin monoton dan tidak ramah untuk player lama yang kekurangan bahan untuk gacha karakter'
  • 'event:cuman saran jangan terlalu pelit biar para player gak kabur sama game sebelah hadiah event quest di perbaiki udah nunggu event lama lama hadiah cuman gitu gitu aja sampek event selesai primogemnya buat 10 pull gacha gak cukup tingakat kesulitan beda hadiah sama saja lama lama yang main pada kabur kalok terlalu pelit dan 1 lagi jariang mohon di perbaiki untuk server indonya trimaksih'

Label: no aspect
Examples:
  • 'saranku developer:saranku developer harus menciptakan sebuah story yang sangat menarik agar tidak kehilangan para player karena masalahnya banyak player yg tidak bertahan lama karena repetitif dan monoton tiap update size makin gede doang yg isinya cuma chest baru itupun sampah puzzle yg makin lama makin rumit tapi chest nya sampah story kebanyakan npc teyvat story utama punya mc dilupain gak difokusin map kalo udah kosong ya nyampah bikin size gede doang main 3 tahun rasanya monoton perkembangan buruk'
  • 'story:saranku developer harus menciptakan sebuah story yang sangat menarik agar tidak kehilangan para player karena masalahnya banyak player yg tidak bertahan lama karena repetitif dan monoton tiap update size makin gede doang yg isinya cuma chest baru itupun sampah puzzle yg makin lama makin rumit tapi chest nya sampah story kebanyakan npc teyvat story utama punya mc dilupain gak difokusin map kalo udah kosong ya nyampah bikin size gede doang main 3 tahun rasanya monoton perkembangan buruk'
  • 'player:saranku developer harus menciptakan sebuah story yang sangat menarik agar tidak kehilangan para player karena masalahnya banyak player yg tidak bertahan lama karena repetitif dan monoton tiap update size makin gede doang yg isinya cuma chest baru itupun sampah puzzle yg makin lama makin rumit tapi chest nya sampah story kebanyakan npc teyvat story utama punya mc dilupain gak difokusin map kalo udah kosong ya nyampah bikin size gede doang main 3 tahun rasanya monoton perkembangan buruk'

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit
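
ABSA support additionally relies on spaCy to propose aspect span candidates. Depending on your SetFit version, the absa extra (an assumption here; check the SetFit documentation for your version) pulls in that dependency:

pip install "setfit[absa]"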

Then you can load this model and run inference.

from setfit import AbsaModel

# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
    "Funnyworld1412/review_game_absa-aspect",
    "Funnyworld1412/review_game_absa-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
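# The exact return format depends on the SetFit version; typically `preds` is a
# list of aspect predictions per input, e.g. (illustrative only):
# [{'span': 'food', 'polarity': 'positive'}, {'span': 'venue', 'polarity': 'negative'}]
print(preds)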

Training Details

Training Set Metrics

  • Word count per training sample: min 4, median 46.6389, max 94
  • Training samples per label: "no aspect" 4189, "aspect" 990

Training Hyperparameters

  • batch_size: (4, 4)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 1
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
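
For reference, the hyperparameters above correspond to fields of setfit's TrainingArguments. The snippet below is a minimal, hedged training sketch assuming SetFit 1.0.x with ABSA support; the base Sentence Transformer, spaCy model, and dataset path are placeholders, since this card does not state which ones were used.

from datasets import load_dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Placeholder base encoder and spaCy model -- the ones actually used for this
# checkpoint are not documented in this card.
model = AbsaModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    spacy_model="en_core_web_lg",
)

# Placeholder dataset; AbsaTrainer expects "text", "span", "label" and "ordinal"
# columns (see the SetFit ABSA documentation).
train_dataset = load_dataset("path/to/your-absa-dataset", split="train")

args = TrainingArguments(
    batch_size=(4, 4),
    num_epochs=(1, 1),
    num_iterations=1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()

# Save the aspect and polarity halves separately, mirroring the two Hub repos above.
model.save_pretrained(
    "models/review_game_absa-aspect",
    "models/review_game_absa-polarity",
)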

Training Results

Epoch Step Training Loss Validation Loss
0.0004 1 0.4229 -
0.0193 50 0.3888 -
0.0386 100 0.268 -
0.0579 150 0.3151 -
0.0772 200 0.0962 -
0.0965 250 0.2717 -
0.1158 300 0.2986 -
0.1351 350 0.1456 -
0.1544 400 0.3291 -
0.1737 450 0.4705 -
0.1931 500 0.162 -
0.2124 550 0.227 -
0.2317 600 0.105 -
0.2510 650 0.0809 -
0.2703 700 0.0608 -
0.2896 750 0.0804 -
0.3089 800 0.5065 -
0.3282 850 0.1868 -
0.3475 900 0.2777 -
0.3668 950 0.0483 -
0.3861 1000 0.0174 -
0.4054 1050 0.0361 -
0.4247 1100 0.0208 -
0.4440 1150 0.1162 -
0.4633 1200 0.3258 -
0.4826 1250 0.4762 -
0.5019 1300 0.009 -
0.5212 1350 0.0445 -
0.5405 1400 0.4436 -
0.5598 1450 0.036 -
0.5792 1500 0.2706 -
0.5985 1550 0.2454 -
0.6178 1600 0.0539 -
0.6371 1650 0.2127 -
0.6564 1700 0.174 -
0.6757 1750 0.0915 -
0.6950 1800 0.3465 -
0.7143 1850 0.2593 -
0.7336 1900 0.205 -
0.7529 1950 0.2425 -
0.7722 2000 0.1797 -
0.7915 2050 0.0083 -
0.8108 2100 0.0973 -
0.8301 2150 0.1209 -
0.8494 2200 0.0049 -
0.8687 2250 0.0028 -
0.8880 2300 0.1165 -
0.9073 2350 0.046 -
0.9266 2400 0.2102 -
0.9459 2450 0.1639 -
0.9653 2500 0.0114 -
0.9846 2550 0.3658 -

Framework Versions

  • Python: 3.10.13
  • SetFit: 1.0.3
  • Sentence Transformers: 3.0.1
  • spaCy: 3.7.5
  • Transformers: 4.36.2
  • PyTorch: 2.1.2
  • Datasets: 2.19.2
  • Tokenizers: 0.15.2

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}