
xlm-roberta-base-hebban-reviews

Dataset

  • dataset_name: BramVanroy/hebban-reviews
  • dataset_config: filtered_sentiment
  • dataset_revision: 2.0.0
  • labelcolumn: review_sentiment
  • textcolumn: review_text_without_quotes
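
For reference, this configuration corresponds to loading the data with the Hugging Face datasets library. A minimal sketch, assuming the standard "train" split name (not stated above):

```python
from datasets import load_dataset

# Load the exact dataset name, configuration and revision listed above.
dataset = load_dataset(
    "BramVanroy/hebban-reviews",
    "filtered_sentiment",
    revision="2.0.0",
)

# The text column is "review_text_without_quotes" and the label column
# is "review_sentiment". The "train" split name is an assumption.
example = dataset["train"][0]
print(example["review_text_without_quotes"], example["review_sentiment"])
```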

Training

  • optim: adamw_hf
  • learning_rate: 5e-05
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • gradient_accumulation_steps: 1
  • max_steps: 5001
  • save_steps: 500
  • metric_for_best_model: qwk
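
As an illustration, these hyperparameters map onto transformers.TrainingArguments roughly as sketched below; every setting not listed above (output directory, evaluation and saving strategy, greater_is_better) is an assumption:

```python
from transformers import TrainingArguments

# Mirror the hyperparameters listed above. Values not shown in the card
# are assumptions for illustration only.
training_args = TrainingArguments(
    output_dir="trained/hebban-reviews/xlm-roberta-base",  # assumed from the checkpoint path below
    optim="adamw_hf",
    learning_rate=5e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=1,
    max_steps=5001,
    save_steps=500,
    evaluation_strategy="steps",   # assumption: evaluate at the same cadence as saving
    eval_steps=500,                # assumption
    load_best_model_at_end=True,   # assumption: needed to select the best checkpoint
    metric_for_best_model="qwk",
    greater_is_better=True,        # assumption: QWK is maximised
)
```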

Best checkpoint based on validation

  • best_metric: 0.741533273748008
  • best_model_checkpoint: trained/hebban-reviews/xlm-roberta-base/checkpoint-2000

Test results of best checkpoint

  • accuracy: 0.8094674556213017
  • f1: 0.812677483587223
  • precision: 0.8173602585519025
  • qwk: 0.7369243423166991
  • recall: 0.8094674556213017
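
These scores can be reproduced with scikit-learn, where qwk is Cohen's kappa with quadratic weights. A minimal sketch; the weighted averaging mode for f1, precision and recall is an assumption (it is consistent with recall equalling accuracy above):

```python
from sklearn.metrics import (
    accuracy_score,
    cohen_kappa_score,
    precision_recall_fscore_support,
)

def compute_metrics(y_true, y_pred):
    """Compute the metrics reported above from label and prediction lists."""
    # "weighted" averaging is an assumption; the card does not state the mode.
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted"
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1,
        "precision": precision,
        # Quadratic weighted kappa = Cohen's kappa with quadratic weights.
        "qwk": cohen_kappa_score(y_true, y_pred, weights="quadratic"),
        "recall": recall,
    }
```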

Confusion matrix

(confusion matrix image)

Normalized confusion matrix

(normalized confusion matrix image)

Environment

  • cuda_capabilities: 8.0; 8.0
  • cuda_device_count: 2
  • cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
  • finetuner_commit: 66294c815326c93682003119534cb72009f558c2
  • platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
  • python_version: 3.9.5
  • torch_version: 1.10.0
  • transformers_version: 4.21.0