Abdulrahman Al-Ghamdi committed · verified
Commit 55166e2 · 1 Parent(s): 2629349

Update README.md

Files changed (1)
  1. README.md +25 -13
README.md CHANGED
@@ -15,7 +15,6 @@ pipeline_tag: text-classification
 ---
 
 # 🍽️ Arabic Restaurant Review Sentiment Analysis 🚀
-**Model Is Under Development**
 ## 📌 Overview
 This project fine-tunes a **transformer-based model** to analyze sentiment in **Arabic restaurant reviews**.
 We utilized **Hugging Face’s model training pipeline** and deployed the final model as an **interactive Gradio web app**.
@@ -41,22 +40,35 @@ The model was fine-tuned using **Hugging Face Transformers** on a dataset of res
 ### **📊 Evaluation Metrics**
 | Metric | Score |
 |-------------|--------|
-| **Eval Loss** | `****` |
-| **Accuracy** | `88.71%` |
-| **Precision** | `91.07%` |
-| **Recall** | `93.31%` |
-| **F1-score** | `92.17%` |
+| **Train Loss**| '0.470'|
+| **Eval Loss** | `0.373` |
+| **Accuracy** | `86.41%` |
+| **Precision** | `87.01%` |
+| **Recall** | `86.49%` |
+| **F1-score** | `86.75%` |
 
 ## ⚙️ Training Parameters
 ```python
+model_name = "aubmindlab/bert-base-arabertv2"
+model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, classifier_dropout=0.5).to(device)
+
 training_args = TrainingArguments(
     output_dir="./results",
-    evaluation_strategy="epoch",
-    per_device_train_batch_size=4,
-    per_device_eval_batch_size=4,
-    num_train_epochs=5,
-    weight_decay=0.01,
-    learning_rate=3e-5,
+    evaluation_strategy="epoch",
+    save_strategy="epoch",
+    per_device_train_batch_size=8,
+    per_device_eval_batch_size=8,
+    num_train_epochs=4,
+    weight_decay=1,
+    learning_rate=1e-5,
+    lr_scheduler_type="cosine",
+    warmup_ratio=0.1,
     fp16=True,
-    report_to="none"
+    report_to="none",
+    save_total_limit=2,
+    gradient_accumulation_steps=2,
+    load_best_model_at_end=True,
+    max_grad_norm=1.0,
+    metric_for_best_model="eval_loss",
+    greater_is_better=False,
 )
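
For context beyond the diff: the committed README adds the model setup and `TrainingArguments`, but not the `Trainer` wiring or the metric computation behind the evaluation table. Below is a minimal sketch of how those pieces typically fit together, reusing the `model` and `training_args` from the added block; the `train_dataset` / `eval_dataset` names and the sklearn-based `compute_metrics` with binary averaging are assumptions, not part of the commit.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import Trainer

def compute_metrics(eval_pred):
    # Turn raw logits into the accuracy / precision / recall / F1 values
    # reported in the evaluation table; binary averaging is an assumption.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

# train_dataset / eval_dataset are placeholders for the tokenized review splits.
trainer = Trainer(
    model=model,              # AraBERTv2 classifier from the added block
    args=training_args,       # TrainingArguments from the added block
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
metrics = trainer.evaluate()  # returns eval_loss plus the metrics above
```

Because `load_best_model_at_end=True` and `metric_for_best_model="eval_loss"` are set, the checkpoint with the lowest eval loss is restored before the final evaluation, which is where numbers like those in the table would come from.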
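The overview also mentions deploying the fine-tuned model as an interactive Gradio web app. A hedged sketch of such an app follows, assuming the exported checkpoint lives at a placeholder path (`./results/best`) and that `LABEL_1` maps to positive sentiment; neither detail is confirmed by the commit.

```python
import gradio as gr
from transformers import pipeline

# Placeholder path: point this at the exported fine-tuned checkpoint.
clf = pipeline("text-classification", model="./results/best")

def predict(review: str) -> str:
    result = clf(review)[0]
    # Mapping LABEL_1 -> positive is an assumption about the label order.
    label = "Positive 😊" if result["label"].endswith("1") else "Negative 😞"
    return f"{label} (score: {result['score']:.2f})"

demo = gr.Interface(
    fn=predict,
    inputs=gr.Textbox(lines=3, label="Arabic restaurant review"),
    outputs=gr.Textbox(label="Sentiment"),
    title="🍽️ Arabic Restaurant Review Sentiment Analysis",
)

if __name__ == "__main__":
    demo.launch()
```

Running the script starts a local Gradio server; `demo.launch(share=True)` would expose a temporary public link instead.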