Update README.md
README.md
CHANGED
@@ -77,6 +77,25 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: linear
  - num_epochs: 3

+ ### Example of use
+
+ Here's a Python snippet demonstrating how to use this model:
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_name = "your_huggingface_username/french_emotion_camembert"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+
+ text = "Je suis très heureux de votre service rapide et efficace."
+ inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
+ outputs = model(**inputs)
+
+ # Convert logits to probabilities and take the index of the most likely emotion
+ # (model.config.id2label can map this index to an emotion name)
+ prediction = torch.nn.functional.softmax(outputs.logits, dim=-1)
+ predicted_emotion = prediction.argmax().item()
+ print("Predicted emotion:", predicted_emotion)
+ ```
+
 ### Training results

 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
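The post-processing step in the snippet above (softmax over the logits, then argmax) can be sketched in plain Python, without torch, to show what it computes. The logits below are hypothetical values chosen for illustration, not outputs of this model:

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 4-class emotion head
logits = [1.2, 0.3, 2.5, -0.7]
probs = softmax(logits)

# argmax: the index of the highest probability is the predicted class
predicted_emotion = max(range(len(probs)), key=lambda i: probs[i])
print("Predicted emotion index:", predicted_emotion)  # → 2, since 2.5 is the largest logit
```

Because softmax is monotonic, the argmax of the probabilities always matches the argmax of the raw logits; the softmax only matters when you want to report the probabilities themselves.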