---
license: mit
---
# Model Card for DistilRoBERTaEmotionClassifier

This model was created to demonstrate several MLOps practices and is intended for educational purposes only. Please see the accompanying [GitHub repo](https://github.com/teg-lad/CA4015-MLOPSPipelineImplementation), which covers the material.

## Model Details

### Model Description

This model was trained on the [Kaggle Emotions](https://www.kaggle.com/datasets/nelgiriyewithana/emotions/data) dataset, which has 6 classes:

+ Sadness (0)
+ Joy (1)
+ Love (2)
+ Anger (3)
+ Fear (4)
+ Surprise (5)
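
For convenience, the class ids above can be captured in a plain Python dictionary. This is a minimal sketch taken directly from the list; the labels stored in the model's own `config.id2label` may be formatted differently.

```python
# Mapping from class id to emotion label, as listed above
ID2LABEL = {
    0: "sadness",
    1: "joy",
    2: "love",
    3: "anger",
    4: "fear",
    5: "surprise",
}
```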

### Model Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained("teglad/DistilRoBERTaEmotionClassifier")
tokenizer = AutoTokenizer.from_pretrained("teglad/DistilRoBERTaEmotionClassifier")

# Tokenize the input text, returning PyTorch tensors (input_ids and attention_mask)
inputs = tokenizer("Deep Learning models can be so difficult to understand, how do they even work?", return_tensors="pt")

# Run the model without tracking gradients, since this is inference only
with torch.no_grad():
    output = model(**inputs)

# The index of the largest logit is the predicted class
prediction = torch.argmax(output.logits, dim=1).tolist()
```
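
To turn the raw logits into a readable result, they can be converted to probabilities with a softmax and the predicted index mapped back to an emotion name. This is a minimal sketch that assumes the `ID2LABEL` dictionary defined earlier; you could instead read the labels from `model.config.id2label` if they are stored there.

```python
import torch.nn.functional as F

# Convert the logits to class probabilities
probs = F.softmax(output.logits, dim=1)

# Map the predicted index back to its emotion label (ID2LABEL is defined above)
predicted_id = int(torch.argmax(probs, dim=1).item())
print(f"Predicted emotion: {ID2LABEL[predicted_id]} ({probs[0, predicted_id].item():.3f})")
```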