---
license: mit
pipeline_tag: text-classification
---

## RoBERTa Justification Analyst

This model is a fine-tuned version of the RoBERTa architecture trained for sequence classification. Fine-tuning was done in PyTorch using the Adagrad optimizer with a learning rate of 2e-4 and an epsilon of 1e-8.
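
For reference, a minimal sketch of that optimizer configuration in PyTorch might look as follows. The base checkpoint `roberta-base` and the surrounding setup are assumptions here; the actual training script is not part of this repository:

```python
import torch
from transformers import RobertaForSequenceClassification

# Assumed starting checkpoint; three labels per the classification scheme below
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

# Optimizer settings described above: Adagrad, lr 2e-4, eps 1e-8
optimizer = torch.optim.Adagrad(model.parameters(), lr=2e-4, eps=1e-8)
```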

---

## Example Usage

To use the model, load it in PyTorch, tokenize a claim-evidence pair, and run a forward pass:
```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Load the fine-tuned model
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/justification-analyst')

# Load the tokenizer
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/justification-analyst')

# Tokenize a claim-evidence pair
claim = "This is a sample claim"
evidence = "This is a sample piece of evidence"
inputs = tokenizer.encode_plus(claim, evidence, return_tensors="pt")

# Use the model to make a prediction
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    prediction = torch.argmax(outputs.logits, dim=1).item()
```
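
The forward pass returns logits over three classes; `prediction` is the index of the highest-scoring class, which maps to the labels described in the next section.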

## Classification Labels

The model was trained on a dataset of claim-evidence pairs, where the goal is to classify whether the evidence supports the claim, refutes it, or does not provide enough information to decide. The labels used for this task are as follows:

- Label 0: Supports
- Label 1: Refutes
- Label 2: Not enough information
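
The integer returned by the usage example above can be converted to a human-readable label with a small dictionary. This helper is hypothetical and not shipped with the model:

```python
# Hypothetical mapping from class index to label name
id2label = {0: "Supports", 1: "Refutes", 2: "Not enough information"}
print(id2label[prediction])  # e.g. "Supports"
```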