Added use example
README.md CHANGED
@@ -45,4 +45,25 @@ Comparison with `distilbert-base-uncased-finetuned-sst-2-english`:
|Mann Whitney U-test|0.00108|0.00108|
|Student t-test|1.33e-12|3.03e-12|

Comparison with `xlm-roberta-large` yielded inconclusive results: this model achieved higher accuracy but a lower macro F1 score, and neither difference was statistically significant.
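The p-values above come from comparing the two models' scores across repeated runs. A minimal sketch of how such a comparison can be computed with `scipy.stats` (the per-run accuracy arrays below are hypothetical illustrations, not the actual results behind the table):

```python
from scipy.stats import mannwhitneyu, ttest_ind

# Hypothetical per-run accuracies for two models (NOT the actual
# results reported above); e.g. one value per random seed.
model_a = [0.802, 0.810, 0.795, 0.808, 0.799, 0.805]
model_b = [0.781, 0.770, 0.776, 0.784, 0.772, 0.779]

# Non-parametric test: do values from one sample tend to be larger?
u_stat, u_p = mannwhitneyu(model_a, model_b, alternative="two-sided")

# Parametric test comparing the means of the two samples.
t_res = ttest_ind(model_a, model_b)

print(f"Mann-Whitney U p-value: {u_p:.5f}")
print(f"Student t-test p-value: {t_res.pvalue:.2e}")
```

With well-separated samples like these, both tests report p-values far below 0.05, which is the pattern the table above reflects.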
## Use examples

```python
from simpletransformers.classification import ClassificationModel

model_args = {
    "num_train_epochs": 6,
    "learning_rate": 3e-6,
    "train_batch_size": 69,
}

model = ClassificationModel(
    "roberta", "5roop/roberta-base-frenk-hate",
    use_cuda=True,
    args=model_args,
)

predictions, logit_output = model.predict(
    ["Build the wall", "Build the wall of trust"]
)
predictions
### Output:
### array([1, 0])
```