Update README.md
README.md CHANGED

@@ -12,8 +12,8 @@ We limited the statement itself to 100 tokens and the context of the statement t
 
 **Important**
 
-We slightly modified the Classification Head of the XLMRobertaModelForSequenceClassification model (removed the tanh activation and the intermediate linear layer) as that improved the model performance for this task considerably.
-
+We slightly modified the Classification Head of the `XLMRobertaModelForSequenceClassification` model (removed the tanh activation and the intermediate linear layer), as that improved the model performance for this task considerably.
+To correctly load the full model, include the `trust_remote_code=True` argument when using the `from_pretrained` method.
 
 ## How to use
 
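The loading instruction added in this commit can be sketched as below. This is a minimal sketch, not the model card's own snippet: the model id `your-org/your-model` is a hypothetical placeholder for the actual repository name on the Hub.

```python
# Hypothetical placeholder: replace with the actual Hub repository id.
model_id = "your-org/your-model"

if __name__ == "__main__":
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # trust_remote_code=True lets transformers load the repository's own
    # modeling code, so the customized classification head (no tanh
    # activation, no intermediate linear layer) is used instead of the
    # stock XLMRobertaModelForSequenceClassification head.
    model = AutoModelForSequenceClassification.from_pretrained(
        model_id,
        trust_remote_code=True,
    )
```

Without `trust_remote_code=True`, `from_pretrained` would fall back to the stock classification head and the custom weights would not load correctly.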