Update README.md
README.md CHANGED
---
license: mit
widget:
- text: "whaling is part of the culture of various indigenous population and should be allowed for the purpose of maintaining this tradition and way of life and sustenance, among other uses of a whale. against We should ban whaling"
---
## Model Usage

This model is built on custom code, so the Hugging Face Inference API cannot be used directly. To use the model, follow the steps below.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# ... (model and tokenizer loading, and `encoding = tokenizer.encode_plus(...)`,
#      are elided in this diff) ...

with torch.no_grad():
    test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
    test_prediction = test_prediction["output"].flatten().numpy()
```
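The flattened output is compared against a 0.25 threshold in the Prediction section below, which suggests per-label probabilities. If the custom head instead returns raw logits (an assumption; check the model's custom code), a sigmoid would map them into (0, 1) first. A minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    # squashes raw scores into (0, 1), one probability per label
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical raw scores for three labels (made-up values, not real model output)
logits = np.array([2.0, -1.0, 0.0])
probs = sigmoid(logits)
print(probs)  # sigmoid(0.0) is exactly 0.5
```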
## Prediction

To make a prediction, map the outputs to the correct labels. During the competition, a threshold of 0.25 was used to binarize the output.

```python
THRESHOLD = 0.25
LABEL_COLUMNS = ['Self-direction: thought', 'Self-direction: action', 'Stimulation', 'Hedonism',
                 'Achievement', 'Power: dominance', 'Power: resources', 'Face',
                 'Security: personal', 'Security: societal', 'Tradition', 'Conformity: rules',
                 'Conformity: interpersonal', 'Humility', 'Benevolence: caring',
                 'Benevolence: dependability', 'Universalism: concern', 'Universalism: nature',
                 'Universalism: tolerance', 'Universalism: objectivity']
```