Update README.md
README.md
# Model Card for identrics/BG_propaganda_detector

## Model Description

- **Developed by:** Identrics
- **Language:** Bulgarian
- **License:** apache-2.0
- **Finetuned from model:** INSAIT-Institute/BgGPT-7B-Instruct-v0.2
- **Context window:** 8192 tokens

This model is a fine-tuned version of BgGPT-7B-Instruct-v0.2 for propaganda detection. It is effectively a binary classifier, determining whether propaganda is present in a given string.

This model was created by [`Identrics`](https://identrics.ai/), within the scope of the Wasper project.

## Uses

To be used as a binary classifier to identify if propaganda is present in a string.

### Example

First, install the direct dependencies:
```
pip install transformers torch accelerate
```

Then the model can be downloaded and used for inference:
```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained("identrics/BG_propaganda_detector", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("identrics/BG_propaganda_detector")

# Tokenize the input text and run it through the classifier
tokens = tokenizer("Our country is the most powerful country in the world!", return_tensors="pt")
output = model(**tokens)
print(output.logits)
```
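The model returns raw logits for its two classes. A minimal sketch of turning the `output` from the snippet above into a probability and a predicted label (assuming the usual convention that index 1 corresponds to the propaganda class; verify this against `model.config.id2label` for this checkpoint):

```py
import torch

# `output` comes from the inference snippet above.
# NOTE: the index -> label mapping is an assumption;
# check model.config.id2label before relying on it.
probs = torch.softmax(output.logits, dim=-1)
predicted_class = int(torch.argmax(probs, dim=-1))
print(predicted_class, probs[0, predicted_class].item())
```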
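For texts that might exceed the 8192-token context window, the input can be truncated at tokenization time. A minimal sketch, assuming the standard Hugging Face `truncation`/`max_length` tokenizer arguments (`long_text` is a placeholder for the document to classify; very long documents could instead be split into chunks and classified separately):

```py
# Truncate overly long inputs to the model's 8192-token context window.
tokens = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=8192)
output = model(**tokens)
print(output.logits)
```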
## Training Details
|