boryana committed
Commit 09e2d66
1 Parent(s): a7c4792

Update README.md (#1)


- Update README.md (7269b5af023860f1d56d55f319d1975e9315d415)

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -24,7 +24,7 @@ tags:
 ## Model Description
 
 This model consists of a fine-tuned version of BgGPT-7B-Instruct-v0.2 for a propaganda detection task. It is effectively a binary classifier, determining wether propaganda is present in the output string.
-This model was created by [`Identrics`](https://identrics.ai/), in the scope of the WASPer project.The detailed taxonomy of the full pipeline could be found [here](https://github.com/Identrics/wasper/).
+This model was created by [`Identrics`](https://identrics.ai/), in the scope of the WASPer project. The detailed taxonomy of the full pipeline could be found [here](https://github.com/Identrics/wasper/).
 
 
 ## Uses
@@ -42,8 +42,8 @@ Then the model can be downloaded and used for inference:
 ```py
 from transformers import AutoModelForSequenceClassification, AutoTokenizer
 
-model = AutoModelForSequenceClassification.from_pretrained("identrics/EN_propaganda_detector", num_labels=2)
-tokenizer = AutoTokenizer.from_pretrained("identrics/BG_propaganda_detector")
+model = AutoModelForSequenceClassification.from_pretrained("identrics/wasper_propaganda_detection_bg", num_labels=2)
+tokenizer = AutoTokenizer.from_pretrained("identrics/wasper_propaganda_detection_bg")
 
 tokens = tokenizer("Газа евтин, американското ядрено гориво евтино, пълно с фотоволтаици а пък тока с 30% нагоре. Защо ?", return_tensors="pt")
 output = model(**tokens)
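The README snippet in this diff stops at `output = model(**tokens)`. For a binary sequence classifier with `num_labels=2`, `output.logits` holds two raw scores per input, which are typically converted to class probabilities with softmax and reduced to a prediction with argmax. A minimal dependency-free sketch with stand-in logits (the values below, and which index corresponds to "propaganda", are illustrative assumptions; the README does not document the label mapping):

```python
import math

# Stand-in for output.logits[0] from the README snippet; the two values
# correspond to the classifier's num_labels=2 classes. Which index means
# "propaganda" is an assumption not stated in the README.
logits = [-1.2, 2.3]

# Softmax turns raw logits into probabilities that sum to 1.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Argmax over the probabilities gives the predicted class index.
predicted_class = max(range(len(probs)), key=probs.__getitem__)
print(predicted_class)  # → 1 (the higher-scoring class for these logits)
```

With the real model, the same reduction is usually written as `output.logits.argmax(dim=-1).item()` on the returned tensor.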