boryana committed on
Commit 78f4396
1 Parent(s): 72a2692

Update README.md

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -22,13 +22,13 @@ tags:
 
  ## Model Description
 
- This model consists of a fine-tuned version of google-bert/bert-base-cased for a propaganda detection task. It is effectively a binary classifier, determining wether propaganda is present in the output string.
- This model was created by [`Identrics`](https://identrics.ai/), in the scope of the WASPer project. The detailed taxonomy could be found [here](https://github.com/Identrics/wasper/).
+ This model consists of a fine-tuned version of google-bert/bert-base-cased for a propaganda detection task. It is effectively a binary classifier, determining whether propaganda is present in the output string.
+ This model was created by [`Identrics`](https://identrics.ai/), in the scope of the WASPer project. The detailed taxonomy of the full pipeline can be found [here](https://github.com/Identrics/wasper/).
 
 
  ## Uses
 
- To be used as a binary classifier to identify if propaganda is present in a string containing a comment from a social media site
+ Designed as a binary classifier to determine whether a traditional or social media comment contains propaganda.
 
  ### Example
 
@@ -49,17 +49,16 @@ output = model(**tokens)
  print(output.logits)
  ```
 
-
  ## Training Details
 
- The training datasets for the model consist of a balanced set totaling 840 English examples that include both propaganda and non-propaganda content. These examples are collected from a variety of traditional media and social media sources, ensuring a diverse range of content. Aditionally, the training dataset is enriched with AI-generated samples. The total distribution of the training data is shown in the table below:
-
+ The training dataset for the model consists of a balanced collection of English examples, including both propaganda and non-propaganda content. These examples were sourced from a variety of traditional media and social media platforms and manually annotated by domain experts. Additionally, the dataset is enriched with AI-generated samples.
 
+ The model achieved an F1 score of **0.807** during evaluation.
 
- The model was then tested on a smaller evaluation dataset, achieving an f1 score of 0.807.
+ ## Compute Infrastructure
+ The model was fine-tuned using **2x NVIDIA Tesla V100 32GB GPUs**.
 
-
- ## Citation
+ ## Citation [this section is to be updated soon]
 
  If you find our work useful, please consider citing WASPer:
 
@@ -72,3 +71,4 @@ If you find our work useful, please consider citing WASPer:
  }
  ```
 
+
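
The diff shows only the tail of the README's Example snippet (`output = model(**tokens)` and `print(output.logits)`). For orientation, a minimal self-contained sketch of that flow follows; the checkpoint identifier and the label mapping (0 = non-propaganda, 1 = propaganda) are placeholder assumptions, not taken from this commit or the model card.

```python
# Illustrative sketch only: the model ID and label order are assumptions,
# not confirmed by this commit or the model card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "identrics/wasper_propaganda_detection_en"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

comment = "An example social media comment to screen for propaganda."
tokens = tokenizer(comment, return_tensors="pt", truncation=True)

with torch.no_grad():
    output = model(**tokens)

# The logits hold one raw score per class; softmax + argmax give the prediction.
probabilities = torch.softmax(output.logits, dim=-1)
predicted_class = int(probabilities.argmax(dim=-1))
print(predicted_class, probabilities.squeeze().tolist())
```

The variable names `tokens`, `model`, and `output` mirror the ones visible in the diff context above.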
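The reported figure is the standard binary F1 (the harmonic mean of precision and recall). A sketch of how such a score is typically computed over held-out labels, using scikit-learn and dummy values rather than the WASPer evaluation data, is shown below.

```python
# Dummy labels for illustration only; not the WASPer evaluation set.
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # gold labels (1 = propaganda)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]  # model predictions

# F1 = 2 * precision * recall / (precision + recall), computed on the positive class
print(f1_score(y_true, y_pred))
```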