chkla committed
Commit 351e339 • 1 Parent(s): 8c1849e

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -12,7 +12,7 @@ This model was trained on ~25k heterogeneous manually annotated sentences (📚
 
 🗃 **Dataset**
 
-The dataset (📚 Stab et al. 2018) consists of **ARGUMENTS** (\~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a **NON-ARGUMENT** (\~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include an obvious polarity to the possible outcomes and compile a final set of eight controversial topics: _abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage_.
+The dataset (📚 Stab et al. 2018) consists of **ARGUMENTS** (\~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a **NON-ARGUMENT** (\~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include "an obvious polarity to the possible outcomes" and compile a final set of eight controversial topics: _abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage_.
 
 | TOPIC | ARGUMENT | NON-ARGUMENT |
 |----|----|----|
@@ -27,7 +27,7 @@ The dataset (📚 Stab et al. 2018) consists of **ARGUMENTS** (\~11k) that eithe
 
 🏃🏼‍♂️**Model training**
 
-**RoBERTArg** was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 20% validation set.
+**RoBERTArg** was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters:
 
 ```
 training_args = TrainingArguments(
@@ -41,13 +41,13 @@ training_args = TrainingArguments(
 
 📊 **Evaluation**
 
-The model was evaluated using 20% of the sentences (80-20 train-test split).
+The model was evaluated on an evaluation set (20%):
 
 | Model | Acc | F1 | R arg | R non | P arg | P non |
 |----|----|----|----|----|----|----|
 | RoBERTArg | 0.8193 | 0.8021 | 0.8463 | 0.7986 | 0.7623 | 0.8719 |
 
-Showing the **confusion matrix** using the 20% of the sentences as an evaluation set:
+Showing the **confusion matrix** using again the evaluation set:
 
 | | ARGUMENT | NON-ARGUMENT |
 |----|----|----|
@@ -56,6 +56,6 @@ Showing the **confusion matrix** using the 20% of the sentences as an evaluation
 
 ⚠️ **Intended Uses & Potential Limitations**
 
-The model can be a starting point to dive into the exciting area of argument mining. But be aware. An argument is a complex structure, topic-dependent, and often differs between different text types. Therefore, the model may perform less well on different topics and text types, which are not included in the training set.
+The model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set.
 
 Enjoy and stay tuned! 🚀
 
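The TrainingArguments block in the diff is cut off at the hunk boundary, so the actual hyperparameters are not visible here. For orientation, below is a minimal fine-tuning sketch in the spirit of the "Model training" section, assuming the 🤗 Trainer API; the hyperparameter values and the tiny in-memory dataset are placeholders, not the settings used for RoBERTArg.

```
# Minimal sketch of fine-tuning RoBERTa (base) for ARGUMENT vs. NON-ARGUMENT
# classification with the Hugging Face Trainer. All hyperparameter values and
# the toy dataset below are placeholders, not the original RoBERTArg settings.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def tokenize(batch):
    # Sentence-level inputs, as in the Stab et al. (2018) corpus
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# Toy stand-in for the ~25k annotated sentences
# (1 = ARGUMENT, 0 = NON-ARGUMENT; the label mapping is an assumption)
train_dataset = Dataset.from_dict({
    "text": [
        "School uniforms should be banned because they restrict students' self-expression.",
        "The school introduced new uniforms last autumn.",
    ],
    "label": [1, 0],
}).map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="./robertarg",          # placeholder
    num_train_epochs=3,                # placeholder
    per_device_train_batch_size=16,    # placeholder
    learning_rate=2e-5,                # placeholder
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```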
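A sketch of how the figures in the evaluation and confusion-matrix tables can be computed with scikit-learn, assuming binary gold labels and predictions for the held-out 20% split; the arrays below are illustrative, and the F1 averaging mode is an assumption since the table does not state it.

```
# Sketch: accuracy, F1, per-class precision/recall and the confusion matrix,
# as reported in the evaluation tables. y_true / y_pred are illustrative
# stand-ins for the held-out 20% split (1 = ARGUMENT, 0 = NON-ARGUMENT).
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_recall_fscore_support,
)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

print("Acc:", accuracy_score(y_true, y_pred))
print("F1 (macro, assumed averaging):", f1_score(y_true, y_pred, average="macro"))

# Per-class precision / recall, ordered [NON-ARGUMENT, ARGUMENT]
precision, recall, _, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1])
print("P non, P arg:", precision)
print("R non, R arg:", recall)

# Rows = gold class, columns = predicted class, ordered [ARGUMENT, NON-ARGUMENT]
print(confusion_matrix(y_true, y_pred, labels=[1, 0]))
```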
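And a hedged usage sketch for the "Intended Uses & Potential Limitations" section: classifying individual sentences with the 🤗 pipeline. The repository id chkla/roberta-argument and the returned label names are assumptions and may differ from the actual model configuration.

```
# Sketch: sentence-level argument detection with the text-classification pipeline.
# The model id below is assumed; check the model card for the exact repository name.
from transformers import pipeline

classifier = pipeline("text-classification", model="chkla/roberta-argument")

sentences = [
    "Nuclear energy should be phased out because storing radioactive waste safely is still unsolved.",
    "The power plant is located about 40 kilometres outside the city.",
]

for sentence, result in zip(sentences, classifier(sentences)):
    # Each result is a dict with the predicted label and a confidence score
    print(f"{result['label']} ({result['score']:.3f}) - {sentence}")
```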