dannashao committed
Commit
82c8940
1 Parent(s): a8bdc2f

Update README.md

Files changed (1)
  1. README.md +8 -7
README.md CHANGED
@@ -13,12 +13,10 @@ model-index:
 results: []
 ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
 
 # bert-base-uncased-finetuned-srl_arg
 
- This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
+ This model is a baseline fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the English Universal Proposition Bank dataset for the Semantic Role Labeling (SRL) task.
 It achieves the following results on the evaluation set:
 - Loss: 0.1094
 - Precision: 0.8207
@@ -28,15 +26,18 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
- More information needed
+ The approach used for the baseline model is to convert each sentence into the following form:
+ > [CLS] This is the sentence content [SEP] is [SEP].
 
- ## Intended uses & limitations
+ This is realized simply by using the auto tokenizer's sentence-pair logic: `tokenizer(list1, list2)` returns [CLS] list1 content [SEP] list2 content [SEP].
 
- More information needed
+ ## Usages
+
+ The model labels semantic roles for given input sentences. See usage examples at https://github.com/dannashao/bertsrl/blob/main/Evaluation.ipynb
 
 ## Training and evaluation data
 
- More information needed
+ The English Universal Proposition Bank v1.0 data. See details at https://github.com/UniversalPropositions/UP-1.0
 
 ## Training procedure
 
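The pair-encoding behaviour the updated model description relies on can be sanity-checked with the base tokenizer. A minimal sketch (the sentence and predicate strings are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Encoding the sentence and the predicate as a pair yields
# [CLS] sentence tokens [SEP] predicate token(s) [SEP].
enc = tokenizer("This is the sentence content", "is")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'this', 'is', 'the', 'sentence', 'content', '[SEP]', 'is', '[SEP]']
```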
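Per the new Usages section, inference is token classification over the sentence/predicate pair. A minimal sketch, assuming the checkpoint is published under the hub id `dannashao/bert-base-uncased-finetuned-srl_arg` (an assumption; adjust to the actual location) and that the argument labels are stored in the model config:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Assumed hub id; point this at wherever the checkpoint actually lives.
name = "dannashao/bert-base-uncased-finetuned-srl_arg"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

# Sentence as segment A, predicate as segment B, mirroring the training format.
enc = tokenizer("This is the sentence content", "is", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits

# Print each subword with its highest-scoring label.
for token, label_id in zip(
    tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
    logits.argmax(dim=-1)[0].tolist(),
):
    print(token, model.config.id2label[label_id])
```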