nbansal committed on
Commit
680431d
1 Parent(s): 251bfda
Files changed (1)
  1. README.md +14 -11
README.md CHANGED
@@ -27,6 +27,7 @@ computes precision, recall and F1 scores.
 ## How to Use

 Sem-F1 takes 2 mandatory arguments:
+
 - `predictions` - List of predictions. Format varies based on `tokenize_sentences` and `multi_references` flags.
 - `references`: List of references. Format varies based on `tokenize_sentences` and `multi_references` flags.

@@ -49,19 +50,20 @@ for score in results:


 Sem-F1 also accepts multiple optional arguments:
-- `model_type (str)`: Model to use for encoding sentences. Options: ['pv1', 'stsb', 'use']
-  - `pv1` - [paraphrase-distilroberta-base-v1](https://huggingface.co/sentence-transformers/paraphrase-distilroberta-base-v1)
-  - `stsb` - [stsb-roberta-large](https://huggingface.co/sentence-transformers/stsb-roberta-large)
-  - `use` - [Universal Sentence Encoder](https://huggingface.co/sentence-transformers/use-cmlm-multilingual) (Default)

-Furthermore, you can use any model on Huggingface/SentenceTransformer that is supported by SentenceTransformer
-such as `all-mpnet-base-v2` or `roberta-base`
+- `model_type (str)`: Model to use for encoding sentences. Options: ['pv1', 'stsb', 'use']
+  - `pv1` - [paraphrase-distilroberta-base-v1](https://huggingface.co/sentence-transformers/paraphrase-distilroberta-base-v1)
+  - `stsb` - [stsb-roberta-large](https://huggingface.co/sentence-transformers/stsb-roberta-large)
+  - `use` - [Universal Sentence Encoder](https://huggingface.co/sentence-transformers/use-cmlm-multilingual) (Default)
+
+Furthermore, you can use any model on Huggingface/SentenceTransformer that is supported by SentenceTransformer
+such as `all-mpnet-base-v2` or `roberta-base`

-- `tokenize_sentences (bool)`: Flag to indicate whether to tokenize the sentences in the input documents. Default: True.
-- `multi_references (bool)`: Flag to indicate whether multiple references are provided. Default: False.
-- `gpu (Union[bool, str, int, List[Union[str, int]]])`: Whether to use GPU, CPU or multiple-processes for computation.
-- `batch_size (int)`: Batch size for encoding. Default: 32.
-- `verbose (bool)`: Flag to indicate verbose output. Default: False.
+- `tokenize_sentences (bool)`: Flag to indicate whether to tokenize the sentences in the input documents. Default: True.
+- `multi_references (bool)`: Flag to indicate whether multiple references are provided. Default: False.
+- `gpu (Union[bool, str, int, List[Union[str, int]]])`: Whether to use GPU, CPU or multiple-processes for computation.
+- `batch_size (int)`: Batch size for encoding. Default: 32.
+- `verbose (bool)`: Flag to indicate verbose output. Default: False.

 Refer to the inputs descriptions for more detailed usage as follows:

@@ -78,6 +80,7 @@ print(metric.inputs_description)

 ### Output Values
 List of `Scores` dataclass corresponding to each sample -
+
 - `precision: float`: Precision score, which ranges from 0.0 to 1.0.
 - `recall: List[float]`: Recall score corresponding to each reference
 - `f1: float`: F1 score (between precision and average recall).
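
The README text in this diff documents the metric's full argument surface; the sketch below shows how those pieces could fit together at call time. It is illustrative only: the Hub id passed to `evaluate.load` and the example texts are assumed placeholders, while the keyword arguments and the `Scores` fields (`precision`, `recall`, `f1`) are taken from the README above.

```python
import evaluate

# Assumed Hub id for this metric; substitute the actual path if it differs.
metric = evaluate.load("nbansal/semf1")

# With tokenize_sentences=True (the default) each prediction/reference is a full
# document string; with multi_references=False each prediction has one reference.
predictions = [
    "The cat sat on the mat. It looked content.",
    "The report was finished early. The team celebrated afterwards.",
]
references = [
    "A cat was resting on a mat and seemed happy.",
    "The team completed the report ahead of schedule and then celebrated.",
]

results = metric.compute(
    predictions=predictions,
    references=references,
    model_type="use",        # default encoder; 'pv1' and 'stsb' are the other shorthands
    tokenize_sentences=True,
    multi_references=False,
    # gpu=False,             # per the README, also accepts a device index or a list of devices
    batch_size=32,
    verbose=False,
)

# Each element is a `Scores` dataclass: precision (float), recall (a list with one
# value per reference), and f1 (float, computed between precision and average recall).
for score in results:
    print(f"precision={score.precision:.3f} recall={score.recall} f1={score.f1:.3f}")
```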