gorkaartola committed on
Commit 297dc8d
1 Parent(s): c9a76ea

Update README.md

Files changed (1):
  1. README.md +23 -8
README.md CHANGED
@@ -5,7 +5,7 @@ datasets:
 tags:
 - evaluate
 - metric
-description: "TODO: add a description here"
+description: This metric measures the performance of sentence classification models on multiclass test datasets that contain both True Positive samples, in which the label associated with the sentence is correctly assigned, and False Positive samples, in which the label is incorrectly assigned.
 sdk: gradio
 sdk_version: 3.0.2
 app_file: app.py
@@ -14,19 +14,34 @@ pinned: false
 
 # Metric Card for metric_for_TP_FP_samples
 
-***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*
-
 ## Metric Description
-*Give a brief overview of this metric, including what task(s) it is usually used for, if any.*
+This metric measures the performance of sentence classification models on multiclass test datasets that contain both True Positive samples, in which the label associated with the sentence is correctly assigned, and False Positive samples, in which the label is incorrectly assigned.
 
 ## How to Use
-*Give general statement of how to use the metric*
-
-*Provide simplest possible example for using the metric*
+In addition to the conventional *predictions* and *references* inputs, this metric accepts a *kwarg* named *prediction_strategies* *(list(str))*, which selects the prediction strategies to apply from the family the metric can handle.
+
+The *prediction_strategies* implemented in this metric are the following (a code sketch of each appears after the list):
+- *argmax*, which selects as the prediction the class with the highest softmax inference logit.
+- *threshold*, which selects as predictions all classes whose softmax inference logits are above a given value.
+- *topk*, which selects as predictions the classes with the *k* highest softmax inference logits.
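
All three strategies reduce one row of per-class scores to a set of predicted class indices. A minimal sketch of that behavior, using a hypothetical `predict` helper that is not part of the module's code:

```python
import numpy as np

def predict(scores: np.ndarray, strategy: str, threshold: float = 0.5, k: int = 3) -> np.ndarray:
    """Hypothetical helper: reduce one row of softmax logits to predicted class indices."""
    if strategy == "argmax":
        return np.array([int(np.argmax(scores))])   # single highest-scoring class
    if strategy == "threshold":
        return np.flatnonzero(scores > threshold)   # every class above the cutoff
    if strategy == "topk":
        return np.argsort(scores)[::-1][:k]         # k highest-scoring classes
    raise ValueError(f"unknown strategy: {strategy}")

scores = np.array([0.05, 0.70, 0.15, 0.60], dtype=np.float32)
print(predict(scores, "argmax"))            # -> [1]
print(predict(scores, "threshold", 0.5))    # -> [1 3]
print(predict(scores, "topk", k=2))         # -> [1 3]
```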
 
+The minimum fields required by this metric in the test dataset are:
+- *title*, containing the sentence to be compared with the queries representing each class.
+- *label_ids*, containing the *id* of the class the sample refers to. Including samples of all classes is advised.
+- *nli_label*, which is '0' if the sample is a True Positive or '2' if the sample is a False Positive, i.e. the *label_ids* is incorrectly assigned to the *title*. Including both True Positive and False Positive samples for all classes is advised.
+
+Example (a snippet building such a dataset follows the table):
+
+|title                                                                               |label_ids|nli_label|
+|------------------------------------------------------------------------------------|:-------:|:-------:|
+|'Together we can save the arctic': celebrity advocacy and the Rio Earth Summit 2012 |    8    |    0    |
+|Tuple-based semantic and structural mapping for a sustainable interoperability      |   16    |    2    |
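
A toy test split with these three fields, reusing the rows from the table above, could be built with the `datasets` library:

```python
from datasets import Dataset

# Toy test split with the three required fields (rows taken from the table above).
test_dataset = Dataset.from_dict({
    "title": [
        "'Together we can save the arctic': celebrity advocacy and the Rio Earth Summit 2012",
        "Tuple-based semantic and structural mapping for a sustainable interoperability",
    ],
    "label_ids": [8, 16],
    "nli_label": [0, 2],  # 0 = True Positive sample, 2 = False Positive sample
})
```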
 
 ### Inputs
-*List all input arguments in the format below*
-- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*
+
+- *predictions* *(numpy.array(float32)[sentences to classify, number of classes])*: numpy array with the softmax logit values of the entailment dimension of the inference on each sentence to be classified, for each class.
+- *references* *(numpy.array(int32)[sentences to classify, 2])*: numpy array with the reference *label_ids* and *nli_label* of each sentence to be classified, as given in the test dataset.
+- *prediction_strategies* *(list(str))*: a *kwarg* listing the prediction strategies to apply; each entry must be included in the options list of the *prediction_strategy_selector* parameter in the [options.py](https://huggingface.co/spaces/gorkaartola/Zero_Shot_Classifier_by_SDGs/blob/main/options.py) file.
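
A minimal end-to-end call might look as follows; the `evaluate.load` path is assumed from this repository's id, and the arrays are toy values shaped per the input spec above, not taken from the module's documentation:

```python
import evaluate
import numpy as np

# Assumed module id for this repository; adjust if the metric is published under another path.
metric = evaluate.load("gorkaartola/metric_for_TP_FP_samples")

# Toy entailment softmax logits: 2 sentences scored against 4 classes.
predictions = np.array([
    [0.10, 0.70, 0.15, 0.05],
    [0.25, 0.05, 0.30, 0.40],
], dtype=np.float32)

# One row per sentence: [label_ids, nli_label].
references = np.array([
    [1, 0],   # True Positive sample of class 1
    [2, 2],   # False Positive sample of class 2
], dtype=np.int32)

results = metric.compute(
    predictions=predictions,
    references=references,
    prediction_strategies=["argmax"],
)
print(results)
```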
 
 ### Output Values