g8a9 committed on
Commit c4d9ff2
1 Parent(s): 732c933

text edits

Files changed (1)
  1. single.py +10 -4
single.py CHANGED
@@ -97,11 +97,17 @@ def body():
  """
  **Legend**

- - **AOPC Comprehensiveness** (aopc_compr) measures *comprehensiveness*, i.e., whether the explanation captures all the tokens needed to make the prediction. It is computed as the drop in the model probability when the relevant tokens of the explanation are removed. The higher the comprehensiveness, the more faithful the explanation.
- - **AOPC Sufficiency** (aopc_suff) measures *sufficiency*, i.e., whether the relevant tokens in the explanation are sufficient to make the prediction. It is computed as the drop in the model probability when only the relevant tokens of the explanation are considered. The lower the sufficiency, the more faithful the explanation, since the model prediction changes less.
- - **Leave-One-Out TAU Correlation** (taucorr_loo) measures the Kendall rank correlation coefficient τ between the explanation and leave-one-out importances. The latter are computed by omitting individual input tokens and measuring the variation in the model prediction. The closer the τ correlation is to 1, the more faithful the explanation.
+ - **AOPC Comprehensiveness** (aopc_compr) measures *comprehensiveness*, i.e., whether the explanation captures all the tokens needed to make the prediction. Higher is better.
+ - **AOPC Sufficiency** (aopc_suff) measures *sufficiency*, i.e., whether the relevant tokens in the explanation are sufficient to make the prediction. Lower is better.
+
+ - **Leave-One-Out TAU Correlation** (taucorr_loo) measures the Kendall rank correlation coefficient τ between the explanation and leave-one-out importances. Closer to 1 is better.

  See the paper for details.
  """
  )
+
+ # It is computed as the drop in the model probability when the relevant tokens of the explanation are removed. The higher the comprehensiveness, the more faithful the explanation.
+
+ # It is computed as the drop in the model probability when only the relevant tokens of the explanation are considered. The lower the sufficiency, the more faithful the explanation, since the model prediction changes less.
+
+ # The latter are computed by omitting individual input tokens and measuring the variation in the model prediction. The closer the τ correlation is to 1, the more faithful the explanation.
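
To make the two AOPC metrics concrete, here is a minimal sketch of the perturb-and-re-score pattern they both follow. This is not the code from this repo: the `predict_proba` callable (returning the model probability of the originally predicted class), the `aopc_scores` helper, and the top-k thresholds are hypothetical stand-ins.

```python
from typing import Callable, List, Sequence, Tuple


def aopc_scores(
    tokens: List[str],
    importances: Sequence[float],
    predict_proba: Callable[[List[str]], float],
    thresholds: Sequence[int] = (1, 3, 5),  # assumed top-k bins to average over
) -> Tuple[float, float]:
    """Return (comprehensiveness, sufficiency) averaged over top-k thresholds."""
    full_prob = predict_proba(tokens)
    # Token indices ranked by explanation importance, most relevant first.
    ranked = sorted(range(len(tokens)), key=lambda i: importances[i], reverse=True)
    compr, suff = [], []
    for k in thresholds:
        top_k = set(ranked[:k])
        # Comprehensiveness: probability drop when the top-k tokens are removed.
        without_top = [t for i, t in enumerate(tokens) if i not in top_k]
        compr.append(full_prob - predict_proba(without_top))
        # Sufficiency: probability drop when only the top-k tokens are kept.
        only_top = [t for i, t in enumerate(tokens) if i in top_k]
        suff.append(full_prob - predict_proba(only_top))
    return sum(compr) / len(compr), sum(suff) / len(suff)
```

Both metrics re-score the same perturbed inputs from opposite directions: comprehensiveness deletes the tokens the explanation marked relevant, sufficiency keeps only them.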
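The tau correlation can be sketched the same way, using `scipy.stats.kendalltau` for the rank correlation. Again, `taucorr_loo` and the `predict_proba` callable are assumptions for illustration, not the app's actual implementation.

```python
from scipy.stats import kendalltau


def taucorr_loo(tokens, importances, predict_proba):
    """Kendall tau between explanation importances and leave-one-out importances."""
    full_prob = predict_proba(tokens)
    # Leave-one-out importance of token i: probability change when it is omitted.
    loo = [
        full_prob - predict_proba(tokens[:i] + tokens[i + 1:])
        for i in range(len(tokens))
    ]
    tau, _ = kendalltau(importances, loo)
    return tau
```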