MarioBarbeque committed on
Commit 37efe23 · verified · 1 Parent(s): a3dd5b5

reference Spaces for each evaluation module

Files changed (1): app.py +3 -1
app.py CHANGED
@@ -15,7 +15,9 @@ Check out the original, longstanding issue [here](https://github.com/huggingface
  `evaluate.combine()` multiple metrics related to multilabel text classification. Particularly, one cannot `combine` the `f1`, `precision`, and `recall` scores for \
  evaluation. I encountered this issue specifically while training [RoBERTa-base-DReiFT](https://huggingface.co/MarioBarbeque/RoBERTa-base-DReiFT) for multilabel \
  text classification of 805 labeled medical conditions based on drug reviews. The [following workaround](https://github.com/johngrahamreynolds/FixedMetricsForHF) was
- created to address this - follow the link to view the source! \n
+ created to address this - follow the link to view the source! To see each of these abstracted classes at work independently, view the 🤗 Space I've constructed for each:
+ [`FixedF1`](https://huggingface.co/spaces/MarioBarbeque/FixedF1), [`FixedPrecision`](https://huggingface.co/spaces/MarioBarbeque/FixedPrecision),
+ [`FixedRecall`](https://huggingface.co/spaces/MarioBarbeque/FixedRecall).\n
 
  This Space shows how one can instantiate these custom `evaluate.Metric`s, each with their own unique methodology for averaging across labels, before `combine`-ing them into a
  HF `evaluate.CombinedEvaluations` object. From here, we can easily compute each of the metrics simultaneously using `compute`. \n
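
For readers landing on this commit, a minimal sketch of the pattern the updated text describes — instantiating each custom metric with its own averaging strategy, then `combine`-ing them — might look like the following. The module path and the `average` constructor argument are assumptions based on the linked FixedMetricsForHF repo's description, not a verbatim excerpt of this Space's `app.py`; `evaluate.combine()` and `compute()` are the real 🤗 Evaluate APIs.

```python
# Minimal sketch, not the Space's actual app.py. Assumes the FixedF1,
# FixedPrecision, and FixedRecall classes from the linked FixedMetricsForHF
# repo take their averaging strategy at construction time (hypothetical
# import path and constructor signature).
import evaluate
from fixed_metrics import FixedF1, FixedPrecision, FixedRecall  # hypothetical module

# Each metric carries its own method of averaging across the 805 labels,
# which plain `evaluate.combine(["f1", "precision", "recall"])` cannot do.
f1 = FixedF1(average="weighted")
precision = FixedPrecision(average="micro")
recall = FixedRecall(average="macro")

# evaluate.combine() wraps the metrics in one CombinedEvaluations object...
clf_metrics = evaluate.combine([f1, precision, recall])

# ...so a single compute() call returns all three scores at once.
results = clf_metrics.compute(
    predictions=[0, 2, 1, 0],
    references=[0, 1, 2, 0],
)
print(results)  # e.g. {"f1": ..., "precision": ..., "recall": ...}
```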