John Graham Reynolds committed
Commit 9230b05 · 1 Parent(s): f2422c4

updated app description

Files changed (1): app.py (+15 -5)
app.py CHANGED
@@ -4,20 +4,30 @@ from fixed_recall import FixedRecall
 
  import gradio as gr
 
- title = "🤗 Evaluate Fix!"
-
- description = """
- This space is designed to show off a workaround for a current bug in the 🤗 Evaluate library as we introduce ourselves to the entirety of the 🤗 ecosystem.
+ title = "'Combine' multiple metrics with this 🤗 Evaluate 🪲 Fix!"
+
+ description = """<p style='text-align: center'>
+ As I introduce myself to the entirety of the 🤗 ecosystem, I've put together this space to show off a workaround for a current 🪲 in the 🤗 Evaluate library. \n
+
+ \tCheck out the original, longstanding issue [here](https://github.com/huggingface/evaluate/issues/234). This details how it is currently impossible to \
+ 'evaluate.combine()' multiple metrics related to multilabel text classification. Particularly, one cannot 'combine()' the f1, precision, and recall scores for \
+ evaluation. I encountered this issue specifically while training [RoBERTa-base-DReiFT](https://huggingface.co/MarioBarbeque/RoBERTa-base-DReiFT) for multilabel \
+ text classification of 805 labeled medical conditions based on drug reviews for treatment received for the same underlying condition. Use the space below for \
+ a preview of the workaround! </p>
+
  """
 
- article = "Check out [original repo](https://huggingface.co/spaces/kingabzpro/Rick_and_Morty_Bot) housing this code, and a quickly trained [multilabel text \
- classification model](https://github.com/johngrahamreynolds/RoBERTa-base-DReiFT/tree/main) that makes use of it during evaluation."
+ article = "<p style='text-align: center'>Check out the [original repo](https://github.com/johngrahamreynolds/FixedMetricsForHF) housing this code, and a quickly \
+ trained [multilabel text classification model](https://github.com/johngrahamreynolds/RoBERTa-base-DReiFT/tree/main) that makes use of it during evaluation.</p>"
 
  def show_off(input):
      f1 = FixedF1()
      precision = FixedPrecision()
      recall = FixedRecall()
 
+
+
      return "Checking this out! Here's what you put in: " + f"""{input} """
 
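The issue the new description references is that `evaluate.combine()` forwards one shared set of kwargs to every bundled metric, so f1, precision, and recall (which each need their own `average=` setting for multilabel data) cannot be combined. As a rough illustration of the pattern the `FixedF1`/`FixedPrecision`/`FixedRecall` wrappers follow, here is a minimal, self-contained sketch — the `FixedMetric` class, the `combine()` helper, and the toy metric functions below are hypothetical stand-ins for illustration, not the Space's actual code or the real 🤗 Evaluate API:

```python
# Toy binary-classification metrics (pure Python, no external deps).
def precision(preds, refs, positive=1):
    tp = sum(1 for p, r in zip(preds, refs) if p == positive and r == positive)
    pp = sum(1 for p in preds if p == positive)  # predicted positives
    return tp / pp if pp else 0.0

def recall(preds, refs, positive=1):
    tp = sum(1 for p, r in zip(preds, refs) if p == positive and r == positive)
    ap = sum(1 for r in refs if r == positive)  # actual positives
    return tp / ap if ap else 0.0

def f1(preds, refs, positive=1):
    p, r = precision(preds, refs, positive), recall(preds, refs, positive)
    return 2 * p * r / (p + r) if (p + r) else 0.0

class FixedMetric:
    """Hypothetical stand-in for the Fixed* wrappers: each instance carries
    its OWN compute-time kwargs, so combining metrics never forces one
    shared kwarg set onto all of them (the clash behind the 🪲)."""
    def __init__(self, name, fn, **kwargs):
        self.name, self.fn, self.kwargs = name, fn, kwargs

    def compute(self, *, predictions, references):
        return {self.name: self.fn(predictions, references, **self.kwargs)}

def combine(metrics):
    """Evaluate all wrapped metrics on one batch; merge their results."""
    def compute(*, predictions, references):
        out = {}
        for m in metrics:
            out.update(m.compute(predictions=predictions, references=references))
        return out
    return compute

preds = [1, 0, 1, 1]
refs = [1, 0, 0, 1]
combined = combine([FixedMetric("f1", f1),
                    FixedMetric("precision", precision),
                    FixedMetric("recall", recall)])
print(combined(predictions=preds, references=refs))  # f1 ≈ 0.8, recall = 1.0
```

Because each wrapper stores its kwargs at construction, per-metric settings (e.g. a different `positive` label, or `average=` in the real multilabel case) can differ freely within one combined evaluation.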