updating image links to public ones
src/about.py CHANGED (+1 -1)
@@ -69,7 +69,7 @@ When training a Named Entity Recognition (NER) system, the most common evaluatio
 Example Sentence: "The patient was diagnosed with a skin cancer disease."
 For simplicity, let's assume the an example sentence which contains 10 tokens, with a single two-token disease entity (as shown in the figure below).
 """
-EVALUATION_EXAMPLE_IMG = """<img src="
+EVALUATION_EXAMPLE_IMG = """<img src="https://huggingface.co/spaces/m42-health/clinical_ner_leaderboard/resolve/main/assets/ner_evaluation_example.png" alt="Clinical X HF" width="750" height="500">"""
 LLM_BENCHMARKS_TEXT_2 = """
 Token-based evaluation involves obtaining the set of token labels (ground-truth annotations) for the annotated entities and the set of token predictions, comparing these sets, and computing a classification report. Hence, the results for the example above are shown below.
 **Token-based metrics:**