File size: 1,640 Bytes
04f495f
 
 
 
 
 
 
 
391e257
463f53e
 
04f495f
 
14b898e
04f495f
 
21d9ef0
 
04f495f
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
9568e9d
04f495f
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
import gradio as gr

def render_eval_info():
    text = r"""

The Iqra’Eval challenge provides a shared, transparent platform to benchmark phoneme‐prediction systems on our open testset (“IqraEval/open_testset”).

**Submission Details**  
– Submit a UTF‑8 CSV named **teamName_submission.csv** with exactly two columns:  
  1. **ID**: utterance identifier (e.g. “0000_0001”)  
  2. **Labels**: your predicted phoneme sequence (space‑separated)  

  ```csv
  ID,Labels
  0000_0001,i n n a m a a y a …
  0000_0002,m a a n a n s a …
  ```

**Evaluation Criteria**  
– Leaderboard ranking is based on phoneme‑level **F1‑score**, computed via a two‑stage (detection + diagnostic) hierarchy:

1. **Detection (error vs. correct)**  
   - **TR (True Rejects)**: mispronounced phonemes correctly flagged  
   - **FA (False Accepts)**: mispronunciations missed  
   - **FR (False Rejects)**: correct phonemes wrongly flagged  
   - **TA (True Accepts)**: correct phonemes correctly passed  

**Metrics:**  

- **Precision** = `TR / (TR + FR)`
- **Recall**    = `TR / (TR + FA)`
- **F1**        = `2 · Precision · Recall / (Precision + Recall)`

   2. **Diagnostic (substitution/deletion/insertion errors)**  
See the **Metrics** tab for breakdown into:  
- **DER**: Deletion Error Rate  
- **IER**: Insertion Error Rate  
- **SER**: Substitution Error Rate  

– Once we receive your file (email: **iqraeval-submissions@googlegroups.com**), your submission is auto‑evaluated and placed on the leaderboard.  

    

"""
    return gr.Markdown(text, latex_delimiters=[{ "left": "$", "right": "$", "display": True }])