import evaluate
import gradio as gr
from evaluate.utils import launch_gradio_widget
with gr.Blocks() as demo:
    gr.Markdown(
        """
# Token Edit Distance
This is an NLP evaluation metric that records the minimum number of token edits (insertions, deletions, and substitutions, all weighted equally) needed to make the prediction sequence exactly match the reference sequence. It uses the same logic as Levenshtein edit distance, except applied to tokens (i.e. individual ints in a list) rather than to individual characters in a string.
## Args:
* predictions: ```List[List[int]]```, list of predictions to score.
    * Each prediction should be tokenized into a list of tokens.
* references: ```List[List[int]]```, list of references/ground-truth outputs to score against.
    * Each reference should be tokenized into a list of tokens.
## Returns:
* "avg_token_edit_distance": ```Float```, average Token Edit Distance for all inputted predictions and references
* "token_edit_distances": ```List[Int]```, the Token Edit Distance for each inputted prediction and reference
## Examples:
```
>>> token_edit_distance_metric = evaluate.load("SudharsanSundar/token_edit_distance")
>>> references = [[15, 4243], [100, 10008]]
>>> predictions = [[15, 4243], [100, 10009]]
>>> results = token_edit_distance_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'avg_token_edit_distance': 0.5, 'token_edit_distances': array([0., 1.])}
```
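## How it works:
Below is a minimal sketch of the underlying dynamic program (the classic Wagner-Fischer / Levenshtein recurrence applied to token lists). It is illustrative only, not necessarily the implementation packaged with this metric, and the ```token_edit_distance``` helper name is assumed:
```
def token_edit_distance(prediction, reference):
    # Minimum number of insertions, deletions, and substitutions (all cost 1)
    # needed to turn `prediction` into `reference`.
    m, n = len(prediction), len(reference)
    # dist[i][j] = edits needed to turn prediction[:i] into reference[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i  # delete every remaining prediction token
    for j in range(n + 1):
        dist[0][j] = j  # insert every remaining reference token
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub_cost = 0 if prediction[i - 1] == reference[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,             # deletion
                dist[i][j - 1] + 1,             # insertion
                dist[i - 1][j - 1] + sub_cost,  # substitution (or match)
            )
    return dist[m][n]

# token_edit_distance([15, 4243], [15, 4243])     -> 0
# token_edit_distance([100, 10009], [100, 10008]) -> 1
```
The per-pair distances are then averaged to produce "avg_token_edit_distance", which is how the example above yields 0.5 for distances of 0 and 1.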
""")
if __name__ == "__main__":
    demo.launch()
# JUNKYARD
# token_edit_distance_metric = evaluate.load("SudharsanSundar/token_edit_distance")
# launch_gradio_widget(token_edit_distance_metric)
#
# def evaluate_metric(table):
#     pred = [int(x) for x in table['Predictions']]
#     ref = [int(x) for x in table['References']]
#     return token_edit_distance_metric.compute(predictions=[pred], references=[ref])['avg_token_edit_distance']
#
#
# demo = gr.Interface(
#     fn=evaluate_metric,
#     inputs=[gr.Dataframe(row_count=(4, "dynamic"),
#                          col_count=(2, "fixed"),
#                          label="Input Data",
#                          interactive=True,
#                          headers=['Predictions', 'References'],
#                          datatype="number")],
#     outputs="number",
#     description="",
# )
# demo.launch()