---
title: Exact Match
emoji: 🤗
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
tags:
- evaluate
- comparison
description: >-
Returns the rate at which the predictions of one model exactly match those of another model.
---
# Comparison Card for Exact Match
## Comparison description
Given two model predictions, the exact match score is 1 if they are identical and 0 otherwise. The overall exact match score is the average over all prediction pairs.
- **Example 1**: If prediction 1 is [0, 1] and prediction 2 is [0, 1], the exact match score is 1.0 (both elements match).
- **Example 2**: If prediction 1 is [0, 1] and prediction 2 is [1, 0], the exact match score is 0.0 (no elements match).
- **Example 3**: If prediction 1 is [0, 1] and prediction 2 is [1, 1], the exact match score is 0.5 (one of two elements matches).
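The averaging described above can be sketched in plain Python. This is a hypothetical helper for illustration, not the library's implementation:

```python
# Sketch of the exact match comparison: the score is the fraction of
# positions where the two prediction lists agree.
def exact_match_score(predictions1, predictions2):
    matches = [int(p1 == p2) for p1, p2 in zip(predictions1, predictions2)]
    return sum(matches) / len(matches)

print(exact_match_score([0, 1], [0, 1]))  # identical -> 1.0
print(exact_match_score([0, 1], [1, 0]))  # no matches -> 0.0
print(exact_match_score([0, 1], [1, 1]))  # one of two -> 0.5
```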
## How to use
At minimum, this metric takes as input predictions and references:
```python
>>> exact_match = evaluate.load("exact_match", module_type="comparison")
>>> results = exact_match.compute(predictions1=[0, 1, 1], predictions2=[1, 1, 1])
>>> print(results)
{'exact_match': 0.6666666666666666}
```
## Output values
Returns a float between 0.0 and 1.0 inclusive.
## Examples
```python
>>> exact_match = evaluate.load("exact_match", module_type="comparison")
>>> results = exact_match.compute(predictions1=[1, 1, 1], predictions2=[1, 1, 1])
>>> print(results)
{'exact_match': 1.0}
```
```python
>>> exact_match = evaluate.load("exact_match", module_type="comparison")
>>> results = exact_match.compute(predictions1=[0, 1, 1], predictions2=[1, 1, 1])
>>> print(results)
{'exact_match': 0.6666666666666666}
```
## Limitations and bias
This comparison only checks for exact equality between corresponding predictions: near-misses count as 0, and both prediction lists must have the same length and be aligned element-wise.
## Citations