---
title: Lingo Judge Metric
emoji: 🐨
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
---

## Metric Description

Lingo-Judge is an evaluation metric that aligns closely with human judgement on the LingoQA evaluation suite.

See the project's README at [LingoQA](https://github.com/wayveai/LingoQA) for more information.

## How to use

This metric takes three inputs: questions, predictions, and references.

```python
>>> import evaluate
>>> metric = evaluate.load("maysonma/lingo_judge_metric")
>>> questions = ["Are there any traffic lights present? If yes, what is their color?"]
>>> references = [["Yes, green."]]
>>> predictions = ["No."]
>>> results = metric.compute(questions=questions, predictions=predictions, references=references)
>>> print(results)
[-3.38348388671875]
```

### Inputs

- **questions** (`list` of `str`): Input questions.
- **predictions** (`list` of `str`): Model predictions.
- **references** (`list` of `list` of `str`): Reference answers; one or more per question, as in the batch example below.
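
The judge can score a whole batch in one call, with several acceptable reference answers for a single question. A minimal sketch (the questions and answers below are illustrative, not drawn from the LingoQA dataset):

```python
>>> import evaluate
>>> metric = evaluate.load("maysonma/lingo_judge_metric")
>>> questions = [
...     "Are there any traffic lights present? If yes, what is their color?",
...     "Is it safe to change into the left lane?",
... ]
>>> references = [
...     ["Yes, green.", "Yes, the traffic light is green."],
...     ["No, there is a vehicle approaching in the left lane."],
... ]
>>> predictions = ["Yes, the light is green.", "Yes."]
>>> results = metric.compute(questions=questions, predictions=predictions, references=references)
>>> len(results)  # one score per input question
2
```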

### Output Values

- **scores** (`list` of `float`): One score per prediction; higher values indicate closer agreement with the references. See the post-processing sketch below.
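
The raw value in the usage example (`-3.38348388671875` for an answer that contradicts the reference) suggests the score is an unnormalized logit. A minimal post-processing sketch, assuming a sigmoid maps each score to a correctness probability with 0.5 as the pass threshold (confirm both assumptions against the [LingoQA](https://github.com/wayveai/LingoQA) repository):

```python
import math

def pass_rate(scores, threshold=0.5):
    """Fraction of predictions judged correct.

    Assumes each score is a logit; the sigmoid turns it into an
    estimated probability that the prediction is truthful.
    """
    probs = [1 / (1 + math.exp(-s)) for s in scores]
    return sum(p > threshold for p in probs) / len(probs)

# Score from the usage example: "No." contradicts "Yes, green."
print(pass_rate([-3.38348388671875]))  # 0.0
```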

## Citation

```bibtex
@article{marcu2023lingoqa,
  title={LingoQA: Video Question Answering for Autonomous Driving}, 
  author={Ana-Maria Marcu and Long Chen and Jan Hünermann and Alice Karnsund and Benoit Hanotte and Prajwal Chidananda and Saurabh Nair and Vijay Badrinarayanan and Alex Kendall and Jamie Shotton and Oleg Sinavski},
  journal={arXiv preprint arXiv:2312.14115},
  year={2023},
}
```