---
library_name: transformers
license: mit
language:
- en
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---

# Model Card for Llama-Prometheus

Llama-Prometheus is an English evaluation model introduced as part of the CIA Suite for assessing multilingual Large Language Models (LLMs).

Llama-Prometheus is fine-tuned on the Feedback-Collection dataset using the same setup as [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0), but with [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as the base model. All FFT models and LoRA weights that are part of the CIA Suite are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).

# Model Details

## Model Description

- **Model type:** Evaluator language model
- **Language(s) (NLP):** English
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/abs/2410.13394)
  - [GitHub Repo](https://github.com/AI4Bharat/CIA)

## Prompt Format

We have developed wrapper functions and classes to make it easy to work with the CIA Suite models. Check them out in our [GitHub repository](https://github.com/AI4Bharat/CIA); we highly recommend using them!

If you only need the model for a specific use case, follow the prompt format provided below.

### Reference Guided Direct Assessment

The model expects four input components: an evaluation instruction, a response to evaluate, a scoring rubric, and a reference answer. Use the prompt format provided below, filling in the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.

After running inference, the output will contain feedback and a score, separated by the phrase `[RESULT]`.

```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{instruction}

###Response to evaluate:
{response}

###Reference Answer (Score 5):
{reference_answer}

###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}

###Feedback:
```

We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
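
For a quick start without the official wrappers, the sketch below shows one way to fill in this template and parse the result with `transformers`. Treat it as a minimal illustration rather than the maintained tooling: the Hub id, the chat templating, the generation settings, and the `evaluate` helper are all assumptions on our part, so prefer the utilities in the [GitHub repository](https://github.com/AI4Bharat/CIA) for real use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Hub id; replace with the actual repository name.
MODEL_ID = "ai4bharat/Llama-Prometheus"

# The template from the section above, kept verbatim so the example is self-contained.
PROMPT_TEMPLATE = """###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{instruction}

###Response to evaluate:
{response}

###Reference Answer (Score 5):
{reference_answer}

###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}

###Feedback:"""

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def evaluate(instruction, response, reference_answer, criteria, rubrics):
    """Score one response; `rubrics` is a list of five score descriptions (scores 1-5)."""
    prompt = PROMPT_TEMPLATE.format(
        instruction=instruction,
        response=response,
        reference_answer=reference_answer,
        criteria=criteria,
        **{f"score{i}_rubric": rubrics[i - 1] for i in range(1, 6)},
    )
    # Wrapping the filled template as a single user turn in the Llama-3.1 chat
    # format is an assumption; the official wrappers handle templating for you.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
    completion = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    # The model emits "<feedback> [RESULT] <score>"; split on the marker.
    feedback, _, score = completion.partition("[RESULT]")
    return feedback.strip(), score.strip()
```

`evaluate(...)` returns the feedback text and the score as strings; validate the score before casting it to `int`, since generation is not guaranteed to follow the format.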

## Links for Reference

- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: sumanthd@cse.iitm.ac.in, safikhan@ai4bharat.org

# Citation

If you find this model helpful, please consider citing our paper!

**BibTeX:**

```bibtex
@article{doddapaneni2024crosslingual,
  title   = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
  author  = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
  year    = {2024},
  journal = {arXiv preprint arXiv:2410.13394}
}
```