---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - hi
license:
  - cc-by-4.0
multilinguality:
  - monolingual
size_categories:
  - 1<n<100
source_datasets:
  - original
task_categories:
  - text2text-generation
task_ids: []
pretty_name: Airavata HumanEval
language_bcp47:
  - hi-IN
dataset_info:
  - config_name: human-eval
    features:
      - name: id
        dtype: string
      - name: intent
        dtype: string
      - name: domain
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
    splits:
      - name: test
        num_bytes: 34114
        num_examples: 50
    download_size: 21873
    dataset_size: 34114
configs:
  - config_name: human-eval
    data_files:
      - split: test
        path: data/test-*
---

# Airavata HumanEval Prompts

This benchmark contains a set of prompts written by real users to evaluate LLMs on real-world tasks and to test them for different abilities. We collect prompts for the 5 abilities listed below:

- **Long**: Ability to generate long-form text, like writing essays, speeches, reports, etc.
- **Fact-Ops**: Ability to give factual opinions and explanations, like seeking recommendations, advice, opinions, explanations, etc.
- **Content**: Ability to make content accessible, like summarization, layman explanations, etc.
- **Lang-Creativity**: Ability to be creative in language, like finding anagrams, rhyming words, vocabulary enhancement, etc.
- **Culture**: Ability to answer questions related to Indian culture.

For each ability, we define a list of intents and domains, which are provided to the users along with detailed instructions about what kinds of prompts are expected.

We recommend that readers check out our [official blog post](https://ai4bharat.github.io/airavata) for more details.
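The metadata above lists a single `human-eval` configuration with a `test` split of 50 examples, each carrying the string fields `id`, `intent`, `domain`, `language`, and `prompt`. Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository ID `ai4bharat/human-eval` is an assumption, so substitute the actual Hub path of this dataset.

```python
from collections import Counter

from datasets import load_dataset

# Minimal sketch: load the "human-eval" config's test split.
# NOTE: the repository ID below is an assumption; replace it with
# the actual Hugging Face Hub path of this dataset.
ds = load_dataset("ai4bharat/human-eval", "human-eval", split="test")

print(ds)  # 50 examples with fields: id, intent, domain, language, prompt

# Inspect one Hindi prompt and how examples spread across domains and intents.
print(ds[0]["prompt"])
print(Counter(ds["domain"]))
print(Counter(ds["intent"]))
```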

## Citation

```bibtex
@misc{airavata2024,
  title = {Introducing Airavata: Hindi Instruction-tuned LLM},
  url = {https://ai4bharat.github.io/airavata},
  author = {Jay Gala and Thanmay Jayakumar and Jaavid Aktar Husain and Aswanth Kumar and Mohammed Safi Ur Rahman Khan and Diptesh Kanojia and Ratish Puduppully and Mitesh Khapra and Raj Dabre and Rudra Murthy and Anoop Kunchukuttan},
  month = {January},
  year = {2024}
}
```