---
license: mit
task_categories:
  - text2text-generation
  - text-generation
  - text-retrieval
  - question-answering
language:
  - en
tags:
  - benchmark
  - llm-evaluation
  - large-language-models
  - large-language-model
  - large-multimodal-models
  - llm-training
  - foundation-models
  - machine-learning
  - deep-learning
configs:
  - config_name: all_responses
    data_files: AllResponses.csv
  - config_name: clean_responses
    data_files: CleanResponses.csv
  - config_name: additional_data
    data_files: KeyQuestions.csv
---

# MSEval Dataset

MSEval is a benchmark designed to facilitate the evaluation of foundation models, and of existing techniques for modifying their behavior, in the context of material selection for conceptual design.

The data was collected through a survey of experts in the field of material selection. The experts were asked the questions listed in KeyQuestions.csv.

The dataset can be used to evaluate a language model's performance, and the spread of its responses, against human expert responses.
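
As a minimal sketch of such a comparison (the column names `question_id` and `response` below are hypothetical placeholders and must be adapted to the actual schema of the CSV files):

```python
import pandas as pd

# Hypothetical sketch: summarize the spread of expert responses per question.
# NOTE: "question_id" and "response" are placeholder column names; replace
# them with the actual columns in CleanResponses.csv.
experts = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/CleanResponses.csv")

# For numeric ratings, describe() reports the mean and spread per question;
# for categorical answers, value_counts() would be more appropriate.
spread = experts.groupby("question_id")["response"].describe()
print(spread)
```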

For a more detailed explanation, see the paper: https://arxiv.org/abs/2407.09719v1


## Overview

We introduce MSEval, a benchmark derived from the results of a survey of experts in the field of material selection.

MSEval consists of three files: AllResponses.csv, CleanResponses.csv, and KeyQuestions.csv. The dataset file tree is shown below:

```
MSEval
│
├── AllResponses.csv
├── CleanResponses.csv
└── KeyQuestions.csv
```

## Dataset Usage

An example of using the dataset with the `datasets` library is shown at https://github.com/cmudrc/MSEval
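
A minimal sketch of loading the dataset with `datasets.load_dataset`, assuming the configuration names defined in the metadata above (`all_responses`, `clean_responses`, `additional_data`):

```python
from datasets import load_dataset

# Load one configuration; "all_responses" maps to AllResponses.csv.
# The other available configs are "clean_responses" (CleanResponses.csv)
# and "additional_data" (KeyQuestions.csv).
dataset = load_dataset("cmudrc/Material_Selection_Eval", "all_responses")

# CSV configs load into a single "train" split by default.
print(dataset["train"][0])
```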

To load the dataset with pandas:

```python
import pandas as pd

df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv")
```

Replace `AllResponses` with `CleanResponses` or `KeyQuestions` in the path to load the other files.

## Citation

If you find the dataset useful, please cite:

```bibtex
@misc{jain2024msevaldatasetmaterialselection,
      title={MSEval: A Dataset for Material Selection in Conceptual Design to Evaluate Algorithmic Models},
      author={Yash Patawari Jain and Daniele Grandi and Allin Groom and Brandon Cramer and Christopher McComb},
      year={2024},
      eprint={2407.09719},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2407.09719},
}
```
