Dataset preview (KeyQuestions.csv): each question pairs a design context with a selection criterion.

| Question | Design | Criterion |
| --- | --- | --- |
| Q1 | Kitchen Utensil Grip | Lightweight |
| Q2 | Kitchen Utensil Grip | Resistant to Heat |
| Q3 | Kitchen Utensil Grip | Corrosion Resistant |
| Q4 | Kitchen Utensil Grip | High Strength |
| Q5 | Spacecraft Component | Lightweight |
| Q6 | Spacecraft Component | Resistant to Heat |
| Q7 | Spacecraft Component | Corrosion Resistant |
| Q8 | Spacecraft Component | High Strength |
| Q9 | Underwater Component | Lightweight |
| Q10 | Underwater Component | Resistant to Heat |
| Q11 | Underwater Component | Corrosion Resistant |
| Q12 | Underwater Component | High Strength |
| Q13 | Safety Helmet | Lightweight |
| Q14 | Safety Helmet | Resistant to Heat |
| Q15 | Safety Helmet | Corrosion Resistant |
| Q16 | Safety Helmet | High Strength |

MSEval Dataset

MSEval is a benchmark designed to facilitate the evaluation of foundation models, and the modification of their behavior through existing techniques, in the context of material selection for conceptual design.

The data was collected through a survey of experts in the field of material selection; the experts were asked the same questions listed in KeyQuestions.csv (shown in the table above).

The dataset can be used to evaluate a language model's performance, and the spread of its responses, against human expert evaluations.
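For instance, one could measure how concentrated the expert responses are for each question and compare a model's answers against that distribution. The snippet below is an illustrative sketch only: it assumes AllResponses.csv holds one row per expert with one column per question ID (Q1 through Q16), which may differ from the actual schema, so check the file before relying on these names.

```python
import pandas as pd

# Hypothetical schema: one row per expert, columns Q1..Q16 holding material
# choices. Verify the actual columns in AllResponses.csv before using this.
df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv")

for q in (f"Q{i}" for i in range(1, 17)):
    if q in df.columns:
        # Normalized value counts show how concentrated expert opinion is.
        print(q, df[q].value_counts(normalize=True).round(2).to_dict())
```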


Overview

We introduce MSEval, a benchmark derived from survey results of experts in the field of material selection.

MSEval consists of three files: AllResponses.csv, CleanResponses.csv, and KeyQuestions.csv. The dataset file tree is shown below:

    MSEval
    │
    ├── AllResponses.csv
    ├── CleanResponses.csv
    └── KeyQuestions.csv

Dataset Usage

An example of using the dataset with the datasets library is shown at https://github.com/cmudrc/MSEval
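As a minimal sketch (using the generic csv loader and assuming a datasets version recent enough to resolve hf:// paths; see the repository above for the canonical usage):

```python
from datasets import load_dataset

# Point the generic csv loader at one file in the Hub repository; swap the
# file name for CleanResponses.csv or KeyQuestions.csv as needed.
ds = load_dataset(
    "csv",
    data_files="hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv",
)
print(ds["train"][0])
```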

To use this dataset with pandas:

```python
import pandas as pd

# Reading an hf:// path requires the huggingface_hub package to be installed.
df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv")
```

Replace AllResponses with CleanResponses or KeyQuestions in the path as required.
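For example, a small convenience sketch (not part of the published usage) that loads all three files at once:

```python
import pandas as pd

BASE = "hf://datasets/cmudrc/Material_Selection_Eval/"

# Load every file in the dataset into a dict of DataFrames.
frames = {
    name: pd.read_csv(BASE + name + ".csv")
    for name in ("AllResponses", "CleanResponses", "KeyQuestions")
}
print({name: df.shape for name, df in frames.items()})
```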

License: MIT
