---
license: mit
task_categories:
- text2text-generation
- text-generation
- text-retrieval
- question-answering
language:
- en
tags:
- benchmark
- llm-evaluation
- large-language-models
- large-language-model
- large-multimodal-models
- llm-training
- foundation-models
- machine-learning
- deep-learning
configs:
- config_name: all_responses
  data_files: "AllResponses.csv"

- config_name: clean_responses
  data_files: "CleanResponses.csv"
  
- config_name: additional_data
  data_files: "KeyQuestions.csv"
---

# MSEval Dataset

MSEval is a benchmark designed to facilitate the evaluation of foundation models, and the modification of their behavior through existing techniques, in the context of material selection for conceptual design.

The data was collected through a survey of experts in the field of material selection; each expert answered the questions listed in `KeyQuestions.csv`.

The dataset can be used to evaluate a language model's performance, and the spread of its responses, against the responses of human experts.

For a more detailed explanation, see the paper: https://arxiv.org/abs/2407.09719v1
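
As a rough illustration of that comparison, the sketch below measures how far a model's answer falls from the spread of expert answers for each question. The column names used here (`question_id`, `response`) are hypothetical placeholders; consult the CSV headers of `CleanResponses.csv` for the actual schema.

```python
import pandas as pd

# Expert survey responses, streamed from the Hugging Face Hub.
experts = pd.read_csv(
    "hf://datasets/cmudrc/Material_Selection_Eval/CleanResponses.csv"
)

# Per-question spread (mean and standard deviation) of expert answers.
# NOTE: "question_id" and "response" are hypothetical column names.
spread = experts.groupby("question_id")["response"].agg(["mean", "std"])

# model_answers maps question id -> the model's numeric answer;
# populate it from your own model runs.
model_answers: dict = {}

for qid, answer in model_answers.items():
    mu, sigma = spread.loc[qid, "mean"], spread.loc[qid, "std"]
    print(f"{qid}: {(answer - mu) / sigma:+.2f} SDs from the expert mean")
```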

---

# Overview
We introduce MSEval, a benchmark derived from survey results of experts in the field of material selection.

MSEval consists of three files: `AllResponses.csv`, `CleanResponses.csv`, and `KeyQuestions.csv`. The dataset file tree is shown below:

```
MSEval
├── AllResponses.csv
├── CleanResponses.csv
└── KeyQuestions.csv
```

# Dataset Usage
An example of using the dataset with the `datasets` library is shown at https://github.com/cmudrc/MSEval.
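
A minimal sketch of that approach, assuming the config names declared in the YAML header above (`all_responses`, `clean_responses`, `additional_data`):

```python
from datasets import load_dataset

# Each config corresponds to one CSV file; CSV-backed datasets
# typically expose a single "train" split.
responses = load_dataset("cmudrc/Material_Selection_Eval", "all_responses")
print(responses["train"][0])  # inspect the first record
```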

To load this dataset with pandas:
```python
import pandas as pd

# Stream the CSV directly from the Hugging Face Hub.
df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv")
```

Replace `AllResponses` with `CleanResponses` or `KeyQuestions` in the path to load the other files.
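
To keep all three files at hand, they can be loaded into a dictionary of DataFrames:

```python
import pandas as pd

BASE = "hf://datasets/cmudrc/Material_Selection_Eval"

# One DataFrame per CSV file in the dataset.
dfs = {
    name: pd.read_csv(f"{BASE}/{name}.csv")
    for name in ["AllResponses", "CleanResponses", "KeyQuestions"]
}
```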

# Citation

If you find the dataset useful, please cite:

```bibtex
@misc{jain2024msevaldatasetmaterialselection,
      title={MSEval: A Dataset for Material Selection in Conceptual Design to Evaluate Algorithmic Models}, 
      author={Yash Patawari Jain and Daniele Grandi and Allin Groom and Brandon Cramer and Christopher McComb},
      year={2024},
      eprint={2407.09719},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2407.09719}, 
}
```
