patrickfleith committed
Commit
a2aa78a
1 Parent(s): 739402e

Update README.md

Files changed (1)
  1. README.md +99 -1
README.md CHANGED
@@ -11,4 +11,102 @@ tags:
  pretty_name: AstroMCQA
  size_categories:
  - n<1K
- ---
+ ---
+
+ # AstroMCQA - Overview
+
+ ## Purpose and scope
+
+ The primary purpose of AstroMCQA is to let application developers in the domain of space engineering comparatively assess LLM performance on the specific task of multiple-choice question answering.
+
+ ## Intended Usage
+
+ Comparative assessment of different LLMs; model evaluation, audit, and model selection; assessment of different quantization levels and prompting strategies; and assessment of the effectiveness of domain adaptation or domain-specific fine-tuning.
+
+ ## Quickstart
+
+ - Explore the dataset here: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/viewer/default/train
+ - Evaluate an LLM (Mistral-7B) on AstroMCQA in Colab here:
+ <a target="_blank" href="https://colab.research.google.com/github/patrickfleith/astro-llms-notebooks/blob/main/Evaluate_an_HuggingFace_LLM_on_a_Domain_Specific_Benchmark_Dataset.ipynb">
+   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+ </a>
+
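+ For a rough idea of what the notebook does, here is a minimal sketch of turning one dataset row into a multiple-choice prompt. The prompt template is an illustrative assumption, not necessarily the one used in the notebook; the `question` and `propositions` fields are described under Structure below.
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("patrickfleith/Astro-mcqa")
+
+ def build_prompt(row):
+     # Number each candidate proposition so the model can answer by index.
+     choices = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(row["propositions"]))
+     return (
+         "Answer the following space engineering question by listing the "
+         "numbers of ALL correct propositions.\n\n"
+         f"Question: {row['question']}\n{choices}\nAnswer:"
+     )
+
+ print(build_prompt(dataset["train"][0]))
+ ```
+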
+ ## What is AstroMCQA GOOD for?
+
+ The primary purpose of AstroMCQA is to let application developers in the domain of space mission design and operations address questions such as: which LLM should I use, and how does it perform in the different subdomains? It enables benchmarking of different models, model sizes, quantization methods, prompt-engineering strategies, and the effectiveness of fine-tuning on the specific task of multiple-choice question answering in space engineering.
+
+ ## What is AstroMCQA NOT GOOD for?
+
+ It is not suitable for training or fine-tuning LLMs due to the very limited size of the dataset, even though it could be combined with other task and science datasets for meta-learning.
+
+ # DATASET DESCRIPTION
+
+ ### Access
+
+ - Manual download from the Hugging Face Hub: https://huggingface.co/datasets/patrickfleith/Astro-mcqa
+ - Or with Python:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("patrickfleith/Astro-mcqa")
+ ```
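+
+ As a quick sanity check after loading (the `train` split name matches the dataset viewer linked above):
+
+ ```python
+ example = dataset["train"][0]
+ print(example["question"])       # the question string
+ print(example["propositions"])   # list of candidate answer propositions
+ print(example["labels"])         # 0/1 correctness flag per proposition
+ ```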
+
+ ### Structure
+ 200 expert-created multiple-choice questions and answers, one question per row in a comma-separated file. Each instance has the following fields (columns):
+ - **question**: a string.
+ - **propositions**: a list of strings. Each item in the list is one choice. At least one of the propositions correctly answers the question, but there can be multiple correct propositions; even all propositions can be correct.
+ - **labels**: a list of integers (0/1). Each element corresponds to the proposition at the same position in the propositions list. A label of 0 means the proposition is incorrect; a label of 1 means the proposition is a correct answer to the question (see the scoring sketch after this list).
+ - **justification**: an optional string which may provide a justification of the answer.
+ - **answerable**: a boolean indicating whether the question is answerable. At the moment, AstroMCQA only includes answerable questions.
+ - **uid**: a unique identifier for the MCQA instance. May be useful for traceability in further processing tasks.
+
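+ Since several propositions can be correct at once, a natural metric is an exact match over the full label vector. Here is a minimal sketch; `predicted_labels` stands in for a hypothetical model output, one 0/1 flag per proposition:
+
+ ```python
+ # Continuing from the load_dataset snippet above.
+ def exact_match(predicted_labels, row):
+     # True only if every proposition is flagged correctly (all 0/1 positions agree).
+     return predicted_labels == row["labels"]
+
+ row = dataset["train"][0]
+ predicted_labels = [0] * len(row["propositions"])  # hypothetical model output
+ print(exact_match(predicted_labels, row))
+ ```
+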
+ ### Metadata
+ The dataset is version-controlled and the commit history is available here: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/commits/main
+
+ ### Languages
+ All instances in the dataset are in English.
+
+ ### Size
+ 200 expert-created multiple-choice questions and answers.
+
+ ### Types of Questions
+
+ - Some questions test expected generic knowledge in the field of space science and engineering.
+ - Some questions require reasoning capabilities.
+ - Some questions require mathematical operations, since a numerical result is expected (exam-style questions).
+
+ ### Topics Covered
+ Different subdomains of space engineering are covered, including propulsion, operations, human spaceflight, space environment and effects, space project lifecycle, communication and link analysis, and more.
+
+ # USAGE AND GUIDELINES
+
+ #### Restrictions
+ No restrictions. Please provide correct attribution following the license terms.
+
+ #### License
+ AstroMCQA © 2024 by Patrick Fleith is licensed under Creative Commons Attribution 4.0 International.
+
+ #### Citation
+ P. Fleith, AstroMCQA – An astronautics multiple-choice question-and-answer benchmark dataset for LLM evaluation in the domain of Space Mission Engineering, 2024.
+
+ #### Update Frequency
+ May be updated based on feedback. If you want to become a contributor, let me know.
+
+ #### Have feedback or spotted an error?
+ Use the community discussion tab directly on the Hugging Face Astro-mcqa dataset page.
+
+ #### Contact Information
+ Reach me here on the community tab or on LinkedIn (Patrick Fleith) with a note.
+
+ #### Current Limitations and Future Work
+ - Only 200 multiple-choice questions and answers. This makes it unsuitable for fine-tuning, although it could be integrated into a larger pool of datasets compiled for fine-tuning.
+ - While this is a decent size for LLM evaluation, space engineering expert time is scarce and expensive: on average it takes 8 minutes to create one MCQA example. Having more examples would improve robustness.
+ - The dataset might be biased by the perspectives of its very small number of annotators.
+ - The dataset might be biased toward European space programs.
+ - The dataset might not cover all subsystems or subdomains of astronautics, although we did our best to cover the annotators' domains of expertise.
+ - No peer review. Ideally we would like a quality control process to ensure the quality and correctness of each example in the dataset. Given limited resources, this is not yet possible. Feel free to contribute if you feel this is an issue.