---
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - en
tags:
  - science
  - space
  - astronautics
pretty_name: AstroMCQA
size_categories:
  - n<1K
---

# AstroMCQA Dataset

## Purpose and Scope

The primary purpose of AstroMCQA is to let application developers in the domain of space engineering comparatively assess LLM performance on the specific task of multiple-choice question answering.

## Intended Usage

Comparative assessment of different LLMs: model evaluation, audit, and model selection. Also suited to assessing different quantization levels, different prompting strategies, and the effectiveness of domain adaptation or domain-specific fine-tuning.

## Quickstart

### What is AstroMCQA good for?

The primary purpose of AstroMCQA is for application developers in the domain of space mission design and operations to answer questions such as: which LLM should I use, and how does it perform across the different subdomains? It enables benchmarking of different models, model sizes, quantization methods, prompt-engineering strategies, and fine-tuning effectiveness on the specific task of multiple-choice question answering in space engineering.
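A comparative assessment of this kind can be sketched as a small evaluation loop. This is a minimal sketch, not part of the dataset: the `ask_model` callable is a hypothetical stand-in for whatever LLM client you use, and the scoring assumes the field names described in the Structure section below.

```python
# Minimal sketch of a comparative evaluation loop.
# `ask_model(model_name, question, propositions)` is a hypothetical helper
# that returns one 0/1 prediction per proposition.

def evaluate(ask_model, model_name: str, examples: list[dict]) -> float:
    """Fraction of questions where the model selects exactly the correct set."""
    correct = 0
    for ex in examples:
        prediction = ask_model(model_name, ex["question"], ex["propositions"])
        correct += int(prediction == ex["labels"])
    return correct / len(examples)

# Dummy baseline for illustration: always picks only the first proposition.
def first_option_baseline(model_name, question, propositions):
    return [1] + [0] * (len(propositions) - 1)

examples = [
    {"question": "Q1", "propositions": ["a", "b"], "labels": [1, 0]},
    {"question": "Q2", "propositions": ["a", "b"], "labels": [0, 1]},
]
print(evaluate(first_option_baseline, "baseline", examples))  # 0.5
```

Swapping `first_option_baseline` for real model clients lets you compare accuracy across models, quantization levels, or prompting strategies on the same examples.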

### What is AstroMCQA NOT good for?

It is not suitable for training or fine-tuning LLMs due to the very limited size of the dataset, although it could be combined with other task and science datasets for meta-learning.

## Dataset Description

### Access

- Manual download from the Hugging Face Hub: https://huggingface.co/datasets/patrickfleith/Astro-mcqa
- Or with Python:

```python
from datasets import load_dataset

dataset = load_dataset("patrickfleith/Astro-mcqa")
```

### Structure

200 expert-created multiple-choice questions and answers, one question per row in a comma-separated file. Each instance is made of the following fields (columns):

- `question`: a string.
- `propositions`: a list of strings. Each item in the list is one choice. At least one of the propositions correctly answers the question, but there can be multiple correct propositions; all propositions may even be correct.
- `labels`: a list of integers (0/1). Each element corresponds to the proposition at the same position in the `propositions` list. A label of 0 means the proposition is incorrect; a label of 1 means the proposition is a correct choice to answer the question.
- `justification`: an optional string which may provide a justification of the answer.
- `answerable`: a boolean indicating whether the question is answerable. At the moment, AstroMCQA only includes answerable questions.
- `uid`: a unique identifier for the MCQA instance. May be useful for traceability in further processing tasks.
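Because several propositions can be correct at once, scoring is naturally multi-label rather than single-choice. The sketch below shows one strict way to score a prediction against the `labels` field; the instance is a made-up illustration, not taken from the dataset.

```python
# Sketch: scoring a model's answer against AstroMCQA's multi-label format.
# The instance below is hypothetical; only the field names (question,
# propositions, labels) follow the schema described above.

instance = {
    "question": "Which of the following are chemical rocket propellants?",
    "propositions": ["RP-1", "Xenon", "Liquid hydrogen", "Solar photons"],
    "labels": [1, 0, 1, 0],
}

def exact_match(predicted: list[int], labels: list[int]) -> bool:
    """Strict scoring: the prediction must flag exactly the correct set."""
    return predicted == labels

# A model that selects the first and third propositions:
prediction = [1, 0, 1, 0]
print(exact_match(prediction, instance["labels"]))  # True
```

Exact match is the strictest option; partial-credit metrics (e.g. per-proposition accuracy or F1 over the 0/1 vector) are also reasonable choices depending on your use case.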

### Metadata

The dataset is version controlled, and the commit history is available here: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/commits/main

### Languages

All instances in the dataset are in English.

### Size

200 expert-created multiple-choice questions and answers.

### Types of Questions

- Some questions test expected generic knowledge in the field of space science and engineering.
- Some questions require reasoning capabilities.
- Some questions require mathematical operations, since a numerical result is expected (exam-style questions).

### Topics Covered

Different subdomains of space engineering are covered, including propulsion, operations, human spaceflight, space environment and effects, space project lifecycle, communication and link analysis, and more.

## Usage and Guidelines

### License

AstroMCQA © 2024 by Patrick Fleith is licensed under Creative Commons Attribution 4.0 International.

### Restrictions

No restrictions. Please provide the correct attribution following the license terms.

### Citation

P. Fleith, *AstroMCQA: An Astronautics Multiple-Choice Question-Answering Benchmark Dataset for LLM Evaluation in Space Mission Engineering*, 2024.

### Update Frequency

May be updated based on feedback. If you want to become a contributor, let me know.

### Have feedback or spotted an error?

Use the community discussion tab directly on the Hugging Face Astro-mcqa dataset page.

### Contact Information

Reach me here on the community tab or on LinkedIn (Patrick Fleith) with a note.

## Current Limitations and Future Work

- Only 200 multiple-choice questions and answers. This makes the dataset unsuitable for fine-tuning on its own, although it could be integrated into a larger pool of datasets compiled for fine-tuning.
- While a decent size for LLM evaluation, space engineering expert time is scarce and expensive: on average it takes 8 minutes to create one MCQA example. Having more examples would improve robustness.
- The dataset might be biased due to the very small number of annotators.
- The dataset might be biased toward European space programs.
- The dataset might not cover all subsystems or subdomains of astronautics, although we did our best to cover the annotators' domains of expertise.
- No peer review. Ideally we would like a quality-control process to ensure the high quality and correctness of each example in the dataset. Given the limited resources, this is not yet possible. Feel free to contribute if you feel that is an issue.