license: cc-by-4.0
Mistral-Astronomy-7b-v0.2
Mistral-Astronomy-7b-v0.2, developed by Phanerozoic, is an astronomy-specialized fine-tune of OpenHermes 2.5-Mistral-7B. It was trained on an expanded question-and-answer dataset derived from "Astronomy 2e" by OpenStax, making it particularly adept at addressing detailed, current questions in astronomy.
Model Description
- Developed by: Phanerozoic
- Base Model: OpenHermes 2.5-Mistral-7B
- Specialization: Astronomy
- Version: v0.2
- License for Training Data: Creative Commons Attribution 4.0 International (CC BY 4.0)
License Details
The training material, derived from "Astronomy 2e" by OpenStax, adheres to the CC BY 4.0 license. This allows for a wide range of uses, encouraging the dissemination and adaptation of knowledge, provided appropriate attribution is given.
Training Data Enhancements
The training data for v0.2 was curated by reducing the chunk size used in the Q&A converter. Smaller chunks roughly doubled the dataset to around 2,500 question-and-answer pairs, giving broader and more fine-grained coverage of astronomical phenomena.
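A minimal, hypothetical sketch of what this chunking step looks like is shown below. The actual Q&A converter used for this model is not reproduced here; the chunk sizes and helper function are illustrative assumptions only.

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split source text into overlapping character chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Halving chunk_size roughly doubles the number of chunks, and hence the number
# of Q&A pairs produced downstream by the Q&A converter.
sample_text = "..." * 5000  # stand-in for the full "Astronomy 2e" text
print(len(chunk_text(sample_text, chunk_size=2000)))  # coarser chunks (v0.1-style)
print(len(chunk_text(sample_text, chunk_size=1000)))  # finer chunks (v0.2-style)
```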
Insights from Comparative Testing
In comparative testing against both its predecessor (v0.1) and the base model, Mistral-Astronomy-7b-v0.2 demonstrates proficiency in delivering detailed and nuanced explanations of advanced astronomical topics, and it is particularly strong at addressing current issues in astronomy with depth and clarity. However, a notable limitation in processing complex mathematical equations has been observed, suggesting an area for targeted improvement in future updates.
Performance Metrics
- Perplexity on Wikitext (lower is better):
  - Base Model: 5.10
  - v0.1: 5.37
  - v0.2: 5.22
- Observations: The v0.2 model shows a reduced perplexity compared to v0.1, indicating an improvement in linguistic fluency and understanding. However, it still lags slightly behind the base model, particularly in handling extended mathematical expressions.
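For reference, Wikitext perplexity figures like those above are typically computed with a windowed evaluation over the test split. The sketch below is one plausible way to reproduce such numbers with the Hugging Face libraries; the repository ID, window size, and per-window averaging are assumptions, not the exact evaluation script used for this card.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phanerozoic/Mistral-Astronomy-7b-v0.2"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Concatenate the Wikitext-2 test split and score it in fixed-size windows.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

window, losses = 1024, []
for start in range(0, ids.size(1) - 1, window):
    chunk = ids[:, start:start + window + 1]
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        out = model(chunk, labels=chunk)  # labels are shifted internally
    losses.append(out.loss)

# Averaging per-window losses is a rough approximation of token-weighted perplexity.
print("perplexity:", torch.exp(torch.stack(losses).mean()).item())
```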
Intended Use
Currently, Mistral-Astronomy-7b-v0.2 serves as an excellent tool for demonstrating the progression of AI in understanding and communicating complex astronomical concepts. While it excels in providing detailed and informative responses, the current limitation in mathematical computation makes it less suitable for applications requiring rigorous quantitative analysis.
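For basic qualitative use, the model can be loaded like any other Mistral-7B fine-tune via transformers. The repository ID and the ChatML-style prompt below are assumptions based on the OpenHermes 2.5 base model, not verified specifics of this release.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phanerozoic/Mistral-Astronomy-7b-v0.2"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# OpenHermes 2.5 uses a ChatML-style prompt; the fine-tune presumably inherits it.
prompt = (
    "<|im_start|>user\n"
    "Why do low-mass stars end their lives as white dwarfs?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```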
Future Development
Future versions will focus on enhancing the model's ability to handle mathematical computations, striving for a balance between qualitative and quantitative accuracy. The aim is to develop a robust model that not only excels in descriptive astronomy but also in tackling the quantitative challenges inherent in the field.
Out-of-Scope Use
Mistral-Astronomy-7b-v0.2 is specialized in astronomy and may not perform optimally in general language tasks or in domains outside of astronomy.
Bias, Risks, and Limitations
- Bias and Risks: As the training data is heavily centered on astronomy, the model may exhibit biases towards astronomical interpretations and contexts.
- Limitations: The current iteration shows limitations relative to the base model in processing extended mathematical computations and is not suitable for tasks requiring high precision in mathematical modeling.
Custom Stopping Strings Usage
To optimize the model's output, custom stopping strings have been employed (one way to apply them at generation time is sketched after this list). These include:
- "},"
- "User:"
- "You:"
- ""\n"
- "\nUser"
- "\nUser:"
Training Hyperparameters
- Training Regime: FP32
- Warmup Steps: 10
- Per Device Train Batch Size: 8
- Gradient Accumulation Steps: 16
- Max Steps: 1000
- Learning Rate: 0.0001
- Logging Steps: 1
- Save Steps: 1
- LoRA Alpha: 64
- LoRA Rank (Dimension Count): 32
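These settings map naturally onto a PEFT LoRA configuration and Hugging Face TrainingArguments. The sketch below is one plausible reconstruction under that assumption, not the exact training script; in particular, the target modules and output directory are not specified in this card.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base_id)  # FP32 by default

# LoRA settings from the list above; target_modules is an assumption.
lora_config = LoraConfig(
    r=32,            # "Dimension Count" (LoRA rank)
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Trainer settings mirroring the listed hyperparameters (FP32, so no fp16/bf16 flags).
training_args = TrainingArguments(
    output_dir="mistral-astronomy-7b-v0.2",
    warmup_steps=10,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,
    max_steps=1000,
    learning_rate=1e-4,
    logging_steps=1,
    save_steps=1,
)
```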
Compute Infrastructure
Training was conducted on a single RTX 6000 Ada GPU and took approximately 40 minutes, which was sufficient to process the expanded dataset efficiently.
Acknowledgments and Attribution
Special appreciation is extended to OpenStax for "Astronomy 2e," which forms the core of the training material. The model's development also builds on the foundational work of the Mistral and OpenHermes 2.5 teams. This work is based on "Astronomy 2e" by OpenStax, licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0), with modifications for language-modeling purposes. No endorsement by OpenStax or the original authors is implied.