---
base_model: universeTBD/astrollama-7b-alpha
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: astrollama-7b-chat-alpha
  results: []
---


# AstroLLaMA Chat

**Check out the model in our Hugging Face space!**

[universeTBD/astrollama-7b-chat-alpha](https://huggingface.co/spaces/universeTBD/astrollama-7b-chat-alpha)

<p align="center">
  <img src="https://huggingface.co/universeTBD/astrollama-7b-chat-alpha/resolve/main/images/astrollama-chat-logo.png" alt="AstroLLaMA Chat" width="500px"/>
</p>

## Model description

This model is a fine-tuned version of [universeTBD/astrollama-7b-alpha](https://huggingface.co/universeTBD/astrollama-7b-alpha) on a mix of open-source conversational datasets, as specified in our research note.

The underlying model, [AstroLLaMA 7B Alpha](https://huggingface.co/universeTBD/astrollama-7b-alpha), is based on [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) and was trained on 300k arXiv astro-ph abstracts, introductions, and conclusions. For more information, please refer to the research note.

## Intended uses & limitations

With the release of this alpha model, we hope to inspire more astronomers to explore the fine-tuning of smaller models.

Please be advised that this alpha version of our model may not be perfectly robust. At times, it may generate strange or confusing outputs. Because it is an alpha release, it has not been aligned with safety guidelines and is therefore potentially susceptible to misuse.

## Model usage

To achieve the desired behavior and performance from the chat model, it is essential to adhere to a specific prompt format: `###Human: {content}` followed by `###Assistant: {content}`.
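
For example, with the Hugging Face `transformers` library the prompt can be assembled and passed to the model as in the minimal sketch below. The example question and the generation settings (sampling temperature, token budget, precision) are illustrative assumptions, not recommendations from the research note.

```python
# Minimal sketch of chatting with the model using the required prompt format.
# Assumptions: standard transformers API; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "universeTBD/astrollama-7b-chat-alpha"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # assumption: half precision to fit a single GPU
    device_map="auto",
)

# Build the prompt in the required format: "###Human: {content}" followed by
# "###Assistant:" so the model completes the assistant turn.
question = "What is the significance of the Lyman-alpha forest in cosmology?"
prompt = f"###Human: {question}\n###Assistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
    top_p=0.95,
)

# Strip the prompt tokens and print only the newly generated assistant reply.
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```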