update model card

README.md
---
language:
- en
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
base_model: meta-llama/Llama-2-7b-hf
tags:
- climate
co2_eq_emissions:
  emissions: 265800
  training_type: "pre-training"
  geographical_location: "Washington, USA"
  hardware_used: "8x NVIDIA H100 HBM"
---

# ClimateGPT 7B FSG

<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
⚠️ This is a research experiment to explore training from scratch on climate-related data. If you are just interested in using the model, we recommend the Llama 2 based [ClimateGPT 7B](https://huggingface.co/eci-io/climategpt-7b).
</blockquote>

ClimateGPT is an ensemble of AI models designed to augment human decisions in the fast-moving field of climate change.
ClimateGPT 7B FSG (from scratch climate) is a 7 billion parameter transformer decoder model that was pre-trained on 319.5B tokens and then continually pre-trained on a collection of 4.2B tokens from curated climate documents.
The model is further instruction fine-tuned on a dataset of instruction-completion pairs manually collected by AppTek in cooperation with climate scientists.
[ClimateGPT 7B](https://huggingface.co/eci-io/climategpt-7b) outperforms Llama 2 70B Chat on our climate-specific benchmarks.
The model is designed to be used together with retrieval augmentation to extend its knowledge and increase its factuality, and with cascaded machine translation to increase its language coverage.

<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
A paper describing our approach will be released soon.
</blockquote>

## Model Details
- **Trained by:** [AppTek](https://apptek.com)
- **Powered by:** [Erasmus AI](https://erasmus.ai)
- **Verified by:** [EQTYLab](https://eqtylab.io)
- **Model type:** decoder-only Transformer
- **Language(s) (NLP):** English
- **License:** TO BE ADDED
- **Continued pre-trained from:** Llama 2 7B
- **Context length:** 4K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Paper:** The paper will be released soon.
- **Website:** [eci.io](https://eci.io)

## Uses
- This is an experimental model and it is only intended to be used to reproduce our results and for LLM research. For any other use case, we recommend using [ClimateGPT 7B](https://huggingface.co/eci-io/climategpt-7b), [13B](https://huggingface.co/eci-io/climategpt-13b) or [70B](https://huggingface.co/eci-io/climategpt-70b).
- **Despite the development team's efforts to eliminate them, as with every other chat-capable LLM, this model may generate biased, offensive or inaccurate responses.**

## Downstream Use

ClimateGPT 7B FSG is an instruction-tuned model that can be directly used for climate-specific question-answering applications.
It was trained to perform well with retrieval augmentation and supports up to 5 references in context.

The model was trained using ChatML, so the following format should be followed when prompting, including the `<|im_start|>`, `<|im_end|>` tags, the `system`, `user`, `context` and `assistant` identifiers, and the `[[0]]`, `[[1]]`, etc. tokens to indicate references.

"""
|
58 |
+
<|im_start|>system
|
59 |
+
{system_message}<|im_end|>
|
60 |
+
<|im_start|>user
|
61 |
+
{prompt}<|im_end|>
|
62 |
+
<|im_start|>context
|
63 |
+
[[0]] "{reference1_title}", {reference1_year}
|
64 |
+
{reference1_text}
|
65 |
+
[[1]] "{reference2_title}", {reference2_year}
|
66 |
+
{reference2_text}
|
67 |
+
[...]<|im_end|>
|
68 |
+
<|im_start|>assistant
|
69 |
+
"""
|
70 |
|
71 |
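To make the template concrete, the sketch below assembles such a prompt and generates a completion with the 🤗 Transformers library. It is only an illustration: the repository id, system message, and reference text are placeholders and are not taken from this card.

```python
# Illustrative only: build a ChatML-style prompt with one [[0]] reference and generate.
# The repo id below is a placeholder -- replace it with the actual model repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eci-io/climategpt-7b-fsg"  # hypothetical path, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_message = "You are ClimateGPT, an assistant for questions about climate change."
user_prompt = "What are the main drivers of global sea level rise?"
references = [
    ("Example Climate Report", 2023, "Placeholder excerpt from the retrieved document."),
]

# Context block: one '[[i]] "title", year' header line followed by the reference text.
context = "\n".join(
    f'[[{i}]] "{title}", {year}\n{text}'
    for i, (title, year, text) in enumerate(references)
)

prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    f"<|im_start|>context\n{context}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
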
## Training
- Details on the pre-training data are given in our paper.
- For continued pre-training, 4.2B climate-domain tokens (tokenized by the Llama tokenizer) are used.
- For instruction fine-tuning, about 272K instruction-completion pairs (both climate-domain and general-domain) are used.

## Evaluation

Detailed evaluation results are presented on our model card website: [eci.io/model-card](https://eci.io/model-card).

## Environmental Impact
- **Hardware Type:** 8x NVIDIA H100 HBM
- **Power Consumption per GPU:** 775W
- **Hours used:** 14,288 hrs
- **Cloud Provider:** MLFoundry
- **Compute Region:** Washington, USA
- **Energy Mix:** 100% Hydro Power (24g CO2eq/kWh according to IPCC 2014)
- **Carbon Emitted:** 265.8kg CO2eq

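As a rough consistency check (not part of the original card), the reported figure follows from the numbers above if the 14,288 hrs are total GPU-hours; the `emissions: 265800` value in the metadata header appears to be the same quantity expressed in grams.

```python
# Back-of-the-envelope check, assuming 14,288 hrs are total GPU-hours.
gpu_hours = 14_288          # Hours used
power_kw_per_gpu = 0.775    # 775 W per H100
grid_g_per_kwh = 24         # g CO2eq/kWh for hydro power (IPCC 2014)

energy_kwh = gpu_hours * power_kw_per_gpu            # about 11,073 kWh
emissions_kg = energy_kwh * grid_g_per_kwh / 1000    # about 265.8 kg CO2eq
print(f"{energy_kwh:,.1f} kWh -> {emissions_kg:.1f} kg CO2eq")
```
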
## Citation
**BibTeX:** The paper will be released soon.