Update README.md
README.md
---
tags:
- autotrain
- text-generation
widget:
- text: 'Tell me about bees.'
library_name: transformers
pipeline_tag: text-generation
---

# Model Card for neovalle/H4rmoniousPampero

## Model Details

### Model Description

This model is a version of HuggingFaceH4/zephyr-7b-alpha fine-tuned with the H4rmony dataset, which aims to better align the model with ecological values through the use of ecolinguistic principles.

- **Developed by:** Jorge Vallego
- **Funded by:** Neovalle Ltd.
- **Shared by:** airesearch@neovalle.co.uk
- **Model type:** mistral
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-alpha

## Uses

Intended as a proof of concept (PoC) to show the effect of the H4rmony dataset.

### Direct Use

For testing purposes, to gain insights that help with the continuous improvement of the H4rmony dataset.

### Downstream Use

Direct use in applications is not recommended, as this model is still being tested for a specific task only.

### Out-of-Scope Use

Not meant to be used for anything other than testing and evaluation of the H4rmony dataset.

## Bias, Risks, and Limitations

This model might produce biased completions, reflecting biases already present in the base model or unintentionally introduced during fine-tuning.

## How to Get Started with the Model

It can be loaded and run in a free Colab instance.

A notebook that loads the base and fine-tuned models to compare their outputs:

https://github.com/Neovalle/H4rmony/blob/main/H4rmoniousPampero.ipynb
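The linked notebook covers the full comparison; the idea can be sketched minimally as below. This is only a sketch, not the notebook's code: it assumes `torch` and `transformers` are installed and that enough GPU memory is available (e.g. fp16 on a Colab GPU instance). The model ids are the base model and the fine-tuned model from this card.

```python
# Sketch only: assumes torch + transformers and a GPU with enough memory (fp16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and the fine-tuned model from this card.
MODEL_IDS = ["HuggingFaceH4/zephyr-7b-alpha", "neovalle/H4rmoniousPampero"]

def compare_outputs(prompt: str, max_new_tokens: int = 200) -> dict:
    """Generate a completion for `prompt` with each model, for side-by-side review."""
    completions = {}
    for model_id in MODEL_IDS:
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
        completions[model_id] = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return completions

# Example: compare_outputs("Tell me about bees.")
```

Running both generations for the same prompt makes it easy to inspect how the H4rmony fine-tuning shifts the completions relative to the base model.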

## Training Details

Trained with AutoTrain using a reward model.

### Training Data

H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony