---
tags:
- autotrain
- text-generation
widget:
- text: 'Tell me about bees.'
library_name: transformers
pipeline_tag: text-generation
---

# Model Card for neovalle/H4rmoniousPampero



## Model Details

### Model Description

This model is a version of HuggingFaceH4/zephyr-7b-alpha fine-tuned with the H4rmony dataset, which aims 
to better align the model with ecological values through the use of ecolinguistic principles.

- **Developed by:** Jorge Vallego
- **Funded by:** Neovalle Ltd.
- **Shared by:** airesearch@neovalle.co.uk
- **Model type:** Mistral
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-alpha


## Uses

Intended as a proof of concept (PoC) to demonstrate the effect of the H4rmony dataset.

### Direct Use

For testing purposes, to gain insights that support the continuous improvement of the H4rmony dataset.

### Downstream Use

Direct use in applications is not recommended, as this model is still under testing for a specific task only.

### Out-of-Scope Use

Not meant for any use other than testing and evaluation of the H4rmony dataset.

## Bias, Risks, and Limitations

This model might produce biased completions, whether inherited from the base model or unintentionally introduced during fine-tuning.

## How to Get Started with the Model

The model can be loaded and run in a free Colab instance.

A notebook that loads both the base and fine-tuned models to compare their outputs is available at:

https://github.com/Neovalle/H4rmony/blob/main/H4rmoniousPampero.ipynb
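As a quick-start alternative to the notebook, the model can also be loaded with the `transformers` `pipeline` API. The sketch below is illustrative, not taken from the linked notebook: the helper names are hypothetical, and the single-turn prompt format assumes the Zephyr chat style of the base model (using `tokenizer.apply_chat_template` is the more robust option). Loading a 7B model requires a GPU; quantization may be needed on a free Colab T4.

```python
def build_generator(model_id: str = "neovalle/H4rmoniousPampero"):
    """Create a text-generation pipeline for the fine-tuned model.

    Deferred import so this module can be inspected without transformers
    installed; the actual load downloads ~14 GB of weights.
    """
    from transformers import pipeline
    return pipeline("text-generation", model=model_id, device_map="auto")


def format_zephyr_prompt(user_msg: str) -> str:
    """Format a single-turn prompt in the Zephyr chat style (assumed here
    from the base model; prefer tokenizer.apply_chat_template in practice)."""
    return f"<|user|>\n{user_msg}</s>\n<|assistant|>\n"


# Example call (commented out to avoid triggering the large model download):
# gen = build_generator()
# prompt = format_zephyr_prompt("Tell me about bees.")
# print(gen(prompt, max_new_tokens=128, do_sample=True)[0]["generated_text"])
```

The widget prompt from this card's metadata ("Tell me about bees.") is a reasonable first test input when comparing base and fine-tuned outputs.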

## Training Details

Fine-tuned with AutoTrain (reward model).

### Training Data

H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony