Model Card for neovalle/H4rmoniousPampero
Model Details
Model Description
This model is a version of HuggingFaceH4/zephyr-7b-alpha fine-tuned with the H4rmony dataset, which aims to better align the model with ecological values through the use of ecolinguistics principles.
- Developed by: Jorge Vallego
- Funded by: Neovalle Ltd.
- Shared by: airesearch@neovalle.co.uk
- Model type: mistral
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: HuggingFaceH4/zephyr-7b-alpha
Uses
Intended as a proof of concept (PoC) to demonstrate the effect of the H4rmony dataset.
Direct Use
For testing purposes, to gain insights that support the continuous improvement of the H4rmony dataset.
Downstream Use
Use in downstream applications is not recommended, as this model is under testing for a specific task only.
Out-of-Scope Use
Not meant to be used for anything other than testing and evaluation of the H4rmony dataset.
Bias, Risks, and Limitations
This model may reproduce biases already present in the base model, as well as biases unintentionally introduced during fine-tuning.
How to Get Started with the Model
The model can be loaded and run in a free Colab instance. A notebook that loads both the base and the fine-tuned model to compare their outputs is available here:
https://github.com/Neovalle/H4rmony/blob/main/H4rmoniousPampero.ipynb
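As a quick start, the fine-tuned model can also be loaded directly with the transformers library. The snippet below is a minimal sketch, not the linked notebook: it assumes a GPU runtime (such as free Colab) and uses an illustrative prompt.

```python
# Minimal sketch: load neovalle/H4rmoniousPampero and generate a completion.
# Assumes a GPU runtime; the prompt is only an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neovalle/H4rmoniousPampero"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit a free Colab GPU
    device_map="auto",
)

prompt = "What is the best way to dispose of old electronics?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same code with `model_id = "HuggingFaceH4/zephyr-7b-alpha"` loads the base model, which allows a side-by-side comparison of completions as done in the notebook above.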
Training Details
Reward model fine-tuned with AutoTrain.
Training Data
H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony
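For inspection, the dataset can be loaded directly from the Hub with the datasets library. This is a minimal sketch; the available splits and column names are not assumed beyond what the call itself reports.

```python
from datasets import load_dataset

# Load the H4rmony dataset from the Hugging Face Hub for inspection.
ds = load_dataset("neovalle/H4rmony")
print(ds)             # shows the available splits and their columns
first_split = next(iter(ds))
print(ds[first_split][0])   # first example of the first split
```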