---
license: apache-2.0
datasets:
- davanstrien/haiku_dpo
language:
- en
tags:
- dpo
- poetry
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
---
# Model Card for HaikuHermes-0.1-7B

This is a very early model trained with Direct Preference Optimization (DPO) on the [davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo) dataset, starting from [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).
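
As a rough illustration only, a preference-tuning run of this shape could be set up with TRL's `DPOTrainer` along the lines sketched below. The hyperparameters, dataset column handling, and TRL version assumed here are not the actual training recipe for this model.

```python
# Hedged sketch of a DPO fine-tune with a recent version of TRL; hyperparameters
# and dataset column handling are assumptions, not the exact recipe used here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

dataset = load_dataset("davanstrien/haiku_dpo", split="train")
# DPOTrainer expects "prompt"/"chosen"/"rejected" columns; rename the prompt
# column if the dataset names it differently (assumed "question" here).
if "question" in dataset.column_names:
    dataset = dataset.rename_column("question", "prompt")

training_args = DPOConfig(
    output_dir="HaikuHermes-0.1-7B",
    beta=0.1,                        # assumed DPO temperature
    per_device_train_batch_size=2,   # assumed; depends on available GPU memory
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```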

The eventual goal is for this model to write "technically correct" haiku, i.e. haiku that follow the traditional 5-7-5 syllable structure.
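
A minimal inference sketch is given below. It assumes the model is published under `davanstrien/HaikuHermes-0.1-7B` and inherits OpenHermes-2.5's ChatML prompt format via the tokenizer's chat template; adjust the repo id and prompt handling to match the actual release.

```python
# Minimal inference sketch (assumed repo id; not an official usage example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davanstrien/HaikuHermes-0.1-7B"  # assumption: adjust to the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# OpenHermes-2.5 uses ChatML, so the tokenizer's chat template should
# produce the expected prompt layout.
messages = [{"role": "user", "content": "Write a haiku about autumn rain."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```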