---
title: "OpenHermes-Emojitron-001 Quantized in GGUF"
tags:
- GGUF
language: en
---
![Image description](https://i.postimg.cc/MGwhtFfF/tsune-fixed.png)
# Tsunemoto GGUFs of OpenHermes-Emojitron-001
This is a GGUF quantization of OpenHermes-Emojitron-001.
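If you want to run one of these files locally, here is a minimal sketch using llama-cpp-python; the filename and settings below are placeholders, so point `model_path` at whichever quantization you actually download from this repo.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The filename below is a placeholder -- use the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-emojitron-001.Q4_K_M.gguf",  # assumed filename
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 = CPU only
)

out = llm("in what country is london", max_tokens=16)
print(out["choices"][0]["text"])
```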
## Original Repo Link:
[Original Repository](https://huggingface.co/movaxbx/OpenHermes-Emojitron-001)
## Original Model Card:
---
# OpenHermes-Emojitron-001
![image/png](https://i.postimg.cc/YSKwmNVH/emojitron.jpg)
*๐งโ๐๐๐๐โก๏ธ๐ฃ๏ธ๐ ๐ฎ๐๏ธ๐๐ค๐๐งฉ๐ ๏ธ๐๐ก๐๐*
## Model description
OpenHermes-Emojitron-001 is a state-of-the-art Emoji Fine-tune, based on the OpenHermes 2.5 model.
It only speaks in emoji. Mostly. If using it for chat, start it off with a nice 👋 to get going.
Huge thank you to Teknium and Mistral for a wonderful model.
Follow all my updates in ML and AI on Twitter: https://twitter.com/mov_axbx
# Table of Contents
1. [Example Outputs](#example-outputs)
   - [Geography knowledge](#geography-knowledge)
   - [Chat about food](#chat-about-food)
   - [Talk about movies](#talk-about-movies)
2. [Benchmark Results](#benchmark-results)
3. [Prompt Format](#prompt-format)
4. [Quantized Models](#quantized-models)
## Example Outputs
### Geography knowledge:
in what country is london
```
🇬🇧
```
### Chat about food:
what country has the best food
```
🤔🌍🍽️
1️⃣🇮🇳
2️⃣🇯🇵
3️⃣🇺🇸
4️⃣🇪🇸
5️⃣🇫🇷
```
### Talk about movies:
tell me a time travel story about a delorean
```
๐ธ๐๐ฐ๏ธ๐จโ๐ฌ๐๐
1985๐ด๐๐ญ๐๐๐ฑ๐ก๐ผ๐ข๐๐
1955๐๐จโ๐ฉโ๐งโ๐ฆ๐๐๐๐ฐ๏ธ๐ฅ๐๐๐ฐ๏ธ๐๐
1985๐จโ๐ฌ๐
```
## Benchmark Results
There are no benchmarks for emoji models. Maybe someone can create one. EmojiBench 5K let's gooooooo
# Prompt Format
OpenHermes-Emojitron-001 uses ChatML as the prompt format, just like OpenHermes 2.5.
It also appears to handle the Mistral prompt format just fine, especially since that's what I used for the finetune (oops)
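For illustration, here is a rough sketch of building a ChatML prompt by hand with llama-cpp-python; the filename, system message, and sampling settings are assumptions, not part of the original card.

```python
# ChatML wraps each turn in <|im_start|>{role} ... <|im_end|> markers.
# Illustrative sketch; the filename and system message are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="openhermes-emojitron-001.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "<|im_start|>system\n"
    "You respond only in emoji.<|im_end|>\n"
    "<|im_start|>user\n"
    "what country has the best food<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on the ChatML end-of-turn marker so generation ends cleanly.
out = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```

llama-cpp-python can also apply the ChatML template for you if you pass `chat_format="chatml"` to `Llama` and use `create_chat_completion` instead of a raw prompt string.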
# Quantized Models:
Coming soon if TheBloke thinks this is worth his 🕰️