---
tags:
- moe
- merge
license: apache-2.0
---
# What is it? A 2x7B MoE model for roleplay.
You may occasionally get GPT-like responses; just skip them and reroll (gacha time). Overall, I think it is good enough for roleplaying.
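If you want to try it quickly, here is a minimal sketch using `transformers`. The repo id `Alsebay/RainyMotip-2x7B` is inferred from the GGUF repo name linked below, and the prompt and sampling settings are only illustrative:

```python
# Minimal sketch: load the merged MoE and generate a short roleplay reply.
# The repo id is inferred from the GGUF repo name below, not confirmed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/RainyMotip-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "You are a tavern keeper in a fantasy town. A traveler walks in.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a reply drifts into generic assistant-style text, regenerate with a different seed or temperature, as noted above.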
# You may want to see this: https://huggingface.co/Alsebay/My_LLMs_Leaderboard
# This model is a Mixture of Experts (MoE) made with the following models (a sketch of a typical merge config follows the list):
- udkai/Turdus
- Kquant03/Samlagast-7B-laser-bf16
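The exact merge recipe is not published here; the script below is only a hypothetical sketch of how a 2x7B MoE like this is usually assembled with `mergekit-moe`, emitting the YAML config from Python. The base model choice, gate mode, and positive prompts are placeholders, not the settings actually used:

```python
# Hypothetical mergekit-moe config for a 2x7B MoE merge.
# None of these values are confirmed by the author; they only show
# the usual shape of such a config.
import yaml

config = {
    "base_model": "udkai/Turdus",   # placeholder choice of base
    "gate_mode": "hidden",          # route tokens via hidden-state similarity to the prompts
    "dtype": "bfloat16",
    "experts": [
        {
            "source_model": "udkai/Turdus",
            "positive_prompts": ["chat", "assistant", "conversation"],
        },
        {
            "source_model": "Kquant03/Samlagast-7B-laser-bf16",
            "positive_prompts": ["roleplay", "story", "character"],
        },
    ],
}

with open("rainymotip-moe.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
# Typical invocation afterwards: mergekit-moe rainymotip-moe.yaml ./RainyMotip-2x7B
```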
# If you use it, please let me know whether it is good or not. Thank you :)
# GGUF version here: 
https://huggingface.co/Alsebay/RainyMotip-2x7B-GGUF
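If you prefer the GGUF build, here is a small `llama-cpp-python` sketch. The quant filename below is a guess; check the repo's file list for the actual names:

```python
# Sketch: run the GGUF build with llama-cpp-python.
# The filename is an assumption; pick a real file from the GGUF repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Alsebay/RainyMotip-2x7B-GGUF",
    filename="RainyMotip-2x7B.Q4_K_M.gguf",  # hypothetical quant name
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("You are a wandering bard. Describe the town square.", max_tokens=200)
print(out["choices"][0]["text"])
```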
# Want more Quantization?
[mradermacher](https://huggingface.co/mradermacher) made a full set of them. Check it out :)
https://huggingface.co/mradermacher/RainyMotip-2x7B-GGUF