---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
- uncensored
- roleplay
- fine-tune
base_model: MTSAIR/multi_verse_model
library_name: transformers
datasets:
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- ResplendentAI/Luna_Alpaca
- unalignment/toxic-dpo-v0.2
- kira/math-dpo
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
---
|
# 💫 Pulsar_7B |
|
Pulsar_7B is a fine-tune of [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model), trained on these datasets: |
|
|
|
- grimulkan/theory-of-mind |
|
- grimulkan/physical-reasoning |
|
- ResplendentAI/Luna_Alpaca |
|
- unalignment/toxic-dpo-v0.2 |
|
- kira/math-dpo |
|
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED |
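
## Usage

A minimal inference sketch using the `transformers` library. The `MODEL_ID` placeholder and the Alpaca-style prompt template are assumptions (this card does not state a prompt format); replace both with this repository's id and the template the base model expects.

```python
MODEL_ID = "path/to/Pulsar_7B"  # placeholder: replace with this repository's Hugging Face id


def build_prompt(user_message: str) -> str:
    # Alpaca-style instruction prompt (an assumption; adjust to the model's actual template)
    return f"### Instruction:\n{user_message}\n\n### Response:\n"


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    # Imports kept local so the prompt helper above works without torch/transformers installed
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, skipping the prompt
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Explain why ice floats on water."))
```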
|
|
|
## Quantizations |
|
Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co/mradermacher/Pulsar_7B-GGUF). |
|
|
|
--- |
|
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |