---
license: cc-by-nc-4.0
datasets:
- danlou/based-chat-v0.1-Mistral-Nemo-Base-2407
base_model:
- danlou/relay-v0.1-Mistral-Nemo-2407
pipeline_tag: text-generation
tags:
- axolotl
- lmstudio
- gguf
---
# Relay v0.1 (Mistral Nemo 2407)
This page provides GGUF versions of relay-v0.1-Mistral-Nemo-2407. For full details about the model, please see the main model page.
Note: If you have access to a CUDA GPU, it's highly recommended that you use the main (HF) version of the model with the relaylm.py script, which has better support for commands (e.g., system messages). The relaylm.py script also supports 4-bit and 8-bit bitsandbytes quants.
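As a rough sketch of what a 4-bit bitsandbytes load looks like with the standard `transformers` API (the model ID is from this card; the NF4 quant type and compute dtype are illustrative assumptions, not settings taken from relaylm.py):

```python
# Hedged sketch: loading the main (HF) model as a 4-bit bitsandbytes quant.
# Requires a CUDA GPU; swap load_in_4bit for load_in_8bit=True for 8-bit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "danlou/relay-v0.1-Mistral-Nemo-2407"

# 4-bit NF4 quantization config (assumed settings, for illustration)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place quantized weights on the available GPU(s)
)
```

For the full command handling (e.g., system messages), use relaylm.py itself rather than this plain `transformers` load.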
## Custom Preset for LM Studio
To use these GGUF files with LM Studio, you should use this preset configuration. Relay models use the ChatML template, but not its standard roles and system prompts.
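To illustrate what "ChatML, but not standard roles" means, here is a minimal sketch of how a ChatML-style prompt is assembled. The role names below (`user`, `assistant`) are the standard ChatML placeholders, not Relay's actual roles, which is exactly why a custom preset is needed:

```python
# Hedged sketch: assembling a ChatML-delimited prompt string.
# Relay models keep the <|im_start|>/<|im_end|> delimiters but use
# their own role names; the roles here are illustrative only.
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

prompt = to_chatml([{"role": "user", "content": "hi"}])
print(prompt)
# <|im_start|>user
# hi<|im_end|>
```

A generic ChatML preset would inject roles like these; the linked preset replaces them with the role setup Relay was trained on.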
After you select and download the GGUF version you want to use: