Model card tags: Text Generation, Transformers, GGUF, Safetensors, PyTorch, mistral, quantized (2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit precision), text-generation-inference, Merge, 7b, mistral-7b, instruct, finetune, gpt4, synthetic data, distillation, sharegpt, en, conversational, Inference Endpoints
Base models: mistralai/Mistral-7B-Instruct-v0.1, teknium/CollectiveCognition-v1.1-Mistral-7B
Dataset: CollectiveCognition/chats-data-2023-09-27
Useless
#1 opened by kalle07
All models from this user are one-to-one copies!!!
neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf
openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf
neural-chat-7b-v3-2-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf
CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf
Same file size down to the last byte!
Same answer on the same seed!
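A claim like "same file size down to the last byte" is easy to check more rigorously with a content hash: two files of equal size can still differ, but identical SHA-256 digests mean identical bytes. A minimal sketch (the file names are the ones listed above and are assumed to sit in the current directory):

```python
import hashlib
from pathlib import Path

def file_digest(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks so multi-GB GGUF files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# File names taken from the list in this discussion; skipped if not present locally.
files = [
    "neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf",
    "openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf",
    "neural-chat-7b-v3-2-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf",
    "CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf",
]
for name in files:
    p = Path(name)
    if p.exists():
        # Matching sizes are weak evidence; matching digests are proof of identical files.
        print(name, p.stat().st_size, file_digest(p))
```

If the four digests printed here differ, the files are not copies of each other, regardless of size.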
These models are all merges (SLERP) of various Mistral fine-tunes with Mistral-7B-Instruct-v0.1. They are not the same; I am also not sure what "on the last byte" means, but it takes time and resources to run these experiments for the community.
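For context on what a SLERP merge does: rather than averaging two models' weights linearly, spherical linear interpolation blends each pair of weight tensors along the arc between them, preserving their magnitudes better. A simplified per-tensor sketch with NumPy (not the exact mergekit implementation; the helper name and the fallback threshold are illustrative):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors at ratio t in [0, 1]."""
    v0_f = v0.flatten().astype(np.float64)
    v1_f = v1.flatten().astype(np.float64)
    # Normalize copies only to measure the angle between the two tensors.
    n0 = v0_f / (np.linalg.norm(v0_f) + eps)
    n1 = v1_f / (np.linalg.norm(v1_f) + eps)
    theta = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel tensors: plain linear interpolation is numerically safer.
        return ((1 - t) * v0_f + t * v1_f).reshape(v0.shape).astype(v0.dtype)
    # Standard SLERP weights: sin-scaled so t=0 gives v0 and t=1 gives v1.
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * v0_f + s1 * v1_f).reshape(v0.shape).astype(v0.dtype)

# Toy "weight tensors" standing in for one layer of each parent model.
a = np.array([[1.0, 0.0], [0.5, 0.5]])
b = np.array([[0.0, 1.0], [0.5, -0.5]])
merged = slerp(0.5, a, b)
```

Applied tensor-by-tensor across two checkpoints, this produces a merged model whose outputs genuinely differ from both parents, which is why two different SLERP merges are not expected to be byte-identical.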
MaziyarPanahi changed discussion status to closed