
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Instruct_Mixtral-8x7B-v0.1_Dolly15K - GGUF

Original model description:

license: apache-2.0
datasets:
  - databricks/databricks-dolly-15k
pipeline_tag: text-generation

Instruct_Mixtral-8x7B-v0.1_Dolly15K

Fine-tuned from Mixtral-8x7B-v0.1 using the Dolly15k dataset, split 85% training, 14.9% validation, and 0.1% test. Trained for 1.0 epoch using QLoRA with a 1024-token context window.
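
The card does not include the training code, but a minimal sketch of a QLoRA setup matching the stated configuration might look like the following. Only the 4-bit quantized base, the 1024-token context, and the 85/14.9/0.1 Dolly15k split come from the description above; the base model id, LoRA rank, target modules, and split seed are assumptions.

```python
# Illustrative QLoRA configuration only -- NOT the Brillibits training script.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mixtral-8x7B-v0.1"  # assumed base checkpoint

# 4-bit (QLoRA) quantization of the frozen base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapter; rank and target modules are assumptions, not from the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Dolly15k split roughly matching the card: 85% train / 14.9% val / 0.1% test.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
splits = dolly.train_test_split(test_size=0.15, seed=42)
val_test = splits["test"].train_test_split(test_size=1 / 150, seed=42)
train_ds, val_ds, test_ds = splits["train"], val_test["train"], val_test["test"]

MAX_LEN = 1024  # context window stated on the card
```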

Model Details

  • Trained by: Brillibits.
  • Model type: Instruct_Mixtral-8x7B-v0.1_Dolly15K is an auto-regressive sparse mixture-of-experts language model based on the Mixtral transformer architecture.
  • Language(s): English
  • License for Instruct_Mixtral-8x7B-v0.1_Dolly15K: Apache 2.0

Prompting

Prompt Template With Context

Write a 10-line poem about a given topic

Input:

The topic is about racecars

Output:

Prompt Template Without Context

Who was the second president of the United States?

Output:
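
A minimal sketch of running one of the GGUF quantizations with llama-cpp-python is shown below. The model file name is a placeholder, and the exact prompt preamble used during fine-tuning is not reproduced on this card, so the template is assembled only from the Instruction/Input/Output sections shown above.

```python
# Sketch: load a GGUF quantization and prompt it in the style shown above.
from llama_cpp import Llama

llm = Llama(
    model_path="Instruct_Mixtral-8x7B-v0.1_Dolly15K.Q4_K_M.gguf",  # placeholder file name
    n_ctx=1024,  # matches the training context window stated above
)

def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble a prompt from the template sections shown on this card."""
    if context:
        return f"{instruction}\n\nInput:\n{context}\n\nOutput:\n"
    return f"{instruction}\n\nOutput:\n"

prompt = build_prompt(
    "Write a 10-line poem about a given topic",
    "The topic is about racecars",
)
result = llm(prompt, max_tokens=256)
print(result["choices"][0]["text"])
```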

Professional Assistance

This model and others like it are useful, but LLMs hold the most promise when they are applied to custom data to automate a wide variety of tasks.

If you have a dataset and want to see whether it can be used to automate some of your tasks, and you are looking for professional assistance, contact me here.

GGUF
Model size: 46.7B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit
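
To fetch one of the quantized files before running it, something like the following works; the repo_id and filename are placeholders, since the exact names depend on how the files were uploaded.

```python
# Hypothetical download of a single GGUF file; repo_id and filename are
# placeholders -- check the repository's file list for the actual names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="RichardErkhov/...",   # this quantization repo
    filename="...Q4_K_M.gguf",     # pick one of the 2- to 6-bit files
)
print(local_path)
```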
