Meta-Llama-3-120B-Instruct

Meta-Llama-3-120B-Instruct is a self-merge of meta-llama/Meta-Llama-3-70B-Instruct made with MergeKit.

It was inspired by other large merges built the same way.

Special thanks to Eric Hartford for both inspiring and evaluating this model and to Charles Goddard for creating MergeKit.

πŸ” Applications

I recommend using this model for creative writing. It uses the Llama 3 chat template with a default context window of 8K tokens (extensible via RoPE theta scaling).
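If you want to push past 8K, here is a minimal sketch of dynamic RoPE scaling at load time; the 2.0 factor (targeting ~16K) is an illustrative, untested choice:

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "mlabonne/Meta-Llama-3-120B-Instruct"
config = AutoConfig.from_pretrained(model_id)
# Dynamic NTK scaling; a factor of 2.0 targets ~16K context (illustrative, untested).
# On newer transformers versions the key is "rope_type" instead of "type".
config.rope_scaling = {"type": "dynamic", "factor": 2.0}
model = AutoModelForCausalLM.from_pretrained(model_id, config=config, device_map="auto")
```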

Check the examples in the evaluation section to get an idea of its performance. The model is generally quite unhinged but has a good writing style. It sometimes outputs typos and is a big fan of uppercase.

⚡ Quantized models

Thanks to Bartowski, elinas, the mlx-community and others for providing these models.
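For example, a GGUF quant can be run locally with llama-cpp-python. The repo id and filename pattern below are assumptions, so check the quant provider's page for the actual files:

```python
from llama_cpp import Llama

# repo_id and filename are assumptions; pick a quant that fits your hardware
llm = Llama.from_pretrained(
    repo_id="bartowski/Meta-Llama-3-120B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}]
)
print(out["choices"][0]["message"]["content"])
```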

πŸ† Evaluation

This model is great for creative writing but struggles with other tasks. I'd say use it with caution and don't expect it to outperform GPT-4 outside of some very specific use cases.

Creative Writing

Thanks to Sam Paech for evaluating this model and sending me his outputs!

(Image: creative writing benchmark results)

🧩 Configuration

```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [10, 30]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [20, 40]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [30, 50]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [40, 60]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [50, 70]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [60, 80]
    model: meta-llama/Meta-Llama-3-70B-Instruct
merge_method: passthrough
dtype: float16
```
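The passthrough method simply stacks the listed slices, so the 10-layer overlap between neighboring slices means boundary layers are duplicated. A quick sanity check of the resulting depth:

```python
# Each 20-layer slice overlaps the next by 10 layers, duplicating boundary layers
slices = [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 80)]
print(sum(end - start for start, end in slices))  # 140 layers vs. 80 in the 70B base
```

Going from 80 to 140 decoder layers is what pushes the parameter count from 70B to roughly 122B.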

💻 Usage

```python
!pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Meta-Llama-3-120B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the Llama 3 chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Shard the model across available GPUs via accelerate
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
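At float16, 122B parameters amount to roughly 244 GB of weights alone (2 bytes per parameter), so multi-GPU sharding via device_map="auto" or a quantized build is effectively required. Below is a minimal 4-bit loading sketch with bitsandbytes (assumes a CUDA GPU and `pip install bitsandbytes`; the quality impact on this particular merge is untested here):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization cuts weight memory roughly 4x vs. float16
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/Meta-Llama-3-120B-Instruct",
    quantization_config=bnb,
    device_map="auto",
)
```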