# Model Card: kevin009/lamatama
## Model Description

The `kevin009/lamatama` model is a compact chat model built from the 1.1B-parameter TinyLlama family, combining a large pretraining corpus with modern fine-tuning techniques. It is designed for natural language understanding and generation, with a focus on chat-based applications.
## Training Details
- Model Architecture: The `kevin009/lamatama` model is built on the architecture and tokenizer of Llama 2, ensuring compatibility and easy integration with various open-source projects (see the loading sketch after this list).
- Dataset: The base model was pretrained on 3 trillion tokens, a scale that allows for a deep and nuanced understanding of language.
- Training Period: Pretraining was carried out over 90 days on 16 A100-40G GPUs.
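Because the model reuses Llama 2's architecture and tokenizer, it loads with the standard `transformers` auto classes. A minimal loading sketch (only the repository name comes from this card; everything else is stock `transformers` API):

```python
# Minimal loading sketch: the Llama 2-compatible architecture means the
# standard auto classes resolve the model without any custom code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kevin009/lamatama")
model = AutoModelForCausalLM.from_pretrained(
    "kevin009/lamatama",
    torch_dtype=torch.bfloat16,  # same dtype as the pipeline example below
)
```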
## Fine-tuning
This specific version of the model has been fine-tuned to excel in chat-based applications. It builds on the `TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T` model, incorporating learnings and optimizations from Hugging Face's Zephyr training recipe.
- Initial Phase: The model was first fine-tuned on a variant of the UltraChat dataset, which is rich in synthetic dialogues generated by ChatGPT.
- Further Alignment: Subsequent alignment was achieved with 🤗 TRL's `DPOTrainer` and the `openbmb/UltraFeedback` dataset, comprising 64k prompts and model completions ranked by GPT-4 (a rough training sketch follows this list).
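To make the alignment step concrete, here is a hedged sketch of how DPO training is typically wired up with TRL. This is not the exact recipe behind this model: the `DPOTrainer` call reflects the TRL API around v0.7, and the preference data is assumed to be binarized into `prompt`/`chosen`/`rejected` columns (the raw `openbmb/UltraFeedback` release needs that preprocessing; `HuggingFaceH4/ultrafeedback_binarized` from the Zephyr recipe is one such preprocessed variant):

```python
# Hedged DPO sketch (TRL API circa v0.7); not the exact recipe behind this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Binarized UltraFeedback preferences from the Zephyr recipe; depending on the
# TRL version, the chosen/rejected columns may need flattening to plain strings.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL creates a frozen reference copy internally
    args=TrainingArguments(output_dir="lamatama-dpo", per_device_train_batch_size=2),
    beta=0.1,        # strength of the KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```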
## How to Use
Ensure you have `transformers>=4.34`. For detailed instructions and updates, check out the GitHub page for kevin009/lamatama.
### Installation (if your `transformers` version is below 4.34)

```bash
pip install git+https://github.com/huggingface/transformers.git
pip install accelerate
```
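To sanity-check the environment before running the example, a quick version assertion (`packaging` ships as a dependency of `transformers`):

```python
# Verify the installed transformers version supports the chat-template API.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.34.0"), (
    f"transformers {transformers.__version__} is too old; need >= 4.34"
)
```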
### Example Usage
Here's a quick guide to using `kevin009/lamatama` for generating text:
```python
import torch
from transformers import pipeline

# Initialize the text-generation pipeline
pipe = pipeline("text-generation", model="kevin009/lamatama", torch_dtype=torch.bfloat16, device_map="auto")

# Sample dialogue using the model's chat template
messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

# Render the prompt and generate a completion
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
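Note that the `text-generation` pipeline returns the rendered prompt followed by the completion in `generated_text`. To print only the model's reply, a small follow-up (reusing `prompt` and `outputs` from the snippet above):

```python
# The pipeline echoes the prompt back; slice it off to keep only the reply.
reply = outputs[0]["generated_text"][len(prompt):]
print(reply.strip())
```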
## Acknowledgements
This model is a product of collaboration and innovative approaches to language modeling. We extend our thanks to all contributors, as well as the creators of the datasets and training methodologies that made `kevin009/lamatama` a reality.

This model card introduces `kevin009/lamatama`, a versatile language model fine-tuned for chat applications.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 37.15 |
| AI2 Reasoning Challenge (25-Shot) | 36.35 |
| HellaSwag (10-Shot) | 61.12 |
| MMLU (5-Shot) | 24.72 |
| TruthfulQA (0-shot) | 37.67 |
| Winogrande (5-shot) | 60.77 |
| GSM8k (5-shot) | 2.27 |