deepseek-coder-1.3b-chat
It was created by starting with deepseek-coder-1.3b and training it on the Open Assistant dataset. The wandb report is attached as a PDF so you can view the training run at a glance.
Reason
This model was fine-tuned to follow instructions. It is a stepping stone to further training, but it is still useful for asking questions about code.
How to use
You will need transformers>=4.31.
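If you have not installed the dependencies yet, something like `pip install "transformers>=4.31" torch accelerate` should work (accelerate is needed for the `device_map="auto"` setting used below).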
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "AIGym/deepseek-coder-1.3b-chat"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; float16 keeps the memory footprint small
# and device_map="auto" places the model on a GPU if one is available.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# The model was trained on Open Assistant-style turns, so prompts use the
# "### Human: ... ### Assistant:" format.
prompt = "What are the values in open source projects?"
formatted_prompt = f"### Human: {prompt}### Assistant:"

sequences = pipeline(
    formatted_prompt,
    do_sample=True,
    top_k=50,
    top_p=0.7,
    num_return_sequences=1,
    repetition_penalty=1.1,
    max_new_tokens=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
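By default the pipeline echoes the prompt back along with the completion. If you only want the assistant's answer, a minimal sketch like the following trims the prompt and stops at any follow-up "### Human:" turn the model starts (the `extract_reply` helper is hypothetical, not part of this model card; the pipeline also accepts `return_full_text=False` to handle the first step for you):

```python
def extract_reply(generated_text: str, formatted_prompt: str) -> str:
    # Hypothetical helper: drop the echoed prompt, keep only the reply.
    reply = generated_text[len(formatted_prompt):]
    # Sampling can run on into a new "### Human:" turn; cut it off if so.
    return reply.split("### Human:")[0].strip()

for seq in sequences:
    print(f"Result: {extract_reply(seq['generated_text'], formatted_prompt)}")
```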
Referrals
RunPod - This is the service I use to train the models on Hugging Face. If you use it, we both get free credits. - Visit Runpod's Website!
PayPal - If you want to leave a tip, it is appreciated. - Visit My Paypal!
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 31.74 |
| AI2 Reasoning Challenge (25-Shot) | 25.85 |
| HellaSwag (10-Shot) | 39.59 |
| MMLU (5-Shot) | 26.36 |
| TruthfulQA (0-shot) | 43.92 |
| Winogrande (5-shot) | 51.70 |
| GSM8k (5-shot) | 3.03 |
Evaluation results
All scores are from the Open LLM Leaderboard.
- normalized accuracy on AI2 Reasoning Challenge (25-Shot), test set: 25.85
- normalized accuracy on HellaSwag (10-Shot), validation set: 39.59
- accuracy on MMLU (5-Shot), test set: 26.36
- mc2 on TruthfulQA (0-shot), validation set: 43.92
- accuracy on Winogrande (5-shot), validation set: 51.70
- accuracy on GSM8k (5-shot), test set: 3.03