# mlx-community/dolphin-2.2-yi-34b-200k
This model was converted to MLX format from [cognitivecomputations/dolphin-2.2-yi-34b-200k](https://huggingface.co/cognitivecomputations/dolphin-2.2-yi-34b-200k) using mlx-lm version 0.7.0. Refer to the original model card for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/dolphin-2.2-yi-34b-200k")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
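Since Dolphin is a chat-tuned model, you will generally get better completions by formatting the prompt with the tokenizer's chat template before calling `generate`. The sketch below is a minimal example, assuming the converted repo ships a chat template in its tokenizer config; the message content is only illustrative.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/dolphin-2.2-yi-34b-200k")

# Build a single-turn conversation and render it with the tokenizer's
# chat template (assumed to be present in the repo's tokenizer config).
messages = [{"role": "user", "content": "Summarize what MLX is in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# Pass the rendered prompt string to mlx-lm's generate helper.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```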
## Evaluation results

All results are reported by the Open LLM Leaderboard.

| Benchmark | Shots | Split | Metric | Value |
|---|---|---|---|---|
| AI2 Reasoning Challenge | 25-shot | test | normalized accuracy | 42.15 |
| HellaSwag | 10-shot | validation | normalized accuracy | 68.18 |
| MMLU | 5-shot | test | accuracy | 55.47 |
| TruthfulQA | 0-shot | validation | mc2 | 45.93 |
| Winogrande | 5-shot | validation | accuracy | 64.56 |
| GSM8k | 5-shot | test | accuracy | 3.71 |