This model is the result of a new kind of model optimization, applied to Meta-Llama-3.1-8B. A paper describing the technique is currently in preparation.
This research was supported with hardware from the appliedAI Institute, whose goal is to generate and communicate high-quality knowledge about trustworthy AI.
## Quickstart
```python
import transformers
import torch

model_id = "dnhkng/RYS-Llama-3.1-8B-Instruct"

# Load the model in bfloat16 and let it be placed across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("Hey how are you doing today?")
```
## SHAMELESS ADVERTISING BREAK
I'm on the hunt for new challenges and a chance to dive into some exciting research opportunities. Oh, and did I mention I just snagged a top spot on the Open LLM Leaderboard?
## Profile
Innovation enthusiast, AI strategist, and interdisciplinary-tech nerd: that's me! With over a decade of experience in research and project management, my professional journey has been shaped largely by my passion for artificial intelligence and its potential to transform industries. Backed by a solid grounding in AI and machine learning, a knack for innovation and problem-solving, and a healthy dose of curiosity, I'm excited to bring my skills to a new team.
Originally from Australia, where I earned my degrees in Organic Chemistry and Biochemistry, I moved to Germany in 2004 and continued my academic path with a PhD in Chemistry at the Max Planck Institute of Biochemistry. Today, I leverage that educational background and diverse industry experience to drive AI innovation across a wide range of applications. Hobbies? Lots: I've built the world's most powerful espresso machine and am working to bring GLaDOS to life.
I'm based out of Munich, Germany, but I would be interested in working remotely for a team with more compute than my 2x 4090s.
Reach out via LinkedIn - Dr David Noel Ng
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 26.44 |
| IFEval (0-Shot) | 76.85 |
| BBH (3-Shot) | 31.09 |
| MATH Lvl 5 (4-Shot) | 11.33 |
| GPQA (0-shot) | 2.35 |
| MuSR (0-shot) | 7.68 |
| MMLU-PRO (5-shot) | 29.33 |
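The Avg. row is simply the arithmetic mean of the six benchmark scores, which you can verify directly from the table:

```python
# Sanity check: the leaderboard average is the mean of the six benchmark scores
scores = [76.85, 31.09, 11.33, 2.35, 7.68, 29.33]
print(round(sum(scores) / len(scores), 2))  # 26.44
```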