LoRA-TMLR-2024's Collections:
- Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
- Continued Pretraining - Code (StarCoder-Python)
- Instruction Finetuning - Math (MetaMathQA)
- Continued Pretraining - Math (OpenWebMath)
Instruction Finetuning - Math (MetaMathQA)
Updated 26 days ago
Model and LoRA adapter checkpoints for Llama-2-7B finetuned on MetaMathQA.
- LoRA-TMLR-2024/metamath-lora-rank-16-alpha-32 (updated 26 days ago • 19)
- LoRA-TMLR-2024/metamath-lora-rank-256-alpha-512 (updated 26 days ago • 18)
- LoRA-TMLR-2024/metamath-lora-rank-64-alpha-128 (updated 26 days ago • 15)
- LoRA-TMLR-2024/metamath-full-finetuning-lr-1e-05 (updated 24 days ago • 4)
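The repository names encode each adapter's LoRA hyperparameters: every LoRA checkpoint in the collection pairs a rank r with alpha = 2r (16/32, 64/128, 256/512). Since LoRA scales the low-rank update BA by alpha/rank, all three adapters share the same effective scaling factor of 2. A minimal sketch illustrating this (the checkpoint names are taken from the list above; the scaling rule is the standard LoRA convention, not something stated on this page):

```python
# Rank/alpha pairs parsed from the checkpoint names in this collection.
checkpoints = {
    "metamath-lora-rank-16-alpha-32": (16, 32),
    "metamath-lora-rank-64-alpha-128": (64, 128),
    "metamath-lora-rank-256-alpha-512": (256, 512),
}

for name, (rank, alpha) in checkpoints.items():
    # LoRA applies W + (alpha / rank) * B @ A, so this ratio is the
    # effective scale of the learned update.
    scale = alpha / rank
    print(f"{name}: alpha/rank = {scale}")
```

Assuming these repos are standard PEFT adapters, an adapter would presumably be applied on top of the Llama-2-7B base model with `peft.PeftModel.from_pretrained(base_model, "LoRA-TMLR-2024/metamath-lora-rank-64-alpha-128")`; the `metamath-full-finetuning-lr-1e-05` entry is instead a fully finetuned model loaded directly.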