Uploaded model
- Compute sponsored by: Nvidia and Arrow ECS Denmark through Danish Data Science Community
- Developed by: ThatsGroes
- License: apache-2.0
- Finetuned from model: AI-Sweden-Models/Llama-3-8B-instruct

Fine-tuned for 1 epoch.

We ended up using 65.62 GB GPU memory (82.92%), of which 49.89 GB (63.04%) was used for LoRA.
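This is the kind of memory summary printed by the standard Unsloth fine-tuning notebooks. A minimal sketch of how such figures can be computed with PyTorch (variable names and the training placeholder are illustrative, not taken from this card):

```python
import torch

# Reserved GPU memory right after the model is loaded, before training starts.
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
start_gb = torch.cuda.max_memory_reserved() / 1024**3

# ... trainer.train() would run here ...

# Peak reserved memory after training; the difference is attributed to LoRA training.
peak_gb = torch.cuda.max_memory_reserved() / 1024**3
lora_gb = peak_gb - start_gb
print(f"We ended up using {peak_gb:.2f} GB GPU memory ({100 * peak_gb / total_gb:.2f}%), "
      f"of which {lora_gb:.2f} GB ({100 * lora_gb / total_gb:.2f}%) was used for LoRA.")
```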
[codecarbon INFO @ 21:31:34] Energy consumed for RAM : 0.404226 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 21:31:34] Energy consumed for all GPUs : 0.625855 kWh. Total GPU Power : 82.8216447468557 W
[codecarbon INFO @ 21:31:34] Energy consumed for all CPUs : 0.091042 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 21:31:34] 1.121123 kWh of electricity used since the beginning.
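The log lines above come from codecarbon's periodic energy report. A minimal sketch of how such tracking can be wrapped around a training run (the `trainer` object and project name are placeholders, not taken from this card):

```python
from codecarbon import EmissionsTracker

# Measure RAM, GPU and CPU energy use for the whole fine-tuning run.
tracker = EmissionsTracker(project_name="skolegpt-finetune")  # illustrative name
tracker.start()
try:
    trainer.train()  # placeholder for the actual fine-tuning call
finally:
    emissions_kg = tracker.stop()  # returns estimated CO2-equivalent emissions in kg
    print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```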
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
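A minimal sketch of what a 1-epoch Unsloth + TRL LoRA run along these lines can look like; the dataset, hyperparameters, and the older-style SFTTrainer arguments are assumptions, not taken from this card:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model and attach LoRA adapters via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AI-Sweden-Models/Llama-3-8B-instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Supervised fine-tuning for a single epoch with TRL's SFTTrainer.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # the SkoleGPT instruction dataset (placeholder)
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```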
Model tree for ThatsGroes/Llama-3-8B-instruct-AI-Sweden-SkoleGPT-GGUF
- Base model: meta-llama/Meta-Llama-3-8B
- Finetuned: AI-Sweden-Models/Llama-3-8B
- Finetuned: AI-Sweden-Models/Llama-3-8B-instruct
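Since this repository ships GGUF files, one way to run the model locally is with llama-cpp-python; a hedged sketch (the quantisation filename is a placeholder; check the repo's file list for the actual names):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF quantisation from the repo and load it with llama.cpp bindings.
model_path = hf_hub_download(
    repo_id="ThatsGroes/Llama-3-8B-instruct-AI-Sweden-SkoleGPT-GGUF",
    filename="model-q4_k_m.gguf",  # placeholder filename
)
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hej! Hvad kan du hjælpe med?"}]
)
print(out["choices"][0]["message"]["content"])
```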