# Uploaded model

- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through the Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-70B-Instruct

LoRA adapter on Llama-3.1-70B loaded in 4-bit. Trained for 1 epoch with rank=lor…
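
Since the adapter itself is the artifact here, a minimal loading sketch may be useful. It is an assumption-laden illustration, not taken from this card: the adapter repo id is a placeholder, and the NF4/bfloat16 quantization settings are common QLoRA defaults rather than confirmed values.

```python
# Hypothetical loading sketch; adapter_id and the quantization settings are
# assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-70B-Instruct"
adapter_id = "ThatsGroes/<this-adapter-repo>"  # placeholder: replace with this repo's id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # the 4-bit setup described above
    bnb_4bit_quant_type="nf4",              # assumption: common QLoRA default
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
```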

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
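
For context, a run like this is typically wired up roughly as follows. The rank, sequence length, and dataset below are illustrative stand-ins (the card truncates the actual hyperparameters), and exact SFTTrainer arguments vary with the TRL version.

```python
# Illustrative Unsloth + TRL setup; r, max_seq_length, and the dataset are
# stand-ins, not the values used for this adapter.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-70B-Instruct",
    max_seq_length=2048,  # stand-in
    load_in_4bit=True,    # the 4-bit setup described above
)
model = FastLanguageModel.get_peft_model(model, r=16)  # r (LoRA rank) is a stand-in

dataset = Dataset.from_dict({"text": ["<training example>"]})  # stand-in dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(num_train_epochs=1, output_dir="outputs"),  # 1 epoch, as above
)
trainer.train()
```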

We ended up using 62.52 GB of GPU memory (79.00%), of which 23.83 GB (30.12%) was used for LoRA.
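
A report like this is commonly produced from PyTorch's own memory counters; a minimal sketch, assuming a single GPU:

```python
# Sketch of a peak-memory report; assumes a single GPU and uses PyTorch's
# reserved-memory high-water mark.
import torch

total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
peak_gb = torch.cuda.max_memory_reserved() / 1024**3  # high-water mark since start
print(f"Used {peak_gb:.2f} GB of GPU memory ({peak_gb / total_gb * 100:.2f}%).")
```

The LoRA-specific share is typically obtained by snapshotting the same counter just before training and subtracting it from the peak.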

```text
[codecarbon INFO @ 11:07:59] Energy consumed for RAM : 2.574882 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 11:07:59] Energy consumed for all GPUs : 4.045188 kWh. Total GPU Power : 270.22211938762564 W
[codecarbon INFO @ 11:07:59] Energy consumed for all CPUs : 0.579916 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 11:07:59] 7.199986 kWh of electricity used since the beginning.
```
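
These lines are standard [codecarbon](https://github.com/mlco2/codecarbon) output; a minimal sketch of how such tracking is wrapped around a training run (the trainer call is a stand-in for the run above):

```python
# Minimal codecarbon usage sketch; trainer.train() is a stand-in.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # periodically logs RAM, GPU, and CPU energy
tracker.start()
try:
    trainer.train()  # stand-in for the Unsloth/TRL training loop
finally:
    tracker.stop()  # writes emissions.csv with the estimated kg CO2e
```
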
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)