ThatsGroes committed
Commit 4db106a
1 Parent(s): 969acd8

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ language:
 
 
 # Munin-7b-alpha instruction fine-tuned
-Munin-7b-alpha from [Danish Foundation Models](https://www.foundationmodels.dk/) fine-tuned by [yours truly](https://www.linkedin.com/in/kaspergroesludvigsen/) for 1 epoch on [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) using the code from [this notebook](https://github.com/alexandrainst/d3a-llm-workshop) by the Alexandra Institute
+[Munin-7b-alpha](https://huggingface.co/danish-foundation-models/munin-7b-alpha) from [Danish Foundation Models](https://www.foundationmodels.dk/) fine-tuned by [yours truly](https://www.linkedin.com/in/kaspergroesludvigsen/) for 1 epoch on [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) using the code from [this notebook](https://github.com/alexandrainst/d3a-llm-workshop) by the Alexandra Institute
 
 Trained on a single Nvidia RTX A4000 GPU using 13.82 GB of GPU memory (87.84%), of which 8.71 GB (55.39%) was used for LoRA.
 
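
For readers who want to reproduce a comparable run, below is a minimal sketch of a 1-epoch LoRA fine-tune using the Hugging Face trl/peft stack. The actual training code is in the linked Alexandra Institute notebook; aside from the model and dataset names taken from the README, everything here (rank, learning rate, batch size, output path) is an illustrative assumption, not the configuration used for this model.

```python
# Minimal LoRA fine-tuning sketch, assuming recent versions of the
# Hugging Face trl, peft, and datasets libraries. All hyperparameters are
# illustrative assumptions; the real code is in the Alexandra Institute
# notebook linked in the README.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Dataset named in the README. Depending on its schema, you may first need
# to map its columns into a single "text" (or "messages") field that
# SFTTrainer understands.
dataset = load_dataset("kobprof/skolegpt-instruct", split="train")

# LoRA adapter config: rank, scaling, and target modules are assumptions;
# the q/k/v/o projections are the usual targets for Mistral-style models.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# 1 epoch matches the README; batch size and learning rate are assumptions.
# Note that fitting a 7B model on a 16 GB card such as the RTX A4000
# typically also requires quantization (e.g. QLoRA), omitted here for brevity.
training_args = SFTConfig(
    output_dir="munin-7b-alpha-skolegpt",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model="danish-foundation-models/munin-7b-alpha",  # base model from the README
    train_dataset=dataset,
    args=training_args,
    peft_config=peft_config,
)
trainer.train()
```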