KennethEnevoldsen committed
Commit
2e2694b
1 Parent(s): a43daff

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -12,7 +12,7 @@ license: mit
 ## Munin 7b e5
 This model has 32 layers and the embedding size is 4096.
 
-This model utilizes the lora adapter layer introduced in the paper [Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf) along with the [merged model](https://huggingface.co/RJuro/munin-neuralbeagle-7b) by Roman Jurowetzki which merged the [Danish Munin model](https://huggingface.co/danish-foundation-models/munin-7b-alpha) with the [NeuralBeagle](https://huggingface.co/mlabonne/NeuralBeagle14-7B) model.
+This model utilizes the LoRA adapter introduced in the paper [Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf) along with the [Danish Munin model](https://huggingface.co/danish-foundation-models/munin-7b-alpha).
 
 
 ## Usage
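
The README's Usage section itself is not part of this diff. As a rough illustration only, here is a minimal sketch of how an E5-style decoder embedding model like this is typically queried with `transformers`: the repo id is a hypothetical placeholder, and the instruction prompt format and last-token pooling follow the recipe from the linked paper rather than anything confirmed in this commit.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo_id = "KennethEnevoldsen/munin-7b-e5"  # hypothetical repo id, not confirmed by this commit

tokenizer = AutoTokenizer.from_pretrained(repo_id)
tokenizer.padding_side = "right"  # the last-token pooling below assumes right padding
model = AutoModel.from_pretrained(repo_id)  # ~7B parameters; use fp16 and a GPU in practice
model.eval()

# E5-style models prepend a task instruction to the query; passages are encoded as-is.
texts = [
    "Instruct: Given a question, retrieve relevant passages\nQuery: Hvad er hovedstaden i Danmark?",
    "København er Danmarks hovedstad.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # shape: (batch, seq_len, 4096)

# Pool the hidden state of each sequence's last non-padding token into one
# 4096-dim embedding per text (matching the embedding size stated in the README).
last_token = batch["attention_mask"].sum(dim=1) - 1
embeddings = hidden[torch.arange(hidden.size(0)), last_token]
embeddings = F.normalize(embeddings.float(), p=2, dim=-1)

print(embeddings @ embeddings.T)  # cosine similarity matrix
```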