LeroyDyer committed
Commit aa52a48
1 Parent(s): 7b8dd57

Update README.md

Files changed (1)
  1. README.md +0 -41
README.md CHANGED
@@ -1,11 +1,5 @@
  ---
- base_model:
- - mistralai/Mistral-7B-Instruct-v0.2
- - NousResearch/Hermes-2-Pro-Mistral-7B
  library_name: transformers
- tags:
- - mergekit
- - merge
  license: mit
  language:
  - en
@@ -17,41 +11,6 @@ metrics:
  ---
  # Mixtral_BaseModel -7B-BBase

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
-
- One of the best Mistral instruct models.
-
- * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
-
- Seems to be fine, but still to be tested intensively (e.g. "What is a cat?", "Write a neural network in VB.NET").
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
-
- models:
-   - model: mistralai/Mistral-7B-Instruct-v0.2
-     parameters:
-       weight: 1.0
-   - model: NousResearch/Hermes-2-Pro-Mistral-7B
-     parameters:
-       weight: 0.3
- merge_method: linear
- dtype: float16
-
- ```
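Conceptually, the linear method is just a normalized weighted average of the two checkpoints' parameters. Below is a minimal PyTorch sketch of that idea using the weights from the config above; it is an illustration only, assumes both models expose identically named and shaped tensors, and is not how mergekit itself performs the merge.

```python
# Illustration of the linear-merge arithmetic only -- not mergekit's implementation.
# Assumes both checkpoints have identically named, identically shaped parameters.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.float16
)
other = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Pro-Mistral-7B", torch_dtype=torch.float16
)

w_base, w_other = 1.0, 0.3      # weights from the YAML config above
total = w_base + w_other        # the linear method normalizes the weights

other_state = other.state_dict()
merged_state = {
    name: (w_base * tensor + w_other * other_state[name]) / total
    for name, tensor in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("./merged-model")  # hypothetical output directory
```

The actual model was produced with mergekit, which also handles the practical details (sharded loading, tokenizer handling, writing out the merged checkpoint); the snippet above only shows the arithmetic behind the `linear` method.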
  ```python
  %pip install llama-index-embeddings-huggingface
  %pip install llama-index-llms-llama-cpp
 