mav23 committed on
Commit
61a5894
1 Parent(s): 4dab960

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +55 -0
  3. mistral-7b-v0.1-sharded.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ mistral-7b-v0.1-sharded.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,55 @@
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
inference:
  parameters:
    temperature: 0.7
---

# Note: Sharded Version of the Original "Mistral 7B" Model

This is simply a version of https://huggingface.co/mistralai/Mistral-7B-v0.1 with the weights sharded into parts of at most 2 GB each, which reduces the peak RAM required to load the model.
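As a minimal sketch of what the sharding buys you when loading with transformers (the repo id below is a placeholder for this repository, not a confirmed path; `low_cpu_mem_usage` is the option that loads the small shards one at a time instead of materializing the full state dict twice):

```python
# Minimal sketch: load the sharded checkpoint with a reduced RAM peak.
# Assumption: the repo id below is a hypothetical placeholder for this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mav23/mistral-7b-v0.1-sharded"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# With <=2 GB shards, transformers can load and release one shard at a time;
# low_cpu_mem_usage avoids keeping a second full copy of the weights in RAM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)

inputs = tokenizer("The theory of relativity states that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```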
# Model Card for Mistral-7B-v0.1

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.

For full details of this model, please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Model Architecture
Mistral-7B-v0.1 is a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
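For illustration, the first two of these choices can be read directly off the released configuration; a minimal sketch using the standard `MistralConfig` attributes:

```python
# Minimal sketch: inspect the architecture choices in the released config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")

# Grouped-Query Attention: fewer key/value heads than query heads,
# so several query heads share each KV head.
print(config.num_attention_heads)  # 32 query heads
print(config.num_key_value_heads)  # 8 KV heads

# Sliding-Window Attention: each token attends to at most this many
# preceding tokens per layer.
print(config.sliding_window)       # 4096
```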
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
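To check whether the source install is still needed, a quick sketch; `CONFIG_MAPPING` here is the same registry that raises the `KeyError` in the traceback above:

```python
# Minimal sketch: verify that this transformers install knows "mistral".
import transformers
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

print(transformers.__version__)
print("mistral" in CONFIG_MAPPING)  # True on releases that include Mistral support
```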
## Notice

Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
mistral-7b-v0.1-sharded.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43ca6a4cb76b0edf80dd09c043f50736c7a8021e34b59816a2fcc1498aa57744
size 4108916800
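The lines above are a Git LFS pointer: the actual artifact is a ~4.1 GB Q4_0 GGUF quantization of the model. As a minimal sketch of the assumed use, it can be run with a GGUF-capable runtime such as llama-cpp-python once the file has been downloaded:

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and that the
# .gguf file has been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-v0.1-sharded.Q4_0.gguf",
    n_ctx=4096,  # context length to allocate
)

out = llm(
    "The theory of relativity states that",
    max_tokens=64,
    temperature=0.7,  # matches the inference default in the README front matter
)
print(out["choices"][0]["text"])
```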