s3nh committed on
Commit
1a92961
1 Parent(s): fce4248

Upload folder using huggingface_hub

Browse files
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ codellama/CodeLlama-70b-Instruct-hf[[:space:]]/capybarahermes-2.5-mistral-7b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ codellama/CodeLlama-70b-Instruct-hf[[:space:]]/capybarahermes-2.5-mistral-7b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ codellama/CodeLlama-70b-Instruct-hf[[:space:]]/capybarahermes-2.5-mistral-7b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ codellama/CodeLlama-70b-Instruct-hf[[:space:]]/capybarahermes-2.5-mistral-7b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ codellama/CodeLlama-70b-Instruct-hf[[:space:]]/capybarahermes-2.5-mistral-7b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
codellama//CodeLlama-70b-Instruct-hf /README.md ADDED
@@ -0,0 +1,45 @@
+
+ ---
+ license: openrail
+ pipeline_tag: text-generation
+ library_name: transformers
+ language:
+ - zh
+ - en
+ ---
+
+
+ ## Original model card
+
+ Buy me a coffee if you like this project ;)
+ <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
+
+ #### Description
+
+ GGUF-format model files for [CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B).
+
+ ### GGUF Specs
+
+ GGUF is a format based on the existing GGJT, but it makes a few changes to be more extensible and easier to use. The following features are desired:
+
+ - Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
+ - Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
+ - mmap compatibility: models can be loaded using mmap for fast loading and saving.
+ - Easy to use: models can be loaded and saved with a small amount of code, with no need for external libraries, regardless of the language used.
+ - Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
+
+ The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets the model be annotated with additional information that may be useful for inference or for identifying the model.
+
+
+
+ ### Inference
+
+ User: Tell me a story about what quantization is and what we need to build.
+
+ Me: Quantization, in the context of machine learning, refers to the process of mapping a continuous input signal or data into discrete values. In other words, it's like rounding the values of your data up or down so they fit neatly within a certain range or scale. This is useful because many algorithms and systems can operate more efficiently with discrete inputs than with continuous ones.
+
+ For example, let's say we have a signal that measures temperature over time. The temperature data is continuous and can have any value between the lowest and highest temperatures. But for our machine learning model, it would be easier to
+
+ # Original model card
codellama//CodeLlama-70b-Instruct-hf /capybarahermes-2.5-mistral-7b.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e17bf12b4434dc64e66b8a8ac2fba36d2bf894354770c0c1e4606f4c230abb26
+ size 3164577792
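Each .gguf entry in this commit is stored as a Git LFS pointer like the one above: the repository holds only a tiny text stub, while the multi-GB blob lives in LFS storage. A minimal sketch of parsing such a pointer (one `key value` pair per line, per the git-lfs pointer spec):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # e.g. "sha256", "e17bf…"
    return {"version": fields["version"], "algo": algo,
            "oid": digest, "size": int(fields["size"])}

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e17bf12b4434dc64e66b8a8ac2fba36d2bf894354770c0c1e4606f4c230abb26
size 3164577792
"""
info = parse_lfs_pointer(pointer)
# info["size"] == 3164577792 bytes, i.e. ~3.16 GB for the Q3_K_S file
```

The `size` field is the uncompressed byte count of the real object, which is why the sizes below track the quantization level (Q3 smallest, fp16 largest).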
codellama//CodeLlama-70b-Instruct-hf /capybarahermes-2.5-mistral-7b.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b4d9c19512700034fae794989f4aba0f925d60e6ab3cde45d36e5c573a544a7
+ size 4368450624
codellama//CodeLlama-70b-Instruct-hf /capybarahermes-2.5-mistral-7b.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4775fcad8913508cf10daa6b0eeb4e37a330c2108b4fe1bbbacd8b8583f05e9
+ size 5131421760
codellama//CodeLlama-70b-Instruct-hf /capybarahermes-2.5-mistral-7b.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4d9491d330c10f21060b2dad4c4bf2a2456376ac4e7066b5b3835be1649e783
+ size 5942078592
codellama//CodeLlama-70b-Instruct-hf /capybarahermes-2.5-mistral-7b.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca37535962425210ad0bc3bd46133867976debdc5a46ef1d7a0e024228b4abc6
+ size 7695875072
codellama//CodeLlama-70b-Instruct-hf /capybarahermes-2.5-mistral-7b.fp16.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bcb54fd1785440e79c818b62ad9c1fbc5e057fb1905c7a9d3f694f500d6e7df
+ size 14484764640