s3nh committed on
Commit e093c3b
1 Parent(s): 37ca6ee

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ llama-2-13b-chat-dutch.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-2-13b-chat-dutch.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-2-13b-chat-dutch.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-2-13b-chat-dutch.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-2-13b-chat-dutch.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,43 @@
+
+ ---
+ license: openrail
+ pipeline_tag: text-generation
+ library_name: transformers
+ language:
+ - nl
+ - en
+ ---
+
+
+ ## Model card
+
+ Buy me a coffee if you like this project ;)
+ <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
+
+ #### Description
+
+ GGUF format model files for [BramVanroy/Llama-2-13b-chat-dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
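+
+ Each quantized variant is a single `.gguf` file and can be fetched individually. The snippet below is a minimal sketch using `huggingface_hub`; the `repo_id` is a placeholder for this repository's actual id, and the filename is one of the files listed in this upload.
+
+ ```python
+ # Sketch: download one quantized GGUF file from the Hub.
+ # NOTE: repo_id is a placeholder -- replace it with this repository's id.
+ from huggingface_hub import hf_hub_download
+
+ local_path = hf_hub_download(
+     repo_id="s3nh/Llama-2-13b-chat-dutch-GGUF",       # placeholder repo id
+     filename="llama-2-13b-chat-dutch.Q4_K_M.gguf",    # file from this upload
+ )
+ print(local_path)
+ ```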
+
+ ### GGUF Specs
+
+ GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
+
+ - Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
+ - Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
+ - mmap compatibility: models can be loaded using mmap for fast loading and saving.
+ - Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
+ - Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
+
+ The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and the model to be annotated with additional information that may be useful for inference or for identifying the model.
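+
+ As a minimal sketch of the "single-file deployment" and "easy to use" points above, the snippet below assumes the `llama-cpp-python` package is installed and that one of the `.gguf` files from this repo is available locally; the parameter values are illustrative, not recommendations.
+
+ ```python
+ # Sketch: load a single-file GGUF model with llama-cpp-python.
+ # The one .gguf file carries weights, tokenizer and metadata,
+ # so no extra configuration files are needed.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="llama-2-13b-chat-dutch.Q4_K_M.gguf",  # single-file deployment
+     n_ctx=2048,       # context window to allocate
+     n_gpu_layers=0,   # raise if your llama.cpp build supports GPU offloading
+ )
+
+ out = llm("Kwantisatie is", max_tokens=32)  # short Dutch completion prompt
+ print(out["choices"][0]["text"])
+ ```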
+
+ ### Inference
+
+ User: Tell me a story about what quantization is and what we need to build one.
+ Teacher: Quantization refers to the process of representing a continuous-time signal as a discrete-time signal. The purpose of this representation is to allow for efficient digital processing, which can be difficult or impossible with continuous-time signals. In order to build a quantizer, you must first define your signal space and decide on the number of quantization levels that you want to use. Once you have these parameters defined, you can create a lookup table that will map each input signal value to its corresponding output level in the quantizer.
+ User: So it's basically like a mapping between two different types of signals?
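+
+ A rough sketch of reproducing an exchange like the one above with `llama-cpp-python`; the `User:`/`Teacher:` prompt format simply mirrors the transcript and is an assumption, not a documented template for this model.
+
+ ```python
+ # Sketch: plain-text, chat-style completion over one of the quantized files.
+ from llama_cpp import Llama
+
+ llm = Llama(model_path="llama-2-13b-chat-dutch.Q5_K_M.gguf", n_ctx=2048)
+
+ prompt = (
+     "User: Tell me a story about what quantization is and what we need to build one.\n"
+     "Teacher:"
+ )
+
+ result = llm(prompt, max_tokens=256, stop=["User:"], temperature=0.7)
+ print(result["choices"][0]["text"].strip())
+ ```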
+
+ # Original model card
llama-2-13b-chat-dutch.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ef642495801ccf732630d0aa3281527357c17f6dc545dd97e995ff74c04806a
+ size 5658981856
llama-2-13b-chat-dutch.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99aca825b9f77c296f22717c71f0103ba57019ef19d1edcbdb46b94ef690557e
+ size 7865957856
llama-2-13b-chat-dutch.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:136552732f22f08f52ab6958a6e57bc10655f850ac0b61f8c860769b3d6165f8
+ size 9229925856
llama-2-13b-chat-dutch.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c96dfb25fcc6ccef5e5566a0cef8d458a2f52e667b47550a1f96dadba96b0f0d
+ size 10679141856
llama-2-13b-chat-dutch.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f8c9e6275b7e4fc98a31b7db9fa16d7c6b7b9d727d457c4ae25b24d273958c0
+ size 13831321056
llama-2-13b-chat-dutch.fp16.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9df4dc44ea37e83a41f498f343057f570ff2fcc1bd82ce7263db69edacae1ac
+ size 26033304992