matichon committed
Commit 730ffb2
1 Parent(s): 07f480f

Upload 12 files

.gitattributes CHANGED
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-fp16-00001-of-00004.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-fp16-00002-of-00004.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-fp16-00003-of-00004.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-fp16-00004-of-00004.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ typhoon2-qwen2.5-7b-instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
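
For reference, lines like these are what `git lfs track` writes into `.gitattributes`; a minimal sketch of producing an equivalent rule locally (assuming git-lfs is installed, and using one glob in place of the per-file patterns above):

```bash
# Register the LFS hooks once per clone, then track a pattern;
# this appends a matching filter line to .gitattributes.
git lfs install
git lfs track "*.gguf"
git add .gitattributes
```
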
README.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ license: apache-2.0
+ pipeline_tag: text-generation
+ base_model: scb10x/typhoon2-qwen2.5-7b-instruct
+ tags:
+ - llama-cpp
+ - gguf-my-repo
+ ---
+ # Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf
+ This model was converted to GGUF format from [`scb10x/typhoon2-qwen2.5-7b-instruct`](https://huggingface.co/scb10x/typhoon2-qwen2.5-7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/scb10x/typhoon2-qwen2.5-7b-instruct) for more details on the model.
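+ If you'd rather fetch a single quant manually, a minimal sketch using the `huggingface-cli` tool (assuming `huggingface_hub` is installed; substitute any filename from this repo):
+ ```bash
+ # Download one quantized GGUF from this repo into the current directory
+ huggingface-cli download Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf \
+   typhoon2-qwen2.5-7b-instruct-q4_k_m.gguf --local-dir .
+ ```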
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux):
+ ```bash
+ brew install llama.cpp
+ ```
+ Invoke the llama.cpp server or the CLI.
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf --hf-file typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf -p "The meaning of life and the universe is"
+ ```
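+ For an interactive chat session instead of a one-shot completion, recent llama.cpp builds also offer a conversation mode that applies the model's chat template; a hedged sketch:
+ ```bash
+ llama-cli --hf-repo Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf --hf-file typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf -cnv
+ ```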
+ ### Server:
+ ```bash
+ llama-server --hf-repo Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf --hf-file typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf -c 2048
+ ```
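+ Once the server is running (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch:
+ ```bash
+ curl http://localhost:8080/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
+ ```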
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+ Step 1: Clone llama.cpp from GitHub.
+ ```bash
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+ ```bash
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
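+ (Recent llama.cpp versions have replaced the Makefile with CMake; if `make` is unavailable in your checkout, a hedged equivalent is:)
+ ```bash
+ cmake -B build -DLLAMA_CURL=ON
+ cmake --build build --config Release
+ ```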
+ Step 3: Run inference through the main binary.
+ ```bash
+ ./llama-cli --hf-repo Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf --hf-file typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf -p "The meaning of life and the universe is"
+ ```
+ or
+ ```bash
+ ./llama-server --hf-repo Float16-cloud/typhoon2-qwen2.5-7b-instruct-gguf --hf-file typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf -c 2048
+ ```
typhoon2-qwen2.5-7b-instruct-fp16-00001-of-00004.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c831d9e1ca5a5e6d099c65193da0cf67425036b93b80a228d5e8542ca95ee40b
+ size 3892782176
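
Each GGUF entry in this commit is a Git LFS pointer recording the blob's sha256 and byte size rather than the weights themselves; after downloading a file you can verify it against the recorded oid, e.g.:

```bash
# Should print the oid from the pointer above (c831d9e1...)
sha256sum typhoon2-qwen2.5-7b-instruct-fp16-00001-of-00004.gguf
```
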
typhoon2-qwen2.5-7b-instruct-fp16-00002-of-00004.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7106f291c36f2813f5977918eb9b36fdd8c48fe2095f0c1ad9facf872d10a80c
+ size 3923648416
typhoon2-qwen2.5-7b-instruct-fp16-00003-of-00004.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d8f124e496520092f45e14c78617b8583661365a66772a8c60e2fd8193dabb8
+ size 3997044320
typhoon2-qwen2.5-7b-instruct-fp16-00004-of-00004.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89846971af4d20fce6c1ab4f463ee27ea06e965273fea5f8fa2c1def159ca054
+ size 3424378496
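
The four fp16 files above are shards of a single model: llama.cpp loads the remaining shards automatically when pointed at the `-00001-of-00004` file, or they can be merged into one GGUF; a sketch assuming the `llama-gguf-split` tool built from llama.cpp:

```bash
# Merge the split fp16 GGUF into a single file (shards 2-4 are located automatically)
./llama-gguf-split --merge \
  typhoon2-qwen2.5-7b-instruct-fp16-00001-of-00004.gguf \
  typhoon2-qwen2.5-7b-instruct-fp16.gguf
```
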
typhoon2-qwen2.5-7b-instruct-iq4_nl.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:483f8e093961bc39d9c307b0e5230260d1e8f84b7692183268228541801996bf
+ size 4463273760
typhoon2-qwen2.5-7b-instruct-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d2d2cb23555d379d4b84208ee752215840391dea93a826c021148204cf9471a
+ size 4431390496
typhoon2-qwen2.5-7b-instruct-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6cb8d9feacfacf933cb84d4c04bee81c4e0b74b37d61f7485eceec207872c559
+ size 4683073312
typhoon2-qwen2.5-7b-instruct-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c22a4ffb0ed39489644fc8b3527cfc8a55852ea9d42071fa0187d7f412f24785
+ size 5315176224
typhoon2-qwen2.5-7b-instruct-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5626b5c48a8ce1422c8aa9664c50f1c5ea6eefbec09e8895fcd857512f74f574
+ size 5444831008
typhoon2-qwen2.5-7b-instruct-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2b4cc67ff8164e7a7abd4a88f69a03f6ddb8c58d062c212d844e5dcff23c174
+ size 6254198560
typhoon2-qwen2.5-7b-instruct-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42ca11d54333564c4753c68c5441f6f8e946b69db13710b90ba13ba9139f0b00
+ size 8098524960