afrideva committed
Commit e11c4cc
1 Parent(s): 0b45d81

Create README.md

Files changed (1): README.md +93 -0
---
base_model: mychen76/tinyllama-colorist-v2
inference: false
license: apache-2.0
model_creator: mychen76
model_name: tinyllama-colorist-v2
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
pipeline_tag: text-generation
---

# mychen76/tinyllama-colorist-v2-GGUF

Quantized GGUF model files for [tinyllama-colorist-v2](https://huggingface.co/mychen76/tinyllama-colorist-v2) from [mychen76](https://huggingface.co/mychen76).
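The files in the table below resolve to direct download URLs that follow a fixed pattern. A small illustrative helper (not part of the original card; the repo id and filename pattern are taken from the table's links):

```python
# Illustrative: build the direct download URL for a given quant level.
# Repo id and filename pattern are copied from the table in this README.
REPO_ID = "afrideva/tinyllama-colorist-v2-GGUF"

def gguf_url(quant: str) -> str:
    filename = f"tinyllama-colorist-v2.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("q4_k_m"))
```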

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-colorist-v2.q2_k.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q2_k.gguf) | q2_k | 482.15 MB |
| [tinyllama-colorist-v2.q3_k_m.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tinyllama-colorist-v2.q4_k_m.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q4_k_m.gguf) | q4_k_m | 667.82 MB |
| [tinyllama-colorist-v2.q5_k_m.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q5_k_m.gguf) | q5_k_m | 782.05 MB |
| [tinyllama-colorist-v2.q6_k.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q6_k.gguf) | q6_k | 903.42 MB |
| [tinyllama-colorist-v2.q8_0.gguf](https://huggingface.co/afrideva/tinyllama-colorist-v2-GGUF/resolve/main/tinyllama-colorist-v2.q8_0.gguf) | q8_0 | 1.17 GB |

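The sizes above give a rough way to choose a file for a given memory budget (file size is only a proxy for RAM use; runtime overhead adds to it). A minimal sketch, with sizes copied from the table and a helper name of my own:

```python
# Approximate file sizes in MB, copied from the table above (q8_0 is 1.17 GB).
QUANT_SIZES_MB = {
    "q2_k": 482.15,
    "q3_k_m": 549.85,
    "q4_k_m": 667.82,
    "q5_k_m": 782.05,
    "q6_k": 903.42,
    "q8_0": 1198.08,
}

def largest_quant_under(budget_mb: float) -> str:
    """Return the largest (highest-precision) quant file that fits the budget."""
    fitting = {name: size for name, size in QUANT_SIZES_MB.items() if size <= budget_mb}
    if not fitting:
        raise ValueError("no quant file fits this budget")
    return max(fitting, key=fitting.get)

print(largest_quant_under(800))  # q5_k_m fits; q6_k (903.42 MB) does not
```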
## Original Model Card:

MODEL: "mychen76/tinyllama-colorist-v2" is a fine-tuned TinyLlama model trained on a color dataset.

MOTIVATION: A fun experimental model using TinyLlama as a Llama 2 replacement for resource-constrained environments.

PROMPT FORMAT: `<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:`

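Rendered for a concrete question, the template looks like this (variable names are for illustration only):

```python
# Fill the ChatML-style template above with a sample question.
question = "give me a pure brown color"
prompt = f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"
print(prompt)
```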
MODEL USAGE:
```python
import torch
from transformers import AutoTokenizer, pipeline

# The original snippet referenced model_id_colorist_final and formatted_prompt
# without defining them; they are defined here so the example runs end to end.
model_id_colorist_final = "mychen76/tinyllama-colorist-v2"

def print_color_space(hex_color):
    """Print the hex code with a terminal background-color swatch."""
    def hex_to_rgb(hex_color):
        hex_color = hex_color.lstrip('#')
        return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
    r, g, b = hex_to_rgb(hex_color)
    print(f'{hex_color}: \033[48;2;{r};{g};{b}m \033[0m')

def formatted_prompt(question):
    # Matches the PROMPT FORMAT described above.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"

tokenizer = AutoTokenizer.from_pretrained(model_id_colorist_final)
pipe = pipeline(
    "text-generation",
    model=model_id_colorist_final,
    torch_dtype=torch.float16,
    device_map="auto",
)

from time import perf_counter
start_time = perf_counter()

prompt = formatted_prompt('give me a pure brown color')
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=12,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```
Result: #807070

```
Result: <|im_start|>user
give me a pure brown color<|im_end|>
<|im_start|>assistant: #807070<|im_end>

Time taken for inference: 0.19 seconds
```
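The `print_color_space` helper in the snippet above is defined but never called; its core hex-to-RGB conversion, applied to the color the model returned, works like this (standalone sketch):

```python
def hex_to_rgb(hex_color):
    # Strip the leading '#' and parse each two-digit channel as base-16.
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#807070"))  # (128, 112, 112)
```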

Dataset: "burkelibbey/colors"