afrideva committed on
Commit
86e459c
1 Parent(s): 032b188

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +86 -0
README.md ADDED
@@ -0,0 +1,86 @@
---
base_model: malhajar/phi-2-chat-turkish
datasets:
- TFLai/Turkish-Alpaca
inference: false
language:
- tr
model_creator: malhajar
model_name: phi-2-chat-turkish
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# malhajar/phi-2-chat-turkish-GGUF

Quantized GGUF model files for [phi-2-chat-turkish](https://huggingface.co/malhajar/phi-2-chat-turkish) from [malhajar](https://huggingface.co/malhajar).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2-chat-turkish.fp16.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.fp16.gguf) | fp16 | 5.56 GB |
| [phi-2-chat-turkish.q2_k.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.q2_k.gguf) | q2_k | 1.17 GB |
| [phi-2-chat-turkish.q3_k_m.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.q3_k_m.gguf) | q3_k_m | 1.48 GB |
| [phi-2-chat-turkish.q4_k_m.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [phi-2-chat-turkish.q5_k_m.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [phi-2-chat-turkish.q6_k.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.q6_k.gguf) | q6_k | 2.29 GB |
| [phi-2-chat-turkish.q8_0.gguf](https://huggingface.co/afrideva/phi-2-chat-turkish-GGUF/resolve/main/phi-2-chat-turkish.q8_0.gguf) | q8_0 | 2.96 GB |

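The card itself does not show how to load these GGUF files. A minimal sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed, could look like the following; the choice of quant file, context size, sampling settings, and stop string are illustrative, not prescriptions from the original card:

```python
# Illustrative only: download one quantized file from this repo and run it
# locally with llama-cpp-python (CPU inference by default).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the q4_k_m file listed in the table above.
model_path = hf_hub_download(
    repo_id="afrideva/phi-2-chat-turkish-GGUF",
    filename="phi-2-chat-turkish.q4_k_m.gguf",
)

# n_ctx sets the context window allocated for this session.
llm = Llama(model_path=model_path, n_ctx=2048)

# Follow the prompt template documented further down in this card.
prompt = "### Instruction: Türkiye'nin en büyük şehri nedir? ### Response:"
result = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```

As a rule of thumb, the smaller quants (q2_k, q3_k_m) trade answer quality for memory, while q5_k_m and above stay closer to the fp16 file.
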
## Original Model Card:

# Model Card for malhajar/phi-2-chat-turkish

malhajar/phi-2-chat-turkish is a fine-tuned version of phi-2 trained with SFT.
The model can answer questions in Turkish, as it was fine-tuned on a Turkish dataset, specifically [`Turkish-Alpaca`](https://huggingface.co/datasets/TFLai/Turkish-Alpaca).

### Model Description

- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** [`microsoft/phi-2`](https://huggingface.co/microsoft/phi-2)

### Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```
## How to Get Started with the Model

Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/phi-2-chat-turkish"

# Load the full-precision model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             trust_remote_code=True,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Türkiye'nin en büyük şehri nedir?"

# Build the prompt using the template above and generate a response.
prompt = f'''
### Instruction: {question} ### Response:
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids,
                        max_new_tokens=512,
                        pad_token_id=tokenizer.eos_token_id,
                        do_sample=True,
                        top_k=50,
                        top_p=0.95,
                        repetition_penalty=1.3)
response = tokenizer.decode(output[0])

print(response)
```
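
Note that `tokenizer.decode(output[0])` returns the prompt together with the completion. To keep only the model's answer, you can decode just the newly generated tokens, e.g. `tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)`, or split the decoded string on `### Response:`.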