---
datasets:
- TFLai/Turkish-Alpaca
language:
- tr
---

# Model Card for malhajar/Mixtral-8x7B-v0.1-turkish

<!-- Provide a quick summary of what the model is/does. -->
malhajar/Mixtral-8x7B-v0.1-turkish is a fine-tuned version of Mixtral-8x7B-v0.1 trained with SFT.
The model can answer questions in Turkish, as it was fine-tuned on a Turkish dataset, specifically [`Turkish-Alpaca`](https://huggingface.co/datasets/TFLai/Turkish-Alpaca).

### Model Description

- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** [`mistralai/Mixtral-8x7B-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)

### Prompt Template
```
### Instruction:

<prompt> (without the <>)

### Response:
```
## How to Get Started with the Model

Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Mixtral-8x7B-v0.1-turkish"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Türkiye'nin en büyük şehri nedir?"
# For generating a response
prompt = f'''
### Instruction: {question} ### Response:
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(inputs=input_ids,
                        max_new_tokens=512,
                        pad_token_id=tokenizer.eos_token_id,
                        top_k=50,
                        do_sample=True,
                        repetition_penalty=1.3,
                        top_p=0.95)
response = tokenizer.decode(output[0])

print(response)
```