---
license: cc-by-nc-4.0
language:
- tr
pipeline_tag: question-answering
---
# Model Card for Gemma2b-Turkish-Instruction

Gemma-2b fine-tuned on Turkish instruction-response pairs.
## Restrictions

Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms.

Please review the Gemma use restrictions before using the model:
https://ai.google.dev/gemma/terms#3.2-use
## Using the model
```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Load the model with 4-bit NF4 quantization so it fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "erythropygia/Gemma2b-Turkish-Instruction"

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0})
tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True, padding_side="left")


def get_completion(query: str, model, tokenizer) -> str:
    device = "cuda:0"

    # Gemma chat format: a user turn ("Answer the question on the line below:")
    # followed by an open model turn for the completion.
    prompt_template = """
<start_of_turn>user
Alt satırdaki soruya cevap ver:\n
{query}
<end_of_turn>\n<start_of_turn>model
"""
    prompt = prompt_template.format(query=query)

    encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encodeds.to(device)

    # Other generation settings (e.g. temperature=0.9, repetition_penalty=0.5,
    # num_return_sequences=1, max_length=256) were tried but are disabled here.
    generated_ids = model.generate(**model_inputs, max_new_tokens=256, do_sample=True, pad_token_id=tokenizer.eos_token_id)
    decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=False)

    # Use regular expressions to remove unclosed special tags from the output
    decoded = re.sub(r'<(end_of_turn|start_of_turn|eos|bos)>[^<]*$', '', decoded)
    decoded = re.sub(r'<(end_of_turn|start_of_turn|eos|bos)>', '', decoded)

    return decoded.strip()


# Example query: "Create a function that takes three int parameters and returns their sum."
result = get_completion(query="int türünde üç parametre alan ve bunların toplamını döndüren bir işlev oluşturun.", model=model, tokenizer=tokenizer)
print(result)
```
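If the checkpoint's tokenizer ships a Gemma chat template, roughly the same prompt can also be built with `tokenizer.apply_chat_template` instead of a hand-written template string. A minimal sketch, not from the model card; the example question is hypothetical:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("erythropygia/Gemma2b-Turkish-Instruction")

# Single user turn; the content mirrors the hand-written template above
# ("Answer the question on the line below: ...").
messages = [
    {"role": "user", "content": "Alt satırdaki soruya cevap ver:\nTürkiye'nin başkenti neresidir?"}
]

# add_generation_prompt=True appends the opening <start_of_turn>model tag
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```

This only applies if the tokenizer config actually includes a chat template; otherwise use the explicit `prompt_template` shown above.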
## Training Details

### Training Data

- Dataset size: ~75k instruction-response pairs.

### Training Procedure

#### Training Hyperparameters

- **Epochs:** 1
- **Context length:** 1024
- **LoRA Rank:** 32
- **LoRA Alpha:** 64
- **LoRA Dropout:** 0.05
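
For reference, these LoRA hyperparameters map roughly onto the following `peft` configuration. This is a sketch, not the actual training script: the target modules and remaining settings (learning rate, optimizer, etc.) are assumptions, since the card does not list them.

```python
from peft import LoraConfig

# Sketch of a LoraConfig matching the hyperparameters above.
# target_modules is an assumption (common Gemma projection layers), not from the card.
lora_config = LoraConfig(
    r=32,               # LoRA Rank
    lora_alpha=64,      # LoRA Alpha
    lora_dropout=0.05,  # LoRA Dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```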