---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---

# Uploaded model

- **Developed by:** Xhaheen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit

This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# Inference with Unsloth on Colab

```python3
import torch
major_version, minor_version = torch.cuda.get_device_capability()
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Use this for newer GPUs such as Ampere and Hopper (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass

from unsloth import FastLanguageModel

max_seq_length = 2048
dtype = None  # None for auto detection. Float16 for Tesla T4, V100; Bfloat16 for Ampere+
load_in_4bit = False

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Xhaheen/Shaheen_Gemma_Urdu_",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    device_map = "auto",
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
    "دیئے گئے موضوع کے بارے میں ایک مختصر پیراگراف لکھیں۔",  # instruction: "Write a short paragraph about the given topic."
    "قابل تجدید توانائی کے استعمال کی اہمیت",  # input: "The importance of using renewable energy"
    "",  # output - leave this blank for generation!
)

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)
```

# Inference with Hugging Face transformers

```python3
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Xhaheen/Shaheen_Gemma_Urdu_",
    load_in_4bit = False,
    device_map = "auto",  # place the model on GPU so it matches the CUDA inputs below
)
tokenizer = AutoTokenizer.from_pretrained("Xhaheen/Shaheen_Gemma_Urdu_")

input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
    "دیئے گئے موضوع کے بارے میں ایک مختصر پیراگراف لکھیں۔",  # instruction: "Write a short paragraph about the given topic."
    "قابل تجدید توانائی کے استعمال کی اہمیت",  # input: "The importance of using renewable energy"
    "",  # output - leave this blank for generation!
)

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)[0]
```
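In both examples, `tokenizer.batch_decode` returns the full sequence, i.e. the prompt followed by the generated continuation. Below is a minimal sketch of isolating just the model's response, and of streaming tokens as they arrive using transformers' `TextStreamer`; it assumes the `model`, `tokenizer`, `inputs`, and `outputs` variables defined in either example above.

```python3
from transformers import TextStreamer

# The output sequence begins with the prompt tokens; slice them off
# so that only the newly generated response is decoded.
prompt_length = inputs["input_ids"].shape[1]
response_only = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens = True)
print(response_only)

# Alternatively, print tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = streamer, max_new_tokens = 300, use_cache = True)
```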