
code-gemma

Google's gemma-2b-it fine-tuned on the code_instructions_122k_alpaca_style dataset

Usage

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="gnumanth/code-gemma")
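Once the pipeline is created, you can call it directly with a prompt. The prompt and generation settings below are illustrative, not part of the original card:

prompt = "Write a Python function that checks whether a number is prime."
result = pipe(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])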

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gnumanth/code-gemma")
model = AutoModelForCausalLM.from_pretrained("gnumanth/code-gemma")
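With the tokenizer and model loaded directly, a generation call might look like the sketch below. It assumes the fine-tune keeps gemma-2b-it's chat template; the example prompt and max_new_tokens value are illustrative.

# Illustrative prompt; assumes the gemma-2b-it chat template is preserved
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))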

Hemanth HM

Model size: 2.51B params (Safetensors, FP16)