
# falcon7b-linear-equations-merged

This model is a merged version of falcon7b-linear-equations: the QLoRA adapters have been merged into the base model, so it can be loaded directly without PEFT.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# device_map and torch_dtype belong to the model, not the tokenizer
model = AutoModelForCausalLM.from_pretrained("Menouar/falcon7b-linear-equations-merged",
                                             device_map="auto",
                                             torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("Menouar/falcon7b-linear-equations-merged")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

outputs = pipe("Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y",
               max_new_tokens=172,
               do_sample=True,
               temperature=0.1,
               top_k=50,
               top_p=0.1,
               eos_token_id=pipe.tokenizer.eos_token_id,
               pad_token_id=pipe.tokenizer.pad_token_id)

for seq in outputs:
    print(seq["generated_text"])
```
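To sanity-check the model's answer on the prompt above, the equation can also be solved directly. This short plain-Python check is not part of the original card; it just collects the constant and `y` coefficients on each side and solves `a + b*y = c + d*y`:

```python
# Solve a + b*y = c + d*y for y: y = (c - a) / (b - d)
# Coefficients taken from "10 + 4y - 9y + 5 = 4 + 8y - 2y":
a, b = 10 + 5, 4 - 9   # left side simplifies to 15 - 5y
c, d = 4, 8 - 2        # right side simplifies to 4 + 6y
y = (c - a) / (b - d)  # (4 - 15) / (-5 - 6)
print(y)  # → 1.0
```

So the expected solution is y = 1, which the model's generated derivation should arrive at.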
Model size: 6.92B params · Tensor type: FP16 (Safetensors)

Dataset used to train Menouar/falcon7b-linear-equations-merged