---
base_model:
- google/gemma-2-2b
pipeline_tag: text-generation
metrics:
- accuracy
license: mit
language:
- en
tags:
- pytorch
- transformers
- keras
- spell-checker
---
## Description
The RLM-spell-checker is a fine-tuned version of google/gemma-2-2b, adapted with LoRA (Low-Rank Adaptation) to specialize in spelling correction. LoRA fine-tunes the model efficiently by training only a small set of low-rank adapter weights, so the RLM-spell-checker retains the broad language understanding of the base model while focusing on identifying and correcting spelling errors. This fine-tuning lets the model make context-aware corrections, which makes it useful for real-time applications such as automated writing assistance, chatbots, and word processors. By improving spelling accuracy without interrupting the natural flow of text, the RLM-spell-checker improves text quality and the user experience across a range of writing tasks. A sketch of how such an adapter can be attached to the base model is shown below.
Author: Rudra Shah
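For readers curious how a LoRA adapter is wired up in practice, the following is a minimal sketch using the `peft` library; the rank, scaling factor, and target modules shown here are illustrative assumptions, not the configuration actually used to train RLM-spell-checker.

```python
# Minimal sketch (NOT the author's training code): attaching a LoRA adapter
# to google/gemma-2-2b with the `peft` library. All hyperparameters below
# are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# gemma-2-2b is a gated model; access must be accepted on the Hub first
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```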
## Running the Model
```python
# Load the tokenizer and model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rudrashah/RLM-spell-checker")
model = AutoModelForCausalLM.from_pretrained("rudrashah/RLM-spell-checker")

# Sentence containing spelling errors to be corrected
sent = "Whaat iss the mision?"

# Prompt template used for correction; the model completes the text after "Correct_Grammar:"
template = "Sentence:\n{org}\n\nCorrect_Grammar:\n{new}"
input_text = template.format(org=sent, new="")

# Tokenize the prompt and generate the corrected sentence
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
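The decoded output echoes the prompt, so the corrected sentence has to be pulled out of the generated text. A small follow-up example, assuming the model fills in the text after the `Correct_Grammar:` header:

```python
# Extract only the corrected sentence from the generated text.
# Splitting on the template header is an assumption about the output format.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
corrected = decoded.split("Correct_Grammar:")[-1].strip()
print(corrected)  # expected to be something like "What is the mission?"
```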