Model Description

This model fine-tunes bert-base-uncased for token-level binary grammatical error detection (each token is labeled as grammatically correct or incorrect) on the English FCE dataset provided by MultiGED-2023.

Get Started with the Model

import torch
from transformers import AutoModelForTokenClassification, BertTokenizer

# Load the fine-tuned model and the matching tokenizer
model = AutoModelForTokenClassification.from_pretrained("sahilnishad/BERT-GED-FCE-FT")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Run inference: returns one predicted label per wordpiece token
def infer(sentence):
    inputs = tokenizer(sentence, return_tensors="pt", add_special_tokens=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.logits.argmax(-1)

# Example usage
print(infer("Your example sentence here"))
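Note that the predictions above are per wordpiece, not per word: BERT's tokenizer splits rare words into `##`-prefixed pieces, and the `[CLS]`/`[SEP]` special tokens also receive labels. A small helper can merge the pieces back into words, marking a word as erroneous if any of its pieces is flagged. This is a sketch, not part of the released model; the helper name and the merge policy (take the max label across pieces) are our own choices.

```python
def merge_wordpieces(tokens, labels):
    """Merge BERT wordpieces ('##'-prefixed) back into words.

    A word is flagged (label 1) if any of its pieces is flagged.
    [CLS] and [SEP] special tokens are dropped.
    """
    words, word_labels = [], []
    for tok, lab in zip(tokens, labels):
        if tok in ("[CLS]", "[SEP]"):
            continue
        if tok.startswith("##") and words:
            # Continuation piece: append to the previous word
            words[-1] += tok[2:]
            word_labels[-1] = max(word_labels[-1], lab)
        else:
            words.append(tok)
            word_labels.append(lab)
    return list(zip(words, word_labels))
```

For example, if the tokenizer split "unbelievable" into `un`, `##believ`, `##able` and the middle piece was predicted as erroneous, the merged output would flag the whole word. The token strings can be recovered from the model inputs with `tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))`.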

BibTeX:

@misc{sahilnishad_bert_ged_fce_ft,
  author       = {Sahil Nishad},
  title        = {Fine-tuned BERT Model for Grammatical Error Detection on the FCE Dataset},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/sahilnishad/BERT-GED-FCE-FT}},
  note         = {Model available on the Hugging Face Hub},
}
Model size: 109M parameters (F32, Safetensors)
