Fill-Mask PyTorch Model (CamemBERT)

This is a fill-mask model trained with the PyTorch framework and the Hugging Face Transformers library. It was used in Hugging Face's NLP course as an introductory model.

Model Description

This model uses the CamemBERT architecture, a variant of RoBERTa pretrained on French text. It is designed for the fill-mask task, in which a token in the input text is masked and the model predicts the missing token.
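
For a quick look at what the task produces, the model can also be called through the Transformers pipeline API. The snippet below is a minimal sketch; 'model-name' is a placeholder for this repository's id on the Hub.

from transformers import pipeline

# 'model-name' is a placeholder for this model's Hub repository id
fill_mask = pipeline("fill-mask", model="model-name")

# Each prediction is a dict with the filled-in sequence,
# the candidate token string and its score
for prediction in fill_mask("Le camembert est <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))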

Features

  • PyTorch: The model was implemented and trained with the PyTorch deep learning framework, which uses dynamic computation graphs and is known for its flexibility and efficiency.
  • Safetensors: The weights are stored in the Safetensors format, a simple and safe alternative to pickle-based checkpoints for saving and loading tensors (see the loading sketch after this list).
  • Transformers: The model was built with the Hugging Face Transformers library, which provides thousands of pre-trained models and easy-to-use implementations of transformer architectures.
  • AutoTrain Compatible: The model is compatible with Hugging Face's AutoTrain, a tool that automates the training of transformer models.
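
The sketch below shows one way to inspect the Safetensors checkpoint directly with the safetensors library; the file name model.safetensors follows the usual Transformers convention and is assumed here.

from safetensors.torch import load_file

# Load the raw tensors from the checkpoint file
# (the file name is an assumption based on the standard Transformers layout)
state_dict = load_file("model.safetensors")
print(f"{len(state_dict)} tensors loaded")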

Usage

import torch
from transformers import CamembertForMaskedLM, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained('model-name')
model = CamembertForMaskedLM.from_pretrained('model-name')

inputs = tokenizer("Le camembert est <mask>.", return_tensors='pt')
outputs = model(**inputs)
predictions = outputs.logits

# Find the position of the <mask> token (assumes exactly one mask in the input),
# then take the highest-scoring prediction for it
mask_position = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
predicted_index = torch.argmax(predictions[0, mask_position]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
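
To look at more than one candidate, the same logits can be ranked with torch.topk. This is a minimal sketch that reuses the predictions, mask_position, and tokenizer variables defined above.

# Top 5 candidate tokens for the masked position
# (scores are raw logits; apply a softmax if probabilities are needed)
top5 = torch.topk(predictions[0, mask_position], k=5)
for token_id, score in zip(top5.indices.tolist(), top5.values.tolist()):
    print(tokenizer.convert_ids_to_tokens([token_id])[0], round(score, 3))
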
Model size: 111M parameters
Tensor types (Safetensors): F32, I64