---
library_name: transformers
license: apache-2.0
language:
- am
- ti
---

```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub.
model_name = "Hailay/FT_EXLMR"
tokenizer = XLMRobertaTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)

# Tokenize an input sentence and run a forward pass.
inputs = tokenizer("Your text here", return_tensors="pt")
outputs = model(**inputs)
```
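The forward pass above returns raw logits. As a minimal sketch continuing from that snippet (assuming the checkpoint stores an `id2label` mapping in its config; otherwise generic `LABEL_i` names are returned), the predicted class can be read off like this:

```python
import torch

# Convert logits to probabilities and pick the highest-scoring class.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(torch.argmax(probs, dim=-1))

# id2label comes from the model config; it falls back to generic
# "LABEL_i" names if no mapping was stored during fine-tuning.
print(model.config.id2label[predicted_id], float(probs[0, predicted_id]))
```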
# Model Card for Hailay/FT_EXLMR
## Model Card Summary: Hailay/FT_EXLMR

- Model Name: Hailay/FT_EXLMR
- Type: XLM-RoBERTa model for sequence classification
- Language(s): Amharic (am), Tigrinya (ti)
- License: Apache 2.0
- Pre-trained Model: xlm-roberta-base

### Uses

- Primary: Text classification (e.g., sentiment analysis)
- Additional: Can be fine-tuned for other specific tasks

### Key Features

- Training Data: Custom dataset with text and labels
- Training Details: 3 epochs, learning rate of 1e-5 (see the fine-tuning sketch below)
- Evaluation: Accuracy and loss metrics
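The listed hyperparameters map directly onto the Hugging Face `Trainer` API. The sketch below is illustrative rather than the card author's actual training script: the toy dataset, `num_labels=2`, batch size, and output path are assumptions, while the epoch count and learning rate are taken from the card.

```python
from datasets import Dataset
from transformers import (
    Trainer,
    TrainingArguments,
    XLMRobertaForSequenceClassification,
    XLMRobertaTokenizer,
)

# Toy data standing in for the card's custom text/label dataset
# (assumption: binary labels; adjust num_labels and the examples to your task).
raw = Dataset.from_dict({"text": ["example one", "example two"], "label": [0, 1]})

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train_dataset = raw.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="ft_exlmr",          # assumption: output path not given in the card
    num_train_epochs=3,             # from the card: 3 epochs
    learning_rate=1e-5,             # from the card: learning rate of 1e-5
    per_device_train_batch_size=8,  # assumption: batch size not stated
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```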
### Getting Started

- Code Example: Load the model and tokenizer as shown in the snippet at the top of this card, then run them on your text for classification; a pipeline-based sketch follows below.
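For quick experiments, the same checkpoint can also be wrapped in a `text-classification` pipeline. This is a minimal sketch that assumes the hosted checkpoint bundles its tokenizer and label mapping:

```python
from transformers import pipeline

# Build a classifier directly from the Hub checkpoint.
classifier = pipeline("text-classification", model="Hailay/FT_EXLMR")

# Returns a list of {"label": ..., "score": ...} dicts.
print(classifier("Your text here"))
```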
### Considerations

- Bias & Risks: Assess the model for biases and evaluate its suitability before using it in a specific application.
- Environmental Impact: Hardware and training time are not documented.

### Citation

BibTeX and APA citation formats are available.