Usage
To use this model, load it directly with the Hugging Face transformers library, as described in the sections below.
Model Information
Training Details
- This model has been fine-tuned for English-to-Tamil translation.
- Training Duration: Over 10 hours
- Loss Achieved: 0.6
Model Architecture
- The model is based on the Transformer encoder-decoder architecture, optimized for sequence-to-sequence tasks.
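If you want to see which concrete architecture family the checkpoint uses, you can inspect its config without downloading the full weights. This is a minimal sketch; the values it prints depend on which base model this checkpoint was fine-tuned from:

from transformers import AutoConfig

# Fetch only the config and report the underlying architecture family
config = AutoConfig.from_pretrained("aishu15/English-to-Tamil")
print(config.model_type)      # architecture family of the base model
print(config.architectures)   # concrete model class the checkpoint was saved with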
Installation
To use this model, you'll need the transformers library installed. You can install it via pip:
pip install transformers
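The example below also assumes a PyTorch backend, which transformers needs in order to actually run the model. Install it if it is not already present:

pip install torch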
Via Transformers Library
Inference
You can use the model in your Python code or a notebook like this:
# Load the model and tokenizer directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "aishu15/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    # Tokenize the English input, generate the Tamil translation, and decode it
    tokenized = tokenizer([text], return_tensors="pt")
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "hardwork never fail"
output = language_translator(text_to_translate)
print(output)
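As an alternative to the helper function above, the same checkpoint can also be run through the transformers pipeline API. This is a minimal sketch, assuming the checkpoint carries the metadata pipeline needs to resolve the translation task:

from transformers import pipeline

# Build a translation pipeline around the same checkpoint
translator = pipeline("translation", model="aishu15/English-to-Tamil")
result = translator("hardwork never fail", max_length=128)
print(result[0]["translation_text"])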