How to use the model with PyTorch:


import torch
from transformers import AutoModel, AutoTokenizer

model_name = "..."  # set to this model's Hugging Face repo id

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the model
model = AutoModel.from_pretrained(model_name)
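
If the checkpoint is only used for inference, it is worth switching it to evaluation mode so that dropout and other training-only layers are disabled; a minimal sketch:

# Put the model in evaluation mode before running inference
model.eval()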

How to run it on example text:

# Example input
text = "Hello, how are you?"

# Tokenize the input
inputs = tokenizer(text, return_tensors="pt")
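
The tokenizer returns a dict-like batch of PyTorch tensors; inspecting it shows what the model will consume (the exact keys depend on the tokenizer, so treat the names below as typical rather than guaranteed):

# Inspect the encoded inputs
print(inputs.keys())        # typically input_ids and attention_mask
print(inputs["input_ids"])  # integer token ids for the text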

# Forward pass through the model (no gradients needed for inference)
with torch.no_grad():
    outputs = model(**inputs)
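
Assuming this is a BERT-style encoder (the 110M-parameter size is consistent with a base-sized model, but that is an assumption), outputs.last_hidden_state holds one vector per input token. A common way to turn these into a single sentence embedding is mean pooling over the non-padding tokens:

# One hidden vector per token: (batch_size, seq_len, hidden_size)
token_embeddings = outputs.last_hidden_state

# Mean-pool over tokens, using the attention mask to ignore padding
mask = inputs["attention_mask"].unsqueeze(-1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # (batch_size, hidden_size)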
Model details: ~110M parameters, F32 tensors, stored in Safetensors format.