This model is a fine-tuned version of jinaai/jina-embeddings-v2-base-en, intended to support a range of natural language processing and understanding applications.
## How to Use
This model can be integrated into your NLP pipeline as an embedding backbone for tasks such as text classification, sentiment analysis, and entity recognition. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
import torch

llm_name = "jina-embeddings-v2-base-en-03052024-c20v-webapp"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)

# Tokenize the input and run the model to get token-level hidden states
tokens = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    output = model(**tokens)

# Mean-pool the token representations to obtain a single sentence embedding
embedding = output.last_hidden_state.mean(dim=1)
```
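Sentence embeddings are typically compared with cosine similarity, for example to rank candidate passages against a query. The sketch below is a minimal illustration that reuses the `tokenizer` and `model` loaded above; the `embed` helper and the sample sentences are illustrative assumptions, not part of this model's API.

```python
import torch
import torch.nn.functional as F

def embed(text: str) -> torch.Tensor:
    # Hypothetical helper wrapping the tokenizer/model loaded above,
    # using the same mean-pooling step as the example
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        output = model(**tokens)
    return output.last_hidden_state.mean(dim=1)

# Cosine similarity between two sentence embeddings (higher = more similar)
query = embed("How do I reset my password?")
doc = embed("Steps to recover account access")
similarity = F.cosine_similarity(query, doc).item()
print(f"cosine similarity: {similarity:.3f}")
```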