
Fin_Embed_Large

This model is a fine-tune of BAAI/bge-large-en-v1.5, trained on question/answer pairs drawn from roughly 50 S&P 500 annual reports.

Usage (Sentence-Transformers)

To use this model, you need sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')  # replace {MODEL_NAME} with this model's repo id
embeddings = model.encode(sentences)
print(embeddings)
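For retrieval, embed a query and candidate passages, then rank passages by cosine similarity. A minimal sketch; the query and passages below are illustrative, not from the training data:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace {MODEL_NAME} with this model's repo id

query = "What were the main drivers of revenue growth?"
passages = [
    "Revenue increased 12% year over year, driven primarily by services.",
    "The board declared a quarterly dividend of $0.24 per share.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Outputs pass through a Normalize() module (see Full Model Architecture),
# so cosine similarity serves directly as the ranking score.
scores = util.cos_sim(query_emb, passage_embs)[0]
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.4f}  {passage}")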

Evaluation Results

Evaluated on a retrieval task using financial documents held out from the training data. All metrics below use cosine similarity.

| Metric | BGE Large 1.5 | FIN_EMBED |
|---|---|---|
| Accuracy@1 | 0.513663092 | 0.592182636 |
| Accuracy@3 | 0.698374265 | 0.7741266 |
| Accuracy@5 | 0.771359391 | 0.833275683 |
| Accuracy@10 | 0.849878935 | 0.89346247 |
| Precision@1 | 0.513663092 | 0.592182636 |
| Recall@1 | 0.513663092 | 0.592182636 |
| Precision@3 | 0.232791422 | 0.2580422 |
| Recall@3 | 0.698374265 | 0.7741266 |
| Precision@5 | 0.154271878 | 0.166655137 |
| Recall@5 | 0.771359391 | 0.833275683 |
| Precision@10 | 0.084987893 | 0.089346247 |
| Recall@10 | 0.849878935 | 0.89346247 |
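These figures were produced with sentence-transformers' InformationRetrievalEvaluator (named under Training below). A minimal sketch of running such an evaluation; the queries, corpus, and relevance judgments here are placeholders, not the actual held-out set:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder held-out data: query id -> text, document id -> text,
# and query id -> set of relevant document ids.
queries = {"q1": "What is the company's dividend policy?"}
corpus = {
    "d1": "The company intends to pay a regular quarterly dividend.",
    "d2": "Risk factors include supply chain disruption.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="held-out")
evaluator(model)  # reports Accuracy@k, Precision@k, Recall@k, MRR, NDCG, MAP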

Training

The model was trained with the parameters:

DataLoader:

torch.utils.data.dataloader.DataLoader of length 443 with parameters:

{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}

Loss:

sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss with parameters:

{'scale': 20.0, 'similarity_fct': 'cos_sim'}
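In code, this setup corresponds roughly to the following sketch; the training pairs shown are placeholders (with batch size 10, a DataLoader of length 443 implies roughly 4,430 Q/A pairs):

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('BAAI/bge-large-en-v1.5')  # base model being fine-tuned

# Placeholder Q/A pairs; the real data comes from S&P 500 annual reports.
train_examples = [
    InputExample(texts=["What was net income in fiscal 2022?",
                        "Net income for fiscal 2022 was $4.2 billion."]),
    # ...
]

# shuffle=False yields the SequentialSampler listed above.
train_dataloader = DataLoader(train_examples, batch_size=10, shuffle=False)

# In-batch negatives: each question's positive answer is contrasted against
# every other answer in the batch. cos_sim is the default similarity function.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)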

Parameters of the fit()-Method:

{
    "epochs": 2,
    "evaluation_steps": 50,
    "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 88,
    "weight_decay": 0.01
}
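Combining the objects from the sketches above, the fit() call would look roughly like this:

import torch

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=2,
    evaluation_steps=50,
    scheduler="WarmupLinear",
    warmup_steps=88,
    optimizer_class=torch.optim.AdamW,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)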

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
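This is the standard bge-large setup: CLS-token pooling over a 1024-dimensional, lowercased BERT encoder, followed by L2 normalization. As a sketch, the same pipeline could be assembled explicitly from sentence-transformers building blocks ('{MODEL_NAME}' again stands in for the actual weights path):

from sentence_transformers import SentenceTransformer, models

word_embedding = models.Transformer('{MODEL_NAME}', max_seq_length=512, do_lower_case=True)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 1024
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])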

Citing & Authors
