
AnglE📐: Angle-optimized Text Embeddings

It is Angle 📐, not Angel 👼.

🔥 A New SOTA Model for Semantic Textual Similarity!

Github: https://github.com/SeanLee97/AnglE

Paper: https://arxiv.org/abs/2309.12871


STS Results

| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|---|---|---|---|---|---|---|---|---|
| SeanLee97/angle-llama-7b-nli-20231027 | 78.68 | 90.58 | 85.49 | 89.56 | 86.91 | 88.92 | 81.18 | 85.90 |
| SeanLee97/angle-llama-7b-nli-v2 | 79.00 | 90.56 | 85.79 | 89.43 | 87.00 | 88.97 | 80.94 | 85.96 |
| SeanLee97/angle-llama-13b-nli | 79.33 | 90.65 | 86.89 | 90.45 | 87.32 | 89.69 | 81.32 | 86.52 |
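STS scores of this kind are typically Spearman correlations between the model's similarity scores and human judgments. The sketch below illustrates how such a score could be computed; it assumes `angle` is an initialized AnglE model (see Usage below), and the sentence pairs are made-up stand-ins, not actual benchmark data:

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up sentence pairs with gold similarity ratings, standing in
# for a real STS benchmark split.
pairs = [
    ('A man is playing a guitar.', 'A person plays the guitar.', 4.8),
    ('A dog runs through the park.', 'Stocks fell sharply today.', 0.1),
    ('Two kids are playing soccer.', 'Children are playing football.', 4.2),
]

def cosine(a, b):
    # Cosine similarity between two 1-D embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

preds, golds = [], []
for s1, s2, gold in pairs:
    # `angle` is assumed to be initialized as in the Usage section;
    # encode() is assumed to return a 2-D array, so take row 0.
    v1 = angle.encode({'text': s1}, to_numpy=True)[0]
    v2 = angle.encode({'text': s2}, to_numpy=True)[0]
    preds.append(cosine(v1, v2))
    golds.append(gold)

print('Spearman correlation:', spearmanr(preds, golds).correlation)
```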

Usage

Install the package:

```bash
python -m pip install -U angle-emb
```

Then:

```python
from angle_emb import AnglE, Prompts

# Initialize: load the LLaMA-2 13B base model and apply the AnglE LoRA
# weights, loading in 16-bit.
angle = AnglE.from_pretrained(
    'NousResearch/Llama-2-13b-hf',
    pretrained_lora_path='SeanLee97/angle-llama-13b-nli',
    load_kbit=16,
    apply_bfloat16=False,
)

# Set the prompt used to wrap inputs before encoding.
print('All predefined prompts:', Prompts.list_prompts())
angle.set_prompt(prompt=Prompts.A)
print('prompt:', angle.prompt)

# Encode a single text or a batch of texts; to_numpy=True returns numpy arrays.
vec = angle.encode({'text': 'hello world'}, to_numpy=True)
print(vec)
vecs = angle.encode([{'text': 'hello world1'}, {'text': 'hello world2'}], to_numpy=True)
print(vecs)
```
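To use the embeddings for semantic similarity, a common pattern is cosine similarity between two encoded texts. A minimal sketch follows; the `cosine_sim` helper is illustrative, not part of angle_emb:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two 1-D embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Reuse the batch encoding from above: one row per input text.
vecs = angle.encode([{'text': 'hello world1'}, {'text': 'hello world2'}], to_numpy=True)
print('cosine similarity:', cosine_sim(vecs[0], vecs[1]))
```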

Citation

You are welcome to use our code and pre-trained models. If you do, please support us by citing our work:

```bibtex
@article{li2023angle,
  title={AnglE-Optimized Text Embeddings},
  author={Li, Xianming and Li, Jing},
  journal={arXiv preprint arXiv:2309.12871},
  year={2023}
}
```
