---
license: mit
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- gte
- mteb
model-index:
- name: gte-micro
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.82089552238806
- type: ap
value: 31.260622493912688
- type: f1
value: 62.701989024087304
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 77.11532499999998
- type: ap
value: 71.29001033390622
- type: f1
value: 77.0225646895571
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.93600000000001
- type: f1
value: 39.24591989399245
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 35.237007515497126
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 31.08692637060412
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 55.312310786737015
- type: mrr
value: 69.50842017324011
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 69.56168831168831
- type: f1
value: 68.14675364705445
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 30.20098791829512
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 27.38014535599197
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.224999999999994
- type: f1
value: 39.319662595355354
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 62.17159999999999
- type: ap
value: 58.35784294974692
- type: f1
value: 61.8942294000012
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.68946648426811
- type: f1
value: 86.26529827823835
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 49.69676242590059
- type: f1
value: 33.74537894406717
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.028244788164095
- type: f1
value: 55.31452888309622
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.58708809683928
- type: f1
value: 65.90050839709882
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 27.16644221915073
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.5164150501441
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.61660066180842
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 47.86938629331837
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.7980198019802
- type: cos_sim_ap
value: 94.25805747549842
- type: cos_sim_f1
value: 89.56262425447315
- type: cos_sim_precision
value: 89.03162055335969
- type: cos_sim_recall
value: 90.10000000000001
- type: dot_accuracy
value: 99.7980198019802
- type: dot_ap
value: 94.25806137565444
- type: dot_f1
value: 89.56262425447315
- type: dot_precision
value: 89.03162055335969
- type: dot_recall
value: 90.10000000000001
- type: euclidean_accuracy
value: 99.7980198019802
- type: euclidean_ap
value: 94.25805747549843
- type: euclidean_f1
value: 89.56262425447315
- type: euclidean_precision
value: 89.03162055335969
- type: euclidean_recall
value: 90.10000000000001
- type: manhattan_accuracy
value: 99.7980198019802
- type: manhattan_ap
value: 94.35547438808531
- type: manhattan_f1
value: 89.78574987543598
- type: manhattan_precision
value: 89.47368421052632
- type: manhattan_recall
value: 90.10000000000001
- type: max_accuracy
value: 99.7980198019802
- type: max_ap
value: 94.35547438808531
- type: max_f1
value: 89.78574987543598
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 52.619948149973
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.050148689318583
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.1018
- type: ap
value: 12.152100246603089
- type: f1
value: 50.78295258419767
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.77532541029994
- type: f1
value: 60.7949438635894
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.793779391259136
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.10186564940096
- type: cos_sim_ap
value: 63.85437966517539
- type: cos_sim_f1
value: 60.5209914011128
- type: cos_sim_precision
value: 58.11073336571151
- type: cos_sim_recall
value: 63.13984168865435
- type: dot_accuracy
value: 83.10186564940096
- type: dot_ap
value: 63.85440662982004
- type: dot_f1
value: 60.5209914011128
- type: dot_precision
value: 58.11073336571151
- type: dot_recall
value: 63.13984168865435
- type: euclidean_accuracy
value: 83.10186564940096
- type: euclidean_ap
value: 63.85438236123812
- type: euclidean_f1
value: 60.5209914011128
- type: euclidean_precision
value: 58.11073336571151
- type: euclidean_recall
value: 63.13984168865435
- type: manhattan_accuracy
value: 82.95881266018954
- type: manhattan_ap
value: 63.548796919332496
- type: manhattan_f1
value: 60.2080461210678
- type: manhattan_precision
value: 57.340654094055864
- type: manhattan_recall
value: 63.377308707124016
- type: max_accuracy
value: 83.10186564940096
- type: max_ap
value: 63.85440662982004
- type: max_f1
value: 60.5209914011128
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.93417937672217
- type: cos_sim_ap
value: 84.07115019218789
- type: cos_sim_f1
value: 75.7513225528083
- type: cos_sim_precision
value: 73.8748627881449
- type: cos_sim_recall
value: 77.72559285494303
- type: dot_accuracy
value: 87.93417937672217
- type: dot_ap
value: 84.0711576640934
- type: dot_f1
value: 75.7513225528083
- type: dot_precision
value: 73.8748627881449
- type: dot_recall
value: 77.72559285494303
- type: euclidean_accuracy
value: 87.93417937672217
- type: euclidean_ap
value: 84.07114662252135
- type: euclidean_f1
value: 75.7513225528083
- type: euclidean_precision
value: 73.8748627881449
- type: euclidean_recall
value: 77.72559285494303
- type: manhattan_accuracy
value: 87.90507237940001
- type: manhattan_ap
value: 84.00643428398385
- type: manhattan_f1
value: 75.80849007508735
- type: manhattan_precision
value: 73.28589909443726
- type: manhattan_recall
value: 78.51093316907914
- type: max_accuracy
value: 87.93417937672217
- type: max_ap
value: 84.0711576640934
- type: max_f1
value: 75.80849007508735
---
# gte-micro
This model is a distilled version of [gte-small](https://huggingface.co/thenlper/gte-small).
## Intended purpose
<span style="color:blue">This model is designed for use in semantic-autocomplete ([click here for demo](https://mihaiii.github.io/semantic-autocomplete/)).</span>
## Usage (same as [gte-small](https://huggingface.co/thenlper/gte-small))
Use it in [semantic-autocomplete](https://github.com/Mihaiii/semantic-autocomplete), or directly in code:
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
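# Masked mean pooling: average the token embeddings, ignoring padding positions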
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("Mihaiii/gte-micro")
model = AutoModel.from_pretrained("Mihaiii/gte-micro")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
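# With L2-normalized embeddings, the dot product equals cosine similarity (scaled to 0-100 here)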
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('Mihaiii/gte-micro')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
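Since the model targets semantic autocomplete, a minimal ranking sketch may be useful. The candidate strings and variable names below are illustrative only (not from this card); the sketch uses the same `SentenceTransformer.encode` and `cos_sim` calls as above.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer('Mihaiii/gte-micro')

# Illustrative candidate pool; in semantic-autocomplete this would be the option list shown to the user
candidates = ["Beijing", "Shanghai", "sorting algorithms", "binary search"]
query = "capital of China"

# Embed the query and the candidates, then rank candidates by cosine similarity to the query
query_emb = model.encode(query)
cand_embs = model.encode(candidates)
scores = cos_sim(query_emb, cand_embs)[0]

ranked = sorted(zip(candidates, scores.tolist()), key=lambda pair: pair[1], reverse=True)
print(ranked)
```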
### Limitation (same as [gte-small](https://huggingface.co/thenlper/gte-small))
This model supports English texts only, and inputs longer than 512 tokens are truncated.
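If your inputs can exceed that limit, one common workaround is to split the text into chunks, embed each chunk, and average the chunk embeddings. The sketch below is an assumption, not part of this model card: the word-based splitting and chunk size are arbitrary choices.
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('Mihaiii/gte-micro')

def embed_long_text(text: str, max_words: int = 200) -> np.ndarray:
    # Naive word-based chunking; a tokenizer-aware split would track the 512-token limit more precisely
    words = text.split()
    chunks = [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)] or [text]
    # Embed each chunk, then average and re-normalize to get a single vector for the whole text
    chunk_embs = model.encode(chunks, normalize_embeddings=True)
    pooled = chunk_embs.mean(axis=0)
    return pooled / np.linalg.norm(pooled)
```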